Complexity Theory

Johan Håstad
Department of Numerical Analysis and Computing Science
Royal Institute of Technology
S-100 44 Stockholm
SWEDEN
johanh@nada.kth.se
May 13, 2009
Contents
1 Preface
2 Recursive Functions
 2.1 Primitive Recursive Functions
 2.2 Partial recursive functions
 2.3 Turing Machines
 2.4 Church's thesis
 2.5 Functions, sets and languages
 2.6 Recursively enumerable sets
 2.7 Some facts about recursively enumerable sets
 2.8 Gödel's incompleteness theorem
 2.9 Exercises
 2.10 Answers to exercises
3 Efficient computation, hierarchy theorems.
 3.1 Basic Definitions
 3.2 Hierarchy theorems
4 The complexity classes L, P and PSPACE.
 4.1 Is the definition of P model dependent?
 4.2 Examples of members in the complexity classes.
5 Nondeterministic computation
 5.1 Nondeterministic Turing machines
6 Relations among complexity classes
 6.1 Nondeterministic space vs. deterministic time
 6.2 Nondeterministic time vs. deterministic space
 6.3 Deterministic space vs. nondeterministic space
7 Complete problems
 7.1 NP-complete problems
 7.2 PSPACE-complete problems
 7.3 P-complete problems
 7.4 NL-complete problems
8 Constructing more complexity-classes
9 Probabilistic computation
 9.1 Relations to other complexity classes
10 Pseudorandom number generators
11 Parallel computation
 11.1 The circuit model of computation
 11.2 NC
 11.3 Parallel time vs sequential space
12 Relativized computation
13 Interactive proofs
1 Preface
The present set of notes has grown out of a set of courses I have given at
the Royal Institute of Technology. The courses have been given at an in-
troductory graduate level, but interested undergraduates have also followed
them.
The main idea of the course has been to give the broad picture of mod-
ern complexity theory: to define the basic complexity classes, give some
examples of each complexity class, and prove the most standard relations.
The set of notes does not contain the amount of detail wanted from a text-
book. I have taken the liberty of skipping many boring details and tried to
emphasize the ideas involved in the proofs. Probably in many places more
details would be helpful and I would be grateful for hints on where this is
the case.
Most of the notes are at a fairly introductory level but some of the sections
contain more advanced material. This is in particular true for the section
on pseudorandom number generators and the proof that IP = PSPACE.
Anyone getting stuck in these parts of the notes should not be disappointed.
These notes have benefited from feedback from colleagues who have
taught courses based on this material. In particular I am grateful to Jens
Lagergren and Ingrid Lindström. The students who have taken the courses
together with other people have also helped me correct many errors. Sincere
thanks to Jerker Andersson, Per Andersson, Lars Arvestad, Jörgen Backelin,
Christer Berg, Christer Carlsson, Jan Frelin, Mikael Goldmann, Pelle Grape,
Joachim Hollman, Andreas Jakobik, Wojtek Janczewski, Kai-Mikael Jää-Aro,
Viggo Kann, Mats Näslund, and Peter Rosengren.
Finally, let me just note that there are probably many errors and inac-
curacies remaining and for those I must take full responsibility.
2 Recursive Functions
One central question in computer science is the basic question:
What functions are computable by a computer?
Oddly enough, this question preceded the invention of the modern com-
puter and thus it was originally phrased: “What functions are mechanically
computable?” The word “mechanically” should here be interpreted as “by
hand without really thinking”. Several independent attempts to answer this
question were made in the mid-1930’s. One possible reason that several
researchers independently came to consider this question is its close connec-
tions to the proof of Gödel's incompleteness theorem (Theorem 2.32) which
was published in 1931.
Before we try to formalize the concept of a computable function, let
us be precise about what we mean by a function. We will be considering
functions from natural numbers (N = {0, 1, 2, . . .}) to natural numbers. This
might seem restrictive, but in fact it is not since we can code almost any
type of object as a natural number. As an example, suppose that we are
given a function from words of the English alphabet to graphs. Then we can
think of a word in the English alphabet as a number written in base 27 with
a = 1, b = 2 and so on. A graph on n nodes can be thought of as a sequence
of n(n−1)/2 binary symbols where each symbol corresponds to a potential edge
and it is 1 iff the edge actually is there. For instance suppose that we are
looking at graphs with 3 nodes, and hence the possible edges are (1, 2), (1, 3)
and (2, 3). If the graph only contains the edges (1, 3) and (2, 3) we code it
as 011. Add a leading 1 and consider the result as a number written in
binary notation (our example corresponds to (1011)_2 = 11). It is easy to
see that the mapping from graphs to numbers is easy to compute and easy
to invert and thus we can use this representation of graphs as well as any
other. Thus a function from words over the English alphabet to graphs can
be represented as a function from natural numbers to natural numbers.
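To make the coding concrete, here is a small Python sketch of the graph
coding just described (a sketch only; the function names are my own and
not fixed by the notes):

    # Code a graph on n nodes as a natural number: list the potential
    # edges (1,2), (1,3), ..., (n-1,n) in order, write 1 for each edge
    # that is present and 0 otherwise, and put a leading 1 in front.
    def graph_to_number(n, edges):
        bits = "1"  # the leading 1
        for i in range(1, n + 1):
            for j in range(i + 1, n + 1):
                bits += "1" if (i, j) in edges or (j, i) in edges else "0"
        return int(bits, 2)

    def number_to_graph(n, m):
        bits = bin(m)[3:]  # strip the "0b" prefix and the leading 1
        edges, k = set(), 0
        for i in range(1, n + 1):
            for j in range(i + 1, n + 1):
                if bits[k] == "1":
                    edges.add((i, j))
                k += 1
        return edges

    # The example from the text: edges (1,3) and (2,3) give 011,
    # i.e. the number (1011)_2 = 11.
    assert graph_to_number(3, {(1, 3), (2, 3)}) == 11
    assert number_to_graph(3, 11) == {(1, 3), (2, 3)}

Both directions are clearly computable, which is all the argument needs.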
In a similar way one can see that most objects that have any reasonable
formal representation can be represented as natural numbers. This fact will
be used constantly throughout these notes.
After this detour let us return to the question of which functions are me-
chanically computable. Mechanically computable functions are often called
recursive functions. The reason for this will soon be obvious.
2.1 Primitive Recursive Functions
The name “recursive” comes from the use of recursion, i.e. when a function
value f(x +1) is defined in terms of previous values f(0), f(1) . . . f(x). The
primitive recursive functions define a large class of computable functions
which contains most natural functions. It contains some basic functions and
then new primitive recursive functions can be built from previously defined
primitive recursive functions either by composition or primitive recursion.
Let us give a formal definition.
Definition 2.1 The following functions are primitive recursive

1. The successor function, σ(x) = x + 1.

2. Constants, m(x) = m for any constant m.

3. The projections, π^n_i(x_1, x_2, . . . , x_n) = x_i for 1 ≤ i ≤ n and any n.

The primitive recursive functions are also closed under the following
two operations. Assume that g, h, g_1, g_2, . . . , g_m are known to be prim-
itive recursive functions; then we can form new primitive recursive
functions in the following ways.

4. Composition, f(x_1, x_2, . . . , x_n) = h(g_1(x_1, . . . , x_n), g_2(x_1, . . . , x_n), . . . , g_m(x_1, . . . , x_n)).

5. Primitive recursion. The function defined by

• f(0, x_2, x_3, . . . , x_n) = g(x_2, x_3, . . . , x_n)

• f(x_1 + 1, x_2, x_3, . . . , x_n) = h(x_1, f(x_1, . . . , x_n), x_2, . . . , x_n)
To get a feeling for this definition let us prove that some common functions
are primitive recursive.
Example 2.2 Addition is defined as
Add(0, x_2) = π^1_1(x_2)
Add(x_1 + 1, x_2) = σ(π^3_2(x_1, Add(x_1, x_2), x_2)) = σ(Add(x_1, x_2))
It will be very cumbersome to follow the notation of the definition of
the primitive recursive functions strictly. Thus instead of the above, not
very transparent (but formally correct) definition we will use the equivalent,
more transparent (but formally incorrect) version stated below.
Add(0, x_2) = x_2
Add(x_1 + 1, x_2) = Add(x_1, x_2) + 1
Example 2.3 Multiplication can be defined as
Mult(0, x_2) = 0
Mult(x_1 + 1, x_2) = Add(x_2, Mult(x_1, x_2))
Example 2.4 We cannot define subtraction as usual since we require the
answer to be nonnegative.¹ However, we can define a function which takes
the same value as subtraction whenever it is positive and otherwise takes the
value 0. First define a function on one variable which is basically subtraction
by 1.

Sub1(0) = 0
Sub1(x + 1) = x

and now we can let

Sub(x_1, 0) = x_1
Sub(x_1, x_2 + 1) = Sub1(Sub(x_1, x_2)).

Here for convenience we have interchanged the order of the arguments in
the definition of the recursion but this can be justified by the composition
rule.

¹This is due to the fact that we have decided to work with natural numbers. If we
instead would be working with integers the situation would be different.
Example 2.5 If f(x, y) = ∏_{i=0}^{y−1} g(x, i), where we let f(x, 0) = 1, and g is
primitive recursive then so is f since it can be defined by

f(x, 0) = 1
f(x, y + 1) = Mult(f(x, y), g(x, y)).
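Since rules 4 and 5 amount to ordinary composition and recursion on the
first argument, the examples above translate directly into a short program.
The following Python sketch (function names are mine) mirrors Examples
2.2–2.4; note that each body only uses the successor, previously defined
functions and recursion on the first argument:

    def add(x1, x2):
        # Add(0, x2) = x2;  Add(x1 + 1, x2) = Add(x1, x2) + 1
        return x2 if x1 == 0 else add(x1 - 1, x2) + 1

    def mult(x1, x2):
        # Mult(0, x2) = 0;  Mult(x1 + 1, x2) = Add(x2, Mult(x1, x2))
        return 0 if x1 == 0 else add(x2, mult(x1 - 1, x2))

    def sub1(x):
        # Sub1(0) = 0;  Sub1(x + 1) = x
        return 0 if x == 0 else x - 1

    def sub(x1, x2):
        # Sub(x1, 0) = x1;  Sub(x1, x2 + 1) = Sub1(Sub(x1, x2))
        return x1 if x2 == 0 else sub1(sub(x1, x2 - 1))

    assert mult(3, 4) == 12 and sub(5, 2) == 3 and sub(2, 5) == 0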
Example 2.6 We can define a miniature version of the signum function by

Sg(0) = 0
Sg(x + 1) = 1
and this allows us to define equality by
Eq(m, n) = Sub(1, Add(Sg(Sub(n, m)), Sg(Sub(m, n))))
since Sub(n, m) and Sub(m, n) are both zero iff n = m. Equality is here
defined by Eq(m, n) = 1 if m and n are equal and Eq(m, n) = 0 otherwise.
Equality is not really a function but a predicate of pairs of numbers, i.e.
a property of pairs of numbers. However, as we did above, it is convenient
to identify predicates with functions that take the values 0 and 1, letting
the value of the function be 1 exactly when the predicate is true. With
this convention we define a predicate to be primitive recursive exactly when
the corresponding function is primitive recursive. This naturally leads to an
efficient way to prove that more functions are primitive recursive. Namely,
let g and h be primitive recursive functions and let P be a primitive recursive
predicate. Then the function f(x) defined by g(x) if P(x) and h(x) otherwise
will be primitive recursive since it can be written as
Add(Mult(g(x), P(x)), Mult(h(x), Sub(1, P(x))))
(which in ordinary notation is P ∗ g + (1 − P) ∗ h).
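As a quick sanity check of this definition-by-cases trick, a two-line Python
sketch (again just an illustration in ordinary arithmetic):

    # With P(x) in {0, 1}, the expression P(x)*g(x) + (1 - P(x))*h(x)
    # picks g(x) when P(x) = 1 and h(x) when P(x) = 0, with no branching.
    def cases(g, h, P):
        return lambda x: P(x) * g(x) + (1 - P(x)) * h(x)

    is_zero = lambda x: 1 if x == 0 else 0  # the predicate Sub(1, Sg(x))
    f = cases(lambda x: 42, lambda x: x + 1, is_zero)
    assert f(0) == 42 and f(9) == 10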
Continuing along these lines it is not difficult (but tedious) to prove that
most simple functions are primitive recursive. Let us now argue that all
primitive recursive functions are mechanically computable. Of course this
can only be an informal argument since “mechanically computable” is only
an intuitive notion.
Each primitive recursive function is defined as a sequence of statements
starting with basic functions of the types 1-3 and then using rules 4-5. We
will call this a derivation of the function. We will argue that primitive recur-
sive functions are mechanically computable by induction over the complexity
of the derivation (i.e. the number of steps in the derivation).
The simplest functions are the basic functions 1-3 and, arguing infor-
mally, are easy to compute. In general a primitive recursive function f will
be obtained using the rules 4 and 5 from functions defined previously. Since
the derivations of these functions are subderivations of the given derivation,
we can conclude that the functions used in the definition are mechanically
computable. Suppose the new function is constructed by composition, then
we can compute f by first computing the g_i and then computing h of the
results. On the other hand if we use primitive recursion then we can com-
pute f when the first argument is 0 since it then agrees with g which is
computable by induction and then we can see that we can compute f in
general by induction over the size of the first argument. This finishes the
informal argument that all primitive recursive functions are mechanically
computable.
Before we continue, let us note the following: If we look at the proof
in the case of multiplication it shows that multiplication is mechanically
computable but it gives an extremely inefficient algorithm. Thus the present
argument has nothing to do with computing efficiently.
Although we have seen that most simple functions are primitive recursive
there are in fact functions which are mechanically computable but are not
primitive recursive. We will give one such function which, we have to admit,
would not be the first function one would like to compute but which certainly is very
important from a theoretical point of view.
A derivation of a primitive recursive function is just a finite number of
symbols and thus we can code it as a number. If the coding is reasonable it
is mechanically computable to decide, given a number, whether the number
corresponds to a correct derivation of a primitive recursive function in one
variable. Now let f_1 be the primitive recursive function in one variable which
corresponds to the smallest number giving such a legal derivation, and then
let f_2 be the function which corresponds to the second smallest number and
so on. Observe that given x it is possible to mechanically find the derivation
of f_x by the following mechanical but inefficient procedure. Start with 0 and
check the numbers in increasing order whether they correspond to correct
derivations of a function in one variable. The x'th legal derivation found is
the derivation of f_x. Now let

V(x) = f_x(x) + 1.
By the above discussion V is mechanically computable, since once we have
found the derivation of f_x we can compute it on any input. On the other
hand we claim that V does not agree with any primitive recursive function.
If V was primitive recursive then V = f_y for some number y. Now look at
the value of V at the point y. By the definition of V the value should be
f_y(y) + 1. On the other hand if V = f_y then it is f_y(y). We have reached a
contradiction and we have thus proved:
Theorem 2.7 There are mechanically computable functions which are not
primitive recursive.
The method of proof used to prove this theorem is called diagonalization.
To see the reason for this name think of an infinite two-dimensional array
with natural numbers along one axis and the primitive recursive functions on
the other. At position (i, j) we write the number f_j(i). We then construct
a function which is not primitive recursive by going down the diagonal and
making sure that our function disagrees with f_i on input i. The idea is
similar to the proof that Cantor used to prove that the real numbers are not
denumerable.
The above proof demonstrates something very important. If we want
to have a characterization of all mechanically computable functions the de-
scription cannot be mechanically computable by itself. By this we mean that
given x we should not be able to find f_x in a mechanical way. If we could
find f_x then the above defined function V would be mechanically computable
and we would get a function which was not in our list.
2.2 Partial recursive functions
The way around the problem mentioned at the end of the last section is to allow a
derivation to define a function which is only partial, i.e. is not defined for all
inputs. We will do this by giving another way of forming new functions. This
modification will give a new class of functions called the partial recursive
functions.
Definition 2.8 The partial recursive functions contain the basic functions
defined by 1-3 for primitive recursive functions and are closed under the
operations 4 and 5. There is an extra way of forming new functions:
6. Unbounded search. Assume that g is a partial recursive function and
let f(x_1, . . . , x_n) be the least m such that g(m, x_1, . . . , x_n) = 0 and such
that g(y, x_1, . . . , x_n) is defined for all y < m. If no such m exists then
f(x_1, . . . , x_n) is undefined. Then f is partial recursive.
Our first candidate for the class of mechanically computable functions
will be a subclass of the partial recursive functions.
Definition 2.9 A function is recursive (or total recursive) if it is a partial
recursive function which is total, i.e. which is defined for all inputs.
Observe that a recursive function is in an intuitive sense mechanically
computable. To see this we just have to check that the property of mechan-
ical computability is closed under the rule 6, given that f is defined. But
this follows since we just have to keep computing g until we find a value for
which it takes the value 0. The key point here is that since f is total we
know that eventually there is going to be such a value.
Also observe that there is no obvious way to determine whether a given
derivation defines a total function and thus defines a recursive function.
The problem is that it is difficult to decide whether the defined func-
tion is total (i.e. if for each value of x_1, x_2, . . . , x_n there is an m such that
g(m, x_1, x_2, . . . , x_n) = 0). This implies that we will not be able to imitate the
proof of Theorem 2.7 and thus there is some hope that this definition will
proof of Theorem 2.7 and thus there is some hope that this definition will
give all mechanically computable functions. Let us next describe another
approach to define mechanically computable functions.
2.3 Turing Machines
The definition of mechanically computable functions as recursive functions
given in the last section is due to Kleene. Other definitions of mechanically
computable were given by Church (effective calculability, also by equations),
Post (canonical systems, as rewriting systems) and Turing (Turing machines,
a type of primitive computer). Of these we will only look closer at Turing
machines. This is probably the definition which to most of us today, after
the invention of the modern computer, seems most natural.
A Turing machine is a very primitive computer. A simple picture of one
is given in Figure 1.

[Figure 1: A Turing machine]

The infinite tape serves as memory and input and output
device. Each square can contain one symbol from a finite alphabet which
we will denote by Σ. It is not important which alphabet the machine uses
and thus let us think of it as {0, 1, B} where B symbolizes the blank square.
The input is initially given on the tape. At each point in time the head is
located at one of the tape squares and is in one of a finite number of states.
The machine reads the content of the square the head is located at, and
based on this value and its state, it writes something into the square, enters
a potentially new state and moves left or right. Formally this is described
by the next-move function
f : Q × Σ → Q × Σ × {R, L}
where Q is the set of possible states and R(L) symbolizes moving right (left).
From an intuitive point of view the next-move function is the program of
the machine.
Initially the machine is in a special start-state, q_0, and the head is located
on the leftmost square of the input. The tape squares that do not contain
any part of the input contain the symbol B. There is a special halt-state,
q_h, and when the machine reaches this state it halts. The output is now
defined by the non-blank symbols on the tape.
It is possible to make the Turing machine more efficient by allowing more
than one tape. In such a case there is one head on each tape. If there are k
tapes then the next-step function depends on the contents of all k squares
where the heads are located, it describes the movements of all k heads and
what new symbols to write into the k squares. If we have several tapes then
it is common to have one tape on which the input is located, and not to allow
the machine to write on this tape. In a similar spirit there is one output-tape
which the machine cannot read. This convention separates out the tasks of
reading the input and writing the output and thus we can concentrate on
the heart of the matter, the computation.
However, most of the time we will assume that we have a one-tape Turing
machine. When we are discussing computability this will not matter, but
later when considering efficiency of computation results will change slightly.
Example 2.10 Let us define a Turing Machine which checks if the input
contains only ones and no zeros. It is given in Table 1.

    State   Symbol   New State   New Symbol   Move
    q_0     0        q_1         B            R
    q_0     1        q_0         B            R
    q_0     B        q_h         1
    q_1     0,1      q_1         B            R
    q_1     B        q_h         0

Table 1: The next step function of a simple Turing machine
Thus the machine starts in state q_0 and remains in this state until it has
seen a "0". If it sees a "B" before it sees a "0" it accepts. If it ever sees a
"0" it erases the rest of the input, prints the answer 0 and then halts.
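As a sanity check, here is a rough Python simulator (all names are mine,
and it assumes the head never moves left of the start square, which is true
for this machine) running the machine of Table 1:

    def run_tm(delta, tape, state="q0", halt="qh"):
        # delta maps (state, symbol) to (new state, new symbol, move);
        # "B" is the blank symbol and the tape grows on demand.
        tape, pos = list(tape), 0
        while state != halt:
            if pos == len(tape):
                tape.append("B")
            state, tape[pos], move = delta[(state, tape[pos])]
            pos += 1 if move == "R" else -1
        return "".join(tape).strip("B")  # the non-blank output

    table1 = {
        ("q0", "0"): ("q1", "B", "R"),
        ("q0", "1"): ("q0", "B", "R"),
        ("q0", "B"): ("qh", "1", "R"),  # move is irrelevant when halting
        ("q1", "0"): ("q1", "B", "R"),
        ("q1", "1"): ("q1", "B", "R"),
        ("q1", "B"): ("qh", "0", "R"),
    }
    assert run_tm(table1, "111") == "1" and run_tm(table1, "101") == "0"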
Example 2.11 Programming Turing machines gets slightly cumbersome
and as an example let us give a Turing machine which computes the sum of
two binary numbers. We assume that we are given two numbers with least
significant bit first and that there is a B between the two numbers. To make
things simpler we also assume that we have a special output-tape on which
we print the answer, also here beginning with the least significant bit.
To make the representation compact we will let the states have two
indices. The first index is just a string of letters while the other is a number,
which in general will be in the range 0 to 3. Let division be integer division
and let lsb(i) be the least significant bit of i. The program is given in Table
2, where we assume for notational convenience that the machine starts in
state q_{0,0}:

    State      Symbol      New State        New Symbol   Move   Output
    q_{0,i}    0,1 (= j)   q_{x,i+j}        B            R
    q_{x,i}    0,1         q_{xm,i}         same         R
    q_{x,i}    B           q_{xo,i}         B            R
    q_{xo,i}   B           q_{xo,i}         B            R
    q_{xo,i}   0,1 (= j)   q_{yc,(i+j)/2}   B            R      lsb(i+j)
    q_{yc,i}   0,1 (= j)   q_{yc,(i+j)/2}   B            R      lsb(i+j)
    q_{yc,i}   B           q_h              B                   i
    q_{xm,i}   0,1         q_{xm,i}         same         R
    q_{xm,i}   B           q_{sy,i}         B            R
    q_{sy,i}   B           q_{sy,i}         B            R
    q_{sy,i}   0,1 (= j)   q_{y,(i+j)/2}    B            R      lsb(i+j)
    q_{y,i}    0,1         q_{sx,i}         same         L
    q_{y,i}    B           q_{yo,i}         B            L
    q_{sx,i}   B           q_{sx,i}         B            L
    q_{sx,i}   0,1         q_{fx,i}         same         L
    q_{fx,i}   0,1         q_{fx,i}         same         L
    q_{fx,i}   B           q_{0,i}          B            R
    q_{yo,i}   B           q_{yo,i}         B            L
    q_{yo,i}   0,1         q_{xf,i}         same         L
    q_{xf,i}   0,1         q_{xf,i}         same         L
    q_{xf,i}   B           q_{cx,i}         B            R
    q_{cx,i}   0,1 (= j)   q_{cx,(i+j)/2}   B            R      lsb(i+j)
    q_{cx,i}   B           q_h              B                   i

Table 2: A Turing machine for addition
It will be quite time-consuming to explicitly give Turing machines which
compute more complicated functions. For this reason this will be the last
Turing machine that we specify explicitly. To be honest there are more
economical ways to specify Turing machines. One can build up an arsenal
of small machines doing basic operations and then define composition of
Turing machines. However, since programming Turing machines is not our
main task we will not pursue this direction either.
A Turing machine defines only a partial function since it is not clear
that the machine will halt for all inputs. But whenever a Turing machine
halts for all inputs it corresponds to a total function and we will call such a
function Turing computable.
The “Turing computable functions” is a reasonable definition of the me-
chanically computable functions and thus the first interesting question is
how this new class of functions relates to the recursive functions. We have
the following theorem.
Theorem 2.12 A function is Turing computable iff it is recursive.
We will not give the proof of this theorem. The proof is rather tedious,
and hence we will only give an outline of the general approach. The easier
part of the theorem is to prove that if a function is recursive then it is
Turing computable. Before, when we argued that recursive functions were
mechanically computable, most people who have programmed a modern
computer probably felt that without too much trouble one could write a
program that would compute a recursive function. It is harder to program
Turing machines, but still feasible.
For the other implication one has to show that any Turing computable
function is recursive. The way to do this is to mimic the behavior of the
Turing machine by equations. This gets fairly involved and we will not
describe this procedure here.
2.4 Church’s thesis
In the last section we stated the theorem that recursive functions are iden-
tical to the Turing computable functions. It turns out that all the other at-
tempts to formalize mechanically computable functions give the same class
of functions. This leads one to believe that we have captured the right no-
tion of computability and this belief is usually referred to as Church’s thesis.
Let us state it for future reference.
Church’s thesis: The class of recursive functions is the class of mechan-
ically computable functions, and any reasonable definition of mechanically
computable will give the same class of functions.
Observe that Church’s thesis is not a mathematical theorem but a state-
ment of experience. Thus we can use such imprecise words as “reasonable”.
Church’s thesis is very convenient to use when arguing about computabil-
ity. Since any high level computer language describes a reasonable model of
computation the class of functions computable by high level programs is in-
cluded in the class of recursive functions. Thus as long as our descriptions of
procedures are detailed enough so that we feel certain that we could write a
high level program to do the computation, we can draw the conclusion that
we can do the computation on a Turing machine or by a recursive function.
In this way we do not have to worry about actually programming the Turing
machine.
For the remainder of these notes we will use the term “recursive func-
tions” for the class of functions described by Church’s thesis. Sometimes,
instead of saying that a given function, f, is a recursive function we will
phrase this as “f is computable”. When we argue about such functions
we will usually argue in terms of Turing machines but the algorithms we
describe will only be specified quite informally.
2.5 Functions, sets and languages
If a function f only takes two values (which we assume without loss of
generality to be 0 and 1) then we can identify f with the set, A, of inputs
for which the function takes the value 1. In formulas
x ∈ A ⇔f(x) = 1.
In this connection sets are also called languages, e.g. the set of prime num-
bers could be called the language of prime numbers. The reason for this is
historical and comes from the theory of formal languages. The function f is
called the characteristic function of A. Sometimes the characteristic func-
tion of A will be denoted by χ_A. A set is called recursive iff its characteristic
function is recursive. Thus A is recursive iff given x one can mechanically
decide whether x ∈ A.
2.6 Recursively enumerable sets
We have defined recursive sets to be the sets for which membership can be
tested mechanically i.e. a set A is recursive if given x it is computable to test
whether x ∈ A. Another interesting class of sets is the class of sets which
can be listed mechanically.
Definition 2.13 A set A is recursively enumerable iff there is a Turing
machine M_A which, when started on the empty input tape, lists the members
of A on its output tape.
It is important to remember that, while any member of A will eventually
be listed, the members of A are not necessarily listed in order, and that M
will probably never halt since A is most of the time infinite. Thus if we
want to know whether x ∈ A it is not clear how to use M for this purpose.
We can watch the output of M and if x appears we know that x ∈ A, but
if we have not seen x we do not know whether x ∉ A or we have just not
waited long enough. If we would require that A was listed in order we could
check whether x ∈ A since we would only have had to wait until we had
seen x or a number greater than x.² Thus in this case we can conclude that
A is recursive, but in general this is not true.

²There is a slightly subtle point here since it might be the case that M never outputs
such a number, which would happen in the case when A is finite and does not contain x or
any larger number. However also in this case A is recursive since any finite set is recursive.
It is interesting to note that given the machine M it is not clear which alternative should
be used to recognize A, but one of them will work and that is all we care about.
Theorem 2.14 If a set is recursive then it is recursively enumerable. How-
ever, there are sets that are recursively enumerable but not recursive.
Proof: That recursive implies recursively enumerable is not too hard; the
procedure below will even print the members of A in order.
For i = 0, 1 . . . ∞
If i ∈ A print i.
Since it is computable to determine whether i ∈ A this will give a correct
enumeration of A.
The other part of the theorem is harder and requires some more notation.
A Turing machine is essentially defined by the next-step function which can
be described by a number of symbols and thus can be coded as an integer.
Let us outline in more detail how this is done. We have described a Turing
machine by a number of lines where each line contains the following items:
State, Symbol, New state, New Symbol, Move and Output. Let us make
precise how to code this information. A state should be written as q_x where
x is a natural number written in binary. A symbol is from the set {0, 1, B},
while a move is either R or L and the output is either 0, 1 or B. Each item
is separated from the next by the special symbol &, the end of a line is
marked as & & and the end of the specification is marked as & & &. We
assume that the start state is always q_0 and the halt state q_1. With these
conventions a Turing machine is completely specified by a finite string over
the alphabet {0, 1, B, &, R, L, q}. This coding is also efficient in the sense
that given a string over this alphabet it is possible to mechanically decide
whether it is a correct description of a Turing machine (think about this for
a while). By standard coding we can think of this finite string as a number
written in base 8. Thus we can uniquely code a Turing machine as a natural
number.
For technical reasons we allow the end of the specification not to be the
last symbols in the coding. If we encounter the end of the specification we
will just discard the rest of the description. This definition implies that each
Turing machine occurs infinitely many times in any natural enumeration.
We will denote the Turing machine which is given by the description
corresponding to y by M_y. We again emphasize that given y it is possible
to mechanically determine whether it corresponds to a Turing machine and
in such a case find that Turing machine. Furthermore we claim that once
we have the description of the Turing machine we can run it on any input
(simulate M_y on a given input). We make this explicit by stating a theorem
we will not prove.
Theorem 2.15 There is a universal Turing machine which on input (x, y, z)
simulates z computational steps of M_y on input x. By this we mean that
if M_y halts with output w on input x within z steps then also the universal
machine outputs w. If M_y does not halt within z steps then the universal
machine gives output "not halted". If y is not the description of a legal
Turing machine, the universal Turing machine enters a special state q_ill,
where it usually would halt, but this can be modified at will.

We will sometimes allow z to take the value ∞. In such a case the
universal machine will simulate M_y until it halts, or go on for ever without
halting if M_y does not halt on input x. The output will again agree with
that of M_y.
In a more modern language, the universal Turing machine is more or less
an interpreter since it takes as input a Turing machine program together with
an input and then runs the program. We encourage the interested reader
to at least make a rough sketch of a program in his favorite programming
language which does the same thing as the universal Turing machine.
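Taking up that suggestion, here is a rough sketch in Python of such an
interpreter (the representation of the machine as a dictionary is my own
simplification; the notes instead code the next-move function as a number y):

    def universal(delta, x, z, start="q0", halt="q1"):
        # Simulate the machine given by delta (a dict from (state, symbol)
        # to (new state, new symbol, move)) for at most z steps on input x,
        # in the spirit of Theorem 2.15.
        tape, pos, state = dict(enumerate(x)), 0, start
        for _ in range(z):
            if state == halt:
                break
            state, new_sym, move = delta[(state, tape.get(pos, "B"))]
            tape[pos] = new_sym
            pos += 1 if move == "R" else -1
        if state != halt:
            return "not halted"
        return "".join(tape[i] for i in sorted(tape)).strip("B")

Decoding a number y into such a dictionary, and checking that y is a legal
description at all, is the mechanical (if tedious) part we are taking for
granted here.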
We now define a function which is in the same spirit as the function V
which we proved not to be primitive recursive. To distinguish it we call it
V_T.

    V_T(x) = 1, if M_x halts on input x with output 0;
    V_T(x) = 0, otherwise.

V_T is the characteristic function of a set which we will denote by K_D. We
call this set "the diagonal halting set" since it is the set of Turing machines
which halt with output 0 when given their own encoding as input. We claim
that K_D is recursively enumerable but not recursive. To prove the first claim
observe that K_D can be enumerated by the following procedure
For i = 1, 2, . . . , ∞
    For j = 1, 2, . . . , i: If M_j is legal, run M_j for i steps on input j; if it
    halts within these i steps and gives output 0 and we have not listed j
    before, print j.
Observe that this is a recursive procedure using the universal Turing
machine. The only detail to check is that we can decide whether j has
been listed before. The easiest way to do this is to observe that j has not
been listed before precisely if j = i or M_j halted in exactly i steps. The
procedure lists K_D since all numbers ever printed are by definition members
in K_D, and if x ∈ K_D and M_x halts in T steps on input x then x will be
listed for i = max(x, T) and j = x.
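The dovetailing in this procedure is worth seeing in code. A hedged sketch,
assuming a helper run(y, x, steps) that behaves like the universal machine:
it returns the output of M_y on input x if it halts within the given number
of steps, and None otherwise (checking that a description is legal is left out):

    def enumerate_KD(run):
        # Run machine j for i steps, for ever larger i, so that every
        # halting computation is eventually caught.
        i = 0
        while True:
            i += 1
            for j in range(1, i + 1):
                # j is new precisely if j = i or M_j halted in exactly
                # i steps (i.e. not already within i - 1 steps).
                if run(j, j, i) == "0" and (j == i or run(j, j, i - 1) is None):
                    print(j)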
To see that K_D is not recursive, suppose that V_T can be computed by
a Turing machine M. We know that M = M_y for some y. Consider what
happens when M is fed input y. If it halts with output 0 then V_T(y) = 1.
On the other hand if M does not halt with output 0 then V_T(y) = 0. In
either case M_y makes an error and hence we have reached a contradiction.
This finishes the proof of Theorem 2.14.
We have proved slightly more than was required by the theorem. We have
given an explicit function which cannot be computed by a Turing machine.
Let us state this as a separate theorem.
Theorem 2.16 The function V_T cannot be computed by a Turing machine,
and hence is not recursive.
2.7 Some facts about recursively enumerable sets
Recursion theory is really the predecessor of complexity theory and let us
therefore prove some of the standard theorems to give us something to com-
pare with later. In this section we will abbreviate recursively enumerable as
“r.e.”.
Theorem 2.17 A is recursive if and only if both A and the complement of
A (denoted Ā) are r.e.
Proof: If A is recursive then also Ā is recursive (we get a machine recog-
nizing Ā from a machine recognizing A by changing the output). Since any
recursive set is r.e. we have proved one direction of the theorem. For the
converse, to decide whether x ∈ A we just enumerate A and Ā in parallel,
and when x appears in one of the lists, which we know it will, we can give
the answer and halt.
From Theorem 2.16 we have the following immediate corollary.
Corollary 2.18 The complement of K_D is not r.e.
For the next theorem we need the fact that we can code pairs of natural
numbers as natural numbers. For instance one such coding is given by
f(x, y) = (x + y)(x + y + 1)/2 + x.
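A quick sketch of this coding and its inverse in Python (the inverse is my
addition; the text only needs the forward map):

    def pair(x, y):
        # A bijection from pairs of natural numbers to natural numbers.
        return (x + y) * (x + y + 1) // 2 + x

    def unpair(z):
        # Find the diagonal s = x + y that z lies on, then read off x.
        s = 0
        while (s + 1) * (s + 2) // 2 <= z:
            s += 1
        x = z - s * (s + 1) // 2
        return x, s - x

    assert all(unpair(pair(x, y)) == (x, y)
               for x in range(30) for y in range(30))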
Theorem 2.19 A is r.e. iff there is a recursive set B such that x ∈ A ⇔
∃y (x, y) ∈ B.
Proof: If there is such a B then A can be enumerated by the following
program:
For z = 0, 1, 2, . . . , ∞
    For x = 0, 1, 2, . . . , z: If for some y ≤ z we have (x, y) ∈ B and
    (x, y′) ∉ B for all y′ < y, and x has not been printed before, then
    print x.

First observe that x has not been printed before precisely if either x or y
is equal to z. By the relation between A and B this program will list only
members of A, and if x ∈ A and y is the smallest number such that (x, y) ∈ B
then x is listed for z = max(x, y).
To see the converse, let M_A be the Turing machine which enumerates
A. Define B to be the set of pairs (x, y) such that x is output by M_A in at
most y steps. By the existence of the universal Turing machine it follows
that B is recursive, and by definition ∃y (x, y) ∈ B precisely when x appears
in the output of M_A, i.e. when x ∈ A. This finishes the proof of Theorem
2.19.
The last theorem says that r.e. sets are just recursive sets plus an existen-
tial quantifier. We will later see that there is a similar relationship between
the complexity classes P and NP.
Let the halting set, K, be defined by
K = {(x, y) | M_y is legal and halts on input x}.
Determining whether a given pair (x, y) belongs to K is for natural reasons called the
halting problem. This is closely related to the diagonal halting problem which
we have already proved not to be recursive in the last section. Intuitively
this should imply that the halting problem also is not recursive and in fact
this is the case.
Theorem 2.20 The halting problem is not recursive.
Proof: Suppose K is recursive, i.e. that there is a Turing machine M which
on input (x, y) gives output 1 precisely when M_y is legal and halts on input
x. We will use this machine to construct a machine that computes V_T using
M as a subroutine. Since we have already proved that no machine can
compute V_T this will prove the theorem.
Now consider an input x and suppose that we want to compute V_T(x). First
decide whether M_x is a legal Turing machine. If it is not we output 0 and
halt. If M_x is a legal machine we feed the pair (x, x) to M. If M outputs
0 we can safely output 0 since we know that M_x does not halt on input x.
On the other hand if M outputs 1 we use the universal machine on input
(x, x, ∞) to determine the output of M_x on input x. If the output is 0 we give
the answer 1 and otherwise we answer 0. This gives a mechanical procedure
that computes V_T and we have reached the desired contradiction.
It is now clear that other problems can be proved to be non-recursive by
a similar technique. Namely we assume that the given problem is recursive
and we then make an algorithm for computing something that we already
know is not recursive. One general such method is by a standard type of
reduction and let us next define this concept.
Definition 2.21 For sets A and B let the notation A ≤_m B mean that
there is a recursive function f such that x ∈ A ⇔ f(x) ∈ B.
The reason for the letter m on the less than sign is that one usually
defines several different reductions. This particular reduction is usually re-
ferred to as a many-one reduction. We will not study other definitions in
detail, but since the only reduction we have done so far was not a many-one
reduction but a more general notion called Turing reduction, we will define
also this reduction.
Definition 2.22 For sets A and B let the notation A ≤_T B mean that given
a Turing machine that recognizes B then using this machine as a subroutine
we can construct a Turing machine that recognizes A.
The intuition for either of the above definitions is that A is not harder
to recognize than B. This is formalized as follows:
Theorem 2.23 If A ≤_m B and B is recursive then A is recursive.

Proof: To decide whether x ∈ A, first compute f(x) and then check
whether f(x) ∈ B. Since both f and B are recursive this is a recursive
procedure and it gives the correct answer by the definition of A ≤_m B.
Clearly the similar theorem with Turing reducibility rather than many-
one reducibility is also true (prove it). However in the future we will only
reason about many-one reducibility. Next let us define the hardest problem
within a given class.
Definition 2.24 A set A is r.e.-complete iff
1. A is r.e.
2. If B is r.e. then B ≤_m A.
We have
Theorem 2.25 The halting set is r.e.-complete.
Proof: The fact that the halting problem is r.e. can be seen in a similar
way as the diagonal halting problem K_D was seen to be r.e.: just run more
and more machines for more and more steps and output all pairs of machines
and inputs that lead to halting.
To see that it is complete we have to prove that any other r.e. set B can
be reduced to K. Let M be the Turing machine that enumerates B. Define
M′ to be the Turing machine which on input x runs M until it outputs x (if
ever) and then halts with output 0. Then M′ halts precisely when x ∈ B.
Thus if M′ = M_y we can let f(x) = (x, y) and this will give a reduction
from B to K. The proof is complete.
It is also true that the diagonal halting problem is r.e.-complete, but we
omit the proof. There are many other (often more natural) problems that
can be proved r.e.-complete (or to be even harder) and let us define two such
problems.
The first problem is called tiling and can be thought of as a two-dimensional
domino game. We are given a finite set of squares (which will be called tiles),
each with a marking on all four sides, and one tile placed at the origin in the
plane. The question is whether it is possible to cover the entire positive
quadrant with tiles such that on any two neighboring tiles the markings
agree on their common side, and such that each tile is equal to one of the
given tiles.
Theorem 2.26 The complement problem of tiling is r.e.-complete.
Proof: (Outline) Given a Turing machine M_x we will construct a set of
tiles and a tile at the origin such that the entire positive quadrant can be
tiled iff M_x does not halt on the empty input. The problem whether a
Turing machine halts on the empty input is not recursive (this is one of the
exercises at the end of this chapter). We will construct the tiles in such a
way that the only way to put down tiles correctly will be to make them
describe a computation of M_x. The tile at the origin will make sure that the
machine starts correctly (with some more complications this tile could also
have been eliminated).
Let the state of a tape cell be the content of the cell with the additional
information whether the head is there and in such a case which state the
machine is in. Now each tile will describe the state of three adjacent cells.
The tile to be placed at position (i, j) will describe the state of cells j, j +1
and j +2 at time i of the computation. Observe that this implies that tiles
which are to the left and right of each other will describe overlapping parts
of the tape. However, we will make sure that the descriptions do not conflict.
A tile will thus be partly specified by three cell-states s_1, s_2 and s_3
(we call this the signature of the tile) and we need to specify how to mark
its four sides. The left hand side will be marked by (s_1, s_2) and the right
hand side by (s_2, s_3). Observe that this makes sure that there is no conflict
in the descriptions of a cell by different tiles. The markings on the top and
the bottom will make sure that the computation proceeds correctly.
Suppose that the states of cells j, j + 1, and j + 2 are s_1, s_2, and s_3
at time t. Consider the states of these cells at time t + 1. If one of the
s_i tells us that the head is present we know exactly what states the cells
will be in. On the other hand if the head is not present in any of the
three cells there might be several possibilities, since the head could be in
cell j − 1 or j + 3 and move into one of our positions. In a similar way
there might be one or many (or even none) possible states for the three
cells at time t − 1. For each possibility (s^{−1}_1, s^{−1}_2, s^{−1}_3) and (s^{+1}_1, s^{+1}_2, s^{+1}_3) of
states in the previous and next step we make a tile. The marking on the
lower side is ((s^{−1}_1, s^{−1}_2, s^{−1}_3), (s_1, s_2, s_3)) while the marking on the top side is
((s_1, s_2, s_3), (s^{+1}_1, s^{+1}_2, s^{+1}_3)). This completes the description of the tiles.

Finally at the origin we place a tile which describes that the machine
starts in the first cell in state q_0 with a blank tape. Now it is easy to see that
a valid tiling describes a computation of M_x and the entire quadrant can be
tiled iff M_x goes on for ever, i.e. it does not halt.
There are a couple of details to take care of. Namely that new heads
don’t enter from the left and that the entire tape is blank from the beginning.
A couple of special markings will take care of this. We leave the details to
the reader.
The second problem we will consider is number theoretic statements,
i.e. given a number theoretic statement is it false or true? One particular
statement people have been interested in for a long time (which supposedly
was proved true in 1993) is Fermat’s last theorem, which can be written as
follows:

∀n > 2 ∀x, y, z (x^n + y^n = z^n ⇒ xyz = 0).
In general a number theoretic statement involves the quantifiers ∀ and ∃,
variables and usual arithmetical operations. Quantifiers range over natural
numbers.
Theorem 2.27 The set of true number theoretic statements is not recur-
sive.
Remark 2.28 In fact the set of true number theoretic statements is not
even r.e. but has a much more complicated structure. To prove this would
lead us too far into recursion theory. The interested reader can consult any
standard text in recursion theory.
Proof: (Outline) Again we will prove that we can reduce the halting
problem to the given problem. This time we will let an enormous integer
z code the computation. Thus assume we are given a Turing machine M_x
and that we want to decide whether it halts on the empty input.

The state of each cell will be given by a certain number of bits in the
binary expansion of z. Suppose that each cell has at most S ≤ 2^r states.
A computation of M_x that runs in time t never uses more than t tape cells
and thus such a computation can be described by the contents of t^2 cells (i.e.
t cells each at t different points in time). This can now be coded as rt^2
bits and these bits concatenated will be the integer z. Now let A_x be an
arithmetic formula such that A_x(z, t) is true iff z is an rt^2 bit integer which
describes a correct computation of M_x which has halted. To check that
such a formula exists requires a fair amount of detailed reasoning and let us
just sketch how to construct it. First one makes a predicate Cell(i, j, z, t, p)
which is true iff p is the integer that describes the content of cell i at time
j. This amounts to extracting the r bits of z which are in position starting
at (it + j)r. Next one makes a predicate Move(p_1, p_2, p_3, q) which says that
if p_1, p_2 and p_3 are the states of squares i − 1, i and i + 1 at time j then q is
the resulting state of square i at time j + 1. The Cell predicate is from an
intuitive point of view very arithmetic (and thus we hope the reader feels
that it can be constructed). Move on the other hand is of constant size
(there are only 2^{4r} inputs, which is a constant depending only on x and
independent of t) and thus can be coded by brute force. The predicate
A_x(z, t) is now equivalent to the conjunction of

∀i, j, p_1, p_2, p_3, q  Cell(i − 1, j, z, t, p_1) ∧
Cell(i, j, z, t, p_2) ∧ Cell(i + 1, j, z, t, p_3) ∧
Cell(i, j + 1, z, t, q) ⇒ Move(p_1, p_2, p_3, q)

and

∀q′ Cell(1, t, z, t, q′) ⇒ Stop(q′)

where Stop(p) is true if p is a halt-state. Now we are almost done since M_x
halts iff

∃z, t A_x(z, t)
and thus if we can decide the truth of arithmetic formulae with quantifiers
we can decide if a given Turing machine halts. Since we know that this is
not possible we have finished the outline of the proof.
Remark 2.29 It is interesting to note that (at least to me) the proofs of
the last two theorems are in some sense counter intuitive. It seems like the
hard part of the tiling problem is what to do at points where we can put down
many different tiles (we never know if we made the correct decision). This
is not utilized in the proof. Rather at each point we have only one choice
and the hard part is to decide whether we can continue for ever. A similar
statement is true about the other proof.
Let us explicitly state a theorem we have used a couple of times.
Theorem 2.30 If A is r.e.-complete then A is not recursive.
Proof: Let B be a set that is r.e. but not recursive (e.g. the halting problem);
then by the second property of being r.e.-complete, B ≤_m A. Now if A was
recursive then by Theorem 2.23 we could conclude that B is recursive,
contradicting the initial assumption that B is not recursive.
Before we end this section let us make an informal remark. What does
it mean that the halting problem is not recursive? Experience shows that
for most programs that do not halt there is a simple reason that they do not
halt. They often tend to go into an infinite loop and of course such things can
be detected. We have only proved that there is no single program which,
when given as input the description of a Turing machine and an input to that
machine, always gives the correct answer to the question
whether the machine halts or not. One final definition: A problem that is
not recursive is called undecidable. Thus the halting problem is undecidable.
2.8 Gödel's incompleteness theorem
Since we have done many of the pieces let us briefly outline a proof of
Gödel's incompleteness theorem. This theorem basically says that there are
statements in arithmetic which have neither a proof nor a disproof. We want
to avoid too elaborate machinery and hence we will be rather informal
and give an argument in the simplest case. However, before we state the
theorem we need to address what we mean by "statement in arithmetic"
and "proof".
Statements in arithmetic will simply be the formulas considered in the
last examples, i.e. quantified formulas where the variables take values which
are natural numbers. We encourage the reader to write common theorems and
conjectures in number theory in this form to check its power.
The notion of a proof is more complicated. One starts with a set of
axioms and then one is allowed to combine axioms (according to some rules)
to derive new theorems. A proof is then just such a derivation which ends
with the desired statement.
First note that most proofs used in modern mathematics are much more
informal and given in a natural language. However, proofs can be formalized
(although most humans prefer informal proofs).
The most common set of axioms for number theory was proposed by
Peano, but one could think of other sets of axioms. We call a set of axioms
together with the rules how they can be combined a proofsystem. There are
two crucial properties to look for in a proofsystem. We want to be able to
prove all true theorems (this is called completeness) and we do not want to be
able to prove any false theorems (this is called consistency).
In particular, for each statement A we want to be able to prove exactly one
of A and ¬A.
Our goal is to prove that there is no proof system that is both consistent
and complete. Unfortunately, this is not true since we can as axioms take all
true statements and then we need no rules for deriving new theorems. This
is not a very practical proofsystem since there is no way to tell whether a
given statement is indeed an axiom. Clearly the axioms need to be specified
in a more efficient manner. We take the following definition.
Definition 2.31 A proofsystem is recursive iff the set of proofs (and hence
the set of axioms) forms a recursive set.
We can now state the theorem.
Theorem 2.32 (G¨ odel) There is no recursive proofsystem which is both
consistent and complete.
Proof: Assume that there was indeed such a proofsystem. Then we claim
that also the set of all theorems would be recursive. Namely to decide
whether a statement A is true we could proceed as follows:
For z = 0, 1, 2, . . . ∞
If z is a correct proof of A output “true” and halt.
If z is a correct proof of ¬A output “false” and halt.
To check whether a given string is a correct proof is recursive by as-
sumption and since the proofsystem is consistent and complete sooner or
later there will be a proof of either A or ¬A. Thus this procedure always
halts with the correct answer. However, by Theorem 2.27 the set of true
statements is not recursive and hence we have reached a contradiction.
2.9 Exercises
Let us end this section with a couple of exercises (with answers). The reader
is encouraged to solve the exercises without looking too much at the answers.
II.1: Given x is it recursive to decide whether M_x halts on an empty input?
II.2: Is there any fixed machine M, such that given y, deciding whether M
halts on input y is recursive?
II.3: Is there any fixed machine M, such that given y, deciding whether M
halts on input y is not recursive?
II.4: Is it true that for each machine M, given y, it is recursive to decide
whether M halts on input y in y^2 steps?
II.5: Given x is it recursive to decide whether there exists a y such that M_x
halts on y?
II.6: Given x is it recursive to decide whether for all y, M_x halts on y?
II.7: If M_x halts on empty input let f(x) be the number of steps it needs
before it halts and otherwise set f(x) = 0. Define the maximum time function
by MT(y) = max_{x≤y} f(x). Is the maximum time function computable?
II.8 Prove that the maximum time function (cf ex. II.7) grows at least as
fast as any recursive function. To be more precise let g be any recursive
function, then there is an x such that MT(x) > g(x).
II.9 Given a set of rewriting rules over a finite alphabet and a starting
string and a target string, is it decidable whether we, using the rewriting
rules, can transform the starting string to the target string? An example
of this instance is: Rewriting rules ab → ba, aa → bab and bb → a. Is it
possible to transform ababba to aaaabbb?
II.10 Given a set of rewriting rules over a finite alphabet and a starting
string. Is it decidable whether we, using the rewriting rules, can transform
the starting string to an arbitrarily long string?
II.11 Given a set of rewriting rules over a finite alphabet and a starting
string. Is it decidable whether we, using the rewriting rules, can transform
the starting string to an arbitrarily long string, if we restrict the left hand
side of each rewriting rule to be of length 1?
2.10 Answers to exercises
II.1 The problem is undecidable. We will prove that if we could decide
whether M_x halts on the empty input, then we could decide whether M_z
halts on input y for an arbitrary pair z, y. Namely, given z and y we make a
machine M_x which basically looks like M_z but has a few special states. We
have one special state for each symbol of y. On empty input M_x first goes
through all its special states, which write y on the tape. The machine then
returns to the beginning of the tape and from this point on it behaves as
M_z. This new machine halts on empty input-tape iff M_z halted on input y,
and thus if we could decide the former we could decide the latter, which is
known undecidable. To conclude the proof we only have to observe that it
is recursive to compute the number x from the pair y and z.
II.2 There are plenty of machines of this type. For instance let M be the
machine that halts without looking at the input (or any machine defining a
total function). In any of these cases the set of y's for which the machine
halts is everything, which certainly is a decidable set.
II.3 Let M be the universal machine. Then M halts on input (x, y) iff M_x
halts on input y. Since the latter problem is undecidable so is the former.
II.4 This problem is decidable by the existence of the universal machine. If
we are less formal we could just say that running a machine a given number
of steps is easy. What makes halting problems difficult is that we do not
know for how many steps to run the machine.
II.5 Undecidable. Suppose we could decide this problem; then we show that
we could determine whether a machine M_x halts on empty input. Given M_x
we create a machine M_z which first erases the input and then behaves as M_x.
We claim that M_z halts on some input iff M_x halts on empty input. Also
it is true that we can compute z from x. Thus if we could decide whether
M_z halts on some input then we could decide whether M_x halts on empty
input, but this is undecidable by exercise II.1.
II.6 Undecidable. The argument is the same as in the previous exercise.
The constructed machine M_z halts on all inputs iff it halts on some input.
II.7 MT is not computable. Suppose it was; then we could decide whether
M_x halts on empty input as follows: First compute MT(x) and then run M_x
for MT(x) steps on the empty input. If it halts in this number of steps, we
know the answer and if it did not halt, we know by the definition of MT that
it will never halt. Thus we always give the correct answer. However we know
by exercise II.1 that the halting problem on empty input is undecidable. The
contradiction must come from our assumption that MT is computable.
II.8 Suppose we had a recursive function g such that g(x) ≥ MT(x) for all
x. Then g(x) would work in the place of MT(x) in the proof of exercise II.7
(we would run more steps than we needed to, but we would always get the
correct answer). Thus there can be no such function.
II.9 The problem is undecidable; let us give an outline why this is true. We
will prove that if we could decide this problem then we could decide whether
a given Turing machine halts on the empty input. The letters in our finite
alphabet will be the nonblank symbols that can appear on the tape of the
Turing machine, plus a symbol for each state of the machine. A string in
this alphabet containing exactly one letter corresponding to a state of the
machine can be viewed as coding the Turing machine at one instant in time
29
by the following convention. The nonblank part of the tape is written from
left to write and next to the letter corresponding to the square where the
head is, we write the letter corresponding to the state the machine is in.
For instance suppose the Turing machine has symbols 0 and 1 and 4 states.
We choose a, b, c and d to code these states. If, at an instant in time, the
content of the tape is 0110000BBBBBBBBBBBB. . . and the head is in
square 3 and is in state 3, we could code this as: 011c000. Now it is easy
to make rewriting rules corresponding to the moves of the machine. For
instance if the machine would write 0, go into state 2 and move left when
it is in state 3 and sees a 1 this would correspond to the rewriting rule
1c → b0. Now the question whether a machine halts on the empty input
corresponds to the question whether we can rewrite a to a description of a
halted Turing machine. To make this description unique we add a special
state to the Turing machine such that instead of just halting, it erases the
tape and returns to the beginning of the tape and then halts. In this case
we get a unique halting configuration, which is used as the target string.
It is very interesting to note that although one would expect that the
complexity of this problem comes from the fact that we do not know which
rewriting rule to apply when there is a choice, this is not used in the proof.
In fact in the special cases we get from the reduction from Turing machines,
at each point there is only one rule to apply (corresponding to the move of
the Turing machine).
In the example given in the exercise there is no way to transform the
start string to the target string. This might be seen by letting a have weight
2 and b have weight 1. Then the rewriting rules preserve weight while the
two given words are of different weight.
II.10 Undecidable. Do the same reduction as in exercise II.9 to get a rewriting
system and a start string corresponding to a Turing machine M_x working
on empty input. If this system produces arbitrarily long words then the
machine does not halt. On the other hand, if we knew that the system did not
produce arbitrarily long words then we could simulate the machine until
it either halts or enters the same configuration twice (we know one of these
two cases will happen). In the first case the machine halted and in the second
it will loop forever. Thus if we could decide whether a rewriting system
produces arbitrarily long strings we could decide whether a Turing machine
halts on empty input.
II.11 This problem is decidable. Make a directed graph G whose nodes
correspond to the letters in the alphabet. There is an edge from v to w if there
is a rewriting rule which rewrites v into a string that contains w. Let the
weight of this edge be 1 if the rewriting rule replaces v by a longer string
and 0 otherwise. Now we claim that the rewriting rules can produce
arbitrarily long strings iff there is a circuit of positive weight that can be
reached from one of the letters contained in the starting word. The
decidability now follows from standard graph algorithms.
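For concreteness, the decision procedure for II.11 can be sketched in Python
as follows. This is our own illustration; the encoding of the rules as
(letter, replacement) pairs is made up for the example.

def can_grow(rules, start):
    # rules: list of (letter, replacement) pairs; every left hand side has
    # length 1, as in exercise II.11. Decide whether arbitrarily long
    # strings can be produced from the starting string.
    # Edge v -> w, with weight 1 iff the rule replaces v by a longer string.
    edges = [(v, w, 1 if len(rhs) > 1 else 0) for v, rhs in rules for w in rhs]

    def reach(sources):
        # Letters reachable from the given set of letters.
        seen, frontier = set(sources), list(sources)
        while frontier:
            u = frontier.pop()
            for a, b, _ in edges:
                if a == u and b not in seen:
                    seen.add(b)
                    frontier.append(b)
        return seen

    reachable = reach(set(start))
    # A positive-weight circuit is reachable iff some weight-1 edge (u, v)
    # has u reachable from the start word and u reachable again from v.
    return any(w == 1 and u in reachable and u in reach({v})
               for u, v, w in edges)

print(can_grow([('a', 'ab'), ('b', 'b')], 'a'))  # True: a -> ab -> abb -> ...
print(can_grow([('a', 'b'), ('b', 'a')], 'a'))   # False: length is preserved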
3 Efficient computation, hierarchy theorems.
To decide what is mechanically computable is of course interesting, but what
we really care about is what we can compute in practice, i.e. by using an
ordinary computer for a reasonable amount of time. For the remainder of
these notes all functions that we consider will be recursive, and we will
concentrate on what resources are needed to compute a function. The first
two such resources we will be interested in are computing time and space.
3.1 Basic Definitions
Let us start by defining what we mean by the running time and space usage
of a Turing machine. The running time is a function of the input, and
experience has shown that it is convenient to treat inputs of the same
length together.
Definition 3.1 A Turing machine M runs in time T(n) if for every input
of length n, M halts within T(n) steps.
Definition 3.2 The length of string x is denoted by |x|.
The natural definition for space would be to say that a Turing machine
uses space S(n) if its head visits at most S(n) squares on any input of
length n. This definition is not quite suitable under all circumstances. In
particular, the definition would imply that if the Turing machine looks at the
entire input then S(n) ≥ n. We will, however, also be interested in machines
which use less than linear space and to make sense of this we have to modify
the model slightly. We will assume that there is a special input-tape which
is read-only and a special output-tape which is write-only. Apart from these
two tapes the machine has one or more work-tapes which it can use in the
old-fashioned way. We will then only count the number of squares visited on
the work-tapes.
Definition 3.3 Assume that a Turing machine M has a read-only input-
tape, a write-only output-tape and one or more work-tapes. Then we will
say that M uses space S(n) if for every input of length n, M visits at most
S(n) tape squares on its work-tapes before it halts.
When we are discussing running times we will most of the time not worry
about constants, i.e. we will not really care whether a machine runs in time
n^2 or 10n^2. Thus the following definition is useful:

Definition 3.4 O(f(n)) is the set of functions which are bounded by cf(n)
for some positive constant c.
Having done the definitions we can go on to see whether more time
(space) actually enables us to compute more functions.
3.2 Hierarchy theorems
Before we start studying the hierarchy theorems (i.e. theorems of the type
“more time helps”) let us just prove that there are arbitrarily complex func-
tions.
Theorem 3.5 For any recursive function f(n) there is a function V_f which
is recursive but cannot be computed in time f(n).

Proof: Define V_f by letting V_f(x) be 1 if M_x is a legal Turing machine
which halts with output 0 within f(|x|) steps on input x, and let V_f(x) take
the value 0 otherwise.

We claim that V_f cannot be computed within time f(n) on any Turing
machine. Suppose for contradiction that M_y computes V_f and halts within
time f(|x|) for every input x. Consider what happens on input y. Since we
have assumed that M_y halts within time f(|y|) we see that V_f(y) = 1 iff M_y
gives output 0, and thus we have reached a contradiction.

To finish the proof of the theorem we need to check that V_f is recursive,
but this is fairly straightforward. We need to do two things on input x.

1. Compute f(|x|).

2. Check if M_x is a legal Turing machine and in such a case simulate M_x
for f(|x|) steps and check whether the output is 0.

The first of these two operations is recursive by assumption while the
second can be done using the universal Turing machine as a subroutine.
This completes the proof of Theorem 3.5.
Up to this point we have not assumed anything about the alphabet of
our Turing machines. Implicitly we have thought of it as {0, 1, B} but let
us now highlight the role of the alphabet in two theorems.
Theorem 3.6 If a Turing machine M computes a {0, 1}-valued function f
in time T(n) then there is a Turing machine M' which computes f in time
2n + T(n)/2.

Proof: (Outline) Suppose that the alphabet of M is {0, 1, B}; then the
alphabet of M' will be 5-tuples of these symbols. Then we can code every
five adjacent squares on the tape of M into a single square of M'. This will
enable M' to take several steps of M in one step, provided that the head
stays within the same block of 5 symbols coded in the same square of M'.
However, it is not clear that this will help since it might be the case that
many of M's steps will cross a boundary of 5-blocks. One can avoid this by
having the 5-tuples of M' be overlapping, and we leave this construction to
the reader.

The reason for requiring that f only takes the values 0 and 1 is to make
sure that M does not spend most of its time printing the output, and the
reason for adding 2n to the running time of M' is that M' has to read the
input in the old format before it can be written down more succinctly and
then return to the initial configuration.
The previous theorem tells us that we can gain any constant factor in
running time provided we are willing to work with a larger alphabet. The
next theorem tells us that this is all we can gain.
Theorem 3.7 If a Turing machine M computes a {0, 1}-valued function
f on inputs that are binary strings in time T(n), then there is a Turing
machine M' which uses the alphabet {0, 1, B} and computes f in time
cT(n) for some constant c.

Proof: (Outline) Each symbol of M is now coded as a finite binary string
(assume for notational convenience that the length of these strings is 3 for
any symbol of M's alphabet). To each square on the tape of M there will
be associated 3 tape squares on the tape of M' which will contain the code
of the corresponding symbol of M. Each step of M will be a sequence of
steps of M' which reads the corresponding squares. We need to introduce
some intermediate states to remember the last few symbols read and there
are some other details to take care of. However, we leave these details to
the reader.
The last two theorems tell us that there is no point in keeping track
of constants when analyzing computing times. The same is of course true
when analyzing space since the proofs naturally extend. The theorems also
say that it is sufficient to work with Turing machines that have the alphabet
{0, 1, B} as long as we remember that constants have no significance. For
definiteness we will state results for Turing machines with 3 tapes.
It will be important to have efficient simulations and we have the follow-
ing theorem.
Theorem 3.8 The number of operations for a universal two-tape Turing
machine needed to simulate T(n) operations of a Turing machine M is at
most αT(n) log T(n), where α is a constant dependent on M, but indepen-
dent of n. If the original machine runs in space S(n) ≥ log n, the simulation
also runs in space αS(n), where α again is a constant dependent on M, but
independent of n.
We skip the complicated proof.
Now consider the function V_f defined in the proof of Theorem 3.5 and
let us investigate how much is needed to compute it. Of the two steps of the
algorithm, the second step can be analyzed using the above result and thus
the unknown part is how long it takes to compute f(|x|). As many times in
mathematics, we define away this problem.

Definition 3.9 A function f is time constructible if there is a Turing
machine that on input 1^n computes f(n) in time f(n).

It is easy to see that most natural functions like n^2, 2^n and n log n are
time constructible. More or less just collecting all the pieces of the work
already done we have the following theorem.
Theorem 3.10 If T_2(n) is time constructible, T_1(n) > n, and

lim_{n→∞} T_2(n) / (T_1(n) log T_1(n)) = ∞

then there is a function computable in time O(T_2(n)) but not in time T_1(n).
Both time bounds refer to Turing machines with three tapes.

Proof: The intuition would be to use the function V_{T_1} defined previously.
To avoid some technical obstacles we work with a slightly modified function.
When simulating M_x we count the steps of the simulating machine rather
than of M_x. That is, we first compute T_2(n) and then run the simulation for
that many steps. We use two of the tapes for the simulation and the third
tape to keep a clock. If we get an answer within this simulation we output
1 if the answer was 0 and output 0 otherwise. If we do not get an answer
we simply answer 0. This defines a function V'_{T_2} and we need to check that
it cannot be computed by any M_y in time T_1.

Remember that there are infinitely many y_i such that M_{y_i} codes M_y (we
allowed an end marker in the middle of the description). Now note that the
constant α in Theorem 3.8 only depends on the machine M_y to be simulated
and thus there is a y_i which codes M_y such that

T_2(|y_i|) ≥ αT_1(|y_i|) log T_1(|y_i|).

By the standard argument M_y will make an error for this input.
It is clear that we will be able to get the same result for space complexity
even though there are some minor problems to take care of. Let us first prove
that there are functions which require arbitrarily large amounts of space.
Theorem 3.11 If f(n) is a recursive function then there is a recursive
function which cannot be computed in space f(n).

Proof: Define U_f by letting U_f(x) be 1 if M_x is a legal Turing machine
which halts with output 0 without visiting more than f(|x|) tape squares
on input x, and let U_f(x) take the value 0 otherwise.

We claim that U_f cannot be computed in space f(n). Given a Turing
machine M_y which never uses more than f(n) space, then as in all previous
arguments M_y will output 0 on input y iff U_f(y) = 1 and otherwise
U_f(y) = 0.

To finish the theorem we need to prove that U_f is recursive. This might
seem obvious at first since we can just use the universal machine to simulate
M_x and all we have to keep track of is whether M_x uses more than the allowed
amount of space. This is not quite sufficient since M_x might run forever and
never use more than f(|x|) space. We need the following important but not
very difficult lemma.

Lemma 3.12 Let M be a Turing machine which has a work-tape alphabet
of size c, Q states and k work-tapes and which uses space at most S(n).
Then on inputs of length n, M either halts within time nQS(n)^k c^{kS(n)} or it
never halts.
Proof: Let a configuration of M be a complete description of the machine
at an instant in time. Thus, the configuration consists of the contents of the
tapes of M, the positions of all its heads and its state.

Let us calculate the number of different configurations of M given a fixed
input of length n. Since it uses at most space S(n) there are at most c^{kS(n)}
possible contents of its work-tapes and at most S(n)^k possible positions of
the heads on the work-tapes. The number of possible locations of the head on
the input-tape is at most n and there are Q possible states. Thus we have a
total of nQS(n)^k c^{kS(n)} possible configurations. If the machine does not halt
within this many timesteps the machine will be in the same configuration
twice. But since the future actions of the machine are completely determined
by the present configuration, whenever it returns to a configuration where
it has been previously it will return infinitely many times and thus never
halt. The proof of Lemma 3.12 is complete.
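To get a feeling for how fast this bound grows, here is a tiny numeric
instance in Python (all parameter values are invented for illustration):

# Hypothetical parameters: input length n, Q states, k work-tapes,
# alphabet size c, and space bound S = S(n).
n, Q, k, c, S = 10, 5, 2, 3, 4
print(n * Q * S**k * c**(k * S))   # 10 * 5 * 4^2 * 3^8 = 5248800

The dominating factor is c^{kS(n)}, which is why the time bound in the
lemma is exponential in the space bound.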
Returning to the proof of Theorem 3.11 we can now prove that U_f is
computable. We just simulate M_x for at most |x|Qf(|x|)^k c^{kf(|x|)} steps or
until it has halted or used more than f(|x|) space. We use a counter to count
the number of steps used. This finishes the proof of Theorem 3.11.
To prove that more space actually enables us to compute more functions
we need the appropriate definition.
Definition 3.13 A function f is space constructible if there is a Turing
machine that on input 1^n computes f(n) in space f(n).
We can now state the space-hierarchy theorem.
Theorem 3.14 If S_2(n) is space constructible, S(n) ≥ log n and

lim_{n→∞} S_2(n) / S(n) = ∞

then there is a function computable in space O(S_2(n)) but not in space S(n).
These space bounds refer to machines with 3 tapes.

Proof: The function achieving the separation is basically U_S with the
same twist as in Theorem 3.10. In other words, define a function essentially
as U_S but restrict the computation to using space S_2 of the simulating
machine. The rest of the proof is now more or less identical. The only
detail to take care of is that if S(n) ≥ log n then a counter counting up to
|x|QS(|x|)^k c^{kS(|x|)} can be implemented in space S(n).
The reason that we get a tighter separation between space-complexity
classes than between time-complexity classes is the fact that the universal
machine uses only a constant factor more space than the original machine.
This completes our treatment of the hierarchy theorems. These results
are due to Hartmanis and Stearns and are from the 1960’s. Next we will
continue into the 1970’s and move further away from recursion theory and
into the realm of more modern complexity theory.
4 The complexity classes L, P and PSPACE.
We can now start our main topic, namely the study of complexity classes.
We will in this section define the basic deterministic complexity classes, L,
P and PSPACE.
Definition 4.1 Given a set A, we say that A ∈ L iff there is a Turing
machine which computes the characteristic function of A in space O(log n).

Definition 4.2 Given a set A, we say that A ∈ P iff there is a Turing
machine which for some constant k computes the characteristic function of
A in time O(n^k).

Definition 4.3 Given a set A, we say that A ∈ PSPACE iff there is
a Turing machine which for some constant k computes the characteristic
function of A in space O(n^k).
There are some relations between the given complexity classes.
Theorem 4.4 L ⊂ PSPACE.
Proof: The inclusion is obvious. That it is strict follows from Theorem
3.14.
Theorem 4.5 P ⊆ PSPACE.
Proof: This is also obvious since a Turing machine cannot use more space
than time.
Theorem 4.6 L ⊆ P.
Proof: This follows from Lemma 3.12 since if S(n) ≤ c log n and we assume
that the machine uses a three-letter alphabet, has k work-tapes, and Q states
and always halts, then we know it runs in time at most

nQ(c log n)^k 3^{c log n} ∈ O(n^{2 + c log 3})

where we used that (log n)^k ∈ O(n) for any constant k. We can conclude
that a machine which runs in logarithmic space also runs in polynomial
time.
The inclusions given in Theorems 4.5 and 4.6 are believed to be strict
but this is not known. Of course, it follows from Theorem 4.4 that at least
one of the inclusions is strict, but it gives no indication of which one.
Figure 2: A Random Access Machine
4.1 Is the definition of P model dependent?
When studying mechanically computable functions we had several definitions
which turned out to be equivalent. This fact convinced us that we had
found the right notion, i.e. that we had defined a class of functions which
captured a property of the functions rather than a property of the model.
The same argument applies here. We have to investigate whether the
defined complexity classes are artifacts of the particulars of Turing machines
as a computational model or if they are genuine classes of functions which
are more or less independent of the model of computation. The reader who
is not worried about such questions is advised to skip this section.
The Turing machine seems incredibly inefficient and thus we will compare
it to a model of computation which is more or less a normal computer
(programmed in assembly language). This type of computer is called a
Random Access Machine (RAM) and a picture is given in Figure 2. A RAM
has a finite control, an infinite number of registers and two accumulators.
Both the registers and the accumulators can hold arbitrarily large integers.
We will let r(i) be the content of register i and ac_1 and ac_2 the contents of
the accumulators. The finite control can read a program and has a read-only
input-tape and a write-only output-tape. In one step a RAM can carry out
the following instructions.
1. Add, subtract, divide (integer division) or multiply the two numbers
in ac_1 and ac_2; the result ends up in ac_1.

2. Make conditional and unconditional jumps. (Condition ac_1 > 0 or
ac_1 = 0.)

3. Load something into an accumulator, e.g. ac_1 = r(k) for constant k
or ac_1 = r(ac_1), similarly for ac_2.

4. Store the content of an accumulator, e.g. r(k) = ac_1 for constant k or
r(ac_2) = ac_1, similarly for ac_2.

5. Read input: ac_1 = input(ac_2).

6. Write an output.

7. Use constants in the program.

8. Halt.
One might be tempted to let the time used by a RAM be the number
of operations it does (the unit-cost RAM). This turns out to give a quite
unrealistic measure of complexity and instead we will use the logarithmic
cost model.
Definition 4.7 The time to do a particular instruction on a RAM is 1 +
log(k + 1) where k is the least upper bound on the integers involved in the
instruction. The time for a computation on a RAM is the sum of the times
for the individual instructions.
This actually agrees quite well with our everyday computers. The size of
a computer word is bounded by a constant, and operations on larger numbers
require us to access a number of memory cells which is proportional to
the logarithm of the numbers used.
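As a toy rendering of Definition 4.7 (the helper function and the choice of
base-2 logarithm are ours, for illustration):

import math

def instruction_cost(*integers):
    # Logarithmic cost of one RAM instruction: 1 + log(k + 1),
    # where k bounds the integers involved (Definition 4.7).
    k = max(abs(v) for v in integers)
    return 1 + math.log2(k + 1)

print(instruction_cost(3, 4))        # small operands cost a few units
print(instruction_cost(2**1000, 5))  # a 1000-bit operand costs about 1001 units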
To define the amount of memory used by a RAM during a computation,
let us assume that the initial contents of all the registers are 0. Then
we have:
Definition 4.8 The space used by a RAM during a computation is the
maximum of

log(ac_1 + 1) + log(ac_2 + 1) + Σ_{r(i)≠0} log(i + r(i))

during the computation.
Intuitively the RAM seems more powerful than a Turing machine. We
will not try to prove exactly this, but only to establish strong enough results
to show that the class P is well defined.
Theorem 4.9 If a Turing machine can compute a function in time T(n)
and space S(n), for T(n) ≥ n and S(n) ≥ log n, then the same function can
be computed in time O(T^2(n)) and space O(S(n)) on a RAM.
Proof: (Outline) Assume for simplicity that the Turing machine just has
one work-tape and that it uses the alphabet {0, 1, B}. The RAM will sim-
ulate the computation of the Turing machine step by step. It will code the
content of the work-tape as an integer and store this integer in register 1,
the position of the head on the input-tape in accumulator 2, the position
of the head on the work-tape(s) in register 2 and the current state of the
Turing machine in register 3. To simulate a step of the Turing machine the
RAM gets the appropriate information from the work-tape by an integer di-
vision and then it follows the transition described by the next-step function.
The cost of the simulation of an individual step is the size of the integers
involved and this is bounded by O(S(n)). Since we have at most T(n) steps
and S(n) ≤ T(n) the bound for the running time follows. The bound for
the space used is obvious.
Observe that we need to store the entire contents of the work-tape in
one register to conserve space. If we instead stored the content of square i
in register i the total space used would be O(S(n) log S(n)). The running
time would be improved to O(T(n) log T(n)) but for the present purposes it
is more important to keep the space small.
Next let us see that in fact a Turing machine is not that much less powerful
than a RAM.
Theorem 4.10 If a function f can be computed by a RAM in time T(n)
and space S(n) then f can be computed in time O(T^2(n)) and space O(S(n))
on a Turing machine.
Proof: (Outline) Like many other proofs, this is a not very thrilling
simulation argument of the kind we usually tend to omit. However, since the
result is central in that it proves that P is invariant under a change of
model, we will at least give a reasonable outline of the proof.
The way to proceed is of course to simulate the RAM step by step.
Assume for simplicity that we do the simulation on a Turing machine which
apart from its input-tape and output-tape has 4 work-tapes. Three of the
four work-tapes will correspond to ac_1, ac_2 and the registers, respectively,
while the fourth tape is used as a scratch pad. A schematic picture is given in
Figure 3. The register tape will contain pairs (i, r(i)) where the two numbers
are separated by a B. Two different pairs are separated by BB. If some
i does not appear on the register tape this means that r(i) = 0.
The RAM-program is now translated into a next-step function of the
Turing machine. Each line is translated into a set of states and transitions
between the states as indicated by Figure 4. Let us give a few examples of how
to simulate some particular instructions. We will define the Turing machine
pictorially by having circles indicate states. Inside the circle we write the
tape(s) we are currently interested in, and the labeled arrows going out
of the circle indicate which states to proceed to, where the label indicates
the current symbol(s). Rectangular boxes indicate subroutines; a special
subroutine is "Rew", which rewinds the register tape, i.e. moves the
head to the beginning. The same operation also applies to other tapes.
1. If the instruction is an arithmetical step, we just replace it by a Turing
machine which computes the arithmetical step using the ac_1 and ac_2
tapes as inputs and the scratch pad tape as work-tape.

2. If the instruction is a jump-instruction we just make the next-step
function take the next state which is the first state of the set of states
corresponding to that line. (See Figure 5.)

3. If the jump is conditional on the content of ac_1 being 0, then we just
search the ac_1-tape for the symbol 1. If we do not find any 1 before we
find B the next-step function directs us to the given line and otherwise
we proceed with the next line. (See Figure 6.)
Figure 3: A TM simulating a RAM

Figure 4: Basic picture

Figure 5: The jump instruction

Figure 6: Conditional jump
4. Let us just give an outline of how to load r(ac_2) into ac_1. Clearly,
what we want to do is to look for the content of ac_2 as the first part of
any pair on the register tape. If we find that no such pair exists then
we should load 0 into ac_1. A description of this is given in Figure 7.

5. Finally let us indicate how to store ac_1 into register ac_2. To do this
we scan the register-tape to find out the present value of r(ac_2). If
r(ac_2) = 0 previously this is easy: if ac_1 ≠ 0 we store the pair (ac_2, ac_1)
at the end of the register-tape and otherwise we do nothing. If r(ac_2) ≠
0 we erase the old copy (ac_2, r(ac_2)) and then move the rest of the
content of the register-tape left to avoid empty space. After we have
moved the information we write (ac_2, ac_1) at the end (provided ac_1 ≠
0).
Let us analyze the efficiency of the simulation. The space used by the Turing
machine is easily seen to be bounded by

O(log(ac_1 + 1) + log(ac_2 + 1) + Σ_{r(i)≠0} (log(i + 1) + log(r(i) + 1) + 3))

and thus the simulation works in O(S(n)) space. To analyze the time needed
for the simulation we claim that one can do multiplication and integer
division of two m-digit numbers in time O(m^2) on a Turing machine. This
implies that any arithmetical operation can be done in a factor O(S(n)) more
time on the Turing machine than on the RAM.

Figure 7: Loading instruction

The storing and retrieving of information can also be done in time O(S(n))
and using S(n) ≤ T(n), Theorem 4.10 follows.
Using Theorems 4.9 and 4.10 we see that P, L and PSPACE are the same
whether we use Turing machines or RAMs in the definitions. This turns out
to be true in general and this gives us a very important principle which we
can formalize as a complexity theoretic version of Church’s thesis.
Complexity theoretic version of Church’s thesis: The complexity
classes L, P and PSPACE remain the same under any reasonable computa-
tional model.
The above statement also remains true for all other complexity classes
that we will define throughout these notes and we will frequently implicitly
apply the above thesis. This works as follows. When designing algorithms
it is much easier to describe and analyze the algorithm if we use a high
level description. On the other hand when we argue about computation it
is much easier to work with Turing machines since their local behavior is so
easy to describe. By virtue of the above thesis we can take the easy road
in both cases and still be correct.
4.2 Examples of members in the complexity classes.
We have defined L, P and PSPACE as families of sets. We will every now
and then abuse this notation and say that a function (not necessarily {0, 1}-
valued) lies in one of these complexity classes. This will just mean that the
function can be computed within the implied resource bounds.
Example 4.11 Given two n-digit numbers x and y written in binary,
compute their sum.

This can clearly be done in time O(n) as we all learned in first grade. It
is also quite easy to see that it can be done in logarithmic space. If we have
x = Σ_{i=0}^{n−1} x_i 2^i and y = Σ_{i=0}^{n−1} y_i 2^i then x + y is computed by the following
program:

carry = 0
For i = 0 to n − 1
  bit = x_i + y_i + carry
  carry = 0
  If bit ≥ 2 then carry = 1, bit = bit − 2.
  write bit
next i
write carry.

The only things that need to be remembered are the counter i and the
values of bit and carry. This can clearly be done in O(log n) space and thus
addition belongs to L.
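As a sanity check, here is the same program rendered in Python (our own
illustration); it keeps only i, bit and carry while streaming over the bit
positions:

def add_binary(x_bits, y_bits):
    # x_bits, y_bits: bits with the least significant first, equal length n.
    out, carry = [], 0
    for i in range(len(x_bits)):
        bit = x_bits[i] + y_bits[i] + carry
        carry = 1 if bit >= 2 else 0
        if carry:
            bit -= 2
        out.append(bit)        # "write bit"
    out.append(carry)          # "write carry"
    return out

# 3 + 6 = 9; bits are given least significant first.
print(add_binary([1, 1, 0, 0], [0, 1, 1, 0]))  # [1, 0, 0, 1, 0]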
Example 4.12 Given two n-digit numbers x and y written in binary,
compute their product.

This can again be done in P by first-grade methods, and if we do it
as taught, it will take us O(n^2) time (this can be improved by more elaborate
methods). In fact we can also do it in L.

carry = 0
For i = 0 to 2n − 2
  low = max(0, i − (n − 1))
  high = min(n − 1, i)
  For j = low to high, carry = carry + x_j ∗ y_{i−j}
  write lsb(carry)
  carry = carry/2
next i
write carry with least significant bit first

If one looks more closely at the algorithm one discovers that it is the
ordinary multiplication algorithm where one saves space by computing a
number only when it is needed. The only slightly nontrivial thing to check
in order to verify that the algorithm does not use more than O(log n) space
is to verify that carry always stays less than 2n. We leave this easy detail
to the reader.
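In the same spirit, a Python rendering of the space-saving multiplication
(ours, for illustration):

def multiply_binary(x_bits, y_bits):
    # x_bits, y_bits: bits least significant first, both of length n.
    n = len(x_bits)
    out, carry = [], 0
    for i in range(2 * n - 1):
        low, high = max(0, i - (n - 1)), min(n - 1, i)
        for j in range(low, high + 1):
            carry += x_bits[j] * y_bits[i - j]
        out.append(carry & 1)   # write lsb(carry)
        carry >>= 1             # carry = carry / 2
    while carry:                # write carry, least significant bit first
        out.append(carry & 1)
        carry >>= 1
    return out

# 3 * 6 = 18 = 10010 in binary, printed least significant bit first.
print(multiply_binary([1, 1, 0], [0, 1, 1]))  # [0, 1, 0, 0, 1]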
One might be tempted to think that division could also be done in L.
However, it is not known whether this is the case. Another very easy problem
not known to be doable in L: given an integer in base 2, convert it to
base 3.
Example 4.13 Given two n-bit integers x and y compute their greatest
common divisor.
We will show that this problem is in P and in fact give two different
algorithms to show this. First the old and basic algorithm: Euclid's algorithm.
Assume for simplicity that x > y.

a = x
b = y
While b ≠ 0 do
  find q and r such that a = bq + r, 0 ≤ r < b
  a = b
  b = r
od
write a
The algorithm is correct since if d divides x and y then clearly d divides
all a and b. On the other hand if d divides any pair a and b then it also
divides x and y.
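A direct Python transcription (ours), which also counts the iterations
analyzed below:

def euclid_gcd(x, y):
    # Euclid's algorithm; returns the gcd and the number of iterations.
    a, b = max(x, y), min(x, y)
    iterations = 0
    while b != 0:
        a, b = b, a % b   # a = bq + r with 0 <= r < b
        iterations += 1
    return a, iterations

print(euclid_gcd(1071, 462))  # (21, 3)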
To analyze the algorithm we have to focus on two things, namely the
number of iterations and the cost of each iteration. First observe that at
each iteration the numbers get smaller and thus we will always be working
with numbers of at most n digits. The work in each iteration is essentially
a division and this can be done in O(n^2) bit operations.

The fact that the numbers get smaller at each iteration implies that there
are at most 2^n iterations. This is not sufficient to get a polynomial
running time and we need the following lemma.
Lemma 4.14 Let a and b have the values a_0 and b_0 at one point in time
in Euclid's algorithm and let a_2 and b_2 be their values two iterations later;
then a_2 ≤ a_0/2.

Proof: Let a_1 and b_1 be the values of a and b after one iteration. Then if
b_0 < a_0/2 we have a_2 < a_1 = b_0 < a_0/2 and the conclusion of the lemma is
true. On the other hand if b_0 ≥ a_0/2 then we will have a_2 = b_1 = a_0 − b_0 ≤
a_0/2 and thus we have proved the lemma.
The lemma implies that there are at most 2n iterations and thus the
total complexity is bounded by O(n^3). If you are careful, however, it is
possible to do better (without applying any fancy techniques) by the following
observation. If you use standard long division (with remainder) to find q
then the complexity is actually O(ns) where s is the number of bits in q.
Thus if q is small we can do each iteration significantly faster. On the other
hand, if q is large then it is easy to see that the numbers decrease more
rapidly than indicated by the above lemma. If one analyzes this carefully
one actually gets complexity O(n^2).
Let us give another algorithm for the same problem. This algorithm is
called "Binary GCD".

Let 2^{d_x} be the highest power of 2 that divides x and define d_y similarly.
Set a = x2^{−d_x} and b = y2^{−d_y}. If b < a interchange a and b.
While b > 1 do
  Either a + b or a − b is divisible by 4. Set r to the number that is
  divisible by 4 and set a = max(b, r2^{−d_r}) and b = min(b, r2^{−d_r}),
  where 2^{d_r} is the highest power of 2 that divides r.
od
write a2^{min(d_x,d_y)}
The algorithm is correct by an argument similar to the one for the previous
algorithm. To analyze the complexity of the algorithm we again have to study the
number of iterations and the cost of each iteration. Again it is clear that
the numbers decrease in size and thus we will never work with numbers of
more than n digits. Each iteration consists only of a few comparisons and
shifts if the numbers are coded in binary and thus it can be implemented in
time O(n). To analyze the number of iterations we have:
Lemma 4.15 Let a and b have the values a_0 and b_0 at one point in time in
the binary GCD algorithm and let a_2 and b_2 be their values two iterations
later; then a_2 ≤ a_0/2.

Proof: If a_1 and b_1 are the numbers after one iteration then b_1 ≤ (a_0 + b_0)/4
and a_1 ≤ a_0. Since b_0 ≤ a_0 this implies that a_2 ≤ max(b_1, (a_1 + b_1)/4) ≤
a_0/2.
Thus again we can conclude that we have at most 2n iterations, and
hence the total work is bounded by O(n^2). This implies that binary GCD
is a competitive algorithm, in particular since the individual operations can
be implemented very efficiently when the binary representation of integers
is used.
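A runnable sketch (the standard textbook variant of binary GCD, slightly
restructured from the pseudocode above):

def binary_gcd(x, y):
    if x == 0 or y == 0:
        return x | y
    shift = 0
    while x % 2 == 0 and y % 2 == 0:   # factor out 2^min(dx, dy)
        x, y, shift = x // 2, y // 2, shift + 1
    while x % 2 == 0:                  # make both numbers odd
        x //= 2
    while y % 2 == 0:
        y //= 2
    while x != y:                      # subtract and strip factors of 2
        if x < y:
            x, y = y, x
        x -= y
        while x % 2 == 0:
            x //= 2
    return x << shift

print(binary_gcd(48, 180))  # 12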
Let us just remark that the best known greatest common divisor algorithm
for integers runs in time O(n(log n)^2 log log n) and is based on the
Euclidean algorithm. It is unknown if integer greatest common divisor can
be solved in small space.
Example 4.16 Given a nonsingular integer matrix M with entries which
are n-bit numbers, solve Mx = b for a vector b of n-bit numbers.

It might seem that this problem is obviously in P since Gaussian
elimination is well known to be doable in O(n^3) steps. However, there is something
to check. We need to verify that the numbers do not get too large during the
computation, i.e. that the rational numbers that appear can be represented.
To analyze what happens to the numbers, assume for notational simplicity
that the upper left i × i matrix is nonsingular for every i and thus we will
be able to perform Gaussian elimination without pivoting. Let us
investigate what the matrix looks like after we have eliminated the i'th variable.
Suppose the original matrix looks like

( A  B )
( C  D )

where A is the upper i × i matrix. After the i'th variable has been eliminated
the matrix will be

( A^{-1}    0 ) ( A  B )   ( I  A^{-1}B       )
( -CA^{-1}  I ) ( C  D ) = ( 0  -CA^{-1}B + D )

where I is the i × i identity matrix. Thus, using the following lemma, we can
bound the rational numbers involved in the computation.
Lemma 4.17 If A is a nonsingular n × n integer matrix with entries bounded
in size by m then A^{-1} has rational entries with numerator and denominator
bounded by m^n n^{n/2}.

Proof: Any entry of A^{-1} is an (n − 1) × (n − 1) subdeterminant of A
divided by the determinant of A. Thus we just need to bound the size of
determinants of integer matrices. A determinant can be interpreted as the
volume of the parallelepiped spanned by the rows. This volume is bounded
by the product of the lengths of the row vectors³, which in turn is bounded
by (m√n)^n.

³ This is not a formal proof; the inequality indicated in this sentence is known as
Hadamard's inequality.
It follows from the lemma that the rational numbers involved in Gaussian
elimination can be represented by O(n^2) binary digits. Since Gaussian
elimination can be done in O(n^3) operations and each operation can be
performed in time O(n^4) (if we use classical arithmetic), we get a total
complexity of O(n^7).
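A minimal exact version in Python, using rational arithmetic so that all
intermediate numbers are representable (our own sketch; it assumes, as
above, that no pivoting is needed):

from fractions import Fraction

def solve_exact(M, b):
    # Gaussian elimination over the rationals, no pivoting.
    n = len(M)
    A = [[Fraction(M[i][j]) for j in range(n)] + [Fraction(b[i])]
         for i in range(n)]
    for i in range(n):                       # eliminate variable i
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n + 1):
                A[r][c] -= f * A[i][c]
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):           # back substitution
        s = sum(A[i][c] * x[c] for c in range(i + 1, n))
        x[i] = (A[i][n] - s) / A[i][i]
    return x

print(solve_exact([[2, 1], [1, 3]], [3, 4]))  # [Fraction(1, 1), Fraction(1, 1)]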
Example 4.18 The determinant of an n × n matrix can be written as

Σ_{π∈S_n} sg(π) Π_{i=1}^{n} x_{i,π(i)}

where the sum is over all permutations of the numbers 1 through n and
sg(π) is the signum⁴ of the permutation. The determinant can be computed
by Gaussian elimination and thus by the previous example it is in P. The
permanent is a closely related number which is defined as

Σ_{π∈S_n} Π_{i=1}^{n} x_{i,π(i)}.
Thus we have just removed the signum part of the definition. The definition
looks simpler but it removes the nice invariance under the row operations
of Gaussian elimination. There is no known polynomial time algorithm for
computing the permanent and there is good reason to believe that there is no
such algorithm (the problem is #P-complete; we will get to this complexity
class later). It is not hard to see that the problem is in PSPACE and we will
not give the most efficient algorithm but rather the easiest to understand.

per = 0
For 1 ≤ π(1), π(2), . . . , π(n) ≤ n
  If π(i) ≠ π(j) for i ≠ j, per = per + Π_{i=1}^{n} x_{i,π(i)}
Thus we just generate all n-tuples of numbers between 1 and n, check if
each is a permutation and, if it is, add the corresponding term to the sum. All
the space required is to store the variables π(i) and per. The space needed
for the former is bounded by O(n log n) while the latter is bounded by the
size of the answer; if we assume that all entries in the original matrix
are bounded by 2^n then per is bounded by 2^{n^2} n! and thus can be stored in
space O(n^2).

⁴ If you do not know the signum function, just forget this definition of the determinant.
It is interesting to note that there is a polynomial time algorithm to
decide whether the permanent of a 0,1-matrix is nonzero, but that it seems
hard to compute its value.
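The brute-force PSPACE algorithm above is easy to render in Python (our
illustration; fine for tiny matrices only, since it inspects n^n tuples):

from itertools import product

def permanent(x):
    # Enumerate all n-tuples, keep the permutations, add up the products.
    n = len(x)
    per = 0
    for pi in product(range(n), repeat=n):
        if len(set(pi)) == n:            # pi is a permutation
            term = 1
            for i in range(n):
                term *= x[i][pi[i]]
            per += term
    return per

print(permanent([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10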
Example 4.19 Given a prime number p and a number a, find x (if one
exists) such that x^2 ≡ a (mod p).

Let us first recall some basic facts from number theory. Assume that
we have an odd prime p (the case p = 2 being easy); then a can be written
as a square mod p iff a^{(p−1)/2} ≡ 1 (mod p) (i.e. we have a solution iff this
condition holds). Remember also that, by Fermat's little theorem, it is true
that x^{p−1} ≡ 1 (mod p) for any number x not divisible by p.

Now if p ≡ 3 (mod 4) and if a^{(p−1)/2} ≡ 1 (mod p), then if we set x = a^{(p+1)/4}
we have

x^2 ≡ a^{(p+1)/2} ≡ a · a^{(p−1)/2} ≡ a (mod p).
Thus taking square roots when p ≡ 3 (mod 4) is just computing a power.
Let us investigate what resources are needed to compute a^{(p+1)/4} (mod
p). Assume that p and a are at most n-digit numbers. Then computing
a^{(p+1)/4} by successive multiplications would require on the order of 2^n
multiplications. It is more efficient to first compute a^{2^i} (mod p) for 0 ≤ i ≤ n
in n squarings. Observe here that since we are only interested in the result
(mod p), we can reduce mod p after each squaring and thus we will never
need to work with numbers of more than 2n digits. Now we write (p+1)/4 in
binary and we compute a^{(p+1)/4} by multiplying together the powers a^{2^i} with
the i's corresponding to 1's in the binary expansion of (p+1)/4. Hence, we get
O(n) multiplications of O(n)-bit numbers and this can be done in total time
O(n^3).
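In Python the whole procedure is short, since the built-in pow performs
exactly this repeated squaring (a sketch, ours):

def sqrt_mod(a, p):
    # Square root of a modulo a prime p with p ≡ 3 (mod 4).
    assert p % 4 == 3
    if pow(a, (p - 1) // 2, p) != 1:   # no square root exists
        return None
    return pow(a, (p + 1) // 4, p)

print(sqrt_mod(2, 7))  # 4, since 4*4 = 16 ≡ 2 (mod 7)
print(sqrt_mod(3, 7))  # None: 3 is not a square mod 7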
Thus we have proved that taking square-roots modulo primes p with
p ≡ 3 (mod 4) can be done in polynomial time. It is not known if this is
true in general for primes p ≡ 1 (mod 4) or when p is a composite number.
We will return to these questions later in these notes.
Example 4.20 Given a directed graph G with n nodes and two distin-
guished nodes s and t in G. Is it possible to find a directed path from s to
t?
This problem is in P by the following straightforward algorithm.
Set R = {s}.
Set R_new to the set of nodes reachable from s in one step,
i.e. the set of v such that there is an edge (s, v).
While R_new is not empty do
  Take an element w in R_new and move it into R. Also take
  any nodes reachable from w in one step which do not belong
  to either R or R_new and put them into R_new.
od
If t ∈ R say yes, otherwise say no.
We claim that when the algorithm ends all the nodes reachable from s
are in R. We leave the verification of this to the reader. The key observation
is that R_new contains the set of nodes known to be reachable from s but
whose neighbors have not yet been put into R or R_new.

Remark 4.21 Observe that in fact R is the set of nodes reachable from s
and thus we have really solved a more general problem.
To see that the problem is in P let us analyze the time needed by the
algorithm. Since each time the loop is executed we put one node into R, and
we never remove anything from R, the loop is executed at most n times. Each
execution of the loop can be done in time n since we just have to investigate
the neighbors of w. Thus the complexity is bounded by O(n^2).
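The algorithm in Python (our rendering; the graph is a list of directed
edges):

def reachable(edges, s, t):
    # R is the explored set, R_new the frontier of nodes known reachable
    # whose neighbors have not yet been explored.
    R = {s}
    R_new = {v for (u, v) in edges if u == s} - R
    while R_new:
        w = R_new.pop()
        R.add(w)
        R_new |= {v for (u, v) in edges if u == w} - R
    return t in R

print(reachable([(1, 2), (2, 3), (4, 1)], 1, 3))  # True
print(reachable([(1, 2), (2, 3), (4, 1)], 1, 4))  # False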
Next we turn to the definition of non-deterministic computation. The
obvious goal in mind is to formally define NP.
5 Nondeterministic computation
The two most famous complexity classes are probably P and NP. We have
already defined P and to define NP we need the concept of a nondeterministic
Turing machine. The formal definition might make nondeterminism seem
like a paper-tiger which has nothing to do with reality, but it will soon be
clear that this is not the case.
5.1 Nondeterministic Turing machines
The heart of a normal, deterministic Turing machine is the next-step func-
tion, which tells the machine what to do in a given situation. A nondeter-
ministic Turing machine also has a next-step function, but it is multivalued.
By this we mean that in a given situation the machine might do several
different things. This implies that on a given input there are several possi-
ble computations and in particular, there might be several different possible
outputs. This calls for a definition.
Definition 5.1 A nondeterministic Turing machine can only compute
functions which take the values 0 and 1. The machine takes the value 1 on (or
accepts) an input x iff there is some possible computation on input x which
gives output 1. If there is no computation that gives the output 1, the
machine takes the value 0 (or rejects the input).
Since we will only be working with {0, 1} functions we will think of
nondeterministic machines as recognizing sets i.e. the set of inputs for which
there is an accepting computation.
Example 5.2 Suppose we want to recognize composite numbers, i.e.
numbers which are not prime and hence can be written as the product of two
numbers both greater than or equal to 2. This can be done by a
nondeterministic machine as follows:

On input x, write y_1 and y_2 nondeterministically with |y_i| ≤ |x| for
i = 1, 2. Writing down y_1 is done by allowing the machine to move left for
|x| steps while at each step either writing down 0, 1 or an endmarker. The
machine constructs y_2 in the same way. Now the machine gives output 1 iff
y_1 y_2 = x and y_i > 1 for i = 1, 2.

Let us see that the algorithm is correct. If x is composite then there is
some computation that outputs 1, namely if x = ab then when y_1 = a and
y_2 = b we will get the output 1. On the other hand if x is prime there is
no possible computation that gives output 1, since if y_1 y_2 = x then by the
definition of prime one of the y_i is 1.
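The nondeterministic guess is just a certificate that can be checked
deterministically; the checking step looks like this in Python (our
illustration):

def verify_composite(x, y1, y2):
    # Accept iff the guessed factors witness that x is composite.
    return y1 > 1 and y2 > 1 and y1 * y2 == x

print(verify_composite(15, 3, 5))  # True: this guess is accepted
print(verify_composite(13, 3, 4))  # False: no guess works for a prime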
Observe that when we are considering deterministic computation,
recognizing primes and recognizing composite numbers are very similar tasks,
since one just changes the output routine to reverse the meaning of 0 and 1.
When it comes to nondeterministic computation there is a tremendous
difference. If, for instance, you change the output of the machine recognizing
composite numbers defined above then you get a machine that accepts
everything. It is important to keep this asymmetry in mind.
The definitions of space and time need to be slightly modified since there
is no unique computation given the input.
Definition 5.3 A nondeterministic Turing machine M runs in time T(n)
if for every input of length n, every computation of M halts within T(n)
steps.
Definition 5.4 A nondeterministic Turing machine M runs in space S(n)
if for every input of length n, every computation of M visits at most S(n)
squares on the work-tape.
Since non-deterministic Turing machines can always be made to have
output 1 or 0 the size of the answer will always be small. This implies that
we do not need an output-tape. Some proofs will be formally easier if we
assume that the output is written on the worktape and therefore we will
assume this.
With these basic definitions done we can proceed to define some com-
plexity classes.
Definition 5.5 Given a set A, we say that A ∈ NL iff there is a
nondeterministic Turing machine which accepts A and runs in space O(log n).

Definition 5.6 Given a set A, we say that A ∈ NP iff there is a
nondeterministic Turing machine which accepts A and runs in time O(n^k) for some
constant k.

Definition 5.7 Given a set A, we say that A ∈ NPSPACE iff there is a
nondeterministic Turing machine which accepts A and runs in space O(n^k)
for some constant k.
We have theorems similar to Theorems 4.4, 4.5 and 4.6.
Theorem 5.8 NL ⊂ NPSPACE.
Proof: The inclusion is obvious. It is at this point not clear that it is
strict. This will follow from results later on and we leave it for the time
being.
Theorem 5.9 NP ⊆ NPSPACE.
Proof: This follows since also nondeterministic Turing machines cannot
use more space than time.
Theorem 5.10 NL ⊆ NP.
Proof: The proof is quite close to the proof of the corresponding deter-
ministic statement but we need an extra observation. The time bound given
in Lemma 3.12 is no longer true for nondeterministic computation: even if
a nondeterministic machine is in the same configuration twice it need not
loop forever, since it can make different nondeterministic choices the second
time around. However, it is easy to see that if a nondeterministic machine
has an accepting computation then it has an accepting computation which
visits each configuration
at most once. This implies that we can impose the time-restriction given
by Lemma 3.12 without changing the set of inputs accepted. This proves
Theorem 5.10.
Let us now proceed to some examples of members in the newly defined
complexity classes.
Example 5.11 Composite numbers are in NP, since the nondeterministic
algorithm given previously is easily seen to run in time O(n^2).
It might be tempting to guess that Composite numbers are in NL since
the essential part of the algorithm is a multiplication and we know from
before that multiplication can be done in L. This is not known however, and
the reason that the given algorithm does not work is that multiplication is
in L only when the input is on a separate input-tape where we can access
any part of the input when it is needed. In the present situation we have
to write down the two factors on the work-tape and there is no room to do
this.
Example 5.12 Traveling Sales Person (TSP): Given n cities, a symmetric
integer n × n matrix (m_{ij})_{i,j=1}^{n} where m_{ij} denotes the distance between
cities i and j, and an integer K. Is there a tour which visits all cities exactly
once and is of total length ≤ K? TSP is in NP, as can be seen from the
following nondeterministic algorithm.

1. Nondeterministically write numbers b_i, i = 1, 2, . . . , n, each with at
most log n + 1 digits.

2. If 1 ≤ b_i ≤ n for all i and b_i ≠ b_j for i ≠ j then compute
Σ_{i=1}^{n−1} m_{b_i,b_{i+1}} + m_{b_n,b_1}. If this number is at most K output 1 and in all other cases
output 0.

Observe that the conditions 1 ≤ b_i ≤ n and b_i ≠ b_j for i ≠ j imply that
the b_i define a tour starting in b_1 and tracing through the b_i for increasing i and
then returning to b_1. If this tour is short enough the machine accepts the
input. It is easy to check that the algorithm runs in polynomial time and
thus we have proved that TSP ∈ NP.
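Again the deterministic checking of a guessed tour is simple; in Python
(ours; cities are 0-indexed here for convenience):

def verify_tour(m, K, b):
    # Check a guessed tour b against the distance matrix m and bound K.
    n = len(m)
    if sorted(b) != list(range(n)):          # b must be a permutation
        return False
    length = sum(m[b[i]][b[i + 1]] for i in range(n - 1)) + m[b[n - 1]][b[0]]
    return length <= K

m = [[0, 2, 9], [2, 0, 6], [9, 6, 0]]
print(verify_tour(m, 17, [0, 1, 2]))  # True: 2 + 6 + 9 = 17
print(verify_tour(m, 16, [0, 1, 2]))  # False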
Example 5.13 Boolean formula satisfiability: Given a Boolean formula,
consisting of Boolean variables x_i, i = 1, 2, . . . , n, ∧-gates (logical
conjunction), ∨-gates (logical disjunction) and negation gates, is there a setting of
the variables that satisfies the formula?

This problem is in NP by the obvious procedure: nondeterministically
write down the value of every variable and then write 1 iff the
guessed assignment satisfies the formula. To check that this procedure runs
in polynomial time one has to observe that given a formula and an
assignment of all the variables, one can check in polynomial time whether the
assignment satisfies the formula. This is easy and we leave it as an exercise.
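For a formula encoded as nested tuples (an encoding we invent here for
illustration), the check is a straightforward recursion:

def evaluate(formula, assignment):
    # formula: a variable name, ('not', f), ('and', f, g) or ('or', f, g).
    if isinstance(formula, str):
        return assignment[formula]
    op, args = formula[0], formula[1:]
    if op == 'not':
        return not evaluate(args[0], assignment)
    if op == 'and':
        return evaluate(args[0], assignment) and evaluate(args[1], assignment)
    return evaluate(args[0], assignment) or evaluate(args[1], assignment)

# (x1 ∨ ¬x2) ∧ x2 is satisfied by x1 = x2 = True.
f = ('and', ('or', 'x1', ('not', 'x2')), 'x2')
print(evaluate(f, {'x1': True, 'x2': True}))   # True
print(evaluate(f, {'x1': False, 'x2': True}))  # False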
Let us return to the problem of graph reachability (previously considered
in Section 4.2):
Example 5.14 Directed graph reachability: Given a directed graph G and
two nodes s and t of G, is there a directed path from s to t?
We present an algorithm that uses only logarithmic space and hence we
need to be slightly careful about how the input is presented. We assume
that the graph is given as a list of the edges. Now we have the following
algorithm:
Suppose the graph has n nodes.
Set H = s
For i = 1, 2, . . . , n
  If H = t print 1 and halt.
  If there is no edge out of H print 0 and halt.
  Choose nondeterministically one of the edges leaving H and set H to
  the endpoint of this edge.
Next i
Print 0.
This procedure uses only logarithmic space since all we need to remember
is the counter i and the value of H. The conditions given in the algorithm
are easily checked given the assumed encoding of G.
To verify that the algorithm is correct, first observe that by construction
H is always a node that can be reached from s. Thus, since the machine
outputs 1 only when H = t, we know that when the machine takes the value
1 then t is reachable from s. On the other hand suppose that t is reachable
from s. Then there is a path v_1, v_2, . . . , v_k where v_1 = s, v_k = t and there
is an edge from v_i to v_{i+1} for every i. We can assume that k ≤ n since if
v_i = v_j for i < j then we can eliminate v_{i+1} through v_j and still maintain a
path. Then there is a possibility that H = v_i for every i and thus there is a
possibility that the machine outputs 1.

The argument implies that the algorithm recognizes exactly the graphs
that have a path from s to t and therefore directed graph reachability is in
NL.
We will not give any example of a language in NPSPACE and in the
next section it will be clear why.
Before we continue to establish some of the more formal properties of
NP, let us be informal for a while.
The class P is intuitively thought of as the class of functions which are
computable in practice, i.e. within moderate amounts of computation we
can solve reasonably large problems. That this is the case is not clear from
the definition and one could object that although n^100 is polynomial, it grows
too quickly. In practice, however, this anomaly does not seem to appear, and
thus if a problem has a polynomial time solution then the exponent tends
to be small and the algorithm is usually efficient in practice.
In a similar way NP can be thought of as the class of problems where, if
you knew the solution, it could be verified efficiently. In an abstract sense
"the solution" must here be interpreted as the set of nondeterministic
choices that makes the machine accept. As we have seen, in practice "the
solution" is much more concrete: in our examples the nondeterministic
choices corresponded to the factors, a short tour, and a satisfying
assignment, respectively.
The recursive sets corresponded to functions that could be computed,
while the recursively enumerable sets corresponded to statements that could
be verified. The latter statement follows from the fact that if A is r.e. and
x ∈ A then this can be verified since we just wait until x is listed. On the
other hand if x ∉ A this cannot be verified since we never know if we just
haven’t waited long enough to see it listed. In view of this one can say that
recursive and r.e. stand in the same relation to each other as P and NP and thus it is not
surprising that we can prove some similar theorems.
Theorem 5.15 Given a set A, then A ∈ NP iff there is a language B ∈ P
and a constant k such that

x ∈ A ⇔ ∃y, |y| ≤ |x|^k: (x, y) ∈ B.

Proof: Let us first prove that if there is such a B then A ∈ NP. In fact,
a nondeterministic algorithm for membership in A just consists of guessing
a y of the desired length and then accepting iff (x, y) ∈ B. If B can be
recognized in time O(n^c) this procedure runs in time O(n^{(1+k)c}), which is
polynomial.
To see the converse, we will need the concept of a computation tableau.
Definition 5.16 A computation tableau is a complete description of a com-
putation of a Turing machine. It consists of all configurations of the Turing
machine on a specific input (i.e. one configuration for every time step) start-
ing with the input configuration and ending with the halting configuration.
The reason for the name is that we will think of it in the following way.
Assume that the Turing machine has only one tape. Then we can think of
its computation tableau as a two-dimensional array with time on one axis
and the tape squares on the other. The position (i, j) of this tableau thus
contains the symbol that is in the j’th square at time i. It also contains
information about whether the head is there and in such a case which state it
is in. A computation which starts with input x_1, x_2, . . . , x_n on the input-tape
and ends with only a 1 on the tape is given in Table 3.
x_1, q_0    x_2         x_3         x_4    . . .
0           x_2, q_3    x_3         x_4    . . .
0           1           x_3, q_1    x_4    . . .
 .           .           .           .
1, q_h      B           B           B      B

Table 3: A computation tableau
Now, we can return to the converse of Theorem 5.15. Suppose A is
recognized by a one-tape Turing machine M in nondeterministic time n^c.
Define B to be the set of pairs (x, y) such that y describes an n^c × n^c
computation tableau of M on input x which ends in an accepting state.
Then B satisfies the condition of the theorem with respect to A, with k = 2c.
We claim that B is in P. To see this observe that to check whether a pair
(x, y) is in B we basically have to check three things.
1. That the computation described by y starts with x on the input tape.
2. That the computation is legal for M.
3. That the computation accepts.
The first and the last conditions are easy to check since they just talk
about the contents of particular squares. Also to check 2 is straightforward
since we have to check that the only square that changed value between
two timesteps is the square where the head was located, and also that the
transition by the head was a possible transition given the next-step function
of M. This finishes the proof.
Remark 5.17 One might be tempted to think that the relation given between
NP and P in Theorem 5.15 would be true also for NL and L. As the interested
reader can convince himself, this is probably not the case as even if we restrict
B to belong to L then the set of all A definable in this way is still all of NP.
Thus we have given the theorem about NP and P corresponding to The-
orem 2.19. Of the other theorems in Section 2.7, it is not known whether
the analogue of 2.17 is true. (The general belief is that it is not.) There is a
nice reduction theory and also a notion of complete sets and we will return
to these questions in Chapter 7.
6 Relations among complexity classes
Up to this point we have defined six complexity classes (L, P, PSPACE,
NL, NP, and NPSPACE) and we have observed some relations. In this
section we will establish some more relations, some obvious and some not
obvious. Let us first observe that the option of non-determinism will never
hurt and thus any deterministic complexity class is contained in the corre-
sponding nondeterministic complexity class. This gives us three immediate
theorems.
Theorem 6.1 L ⊆ NL.
Theorem 6.2 P ⊆ NP.
Theorem 6.3 PSPACE ⊆ NPSPACE.
In the next subsection we will prove the first nontrivial complexity result.
For notational convenience let TIME(T(n)) denote the class of languages
that can be recognized in deterministic time T(n) and let NTIME(T(n)) be
the class of languages that can be recognized in the same nondeterministic
time. Similarly we define SPACE(S(n)) and NSPACE(S(n)).
6.1 Nondeterministic space vs. deterministic time
The aim is to establish the following theorem.
Theorem 6.4 Suppose S(n) > log n and that S(n) is space constructible;
then NSPACE(S(n)) ⊆ TIME(2^{O(S(n))}).
Proof: Let A be a language that can be recognized by a nondeterministic
Turing machine N which uses space at most S(n) on inputs of length n.
We have to design a deterministic Turing machine that runs in time 2^{O(S(n))} which recognizes A.
Assume for simplicity that N has only one worktape, a three letter alphabet, and Q states. Consider the set of configurations of N. Remember that a configuration consists of the state of N, the positions of all its heads and the contents of the worktape. By the argument in the proof of Lemma 3.12 there are at most |x|QS(|x|)3^{S(|x|)} possible configurations that N may visit on input x. Let G_{x,N} be the following directed graph:
The nodes of G_{x,N} are the configurations of N and there is an edge from configuration C_1 to configuration C_2 iff it is possible to go from C_1 to C_2 in one step on input x.
G_{x,N} has one node C_st which corresponds to the initial configuration and one or more configurations where N halts with output 1. We now claim that the machine takes value 1 on a given input exactly when there is a path from C_st to any of the configurations that end with output 1. This is fairly obvious and the verification is left to the reader. By the above claim G_{x,N} has at most 2^{O(S(|x|))} nodes and, using the fact that S is space constructible, we see that G_{x,N} can be constructed in 2^{O(S(|x|))} time. Now it follows from the example in Section 4.2 that in time 2^{O(S(|x|))} it is checkable whether any configuration that outputs 1 can be reached from the initial configuration. Since this is equivalent to N accepting x we have proved Theorem 6.4.
We have the following corollary.
Corollary 6.5 NL ⊆ P.
Proof: Just insert S(n) = O(log n) in Theorem 6.4.
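To make the simulation in the proof of Theorem 6.4 concrete, here is a minimal Python sketch. It assumes we are handed a function successors(config, x) enumerating the one-step successors of a configuration and a predicate is_accepting; both are hypothetical, machine-dependent stand-ins, not from the text:

    from collections import deque

    def reachable_accepting(start, x, successors, is_accepting):
        # Breadth-first search over the configuration graph G_{x,N}; the running
        # time is linear in the size of the graph, i.e. 2^{O(S(|x|))}.
        seen = {start}
        queue = deque([start])
        while queue:
            config = queue.popleft()
            if is_accepting(config):
                return True        # some accepting configuration is reachable
            for nxt in successors(config, x):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return False               # no path from the start configuration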
6.2 Nondeterministic time vs. deterministic space
This section has only one basic theorem.
Theorem 6.6 NP ⊆ PSPACE.
Proof: Remember the characterization of NP given in Theorem 5.15, i.e. given A ∈ NP there is a B ∈ P and a k such that

x ∈ A ⇔ ∃y, |y| ≤ |x|^k, (x, y) ∈ B.
This gives the following algorithm to determine whether x ∈ A:

found = 0
For y = 0, 1, . . . , 2^{|x|^k} do
    If (x, y) ∈ B then found = 1
od
Write found
The algorithm is correct since found will be 1 exactly when there is a
short y such that (x, y) ∈ B. To see that the algorithm runs in polynomial
space observe that all we need to do is to keep track of y and to do the
computation to check whether (x, y) ∈ B. Since this latter computation is
polynomial time, we can do it in polynomial space and once we have checked
a given y we can erase the computation and use the same space for the next
y.
6.3 Deterministic space vs. nondeterministic space
Nondeterministic computation seems very powerful, and it seems for the
moment that complexity theory supports this intuition at least in the case
when we are focusing on time as the main resource. If, on the other hand,
we focus on space it turns out that nondeterminism only helps marginally.
This fact is usually referred to as Savitch’s theorem and was first proved by
W.J. Savitch in 1970.
Theorem 6.7 If S(n) is space-constructible and S(n) ≥ log n, then

NSPACE(S(n)) ⊆ SPACE(O(S^2(n))).
Proof: Assume that A is accepted by the nondeterministic machine N in
space S(n). We will again work with the configurations of N and in fact if
you look closely, we solve the same graph problem as we did in the proof
of Theorem 6.4. This time however we will be concerned with saving space
and thus we will never write down the graph explicitly.
Assume for notational simplicity that N has a unique configuration where it halts with output 1. Let us call this configuration C_acc. Let C_1 and C_2 be any two configurations of N and let k be an integer. Then we will be interested in the predicate GET(C_1, C_2, k, x) which we will interpret as: “On input x it is possible to get from configuration C_1 to configuration C_2 in time ≤ 2^k and without being in a configuration which uses more than S(|x|) space.” (If we think about the graph in the proof of Theorem 6.4 this can be interpreted as “There is a path of length at most 2^k from node C_1 to node C_2”.)
Let C_st denote the start configuration of N and recall the argument in the proof of Theorem 5.10 that if a machine has an accepting computation then there is an accepting computation which visits each configuration at most once and, in particular, the running time is bounded by the number of configurations. This implies that there is a constant c such that N accepts an input x iff GET(C_st, C_acc, cS(n), x) is true. Thus all we have to do is to evaluate this predicate in small space, and to achieve this the following observation will be crucial.
GET(C_1, C_2, k, x) = ⋁_C (GET(C_1, C, k − 1, x) ∧ GET(C, C_2, k − 1, x))
The ∨ is here taken over all possible configurations C of N which use space at most S(|x|). The reason for the above relation is that if there exists a computational path from C_1 to C_2 of length at most 2^k which never uses more than S(|x|) space, then there is a midpoint on this path and the configuration at this midpoint can be used as C. Conversely, if there is a C that fulfills the right hand side of the above equation, then the two computations from C_1 to C and from C to C_2 can be concatenated to a computation from C_1 to C_2.
The above equation gives the following recursive algorithm to evaluate
the predicate GET.
GET(C_1, C_2, k, x)
If k = 0 then
    Check whether the next-step function of N allows a transition from C_1 to C_2 on input x in one step and set GET accordingly.
else
    For all configurations C which use space at most S(n):
        Evaluate GET(C_1, C, k − 1, x) and GET(C, C_2, k − 1, x).
    If for some C both are true, set GET to true and otherwise to false.
endif
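For concreteness, a minimal Python rendering of this recursion follows; one_step (the k = 0 check against the next-step function) and all_configs (the configurations using space at most S(|x|)) are hypothetical machine-dependent stand-ins, not part of the text:

    def get(c1, c2, k, x, one_step, all_configs):
        # Is c2 reachable from c1 in at most 2**k steps? Only one configuration C
        # is remembered per recursion level, giving the O(S^2(n)) space bound.
        if k == 0:
            return c1 == c2 or one_step(c1, c2, x)
        for c in all_configs(x):
            # The two subcalls are made sequentially, reusing the same space.
            if (get(c1, c, k - 1, x, one_step, all_configs) and
                    get(c, c2, k - 1, x, one_step, all_configs)):
                return True
        return False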
By the above argument x ∈ A iff GET(C_st, C_acc, cS(|x|), x) is true, and thus to prove the theorem we need only calculate the amount of space needed to evaluate GET.
We prove by induction that GET(C_1, C_2, k, x) can be evaluated in space D(k + 1)S(|x|) for some constant D. This is clearly true for k = 0 since all that needs to be done is to check if one of the constantly many possible next steps that N can do from C_1 will take it into C_2.
To do the induction step let us specify more closely how the above procedure works. We loop over all possible C, and to remember which C we are currently working on requires space dS(n) for some constant d. For each C we do two evaluations of GET with the parameter k − 1. These two evaluations are done sequentially and thus we can first do one of the evaluations, remember the result and then do the other evaluation in the same space. By the induction hypothesis this implies that the computation for a fixed C can be done in space DkS(n) + 1, and hence the whole loop runs in space dS(n) + DkS(n) + 1. Provided that D > d this is at most D(k + 1)S(n), so the induction step is complete and thus we have completed the proof of Theorem 6.7.
We have two obvious corollaries of the above theorem.
Corollary 6.8 NPSPACE = PSPACE.
This explains why NPSPACE is not a very famous complexity class. We introduced it for symmetry purposes and now that we have proved that we do not need it, we will forget it.
Corollary 6.9 NL ⊂ PSPACE.
Proof: By Theorem 6.7 everything in NL can be done in space O(log^2 n) and thus we get a strict inclusion by Theorem 3.14.
Observe that Corollary 6.9 finishes the proof of Theorem 5.8 as promised
before.
By now we have gathered some information about the relations between
the complexity classes we have defined. Let us sum up the information in a
theorem.
Theorem 6.10 L ⊆ NL ⊆ P ⊆ NP ⊆ PSPACE. The inclusion of NL in
PSPACE is strict.
It is a sad fact for complexity theory that Theorem 6.10 reflects our total
knowledge of the relation between the given complexity-classes.
7 Complete problems
Even though Theorem 6.10 gives the present state of knowledge about the
defined complexity classes, there are some important things to be said. The
common belief today is that all the given inclusions are strict, but unfortunately we have not yet developed the machinery to prove this. One step
on the way is to identify the hardest problems within each complexity class.
This serves two purposes. Firstly they will serve as candidates that can be
used to prove strict inclusions. Secondly, proving a problem complete will give a good hint that it can probably not be placed in a lower complexity class and thus is a good way to classify a problem. We will start by considering a very famous class of problems: the NP-complete problems.
7.1 NP-complete problems
To identify the hardest problems we first need to define the concept of “not harder than”. There are a couple of different ways to do this but we will only consider one.
Definition 7.1 Let A and B be two sets. Then A ≤_p B (read as “A is polynomial time reducible to B”) iff there is a polynomial time computable function f such that x ∈ A ⇔ f(x) ∈ B.
Clearly this definition is very close to Definition 2.21. The only difference is that we require the function f to be computable in polynomial time.
We can now proceed to develop a reduction theory similar to the one described at the end of Section 2.7. Instead of talking about recursive and recursively enumerable sets we will talk about P and NP. Many proofs and theorems are similar.
Theorem 7.2 If A ≤_p B and B ∈ P, then A ∈ P.

Proof: Suppose the function f in the definition of ≤_p can be computed in time O(n^c) and that B can be recognized in time O(n^k). Then to check whether a given input x belongs to A just compute f(x) and then check whether f(x) ∈ B. Computing f(x) takes time O(|x|^c), and from this it also follows that |f(x)| ≤ O(|x|^c), which in its turn implies that f(x) ∈ B can be checked in time O(|x|^{ck}). Thus the procedure works in polynomial time and we can conclude that A ∈ P.
The definition of NP-complete is now very natural having seen the definition of r.e.-complete before.
Definition 7.3 A set A is NP-complete iff
1. A ∈ NP.
2. If B ∈ NP then B ≤_p A.
By dropping the first condition we get another known concept.
Definition 7.4 A set A is NP-hard iff for all B ∈ NP, B ≤_p A.
Before we continue to prove some problems to be NP-complete let us
prove a simple theorem.
Theorem 7.5 If A is NP-complete then

P = NP ⇔ A ∈ P.
Proof: Clearly if NP = P then A ∈ P since A by the definition of NP-completeness belongs to NP.
To see the converse assume that A ∈ P and take any B ∈ NP. Then by property 2 of being NP-complete, B ≤_p A and hence by Theorem 7.2 B ∈ P. But since B was an arbitrary language in NP we can conclude that NP = P.
With this motivation we are ready to study our first NP-complete problem. Let SAT be the set of satisfiable Boolean formulas (as introduced in the example in section 5.1).
Theorem 7.6 (Cook, 1971) SAT is NP-complete.
Proof: We have already established that SAT ∈ NP (see the example in section 5.1) and thus we need to establish that B ∈ NP implies that B ≤_p SAT.
Assume that B is recognized by a non-deterministic Turing machine N which has one tape, Q states, runs in time n^c and uses the alphabet {0, 1, B}. Remember that the computation tableau is a complete description of a computation. We will now construct a Boolean formula such that if it is satisfiable then its satisfying assignment will describe a computation tableau of an accepting computation of N on input x.
The formula has two types of variables:

y_{ijk}, 1 ≤ i, j ≤ n^c, k ∈ {0, 1, B} and
z_{ijl}, 1 ≤ i, j ≤ n^c, 1 ≤ l ≤ Q.

The intuitive meaning of the variables will be that y_{ijk} = 1 iff the symbol k appears in square j at time i and will take the value 0 otherwise, while z_{ijl} = 1 iff the head is in square j at time i and the machine at this time is in state q_l. Let us denote the length of x by n.
Clearly the y and z variables code a computation completely and thus
all that needs to be done is to make a Boolean formula which is true iff the
y and z variables code an accepting computation of N on input x. There
are three conditions to take care of.
1. The computation starts with x.
2. It is a valid computation.
3. The computation accepts.
Of these three conditions, 1 and 3 are very easy to handle. Condition 1 is equivalent to the following conditions:

• For 1 ≤ j ≤ n we have y_{1jk} = 1 iff k = x_j.
• For n + 1 ≤ j ≤ n^c we have y_{1jk} = 1 iff k = B.
• z_{1jl} = 0 except when j = l = 1 (assuming that q_1 is the start-state).
Condition 3 is equivalent to y_{n^c,1,1} = 1 and z_{n^c,1,l} = 1, i.e. at time n^c we have written a 1 in square 1, the head is located in square 1 and we have halted (assuming that q_l is the halting state).
To see how to translate condition 2 into a formula we will need some more information.

Definition 7.7 A computational tableau C is locally correct if for every i and j there is some correct computation which has the same contents as C in squares (i′, j′) for i ≤ i′ ≤ i + 1 and j ≤ j′ ≤ j + 2.
That computation is a local phenomenon is now formalized as follows:
Lemma 7.8 A computational tableau describes a legal computation iff it is
locally correct.
We leave the easy verification to the reader.
Armed with this lemma we can now express condition 2 in a suitable way. To determine whether the variables y_{ijk} and z_{ijl} describe a legal computation we only have to check all the local correctness conditions. Whether a given local area is correct is described as a condition on 6Q + 18 variables, and since any condition on K variables can be expressed as a formula of size 2^K we can express each local correctness condition in constant size. The conjunction of all these correctness formulas now takes care of condition 2. The size of the formula is O(n^{2c}).
We now claim that the conjunction of the formulas taking care of the conditions 1-3 is satisfiable iff x ∈ B. This is fairly obvious since there is a satisfying assignment iff there is an accepting computation of N on input x which uses at most space n^c and time n^c, which by the definition of N is equivalent to x ∈ B. To conclude the proof of the theorem we need just observe that constructing the formula clearly takes polynomial time.
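As a small illustration of how mechanical the construction is, here is a hypothetical Python sketch that emits the clauses for condition 1 as unit clauses in the usual integer CNF encoding; the variable-numbering helpers and parameters are illustrative only, and conditions 2 and 3 would be emitted along the same lines:

    SYMBOLS = ['0', '1', 'B']          # the tape alphabet of N

    def y_var(i, j, k, T):
        # Variable number of y_{ijk}; i, j range over 1..T, k over symbol indices.
        return ((i - 1) * T + (j - 1)) * len(SYMBOLS) + k + 1

    def z_var(i, j, l, T, Q):
        # Variable number of z_{ijl}, placed after all the y variables.
        return T * T * len(SYMBOLS) + ((i - 1) * T + (j - 1)) * Q + l

    def start_clauses(x, T, Q):
        # Unit clauses forcing row 1 of the tableau to contain x followed by
        # blanks, with the head on square 1 in the start state q_1.
        clauses = []
        for j in range(1, T + 1):
            wanted = x[j - 1] if j <= len(x) else 'B'
            for k, sym in enumerate(SYMBOLS):
                lit = y_var(1, j, k, T)
                clauses.append([lit if sym == wanted else -lit])
        for j in range(1, T + 1):
            for l in range(1, Q + 1):
                lit = z_var(1, j, l, T, Q)
                clauses.append([lit if (j == 1 and l == 1) else -lit])
        return clauses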
Let us make a couple of observations about the above proof. Firstly, the final formula is the conjunction of a number of subformulas where each subformula is of constant size. Without increasing the size of the entire formula by more than a constant we can write each of the subformulas in conjunctive normal form (i.e. as a conjunction of disjunctions). This puts the entire formula in conjunctive normal form. This implies that satisfiability of formulas in conjunctive normal form is NP-complete. Let us call this problem CNF-SAT; we have the following theorem.
Theorem 7.9 CNF-SAT is NP-complete.
The second observation is that the given proof is almost identical to
the proof of Theorem 5.15. If one thinks about this, Theorem 5.15 can
be used to give another NP-complete problem, namely the existence of a
computational tableau with certain conditions. However, we do not feel
that this is a natural problem and hence we will not make that argument.
There are also striking similarities with the proof of Theorem 2.26. It is just
a question of coding a computation in a suitable way.
Having obtained one NP-complete problem it turns out to be easy to
construct more NP-complete problems. The main tool for this is given
below.
Theorem 7.10 If A is NP-complete and B satisfies B ∈ NP and A ≤_p B, then B is NP-complete.
Proof: We only have to check that for any C in NP it is true that C ≤_p B. Since A is NP-complete we know that C ≤_p A and hence there is a polynomial-time computable function f such that

x ∈ C ⇔ f(x) ∈ A.

By the hypothesis of the theorem there is a polynomial time computable g such that

y ∈ A ⇔ g(y) ∈ B.

Now it clearly follows that

x ∈ C ⇔ g(f(x)) ∈ B

and since the composition of two polynomial-time computable functions is polynomial-time computable we have proved C ≤_p B and thus the proof of the theorem is complete.
To put the proof in other words: polynomial-time reductions are transitive, i.e. if we can reduce C to A and A to B then we reduce C to B by composing the reductions.
Clearly Theorem 7.10 is much more useful for proving problems NP-complete than the original definition. The reason is that to use Theorem 7.10 we only have to make one reduction, while to use the definition we have to make a reduction from every problem in NP.
Let 3-SAT be the problem of checking whether a restricted Boolean formula given in conjunctive normal form is satisfiable. The restriction is that there are exactly 3 literals (i.e. variables or negated variables) in each disjunction. Such a formula is called a 3-CNF formula and an example is:

(x_1 ∨ x̄_2 ∨ x_3) ∧ (x̄_1 ∨ x_2 ∨ x_4) ∧ (x̄_2 ∨ x_3 ∨ x_4)

This formula is satisfiable as can be seen from the assignment x_1 = 1, x_2 = 1, x_3 = 1 and x_4 = 0. We have
Theorem 7.11 3-SAT is NP-complete.
Proof: We will use Theorem 7.10 and since 3-SAT is clearly in NP all
that we need to do is to find a polynomial-time reduction from CNF-SAT
to 3-SAT.
Thus, given a CNF-SAT formula φ, we need to construct in polynomial time a 3-SAT formula f(φ) such that φ is satisfiable iff f(φ) is satisfiable.
Suppose φ = ⋀_{i=1}^{m} C_i where the C_i are disjunctions containing an arbitrary number of literals. We will call C_i a clause and let |C_i| denote the number of literals in C_i. We will replace each clause by one or more clauses each containing exactly 3 literals. We have the following cases.
1. |C_i| = 1.
2. |C_i| = 2.
3. |C_i| = 3.
4. |C_i| > 3.
Let us take care of the cases one by one. Let x_i, i = 1, 2, . . . , n, be the variables that appear in φ and let y_{ij} denote new variables.
(1.) Suppose C_i = x_j; then we replace it by

(x_j ∨ y_{i1} ∨ y_{i2}) ∧ (x_j ∨ ȳ_{i1} ∨ y_{i2}) ∧ (x_j ∨ y_{i1} ∨ ȳ_{i2}) ∧ (x_j ∨ ȳ_{i1} ∨ ȳ_{i2}).
(2.) Suppose C_i = (x_j ∨ x_k); then we replace it by

(x_j ∨ x_k ∨ y_{i1}) ∧ (x_j ∨ x_k ∨ ȳ_{i1})
(3.) We keep C_i as it is.
(4.) Suppose C_i = ⋁_{j=1}^{k} u_j for some literals u_j; we then replace C_i by

(u_1 ∨ u_2 ∨ y_{i1}) ∧ (⋀_{j=1}^{k−4} (ȳ_{ij} ∨ u_{j+2} ∨ y_{i(j+1)})) ∧ (ȳ_{i(k−3)} ∨ u_{k−1} ∨ u_k)
The formula we obtain by these substitutions is clearly a 3-CNF formula and
it is also obvious that given the original formula it can be constructed in
polynomial time. Thus all we need to check is that φ is satisfiable precisely
when f(φ) is satisfiable.
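The substitution is easy to mechanize. Here is a minimal Python sketch, with literals modelled as nonzero integers (−v standing for x̄_v) and the fresh y variables drawn from a counter; the representation is an illustrative choice, not from the text:

    def clause_to_3cnf(clause, next_var):
        # clause: list of literals as nonzero ints (negative = negated variable).
        # next_var: first unused variable number, used for the fresh y variables.
        # Returns (replacement clauses, updated next_var), following cases 1-4.
        k = len(clause)
        if k == 1:
            u = clause[0]
            y1, y2 = next_var, next_var + 1
            return ([[u, y1, y2], [u, -y1, y2], [u, y1, -y2], [u, -y1, -y2]],
                    next_var + 2)
        if k == 2:
            u, v = clause
            y1 = next_var
            return ([[u, v, y1], [u, v, -y1]], next_var + 1)
        if k == 3:
            return ([list(clause)], next_var)
        # Case 4: chain the k literals together using k - 3 fresh variables.
        ys = list(range(next_var, next_var + k - 3))
        out = [[clause[0], clause[1], ys[0]]]
        for j in range(k - 4):
            out.append([-ys[j], clause[j + 2], ys[j + 1]])
        out.append([-ys[-1], clause[-2], clause[-1]])
        return (out, next_var + k - 3)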
First assume that φ is satisfiable. We now must find a satisfying assignment for f(φ). We will give the same values to the x_i and must find values for the y_{ij} to satisfy the formula. The clauses constructed according to rules 1-3 are already satisfied and thus will cause no problem. Look at the clauses constructed under rule 4. Since the corresponding clause C_i in φ is satisfied, one of the u_j is true; suppose this is u_{j_0}. Now set y_{ij} = 1 for j ≤ j_0 − 2 and y_{ij} = 0 for j > j_0 − 2; then it is easy to verify that this assignment satisfies f(φ).
To prove the converse, suppose that f(φ) is satisfiable and let x_i = α_i be the assignment to the x variables in this satisfying assignment. We claim that this part of the assignment will satisfy φ. For clauses that fall under the rules 1-3 this is not too hard to see. Let us consider case 1. If C_i = x_j and α_j = 0 then, no matter what the values of y_{i1} and y_{i2} are, at least one of the clauses is not satisfied.
Now consider case 4. If C_i was not satisfied then all the literals u_j would be false, but this implies that

y_{i1} ∧ (⋀_{j=1}^{k−4} (ȳ_{ij} ∨ y_{i(j+1)})) ∧ ȳ_{i(k−3)}

would be satisfied, but this is clearly not possible. Thus the reduction is correct and the proof is complete.
Proving problems NP-complete is not the main purpose of these notes but let us at least give one more NP-completeness proof. Let 3-dimensional matching (3DM) be the following problem:

Given a set of triplets (x_i, y_i, z_i), i = 1, 2, . . . , m, where x_i ∈ X, y_i ∈ Y and z_i ∈ Z, and where X, Y and Z are sets of cardinality q. Is there a subset S of q of the triplets such that each element of X, Y and Z appears in exactly one of the triplets in S?
Theorem 7.12 3DM is NP-complete.
Proof: 3DM is clearly in NP since a nondeterministic machine can just
nondeterministically pick q of the triplets and then check if each element
appears exactly once. To prove 3DM NP-complete we will reduce 3-SAT to
it. Thus given a 3-CNF formula φ we must construct an instance f(φ) of
3DM such that φ is satisfiable iff f(φ) contains a matching.
Suppose φ has n variables and m clauses. We will construct an instance
of 3DM with three types of triplets, “variable triplets”, “clause triplets”
Figure 8: The variable triplets
and “garbage collecting triplets”. The elements of the sets X, Y and Z will be defined as we go along. Let us start by defining the variable triplets. Suppose variable x_i appears (with or without negation) in m_i clauses; then we will associate with it the following 2m_i triplets.

T^t_i = {(ū_i[j], a_i[j], b_i[j]) : 1 ≤ j ≤ m_i}
T^f_i = {(u_i[j], a_i[j + 1], b_i[j]) : 1 ≤ j < m_i} ∪ {(u_i[m_i], a_i[1], b_i[m_i])}
The elements a_i[j] and b_i[j] will not appear in any other triplets. As can be seen from Figure 8 this implies that any matching M must contain either all triplets from T^f_i or all triplets from T^t_i, for any i. We will let the choice of which of the two sets to pick correspond to whether the variable x_i is true or false.
Each clause C_i will have two special values and three triplets. Suppose C_i = u_{i_1} ∨ u_{i_2} ∨ u_{i_3} and it is the j_k'th time the variable corresponding to the literal u_{i_k} appears. Then we include the triplets

(u_{i_k}[j_k], s[i], t[i]), k = 1, 2, 3.

Observe that the u_{i_k} should here be interpreted as literals and thus correspond to either u_l or ū_l, i.e. these are the same elements as in the variable triplets. The elements s[i] and t[i] will not appear in any other triplets and this implies that in any matching precisely one of the triplets corresponding to each clause will be included. Observe that we can include a triplet precisely when one of the corresponding literals is true.
We have done the essential part of the construction and all that remains is to specify the garbage collecting triplets which will match up the x_i[j] and x̄_i[j] that have not been used. This is done by the following triplets:

(x_i[j], g_1[k], g_2[k]), 1 ≤ i ≤ n, 1 ≤ j ≤ m_i, 1 ≤ k ≤ 2m
(x̄_i[j], g_1[k], g_2[k]), 1 ≤ i ≤ n, 1 ≤ j ≤ m_i, 1 ≤ k ≤ 2m

These enable us to cover any 2m literal-elements which have not been matched by previous triplets.
It is clear from the above description that the set of triplets can contain
a matching only if the formula is satisfiable. Suppose on the other hand
that the formula is satisfiable. Then make the choice of which T sets to pick
based on the satisfying assignment. Then for each clause pick a variable
that satisfies it and the corresponding clause triplet. This will cover m of
the 3m literal-elements. The last 2m elements can be covered together with
the g elements by the garbage collecting triplets.
Thus there is a matching iff there is a satisfying assignment, and since the reduction is straightforward the only thing needed to check that it is polynomial time is to check that we do not have to construct too many triplets. However, it is easy to check that there are 6m + 3m + 6m^2 triplets. This concludes the proof.
There are hundreds of known NP-complete problems and many appear
in the listing in the final part of the excellent book by Garey and Johnson.
It turns out that most problems in NP that are not known to be in P are NP-complete. One notable exception is factoring; another one is graph isomorphism. Let us however move on and consider problems complete for other classes.
7.2 PSPACE-complete problems
The theory of PSPACE-complete problems is very similar to that of NP-complete problems. The concept of reduction is the same and the basic properties are the same. Of course the problems are different.
Definition 7.13 A set A is PSPACE-complete iff
1. A ∈ PSPACE.
2. If B ∈ PSPACE then B ≤_p A.
We have an immediate equivalent of Theorem 7.5.
Theorem 7.14 If A is PSPACE-complete then
P = PSPACE ⇔A ∈ P.
Proof: If you substitute PSPACE for NP in the proof of Theorem 7.5 you
get a proof of Theorem 7.14.
By a similar argument we get:
Theorem 7.15 If A is PSPACE-complete then
NP = PSPACE ⇔A ∈ NP.
One last definition for completeness before we get down to business.

Definition 7.16 A set A is PSPACE-hard if for any B ∈ PSPACE, B ≤_p A.
Now let us encounter our first PSPACE-complete problem. When dealing with NP-complete problems we came across the satisfiability of Boolean formulas. Now we will consider quantified Boolean formulas, which look like

∀x_1 ∃x_2 . . . Qx_n φ(x)

where each x_i can take the value 0 or 1, φ is a normal quantifier free formula and Q is either ∃ or ∀ depending on whether n is even or odd. Let TQBF be the set of True Quantified Boolean Formulas. We have:
Theorem 7.17 TQBF is PSPACE-complete.
Proof: Let us first check that TQBF can be recognized in polynomial space. We claim that if the formula has n variables and the size of the description of φ is bounded by S, then to check whether

∀x_1 ∃x_2 . . . Qx_n φ(x)

is true can be done in space O((n + 1)S). We prove this by induction and first observe that it is certainly true for n = 0. For the induction step we use the observation that the given formula is true iff both

∃x_2 . . . Qx_n φ(x)|_{x_1=0}

and

∃x_2 . . . Qx_n φ(x)|_{x_1=1}

are true. These two formulas can be evaluated by induction in space O(nS), and since we can evaluate one and then evaluate the other in the same space, while only remembering the value of the first evaluation and which formula to evaluate, the claim follows. Of course if the first quantifier is ∃ we just need to check that one of the values is true. From this the claim follows and thus TQBF ∈ PSPACE.
Remark 7.18 By being more careful it is not too hard to see that the evaluation actually can be done in space O(n + S).
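A minimal Python sketch of this recursive evaluation, where the quantifier-free part is assumed to be given as a function from assignments to truth values (an illustrative interface, not from the text):

    def eval_qbf(quantifiers, phi, assignment=()):
        # quantifiers: string over 'A' (forall) and 'E' (exists), outermost first.
        # phi: function taking a tuple of n bits and returning True or False.
        # Only one partial assignment is kept, so the space used is proportional
        # to the recursion depth, as in the proof.
        if not quantifiers:
            return phi(assignment)
        q, rest = quantifiers[0], quantifiers[1:]
        branches = (eval_qbf(rest, phi, assignment + (b,)) for b in (0, 1))
        return all(branches) if q == 'A' else any(branches)

    # Example: forall x1 exists x2 (x1 != x2) is true.
    print(eval_qbf('AE', lambda x: x[0] != x[1]))   # prints True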
Next we need to take care of the slightly more difficult part of proving that if B ∈ PSPACE then B ≤_p TQBF. Suppose that B is recognized by a Turing machine M_B which never uses more space than |x|^c on input x, for a given constant c.
We will again use the predicate GET(C_1, C_2, k, x) which means that on input x, M_B will get from configuration C_1 to configuration C_2 in at most 2^k steps and never use more space than |x|^c. As before we have

GET(C_1, C_2, k, x) = ⋁_C (GET(C_1, C, k − 1, x) ∧ GET(C, C_2, k − 1, x)).

With the present formalism it is more convenient to think of the ∨ as an existential quantifier and we get

GET(C_1, C_2, k, x) = ∃C (GET(C_1, C, k − 1, x) ∧ GET(C, C_2, k − 1, x)).
Now we could write the two GETs to the right in the same way, but this would mean trouble since we would then get a formula of exponential size. However, there is a way around this by replacing the ∧ by a universal quantifier, obtaining

GET(C_1, C_2, k, x) = ∃C ∀(A, B) ∈ {(C_1, C), (C, C_2)} GET(A, B, k − 1, x).
Now we only get one copy of GET to expand further and if we continue
recursively we get 2k quantifiers and a final formula GET(X, Y, 0, x). All
that remains to do is to check that it is sufficient to quantify over Boolean
variables, rather than the more complicated objects we are currently quanti-
fying over, and that the final application of GET can be written as a Boolean
formula.
Both these points are easy and let us just give a rough outline. It is
straightforward to encode a configuration as a set of Boolean variables. The
∀ quantification is just a binary choice and thus can be represented by a
Boolean variable which will take the value 0 if we make the first choice and
1 if we make the other. Finally, to check whether we can get from one
configuration to another in one step is just a simple formula where we list
all possible transitions of the Turing machine. We leave the details to the
interested reader.
Now since x ∈ B iff GET(C_st, C_acc, d|x|^c, x) is true for the appropriate constant d, and since we know how to write the latter condition as a quantified Boolean formula, we have completed the reduction.
In fact if one writes down the final formula carefully one can write it in
CNF, i.e. if we restrict the formula φ in TQBF to be a CNF-formula we still
obtain a PSPACE-complete problem. We call this problem TQBF-CNF.
Theorem 7.19 TQBF-CNF is PSPACE-complete.
To get other PSPACE-complete problems we first state an obvious the-
orem.
Theorem 7.20 If A is PSPACE-complete and B satisfies B ∈ PSPACE and A ≤_p B, then B is PSPACE-complete.
PSPACE-complete problems are not as abundant as NP-complete problems and do not come up in as varied contexts. The main source of PSPACE-complete problems outside logic is games. It is only a slight exaggeration to say that determining the winner in most games is PSPACE-complete.
The reason that games are this hard is that already quantified Boolean formulas can be viewed as a game between two players, “Exists” and “Forall”, in the following way. Given a formula, “Exists” chooses the values of all variables which correspond to existential quantifiers and “Forall” chooses the values of all variables which correspond to universal quantifiers. “Exists” wins the game iff the final total assignment satisfies the formula. It is not hard to see that the formula is true iff “Exists” wins the game when both players play optimally.
Of course the PSPACE-completeness cannot apply to any usual game like
chess, since chess is of a given constant size and hence not very interesting
from our point of view. But games that can be generalized to arbitrary size
are often PSPACE-complete (or hard). Thus for instance to determine who
is the winner in a given position of generalized checkers or generalized go
is PSPACE-hard. We will not get into those games but instead consider a
more childish game.
“Geography” is a two-person game where one person starts by giving the
name of a geographical place and then the two people alternatingly name
geographic places subject to the two conditions that no place is named twice
and that each name starts with the same letter that the previous name ended with. The first person not being able to name a place satisfying these two conditions loses.
To get a computational problem out of this game let us generalize.
“Generalized Geography” (GG) is a graph game where two people alternatingly choose nodes in a directed graph. Each node must be a successor of
the previous node and no node can be chosen twice. The first person having
no choice loses the game. Initially the game starts with a given node.
The computational problem is now: Given a graph, which of the two
players has a winning strategy?
Let us first observe that this is clearly a generalization of the geography game where the nodes correspond to places and there is an edge from A to B if A ends with the same letter that B starts with. (On the other hand it is a slightly cheating generalization since the skill in the normal game is to know as many geographic names as possible.)
Theorem 7.21 Generalized geography is PSPACE-complete.
Proof: It is not hard to verify by normal procedures that GG is in PSPACE
and thus by Theorem 7.20 we need only to prove that TQBF-CNF can be
reduced to GG. We will call the players in the game ∃ and ∀. Given the
Figure 9: Generalized geography graph

formula

∃x_1 ∀x_2 ∃x_3 [(x_1 ∨ x̄_2 ∨ x_3) ∧ · · · ∧ ( )],

whose clauses we label c_1, . . . , c_l,
we construct a graph given in Figure 9. There is a diamond for each variable
of the formula, with the last diamond pointing to nodes representing all the
clauses of the formula and each clause node pointing to nodes representing
the literals in the clause. Finally these nodes are hooked back to the top
or the bottom of the diamond for the corresponding variable according to
whether the literal is positive or negative. The game starts at the node named s and the ∃ and ∀ labels in the diagram show whose turn it is to move at each stage.
We can think of ∃’s and ∀’s choices of how to move through the diamonds
as setting the variables (true if the high road is taken and false if the low
road is taken). Then ∀ gets to pick any clause that he claims to be false, and
∃ must pick a literal in that clause which he will claim is true. If ∃’s claim is
valid, ∀ will not be able to move without reusing a node, while if the claim is
not true, ∀ will be able to move and then ∃ will be stuck. Thus we see that
∃ has a winning strategy iff the formula is true. Since the reduction clearly
is polynomial time we have proved that GG is PSPACE-complete.
7.3 P-complete problems
The question P = NP? is of real practical importance since it is a question
whether many natural problems can be solved efficiently. The question
whether P is equal to L is not of the same practical importance (although it has a nice connection with parallel computation that we have not yet seen) but from a theoretical point of view it is of course of major importance.
Up to this point we have allowed polynomial time for free when we have
compared problems. This is clearly not possible when we are considering
the question P = L? and thus we need a finer reduction concept. The
modification is very slight. We just require the reduction-function to be
computable in logarithmic space.
Definition 7.22 Let A and B be two sets. Then A ≤_L B (read as “A is logarithmic space reducible to B”) iff there is a function f, computable in logarithmic space, such that x ∈ A ⇔ f(x) ∈ B.
Using this we can now define P-completeness.
Definition 7.23 A set A is P-complete iff
1. A ∈ P.
2. If B ∈ P then B ≤_L A.
We get the usual theorem.
Theorem 7.24 If A is P-complete then
P = L ⇔A ∈ L.
The proof is identical to the other proofs. One small lemma is needed,
namely that the composition of two functions in L is in L. We leave this as
an exercise. We are now ready to encounter our first P-complete problem.
Define a Boolean circuit to be a directed acyclic graph where each node is labeled by either ∧, ∨ or ¬, and the number of incoming edges is at least two in the first two cases and one in the last. The graph contains sources which are labelled by input variables x_i and one sink which is called the output node. Given values of the inputs to the circuit one can evaluate the circuit in the natural way. An example is given in Figure 10; in this circuit all edges are directed upwards. Let CVAL be the following problem: given a circuit and values of the inputs of the circuit, what is the output of the circuit? We have:
Theorem 7.25 CVAL is P-complete.
Figure 10: A circuit (inputs x_1, x_2, x_3 feeding four gates)
Proof: First observe that CVAL belongs to P since it is straightforward
to evaluate a circuit once the inputs are given.
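For instance, a minimal Python evaluator, with the circuit represented as a dictionary from node names to gates (a hypothetical encoding chosen for this sketch):

    def eval_circuit(circuit, output, values):
        # circuit: dict mapping node -> ('AND'|'OR', [children]), ('NOT', [child])
        #          or ('IN', variable name) for a source.
        # values:  dict mapping variable name -> 0 or 1.
        cache = {}
        def val(node):
            if node not in cache:
                kind, arg = circuit[node]
                if kind == 'IN':
                    cache[node] = values[arg]
                elif kind == 'NOT':
                    cache[node] = 1 - val(arg[0])
                elif kind == 'AND':
                    cache[node] = int(all(val(c) for c in arg))
                else:  # 'OR'
                    cache[node] = int(any(val(c) for c in arg))
            return cache[node]
        return val(output)

    # (x1 AND x2) OR (NOT x3) evaluated at x1 = 1, x2 = 0, x3 = 0 gives 1.
    C = {'a': ('IN', 'x1'), 'b': ('IN', 'x2'), 'c': ('IN', 'x3'),
         'g1': ('AND', ['a', 'b']), 'g2': ('NOT', ['c']),
         'out': ('OR', ['g1', 'g2'])}
    print(eval_circuit(C, 'out', {'x1': 1, 'x2': 0, 'x3': 0}))   # prints 1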
Now take any B ∈ P. We need to reduce B to CVAL. Assume that B is recognized by a Turing machine M_B that runs in time at most n^c for inputs of length n. We will again use the concept of a computation tableau. Since we are considering deterministic computation there is a unique computation tableau given the input. The content of each square of the tableau is easily coded by a constant number of Boolean values. We construct a circuit which successively computes these descriptions. The output of the circuit will correspond to the output of the machine, i.e. be the content of the first square at the final timestep.
The content of a given square of the tableau only depends on the contents of the square itself and its two neighboring squares at the previous time step. This means that we can build a constant piece of circuitry that computes the Boolean variables corresponding to the square (i, j) in the computation tableau from the variables corresponding to (i − 1, j − 1), (i − 1, j), and (i − 1, j + 1). Thus to construct a circuit that given the correct input simulates the computation tableau of M_B we just have to copy this piece of circuitry everywhere. To print the description of this circuit on the output tape all we need to remember is the identities of the nodes of the circuit. This can be done in O(log n) space. Thus in logarithmic space we can construct a circuit and an input to this circuit such that the circuit outputs 1 iff M_B outputs 1 on input x. Thus we have a correct reduction and the proof is complete.
Several other P-complete problems can be constructed by making logarithmic space reductions from CVAL. We will however not present any more P-complete problems in this section.
7.4 NL-complete problems
The final question we will consider is the NL = L? question. Again we have
complete problems under L-reductions.
Definition 7.26 A set A is NL-complete iff
1. A ∈ NL.
2. If B ∈ NL then B ≤_L A.
As before we get:
Theorem 7.27 If A is NL-complete then
NL = L ⇔A ∈ L.
We have already encountered the standard NL-complete problem, namely graph-reachability (GR): given a directed graph G and two nodes s and t of G, is it possible to find a directed path from s to t?
Theorem 7.28 Graph-reachability is NL-complete.
Proof: We have more or less already proved the theorem. The fact that
GR ∈ NL was established in Section 5.1.
That the problem is NL-complete was implicitly used in the proof of
Theorem 6.4. Let us recall this proof. We started with an arbitrary nondeterministic machine M and an input x to M. We then constructed a graph
(of configurations of M) with two special nodes s and t (corresponding to
the start configuration and the accepting configuration, respectively) where
x was accepted by M iff we could reach t from s. We then observed that
graph-reachability could be done in polynomial time and hence NL ⊆ P.
The first part of this proof is clearly the desired reduction. All we need to do is to prove that the reduction can be done in logarithmic space. This is not hard and we leave it to the reader.
8 Constructing more complexity-classes
Let us just briefly mention some more complexity-classes which are closely related to the given classes. We have pointed out before that P is symmetric with respect to complementation, i.e. if a set A belongs to P then so does its complement Ā. We have also pointed out that this is not known to be true for NP. Thus it is natural to talk about the set of languages whose complement belongs to NP.
Definition 8.1 A set A belongs to co-NP iff its complement Ā belongs to NP.
It is in general believed that co-NP is not equal to NP. In general for
any complexity-class C that is not closed under taking complements, we can
define a corresponding complexity-class co-C. The only other such class we
have encountered is NL.
Definition 8.2 A set A belongs to co-NL iff its complement Ā belongs to NL.
It was generally believed that co-NL is not equal to NL. Thus it came as a surprise when the following theorem was proved independently by Immerman and Szelepcsényi in 1988.
Theorem 8.3 Suppose S(n) is space constructible, S(n) ≥ log n, and A can be recognized in nondeterministic space S(n). Then the complement of A can be recognized in nondeterministic space O(S(n)).
We get the following immediate corollary:
Corollary 8.4 NL=co-NL.
Remark 8.5 Although this theorem was a surprise, one already knew that nondeterminism was not that helpful with regard to space. In particular, by Savitch's theorem (Theorem 6.7) we know that whatever can be done in nondeterministic space S(n) can be done in deterministic space O(S^2(n)). On the other hand, the smallest deterministic time-class that is known to include all things that can be done in nondeterministic time T(n) is essentially 2^{T(n)}. Thus in spite of the given collapse it is still believed that NP ≠ co-NP.
Proof: For notational convenience we will only prove the corollary. The general case will follow from just substituting S(n) for log n. We will prove that co-NL ⊆ NL. By symmetry this will imply the equality of the two classes.
Since graph-reachability is complete for NL, its complement is complete for co-NL. To prove that co-NL ⊆ NL we need only prove that graph-non-reachability is in NL. In particular we need only describe a nondeterministic algorithm which works in logarithmic space and, given a graph G and two vertices s and t, accepts if there is no path from s to t. The idea behind the algorithm is to compute the number of vertices reachable from s. Once we know this number we can verify that t is not reachable by just guessing (and checking) all reachable vertices. Since we cannot guess them all individually, we need to guess them in increasing order. This way we need only remember the number of vertices seen so far and the last one seen.
The number of reachable vertices is computed iteratively. In stage k we compute the number of vertices which are reachable with at most k edges. This is done by at each stage nondeterministically generating all vertices that can be reached in k − 1 steps. Since we know their number, we know when we have generated all of them, and thus we can without error decide whether a given vertex is reachable in k steps. The complete algorithm now works as follows:
N_k = 1
for k = 1 to n do
    newN_k = 0
    for l = 1 to n do
        check = 0
        for m = 1 to n do
            Nondeterministically try to generate a path from s to v_m of length at most k − 1.
            If this is successful then
                check = check + 1
                If v_m is connected to v_l (or equal to v_l) then
                    set newN_k = newN_k + 1
                    goto next l
                endif
            endif
        next m
        if check ≠ N_k reject and stop
    next l
    N_k = newN_k
next k
check = 0
for m = 1 to n do
    Nondeterministically try to generate a path from s to v_m of length at most n − 1.
    If this is successful then
        check = check + 1
        If v_m is t reject and stop
    endif
next m
if check = N_k accept otherwise reject
We need to prove that the algorithm is correct and that it only uses logarithmic space. Let us start with the latter part. The variables used by the program are k, l, m, N_k, newN_k and check. It is easy to see that each of them is a nonnegative integer which is at most n and thus we can store these values in space O(log n). On top of this we need to nondeterministically guess a path of at most a certain length at certain parts of the program. This can be done in logarithmic space by the example in section 5.1 augmented with a simple counter.
Now let us consider correctness. We claim that, unless the algorithm has already halted and rejected, the counter N_k will at stage k give the number of vertices reachable by a path of length at most k from s. We prove this by induction; the base case k = 0 is trivial since only s can be reached with 0 edges and N_k is initially 1. For the induction step observe that since the algorithm does not halt, by the induction hypothesis, for each l the algorithm generates all v_m which can be reached in at most k − 1 steps. Thus it is easy to see that the algorithm decides correctly whether v_l is reachable in at most k steps, and thus the new value of N_k will be correct and the induction step is complete.
Finally, for the final loop observe that if in the end check = N_k then we have generated all vertices that are reachable from s with at most n − 1 steps (and hence reachable at all), and if t was not one of them we accept correctly. The argument is complete and we have proved Corollary 8.4.
9 Probabilistic computation
From a practical point of view it is sufficient if an algorithm is fast most of the time. One could relax the conditions even further and just ask that the algorithm is correct most of the time.
A key point when reasoning about such algorithms is to make precise what is meant by “most of the time”, i.e. we need to introduce some probabilistic assumptions. There are two basic ways to do this:
1. To consider a random input, i.e. to take a probability distribution over
the inputs and ask that the algorithm performs well for most inputs.
2. To allow the algorithm to make random choices, and require that the
algorithm is fast (correct) for every input.
Of course one could also combine the two ways of introducing randomness.
Both approaches give many interesting results, but here we will only
study the second approach.
Definition 9.1 A probabilistic Turing machine is a normal deterministic
Turing machine equipped with a special coinflipping state. When the machine
enters this state it receives a bit which is 0 with probability 1/2 and 1 with
probability 1/2.
As with nondeterministic Turing machines, a probabilistic Turing machine can do many different computations on a given input. Thus for instance, the output is not uniquely determined, but rather is given by a probability distribution. Also the running time is a random variable and we will say that a probabilistic Turing machine runs in time S if it always halts in time S(n) on every input of length n. Another interesting running time characteristic is the expected running time.
We can now define a new complexity class.
Definition 9.2 A set A belongs to BPP iff there is a polynomial time probabilistic Turing machine M such that

x ∈ A ⇒ Pr[M(x) = 1] ≥ 2/3
x ∉ A ⇒ Pr[M(x) = 1] ≤ 1/3
BPP is an abbreviation for Bounded-error Probabilistic Polynomial time.
Thus the machine M gives at least a reasonable guess of whether an
input x belongs to A (We will later see that this guess can be improved).
To get the ideas behind these definitions, let us next give an example of a
language in BPP not known to be in P.
Example 9.3 Checking polynomial identities: Given two polynomials P_1 and P_2 in several variables, represented in some convenient way (e.g. as determinants, products or something similar), do P_1 and P_2 represent the same polynomial? We require that the representation is such that if we are given values of the variables then we can evaluate the polynomials in polynomial time. A typical example would be to investigate whether the equality
| 1          1          1          · · ·  1          |
| x_1        x_2        x_3        · · ·  x_n        |
| x_1^2      x_2^2      x_3^2      · · ·  x_n^2      |
|   .          .          .                 .        |
| x_1^{n−1}  x_2^{n−1}  x_3^{n−1}  · · ·  x_n^{n−1}  |   =   ∏_{i>j} (x_i − x_j)
is a true identity.
The obvious approach to this problem is to expand the polynomials into a sum of monomials and then compare the expansions term by term. This procedure will in general be quite inefficient since there might be exponentially many monomials (as in the example given). Our probabilistic algorithm will evaluate the two polynomials at randomly chosen points. If the polynomials disagree on one of these points they are different, and we will prove that if they agree on all points then they are probably the same polynomial. The algorithm will depend on two extra parameters, d and k. The first parameter is a known upper bound for the degrees of the polynomials in question (in our example we could take d = n(n − 1)/2) and the second is related to the error probability.
Input P_1 and P_2.
For i = 1, 2, . . . , k:
    Pick random integer values independently for x_1 through x_n in the range [1, 2nd]. If P_1(x) ≠ P_2(x) conclude that P_1 ≠ P_2 (answer 0) and stop.
Next i.
Conclude that P_1 = P_2 (answer 1).
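A minimal Python sketch of this procedure, treating the two polynomials as black-box evaluation functions (this interface is an assumption made for the sketch):

    import random

    def probably_equal(p1, p2, n, d, k):
        # p1, p2: functions evaluating the polynomials at a point (list of ints).
        # d bounds the degree; answer 0 is always correct, answer 1 is wrong
        # with probability at most 2**-k by the analysis below.
        for _ in range(k):
            x = [random.randint(1, 2 * n * d) for _ in range(n)]
            if p1(x) != p2(x):
                return 0      # a disagreement proves the polynomials differ
        return 1              # no disagreement found in k independent trials

    # Tiny instance: (x1 + x2)^2 versus x1^2 + 2*x1*x2 + x2^2, a true identity.
    p1 = lambda x: (x[0] + x[1]) ** 2
    p2 = lambda x: x[0] ** 2 + 2 * x[0] * x[1] + x[1] ** 2
    print(probably_equal(p1, p2, n=2, d=2, k=20))   # prints 1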
Clearly if we answer 0 we are always correct, and to see that the algorithm is useful we have to prove that most of the time we are correct even when we answer 1. The key lemma is the following.
Lemma 9.4 Given a nonzero polynomial P in n variables and of degree ≤ d, the set

Z = {x | 1 ≤ x_i ≤ R, 1 ≤ i ≤ n, and P(x) = 0}

has cardinality at most dnR^{n−1}.
Proof: We prove the lemma by induction over n. For n = 1 the lemma follows from the fact that a polynomial of degree d has at most d zeroes.
For the induction step, let us consider the polynomials Q_j in the variables x_1, . . . , x_{n−1} obtained by substituting j for the variable x_n. Q_j is a polynomial of degree ≤ d in n − 1 variables and thus we could use the induction hypothesis if we knew that Q_j was nonzero. We claim that there are at most d different j such that Q_j is identically zero. To see this take any monomial in P which appears with a nonzero coefficient (assume for the sake of the argument that it is x_1x_2x_n). Now look at the coefficient of x_1x_2 in Q_j. It is the value at j of a nonzero polynomial of degree ≤ d − 2. Thus there are at most d − 2 values of j such that this coefficient is 0, and in general at most d values of j such that Q_j is identically zero.
The set Z splits into the union of sets obtained by fixing the last coordinate to any value in the range 1 to R. When the corresponding polynomial is nonzero, then by the induction hypothesis the cardinality of the set is bounded by (n − 1)dR^{n−2}, and when the polynomial is zero the cardinality is R^{n−1}. Since there are at most R sets of the first kind and d of the second we get the total estimate

R(n − 1)dR^{n−2} + dR^{n−1} = ndR^{n−1}

and the induction is complete.
Using this lemma we can analyze the algorithm. If P_1 and P_2 represent the same polynomial then we will always answer 1 and we always get the correct answer. When P_1 and P_2 do not represent the same polynomial, call an x such that P_1(x) = P_2(x) an unlucky x. Thus the algorithm gives the correct answer unless we happen to pick k unlucky x's. By applying the above lemma to P_1 − P_2 we see that there are at most (2dn)^n/2 unlucky x, and thus the probability that we pick an unlucky x is bounded by 1/2. Since the k x's are independent, the probability of them all being unlucky is at most 2^{−k}. Thus if k is reasonably large we get the correct answer with high probability.
All that remains to see that the problem lies in BPP is to observe that
the algorithm is polynomial time, but this is obvious since the essential step
of the algorithm is to evaluate the polynomials and this is polynomial time
by assumption.
In the example we saw that if we were willing to run the algorithm longer
(i.e. try more random points) then we could make the probability of error
arbitrarily small. It is not hard to see that this is true in general.
Theorem 9.5 A set A belongs to BPP iff there is a polynomial time probabilistic Turing machine M such that

x ∈ A ⇒ Pr[M(x) = 1] ≥ 1 − 2^{−|x|−2}
x ∉ A ⇒ Pr[M(x) = 1] ≤ 2^{−|x|−2}
Proof: Clearly the above conditions are stronger than our original definition and thus if A satisfies the above condition then it belongs to BPP.
We need to prove the converse, i.e. that if A ∈ BPP we can find a machine M which satisfies the above condition. We know by the definition of BPP that there is a machine M′ such that

x ∈ A ⇒ Pr[M′(x) = 1] ≥ 2/3
x ∉ A ⇒ Pr[M′(x) = 1] ≤ 1/3.
Now let M be defined by running M′ C = 2(|x| + 3)/log(9/8) times with independent random choices, and outputting 1 iff M′ outputs 1 at least C/2 times. We need to verify the claim that this M satisfies the condition in the theorem.
Assume that x ∈ A and that M′ outputs 1 with probability p on input x (we know that p ≥ 2/3). Then the probability that M does not output 1 is bounded by

∑_{i=0}^{C/2} (C choose i) p^i (1 − p)^{C−i}.

The ratio of two consecutive terms in this sum is at least p/(1 − p) ≥ (2/3)/(1/3) = 2, and thus if the last term is T then the sum is bounded by ∑_{i=0}^{C/2} 2^{i−C/2} T ≤ 2T. This last term is bounded by

2^C (2/3)^{C/2} (1/3)^{C/2} ≤ (8/9)^{C/2} ≤ 2^{−|x|−3}
and thus the first condition of the theorem follows. The second condition is
proved in a similar way.
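As a quick numerical check on this amplification argument, one can compute the majority-vote error probability exactly for small C (a minimal sketch; the success probability 2/3 is the bound from Definition 9.2):

    from math import comb

    def majority_error(p, C):
        # Probability that at most half of C independent runs answer correctly,
        # when each run is correct with probability p.
        return sum(comb(C, i) * p**i * (1 - p)**(C - i) for i in range(C // 2 + 1))

    # With p = 2/3 the error probability drops exponentially in C:
    for C in (10, 50, 100):
        print(C, majority_error(2/3, C))   # e.g. at C = 100 the error is below 1e-3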
In our example we proved more than needed to establish that the problem
in question was in BPP. In particular we proved that if the input was in the
language the answer was always correct. With this additional restriction we
get a new complexity class.
Definition 9.6 A set A belongs to R iff there is a polynomial time probabilistic Turing machine M such that

x ∈ A ⇒ Pr[M(x) = 1] ≥ 2/3
x ∉ A ⇒ Pr[M(x) = 1] = 0.
Remark 9.7 I believe that R is short for Random polynomial time. Hence
this class is sometimes also called RP.
While BPP is closed under complement, this is not obvious (or known)
for R and thus we also have a third probabilistic complexity class, co-R, the
set of languages whose complement lies in R. Observe that both R and co-R
are subsets of BPP. Our example “Polynomial identities” is a member of
co-R.
There are not many known examples of problems not known to be in P
that lie in BPP. The main other example is to recognize primes. We will
not discuss that algorithm here. However, by quite elaborate methods it is
possible to prove that primes belongs to R

co-R and for this class we can
make a very strong statement.
Theorem 9.8 A set A belongs to R ∩ co-R iff there is a probabilistic machine M which runs in expected polynomial time and always decides A correctly.
Proof: By assumption there is a machine M_1 that outputs 1 with probability at least 2/3 when the input x is in A and with probability 0 when x is not in A (since A ∈ R). Similarly, since A ∈ co-R there is a machine M_2 that outputs 1 with probability at least 2/3 when x is not in A and never when x is in A. Both M_1 and M_2 run in polynomial time. Now on input x alternate between running M_1 and M_2 until one of them answers 1. When this happens we know that x ∈ A if the 1-answer was given by M_1 and we know that x ∉ A if it was given by M_2. Each time we run both machines we have probability at least 2/3 of getting a decisive answer and hence it follows that the procedure runs in expected polynomial time.
9.1 Relations to other complexity classes
Let us relate the newly defined complexity classes to our old classes. Clearly
any of the defined classes contains P since we can always ignore our possi-
bility to use randomness. We have some non-obvious relations.
Theorem 9.9 R ⊆ NP.
Proof: We know by the definition of R that if A ∈ R then there is a machine M such that when x ∈ A then with probability ≥ 2/3 M accepts x, and when x ∉ A there are no accepting computations. But this implies that if we replace the probabilistic choices by non-deterministic choices, M accepts x precisely when x ∈ A.
The above theorem immediately yields:
Theorem 9.10 co-R ⊆ co-NP.
Our next theorem is also not very surprising.
Theorem 9.11 Suppose A ∈ BPP and the machine M that recognizes A runs in time T(n) and uses at most p(n) coins; then A can be recognized by a deterministic machine that runs in time O(2^{p(n)} T(n)) and space O(T(n) + p(n)).

Proof: Just run M for all possible 2^{p(n)} sets of coinflips and calculate the probability that M accepts. A straightforward implementation gives the given resource bounds.
We have an immediate corollary:
Corollary 9.12 BPP ⊆ PSPACE.
Apart from these theorems, nothing is known about the relation between our probabilistic classes and our old classes. There is no great consensus on what the true relations are, but many people think it is possible that P = BPP.
10 Pseudorandom number generators
In the last section we used random numbers. Without discussing the matter, we assumed that we had access to an unlimited number of perfectly random coins. In practice this might not be the case. One could indeed question whether there are any random phenomena in nature, and thus whether randomness in computation makes sense at all. This is a valid question, but it is mostly philosophical in nature and we will not discuss it. Instead we will take the optimistic attitude that there is randomness, but that there is a problem getting enough random numbers into the computer. For the sake of this section we will assume that we only need random bits, where each bit is 0 or 1 with probability 1/2. This is not a severe restriction since random bits can be turned into random numbers in many ways.
The common solution to the problem of not having enough truly random numbers is to have what is generally called a pseudorandom number generator (we will in the future call them pseudorandom bit generators since we will be generating bits). This is a function which takes a short truly random string and produces a longer “random looking” string. How the short truly random string (which is called the seed) is produced is clearly a problem (it is generally supplied by the user), but we will not concern ourselves with this problem; just assume that somehow we can get a few random bits into the computer.
The main question we will deal with in this section is how to define
what we want from a pseudorandom generator and how to construct such
a generator. One obvious property is that it should be easy to run and
produce something useful, i.e. it should be computable in polynomial time
and the output should be longer than the input. Something that has only
these two properties is a bit generator.
Definition 10.1 A bit generator is a polynomial time computable function that takes a binary string as input and on an input of length n produces an output of length p(n), where p is a polynomial such that p(n) > n for all n. For technical reasons we assume that p(n) is strictly increasing with n.
Note that the definition allows the output to be of length only n + 1, which does not seem to be much of a generator. We will see later (Theorem 10.8) that this is not a real problem.
The more interesting aspect of pseudorandom bit generators is to try to formalize the “random looking” requirement on the output. Traditionally, this was interpreted as the output bits passing a small set of standard statistical tests. This is the germ of what today is believed to be the correct definition.
Definition 10.2 A statistical test is a function from binary strings to {0, 1}.
Intuitively the output 1 can be interpreted as the string passes the test and
the output 0 as failing.
Note, however, that not even strings produced truly at random will always pass a statistical test.
Definition 10.3 (First attempt) A bit generator passes a statistical test S
if the probability that S outputs 1 on a random output of the generator is
equal to the probability that S outputs 1 on a truly random string.
Here a random output of the generator is defined as the output on a
truly random seed. The tempting definition of pseudorandom generator is
now:
Definition 10.4 (First attempt) A bit generator is pseudorandom if it passes
all statistical tests.
A bit generator that passes all statistical tests produces a very random
looking output. However the definition is too restrictive and there is no such
generator. Take any bit generator G and consider the following statistical
test:
S_G(x) = 1 if x can be output by G, and 0 otherwise.
First observe that if G stretches strings of length n to strings of length p(n)
in time T(n), then S_G can be implemented on strings of length p(n) to run
in time 2^n T(n), since we just run G on all possible strings of length n and
check if one of the outputs equals x.
When we run S_G on the output of G, the result will always be 1. On
the other hand, when we feed S_G a truly random string, the probability
that we get output 1 is at most 1/2. This follows since there is one output
for each seed, which implies that there are at most 2^n possible outputs of
G of length p(n) (here we use that p is strictly increasing); since there
are 2^{p(n)} possible strings and p(n) ≥ n + 1, at most half of the strings are
possible outputs of G.
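To make the exhaustive search explicit, here is a hedged Python sketch of S_G;
the generator G is assumed to be given as a function from n-bit strings to
p(n)-bit strings, an interface of our own choosing.

    from itertools import product

    def S_G(G, x, n):
        # Output 1 iff some seed of length n makes G produce x.
        # Trying all 2^n seeds is exactly what costs time 2^n * T(n).
        for bits in product('01', repeat=n):
            if G(''.join(bits)) == x:
                return 1     # x is a possible output of G
        return 0             # x cannot be output by G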
In practice, if n is large, it is not feasible to compute S_G as described
above, since the exponential time needed to try all the seeds is usually too
much. Thus this test is somehow "cheating", and we change the definition
to take care of this.
Definition 10.5 (Final attempt) A bit generator is pseudorandom if it
passes all statistical tests that run in probabilistic polynomial time.
Remark 10.6 From the development up to this point polynomial time is
the natural requirement on efficient statistical tests. The choice to allow
statistical tests to be probabilistic is not clear, but for many reasons (we will
not go into them here) it is the better choice. Allowing randomness makes
the definition stronger since anything that passes all probabilistic polynomial
time statistical tests also passes all deterministic polynomial time statistical
tests.
We have still not overcome all problems with the definitions, as can be
seen from the following miniature version of S_G.
Test s_G: On input x of length p(n), guess n^2 random seeds of length n,
run G on these seeds, and output 1 if one of the outputs of G
is equal to x. Otherwise output 0.
Since G is assumed to be polynomial time, s_G can be implemented in
polynomial time. Furthermore, if x is a string that could have been generated
by G then there is some small but positive probability that s_G will
output 1, while if x cannot be output by G then this probability is 0. By
the analysis of S_G this implies that the probability that s_G outputs 1 on a
random output of G is different from the probability that it outputs 1 on a
random input. As we have defined passing statistical tests, this means that
G fails the test s_G. This is counterintuitive since for large n the test s_G is
very weak. We change the definition to take care of this anomaly.
Definition 10.7 (Final attempt) Let S be a statistical test and let G be a
bit generator. Let a_n be the probability that S outputs 1 on a random output
of G of length n, and let b_n be the probability that it outputs 1 on a truly
random input of length n. G passes the statistical test S if for any k there is
an N_k such that for all n > N_k it is true that |a_n − b_n| < n^{−k}. The probability
is taken over the random output of G and the random choices of S.
In other words, the difference between the behavior of the test on outputs
of the generator and on random strings goes to 0 faster than the inverse of any
polynomial.
Let us first prove that once we have a pseudorandom generator which
extends the seed slightly, we can get an arbitrary extension.
Theorem 10.8 If there is a pseudorandom bit generator G, then for any
strictly increasing polynomial p there is a pseudorandom bit generator G′
that extends n bits to p(n) bits.
Proof: The only problem is that G might not extend the seed sufficiently.
By definition G maps n bits to more than n bits. We will assume that
G outputs n + 1 bits, since if it outputs more bits we can just ignore them.
Note that G remains a pseudorandom bit generator (prove this!). Now define
G′ to be G iterated p(n) − n times, i.e. on an input of length n, we first
compute G to get a string of length n + 1, then compute G on this string to
get a string of length n + 2, etc., until we have a string of length p(n). This
generator produces a string of the wanted length and it is easy to see that it
works in polynomial time. We prove that it is pseudorandom by converting
a hypothetical statistical test S which distinguishes the output of G′ from
random strings into a test which distinguishes the output of G from random
strings.
Let a_n be the probability that S outputs 1 on random outputs from G′
of length p(n), and let b_n be the corresponding probability when the input is
truly random. By assumption, for some k and for infinitely many n (for notational
convenience we assume this holds for all n) we have |a_n − b_n| ≥ n^{−k}.
Consider the following probability distributions R_i, 0 ≤ i ≤ p(n) − n, on
strings of length p(n): start with a truly random string of length n + i and
iterate G p(n) − i − n times. Note that R_0 consists of random outputs of G′
while R_{p(n)−n} consists of truly random strings. Let q_i be the probability
that S outputs 1 on distribution R_i. Since q_0 = a_n and q_{p(n)−n} = b_n
and |a_n − b_n| ≥ n^{−k}, there is some i such that |q_i − q_{i+1}| ≥ 1/(n^k p(n)).
Let us fix this i.
Now consider the following statistical test on strings of length n + i + 1:
given a string x, iterate G p(n) − n − i − 1 times and run S. If the initial
string was random we have produced an element according to R_{i+1} and the
probability of getting output 1 is q_{i+1}. On the other hand, if the initial
string was the output of G on a random string of length n + i, then we
have produced a string according to R_i and the probability of getting a 1
is q_i. This implies that we have found a way of distinguishing the output
of G from random strings, and hence we have reached a contradiction since
G was supposed to be pseudorandom. Note that the test obviously runs in
polynomial time.
This should finish the proof, but the very careful reader will see that there
are some minor problems. The proposed test uses two auxiliary parameters,
p(n) and i. The value p(n) causes no problems since it is the value of a
fixed polynomial. However, it is not clear how to find i. We sketch how to
get around this problem: Let c be a constant. On a given input of length n,
consider the tests given by the different values of i. Evaluate each
test by picking n^c random inputs according to both distributions. Let i_0
be the value of i that gives the biggest difference between the two distributions.
Now run the test with i = i_0 on the given input. It is a tedious (and not that
easy) exercise to check that for some c this "universal" test will distinguish
the random strings from outputs of G.
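The construction of G′ itself is simple. A small Python sketch follows, where G
is assumed to map an m-bit string to an (m+1)-bit string (extra output bits, if
any, are assumed to have been discarded already); the function name is ours.

    def stretched_generator(G, seed, out_len):
        # G' of Theorem 10.8: apply the one-bit-extending generator G
        # repeatedly until the string has the desired length p(n).
        s = seed
        while len(s) < out_len:
            s = G(s)     # each application extends the string by one bit
        return s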
Let us next investigate the existence of pseudorandom bit generators.
Theorem 10.9 If NP ⊆ BPP then there are no pseudorandom generators.

Proof: Just observe that the test S_G is in NP, and hence by the assumption
it can be run in probabilistic polynomial time. Since this test distinguishes
the output of G from random bits, G cannot be pseudorandom.
In particular, if P = NP then there are no pseudorandom generators, and thus
proving the existence of such generators would prove P ≠ NP, which we
cannot do for the moment. Thus the best we could hope for is to prove that
if P ≠ NP then there are pseudorandom generators. Also this is probably
too much to hope for. The reason is that P vs NP is a question about the worst
case behavior of algorithms, while the existence of pseudorandom generators
is an average case question. This forces us to base the construction of
pseudorandom generators on even stronger assumptions.
Definition 10.10 A function f is a one-way function if it is computable in
polynomial time and for any probabilistic polynomial time algorithm A the
following holds. Choose a random input x of length n and compute y = f(x).
If A is given y as input, then the probability that it outputs a z such that
f(z) = y goes to 0 faster than the inverse of any polynomial.
Remark 10.11 Note that we cannot ask A to actually find the initial x,
since in such a case the constant function would be one-way.
We have:
Theorem 10.12 If there is a pseudorandom bit generator then there is a
one-way function.
Proof: We claim that the function given by the generator (i.e. from
the seed to the output) is one-way. By Theorem 10.8 we can assume that the
generator expands n bits to 2n bits. Assume that the function given by
this generator (let us, by abuse of notation, denote the generator as well as the
function it computes by G) is not one-way; in other words, that there are a k
and an A such that A finds an inverse image of a given function value with
probability at least n^{−k} (for infinitely many n). Then the following test S
will distinguish outputs of G from random bits:

On input x, run A. Suppose A outputs y; then if G(y) = x output
1, otherwise output 0.
If x is a truly random string of length 2n then the probability that the
test S outputs 1 is bounded by the probability that x can be output by
G. Since there are 2^{2n} possible strings and at most 2^n outputs from G, this
probability is bounded by 2^{−n}. On the other hand, if x is the output of
G then the probability of output 1 is exactly the success probability of A,
which by assumption is at least n^{−k} (for infinitely many n). Thus this test
distinguishes the output of G from random strings, contradicting that
G is pseudorandom (the test is polynomial time since both A and G are
polynomial time). This proves that G is a one-way function.
It was a long standing open question whether the converse of Theorem
10.12 is also true, i.e. whether starting from any one-way function it
is possible to construct a pseudorandom bit generator. In 1990 it was
proved by Håstad, Impagliazzo, Levin and Luby that this is indeed the case,
but their proof is much too complicated for the present set of notes. Instead
we prove the following theorem, which is due to Yao (the present proof is due
to Goldreich and Levin). Let a one-way length-preserving permutation be a
one-way function which for each n is a 1-1 mapping on strings of length n.

Theorem 10.13 If there is a one-way length-preserving permutation then
there is a pseudorandom bit generator.
Proof: Let f be the one-way length-preserving permutation. Let x and r
be random strings of length n, and let (x, y) denote the inner product modulo 2
of the strings x and y (i.e. the parity of Σ_{i=1}^n x_i y_i). Then we claim that
the function g(x, r) = f(x), r, (x, r) is a pseudorandom bit generator. It is
a bit generator since it expands 2n bits to 2n + 1 bits, and it is polynomial
time computable since f is polynomial time computable. The hard part is
to prove that it is pseudorandom. The following lemma of Goldreich and
Levin will be crucial.
Lemma 10.14 Suppose we have a probabilistic polynomial time algorithm
A that on input f(x), r computes (x, r) with probability greater than
1/2 + 1/Q(n), where Q is a polynomial. (Here the probability is taken over a random
choice of x and r and the random choices of A.) Then there is a probabilistic
polynomial time algorithm B that inverts f with probability of success at least
1/(2Q(n)).
In other words, if f is a one-way function then (x, r) looks random to
any probabilistic polynomial time machine which only has the information
f(x), r.
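In code the generator of Theorem 10.13 is a one-liner on top of the inner
product; the following Python sketch assumes f is given as a function on bit
strings, a stand-in for the one-way length-preserving permutation.

    def inner_product(x, r):
        # (x, r): the parity of the bitwise products of x and r
        return sum(int(a) & int(b) for a, b in zip(x, r)) % 2

    def g(f, x, r):
        # g(x, r) = f(x), r, (x, r): expands 2n input bits to 2n + 1 bits
        return f(x) + r + str(inner_product(x, r))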
Let us first see how Theorem 10.13 follows from Lemma 10.14. Suppose
g is not pseudorandom and that S is a statistical test which outputs 1 with
probability a_n on random bits and b_n on random outputs of g. Suppose
without loss of generality that a_n ≥ b_n + n^{−k}. Now consider the following
algorithm for predicting (x, r):

On input f(x), r run S: let b_0 = S(f(x), r, 0) and b_1 = S(f(x), r, 1). If
b_0 = b_1, output a random bit; otherwise output the i such that b_i = 1.
Let p(x, r, i) be the probability that S outputs 1 on (f(x), r, i). Then

a_n = 2^{−2n−1} Σ_{x,r,i} p(x, r, i)

and

b_n = 2^{−2n} Σ_{x,r} p(x, r, (x, r)).

Consider the above algorithm on input f(x), r, and write c = (x, r), so that
1 − c is the complement of (x, r). The probability that the algorithm outputs
the correct value for f(x), r is

p(x, r, c)(1 − p(x, r, 1 − c)) +
(1/2)(p(x, r, c)p(x, r, 1 − c) + (1 − p(x, r, c))(1 − p(x, r, 1 − c)))

which equals (1/2)(1 + p(x, r, c) − p(x, r, 1 − c)). Hence the total probability
of it being correct is (1/2)(1 + a_n − b_n), and now Theorem 10.13 follows
from Lemma 10.14.
Next let us prove Lemma 10.14.

Proof: (Lemma 10.14) We give a proof due to Rackoff.

First observe that for at least a fraction 1/(2Q(n)) of the x's, A predicts
(x, r) with probability (now only over r) at least 1/2 + 1/(2Q(n)). We will
describe a procedure that is successful with high probability for each such x,
and this is clearly sufficient.
We compute each bit of x individually. Let e_i be the unit vector in the
i'th dimension. We could ask A about f(x), e_i, but there is no reason it would
be correct on precisely these inputs. We need to ask about many points, and we will
use a small random subspace shifted by e_i. The set of r's asked about will be
pairwise independent, but we can guess the answers for the entire subspace
by guessing the answers on the basis vectors. Let k be a parameter and let ⊕
denote exclusive-or. The algorithm on input y now works as follows:
Pick k random vectors r_1, r_2, ..., r_k of length n.
For each value of the k bits b_1, b_2, ..., b_k do
    For i = 1 to n do
        count = 0
        For all non-empty subsets S of {1, 2, ..., k} do
            Ask A about (y, e_i ⊕ (⊕_{j∈S} r_j)); suppose the answer is b.
            Compute b′ = b ⊕ (⊕_{j∈S} b_j) and set count = count + 1 − 2b′.
        Next S
        Set x_i = 0 if count > 0 and 1 otherwise.
    Next i
    If f(x) = y, output x and stop.
od
Report 'failure'.
Just to avoid confusion, observe that count is the number of 0-guesses
minus the number of 1-guesses, and hence we are making a majority decision.
If A runs in time T(n) and f in time T_1(n), then the algorithm runs in time
2^{2k}nT(n) + T_1(n), and thus the algorithm is polynomial time if k is O(log n).
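For concreteness, here is a hedged Python rendering of the procedure, with
A(y, r) an assumed 0/1-valued predictor for (x, r) and f the permutation;
both interfaces, and the function names, are ours and not part of the text.

    from itertools import product
    from random import getrandbits

    def xor(a, b):
        # bitwise exclusive-or of two equally long bit strings
        return ''.join(str(int(u) ^ int(v)) for u, v in zip(a, b))

    def invert(A, f, y, n, k):
        rs = [format(getrandbits(n), '0%db' % n) for _ in range(k)]
        for bs in product((0, 1), repeat=k):     # guess b_j = (r_j, x)
            x = []
            for i in range(n):
                e_i = '0' * i + '1' + '0' * (n - i - 1)
                count = 0
                for S in range(1, 2 ** k):       # non-empty subsets of {1..k}
                    r, b = e_i, 0
                    for j in range(k):
                        if S >> j & 1:
                            r = xor(r, rs[j])    # r = e_i xor (xor of r_j, j in S)
                            b ^= bs[j]
                    guess = A(y, r) ^ b          # b' in the text
                    count += 1 - 2 * guess       # vote: +1 for bit 0, -1 for bit 1
                x.append('0' if count > 0 else '1')
            x = ''.join(x)
            if f(x) == y:
                return x                         # correct preimage found
        return None                              # report failure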
We need to analyze the probability that we find the correct x. We claim that
this happens with good probability when b_i = (r_i, x). Let r_S^i denote
e_i ⊕ (⊕_{j∈S} r_j) and let b_S^i denote b ⊕ (⊕_{j∈S} b_j). If A gives the correct
answer (i.e. (x, r_S^i)) to the question (y, r_S^i), then x_i = b_S^i. This implies
that we are in pretty good shape, since we know that A gives a majority of
correct answers and the r_S^i are fairly random.
Lemma 10.15 For S_1 ≠ S_2, the vectors r_{S_1}^i and r_{S_2}^i are independent
and uniformly distributed on {0, 1}^n.
Proof: Suppose j ∈ S_1 but j ∉ S_2 (if there is no such j we can interchange
S_1 and S_2). Now it is easy to see that r_{S_2}^i is uniformly distributed
(its definition is an exclusive-or of several strings, at least one of which is
uniformly random), and that for any fixed value of r_{S_2}^i the presence of r_j in the
exclusive-or defining r_{S_1}^i makes sure that it too is uniformly distributed.
Now it follows from Lemma 10.15 that the b_S^i are pairwise independent.
Suppose for notational convenience that x_i = 0. Then count is
a random variable with expected value at least (2^k − 1)/Q(n) and variance at most
2^k − 1. Now recall Tchebychev's inequality:
Theorem 10.16 Let X be a random variable with expected value µ and
variance v. Then the probability that |X − µ| ≥ λ is bounded by v/λ^2.
Using this with λ = (2^k − 1)/Q(n) and v = 2^k − 1, we see that x_i takes the incorrect
value with probability at most Q(n)^2/(2^k − 1). Now if 2^k − 1 ≥ 10nQ(n)^2, then the
probability that x_i does not take the correct value is bounded by 1/(10n). Thus
the probability that some x_i is incorrect is bounded by 1/10. This concludes
the proof of Lemma 10.14.
Remark 10.17 We have now given a generator that extends the input by
one bit, and we know by Theorem 10.8 that we can get a generator which
extends the output arbitrarily. We can take this to be the following very
natural generator: pick x and r randomly and let b_i = (f^i(x), r), where f^i is
f iterated i times, for i = 1, 2, ..., p(n).
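A sketch of this natural generator, reusing the inner_product helper from the
earlier sketch and again treating f as an assumed length-preserving permutation:

    def iterated_generator(f, x, r, p):
        # b_i = (f^i(x), r) for i = 1, 2, ..., p
        bits = []
        for _ in range(p):
            x = f(x)                           # now x holds f iterated i times
            bits.append(inner_product(x, r))
        return bits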
Now that we have studied good generators it is natural to ask what
happens if we use these generators to produce the random bits needed by a
probabilistic algorithm. Suppose we have a probabilistic machine M which
recognizes a BPP-language B, and let G be a pseudorandom generator.
Suppose M uses p(n) random bits and that for some small constant ε, G
extends n^ε bits to p(n) bits. The latter can be assumed by Theorem 10.8.
Now consider the following statistical test S_{M,x} of a random string r of
length p(n):

Given x, run M on input x with random coins r. Answer with
the output of M.

We know that when x ∈ B and r is random, the probability that this
test outputs 1 is at least 2/3, while otherwise it is at most 1/3. Since G by
assumption passes all statistical tests, it is tempting to think that the same
is true for outputs of G. This would imply a theorem
similar to Theorem 9.11, saying that B could be recognized in time close to
2^{n^ε}, since we would only have to try all seeds of G rather than all sets of
p(n) coins.
The reason this is not true is that the test has a parameter x which might
be hard to find (the parameter M is not a problem since it is of constant
size). All is not lost, since we could change the statistical test to choose x
randomly and then study the behavior of M. Then we could prove that we
have a deterministic algorithm that runs in time close to 2^{n^ε} and is correct
for most inputs. However, since we have not studied the concept of being
correct for most inputs, we will not pursue this approach. Instead we have:
Definition 10.18 A non-uniform statistical test is a probabilistic polynomial
time algorithm that on inputs of length n gets an advice string a_n which is of
polynomial length.
Remark 10.19 Note that the advice is the same for all strings of length n.
The interested reader might want to prove that the given definition corre-
sponds to polynomial size circuits without any uniformity constraints.
Definition 10.20 A pseudorandom generator is non-uniformly strong if it
passes all non-uniform statistical tests.
This definition is stronger than the previous one since we are allowing
stronger statistical tests. We will not prove it here, but it turns out
that the existence of such generators is equivalent to the existence of one-way
functions where we allow the inverting algorithm to have an advice. In
general, all proofs for the uniform case translate to the non-uniform case.
In particular Theorem 10.8 remains true. We now finish the discussion with
a theorem of Yao.
Theorem 10.21 If there is a pseudorandom generator which is non-uniformly
strong then

BPP ⊆ ∩_{ε>0} DTIME(2^{n^ε}).
Proof: The proof is as outlined above. Suppose B ∈ BPP and that it is
recognized by M which uses p(n) coins and runs in time T_1(n) (both these
bounds are polynomials). Let δ < ε and let G be a non-uniformly
strong generator which extends n^δ bits to p(n) bits and runs in time T_2(n)
(which is also a polynomial). Now let x be an arbitrary input of length n
and consider the above test S_{M,x}. This test uses the advice x, but since G
is non-uniformly strong, it passes this test. This implies that if we replace
the coins by a random output of G, then we still have essentially the same
probability of acceptance. We now just try all the 2^{n^δ} possible seeds for G
and take a majority decision. This can be done in time 2^{n^δ}(T_1(n) + T_2(n)),
which is O(2^{n^ε}). Since both B and ε were arbitrary we have proved the
theorem.
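Written out, the simulation looks as follows. This is only a sketch; M, G and
the length parameters are the hypothetical interfaces used in the earlier sketches.

    from itertools import product

    def derandomize_with_generator(M, G, x, seed_len, p):
        # Try all 2^{n^delta} seeds instead of all 2^{p(n)} coin sequences,
        # expand each seed to p pseudorandom coins, and take a majority vote.
        accepts = 0
        for bits in product('01', repeat=seed_len):
            coins = G(''.join(bits))[:p]
            if M(x, coins):
                accepts += 1
        return 2 * accepts > 2 ** seed_len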
Thus we have proved that if there are one-way functions in the non-uniform
setting, then BPP can be simulated in time which is significantly
cheaper than exponential. If one is willing to make stronger assumptions
then one can draw stronger conclusions. In particular, if there is a
polynomial time computable function such that inverting it (in
the non-uniform setting, with non-negligible success ratio) on inputs of
length n requires time 2^{cn} for some constant c > 0, then BPP = P.
11 Parallel computation
The price of processors has dropped remarkably in the last decade and it is
now feasible to build computers that have a large number of processors. The
most famous multi-processor computer might be the Connection Machine,
which has 2^{16} = 65536 processors.
The concept of having many processors working in parallel leads to many
interesting theoretical problems. One could phrase the main question as a
variant of a traditional math problem: if one computer can compute
a given function in one million seconds, how long would it take a million
computers to compute the same function?
The answer to this question is not known, but it seems like the answer
could be anywhere from one second to a million seconds depending on the
function. It is an important theoretical problem to identify the computa-
tional tasks that can be parallelized in an efficient manner. In this section
we will just give the first definitions and show some basic properties.
When many processors cooperate to solve a problem, it is of crucial importance
how they communicate. In fact, in practice this seems to be the
overshadowing problem in making large scale parallel computation efficient.
It is hard to get this fairly practical consideration into the theoretical
models in a suitable manner, and this complication will usually get lost. We
choose here to study the circuit model of computation and, as we will see,
communication between processors will be ignored. We do not want to argue
that the model does not reflect reality; we only want to point out that
there is one important aspect missing.
11.1 The circuit model of computation
We have previously briefly discussed the concept of a Boolean circuit. It is
a directed acyclic graph with three types of nodes: input nodes, operation
nodes and output nodes. The input nodes are labeled by variable names x_i
and the operation nodes are labeled by logical operators. The inputs to a
node v are the nodes w for which (w, v) is an edge.

We will here only allow the operators ∧, ∨ and ¬. The circuit computes
a function {0, 1}^n → {0, 1} in the natural way. (Substitute the value of the
i'th coordinate for x_i and then evaluate the nodes by letting each operation
node take the value obtained by applying the corresponding operator to the
inputs of that node.) We will be interested in two parameters of the
circuit: its size and its depth. The size of a circuit C_n will be denoted by
|C_n| and is equal to the number of nodes it contains, while the depth,
denoted by d(C_n), is the length of the longest directed path from an input to the
output. If there is a processor at each node of the circuit, then the number of
processors is equal to the size of the circuit and the time needed to evaluate
the circuit is equal to the depth of the circuit. Thus, if we are interested
in fast parallel computation, it is interesting to construct small circuits of
small depth.
The functions we have been considering so far take inputs of arbitrary
length, while a circuit can only take inputs of one given length. The
way to resolve this is to let a function be computed by a sequence of circuits
(C_n)_{n=1}^∞, where C_n computes f on inputs of length n. We will then be
interested in the growth rate of the size and depth of C_n as functions of n.
In particular, we will say that a sequence of circuits is of polynomial size if
the growth rate of |C_n| is not more than polynomial in n. Let us now state
a theorem that was implicitly proved in Section 7.3.

Theorem 11.1 If B ∈ P then B can be recognized by polynomial size circuits.
Proof: (Outline) In the proof of Theorem 7.25 we saw that given a Turing
machine M and an input x, we could construct a circuit such that the
output of the circuit was equal to the output of M on input x. The circuit
constructed the computation tableau of M row by row. If one looks closely
at that proof, one discovers that the structure of the circuit only depends
on M, while x enters as the input of the circuit. In particular, given a language
B ∈ P we take the corresponding Turing machine M_B, and given n
we can now construct a circuit C_n which gives the same output as M_B
on all inputs of length n. The size of this circuit will only be a constant
factor greater than the size of the computation tableau of M_B on inputs of length
n. If M_B runs in time O(n^c) then this size will be O(n^{2c}), and thus we have
constructed circuits for B of polynomial size.
Remark 11.2 By more efficient constructions it is possible to give a better
simulation of Turing machines and decrease the size of the above circuit
to O(n^c log n).
One immediate question is whether the converse of the above theorem is
true, i.e. if a function can be computed by polynomial size circuits, does the
function in fact lie in P? With the current definitions
this is not true. The reason for this is that we have not put any conditions
on how to obtain the circuits C_n. To see the problem, consider the following
language:

B = {x | M_{|x|} halts on blank input}

As we have seen earlier, this language is not even recursive. However, it has
very small circuits, since for each length n either all strings of length n are
in B or no string of that length is a member of B. Thus C_n could just be a
trivial circuit which always outputs 0 or always outputs 1, depending on whether M_n
halts on blank input. How to decide which one to choose is non-recursive,
but this is of no concern in the old definition, and the following definition is called
for.
Definition 11.3 A sequence of circuits (C_n)_{n=1}^∞ is P-uniform (L-uniform) iff there
is a Turing machine M, which works in polynomial time (logarithmic space),
that on input 1^n prints a description of C_n on its output tape.
Using this definition we get:
Theorem 11.4 B can be computed by polynomial size P-uniform circuits
iff B ∈ P.
Proof: (Outline) First just observe that the circuits described in the above
proof are P-uniform. They are in fact L-uniform by the proof of Theorem
7.25. This proves one of the implications in the theorem.
To see the reverse implication, suppose that B is recognized by polynomial
size P-uniform circuits. Then on input x a Turing machine can first
construct the circuit C_{|x|} and then compute its value on input x. The first
part is polynomial time by the definition of P-uniformity, and the second part
is easily seen to be polynomial time.
11.2 NC
We can now define our main complexity class of parallel computation.
Definition 11.5 A set B is in NC^k iff it can be recognized by a family
of L-uniform circuits (C_n)_{n=1}^∞ where C_n is of polynomial size and
d(C_n) ≤ O((log n)^k). Furthermore, NC = ∪_{k=1}^∞ NC^k.
Remark 11.6 The name NC is short for Nick’s Class. This is named after
Nick Pippenger who was one of the first researchers to study this class.
Remark 11.7 Normally one requires even stricter uniformity constraints
for NC^1 than L-uniformity. For reasons that go beyond the scope of these
notes, this gives a better definition. However, to make life easier we will stick
with the above definition.
We can now make an obvious observation.
Theorem 11.8 NC ⊆ P.
Proof: This follows immediately from the definition of NC and Theorem
11.4.
From a theoretical standpoint, NC is considered to be the subset of P which
admits ultrafast parallel algorithms (time O((log n)^k)). Some of the algorithms
we present will also be efficient in practice and some will not. When
we describe how to construct circuits we will be quite informal and talk in
terms of processors doing simple operations. Formally this should of course
be replaced by nodes in circuits, but somehow processors seem to go better
with the intuition.
Example 11.9 Given two n-bit numbers, compute their sum. This might
look straightforward since we can have one processor take care of each
digit. This will be the basic idea, but we have to do something intelligent
with the carries, since if we treat them without thinking we will need circuits
of linear depth. You see the reason for this if you try to add the binary
numbers 01111111 and 00000001. The critical point is to discover quickly
whether a carry is coming from your right. The process to do this is called
carry look-ahead.

We use one processor for each digit of the two numbers. This processor
checks whether that position Generates, Propagates or Stops a carry and
marks the position G, P or S accordingly. We can combine this information
in a binary tree to see how longer blocks behave with respect to
carries. For instance, a block of length two will generate a carry if it looks
like GG, GP, GS or PG, it will propagate a carry if it looks like PP, and it
will stop a carry if it looks like PS, SG, SP or SS. Continuing in this way
we can quickly compute whether certain intervals generate, propagate or stop a carry.
How to do this might best be seen by an example. Suppose the numbers
are 01111011 and 01001010. We get the representation SGPPGSGP and
[Figure 11: Carry look ahead tree, going up]
[Figure 12: Carry look ahead tree, going down]
we build a binary tree (see Figure 11) to find out how longer blocks behave.
Now to see if we have a carry in a given position, we just have to figure out
whether each suffix of the string SGPPGSGP generates a carry. It is quite easy to
see how this is done. One way to phrase it formally is the following: suppose
you want to know if there is a carry in a given position; start at that position
and walk down the tree. Whenever you go right, write down what you see
coming in from the left to that same node. Finally, evaluate the string you
get. For instance, if you start in position 6 in the given tree, you get the
string PG, which evaluates to G, and thus there is a carry in position 6. One
can also view this last step as sending the appropriate values down the tree,
as indicated in Figure 12. By actually building this tree in the circuit we
see that we get a circuit of depth O(log n) which computes all the carries,
and since the rest is simple once we know the carries, we can conclude that
addition belongs to NC^1.
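The following Python sketch simulates the carry look-ahead idea sequentially
(the circuit would instead evaluate the same combine rule in a balanced tree
of depth O(log n)); the function names are ours, not from the text.

    def add_carry_lookahead(a, b):
        # a, b: equal-length binary strings, most significant bit first.
        n = len(a)
        # Classify each position: Generates, Propagates or Stops a carry.
        status = ['G' if x + y == '11' else 'S' if x + y == '00' else 'P'
                  for x, y in zip(a, b)]

        def combine(left, right):
            # Block rule of the example: GG,GP,GS,PG -> G; PP -> P; rest -> S.
            return right if left == 'P' else left

        carry = [0] * n      # carry entering each position from the right
        suffix = 'S'         # behavior of the (initially empty) suffix
        for i in range(n - 1, -1, -1):
            carry[i] = 1 if suffix == 'G' else 0
            suffix = combine(status[i], suffix)

        digits = [str((int(a[i]) + int(b[i]) + carry[i]) % 2) for i in range(n)]
        return ('1' if suffix == 'G' else '0') + ''.join(digits)

For the numbers of the example, add_carry_lookahead('01111011', '01001010')
returns '011000101' (leading bit is the overflow), and a carry indeed enters
position 6, matching the walk down the tree described above.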
Example 11.10 Given two n-bit numbers, we want to multiply the numbers.
It is not hard to see that this can be reduced to adding together n
n-digit numbers (just do the ordinary multiplication algorithm we learned
in first grade). Now by the previous example we can add these numbers
pairwise in depth O(log n) to obtain n/2 numbers whose sum we want to compute.
Adding the numbers pairwise for log n rounds gives us the answer.
This gives a circuit of polynomial size and depth O((log n)^2). In fact, multiplication
and addition of n numbers can both be done in depth O(log n).
We leave this as an exercise.
Example 11.11 Given two n × n matrices, multiply them. Let us suppose
the entries are m-bit integers. Suppose the given matrices are A = (a_{ij}) and
B = (b_{ij}). Then we want to compute Σ_{j=1}^n a_{ij}b_{jk} for all i and k. We have
the following algorithm:

1. Compute all the products a_{ij}b_{jk} for all i, j and k.

2. Compute the sums Σ_{j=1}^n a_{ij}b_{jk} for all i and k.

If we have O(m^2 n^3) processors we can do the first operation in depth
O(log m) (by the exercise extending the multiplication example), while the
second can be done with O(n^3 m) processors in depth O(log nm) (using the
same exercise). Thus the entire computation uses a polynomial number of
processors and O(log nm) depth.
The problems that seem to be hardest to parallelize are those
where the natural sequential algorithms are iterative in nature.
Examples of such problems are computing integer GCDs, solving linear
equations and computing a depth-first search tree of a graph. Of these,
the linear equation problem can be solved in NC, and finding a depth-first
search tree is known to be in RNC (Random NC, i.e. circuits of small depth
where you allow random inputs and only require a good probability
of finding a depth-first search tree), while for integer GCDs no
circuits of sublinear depth are known. Just to give an example
of something nontrivial, let us give as a last example an algorithm to compute
the determinant of a matrix which runs in O((log n)^2) time and uses a
polynomial number of processors. We have to assume some facts from linear
algebra.
Example 11.12 Given a matrix M, compute its determinant. Let us recall
some facts. If λ_i denote the eigenvalues of M, then it is well known that
Π_{i=1}^n λ_i = det(M). The trace of a matrix M (denoted by Tr(M)) is the
sum of its diagonal elements, i.e. Tr(M) = Σ_{i=1}^n m_{ii}, and it is well known
that Tr(M) = Σ_{i=1}^n λ_i. Let s_k = Tr(M^k), which equals Σ_{i=1}^n λ_i^k since the
eigenvalues of M^k are λ_i^k. The s_k are easy to compute in parallel, since
we have already shown how to compute matrix products and M^k can be
computed by O(log k) matrix products done in sequence. The characteristic
polynomial of M is det(λI − M) = λ^n + Σ_{i=1}^n c_i λ^{n−i} = c(λ). It is standard
that c_n = det(−M) and that c(λ) = Π_{i=1}^n (λ − λ_i). From this it follows that
c_i = Σ_{S:|S|=i} (−1)^i Π_{j∈S} λ_j, where S ranges over the subsets of {1, 2, ..., n}
and |S| denotes cardinality. Using this one can prove that
( 1        0        0        0        ...  0 ) ( c_1 )      ( s_1 )
( s_1      2        0        0        ...  0 ) ( c_2 )      ( s_2 )
( s_2      s_1      3        0        ...  0 ) ( c_3 )      ( s_3 )
( s_3      s_2      s_1      4        ...  0 ) ( c_4 )  = − ( s_4 )
( ...                                 ...  0 ) ( ... )      ( ... )
( s_{n−1}  s_{n−2}  s_{n−3}  s_{n−4}  ...  n ) ( c_n )      ( s_n )
Thus all that remains is to show that we can solve Ax = b where A is
lower-triangular. If we multiply each row by a suitable number we
can assume that all the entries on the diagonal of A are unity. Then A can be
written as I − B where B is strictly lower-triangular. Now it is easy to check
that A^{−1} = Σ_{i=0}^n B^i, and thus by some additional matrix multiplications we
can compute the inverse of A; hence we can solve for the c_i and find
c_n = det(−M). The number of processors is quite bad but still polynomial,
and the depth is O((log n)^2).
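Sequentially, the whole example fits in a few lines of Python with numpy. This
sketch replaces the parallel triangular inversion by numpy's solver, so it only
illustrates the arithmetic; the rows of the linear system are Newton's identities,
and M is assumed to be a square numpy array.

    import numpy as np

    def det_via_traces(M):
        n = M.shape[0]
        P, s = np.eye(n), []
        for _ in range(n):
            P = P @ M                 # P = M^k
            s.append(np.trace(P))     # s_k = Tr(M^k) = sum of lambda_i^k
        A = np.zeros((n, n))
        for i in range(n):            # row i: s_i c_1 + ... + (i+1) c_{i+1} = -s_{i+1}
            A[i, i] = i + 1
            for j in range(i):
                A[i, j] = s[i - j - 1]
        c = np.linalg.solve(A, -np.array(s))
        return (-1) ** n * c[-1]      # c_n = det(-M), so det(M) = (-1)^n c_n

On a random small matrix this agrees with np.linalg.det up to rounding error.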
Once we can compute determinants we can do almost all operations in
linear algebra. The drawback in practice is that we get fairly large circuits.
11.3 Parallel time vs sequential space
A couple of the examples of problems that we could do in NC also appeared
as problems doable in small space. This is no coincidence and in fact se-
quential space and parallel time are quite related as soon as one does not
put any other restrictions on the computation.
Theorem 11.13 Suppose S(n) ≥ log n for all n. If B can be recognized in
space O(S(n)), then it can be recognized by circuits of depth O(S^2(n)).
Proof: Suppose B is recognized by M_B which runs in space O(S(n)). We
will use one processor p_C for each possible configuration C of M_B. There
are 2^{O(S(n))} configurations and thus we will use many processors, but this is
of no concern to us for the moment.
At stage i of the algorithm, p_C finds out which configuration C would
change to in 2^i computation steps. This is easy for i = 0, and in general it
is done as follows. After stage i − 1, p_C already knows which configuration
C′ it is that C transforms to in 2^{i−1} steps. On the other hand, p_{C′} knows which
configuration C′ transforms to in 2^{i−1} steps, and this is the desired answer.
Since M_B runs in time 2^{O(S(n))}, after O(S(n)) stages the processor corresponding
to the initial configuration will know the result of the computation.
Thus the critical parameter is the depth required to do one stage.

A single stage can be done by having a binary tree of depth O(S(n))
which connects each processor to each other processor and selects the processor
corresponding to the current information. We leave the details to the
reader.

To sum up: we have O(S(n)) stages where each stage can be done in
depth O(S(n)). This gives total depth O(S^2(n)) and thus we have proved
the theorem.
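The repeated squaring at the heart of the proof is easy to state in code. The
following sketch works on an explicit (and therefore exponentially large)
configuration set, so it only illustrates the structure of the argument; it
assumes the machine, once it accepts, stays in its accepting configuration,
and all names are illustrative.

    def simulate_by_squaring(step, configs, start, accept, stages):
        # After stage i, jump[c] is the configuration reached from c in 2^i steps.
        jump = {c: step(c) for c in configs}
        for _ in range(stages):                        # O(S(n)) stages suffice
            jump = {c: jump[jump[c]] for c in configs} # double the horizon
        return jump[start] == accept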
Corollary 11.14 L ⊆ NC^2.

Proof: By Theorem 11.13 we know that L can be done by circuits of depth
O((log n)^2). By inspection of the proof we conclude that the circuits are of
polynomial size.
There is also a close-to-converse result to Theorem 11.13. Let S-uniform
denote a family of circuits that can be constructed by a Turing machine that
runs in space S.

Theorem 11.15 Suppose S(n) ≥ log n for all n. If B can be recognized
by S-uniform circuits of depth O(S(n)), then B can be recognized in space
O(S(n)).
Proof: The idea of the proof is to do a depth first search of the circuit for
B.

By duplicating nodes we can assume that the circuit is actually a tree.
(One has to check that this does not change the condition of S-uniformity,
but it does not.) We evaluate the circuit in a depth first search manner. At
each point in time we maintain a path in the circuit from the output to an
input which has the following properties. Whenever the path goes to the
left we require nothing extra, while when it turns right we require that we
have marked the value of the left input to that node. Also we keep track
of what kind of operation we have at each node of the path. We start with
the path always going to the left, and it is now easy to see that if we always
move on to the next input to the right it is easy to update the tree. This might
best be seen by an example. Suppose our path at one point is given by
Figure 13; the active path consists of the shaded nodes. Assuming that x_1 = 0,
a possible path at the next time-step is given by Figure 14.

[Figure 13: The path at one point in time]

[Figure 14: The path at next point in time]

The path is of length O(S(n)) and thus can be represented in this space. To update
the path we need to be able to find out what the circuit looks like locally,
but this can be done in space O(S(n)) by the uniformity condition. Thus,
we have completed the proof.
Using S(n) = log n we get the following immediate corollary:
Corollary 11.16 NC^1 ⊆ L.
With this close connection between L and NC the following theorem is
not surprising:
Theorem 11.17 If A is P-complete then

P = NC ⇔ A ∈ NC.

Proof: The proof is more or less the same as the proofs of other theorems
of this type, but let us give it anyway. If P = NC then clearly A ∈ NC.
On the other hand, if A ∈ NC then we have to construct NC-circuits for any
language in P. Given any B ∈ P, we know by the definition of P-completeness
that there is a function f computable in L such that

x ∈ B ⇔ f(x) ∈ A.

However, we know by Corollary 11.14 that f can also be computed in NC^2.
Combining this circuit with the NC-circuit for A gives an NC-circuit for
B.
As a final comment, let us note that for one of the most famous problems
that seem hard to do in parallel, namely integer GCDs, it is not even known
whether the problem is P-complete.
12 Relativized computation
As a tool for understanding computation, one particular way of augmenting
the power of a computation has been studied extensively. For definiteness,
assume that we use the Turing machine model of computation. Let A be a
fixed set and give the machine an extra tape, called the query tape. On this
tape the machine can write a string x and then enter a special state called
the query state. In one time-step the query tape now changes content: the
new value will be 1 if x ∈ A and 0 otherwise. Thus the machine is allowed to
ask questions about the set A and very inexpensively obtain correct answers.
The set A, which is called the oracle set, should be thought of as a difficult
set, since otherwise the machine could have answered the questions itself at
only a slightly higher cost. The computation is said to take place relative
to the oracle A (hence the title relativized computation). A Turing
machine M with an oracle A is usually denoted M^A to avoid confusion.
Now it is natural to define P^A as the set of languages that can be recognized
in polynomial time by Turing machines with oracle A. In a similar
way all the other complexity classes can be defined. One word of caution:
we will count the part of the query tape used as part of the work-tape
of the machine, and hence this should be bounded when we are looking at
space bounded classes. This definition is not standard when dealing with L
and NL, but we will not consider those classes here. Instead we will only
consider

P^A, NP^A, BPP^A and PSPACE^A.
The reason this concept is interesting is that almost all known proofs
remain true if we allow all machines involved in the proof to have access
to the same oracle. In particular, this is the case for all proofs given in these
notes up to this point. Let us state some theorems that follow (the reader is
encouraged to go back and check the proofs).
Theorem 12.1 For all oracles A,

P^A ⊆ NP^A ⊆ PSPACE^A.

Theorem 12.2 For all oracles A,

P^A ⊆ BPP^A ⊆ PSPACE^A.
The idea is that if P ⊂ NP (i.e. the strict inclusion) had an "easy" proof,
then P^A ⊂ NP^A would be true for all oracles A. However
this is not the case:
Theorem 12.3 If A is a PSPACE-complete set then

P^A = NP^A = BPP^A = PSPACE^A = PSPACE.
Proof: It is sufficient to prove that PSPACE ⊆ P^A and that PSPACE^A ⊆
PSPACE.

For the first part, let B be any language in PSPACE. Since A is PSPACE-complete
we have B ≤_p A, i.e. there is a polynomial time computable function
f such that x ∈ B ⇔ f(x) ∈ A. But this makes B easy to recognize
for a machine with oracle A: on input x it just computes f(x), writes this
on the oracle tape, reads the answer from the oracle and outputs this as its
own answer. Thus B ∈ P^A and we conclude that PSPACE ⊆ P^A.
For the second part, suppose we are given a machine M^A that recognizes
some language in PSPACE^A. We have to convert this into an ordinary
PSPACE-machine which recognizes the same language. Essentially we have
to get rid of A, but since A is in PSPACE this is not too difficult. Build a
subroutine S which takes an input x and outputs 1 if x ∈ A and 0 otherwise.
This subroutine can be made to run in polynomial space. Now modify M^A
so that instead of entering the query-state it runs S. By definition the
result is the same, and it is easy to see that this modified machine also runs
in polynomial space.
Theorem 12.3 rules out the possibility of an easy proof that P ≠ NP.
This might raise in a more serious way (at least so it seems) the possibility
that P = NP. However, oracles will not support this either:

Theorem 12.4 There is an oracle B such that P^B ≠ NP^B.
Proof: The oracle B will not be as natural as the oracle A given above, and
we will construct it piece by piece. Together with B we will also define a
language L(B) which for all B will be in NP^B, but we will cleverly construct
B such that L(B) is not in P^B.
Definition 12.5 Let L(B) be a language which only contains strings consisting
solely of 1's (such a language is called a unary language). The string
of n 1's is in L(B) if and only if there is at least one string x of length n
such that x ∈ B.
First observe that for any oracle B, L(B) is in NP^B. Formally, L(B) is
recognized by the following algorithm:
1. If there is a '0' in the input, reject and stop.

2. Nondeterministically write down a query to the oracle of the same
length as the input. If the oracle answers 1, accept; otherwise reject.

The verification that this algorithm is correct is left to the reader.
Next we have to define B such that L(B) is not in P^B. Let M_i^B
be an enumeration of all oracle machines that run in polynomial time. This
is a slightly subtle point, since whether an oracle Turing machine runs in
polynomial time depends on the oracle, and we have not yet decided what
the oracle should be. This is no real problem, and we get around it as follows:
Assume that M_i^B is an enumeration of all Turing machines with the
property that each machine appears an infinite number of times. Equip M_i^B
with a stop-watch such that if it has not halted in i|x|^i steps on input x, it
automatically halts and outputs 1. Now every set recognized by a polynomial
time oracle machine is recognized by some M_i^B (we need to repeat each machine
infinitely many times since we do not know for which i it is true that it runs
in time in^i). We will now go through an infinite number of stages. In stage
i we determine a little bit more of the oracle B to make sure that M_i^B does
not recognize L(B). Call a string undetermined if we have not yet decided
whether it will be in B.
n_0 = 1
for i = 1 to ∞ do
    Make n_i the smallest number bigger than n_{i−1} such that 2^{n_i} > in_i^i
    and such that no string of length n_i has been determined.
    Run M_i^B on input 1^{n_i}. Whenever the machine asks about an
    undetermined string, fix that string not to be in B.
    If M_i^B accepts the input then
        Make sure that no string of length n_i is in the oracle set.
    else
        Put one undetermined string of length n_i into the oracle set.
    endif
next i
Fix all undetermined strings not to be in B.
For the constructed B, M_i^B does not accept L(B), since it makes
an error on 1^{n_i}. Hence we need only check that the construction is not
contradictory. The only nonobvious point is that, when needed, there exists
an undetermined string of length n_i. However, M_i^B on input 1^{n_i} only
runs for time in_i^i and hence it can only ask this many questions. Thus only
this many new strings can be determined during stage i, and since there were
no determined strings of length n_i when stage i started and 2^{n_i} > in_i^i, there
is an undetermined string that can be put into B.
It turns out that all the other open inclusion questions can also be
relativized in both directions. Let us next take NP versus PSPACE.

Theorem 12.6 There is an oracle C such that NP^C ≠ PSPACE^C.
Proof: This proof will very much follow the same lines as the last proof.
Let us start by defining the language.

Definition 12.7 Let L′(C) be a unary language such that 1^n ∈ L′(C) iff
there is an odd number of strings of length n in C.

First observe that for any oracle C, L′(C) is in PSPACE^C. The algorithm
just asks all queries of length n and keeps a counter to compute the
parity of the number of strings in the oracle. We will now construct C so as
to make sure L′(C) is not in NP^C.
Using the same argument as in the last proof, there is an enumeration
N_1^C, N_2^C, ... of all polynomial time nondeterministic oracle machines, where
N_i^C runs in time at most in^i. We now construct C in stages:
n_0 = 1
for i = 1 to ∞ do
    Make n_i the smallest number bigger than n_{i−1} such that 2^{n_i} > in_i^i
    and such that no string of length n_i has been determined.
    Consider N_i^C on input 1^{n_i}. If there is some setting of the
    undetermined strings that makes N_i^C accept then
        Make such a setting by fixing at most in_i^i strings, and fix the
        remaining strings of length n_i so that an even number of strings
        of length n_i are in C.
    else
        Fix strings so that an odd number of strings of length n_i are in C.
    endif
next i
Fix all undetermined strings not to be in C.
Again, by construction, L′(C) is not in NP^C for this oracle. The construction
can be seen to be correct by more or less the same reasoning as for
the last construction. Please observe that if N_i^C accepts an input, then it is
sufficient to fix the answers to the queries along one accepting computation
path, and hence it is sufficient to fix in_i^i strings in the first case.
Next we have:
Theorem 12.8 There is an oracle D such that BPP^D ⊈ NP^D.
Proof: We proceed as usual.
Definition 12.9 Let L_maj(D) be a unary language such that 1^n ∈ L_maj(D)
iff a majority of the strings of length n are in D.

This language is not always in BPP^D. However, if we make sure that
for each n, at least 60% or at most 40% of the strings are in the oracle set,
then a simple sampling algorithm will work. This extra condition means
that we have to be slightly careful in the oracle construction, but there is
no real problem. We again give an algorithm to determine the oracle:
n_0 = 1
for i = 1 to ∞ do
    Make n_i the smallest number bigger than n_{i−1} such that
    2^{n_i} > 10·in_i^i and such that no string of length n_i has been determined.
    Fix all undetermined strings of length less than n_i not to be in D.
    Consider N_i^D on input 1^{n_i}. If there is some setting of the
    undetermined strings that makes N_i^D accept then
        Make such a setting by fixing at most in_i^i strings, and fix the
        remaining strings of length n_i not to be in D.
    else
        Put all undetermined strings of length n_i into D.
    endif
next i
The verification that this construction is correct is similar to the previous
verifications. The reason to fix all undetermined strings of length less than
n_i not to be in the oracle is to make sure that also for n's which are not chosen to
be one of the n_i's, the number of strings of length n in the oracle is not
close to half of all strings of length n. The condition 2^{n_i} > 10·in_i^i makes
sure that this is true for all n with n_i ≤ n < n_{i+1}.
Our last oracle construction will be:

Theorem 12.10 There is an oracle E such that NP^E ⊈ BPP^E.
Proof: We will use the same language as in the proof that there
is an oracle B such that NP^B ≠ P^B. Remember that L(E) is a unary
language such that 1^n ∈ L(E) iff there is some string of length n in E.
We now construct E to make sure L(E) is not in BPP^E. This time, let M_i^E
be an enumeration of probabilistic Turing machines. Here there is a slight
problem in that M_i^E might not define a correct machine, since the probability
of acceptance might not be bounded away from 1/2 for some inputs. However,
this is only to our advantage, since it means that such a machine does not accept
any BPP-language, and we do not have to worry that it might accept L(E). We
now construct E in stages as follows:
n_0 = 1
for i = 1 to ∞ do
    Make n_i the smallest number bigger than n_{i−1} such that
    2^{n_i} > 10·in_i^i and such that no string of length n_i has been
    determined.
    Run M_i^E on input 1^{n_i}. Whenever the machine asks about a string
    which is not determined, pretend that this string is not in E. Let p
    be the probability that M_i^E accepts under these conditions.
    If p ≥ 1/2 then
        Fix all strings M_i^E could possibly ask about not to be in E. Also
        fix all other strings of length n_i not to be in E.
    else
        Find one string of length n_i such that the probability that this
        string is asked about by M_i^E is at most 1/10 and put it into E.
        Fix all other strings M_i^E might possibly look at not to be in E.
    endif
next i
Fix all undetermined strings not to be in E.
Here there are some details to check. If p ≥ 1/2 then p is actually
the correct probability of acceptance, since we eventually fix all the strings
asked about not to be in E. In this case 1^{n_i} ∉ L(E), while the probability that M_i^E
accepts 1^{n_i} is at least 1/2, and thus M_i^E does not recognize L(E) in the BPP
sense. On the other hand, if p < 1/2 then the final oracle does not agree with
the simulation. However, since the probability of discovering the difference
is bounded by 1/10, the acceptance probability remains below 0.6. Since in
this case 1^{n_i} ∈ L(E), also in this case M_i^E fails to recognize L(E).
We also need to check that there is a suitable string which is asked about with
probability at most 1/10. Since the running time of M_i^E on input 1^{n_i} is
bounded by in_i^i, it does not ask more than this number of questions. If
PR(x) is the probability that string x is asked, then Σ_{|x|=n_i} PR(x) ≤ in_i^i,
and since 2^{n_i} > 10·in_i^i there is some x with PR(x) < 1/10. The proof is
complete.
We have now established that all the unknown inclusion properties of
our main complexity classes can be relativized in different directions. The
only information this gives is that the true inclusions cannot be proved
with methods that relativize. In principle, methods that do not look at the
computation in a very detailed way will relativize; in particular this is the
case when the computation is treated as a black box which just takes an input
and then produces an output (after a certain number of steps). Thus, the main
lesson to learn from this section is that to establish the true relations between
our main complexity classes, we have to look at computation in a very
detailed way.

There are a few results in complexity theory which do not relativize.
One of them (IP = PSPACE) is given in Chapter 13.
13 Interactive proofs
One motivation for NP is to capture the notion of “efficient provability”.
If A ∈ NP and x ∈ A then there is a short proof of this fact (the non-
deterministic choices of the algorithm which recognizes A) which can be
verified efficiently. By the definition of NP all proofs are correct and an all
powerful prover can always convince a polynomial time bounded verifier of a
correct NP-statement. As we did with regards to ordinary computation we
can introduce randomness and decrease the requirements. A proof will be a
discussion (interaction) between an all powerful prover and a probabilistic
polynomial time verifier. Before we make a formal definition let us give an
example.
Example 13.1 Given two graphs G_1 and G_2, both on n vertices. G_1 and
G_2 are said to be isomorphic iff there is a permutation π of the vertices such
that (i, j) is an edge in G_1 iff (π(i), π(j)) is an edge in G_2. In other words,
there is a relabeling of the vertices that makes the two graphs identical. This
problem is in NP, since one can just guess the permutation. On the other
hand it is not known to be in P (or co-NP), nor is it known to be NP-complete.
Now consider the following protocol for proving that two graphs are not
isomorphic.
For m = 1 to k:
    The verifier chooses a random i (1 or 2) and sends a graph H which is
    a random permutation of G_i to the prover.
    The prover responds with a guess j.
    The verifier rejects and halts if i ≠ j.
next m
The verifier accepts.
In other words, the prover tries to guess which graph the verifier started
with, and the verifier accepts if he always guesses correctly. Now suppose
that G_1 and G_2 are not isomorphic. Then H is isomorphic only to G_i, and
the all powerful prover can tell the value of i and always answer correctly.
On the other hand, if G_1 and G_2 are isomorphic then, independently of the
value of i, the graph H is a random graph isomorphic to both G_1 and G_2.
Thus there is no way the prover can distinguish the two cases, and if
he tries to answer he will fail each time with probability 1/2. Thus the
probability that he can incorrectly make the verifier accept is 2^{−k}, which is
very small if k is large. Thus, for all practical purposes, if k = 100 and the
prover always answers correctly, the graphs are non-isomorphic.
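A small Python simulation of the protocol may clarify the rules of the game.
The prover is an arbitrary strategy (for non-isomorphic graphs an all powerful
prover could answer by testing which graph H is isomorphic to); the graph
representation and function names are our own illustration.

    import random

    def permute(G, n):
        # A uniformly random relabeling of the edge set G on vertices 0..n-1.
        pi = list(range(n))
        random.shuffle(pi)
        return {frozenset((pi[u], pi[v])) for (u, v) in G}

    def protocol(G1, G2, n, prover, k=100):
        for _ in range(k):
            i = random.choice((1, 2))
            H = permute(G1 if i == 1 else G2, n)
            if prover(H) != i:    # wrong guess: verifier rejects and halts
                return False
        return True               # all k guesses correct: verifier accepts

Facing isomorphic graphs, a cheating prover can do no better than guessing,
so protocol then returns True with probability only 2^{−k}.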
A discussion (or interaction) of the type described in the example will
be called an interactive proof. Let us formalize the properties wanted.
Definition 13.2 A language A admits an interactive proof iff there is an
interaction between a probabilistic polynomial time verifier V and an all
powerful prover P such that:
1. (Completeness) If x ∈ A then the probability (over V ’s random choices)
that V accepts is at least 2/3.
2. (Soundness) If x ∉ A then, no matter what the prover does, the probability
(over V's random choices) that V accepts is at most 1/3.
Definition 13.3 The complexity class IP is the set of languages that admit
an interactive proof.
The number of exchanges of messages might depend on the length of the
input, but since we want the entire process to run in polynomial time, we limit
it to a number polynomial in the length of the input.
Interactive proofs were defined by Goldwasser, Micali and Rackoff in
1985. A different definition, later proved to give the same class of
languages, was given independently by Babai around the same time. Interactive
proofs attracted a lot of attention at the end of the 1980's, and we will
only touch on the highlights of this theory. Let us first state an analogue
of Theorem 9.5.
Theorem 13.4 If A ∈ IP then there is an interaction between a probabilistic
polynomial time verifier V and an all powerful prover P such that:

1. If x ∈ A then the probability (over V's random choices) that V accepts
is at least 1 − 2^{−|x|}.

2. If x ∉ A then, no matter what the prover does, the probability (over V's
random choices) that V accepts is at most 2^{−|x|}.
Proof: (Outline) The proof is very similar to the proof of Theorem 9.5.
We just run the protocol many times and make a majority decision in
the end. We leave the details to the reader.
A far less obvious fact is that one can in fact obtain perfect completeness
(i.e. when x ∈ A the probability that V accepts is 1). Proving this would
take us too far afield and we omit it.
For the first couple of years, one of the main drawbacks of the theory of
interactive proofs was the small number of languages outside NP that
were known to admit interactive proofs. This changed dramatically in
December 1989, when work of Nisan, Fortnow, Karloff, Lund and finally
Shamir led to the following remarkable theorem:
Theorem 13.5 IP = PSPACE.
Proof: (Outline) The fact that IP ⊆ PSPACE was established quite early
in the theory of interactive proofs. A formal proof is slightly cumbersome
(but not really hard), and hence let us only give an outline. Suppose A ∈ IP
and the interaction that recognizes A contains k pairs of messages. We
denote the ith prover message by p_i and the ith verifier message by v_i, and
assume that the prover sends the first message in each round. Now let α be
any partial conversation consisting of the first s messages, for some s, and
let Pr(x, α) be the probability that V accepts given that the initial
conversation is α, that P plays optimally in the future, and that V follows
his protocol. Our goal is to compute Pr(x, e), where e is the empty string,
since this number is at least 2/3 when x ∈ A and less than 1/3 otherwise.
Now if the next message after α is by the verifier then

Pr(x, α) = E[Pr(x, αv_i)],

where E denotes expected value over the verifier message v_i. On the other
hand, if the next message is by the prover then

Pr(x, α) = max_{p_i} Pr(x, αp_i),

where the maximum is taken over all messages p_i. Finally, when α is a full
conversation, Pr(x, α) is 1 iff the verifier would have accepted after the
conversation α and 0 otherwise. By assumption this can be computed in
polynomial time. Using these equations it is easy to give an algorithm that
proceeds in a depth-first-search fashion and evaluates Pr(x, e) in polynomial
space.
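To make the recursion concrete, here is a minimal Python sketch of this
depth-first evaluation (ours, not from the notes). The callbacks
prover_msgs, verifier_msgs and accepts are hypothetical stand-ins for the
fixed protocol; enumerating all messages takes exponential time, but the
space used is only the recursion depth times the size of one conversation.

def pr_accept(x, alpha, rounds, prover_msgs, verifier_msgs, accepts):
    # Pr(x, alpha): the probability that V accepts, given the partial
    # conversation alpha, an optimal prover and an honest verifier.
    # Depth-first, so space is proportional to the recursion depth only.
    if len(alpha) == 2 * rounds:                  # full conversation
        return 1.0 if accepts(x, alpha) else 0.0
    if len(alpha) % 2 == 0:                       # prover speaks: maximize
        return max(pr_accept(x, alpha + [m], rounds,
                             prover_msgs, verifier_msgs, accepts)
                   for m in prover_msgs(x, alpha))
    msgs = verifier_msgs(x, alpha)                # verifier speaks: average
    return sum(pr_accept(x, alpha + [m], rounds,
                         prover_msgs, verifier_msgs, accepts)
               for m in msgs) / len(msgs)

# Pr(x, e) is pr_accept(x, [], k, ...); accept x iff it is at least 2/3.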
This inclusion was no surprise since PSPACE is a big complexity class.
It was the reverse inclusion that was the big surprise.
To prove that PSPACE ⊆ IP we need “only” give an interactive
proof which recognizes TQBF, which was proved PSPACE-complete in The-
orem 7.17. We only give an outline of the argument.
In fact we will use that determining the truth of the special type of
quantified Boolean formulas constructed in the proof of Theorem 7.17 is
PSPACE-complete. Let us recall part of this proof. We wanted to construct
a formula GET(C_1, C_2, k) saying that the Turing machine could
get from configuration C_1 to configuration C_2 in 2^k steps. This formula was
constructed recursively using:

GET(C_1, C_2, k, x) = ∃C ∀(A, B) ∈ {(C_1, C), (C, C_2)} GET(A, B, k − 1, x).

Now encode the ∀ quantifier as a Boolean variable x_1 and rewrite the formula
as follows:

GET(C_1, C_2, k, x) = ∃C ∀x_1 ∃(A, B)
    (x_1 → ((A = C_1) ∧ (B = C))) ∧ (x̄_1 → ((A = C) ∧ (B = C_2))) ∧ GET(A, B, k − 1, x).

Now assume that each configuration consists of n Boolean variables and that
initially k = n. In reality both are polynomial in n, but this is of no
importance. It is not difficult to write (x_1 → ((A = C_1) ∧ (B = C))) ∧ (x̄_1 →
((A = C) ∧ (B = C_2))) as a CNF-formula with O(n) clauses, each of constant
size. Furthermore, note that the variables describing C_1
and C_2 do not appear in GET(A, B, k − 1). When we iterate the above
construction it will be true that no variable in any quantifier is used inside
more than 3 other quantifiers. Let us also note that GET(Y, Z, 0) can be
expressed by a CNF-formula with O(n) clauses of constant size. To summarize
the discussion, the formula has the following properties.
• It has 3n quantifiers, which appear in blocks of the form ∃∀∃, where the
two ∃ quantifiers quantify over n and 2n variables respectively
and the ∀ quantifier quantifies over one variable.
• Each variable is used only inside at most one following block of quantifiers.
• All formulas between quantifiers, and after the last quantifier, are CNF-
formulas with O(n) clauses of constant size.
Now take this formula and replace every ∃ by ∑ and every ∀ by ∏. Here the
sums and products extend over all variables that were originally in the scope
of the quantifier. Also replace ∧ by × and ∨ by +. Finally, for a variable x,
replace x̄ by 1 − x. With these replacements the formula is turned into
an expression which evaluates to an integer. It is not difficult to see that
this integer is 0 iff the original formula was false (prove this by induction).
We will show how the prover can convince the verifier, with high probability,
that this integer I is not 0.
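To make the translation concrete, the following brute-force Python sketch
(ours, with an assumed encoding of formulas) performs exactly these
replacements: ∃ becomes a sum over 0 and 1, ∀ a product, ∨ becomes + inside
clauses, ∧ becomes × between clauses, and a negated literal x̄ becomes 1 − x.
Run on the formula of Example 13.8 below it returns 20.

def arithmetize(prefix, clauses, assignment=()):
    # prefix: a string of 'E' (exists -> sum) and 'A' (forall -> product).
    # clauses: CNF as a list of clauses; a literal is (var_index, negated).
    if len(assignment) == len(prefix):
        value = 1
        for clause in clauses:                     # AND   -> product
            s = 0
            for (var, neg) in clause:              # OR    -> sum
                xv = assignment[var]
                s += (1 - xv) if neg else xv       # x-bar -> 1 - x
            value *= s
        return value
    q = prefix[len(assignment)]
    sub = [arithmetize(prefix, clauses, assignment + (b,)) for b in (0, 1)]
    return sub[0] + sub[1] if q == 'E' else sub[0] * sub[1]

# The formula of Example 13.8 below:
# ∃x1 ∀x2 ∃x3 ∀x4 (x1 ∨ x2 ∨ x3) ∧ (x̄1 ∨ x̄4)
print(arithmetize('EAEA', [[(0, False), (1, False), (2, False)],
                           [(0, True), (3, True)]]))   # -> 20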
First observe that I is bounded by 2^{O(n2^n)}. This is true since the value
of the final CNF-formula is at most c^n and each ∑ only multiplies the value
by 2^n, while each ∏ only squares the value (remember that there is only one
variable in each ∏). The following lemma follows from the prime number
theorem (the reader is asked to take it on faith).
Lemma 13.6 For c < 1 and x > X_c, the product of all primes less than x
is at least e^{cx}, where e ≈ 2.718 is the base of the natural logarithm.
This lemma implies that there is some prime p, n^4 ≤ p ≤ O(2^n), such
that I ≢ 0 modulo p. To see this, observe that if I > 0 and is divisible
by a set of primes, then it is at least the product of those primes. The prover
starts by giving this p together with I (mod p) (which is not 0).
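As a toy illustration (ours), the following Python fragment finds such a p
by trial division; in the real protocol p lies between n^4 and O(2^n) and
one would use a proper primality test.

def find_prime_witness(I, lower):
    # Find the smallest prime p >= lower with I % p != 0.  Since a
    # nonzero I has at most log2(I) prime factors, such a p exists well
    # below the product-of-primes bound of Lemma 13.6.
    def is_prime(m):
        if m < 2:
            return False
        d = 2
        while d * d <= m:
            if m % d == 0:
                return False
            d += 1
        return True
    p = lower
    while not (is_prime(p) and I % p != 0):
        p += 1
    return p, I % p

print(find_prime_witness(20, 7))   # (7, 6): the prime used in Example 13.8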
Remark 13.7 In fact if one is more careful one can make I = 1 when the
formula is true. This implies that one can use a small prime. This will
make the proof slightly more efficient, but this is of no major concern for
the moment.
Now consider the outermost quantified variable. Let us call it x_1 and
suppose it is part of an ∃ quantifier (i.e. we are now summing over its two
values). Keep this variable free and evaluate the entire expression mod p
with its sums and products. Naturally the result is a polynomial P(x_1),
and by the conditions on the formula it is of degree O(n). Here we need
both that the intermediate pieces of the formula are simple CNF-formulas and
that the usage of each variable is very limited. The prover now gives this
formal polynomial (mod p) to V. This can be done since there are O(n)
coefficients, each of which can be specified with O(n) bits. The verifier verifies
that P(0) + P(1) ≡ I (mod p), and responds with a random integer n_1,
chosen randomly among 1, 2, ..., p − 1. The task for the prover is
now to prove that P(n_1) is the value of the algebraic expression when n_1 is
substituted for x_1. The resulting algebraic expression has one quantified
variable less, and we can now attack the next variable. Once all the variables
have been eliminated, the verifier can himself evaluate the remaining
polynomial, and if it equals the value claimed by the prover he accepts and
otherwise rejects.
Let us sketch why this protocol is correct. When the formula is true
there are really no complications, since the prover is claiming correct
statements all the time and thus the verifier will accept with probability 1. Note
that there is really no difference between the ∀-variables and the ∃-variables;
we only need the assumption on the structure of the formula to make the
degree of the polynomial P small.

Suppose on the other hand that the formula is false. In particular I = 0,
so the first value claimed by the prover for I (mod p) is incorrect, and
hence the first polynomial P is also not correct (since it takes an incorrect
value for either 0 or 1). Suppose the true polynomial is Q. Let us say that
n_1 is lucky for the prover if P(n_1) ≡ Q(n_1) (mod p). If the prover is lucky
once, then from that point on he is claiming correct statements and thus he
will be able to convince the verifier. On the other hand, if he is never lucky
then he will be forced to continue lying and the verifier will expose him in
the end. Since P − Q is a nonzero polynomial of degree O(n) it has at most
O(n) zeroes. This implies that the probability that the prover is lucky at a
single point is O(n/p) ≤ O(n^{-3}). Since there are only O(n^2) variables, the
probability that he is ever lucky is O(n^{-1}). Thus with probability 1 − O(n^{-1})
the verifier will reject, and the protocol is correct.
To give a bit of perspective on this proof, let us give an example to
show how it works.
Example 13.8 For simplicity let us work with a formula on normal TQBF-
CNF form and in particular, consider

∃x_1 ∀x_2 ∃x_3 ∀x_4 (x_1 ∨ x_2 ∨ x_3) ∧ (x̄_1 ∨ x̄_4).

This formula is true since if we put x_1 = 0 and x_3 = 1 both clauses are
satisfied. It does not matter what happens with the other variables. The
formula is turned into the following arithmetical expression:

∑_{x_1=0}^{1} ∏_{x_2=0}^{1} ∑_{x_3=0}^{1} ∏_{x_4=0}^{1} (x_1 + x_2 + x_3)(2 − x_1 − x_4).

This is just an integer (in fact 20). A proof would go like the following.
1. The prover chooses the prime 7 (in reality it should be larger, but
we are only trying to illustrate the procedure). He claims that the
expression is 6 modulo 7 and in fact that

∏_{x_2=0}^{1} ∑_{x_3=0}^{1} ∏_{x_4=0}^{1} (x_1 + x_2 + x_3)(2 − x_1 − x_4)

as a function of x_1 is

P_1(x_1) = (2x_1^2 + 2x_1 + 1)(2x_1^2 + 6x_1 + 5)(1 − x_1)^2 (2 − x_1)^2.

(Normally the prover would give these polynomials in a dense
representation, but the factored form is more convenient for hand
calculation.)
2. The verifier checks that P_1(0) + P_1(1) ≡ 6 modulo 7 (in the future we
reduce everything modulo 7 without saying so). Indeed, P_1(0) = 20 ≡ 6
while P_1(1) = 0. He now chooses a random value for x_1 (in our case
x_1 = 3) and wants to be convinced that

∏_{x_2=0}^{1} ∑_{x_3=0}^{1} ∏_{x_4=0}^{1} (3 + x_2 + x_3)(6 − x_4) = P_1(3) = 25 × 41 × 4 × 1 ≡ 5.
3. The prover now claims that

∑_{x_3=0}^{1} ∏_{x_4=0}^{1} (3 + x_2 + x_3)(6 − x_4)

as a function of x_2 is P_2(x_2) = 1 + 4x_2^2.
4. The verifier checks that P_2(0)P_2(1) ≡ 5, randomly chooses x_2 = 5
and asks to be convinced that

∑_{x_3=0}^{1} ∏_{x_4=0}^{1} (1 + x_3)(6 − x_4) ≡ P_2(5) ≡ 3.
5. The prover claims that

∏_{x_4=0}^{1} (1 + x_3)(6 − x_4)

as a function of x_3 is P_3(x_3) = 2 + 4x_3 + 2x_3^2.
6. The verifier checks that P_3(0) + P_3(1) ≡ 3, then randomly chooses
x_3 = 2 and wants to be convinced that

∏_{x_4=0}^{1} 3(6 − x_4) ≡ P_3(2) ≡ 4.

This he can do by himself, and he accepts the input since 18 × 15 is
indeed 4 modulo 7.
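All the numbers above are easy to check mechanically. The following
Python fragment (ours) evaluates the partially substituted expression
directly; since the challenges 3, 5, 2 are ordinary integers, working over
the integers and reducing modulo 7 at the end gives the same answers.

p = 7
f = lambda x1, x2, x3, x4: (x1 + x2 + x3) * (2 - x1 - x4)

def tail(x1=None, x2=None, x3=None):
    # Evaluate the expression with the given variables fixed and the
    # remaining quantifiers applied: sum over x1, product over x2,
    # sum over x3, product over x4.
    def prod4(a, b, c): return f(a, b, c, 0) * f(a, b, c, 1)
    def sum3(a, b):     return prod4(a, b, 0) + prod4(a, b, 1)
    def prod2(a):       return sum3(a, 0) * sum3(a, 1)
    if x3 is not None:  return prod4(x1, x2, x3) % p
    if x2 is not None:  return sum3(x1, x2) % p
    if x1 is not None:  return prod2(x1) % p
    return (prod2(0) + prod2(1)) % p

print(tail())          # 20 mod 7 = 6, the prover's first claim
print(tail(3))         # P_1(3) = 5
print(tail(3, 5))      # P_2(5) = 3
print(tail(3, 5, 2))   # P_3(2) = 4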
As mentioned before, the above proof does not relativize (the inclusion
IP ⊆ PSPACE does relativize, but not the second part). It is not difficult to
construct an oracle A such that IP^A ⊊ PSPACE^A. The reason the
proof does not relativize is that if we allow oracle questions, then the
condition “C_1 is the configuration that follows C_2” cannot be described by a low-degree
polynomial.
This proof, which does not relativize, gives some hope of attacking the NP
vs P question. However, it is still true that no strict inclusion that does not
relativize has been proved for any complexity class that includes NC^1.
130

Contents
1 Preface 2 Recursive Functions 2.1 Primitive Recursive Functions . . . . . . 2.2 Partial recursive functions . . . . . . . . 2.3 Turing Machines . . . . . . . . . . . . . 2.4 Church’s thesis . . . . . . . . . . . . . . 2.5 Functions, sets and languages . . . . . . 2.6 Recursively enumerable sets . . . . . . . 2.7 Some facts about recursively enumerable 2.8 G¨del’s incompleteness theorem . . . . . o 2.9 Exercises . . . . . . . . . . . . . . . . . 2.10 Answers to exercises . . . . . . . . . . . 4 5 6 10 11 15 16 16 19 26 27 28

. . . . . . . . . . . . . . . . . . sets . . . . . . . . .

. . . . . . . . . .

. . . . . . . . . .

. . . . . . . . . .

. . . . . . . . . .

. . . . . . . . . .

. . . . . . . . . .

. . . . . . . . . .

. . . . . . . . . .

. . . . . . . . . .

3 Efficient computation, hierarchy theorems. 32 3.1 Basic Definitions . . . . . . . . . . . . . . . . . . . . . . . . . 32 3.2 Hierarchy theorems . . . . . . . . . . . . . . . . . . . . . . . . 33 4 The complexity classes L, P and P SP ACE. 39 4.1 Is the definition of P model dependent? . . . . . . . . . . . . 40 4.2 Examples of members in the complexity classes. . . . . . . . . 48 5 Nondeterministic computation 56 5.1 Nondeterministic Turing machines . . . . . . . . . . . . . . . 56 6 Relations among complexity classes 64 6.1 Nondeterministic space vs. deterministic time . . . . . . . . . 64 6.2 Nondeterministic time vs. deterministic space . . . . . . . . . 65 6.3 Deterministic space vs. nondeterministic space . . . . . . . . 66 7 Complete problems 7.1 NP-complete problems . . . 7.2 PSPACE-complete problems 7.3 P-complete problems . . . . 7.4 NL-complete problems . . . 69 69 78 82 85 86

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

8 Constructing more complexity-classes

2

9 Probabilistic computation 89 9.1 Relations to other complexity classes . . . . . . . . . . . . . . 94 10 Pseudorandom number generators 95

11 Parallel computation 106 11.1 The circuit model of computation . . . . . . . . . . . . . . . . 106 11.2 NC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108 11.3 Parallel time vs sequential space . . . . . . . . . . . . . . . . 112 12 Relativized computation 13 Interactive proofs 116 123

3

let me just note that there are probably many errors and inaccuracies remaining and for those I must take full responsibility. but also interested undergraduates have followed the courses. 4 . Wojtek Janczewski. Per Andersson. Andreas Jakobik. Anyone getting stuck in these parts of the notes should not be disappointed. The main idea of the course has been to give the broad picture of modern complexity theory. The set of notes does not contain the amount of detail wanted from a textbook. The students who have taken the courses o together with other people have also helped me correct many errors. Pelle Grape. Viggo Kann. Sincere thanks to Jerker Andersson. Probably in many places more details would be helpful and I would be grateful for hints on where this is the case. I have taken the liberty of skipping many boring details and tried to emphasize the ideas involved in the proofs. Most of the notes are at a fairly introductory level but some of the section contain more advanced material. In particular I am grateful to Jens Lagergren and Ingrid Lindstr¨m. Christer Berg. Mikael Goldmann. give some examples of each complexity class and to prove the most standard relations. Christer Carlsson. Joachim Hollman. Lars Arvestad. and Peter Rosengren.1 Preface The present set of notes have grown out of a set of courses I have given at the Royal Institute of Technology. Jan Frelin. This is in particular true for the section on pseudorandom number generators and the proof that IP = P SP ACE. These notes have benefited from feedback from colleagues who have taught courses based on this material. aa a Finally. J¨rgen Backo elin. The courses have been given at an introductory graduate level. Mats N¨slund. To define the basic complexity classes. Kai-Mikael J¨¨-Aro.

3) we code it as 011. This fact will be used constantly throughout these notes. 2 . 2). It is easy to see that the mapping from graphs to numbers is easy to compute and easy to invert and thus we can use this representation of graphs as well as any other. b = 2 and so on. Several independent attempts to answer this question were made in the mid-1930’s.32) which o was published in 1931. This might seem restrictive. but in fact it is not since we can code almost any type of object as a natural number. For instance suppose that we are looking at graphs with 3 nodes. We will be considering functions from natural numbers (N= {0. 3) and (2. After this detour let us return to the question of which functions are mechanically computable.}) to natural numbers. One possible reason that several researchers independently came to consider this question is its close connections to the proof of G¨del’s incompleteness theorem (Theorem 2. 1. Thus a function from words over the English alphabet to graphs can be represented as a function from natural numbers to natural numbers. Mechanically computable functions are often called recursive functions. suppose that we are given a function from words of the English alphabet to graphs. In a similar way one can see that most objects that have any reasonable formal representation can be represented as natural numbers. As an example.2 Recursive Functions What functions are computable by a computer? One central question in computer science is the basic question: Oddly enough. A graph on n nodes can be thought of as a sequence of n binary symbols where each symbol corresponds to a potential edge 2 and it is 1 iff the edge actually is there. Add a leading 1 and consider the result as a number written in binary notation (our example corresponds to (1011)2 = 11). The reason for this will soon be obvious. and hence the possible edges are (1. this question preceded the invention of the modern computer and thus it was originally phrased: “What functions are mechanically computable?” The word “mechanically” should here be interpreted as “by hand without really thinking”. If the graph only contains the edges (1. let us be precise about what we mean by a function. . 3) and (2. (1. 5 . Then we can think of a word in the English alphabet as a number written in base 27 with a = 1. Before we try to formalize the concept of a computable function. . 3).

Definition 2. . x2 . h. gm (x1 . . x2 )) It will be very cumbersome to follow the notation of the definition of the primitive recursive functions strictly. . . Constants. g2 . . gm are known to be primitive recursive functions. . xn ) = xi for 1 ≤ i ≤ n and any n. n 3. f (1) . x2 . xn ). . . x3 . 4. . The projections. x2 ). . . . . 2. Add(x1 . . . x2 . .1 The following functions are primitive recursive 1. xn ) = h(g1 (x1 . . g2 (x1 . Assume that g. . . i. . πi (x1 . . not 6 . f (x1 .2 Addition is defined as 1 Add(0. xn ) • f (x1 + 1. xn ) = g(x2 .1 Primitive Recursive Functions The name “recursive” comes from the use of recursion.2. m(x) = m for any constant m. Composition. g1 . x3 . xn )) 5.e. . x2 . f (x). x2 ) = π1 (x2 ) 3 Add(x1 + 1. It contains some basic functions and then new primitive recursive functions can be built from previously defined primitive recursive functions either by composition or primitive recursion. . xn ) To get a feeling for this definition let us prove that some common functions are primitive recursive. . . x3 . . x2 . then we can form new primitive recursive functions in the following ways. The primitive recursive functions are also closed under the following two operations. The primitive recursive functions define a large class of computable functions which contains most natural functions. x2 )) = σ(Add(x1 . . . x2 ) = σ(π2 (x1 . . . f (x1 . . . Thus instead of the above. Primitive recursion The function defined by • f (0. Let us give a formal definition. xn ). . . when a function value f (x + 1) is defined in terms of previous values f (0). . The successor function. xn ). xn ) = h(x1 . σ(x) = x + 1. Example 2. .

If we instead would be working with integers the situation would be different 1 7 . x2 ) = Add(x1 . 0) = 1 and g is primitive recursive then so is f since it can be defined by f (x. more transparent (but formally incorrect) version stated below. However. x2 + 1) = Sub1(Sub(x1 .very transparent (but formally correct definition) we will use the equivalent. x2 ) + 1 Example 2. x2 ) = x2 Add(x1 + 1. g(x. y + 1) = M ult(f (x. First define a function on one variable which is basically subtraction by 1.5 If f (x. i) where we let f (x. x2 ) = Add(x2 . we can define a function which takes the same value as subtraction whenever it is positive and otherwise takes the value 0. Sub1(0) = 0 Sub1(x + 1) = x and now we can let Sub(x1 . 0) = x1 Sub(x1 . x2 ) = 0 M ult(x1 + 1. Example 2. y)).6 We can define a miniature version of the signum function by Sg(0) = 0 Sg(x + 1) = 1 This is due to the fact that we have decided to work with natural numbers. y−1 Example 2. y) = i=0 g(x. M ult(x1 . 0) = 1 f (x. Here for convenience we have interchanged the order of the arguments in the definition of the recursion but this can be justified by the composition rule.4 We cannot define subtraction as usual since we require the answer to be nonnegative1 . Add(0. x2 )) Example 2.3 Multiplication can be defined as M ult(0. x2 )). y).

However. it is convenient to identify predicates with functions that take the values 0 and 1. we can conclude that the functions used in the definition are mechanically computable. as we did above. With this convention we define a predicate to be primitive recursive exactly when the corresponding function is primitive recursive. This naturally leads to an efficient way to prove that more functions are primitive recursive. P (x)))) (which in ordinary notation is (P ∗ g + (1 − P ) ∗ h). On the other hand if we use primitive recursion then we can compute f when the first argument is 0 since it then agrees with g which is 8 . Suppose the new function is constructed by composition. n) = 1 if m and n are equal and Eq(m. Each primitive recursive function is defined as a sequence of statements starting with basic functions of the types 1-3 and then using rules 4-5. The simplest functions are the basic functions 1-3 and. n) = Sub(1. Continuing along these lines it is not difficult (but tedious) to prove that most simple functions are primitive recursive. We will argue that primitive recursive functions are mechanically computable by induction over the complexity of the derivation (i. We will call this a derivation of the function.e. n)))) since Sub(n. Equality is here defined as by Eq(m. In general a primitive recursive function f will be obtained using the rules 4 and 5 from functions defined previously. Add(Sg(Sub(n. Since the derivations of these functions are subderivations of the given derivation. Then the function f (x) defined by g(x) if P (x) and h(x) otherwise will be primitive recursive since it can be written as Add(M ult(g(x). arguing informally. Let us now argue that all primitive recursive functions are mechanically computable. Namely. the number of steps in the derivation). m) and Sub(m. a property of pairs of numbers. then we can compute f by first computing the gi and then computing h of the results. n) are both zero iff n = m. P (x)).and this allows us to define equality by Eq(m. n) = 0 otherwise. M ult(h(x). Sg(Sub(m. Equality is not really a function put a predicate of pairs of numbers i. let g and h be primitive recursive functions and let P be a primitive recursive predicate. are easy to compute. Of course this can only be an informal argument since “mechanically computable” is only an intuitive notion. letting the value of the function be 1 exactly when the predicate is true. m)). Sub(1.e.

since once we have found the derivation of fx we can compute it on any input. This finishes the informal argument that all primitive recursive functions are mechanically computable. Although we have seen that most simple functions are primitive recursive there are in fact functions which are mechanically computable but are not primitive recursive. If the coding is reasonable it is mechanically computable to decide. 9 . Now let f1 be the primitive recursive function in one variable which corresponds to the smallest number giving such a legal derivation and then let f2 be the function which corresponds to the second smallest number and so on. We have reached a contradiction and we have thus proved: Theorem 2.computable by induction and then we can see that we can compute f in general by induction over the size of the first argument. By the above discussion V is mechanically computable. Now let V (x) = fx (x) + 1. would not be the first one would like to compute but which certainly is very important from a theoretical point of view. Thus the present argument has nothing to do with computing efficiently. On the other hand if V = fy then it is fy (y). We will give one such function which. Now look at the value of V at the point y. The x’th legal derivation found is the derivation of fx . By the definition of V the value should be fy (y) + 1. Observe that given x it is possible to mechanically find the derivation of fx by the following mechanical but inefficient procedure.7 There are mechanically computable functions which are not primitive recursive. Before we continue. If V was primitive recursive then V = fy for some number y. let us note the following: If we look at the proof in the case of multiplication it shows that multiplication is mechanically computable but it gives an extremely inefficient algorithm. we have to admit. given a number. A derivation of a primitive recursive function is just a finite number of symbols and thus we can code it as a number. On the other hand we claim that V does not agree with any primitive recursive function. whether the number corresponds to a correct derivation of a primitive recursive function in one variable. Start with 0 and check the numbers in increasing order whether they correspond to correct derivations of a function in one variable.

j) we write the number fj (i). At position (i. But 10 . Definition 2. Our first candidate for the class of mechanically computable functions will be a subclass of the partial recursive functions.The method of proof used to prove this theorem is called diagonalization. . The idea is similar to the proof that Cantor used to prove that the real numbers are not denumerable. . i. Definition 2. xn ) is undefined. . The above proof demonstrates something very important. x1 . xn ) is defined for all y < m. If we want to have a characterization of all mechanically computable functions the description cannot be mechanically computable by itself. There is an extra way of forming new functions: 6.2 Partial recursive functions The way around the problem mentioned last in the last section is to allow a derivation to define a function which is only partial i.e.9 A function is recursive (or total recursive) if it is a partial recursive function which is total. . To see this we just have to check that the property of mechanical computability is closed under the rule 6. . . given that f is defined. .8 The partial recursive functions contains the basic functions defined by 1-3 for primitive recursive functions and are closed under the operations 4 and 5. . xn ) be the least m such that g(m. If we could find fx then the above defined function V would be mechanically computable and we would get a function which was not in our list. . . xn ) = 0 and such that g(y.e. Unbounded search Assume that g is a partial recursive function and let f (x1 . We then construct a function which is not primitive recursive by going down the diagonal and making sure that our function disagrees with fi on input i. We will do this by giving another way of forming new function. . x1 . . By this we mean that given x we should not be able to find fx in a mechanical way. which is defined for all inputs. . Observe that a recursive function is in an intuitive sense mechanically computable. To see the reason for this name think of an infinite two-dimensional array with natural numbers along one axis and the primitive recursive functions on the other. Then f is partial recursive. is not defined for all inputs. . 2. This modification will give a new class of functions called the partial recursive functions. If no such m exists then f (x1 . .

as rewriting systems) and Turing (Turing machines. The problem being that it is difficult to decide whether the defined function is total (i. . It is not important which alphabet the machine uses and thus let us think of it as {0. The machine reads the content of the square the head is located at. Let us next describe another approach to define mechanically computable functions. x2 . 1.3 Turing Machines The definition of mechanically computable functions as recursive functions given in the last section is due to Kleene.Figure 1: A Turing machine this follows since we just have to keep computing g until we find a value for which it takes the value 0. The input is initially given on the tape. if for each value of x1 . A Turing machine is a very primitive computer. The key point here is that since f is total we know that eventually there is going to be such a value. x1 . after the invention of the modern computer. xn there is an m such that g(m. . seems most natural. and 11 . Of these we will only look closer at Turing machines. This implies that we will not be able to imitate the proof of Theorem 2. x2 . The infinite tape serves as memory and input and output device. . also by equations). Other definitions of mechanically computable were given by Church (effective calculability. . Also observe that there is no obvious way to determine whether a given derivation defines a total function and thus defines a recursive function.7 and thus there is some hope that this definition will give all mechanically computable functions. At each point in time the head is located at one of the tape squares and is in one of a finite number of states. a type of primitive computer). . 2. B} where B symbolizes the blank square. This is probably the definition which to most of us today. xn ) = 0. . Post (canonical systems. Each square can contain one symbol from a finite alphabet which we will denote by Σ.e. A simple picture of one is given in Figure 1.

In such a case there is one head on each tape.10 Let us define a Turing Machine which checks if the input contains only ones and no zeros. it describes the movements of all k heads and what new symbols to write into the k squares. It is given in Table 1. it writes something into the square. The tape squares that do not contain any part of the input contain the symbol B. L} where Q is the set of possible states and R(L) symbolizes moving right (left). If we have several tapes then it is common to have one tape on which the input is located. and when the machine reaches this state it halts. In a similar spirit there is one output-tape which the machine cannot read. qh . 12 . the computation. The output is now defined by the non-blank symbols on the tape. This convention separates out the tasks of reading the input and writing the output and thus we can concentrate on the heart of the matter. If there are k tapes then the next-step function depends on the contents of all k squares where the heads are located.1 B New State q1 q0 qh q1 qh New Symbol B B 1 B 0 Move R R R Table 1: The next step function of a simple Turing machine based on this value and its state. When we are discussing computability this will not matter. Formally this is described by the next-move function f : Q × Σ → Q × Σ × {R. and not to allow the machine to write on this tape. There is a special halt-state.State q0 q0 q0 q1 q1 Symbol 0 1 B 0. From an intuitive point of view the next-move function is the program of the machine. most of the time we will assume that we have a one-tape Turing machine. Initially the machine is in a special start-state. It is possible to make the Turing machine more efficient by allowing more than one tape. and the head is located on the leftmost square of the input. However. q0 . enters a potentially new state and moves left or right. but later when considering efficiency of computation results will change slightly. Example 2.

The “Turing computable functions” is a reasonable definition of the mechanically computable functions and thus the first interesting question is how this new class of functions relates to the recursive functions. We will not give the proof of this theorem. If it ever sees a “0” it erases the rest of the input. To make things simpler we also assume that we have a special output-tape on which we print the answer. The easier part of the theorem is to prove that if a function is recursive then it is Turing computable. For this reason this will be the last Turing machine that we specify explicitly. prints the answer 0 and then halts. when we argued that recursive functions were mechanically computable. We have the following theorem.11 Programming Turing machines gets slightly cumbersome and as an example let us give a Turing machine which computes the sum of two binary numbers. Theorem 2.0 : It will be quite time-consuming to explicitly give Turing machines which compute more complicated functions. If it sees a “B” before it sees a “0” it accepts. But whenever a Turing machine halts for all inputs it corresponds to a total function and we will call such a function Turing computable. most people who have programmed a modern 13 . To be honest there are more economic ways to specify Turing machines. The program is given in Table 2. A Turing machine defines only a partial function since it is not clear that the machine will halt for all inputs.Thus the machine starts in state q0 and remains in this state until it has seen a “0”. where we assume for notational convenience that the machine starts in state q0. To make the representation compact we will let the states have two indices. However. Let division be integer division and let lsb(i) be the least significant bit of i. The first index is just a string of letters while the other is a number. which in general will be in the range 0 to 3. and hence we will only give an outline of the general approach. One can build up an arsenal of small machines doing basic operations and then define composition of Turing machines.12 A function is Turing computable iff it is recursive. We assume that we are given two numbers with least significant bit first and that there is a B between the two numbers.3. The proof is rather tedious. Before. since programming Turing machines is not our main task we will not pursue this direction either. Example 2. also here beginning with the least significant bit.

1 B B 0.i qcx.i qxo. 1 0. i+j 2 lsb(i+j) i qh Table 2: A Turing machine for addition 14 .i qxo.i qf x.i qyo.i qsy.i qyc.State q0.i qxf. i+j 2 lsb(i+j) qsx.i qxf.i qxm.i qf x.i qyc.i qy. 1 B 0.i qxo.i qf x.i qcx. i+j 2 2 New Symbol B same B B B B B same B B B same B B same same B B same same B B B Move R R R R R R R R R R L L L L L R L L L R R Output lsb(i+j) lsb(i+j) i qyc.i qyo.i q0. 1(= j) 0.i qx. 1 B B 0. 1 B B 0.i qf x.i qsy.i qsy. 1(= j) B New State qx.i qcx.i qsx.i qy.i qyo. 1(= j) B 0.i qxf. i+j qh qxm.i qsx.i qxf.i qsy. 1 B B 0.i qy.i Symbol 0. 1(= j) 0.i qxm.i qcx.i qyo.i qx.i qyc. 1(= j) 0. 1 0.i qxo.i qsx.i+j qxm.

This leads one to believe that we have captured the right notion of computability and this belief is usually referred to as Church’s thesis. 15 . Since any high level computer language describes a reasonable model of computation the class of functions computable by high level programs is included in the class of recursive functions.4 Church’s thesis In the last section we stated the theorem that recursive functions are identical to the Turing computable functions. This gets fairly involved and we will not describe this procedure here. we can draw the conclusion that we can do the computation on a Turing machine or by a recursive function. instead of saying that a given function. Thus we can use such imprecise words as “reasonable”. Church’s thesis is very convenient to use when arguing about computability. In this way we do not have to worry about actually programming the Turing machine. The way to do this is to mimic the behavior of the Turing machine by equations. Thus as long as our descriptions of procedures are detailed enough so that we feel certain that we could write a high level program to do the computation. Church’s thesis: The class of recursive functions is the class of mechanically computable functions. It turns out that all the other attempts to formalize mechanically computable functions give the same class of functions. 2. but still feasible.computer probably felt that without too much trouble one could write a program that would compute a recursive function. f . For the remainder of these notes we will use the term “recursive functions” for the class of functions described by Church’s thesis. and any reasonable definition of mechanically computable will give the same class of functions. Observe that Church’s thesis is not a mathematical theorem but a statement of experience. Sometimes. is a recursive function we will phrase this as “f is computable”. For the other implication one has to show that any Turing computable function is recursive. It is harder to program Turing machines. When we argue about such functions we will usually argue in terms of Turing machines but the algorithms we describe will only be specified quite informally. Let us state it for future reference.

lists the members of A on its output tape. Thus if we want to know whether x ∈ A it is not clear how to use M for this purpose. In this connection sets are also called languages. Another interesting class of sets is the class of sets which can be listed mechanically. There is a slightly subtle point here since it might be the case that M never outputs such a number.13 A set A is recursively enumerable iff there is a Turing machine MA which. of inputs for which the function takes the value 1. which would happen in the case when A is finite and does not contain x or any larger number. but in general this is not true. while any member of A will eventually be listed. e. 2. Thus A is recursive iff given x one can mechanically decide whether x ∈ A. but if we have not seen x we do not know whether x ∈ A or we have not waited long enough.6 Recursively enumerable sets We have defined recursive sets to be the sets for which membership can be tested mechanically i. Definition 2.2 Thus in this case we can conclude that A is recursive. If we would require that A was listed in order we could check whether x ∈ A since we would only have had to wait until we had seen x or a number greater than x. It is important to remember that. We can watch the output of M and if x appears we know that x ∈ A. A.e.5 Functions. The function f is called the characteristic function of A. sets and languages If a function f only takes two values (which we assume without loss of generality to be 0 and 1) then we can identify f with the set. 2 16 .2. a set A is recursive if given x it is computable to test whether x ∈ A.g. A set is called recursive iff its characteristic function is recursive. It is interesting to note that given the machine M it is not clear which alternative should be used to recognize A. the members of A are not necessarily listed in order and that M will probably never halt since A is infinite most of the time. the set of prime numbers could be called the language of prime numbers. Sometimes the characteristic function of A will be denoted by χA . The reason for this is historical and comes from the theory of formal languages. but one of them will work and that is all we care about. However also in this case A is recursive since any finite set is recursive. In formulas x ∈ A ⇔ f (x) = 1. when started on the empty input tape.

New state.Theorem 2. 1. Thus we can uniquely code a Turing machine as a natural number. We assume that the start state is always q0 and the halt state q1 . . This coding is also efficient in the sense that given a string over this alphabet it is possible to mechanically decide whether it is a correct description of a Turing machine (think about this for a while). However there are sets that are recursively enumerable that are not recursive. 1 or B. We will denote the Turing machine which is given by the description corresponding to y by My . the end of a line is marked as & & and the end of the specification is marked as & & &. 1. 1 . ∞ If i ∈ A print i. R. B. Each item is separated from the next by the special symbol &. Let us make precise how to code this information. Furthermore we claim that once we have the description of the Turing machine we can run it on any input 17 . Move and Output. This definition implies that each Turing machine occurs infinitely many times in any natural enumeration. A state should be written as qx where x is a natural number written in binary. B}. while a move is either R or L and the output is either 0. The other part of the theorem is harder and requires some more notation. L. If we encounter the end of the specification we will just discard the rest of the description. Symbol. For technical reason we allow the end of the specification not to be the last symbols in the coding. Let us outline in more detail how this is done. With these conventions a Turing machine is completely specified by a finite string over the alphabet {0. the procedure below will even print the members of A in order. A symbol is from the set {0. Since it is computable to determine whether i ∈ A this will give a correct enumeration of A. We again emphasize that given y it is possible to mechanically determine whether it corresponds to a Turing machine and in such a case find that Turing machine. .14 If a set is recursive then it is recursively enumerable. We have described a Turing machine by a number of lines where each line contains the following items: State. Proof: That recursive implies recursively enumerable is not too hard. A Turing machine is essentially defined by the next-step function which can be described by a number of symbols and thus can be coded as an integer. By standard coding we can think of this finite string as a number written in base 8. For i = 0. &. New Symbol. q}.

VT is the characteristic function of a set which we will denote by KD . 2. where it usually would halt. We encourage the interested reader to at least make a rough sketch of a program in his favorite programming language which does the same thing as the universal Turing machine. run Mj . In a more modern language. i. VT (x) = 1.(simulate My on a given input). y. By this we mean that if My halts with output w on input x within z steps then also the universal machine outputs w. if it halts within these i steps and gives output 0 and we have not listed j before. print j. if Mx halts on input x with output 0. ∞ For j = 1. . the universal Turing machine is more or less an interpreter since it takes as input a Turing machine program together with an input and then runs the program. Observe that this is an recursive procedure using the universal Turing machine. otherwise. To prove the first claim observe that KD can be enumerated by the following procedure For i = 1. To distinguish it we call it VT . Theorem 2. In such a case the universal machine will simulate My until it halts or go on for ever without halting if My does not halt on input x. i steps on input j. . . z) simulates z computational steps of My on input x. . We claim that KD is recursively enumerable but not recursive. but this can be modified at will. We will sometimes allow z to take the value ∞. 2 . If Mj is legal. We call this set “the diagonal halting set” since it is the set of Turing machines which halt with output 0 when given their own encoding as input. 0. the universal Turing machine enters a special state qill . The only detail to check is that we can decide whether j has 18 . If My does not halt within z steps then the universal machine gives output “not halted”. We now define a function which is in the same spirit of the function V which we proved not to be primitive recursive.15 There is a universal Turing machine which on input (x. We make this explicit by stating a theorem we will not prove. If y is not the description of a legal Turing machine. . The output will again agree with that of My .

18 The complement of KD is not r. To see that KD is not recursive. and hence is not recursive. The easiest way to do this is to observe that j has not been listed before precisely if j = i or Mj halted in exactly i steps. For the ¯ converse. T ) and j = x. 2. we have proved one direction of the theorem.e. to decide whether x ∈ A we just enumerate A and A in parallel. This finishes the proof of Theorem 2.16 The function VT cannot be computed by a Turing machine. We have given an explicit function which cannot be computed by a Turing machine.14 We have proved slightly more than was required by the theorem. we can give the answer and halt. Theorem 2. On the other hand if M does not halt with output 0 then VT (y) = 0.e.been listed before.e. Corollary 2. Consider what happens when M is fed input y. suppose that VT can be computed by a Turing machine M .. From Theorem 2. In this section we will abbreviate recursively enumerable as “r.7 Some facts about recursively enumerable sets Recursion theory is really the predecessor of complexity theory and let us therefore prove some of the standard theorems to give us something to compare with later. (A) are r.17 A is recursive if and only if both A and the complement of ¯ A. ¯ Proof: If A is recursive then also A is recursive (we get a machine recog¯ from a machine recognizing A by changing the output). Theorem 2. Since any nizing A recursive set is r. which we know it will.”. We know that M = My for some y. The procedure lists KD since all numbers ever printed are by definition members in KD and if x ∈ KD and Mx halts in T steps on input x then x will be listed for i = max(x. and when x appears in one of lists. If it halts with output 0 then VT (y) = 1. Let us state this as a separate theorem. 19 . In either case My makes an error and hence we have reached a contradiction.e.16 we have the following immediate corollary.

y)|My is legal and halts on input x}. ∞ For x = 0.19 A is r. This is closely related to the diagonal halting problem which we have already proved not to be recursive in the last section. Proof: If there is such a B then A can be enumerated by the following program: For z = 0. i. y) ∈ K is for natural reasons called the halting problem. By the existence of the universal Turing machine it follows that B is recursive and by definition ∃y(x. y) ∈ B then x is listed for z = max(x.19. y).sets are just recursive sets plus an existential quantifier.20 The halting problem is not recursive. 2. First observe that x has not been printed before if either x or y is equal to z. y) ∈ B. be defined by K = {(x. . y) ∈ B and (x. To determine whether a given pair (x. By the relation between A and B this program will list only members of A and if x ∈ A and y is the smallest number such that (x. when x ∈ A.e.e. y) ∈ B precisely when x appears in the output of MA . let MA be the Turing machine which enumerates A. iff there is a recursive set B such that x ∈ A ⇔ ∃y (x. . . Intuitively this should imply that the halting problem also is not recursive and in fact this is the case.e. The last theorem says that r. y) such that x is output by MA in at most y steps. Theorem 2. This finishes the proof of Theorem 2. z If for some y ≤ z we have (x.For the next theorem we need the fact that we can code pairs of natural numbers as natural numbers. Define B to be the set of pairs (x. We will later see that there is a similar relationship between the complexity classes P and N P . K. To see the converse. 2 . Let the halting set. . For instance one such coding is given by f (x. 20 . y) = (x + y)(x + y + 1)/2 + x. 1. . Theorem 2. y ) ∈ B for y < y and x has not been printed before then print x. 1.

∞) to determine the output of Mx on input x. 21 . On the other hand if M outputs 1 we use the universal machine on input (x. x) to M . If the output is 0 we give the answer 1 and otherwise we answer 0.Proof: Suppose K is recursive i. We will not study other definitions in detail. The intuition for either of the above definitions is that A is not harder to recognize than B. Since we have already proved that no machine can compute VT this will prove the theorem.23 If A ≤m B and B is recursive then A is recursive. First decide whether Mx is a legal Turing machine. Now consider an input x and that we want to compute VT (x). We will use this machine to construct a machine that computes VT using M as a subroutine. If it is not we output 0 and halt.e. If Mx is a legal machine we feed the pair (x. but since the only reduction we have done so far was not a many-one reduction but a more general notion called Turing reduction. One general such method is by a standard type of reduction and let us next define this concept.21 For sets A and B let the notation A ≤m B mean that there is a recursive function f such that x ∈ A ⇔ f (x) ∈ B. Definition 2.22 For sets A and B let the notation A ≤T B mean that given a Turing machine that recognizes B then using this machine as a subroutine we can construct a Turing machine that recognizes A. The reason for the letter m on the less than sign is that one usually defines several different reductions. Definition 2. first compute f (x) and then check whether f (x) ∈ B. Proof: To decide whether x ∈ A. y) gives output 1 precisely when My is legal and halts on input x. that there is a Turing machine M which on input (x. If M outputs 0 we can safely output 0 since we know that Mx does not halt on input x. This particular reduction is usually referred to as a many-one reduction. x. It is now clear that other problems can be proved to be non-recursive by a similar technique. This gives a mechanical procedure that computes VT and we have reached the desired contradiction. we will define also this reduction. This is formalized as follows: Theorem 2. Since both f and B are recursive this is a recursive procedure and it gives the correct answer by the definition of A ≤m B. Namely we assume that the given problem is recursive and we then make an algorithm for computing something that we already know is not recursive.

then B ≤m A.e. each with a marking on all four sides and one tile placed at the origin in the plane.e. Just run more and more machines more and more steps and output all pairs of machines and inputs that leads to halting..e. the markings agree on their common side and such that each tile is equal to one of the given tiles. Define M to be the Turing machine which on input x runs M until it outputs x (if ever) and then halts with output 0. Given a finite set of squares (which will be called tiles). It is also true that the diagonal halting problem is r.-complete iff 1. If B is r. We have Theorem 2. The question is whether it is possible to cover the entire positive quadrant with tiles such that on any two neighboring tiles. B can be reduced to K.e. A is r. Proof: The fact that the halting problem is r. Next let us define the hardest problem within a given class. Theorem 2. y) and this will give a reduction from B to K.24 A set A is r.-complete.-complete (or to be even harder) and let us define two such problems.-complete. 22 . The first problem is called tiling a can be thought of as a two-dimensional domino game.e.can be seen in a similar way that the diagonal halting problem KD was seen to be r.e. Thus if M = My we can let f (x) = (x. Definition 2. Let M be the Turing machine that enumerates B.e.-complete. The proof is complete. To see that it is complete we have to prove that any other r.26 The complement problem of tiling is r. 2. However in the future we will only reason about many-one reducibility.25 The halting set is r.e. set.e.e. but we omit the proof.Clearly the similar theorem with Turing reducibility rather than manyone reducibility is also true (prove it). Then M halts precisely when x ∈ B. There are many other (often more natural) problems that can be proved r.

However. This completes the description of the tiles. s+1 ) of 1 2 3 1 2 3 states in the previous and next step we make a tile. There are a couple of details to take care of. s3 ). s3 ) while the marking on the top side is 1 2 3 (s1 . s2 and s3 (we call this the signature of the tile) and we need to specify how to mark its four sides. j) will describe the state of cells j. The left hand side will be marked by (s1 . s−1 ).Proof: (Outline) Given a Turing machine Mx we will construct a set of tiles and a tile at the origin such that the entire positive quadrant can be tiled iff Mx does not halt on the empty input. Observe that this implies that tiles which are to the left and right of each other will describe overlapping parts of the tape. The tile at the origin will make sure that the machine starts correctly (with some more complication this tile could have been eliminated also). and j + 2 are s1 . Suppose that the states of cells j. j + 1. The markings on the top and the bottom will make sure that the computation proceeds correctly. (s1 . it does not halt. s2 . s2 . s−1 ) and (s+1 . The tile to be placed at position (i. and s3 at time t. Consider the states of these cells at time t + 1. we will make sure that the descriptions do not conflict. (s+1 . Let the state of a tape cell be the content of the cell with the additional information whether the head is there and in such a case which state the machine is in.e. Namely that new heads don’t enter from the left and that the entire tape is blank from the beginning. j + 1 and j + 2 at time i of the computation. s+1 ). On the other hand if the head is not present in any of the three cells there might be several possibilities since the head could be in cells j − 1 or j + 3 and move into one of our positions. s+1 . If one of the si tells us that the head is present we know exactly what states the cells will be in. Now each tile will describe the state of three adjacent cells. s3 ). Now it is easy to see that a valid tiling describes a computation of Mx and the entire quadrant can be tiled iff Mx goes on for ever i. The marking on the lower side is (s−1 . s2 . Observe that this makes sure that there is no conflict in the descriptions of a cell by different tiles. For each possibility (s−1 . s−1 . 1 2 3 Finally at the origin we place a tile which describes that the machine starts in the first cell in state q0 and blank tape. In a similar way there might be one or many (or even none) possible states for the three cells at time t − 1. We will construct the tiles in such a way that the only way to put down tiles correctly will be to make them describe a computation of Mx . s+1 . s−1 . s2 ) and the right hand side by (s2 . The problem whether a Turing machine halts on the empty input is not recursive (this is one of the exercises in the end of this chapter). 23 . A tile will thus be partly be specified by three cell-states s1 .

t) is true iff z is a rt2 bit integer which describes a correct computation for Mx which have halted.A couple of special markings will take care of this. This amounts to extracting the r bits of z which are in position starting at (it + j)r. Remark 2. This time we will let an enourmous integer z code the computation. The second problem we will consider is number theoretic statements. y. i. Suppose that each cell has at most S ≤ 2r states. A computation of Mx that runs in time t never uses more than t tape cells and thus such a computation can be described by the content of t2 cells (i.e. p2 and p3 are the states of squares i − 1. This can now be coded as rt2 bits and these bits concatenated will be the integer z. In general a number theoretic statement involves the quantifiers ∀ and ∃. i and i + 1 at time j then q is 24 . given a number theoretic statement is it false or true? One particular statement people have been interested in for a long time (which supposedly was proved true in 1993) is Fermat’s last theorem. Theorem 2.e. which can be written as follows ∀n > 2 ∀x. Next one makes a predicate M ove(p1 . p2 . j. First one makes a predicate Cell(i. Now let Ax be an arithmetic formula such that Ax (z. Thus assume we are given a Turing machine Mx and that we want to decide whether it halts on the empty input. The state of each cell will be given by a certain number of bits in the binary expansion of z. zxn + y n = z n ← xyz = 0. To prove this would lead us to far into recursion theory.28 In fact the set of true number theoretic statements is not even r. t. t cells each at t different points in time). Proof: (Outline) Again we will prove that we can reduce the halting problem to the given problem. To check that such a formula exists requires a fair amount of detailed reasoning and let us just sketch how to construct it. p) which is true iff p is the integer that describes the content of cell i at time j. Quantifiers range over natural numbers. We leave the details to the reader. z. but have a much more complicated structure. The interested reader can consult any standard text in recursion theory.e.27 The set of true number theoretic statements is not recursive. variables and usual arithmetical operations. p3 . q) which says that if p1 .

p3 )∧ Cell(i.-complete B ≤m A. t. contradicting the initial assumption that B is not recursive. p3 . z. Proof: Let B be a set that is r. pq ) ⇒ M ove(p1 . It seems like the hard part of the tiling problem is what to do at points where we can put down many different tiles (we never know if we made the correct decision). p2 ) ∧ Cell(i + 1. Rather at each point we have only one choice and the hard part is to decide whether we can continue for ever. Remark 2. t) and thus if we can decide the truth of arithmetic formulae with quantifiers we can decide if a given Turing machine halts. Now if A was recursive then by Theorem 2.29 It is interesting to note that (at least to me) the proofs of the last two theorems are in some sense counter intuitive. j. M ove on the other hand is of constant size (there are only 24r inputs. z.e.e. p2 . Theorem 2. t. p1 . A similar statement is true about the other proof. t. Let us explicitly state a theorem we have used a couple of times. p1 )∧ Cell(i.e. t) is now equivalent to the conjunction of ∀i. t.30 If A is r.the halting problem) then by the second property of being r.but not recursive (e. q ) ⇒ Stop(q ) where Stop(p) is true if p is a haltstate. q Cell(i − 1. The Cell predicate is from an intuitive point of view very arithmetic (and thus we hope the reader feels that it can be constructed). This is not utilized in the proof. t. z. p2 . z. 25 . j. j.g. z. t. tAx (z. j. which is a constant depending only on x and independent of t ) and thus can be coded by brute force.-complete then A is not recursive. j + 1.7. The predicate Ax (z. q) and ∀q Cell(1.the resulting state of square i at time j + 1.6 we could conclude that B is recursive. Since we know that this is not possible we have finished the outline of the proof. p3 . Now we are almost done since Mx halts iff ∃z.

Before we end this section let us make an informal remark. What does it mean that the halting problem is not recursive? Experience shows that for most programs that do not halt there is a simple reason that they do not halt. They often tend to go into an infinite loop and of course such things can be detected. We have only proved that there is not a single program which, when given as input the description of a Turing machine and an input to that machine, will always give the correct answer to the question whether the machine halts or not.

One final definition: a problem that is not recursive is called undecidable. Thus the halting problem is undecidable.

2.8 Gödel's incompleteness theorem

Since we have done many of the pieces let us briefly outline a proof of Gödel's incompleteness theorem. This theorem basically says that there are statements in arithmetic which have neither a proof nor a disproof. We want to avoid a too elaborate machinery and hence we will be rather informal and give an argument in the simplest case. However, before we state the theorem we need to address what we mean by "statement in arithmetic" and "proof".

Statements in arithmetic will simply be the formulas considered in the last examples, i.e. quantified formulas where the variables take values which are natural numbers. We encourage the reader to write common theorems and conjectures in number theory in this form to check its power.

The notion of a proof is more complicated. First note that most proofs used in modern mathematics are much more informal and given in a natural language. However, it is generally agreed that any such proof can be formalized (although most humans prefer informal proofs). One starts with a set of axioms and then one is allowed to combine axioms (according to some rules) to derive new theorems. A proof is then just such a derivation which ends with the desired statement. The most common set of axioms for number theory was proposed by Peano, but one could think of other sets of axioms. We call a set of axioms together with the rules for how they can be combined a proof system.

There are two crucial properties to look for in a proof system. We want to be able to prove all true theorems (this is called completeness) and we do not want to be able to prove any false theorems (this is called that the system is consistent). In particular, for each statement A we want to be able to prove exactly one of A and ¬A. Our goal is to prove that there is no proof system that is both consistent

and complete. Unfortunately, this is not true since we can take as axioms all true statements and then we need no rules for deriving new theorems. This is not a very practical proof system since there is no way to tell whether a given statement is indeed an axiom. Clearly the axioms need to be specified in a more efficient manner. We take the following definition.

Definition 2.31 A proof system is recursive iff the set of proofs (and hence the set of axioms) form a recursive set.

We can now state the theorem.

Theorem 2.32 (Gödel) There is no recursive proof system which is both consistent and complete.

Proof: Assume that there was indeed such a proof system. Then we claim that also the set of all theorems would be recursive. Namely, to decide whether a statement A is true we could proceed as follows: For z = 0, 1, . . . , ∞:

If z is a correct proof of A output "true" and halt.
If z is a correct proof of ¬A output "false" and halt.

To check whether a given string is a correct proof is recursive by assumption, and since the proof system is consistent and complete sooner or later there will be a proof of either A or ¬A. Thus this procedure always halts with the correct answer. However, by Theorem 2.27 the set of true statements is not recursive and hence we have reached a contradiction.
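The procedure in this proof is simple enough to write down in full; here is a sketch in Python, where is_proof(z, A) is a hypothetical stand-in for the recursive check that the integer z codes a correct proof of the statement A in the given proof system:

def decide(A, not_A, is_proof):
    # Enumerate all candidate proofs; completeness guarantees that a
    # proof of A or of the negation of A eventually appears, and
    # consistency guarantees that at most one of them ever does.
    z = 0
    while True:
        if is_proof(z, A):
            return True
        if is_proof(z, not_A):
            return False
        z += 1

As the proof shows, the existence of such a terminating procedure contradicts Theorem 2.27, so no recursive, consistent and complete proof system can exist.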

2.9 Exercises

Let us end this section with a couple of exercises (with answers). The reader is encouraged to solve the exercises without looking too much at the answers.

II.1: Given x is it recursive to decide whether Mx halts on an empty input?

II.2: Is there any fixed machine M, such that given y, deciding whether M halts on input y is recursive?

II.3: Is there any fixed machine M, such that given y, deciding whether M halts on input y is not recursive?

II.4: Is it true that for each machine M, it is recursive to decide whether M halts on input y in y^2 steps?

II.5: Given x is it recursive to decide whether there exists a y such that Mx halts on y?

II.6: Given x is it recursive to decide whether for all y, Mx halts on y?

II.7: If Mx halts on empty input let f(x) be the number of steps it needs before it halts and otherwise set f(x) = 0. Define the maximum time function by MT(y) = max_{x≤y} f(x). Is the maximum time function computable?

II.8: Prove that the maximum time function (cf. ex. II.7) grows at least as fast as any recursive function. To be more precise: let g be any recursive function; then there is an x such that MT(x) > g(x).

II.9: Given a set of rewriting rules over a finite alphabet, a starting string and a target string. Is it decidable whether we, using the rewriting rules, can transform the starting string to the target string? An example of this instance is: rewriting rules ab → ba, aa → bab and bb → a. Is it possible to transform ababba to aaaabbb?

II.10: Given a set of rewriting rules over a finite alphabet and a starting string. Is it decidable whether we, using the rewriting rules, can transform the starting string to an arbitrarily long string?

II.11: Given a set of rewriting rules over a finite alphabet and a starting string. Is it decidable whether we, using the rewriting rules, can transform the starting string to an arbitrarily long string, if we restrict the left hand side of each rewriting rule to be of length 1?

2.10 Answers to exercises

II.1 The problem is undecidable. We will prove that if we could decide whether Mx halts on the empty input, then we could decide whether Mz halts on input y for an arbitrary pair z, y. Namely, given z and y we make a machine Mx which basically looks like Mz but has a few special states. We have one special state for each symbol of y. On empty input Mx first goes through all its special states, which write y on the tape. The machine then returns to the beginning of the tape and from this point on it behaves as Mz. This new machine halts on empty input-tape iff Mz halted on input y, and thus if we could decide the former we could decide the latter, which is known to be undecidable. To conclude the proof we only have to observe that it is recursive to compute the number x from the pair y and z.

II.2 There are plenty of machines of this type. For instance let M be the machine that halts without looking at the input (or any machine defining a total function). In either case the set of y's for which the machine halts is everything, which certainly is a decidable set.

II.3 Let M be the universal machine. Then M halts on input (x, y) iff Mx halts on input y. Since the latter problem is undecidable so is the former.

II.4 This problem is decidable by the existence of the universal machine. If we are less formal we could just say that running a machine a given number of steps is easy. What makes halting problems difficult is that we do not know for how many steps to run the machine.

II.5 Undecidable. Suppose we could decide this problem; then we show that we could determine whether a machine Mx halts on empty input. Given Mx we create a machine Mz which first erases the input and then behaves as Mx. We claim that Mz halts on some input iff Mx halts on empty input. Also it is true that we can compute z from x. Thus if we could decide whether Mz halts on some input then we could decide whether Mx halts on empty input, but this is undecidable by exercise II.1.

II.6 Undecidable. The argument is the same as in the previous exercise. The constructed machine Mz halts on all inputs iff it halts on some input.

II.7 MT is not computable. Suppose it was; then we could decide whether Mx halts on empty input as follows: First compute MT(x) and then run Mx for MT(x) steps on the empty input. If it halts in this number of steps, we know the answer, and if it did not halt, we know by the definition of MT that it will never halt. Thus we always give the correct answer. However, we know by exercise II.1 that the halting problem on empty input is undecidable. The contradiction must come from our assumption that MT is computable.

II.8 Suppose we had a recursive function g such that g(x) ≥ MT(x) for all x. Then g(x) would work in the place of MT(x) in the proof of exercise II.7 (we would run more steps than we needed to, but we would always get the correct answer). Thus there can be no such function.

II.9 The problem is undecidable; let us give an outline of why this is true. We will prove that if we could decide this problem then we could decide whether a given Turing machine halts on the empty input. The letters in our finite alphabet will be the nonblank symbols that can appear on the tape of the Turing machine, plus a symbol for each state of the machine. A string in this alphabet containing exactly one letter corresponding to a state of the machine can be viewed as coding the Turing machine at one instant in time

by the following convention. The nonblank part of the tape is written from left to right, and next to the letter corresponding to the square where the head is, we write the letter corresponding to the state the machine is in. For instance, suppose the Turing machine has symbols 0 and 1 and 4 states. We choose a, b, c and d to code these states. If, at an instant in time, the content of the tape is 0110000BBBBBBBBBBBB . . . and the head is in square 3 and is in state 3, we could code this as: 011c000. Now it is easy to make rewriting rules corresponding to the moves of the machine. For instance, if the machine would write 0, go into state 2 and move left when it is in state 3 and sees a 1, this would correspond to the rewriting rule 1c → b0. Now the question whether a machine halts on the empty input corresponds to the question whether we can rewrite a to a description of a halted Turing machine. To make this description unique we add a special state to the Turing machine such that instead of just halting, it erases the tape, returns to the beginning of the tape and then halts. In this case we get a unique halting configuration, which is used as the target string.

It is very interesting to note that although one would expect that the complexity of this problem comes from the fact that we do not know which rewriting rule to apply when there is a choice, this is not used in the proof. In fact, in the special cases we get from the reduction from Turing machines, at each point there is only one rule to apply (corresponding to the move of the Turing machine).

In the example given in the exercise there is no way to transform the start string to the target string. This might be seen by letting a have weight 2 and b have weight 1. Then the rewriting rules preserve weight while the two given words are of different weight.
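The weight argument is easy to check mechanically; a small Python sketch of the check (our own illustration):

# a has weight 2, b has weight 1
weight = lambda s: sum(2 if c == 'a' else 1 for c in s)

rules = [("ab", "ba"), ("aa", "bab"), ("bb", "a")]
for lhs, rhs in rules:
    assert weight(lhs) == weight(rhs)    # every rule preserves the weight

print(weight("ababba"), weight("aaaabbb"))    # 9 and 11

Since every rule preserves weight and the two given words have weights 9 and 11, no sequence of rewritings can transform one into the other.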

II.10 Undecidable. Do the same reduction as in exercise II.9 to get a rewriting system and a start string corresponding to a Turing machine Mx working on empty input. If this system produces arbitrarily long words then the machine does not halt. On the other hand, if we knew that the system did not produce arbitrarily long words then we could simulate the machine until it either halts or repeats a configuration (we know one of these two cases will happen). In the first case the machine halted and in the second it will loop forever. Thus if we could decide whether a rewriting system produces arbitrarily long strings we could decide if a Turing machine halts on empty input.

II.11 This problem is decidable. Make a directed graph G whose nodes correspond to the letters in the alphabet. There is an edge from v to w if there is a rewriting rule which rewrites v into a string that contains w. Let the weight of this edge be 1 if the rewriting rule replaces v by a longer string and 0 otherwise. Now we claim that the rewriting rules can produce arbitrarily long strings iff there is a circuit of positive weight that can be reached from one of the letters contained in the starting word. The decidability now follows from standard graph algorithms.
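The graph criterion translates directly into a short program. The sketch below is our own rendering of the argument, not code from the notes; it checks whether some weight-1 edge lies on a circuit reachable from a letter of the starting word:

def reach(adj, sources):
    # Standard depth first search from a set of source letters.
    seen, stack = set(sources), list(sources)
    while stack:
        v = stack.pop()
        for w, _ in adj.get(v, []):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def unbounded(rules, start):
    # rules: list of pairs (letter, replacement string); left hand sides
    # have length 1 as in exercise II.11.
    adj = {}
    for v, rhs in rules:
        for w in set(rhs):
            adj.setdefault(v, []).append((w, 1 if len(rhs) > 1 else 0))
    reachable = reach(adj, set(start))
    # A positive-weight circuit reachable from the start exists iff some
    # weight-1 edge (u, w) has u reachable from the start and u again
    # reachable from w (closing the circuit).
    return any(u in reachable and u in reach(adj, {w})
               for u in adj for (w, wt) in adj[u] if wt == 1)

For instance, unbounded([("a", "ab"), ("b", "a")], "a") returns True, while unbounded([("a", "b"), ("b", "a")], "a") returns False.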


3 Efficient computation, hierarchy theorems.

For the remainder of these notes all functions that we will be considering will be recursive and we will concentrate on what resources are needed to compute the function. To decide what is mechanically computable is of course interesting, but what we really care about is what we can compute in practice, i.e. by using an ordinary computer for a reasonable amount of time. The first two such resources we will be interested in are computing time and space.

3.1 Basic Definitions

Let us start by defining what we mean by the running time and space usage of a Turing machine. The running time is a function of the input, and experience has shown that it is convenient to treat inputs of the same length together.

Definition 3.1 A Turing machine M runs in time T(n) if for every input of length n, M halts within T(n) steps.

Definition 3.2 The length of a string x is denoted by |x|.

The natural definition for space would be to say that a Turing machine uses space S(n) if its head visits at most S(n) squares on any input of length n. This definition is not quite suitable under all circumstances. In particular, the definition would imply that if the Turing machine looks at the entire input then S(n) ≥ n. We will, however, also be interested in machines which use less than linear space, and to make sense of this we have to modify the model slightly. We will assume that there is a special input-tape which is read-only and a special output-tape which is write-only. Apart from these two tapes the machine has one or more work-tapes which it can use in the old-fashioned way. We will then only count the number of squares visited on the work-tapes.

Definition 3.3 Assume that a Turing machine M has a read-only input-tape, a write-only output-tape and one or more work-tapes. Then we will say that M uses space S(n) if for every input of length n, M visits at most S(n) tape squares on its work-tapes before it halts.

When we are discussing running times we will most of the time not be worried about constants, i.e. we will not really care if a machine runs in time n^2 or 10n^2. Thus the following definition is useful:

Definition 3.4 O(f(n)) is the set of functions which are bounded by cf(n) for some positive constant c.

Having done the definitions we can go on to see whether more time (space) actually enables us to compute more functions.

3.2 Hierarchy theorems

Before we start studying the hierarchy theorems (i.e. theorems of the type "more time helps") let us just prove that there are arbitrarily complex functions.

Theorem 3.5 For any recursive function f(n) there is a function Vf which is recursive but cannot be computed in time f(n).

Proof: Define Vf by letting Vf(x) be 1 if Mx is a legal Turing machine which halts with output 0 within f(|x|) steps on input x, and let Vf(x) take the value 0 otherwise. We claim that Vf cannot be computed within time f(n) on any Turing machine. Suppose for contradiction that My computes Vf and halts within time f(|x|) for every input x. Consider what happens on input y. Since we have assumed that My halts within time f(|y|) we see that Vf(y) = 1 iff My gives output 0, and thus we have reached a contradiction.

To finish the proof of the theorem we need to check that Vf is recursive, but this is fairly straightforward. We need to do two things on input x:

1. Compute f(|x|).

2. Check if Mx is a legal Turing machine and in such a case simulate Mx for f(|x|) steps and check whether the output is 0.

The first of these two operations is recursive by assumption while the second can be done using the universal Turing machine as a subroutine. This completes the proof of Theorem 3.5.
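The diagonalization is mechanical enough to sketch in code. In the Python sketch below, is_legal_machine and simulate are hypothetical stand-ins for a machine-encoding check and for the universal Turing machine (simulate returns the output if the machine halts within the given number of steps, and None otherwise):

def V(x, f, is_legal_machine, simulate):
    # V_f(x) = 1 iff M_x is legal and halts with output 0 on input x
    # within f(|x|) steps.
    if not is_legal_machine(x):
        return 0
    result = simulate(x, x, f(len(x)))
    return 1 if result == 0 else 0

The self-referential twist of the proof is visible in simulate(x, x, ...): the machine coded by x is run on its own description.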


Up to this point we have not assumed anything about the alphabet of our Turing machines. Implicitly we have thought of it as {0, 1, B}, but let us now highlight the role of the alphabet in two theorems.

Theorem 3.6 If a Turing machine M computes a {0, 1}-valued function f in time T(n) then there is a Turing machine M′ which computes f in time 2n + T(n)/2.

Proof: (Outline) Suppose that the alphabet of M is {0, 1, B}; then the alphabet of M′ will be 5-tuples of these symbols. Then we can code every five adjacent squares on the tape of M into a single square of M′. This will enable M′ to take several steps of M in one step, provided that the head stays within the same block of 5 symbols coded in the same square of M′. However, it is not clear that this will help, since it might be the case that many of M's steps will cross a boundary of 5-blocks. One can avoid this by having the 5-tuples of M′ be overlapping, and we leave this construction to the reader. The reason for requiring that f only takes the values 0 and 1 is to make sure that M′ does not spend most of its time printing the output, and the reason for adding 2n in the running time of M′ is that M′ has to read the input in the old format before it can be written down more succinctly and then return to the initial configuration.

The previous theorem tells us that we can gain any constant factor in running time provided we are willing to work with a larger alphabet. The next theorem tells us that this is all we can gain.

Theorem 3.7 If a Turing machine M computes a {0, 1}-valued function f on inputs that are binary strings in time T(n), then there is a Turing machine M′ which uses the alphabet {0, 1, B} and which computes f in time cT(n) for some constant c.

Proof: (Outline) Each symbol of M is now coded as a finite binary string (assume for notational convenience that the length of these strings is 3 for every symbol of M's alphabet). To each square on the tape of M there will be associated 3 tape squares on the tape of M′ which will contain the code of the corresponding symbol of M. Each step of M will be simulated by a sequence of steps of M′ which reads the corresponding squares. We need to introduce some intermediate states to remember the last few symbols read and there are some other details to take care of. However, we leave these details to the reader.

The last two theorems tell us that there is no point in keeping track of constants when analyzing computing times. The same is of course true when analyzing space since the proofs naturally extend. The theorems also say that it is sufficient to work with Turing machines that have the alphabet {0, 1, B}, as long as we remember that constants have no significance. For definiteness we will state results for Turing machines with 3 tapes. It will be important to have efficient simulations and we have the following theorem.

Theorem 3.8 The number of operations for a universal two-tape Turing machine needed to simulate T(n) operations of a Turing machine M is at most αT(n) log T(n), where α is a constant dependent on M, but independent of n. If the original machine runs in space S(n) ≥ log n, the simulation also runs in space αS(n), where α again is a constant dependent on M, but independent of n.

We skip the complicated proof. Now consider the function Vf defined in the proof of Theorem 3.5 and let us investigate how much is needed to compute it. Of the two steps of the algorithm, the second step can be analyzed using the above result, and thus the unknown part is how long it takes to compute f(|x|). As many times in mathematics, we define away this problem.

Definition 3.9 A function f is time constructible if there is a Turing machine that on input 1^n computes f(n) in time f(n).

It is easy to see that most natural functions like n^2, 2^n and n log n are time constructible. More or less just collecting all the pieces of the work already done we have the following theorem.

Theorem 3.10 If T2(n) is time constructible, T1(n) > n, and

lim_{n→∞} T2(n) / (T1(n) log T1(n)) = ∞,

then there is a function computable in time O(T2(n)) but not in time T1(n). Both time bounds refer to Turing machines with three tapes.

Proof: The intuition would be to use the function VT1 defined previously. To avoid some technical obstacles we work with a slightly modified function.

When simulating Mx we count the steps of the simulating machine rather than of Mx, i.e. we first compute T2(n) and then run the simulation for that many steps. We use two of the tapes for the simulation and the third tape to keep a clock. If we get an answer within this simulation we output 1 if the answer was 0 and output 0 otherwise. If we do not get an answer we simply answer 0. This defines a function VT2 and we need to check that it cannot be computed by any My in time T1. Remember that there are infinitely many yi such that Myi codes My (we allowed an end marker in the middle of the description). Now note that the constant α in Theorem 3.8 only depends on the machine My to be simulated, and thus there is a yi which codes My such that T2(|yi|) ≥ αT1(|yi|) log T1(|yi|). By the standard argument My will make an error on this input.

It is clear that we will be able to get the same result for space-complexity, even though there are some minor problems to take care of. Let us first prove that there are functions which require arbitrarily large amounts of space.

Theorem 3.11 If f(n) is a recursive function then there is a recursive function which cannot be computed in space f(n).

Proof: Define Uf by letting Uf(x) be 1 if Mx is a legal Turing machine which halts with output 0 without visiting more than f(|x|) tape squares on input x, and let Uf(x) take the value 0 otherwise. We claim that Uf cannot be computed in space f(n). Given a Turing machine My which never uses more than f(n) space, then as in all previous arguments Uf(y) = 1 iff My gives output 0 on input y, and thus My does not compute Uf.

To finish the theorem we need to prove that Uf is recursive. This might seem obvious at first, since we can just use the universal machine to simulate Mx and all we have to keep track of is whether Mx uses more than the allowed amount of space. This is not quite sufficient since Mx might run forever and never use more than f(|x|) space. We need the following important but not very difficult lemma.

Lemma 3.12 Let M be a Turing machine which has a work-tape alphabet of size c, Q states and k work-tapes, and which uses space at most S(n). Then on inputs of length n, M either halts within time nQS(n)^k c^{kS(n)} or it never halts.

This finishes the proof of Theorem 3. Let us calculate the number of different configurations of M given a fixed input of length n.11 To prove that more space actually enables us to compute more functions we need the appropriate definition. Thus we have a total of nQS(n)k ckS(n) possible configurations. the configuration consists of the contents of the tapes of M .12 is complete.14 If S2 (n) is space constructible. The number of possible locations of the head on the input-tape is at most n and there are Q possible states. But since the future actions of the machine is completely determined by the present configuration.10. We use a counter to count the number of steps used. Thus. If the machine does not halt within this many timesteps the machine will be in the same configuration twice. the positions of all its heads and its state.13 A function f is space constructible if there is a Turing machine that on input 1n computes f (n) in space f (n). These space bounds refer to machines with 3 tapes. Definition 3. In other words define a function essentially as US but restrict the computation to using space S2 of the simulating machine. We now can state the space-hierarchy theorem. The only detail to take care of is that if S(n) ≥ log n then a counter counting up to |x|QS(|x|)k ckS(|x|) can be implemented in space S(n). 37 . Since it uses at most space S(n) there at most ckS(n) possible contents of it work-tapes and at most S(n)k possible positions of the heads on the worktapes. We just simulate Mx for at most |x|Qf (|x|)k ckf (|x|) steps or until it has halted or used more than f (|x|) space. Returning to the proof of Theorem 3. The proof of Lemma 3. Theorem 3. whenever it returns to a configuration where it has been previously it will return infinitely many times and thus never halt.Proof: Let a configuration of M be a complete description of the machine at an instant in time. Proof: The function achieving the separation is basically US with the same twist as in Theorem 3. The rest of the proof is now more or less identical.11 we can now prove that Uf is computable. S(n) ≥ log n and n→∞ lim S2 (n) =∞ S(n) then there is a function computable in space O(S2 (n)) but not in space S(n).

The reason that we get a tighter separation between space-complexity classes than time-complexity classes is the fact that the universal machine uses just a constant factor more space than the original machine. This completes our treatment of the hierarchy theorems. These results are due to Hartmanis and Stearns and are from the 1960's. Next we will continue into the 1970's and move further away from recursion theory and into the realm of more modern complexity theory.

4 The complexity classes L, P and PSPACE.

We can now start our main topic, namely the study of complexity classes. We will in this section define the basic deterministic complexity classes L, P and PSPACE.

Definition 4.1 Given a set A, we say that A ∈ L iff there is a Turing machine which computes the characteristic function of A in space O(log n).

Definition 4.2 Given a set A, we say that A ∈ P iff there is a Turing machine which for some constant k computes the characteristic function of A in time O(n^k).

Definition 4.3 Given a set A, we say that A ∈ PSPACE iff there is a Turing machine which for some constant k computes the characteristic function of A in space O(n^k).

There are some relations between the given complexity classes.

Theorem 4.4 L ⊂ PSPACE.

Proof: The inclusion is obvious. That it is strict follows from Theorem 3.14.

Theorem 4.5 P ⊆ PSPACE.

Proof: This is also obvious since a Turing machine cannot use more space than time.

Theorem 4.6 L ⊆ P.

Proof: This follows from Lemma 3.12, since if S(n) ≤ c log n and we assume that the machine uses a three letter alphabet, has k work-tapes and Q states and always halts, then we know it runs in time at most nQ(c log n)^k 3^{c log n} ∈ O(n^{2+c log 3}), where we used that (log n)^k ∈ O(n) for any constant k. We can conclude that a machine which runs in logarithmic space also runs in polynomial time.

The inclusions given in Theorems 4.5 and 4.6 are believed to be strict but this is not known. Of course, it follows from Theorem 4.4 that at least one of the inclusions is strict, but it gives no idea as to which one it is.

We have to investigate whether the defined complexity classes are artifacts of the particulars of Turing machines as a computational model or if they are genuine classes of functions which are more or less independent of the model of computation. Turing machine seems incredibly inefficient and thus we will compare it to a model of computation which is more or less a normal computer (programmed in assembly language). This type of computer is called a Random Access Machine (RAM) and a pictured is given i Figure 2. The same argument applies here. that we had defined a class of functions which captured a property of the functions rather than a property of the model. A RAM 40 .Figure 2: A Random Access Machine 4. The reader who is not worried about such questions is adviced to skip this section. This fact convinced us that we had found the right notion i.e.1 Is the definition of P model dependent? When studying mechanically computable functions we had several definitions which turned out to be equivalent.

A RAM has a finite control, an infinite number of registers and two accumulators. Both the registers and the accumulators can hold arbitrarily large integers. The finite control can read a program and has a read-only input-tape and a write-only output-tape. We will let r(i) be the content of register i and ac1 and ac2 the contents of the accumulators. In one step a RAM can carry out the following instructions:

1. Load something into an accumulator, e.g. ac1 = r(k) for constant k or ac1 = r(ac1); similarly for ac2.

2. Store the content of an accumulator, r(k) = ac1 for constant k or r(ac2) = ac1; similarly for ac2.

3. Add, subtract, divide (integer division) or multiply the two numbers in ac1 and ac2; the result ends up in ac1.

4. Make conditional and unconditional jumps. (Condition ac1 > 0 or ac1 = 0.)

5. Read input, ac1 = input(ac2).

6. Write an output.

7. Use constants in the program.

8. Halt.

One might be tempted to let the time used by a RAM be the number of operations it does (the unit-cost RAM). This turns out to give a quite unrealistic measure of complexity and instead we will use the logarithmic cost model. The size of a computer word is bounded by a constant and operations on larger numbers require us to access a number of memory cells which is proportional to the logarithm of the numbers used. This actually agrees quite well with our everyday computers.

Definition 4.7 The time to do a particular instruction on a RAM is 1 + log(k + 1) where k is the least upper bound on the integers involved in the instruction. The time for a computation on a RAM is the sum of the times for the individual instructions.
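To illustrate the cost model with our own numbers: adding two m-bit integers (integers of size about 2^m) is a single instruction of cost 1 + log(2^m + 1) ≈ m. Thus in the logarithmic cost model one operation on m-bit operands is charged roughly m, which matches the intuition that a real computer with fixed word size needs on the order of m memory accesses to handle an m-bit number.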

To define the amount of memory used by a RAM on a particular operation, let us assume that the initial contents of all the registers are 0. Then we have:

Definition 4.8 The space used by a RAM during a computation is the maximum of

log(ac1 + 1) + log(ac2 + 1) + Σ_{r(i)≠0} log(i + r(i))

during the computation.

Intuitively the RAM seems more powerful than a Turing machine. Next let us see that in fact a Turing machine is not that much less powerful than a RAM. We will not try to prove exactly this, but only to establish strong enough results to show that the class P is well defined.

Theorem 4.9 If a Turing machine can compute a function in time T(n) and space S(n), for T(n) ≥ n and S(n) ≥ log n, then the same function can be computed in time O(T^2(n)) and space O(S(n)) on a RAM.

Proof: (Outline) Assume for simplicity that the Turing machine just has one work-tape and that it uses the alphabet {0, 1, B}. The RAM will simulate the computation of the Turing machine step by step. It will code the content of the work-tape as an integer and store this integer in register 1, the position of the head on the work-tape(s) in register 2, the current state of the Turing machine in register 3 and the position of the head on the input-tape in accumulator 2. Observe that we need to store the entire contents of the work-tape in one register to conserve space. If we instead stored the content of square i in register i, the total space used would be O(S(n) log S(n)). The running time would be improved to O(T(n) log T(n)), but for the present purposes it is more important to keep the space small. To simulate a step of the Turing machine the RAM gets the appropriate information from the work-tape by an integer division and then it follows the transition described by the next-step function. The cost of the simulation of an individual step is the size of the integers involved and this is bounded by O(S(n)). Since we have at most T(n) steps and S(n) ≤ T(n), the bound for the running time follows. The bound for the space used is obvious.

Theorem 4.10 If a function f can be computed by a RAM in time T(n) and space S(n) then f can be computed in time O(T^2(n)) and space O(S(n)) on a Turing machine.

Proof: (Outline) As many other proofs this is not a very thrilling simulation argument, which we usually tend to omit. However, since the result is central in that it proves that P is invariant under change of model, we will at least give a reasonable outline of the proof. The way to proceed is of course to simulate the RAM step by step. Assume for simplicity that we do the simulation on a Turing machine which, apart from its input-tape and output-tape, has 4 work-tapes. Three of the four work-tapes will correspond to ac1, ac2 and the registers, respectively, while the fourth tape is used as a scratch pad. A schematic picture is given in Figure 3. The register tape will contain pairs (i, r(i)) where the two numbers are separated by a B. Two different pairs are separated by BB. If some i does not appear on the register tape this means that r(i) = 0.

The RAM-program is now translated into a next-step function of the Turing machine. Each line is translated into a set of states and transitions between the states as indicated by Figure 4. We will define the Turing machine pictorially by having circles indicate states. Inside the circle we write the tape(s) we are currently interested in, and the labeled arrows going out of the circle indicate which states to proceed to, where the label indicates the current symbol(s). Rectangular boxes indicate subroutines; a special subroutine is "Rew" which rewinds the register tape, i.e. moves the head to the beginning; the same operation also applies to other tapes. Let us give a few examples of how to simulate some particular instructions.

1. If the instruction is a jump-instruction we just make the next-step function take the next state, which is the first state of the set of states corresponding to that line. (See Figure 5.)

2. If the jump is conditional on the content of ac1 being 0, then we just search the ac1-tape for the symbol 1. If we do not find any 1 before we find B, the next-step function directs us to the given line and otherwise we proceed with the next line. (See Figure 6.)

3. If the instruction is an arithmetical step, we just replace it by a Turing machine which computes the arithmetical step using the ac1 and ac2 tapes as inputs and the scratch pad tape as work-tape.

Figure 3: A TM simulating a RAM

Figure 4: Basic picture

Figure 5: The jump instruction

Figure 6: Conditional jump

4. Let us just give an outline of how to load r(ac2) into ac1. Clearly, what we want to do is to look for the content of ac2 as the first part of any pair on the register tape. If we find that no such pair exists then we should load 0 into ac1. A description of this is given in Figure 7.

5. Finally let us indicate how to store ac1 into register ac2. To do this we scan the register-tape to find out the present value of r(ac2). If r(ac2) = 0 previously this is easy: if ac1 ≠ 0 we store the pair (ac2, ac1) at the end of the register-tape and otherwise we do nothing. If r(ac2) ≠ 0 we erase the old copy (ac2, r(ac2)) and then move the rest of the content of the register-tape left to avoid empty space. After we have moved the information we write (ac2, ac1) at the end (provided ac1 ≠ 0).

Let us analyze the efficiency of the simulation. The space used by the Turing machine is easily seen to be bounded by

O( log(ac1 + 1) + log(ac2 + 1) + Σ_{r(i)≠0} (log(i + 1) + log(r(i) + 1) + 3) )

and thus the simulation works in O(S(n)) space. To analyze the time needed for the simulation we claim that one can do multiplication and integer division of two m-digit numbers in time O(m^2) on a Turing machine. This implies that any arithmetical operation can be done in a factor O(S(n)) more time on the Turing machine than on the RAM.

Figure 7: Loading instruction

The storing and retrieving of information can also be done in time O(S(n)), and using S(n) ≤ T(n), Theorem 4.10 follows.

Using Theorems 4.9 and 4.10 we see that P, L and PSPACE are the same whether we use Turing machines or RAMs in the definitions. This turns out to be true in general, and this gives us a very important principle which we can formalize as a complexity theoretic version of Church's thesis.

Complexity theoretic version of Church's thesis: The complexity classes L, P and PSPACE remain the same under any reasonable computational model.

The above statement also remains true for all other complexity classes that we will define throughout these notes, and we will frequently implicitly apply the above thesis. When designing algorithms it is much easier to describe and analyze the algorithm if we use a high level description. On the other hand, when we argue about computation it is much easier to work with Turing machines since their local behavior is so easy to describe. By virtue of the above thesis we can take the easy road to both things and still be correct.

We have defined L, P and PSPACE as families of sets. We will every now and then abuse this notation and say that a function (not necessarily {0, 1}-valued) lies in one of these complexity classes. This will just mean that the function can be computed within the implied resource bounds.

4.2 Examples of members in the complexity classes.

Example 4.11 Given two n-digit numbers x and y written in binary, compute their sum.

This can clearly be done in time O(n) as we all learned in first grade. It is also quite easy to see that it can be done in logarithmic space. This works as follows. If we have x = Σ_{i=0}^{n−1} x_i 2^i and y = Σ_{i=0}^{n−1} y_i 2^i then x + y is computed by the following program:

carry = 0
For i = 0 to n − 1
    bit = x_i + y_i + carry
    carry = 0
    If bit ≥ 2 then carry = 1, bit = bit − 2
    write bit
next i
write carry

This can clearly be done in O(log n) space and thus addition belongs to L. The only things that need to be remembered are the counter i and the values of bit and carry.
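For concreteness, here is the same program as runnable Python (our own transcription); the numbers are given as lists of bits with the least significant bit first, mirroring x_i and y_i:

def add(xbits, ybits):
    out, carry = [], 0
    for xi, yi in zip(xbits, ybits):
        bit = xi + yi + carry
        carry = 1 if bit >= 2 else 0
        out.append(bit % 2)
    out.append(carry)
    return out

print(add([0, 1, 1], [1, 1, 0]))   # 6 + 3 = 9, i.e. [1, 0, 0, 1]

Only the loop counter and the two one-bit variables survive between iterations, which is the whole point of the O(log n) space bound.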

Example 4.12 Given two n-digit numbers x and y written in binary, compute their product.

This can again be done in P by first grade methods, and if we do it as taught, it will take us O(n^2) (this can be improved by more elaborate methods). In fact we can also do it in L:

carry = 0
For i = 0 to 2n − 2
    low = max(0, i − (n − 1))
    high = min(n − 1, i)
    For j = low to high, carry = carry + x_j ∗ y_{i−j}
    write lsb(carry)
    carry = carry/2
next i
write carry with least significant bit first

If one looks more closely at the algorithm one discovers that it is the ordinary multiplication algorithm, where one saves space by computing a number only when it is needed. The only slightly nontrivial thing to check in order to verify that the algorithm does not use more than O(log n) space is to verify that carry always stays less than 2n. We leave this easy detail to the reader.

One might be tempted to think that also division could be done in L. However, it is not known whether this is the case. Another very easy problem that is not known to be doable in L: given an integer in base 2, convert it to base 3.
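Again a runnable Python transcription of the program above (ours), with bits least significant first; lsb(carry) is carry % 2 and carry/2 is integer division:

def multiply(x, y):
    n = len(x)
    out, carry = [], 0
    for i in range(2 * n - 1):
        low, high = max(0, i - (n - 1)), min(n - 1, i)
        for j in range(low, high + 1):
            carry += x[j] * y[i - j]
        out.append(carry % 2)    # write lsb(carry)
        carry //= 2
    while carry:                 # write the remaining bits of carry
        out.append(carry % 2)
        carry //= 2
    return out

print(multiply([0, 1, 1], [1, 1, 0]))   # 6 * 3 = 18, i.e. [0, 1, 0, 0, 1]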

Example 4.13 Given two n-bit integers x and y, compute their greatest common divisor.

We will show that this problem is in P and in fact give two different algorithms to show this. First the old and basic algorithm, Euclid's algorithm. Assume for simplicity that x > y.

a = x
b = y
While b ≠ 0 do
    find q and r such that a = bq + r, 0 ≤ r < b
    a = b
    b = r
od
write a

The algorithm is correct since if d divides x and y then clearly d divides all a and b. On the other hand, if d divides any pair a and b then it also divides x and y. To analyze the algorithm we have to focus on two things, namely the number of iterations and the cost of each iteration. First observe that for each iteration the numbers get smaller, and thus we will always be working with numbers with at most n digits. The work in each iteration is essentially a division and this can be done in O(n^2) bit operations. The fact that numbers get smaller at each iteration implies that there are at most 2^n iterations. This is not sufficient to get a polynomial running time and we need the following lemma.

Lemma 4.14 Let a and b have the values a0 and b0 at one point in time in Euclid's algorithm and let a2 and b2 be their values two iterations later. Then a2 ≤ a0/2.

Proof: Let a1 and b1 be the values of a and b after one iteration. Then if b0 < a0/2 we have a2 < a1 = b0 < a0/2 and the conclusion of the lemma is true. On the other hand, if b0 ≥ a0/2 then we will have a2 = b1 = a0 − b0 ≤ a0/2 and thus we have proved the lemma.

The lemma implies that there are at most 2n iterations, and thus the total complexity is bounded by O(n^3). It is, however, possible to do better (without applying any fancy techniques) by the following observation. If you use standard long division (with remainder) to find q then the complexity is actually O(ns) where s is the number of bits in q.

Thus if q is small we can do each iteration significantly faster. On the other hand, if q is large then it is easy to see that the numbers decrease more rapidly than given by the above lemma. If one analyzes this carefully one actually gets complexity O(n^2).

Let us give another algorithm for the same problem. This algorithm is called "Binary GCD". Let 2^{d_x} be the highest power of 2 that divides x, and define d_y similarly.

Set a = x2^{−d_x} and b = y2^{−d_y}. If b < a interchange a and b.
While b > 1 do
    Either a + b or a − b is divisible by 4. Set r to the number that is divisible by 4 and set a = max(b, r2^{−d_r}) and b = min(b, r2^{−d_r}), where 2^{d_r} is the highest power of 2 that divides r.
od
write a2^{min(d_x, d_y)}

The algorithm is correct by a similar argument as for the previous algorithm. To analyze the complexity of the algorithm we again have to study the number of iterations and the cost of each iteration. Again it is clear that the numbers decrease in size and thus we will never work with numbers with more than n digits. To analyze the number of iterations we have:

Lemma 4.15 Let a and b have the values a0 and b0 at one point in time in the binary GCD algorithm and let a2 and b2 be their values two iterations later. Then a2 ≤ a0/2.

Proof: If a1 and b1 are the numbers after one iteration then b1 ≤ (a0 + b0)/4 and a1 ≤ a0. Since b0 ≤ a0 this implies that a2 ≤ max(b1, (a1 + b1)/4) ≤ a0/2.

Thus again we can conclude that we have at most 2n iterations. Each iteration only consists of a few comparisons and shifts if the numbers are coded in binary, and thus it can be implemented in time O(n). Hence the total work is bounded by O(n^2). This implies that binary GCD is a competitive algorithm, in particular since the individual operations can be implemented very efficiently when the binary representation of integers is used. Let us just remark that the best known greatest common divisor algorithm for integers runs in time O(n(log n)^2 log log n) and is based on the Euclidean algorithm.
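For reference, here is a runnable close relative of the algorithm above (the standard binary GCD, often attributed to Stein; this variant is our choice, not the exact loop from the notes), using only comparisons, subtractions and shifts:

def binary_gcd(x, y):
    if x == 0 or y == 0:
        return x + y
    shift = 0
    while (x | y) & 1 == 0:      # factor out 2^min(d_x, d_y)
        x >>= 1; y >>= 1; shift += 1
    while x & 1 == 0:
        x >>= 1
    while y:
        while y & 1 == 0:
            y >>= 1
        if x > y:
            x, y = y, x
        y -= x                   # both odd, so y - x is even
    return x << shift

print(binary_gcd(48, 18))   # 6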

It is unknown if integer greatest common divisor can be solved in small space.

Example 4.16 Given a nonsingular integer matrix M with entries which are n-bit numbers, solve Mx = b for some vector of n-bit numbers.

It might seem like this problem obviously is in P, since Gaussian elimination is well known to be doable in O(n^3) steps. However, there is something to check. We need to verify that the numbers do not get too large during the computation, i.e. that the rational numbers that appear can be represented. Let us investigate what the matrix looks like after we have eliminated the i'th variable. To analyze what happens to the numbers, assume for notational simplicity that the upper left i × i matrix is non-singular for any i, and thus we will be able to perform Gaussian elimination without pivoting. Suppose the original matrix looks like

( A  B )
( C  D )

where A is the upper i × i matrix. After the i'th variable has been eliminated the matrix will be

( A^{−1}    0 ) ( A  B )   ( I   A^{−1}B        )
( −CA^{−1}  I ) ( C  D ) = ( 0   −CA^{−1}B + D )

where I is the i × i identity matrix. Thus, using the following lemma, we can bound the rational numbers involved in the computation.

Lemma 4.17 If A is a nonsingular n × n integer matrix with entries bounded in size by m then A^{−1} has rational entries with numerator and denominator bounded by m^n n^{n/2}.

Proof: Any entry of A^{−1} is an (n − 1) × (n − 1) subdeterminant of A divided by the determinant of A. Thus we just need to bound the size of determinants of integer matrices. A determinant can be interpreted as the volume of the parallelepiped spanned by the rows. This volume is bounded by the product of the lengths of the row vectors (this is not a formal proof; the inequality indicated in this sentence is known as Hadamard's inequality), which in its turn is bounded by (m√n)^n.

It follows from the lemma that the rational numbers involved in Gaussian elimination can be represented by O(n^2) binary digits. Since Gaussian elimination can be done in O(n^3) operations and each operation can be performed in time O(n^4) (if we use classical arithmetic), we get total complexity O(n^7).

Example 4.18 The determinant of an n × n matrix can be written as

Σ_{π∈S_n} sg(π) Π_{i=1}^{n} x_{i,π(i)}

where the sum is over all permutations π of the numbers 1 through n and sg(π) is the signum of the permutation (if you do not know the signum function, just forget this definition of the determinant). The determinant can be computed by Gaussian elimination and thus by the previous example it is in P. The permanent is a very related number which is defined as

Σ_{π∈S_n} Π_{i=1}^{n} x_{i,π(i)}.

Thus we have just removed the signum part of the definition. The definition looks simpler, but it removes the nice invariance under the row operations of Gaussian elimination. There is no known polynomial time algorithm for computing the permanent and there is good reason to believe that there is no such algorithm (the problem is #P-complete; we will get to this complexity class later). It is not hard to see that the problem is in PSPACE, and we will not give the most efficient algorithm but rather the easiest to understand.

per = 0
For 1 ≤ π(1), π(2), . . . , π(n) ≤ n
    If π(i) ≠ π(j) for i ≠ j, per = per + Π_{i=1}^{n} x_{i,π(i)}

Thus we just generate all n-tuples of numbers between 1 and n, check if each is a permutation and, if it is, add the corresponding term to the sum. All the space required is to store the variables π(i) and per. The space needed for the former is bounded by O(n log n) while the latter is bounded by the size of the answer, and if we assume that all entries in the original matrix are bounded by 2^n, the permanent is bounded by 2^{n^2} n! and thus can be stored in space O(n^2).
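The brute force procedure is short enough to run as written; here is a Python transcription (ours), looping over all n-tuples exactly as in the text:

from itertools import product

def permanent(X):
    n = len(X)
    per = 0
    for pi in product(range(n), repeat=n):   # all n-tuples
        if len(set(pi)) == n:                # check it is a permutation
            term = 1
            for i in range(n):
                term *= X[i][pi[i]]
            per += term
    return per

print(permanent([[1, 2], [3, 4]]))   # 1*4 + 2*3 = 10

Of course this takes about n^n steps; the point of the example is the small space, not the running time.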

It is interesting to note that there is a polynomial time algorithm to decide whether the permanent of a 0,1 matrix is nonzero, but that it seems hard to compute it.

Example 4.19 Given a prime number p and a number a, find x (if one exists) such that x^2 ≡ a (mod p).

Let us first recall some basic facts from number theory. Remember that, by Fermat's little theorem, x^{p−1} ≡ 1 (mod p) for any number x not divisible by p. Assume that we have an odd prime p (the case p = 2 being easy); then a can be written as a square mod p iff a^{(p−1)/2} ≡ 1 (mod p) (i.e. we have a solution iff this condition holds). Now if p ≡ 3 (mod 4) and if a^{(p−1)/2} ≡ 1 (mod p), then if we set x = a^{(p+1)/4} we have

x^2 ≡ a^{(p+1)/2} ≡ a · a^{(p−1)/2} ≡ a (mod p).

Thus taking square roots when p ≡ 3 (mod 4) is just computing a power. Let us investigate how much resources are needed to compute a^{(p+1)/4} (mod p). Assume that p and a are at most n-digit numbers. Then computing a^{(p+1)/4} by successive multiplications would require on the order of 2^n multiplications. It is more efficient to first compute a^{2^i} (mod p) for 0 ≤ i ≤ n in n squarings. Now we write (p+1)/4 in binary and we compute a^{(p+1)/4} by multiplying together the powers a^{2^i} for the i's corresponding to 1's in the binary expansion of (p+1)/4. Observe here that since we are only interested in the result (mod p), we can reduce mod p after each squaring, and thus we will never need to work with numbers with more than 2n digits. Hence we get O(n) multiplications of O(n)-bit numbers and this can be done in total time O(n^3). Thus we have proved that taking square roots modulo primes p with p ≡ 3 (mod 4) can be done in polynomial time. It is not known if this is true in general for primes p ≡ 1 (mod 4) or when p is a composite number. We will return to these questions later in these notes.
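In runnable form (our transcription), Python's built-in modular exponentiation pow(a, e, p) performs exactly the repeated squaring described above:

def sqrt_mod(a, p):
    # assumes p is an odd prime with p % 4 == 3
    if pow(a, (p - 1) // 2, p) != 1:
        return None              # Euler's criterion: a is not a square mod p
    return pow(a, (p + 1) // 4, p)

print(sqrt_mod(2, 7))   # 4, and indeed 4*4 = 16 ≡ 2 (mod 7)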

Example 4.20 Given a directed graph G with n nodes and two distinguished nodes s and t in G, is it possible to find a directed path from s to t?

This problem is in P by the following straightforward algorithm.

Set R = {s}. Set Rnew to the set of nodes reachable from s in one step, i.e. the set of v such that there is an edge (s, v).
While Rnew is not empty do
    Take an element w in Rnew and move it into R. Also take any nodes reachable from w in one step which do not belong to either R or Rnew and put them into Rnew.
od
If t ∈ R say yes, otherwise say no.

We claim that when the algorithm ends all the nodes reachable from s are in R. It is important to note that Rnew contains the set of nodes known to be reachable from s but whose neighbors have not yet been put into R or Rnew. We leave the verification of this to the reader.

To see that the problem is in P let us analyze the time needed for the algorithm. Since each time the loop is executed we put one node into R and we never remove anything from R, the loop is only executed n times. Each execution of the loop can be done in time n since we just have to investigate the neighbors of w. Thus the complexity is bounded by O(n^2).

Remark 4.21 Observe that in fact R is the set of nodes reachable from s, and thus we have really solved a more general problem.
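The procedure is, in modern terminology, just a graph search; here it is in runnable Python (our own transcription, with the graph as an adjacency dictionary):

def reachable(adj, s, t):
    R, Rnew = set(), {s}
    while Rnew:
        w = Rnew.pop()            # move w from Rnew into R
        R.add(w)
        for v in adj.get(w, []):
            if v not in R and v not in Rnew:
                Rnew.add(v)
    return t in R

print(reachable({'s': ['a'], 'a': ['t'], 't': []}, 's', 't'))   # True

Next we turn to the definition of non-deterministic computation.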

5 Nondeterministic computation

The two most famous complexity classes are probably P and NP. We have already defined P, and to define NP we need the concept of a nondeterministic Turing machine. The formal definition might make nondeterminism seem like a paper-tiger which has nothing to do with reality, but it will soon be clear that this is not the case.

5.1 Nondeterministic Turing machines

The heart of a normal, deterministic Turing machine is the next-step function, which tells the machine what to do in a given situation. A nondeterministic Turing machine also has a next-step function, but it is multivalued. By this we mean that in a given situation the machine might do several different things. This implies that on a given input there are several possible computations and, in particular, there might be several different possible outputs. This calls for a definition. Since we will only be working with {0, 1} functions, we will think of nondeterministic machines as recognizing sets, i.e. the set of inputs for which there is an accepting computation.

Definition 5.1 A nondeterministic Turing machine can only compute functions which take the values 0 and 1. The machine takes the value 1 on (or accepts) an input x iff there is some possible computation on input x which gives output 1. If there is no computation that gives the output 1, the machine takes value 0 (or rejects the input).

Example 5.2 Suppose we want to recognize composite numbers, i.e. numbers which are not prime and hence can be written as the product of two numbers both greater than or equal to 2. This can be done by a nondeterministic machine as follows: On input x, write y1 and y2 nondeterministically with |yi| ≤ |x| for i = 1, 2. Now the machine gives output 1 iff y1y2 = x and yi > 1 for i = 1, 2. Writing down y1 is done by allowing the machine to move left for |x| steps while at each step either writing down 0, 1 or an endmarker. The machine constructs y2 in the same way.

Let us see that the algorithm is correct. If x is composite then there is some computation that outputs 1, namely if x = ab then when y1 = a and y2 = b we will get the output 1. On the other hand, if x is prime there is

no possible computation that gives output 1, since if y1y2 = x then by the definition of prime one of the yi is 1.

Observe that when we are considering deterministic computation, recognizing primes and recognizing composite numbers are very similar, since one just changes the output routine to reverse the meaning of 0 and 1. When it comes to nondeterministic computation there is a tremendous difference. If, for instance, you change the output of the machine recognizing composite numbers defined above, then you get a machine that accepts everything. It is important to keep this non-symmetry in mind.

The definitions of space and time need to be slightly modified since there is no unique computation given the input.

Definition 5.3 A nondeterministic Turing machine M runs in time T(n) if for every input of length n, every computation of M halts within T(n) steps.

Definition 5.4 A nondeterministic Turing machine M runs in space S(n) if for every input of length n, every computation of M visits at most S(n) squares on the work-tape.

Since nondeterministic Turing machines can always be made to have output 1 or 0, the size of the answer will always be small. This implies that we do not need an output-tape. Some proofs will be formally easier if we assume that the output is written on the work-tape and therefore we will assume this.

With these basic definitions done we can proceed to define some complexity classes.

Definition 5.5 Given a set A, we say that A ∈ NP iff there is a nondeterministic Turing machine which accepts A and runs in time O(n^k) for some constant k.

Definition 5.6 Given a set A, we say that A ∈ NL iff there is a nondeterministic Turing machine which accepts A and runs in space O(log n).

Definition 5.7 Given a set A, we say that A ∈ NPSPACE iff there is a nondeterministic Turing machine which accepts A and runs in space O(n^k) for some constant k.

We have similar theorems to Theorems 4.4, 4.5 and 4.6.

Theorem 5.8 NL ⊂ NPSPACE.

Proof: The inclusion is obvious. It is at this point not clear that it is strict. This will follow from results later on and we leave it for the time being.

Theorem 5.9 NP ⊆ NPSPACE.

Proof: This follows since also nondeterministic Turing machines cannot use more space than time.

Theorem 5.10 NL ⊆ NP.

Proof: The proof is quite close to the proof of the corresponding deterministic statement, but we need an extra observation. The time bound given in Lemma 3.12 is no longer true for nondeterministic computation. The reason for this is that even if a nondeterministic machine is in the same configuration twice it need not loop forever. The reason is that it can make different non-deterministic choices the second time around. However, it is easy to see that if a nondeterministic machine has an accepting computation then it has an accepting computation which visits each configuration at most once. This implies that we can impose the time-restriction given by Lemma 3.12 without changing the set of inputs accepted. This proves Theorem 5.10.

Let us now proceed to some examples of members in the newly defined complexity classes.

Example 5.11 Composite numbers are in NP, since the nondeterministic algorithm given previously is easily seen to run in time O(n^2). It might be tempting to guess that composite numbers are in NL, since the essential part of the algorithm is a multiplication and we know from before that multiplication can be done in L. The reason that the given algorithm does not work is that multiplication is in L only when the input is on a separate input-tape where we can access any part of the input when it is needed. In the present situation we have to write down the two factors on the work-tape and there is no room to do this. Whether composite numbers are in NL is not known, however.

Example 5.12 Traveling Sales Person (TSP): Given n cities, a symmetric integer n × n matrix (m_{ij})_{i,j=1}^{n}, where m_{ij} denotes the distance between cities i and j, and an integer K. Is there a tour which visits all cities exactly once and is of total length ≤ K?

TSP is in NP, as can be seen from the following non-deterministic algorithm. Nondeterministically write down numbers b_i, i = 1, 2, . . . , n, each with at most log n + 1 digits. If 1 ≤ b_i ≤ n for all i and b_i ≠ b_j for i ≠ j, then compute

Σ_{i=1}^{n−1} m_{b_i b_{i+1}} + m_{b_n b_1}.

If this number is less than K output 1 and in all other cases output 0. Observe that the conditions 1 ≤ b_i ≤ n and b_i ≠ b_j for i ≠ j imply that the b_i define a tour starting in b_1, tracing through the b_i for increasing i and then returning to b_1. If this tour is short enough the machine accepts the input. It is easy to check that the algorithm runs in polynomial time and thus we have proved that TSP ∈ NP.

Example 5.13 Boolean formula satisfiability: Given a Boolean formula consisting of Boolean variables x_i, i = 1, 2 . . ., ∧-gates (logical conjunction), ∨-gates (logical disjunction) and negation-gates, is there a setting of the variables that satisfies the formula?

This problem is in NP. Namely, nondeterministically write down the value of every variable and then write 1 iff the guessed assignment satisfies the formula. To check that this procedure runs in polynomial time one has to observe that, given a formula and an assignment of all the variables, one can check whether the assignment satisfies the formula in polynomial time by the obvious procedure. This is easy and we leave it as an exercise.
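Both examples follow the same pattern: a guess followed by a deterministic polynomial time check. As our own illustration of the format, here is the deterministic check for TSP in Python (cities numbered from 0, the guessed tour given as the list b):

def verify_tour(m, b, K):
    n = len(m)
    if sorted(b) != list(range(n)):      # each city exactly once
        return False
    length = sum(m[b[i]][b[(i + 1) % n]] for i in range(n))
    return length <= K

m = [[0, 3, 4], [3, 0, 5], [4, 5, 0]]
print(verify_tour(m, [0, 1, 2], 12))    # True: the tour has length 3 + 5 + 4 = 12

A nondeterministic machine accepts iff some guessed b makes the check succeed; this is exactly the viewpoint formalized in Theorem 5.15 below.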

Let us return to the problem of graph reachability (previously considered in Section 4.2):

Example 5.14 Directed graph reachability: Given a directed graph G and two nodes s and t of G, is there a directed path from s to t? We present an algorithm that uses only logarithmic space, and hence we need to be slightly careful about how the input is presented. We assume that the graph is given as a list of the edges. Now we have the following algorithm. Suppose the graph has n nodes.

Set H = s
For i = 1, 2, ..., n
  If H = t print 1 and halt.
  If there is no edge out of H print 0 and halt.
  Choose nondeterministically one of the edges leaving H and set H to the endpoint of this edge.
Next i
Print 0.

The conditions given in the algorithm are easily checked given the assumed encoding of G. This procedure uses only logarithmic space, since all we need to remember is the counter i and the value of H. To verify that the algorithm is correct, first observe that by construction H is always a node that can be reached from s. Thus, since the machine outputs 1 only when H = t, we know that when the machine takes the value 1 then t is reachable from s. On the other hand, suppose that t is reachable from s. Then there is a path v_1, v_2, ..., v_k where v_1 = s, v_k = t and there is an edge from v_i to v_{i+1} for every i. We can assume that k ≤ n, since if v_i = v_j for i < j then we can eliminate v_{i+1} through v_j and still maintain a path. Then there is a possibility that H = v_i for every i, and thus there is a possibility that the machine outputs 1. The argument implies that the algorithm recognizes exactly the graphs that have a path from s to t, and therefore directed graph reachability is in NL.

We will not give any example of a language in NPSPACE, and in the next section it will be clear why.
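The following is an illustrative rendering of the walk above (it is not the notes' machine model): the nondeterministic choice is replaced by a random one, so a single run may miss an existing path, and acceptance by the nondeterministic machine corresponds to some run printing 1. Note that between steps only i and H are stored.

import random

def guess_path(edges, n, s, t):
    # edges is a list of (u, v) pairs; n is the number of nodes.
    H = s
    for i in range(n):
        if H == t:
            return 1
        out = [v for (u, v) in edges if u == H]   # scan the edge list
        if not out:
            return 0
        H = random.choice(out)   # stands in for the nondeterministic choice
    return 0

edges = [(0, 1), (1, 2), (2, 3)]
print(any(guess_path(edges, 4, 0, 3) for _ in range(100)))   # True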

Before we continue to establish some of the more formal properties of NP, let us be informal for a while. The class P is intuitively thought of as the class of problems which are computable in practice, i.e., within moderate amounts of computation we can solve reasonably large problems. That this is the case is not clear from the definition, and one could object that although n^100 is polynomial, it grows too quickly. In practice, however, this anomaly does not seem to appear, and thus if a problem has a polynomial time solution then the exponent tends to be small and the algorithm is usually efficient in practice. In a similar way NP can be thought of as the class of problems where, if you knew the solution, it could be verified efficiently. In an abstract meaning, "the solution" must here be interpreted as the set of nondeterministic choices that makes the machine accept. In practice, however, "the solution" is much more concrete. Thus the nondeterministic choices have in our examples corresponded to the factors, a short tour, and a satisfying assignment, respectively.

As we have seen, the recursive sets corresponded to functions that could be computed, while the recursively enumerable sets corresponded to statements that could be verified. The latter statement follows from the fact that if A is r.e. and x ∈ A, then this can be verified since we just wait until x is listed. On the other hand, if x ∉ A this cannot be verified, since we never know if we just haven't waited long enough to see it listed. In view of this one can say that recursive and r.e. have the same relation as P and NP, and thus it is not surprising that we can prove some similar theorems.

Theorem 5.15 Given a set A, then A ∈ NP iff there is a language B ∈ P and a constant k such that

$$x \in A \Leftrightarrow \exists y, |y| \le |x|^k, (x, y) \in B.$$

Proof: Let us first prove that if there is such a B then A ∈ NP. A nondeterministic algorithm for membership in A just consists of guessing a y of the desired length and then accepting iff (x, y) ∈ B. If B can be recognized in time O(n^c), this procedure runs in time O(n^{(1+k)c}), which is polynomial.
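As a small sketch of how the characterization is used (the predicate B below is a toy invented for the example), note that replacing the nondeterministic guess by exhaustive search gives a deterministic but exponential-time procedure, essentially the loop that will reappear in Theorem 6.6.

from itertools import product

def in_A(x, B, k):
    # Deterministic search over all candidate strings y with |y| <= |x|**k.
    for length in range(len(x) ** k + 1):
        for bits in product('01', repeat=length):
            if B(x, ''.join(bits)):
                return True
    return False

# Toy B: y is a valid witness iff it is the reverse of x.
B = lambda x, y: y == x[::-1]
assert in_A('01', B, 1)   # witness y = '10'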

To see the converse, we will need the concept of a computation tableau.

Definition 5.16 A computation tableau is a complete description of a computation of a Turing machine. It consists of all configurations of the Turing machine on a specific input (i.e., one configuration for every time step), starting with the input configuration and ending with the halting configuration.

The reason for the name is that we will think of it in the following way. Assume that the Turing machine has only one tape. Then we can think of its computation tableau as a two-dimensional array with time on one axis and the tape squares on the other. The position (i, j) of this tableau thus contains the symbol that is in the j'th square at time i. It also contains information about whether the head is there and, in such a case, which state it is in. A computation which starts with input x_1, x_2, ..., x_n on the input-tape and ends with only a 1 on the tape is given in Table 3.

Table 3: A computation tableau

Now we can return to the converse of Theorem 5.15. Suppose A is recognized by a one-tape Turing machine M in nondeterministic time n^c. Define B to be the set of pairs (x, y) such that y describes an n^c × n^c computation tableau of M on input x which ends in an accepting state. Then B satisfies the condition of the theorem with respect to A with k = 2c. We claim that B is in P. To see this, observe that to check whether a pair (x, y) is in B we basically have to check three things:

1. That the computation described by y starts with x on the input tape.
2. That the computation is legal for M.
3. That the computation accepts.

The first and the last conditions are easy to check, since they just talk about the contents of particular squares. Also, checking 2 is straightforward, since we have to check that the only square that changed value between two time steps is the square where the head was located, and also that the transition by the head was a possible transition given the next-step function of M. This finishes the proof.

Remark 5.17 One might be tempted to think that the relation given between NP and P in Theorem 5.15 would be true also for NL and L. As the interested reader can convince himself, this is probably not the case, as even if we restrict B to belong to L, the set of all A definable in this way is still all of NP.
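As a hedged illustration of the check of condition 2 (with an encoding of our own: a square holding the head is marked by a tuple), the sketch below verifies only the first part of legality, namely that squares away from the head are unchanged between consecutive time steps; checking the transitions themselves would be an equally local test.

def squares_legal(tableau):
    # tableau is a list of rows; a head square is a tuple (state, symbol).
    for t in range(len(tableau) - 1):
        row, nxt = tableau[t], tableau[t + 1]
        h = next(j for j, s in enumerate(row) if isinstance(s, tuple))
        for j in range(len(row)):
            if abs(j - h) > 1 and row[j] != nxt[j]:
                return False
    return True

row0 = (('q0', '1'), '0', 'B')
row1 = ('1', ('q0', '0'), 'B')
assert squares_legal([row0, row1])   # only squares near the head changed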

Thus we have given the theorem about NP and P corresponding to Theorem 2.19. Of the other theorems in Section 2.7, it is not known whether the analogue of Theorem 2.17 is true. (The general belief is that it is not.) There is a nice reduction theory and also a notion of complete sets, and we will return to these questions in Chapter 7.

6 Relations among complexity classes

Up to this point we have defined six complexity classes (L, NL, P, NP, PSPACE, and NPSPACE) and we have observed some relations, some obvious and some not obvious. In this section we will establish some more relations. For notational convenience, let TIME(T(n)) denote the class of languages that can be recognized in deterministic time T(n) and let NTIME(T(n)) be the class of languages that can be recognized in the same nondeterministic time. Similarly we define SPACE(S(n)) and NSPACE(S(n)).

Let us first observe that the option of nondeterminism will never hurt, and thus any deterministic complexity class is contained in the corresponding nondeterministic complexity class. This gives us three immediate theorems.

Theorem 6.1 L ⊆ NL.

Theorem 6.2 P ⊆ NP.

Theorem 6.3 PSPACE ⊆ NPSPACE.

In the next subsection we will prove the first nontrivial complexity result.

6.1 Nondeterministic space vs. deterministic time

The aim is to establish the following theorem.

Theorem 6.4 Suppose S(n) > log n and that S(n) is space constructible. Then NSPACE(S(n)) ⊆ TIME(2^{O(S(n))}).

Proof: Let A be a language that can be recognized by a nondeterministic Turing machine N which uses space at most S(n) on inputs of length n. We have to design a deterministic Turing machine that runs in time 2^{O(S(n))} and recognizes A. Consider the set of configurations of N. Remember that a configuration consists of the state of N, the positions of all its heads and the contents of the worktape. Assume for simplicity that N has only one worktape, a three letter alphabet, and Q states. By the argument in the proof of Lemma 3.12, there are at most |x| · Q · S(|x|) · 3^{S(|x|)} possible configurations that N may visit on input x. Let G_{x,N} be the following directed graph:

The nodes of G_{x,N} are the configurations of N, and there is an edge from configuration C_1 to configuration C_2 iff it is possible to go from C_1 to C_2 in one step on input x. G_{x,N} has one node C_st which corresponds to the initial configuration, and one or more configurations where N halts with output 1. We now claim that the machine accepts a given input exactly when there is a path from C_st to any of the configurations that end with output 1. This is fairly obvious and the verification is left to the reader.

By the above claim, G_{x,N} has at most 2^{O(S(|x|))} nodes, and using the fact that S is space constructible we see that G_{x,N} can be constructed in 2^{O(S(|x|))} time. Now it follows from the example in Section 4.2 that in time 2^{O(S(|x|))} it is checkable whether any configuration that outputs 1 can be reached from the initial configuration. Since this is equivalent to N accepting x, we have proved Theorem 6.4.

We have the following corollary.

Corollary 6.5 NL ⊆ P.

Proof: Just insert S(n) = O(log n) in Theorem 6.4.
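For illustration, here is a schematic version of this algorithm (the interface is invented; building actual Turing machine configurations is omitted): enumerate the configurations implicitly through a successor function and solve reachability by breadth-first search, which runs in time linear in the size of the configuration graph, i.e., 2^{O(S(n))}.

from collections import deque

def accepts(start, successors, accepting):
    # successors(c) lists all one-step successors of configuration c.
    seen, queue = {start}, deque([start])
    while queue:
        c = queue.popleft()
        if accepting(c):
            return True
        for d in successors(c):
            if d not in seen:
                seen.add(d)
                queue.append(d)
    return False

# Toy instance: configurations 0..7, nondeterministic steps +1 or +3 mod 8.
print(accepts(0, lambda c: [(c + 1) % 8, (c + 3) % 8], lambda c: c == 5))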

6.2 Nondeterministic time vs. deterministic space

This section has only one basic theorem.

Theorem 6.6 NP ⊆ PSPACE.

Proof: Remember the characterization of NP given in Theorem 5.15, i.e., given A ∈ NP there is a B ∈ P and a k such that x ∈ A ⇔ ∃y, |y| ≤ |x|^k, (x, y) ∈ B. This gives the following algorithm to determine whether x ∈ A:

found = 0
For y = 0, 1, ..., 2^{|x|^k} do
  If (x, y) ∈ B then found = 1
od
Write found

The algorithm is correct, since found will be 1 exactly when there is a short y such that (x, y) ∈ B. To see that the algorithm runs in polynomial space, observe that all we need to do is to keep track of y and to do the computation to check whether (x, y) ∈ B. Since this latter computation is polynomial time, we can do it in polynomial space, and once we have checked a given y we can erase the computation and use the same space for the next y.

6.3 Deterministic space vs. nondeterministic space

Nondeterministic computation seems very powerful, and it seems for the moment that complexity theory supports this intuition, at least in the case when we are focusing on time as the main resource. If, on the other hand, we focus on space, it turns out that nondeterminism only helps marginally.

Theorem 6.7 If S(n) is space-constructible and S(n) ≥ log n, then NSPACE(S(n)) ⊆ SPACE(O(S^2(n))).

This fact is usually referred to as Savitch's theorem and was first proved by W.J. Savitch in 1970.

Proof: Assume that A is accepted by the nondeterministic machine N in space S(n). We will again work with the configurations of N, and in fact, if you look closely, we solve the same graph problem as we did in the proof of Theorem 6.4. This time, however, we will be concerned with saving space, and thus we will never write down the graph explicitly.

Let C_1 and C_2 be any two configurations of N and let k be an integer. Then we will be interested in the predicate GET(C_1, C_2, k, x), which we will interpret as "On input x it is possible to get from configuration C_1 to configuration C_2 in time ≤ 2^k and without being in a configuration which uses more than S(|x|) space." (If we think about the graph in the proof of Theorem 6.4, this can be interpreted as "There is a path of length at most 2^k from node C_1 to node C_2.")

Assume for notational simplicity that N has a unique configuration where it halts with output 1. Let us call this configuration C_acc. Let C_st denote the start configuration of N and recall the argument in the proof of Theorem 5.10 that if a machine has an accepting computation

then there is an accepting computation which visits each configuration at most once and, in particular, the running time is bounded by the number of configurations. This implies that there is a constant c such that N accepts an input x iff GET(C_st, C_acc, cS(|x|), x) is true. Thus, all we have to do is to evaluate this predicate in small space, and to achieve this the following observation will be crucial:

$$GET(C_1, C_2, k, x) = \bigvee_C \big(GET(C_1, C, k-1, x) \wedge GET(C, C_2, k-1, x)\big).$$

The ∨ is here taken over all possible configurations C of N which use space at most S(|x|). The reason for the above relation is that if there exists a computational path from C_1 to C_2 of length at most 2^k which never uses more than S(|x|) space, then there is a midpoint on this path, and the configuration at this midpoint can be used as C. Conversely, if there is a C that fulfills the two conditions on the right-hand side of the above equation, then the two computations from C_1 to C and from C to C_2 can be concatenated to a computation from C_1 to C_2.

The above equation gives the following recursive algorithm to evaluate the predicate GET:

GET(C_1, C_2, k, x):
If k = 0, then check whether the next-step function of N allows a transition from C_1 to C_2 on input x in one step, and set GET accordingly,
else, for all configurations C which use space at most S(n): evaluate GET(C_1, C, k-1, x) and GET(C, C_2, k-1, x). If for some C both are true, set GET to true and otherwise to false,
endif.

By the above argument, x ∈ A iff GET(C_st, C_acc, cS(n), x), and thus to prove the theorem we need only calculate the amount of space needed to evaluate GET. We prove by induction that GET(C_1, C_2, k, x) can be evaluated in space D(k + 1)S(|x|) for some constant D. This is clearly true for k = 0, since all that needs to be done is to check if one of the constantly many possible next steps that N can do from C_1 will take it into C_2. To do the induction step, let us specify more closely how the above procedure works.
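A compact sketch of the recursion (with the configuration space abstracted away; step and configs are placeholders of our own): the recursion depth is O(S(n)) levels, and each level stores one configuration of size O(S(n)), which is exactly the O(S(n)^2) space bound.

def GET(c1, c2, k, step, configs):
    if k == 0:
        return c1 == c2 or step(c1, c2)
    return any(GET(c1, c, k - 1, step, configs) and
               GET(c, c2, k - 1, step, configs)
               for c in configs)

# Toy instance: configurations 0..7, one step goes from c to c + 1.
step = lambda a, b: b == a + 1
assert GET(0, 7, 3, step, range(8))   # a path of length 7 <= 2**3 exists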

We loop over all possible C, and to remember which C we are currently working on requires space dS(n) for some constant d. For each C we do two evaluations of GET with the parameter k - 1. These two evaluations are done sequentially, and thus we can first do one of the evaluations, remember the result, and then do the other evaluation in the same space. By the induction hypothesis, this implies that the computation for a fixed C can be done in space DkS(n), for a total of at most DkS(n) + dS(n) ≤ D(k + 1)S(n). Provided that D > d, the induction step is complete and thus we have completed the proof of Theorem 6.7.

We have two obvious corollaries of the above theorem.

Corollary 6.8 NPSPACE = PSPACE.

This explains why NPSPACE is not a very famous complexity class. We introduced it for symmetry purposes, and now that we have proved that we do not need it, we will forget it.

Corollary 6.9 NL ⊂ PSPACE, i.e., the inclusion of NL in PSPACE is strict.

Proof: By Theorem 6.7, everything in NL can be done in space O(log^2 n), and thus we get a strict inclusion by Theorem 3.14. Observe that Corollary 6.9 finishes the proof of Theorem 5.8 as promised before.

By now we have gathered some information about the relations between the complexity classes we have defined. Let us sum up the information in a theorem.

Theorem 6.10 L ⊆ NL ⊆ P ⊆ NP ⊆ PSPACE.

It is a sad fact for complexity theory that Theorem 6.10 reflects our total knowledge of the relation between the given complexity classes.

7 Complete problems

Even though Theorem 6.10 gives the present state of knowledge about the defined complexity classes, there are some important things to be said. The common belief today is that all the given inclusions are strict, but unfortunately we have not yet developed the machinery to prove this. One step on the way is to identify the hardest problems within each complexity class. This serves two purposes. Firstly, they will serve as candidates that can be used to prove strict inclusions. Secondly, proving a problem complete will give a good hint that it can probably not be placed in a lower complexity class, and thus is a good way to classify a problem. We will start by considering a very famous class of problems, the NP-complete problems.

7.1 NP-complete problems

To identify the hardest problems, we first need to define the concept of "not harder than". There are a couple of different ways to do this, but we will only consider one.

Definition 7.1 Let A and B be two sets. Then A ≤_p B (read as "A is polynomial time reducible to B") iff there is a polynomial time computable function f such that x ∈ A ⇔ f(x) ∈ B.

Clearly this definition is very close to Definition 2.21. The only difference is that we require the function f to be computable in polynomial time. We can now proceed to develop a reduction theory similar to the one described at the end of Section 2.7. Many proofs and theorems are similar. Instead of talking about recursive and recursively enumerable sets, we will talk about P and NP.

Theorem 7.2 If A ≤_p B and B ∈ P, then A ∈ P.

Proof: Suppose the function f in the definition of ≤_p can be computed in time O(n^c) and that B can be recognized in time O(n^k). Then, to check whether a given input x belongs to A, just compute f(x) and then check whether f(x) ∈ B. Computing f(x) is done in time O(|x|^c), and from this it also follows that |f(x)| ≤ O(|x|^c), which in its turn implies that f(x) ∈ B can be checked in time O(|x|^{ck}). Thus the procedure works in polynomial time, and we can conclude that A ∈ P.
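A tiny sketch of how the theorem is used in practice (f and in_B are placeholders): deciding A amounts to composing the reduction with a decision procedure for B.

def decide_A(x, f, in_B):
    return in_B(f(x))

# Toy instance: A = strings with an even number of 1s, B = even numbers,
# and the reduction f counts the 1s.
f = lambda x: x.count('1')
in_B = lambda n: n % 2 == 0
assert not decide_A('1011', f, in_B)   # three 1s, so x is not in A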

The definition of NP-complete is now very natural, having seen the definition of r.e.-complete before.

Definition 7.3 A set A is NP-complete iff

1. A ∈ NP,
2. if B ∈ NP then B ≤_p A.

By dropping the first condition we get another known concept.

Definition 7.4 A set A is NP-hard iff for all B ∈ NP, B ≤_p A.

Before we continue to prove some problems to be NP-complete, let us prove a simple theorem.

Theorem 7.5 If A is NP-complete then P = NP ⇔ A ∈ P.

Proof: Clearly, if NP = P then A ∈ P, since A by the definition of NP-completeness belongs to NP. To see the converse, assume that A ∈ P and take any B ∈ NP. Then, by property 2 of being NP-complete, B ≤_p A, and hence by Theorem 7.2, B ∈ P. But since B was an arbitrary language in NP, we can conclude that NP = P.

With this motivation we are ready to study our first NP-complete problem. Let SAT be the set of satisfiable Boolean formulas (as introduced in the example in Section 5.1).

Theorem 7.6 (Cook, 1971) SAT is NP-complete.

Proof: We have already established that SAT ∈ NP (see the example in Section 5.1), and thus we need to establish that B ∈ NP implies that B ≤_p SAT. Assume that B is recognized by a nondeterministic Turing machine N which has one tape, Q states, runs in time n^c, and uses the alphabet {0, 1, B}. Remember that the computation tableau is a complete description of a computation. We will now construct a Boolean formula such that if it is

satisfiable, then its satisfying assignment will describe a computation tableau of an accepting computation of N on input x. Let us denote the length of x by n. The formula has two types of variables: y_{ijk}, 1 ≤ i, j ≤ n^c, k ∈ {0, 1, B}, and z_{ijl}, 1 ≤ i, j ≤ n^c, 1 ≤ l ≤ Q. The intuitive meaning of the variables will be that y_{ijk} = 1 iff the symbol k appears in square j at time i (and y_{ijk} takes the value 0 otherwise), while z_{ijl} = 1 iff the head is in square j at time i and the machine at this time is in state q_l. Clearly the y and z variables code a computation completely, and thus all that needs to be done is to make a Boolean formula which is true iff the y and z variables code an accepting computation of N on input x. There are three conditions to take care of:

1. The computation starts with x.
2. It is a valid computation.
3. The computation accepts.

Of these three conditions, 1 and 3 are very easy to handle. The condition 1 is equivalent to the following conditions:

• For 1 ≤ j ≤ n we have y_{1jk} = 1 iff k = x_j.
• For n + 1 ≤ j ≤ n^c we have y_{1jk} = 1 iff k = B.
• z_{1jl} = 0 except when j = l = 1 (assuming that q_1 is the start-state).

The condition 3 is equivalent to y_{n^c,1,1} = 1 and z_{n^c,1,l} = 1, i.e., at time n^c we have written a 1 in square 1, the head is located in square 1, and we have halted (assuming that q_l is the halting state).

To see how to translate condition 2 into a formula, we will need some more information.

Definition 7.7 A computational tableau C is locally correct if for every i and j there is some correct computation which has the same contents as C in the squares (i', j') for i ≤ i' ≤ i + 1 and j ≤ j' ≤ j + 2.

That computation is a local phenomenon is now formalized as follows:

Lemma 7.8 A computational tableau describes a legal computation iff it is locally correct.

We leave the easy verification to the reader. Armed with this lemma, we can now express condition 2 in a suitable way. To determine whether the variables y_{ijk} and z_{ijl} describe a legal computation, we only have to check all the local correctness conditions. Whether a given local area is correct is described as a condition on 6Q + 18 variables, and since any condition on K variables can be expressed as a formula of size 2^K, we can express each local correctness condition in constant size. The conjunction of all these correctness formulas now takes care of condition 2.

We now claim that the conjunction of the formulas taking care of the conditions 1-3 is satisfiable iff x ∈ B. This is fairly obvious, since there is a satisfying assignment iff there is an accepting computation of N on input x which uses at most space n^c and time n^c, which by the definition of N is equivalent to x ∈ B. To conclude the proof of the theorem, we need just observe that constructing the formula clearly takes polynomial time. The size of the formula is O(n^{2c}).

Let us make a couple of observations about the above proof. Firstly, the final formula is the conjunction of a number of subformulas where each subformula is of constant size. Without increasing the size of the entire formula by more than a constant, we can write each of the subformulas in conjunctive normal form (i.e., as a conjunction of disjunctions). This puts the entire formula on conjunctive normal form, which implies that satisfiability of formulas on conjunctive normal form is NP-complete. Let us call this problem CNF-SAT; we have the following theorem.

Theorem 7.9 CNF-SAT is NP-complete.

The second observation is that the given proof is almost identical to the proof of Theorem 5.15. It is just a question of coding a computation in a suitable way, namely through the existence of a computational tableau with certain conditions. If one thinks about this, Theorem 5.15 can be used to give another NP-complete problem. However, we do not feel that this is a natural problem, and hence we will not make that argument.

Having obtained one NP-complete problem, it turns out to be easy to construct more NP-complete problems. The main tool for this is given below.

Theorem 7.10 If A is NP-complete and B satisfies B ∈ NP and A ≤_p B, then B is NP-complete.

Proof: We only have to check that for any C in NP it is true that C ≤_p B. Since A is NP-complete, we know that C ≤_p A, and hence there is a polynomial-time computable function f such that x ∈ C ⇔ f(x) ∈ A. By the hypothesis of the theorem, there is a polynomial time computable g such that y ∈ A ⇔ g(y) ∈ B. Now it clearly follows that x ∈ C ⇔ g(f(x)) ∈ B, and since the composition of two polynomial-time computable functions is polynomial-time computable, we have proved C ≤_p B and thus the proof of the theorem is complete.

To put the proof in other words: polynomial-time reductions are transitive, i.e., if we can reduce C to A and A to B, then we reduce C to B by composing the reductions. Clearly Theorem 7.10 is much more useful for proving problems NP-complete than the original definition. The reason is that to use Theorem 7.10 we only have to make one reduction, while to use the definition we have to make a reduction from any problem in NP.

Let 3-SAT be the problem of checking whether a restricted Boolean formula given on conjunctive normal form is satisfiable. The restriction is that there are exactly 3 literals (i.e., variables or negated variables) in each disjunction. Such a formula is called a 3-CNF formula, and an example is:

$$(x_1 \vee x_2 \vee x_3) \wedge (\bar{x}_1 \vee x_2 \vee x_4) \wedge (\bar{x}_2 \vee x_3 \vee x_4)$$

This formula is satisfiable, as can be seen from the assignment x_1 = 1, x_2 = 1, x_3 = 1 and x_4 = 0. We have

Theorem 7.11 3-SAT is NP-complete.

Proof: We will use Theorem 7.10, and since 3-SAT is clearly in NP, all that we need to do is to find a polynomial-time reduction from CNF-SAT to 3-SAT. Thus, given a CNF-SAT formula φ, we need to construct in polynomial time a 3-SAT formula f(φ) such that φ is satisfiable iff f(φ) is satisfiable.

Suppose φ = ∧_{i=1}^m C_i, where the C_i are disjunctions containing an arbitrary number of literals. We will call C_i a clause and let |C_i| denote the number of literals in C_i. Let x_i, i = 1, 2, ..., n, be the variables that appear in φ and let y_{ij} denote new variables. We will replace each clause by one or more clauses, each containing exactly 3 literals. We have the following cases:

1. |C_i| = 1.
2. |C_i| = 2.
3. |C_i| = 3.
4. |C_i| > 3.

Let us take care of the cases one by one.

(1.) Suppose C_i = x_j; then we replace it by

$$(x_j \vee y_{i1} \vee y_{i2}) \wedge (x_j \vee \bar{y}_{i1} \vee y_{i2}) \wedge (x_j \vee y_{i1} \vee \bar{y}_{i2}) \wedge (x_j \vee \bar{y}_{i1} \vee \bar{y}_{i2}).$$

(2.) Suppose C_i = (x_j ∨ x_k); then we replace it by

$$(x_j \vee x_k \vee y_{i1}) \wedge (x_j \vee x_k \vee \bar{y}_{i1}).$$

(3.) We keep C_i as it is.

(4.) Suppose C_i = ∨_{j=1}^k u_j for some literals u_j; we then replace C_i by

$$(u_1 \vee u_2 \vee y_{i1}) \wedge \Big(\bigwedge_{j=1}^{k-4} (\bar{y}_{ij} \vee u_{j+2} \vee y_{i(j+1)})\Big) \wedge (\bar{y}_{i(k-3)} \vee u_{k-1} \vee u_k).$$

The formula we obtain by these substitutions is clearly a 3-CNF formula, and it is also obvious that it can be constructed from the original formula in polynomial time. Thus, all we need to check is that φ is satisfiable precisely when f(φ) is satisfiable.
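Before verifying this, here is a sketch of case 4 in code (the string representation of literals, such as 'x1' and '-x1', is invented for the example); the other three cases are analogous one-liners.

def split_clause(lits, i):
    # Replace a clause with k > 3 literals by k - 2 equisatisfiable 3-clauses.
    k = len(lits)
    y = ['y%d_%d' % (i, j) for j in range(1, k - 2)]   # k - 3 new variables
    clauses = [[lits[0], lits[1], y[0]]]
    for j in range(k - 4):
        clauses.append(['-' + y[j], lits[j + 2], y[j + 1]])
    clauses.append(['-' + y[k - 4], lits[k - 2], lits[k - 1]])
    return clauses

print(split_clause(['x1', 'x2', 'x3', 'x4', 'x5'], 1))
# [['x1', 'x2', 'y1_1'], ['-y1_1', 'x3', 'y1_2'], ['-y1_2', 'x4', 'x5']]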

First assume that φ is satisfiable. We will give the same values to the x_i and must find values for the y_{ij} so as to satisfy f(φ). The clauses constructed according to rules 1-3 are already satisfied, and thus will cause no problem. Now consider case 4. Since the corresponding clause C_i in φ is satisfied, one of the u_j is true; suppose this is u_{j_0}. Now set y_{ij} = 1 for j ≤ j_0 - 2 and y_{ij} = 0 for j > j_0 - 2; then it is easy to verify that this assignment satisfies f(φ).

To prove the converse, suppose that f(φ) is satisfiable and let x_i = α_i be the assignment to the x variables in this satisfying assignment. We claim that this part of the assignment will satisfy φ. For clauses that fall under the rules 1-3 this is not too hard to see. If C_i = x_j and α_j = 0 then, no matter what the values of y_{i1} and y_{i2} are, at least one of the clauses is not satisfied. Look at the clauses constructed under rule 4. If C_i was not satisfied, then all the literals u_j would be false, but this implies that

$$y_{i1} \wedge \Big(\bigwedge_{j=1}^{k-4} (\bar{y}_{ij} \vee y_{i(j+1)})\Big) \wedge \bar{y}_{i(k-3)}$$

would be satisfied, but this is clearly not possible. Thus the reduction is correct and the proof is complete.

Proving problems NP-complete is not the main purpose of these notes, but let us at least give one more NP-completeness proof. Let 3-dimensional matching (3DM) be the following problem: Given a set of triplets (x_i, y_i, z_i), i = 1, 2, ..., m, where x_i ∈ X, y_i ∈ Y and z_i ∈ Z, and where X, Y and Z are sets of cardinality q. Is there a subset S of q of the triplets such that each element in X, Y and Z appears in exactly one of the triplets in S?

Theorem 7.12 3DM is NP-complete.

Proof: 3DM is clearly in NP, since a nondeterministic machine can just nondeterministically pick q of the triplets and then check whether each element appears exactly once. To prove 3DM NP-complete, we will reduce 3-SAT to it. Thus, given a 3-CNF formula φ, we must construct an instance f(φ) of 3DM such that φ is satisfiable iff f(φ) contains a matching. Suppose φ has n variables and m clauses. We will construct an instance of 3DM with three types of triplets: "variable triplets", "clause triplets"

The elements of the sets X. ai [j + 1]. ai [j]. Each clause Ci will have two special values and three triplets. Let us start by defining the variable triplets. bi [j]) : 1 ≤ j < mi } (ui [mi ]. u Tit = {(¯i [j].Figure 8: The variable triplets and “garbage collecting triplets”. ai [1]. As can be seen from Figure 8 this implies that any matching M must contain either all triplets from Tif or Tit for any i. bi [j]) : 1 ≤ j ≤ mi } Tif = {(ui [j]. bi [mi ]) The elements ai [j] and bi [j] will not appear in any other triplets. We will let the choice of which of the two sets to pick correspond to whether the variable xi is true or false. Suppose Ci = ui1 ∨ ui2 ∨ ui3 and it is the jk ’th time the variable corresponding to the 76 . Suppose variable xi appears (with or without negation) in mi clauses then we will associate with it the following 2mi triplets. Y and Z will be defined as we go along.

literal u_{ik} appears. Then we include the triplets

$$(u_{ik}[j_k], s[i], t[i]), \quad k = 1, 2, 3.$$

Observe that the u_{ik} should here be interpreted as literals, and thus correspond to either u_l or \bar{u}_l, i.e., these are the same elements as in the variable triplets. The elements s[i] and t[i] will not appear in any other triplets, and this implies that in any matching precisely one of the triplets corresponding to each clause will be included. Observe that we can include a triplet precisely when one of the corresponding literals is true.

We have done the essential part of the construction, and all that remains is to specify the garbage collecting triplets, which will match up the x_i[j] and \bar{x}_i[j] that have not been used. This is done by the following triplets:

$$(x_i[j], g_1[k], g_2[k]), \quad 1 \le j \le m_i,\ 1 \le i \le n,\ 1 \le k \le 2m$$
$$(\bar{x}_i[j], g_1[k], g_2[k]), \quad 1 \le j \le m_i,\ 1 \le i \le n,\ 1 \le k \le 2m$$

This enables us to cover any 2m literal-elements which have not been matched by previous triplets.

It is clear from the above description that the set of triplets can contain a matching only if the formula is satisfiable. Suppose, on the other hand, that the formula is satisfiable. Then make the choice of which T sets to pick based on the satisfying assignment. Then, for each clause, pick a variable that satisfies it and the corresponding clause triplet. This will cover m of the 3m literal-elements. The last 2m elements can be covered together with the g elements by the garbage collecting triplets. Thus there is a matching iff there is a satisfying assignment, and since the reduction is straightforward, the only thing needed to check that it is polynomial time is to check that we do not have to construct too many triplets. However, it is easy to check that there are 6m + 3m + 6m^2 triplets. This concludes the proof.

It turns out that most problems in NP that are not known to be in P are NP-complete. One notable exception is factoring; another one is graph isomorphism. There are hundreds of known NP-complete problems, and many appear in the listing in the final part of the excellent book by Garey and Johnson. Let us, however, move on and consider problems complete for other classes.

7.2 PSPACE-complete problems

The theory of PSPACE-complete problems is very similar to that of NP-complete problems. Of course the problems are different, but the concept of reduction is the same and the basic properties are the same.

Definition 7.13 A set A is PSPACE-complete iff

1. A ∈ PSPACE,
2. if B ∈ PSPACE then B ≤_p A.

We have an immediate equivalent of Theorem 7.5.

Theorem 7.14 If A is PSPACE-complete then P = PSPACE ⇔ A ∈ P.

Proof: If you substitute PSPACE for NP in the proof of Theorem 7.5, you get a proof of Theorem 7.14.

By a similar argument we get:

Theorem 7.15 If A is PSPACE-complete then NP = PSPACE ⇔ A ∈ NP.

One last definition for completeness before we go to business.

Definition 7.16 A set A is PSPACE-hard if for any B ∈ PSPACE, B ≤_p A.

Now let us encounter our first PSPACE-complete problem. When dealing with NP-complete problems we came across the satisfiability of Boolean formulas. Now we will consider quantified Boolean formulas, which look like

$$\forall x_1 \exists x_2 \ldots Q x_n\, \varphi(x)$$

where each x_i can take the value 0 or 1, φ is a normal quantifier-free formula, and Q is either ∃ or ∀ depending on whether n is even or odd. Let TQBF be the set of True Quantified Boolean Formulas. We have:

Theorem 7.17 TQBF is PSPACE-complete.

Proof: Let us first check that TQBF can be recognized in polynomial space. We claim that if the formula has n variables and the size of the description of φ is bounded by S, then checking whether ∀x_1 ∃x_2 ... Qx_n φ(x) is true can be done in space O((n + 1)S). We prove this by induction, and first observe that it is certainly true for n = 0. For the induction step, we use the observation that the given formula is true iff both

$$\exists x_2 \ldots Q x_n\, \varphi(x)|_{x_1=0} \quad \text{and} \quad \exists x_2 \ldots Q x_n\, \varphi(x)|_{x_1=1}$$

are true. (Of course, if the first quantifier is ∃, we just need to check that one of the two restricted formulas is true.) These two formulas can be evaluated by induction in space O(nS), and since we can evaluate one and then evaluate the other in the same space, while only remembering the value of the first evaluation and which formula to evaluate, the claim follows. From this the claim follows, and thus TQBF ∈ PSPACE.

Remark 7.18 By being more careful, it is not too hard to see that the evaluation actually can be done in space O(n + S).
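The space-efficient evaluation can be sketched as follows (the representation is ours: the quantifier prefix is a list of 'A'/'E' symbols and φ is a Boolean function of the complete assignment). The recursion keeps one partial assignment of at most n bits, matching the analysis above.

def eval_qbf(quants, phi, assignment=()):
    if not quants:
        return phi(assignment)
    branches = (eval_qbf(quants[1:], phi, assignment + (b,)) for b in (0, 1))
    return all(branches) if quants[0] == 'A' else any(branches)

# Forall x1 Exists x2 : x1 != x2 is true.
assert eval_qbf(['A', 'E'], lambda a: a[0] != a[1])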

It is a only slight exaggeration to say that to determine who is the winner in most games is PSPACE-complete. Theorem 7. Now since x ∈ B iff GET (Cst . k. to check whether we can get from one configuration to another in one step is just a simple formula where we list all possible transitions of the Turing machine. then B is PSPACE-complete. In fact if one writes down the final formula carefully one can write it in CNF. Cacc . x) = ∃C ∀(A. GET (C1 . Y. i. A ≤p B. PSPACE-problems are not as abundant as NP-complete problems and do not come up in as varying contexts. rather than the more complicated objects we are currently quantifying over. x). We leave the details to the interested reader.B)∈{(C1 . Now we only get one copy of GET to expand further and if we continue recursively we get 2k quantifiers and a final formula GET (X.C). C2 . The ∀ quantification is just a binary choice and thus can be represented by a Boolean variable which will take the value 0 if we make the first choice and the 1 if we make the other. The main source of PSPACE.19 TQBF-CNF is PSPACE-complete. k − 1. B.(C.complete problems outside logic is games. 0. It is straightforward to encode a configuration as a set of Boolean variables. x). Both these points are easy and let us just give a rough outline. All that remains to do is to check that it is sufficient to quantify over Boolean variables. 80 .C2 )} GET (A.20 If A is PSPACE-complete and B satisfies B ∈ P SP ACE. x) is true for the appropriate constant d and since we know how to write the latter condition as a quantified Boolean formula we have completed the reduction. if we restrict the formula φ in TQBF to be a CNF-formula we still obtain a PSPACE-complete problem. d|x|c . We call this problem TQBF-CNF. To get other PSPACE-complete problems we first state an obvious theorem. Theorem 7. and that the final application of GET can be written as a Boolean formula.e.Now we could write the two GETs to the right in the same way but this would be mean trouble since we would then get a formula of exponential size. Finally. However there is a way around this by replacing the ∧ by a universal quantifier obtaining.

It is only a slight exaggeration to say that determining the winner in most games is PSPACE-complete. The reason that games are this hard is that already quantified Boolean formulas can be viewed as a game between two players. We will call the players in the game ∃ and ∀, "Exists" and "Forall", and the game is played in the following way. Given a formula, "Exists" chooses the values of all variables which correspond to existential quantifiers and "Forall" chooses the values of all variables which correspond to universal quantifiers. "Exists" wins the game iff the final total assignment satisfies the formula. It is not hard to see that the formula is true iff "Exists" wins the game when both players play optimally.

Of course the PSPACE-completeness cannot apply to any usual game like chess, since chess is of a given constant size and hence not very interesting from our point of view. But games that can be generalized to arbitrary size are often PSPACE-complete (or hard). Thus, for instance, to determine who is the winner in a given position of generalized checkers or generalized go is PSPACE-hard. We will not get into those games but instead consider a more childish game.

"Geography" is a two-person game where one person starts by giving the name of a geographical place, and then the two people alternatingly name geographic places, subject to the two conditions that no place is named twice and that each name starts with the same letter that the previous name ended with. The first person not being able to name a place satisfying these two conditions loses.

To get a computational problem out of this game, let us generalize. "Generalized Geography" (GG) is a graph game where two people alternatingly choose nodes in a directed graph. Initially the game starts with a given node. Each node must be a successor of the previous node and no node can be chosen twice. The first person having no choice loses the game. The computational problem is now: Given a graph, which of the two players has a winning strategy?

Let us first observe that this is clearly a generalization of the geography game, where the nodes correspond to places and there is an edge from A to B if A ends with the same letter that B starts with. (On the other hand, it is a slightly cheating generalization, since the skill in the normal game is to know as many geographic names as possible.)

Theorem 7.21 Generalized geography is PSPACE-complete.

Proof: It is not hard to verify by normal procedures that GG is in PSPACE, and thus by Theorem 7.20 we need only prove that TQBF-CNF can be reduced to GG. Given the

formula

$$\exists x_1 \forall x_2 \exists x_3\, [(\bar{x}_1 \vee x_2 \vee x_3) \wedge \cdots]$$

we construct the graph given in Figure 9.

Figure 9: Generalized geography graph

There is a diamond for each variable of the formula, with the last diamond pointing to nodes representing all the clauses of the formula, and each clause node pointing to nodes representing the literals in the clause. Finally, these literal nodes are hooked back to the top or the bottom of the diamond for the corresponding variable, according to whether the literal is positive or negative. The game starts at the node named S, and the ∃ and ∀ labels in the diagram show whose turn it is to move at each stage. We can think of ∃'s and ∀'s choices of how to move through the diamonds as setting the variables (true if the high road is taken and false if the low road is taken). Then ∀ gets to pick any clause that he claims to be false, and ∃ must pick a literal in that clause which he will claim is true. If ∃'s claim is valid, ∀ will not be able to move without reusing a node, while if the claim is not true, ∀ will be able to move and then ∃ will be stuck. Thus we see that ∃ has a winning strategy iff the formula is true. Since the reduction clearly is polynomial time, we have proved that GG is PSPACE-complete.

7.3 P-complete problems

The question P = NP? is of real practical importance, since it is a question of whether many natural problems can be solved efficiently. The question whether P is equal to L is not of the same practical importance (although

it has a nice connection with parallel computation, which we have not seen yet), but from a theoretical point of view it is of course of major importance.

Up to this point we have allowed polynomial time for free when we have compared problems. This is clearly not possible when we are considering the question P = L?, and thus we need a finer reduction concept. The modification is very slight: we just require the reduction function to be computable in logarithmic space.

Definition 7.22 Let A and B be two sets. Then A ≤_L B (read as "A is logarithmic space reducible to B") iff there is a function f, computable in logarithmic space, such that x ∈ A ⇔ f(x) ∈ B.

Using this we can now define P-completeness.

Definition 7.23 A set A is P-complete iff

1. A ∈ P,
2. if B ∈ P then B ≤_L A.

We get the usual theorem.

Theorem 7.24 If A is P-complete then P = L ⇔ A ∈ L.

The proof is identical to the other proofs. One small lemma is needed, namely that the composition of two functions in L is in L. We leave this as an exercise.

We are now ready to encounter our first P-complete problem. Define a Boolean circuit to be a directed acyclic graph where each node is labeled by either ∧, ∨ or ¬, and the number of incoming edges is at least two in the first two cases and one in the last. The graph contains sources, which are labelled by input variables x_i, and one sink, which is called the output node. An example is given in Figure 10; in this circuit all edges are directed upwards. Given values of the inputs to the circuit, one can evaluate the circuit in the natural way. Let CVAL be the following problem: Given a circuit and values of the inputs of the circuit, what is the output of the circuit? We have:

Theorem 7.25 CVAL is P-complete.

Figure 10: A circuit

Proof: First observe that CVAL belongs to P, since it is straightforward to evaluate a circuit once the inputs are given. Now take any B ∈ P; we need to reduce B to CVAL. Assume that B is recognized by a Turing machine M_B that runs in time at most n^c for inputs of length n. We will again use the concept of a computation tableau. Since we are considering deterministic computation, there is a unique computation tableau given the input. The content of each square of the tableau is easily coded by a constant number of Boolean values. The content of a given square of the tableau only depends on the contents of the square itself and its two neighboring squares at the previous time step. This means that we can build a constant piece of circuitry that computes the Boolean variables corresponding to the square (i, j) in the computation tableau from the variables corresponding to (i-1, j-1), (i-1, j) and (i-1, j+1). Thus, to construct a circuit that, given the correct input, simulates the computation tableau of M_B, we just have to copy this piece of circuitry everywhere; we construct a circuit which successively computes these descriptions. The output of the circuit will correspond to the output of the machine, i.e., the content of the first square at the final timestep. To print the description of this circuit on the output tape, all we need to remember is the identities of the nodes of the circuit. This can be done in O(log n) space. Thus, in logarithmic space we can construct a circuit and an input to this circuit such that the circuit outputs 1 iff M_B outputs 1 on input x. Thus we have a correct reduction and the proof is complete.
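A small sketch of the easy half, evaluating a circuit in topological order (the dictionary encoding is invented for the example): each gate is computed once its predecessors are known, so the whole evaluation is clearly polynomial time.

def cval(order, gates, inputs):
    # gates maps a node to ('in', name) or ('and'/'or'/'not', [predecessors]);
    # order lists the nodes with predecessors first and the sink last.
    val = {}
    for g in order:
        kind, args = gates[g]
        if kind == 'in':
            val[g] = inputs[args]
        elif kind == 'not':
            val[g] = not val[args[0]]
        elif kind == 'and':
            val[g] = all(val[a] for a in args)
        else:
            val[g] = any(val[a] for a in args)
    return val[order[-1]]

gates = {'x1': ('in', 'x1'), 'x2': ('in', 'x2'),
         'g1': ('and', ['x1', 'x2']), 'out': ('or', ['g1', 'x2'])}
print(cval(['x1', 'x2', 'g1', 'out'], gates, {'x1': True, 'x2': False}))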

Several other P-complete problems can be constructed by making logarithmic space reductions from CVAL. We will, however, not present any more P-complete problems in this section.

7.4 NL-complete problems

The final question we will consider is the NL = L? question. Again we have complete problems under L-reductions.

Definition 7.26 A set A is NL-complete iff

1. A ∈ NL,
2. if B ∈ NL then B ≤_L A.

As before we get:

Theorem 7.27 If A is NL-complete then NL = L ⇔ A ∈ L.

We have already encountered the standard NL-complete problem, namely graph reachability (GR), i.e., given a directed graph G and two nodes s and t of G, is it possible to find a directed path from s to t?

Theorem 7.28 Graph reachability is NL-complete.

Proof: We have more or less already proved the theorem. The fact that GR ∈ NL was established in Section 5.1. That the problem is NL-complete was implicitly used in the proof of Theorem 6.4. Let us recall this proof. We started with an arbitrary nondeterministic machine M and an input x to M. We then constructed a graph (of configurations of M) with two special nodes s and t (corresponding to the start configuration and the accepting configuration, respectively) such that x was accepted by M iff we could reach t from s. We then observed that graph reachability could be done in polynomial time, and hence NL ⊆ P. The first part of this proof is clearly the desired reduction. All we need to do is to prove that the reduction can be done in logarithmic space. This is not hard and we leave it to the reader.

8 Constructing more complexity-classes

Let us just briefly mention some more complexity-classes which are very related to the given classes. Before, we have pointed out that P is symmetric with respect to complementation, i.e., if a set A belongs to P then so does its complement Ā. We have also pointed out that this is not true for NP. Thus it is natural to talk about the set of languages whose complement belongs to NP.

Definition 8.1 A set A belongs to co-NP iff its complement Ā belongs to NP.

It is in general believed that co-NP is not equal to NP. In general, for any complexity-class C that is not closed under taking complements, we can define a corresponding complexity-class co-C. The only other such class we have encountered is NL.

Definition 8.2 A set A belongs to co-NL iff its complement Ā belongs to NL.

It was generally believed that co-NL is not equal to NL. Thus it came as a surprise when the following theorem was proved independently by Immerman and Szelepcsényi in 1988.

Theorem 8.3 If S(n) is space constructible, S(n) ≥ log n, and A can be recognized in nondeterministic space S(n), then the complement of A can be recognized in nondeterministic space O(S(n)).

We get the following immediate corollary:

Corollary 8.4 NL = co-NL.

Remark 8.5 Although this theorem was a surprise, one already knew that nondeterminism was not that helpful with regard to space. In particular, by Savitch's theorem (Theorem 6.7) we know that whatever can be done in nondeterministic space S(n) can be done in deterministic space O(S^2(n)). On the other hand, the smallest deterministic time-class that is known to include all things that can be done in nondeterministic time T(n) is essentially 2^{T(n)}. Thus, in spite of the given collapse, it is still believed that NP ≠ co-NP.

Proof: For notational convenience, we will only prove the corollary; the general case will follow from just substituting S(n) for log n. We will prove that co-NL ⊆ NL. By symmetry, this will imply the equality of the two classes. Since graph reachability is complete for NL, its complement is complete for co-NL. Thus, to prove that co-NL ⊆ NL, we need only prove that graph non-reachability is in NL. In particular, we need only describe a nondeterministic algorithm which works in logarithmic space and, given a graph G and two vertices s and t, accepts if there is no path from s to t.

The idea behind the algorithm is to compute the number of nodes reachable from s. Once we know this number, we can verify that t is not reachable by just guessing (and checking) all reachable vertices. Since we know their number, we know when we have generated them all. Since we cannot remember them all individually, we need to guess them in increasing order; this way we need only remember the number of vertices seen this far and the last one seen. The number of reachable vertices is computed iteratively: in stage k we compute the number of vertices which are reachable with at most k edges. This is done by at each stage nondeterministically generating all vertices that can be reached in k - 1 steps. Since we know their number, we can without error decide whether a given vertex is reachable in k steps. The complete algorithm now works as follows:

Nk = 1
for k = 1 to n do
  newNk = 0
  for l = 1 to n do
    check = 0
    for m = 1 to n do
      Nondeterministically try to generate a path from s to vm of length at most k - 1.
      If this is successful then
        check = check + 1
        If vm is connected to vl (or equal to vl) then
          newNk = newNk + 1
          goto next l
        endif
      endif
    next m
    if check ≠ Nk reject and stop
  next l

  Nk = newNk
next k
check = 0
for m = 1 to n do
  Nondeterministically try to generate a path from s to vm of length at most n - 1.
  If this is successful then
    check = check + 1
    If vm is t reject and stop
  endif
next m
if check = Nk accept, otherwise reject

We need to prove that the algorithm is correct and that it only uses logarithmic space. Let us start with the latter part. The variables used by the program are k, l, m, Nk, newNk and check. It is easy to see that each of them is a nonnegative integer which is at most n, and thus we can store these values in space O(log n). On top of this, we need to nondeterministically guess a path of at most a certain length at certain parts of the program. This can be done in logarithmic space by the example in Section 5.1, augmented with a simple counter.

Now let us consider correctness. We claim that, unless the algorithm has already halted and rejected, the counter Nk will at stage k give the number of vertices reachable by a path of length at most k from s. We prove this by induction, and the base case k = 0 is trivial, since only s can be reached with 0 edges and Nk is initially 1. For the induction step, observe that since the algorithm does not halt, and by the induction hypothesis, for each l the algorithm generates all vm which can be reached in at most k - 1 steps. Thus it is easy to see that the algorithm decides correctly whether vl is reachable in at most k steps, and thus the new value of Nk will be correct and the induction step is complete. Finally, for the final loop, observe that if in the end check = Nk, then we have generated all vertices that are reachable from s with at most n - 1 steps (and hence reachable at all), and if t was not one of them we accept correctly. The argument is complete and we have proved Corollary 8.4.
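For illustration only, here is a deterministic rendering of the inductive counting (in the real algorithm the call to reach is a nondeterministic guess of a path, not a search): N_k is computed from N_{k-1}, and the final count certifies non-reachability of t.

def reach(adj, s, v, k):
    # Is v reachable from s in at most k steps?  (Stands in for a guess.)
    frontier = {s}
    for _ in range(k):
        frontier |= {w for u in frontier for w in adj.get(u, [])}
    return v in frontier

def non_reachable(adj, nodes, s, t):
    N = 1                                    # N_0: only s itself
    for k in range(1, len(nodes)):
        newN = 0
        for l in nodes:
            hits = [m for m in nodes if reach(adj, s, m, k - 1)]
            assert len(hits) == N            # all N_{k-1} vertices generated
            if any(m == l or l in adj.get(m, []) for m in hits):
                newN += 1
        N = newN
    return not reach(adj, s, t, len(nodes) - 1)

adj = {0: [1], 1: [2]}
assert non_reachable(adj, [0, 1, 2, 3], 0, 3)   # node 3 is unreachable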

9 Probabilistic computation

From a practical point of view, it is sufficient if an algorithm is fast most of the time. One could even relax the conditions further and just ask that the algorithm is correct most of the time. A key point when reasoning about such algorithms is to make precise what is meant by "most of the time", i.e., we need to introduce some probabilistic assumptions. There are two basic ways to do this:

1. To consider a random input, i.e., to take a probability distribution over the inputs and ask that the algorithm performs well for most inputs.
2. To allow the algorithm to make random choices, and require that the algorithm is fast (correct) for every input.

Of course, one could also combine the two ways of introducing randomness. Both approaches give many interesting results, but here we will only study the second approach.

Definition 9.1 A probabilistic Turing machine is a normal deterministic Turing machine equipped with a special coin-flipping state. When the machine enters this state, it receives a bit which is 0 with probability 1/2 and 1 with probability 1/2.

As with nondeterministic Turing machines, a probabilistic Turing machine can do many different computations on a given input. Thus, the output is not uniquely determined, but rather is given by a probability distribution. Also the running time is a random variable, and we will say that a probabilistic Turing machine runs in time S if it always halts in time S(n) on every input of length n. Another interesting running time characteristic is the expected running time. We can now define a new complexity class.

Definition 9.2 A set A belongs to BPP iff there is a polynomial time probabilistic Turing machine M such that

x ∈ A ⇒ Pr[M(x) = 1] ≥ 2/3
x ∉ A ⇒ Pr[M(x) = 1] ≤ 1/3

BPP is an abbreviation for Bounded Probabilistic Polynomial time.

Thus the machine M gives at least a reasonable guess of whether an input x belongs to A (we will later see that this guess can be improved). To get the ideas behind these definitions, let us next give an example of a language in BPP not known to be in P.

Example 9.3 Checking polynomial identities: Given two polynomials P1 and P2 in several variables, represented in some convenient way (e.g., as determinants, products or something similar). Do P1 and P2 represent the same polynomial? We require that the representation is such that if we are given values of the variables, then we can evaluate the polynomials in polynomial time. A typical example would be to investigate whether the equality

$$\begin{vmatrix} 1 & x_1 & x_1^2 & \cdots & x_1^{n-1} \\ 1 & x_2 & x_2^2 & \cdots & x_2^{n-1} \\ 1 & x_3 & x_3^2 & \cdots & x_3^{n-1} \\ \vdots & & & & \vdots \\ 1 & x_n & x_n^2 & \cdots & x_n^{n-1} \end{vmatrix} = \prod_{i>j} (x_i - x_j)$$

is a true identity.

The obvious approach to this problem is to expand the polynomials into a sum of monomials and then compare the expansions term by term. This procedure will in general be quite inefficient, since there might be exponentially many monomials (as in the example given). Our probabilistic algorithm will instead evaluate the two polynomials at randomly chosen points. If the polynomials disagree on one of these points they are different, and we will prove that if they agree on all points then they are probably the same polynomial. The algorithm will depend on two extra parameters, d and k. The first parameter is a known upper bound for the degrees of the polynomials in question (in our example we could take d = n(n-1)/2) and the second is related to the error probability.

Input P1 and P2.
For i = 1, 2, ..., k:
  Pick random integer values independently for x1 through xn in the range [1, 2nd].
  If P1(x) ≠ P2(x), conclude that P1 ≠ P2 (answer 0) and stop.
Next i.
Conclude that P1 = P2 (answer 1).
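Before the analysis, here is a sketch of the algorithm (p1 and p2 are placeholders for the polynomial-time evaluation procedures assumed above):

import random

def same_polynomial(p1, p2, n, d, k):
    for _ in range(k):
        x = [random.randint(1, 2 * n * d) for _ in range(n)]
        if p1(x) != p2(x):
            return 0          # certainly different
    return 1                  # probably equal

# Toy check of (a + b)**2 = a**2 + 2ab + b**2 (n = 2 variables, degree d = 2).
p1 = lambda x: (x[0] + x[1]) ** 2
p2 = lambda x: x[0] ** 2 + 2 * x[0] * x[1] + x[1] ** 2
assert same_polynomial(p1, p2, 2, 2, 10)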

Clearly, if we answer 0 we are always correct, and to see that the algorithm is useful, we have to prove that most of the time we are correct even when we answer 1. The key lemma is the following.

Lemma 9.4 Given a nonzero polynomial P in n variables and of degree ≤ d, the set Z = {x | 1 ≤ x_i ≤ R, 1 ≤ i ≤ n, P(x) = 0} has cardinality at most dnR^{n-1}.

Proof: We prove the lemma by induction over n. For n = 1 the lemma follows from the fact that a polynomial of degree d has at most d zeroes. For the induction step, let us consider the polynomials Q_j in the variables x_1, ..., x_{n-1} obtained by substituting j for the variable x_n. Q_j is a polynomial of degree ≤ d in n - 1 variables, and thus we could use the induction hypothesis if we knew that Q_j was nonzero. We claim that there are at most d different j such that Q_j is identically zero. To see this, take any monomial in P which appears with a nonzero coefficient (assume for the sake of the argument that it is x_1 x_2 x_n). Now look at the coefficient of x_1 x_2 in Q_j. It is the value at j of a nonzero polynomial of degree ≤ d - 2. Thus there are at most d - 2 values of j such that this coefficient is 0 and, in general, at most d values of j such that Q_j is identically zero.

The set Z splits into the union of sets obtained by fixing the last coordinate to any value in the range 1 to R. When the corresponding polynomial Q_j is nonzero, then by the induction hypothesis the cardinality of the set is bounded by (n-1)dR^{n-2}, and when the polynomial is zero the cardinality is R^{n-1}. Since there are at most R sets of the first kind and d of the second, we get the total estimate

$$R(n-1)dR^{n-2} + dR^{n-1} = ndR^{n-1}$$

and the induction is complete.

Using this lemma we can analyze the algorithm. If P1 and P2 represent the same polynomial, then we will always answer 1 and we always get the correct answer. When P1 and P2 do not represent the same polynomial, call an x such that P1(x) = P2(x) an unlucky x. By applying the above lemma to P1 - P2, we see that there are at most (2dn)^n / 2 unlucky x, and thus the probability that we pick one unlucky x is bounded by 1/2. Thus the algorithm gives the correct answer unless we happen to pick k unlucky x's. Since

the k x's are independent, the probability of them all being unlucky is at most 2^{-k}. Thus, if k is reasonably large, we get the correct answer with high probability. All that remains to see that the problem lies in BPP is to observe that the algorithm is polynomial time, but this is obvious, since the essential step of the algorithm is to evaluate the polynomials, and this is polynomial time by assumption.

In the example we saw that if we were willing to run the algorithm longer (i.e., try more random points), then we could make the probability of error arbitrarily small. It is not hard to see that this is true in general.

Theorem 9.5 A set A belongs to BPP iff there is a polynomial time probabilistic Turing machine M such that

x ∈ A ⇒ Pr[M(x) = 1] ≥ 1 - 2^{-|x|-2}
x ∉ A ⇒ Pr[M(x) = 1] ≤ 2^{-|x|-2}

Proof: Clearly the above conditions are stronger than our original definition, and thus if A satisfies the above condition then it belongs to BPP. We need to prove the converse, i.e., that if A ∈ BPP we can find a machine M' which satisfies the above condition. We know by the definition of BPP that there is a machine M such that

x ∈ A ⇒ Pr[M(x) = 1] ≥ 2/3
x ∉ A ⇒ Pr[M(x) = 1] ≤ 1/3.

Now let M' be defined by running M a total of C = 2(|x| + 3)/log(9/8) times with independent random choices and outputting 1 iff M outputs 1 at least C/2 times. We need to verify the claim that this M' satisfies the condition in the theorem. Assume that x ∈ A and that M outputs 1 with probability p on input x (we know that p ≥ 2/3). Then the probability that M' does not output 1 is bounded by

$$\sum_{i=0}^{C/2} \binom{C}{i} p^i (1-p)^{C-i}.$$

The ratio of two consecutive terms in this sum is at least p/(1-p) ≥ (2/3)/(1/3) = 2, and thus if the last term is T, then the sum is bounded by 2T. This last term is bounded by

$$\binom{C}{C/2} p^{C/2} (1-p)^{C/2} \le 2^C (2/3)^{C/2} (1/3)^{C/2} \le (8/9)^{C/2} \le 2^{-|x|-3},$$

and thus the first condition of the theorem follows. The second condition is proved in a similar way.
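The amplification itself is just a majority vote, as the following sketch shows (the base machine is simulated by a biased coin with success probability 2/3, which is all the proof uses about M):

import random

def amplified(base, C):
    ones = sum(base() for _ in range(C))
    return 1 if ones > C / 2 else 0

base = lambda: 1 if random.random() < 2 / 3 else 0
runs = [amplified(base, 101) for _ in range(1000)]
print(sum(runs) / 1000)   # close to 1: the error probability is now tiny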

In our example we proved more than needed to establish that the problem in question was in BPP. In particular we proved that if the input was in the language, then the answer was always correct. With this additional restriction we get a new complexity class.

Definition 9.6 A set A belongs to R iff there is a polynomial time probabilistic Turing machine M such that

x ∈ A ⇒ Pr[M(x) = 1] ≥ 2/3
x ∉ A ⇒ Pr[M(x) = 1] = 0.

Remark 9.7 I believe that R is short for Random polynomial time. Hence this class is sometimes also called RP.

While BPP is closed under complement, this is not obvious (or known) for R, and thus we also have a third probabilistic complexity class, co-R, the set of languages whose complement lies in R. Our example "Polynomial identities" is a member of co-R. Observe that both R and co-R are subsets of BPP.

There are not many known examples of problems in BPP that are not known to be in P. The main other example is to recognize primes; we will not discuss that algorithm here. However, by quite elaborate methods it is possible to prove that primes belong to R ∩ co-R, and for this class we can make a very strong statement.

Theorem 9.8 A set A belongs to R ∩ co-R iff there is a probabilistic machine M which runs in expected polynomial time and always decides A correctly.

Proof: By assumption there is a machine M_1 that outputs 1 with probability at least 2/3 when the input x is in A and with probability 0 when x is not in A (since A ∈ R). Similarly, since A ∈ co-R there is a machine M_2 that outputs 1 with probability at least 2/3 when x is not in A and never when x is in A. Both M_1 and M_2 run in polynomial time. Now on input x, alternate in running M_1 and M_2 until one of them answers 1. When this happens we know that x ∈ A if the 1-answer was given by M_1, and we know that x ∉ A if it was given by M_2. Each time we run both machines we have probability at least 2/3 of getting a decisive answer, and hence it follows that the procedure runs in expected polynomial time.
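In code, the procedure from the proof is a simple alternation; m1 and m2 below are stand-ins for the two assumed one-sided-error machines (m1 outputs 1 only on members of A, m2 only on non-members, each with probability at least 2/3 on the appropriate inputs).

    def decide(m1, m2, x):
        """Zero-error decision procedure from two one-sided-error tests.
        Each round gives a decisive 1-answer with probability >= 2/3,
        so the expected number of rounds is at most 3/2."""
        while True:
            if m1(x) == 1:
                return True       # m1 never answers 1 on non-members
            if m2(x) == 1:
                return False      # m2 never answers 1 on members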

9.1 Relations to other complexity classes

Let us relate the newly defined complexity classes to our old classes. Clearly each of the defined classes contains P, since we can always ignore the possibility to use randomness. We also have some non-obvious relations.

Theorem 9.9 R ⊆ NP.

Proof: We know by the definition of R that if A ∈ R then there is a machine M such that when x ∈ A, M accepts x with probability ≥ 2/3, and when x ∉ A there are no accepting computations. But this implies that if we replace the probabilistic choices by nondeterministic choices, M accepts x precisely when x ∈ A.

We have an immediate corollary:

Theorem 9.10 co-R ⊆ co-NP.

Our next theorem is also not very surprising.

Theorem 9.11 Suppose A ∈ BPP and the machine M that recognizes A runs in time T(n) and uses at most p(n) coins. Then A can be recognized by a deterministic machine that runs in time O(2^{p(n)} T(n)) and space O(T(n) + p(n)).

Proof: Just run M on all possible 2^{p(n)} sets of coin flips and calculate the probability that M accepts. A straightforward implementation gives the stated resource bounds.

The above theorem immediately yields:

Corollary 9.12 BPP ⊆ PSPACE.

Apart from these theorems, nothing is known about the relations between our probabilistic classes and our old classes. There is no great consensus on what the true relations are, but many people think it is possible that P = BPP.
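A sketch of the simulation in Theorem 9.11, where M is assumed to be given as a deterministic function of the input and an explicit tuple of p coin flips:

    from itertools import product

    def deterministic_accept(M, x, p):
        """Enumerate all 2^p coin sequences and accept iff a majority of
        the runs accept; time O(2^p T(n)), space for one run plus a
        counter."""
        accepting = sum(M(x, coins) for coins in product((0, 1), repeat=p))
        return 2 * accepting > 2 ** p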

10 Pseudorandom number generators

In the last section we used random numbers. Traditionally, we assumed without discussing the matter that we had access to an unlimited number of perfectly random coins. In practice this might not be the case, and there is a problem getting enough random numbers into the computer. One could indeed question whether there are any random phenomena in nature at all, and thus whether randomness in computation makes sense at all. This is a valid question, but it is mostly philosophical in nature and we will not discuss it. Instead we will take the optimistic attitude that there is randomness, and just assume that somehow we can get a few random bits into the computer.

For the sake of this section we will assume that we only need random bits, i.e. each bit is 0 and 1 with probability 1/2. This is not a severe restriction, since random bits can be turned into random numbers in many ways.

The common solution to the problem of not having enough truly random numbers is to have what is generally called a pseudorandom number generator (we will in the future call them pseudorandom bit generators, since we will be generating bits). This is a function which takes a short truly random string and produces a longer "random looking" string. How the short truly random string (which is called the seed) is produced is clearly a problem (it is generally supplied by the user), but we will not concern ourselves with it here.

The main questions we will deal with in this section are how to define what we want from a pseudorandom generator and how to construct such a generator. One obvious property is that it should be easy to run and produce something useful, i.e. it should be computable in polynomial time and the output should be longer than the input. Something that has only these two properties is a bit generator.

Definition 10.1 A bit generator is a polynomial time computable function that takes a binary string as input and on an input of length n produces an output of length p(n), where p is a polynomial such that p(n) > n for all n. For technical reasons we assume that p(n) is strictly increasing with n.

Note that the definition allows the output to be of length only n + 1, which does not seem to be much of a generator. We will see later (Theorem 10.8) that this is not a real problem. The more interesting aspect of pseudorandom bit generators is to try to formalize the "random looking" requirement on the output.

Traditionally, "random looking" was interpreted to mean that the output bits passed a small set of standard statistical tests. This is the germ of what today is believed to be the correct definition.

Definition 10.2 A statistical test is a function from binary strings to {0, 1}.

Intuitively the output 1 can be interpreted as the string passing the test and the output 0 as failing it. Note, however, that not even all strings produced truly at random will pass a given statistical test. The tempting definition of a pseudorandom generator is now:

Definition 10.3 (First attempt) A bit generator passes a statistical test S if the probability that S outputs 1 on a random output of the generator is equal to the probability that S outputs 1 on a truly random string. Here a random output of the generator is defined as the output on a truly random seed.

Definition 10.4 (First attempt) A bit generator is pseudorandom if it passes all statistical tests.

A bit generator that passes all statistical tests produces a very random looking output. However, the definition is too restrictive and there is no such generator. Take any bit generator G and consider the following statistical test:

S_G(x) = 1 if x can be output by G, and 0 otherwise.

First observe that if G stretches strings of length n to strings of length p(n) in time T(n), then S_G can be implemented on strings of length p(n) to run in time 2^n T(n), since we just run G on all possible strings of length n and check whether one of them equals x. When we run S_G on the output of G, the result will always be 1. On the other hand, when we feed S_G a truly random string, the probability that we get output 1 is at most 1/2. This follows since there is one output for each seed, which implies that there are at most 2^n possible outputs of G of length p(n) (here we use that p is strictly increasing); since there are 2^{p(n)} possible strings and p(n) ≥ n + 1, at most half of the strings are possible outputs of G.

In practice it is not feasible to compute S_G as described above, since the exponential time needed to try all the seeds is usually too much. Thus this test is somehow "cheating", and we change the definition to take care of this.

Definition 10.5 (Final attempt) A bit generator is pseudorandom if it passes all statistical tests that run in probabilistic polynomial time.

Remark 10.6 From the development up to this point, polynomial time is the natural requirement on efficient statistical tests. The choice to allow the statistical tests to be probabilistic is less clear, but for many reasons (we will not go into them here) it is the better choice. Allowing randomness makes the definition stronger, since anything that passes all probabilistic polynomial time statistical tests also passes all deterministic polynomial time statistical tests.

We have still not overcome all problems with the definitions, as can be seen from the following miniature version of S_G.

Test s_G: On input x of length p(n), guess n^2 random seeds of length n, run G on these seeds, and output 1 if one of the outputs of G is equal to x. Otherwise output 0.

Since G is assumed to be polynomial time, s_G can be implemented in polynomial time. Furthermore, if x is a string that could have been generated by G, then there is some small but positive probability that s_G will output 1, while if x cannot be output by G then this probability is 0. By the analysis of S_G, this implies that the probability that s_G outputs 1 on a random output of G is different from the probability that it outputs 1 on a random input. As we have defined passing statistical tests, this means that G fails the test s_G. This is counterintuitive, since for large n the test s_G is very weak. We change the definition to take care of this anomaly.

Definition 10.7 (Final attempt) Let S be a statistical test and let G be a bit generator. Let a_n be the probability that S outputs 1 on a random output of G of length n, and let b_n be the probability that it outputs 1 on a truly random input of the same length. The probabilities are taken over the random output of G and the random choices of S. G passes the statistical test S if for any k there is an N_k such that for all n > N_k it is true that |a_n - b_n| < n^{-k}.

In other words, the difference between the behavior of the test on outputs of the generator and on random strings goes to 0 faster than the inverse of any polynomial.

Let us first prove that once you have a pseudorandom generator which extends the seed slightly, you can get an arbitrary extension.

Theorem 10.8 If there is a pseudorandom bit generator G, then for any strictly increasing polynomial p there is a pseudorandom bit generator G' that extends n bits to p(n) bits.

Proof: The only problem is that G might not extend the seed sufficiently. By definition G maps n bits to more than n bits. We will assume that G outputs exactly n + 1 bits, since if it outputs more bits we can just ignore them; note that G remains a pseudorandom bit generator (prove this!). Now define G' to be G iterated p(n) - n times, i.e. on an input of length n we first compute G to get a string of length n + 1, then compute G on this string to get a string of length n + 2, and so on, until we have a string of length p(n). This generator produces a string of the wanted length, and it is easy to see that it works in polynomial time. We prove that it is pseudorandom by converting a hypothetical statistical test S which distinguishes the output of G' from random strings into a test which distinguishes the output of G from random strings.
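In code, the construction of G' is just the following iteration (a sketch; g stands for the assumed generator, mapping any string of length m to one of length m + 1):

    def stretch(g, seed, target_len):
        """The generator G' of Theorem 10.8: iterate a one-bit-stretching
        generator until the output has the desired length."""
        s = seed
        while len(s) < target_len:
            s = g(s)              # each application adds one bit
        return s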

Let a_n be the probability that S outputs 1 on random outputs from G' of length p(n), and let b_n be the corresponding probability when the input is truly random. By assumption, for some k and infinitely many n (for notational convenience we assume it is true for all n) we have |a_n - b_n| ≥ n^{-k}.

Consider the following probability distributions R_i, 0 ≤ i ≤ p(n) - n, on strings of length p(n): start with a truly random string of length n + i and iterate G p(n) - n - i times. Note that R_0 consists of random outputs of G', while R_{p(n)-n} consists of truly random strings. Let q_i be the probability that S outputs 1 on distribution R_i. Since q_0 = a_n and q_{p(n)-n} = b_n and |a_n - b_n| ≥ n^{-k}, there is some i such that |q_i - q_{i+1}| ≥ 1/(n^k p(n)). Let us fix this i.

Now consider the following statistical test on strings of length n + i + 1: given a string x, iterate G p(n) - n - i - 1 times and run S on the result. If the initial string was random, we have produced an element according to R_{i+1}, and the probability of getting output 1 is q_{i+1}. On the other hand, if the initial string was the output of G on a random string of length n + i, then we have produced a string according to R_i, and the probability of getting a 1 is q_i. This implies that we have found a way of distinguishing the output of G from random strings, and hence we have reached a contradiction, since G was supposed to be pseudorandom.

This should finish the proof, but the very careful reader will see that there are some minor problems. The proposed test uses two auxiliary parameters, p(n) and i. The value p(n) causes no problems since it is the value of a fixed polynomial, but it is not clear how to find i. We sketch how to get around this problem. Let c be a constant. On a given input of length n, consider the tests given by the different values of i. Evaluate each such test by picking n^c random inputs according to both distributions, and let i_0 be the value that gives the biggest difference between the two distributions. Now run the test with i = i_0 on the given input. Note that this test obviously runs in polynomial time. It is a tedious (and not that easy) exercise to check that for some c this "universal" test will distinguish the random strings from outputs of G'.

Let us next investigate the existence of pseudorandom bit generators.

Theorem 10.9 If NP ⊆ BPP then there are no pseudorandom generators.

Proof: Just observe that the test S_G is in NP. Since S_G distinguishes the output of G from random bits, no generator can be pseudorandom if S_G can be run in probabilistic polynomial time, and this is exactly what NP ⊆ BPP would give us.

In particular, if P = NP there are no pseudorandom generators, and thus proving the existence of such generators would prove P ≠ NP. The best we could then hope for is to prove that if P ≠ NP then there are pseudorandom generators. Also this is probably too much to hope for. The reason is that P vs NP is a question about the worst case behavior of algorithms, while the existence of pseudorandom generators is an average case question. This forces us to base the construction of pseudorandom generators on even stronger assumptions.

Definition 10.10 A function f is a one-way function if it is computable in polynomial time and for any probabilistic polynomial time algorithm A the following holds: Choose a random input x of length n and compute y = f(x). If A is given y as input, then the probability that it outputs a z such that f(z) = y goes to 0 faster than the inverse of any polynomial.

Remark 10.11 Note that we cannot ask A to actually find the initial x, since in such a case the constant function would be one-way.

We have:

Theorem 10.12 If there is a pseudorandom bit generator then there is a one-way function.

Proof: We claim that the function computed by the generator (i.e. the function from the seed to the output) is one-way. By Theorem 10.8 we can assume that the generator expands n bits to 2n bits. Assume that the function given by this generator (let us, by abuse of notation, call the generator as well as the function it computes G) is not one-way, in other words that there is a k and an A such that A finds an inverse image of a given function value with probability at least n^{-k} (for infinitely many n). Then the following test S will distinguish outputs of G from random bits:

On input x, run A. Suppose A outputs y; then, if G(y) = x, output 1, otherwise output 0.

If x is a truly random string of length 2n, then the probability that the test S outputs 1 is bounded by the probability that x can be output by G. Since there are 2^{2n} possible strings and at most 2^n outputs of G, this probability is bounded by 2^{-n}. On the other hand, if x is an output of G, then the probability of output 1 is exactly the success probability of A, which by assumption is at least n^{-k} (for infinitely many n). Thus this test distinguishes the output of G from random strings, contradicting that G is pseudorandom (the test is polynomial time since both A and G are polynomial time). This proves that G is a one-way function.

It was a long standing open question whether the converse of Theorem 10.12 is also true, i.e. whether it is possible to construct a pseudorandom bit generator starting from any one-way function. In 1990 it was proved by Håstad, Impagliazzo, Levin and Luby that this is indeed the case, but their proof is much too complicated for the present set of notes. Instead we prove the following theorem, which is due to Yao (the present proof is due to Goldreich and Levin). Let a one-way length-preserving permutation be a one-way function which for each n is a 1-1 mapping on strings of length n.

Theorem 10.13 If there is a one-way length-preserving permutation then there is a pseudorandom bit generator.

Lemma 10. (r.14 Suppose we have a probabilistic polynomial time algorithm A that on input f (x). r. i) and bn = 2−2n x. r computes (x. x) be the complement of (r. Now if b0 = b1 output a random bit and otherwise output i such that bi = 1. The hard part is to prove that it is pseudorandom. Now consider the following algorithm for predicting (x. Let (r. The following lemma of Goldreich and Levin will be crucial.Proof: Let f be the one-way lengthpreserving permutation. 1). if f is a one-way function then (x. r. r. r run S let b0 = S(f (x). r) = f (x). it is the parity of n xi yi ). Let us first see how Theorem 10. (Here the probability is taken over a random choice of x and r and the random choices of A). x))+ 101 . r. Suppose without loss of generality that an ≥ bn + n−k .r p(x.e. x))(1 − p(x. r is p(x. Then an = 2−2n−1 x. (r.r. x) is a pseudorandom bit generator. 0) and b1 = S(f (x). (r. Let x and r be random strings of length n and let (x. (r.i p(x. On input f (x).13 follows from Lemma 10. i) be the probability that S outputs 1 on (f (x). x)). r) looks random to any probabilistic polynomial time machine which only has the information f (x). r) with a probability greater than 1 + 2 1 Q(n) where Q is a polynomial. r. r). x). y) be the inner product modulo 2 of the strings x and y (i. Then we claim that i=1 the function g(x. r. i). Then there is a probabilistic polynomial time algorithm B that inverts f with probability of success at least 1 2Q(n) . Consider the above algorithm on input f (x). r.14. Let p(x. then the probability that it outputs the correct value for f (x). It is a bit generator since it expands 2n bits to 2n + 1 bits and is polynomial time computable since f is polynomial time computable. r. r. Suppose g is not pseudorandom and that S is a statistical test which outputs 1 with probability an on random bits and bn on random outputs of g. r. r. In other words.

Next let us prove Lemma 10.14.

Proof: (Lemma 10.14) We give a proof due to Rackoff. First observe that for at least a fraction 1/(2Q(n)) of the x's, A predicts (x, r) with probability (now only over r and the choices of A) at least 1/2 + 1/(2Q(n)). We will describe a procedure that is successful with high probability for each such x, and this is clearly sufficient. We compute each bit of x individually. Let e_i be the unit vector in the i'th dimension. We could ask A about f(x), e_i, but there is no reason the answer would be correct for these particular inputs; A is only guaranteed to do well on average. We therefore need to ask about many points, and we will use a small random subspace shifted by e_i. The r's asked about will only be pairwise independent, but we can guess the answers on the entire subspace by guessing the answers on the basis vectors. Let k be a parameter and let ⊕ denote exclusive-or. The algorithm on input y now works as follows:

Pick k random vectors r_1, r_2, ..., r_k of length n.
For each value of the k bits b_1, b_2, ..., b_k do
  For i = 1 to n do
    count = 0
    For all non-empty subsets S of {1, 2, ..., k} do
      Ask A about y, e_i ⊕_{j∈S} r_j; suppose the answer is b.
      Compute b^S = b ⊕_{j∈S} b_j and set count = count + 1 - 2b^S.
    od
    Set x_i = 0 if count > 0 and 1 otherwise.
  od
  If f(x) = y, output x and stop.
od
Report 'failure'.

Just to avoid confusion, observe that count is the number of 0-guesses minus the number of 1-guesses, and hence we are making a majority decision. If A runs in time T(n) and f in time T_1(n), then the algorithm runs in time roughly 2^{2k}(nT(n) + T_1(n)), and thus it is polynomial time if k is O(log n).

We need to analyze the probability that we find the correct x. Consider the iteration of the outer loop in which all the guesses are correct, i.e. b_j = (x, r_j) for all j; we claim that in this iteration each x_i is set correctly with good probability. Let r^S = e_i ⊕_{j∈S} r_j. If A gives the correct answer (i.e. (x, r^S)) to the question y, r^S, then b^S = (x, e_i) = x_i. This means we are in pretty good shape, since we know that A gives a majority of correct answers and the r^S are fairly random.

Lemma 10.15 For S_1 ≠ S_2, the vectors r^{S_1} and r^{S_2} are independent and uniformly distributed on {0,1}^n.

Proof: Suppose j ∈ S_1 but j ∉ S_2 (if there is no such j we can interchange S_1 and S_2). It is easy to see that r^{S_2} is uniformly distributed (it is defined as an exclusive-or of several strings, at least one of which is uniformly random), and for any fixed value of r^{S_2}, the presence of r_j in the exclusive-or defining r^{S_1} makes sure that r^{S_1} is still uniformly distributed.

It follows from Lemma 10.15 that the bits b^S are pairwise independent. Now recall Tchebychev's inequality:

Theorem 10.16 Let X be a random variable with expected value µ and variance v. Then the probability that |X - µ| ≥ λ is bounded by v/λ^2.

Suppose for notational convenience that x_i = 0. Then count is a random variable with expected value at least (2^k - 1)/Q(n) and variance at most 2^k - 1. Using Tchebychev's inequality with λ = (2^k - 1)/Q(n) and v = 2^k - 1, we see that x_i takes the incorrect value with probability at most Q(n)^2/(2^k - 1). Now if 2^k - 1 ≥ 10nQ(n)^2, then the probability that x_i does not take the correct value is bounded by 1/(10n), and thus the probability that some x_i is incorrect is bounded by 1/10. This concludes the proof of Lemma 10.14.

Remark 10.17 We have now given a generator that extends the input by one bit, and we know by Theorem 10.8 that we can get a generator which extends the output arbitrarily. We can take this to be the following very natural generator: pick x and r randomly and let the i'th output bit be b_i = (f^i(x), r), where f^i denotes f iterated i times.

Now that we have studied good generators, it is natural to ask what happens if we use these generators to produce the random bits needed by a probabilistic algorithm. Suppose we have a probabilistic machine M which recognizes a BPP-language B, and let G be a pseudorandom generator.

Suppose M uses p(n) random bits and that for some small constant ε, G extends n^ε bits to p(n) bits. Now consider the following statistical test S_{M,x} of a random string r of length p(n): given x, run M on input x with random coins r, and answer with the output of M. We know that when x ∈ B and r is random, the probability that this test outputs 1 is at least 2/3, while otherwise it is at most 1/3. Since G by assumption passes all statistical tests, it is tempting to conclude that the same is true when r is instead a random output of G. This would give a theorem similar to Theorem 9.11, saying that B could be recognized in time close to 2^{n^ε}, since we would only have to try all seeds of G rather than all sets of p(n) coins. The reason this argument fails is that the test has a parameter x which might be hard to find (the parameter M is not a problem, since it is of constant size).

All is not lost, since we could change the statistical test to choose x randomly and then study the behavior of M. Then we could prove that we have a deterministic algorithm that runs in time close to 2^{n^ε} and is correct for most inputs. However, since we have not studied the concept of being correct for most inputs, we will not pursue this approach. Instead we have:

Definition 10.18 A non-uniform statistical test is a probabilistic polynomial time algorithm that on inputs of length n gets an advice a_n which is of polynomial length.

Remark 10.19 Note that the advice is the same for all strings of length n. The interested reader might want to prove that the given definition corresponds to polynomial size circuits without any uniformity constraints.

Definition 10.20 A pseudorandom generator is non-uniformly strong if it passes all non-uniform statistical tests.

This definition is stronger than the previous one, since we are allowing stronger statistical tests. In general, all proofs for the uniform case translate to the non-uniform case; in particular Theorem 10.8 remains true. We will not carry this out here, but it turns out that the existence of such generators is equivalent to the existence of one-way functions where we also allow the inverting algorithm to have advice. We now finish the discussion with a theorem of Yao.

Theorem 10.21 If there is a pseudorandom generator which is non-uniformly strong, then BPP ⊆ ∩_{ε>0} DTIME(2^{n^ε}).

Proof: The proof is as outlined above. Suppose B ∈ BPP and that it is recognized by M which uses p(n) coins and runs in time T_1(n) (both these bounds are polynomials). Fix ε > 0, let δ < ε, and let G be a non-uniformly strong generator which extends n^δ bits to p(n) bits and runs in time T_2(n) (which also is a polynomial). Now let x be an arbitrary input of length n and consider the above test S_{M,x}. This test uses the advice x, but since G is non-uniformly strong, it passes this test. This implies that if we replace the coins by a random output of G, we still have essentially the same probability of acceptance. We now just try all the 2^{n^δ} possible seeds for G and take a majority decision. This can be done in time 2^{n^δ}(T_1(n) + T_2(n)), which is O(2^{n^ε}). Since both B and ε were arbitrary, we have proved the theorem.

Thus we have proved that if there are one-way functions in the non-uniform setting, then BPP can be simulated in time which is significantly cheaper than exponential. If one is willing to make stronger assumptions, then one can draw stronger conclusions. In particular, if there is a polynomial time computable function such that inverting it (in the non-uniform setting, with non-negligible success ratio) on inputs of length n requires time 2^{cn} for some constant c > 0, then BPP = P.

11 Parallel computation

The price of processors has dropped remarkably in the last decade, and it is now feasible to make computers that have a large number of processors. The most famous multi-processor computer might be the Connection Machine, which has 2^16 = 65536 processors. The concept of having many processors working in parallel leads to many interesting theoretical problems. One could phrase the main question as a variant of a traditional math problem: suppose one computer can compute a given function in one million seconds; how long would it take a million computers to compute the same function? The answer to this question is not known, but it seems like it could be anywhere from one second to a million seconds, depending on the function. It is an important theoretical problem to identify the computational tasks that can be parallelized in an efficient manner.

When many processors cooperate to solve a problem, it is of crucial importance how they communicate. In fact, it seems like in practice this is the overshadowing problem in making large scale parallel computation efficient. It is hard to get this fairly practical consideration into the theoretical models in a suitable manner, and this complication will usually get lost. We choose here to study the circuit model of computation, and as we will see, communication between processors will be ignored. We do not want to argue that the model does not reflect reality; we only want to point out that there is one important aspect missing. In this section we will just give the first definitions and show some basic properties.

11.1 The circuit model of computation

We have previously briefly discussed the concept of a Boolean circuit. It is a directed acyclic graph with three types of nodes: input nodes, operation nodes and output nodes. The input nodes are labeled by variable names x_i, and the operation nodes are labeled by logical operators; we will here only allow the operators ∧, ∨ and ¬. The inputs to a node v are the nodes w for which (w, v) is an edge. The circuit computes a function {0,1}^n → {0,1} in the natural way. (Substitute the value of the i'th coordinate for x_i, and then evaluate the nodes by letting each operation node take the value obtained by applying the corresponding operator to the inputs of that node.)

We will be interested in two parameters of a circuit: its size and its depth. The size of a circuit C_n will be denoted by |C_n| and is equal to the number of nodes it contains, while the depth, denoted d(C_n), is the length of the longest directed path from an input to an output.
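To make the model concrete, here is a small evaluator for such circuits. It is an illustration only; the encoding of a circuit as a topologically sorted list is one of several reasonable choices and is not taken from the notes.

    def eval_circuit(nodes, inputs):
        """Evaluate a Boolean circuit given as a topologically sorted list.
        Each node is ('x', i) for an input, or (op, args) with op in
        {'and', 'or', 'not'} and args a list of earlier node indices.
        Returns the value of the last node."""
        val = []
        for node in nodes:
            if node[0] == 'x':
                val.append(inputs[node[1]])
            elif node[0] == 'and':
                val.append(all(val[j] for j in node[1]))
            elif node[0] == 'or':
                val.append(any(val[j] for j in node[1]))
            else:                              # 'not', a single argument
                val.append(not val[node[1][0]])
        return val[-1]

    # (x1 AND x2) OR (NOT x1):
    c = [('x', 0), ('x', 1), ('and', [0, 1]), ('not', [0]), ('or', [2, 3])]
    assert eval_circuit(c, [False, True]) is True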

The functions we have been considering so far take inputs of arbitrary length, while a circuit can only take inputs of one given length. The way to resolve this is to let a function be computed by a sequence of circuits (C_n)_{n=1}^∞, where C_n computes f on inputs of length n. We will then be interested in the growth rates of the size and depth of C_n as functions of n. In particular, we will say that a sequence of circuits is of polynomial size if the growth rate of |C_n| is at most polynomial in n.

If there is a processor at each node of the circuit, then the number of processors is equal to the size of the circuit, and the time needed to evaluate the circuit is equal to the depth of the circuit. Thus, if we are interested in fast parallel computation, it is interesting to construct small circuits with small depth.

Let us now state a theorem that was implicitly proved in Section 7.

Theorem 11.1 If B ∈ P then B can be recognized by polynomial size circuits.

Proof: (Outline) In the proof of Theorem 7.25 we saw that, given a Turing machine M and an input x, we could construct a circuit whose output is equal to the output of M on input x. The circuit constructed the computation tableau of M row by row. If one looks closely at that proof, one discovers that the structure of the circuit only depends on M, while x enters as the input of the circuit. In particular, given a language B ∈ P, we take the corresponding Turing machine M_B, and given n we can construct a circuit C_n which gives the same output as M_B on all inputs of length n. The size of this circuit is only a constant factor greater than the size of the computation tableau of M_B on inputs of length n. If M_B runs in time O(n^c), then this size is O(n^{2c}), and thus we have constructed circuits for B of polynomial size.

Remark 11.2 By more efficient constructions, it is possible to give a better simulation of Turing machines and decrease the size of the above circuit to O(n^c log n).

One immediate question is whether the converse of the above theorem is true, i.e. if a function can be computed by polynomial size circuits, is it then true that the function lies in P?

With the current definitions this is not true. The reason is that we have not put any conditions on how to obtain the circuits C_n. To see the problem, consider the following language:

B = {x | M_{|x|} halts on blank input}

As we have seen earlier, this language is not even recursive. However, it has very small circuits, since for each length n either all strings of length n are in B or no string of that length is a member of B. Thus C_n can be a trivial circuit which always outputs 0 or always outputs 1, depending on whether M_n halts on blank input. How to decide which of the two circuits to choose is non-recursive, but this is of no concern in the old definition, and thus the following definition is called for.

Definition 11.3 A sequence of circuits (C_n)_{n=1}^∞ is P-uniform (L-uniform) iff there is a Turing machine M, which works in polynomial time (logarithmic space), that on input 1^n prints a description of C_n on its output tape.

Using this definition we get:

Theorem 11.4 B can be computed by polynomial size P-uniform circuits iff B ∈ P.

Proof: (Outline) First observe that the circuits described in the proof of Theorem 11.1 are P-uniform; they are in fact L-uniform by the proof of Theorem 7.25. This proves one of the implications of the theorem. To see the reverse implication, suppose that B is recognized by polynomial size P-uniform circuits. Then on input x a Turing machine can first construct the circuit C_{|x|} and then compute its value on input x. The first part is polynomial time by the definition of P-uniform, and the second part is easily seen to be polynomial time.

11.2 NC

We can now define our main complexity class of parallel computation.

Definition 11.5 A set B is in NC^k iff it can be recognized by a family of L-uniform circuits (C_n)_{n=1}^∞ where C_n is of polynomial size and d(C_n) ≤ O((log n)^k). Furthermore, NC = ∪_{k=1}^∞ NC^k.

Remark 11.6 The name NC is short for Nick's Class, named after Nick Pippenger, who was one of the first researchers to study this class.

Remark 11.7 Normally one requires even stricter uniformity constraints for NC^1 than L-uniformity. For reasons that go beyond the scope of these notes, this gives a better definition, but to make life easier we will stick with the definition above.

From a theoretical standpoint, NC is considered to be the subset of P which admits ultrafast parallel algorithms (time O((log n)^k)). Some of the algorithms we present will also be efficient in practice and some will not. When we describe how to construct circuits, we will be quite informal and talk in terms of processors doing simple operations. Formally this should of course be replaced by nodes in circuits, but somehow processors go better with the intuition.

We can now make an obvious observation.

Theorem 11.8 NC ⊆ P.

Proof: This follows immediately from the definition of NC and Theorem 11.4.

Example 11.9 Given two n-bit numbers, compute their sum. This might look straightforward, since we can have one processor taking care of each digit, but we have to do something intelligent with the carries. You see the reason for this if you try to add the binary numbers 01111111 and 00000001: if we treat the carries without thinking, we will need circuits of linear depth. The critical point is to quickly discover whether you have a carry coming from your right. The process for doing this is called carry look-ahead.

We use one processor for each digit position of the two numbers. This processor checks whether that position Generates, Propagates or Stops a carry, and marks the position G, P or S accordingly. We can then combine this information in a binary tree to see how longer blocks behave with respect to carries. For instance, a block of length two will generate a carry if it looks like GG, GP, GS or PG, it will propagate a carry if it looks like PP, and it will stop a carry if it looks like PS, SG, SP or SS. Continuing in this way, we can quickly compute whether longer intervals generate, propagate or stop a carry. This is the basic idea; how to do it might best be seen through an example. Suppose the numbers are 01111011 and 01001010. We get the representation SGPPGSGP.

Going up, we build a binary tree (see Figure 11) to find out how longer blocks behave. Now, to see if there is a carry in a given position, we just have to figure out whether the suffix of the string SGPPGSGP starting at that position evaluates to G. One way to phrase this formally is the following. Suppose you want to know if there is a carry in a given position. Start at that position and walk down the tree. Whenever you go right, write down what you see coming in from the left to that same node. Finally, evaluate the string you get. For instance, if you start in position 6 in the given tree, you get the string PG, which evaluates to G, and thus there is a carry in position 6. One can also view this last step as sending the appropriate values down the tree, as indicated in Figure 12.

[Figure 11: Carry look-ahead tree, going up.]

[Figure 12: Carry look-ahead tree, going down.]

By actually building this tree in the circuit, we see that we get a circuit of depth O(log n) which computes all the carries, and since once we know the carries the rest is simple, we can conclude that addition belongs to NC^1.
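The G/P/S combining rule is associative, which is what makes the balanced tree work. The following sketch (an illustration, not from the notes) computes the same carries by a right-to-left scan; the circuit performs these combinations in a tree of depth O(log n) instead.

    def add_with_carry_lookahead(a, b):
        """Add two equal-length binary strings (most significant bit first)
        by first computing every carry with the G/P/S rules.  A carry out
        of the top position is ignored in this sketch."""
        status = ['G' if x == y == '1' else
                  'S' if x == y == '0' else 'P'
                  for x, y in zip(a, b)]

        def combine(left, right):              # associative G/P/S rule
            return right if left == 'P' else left

        n = len(a)
        carry_in = [0] * n                     # carry coming into position i
        suffix = 'S'                           # status of positions i+1..n-1
        for i in range(n - 1, -1, -1):
            carry_in[i] = 1 if suffix == 'G' else 0
            suffix = combine(status[i], suffix)
        return ''.join(str((int(x) + int(y) + c) % 2)
                       for x, y, c in zip(a, b, carry_in))

    assert add_with_carry_lookahead('01111011', '01001010') == '11000101'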

Example 11.10 Given two n-bit numbers, we want to multiply them. It is not hard to see that this can be reduced to adding together n n-digit numbers (just consider the ordinary multiplication algorithm we learned in first grade). By the previous example we can add these numbers pairwise in depth O(log n) to obtain n/2 numbers whose sum we want to compute. Adding the numbers pairwise for log n rounds thus gives us the answer, and we get a circuit of polynomial size and depth O((log n)^2). In fact, multiplication and addition of n numbers can both be done in depth O(log n); we leave this as an exercise.

Example 11.11 Given two n × n matrices A = (a_{ij}) and B = (b_{ij}), multiply them. We want to compute sum_{j=1}^n a_{ij} b_{jk} for all i and k. Let us suppose the entries are m-bit integers. We have the following algorithm:

1. Compute all the products a_{ij} b_{jk} for all i, j and k.
2. Compute the sums sum_{j=1}^n a_{ij} b_{jk} for all i and k.

If we have O(m^2 n^3) processors, we can do the first operation in depth O(log m) (by the exercise extending the multiplication example), while the second can be done with O(n^3 m) processors in depth O(log nm) (using the same exercise). Thus the entire computation uses a polynomial number of processors and O(log nm) depth.

The problems that seem hardest to parallelize are those where the natural sequential algorithms are iterative in nature. Examples of such problems are computing integer GCDs, solving linear equations, and computing a depth-first search tree of a graph. Of these, the linear equation problem can be solved in NC, and finding a depth-first search tree is known to be in RNC (Random NC, i.e. circuits of small depth where you allow random inputs and only require a good probability of finding a depth-first search tree), while for integer GCDs no circuits of sublinear depth are known. Just to give an example of something nontrivial, let us give as a last example an algorithm to compute the determinant of a matrix which runs in O((log n)^2) time and uses a polynomial number of processors.

Example 11.12 Given a matrix M, compute its determinant. We have to assume some facts from linear algebra. The trace of a matrix M (denoted by Tr(M)) is the sum of its diagonal elements, i.e. Tr(M) = sum_{i=1}^n m_{ii}. If λ_i denote the eigenvalues of M, then it is well known that prod_{i=1}^n λ_i = det(M) and that Tr(M) = sum_{i=1}^n λ_i. Let s_k = Tr(M^k), which equals sum_{i=1}^n λ_i^k since the eigenvalues of M^k are λ_i^k.

The s_k are easy to compute in parallel, since we have already shown how to compute matrix products, and M^k can be computed by O(log k) matrix products done in sequence. The characteristic polynomial of M is det(λI - M) = λ^n + sum_{i=1}^n c_i λ^{n-i} = c(λ). It is standard that c_n = det(-M) and that c(λ) = prod_{i=1}^n (λ - λ_i). From this it follows that c_i = sum_{S:|S|=i} (-1)^i prod_{j∈S} λ_j, where S ranges over subsets of {1, 2, ..., n} and |S| is its cardinality. Using this, one can prove that

    ( 1        0        0        0       ...  0 ) ( c_1 )      ( s_1 )
    ( s_1      2        0        0       ...  0 ) ( c_2 )      ( s_2 )
    ( s_2      s_1      3        0       ...  0 ) ( c_3 )  = - ( s_3 )
    ( s_3      s_2      s_1      4       ...  0 ) ( c_4 )      ( s_4 )
    (  .        .        .        .       .   . ) (  .  )      (  .  )
    ( s_{n-1}  s_{n-2}  s_{n-3}  s_{n-4} ...  n ) ( c_n )      ( s_n )

Thus all that remains is to show that we can solve Ax = b in parallel when A is a lower-triangular matrix. If we multiply each row by a suitable number, we can assume that all the diagonal entries of A are 1. Then A can be written as I - B, where B is strictly lower-triangular. Now it is easy to check that A^{-1} = sum_{i=0}^n B^i, and thus by some additional matrix multiplications we can compute the inverse of A. Hence we can solve for the c_i and find c_n = det(-M). The number of processors is quite large but still polynomial, and the depth is O((log n)^2). Once we can compute determinants we can do almost all operations in linear algebra; the drawback in practice is that we get fairly large circuits.
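The sequential core of this method is compact; the parallel algorithm evaluates the same quantities with the stated depth. A sketch in exact rational arithmetic (an illustration, not the circuit itself):

    from fractions import Fraction

    def det_via_newton(M):
        """Determinant from the power sums s_k = Tr(M^k) and Newton's
        identities  s_k + s_{k-1}c_1 + ... + s_1 c_{k-1} + k c_k = 0,
        solved by forward substitution in the triangular system."""
        n = len(M)
        A = [[Fraction(x) for x in row] for row in M]
        P, s = A, []
        for _ in range(n):                      # s_k = Tr(M^k), k = 1..n
            s.append(sum(P[i][i] for i in range(n)))
            P = [[sum(P[i][k] * A[k][j] for k in range(n))
                  for j in range(n)] for i in range(n)]
        c = []
        for k in range(1, n + 1):               # forward substitution
            ck = -(s[k - 1] + sum(c[j] * s[k - 2 - j] for j in range(k - 1)))
            c.append(ck / k)
        return (-1) ** n * c[-1]                # c_n = det(-M)

    assert det_via_newton([[1, 2], [3, 4]]) == -2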

11.3 Parallel time vs sequential space

A couple of the examples of problems that we could do in NC also appeared earlier as problems doable in small space. This is no coincidence: sequential space and parallel time are quite closely related, as long as one does not put any other restrictions on the computation.

Theorem 11.13 Suppose S(n) ≥ log n for all n. If B can be recognized in space O(S(n)), then it can be recognized by circuits of depth O(S^2(n)).

Proof: Suppose B is recognized by M_B which runs in space O(S(n)). There are 2^{O(S(n))} configurations of M_B, and thus we will use many processors, but this is of no concern to us for the moment. We will use one processor p_C for each possible configuration C of M_B. At stage i of the algorithm, p_C finds out which configuration C changes to in 2^i computation steps. This is easy for i = 0, and in general it is done as follows. After stage i - 1, p_C already knows which configuration C' the configuration C transforms to in 2^{i-1} steps. On the other hand, p_{C'} knows which configuration C' transforms to in 2^{i-1} steps, and this is the desired answer. Since M_B runs in time 2^{O(S(n))}, after O(S(n)) stages the processor corresponding to the initial configuration knows the result of the computation.

Thus the critical parameter is the depth required to do one stage. A single stage can be done by a binary tree of depth O(S(n)) which connects each processor to each other processor and selects the processor corresponding to the current information. To sum up, we have O(S(n)) stages, where each stage can be done in depth O(S(n)). This gives total depth O(S^2(n)), and thus we have proved the theorem. By inspection of the proof we conclude that the circuits are of polynomial size; we leave the details to the reader.

Corollary 11.14 L ⊆ NC^2.

Proof: By Theorem 11.13, any language in L can be recognized by circuits of depth O((log n)^2), and these circuits are of polynomial size.
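The doubling step in this proof is the classical pointer-jumping trick. A sketch, where next1 is assumed to map every configuration to its one-step successor (with halting configurations mapping to themselves):

    def reach_tables(next1, configs, stages):
        """step[C] holds the configuration reached from C in 2^i steps
        after stage i; each stage is one parallel table lookup."""
        step = {C: next1(C) for C in configs}        # stage 0: 2^0 = 1 step
        for _ in range(stages):                      # stage i from stage i-1
            step = {C: step[step[C]] for C in configs}
        return step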

There is also a close-to-converse result to Theorem 11.13. Let S-uniform denote a family of circuits that can be constructed by a Turing machine that runs in space S.

Theorem 11.15 Suppose S(n) ≥ log n for all n. If B can be recognized by S-uniform circuits of depth O(S(n)), then B can be recognized in space O(S(n)).

Proof: The idea of the proof is to do a depth-first search of the circuit for B. By duplicating nodes we can assume that the circuit is actually a tree. (One has to check that this does not change the condition of S-uniformity, but it does not.) We evaluate the circuit in a depth-first manner. At each point in time we maintain a path in the circuit from the output to an input, with the following properties.

but let us give it anyway.17 If A is P-complete then P = N C ⇔ A ∈ N C. However we know by Corollary 11. Given any B ∈ P we know by the definition of P-complete that there is function f computable in L such that x ∈ B ⇔ f (x) ∈ A. Using S(n) = log n we get the following immediate corollary: Corollary 11.16 N C 1 ⊆ L. Combining this circuit with the NC-circuit for A becomes an NC-circuit for B.we have completed the proof. With this close connection between L and NC the following theorem is not surprising: Theorem 11. On the other hand if A ∈ N C then we have to construct NC-circuits for any function in P. As a final comment let us note that for one of the most famous problems that seem hard to do in parallel. If P = N C then clearly A ∈ N C. 115 . namely integer GCDs. it is not known that this problem is P-complete. Proof: The proof is more or less the same as the proof of other theorems of this type.14 that f can be computed also in N C 2 .

12 Relativized computation

As a tool for understanding computation, one particular way of augmenting the power of a computation has been studied extensively. For definiteness, assume that we use the Turing machine model of computation. Let A be a fixed set, and give the machine an extra tape, called the query tape. On this tape the machine can write a string x and then enter a special state called the query state. In one time-step the query tape then changes content; the new value is 1 if x ∈ A and 0 otherwise. Thus the machine is allowed to ask questions about the set A and very inexpensively obtain correct answers. The set A, which is called the oracle set, should be thought of as a difficult set, since otherwise the machine could have answered the questions itself at only a slightly higher cost. The computation is said to take place relative to the oracle A (hence the title relativized computation). A Turing machine M with an oracle A is usually denoted M^A to avoid confusion.

Now it is natural to define P^A as the set of languages that can be recognized in polynomial time by Turing machines with oracle A. In a similar way all the other complexity classes can be defined. One word of caution: we will count the used part of the query tape as part of the work-tape of the machine, and hence it should be bounded when we are looking at space bounded classes. This definition is not standard when dealing with L and NL, but we will not consider those classes here; instead we will only consider P^A, NP^A, BPP^A and PSPACE^A.

The reason this concept is interesting is that almost all known proofs remain true if we allow all machines involved in the proof access to the same oracle. In particular, this is the case for all proofs given in these notes up to this point. Let us state some theorems that follow (the reader is encouraged to go back and check the proofs).

Theorem 12.1 For all oracles A, P^A ⊆ NP^A ⊆ PSPACE^A.

Theorem 12.2 For all oracles A, P^A ⊆ BPP^A ⊆ PSPACE^A.

The idea is that if P ⊊ NP (i.e. the inclusion being strict) had an "easy" proof, then P^A ⊊ NP^A would be true for all oracles A. However, this is not the case:

Theorem 12.3 If A is a PSPACE-complete set, then P^A = NP^A = BPP^A = PSPACE^A = PSPACE.

Proof: It is sufficient to prove that PSPACE ⊆ P^A and that PSPACE^A ⊆ PSPACE. For the first part, let B be anything in PSPACE. Since A is PSPACE-complete we have B ≤_p A, i.e. there is a polynomial time computable function f such that x ∈ B ⇔ f(x) ∈ A. But this makes B easy to recognize for a machine with oracle A: on input x it just computes f(x), writes this on the oracle tape, reads the answer from the oracle, and outputs this as its own answer. Thus B ∈ P^A, and we conclude that PSPACE ⊆ P^A.

For the second part, suppose we are given a machine M^A that recognizes some language in PSPACE^A. We have to convert this into an ordinary PSPACE-machine which recognizes the same language; essentially we have to get rid of A. But since A is in PSPACE this is not too difficult. Build a subroutine S which takes an input x and outputs 1 if x ∈ A and 0 otherwise. This subroutine can be made to run in polynomial space. Now modify M^A such that instead of entering the query-state it runs S. By definition the result is the same, and it is easy to see that this modified machine also runs in polynomial space.

Theorem 12.3 rules out the possibility of an easy proof that P ≠ NP. This might raise, in a more serious way (or at least so it seems), the possibility that P = NP. However, oracles will not support this either:

Theorem 12.4 There is an oracle B such that P^B ≠ NP^B.

Proof: The oracle B will not be as natural as the oracle A given above; we will construct it piece by piece. Together with B we will also define a language L(B) which for all B will be in NP^B, but we will cleverly construct B such that L(B) is not in P^B.

Definition 12.5 Let L(B) be a language which only contains strings consisting solely of 1's (such a language is called a unary language). The string of n 1's is in L(B) if and only if there is at least one string x of length n such that x ∈ B.

First observe that for any oracle B, L(B) is in NP^B. Formally, L(B) is recognized by the following algorithm:

1. If there is a '0' in the input, reject and stop.
2. Nondeterministically write down a query to the oracle of the same length as the input.
3. If the oracle answers 1, accept; otherwise reject.

To verify that this algorithm is correct is left to the reader. Next we have to define B such that L(B) is not in P^B. Let M_i^B be an enumeration of all oracle machines that run in polynomial time. This is a slightly subtle point, since whether an oracle Turing machine runs in polynomial time depends on the oracle, and we have not yet decided what the oracle should be. This is no real problem, and we get around it as follows. Assume that M_i^B is an enumeration of all Turing machines which has the property that each machine appears an infinite number of times. Equip M_i^B with a stop-watch such that if it has not halted in i|x|^i steps on input x, it automatically halts and outputs 1. Now every set recognized by a polynomial time oracle machine is recognized by some M_i^B (we need to repeat each machine infinitely many times since we do not know for which i it is true that it runs in time i·n^i).

We will now go through an infinite number of stages. In stage i we determine a little bit more of the oracle B, to make sure that M_i^B does not recognize L(B). Let a string be undetermined if we have not yet decided whether it will be in B.

n_0 = 1
for i = 1 to ∞ do
  Make n_i the smallest number bigger than n_{i-1} such that 2^{n_i} > i·n_i^i and such that no string of length n_i has been determined.
  Run M_i^B on input 1^{n_i}. Whenever the machine asks about an undetermined string, fix that string not to be in B.
  If M_i^B accepts the input then
    Make sure that no string of length n_i is in the oracle set.
  else
    Put one undetermined string of length n_i in the oracle set.
  endif
next i
Fix all undetermined strings not to be in B.

For the constructed B, M_i^B will not accept L(B), since it makes an error on 1^{n_i}. Hence we need only check that the construction is not contradictory. The only nonobvious point is that, when needed, there exists an undetermined string of length n_i.

However, since M_i^B on input 1^{n_i} only runs for time i·n_i^i, it can only ask this many questions. Thus only this many new strings can be determined during stage i, and since there were no determined strings of length n_i when stage i started and 2^{n_i} > i·n_i^i, there is an undetermined string that can be put into B.

It turns out that all the other open questions can also be relativized in both possible directions. Let us next take NP versus PSPACE.

Theorem 12.6 There is an oracle C such that NP^C ≠ PSPACE^C.

Proof: This proof will very much follow the same lines as the last proof. Let us start by defining the language.

Definition 12.7 Let L⊕(C) be a unary language such that 1^n ∈ L⊕(C) iff there is an odd number of strings of length n in C.

First observe that for any oracle C, L⊕(C) is in PSPACE^C: the algorithm just asks all queries of length n and keeps a counter to compute the parity of the number of strings in the oracle. We will now construct C to make sure L⊕(C) is not in NP^C. Using the same argument as in the last proof, there is an enumeration N_1^C, N_2^C, ... of all polynomial time nondeterministic oracle machines, where N_i^C runs in time at most i·n^i. We now construct C in stages:

n_0 = 1
for i = 1 to ∞ do
  Make n_i the smallest number bigger than n_{i-1} such that 2^{n_i} > i·n_i^i and such that no string of length n_i has been determined.
  Consider N_i^C on input 1^{n_i}.
  If there is some setting of the undetermined strings that makes N_i^C accept then
    Make such a setting, fixing at most i·n_i^i strings, and fix the remaining strings of length n_i to make sure that an even number of strings of length n_i are in C.
  else
    Fix strings to make sure that an odd number of strings of length n_i are in C.
  endif
next i
Fix all undetermined strings not to be in C.

Again, by construction L⊕(C) is not in NP^C for this oracle. The verification that the construction is correct is similar to the previous verification. Please observe that if N_i^C accepts an input, then it is sufficient to fix the answers to the questions on one accepting computation path, and hence it is sufficient to fix at most i·n_i^i strings in the first case.

Next we have:

Theorem 12.8 There is an oracle D such that BPP^D ⊄ NP^D.

Proof: We proceed as usual.

Definition 12.9 Let Lmaj(D) be a unary language such that 1^n ∈ Lmaj(D) iff a majority of the strings of length n are in D.

This language is not always in BPP^D. However, if we make sure that for each n, at least 60% or at most 40% of the strings of length n are in the oracle set, then a simple sampling algorithm will work. This extra condition means that we have to be slightly careful in the oracle construction. We again give an algorithm to determine the oracle:

n_0 = 1
for i = 1 to ∞ do
  Make n_i the smallest number bigger than n_{i-1} such that 2^{n_i} > 10·i·n_i^i and such that no string of length n_i has been determined.
  Fix all undetermined strings of length less than n_i not to be in D.
  Consider N_i^D on input 1^{n_i}.
  If there is some setting of the undetermined strings that makes N_i^D accept then
    Make such a setting, fixing at most i·n_i^i strings, and fix the remaining strings of length n_i not to be in D.
  else
    Put all undetermined strings of length n_i into D.
  endif
next i

The construction can be seen to be correct by more or less the same reasoning as the last construction. The reason to fix all undetermined strings of length less than n_i to be outside the oracle is to make sure that for n's which are not chosen to be one of the n_i's, it is also true that the number of strings of length n in the oracle is not close to half of all strings of length n. The condition 2^{n_i} > 10·i·n_i^i makes sure that this is true for all n with n_i ≤ n < n_{i+1}.


Our last oracle construction will be:

Theorem 12.10 There is an oracle E such that NP^E ⊄ BPP^E.

Proof: We will use the same language as in the proof that there is an oracle B such that NP^B ≠ P^B. Remember that L(E) is a unary language such that 1^n ∈ L(E) iff there is some string of length n in E. We now construct E to make sure L(E) is not in BPP^E. This time let M_i^E be an enumeration of probabilistic Turing machines. Here there is a slight problem in that M_i^E might not define a correct machine, in the sense that the probability of acceptance might not be bounded away from 1/2 for some inputs. However, this is only to our advantage, since such a machine will not accept any BPP-language, and we do not have to worry that it might accept L(E). We now construct E in stages as follows:

n_0 = 1
for i = 1 to ∞ do
  Make n_i the smallest number bigger than n_{i-1} such that 2^{n_i} > 10·i·n_i^i and such that no string of length n_i has been determined.
  Run M_i^E on input 1^{n_i}. Whenever the machine asks about a string which is not determined, pretend that this string is not in E. Let p be the probability that M_i^E accepts under these conditions.
  If p ≥ 1/2 then
    Fix all strings M_i^E could possibly ask about not to be in E. Also fix all other strings of length n_i not to be in E.
  else
    Find one string of length n_i such that the probability that this string is asked by M_i^E is at most 1/10, and put this string into E. Fix all other strings M_i^E might possibly look at not to be in E.
  endif
next i
Fix all undetermined strings not to be in E.

Here there are some details to check. If p ≥ 1/2, then p is actually the correct probability of acceptance, since we eventually fix all the strings asked about not to be in E. In this case 1^{n_i} ∉ L(E), while the probability that M_i^E accepts 1^{n_i} is at least 1/2, and thus M_i^E does not recognize L(E) in the BPP sense. On the other hand, if p < 1/2, then the final oracle does not agree with

However, since the probability of discovering the difference is bounded by 1/10, the acceptance probability remains below 0.6. Since in this case 1^{n_i} ∈ L(E), also in this case M_i^E fails to recognize L(E) in the BPP sense.

We also need to check that there is a suitable string which is asked with probability at most 1/10. Since the running time of M_i^E on input 1^{n_i} is bounded by i·n_i^i, it does not ask more than this number of questions. If PR(x) is the probability that string x is asked, then

    Σ_{|x|=n_i} PR(x) ≤ i·n_i^i,

and since 2^{n_i} > 10·i·n_i^i there is some x with PR(x) < 1/10. The proof is complete.

We have now established that all the unknown inclusion properties of our main complexity classes can be relativized in different directions. The only information this gives is that the true inclusions cannot be proved with methods that relativize. In principle, methods that do not look very closely at the computation will relativize; this is in particular the case when the computation is treated as a black box which just takes an input and produces an output (after a certain number of steps). Thus, the main lesson of this section is that to establish the true relations between our main complexity classes, we have to look at computation in a very detailed way. There are a few results in complexity theory which do not relativize. One of them (IP = PSPACE) is given in Chapter 13.


13 Interactive proofs

One motivation for NP is to capture the notion of "efficient provability". If A ∈ NP and x ∈ A, then there is a short proof of this fact (the nondeterministic choices of the algorithm which recognizes A) which can be verified efficiently. By the definition of NP all proofs are correct, and an all powerful prover can always convince a polynomial time bounded verifier of a correct NP-statement. As we did with ordinary computation, we can introduce randomness and decrease the requirements. A proof will be a discussion (interaction) between an all powerful prover and a probabilistic polynomial time verifier. Before we make a formal definition, let us give an example.

Example 13.1 Let G1 and G2 be two graphs, both on n vertices. G1 and G2 are said to be isomorphic iff there is a permutation π of the vertices such that (i, j) is an edge in G1 iff (π(i), π(j)) is an edge in G2. In other words, there is a relabeling of the vertices that makes the two graphs identical. This problem is in NP since one can just guess the permutation. On the other hand, it is not known to be in P (or co-NP), nor is it known to be NP-complete. Now consider the following protocol for proving that two graphs are not isomorphic.

    For m = 1 to k:
        The verifier chooses a random i (1 or 2) and sends a graph H,
            which is a random permutation of G_i, to the prover.
        The prover responds j.
        The verifier rejects and halts if i ≠ j.
    next m
    The verifier accepts.

In other words, the prover tries to guess which graph the verifier started with, and the verifier accepts iff he always guesses correctly. Now suppose that G1 and G2 are not isomorphic. Then H is isomorphic only to Gi, and the all powerful prover can tell the value of i and always answer correctly. On the other hand, if G1 and G2 are isomorphic then, independently of the value of i, the graph H is a random graph isomorphic to both G1 and G2. Thus there is no way the prover can distinguish the two cases, and if he tries to answer he will fail each round with probability 1/2. Thus the probability that he can incorrectly make the verifier accept is 2^{-k}, which is very small if k is large. For all practical purposes, if k = 100 and the prover always answers correctly, the graphs are non-isomorphic.
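As an illustration, here is a minimal sketch of the verifier's side of this protocol. The graph representation (edge sets on vertices 0, ..., n−1) and the prover callback are modeling choices made for the sketch, not part of the protocol as stated above:

    import random

    def random_copy(edges, n):
        # Return the edge set of a uniformly random isomorphic copy.
        pi = list(range(n))
        random.shuffle(pi)
        return {(min(pi[u], pi[v]), max(pi[u], pi[v])) for (u, v) in edges}

    def verify_non_isomorphic(G1, G2, n, prover, k=100):
        # `prover` is an arbitrary strategy mapping a graph H to a guess
        # in {1, 2}; an honest all powerful prover checks which of G1, G2
        # the graph H is isomorphic to.
        for _ in range(k):
            i = random.choice((1, 2))
            H = random_copy(G1 if i == 1 else G2, n)
            if prover(H) != i:
                return False   # reject: the prover failed to identify i
        return True            # accept: all k rounds were answered correctly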


A discussion (or interaction) of the type described in the example will be called an interactive proof. Interactive proofs were defined by Goldwasser, Micali and Rackoff in 1985. A different definition, later proved to give the same class of languages, was given independently by Babai around the same time. Interactive proofs attracted a lot of attention at the end of the 1980's, and we will only touch on the highlights of this theory. Let us formalize the properties wanted.

Definition 13.2 A language A admits an interactive proof iff there is an interaction between a probabilistic polynomial time verifier V and an all powerful prover P such that:

1. (Completeness) If x ∈ A then the probability (over V's random choices) that V accepts is at least 2/3.

2. (Soundness) If x ∉ A then, no matter what the prover does, the probability (over V's random choices) that V accepts is at most 1/3.

The number of exchanges of messages might depend on the length of the input, but since we want the entire process to be polynomial time, we limit it to be polynomial in the length of the input.

Definition 13.3 The complexity class IP is the set of languages that admit an interactive proof.

Let us first state an analogue of the error-reduction theorem for BPP from Chapter 9.

Theorem 13.4 If A ∈ IP then there is an interaction between a probabilistic polynomial time verifier V and an all powerful prover P such that:

1. If x ∈ A then the probability (over V's random choices) that V accepts is at least 1 − 2^{-|x|}.

2. If x ∉ A then, no matter what the prover does, the probability (over V's random choices) that V accepts is at most 2^{-|x|}.

Proof: (Outline) The proof is very similar to the corresponding proof for BPP. We just run the protocol many times and make a majority decision in the end. We leave the details to the reader.
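The majority-vote reduction is the same as for BPP; in sketch form, with `run_protocol` a hypothetical stand-in for one independent execution of the basic 2/3-vs-1/3 interaction:

    def amplified_verdict(run_protocol, x, reps):
        # Run the basic protocol `reps` times independently and take a
        # majority vote.  By a Chernoff bound the error probability drops
        # exponentially in reps, so reps = Θ(|x|) gives error at most 2^{-|x|}.
        accepts = sum(1 for _ in range(reps) if run_protocol(x))
        return 2 * accepts > reps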

A far less obvious fact is that one can in fact obtain perfect completeness (i.e. when x ∈ A, the probability that V accepts is 1). Proving this would take us too far, and we omit this theorem.

For the first couple of years, one of the main drawbacks of the theory of interactive proofs was the small number of languages not in NP that admitted interactive proofs. This was dramatically changed in December 1989, when work of Nisan, Fortnow, Karloff, Lund and finally Shamir led to the following remarkable theorem:

Theorem 13.5 IP = PSPACE.

Proof: (Outline) The fact that IP ⊆ PSPACE was established quite early in the theory of interactive proofs. This inclusion was no surprise, since PSPACE is a big complexity class; it was the reverse inclusion that was the big surprise. A formal proof of the first inclusion is slightly cumbersome (but not really hard), and hence we only give an outline.

Suppose A ∈ IP and the interaction that recognizes A contains k pairs of messages. We denote the ith prover message by pi and the ith verifier message by vi, and assume that the prover sends the first message in each round. Now let α be any partial conversation consisting of the first s messages, for some s, and let Pr(x, α) be the probability that V accepts, given that the initial conversation is α, that P plays optimally in the future, and that V follows his protocol. Our goal is to compute Pr(x, e), where e is the empty string, since this number is at least 2/3 when x ∈ A and less than 1/3 otherwise.

If the next message is by the prover, then Pr(x, α) = max Pr(x, αpi), where the maximum is taken over all messages pi. If the next message is by the verifier, then Pr(x, α) = E(Pr(x, αvi)), where E is the expected value over the verifier message vi. Finally, when α is a full conversation, Pr(x, α) is 1 iff the verifier would have accepted after the conversation α, and 0 otherwise; by assumption this can be computed in polynomial time. Using these equations it is easy to give an algorithm that proceeds in a depth first search fashion and evaluates Pr(x, e) in polynomial space.
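A minimal sketch of this depth-first evaluation follows; the `proto` object and its methods are hypothetical interfaces standing in for the (public, polynomial-time) description of the interaction:

    def pr_accept(proto, x, alpha=()):
        # Pr(x, alpha): the acceptance probability when the conversation
        # so far is alpha, the prover plays optimally and the verifier
        # follows his protocol.  The recursion depth is the (polynomial)
        # length of a conversation, so only polynomial space is used,
        # even though the conversation tree has exponentially many leaves.
        if proto.is_full(alpha):
            return 1.0 if proto.accepts(x, alpha) else 0.0
        if proto.prover_moves_next(alpha):
            # the prover picks the best possible continuation
            return max(pr_accept(proto, x, alpha + (m,))
                       for m in proto.possible_messages(alpha))
        # the verifier's message is random; average over his distribution
        return sum(p * pr_accept(proto, x, alpha + (m,))
                   for m, p in proto.verifier_distribution(x, alpha))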

To prove that PSPACE ⊆ IP we need "only" give an interactive proof which recognizes TQBF, which was proved PSPACE-complete in Theorem 7.17. In fact, we will use that determining the truth of the special type of quantified Boolean formulas constructed in the proof of Theorem 7.17 is PSPACE-complete. We only give an outline of the argument. Let us recall part of that proof.

We wanted to construct a formula GET(C1, C2, k) saying that the Turing machine can get from configuration C1 to configuration C2 in 2^k steps. This formula was constructed recursively using

    GET(C1, C2, k) = ∃C ∀(A, B) ∈ {(C1, C), (C, C2)} GET(A, B, k − 1).

Now encode the ∀ quantifier as a Boolean variable x1 and rewrite the formula to the following:

    GET(C1, C2, k) = ∃C ∀x1 ∃A, B ((x1 ⇒ ((A = C1) ∧ (B = C))) ∧ (x̄1 ⇒ ((A = C) ∧ (B = C2))) ∧ GET(A, B, k − 1)).

It is not difficult to write (x1 ⇒ ((A = C1) ∧ (B = C))) ∧ (x̄1 ⇒ ((A = C) ∧ (B = C2))) as a CNF-formula with O(n) clauses of constant size. Now assume that each configuration consists of n Boolean variables and that initially k = n. (In reality they are both polynomial in n, but this is of no importance.) Let us also note that GET(Y, Z, 0) can be expressed by a CNF-formula with O(n) clauses of constant size. Furthermore, note that the variables describing C1 and C2 do not appear in GET(A, B, k − 1). When we iterate the above construction, it will be true that no variable in any quantifier is used inside more than 3 other quantifiers. To summarize the discussion, the formula has the following properties:

• It has 3n quantifiers, which appear in blocks of the form ∃∀∃, where the ∃ quantifiers quantify over n and 2n variables respectively and the ∀ quantifies over a single variable.

• All formulas between quantifiers and after the last quantifier are CNF-formulas with O(n) clauses of constant size.

• Each variable is used only inside at most one block of following quantifiers.

Now take this formula and replace all ∃ by Σ and all ∀ by Π, where the sums and products extend over all variables that were originally in the scope of the quantifier. Also replace ∧ by × and ∨ by +. Finally, for a negated variable, replace x̄ by 1 − x.
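As a small illustration of the replacement rules on the CNF pieces (the representation of literals as (variable, negated) pairs is a choice made for this sketch):

    def clause_value(literals, assignment):
        # Arithmetize one CNF clause: ∨ becomes +, and a negated
        # variable x̄ becomes 1 - x.
        return sum(1 - assignment[v] if neg else assignment[v]
                   for v, neg in literals)

    def cnf_value(clauses, assignment):
        # ∧ becomes ×: the arithmetized CNF-formula is the product of
        # its arithmetized clauses.
        value = 1
        for clause in clauses:
            value *= clause_value(clause, assignment)
        return value

    # For example, the clause (x1 ∨ x̄4) becomes x1 + (1 - x4):
    assert clause_value([(1, False), (4, True)], {1: 0, 4: 0}) == 1

Note that the arithmetized clause is positive exactly when the clause is satisfied, so the product over all clauses is nonzero exactly when the whole CNF-formula is true.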

Using this replacement, the formula is turned into an expression which evaluates to an integer I. It is not difficult to see that this integer is 0 iff the original formula was false (prove this by induction). We will show how the prover can convince the verifier, with high probability, that this integer is not 0.

First observe that I is bounded by 2^{O(n2^n)}. This is true since the value of the final CNF-formula is at most c^n, each Σ multiplies the value by at most 2^n, while each Π only squares the value (remember that there is only one variable in each Π). The following lemma follows from the prime number theorem (the reader is asked to take it on faith).

Lemma 13.6 For c < 1 and x > Xc, the product of all primes less than x is at least e^{cx}, where e ≈ 2.718 is the base of the natural logarithm.

This lemma implies that there is some prime p, n^4 ≤ p ≤ 2^{O(n)}, such that I ≢ 0 modulo p. To see this, observe that if I > 0 and it is divisible by a set of primes, then it is at least the product of those primes. The prover starts by giving this p together with I (mod p) (which is not 0).

Remark 13.7 In fact, if one is more careful, one can make I = 1 when the formula is true. This implies that one can use a small prime and makes the proof slightly more efficient, but this is of no major concern for the moment.

Now consider the outermost quantified variable. Let us call it x1 and suppose it is part of an ∃ quantifier (i.e. we are summing over its two values). Keep this variable free and evaluate the entire expression, with its sums and products, modulo p. Naturally the result is a polynomial P(x1), and by the conditions on the formula it is of degree O(n). Here we need both that the intermediate pieces of the formula are simple CNF-formulas and that the usage of each variable is very limited. The prover now gives this formal polynomial (mod p) to V. This can be done since there are O(n) coefficients, each of which can be specified with O(n) bits. The verifier verifies that P(0) + P(1) ≡ I (mod p) and responds with a random integer n1, chosen randomly among 1, 2, . . . , p − 1. The task for the prover is now to prove that P(n1) is the value of the algebraic expression when n1 is substituted for x1. The resulting algebraic expression has one quantified variable less, and we can now attack the next variable. Once all the variables have been eliminated, the verifier can evaluate the remaining expression himself; if it equals the value claimed by the prover he accepts, and otherwise he rejects.

Note that there is really no difference between the ∀-variables and the ∃-variables (for a ∀-variable the verifier instead checks that P(0)·P(1) agrees with the claimed value); we only need the assumption on the structure of the formula to make the degree of the polynomial P small.

Let us sketch why this protocol is correct. When the formula is true there are really no complications, since the prover is all the time claiming correct statements, and thus the verifier will accept with probability 1. Suppose on the other hand that the formula is false. In particular I = 0, so the first value claimed by the prover for I (mod p) is incorrect, and hence also the first polynomial P is not correct (since it takes an incorrect value at either 0 or 1). Suppose the true polynomial is Q. Let us say that n1 is lucky for the prover if P(n1) ≡ Q(n1) (mod p). Since P − Q is a nonzero polynomial of degree O(n), it has at most O(n) zeroes. This implies that the probability that the prover is lucky at a single point is O(n/p) ≤ O(n^{-3}). If the prover is lucky once, then from that point on he can claim correct statements, and thus he will be able to convince the verifier. On the other hand, if he is never lucky, then he is forced to continue lying and the verifier will expose him in the end. Since there are only O(n^2) variables, the probability that he is ever lucky is O(n^{-1}). Thus with probability 1 − O(n^{-1}) the verifier will reject, and the protocol is correct.

To give a little bit of perspective on this proof, let us give an example to show how it works.

Example 13.8 For simplicity, let us work with a formula on normal TQBF-CNF form, and in particular consider

    ∃x1 ∀x2 ∃x3 ∀x4 (x1 ∨ x2 ∨ x3) ∧ (x̄1 ∨ x̄4).

This formula is true, since if we put x1 = 0 and x3 = 1 both clauses are satisfied; it does not matter what happens with the other variables. The formula is turned into the following arithmetical expression:

    Σ_{x1=0}^{1} Π_{x2=0}^{1} Σ_{x3=0}^{1} Π_{x4=0}^{1} (x1 + x2 + x3)(2 − x1 − x4).

This is just an integer (in fact 20), and a proof would go like the following.

1. The prover chooses the prime 7 (in reality it should be larger, but we are only trying to illustrate the procedure).

2. He claims that the expression is 6 modulo 7, and in fact that

    Π_{x2=0}^{1} Σ_{x3=0}^{1} Π_{x4=0}^{1} (x1 + x2 + x3)(2 − x1 − x4)

as a function of x1 is

    P1(x1) = (2x1^2 + 2x1 + 1)(2x1^2 + 6x1 + 5)(1 − x1)^2 (2 − x1)^2.

(Normally the prover represents these polynomials in a dense representation, but the factored form is more convenient for hand calculation.)

3. The verifier checks that P1(0) + P1(1) ≡ 6 modulo 7 (in the future we reduce everything modulo 7 without saying so). Indeed P1(0) = 20 ≡ 6, while P1(1) = 0. He now chooses a random value for x1 (in our case x1 = 3) and wants to be convinced that

    Π_{x2=0}^{1} Σ_{x3=0}^{1} Π_{x4=0}^{1} (3 + x2 + x3)(6 − x4) ≡ P1(3) = 25 × 41 × 4 × 1 ≡ 5.

4. The prover now claims that

    Σ_{x3=0}^{1} Π_{x4=0}^{1} (3 + x2 + x3)(6 − x4)

as a function of x2 is P2(x2) = 1 + 4x2^2. The verifier checks that P2(0)P2(1) ≡ 5, randomly chooses x2 = 5, and asks to be convinced that

    Σ_{x3=0}^{1} Π_{x4=0}^{1} (1 + x3)(6 − x4) ≡ P2(5) ≡ 3.

5. The prover claims that

    Π_{x4=0}^{1} (1 + x3)(6 − x4)

as a function of x3 is P3(x3) = 2 + 4x3 + 2x3^2.

6. The verifier checks that P3(0) + P3(1) ≡ 3 and then randomly chooses x3 = 2 and wants to be convinced that

    Π_{x4=0}^{1} 3(6 − x4) ≡ P3(2) ≡ 4.

This he can do by himself, and he accepts the input since 18 × 15 is indeed 4 modulo 7.
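The whole transcript can be checked mechanically. A small sketch replaying the numbers above (the prime 7 and the challenge points 3, 5, 2 are those of the example):

    P = 7  # the example's prime; in the real protocol p >= n^4

    def expr(x1):
        # Π_{x2} Σ_{x3} Π_{x4} (x1 + x2 + x3)(2 - x1 - x4), over the integers
        result = 1
        for x2 in (0, 1):
            s = 0
            for x3 in (0, 1):
                prod = 1
                for x4 in (0, 1):
                    prod *= (x1 + x2 + x3) * (2 - x1 - x4)
                s += prod
            result *= s
        return result

    I = expr(0) + expr(1)                  # the outermost Σ over x1
    assert I == 20 and I % P == 6          # step 2: claimed value 6 mod 7

    P1 = lambda x: (2*x*x + 2*x + 1) * (2*x*x + 6*x + 5) * (1 - x)**2 * (2 - x)**2
    assert (P1(0) + P1(1)) % P == 6        # step 3: sum check, x1 is an ∃-variable
    assert expr(3) % P == P1(3) % P == 5   # challenge x1 = 3

    P2 = lambda x: 1 + 4*x*x
    assert (P2(0) * P2(1)) % P == 5        # step 4: product check, x2 is a ∀-variable
    P3 = lambda x: 2 + 4*x + 2*x*x
    assert (P3(0) + P3(1)) % P == P2(5) % P == 3   # steps 5-6: challenge x2 = 5
    assert (3*6) * (3*5) % P == P3(2) % P == 4     # final check at x3 = 2: 18 × 15 ≡ 4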

As mentioned before, this proof does not relativize (the inclusion IP ⊆ PSPACE does relativize, but the second part of the proof does not). It is not difficult to construct an oracle A such that IP^A ⊊ PSPACE^A. The reason the proof does not relativize is that if we allow oracle questions, then the condition "C1 is the configuration that follows C2" cannot be described by a low degree polynomial. This proof which does not relativize gives some hope of attacking the NP vs P question. However, it is still true that no non-relativizing strict inclusion has been proved for any complexity class that includes NC^1.
