
Complexity Theory

– Теорія складності обчислень (Theory of Computational Complexity) –

Lviv Polytechnic National University

Winter Term 2014

November 17 - 21 and November 24 - 28

Room 107, Time: 12:00 – 16:00

40 hours = 5 ECTS

Prof. Dr. Klaus Wagner - University of Würzburg (Germany)

Complexity theory started in the sixties of the 20th century and has provided deep insights into the nature of computation.

Contents

1 Computability and Decidability 3

1.1 History of Computability . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

1.2 Words as Inputs to Algorithms . . . . . . . . . . . . . . . . . . . . . . . . 4

1.3 Computable Functions and Decidable Sets . . . . . . . . . . . . . . . . . . 5

1.4 Main Theorem and Church’s Thesis . . . . . . . . . . . . . . . . . . . . . . 6

1.5 Turing Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

1.6 The Halting Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

2 Complexity of Computations 12

2.1 Relating Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

2.2 Complexity Bounded Computations . . . . . . . . . . . . . . . . . . . . . . 13

2.3 The Equivalence of Different Types of Algorithms . . . . . . . . . . . . . . 15

2.4 Can Problems Be Arbitrarily Complex? . . . . . . . . . . . . . . . . . . . . 16

3 Complexity Classes 17

3.1 Time and Space Complexity Classes . . . . . . . . . . . . . . . . . . . . . . 17

3.2 Hierarchies of Complexity Classes . . . . . . . . . . . . . . . . . . . . . . . 19

3.3 Polynomially Fuzzy Classes . . . . . . . . . . . . . . . . . . . . . . . . . . 20

3.4 On Time Versus Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

4.1 The Classes P and FP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

4.2 Graph Problems in P . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

4.3 The class NP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

4.4 The P-NP Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

5.1 Polynomial Time Reducibility . . . . . . . . . . . . . . . . . . . . . . . . . 33

5.2 Completeness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

5.3 Complete Problems for Special Classes . . . . . . . . . . . . . . . . . . . . 37

8 Appendix 3: Exercises 46

In these notes, definitions, examples, theorems/propositions/corollaries, proofs, and important statements are set off in their own blocks.


1 Computability and Decidability

The subject of this lecture is the complexity of computations and the complexity of objects

(functions, sets) computed by such computations. Hence, we have to start with the notion

of computation.

Roughly speaking, a computation is the application of an algorithm to a given input. The word “algorithm” comes from the Persian mathematician al-Khwarizmi of the ninth century, who wrote books on computing with Indian numerals and on solving algebraic equations.

But how to define the notion of algorithm?

An algorithm is an exact instruction on how a given system of rules has to be applied

step-by-step to a given input.

– An instruction on how to find the lecture room is an algorithm.

– A recipe for preparing borshch is an algorithm.

However, we will restrict ourselves to algorithms which are applied to formal objects like

numbers, texts, pictures etc.

Mathematically exact definitions of the notion of algorithm.

Why do we need a mathematical definition?

• As long as people only write algorithms to solve problems, no mathematical defini-

tion is necessary.

• However, if we want to prove that there is no algorithm to solve a given problem,

then we need a mathematical definition of the notion “algorithm”.

Until the beginning of the 20th century mathematicians believed: If one has a precise

definition of a problem then there exists an algorithm to decide this problem. But then

some important problems came up for which no algorithm could be found, e.g.:

• Hilbert’s 10th Problem.

Does there exist an algorithm which, for each polynomial p(x1, x2, . . . , xn) with integer coefficients, tells us whether the equation p(x1, x2, . . . , xn) = 0 has integer solutions?

• Mathematical Theories.

For a mathematical theory (such as arithmetic, geometry, etc.), does there exist an

algorithm which tells us whether a given statement is true in this theory?

• Halting Problem.

Does there exist an algorithm which, for every algorithm A and every input x,

tells us whether A will stop when running on x?


Later people discussed the possibility that such algorithms do not exist. To prove that, an exact mathematical definition of “algorithm” was necessary. Beginning in 1936 (before the first computers!) dozens of different such definitions were made. Surprise: all these definitions turned out to be equivalent!

This strongly supports the famous Church’s thesis (1936), which says that these definitions of “algorithm” describe exactly what the very nature of “algorithm” is. We will make this precise in what follows.

On this basis, it was shown that there are no algorithms to solve Hilbert’s 10th problem, the theory of arithmetic, or the halting problem.

This theory of computability is one of the most important scientific achievements of the 20th century.

1.2 Words as Inputs to Algorithms

Algorithms can process only inputs that are presented in a digitized form. Thus inputs must be described by strings of symbols.

Definition. Let Σ = {a1, a2, . . . , an} be a finite, nonempty set.

• Σ is called an alphabet.

• a1, a2, . . . , an are called the symbols or letters of this alphabet.

• {a, b, c, . . . , z}

• {0, 1}

• {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}

• {§, ℵ, &, $, %, ♥, ∞}

• A string c1 c2 . . . cm of letters ci ∈ Σ is called a word over Σ.

• |c1 c2 . . . cm| =def m is called the length of the word c1 c2 . . . cm.

• The empty word ε is the only word which consists of no letters. |ε| = 0.

• Σ∗ is the set of all words over Σ.

• x^k =def x x . . . x (k times), for x ∈ Σ and k ≥ 1.

Example: {a, b}∗ = {ε, a, b, aa, ab, ba, bb, aaa, aab, aba, abb, baa, bab, bba, bbb, . . .}.
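To make this concrete, here is a small Python sketch (the function name `words` is ours, not from the text) that enumerates all words over an alphabet up to a given length, shortest first:

```python
from itertools import product

def words(alphabet, max_len):
    """All words over `alphabet` of length at most `max_len`,
    in order of increasing length; the empty word eps comes first."""
    result = [""]  # the empty word, |eps| = 0
    for k in range(1, max_len + 1):
        result.extend("".join(w) for w in product(alphabet, repeat=k))
    return result

print(words("ab", 2))  # ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']
```

Note that Σ∗ itself is infinite; the sketch can only list a finite prefix of it.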


Natural numbers as inputs are given as their binary descriptions. For the definition of

the binary description of natural numbers we need the following

Proposition 1.3 Every natural number n ≥ 1 can be written in exactly one way as

n = am·2^m + am−1·2^(m−1) + · · · + a2·2^2 + a1·2^1 + a0·2^0   ( = Σ_{i=0}^{m} ai·2^i )

where am = 1 and a0, a1, a2, . . . , am−1 ∈ {0, 1}.

Proof. Omitted.

Definition. – bin(0) =def 0.

– For n ≥ 1, let n = Σ_{i=0}^{m} ai·2^i such that am = 1 and a0, a1, . . . , am−1 ∈ {0, 1}. Then bin(n) =def am am−1 . . . a1 a0.

Thus, bin : N → {0, 1}∗.

2. bin(n) = 11 . . . 1 (m digits)  =⇒  n = Σ_{i=0}^{m−1} 1·2^i = Σ_{i=0}^{m−1} 2^i = (2^m − 1)/(2 − 1) = 2^m − 1.
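The definition of bin can be transcribed directly; a small sketch (the name `bin_repr` is ours):

```python
def bin_repr(n):
    """The binary description bin(n) from the definition above:
    bin(0) = "0"; for n >= 1 the digits a_m ... a_1 a_0 with a_m = 1."""
    if n == 0:
        return "0"
    digits = ""
    while n > 0:
        digits = str(n % 2) + digits  # peel off a_0, a_1, ... from the right
        n //= 2
    return digits

print(bin_repr(13))        # 1101
print(bin_repr(2**5 - 1))  # 11111 -- the number 2^m - 1 has m one-digits
```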

1.3 Computable Functions and Decidable Sets

We start with a very general definition of computability and decidability which applies to every notion of algorithm. For such an algorithm M the following must be defined:

• How is an input x ∈ (Σ∗)n (for some alphabet Σ and n ≥ 0) presented to M?

• What does it mean that the algorithm produces a result?

• rM(x) =def the result of M on x, if M on input x produces a result;
  rM(x) is not defined, if M on input x does not produce a result.

If we have that, we can define:

• An algorithm M computes a function f ⇔def rM = f.

• An algorithm M decides a set A ⊆ (Σ∗)n ⇔def rM = cA, where cA is the characteristic function of A (cA(x) = 1 if x ∈ A, and cA(x) = 0 otherwise).


1.4 Main Theorem and Church’s Thesis

• Turing machines (human-based model, to be defined in the next subsection)

• Register machines (very simple machine-based model like an assembly language)

Using registers R0, R1, R2, R3, . . . , they have instructions like Ri ← k,

Ri ← Rj + Rk, Ri ← Rj − Rk, and IF (Ri = 0) GOTO k, for i, j, k ∈ N.

• Partial recursive functions (algebraic model, generating from very simple functions

more complicated functions using the operators of substitution and minimization)

• C++

• Java

• ...

It turns out that these definitions of the notion of algorithm are equivalent. More precisely:

Theorem 1.5 (Main Theorem of algorithm theory) Let A and B be two different notions of algorithm from the list above.

1. A function ϕ can be computed by an algorithm of type A

if and only if ϕ can be computed by an algorithm of type B

2. A set S can be decided by an algorithm of type A

if and only if S can be decided by an algorithm of type B

Proof idea. One simulates algorithms of one type by algorithms of the other type. These very elaborate simulations are not given here. Statement 2 is an immediate consequence of Statement 1.

Church’s Thesis (1936): Every function which is computable in an intuitive sense can also be computed by a Turing machine.

Because of this, we are allowed to write algorithms informally as pseudocode, and we can be sure that there is a Turing machine computing the same function.

Example 1.6 The following pseudocode algorithm computes the number of primes in {2, 3, . . . , n}. (Here m : r denotes integer division, i.e., the largest integer not greater than m/r.)

input integer n
k := 0;
for m := 2 to n do
    r := m − 1; s := 1;
    while r ≥ 2 do
        if (m : r) ∗ r = m then s := 0;
        r := r − 1;
    k := k + s;
output k;

(See also Exercise 5.)
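The pseudocode above can be transcribed into Python almost verbatim (the name `count_primes` is ours):

```python
def count_primes(n):
    """Direct transcription of the pseudocode above; m // r is the
    integer division written m : r in the text."""
    k = 0
    for m in range(2, n + 1):
        r = m - 1
        s = 1
        while r >= 2:
            if (m // r) * r == m:  # some r with 2 <= r < m divides m
                s = 0
            r = r - 1
        k = k + s  # s stayed 1 exactly when m is prime
    return k

print(count_primes(20))  # the primes up to 20 are 2,3,5,7,11,13,17,19 -> 8
```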


1.5 Turing Machines

The British mathematician Alan Turing developed in 1936 a model simulating the activity of a person who executes an algorithm. Such a person writes symbols on paper and modifies them, following some fixed rules. In the model,

• the paper is represented by a tape with infinitely many cells with symbols, and

• the rules are located in a finite control unit which interacts with the tape.

[Figure: a finite control unit with a read/write head on an infinite tape of symbols, e.g. . . . b a b b b a a b a b b a a b . . .]

finite control unit: – at each moment it is in one state of a finite set S of states;
– there are a special starting state and a special stopping state;
– a program controls the activities.

tape: – has infinitely many cells;
– in each cell there is a symbol from a finite alphabet Σ;
– there is a special emptiness symbol □ (the blank).

head: – reads the symbol in the current cell;
– can change the symbol and move to a neighboring cell.

A Turing machine (TM) works step-by-step. In one step it does the following: depending on

– the current state s and
– the symbol a read by the head,

the TM

– changes the current state s into a state s′ (s′ = s is possible),
– changes the read symbol a into a symbol a′ (a′ = a is possible), and
– moves the head one cell left (L), one cell right (R), or does not move (O).

This behavior is described by the instruction sa → s′a′τ where τ ∈ {L, R, O}.

For every situation (state, symbol) there is exactly one such instruction.

The set of these instructions is the program of the TM.


Example 1.7 A TM that adds 1 to an arbitrary natural number (given in binary presentation). Start and stop with the head on the leftmost digit.

s0 – starting state, s1 – stopping state

[Tape: . . . □ 1 0 1 1 0 □ . . .]

instructions          comments
s0 1 → s0 1 R         s0 – start, move to the right
s0 0 → s0 0 R
s0 □ → s3 □ L         right end of the number reached
s3 1 → s3 0 L         s3 – carry 1, move to the left
s3 0 → s2 1 L
s3 □ → s1 1 O         carry past the left end: write a leading 1, stop
s2 1 → s2 1 L         s2 – carry 0, move to the left
s2 0 → s2 0 L
s2 □ → s1 □ R         left end reached, stop on the leftmost digit
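An instruction table like this can be executed mechanically. Here is a minimal Python TM simulator (our own helper, not part of the lecture) running the increment machine above, with "_" standing in for the blank □:

```python
def run_tm(program, tape, blank="_", start="s0", stop="s1"):
    """Simulate a one-tape TM.  `program` maps (state, symbol) to
    (new_state, new_symbol, move) with move in {"L", "R", "O"}."""
    cells = {i: c for i, c in enumerate(tape)}
    state, head = start, 0
    while state != stop:
        symbol = cells.get(head, blank)
        state, cells[head], move = program[(state, symbol)]
        head += {"L": -1, "R": 1, "O": 0}[move]
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# The program of Example 1.7:
inc = {
    ("s0", "1"): ("s0", "1", "R"), ("s0", "0"): ("s0", "0", "R"),
    ("s0", "_"): ("s3", "_", "L"),  # right end reached
    ("s3", "1"): ("s3", "0", "L"), ("s3", "0"): ("s2", "1", "L"),
    ("s3", "_"): ("s1", "1", "O"),  # carry past the left end: leading 1
    ("s2", "1"): ("s2", "1", "L"), ("s2", "0"): ("s2", "0", "L"),
    ("s2", "_"): ("s1", "_", "R"),
}
print(run_tm(inc, "1011"))  # 1011 + 1 = 1100
```

The helper returns the tape content with surrounding blanks stripped; the formal definition instead requires the head to stop on the leftmost result digit.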

For a word x = a1 a2 . . . an, the word xR =def an . . . a2 a1 is called the reversal or the mirror of a1 a2 . . . an. A word x ∈ Σ∗ is called symmetric or a palindrome if xR = x.

English: madam, racecar, rotator, Anna, Hannah

A man, a plan, a canal: Panama.

Did Anna see bees? Anna did.

Ukrainian: о гомін німого

козак з казок

кіт утік

три психи пили Пилипихи спирт

Let PAL =def {w : w ∈ {a, b}∗ ∧ w = wR} be the set of palindromes in {a, b}∗.

Example 1.9 A TM M deciding the set PAL. Start and stop with the head on the leftmost symbol. If x is symmetric then M stops with only 1 on the tape. If x is not symmetric then M stops with only 0 on the tape.

Idea of the TM: Compare the leftmost symbol with the rightmost one and erase both of them. Go back to the leftmost symbol and repeat as long as the leftmost and the rightmost symbol coincide or the word is empty.

Note that a TM can “memorize” something by going into a special state.

s0 – starting state, s1 – stopping state


instructions          comments
s0 a → sa □ R         sa – memorize a (leftmost symbol erased), move right
s0 b → sb □ R         sb – memorize b (leftmost symbol erased), move right
s0 □ → s1 1 O         stop for a symmetric word of even length
sa a → sa a R
sa b → sa b R
sa □ → ra □ L         ra – right end of word, go one step to the left, memorized a
sb a → sb a R
sb b → sb b R
sb □ → rb □ L         rb – right end of word, go one step to the left, memorized b
ra a → s2 □ L         s2 – test positive, back to left end (rightmost symbol erased)
ra b → s3 □ L         s3 – test negative, back to left end and erase word
ra □ → s1 1 O         stop for a symmetric word of odd length
rb b → s2 □ L
rb a → s3 □ L
rb □ → s1 1 O         stop for a symmetric word of odd length
s2 a → s2 a L
s2 b → s2 b L
s2 □ → s0 □ R         left end of the word, start next round
s3 a → s3 □ L
s3 b → s3 □ L
s3 □ → s1 0 O         stop for a non-symmetric word
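The idea behind the table — compare the two ends, erase, repeat — can be sketched in a few lines of Python (an illustration of the strategy, not a simulation of the machine itself; the name `pal` is ours):

```python
from collections import deque

def pal(word):
    """Decide PAL by the TM's strategy: compare and erase the outermost
    pair of symbols until a mismatch is found or the word is used up."""
    w = deque(word)
    while len(w) >= 2:
        if w.popleft() != w.pop():  # leftmost and rightmost differ
            return 0                # the TM writes 0 and stops
    return 1                        # empty or single-letter rest: write 1

print([pal(w) for w in ["abba", "aba", "ab", ""]])  # [1, 1, 0, 1]
```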

Definition. Let M be a TM with

– the set S of states with s0, s1 ∈ S (s0 – starting state, s1 – stopping state),

– the set Σ of tape symbols with □ ∈ Σ (□ – emptiness symbol).

Let # ∈ Σ, and let Π, ∆ be alphabets with Π ∪ ∆ ⊆ Σ − {□, #}.

Let ϕ : (Π∗ )n → ∆∗ be a function.

If for every x1, x2, . . . , xn ∈ Π∗ the following holds, then we say that the TM M computes the function ϕ:

• Let M start with tape content . . . □ x1#x2# . . . #xn □ . . . , in state s0 and with the head on the leftmost symbol of x1.

• If ϕ(x1, x2, . . . , xn) is not defined then M does not stop.

• If ϕ(x1, x2, . . . , xn) is defined then M stops with the tape content . . . □ ϕ(x1, x2, . . . , xn) □ . . . , in state s1 and with the head on the leftmost symbol of ϕ(x1, x2, . . . , xn).

Example 1.10 1. Let S(x) =def x + 1. Using the binary representation of natural

numbers, we have shown in Example 1.7 that S can be computed by a TM.

2. Example 1.9 shows that the set PAL of palindromes can be decided by a TM.

Turing machines are a very simple concept of “algorithm” and “computability”, much simpler than programming languages like Java. However, the Main Theorem of algorithm theory (Theorem 1.5) says that every function computed by a Java program can also be computed by a Turing machine. Moreover, by Church’s Thesis, every function computed in an intuitive sense can also be computed by a Turing machine. The big advantage of this concept is that, using Turing machines, it is much easier to prove certain statements about computability.


1.6 The Halting Problem

Is every set M of words decidable? For theoretical reasons (using cardinality arguments) this is not the case. In a sense, the overwhelming majority of all sets is not decidable. However, these arguments do not tell us which given sets are decidable or not.

What about some given, easy to define sets? We will show that the famous halting problem

is not decidable.

We consider all TMs with tape alphabet {0, 1, □}. The program of such a TM M can easily be encoded by a word code(M) ∈ {0, 1}∗.

Definition. (halting problem) K =def {(code(M), x) | the TM M stops on input x}.

One can also define the halting problem using Java programs instead of TMs. The results are the same. It surely would be interesting to have an algorithm telling us whether a given program stops on a given input. But unfortunately:

Theorem 1.11 The halting problem K is not decidable.

Proof. Consider the special halting problem

K0 =def {code(M) | the TM M stops on input code(M)}.

Obviously: code(M ) ∈ K0 ⇔ (code(M ), code(M )) ∈ K.

Thus, a decision algorithm for K yields also a decision algorithm for K0 .

This means: If K is decidable then K0 is decidable.

Contraposition of this statement: If K0 is not decidable then K is not decidable.

So, to prove the theorem, it is sufficient to prove that K0 is not decidable.

We prove that by contradiction. Assume, on the contrary, that K0 is decidable. That means that its characteristic function cK0 is computable:

cK0(x) = 1, if x ∈ K0;   cK0(x) = 0, if x ∉ K0   (for every x ∈ {0, 1}∗).

Then also the function ϕ is computable which is defined as

ϕ(x) =def not defined, if x ∈ K0;   ϕ(x) =def 0, if x ∉ K0   (for every x ∈ {0, 1}∗).

(Simply modify an algorithm for cK0 as follows: If it stops with result 1 then go into

an infinite loop. This new algorithm computes ϕ.)

Let M0 be a TM computing ϕ. We observe:

code(M0) ∉ K0 ⇔ M0 does not stop on input code(M0)   (definition of K0)
⇔ ϕ(code(M0)) is not defined   (since M0 computes ϕ)
⇔ code(M0) ∈ K0   (definition of ϕ)

This is a contradiction. Hence our assumption “K0 is decidable” is wrong.
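The self-reference in this proof can be played out in code. A hedged Python sketch, assuming a hypothetical halting decider `halts(prog)` (all names here are ours): whatever answer a candidate decider gives about the “contrary” program, that answer is wrong. Below we refute one concrete candidate; the proof shows every candidate fails the same way.

```python
def make_contrary(halts):
    """Given a claimed halting decider `halts`, build a program that
    does the opposite of what the decider predicts about it."""
    def contrary():
        if halts(contrary):
            while True:      # decider says "halts" -> loop forever
                pass
        return "halted"      # decider says "loops" -> halt at once
    return contrary

def pessimist(prog):         # a (wrong) candidate decider: always answers "loops"
    return False

c = make_contrary(pessimist)
print(pessimist(c), c())     # the decider claims c loops, yet c halts
```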

The very reason for this contradiction is the fact that we mix two different levels of

objects. Generally, assume there are two levels of objects, and the objects of the first

level (the TMs) can do something with the objects of the second level (the inputs). If


we identify these levels then an object can do something with itself. This may produce a

contradiction. Another example for this phenomenon:

Example 1.12 The barber paradox. Assume there is only one barber in town and every man in town is shaved, either by himself or by this barber. It seems to be correct to define the job of the barber as follows:

The barber is the man who shaves all those, and only those, men in town who do not shave themselves.

Or, a little more formalized:

For every man M in town there holds:

the barber shaves M ⇐⇒ M does not shave himself.

Now, the question arises: Who shaves the barber?

Applying the above definition to the barber we can conclude:

the barber shaves himself ⇐⇒ the barber does not shave himself

A contradiction. Consequently, the above definition of a barber was not a good idea.

The contradiction was produced by the fact that we mixed the levels of “the shaver”

and “the being shaved”.


2 Complexity of Computations

2.1 Relating Functions

Definition. Let f, g : N → N be functions.

f ≤ g ⇔def f(n) ≤ g(n) for all n ≥ 0

f ≤ae g ⇔def there exists an n0 ≥ 1 such that f(n) ≤ g(n) for all n ≥ n0

f ∈ O(g) ⇔def there exists a constant k ≥ 1 such that f ≤ae k·g

f ∈ Pol(g) ⇔def there exists a constant k ≥ 1 such that f ≤ae g^k

f ∈ o(g) ⇔def lim_{n→∞} f(n)/g(n) = 0

Proposition 2.1 1. If f ≤ g and g ≤ h then f ≤ h.

2. If f ≤ae g and g ≤ae h then f ≤ae h.

3. If f ∈ O(g) and g ∈ O(h) then f ∈ O(h).

4. If f ∈ Pol(g) and g ∈ Pol(h) then f ∈ Pol(h).

5. If f ∈ o(g) and g ∈ o(h) then f ∈ o(h).

1. If f ≤ae g then f ≤ g+k for some k ≥ 1.

2. If f ≤ g then f ≤ae g.

3. If f ∈ o(g) then f ≤ae g.

4. If f ≤ae g then f ∈ O(g).

5. If f ∈ O(g) then f ∈ Pol(g).

6. If f(n) = Σ_{i=0}^{m} ai·g(n)^i with m, a0, a1, . . . , am ∈ N then f ∈ Pol(g).

Proof. We prove only Statement 6. Because of

f(n) = Σ_{i=0}^{m} (ai·g(n)^i) ≤ a0 + Σ_{i=1}^{m} (ai·g(n)^i) ≤ n + (Σ_{i=1}^{m} ai)·g(n)^m ≤ (1 + Σ_{i=1}^{m} ai)·g(n)^m

for all n ≥ a0, we have f ∈ O(g^m). By Statement 5 we obtain f ∈ Pol(g^m). Hence there exists a k ≥ 1 such that f ≤ae (g^m)^k = g^(m·k), which means f ∈ Pol(g).


2.2 Complexity Bounded Computations

Let M be an algorithm that computes a total function f : (Σ∗)m → ∆∗. We define the functions tM : (Σ∗)m → N and sM : (Σ∗)m → N on inputs:

tM(x1, . . . , xm) =def “running time” of M on input (x1, . . . , xm),

sM(x1, . . . , xm) =def “memory used” by M on input (x1, . . . , xm),

and the functions tM : N → N and sM : N → N on input lengths:

tM(n) =def max{tM(x1, . . . , xm) | |x1 x2 . . . xm| = n}

sM(n) =def max{sM(x1, . . . , xm) | |x1 x2 . . . xm| = n}

The exact meaning of “running time” and “used memory” has to be defined for every

notion of algorithms.

As already mentioned, a natural number n is given in binary representation bin(n). We set |n| =def |bin(n)| and get 2^(|n|−1) ≤ n < 2^|n| and log2 n < |n| ≤ log2 n + 1, for n ≥ 1 (see Exercise 9).

Definition. (time and space complexity for TMs) Let M be a TM. We define

tM(x1, . . . , xm) =def number of steps of M on input (x1, . . . , xm),

sM(x1, . . . , xm) =def number of tape cells used by M on input (x1, . . . , xm).

Notice that an input of length n already uses n tape cells. Hence sM(n) ≥ n for every TM M. For the time complexity we can have tM(n) < n, but for TMs this means that the head cannot see every letter of the input. We will not consider this case.

Example 2.3 Consider the TM M deciding the set PAL of palindromes from Example 1.9. For symmetric words the head sweeps back and forth over a word that shrinks by one symbol per sweep. Hence, for an arbitrary word x ∈ {a, b}∗ we have:

tM(x) ≤ |x| + Σ_{i=1}^{|x|} i + 1 = |x| + 1/2·|x|·(|x|+1) + 1 = 1/2·|x|^2 + 3/2·|x| + 1,

with equality for symmetric words. Thus M decides PAL in time 1/2·n^2 + 3/2·n + 1, i.e., in O(n^2).
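The closed form for this step count can be double-checked numerically (a quick sketch; the name `steps_bound` is ours):

```python
def steps_bound(n):
    """|x| + sum_{i=1}^{|x|} i + 1, the step count for a word of length n."""
    return n + sum(range(1, n + 1)) + 1

# ... equals 1/2 n^2 + 3/2 n + 1 for every n:
for n in range(100):
    assert steps_bound(n) == n * n / 2 + 3 * n / 2 + 1

print(steps_bound(10))  # 66 steps for a symmetric word of length 10
```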

For each of the different notions of “algorithm” mentioned in Section 1.4 (partial recursive functions, register machines, C++, Java) and many others one can define “running time” and “used memory” in an appropriate way. We also informally estimate the running time of informal algorithms (pseudocode). Here we assume that the arithmetic operations of addition and subtraction cost O(n) steps, and the arithmetic operations of multiplication and division cost O(n^2) steps.

Definition. 1. Let t(n), s(n) ≥ n be functions, let M be an algorithm,

let f : (Σ∗ )m → ∆∗ be a function, and let A ⊆ (Σ∗ )m be a set.

M computes f in time t ⇔def M computes f and tM ≤ae t.

M computes f in time O(t) ⇔def M computes f and tM ∈ O(t).

M computes f in time Pol(t) ⇔def M computes f and tM ∈ Pol(t).

M computes f in space s ⇔def M computes f and sM ≤ae s.

M computes f in space O(s) ⇔def M computes f and sM ∈ O(s).

M computes f in space Pol(s) ⇔def M computes f and sM ∈ Pol(s).

2. The same definitions are made for “M decides A” instead of “M computes f ”.

2. The arithmetic functions sum, sub, mul and div can be computed by TMs in time O(n^2). (See Exercises 13 and 14.)

3. The pseudocode algorithm in Example 1.6 (see also Exercise 5) runs in time O(n^2·2^(2n)).

It is not hard to see that PRIME can be decided in time O(n^2·2^(n/2)). (Exercise 15.)

In 2002 the Indian mathematicians Agrawal, Kayal and Saxena proved that PRIME can be decided in time O(n^12), which was later improved to O(n^6).

Primality testing is important for some cryptosystems (RSA, PGP). However, in these systems much faster probabilistic primality testing algorithms are used, which are based on pseudorandom numbers.
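A trial-division decider along these lines can be sketched in Python (the exercise itself is not reproduced here; testing divisors only up to √m is the standard trick behind the 2^(n/2) factor):

```python
def is_prime(m):
    """Trial division up to sqrt(m).  For an n-bit input m this performs
    about 2^(n/2) divisions of n-bit numbers, which is where the
    O(n^2 * 2^(n/2)) bound for PRIME comes from."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

print([m for m in range(2, 30) if is_prime(m)])  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```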


2.3 The Equivalence of Different Types of Algorithms

When we consider a fixed problem, how do the running times and the used memory of algorithms of different types deciding this problem differ? For example, if a Java program decides a problem in time t, which time does a TM need to decide this problem? Or similarly: If a TM simulates a given Java program, how much more time does the TM need? It turns out that the running times of the different types of algorithms mentioned in Section 1.4 are polynomially related and that the used memories of the different types of algorithms are linearly related.

Theorem 2.6 Consider algorithms of type A and type B from the list in Section 1.4.

1. There exists a constant k ≥ 1 such that: If an algorithm M of type A computes a given function (decides a given set) then there exists an algorithm M′ of type B which computes the same function (decides the same set) such that tM′ ≤ae (tM)^k.

2. There exists a constant k ≥ 1 such that: If an algorithm M of type A computes a given function (decides a given set) then there exists an algorithm M′ of type B which computes the same function (decides the same set) such that sM′ ≤ae k·sM.

The following corollary is the time resp. space bounded version of Theorem 1.5 (Main

Theorem of algorithm theory).

Corollary 2.7 1. The following statements are equivalent for a function f (a problem A):

– f can be computed (A can be decided) by a Turing machine in time Pol(t).

– f can be computed (A can be decided) by a register machine in time Pol(t).

– f can be computed (A can be decided) by a C++ program in time Pol(t).

– f can be computed (A can be decided) by a Java program in time Pol(t).

2. The following statements are equivalent for a function f (a problem A):

– f can be computed (A can be decided) by a Turing machine in space O(s).

– f can be computed (A can be decided) by a register machine in space O(s).

– f can be computed (A can be decided) by a C++ program in space O(s).

– f can be computed (A can be decided) by a Java program in space O(s).

Proof. 1. If an algorithm M of type A computes the function f in time Pol(t) then tM ∈ Pol(t). Theorem 2.6.1 yields an algorithm M′ of type B such that tM′ ∈ Pol(tM). By Proposition 2.1.4 we get tM′ ∈ Pol(t).

Statement 2 can be proven in the same manner using Proposition 2.1.3.

Because deciding a set is nothing else than computing its characteristic function, the

same is true for the decision of sets.


Corollary 2.7 says that the type of algorithm does not matter for decidability in time Pol(t) or in space O(s). Hence we simply say that a function is computable in time Pol(t) or in space O(s), resp., and that a problem is decidable in time Pol(t) or in space O(s), without mentioning any type of algorithm.

2.4 Can Problems Be Arbitrarily Complex?

Can problems be arbitrarily complex? The following theorem gives an affirmative answer for time and space complexity.

Theorem 2.8 1. For every computable function t(n) ≥ n there exists a decidable

problem A such that no TM can decide A in time t.

2. For every computable function s(n) ≥ n there exists a decidable problem A such

that no TM can decide A in space s.

This theorem is a complexity analog of Theorem 1.11. There we proved that there are

problems which are not decidable. Here we prove that there are decidable problems which

cannot be decided within a given time or space bound. Also the proof is similar. We use

a modification of the special halting problem.

Proof. 1. For a computable function t(n) ≥ n, define the modified halting problem

Kt =def {z | z = code(M) for some TM M and M stops on input z with result 1 and tM(z) ≤ t(|z|)}.

Because t is computable, the problem Kt is decidable (compute t(|z|) and simulate M on z for at most t(|z|) steps). Hence also its complement K̄t is decidable.

Assume that there exists a TM M which decides K̄t in time t. Hence there exists an n0 ≥ 1 such that tM(n) ≤ t(n) for n ≥ n0.

We modify M to get a TM M′ by adding redundant states and instructions such that |code(M′)| ≥ n0. Hence M′ also decides K̄t and tM′(n) = tM(n) ≤ t(n) for n ≥ n0. We obtain:

code(M′) ∈ K̄t ⇔ M′ stops on code(M′) with 1   (M′ decides K̄t)
⇔ M′ stops on code(M′) with 1 and tM′(code(M′)) ≤ t(|code(M′)|)   (|code(M′)| ≥ n0)
⇔ code(M′) ∈ Kt   (definition of Kt)

This is obviously a contradiction. Hence no TM decides A =def K̄t in time t.

2. The space result can be proven similarly.


3 Complexity Classes

It would be really nice if for every decidable problem there were a “best” algorithm to solve it. For example, for time complexity this would mean for every problem A: There exists an algorithm M which decides A and for every algorithm M′ deciding A there holds tM ≤ae tM′.

Unfortunately, this is not true. On the contrary: For every algorithm M which decides a problem A with tM(n) ≥ 2n there exists another algorithm M′ deciding A with tM′(n) ≤ae 1/2·tM(n). The same is true for space. (∗)

It is even worse: There are decidable problems A such that for every algorithm M deciding A there exists another algorithm M′ deciding A with tM′(n) ≤ae log2(tM(n)).

So, we cannot attach to every problem a minimum complexity. Hence, we go the other

way around. We fix a function t(n) ≥ n and look which problems can be decided in time

O(t) (we use O(t) instead of t because of (∗)).

TIME(t) =def {A | problem A can be decided by a TM in time O(t)}.

SPACE(s) =def {A | problem A can be decided by a TM in space O(s)}.

Because of Corollary 2.7.2 the space classes SPACE(s) do not really depend on a given

type of algorithms, contrary to the time classes TIME(t).

Theorem 3.1 The smallest time complexity class, TIME(n), is exactly the class of regular languages (see page 38).

Proof. Omitted.

Example 3.2 PAL is not in TIME(n). This can be shown by using Theorem 3.1 and proving that PAL is not regular.


Every time complexity class is included in the space complexity class with the same bound:

1. TIME(t) ⊆ SPACE(t) for every t(n) ≥ n.

2. TIME(n) ⊂ SPACE(n).

Proof. 1. Let A ∈ TIME(t). There exist k, n0 ≥ 1 and a TM M deciding A that runs on an input of length n ≥ n0 for at most k·t(n) steps. Hence it can visit at most k·t(n) tape cells. Adding the number of cells for the input, we obtain sM(n) ≤ k·t(n)+n ≤ (k+1)·t(n) for n ≥ n0. So M works within space O(t) and we have A ∈ SPACE(t).

2. We get TIME(n) ⊆ SPACE(n) from Statement 1. The set PAL of palindromes is not in TIME(n), see Example 3.2. On the other hand, the TM in Example 2.3 decides PAL in space n+1 ≤ae 2n ∈ O(n), hence PAL ∈ SPACE(n).

The time and space complexity classes are closed under the set operations of union, intersection and complement:

1. If A, B ∈ TIME(t) then A ∪ B, A ∩ B, Ā ∈ TIME(t).

2. If A, B ∈ SPACE(s) then A ∪ B, A ∩ B, Ā ∈ SPACE(s).

An obvious consequence of the fact that complexity classes are defined with O:

1. Let t(n), t′(n) ≥ n. If t′ ∈ O(t) then TIME(t′) ⊆ TIME(t).

2. Let s(n), s′(n) ≥ n. If s′ ∈ O(s) then SPACE(s′) ⊆ SPACE(s).

Proof. 1. Let A ∈ TIME(t′), i.e., there is a TM M deciding A with tM ∈ O(t′). From this and t′ ∈ O(t) we conclude tM ∈ O(t) using Proposition 2.1.3. Hence A ∈ TIME(t). The proof of Statement 2 is completely analogous.


3.2 Hierarchies of Complexity Classes

In Theorem 2.8 we have seen that problems can have arbitrarily large time complexities.

What does this mean for complexity classes?

Theorem 3.6 For every computable function t(n) ≥ n there exists a computable function t′ ≥ t such that TIME(t) ⊂ TIME(t′).

Proof. By Theorem 2.8 there exists a decidable problem A such that there is no TM M that decides A with tM ≤ae t^2.

Assume that A ∈ TIME(t). Then there exist a TM M deciding A and a k ≥ 1 such that tM(n) ≤ae k·t(n) ≤ae n·t(n) ≤ae t(n)^2. Consequently, tM ≤ae t^2, a contradiction. Hence A ∉ TIME(t).

Since A is decidable there exists a TM M deciding A. Obviously, A ∈ TIME(tM). Define the function t′(n) =def max{tM(n), t(n)}. Then t′ is computable, t′ ≥ tM and t′ ≥ t. Hence, TIME(t) ⊆ TIME(t′) and A ∈ TIME(t′), so TIME(t) ⊂ TIME(t′).

In fact, the jump from a time complexity class to a larger one is rather tight. For such

a tight result we need bounding functions that are “not hard to compute”. We restrict

ourselves to monotonic functions.

Definition. 1. The function t is a time function if it can be computed in time O(t(2^n)).

2. The function s is a space function if it can be computed in space O(s(2^n)).

The demand on a function to be a time (space) function is not very strong. So many

functions fulfill this requirement.

Theorem 3.7 1. For every computable function f(n) ≥ n there exists a time (space) function g ≥ f.

2. If f and g are time (space) functions then f·g is a time (space) function.

3. n^k, 2^n : n, 2^(kn), 2^(kn)·n, 2^(n^k) (for every k ≥ 1) are time and space functions.

Proof. We prove only Statement 3 for the function n^2. We have to prove that n^2 can be computed in time (2^n)^2 = 2^(2n). In fact, we can do it even in time n^2 (see Example 2.4.2).


Theorem 3.8 (Hierarchy Theorem)

1. Let t(n) ≥ n^2 and t′(n) ≥ n. If t is a time function and t′·log2(t′) ∈ o(t) then TIME(t′) ⊂ TIME(t).

2. Let s(n), s′(n) ≥ n. If s is a space function and s′ ∈ o(s) then SPACE(s′) ⊂ SPACE(s).

Proof. The proof follows the idea of the proof of Theorem 2.8, but it is much more

technical.

Corollary 3.9
1. TIME(n) ⊂ TIME(n²) ⊂ TIME(n³) ⊂ . . .
2. SPACE(n) ⊂ SPACE(n²) ⊂ SPACE(n³) ⊂ . . .
3. ⋃_{k=1}^∞ TIME(n^k) ⊂ TIME(2^n)
4. ⋃_{k=1}^∞ SPACE(n^k) ⊂ SPACE(2^n)

Proof. 1. Let k ≥ 1. By Theorem 3.7.3 the function n^{k+1} is a time function. By Theorem 3.8.1 we obtain TIME(n^k) ⊂ TIME(n^{k+1}).
3. Let k ≥ 1. Because of n^k ∈ o(2^{n/2}) we have n^k ∈ O(2^{n/2}) and hence TIME(n^k) ⊆ TIME(2^{n/2}). Consequently, ⋃_{k=1}^∞ TIME(n^k) ⊆ TIME(2^{n/2}). By Theorem 3.7.3 the function 2^n is a time function. Since 2^{n/2}·log₂(2^{n/2}) ∈ o(2^n) we get TIME(2^{n/2}) ⊂ TIME(2^n) by Theorem 3.8.1.

The statements 2. and 4. are proven in a similar way as 1. and 3., resp.

Theorem 2.7 says that the time and space complexities of algorithms of different type for

the same problem are polynomially related (where space complexities are even linearly

related). Thus it makes sense to define “polynomially fuzzy” complexity classes which do

not depend on a given type of algorithms.

Definition.
TIME(Pol(t)) =def {A | problem A can be decided in time Pol(t)}
SPACE(Pol(s)) =def {A | problem A can be decided in space Pol(s)}
P =def TIME(Pol(n))
PSPACE =def SPACE(Pol(n))

Because of their definition with Pol these classes are the union of infinitely many classes.

20

S

Proposition 3.10
1. TIME(Pol(t)) = ⋃_{k=1}^∞ TIME(t^k) for all t(n) ≥ n.
2. SPACE(Pol(s)) = ⋃_{k=1}^∞ SPACE(s^k) for all s(n) ≥ n.
3. P = ⋃_{k=1}^∞ TIME(n^k).
4. PSPACE = ⋃_{k=1}^∞ SPACE(n^k).

Proof. We conclude:
A ∈ TIME(Pol(t))
⇔ A can be decided in time Pol(t)
⇔ there exists a TM M that decides A in time Pol(t)
⇔ there exists a TM M that decides A and tM ∈ Pol(t)
⇔ there exist a TM M that decides A and a k ≥ 1 such that tM ≤ae t^k
⇔ there exists a k ≥ 1 such that A ∈ TIME(t^k)
⇔ A ∈ ⋃_{k=1}^∞ TIME(t^k)
This proves Statement 1; Statement 2 is analogous. The statements 3 and 4 are the special cases t(n) = n of statements 1 and 2, resp.

2. P ⊆ PSPACE

Definition.
EXP =def ⋃_{k=1}^∞ TIME(2^{n^k})
EXPSPACE =def ⋃_{k=1}^∞ SPACE(2^{n^k})

Theorem 3.12
1. P ⊂ EXP.
2. PSPACE ⊂ EXPSPACE.
3. EXP ⊆ EXPSPACE.

Proof. 1. By Corollary 3.9.3,
P = ⋃_{k=1}^∞ TIME(n^k) ⊂ TIME(2^n) ⊆ ⋃_{k=1}^∞ TIME(2^{n^k}) = EXP.
Statement 2 is proven analogously using Corollary 3.9.4.
Statement 3 is an immediate consequence of Proposition 3.3.1.

21

3.4 On Time Versus Space

In Proposition 3.3 we have seen that time classes are included in the space classes with the

same bound, i.e., TIME(t) ⊆ SPACE(t) for every t(n) ≥ n. But what about including

space classes in time classes? Here we have an exponential blow-up of the bound.

Theorem 3.13 SPACE(s) ⊆ TIME(Pol(2^s)) for every s(n) ≥ n.

Proof. Let A ∈ SPACE(s). Then there exist a TM M deciding A and k, n₀ ≥ 1 such that sM(n) ≤ k·s(n) for all n ≥ n₀.
Let Σ be the alphabet and S be the set of states of M. Set m =def #Σ and b =def #S. For an input x such that |x| ≥ n₀ we consider the global state of M in a given moment. A global state of M is described by
• the content C ∈ Σ∗ of the tape, where |C| ≤ k·s(|x|),
• the head position h ∈ {1, 2, . . . , k·s(|x|)}, and
• the internal state z ∈ S.
We call K = (C, h, z) the configuration of M in the given moment. We emphasize

that such a configuration completely determines the further work of the TM M . If

two different moments of the work of M on input x have the same configuration then

there is a cycle in the work of M , and consequently M does not stop. Since M

decides the set A it does stop on every input. Hence the running time of M on input

x is bounded by the number of different configurations M can have during its work

on x:

tM(x) ≤ number of different configurations of M on x
= #(different tape contents) · #(different head positions) · #(different states)
= m^{k·s(|x|)} · k·s(|x|) · b
= b · 2^{log₂(m)·k·s(|x|) + log₂(k·s(|x|))}
≤ b · 2^{log₂(m)·k·s(|x|) + k·s(|x|)}
= b · 2^{c·s(|x|)} for some c ≥ 1
Consequently, tM(n) = max{tM(x) | |x| = n} ≤ b·2^{c·s(n)} = b·(2^{s(n)})^c for all n ≥ n₀, and hence A ∈ TIME((2^s)^c) ⊆ TIME(Pol(2^s)).

Theorem 3.14 PSPACE ⊆ EXP.

Proof. We conclude
PSPACE =def SPACE(Pol(n)) = ⋃_{k=1}^∞ SPACE(n^k)    (3.10.4)
⊆ ⋃_{k=1}^∞ TIME(Pol(2^{n^k}))    (3.13)
= ⋃_{k=1}^∞ ⋃_{m=1}^∞ TIME((2^{n^k})^m)    (3.10.1)
= ⋃_{k=1}^∞ ⋃_{m=1}^∞ TIME(2^{m·n^k})
⊆ ⋃_{k=1}^∞ ⋃_{m=1}^∞ TIME(2^{n^{k+1}})    (3.5.1 and 2^{m·n^k} ∈ O(2^{n^{k+1}}))
= ⋃_{k=1}^∞ TIME(2^{n^{k+1}})
⊆ ⋃_{k=1}^∞ TIME(2^{n^k}) =def EXP

22

Between our favorite complexity classes we have the following inclusion-chain:

Proposition 3.15 P ⊆ PSPACE ⊆ EXP ⊆ EXPSPACE.

At least some of these inclusions must be proper:

Proposition 3.16
1. P ⊂ PSPACE or PSPACE ⊂ EXP.
2. PSPACE ⊂ EXP or EXP ⊂ EXPSPACE.
We emphasize that the wording in the proposition is ". . . or . . ." and not "either . . . or . . .".

Proof. 1. If both inclusions were equalities then P = EXP, contradicting P ⊂ EXP (Theorem 3.12.1).
2. This follows from PSPACE ⊆ EXP ⊆ EXPSPACE (Proposition 3.15) and PSPACE ⊂ EXPSPACE (Theorem 3.12.2).

Finally let us mention that equalities between complexity classes translate upwards. As an example we prove:

Theorem 3.17 If P = PSPACE then EXP = EXPSPACE.

Proof. For a set A ⊆ Σ∗ we choose an a ∉ Σ and define B_A^k =def {x·a^{2^{|x|^k} − |x|} | x ∈ A} for every k ≥ 1. This padding of the input from length |x| to length 2^{|x|^k} reduces the complexity because complexity is measured in the length of the input. One can prove (Exercise 23):
(a) If A ∈ SPACE(2^{n^k}) then B_A^k ∈ SPACE(n).
(b) If B_A^k ∈ P then A ∈ TIME(2^{n^{k+1}}).
Now assume P = PSPACE. For an A ∈ EXPSPACE there exists a k ≥ 1 such that A ∈ SPACE(2^{n^k}). By (a) we obtain B_A^k ∈ SPACE(n) ⊆ PSPACE = P. Using (b) we get A ∈ TIME(2^{n^{k+1}}) ⊆ EXP. Hence EXPSPACE ⊆ EXP, and with EXP ⊆ EXPSPACE (Theorem 3.12.3) we obtain EXP = EXPSPACE.

23

4 Polynomial Time Computability

The smallest time bound we consider is given by the identity function I(n) =def n. On page 20 we defined the class P as the class of problems which can be decided in time Pol(n). We define an analogous class for functions.

Definition. FP =def {f | the function f can be computed in time Pol(n)}

Because of Corollary 2.7.1 the classes FP and P are independent of the choice of the type of algorithm. To prove that a function is in FP (a problem is in P) it is sufficient to find a polynomial time algorithm of any type for that function (problem).

The classes FP and P are of great practical interest because every function which can be

computed (every problem which can be solved) by a computer in a reasonable amount of

time is in FP (P, resp.). This becomes clear in the light of the following table, in which we compare the running times of algorithms working in polynomial time with those not working in polynomial time. The algorithms are assumed to be implemented on a computer with 200,000 MIPS, i.e., one that executes 2·10^11 instructions per second.

running time
                            n = 20         n = 40         n = 60         n = 100
polynomial      n        ≈ 10^−10 sec   ≈ 10^−10 sec   ≈ 10^−9 sec    ≈ 10^−9 sec
                n^3      ≈ 10^−8 sec    ≈ 10^−7 sec    ≈ 10^−6 sec    ≈ 10^−5 sec
exponential     2^n      ≈ 10^−5 sec    ≈ 5 sec        ≈ 58 days      ≈ 10^11 years

(The universe exists only ≈ 1.4·10^10 years.)

Faster computers can improve the data of the table only by a constant factor.
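The orders of magnitude in the table can be reproduced with a few lines of arithmetic; the machine speed of 2·10^11 instructions per second and the sample input lengths n = 40 and n = 100 are the assumptions here:

```python
# Rough running times on a machine executing 2e11 instructions per second
# (the 200,000 MIPS computer assumed for the table).
SPEED = 2e11  # instructions per second

def seconds(steps):
    return steps / SPEED

assert seconds(40**3) < 1e-5                # an n^3 algorithm is comfortably fast
assert 1 < seconds(2**40) < 10              # a 2^n algorithm takes ~5 seconds at n = 40
years = seconds(2**100) / (3600 * 24 * 365)
assert years > 1e10                         # ... and exceeds the age of the universe at n = 100
```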

By definition every function f ∈ FP can be computed by a TM M such that tM(x) ≤ae |x|^k for some k ≥ 1. We show that also the length of f(x) can be bounded in such a way.

Proposition 4.2
For every f ∈ FP there exist k, n₀ ≥ 1 and a TM M computing f such that tM(x) ≤ |x|^k and |f(x)| ≤ |x|^k for all x such that |x| ≥ n₀.

Proof. Since f ∈ FP there exist a TM M computing f and m, n₀ ≥ 1 such that tM(n) ≤ n^m for all n ≥ n₀. Since M can print at most one output symbol per step and since the input can be part of the output we get for all x such that |x| ≥ n₀: |f(x)| ≤ |x|^m + |x| ≤ 2·|x|^m ≤ |x|^{m+1}. Set k = m + 1.

24

Example 4.3 (polynomial time computable functions and polynomial time decid-

able problems)

1. sum, sub, mul, div ∈ FP.

2. p ∈ FP for every polynomial p(x) = Σ_{i=0}^m a_i·x^i (m, a₀, a₁, . . . , a_m ∈ N)
(follows from Statement 1)

3. {(x, y, z) | x, y, z ∈ N and x2 + y 2 = z 2 } ∈ P.

4. exp ∉ FP (|exp(x)| ≈ 2^|x| is too long).
But: {(x, y, z) : x, y, z ∈ N and x^y = z} ∈ P.

5. PRIME =def {x : x ∈ N and x is a prime number} ∈ P. (See Example 2.5.)

6. It is not known whether SF ∈ FP, where SF(n) =def the smallest factor k ≥ 2 of n. (This is related to the prime number decomposition of natural numbers. Note that the security of cryptographic systems like RSA and PGP rests on the fact that no polynomial time algorithm for SF is known, while their usability is due to the fact that PRIME ∈ P.)

7. PAL ∈ P, the palindrome set. (See Example 2.4.1.)

8. Pattern matching problem

PM =def {(m, t) : m, t ∈ {a, b}∗ ∧ there exist x, y such that t = xmy} ∈ P.

A TM can do it in time O(n²); with higher-level programming languages or pseudocode it can be done in time O(n) by implementing the Knuth-Morris-Pratt algorithm.

Theorem 4.4
1. If f, g ∈ FP then f+g, f−g, f·g, f÷g, f◦g ∈ FP.
2. If A, B ∈ P then A ∪ B, A ∩ B, Ā ∈ P.

Proof. 1. Let f, g ∈ FP. By Proposition 4.2 there exist k, n₀ ≥ 1 and TMs M and M′ computing f and g, resp., such that for all x with |x| ≥ n₀:
tM(x) ≤ |x|^k, tM′(x) ≤ |x|^k, |f(x)| ≤ |x|^k, and |g(x)| ≤ |x|^k.
We consider f·g. A new algorithm M″ first simulates M and then M′ to compute f(x) and g(x), resp. Finally M″ computes f(x)·g(x) in time (|f(x)|+|g(x)|)² (Example 2.4). Consequently, for all x such that |x| ≥ max{n₀, 6}:
tM″(x) ≤ tM(x) + tM′(x) + (|f(x)|+|g(x)|)² ≤ 2·|x|^k + (2·|x|^k)² ≤ 6·|x|^{2k} ≤ |x|^{2k+1}.
So we have tM″ ≤ae n^{2k+1} and hence f·g ∈ FP.

Subtraction, multiplication and division are analogous.

For the substitution f◦g let M and M′ be as above. The algorithm M″ computes first g(x) and then f(g(x)). Consequently, for all x such that |x| ≥ max{n₀, 2}:
tM″(x) ≤ tM′(x) + tM(g(x)) ≤ |x|^k + (|x|^k)^k ≤ 2·|x|^{k²} ≤ |x|^{k²+1}.
So we have tM″ ≤ae n^{k²+1} and hence f◦g ∈ FP.

2. For A, B ∈ P = ⋃_{k=1}^∞ TIME(n^k) there exists a k ≥ 1 such that A, B ∈ TIME(n^k). By Proposition 3.4.1 we have A ∪ B, A ∩ B, Ā ∈ TIME(n^k) ⊆ P.

25

4.2 Graph Problems in P

Graph problems are a very interesting and important group of problems, because many

problems in practice can be easily formulated as graph problems; e.g. transport problems

and scheduling problems.

A graph G = (V, E) consists of a set V of vertices and a binary relation E ⊆ V × V

whose elements are called edges. We say that G is an undirected graph if the edges do not

have a direction, i.e., with (u, v) ∈ E we have automatically (v, u) ∈ E (however, in the

description of the graph usually only one of them is listed). Otherwise, G is said to be a

directed graph.

We consider only finite graphs, i.e. graphs with a finite set of vertices. Such graphs can be presented in a natural way as shown in the following example. Note that an edge (u, v) of a directed graph is drawn as an arrow u → v whereas an edge (u, v) of an undirected graph is drawn as a line u — v.

Example 4.5 Consider the undirected graph of the three houses and the three fountains 3H3F =def (V, E) where V = {h1, h2, h3, f1, f2, f3} and
E = {(h1,f1), (h1,f2), (h1,f3), (h2,f1), (h2,f2), (h2,f3), (h3,f1), (h3,f2), (h3,f3)},

which is represented as

h1 h2 h3

f1 f2 f3

Definition. A path in a graph G is a sequence (v1, v2), (v2, v3), . . . , (v_{r−1}, v_r) of edges of G. We say that v1 and vr are connected by this path. The path is called a circuit if v1 = vr. In the graph of Example 4.5 the sequence (h1,f1), (f1,h2), (h2,f2), (f2,h1) is a circuit.

How to present a finite graph as an input to an algorithm?

Let G = (V, E) be a finite graph. Without loss of generality let V = {v1 , v2 , . . . , vm }. The

adjacency matrix of G is defined as the m×m matrix
A_G =def (e_ij)_{1≤i,j≤m}  where  e_ij =def 1 if (v_i, v_j) ∈ E, and e_ij =def 0 otherwise.

The adjacency matrix in its turn can be described by the word

wG = e11 e12 . . . e1m e21 e22 . . . e2m . . .em1 em2 . . . emm ∈ {0, 1}∗

in which form the graph G is given to an algorithm.

26

Example 4.6 Consider the graph of the three houses and the three fountains 3H3F

from Example 4.5. Setting v1 = h1, v2 = h2, v3 = h3, v4 = f 1, v5 = f 2 and v6 = f 3 we

get the adjacency matrix

0 0 0 1 1 1

0 0 0 1 1 1

0 0 0 1 1 1

A3H3F =def

1 1 1 0 0 0

1 1 1 0 0 0

1 1 1 0 0 0

and the input word w3H3F = 000111000111000111111000111000111000
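The encoding wG is easy to compute mechanically. A small sketch (the helper name graph_to_word is our own) reproduces the word w3H3F above:

```python
def graph_to_word(m, edges):
    """Encode an undirected graph with vertices v1..vm as the word wG,
    the row-by-row concatenation of its adjacency matrix."""
    # symmetric closure: for an undirected graph (u, v) ∈ E implies (v, u) ∈ E
    e = set(edges) | {(j, i) for (i, j) in edges}
    return "".join(
        "1" if (i, j) in e else "0"
        for i in range(1, m + 1)
        for j in range(1, m + 1)
    )

# the graph 3H3F with v1..v3 the houses and v4..v6 the fountains
edges_3h3f = [(h, f) for h in (1, 2, 3) for f in (4, 5, 6)]
assert graph_to_word(6, edges_3h3f) == "000111000111000111111000111000111000"
```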

A graph is said to be planar if its representation in the plane can be drawn in such

a way that the edges do not intersect. For example, the graph in Example 4.5 is not

planar. However, if we remove any of its edges, we get a planar graph. We define

PLANAR =def {G : graph G is planar}

One can prove that PLANAR ∈ P, but this is not easy.

An undirected graph G = (V, E) is connected if every pair of vertices is connected by a path. We define
CONNECT =def {G : the graph G is connected}
It is easy to see that CONNECT ∈ P (see Exercise 24).
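A breadth-first search sketch shows one way CONNECT can be decided in polynomial time (this is one possible solution idea for Exercise 24, not the official one):

```python
from collections import deque

def is_connected(m, edges):
    """Decide CONNECT for an undirected graph with vertices 1..m
    by breadth-first search from vertex 1 (clearly polynomial time)."""
    adj = {v: set() for v in range(1, m + 1)}
    for (u, v) in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen = {1}
    queue = deque([1])
    while queue:
        u = queue.popleft()
        for w in adj[u] - seen:
            seen.add(w)
            queue.append(w)
    # connected iff the search reaches every vertex
    return len(seen) == m

assert is_connected(6, [(h, f) for h in (1, 2, 3) for f in (4, 5, 6)])
assert not is_connected(3, [(1, 2)])
```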

Definition (Leonhard Euler, Swiss mathematician, 1707-1783).
An Eulerian circuit of an undirected graph G is a circuit in G which includes every edge exactly once. We define
EULER =def {G : the graph G has an Eulerian circuit}
The graph in Example 4.5 does not have an Eulerian circuit.
(Figure: a graph that does have an Eulerian circuit; follow the edges in alphabetical order.)
The problem EULER is in P. This is not easy to see, but it becomes easy (see Exercise 25) if one knows the following theorem from graph theory (Euler 1736).

A graph has an Eulerian circuit if and only if it is connected and every vertex is incident to an even number of edges.
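Euler's criterion immediately yields a polynomial time test. The following sketch (our own helper, in the spirit of Exercise 25) checks connectivity and even degrees:

```python
def has_eulerian_circuit(m, edges):
    """Euler's criterion: the graph with vertices 1..m is connected
    and every vertex has even degree."""
    adj = {v: set() for v in range(1, m + 1)}
    for (u, v) in edges:
        adj[u].add(v)
        adj[v].add(u)
    # connectivity via depth-first search from vertex 1
    seen, stack = {1}, [1]
    while stack:
        u = stack.pop()
        for w in adj[u] - seen:
            seen.add(w)
            stack.append(w)
    return len(seen) == m and all(len(adj[v]) % 2 == 0 for v in adj)

# 3H3F is connected but every vertex has odd degree 3: no Eulerian circuit
assert not has_eulerian_circuit(6, [(h, f) for h in (1, 2, 3) for f in (4, 5, 6)])
# a 4-cycle has one: every degree is 2
assert has_eulerian_circuit(4, [(1, 2), (2, 3), (3, 4), (4, 1)])
```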

27

Example 4.10 Graph coloring.
An undirected graph G = ({1, 2, . . . , m}, E) has a k-coloring ⇔def
there exist c1, c2, . . . , cm ∈ {1, . . . , k} such that ci ≠ cj for all (i, j) ∈ E;
i.e., every vertex gets one of the colors 1, 2, . . . , k such that two vertices have different colors when connected by an edge. For example, the graph in Example 4.5 has a 2-coloring, and the following graph has a 3-coloring but not a 2-coloring. We define
k-COLOR =def {G : the graph G has a k-coloring}
It is easy to see that 2-COLOR ∈ P (see Exercise 26), but it is not known whether k-COLOR ∈ P for k ≥ 3.
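One way to see 2-COLOR ∈ P is the standard bipartiteness test: color greedily and report failure when an odd cycle forces two equal colors on an edge. A sketch (one possible approach to Exercise 26):

```python
def has_2_coloring(m, edges):
    """Decide 2-COLOR for the graph ({1, ..., m}, E): a graph has a
    2-coloring iff it is bipartite, which greedy coloring detects."""
    adj = {v: set() for v in range(1, m + 1)}
    for (u, v) in edges:
        adj[u].add(v)
        adj[v].add(u)
    color = {}
    for start in range(1, m + 1):       # handle every component
        if start in color:
            continue
        color[start] = 1
        stack = [start]
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in color:
                    color[w] = 3 - color[u]   # the other color
                    stack.append(w)
                elif color[w] == color[u]:
                    return False              # odd cycle: no 2-coloring
    return True

assert has_2_coloring(6, [(h, f) for h in (1, 2, 3) for f in (4, 5, 6)])  # 3H3F
assert not has_2_coloring(3, [(1, 2), (2, 3), (1, 3)])                    # triangle
```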

In this context the famous four color problem should be mentioned. This is the problem

of whether every planar graph has a 4-coloring. An equivalent formulation of the problem

is: For any given map of geographical regions, can the regions be colored with four colors

in such a way that adjacent regions have different colors?

The figures show a 4-coloring of the regions of Ukraine as a map and as a graph. Each of

the regions corresponds to a vertex of the graph. If two regions are adjacent (neighboring)

then the corresponding vertices of the graph are connected by an edge. This results in a

planar graph.

The four color problem was a long-standing open problem. In 1976 it was answered in the affirmative by the American mathematicians Appel and Haken.

28

4.3 The class NP

There are many practically relevant problems which are not known to be in P; i.e., for

which we do not know algorithms solving them in a reasonable amount of time. Many of

these problems are so-called polynomial search problems.

Definition. A is a polynomial search problem if there exist a k ≥ 1 and a set B ∈ P such that x ∈ A ⇔ ∃y(|y| ≤ |x|^k + k ∧ (x, y) ∈ B) for every x.
– x is in A ⇔ x has a solution y
– y is a solution for x ⇔ (x, y) ∈ B (B is the solution space)
– B ∈ P means: one can easily check whether y is a solution for x.
– |y| ≤ |x|^k + k means: we consider only solutions which are not too long.
Such problems can be hard to solve because one may have to check, for an exponential number (about 2^{|x|^k + k}) of different y, whether (x, y) ∈ B.

Let us consider some examples.

Example 4.11 The graph coloring problem k-COLOR for k ≥ 2 (see Example 4.10).

For a graph G = ({1, 2, . . . , m}, E) we can write:

G has a k-coloring ⇔ ∃c(c is a k-coloring for G) ⇔ ∃c((G, c) ∈ B)

where B =def {(G, c) | c is a k-coloring of G}.

Using the colors 1, 2, . . . , k, a k-coloring assigns to each vertex i ∈ {1, 2, . . . , m} a color ci ∈ {1, . . . , k} such that (i, j) ∈ E implies ci ≠ cj. Hence we can write
B = {({1, . . . , m}, E, (c1, . . . , cm)) | c1, . . . , cm ∈ {1, . . . , k} ∧ ∀i∀j((i, j) ∈ E → ci ≠ cj)}
It is easy to see that B ∈ P. Because we choose only one color ci for every vertex i there holds |(c1, . . . , cm)| ≤ |G|. Thus we can write equivalently
G has a k-coloring ⇔ ∃c(|c| ≤ |G| ∧ (G, c) ∈ B).

Consequently, k-COLOR is a polynomial search problem. In Example 4.10 we have

seen that 2-COLOR is in P which is not known for k ≥ 3.
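The polynomial time check "(G, c) ∈ B" can be sketched directly from the definition of B; the function name is ours:

```python
def is_k_coloring(m, edges, k, c):
    """The polynomial-time test "(G, c) ∈ B": is c = (c1, ..., cm)
    a k-coloring of the graph ({1, ..., m}, E)?"""
    if len(c) != m or any(ci not in range(1, k + 1) for ci in c):
        return False
    # every edge must join differently colored vertices
    return all(c[i - 1] != c[j - 1] for (i, j) in edges)

# 3H3F has the 2-coloring that colors the houses 1 and the fountains 2
edges_3h3f = [(h, f) for h in (1, 2, 3) for f in (4, 5, 6)]
assert is_k_coloring(6, edges_3h3f, 2, (1, 1, 1, 2, 2, 2))
assert not is_k_coloring(6, edges_3h3f, 2, (1, 1, 1, 1, 2, 2))  # edge (1,4) violated
```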

29

Example 4.12 The Hamiltonian circuit problem

(William Rowan Hamilton, Irish mathematician, 1805-1865).

A Hamiltonian circuit of an undirected graph G is a circuit which includes every

vertex exactly once.

HAMILTONIAN CIRCUIT =def {G : the graph G has a Hamiltonian circuit}

This problem seems to be similar to the Eulerian circuit problem. However, the

latter problem is in P which is not known for the Hamiltonian circuit problem.

As for the graph coloring problem one can prove that the Hamiltonian circuit prob-

lem is a polynomial search problem. (See Exercise 27.)

Example 4.13 The sum of subset problem. The problem is defined as the set of all

pairs (K, b) where K is a finite subset of N and b ∈ N such that b is the sum of

a subset of K:

SOS =def {(K, b) | K ⊂ N is finite ∧ b ∈ N ∧ ∃L(L ⊆ K ∧ b = Σ_{a∈L} a)}
or, equivalently (set ci = cL(ai)),
SOS =def {(a1, . . . , am, b) : m, a1, . . . , am, b ∈ N ∧ ∃(c1, . . . , cm ∈ {0, 1})(b = Σ_{i=1}^m ci·ai)}.

Using this form it is easy to see that SOS is a polynomial search problem (see Exer-

cise 28).
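The exponential search behind the definition can be made explicit: the following sketch decides SOS by trying all 2^m certificates (c1, . . . , cm), which is exactly the brute force the text warns about, even though each single certificate is checked in polynomial time:

```python
from itertools import product

def sos(a, b):
    """Decide (a1, ..., am, b) ∈ SOS by exhaustive search over all
    2^m certificates (c1, ..., cm) ∈ {0, 1}^m."""
    return any(
        sum(ci * ai for ci, ai in zip(c, a)) == b
        for c in product((0, 1), repeat=len(a))
    )

assert sos([3, 5, 7], 12)       # 5 + 7 = 12
assert not sos([3, 5, 7], 11)   # no subset of {3, 5, 7} sums to 11
```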

Definition. NP =def {A | there exist a k ≥ 1 and a set B ∈ P such that for all x

x ∈ A ⇔ ∃y(|y| ≤ |x|k +k ∧ (x, y) ∈ B) }

"NP" is an acronym for nondeterministic polynomial time. This comes from the fact that the NP problems can be characterized as exactly those problems which can be solved by nondeterministic algorithms in polynomial time. However, we will not consider this type of algorithm (a special form of parallel algorithm) here.

30

Proposition 4.14

k-COLOR (k ≥ 3), HAMILTONIAN CIRCUIT, and SOS are in NP.

Theorem 4.15 If A, B ∈ NP then A ∪ B and A ∩ B are in NP.

Theorem 4.16 P ⊆ NP

Proof. Let A ∈ P and define B =def {(x, x) | x ∈ A}. Obviously B ∈ P and
x ∈ A ⇔ (x, x) ∈ B ⇔ ∃z(|z| ≤ |x| ∧ (x, z) ∈ B).
Hence A ∈ NP.

Many problems of practical interest are in NP; e.g., cost optimization problems, trans-

port problems, storage problems, scheduling problems, and computer operation problems.

Hence, it is of great interest whether the inclusion in Theorem 4.16 is an equality or not.

This is just the P-NP problem, the most famous and intensively attacked problem of

Theoretical Computer Science:

P-NP Problem.

What is correct: P ⊂ NP or P = NP?

Commonly believed: P ⊂ NP.

Most important unsolved problem of Theoretical Computer Science.

There is a $ 1,000,000 prize for solving this problem!

Many practical and theoretical consequences.

In Theorem 4.15 we have seen that NP is closed under union and intersection. But what

about complementation? This is another famous open problem.

NP Complement Problem:

Is NP closed under complement? I.e.: does A ∈ NP imply A ∈ NP?

Commonly believed: No.

Since P is closed under complement (see Theorem 4.4.2) the answer P = NP to the P-NP

problem would give an affirmative answer to the NP complement problem.

31

There is a third important open problem. It is connected with the relationship between the classes NP and PSPACE. From Corollary 3.11 we know P ⊆ PSPACE. This can be strengthened:

Theorem 4.17 NP ⊆ PSPACE

Proof. Let A ∈ NP. Then there exist a k ≥ 1 and a set B ∈ P such that
x ∈ A ⇔ ∃y(|y| ≤ |x|^k + k ∧ (x, y) ∈ B) for all x.
Because of P ⊆ PSPACE (Corollary 3.11.2) there exist m, n₀ ≥ 1 and an algorithm M deciding B such that sM(x, y) ≤ |xy|^m for all x, y such that |xy| ≥ n₀.

This suggests the following algorithm M 0 deciding A:

input: x

s := 0;

for all y such that |y| ≤ |x|k +k do

apply M to (x, y)

if the result is 1 then s := 1

output: s

The algorithm needs memory for x, for y, and for the computation of M on (x, y).

Thus we obtain for x such that |x| ≥ max{n₀, k, 3+3^m}:
sM′(x) ≤ max_{|y| ≤ |x|^k + k} (|x| + |y| + sM(x, y))
≤ max_{|y| ≤ |x|^k + k} (|x| + |y| + |xy|^m)
≤ |x| + |x|^k + k + (|x| + |x|^k + k)^m
≤ 3·|x|^k + (3·|x|^k)^m
≤ (3 + 3^m)·|x|^{km}
≤ |x|^{km+1}

So we have sM 0 ≤ae nkm+1 and hence A ∈ PSPACE.

NP-PSPACE Problem.

What is correct: NP ⊂ PSPACE or NP = PSPACE?

Commonly believed: NP ⊂ PSPACE.

From Proposition 3.15, Theorem 4.16, and Theorem 4.17 we have the following inclusion-

chain between our favorite complexity classes:

32

5 Polynomial Time Reducibility and Completeness

For a moment, assume P ⊂ NP. In that case it seems that the sets from P are the simplest sets in NP, and the most complicated sets of NP are not in P. But what does "simpler" or "more complicated" mean? Intuitively: a problem A is simpler than or equally simple to a set B if B ∈ C implies A ∈ C for "well-formed" complexity classes C like P, NP, PSPACE, EXP, and EXPSPACE. I.e., A cannot be in a larger complexity class than B. This leads us to the following definition.

• A ≤ B ⇔def there exists an f ∈ FP such that x ∈ A ⇔ f(x) ∈ B for all x ∈ Σ₁∗

(A is reducible to B in polynomial time.)

• A ≡ B ⇔def A ≤ B and B ≤ A

(A and B are polynomial time equivalent.)

Example (KNAPSACK). Given a knapsack of volume V and m items with volumes v1, . . . , vm and with values w1, . . . , wm, can one choose a subset of the items whose overall volume is not larger than V and whose overall value is not smaller than a given value W? More precisely, for v1, . . . , vm, w1, . . . , wm, V, W ∈ N,
(v1, . . . , vm, w1, . . . , wm, V, W) ∈ KNAPSACK ⇔def
there exists I ⊆ {1, . . . , m} such that Σ_{i∈I} vi ≤ V and Σ_{i∈I} wi ≥ W.

We show SOS ≤ KNAPSACK:
(a1, . . . , am, b) ∈ SOS
⇔ there exists I ⊆ {1, . . . , m} such that Σ_{i∈I} ai = b
⇔ there exists I ⊆ {1, . . . , m} such that Σ_{i∈I} ai ≤ b and Σ_{i∈I} ai ≥ b
⇔ (a1, . . . , am, a1, . . . , am, b, b) ∈ KNAPSACK
⇔ f(a1, . . . , am, b) ∈ KNAPSACK
where f(a1, . . . , am, b) =def (a1, . . . , am, a1, . . . , am, b, b).
Because of f ∈ FP there holds SOS ≤ KNAPSACK.
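The reduction f is a one-liner; pairing it with a brute-force KNAPSACK decider (for illustration only, both function names are ours) lets us check the equivalence on small instances:

```python
from itertools import product

def sos_to_knapsack(instance):
    """The reduction f: (a1, ..., am, b) -> (a1, ..., am, a1, ..., am, b, b),
    i.e. volumes = values = the ai, and V = W = b."""
    *a, b = instance
    return tuple(a) + tuple(a) + (b, b)

def knapsack(instance):
    """Exhaustive-search decider for KNAPSACK (exponential time)."""
    n = (len(instance) - 2) // 2
    v, w = instance[:n], instance[n:2 * n]
    V, W = instance[2 * n], instance[2 * n + 1]
    return any(
        sum(c * x for c, x in zip(cs, v)) <= V
        and sum(c * x for c, x in zip(cs, w)) >= W
        for cs in product((0, 1), repeat=n)
    )

# (3, 5, 7, 12) ∈ SOS since 5 + 7 = 12, so its image under f is in KNAPSACK
assert knapsack(sos_to_knapsack((3, 5, 7, 12)))
assert not knapsack(sos_to_knapsack((3, 5, 7, 11)))
```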

Proposition 5.2 For all sets A, B, C:
1. A ≤ A (≤ is reflexive)
2. If A ≤ B and B ≤ C then A ≤ C (≤ is transitive)
3. If A ≤ B then Ā ≤ B̄

33

Proof. 1. x ∈ A ⇔ I(x) ∈ A and I ∈ FP.

2. If A ≤ B then there exists an f ∈ FP such that x ∈ A ⇔ f(x) ∈ B.
If B ≤ C then there exists a g ∈ FP such that x ∈ B ⇔ g(x) ∈ C.
We conclude: x ∈ A ⇔ f(x) ∈ B ⇔ g(f(x)) ∈ C ⇔ (g◦f)(x) ∈ C.
By Theorem 4.4.1 we obtain g◦f ∈ FP.
3. A ≤ B ⇔ there exists an f ∈ FP such that x ∈ A ⇔ f(x) ∈ B
⇔ there exists an f ∈ FP such that x ∈ Ā ⇔ f(x) ∈ B̄
⇔ Ā ≤ B̄

The sets in P are the simplest sets w.r.t. polynomial time reducibility:

Theorem 5.3
1. If A ∈ P and B ∉ {∅, ∅̄} then A ≤ B.
2. If A ∉ P and B ∈ P then A ≰ B.
3. If A, B ∈ P and A, B ∉ {∅, ∅̄} then A ≡ B.
4. If A ≤ ∅ then A = ∅.
5. If A ≤ ∅̄ then A = ∅̄.

(Figure: sets in a lower box can be reduced to sets in a higher box, but not vice versa. The sets from P − {∅, ∅̄} can be reduced to each other.)

Proof. 1. Since B ∉ {∅, ∅̄} we can choose some a ∈ B and b ∉ B, and we define
f(x) =def a if x ∈ A, and f(x) =def b if x ∉ A.
Since A can be decided in polynomial time, f can be computed in polynomial time. Furthermore,
x ∈ A ⇒ f(x) = a ∈ B and x ∉ A ⇒ f(x) = b ∉ B,
hence x ∈ A ⇔ f(x) ∈ B and therefore A ≤ B.

2. Let B ∈ P, which means cB ∈ FP. Assume A ≤ B. Then there exists an f ∈ FP such that cA = cB ◦ f. By Theorem 4.4.1 we get cA ∈ FP. Consequently, A ∈ P, a contradiction. Hence A ≰ B.

Statement 3 is an immediate consequence of Statement 1.

The statements 4 and 5 are obvious.

Now we will see whether the notion ≤ really does the job we discussed at the beginning of this subsection.

Definition. A complexity class C is closed under polynomial time reducibility if for all sets A, B the following holds: If A ≤ B and B ∈ C then A ∈ C.

34

So we hope that the most important complexity classes are closed under polynomial time

reducibility.

Theorem 5.4 For monotonic functions t(n), s(n) ≥ n the classes ⋃_{k=1}^∞ TIME(t(n^k)) and ⋃_{k=1}^∞ SPACE(s(n^k)) are closed under polynomial time reducibility.

Proof. We prove the time case, the space case being analogous.

Let A ≤ B and B ∈ TIME(t(n^k)). There exist m, n₀ ≥ 1 and a TM M deciding B such that tM(n) ≤ m·t(n^k) for all n ≥ n₀.

Further, there is a function f ∈ FP such that x ∈ A ↔ f (x) ∈ B for all x.

By Proposition 4.2 there exist r, n1 ≥ 1 and a TM M 0 computing f such that

tM 0 (x) ≤ |x|r and |f (x)| ≤ |x|r for all x such that |x| ≥ n1 .

The following algorithm M 00 obviously decides A. Let x be the input.

(a) M 00 simulates M 0 on input x and computes in such a way f (x).

(b) M 00 simulates M on input f (x) and tests in such a way whether f (x) ∈ B.

(c) The result of M 00 is given by the result of M .

What is the running time of M 00 on x with |x| ≥ max{n0 , n1 }?

Phase (a) takes |x|^r steps.
Phase (b) takes m·t(|f(x)|^k) ≤ m·t((|x|^r)^k) steps (since t is monotonic).
So we obtain tM″(n) ≤ n^r + m·t((n^r)^k) ≤ (m+1)·t(n^{rk}) for all n ≥ max{n₀, n₁}.
Hence tM″(n) ∈ O(t(n^{rk})) and consequently A ∈ TIME(t(n^{rk})).

Theorem 5.5 The classes P, NP, EXP, PSPACE, and EXPSPACE are closed

under polynomial time reducibility.

S S

Proof. For the classes P = ∞k=1 TIME(nk ), PSPACE = ∞ k=1 SPACE(n ),

k

S∞ S

EXP = k=1 TIME(2n ), and EXPSPACE = ∞

k nk

k=1 SPACE(2 ) this is a

immediate consequence of Theorem 5.4.

The class NP needs a separate treatment, using methods similar to the one in the

proof of Theorem 5.4. We omit this proof.

Finally we mention that complexity classes which are not of the type ⋃_{k=1}^∞ TIME(t(n^k)) or ⋃_{k=1}^∞ SPACE(t(n^k)) are often not closed under polynomial time reducibility. For example, this is true for SPACE(n) and ⋃_{k=1}^∞ TIME(2^{k·n}). (See also Exercise 32.)

5.2 Completeness

Complexity classes are defined by a bounding function, i.e., such a class contains all problems which can be decided with a complexity that does not exceed the bounding function. Hence, also very simple problems are in every complexity class. But what about hardest problems in a complexity class? Do they exist? We need a good definition.

Definition. Let C be a complexity class that is closed under polynomial time reducibility. A problem B ∈ C is C-complete if A ≤ B for every A ∈ C.

35

This means: a C-complete problem is at least as hard to decide as every other problem in C; it belongs to the hardest problems in C. Some properties of C-complete sets:

Proposition 5.6 Let B be C-complete.
1. If A is also C-complete then A ≡ B.
2. If B ≤ C and C ∈ C then C is C-complete.
Statement 2 gives a method for proving completeness once we know some C-complete problem B: to prove that a problem C ∈ C is also C-complete it is sufficient to show B ≤ C.

Proof. 1. A is C-complete and B ∈ C =⇒ B ≤ A.
B is C-complete and A ∈ C =⇒ A ≤ B.
Together: A ≡ B.

2. Let A be an arbitrary set from C. We have to show A ≤ C.

A ∈ C and B is C-complete =⇒ A ≤ B.

A ≤ B and B ≤ C =⇒ A ≤ C (by Proposition 5.2.2).

The following theorem shows that C-complete problems are really the hardest problems in C. Namely, if a C-complete problem is in a subclass of C then all sets from C are in this subclass. Thus, a C-complete problem incorporates the whole complexity of the class C.

Theorem 5.7 Let B be C-complete and let D ⊆ C be closed under polynomial time reducibility. Then B ∈ D if and only if D = C.

Proof. Assume B ∈ D. Let A be an arbitrary set from C.
A ∈ C and B is C-complete =⇒ A ≤ B.
A ≤ B, B ∈ D and D is closed under polynomial time reducibility =⇒ A ∈ D.
Consequently C ⊆ D, and since D ⊆ C we obtain C = D.

Assume C = D. Since B is C-complete we have B ∈ C and consequently B ∈ D.

Corollary.
1. If A is NP-complete then A ∈ P ⇔ P = NP
2. If A is PSPACE-complete then A ∈ NP ⇔ NP = PSPACE
3. If A is EXP-complete then A ∈ PSPACE ⇔ PSPACE = EXP
4. If A is EXPSPACE-complete then A ∈ EXP ⇔ EXP = EXPSPACE

C-completeness of a problem can also imply that this problem cannot be in a smaller complexity class.

Corollary.
1. If A is EXP-complete then A ∉ P.
2. If A is EXPSPACE-complete then A ∉ PSPACE.

36

Proof. 1. Assume A ∈ P. Since A is EXP-complete, and P is closed under

polynomial time reducibility, Theorem 5.7 yields P = EXP. This contradicts

Theorem 3.12.1.

Statement 2 can be proven in the same way.

For complexity classes which are closed under polynomial time reducibility we introduced

the notion of complete problems. However, this does not automatically mean that there

really exist complete problems for these classes. What about the special classes we have

studied?

We start with the class P. From Theorem 5.3 we obtain directly a complete answer.

Corollary.
1. Every problem in P − {∅, ∅̄} is P-complete.
2. The problems ∅ and ∅̄ are not P-complete.

For the other of our favorite classes (and many others) it is not hard to prove the existence of complete sets.

Theorem 5.11 NP, PSPACE, EXP and EXPSPACE have complete problems.

Proof. We give the proof for PSPACE. For the other classes the proofs are similar.

We use the encoding “code” of TMs we already used in Subsection 1.6. Define

U =def {(code(M ), x, z) | TM M stops on x with result 1 using at most |z| cells}.

It is not hard to see that U ∈ SPACE(n2 ) (Exercise 31).

For an A ∈ PSPACE there exists a TM M deciding A such that sM(n) ≤ae n^k for some k ≥ 1. By Proposition 2.2.1 there exists an m ≥ 1 such that sM(n) ≤ n^k + m for all n ≥ 0. Consequently,
x ∈ A ⇔ (code(M), x, 0^{|x|^k + m}) ∈ U ⇔ f(x) ∈ U
where f(x) =def (code(M), x, 0^{|x|^k + m}). Since f ∈ FP, we get A ≤ U.

However, the complete problem in the preceding proof is rather artificial. Are there natural complete problems for the classes NP, PSPACE, EXP and EXPSPACE? Answer: Yes!

Theorem. SOS, HAMILTONIAN CIRCUIT, and k-COLOR for k ≥ 3 are NP-complete.

Proof. Omitted.

More NP-complete problems can be found in Section 6 and in the book by M.R. Garey, D.S. Johnson: Computers and Intractability, A Guide to the Theory of NP-Completeness.

37

For more complete languages we need the notion of regular languages.

Definition.

1. Let Σ be an alphabet, and let L, L′ ⊆ Σ∗.
– L·L′ =def {xy | x ∈ L ∧ y ∈ L′} (concatenation of L and L′)
– L⁰ =def {ε} and L^{k+1} =def L^k·L for k ≥ 0
– L∗ =def ⋃_{k=0}^∞ L^k = {x1x2 . . . xk | k ≥ 0 ∧ x1, x2, . . . , xk ∈ L} (iteration of L)

2. The regular languages over Σ are the languages which can be generated from ∅

and the sets {a} (a ∈ Σ) by repeated application of union, concatenation and

iteration. In other terms:

– the empty set ∅ is regular

– {a} is regular for every a ∈ Σ

– if L, L0 are regular then L ∪ L0 , L · L0 and L∗ are regular

3. In such a way we obtain regular languages like

(({a}·{b}∗ ) ∪ {c})∗ ∪ ({c}∗ ·(({b}·{c}) ∪ {a})∗ )

For simplicity we omit the set-braces { and } and obtain

((a·b∗ ) ∪ c)∗ ∪ (c∗ ·((b·c) ∪ a)∗ ).

This is called a regular expression describing the corresponding regular set.

Example 5.13 1. (a ∪ b)∗ a describes the set of all words over {a, b} ending with a.

2. (0∗·1·0∗·1)∗·0∗ describes the set of all words over {0, 1} with an even number of 1s.

3. (a1 ∪ a2 ∪ · · · ∪ an )∗ = {a1 , a2 , . . . , an }∗ describes the set of all words over the

alphabet {a1 , a2 , . . . , an }.

4. ∅∗ = {ε}.

5. Different regular expressions can describe the same regular language, e.g.:

0·(0 ∪ 1) and (0·0) ∪ ∅ ∪ ((0·1)·∅∗ ) describe the same language {00, 01}.
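Statement 5 can be checked mechanically for short words. The sketch below uses Python's re module, whose syntax writes ∪ as | and concatenation as juxtaposition; since re has no symbol for ∅, the second expression of Statement 5 is given in simplified form. Note that enumerating words only tests equivalence up to a length bound; deciding EQ exactly is much harder:

```python
import re
from itertools import product

def language_up_to(pattern, alphabet, max_len):
    """All words over the given alphabet of length ≤ max_len
    that the regular expression matches in full."""
    words = (
        "".join(chars)
        for n in range(max_len + 1)
        for chars in product(alphabet, repeat=n)
    )
    return {w for w in words if re.fullmatch(pattern, w)}

# Example 5.13.5: both expressions describe {00, 01}
assert language_up_to("0(0|1)", "01", 4) == {"00", "01"}
assert language_up_to("00|01", "01", 4) == {"00", "01"}
# Example 5.13.2: words with an even number of 1s
assert "0110" in language_up_to("(0*10*1)*0*", "01", 4)
assert "010" not in language_up_to("(0*10*1)*0*", "01", 4)
```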

Definition.
EQ(∪, ·, ∗) =def {(E, E′) | the regular expressions E, E′ with operations ∪, · and ∗ describe the same regular language}
EQ(∪, ·, ∗, ²) =def {(E, E′) | the regular expressions E, E′ with operations ∪, ·, ∗ and ² describe the same regular language}
(here E² abbreviates E·E)

Theorem.
1. The problem EQ(∪, ·, ∗) is PSPACE-complete.
2. The problem EQ(∪, ·, ∗, ²) is EXPSPACE-complete.

Proof. Omitted.

Since the complement of a regular language is also a regular language, one can also consider regular expressions with complement ¯. We notice that the related problem EQ(∪, ·, ∗, ¯) is complete for ⋃_{k=1}^∞ SPACE(2^{2^{···^{2}}}), where the tower of 2s has height log(n^k). This is a tremendously large complexity class.

38

Finally let us consider the complexity of some games, a very interesting field of complexity theory.

Hex. Two players put alternately stones on an empty cell of a hex board: Red has red

stones and blue has blue stones. The players try to form a connected path of their own

stones linking the opposing sides of the board marked by their colors. The first player to

complete his connection wins the game.

Definition: HEX
Input: an n×n hex board having already blue and red stones in some of the cells
Question: Does player blue have a winning strategy?

Theorem. HEX is PSPACE-complete.

Proof. Omitted.

Checkers and Go. In the same way we can define generalized versions of checkers and

go. That means the size of the board can be different from the original size 8×8 or 19×19,

resp. So we get the problems CHECKERS and GO.

Theorem. CHECKERS and GO are EXP-complete.

Proof. Omitted.

39

Here are our favorite complexity classes with some complete sets:

In Complexity Theory many more complexity classes with interesting complete sets are investigated. Some examples:

• Using a more sensitive notion of reducibility it turns out that not all problems in P

are equally complex. One of the hardest problems in P is the problem of evaluating

a logical circuit.

• Using nondeterministic algorithms (a special kind of parallel algorithms) the class

NL of problems solvable in logarithmic space is defined. There holds NL ⊆ P.

One of the hardest problems in NL is the problem of whether there exists a path

between two given vertices of a graph.

• The class NEXP of problems which can be solved in exponential time by nondeter-

ministic algorithms. There holds EXP ⊆ NEXP ⊆ EXPSPACE. The problem

EQ(∪, ·,2 ) is NEXP-complete.

40

6 Appendix 1: List of NP-complete problems

Instead of A=def {x : E(x)} we write PROBLEM A

Input: x

Question: Is E(x) true?

CLIQUE

Input: undirected graph G = (V, E), k ∈ N

Question: Does G have a clique with k vertices?

(a clique is a subset C ⊆ V such that (u, v) ∈ E for all u, v ∈ C with u ≠ v)
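The obvious brute-force approach to CLIQUE — try every k-subset of the vertices — can be sketched in Python. The function name and the representation of the undirected graph as a set of vertex pairs are our own choices, not part of the lecture notes:

```python
from itertools import combinations

def has_clique(vertices, edges, k):
    """Brute-force CLIQUE check: try every k-subset of vertices.

    `edges` is a set of pairs; the graph is undirected, so (u, v)
    and (v, u) count as the same edge. The running time is roughly
    O(n^k), exponential in k, as one expects for an NP-complete
    problem when no shortcut is known."""
    def adjacent(u, v):
        return (u, v) in edges or (v, u) in edges

    for subset in combinations(vertices, k):
        if all(adjacent(u, v) for u, v in combinations(subset, 2)):
            return True
    return False
```
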

INDEPENDENT SET

Input: undirected graph G = (V, E), k ∈ N

Question: Does G have an independent set with k vertices?

(an independent set is a subset C ⊆ V such that (u, v) ∉ E for all u, v ∈ C)

VERTEX COVER

Input: undirected graph G = (V, E), k ∈ N

Question: Does G have a vertex cover with at most k vertices?

(a vertex cover is a subset K ⊆ V such that E ⊆ (K × V ) ∪ (V × K))

k-COLOR (k ≥ 3)

Input: undirected graph G = (V, E)

Question: Does G have a k-coloring?

(a k-coloring is a coloring of the vertices with k colors such that two

vertices have different colors if they are connected by an edge, i.e., it is

a function c : V → {1, 2, . . . , k} such that c(u) ≠ c(v) for every (u, v) ∈ E)
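What makes k-COLOR a typical NP problem is that a proposed coloring is easy to verify, even though finding one seems hard. A minimal verifier sketch (the function name and the dict representation of c are our own conventions):

```python
def is_k_coloring(edges, coloring, k):
    """Check whether `coloring` (a dict vertex -> color in {1..k})
    is a valid k-coloring: every color is in range, and the two
    endpoints of every edge receive different colors. Verification
    is polynomial; only the search for a coloring is hard."""
    if any(not (1 <= c <= k) for c in coloring.values()):
        return False
    return all(coloring[u] != coloring[v] for u, v in edges)
```
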

HAMILTONIAN CIRCUIT

Input: undirected graph G = (V, E)

Question: Does G have a Hamiltonian circuit?

(a Hamiltonian circuit is a closed path in G that visits every vertex

exactly once, i.e. it is a sequence v1 , v2 , . . . , vm such that m = #V ,

{v1 , v2 , . . . , vm } = V , and (v1 , v2 ), (v2 , v3 ), . . . , (vm−1 , vm ), (vm , v1 ) ∈ E)
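The definition above doubles as a polynomial-time certificate check: given a candidate sequence v1, . . . , vm, the three conditions are easy to test. A sketch in Python (the names and the edge-set representation are assumptions of ours):

```python
def is_hamiltonian_circuit(vertices, edges, path):
    """Verify the certificate `path` = (v1, ..., vm) against the
    definition: m = #V, the path visits every vertex, and each
    consecutive pair (including vm back to v1) is an edge."""
    def adjacent(u, v):
        return (u, v) in edges or (v, u) in edges

    if len(path) != len(vertices) or set(path) != set(vertices):
        return False
    return all(adjacent(path[i], path[(i + 1) % len(path)])
               for i in range(len(path)))
```
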

TRAVELLING SALESMAN

Input: m ≥ 2 and towns 1, 2, . . . , m a travelling salesman has to visit,

costs M (i, j) ∈ N for travelling from town i to town j,

bound k ∈ N on the overall cost

Question: Is there a round trip with overall cost ≤ k?

I.e., do there exist s1, s2, . . . , sm such that {s1, s2, . . . , sm} = {1, 2, . . . , m}
and Σ_{i=1}^{m−1} M(si, si+1) + M(sm, s1) ≤ k?
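A naive decider for TRAVELLING SALESMAN simply tries all round trips, which takes O(m!) time — exponential, as expected for an NP-complete problem. A sketch (the 0-indexed towns and nested-list cost matrix are our own conventions):

```python
from itertools import permutations

def tsp_decision(M, k):
    """Brute-force TRAVELLING SALESMAN decision: M is an m x m cost
    matrix (M[i][j] = cost of travelling from town i to town j,
    0-indexed here), k the cost bound. Tries every round trip."""
    m = len(M)
    for order in permutations(range(m)):
        cost = sum(M[order[i]][order[i + 1]] for i in range(m - 1))
        cost += M[order[-1]][order[0]]
        if cost <= k:
            return True
    return False
```
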


LONGEST PATH

Input: undirected graph G = (V, E), k ∈ N

Question: Does G have a path of length k?
I.e., does there exist a sequence of distinct vertices v0, v1, . . . , vk ∈ V

such that (v0 , v1 ), (v1 , v2 ), . . . , (vk−1 , vk ) ∈ E?

BIN PACKING

Input: items 1, 2, . . . , m with volumes a1 , a2 , . . . , am ∈ N,

k bins with volume l each

Question: Can the items be placed in the bins?

I.e.: Do there exist Z1, Z2, . . . , Zk such that
Z1 ∪ Z2 ∪ · · · ∪ Zk = {1, 2, . . . , m} and Σ_{j∈Zi} aj ≤ l for i = 1, 2, . . . , k?

SOS

Input: n, a1, a2, . . . , an, b ∈ N

Question: Does there exist an I ⊆ {1, 2, . . . , n} such that Σi∈I ai = b?
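SOS admits a well-known dynamic program that is polynomial in the value b but not in the length of its binary description — a pseudo-polynomial algorithm, which does not contradict the NP-completeness of SOS. A sketch (function name is our own):

```python
def sos(a, b):
    """Pseudo-polynomial subset-sum decider: reachable[s] becomes
    True iff some subset of a sums to s. Running time O(n * b),
    polynomial in the *value* b, i.e. exponential in its length."""
    reachable = [True] + [False] * b
    for x in a:
        # Iterate downwards so each element is used at most once.
        for s in range(b, x - 1, -1):
            if reachable[s - x]:
                reachable[s] = True
    return reachable[b]
```
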

PARTITION

Input: n, a1, a2, . . . , an ∈ N
Question: Does there exist an I ⊆ {1, 2, . . . , n} such that Σ_{i∈I} ai = Σ_{i∉I} ai ?
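The close relationship between PARTITION and SOS (cf. Exercise 33(a) in Appendix 3) can be sketched as a reduction: the numbers split into two equal-sum parts iff some subset reaches half the total. The helper names are hypothetical, and the naive SOS decider is included only as a sanity check:

```python
from itertools import combinations

def partition_to_sos(a):
    """Reduction PARTITION <= SOS: a1, ..., an split into two
    equal-sum parts iff some subset sums to half the total. An odd
    total is mapped to the target total + 1, which no subset can
    reach, so the answer is correctly 'no'."""
    total = sum(a)
    return (a, total // 2) if total % 2 == 0 else (a, total + 1)

def brute_sos(a, b):
    """Naive SOS decider, used only to sanity-check the reduction."""
    return any(sum(c) == b
               for r in range(len(a) + 1)
               for c in combinations(a, r))
```
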

KNAPSACK

Input: m items with volumes g1, g2, . . . , gm ∈ N and values v1, v2, . . . , vm ∈ N,
volume G of a knapsack, value V to be transported
Question: Can one choose some of the items whose overall volume is not greater
than G and whose overall value is at least V ? I.e.: Does there exist
an I ⊆ {1, 2, . . . , m} such that Σ_{i∈I} gi ≤ G and Σ_{i∈I} vi ≥ V ?

QUADRATIC EQUATION

Input: a, b, c ∈ N

Question: Do there exist x, y ∈ N such that ax² + by = c?

3-DIMENSIONAL MATCHING
Input: sets A, B, C of persons with sex 1, sex 2, and sex 3, resp., such that

#A = #B = #C, a preference set S ⊆ A × B × C

((a, b, c) ∈ S means: a, b and c would accept to marry each other)

Question: Is there an arrangement to wed all persons from A ∪ B ∪ C (without
polygamy)? I.e.: Do there exist (a1, b1, c1), . . . , (am, bm, cm) ∈ S such
that m = #A, {a1, . . . , am} = A, {b1, . . . , bm} = B, and {c1, . . . , cm} = C?
Remark: The traditional marriage problem with two sexes is in P (thank God!).

Many more NP-complete problems can be found in the book by M.R. Garey, D.S.
Johnson: Computers and Intractability: A Guide to the Theory of NP-Completeness.


7 Appendix 2: General Definitions and Notations

Logics

Simple statements and conditions are often combined into more complicated ones using
logical connectives and quantifiers. We use the following:

connective/quantifier formula formula is true if

negation ¬A A is not true

conjunction (A ∧ B) A and B are true

disjunction (A ∨ B) at least one of A and B is true

implication (A → B) (¬A ∨ B) is true

equivalence (A ↔ B) A is true if and only if B is true

existential quantifier ∃xA(x) A(x) is true for at least one x

universal quantifier ∀xA(x) A(x) is true for all x

We also write A ⇒ B instead of A → B, and A ⇔ B instead of A ↔ B.

If, for an object O, we introduce by definition a name or denotation B, we write B =def O
or O def= B. If, for a condition C, we introduce by definition a name or denotation B,
we write B ⇔def C or C def⇔ B.

Sets

We use sets in the naive sense, i.e. as collections of objects. If an object a belongs to the
set A then we say that a is an element of A. We write

a ∈ A for a is an element of the set A

a ∉ A for ¬(a ∈ A)
A ⊆ B for ∀a((a ∈ A) → (a ∈ B)) (A is a subset of B)
A ⊈ B for ¬(A ⊆ B) (A is not a subset of B)
A ⊂ B for (A ⊆ B) ∧ (B ⊈ A) (A is a proper subset of B)

A = B for (A ⊆ B) ∧ (B ⊆ A) (A and B are equal)

A set can be described in different ways. If a set A consists of n ≥ 1 different elements

a1 , a2 , . . . , an then we write A = {a1 , a2 , . . . , an }. Such sets are finite, and we denote by

#A =def n the number of its elements. The only set which has no element is the empty

set which is denoted by ∅ with #∅ =def 0.

Another way to describe sets is by a defining property. If E is a property an object can

possess or not, then we write A = {a | a possesses the property E}.

For example, the set Q of squares of natural numbers can be written in different ways as

Q = {0, 1, 4, 9, 16, 25, 36, . . . }

= {m | m is the square of a natural number}

= {n2 | n is a natural number}

For a set A we define P(A) =def {B | B ⊆ A} as the power set of A.

From given sets more complicated sets can be built using set operations.


A ∪ B =def {x | x ∈ A ∨ x ∈ B} (union)
A ∩ B =def {x | x ∈ A ∧ x ∈ B} (intersection)
Ā =def {x | x ∉ A} (complement)
A − B =def {x | x ∈ A ∧ x ∉ B} (set difference)

The following rules for set operations are important:

A ∪ B = B ∪ A commutativity
A ∩ B = B ∩ A commutativity
A ∪ (B ∪ C) = (A ∪ B) ∪ C associativity
A ∩ (B ∩ C) = (A ∩ B) ∩ C associativity
A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C) distributivity
A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C) distributivity
(A ∪ B)‾ = Ā ∩ B̄ De Morgan's law
(A ∩ B)‾ = Ā ∪ B̄ De Morgan's law
A ∪ A = A ∩ A = A = Ā̄
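These laws can be spot-checked (not proved, of course) with Python's built-in set type. Since Python has no absolute complement, we take complements relative to a small finite universe U of our own choosing:

```python
# Spot-check of the set laws with Python's set type. The universe U
# is an arbitrary finite choice; complement is taken relative to U.
U = set(range(10))
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

def comp(X):
    """Complement of X relative to the universe U."""
    return U - X

assert A | B == B | A                     # commutativity
assert A & (B | U) == (A & B) | (A & U)   # distributivity (sample)
assert comp(A | B) == comp(A) & comp(B)   # De Morgan's law
assert comp(A & B) == comp(A) | comp(B)   # De Morgan's law
assert A | A == A & A == A                # idempotence
assert comp(comp(A)) == A                 # double complement
```
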

We also consider unions and intersections of infinitely many sets. We define

⋃_{i=m}^{∞} Ai =def {x | ∃i(i ≥ m ∧ x ∈ Ai)} and ⋂_{i=m}^{∞} Ai =def {x | ∀i(i ≥ m → x ∈ Ai)}.

In a set there is no special order of the elements, so e. g. we have {a, b} = {b, a}. If in

a set {a1 , a2 , . . . , an } the order of the elements should be fixed as a1 , a2 , . . . , an then we

write (a1 , a2 , . . . , an ), and we call that an n-tuple. For sets A1 , A2 , . . . , An we define

A1 ×A2 ×. . .×An =def {(a1 , a2 , . . . , an ) | a1 ∈ A1 , a2 ∈ A2 , . . . , an ∈ An }

as the cartesian product of A1, A2, . . . , An. In particular, Aⁿ =def A × A × · · · × A (n times).

Relations

A set R ⊆ A1 × A2 × · · · × An is called an (n-ary) relation on (A1, A2, . . . , An). Of particular interest are binary relations R ⊆ A × A. In

this notation, for example, the ≤-relation on the set N of natural numbers is nothing else

than the set ≤ = {(x, y) | x ∈ N ∧ y ∈ N ∧ x ≤ y} ⊆ N × N, and we can write (x, y) ∈ ≤
instead of x ≤ y. However, this is not convenient, and hence for binary relations R we use
the operational notation xRy rather than (x, y) ∈ R.

A relation R ⊆ A × A is said to be

reflexive ⇔def ∀a(aRa)

transitive ⇔def ∀a∀b∀c((aRb ∧ bRc) → aRc)

symmetric ⇔def ∀a∀b(aRb → bRa)

antisymmetric ⇔def ∀a∀b((aRb ∧ bRa) → a = b)

total ⇔def ∀a∀b(aRb ∨ bRa)

a partial order ⇔def R reflexive, transitive, and antisymmetric

an order ⇔def R is a total partial order

an equivalence relation ⇔def R is reflexive, transitive, and symmetric


Functions

Functions are a special kind of sets. Intuitively, a function maps elements of a set onto

elements of another set. Let A and B be sets. A set ϕ ⊆ A × B is called a function if for

every a ∈ A there exists at most one b ∈ B such that (a, b) ∈ ϕ. We write ϕ : A → B. If,

for an a ∈ A, there exists an element b ∈ B such that (a, b) ∈ ϕ then ϕ(a) is defined and

we write ϕ(a) = b. Otherwise, ϕ(a) is said to be not defined. We set

Dϕ =def {a | a ∈ A and ϕ(a) defined} (domain of ϕ)
Wϕ =def {ϕ(a) | a ∈ A and ϕ(a) defined} (range of ϕ)

A function f : A → B is called total if Df = A. For total functions f : A → B and

g : B → C we define the function (g ◦ f ) : A → C by (g ◦ f )(a) =def g(f (a)).

The characteristic function cA of a set A is defined, for all x, by cA(x) =def 1 if x ∈ A, and cA(x) =def 0 if x ∉ A.

Natural Numbers

Let N =def {0, 1, 2, 3, . . . } be the set of natural numbers, and let + and · denote the

operations of addition and multiplication on N. Applying subtraction or division does
not necessarily result in a natural number. Therefore we modify these

operations in such a way that the result is in any case a natural number. We define the

modified subtraction −· and the modified division : for all x, y ∈ N by

x −· y =def x − y, if x ≥ y, and x −· y =def 0 otherwise;
x : y =def the greatest z ∈ N such that z · y ≤ x, if y > 0, and x : y =def x, if y = 0.

Besides the operational symbols +, −· , ·, and : for the arithmetical operations we will also

use the functional symbols sum, sub, mul and div, which are defined for all x, y ∈ N by

sum(x, y) =def x + y, sub(x, y) =def x −· y, mul(x, y) =def x · y, and div(x, y) =def x : y.

Moreover we define the exponential function exp by exp(x, y) =def x^y for all x, y ∈ N.
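The modified operations are easy to mirror in code; for natural numbers, Python's floor division `//` already returns the greatest z with z · y ≤ x. A sketch with function names borrowed from sub and div above (the y = 0 case follows the somewhat unusual definition given in the text):

```python
def sub(x, y):
    """Modified subtraction: x -. y = x - y if x >= y, else 0."""
    return x - y if x >= y else 0

def div(x, y):
    """Modified division: for y > 0, the greatest z with z * y <= x
    (which is exactly floor division); for y = 0, the value x itself,
    as defined above."""
    return x // y if y > 0 else x
```
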

We use a special notation for sums of more than two natural numbers ai. For r ≤ s we
define Σ_{i=r}^{s} ai =def ar + ar+1 + · · · + as, and for a finite set I we define Σ_{i∈I} ai as the sum
of all ai such that i ∈ I.

Let f and g be functions and k ≥ 2. We define the functions (f + g), (f −· g), (f · g),
(f : g), (f^g), (k·f), (f^k), and (k^f) by

(f + g)(x) =def f(x) + g(x)     (f^g)(x) =def f(x)^g(x)
(f −· g)(x) =def f(x) −· g(x)   (k·f)(x) =def k·f(x)
(f · g)(x) =def f(x) · g(x)     (f^k)(x) =def f(x)^k
(f : g)(x) =def f(x) : g(x)     (k^f)(x) =def k^f(x)

Let A ⊆ N be a nonempty finite set. The greatest (smallest) element w.r.t. the order ≤

on N is called the maximum (minimum, resp.) of A, and it is denoted by max A (min A,

resp.).


8 Appendix 3: Exercises

1. How many words are in the set {x | x ∈ {a, b}∗ ∧ |x| ≤ n}, for n ≥ 0?

2. How many words of length n ≥ 2 are in the set {xayb | x, y ∈ {a, b}∗ }?

3. Let bin(n) = 1010 . . . 10 (2m digits). Give a formula for n (as in Example 1.4.2).

of the decimal description of natural numbers.

5. Describe the function f : N → N computed by the algorithm in Example 1.6.

6. Construct a TM which subtracts 1 from an arbitrary natural number n > 0 (given
in binary representation). For n = 0 the result is 0.

7. The TM M (starting state s0 , stopping state s2 ) is given by the instructions

s0 0 → s0 0R, s0 1 → s0 0R, s0 □ → s1 □L
s1 0 → s1 0L, s1 1 → s1 1L, s1 □ → s2 1O
(□ denotes the blank symbol)

Which function g : N → N does M compute?

8. Construct a TM which computes the function h : {a, b}∗ → {a, b}∗ defined by

h(x) =def aa · · · a bb · · · b (k letters a followed by l letters b), where k is the number
of a's in x and l is the number of b's in x; i.e., the TM has to sort the letters of the
input.

9. Prove 2^{|bin(n)|−1} ≤ n < 2^{|bin(n)|} for n ≥ 1.

10. Estimate the running time of the TM you constructed in Exercise 8.

11. Prove the statements 1-5 of Proposition 2.1.
12. Prove the statements 1-4 of Proposition 2.2.

13. Construct a TM (program!) which computes the function sum (addition of natural

numbers). What is the running time of your TM?

14. Describe the behavior of a TM (do not write a program) that computes the function

mul (multiplication of natural numbers). What is the running time of your TM?

15. Write a pseudocode algorithm that decides PRIME (Example 2.5) in time O(n² · 2ⁿ)
or, better, in time O(n² · 2^{n/2}).

16. Prove Theorem 3.4: For t(n), s(n) ≥ n:

(a) If A, B ∈ TIME(t) then A ∪ B, A ∩ B, A ∈ TIME(t).

(b) If A, B ∈ SPACE(s) then A ∪ B, A ∩ B, A ∈ SPACE(s).

17. For functions s1(n), s2(n) ≥ n, find a function s(n) ≥ n, as small as possible, such that

SPACE(s1 ) ∪ SPACE(s2 ) ⊆ SPACE(s).

18. Prove that 2ⁿ is a time function.
19. Prove SPACE(n²) ⊂ SPACE(n² · log n).
20. Prove TIME(2ⁿ) ⊂ TIME(2^{2n}).

21. For functions s(n), t(n) ≥ n, find a function r(n) ≥ n, as small as possible, such that

SPACE(s) ∪ TIME(t) ⊆ TIME(r).

22. Find a function s(n) ≥ n, as large as possible, such that SPACE(s) ⊆ TIME(nⁿ).

Hint: Try with s(n) = n. If this fulfills the above inclusion then enlarge s as much

as possible.


23. (See proof of Theorem 3.17.) Prove:

(a) If A ∈ SPACE(2^{n^k}) then BAk ∈ SPACE(n).
(b) If BAk ∈ P then A ∈ TIME(2^{n^{k+1}}).

25. Prove that EULER is in P (see Example 4.9).

26. Prove that 2-COLOR is in P (see Example 4.10).

27. Prove that HAMILTONIAN CIRCUIT is a polynomial search problem. (See Ex-

ample 4.12.)

28. Prove SOS ∈ NP. (See Example 4.13.)

29. Prove Theorem 4.15: If A, B ∈ NP then A ∩ B, A ∪ B ∈ NP.

30. Prove that not every problem in EXP is EXP-complete.

(Hint: Use the fact that P is closed under polynomial time reducibility.)

31. (See proof of Theorem 5.11) Prove that

U =def {(code(M ), x, z) | TM M stops on x with result 1 using at most |z| cells}

is in SPACE(n²).

32. Prove that SPACE(n²) is not closed under polynomial time reducibility.

Hint: Look at the proof of Theorem 5.11.

33. Prove (a) PARTITION ≤ SOS.

(b) SOS ≤ PARTITION.

34. Prove CLIQUE ≤ INDEPENDENT SET.

35. Give a regular expression that describes the set {a, b}∗ − {xaay | x, y ∈ {a, b}∗ },

i.e., the set of all words over the alphabet {a, b} which do not have two consecutive

letters a.
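Exercise 15 above can be approached by trial division: with n = |bin(N)| binary digits there are about 2^{n/2} candidate divisors up to √N, each tested with polynomially many bit operations. The exercise asks for pseudocode; the following Python function is one possible concretization of that idea:

```python
def is_prime(N):
    """Decide PRIME by trial division up to sqrt(N). With n = |bin(N)|
    there are roughly 2^(n/2) candidate divisors d, each division
    costing polynomially many bit operations -- exponential in the
    input length, matching the bound asked for in Exercise 15."""
    if N < 2:
        return False
    d = 2
    while d * d <= N:
        if N % d == 0:
            return False
        d += 1
    return True
```
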
