
CHAPTER III: Computability

In this chapter we will learn about:


 Recursive Functions
 Recursive Languages
 Recursively Enumerable Languages



Recursive Functions
 Recursive function theory, like the theory of Turing machines, is one way to
make formal and precise the intuitive, informal, and imprecise notion of an
effective method.
 It happens to identify the very same class of functions as those that are
Turing computable.
 This fact is informal or inductive support for Church's Thesis, asserting that
every function that is effectively computable in the intuitive sense is
computable in these formal ways.
 Recursive function theory begins with some very elementary functions that
are intuitively effective.
 Our elementary functions are all functions of the natural numbers.
 Example
o z(x) = 0
o s(x) = successor of x (roughly, "x + 1")
o id(x) = x
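These three base functions can be written directly in Python. The sketch below is only an illustration of the definitions above (the name id_ is my choice, to avoid shadowing Python's built-in id):

```python
# The initial (base) functions of recursive function theory,
# viewed as functions on Python's non-negative integers.

def z(x: int) -> int:
    """Zero function: always returns 0."""
    return 0

def s(x: int) -> int:
    """Successor function: returns x + 1."""
    return x + 1

def id_(x: int) -> int:
    """Identity (projection) function: returns its argument unchanged."""
    return x

print(z(7), s(7), id_(7))   # -> 0 8 7
```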



The Building Operations
 There are three building operations:
1. Composition
2. Primitive Recursion and
3. Minimization

Composition
 If we start with the successor function, s(x), then we may replace its argument, x, with another function.
 If we replace the argument, x, with the zero function, z(x), then the result is the successor of zero.
 For example: s(z(x)) = 1, s(s(z(x))) = 2 and so on.
 In this way, composing the initial functions lets us describe every natural number.
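Here is a minimal sketch of composition, reusing z and s from the sketch above (the helper name compose is mine, not from the slide):

```python
def compose(f, g):
    """Return the composition of f and g, i.e. the function x -> f(g(x))."""
    return lambda x: f(g(x))

one = compose(s, z)              # s(z(x)) = 1 for every x
two = compose(s, compose(s, z))  # s(s(z(x))) = 2 for every x

print(one(99), two(99))          # -> 1 2
```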



Primitive Recursion
 The second building operation is called primitive recursion.
 Function h is defined through functions f and g by primitive recursion
when
o h(x,0) = f(x)
o h(x,s(y)) = g(x,h(x,y))

 First, remember that f and g are known computable functions.


 Primitive recursion is a method of defining a new function, h, through
old functions, f and g.
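As a concrete, standard illustration (my own rendering, not slide material): addition and multiplication can be built by primitive recursion. The sketch below unfolds the recursion with a loop.

```python
def primitive_recursion(f, g):
    """Build h from f and g so that
       h(x, 0)     = f(x)
       h(x, y + 1) = g(x, h(x, y))."""
    def h(x: int, y: int) -> int:
        acc = f(x)
        for _ in range(y):          # unfold the recursion y times
            acc = g(x, acc)
        return acc
    return h

# Addition: add(x, 0) = x,  add(x, y + 1) = s(add(x, y))
add = primitive_recursion(lambda x: x, lambda x, prev: prev + 1)

# Multiplication: mul(x, 0) = 0,  mul(x, y + 1) = add(x, mul(x, y))
mul = primitive_recursion(lambda x: 0, lambda x, prev: add(x, prev))

print(add(3, 4), mul(3, 4))   # -> 7 12
```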



Partial and Total Recursive Functions
 Each computable function f takes a fixed, finite number of natural numbers
as arguments.
 Because the functions are partial in general, they may not be defined for
every possible choice of input.
 If a computable function is defined for a certain input, then it returns a
single natural number as output.
 These functions are also called partial recursive functions.
 In computability theory, the domain of a function is taken to be the set of all
inputs for which the function is defined.
 A function which is defined for all possible arguments is called total.
 If a computable function is total, it is called a total computable function
or total recursive function.



Characteristics of a Procedure
 The basic characteristic of a computable function is that there must be a
finite procedure telling how to compute the function.
 There must be exact instructions (i.e. a program), finite in
length, for the procedure.
 Thus every computable function must have a finite program that completely
describes how the function is to be computed.
 “If the procedure is given a k-tuple x in the domain of f, then
after a finite number of discrete steps the procedure must
terminate and produce f(x).”
 Intuitively, the procedure proceeds step by step, with a specific rule to cover
what to do at each step of the calculation.
 “If the procedure is given a k-tuple x which is not in the domain
of f, then the procedure might go on forever, never halting. Or it
might get stuck at some point, but it must not pretend to
produce a value for f at x.”



Problems solved by a computational process: Decision
Problems
 A decision problem is stated as a question with a "yes" or "no" answer,
such as:
o Is the number 23171 prime?
o Does 2005 January 1 fall on a Friday?
o Is the list of names in the file 'clients.txt' in sorted order?
o Does she love me?
 If the problem is stated as a Boolean statement --- an assertion which is
either true or false --- we call the statement a predicate.
 If there is an effective procedure for answering the question or
evaluating the assertion, we say that the problem is decidable.



Problems solved by a computational process: Functions
 Other problems require a single, specific answer:
o What is the smallest prime factor of 23171?
o On which day of the week will the year 2005 begin?

 This type of problem can be viewed as the evaluation of a function, since the answer is unique.
Problems solved by a computational process: Relations
 Some problems may have multiple correct answers or no correct
answers.
o Find a prime factor of 23171.
o What month in the year 2005 will have a Friday the 13th?
o What is a possible shuffled ordering of the following list of
cards?
Recursive Languages
 Definition: Let L be a language. Then L is recursive if there exists a TM M
such that L = L(M) and M halts on all inputs.
 If L is recursive then L = L(M) for some TM M, and
 If x is in L then M halts in a final (accepting) state.
 If x is not in L then M halts in a non-final (non-accepting) state, or no transition is available (M never goes into an infinite loop).
 In other words, L is recursive if it is accepted by a total TM (one that halts on every input).
 That means there are exactly two possible outcomes:
1. Halt and Accept
2. Halt and reject

 The set of all recursive languages is a subset of the set of all recursively
enumerable languages



Recursively Enumerable Languages
 Definition: Let L be a language. Then L is recursively enumerable if
there exists a TM M such that L = L(M).
 If L is RE. then L = L(M) for some TM M, and
 If x is in L then M halts in a final (accepting) state.
 If x is not in L then M may halt in a non-final (non-accepting) state, have no available transition, or loop forever.

 That means there are three possible outcomes (see the sketch after this list):


1. Halt and Accept
2. Halt and reject
3. Never halt
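To contrast the two definitions, here is a minimal Python sketch; the two example languages and all function names are illustrative assumptions, not from the slides. A decider for a recursive language always halts with accept or reject, while a recognizer for an r.e. language may loop forever on strings outside the language.

```python
# Decider for the recursive language L1 = { w in {0,1}* : w has even length }.
def decide_even_length(w: str) -> bool:
    return len(w) % 2 == 0              # always halts: accept or reject

# Recognizer for L2 = { binary strings that encode a perfect square }.
# It accepts by searching for a witness k with k*k == n; as written,
# it never rejects, so on non-members it runs forever.
def recognize_perfect_square(w: str) -> bool:
    n = int(w, 2)
    k = 0
    while True:
        if k * k == n:
            return True                 # halt and accept
        k += 1                          # keep searching (possibly forever)

print(decide_even_length("1010"))        # -> True, halts on every input
print(recognize_perfect_square("1001"))  # 9 = 3*3 -> True
# recognize_perfect_square("101") would loop forever: 5 is not a perfect square.
```

Of course L2 is itself recursive (one could stop once k*k exceeds n); the point of the sketch is only the shape of a recognizer that accepts but never explicitly rejects.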



 Observation: Let L be an r.e. language. Then there is an infinite list M0,
M1, … of TMs such that L = L(Mi).
 Question: Let L be a recursive language, and M0, M1, … a list of all TMs
such that L = L(Mi), and choose any i>=0. Does Mi always halt?
 Answer: Maybe, maybe not, but at least one in the list does.
 Question: Let L be a recursively enumerable language, and M0, M1, … a list of all TMs such that L = L(Mi), and choose any i>=0. Does Mi always halt?
 Answer: Maybe, maybe not. Depending on L, none of them might halt on all inputs, or some might.
 (If L is also recursive then L is recursively enumerable; the recursive languages are a subset of the r.e. languages.)
 Question: Let L be an r.e. language that is not recursive (L is in r.e. – recursive), and M0, M1, … a list of all TMs such that L = L(Mi), and choose any i>=0. Does Mi always halt?
 Answer: No! If it did, then L would not be in r.e. – recursive; it would be recursive.



See the hierarchy of language classes in the diagram: recursive ⊆ recursively enumerable ⊆ all languages.



 L is Recursively enumerable:
TMs exist: M0, M1, …
They accept every string in L, and do not accept any string outside L.
 L is Recursive:
At least one of these TMs halts on every string in L and in Σ* − L; the others may or may not.
 L is Recursively enumerable but not Recursive:
TMs exist: M0, M1, …
but none of them halts on all x in Σ* − L.
For example, M0 may go into an infinite loop on a string p in Σ* − L, while M1 does so on some q in Σ* − L.
However, each correct TM accepts every string in L, and none in Σ* − L.
 L is not R.E.:
no TM accepts L.



 Let M be a TM.
Question: Is L(M) r.e.?
 Answer: Yes! By definition it is.
Question: Is L(M) recursive?
 Answer: Don’t know; we don’t have enough information.
Let M be a TM that halts on all inputs:
 Question: Is L(M) recursively enumerable?
 Answer: Yes! By definition it is.
Question: Is L(M) recursive?
Answer: Yes! By definition it is.



 Let M be a TM.
As noted previously, L(M) is recursively enumerable, but may
or may not be recursive.
Question: Suppose we know L(M) is recursive. Does that mean M always halts?
Answer: Not necessarily. However, some TM M’ must exist such that L(M’) = L(M) and M’ always halts.
 Let M be a TM, and suppose that M loops forever on some
string x.
 Question: Is L(M) recursively enumerable?
 Answer: Yes! By definition it is. But obviously x is not in L(M).
 Question: Is L(M) recursive?
 Answer: Don’t know. Although M doesn’t always halt, some other TM
M’ may exist such that L(M’) = L(M) and M’ always halts.



Closure Properties for Recursive and Recursively Enumerable Languages
TM Block Diagrams:

 If L is a recursive language, then a TM M that accepts L and always halts can be pictorially represented by a “chip” or “box” that has one input and two outputs.

 If L is a recursively enumerable language, then a TM M that accepts L can be pictorially represented by a “box” that has one output.

 Conceivably, M could be provided with an output for “no,” but this output
cannot be counted on. Consequently, we simply ignore it.



Complement of a Recursive Language



 Question: How is the construction achieved? Do we simply complement (swap) the final and non-final states in the TM? No! A string in L could then end up being accepted by the machine for the complement of L.
 Suppose q5 is an accepting state in M, but the start state q0 is not.

 If we simply complemented the final and non-final states, then q0 would be an accepting state in M’ but q5 would not.
 Since the start state q0 would now be an accepting state, by definition all strings would be accepted by M’.



Union of Recursive Languages



Union of Recursively Enumerable Languages



Intersection of Recursive Sets



Intersection of RE Sets

Observe: if w is in the intersection, then both machines will accept and halt on w, so the combined machine M will halt and accept w.
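The block diagram for this construction is not reproduced in this extract. As a rough sketch of the idea (an assumption about the construction, expressed as sequential composition of two recognizers):

```python
def intersection_recognizer(recognize1, recognize2):
    """Build a recognizer for the intersection of L1 and L2 from recognizers
    for L1 and L2.  If w is in both languages, both calls halt and accept,
    so the combined recognizer halts and accepts w.  If w is outside either
    language, the corresponding call may loop forever, which is allowed
    for a recursively enumerable (but not recursive) result."""
    def recognize(w: str) -> bool:
        return recognize1(w) and recognize2(w)
    return recognize
```

If both recognizers are total (deciders), the same composition always halts, which matches the claim that recursive languages are also closed under intersection.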



CHAPTER IV: Computational Complexity

In this chapter we will learn about:


 Big-O Notation
 Class P versus Class NP
 Polynomial Time Reduction and NP-Complete Problems
 Cook’s Theorem



What is an algorithm?
 An algorithm is a sequence of unambiguous instructions for
solving a problem, i.e., for obtaining a required output for any
legitimate input in a finite amount of time.
 The definition can be illustrated by the following diagram.



 An algorithm is a clearly specified set of simple
instructions to be followed to solve a problem.
 Any well-defined computational procedure that takes
some value (or set of values) as an input and produces some
value (or set of values) as an output.
 A sequence of computational steps that transforms the
input into the output
 A set of well-defined, finite rules used for problem
solving.
 A finite set of instructions that, if followed, accomplish a
particular task. It is a precise, systematic method for
producing a specified result.
Properties of an algorithm
 From the above definitions, algorithm has the following properties:
Sequence, Unambiguous, Input, Output, Finite
Sequence
 It is a step-by-step procedure for solving a given problem
 Every algorithm should have a beginning (start) and a halt (end) step
 The first step (start step) and last step (halt step) must be clearly noted
 Between the two every step should have preceding and succeeding steps
 That is, each step must have a uniquely defined preceding and succeeding
step



Unambiguous
Define rigorously the sequence of operations performed for
transforming the inputs into the outputs
No ambiguous statements are allowed: Each step of an algorithm
must be clearly and precisely defined, having one and only one
interpretation.
 At each point in computation, one should be able to tell exactly
what will happen next
 Algorithms must specify every step. It must be composed of
concrete steps
Every detail of each step must be spelled out, including how to
handle errors
This ensures that if the algorithm is performed at different times
or by different systems using the same data, the output will be the
same.



Input specified
The inputs are the data that will be transformed during the
computation to produce the output.
An input to an algorithm specifies an instance of the problem the
algorithm solves.
Every algorithm should have a specified number (zero or more)
input values (or quantities) which are externally supplied.
 We must specify the type of data and the amount of data
 Note that a correct algorithm is not one that works most of the time, but one that works correctly for all legitimate inputs.



Output specified
The output is the data resulting from the computation.
Every algorithm should have one or a sequence of output
values.
A possible output for some computations is a statement that
there can be no output, i.e., no solution is possible.
The algorithm can be proved to produce the correct output
given a valid input.



Finiteness: It must terminate
Every valid algorithm must complete or terminate after a
finite number of steps.
If you trace out the instructions of an algorithm, then for all
cases the algorithm must terminate after a finite number
of steps.
It must eventually stop either with the right output or with a
statement that no solution is possible.
Finiteness is an issue for computer algorithms because computer algorithms often repeat instructions.
If the algorithm doesn’t specify when to stop, the computer will continue to repeat the instructions forever.
Need for Algorithms
 To understand the basic idea of the problem.
 To find an approach to solve the problem.
 To improve the efficiency of existing techniques.
 To understand the basic principles of designing the algorithms.
 To compare the performance of the algorithm with respect to
other techniques.
 It is the best method of description without describing the
implementation detail.
 The Algorithm gives a clear description of requirements and goal
of the problem to the designer.
 A good design can produce a good solution.
 To understand the flow of the problem.
 To understand the principle of designing.
 We can measure and analyze the complexity (time and space) of an algorithm with respect to input size without implementing and running it.
 It will reduce the cost of design.
 To measure the behavior (or performance) of the methods in all
cases (best cases, worst cases, average cases)
 With the help of an algorithm, we can also identify the resources (memory, input/output cycles) required by the algorithm.
 With the help of algorithms, we convert an art into a science.



Analysis of algorithm
 The analysis is a process of estimating the efficiency of an algorithm.
 There are two fundamental parameters based on which we can analyze an algorithm:
 Space Complexity: The space complexity can be understood as the
amount of space required by an algorithm to run to completion.
 Time Complexity: Time complexity is a function of input size n that
refers to the amount of time needed by an algorithm to run to
completion.
 In general, if there is a problem P1, then it may have many solutions,
such that each of these solutions is regarded as an algorithm.
 So, there may be many algorithms such as A1, A2, A3, …, An.
 Before you implement any algorithm as a program, it is better to find
out which among these algorithms are good in terms of time and
memory.
 Every algorithm can be analysed in terms of the time it takes to execute and the memory it requires.
 Generally, we make three types of analysis, as follows (a small illustration appears after this list):
 Worst-case time complexity: For 'n' input size, the worst-case time
complexity can be defined as the maximum amount of time needed by
an algorithm to complete its execution.
 Average case time complexity: For 'n' input size, the average-case
time complexity can be defined as the average amount of time needed
by an algorithm to complete its execution.
 Best case time complexity: For 'n' input size, the best-case time
complexity can be defined as the minimum amount of time needed by
an algorithm to complete its execution.
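As a simple standard illustration (linear search is my example, not from the slides): searching a list of n items for a key takes 1 comparison in the best case (key at the front), n comparisons in the worst case (key at the end or absent), and about (n + 1)/2 on average when the key is present and equally likely to be anywhere.

```python
def linear_search(items, key):
    """Return (index, comparisons) for key in items, or (-1, comparisons) if absent."""
    comparisons = 0
    for i, x in enumerate(items):
        comparisons += 1
        if x == key:
            return i, comparisons
    return -1, comparisons

data = [7, 3, 9, 1, 5]
print(linear_search(data, 7))   # best case:  (0, 1), found at the first position
print(linear_search(data, 5))   # worst hit:  (4, 5), found at the last position
print(linear_search(data, 8))   # miss:       (-1, 5), every element examined
```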



Algorithm Analysis
 We only analyze correct algorithms. An algorithm is correct if, for every input instance, it halts with the correct output.
 Incorrect algorithms might not halt at all on some input instances, or might halt with an answer other than the desired one.
 Analyzing an algorithm means predicting the resources that the algorithm
requires
 Resources include
a. Memory
b. Communication bandwidth
c. Computational time
 Factors affecting the running time
a. Computer
b. Compiler
c. Algorithm used
d. Input to the algorithm
What is complexity theory?
 Complexity theory is concerned with the resources, such as time and
space, needed to solve computational problems.
 Complexity theory is the appropriate setting for the study of such
problems.
 It is also the home of one of the most fundamental open problems in mathematics, namely the famous P versus NP problem.



Turing machines
 In complexity theory, Turing machines are allowed to have any finite number of tapes.
 Given a Turing machine M and an input x, we use the notation M(x) ↓
to denote that the computation of M on x halts in a finite number of
steps, and we write M(x)↑ if this is not the case.
 A set or a function is computable if there is a Turing machine
computing it.
 A set is computably enumerable (c.e.) if it is empty or is the range of a computable function.
 Given a computable set A and a machine M that computes A, we also
say that M recognizes A, or that M accepts A.



Complexity - Big-O
 T(n) = O(f(n)) means c·f(n) is an upper bound on T(n): there exists some constant c such that T(n) <= c·f(n) for all large enough n.
 Example: n^3 + 3n^2 + 6n + 5 is O(n^3), and n^2 + n log n is O(n^2).
 If f(N) = O(g(N)), then there are positive constants c and n0 such that f(N) <= c·g(N) whenever N >= n0.
 The growth rate of f(N) is less than or equal to the growth rate of g(N); g(N) is an upper bound on f(N).
Big-Oh example (checked numerically in the sketch below): let f(N) = 2N^2. Then
 f(N) = O(N^4)
 f(N) = O(N^3)
 f(N) = O(N^2) (best answer, asymptotically tight)
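To make the definition concrete, here is a small Python spot-check of the first example. The witness pair c = 2, n0 = 10 is my own choice; any pair that makes the inequality hold for all n >= n0 would do, and a finite loop is only a sanity check, not a proof.

```python
def T(n):                # the "actual" cost from the example above
    return n**3 + 3*n**2 + 6*n + 5

def bound(n, c=2):       # c * f(n) with f(n) = n^3
    return c * n**3

# Spot-check the defining inequality T(n) <= c*f(n) for 10 <= n < 10000.
assert all(T(n) <= bound(n) for n in range(10, 10_000))
print("T(n) <= 2*n^3 for all tested n >= 10, consistent with T(n) = O(n^3)")
```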
Big Oh: more examples
 N^2 / 2 − 3N = O(N^2)
 1 + 4N = O(N)
 log_10 N = log_2 N / log_2 10 = O(log_2 N) = O(log N)
 sin N = O(1); 10 = O(1); 10^10 = O(1)
 log N + N = O(N)
 log^k N = O(N) for any constant k
 Some rules when considering the growth rate of a function using
Big-Oh
Ignore the lower order terms and the coefficients of the
highest-order term
No need to specify the base of logarithm, changing the base
from one constant to another changes the value of the
logarithm by only a constant factor



Demonstrating the Big-O Concept
 Each of the algorithms below has O(n^3) time complexity (a stand-in sketch follows).
 (In fact, the execution time for Algorithm A is n^3 + n^2 + n, and the execution time for Algorithm B is n^3 + 101n^2 + n.)
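The code for Algorithms A and B is not reproduced in this extract. As a rough stand-in (my own sketch, not the original slide's code), a triple-nested loop over n items is the canonical shape of an algorithm whose running time grows like n^3:

```python
def cubic_algorithm(n: int) -> int:
    """Stand-in for an O(n^3) algorithm: the innermost statement runs n*n*n times."""
    steps = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                steps += 1      # constant-time work
    return steps

print(cubic_algorithm(10))      # -> 1000 inner steps for n = 10
```

Whatever lower-order work Algorithms A and B add (the n^2 and n terms), the n^3 term dominates, which is why both are reported as O(n^3).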



 Another demonstration of Big-O Concept
 Each of the algorithms below has O(n^2) time complexity.
 (In fact, the execution time for Algorithm C is n^2 + 2n + 3, and the execution time for Algorithm D is n^2 + 1002n + 3.)



What are NP, P, NP-complete and NP-Hard problems?
 P is the set of problems that can be solved by a deterministic Turing machine in polynomial time.
 NP is the set of decision problems that can be solved by a non-deterministic Turing machine in polynomial time.
 P is subset of NP (any problem that can be solved by
deterministic machine in polynomial time can also be solved
by non-deterministic machine in polynomial time).
 Informally, NP is the set of decision problems which can be solved in polynomial time by a “lucky algorithm”: a magical algorithm that always makes the right guess among the given set of choices.
 NP-complete problems are the hardest problems in the NP set. A decision problem L is NP-complete if:
1. L is in NP (any given solution for an NP-complete problem can be verified quickly, but no efficient solution is known), and
2. Every problem in NP is reducible to L in polynomial time.
• A problem is NP-Hard if it satisfies property 2 above; it need not satisfy property 1.
• Therefore, the NP-Complete set is a subset of the NP-Hard set.



NP-Completeness
 NP-completeness is a form of bad news: evidence that many
important problems can't be solved quickly.
Why should we care?
 These NP-complete problems really come up all the time.
 Knowing they're hard lets you stop beating your head against a wall
trying to solve them, and do something better:
 Use a heuristic. If you can't quickly solve the problem with a good
worst case time, maybe you can come up with a method for solving a
reasonable fraction of the common cases.
 Solve the problem approximately instead of exactly. A lot of the time it is possible to come up with a provably fast algorithm that doesn't solve the problem exactly but produces a solution you can prove is close to right.



 Use an exponential time solution anyway. If you really have to solve
the problem exactly, you can settle down to writing an exponential
time algorithm and stop worrying about finding a better solution.
 Choose a better abstraction. The NP-complete abstract problem
you're trying to solve presumably comes from ignoring some of the
seemingly unimportant details of a more complicated real world
problem.
 Perhaps some of those details shouldn't have been ignored, and make
the difference between what you can and can't solve.



Classification of Problems
 The subject of computational complexity theory is dedicated to classifying
problems by how hard they are.
 There are many different classifications; some of the most common
and useful are the following.
 P.
NP.
PSPACE.
EXPTIME.
Undecidable.



P
 P. Problems that can be solved in polynomial time. ("P" stands for polynomial.)
 P is a very good approximation to the class of problems which can be solved quickly in practice: usually, if this is true, we can prove a polynomial worst-case time bound, and conversely the polynomial time bounds we can prove are usually small enough that the corresponding algorithms really are practical.
NP.
 NP. This stands for "nondeterministic polynomial time" where
nondeterministic is just a fancy way of talking about guessing a solution.
 A problem is in NP if you can quickly (in polynomial time) test whether a proposed solution is correct, without worrying about how hard it might be to find that solution (see the verifier sketch after this list).
 Problems in NP are still relatively easy: if only we could guess the right
solution, we could then quickly test it.
 NP does not stand for "non-polynomial".
 There are many complexity classes that are much harder than NP.
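As a concrete example of polynomial-time verification, here is a small SAT verifier. SAT is used only as a familiar NP problem; the encoding of clauses as lists of signed integers and the function name are my own conventions, not from the slides.

```python
def verify_sat(clauses, assignment):
    """Polynomial-time verifier for SAT.
    clauses: list of clauses; each clause is a list of non-zero ints,
             where k means variable k and -k means "not variable k".
    assignment: dict variable -> bool, the proposed certificate."""
    for clause in clauses:
        if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            return False            # this clause has no true literal
    return True                     # every clause is satisfied

# (x1 or not x2) and (x2 or x3)
formula = [[1, -2], [2, 3]]
print(verify_sat(formula, {1: True, 2: False, 3: True}))    # -> True
print(verify_sat(formula, {1: False, 2: False, 3: False}))  # -> False
```

Finding a satisfying assignment may require search, but checking one, as above, takes only a single pass over the clauses.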
PSPACE
 PSPACE. Problems that can be solved using a reasonable amount of
memory (again defined formally as a polynomial in the input size)
without regard to how much time the solution takes.

EXPTIME.
 EXPTIME. Problems that can be solved in exponential time.
 This class contains most problems you are likely to run into, including
everything in the previous three classes.
 It may be surprising that this class is not all-inclusive: there are
problems for which the best algorithms take even more than
exponential time.



Undecidable.
 Undecidable. For some problems, we can prove that there is no
algorithm that always solves them, no matter how much time or
space is allowed.
 One very uninformative proof of this is based on the fact that there are as many problems as there are real numbers, but only as many programs as there are integers, so there are not enough programs to solve all the problems.
 But we can also define explicit and useful problems which can't be
solved.
NP-completeness
 NP-completeness theory is concerned with the distinction between
the first two classes, P and NP.



Reductions and completeness
 One of the goals of complexity theory is to classify problems according to
their complexity.
 The main tool for doing this is to consider effective reductions between
problems.
 A key insight is that classes such as NP contain hardest problems.
 Reductions also allow us to substantiate the idea that various problems,
though differently formulated, are actually the same.
Many-one reductions
 Definition: Given sets A and B, A is many-one reducible (or simply m-reducible) to B, written A ≤m B, if for some computable function f,
x ∈ A ⇐⇒ f(x) ∈ B for every x.
 If f is in addition polynomial-time computable, then we say that A is p-m-reducible to B, written A ≤pm B.
 We write A ≡pm B if both A ≤pm B and B ≤pm A. The set {B : A ≡pm B} is called the p-m-degree of A.
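As a small illustration of a p-m-reduction (the problems, graph encoding, and function name are my own standard-textbook choice, not from the slides): INDEPENDENT-SET reduces to CLIQUE by complementing the edge set, so an instance x = (G, k) is a yes-instance of INDEPENDENT-SET exactly when f(x) = (complement of G, k) is a yes-instance of CLIQUE.

```python
from itertools import combinations

def reduce_independent_set_to_clique(n, edges, k):
    """Map an INDEPENDENT-SET instance (graph on vertices 0..n-1, edge list, k)
    to a CLIQUE instance on the complement graph: k vertices are pairwise
    non-adjacent in G exactly when they are pairwise adjacent in G's complement."""
    edge_set = {frozenset(e) for e in edges}
    complement_edges = [(u, v) for u, v in combinations(range(n), 2)
                        if frozenset((u, v)) not in edge_set]
    return n, complement_edges, k      # O(n^2) work: a polynomial-time reduction

# Path graph 0-1-2: {0, 2} is an independent set of size 2 in G,
# and becomes a clique of size 2 in the complement.
print(reduce_independent_set_to_clique(3, [(0, 1), (1, 2)], 2))
# -> (3, [(0, 2)], 2)
```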



Reductions, Hardness and NP-Completeness



Theorem



The Cook-Levin Theorem
 Let SAT be the language of all satisfiable CNF formulae and 3SAT be
the language of all satisfiable 3CNF formulae. Then,
1. SAT is NP-complete.
2. 3SAT is NP-complete.
 Both SAT and 3SAT are clearly in NP, since a satisfying assignment can
serve as the certificate that a formula is satisfiable.
 Thus we only need to prove that they are NP-hard.
 We do so by first proving that SAT is NP-hard and then showing that SAT is polynomial-time Karp reducible to 3SAT (the clause-splitting idea behind this reduction is sketched below).
 This implies that 3SAT is NP-hard by the transitivity of polynomial-
time reductions.
 Thus the following lemma is the key to the proof.
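The SAT-to-3SAT step uses the standard clause-splitting construction: pad short clauses and break a clause with more than three literals into a chain of 3-literal clauses linked by fresh variables. The sketch below is my own rendering of that textbook construction (literals are encoded as signed integers, as in the verifier above), not the slides' proof.

```python
def clause_to_3cnf(clause, next_fresh_var):
    """Split one clause into equi-satisfiable 3-literal clauses.
    Returns (list of 3-literal clauses, next unused fresh variable number)."""
    if len(clause) <= 3:
        padded = clause + [clause[-1]] * (3 - len(clause))   # pad short clauses
        return [padded], next_fresh_var
    # (l1 v l2 v ... v lk) -> (l1 v l2 v y1), (-y1 v l3 v y2), ..., (-y v l(k-1) v lk)
    out = [[clause[0], clause[1], next_fresh_var]]
    prev = next_fresh_var
    next_fresh_var += 1
    for lit in clause[2:-2]:
        out.append([-prev, lit, next_fresh_var])
        prev = next_fresh_var
        next_fresh_var += 1
    out.append([-prev, clause[-2], clause[-1]])
    return out, next_fresh_var

# A 5-literal clause over variables 1..5, with fresh variables starting at 6.
print(clause_to_3cnf([1, 2, 3, 4, 5], 6))
# -> ([[1, 2, 6], [-6, 3, 7], [-7, 4, 5]], 8)
```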



Lemma 2.12 SAT is NP-hard.
 Notice that, to prove this, we have to show how to reduce every NP language L to SAT; in other words, we must give a polynomial-time transformation that turns any x ∈ {0, 1}* into a CNF formula φx such that x ∈ L iff φx is satisfiable.
 Since we know nothing about the language L except that it is in NP,
this reduction has to rely just upon the definition of computation, and
express it in some way using a Boolean formula.



