
UNIT-V

Turing Machine

Turing Machine: Definition, Model,


Representation of TMs-Instantaneous Descriptions,
Transition Tables and Transition Diagrams,
Language of a TM, Design of TMs,
Types of TMs,
Church’s Thesis,
Universal and Restricted TM,
Decidable and Un-decidable Problems,
Halting Problem of TMs,
Post’s Correspondence Problem,
Modified PCP,
Classes of P and NP, NP-Hard and NP-Complete Problems.
Turing Machine
• The Turing machine was invented in 1936 by Alan Turing.
• It is an accepting device which accepts the recursively enumerable languages generated by type-0 grammars.
There are various features of the Turing machine:
1. It has an external memory which can remember an arbitrarily long input sequence.
2. It has unlimited memory capability.
3. The model has a facility by which the input on the tape can easily be read at the left or at the right.
4. The machine can produce a certain output based on its input. Sometimes the same input may need to be used again to generate the output, so in this machine the distinction between input and output has been removed. Thus a common alphabet can be used for the Turing machine.

Formal definition of Turing machine :


• A Turing machine can be defined as a collection of 7 components:
Q: the finite set of states
∑: the finite set of input symbols
T: the finite set of tape symbols
q0: the initial state
F: the set of final states
B: the blank symbol, used as an end marker for the input
δ: the transition or mapping function.
• The mapping function maps a state of the finite control and the symbol read on the tape to the next state, the tape symbol to be written, and the direction in which the tape head moves.
• Each such rule is written as a triple, and the set of rules forms the program of the Turing machine, for example:
(q0, a) → (q1, A, R)
• This means that in state q0, if we read the symbol 'a', the machine goes to state q1, replaces a with A, and moves the head one square to the right (R stands for right).
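For illustration only, such a rule can be written down in a programming language as a lookup table keyed by (state, symbol) pairs. This is a minimal sketch in Python; the dictionary name and the string encodings of states and symbols are our own choices, not part of the formal definition.

```python
# Illustrative sketch: one way to encode a TM transition rule in Python.
# Keys are (state, scanned_symbol); values are (next_state, write_symbol, move).
delta = {
    ("q0", "a"): ("q1", "A", "R"),   # in q0 reading 'a': write 'A', go to q1, move right
}

print(delta[("q0", "a")])            # ('q1', 'A', 'R')
```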
Example: Construct a TM for the language L = {0^n 1^n} where n ≥ 1.
Solution:
• We have already solved this problem with a PDA.
• In a PDA, we have a stack to remember the previously read symbols.
• The main advantage of the Turing machine is that we have a tape head which can be moved forward or backward, and the input tape can be scanned repeatedly.
• The simple logic we will apply is: read each '0', mark it with A, then move ahead along the input tape, find a 1 and convert it to B.
• Now, repeat this process for all the remaining 0's and 1's.
Now we will see how this Turing machine works for 0011. Initially, the state is q0 and the head points to the first 0:

The move will be δ(q0, 0) = (q1, A, R), which means it goes to state q1, replaces 0 with A, and the head moves to the right:

The move will be δ(q1, 0) = (q1, 0, R), which means it does not change the symbol, remains in the same state, and moves to the right:

The move will be δ(q1, 1) = (q2, B, L), which means it goes to state q2, replaces 1 with B, and the head moves to the left:
Now the move will be δ(q2, 0) = (q2, 0, L), which means it does not change the symbol, remains in the same state, and moves to the left:

The move will be δ(q2, A) = (q0, A, R), which means it goes to state q0, leaves A unchanged, and the head moves to the right:

The move will be δ(q0, 0) = (q1, A, R), which means it goes to state q1, replaces 0 with A, and the head moves to the right:

The move will be δ(q1, B) = (q1, B, R), which means it does not change the symbol, remains in the same state, and moves to the right:

The move will be δ(q1, 1) = (q2, B, L), which means it goes to state q2, replaces 1 with B, and the head moves to the left:

The move δ(q2, B) = (q2, B, L) means it does not change the symbol, remains in the same state, and moves to the left:

Now the symbol immediately before the B is an A, which means all the 0's have been marked with A. So we will move right to ensure that no 1 is left. The move will be δ(q2, A) = (q0, A, R), which means it goes to state q0, does not change the symbol, and moves to the right:
The move δ(q0, B) = (q3, B, R) means it goes to state q3, does not change the symbol, and moves to the right:

The move δ(q3, B) = (q3, B, R) means it does not change the symbol, remains in the same state, and moves to the right:

The move δ(q3, Δ) = (q4, Δ, R) means it goes to state q4, which is the HALT state, and the HALT state is always an accepting state for a TM.
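To make the trace above concrete, here is a minimal Python sketch (our own illustration, not part of the original notes) that encodes exactly the moves used above and simulates them on an input word. The blank Δ is written as "_" and the function and variable names are our own choices. The same run() helper is reused for the other example machines in this unit.

```python
# Transition table of the 0^n 1^n machine traced above (blank Δ written as "_").
DELTA_0N1N = {
    ("q0", "0"): ("q1", "A", "R"), ("q1", "0"): ("q1", "0", "R"),
    ("q1", "B"): ("q1", "B", "R"), ("q1", "1"): ("q2", "B", "L"),
    ("q2", "0"): ("q2", "0", "L"), ("q2", "B"): ("q2", "B", "L"),
    ("q2", "A"): ("q0", "A", "R"), ("q0", "B"): ("q3", "B", "R"),
    ("q3", "B"): ("q3", "B", "R"), ("q3", "_"): ("q4", "_", "R"),
}

def run(delta, word, start="q0", accept="q4", blank="_", max_steps=10_000):
    """Simulate a single-tape TM; return True iff the accept state is reached."""
    tape = list(word) + [blank]              # input followed by one blank cell
    state, head = start, 0
    for _ in range(max_steps):
        if state == accept:
            return True                      # reached the HALT/accept state
        if head == len(tape):
            tape.append(blank)               # grow the tape on demand
        symbol = tape[head]
        if (state, symbol) not in delta:
            return False                     # no move defined: reject
        state, tape[head], move = delta[(state, symbol)]
        head += {"R": 1, "L": -1}.get(move, 0)   # "S" (stay) leaves the head in place
    return False

print(run(DELTA_0N1N, "0011"))   # True  (the trace above)
print(run(DELTA_0N1N, "0010"))   # False
```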

Language accepted by Turing machine :


• The Turing machine accepts all languages that are recursively enumerable.
• Recursive means repeating the same set of rules any number of times, and enumerable means a list of elements.
• The TM can also compute functions such as addition, multiplication, subtraction, division, the power function, and many more.
Example: Construct a Turing machine which accepts the string aba over ∑ = {a, b}.
Solution: We will assume that the string 'aba' is placed on the input tape like this:

• The tape head will read the sequence up to the Δ character.
• If the tape head has read the string 'aba', the TM will halt after reading Δ.
• Now, we will see how this Turing machine works for aba.
• Initially, the state is q0 and the read head points to the first a:

The move will be δ(q0, a) = (q1, A, R), which means it goes to state q1, replaces a with A, and the head moves to the right:
The move will be δ(q1, b) = (q2, B, R), which means it goes to state q2, replaces b with B, and the head moves to the right:

The move will be δ(q2, a) = (q3, A, R), which means it goes to state q3, replaces a with A, and the head moves to the right:

The move δ(q3, Δ) = (q4, Δ, S) means it goes to state q4, which is the HALT state, and the HALT state is always an accepting state for a TM.

The same TM can be represented by Transition Table:


States a b Δ

q0 (q1, A, R) – –

q1 – (q2, B, R) –

q2 (q3, A, R) – –

q3 – – (q4, Δ, S)

q4 – – –
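The transition table above maps directly onto the same dictionary encoding used in the earlier sketch (illustrative only; the "–" entries are simply omitted, the blank Δ is written as "_", and "S" means the head stays in place).

```python
# The transition table above as a Python dictionary (illustrative only).
ABA_TABLE = {
    ("q0", "a"): ("q1", "A", "R"),
    ("q1", "b"): ("q2", "B", "R"),
    ("q2", "a"): ("q3", "A", "R"),
    ("q3", "_"): ("q4", "_", "S"),   # "_" stands for the blank Δ
}

# With the run() sketch from the 0^n 1^n example:
#   run(ABA_TABLE, "aba")  -> True
#   run(ABA_TABLE, "abb")  -> False
```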

The same TM can be represented by Transition Diagram:

Example 2: Construct a TM for the language L = {0^n 1^n 2^n} where n ≥ 1


Solution:
• L = {0^n 1^n 2^n | n ≥ 1} represents a language over only three characters, i.e., 0, 1 and 2.
• In it, some number of 0's is followed by an equal number of 1's and then by an equal number of 2's.
• Any string which falls in this category is accepted by this language.
• The simulation for 001122 is shown below.
Initially, the state is q0 and the head points to the first 0:

The move will be δ(q0, 0) = (q1, A, R), which means it goes to state q1, replaces 0 with A, and the head moves to the right:

The move will be δ(q1, 0) = (q1, 0, R), which means it does not change the symbol, remains in the same state, and moves to the right:

The move will be δ(q1, 1) = (q2, B, R), which means it goes to state q2, replaces 1 with B, and the head moves to the right:

The move will be δ(q2, 1) = (q2, 1, R), which means it does not change the symbol, remains in the same state, and moves to the right:

The move will be δ(q2, 2) = (q3, C, R), which means it goes to state q3, replaces 2 with C, and the head moves to the right:

Now the moves δ(q3, 2) = (q3, 2, L), δ(q3, C) = (q3, C, L), δ(q3, 1) = (q3, 1, L), δ(q3, B) = (q3, B, L) and δ(q3, 0) = (q3, 0, L) walk the head back to the left, and then the move δ(q3, A) = (q0, A, R) takes it to state q0, leaves A unchanged, and moves the head to the right:
The move will be δ(q0, 0) = (q1, A, R), which means it goes to state q1, replaces 0 with A, and the head moves to the right:

The move will be δ(q1, B) = (q1, B, R), which means it does not change the symbol, remains in the same state, and moves to the right:

The move will be δ(q1, 1) = (q2, B, R), which means it goes to state q2, replaces 1 with B, and the head moves to the right:

The move will be δ(q2, C) = (q2, C, R), which means it does not change the symbol, remains in the same state, and moves to the right:

The move will be δ(q2, 2) = (q3, C, L), which means it goes to state q3, replaces 2 with C, and the head moves to the left until it reaches A:

The symbol immediately before the B is an A, which means all the 0's have been marked with A. So we move right to ensure that no 1 or 2 is left. The move will be δ(q2, B) = (q4, B, R), which means it goes to state q4, does not change the symbol, and moves to the right:

The moves δ(q4, B) = (q4, B, R) and δ(q4, C) = (q4, C, R) mean it does not change the symbol, remains in the same state, and moves to the right:

The move δ(q4, Δ) = (q5, Δ, R) means it goes to state q5, which is the HALT state, and the HALT state is always an accepting state for a TM.
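For reference, the rules used in this trace can be collected into one table. The version below is our own slightly regularized reading of the trace: δ(q2, 2) is taken as (q3, C, L) (the trace writes R on the first pass and L later), and the final check on B is entered from q0. It can be executed with the run() helper from the first example.

```python
# One consistent rule set for the 0^n 1^n 2^n machine (blank Δ written as "_").
# Slightly regularized from the trace above; see the note preceding this block.
DELTA_0N1N2N = {
    ("q0", "0"): ("q1", "A", "R"),
    ("q1", "0"): ("q1", "0", "R"), ("q1", "B"): ("q1", "B", "R"),
    ("q1", "1"): ("q2", "B", "R"),
    ("q2", "1"): ("q2", "1", "R"), ("q2", "C"): ("q2", "C", "R"),
    ("q2", "2"): ("q3", "C", "L"),
    ("q3", "0"): ("q3", "0", "L"), ("q3", "1"): ("q3", "1", "L"),
    ("q3", "2"): ("q3", "2", "L"), ("q3", "B"): ("q3", "B", "L"),
    ("q3", "C"): ("q3", "C", "L"), ("q3", "A"): ("q0", "A", "R"),
    ("q0", "B"): ("q4", "B", "R"),
    ("q4", "B"): ("q4", "B", "R"), ("q4", "C"): ("q4", "C", "R"),
    ("q4", "_"): ("q5", "_", "R"),
}

# With the run() sketch defined earlier:
#   run(DELTA_0N1N2N, "001122", accept="q5")  -> True
#   run(DELTA_0N1N2N, "001212", accept="q5")  -> False
```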
The same TM can be represented by Transition Diagram:

Transition Function, Instantaneous Descriptions, and Moves :

The transition function for Turing machines is given by

δ : Q × T → Q × T × {L, R}

This means: when the machine is in a given state (in Q) and reads a given symbol (in T) from the tape, it replaces the symbol on the tape with some other symbol (in T), goes to some other state (in Q), and moves the tape head one square left (L) or right (R).

An instantaneous description or configuration of a Turing machine requires


(1) the state the Turing machine is in,
(2) the contents of the tape, and
(3) the position of the tape head on the tape.
This can be summarized in a string of the form
xi...xj qm xk...xl

where the x's are the symbols on the tape, qm is the current state, and the tape head is on the square
containing xk (the symbol immediately following qm).

A move of a Turing machine can therefore be represented as a pair of instantaneous descriptions, separated by the symbol "⊢".

For example, if δ(q5, b) = (q8, c, R) then a possible move might be

abbabq5babb ⊢ abbabcq8abb
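The effect of one move on an ID can also be computed mechanically. The following is a small illustrative Python helper (our own sketch; it assumes the state name does not also occur as a tape substring and omits left-end handling):

```python
# Sketch: apply one move delta(state, scanned) = (new_state, write, direction)
# to an instantaneous description written as in the text, e.g. "abbabq5babb".
def step(id_string, state, rule):
    left, right = id_string.split(state)       # split the ID around the state marker
    scanned, rest = right[0], right[1:]        # symbol under the head, and the remainder
    new_state, write, direction = rule
    if direction == "R":
        return left + write + new_state + rest
    # direction == "L": the head moves onto the last symbol of `left`
    return left[:-1] + new_state + left[-1] + write + rest

# delta(q5, b) = (q8, c, R):   abbabq5babb |- abbabcq8abb
print(step("abbabq5babb", "q5", ("q8", "c", "R")))   # abbabcq8abb
```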
Types of Turing Machine :-
1. Multiple-track Turing Machine:
1. A k-track Turing machine (for some k > 0) has k tracks and one R/W head that reads and writes all of them one by one.
2. A k-track Turing machine can be simulated by a single-track Turing machine.
2. Two-way Infinite Tape Turing Machine:
1. The infinite tape of a two-way infinite tape Turing machine is unbounded in both directions, left and right.
2. A two-way infinite tape Turing machine can be simulated by a one-way infinite tape Turing machine (the standard Turing machine).
3. Multi-tape Turing Machine:
1. It has multiple tapes, controlled by a single finite control.
2. The multi-tape Turing machine is different from the k-track Turing machine, but its expressive power is the same.
3. A multi-tape Turing machine can be simulated by a single-tape Turing machine.
4. Multi-tape Multi-head Turing Machine:
1. The multi-tape multi-head Turing machine has multiple tapes and multiple heads.
2. Each tape is controlled by a separate head.
3. A multi-tape multi-head Turing machine can be simulated by a standard Turing machine.
5. Multi-dimensional Tape Turing Machine:
1. It has a multi-dimensional tape where the head can move in any direction, that is left, right, up or down.
2. A multi-dimensional tape Turing machine can be simulated by a one-dimensional Turing machine.
6. Multi-head Turing Machine:
1. A multi-head Turing machine contains two or more heads that read the symbols on the same tape.
2. In one step all the heads sense the scanned symbols and move or write independently.
3. A multi-head Turing machine can be simulated by a single-head Turing machine.
7. Non-deterministic Turing Machine:
1. A non-deterministic Turing machine has a single, one-way infinite tape.
2. For a given state and input symbol there is at least one choice for the next move (a finite number of choices), so the machine may have several paths it could follow for a given input string.
3. A non-deterministic Turing machine is equivalent in power to a deterministic Turing machine.

Church’s Thesis for Turing Machine :


• In 1936, a formal system called the lambda-calculus was created by Alonzo Church, in which the Church numerals (an encoding of the natural numbers) are defined.
• Also in 1936, Turing machines (earlier called a theoretical model for machines) were introduced by Alan Turing for manipulating the symbols of a string with the help of a tape.
Church Turing Thesis :
• A Turing machine is defined as an abstract representation of a computing device, such as the hardware in computers.
• Alan Turing proposed Logical Computing Machines (LCMs), i.e. Turing's expression for Turing Machines.
• This was done to define algorithms properly. So Church described a mechanical method, named 'M', for the manipulation of strings using logic and mathematics.
• This method M must satisfy the following statements:
a) The number of instructions in M must be finite.
b) The output should be produced after performing a finite number of steps.
c) It should not be imaginary, i.e. it should be realizable in real life.
d) It should not require any complex understanding.
• Using these statements, Church proposed a hypothesis, called the Church-Turing thesis, that can be stated as:
• “The assumption that the intuitive notion of computable functions can be identified with partial recursive functions.”
• This statement was first formulated by Alonzo Church in the 1930s and is usually referred to as Church’s thesis, or the Church-Turing thesis. However, this hypothesis cannot be proved.
Recursive functions are regarded as computable under the following assumptions:
1. Each and every elementary function must be computable.
2. Let ‘F’ be a computable function; if performing some elementary operations on ‘F’ transforms it into a new function ‘G’, then ‘G’ automatically becomes a computable function as well.
3. Any function that follows the above two assumptions is a computable function.

Example:- Let A = {⟨G⟩ : G is a connected undirected graph}. Describe a TM deciding A.

Proof:- M = “On input ⟨G⟩, the encoding of a graph G:
1. Select the first node of G and mark it.
2. Repeat until no new node is marked:
• For each node in G, mark it if there is an edge connecting it to a marked node.
3. Check whether all nodes of G are marked. If yes, accept; otherwise, reject.”
• The double quotes (“ and ”) indicate that the description is informal. A Python sketch of this marking procedure is given below.
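As a plain-program analogue of M (our own illustrative sketch, not a Turing machine), the same marking procedure looks like this in Python; encoding the graph as a node set and an edge set is an assumption:

```python
# Mark any node adjacent to an already-marked node until nothing changes,
# then check whether every node got marked (the three steps of M above).
def is_connected(nodes, edges):
    if not nodes:
        return True
    marked = {next(iter(nodes))}              # step 1: mark the first node
    changed = True
    while changed:                            # step 2: repeat until no new node is marked
        changed = False
        for u, v in edges:
            if (u in marked) != (v in marked):
                marked |= {u, v}
                changed = True
    return marked == set(nodes)               # step 3: are all nodes marked?

print(is_connected({1, 2, 3}, {(1, 2), (2, 3)}))   # True  (connected)
print(is_connected({1, 2, 3}, {(1, 2)}))           # False (node 3 is isolated)
```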

Universal Turing Machine:

a) A Turing Machine is the mathematical tool equivalent to a digital computer.


b) It was suggested by the mathematician Turing in the 1930s, and has since then been the most widely used model of computation in computability and complexity theory.
c) The model consists of an input-output relation that the machine computes.
d) The input is given in binary form on the machine's tape, and the output consists of the contents of the tape when the machine halts.
e) What determines how the contents of the tape change is a finite state machine (or FSM, also called
a finite automaton) inside the Turing Machine.
f) The FSM is determined by the number of states it has, and the transitions between them.
g) At every step, the current state and the character read on the tape determine the next state the FSM
will be in, the character that the machine will output on the tape (possibly the one read, leaving the
contents unchanged), and which direction the head moves in, left or right.
h) The problem with Turing Machines is that a different one must be constructed for every new computation to be performed, i.e. for every input-output relation.
i) This is why we introduce the notion of a Universal Turing Machine (UTM), which, along with the input on the tape, takes in the description of a machine M.
j) The UTM can then go on to simulate M on the rest of the contents of the input tape.
k) A Universal Turing Machine can thus simulate any other machine.

Restricted Turing Machines:

• Turing Machine accepts the recursively enumerable language. It is more powerful than any other
automata such as FA, PDA, and LBA. It computes the partial recursive function.
• It can be further divided into the Deterministic Turing Machine (DTM) and the Non-Deterministic Turing Machine (NTM). By default, a Turing Machine is a DTM, and the power of DTM and NTM is the same.
• This machine acts as a recognizer or acceptor and as an enumerator.
• The machine is said to be an acceptor when it accepts or recognizes the strings of a recursively enumerable language (L) over the input alphabet (∑), and it is said to be an enumerator when it enumerates the strings of a recursively enumerable language over the input alphabet ∑.

The restricted Turing machines can be of the following types :


1. Halting Turing Machine :
A Turing machine is said to be a halting Turing machine if it always halts for every input string.
It can accept the recursive languages and is less powerful than the Turing machine.
2. Linear Bounded Automata :
It behaves as a Turing machine but the storage space of tape is restricted only to the length of the input
string.
It is less powerful than a Turing machine but more powerful than push down automata.
3. Unidirectional Turing Machine :
The head of this type of Turing machine can move only in one direction.
It can accept only regular languages.
It has the same power as a finite automaton and is less powerful than a pushdown automaton.
4. Read-Only Turing Machine :
It is equivalent to a finite automaton. It contains only a read head, which does not have writing capability. It accepts only regular languages.
5. Read Only-Unidirectional Turing Machine :
It is similar to finite automata. It contains a read-only head and can move only in one direction.
It accepts a regular language.
Decidability and Undecidability in Problems:
Identifying languages (or problems*) as decidable, undecidable or partially decidable is a very common
question.
Decidable language -A decision problem P is said to be decidable (i.e., have an algorithm) if the language
L of all yes instances to P is decidable.
Example- (I) (Acceptance problem for DFA) Given a DFA does it accept a given word?
(II) (Emptiness problem for DFA) Given a DFA does it accept any word?
(III) (Equivalence problem for DFA) Given two DFAs, do they accept the same language?

Undecidable language-
• A decision problem P is said to be undecidable if the language L of all yes instances to P is not decidable; equivalently, a language is undecidable if it is not decidable.
• An undecidable language may be a partially decidable language or something else, but it is not decidable. If a language is not even partially decidable, then there exists no Turing machine for that language.

Partially decidable or Semi-Decidable Language-


• A decision problem P is said to be semi-decidable (i.e., have a semi-algorithm) if the language
L of all yes instances to P is RE.
• A language ‘L’ is partially decidable if ‘L’ is a RE but not REC language.

Recursive language(REC)-
• A language ‘L’ is said to be recursive if there exists a Turing machine which will accept all the
strings in ‘L’ and reject all the strings not in ‘L’.
• The Turing machine will halt every time and give an answer(accepted or rejected) for each and
every string input.
• A language ‘L’ is decidable if it is a recursive language. All decidable languages are recursive
languages and vice-versa.

Recursively enumerable language(RE)-


• A language ‘L’ is said to be a recursively enumerable language if there exists a Turing machine
which will accept (and therefore halt) for all the input strings which are in ‘L’ but may or may
not halt for all input strings which are not in ‘L’.
• By definition , all REC languages are also RE languages but not all RE languages are REC
languages.
Now let’s solve some examples –

One way to solve decidability problems is by trying to reduce an already known undecidable problem to
the given problem. By reducing a problem P1 to P2, we mean that we are trying to solve P1 by using the
algorithm used to solve P2.
If we can reduce an already known undecidable problem P1 to a given problem P2 , then we can surely
say that P2 is also undecidable. If P2 was decidable, then P1 would also be decidable but that becomes a
contradiction because P1 is known to be undecidable.

1. Given a Turing machine ‘M’, we need to find out whether a state ‘Q’ is ever reached when a
string ‘w’ is entered in ‘M’. This problem is also known as the ‘State Entry problem’.
Solution :-
1. Now let’s try to reduce the Halting problem to the State Entry problem.
2. A Turing machine only halts when a transition function δ(qi, a) is not defined. Change every undefined transition δ(qi, a) to δ(qi, a) = (Q, a, L or R). Note that the state Q can only be reached when the Turing machine halts.
3. Suppose we have an algorithm for solving the State Entry problem which will halt every time and
tell us whether state Q can be reached or not.
4. By telling us that we can or cannot reach state Q every time, it is telling us that the Turing machine
will or will not halt, every time.
5. But we know that is not possible because the halting problem is undecidable.
6. That means that our assumption that there exists an algorithm which solves the State Entry
problem and halts and gives us an answer every time, is false.
7. Hence, the state entry problem is undecidable.

2. Given two regular languages L1 and L2, is the problem of finding whether a string ‘w’ exists in both L1 and L2 decidable or not?
Solution :-
1. First we make two Turing machines TM1 and TM2 which simulate the DFAs of languages L1 and
L2 respectively.
2. We know that a DFA always halts, so a Turing machine simulating a DFA will also always halt.
3. We enter the string ‘w’ in TM1 and TM2. Both Turing machines will halt and give us an answer.
4. We can connect the outputs of the Turing machines to a modified ‘AND’ gate which outputs ‘yes’ only when both Turing machines answer ‘yes’; otherwise it outputs ‘no’ (a small sketch of this construction is given after this list).
5. Since this system of two Turing machines and a modified AND gate always stops, this problem is a decidable problem.
6. There are a lot of questions on this topic. There is no universal algorithm to solve them.
7. *The words ‘language’ and ‘problem’ can be used synonymously in Theory of computation. For
eg. The ‘Halting problem’ can also be written as ‘L = {<M, w> | Turing machine ‘M’ halts on
input ‘w’}’. Here ‘L’ is a language.
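A minimal sketch of the construction from the steps above (our own illustration): the two DFAs are encoded as plain Python dictionaries, both always terminate on any input, and the "modified AND gate" is just a Boolean and of the two answers. The particular DFAs below are toy examples, not from the notes.

```python
# Decide "is w in both L1 and L2?" by running both DFAs and AND-ing the answers.
def dfa_accepts(delta, start, finals, word):
    state = start
    for ch in word:                      # a DFA always halts after |word| steps
        state = delta[(state, ch)]
    return state in finals

# Toy DFAs over {a, b}: L1 = strings ending in 'a', L2 = strings with an even number of 'a's.
D1 = ({("s", "a"): "t", ("s", "b"): "s", ("t", "a"): "t", ("t", "b"): "s"}, "s", {"t"})
D2 = ({("e", "a"): "o", ("e", "b"): "e", ("o", "a"): "e", ("o", "b"): "o"}, "e", {"e"})

def in_both(word):
    return dfa_accepts(*D1, word) and dfa_accepts(*D2, word)   # the "AND gate"

print(in_both("aba"))   # True  (ends in 'a' and has an even number of 'a's)
print(in_both("ab"))    # False
```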
Halting Problem of TMs
Input − A Turing machine and an input string w.
Problem − Does the Turing machine finish computing of the string w in a finite number of steps? The
answer must be either yes or no.
Proof − At first, we will assume that such a Turing machine exists to solve this problem and then we will
show it is contradicting itself.
We will call this Turing machine a Halting Machine (HM) that produces a ‘yes’ or ‘no’ answer in a finite amount of time.
If the halting machine finishes in a finite amount of time, the output is ‘yes’; otherwise it is ‘no’.
The following is the block diagram of a Halting machine −

Now we will design an inverted halting machine (HM)’ as follows −

• If HM returns YES, then loop forever.
• If HM returns NO, then halt.
The following is the block diagram of an ‘Inverted halting machine’ −

Further, a machine (HM)2 which takes itself as input is constructed as follows −

• If (HM)2 halts on its input, loop forever.
• Else, halt.
Here we have a contradiction. Hence, the halting problem is undecidable.
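The argument can also be written as pseudocode. This is a hypothetical Python sketch: the function halts() below stands for the assumed Halting Machine and cannot actually be implemented; the names are our own.

```python
def halts(program, data):
    """Assumed decider: returns True iff program(data) halts.  (Cannot actually exist.)"""
    ...

def inverted(program):
    """The 'inverted halting machine': loop forever exactly when halts() says YES."""
    if halts(program, program):
        while True:       # halts() said YES, so loop forever
            pass
    return                # halts() said NO, so halt

# Feeding inverted to itself yields the contradiction:
# if inverted(inverted) halts, then halts() returns True and it loops forever;
# if it loops forever, then halts() returns False and it halts immediately.
# Either way we contradict ourselves, so no such halts() can exist.
```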
Post Correspondence Problem
• The Post Correspondence Problem is a popular undecidable problem that was introduced by Emil Leon Post in 1946. It is simpler than the Halting Problem.
• In this problem we have N dominoes (tiles).
• The aim is to arrange the tiles in such an order that the string made by the numerators is the same as the string made by the denominators.
In simple words, let’s assume we have two lists, both containing N words; the aim is to find a concatenation of these words in some sequence such that both lists yield the same result.
• Let’s try to understand this by taking two lists A and B:
• A = [aa, bb, abb] and B = [aab, ba, b]
• Now for the sequence 1, 2, 1, 3 the first list yields aabbaaabb and the second list yields the same string aabbaaabb.
So a solution to this PCP instance is 1, 2, 1, 3.
• Post Correspondence Problems can be represented in two ways:
1. Dominoes Form :

2. Table Form :
Let’s consider following examples.
Example-1:

Explanation –
• Step-1:
We start with a tile in which the numerator and the denominator begin with the same number, so we can start with either tile 1 or tile 2.
Let’s go with the second tile: the string made by the numerator is 10111, the string made by the denominator is 10.
• Step-2:
We need 1's in the denominator to match the 1's in the numerator, so we go with the first tile: the string made by the numerator is 10111 1, the string made by the denominator is 10 111.
• Step-3:
There is an extra 1 in the numerator; to match it we add the first tile to the sequence: the string made by the numerator is now 10111 1 1, the string made by the denominator is 10 111 111.
• Step-4:
Now there is an extra 1 in the denominator; to match it we add the third tile: the string made by the numerator is 10111 1 1 10, the string made by the denominator is 10 111 111 0.
• Final Solution: 2 1 1 3

• String made by numerators: 101111110

String made by denominators: 101111110
As we can see, the strings are the same, so the sequence 2, 1, 1, 3 is a solution and the instance is in MPCP.
Example-2:

Explanation –
• Step-1:
We start from tile 1 as it is our only option: the string made by the numerator is 100, the string made by the denominator is 1.
• Step-2:
We have an extra 00 in the numerator; the only way to balance this is to add tile 3 to the sequence: the string made by the numerator is 100 1, the string made by the denominator is 1 00.
• Step-3:
There is an extra 1 in the numerator; to balance it we can add either tile 1 or tile 2. Let’s try adding tile 1 first: the string made by the numerator is 100 1 100, the string made by the denominator is 1 00 1.
• Step-4:
There is an extra 100 in the numerator; to balance it we can add the 1st tile again: the string made by the numerator is 100 1 100 100, the string made by the denominator is 1 00 1 1 1. The 6th digit of the numerator string is 0, which differs from the 6th digit of the denominator string, which is 1. So it is not in MPCP.
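The search carried out by hand in the two examples above can be mimicked by a bounded brute-force program (our own sketch; since PCP is undecidable in general, only a search up to a fixed sequence length is possible):

```python
from itertools import product

def pcp_solution(tops, bottoms, max_len=6):
    """Try every index sequence up to max_len; return a 1-based solution or None."""
    n = len(tops)
    for length in range(1, max_len + 1):
        for seq in product(range(n), repeat=length):
            top = "".join(tops[i] for i in seq)
            bottom = "".join(bottoms[i] for i in seq)
            if top == bottom:
                return [i + 1 for i in seq]        # 1-based indices, as in the text
    return None

# The lists A and B from the introductory PCP example:
print(pcp_solution(["aa", "bb", "abb"], ["aab", "ba", "b"]))   # [1, 2, 1, 3]
```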

Undecidability of Post Correspondence Problem :


As we know, PCP is undecidable. That is, there is no algorithm that determines whether an arbitrary Post Correspondence System has a solution or not.
Proof –
We already know about the undecidability of the acceptance problem for Turing machines. If we can reduce this Turing machine problem to PCP, then we will have proved that PCP is undecidable as well.
Consider a Turing machine M and an input string w; we construct a PCP instance whose dominoes simulate the computation of M on w.

• If there is a match in the constructed instance, then the Turing machine M halts on w in an accepting state.
• Deciding this halting in an accepting state is exactly the acceptance problem ATM. We know that the acceptance problem ATM is undecidable.
• Therefore the PCP problem is also undecidable.
To force the simulation of M, we make two modifications to the Turing machine M and one change to our PCP problem.
a) M on input w never attempts to move the tape head beyond the left end of the input tape.
b) If the input is the empty string ε, we use the blank symbol _ instead.
c) The PCP problem must start its match with the first domino [u1/v1]. This is called the Modified PCP (MPCP) problem.
d) MPCP = {[D] | D is an instance of PCP whose match starts with the first domino}
Construction Steps –
a) Put [# / #q0w1w2...wn#] into D as the first domino, where D is the MPCP instance being built. A partial match is obtained in the first domino: the # on one face matches the # symbol on the other face.
b) The transition function of the Turing machine M can have moves Left (L) and Right (R). For every x, y, z in the tape alphabet and q, r in Q where q is not equal to qreject: if δ(q, x) = (r, y, R), put the domino [qx / yr] into D; and if δ(q, x) = (r, y, L), put the domino [zqx / rzy] into D.
c) For every tape symbol x, put [x / x] into D. To mark the separation between configurations, put [# / #] and [# / _#] into D.
d) To consume the remaining tape symbols x once the Turing machine is in the accepting state, put [xqa / qa], [qax / qa] and [qa# / #] into D. These steps conclude the construction of D.
Since this is an instance of MPCP, we still need to convert it to PCP.
Converting Modified PCP (MPCP) to PCP :-
Given two sequences of strings w1, w2, w3, ..., wn and x1, x2, x3, ..., xn over Σ, a solution is a sequence of indices i1, i2, ..., ik such that

wi1 wi2 wi3 ... wik = xi1 xi2 xi3 ... xik

1. Introduce a new symbol *.
2. In the first list (the wi), the new symbol * appears after every symbol, and in the second list (the xi), the new symbol * appears before every symbol.
3. Take the first pair w1, x1 from the given MPCP and add to the PCP instance another pair in which the *'s are placed as just described, but an extra * is added to w1 at the beginning.
4. Since the first list now has an extra * at the end, add another pair $ and *$ to the PCP instance. This is referred to as the final pair. (A short sketch of this conversion is given after these steps.)
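These four steps are mechanical, so they can be written as a short program (our own sketch; the function name is ours, and the default symbols * and $ follow the text):

```python
# Sketch of the MPCP -> PCP conversion: interleave '*' and add the final pair.
def mpcp_to_pcp(ws, xs, star="*", end="$"):
    top = ["".join(c + star for c in w) for w in ws]      # '*' after every symbol of wi
    bottom = ["".join(star + c for c in x) for x in xs]   # '*' before every symbol of xi
    top[0] = star + top[0]                                # extra '*' at the start of w1
    top.append(end)                                       # final pair ($, *$)
    bottom.append(star + end)
    return top, bottom

# The MPCP instance from the worked example that follows:
print(mpcp_to_pcp(["01", "10", "01", "1"], ["011", "0", "11", "0"]))
# (['*0*1*', '1*0*', '0*1*', '1*', '$'], ['*0*1*1', '*0', '*1*1', '*0', '*$'])
```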

Ex :- Convert the following MPCP into an equivalent PCP.

i    wi    xi
1    01    011
2    10    0
3    01    11
4    1     0
Answer :-
o The given problem is in MPCP because there is a sequence 1,4,3,2 for wi : 0110110
and a sequence 1,4,3,2 for xi : 0110110
o Adding * after every symbol in w and before every symbol in x.
i wi xi
1 0*1* *0*1*1
2 1*0* *0
3 0*1* *1*1
4 1* *0
o Add an extra * at the beginning of w1 and include a new row with $ and *$ in wi and xi.
i wi xi
1 *0*1* *0*1*1
2 1*0* *0
3 0*1* *1*1
4 1* *0
5 $ *$
o This is a PCP instance, as there exists a sequence 1, 4, 3, 2, 5 which generates *0*1*1*0*1*1*0*$ in both wi and xi.

o Writing the chosen pairs as [wi/xi], the concatenation w/x is:

[*0*1*/*0*1*1].[1*/*0].[0*1*/*1*1].[1*0*/*0].[$/*$]
o The top and bottom strings are identical (everything cancels), so it is in PCP.
P, NP, NP-Complete and NP-Hard Problems in Computer Science :-
In theoretical computer science, the classification of common problem definitions by complexity involves two major sets:
• P, which is “Polynomial” time, and NP, which is “Non-deterministic Polynomial” time.
• There are also the NP-Hard and NP-Complete sets, which we use to express more sophisticated problems.
• Rating from easy to hard, we might label these as “easy”, “medium”, “hard”, and finally “hardest”:
• Easy → P (GCD, primality testing, all sorting & searching problems)
• Medium → NP (Sudoku, prime factorization, scheduling, Travelling Salesman)
• Hard → NP-Complete (knapsack decision problems, chess)
• Hardest → NP-Hard (knapsack optimization problems)
• and we can visualize their relationship, too:

• Using the diagram, we assume that P and NP are not the same set, or, in other words, we assume that P ≠ NP. This is our apparently-true but yet-unproven assertion.
• Another interesting aspect of this diagram is that there is some overlap between NP and NP-Hard. A problem is called NP-Complete when it belongs to both of these sets.
• So we have mapped P, NP, NP-Hard and NP-Complete to “easy”, “medium”, “hard” and “hardest”.
• Here, we generally prefer not to use units like “seconds” or “milliseconds”.
• Instead, we prefer proportional expressions like n, n^2, log2(n), and n^n, using Big-O notation.
• Those mathematical expressions give us a clue about the algorithmic complexity of a problem.
Problem Definitions
some common Big-O values:

• O(1) – constant time
• O(log2(n)) – logarithmic time
• O(n) – linear time
• O(n^2) – quadratic time
• O(n^k) – polynomial time
• O(k^n) – exponential time
• O(n!) – factorial time

where k is a constant and n is the input size. The size of n also depends on the problem definition. For
example, using a number set with a size of n , the search problem has an average complexity between
linear-time and logarithmic-time depending on the data structure in use.
Polynomial Algorithms

1) The first set of problems are polynomial algorithms that we can solve in polynomial time, like
logarithmic, linear or quadratic time.
2) If an algorithm is polynomial, we can formally define its time complexity as:
T(n) = O(C·n^k) where C > 0 and k > 0, C and k are constants, and n is the input size.
3) In general, for polynomial-time algorithms, k is a constant that is expected to be small compared with n.
4) Many algorithms complete in polynomial time:
• All basic mathematical operations: addition, subtraction, division, multiplication
• Testing for primality
• Hashtable lookup, string operations, sorting problems
• Shortest path algorithms: Dijkstra, Bellman-Ford, Floyd-Warshall
• Linear and binary search algorithms for a given set of numbers

a) All of these have a complexity of O(n^k) for some k, and that fact places them all in P.
b) Each factor is polynomial in the input size, and multiplying polynomials still gives a polynomial.
c) For example, in graphs we use E for edges and V for vertices, which gives O(E·V) for the Bellman-Ford shortest path algorithm.
d) Even if the size of the edge set is E = V^2, the time complexity is still a polynomial, O(V^3), so we are still in P.
NP Algorithms

1. The second set of problems cannot be solved in polynomial time.


2. However, they can be verified (or certified) in polynomial time.
3. We expect these algorithms to have an exponential complexity, which we’ll define as:
T(n) = O(C1·k^(C2·n)) where C1 > 0, C2 > 0 and k > 0, C1, C2 and k are constants, and n is the input size.
4. T(n) is an exponential-time function when, for instance, C1 = 1 and C2 = 1; as a result we get O(k^n).
5. For example, we’ll see complexities like O(n^n), O(2^n), O(2^(0.000001·n)) in this set of problems.
6. There are several algorithms that fit this description. Among them are:
• Integer Factorization and
• Graph Isomorphism
a) Both of these have two important characteristics:
their complexity is O(k^n) for some k, and their results can be verified in polynomial time.
b) Those two facts place them in NP, that is, the set of “Non-deterministic Polynomial” problems.
c) Now, formally, we also state that these problems must be decision problems (have a yes or no answer), though note that, practically speaking, all function problems can be transformed into decision problems.
d) To speak precisely, then, a problem is in NP if the solution to any of its yes instances can be verified in polynomial time by a “Deterministic Turing Machine” (and, under our assumption that P ≠ NP, it cannot also be solved in polynomial time).
e) What makes Integer Factorization and Graph Isomorphism interesting is that while we believe they are in NP, there is no proof of whether they are in P or NP-Complete.
f) Normally, all NP-Complete problems are in NP, but they have another property that makes them more complex compared to other NP problems. (A small verification example follows.)
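To make “verifiable in polynomial time” concrete, here is a tiny sketch (our own, not from the notes) using the decision version of integer factorization: finding a nontrivial divisor of n may be hard, but checking a claimed divisor d is a single modulo operation, i.e. polynomial in the number of digits of n.

```python
def verify_factor(n, d):
    """Certificate check: is d a nontrivial divisor of n?  Runs in polynomial time."""
    return 1 < d < n and n % d == 0

n = 4_294_967_297                 # 2^32 + 1; finding its factors took Euler real effort
print(verify_factor(n, 641))      # True  -- easy to check once the certificate d is given
print(verify_factor(n, 7))        # False
```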
NP-Complete Algorithms
1) Taking a look at the diagram, all of these belong to NP, but they are among the hardest problems in the set.
2) At present, there are more than 3000 of these problems, and the theoretical computer science community keeps adding to the list.
3) What makes them different from other NP problems is a useful distinction called completeness.
4) For any NP problem that is complete, there exists a polynomial-time algorithm that can transform the problem into any other NP-Complete problem.
5) This transformation requirement is also called reduction.
6) There are numerous NP problems proven to be complete. Among them are:
a) Traveling Salesman
b) Knapsack, and
c) Graph Coloring
A. Curiously, what they have in common, aside from being in NP, is that each can be reduced to the others in polynomial time. These facts together place them in NP-Complete.
B. The major and primary work on NP-Completeness belongs to Karp.
C. His 21 NP-Complete problems are fundamental to this area of theoretical computer science.
D. These works are founded on the Cook-Levin theorem, which proves that the Satisfiability (SAT) problem is NP-Complete.

NP-Hard Algorithms
a) Our last set of problems contains the hardest, most complex problems in computer science.
b) They are not only hard to solve but are hard to verify as well.
c) In fact, some of these problems aren’t even decidable.
d) Among the hardest computer science problems are:
• K-means Clustering
• Traveling Salesman Problem, and
• Graph Coloring
• These problems have a property similar to the ones in NP-Complete: any problem in NP can be reduced to them in polynomial time.
• Because of that, these are in NP-Hard and are at least as hard as any other problem in NP.
• A problem can be both in NP and in NP-Hard, which is exactly what being NP-Complete means.
• Since NP and NP-Complete problems can be verified in polynomial time, showing that a problem cannot be verified in polynomial time shows that it is not in NP (and hence not NP-Complete); if it is nevertheless at least as hard as every NP problem, it belongs to NP-Hard only.
So, Does P=NP?

A question that has fascinated many computer scientists is whether or not every problem in NP also belongs to P, that is, whether P = NP.

a) It’s an interesting problem because it would mean, for one, that any NP or NP-Complete problem can be solved in polynomial time.
b) For our definitions, we assumed that P ≠ NP; however, P = NP may still be possible. If it were so, aside from NP and NP-Complete problems being solvable in polynomial time, certain problems in NP-Hard would also dramatically simplify.
c) For example, if their verification step is an NP or NP-Complete problem, then it follows that they too must be solvable in polynomial time, moving them into P = NP = NP-Complete as well.
d) We can conclude that P = NP would mean a radical change in computer science and even in real-world scenarios. Currently, some security mechanisms rely on certain computations requiring an impractically long time. Many encryption schemes and algorithms in cryptography are based on integer factorization, for which the best-known algorithms have super-polynomial complexity.
e) If we find a polynomial-time algorithm for it, these schemes become vulnerable to attacks.
f) All NP-Complete problems are NP-Hard, but not all NP-Hard problems are NP-Complete.
g) Briefly, we can conclude with the following generalized classification:
1) P: problems that are quick to solve.
2) NP: problems that are quick to verify but slow to solve.
3) NP-Complete: problems that are quick to verify, slow to solve, and to which any other NP problem can be reduced in polynomial time.
4) NP-Hard: problems that are slow to verify, slow to solve, and to which any NP problem can be reduced in polynomial time.
