
Analysis and Design of Algorithms Unit 13

Unit 13 Limitations of Algorithm Power


Structure:
13.1 Introduction
Objectives
13.2 Lower-Bound Arguments
Trivial lower bounds
Information-theoretic arguments
Adversary arguments
Problem reduction
13.3 Decision Trees
Decision trees for sorting algorithms
Decision trees for searching a sorted array
13.4 P, NP and NP-Complete Problems
Non-deterministic algorithm
NP-hard and NP-complete classes
Cook's theorem
13.5 Summary
13.6 Glossary
13.7 Terminal Questions
13.8 Answers

13.1 Introduction
In the previous units we studied many algorithms and learnt how they play a significant role in solving a range of problems. However, the power of algorithms is limited to some extent.
The reasons for these limitations are:
• Some problems that can be solved by algorithms cannot be solved within polynomial time.
• Even for problems that can be solved in polynomial time, there is a lower bound on the efficiency that any algorithm can achieve.
This unit covers the limitations of algorithm power with respect to lower–
bound arguments of algorithms. It explains decision trees with examples. It
also analyzes P, NP and NP–complete problems.

Objectives:
After studying this unit you should be able to:
• explain the lower-bound arguments of algorithms
• describe and implement decision trees
• define P, NP and NP-complete problems

13.2 Lower-Bound Arguments

In this section we will discuss lower-bound arguments for algorithms. A lower bound is the minimum amount of work required to solve a problem. When we establish the lower bound of a problem, we are looking for a limit on the efficiency of any algorithm, known or unknown, that can solve it.
The following two steps help us judge how good an algorithm is:
1) First we establish the asymptotic efficiency class of the algorithm.
2) Then we check where the given problem fits in the hierarchy of efficiency classes (i.e., whether the problem lies in the linear, quadratic, logarithmic or exponential category).
The efficiency of different algorithms is given in table 13.1.

Table 13.1: Efficiency of Different Algorithms

    Algorithm          Efficiency
    Insertion sort     n²
    Quick sort         n log n
    Heap sort          n log n
    Linear search      n/2
    Binary search      log₂ n

When we determine the efficiency of an algorithm, it is better to compare it with other algorithms that solve the same kind of problem. For example, if we want to judge the efficiency of insertion sort, we have to compare it with other sorting methods. We cannot judge the efficiency of insertion sort by comparing it with the efficiency of an algorithm for the Tower of Hanoi problem, because these are two different types of problems.
When we compare an algorithm with other algorithms that solve the same problem, it is also useful to know the best possible efficiency of any algorithm that can solve that problem.

Knowing this helps us to improve the algorithm. If there is a gap between the best known lower bound and the efficiency of the fastest known algorithm, then there is room for improvement: either a faster algorithm matching the lower bound exists, or a better (higher) lower bound for the problem can be proved.
Following are the different methods for obtaining the lower bound of an algorithm:
• Trivial lower bounds
• Information-theoretic arguments
• Adversary arguments
• Problem reduction
13.2.1 Trivial lower bounds
This is the simplest method for obtaining the lower-bound class of an algorithm. A trivial lower bound is obtained by counting the input data that the algorithm must read and the output that it must produce.
For example, the trivial lower bound for generating all permutations of n numbers is Ω(n!), because the output size here is n!. This bound is tight because good algorithms for generating permutations spend constant time on each permutation except the initial one.
Similarly, the trivial lower bound for computing the product of two n × n matrices is Ω(n²), because the algorithm has to read 2n² input elements and produce n² output elements. It is still not known whether this bound is tight.
Limitations of this method
Trivial lower bounds are often too low to be useful. For instance, consider the traveling salesman problem. Its trivial lower bound is Ω(n²), because the problem has n(n-1)/2 distances as its input and produces a tour of n + 1 cities as its output. This trivial lower bound is of little use, because the best known algorithms for the problem take far more than quadratic time.
There is one more difficulty in obtaining a lower bound by this method: it may not be clear how much of the input actually has to be processed. For instance, searching for an element in a sorted array does not require processing all the input elements.

Let us consider another example: the problem of determining the connectivity of an undirected graph given by its adjacency matrix. It is plausible that any such algorithm would have to check the existence of each of the n(n-1)/2 potential edges, but the proof of this fact is not trivial.
13.2.2 Information-theoretic arguments
While the trivial lower bound method takes into account only the input and output sizes of the problem, this method establishes the lower bound of an algorithm based on the amount of information the algorithm has to produce.
For example, consider the game of guessing a positive integer between 1 and n by asking questions that can only be answered yes or no. Any algorithm that solves this problem needs ⌈log₂ n⌉ bits of information, which is the number of bits required to specify one integer among the n possibilities. The answer to each question yields at most one bit of information about the output, so any algorithm for this problem requires at least ⌈log₂ n⌉ questions in the worst case before it can produce the output.
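As an illustration (not part of the original text), the following Python sketch plays this guessing game by halving the range of candidates with every yes/no question; the function name and the question interface are assumptions made for the example.

import math

def guess_number(n, answer_yes):
    # Find an unknown integer in 1..n using only yes/no questions.
    # answer_yes(m) truthfully answers "is the unknown number greater than m?".
    low, high = 1, n
    questions = 0
    while low < high:
        mid = (low + high) // 2
        questions += 1
        if answer_yes(mid):          # the number lies in mid+1 .. high
            low = mid + 1
        else:                        # the number lies in low .. mid
            high = mid
    return low, questions

secret, n = 77, 100
print(guess_number(n, lambda m: secret > m), math.ceil(math.log2(n)))   # (77, 7) 7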
This method is called the information-theoretic argument because of its connection with information theory. A convenient mechanism for applying it is the decision tree, which is discussed in section 13.3. The method works well for finding lower bounds of comparison-based algorithms for searching and sorting.
Let us next discuss the adversary-argument method of finding a lower bound of an algorithm.
13.2.3 Adversary arguments
An adversary is a malicious opponent played against a comparison-based algorithm. The adversary answers the algorithm's comparison queries, and its goal is to force the algorithm to make as many comparisons as possible while keeping all of its answers consistent with one another.
Let us consider the example of finding both the largest and the smallest of n array elements by comparisons. Consistency means that if the adversary has answered that a[1] > a[2], it can never later claim that a[2] > a[1]. Each key is given a status code as below:
• N – Not yet used in any comparison
• T – True (won a comparison) at least once but never false
• F – False (lost a comparison) at least once but never true
• TF – True and false at least once

Sikkim Manipal University B1480 Page No. 278


Analysis and Design of Algorithms Unit 13

The table 13.2 gives all the possible outcomes.

Table 13.2: Possible Outcomes

    Status             Outcome       New status    Value (bits of information)
    N,N                x > y         T,F           2
    T,N                x > y         T,F           1
    TF,N               x > y         TF,F          1
    F,N                x < y         F,T           1
    T,T                x > y         T,TF          1
    F,F                x > y         TF,F          1
    T,F; TF,F; T,TF    x > y         N/C           0
    F,T; F,TF; TF,T    x < y         N/C           0
    TF,TF              consistent    N/C           0

This problem requires 2n - 2 bits of information to solve: every key except one must lose a comparison at least once (become F) and every key except one must win a comparison at least once (become T). The adversary can answer so that comparing the keys in N,N pairs yields n/2 comparisons and n bits of information, after which n - 2 additional bits are still needed, each requiring a further comparison. Hence the adversary forces a total of at least 3n/2 - 2 comparisons, which is the lower bound. For the matching upper bound we group the winners (T) and losers (F) of the pairwise comparisons separately and find the maximum of one group and the minimum of the other; this also takes a total of 3n/2 - 2 comparisons.
The above example demonstrates the adversary method of obtaining a lower bound: we count the number of comparisons that the adversary can force any algorithm to make.
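As a hedged illustration of the matching upper bound (assuming, as in the standard presentation of this argument, that the problem is to find both the maximum and the minimum of n keys), the Python sketch below processes the keys in pairs and counts its comparisons; for even n it makes exactly 3n/2 - 2 of them. The function name is illustrative.

def max_min(keys):
    # Find both the maximum and the minimum, counting comparisons.
    # Keys are processed in pairs: one comparison inside the pair, then one
    # against the current maximum and one against the current minimum.
    # Assumes n is even and n >= 2, so the count is exactly 3n/2 - 2.
    n = len(keys)
    comparisons = 1                    # comparison inside the first pair
    if keys[0] > keys[1]:
        current_max, current_min = keys[0], keys[1]
    else:
        current_max, current_min = keys[1], keys[0]
    for i in range(2, n - 1, 2):
        a, b = keys[i], keys[i + 1]
        comparisons += 1
        big, small = (a, b) if a > b else (b, a)
        comparisons += 1
        if big > current_max:
            current_max = big
        comparisons += 1
        if small < current_min:
            current_min = small
    return current_max, current_min, comparisons

print(max_min([7, 2, 9, 4, 1, 8]))   # (9, 1, 7): exactly 3*6/2 - 2 = 7 comparisons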
Let us next analyze the method of problem reduction which is used to find
the lower-bound of an algorithm.
13.2.4 Problem reduction
In the usual reduction strategy, a problem A with no known algorithm is reduced to a problem B that can be solved by a known algorithm. We can use the same reduction idea to find a lower bound. If we have a problem A that is at least as hard as a problem B whose lower bound is known, we reduce B to A, so that any algorithm solving A would also solve B. Then the lower bound for B is also a lower bound for A.

Let us consider the example of the Euclidean minimum spanning tree problem: given n points in the Cartesian plane, construct a tree of minimum total length whose vertices are the given points. We can transform a set x₁, x₂, ..., xₙ of n real numbers (an instance of the element uniqueness problem, whose lower bound of Ω(n log n) is known) into a set of n points by simply adding 0 as the y coordinate, i.e. (x₁, 0), (x₂, 0), ..., (xₙ, 0). Let T be a minimum spanning tree for this set of points. Since T must contain a shortest edge, checking whether T contains a zero-length edge answers the question about the uniqueness of the given numbers. This reduction implies that Ω(n log n) is also a lower bound for the Euclidean minimum spanning tree problem. The reduction method is used frequently because it lets us compare the relative complexity of problems even when their exact complexities are not known.
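The following Python sketch illustrates the direction of the reduction: an algorithm for the Euclidean minimum spanning tree yields an algorithm for element uniqueness. It assumes a simple O(n²) Prim's algorithm as the MST routine; the helper names are illustrative and not from the original text.

from math import dist, inf

def euclidean_mst_edge_lengths(points):
    # Prim's algorithm on the complete Euclidean graph; returns the MST edge lengths.
    n = len(points)
    in_tree = [False] * n
    best = [inf] * n            # cheapest edge connecting each point to the growing tree
    best[0] = 0.0
    lengths = []
    for step in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if step > 0:            # the first vertex has no incoming tree edge
            lengths.append(best[u])
        for v in range(n):
            if not in_tree[v]:
                best[v] = min(best[v], dist(points[u], points[v]))
    return lengths

def all_distinct(numbers):
    # Element uniqueness by reduction to Euclidean MST: the numbers are distinct
    # if and only if the MST of the points (x_i, 0) contains no zero-length edge.
    points = [(x, 0.0) for x in numbers]
    return all(length > 0.0 for length in euclidean_mst_edge_lengths(points))

print(all_distinct([3.1, 7.0, 2.5]))   # True
print(all_distinct([3.1, 7.0, 3.1]))   # False: the MST contains a zero-length edge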
Self Assessment Questions
1. _____________________ means calculating the minimum amount of work required to solve the problem.
2. Trivial lower bound is obtained by counting the input data that the algorithm reads and the output it produces.
3. The _______________________ method establishes the lower bound of an algorithm based on the amount of information the algorithm has to produce.

13.3 Decision Trees

In the previous section we studied lower bounds and different methods of obtaining them. In this section we will study decision trees and how they are used.
A decision tree represents the working of an algorithm as a tree with branches, in which each internal node represents a decision (a test). First the test at the root node is performed, and then control passes to one of its subtrees depending on the result of the test. This continues until a leaf, which holds the outcome of interest, is reached.
A real-life example of a decision tree is given in figure 13.2. Let us assume that income tax is being calculated for salaried citizens under the age of 60.

Figure 13.2: Example for Decision Tree

First we check whether the citizen is male or female. If the citizen is male, we check whether the income is less than 2,00,000. If yes, the citizen is not liable for tax. If not, we check whether the income is less than 5,00,000; if yes, the tax is 10%, else we check whether the income is less than 10,00,000. If that is true the tax is 20%, else it is 30%. If the citizen is female, we first check whether the income is less than 2,50,000 and then proceed with the same process followed for the male citizen.
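The same decision tree can be rendered as nested conditional tests. The Python sketch below is only an illustration of the figure as described above; the function name, the flat rates returned and the treatment of the thresholds are assumptions made for the example.

def tax_rate(gender, income):
    # Income tax decision tree of figure 13.2 written as nested tests:
    # each 'if' is an internal node, each returned rate a leaf.
    exemption_limit = 200_000 if gender == "male" else 250_000
    if income < exemption_limit:
        return 0.00      # not liable for tax
    elif income < 500_000:
        return 0.10
    elif income < 1_000_000:
        return 0.20
    else:
        return 0.30

print(tax_rate("male", 450_000))     # 0.1
print(tax_rate("female", 240_000))   # 0.0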
Some algorithms like sorting and searching need to compare their input
elements. To study such algorithms, we use decision trees.
Let us now discuss the implementation of decision trees for sorting
algorithms.

13.3.1 Decision trees for sorting algorithms
Most sorting algorithms are comparison based: they work by comparing the elements of the list to be sorted. Studying the decision tree of a comparison-based sorting algorithm gives a lower bound on the time efficiency of all such algorithms.
We can view the output of a sorting algorithm as the permutation of the indices of the input list that puts the elements in ascending order. For example, sorting the list p, q, r may yield the outcome p < q < r. Therefore the number of possible outcomes for sorting n elements is n!, and the decision tree of any comparison-based sorting algorithm must have at least n! leaves. The height of such a decision tree satisfies h ≥ ⌈log₂ l⌉, where h is the height of the tree and l is the number of leaves. Hence the worst-case number of comparisons made by any such algorithm is not less than ⌈log₂ n!⌉. Figure 13.1 shows the decision tree for a three-element selection sort.

Figure 13.1: Decision Tree for Three Element Selection Sort

Cworst(n) ≥ ⌈log₂ n!⌉

Using Stirling's formula for n! (i.e. n! ≈ √(2πn) (n/e)ⁿ) we get

log₂ n! ≈ log₂ (√(2πn) (n/e)ⁿ)
        = n log₂ n - n log₂ e + (log₂ n)/2 + (log₂ 2π)/2
        ≈ n log₂ n                                          Eq: 13.1

Equation 13.1 gives a lower bound on the number of comparisons needed to sort n elements. Merge sort makes about n log₂ n comparisons in its worst case, so this lower bound is tight and cannot be improved asymptotically. The exact bound ⌈log₂ n!⌉ can, however, be improved for some specific values of n; for example, ⌈log₂ 12!⌉ = 29, but it has been proved that 30 comparisons are necessary to sort 12 elements.
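A quick way to see how close merge sort comes to the bound is to compute both quantities. The sketch below (illustrative function names) evaluates ⌈log₂ n!⌉ and the standard worst-case comparison count of merge sort, C(n) = C(⌈n/2⌉) + C(⌊n/2⌋) + n - 1.

import math
from functools import lru_cache

def sort_lower_bound(n):
    # Information-theoretic lower bound on comparisons: ceil(log2(n!)).
    return math.ceil(math.log2(math.factorial(n)))

@lru_cache(maxsize=None)
def mergesort_worst(n):
    # Worst-case comparisons of merge sort: C(n) = C(ceil(n/2)) + C(floor(n/2)) + n - 1.
    if n <= 1:
        return 0
    return mergesort_worst((n + 1) // 2) + mergesort_worst(n // 2) + n - 1

for n in (4, 10, 12, 100):
    print(n, sort_lower_bound(n), mergesort_worst(n))   # bound vs merge sort's count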
We can also use decision trees for analyzing the average-case behaviour of comparison-based sorting algorithms. The average number of comparisons made by an algorithm equals the average depth of the leaves of its decision tree. For example, let us consider insertion sort for three elements. Figure 13.2 depicts the decision tree for a three-element insertion sort.

Figure 13.2: Decision Tree for Three Element Insertion Sort

The lower bound on the average number of comparisons Cavg for any comparison-based sorting algorithm is given by
Cavg(n) ≥ log₂ n!
By equation 13.1 this is again about n log₂ n, so the lower bounds for the average case and the worst case are almost the same. Remember, however, that these are lower bounds obtained by counting comparisons in the decision-tree model; for many sorting algorithms the average-case efficiency is still noticeably better than the worst-case efficiency.
13.3.2 Decision trees for searching a sorted array
Let us now see how decision trees can be used to obtain a lower bound for searching a sorted array of n elements, A[0] < A[1] < ... < A[n-1]. The principal algorithm for this problem is binary search. The number of comparisons binary search makes in the worst case is given by equation 13.2.
Cworst(n) = ⌊log₂ n⌋ + 1 = ⌈log₂ (n + 1)⌉     Eq: 13.2
Now let us use a decision tree to establish whether this is the smallest possible number of comparisons.
Here we consider three-way comparisons, in which the search key 'key' is compared with an element x to check whether key < x, key = x or key > x. Figure 13.3 shows the decision tree for the case n = 5, with array elements 1, 2, 3, 6 and 9; the comparisons start with the middle element, 3. The internal nodes of the tree represent the array elements that are compared with the search key, and the leaves indicate whether the search is successful or unsuccessful. For an array of n elements such a decision tree has 2n + 1 leaves: n for successful searches and n + 1 for unsuccessful ones. Since the smallest height h of a ternary tree with l leaves is ⌈log₃ l⌉, equation 13.3 gives a lower bound on the number of worst-case comparisons for this problem.
Cworst(n) ≥ ⌈log₃ (2n + 1)⌉     Eq: 13.3
This lower bound is smaller than ⌈log₂ (n + 1)⌉, the number of worst-case comparisons made by binary search, at least for large values of n.
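The following short Python computation (illustrative function names) evaluates both quantities for a few values of n and shows that the ternary-tree bound ⌈log₃(2n + 1)⌉ falls below binary search's ⌈log₂(n + 1)⌉ as n grows.

import math

def ternary_tree_bound(n):
    # Lower bound from the ternary decision tree: ceil(log3(2n + 1)).
    return math.ceil(math.log(2 * n + 1, 3))

def binary_search_worst(n):
    # Worst-case comparisons of binary search: ceil(log2(n + 1)).
    return math.ceil(math.log2(n + 1))

for n in (5, 100, 10_000, 1_000_000):
    print(n, ternary_tree_bound(n), binary_search_worst(n))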

Figure 13.3: Decision Tree for Binary Search in a Five Element Array

Activity 1
Draw a decision tree, to sort three numbers using selection sort
algorithm.

Self Assessment Questions

4. Comparison is the basic operation of a _____________ algorithm.
5. For sorting algorithms the average case efficiencies are better than
their worst case efficiencies.
6. We use a ________________ decision tree to represent an algorithm
for searching a sorted array with three way comparisons.

13.4 P, NP and NP-Complete Problems

In the previous section we discussed decision trees and their applications. In this section let us discuss P, NP and NP-complete problems.
In the study of computational complexity, the first concern is whether a given problem can be solved in polynomial time. Problems are classified into two different groups:
1) The first group consists of those problems that can be solved within polynomial time. Such problems are called tractable. Examples include searching for an element, sorting an array and so on.

2) The second group consists of those problems that can be solved in non-deterministic polynomial time but for which no polynomial-time algorithm is known. For example, the knapsack problem and the traveling salesperson problem fall into this group.
Definition of P – P stands for polynomial time: the class of decision problems that can be solved in polynomial time by a deterministic algorithm.
Definition of NP – NP stands for non-deterministic polynomial time: the class of decision problems that can be solved in polynomial time by a non-deterministic algorithm or, equivalently, whose proposed solutions can be verified in polynomial time.
Related to the class NP are the NP-complete and NP-hard problems, discussed in section 13.4.2.
The following reasons justify the restriction of P to decision problems:
1) First, it is reasonable to exclude problems that cannot be solved in polynomial time simply because their output is exponentially large. For example, generating all subsets of a given set, or all permutations of n distinct items, cannot be done in polynomial time because of the sheer size of the output (see the sketch after this list).
2) Secondly, many problems that are not decision problems can be reduced to a series of decision problems that are easier to study. For example, consider graph colouring. Instead of asking for the minimum number of colours needed to colour the vertices of a graph so that no two adjacent vertices get the same colour, we can ask whether there is a colouring of the graph's vertices with no more than m colours.
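The Python sketch below (an illustrative example, not from the original text) generates every subset of a set of n items; since there are 2ⁿ subsets, simply writing them all down already takes exponential time.

from itertools import combinations

def all_subsets(items):
    # Every subset of the given items: there are 2**n of them, so no algorithm
    # can even list them all in time polynomial in n.
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            yield set(combo)

items = ["a", "b", "c"]
print(sum(1 for _ in all_subsets(items)), 2 ** len(items))   # 8 8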
Not all decision problems can be solved in polynomial time, and some decision problems cannot be solved by any algorithm at all; such problems are called undecidable. There are also many decision problems for which no polynomial-time algorithm is known, even though they are decidable. Some examples of such problems are listed below:
Hamiltonian circuit – A Hamiltonian circuit (or Hamiltonian cycle) is a circuit in a graph G that starts and ends at the same vertex and includes every other vertex of G exactly once. A graph that contains a Hamiltonian cycle is called a Hamiltonian graph.
Traveling salesman – Given a set of cities and the distances between them, this problem asks for the shortest tour that starts from a given city, passes through all the other cities and returns to the starting city.
Knapsack problem – Given a set of items, each with a weight and a value, this problem asks for the most valuable subset of items whose total weight does not exceed the capacity of the knapsack.
Partition problem – This determines whether it is possible to partition a given set of integers into two subsets that have the same sum.
Bin packing – Bin packing asks for the minimum number of fixed-size bins needed to pack a given set of objects.
Graph coloring – This finds the chromatic number of a given graph.
Integer linear programming – This finds the maximum or minimum value of a linear function of several integer-valued variables subject to a finite set of constraints.
Another common characteristic of these decision problems is that although solving them can be computationally difficult, checking whether a proposed solution actually solves the problem is easy. For example, consider the Hamiltonian circuit problem: it is easy to check whether a proposed list of vertices is a Hamiltonian circuit of a graph with n vertices. We just have to check that the list contains n + 1 vertices of the graph, that the first and last vertices are the same, that every other vertex appears exactly once, and that every pair of consecutive vertices in the list is connected by an edge.
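A minimal Python sketch of such a verification is given below; it assumes the graph is given as a dictionary mapping each vertex to the set of its neighbours, and the function name is illustrative. The check clearly runs in time polynomial in the size of the graph.

def is_hamiltonian_circuit(graph, circuit):
    # Polynomial-time verification of a proposed Hamiltonian circuit.
    # graph maps each vertex to the set of its neighbours;
    # circuit is a proposed list of n + 1 vertices (first and last the same).
    n = len(graph)
    if len(circuit) != n + 1 or circuit[0] != circuit[-1]:
        return False
    if set(circuit[:-1]) != set(graph):
        return False                      # must visit every vertex exactly once
    return all(v in graph[u] for u, v in zip(circuit, circuit[1:]))

graph = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c", "a"}}
print(is_hamiltonian_circuit(graph, ["a", "b", "c", "d", "a"]))   # True
print(is_hamiltonian_circuit(graph, ["a", "c", "b", "d", "a"]))   # False: a-c is not an edge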
Let us first discuss non-deterministic algorithms.
13.4.1 Non-deterministic algorithms
An algorithm in which the result of every operation is uniquely defined is called a deterministic algorithm.
An algorithm in which an operation may not have a unique result, but instead a specified set of possibilities from which one is chosen, is called a non-deterministic algorithm.
A non-deterministic algorithm works in two stages:
Non-deterministic stage – This is the guessing stage: an arbitrary string is generated, which can be thought of as a candidate solution to the given instance.
Deterministic stage – This is the verification stage: it takes both the candidate solution and the instance as input and returns 'yes' if the candidate solution represents an actual solution for the instance.

Let us consider the following non-deterministic search algorithm.

Algorithm nondetermine(H, n, s)
// H is an array of n elements
// we have to determine an index p of H at which the search element s is located
{
    // guessing stage: non-deterministically choose an index p between 1 and n
    p := choose(1, n)
    // next is the verification - the deterministic stage
    if (H[p] = s) then
    {
        write(p)
        success()
    }
    else
    {
        write(0)
        fail()
    }
}
Let us now trace the nondetermine algorithm.
Algorithm tracing for algorithm nondetermine
Let us consider n = 4, s = 4, H[ ] = {1, 4, 6, 7}
// H is an array of 4 elements
// we have to determine an index p of H at which the search element s is located
p := choose(1, 4)     // guessing stage: suppose the choice made is p = 2
if (H[2] = s) then    // verification stage: H[2] = 4 = s, so the condition holds
{
    write(2)          // the algorithm outputs the index 2
    success()
}
// had the guess been some other index, say p = 1, the else branch would write 0
// and call fail(); the non-deterministic algorithm answers 'yes' because at least
// one sequence of choices leads to success()

We see that the above algorithm nondetermine uses the following three functions:
1) choose – non-deterministically chooses one value from the given range (here, an index between 1 and n)
2) success – signals successful completion
3) fail – signals unsuccessful completion
The algorithm has non-deterministic complexity O(1). In contrast, if the array is not ordered, any deterministic search algorithm has complexity Ω(n).
We say that a non-deterministic algorithm solves a decision problem if and only if, for every 'yes' instance of the problem, it returns 'yes' on at least one of its executions. If the verification stage of a non-deterministic algorithm runs in polynomial time, it is said to be a non-deterministic polynomial-time algorithm, and the class of decision problems solvable by such algorithms is NP.
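The guess-and-verify structure can be simulated deterministically by trying every possible guess. The Python sketch below (0-based indices, illustrative function names) does this for the search problem above; note that for problems with exponentially many candidate solutions such a simulation takes exponential time even though each verification is polynomial.

def verify(instance, candidate):
    # Deterministic verification stage: does H[p] equal the search key s?
    H, s = instance
    p = candidate
    return 0 <= p < len(H) and H[p] == s

def simulate_nondeterministic(instance, candidates):
    # Deterministic simulation of guess-and-verify: try every possible guess
    # and answer 'yes' if some guess verifies.
    return any(verify(instance, c) for c in candidates)

H, s = [1, 4, 6, 7], 4
print(simulate_nondeterministic((H, s), range(len(H))))   # True: the guess p = 1 verifies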
Let us next discuss NP-hard and NP-complete classes.
13.4.2 NP-hard and NP-complete classes
We know that NP stands for non-deterministic polynomial: the problems in NP are those that can be solved by non-deterministic polynomial-time algorithms. Closely related to NP are two further classes of problems:
1) NP-complete
2) NP-hard
NP-complete problems
NP-complete problems belong to class NP, i.e. they are a subset of class NP. A problem Q is said to be NP-complete if
1) Q belongs to the class NP, and
2) every other problem in class NP can be reduced to Q in polynomial time.
This implies that NP-complete problems are the hardest problems in NP: if we are able to solve any one NP-complete problem in polynomial time, then we can solve every problem in class NP in polynomial time.
NP-hard problems
NP-hard problems are at least as hard as NP-complete problems, and may be even harder. Every problem in class NP can be reduced to an NP-hard problem in polynomial time, but an NP-hard problem need not itself belong to NP. If any NP-hard problem can be solved in polynomial time, then all NP-complete problems (indeed, all problems in NP) can also be solved in polynomial time. All NP-complete problems are NP-hard, but not all NP-hard problems are NP-complete.
13.4.3 Cook's theorem
Stephen Cook in 1971 stated that
"Any NP problem can be converted into SAT (the satisfiability problem) in polynomial time."
Satisfiability problem (SAT) – This is a decision problem whose instance is a Boolean formula built from variables using only AND, OR and NOT. Given a finite set of clauses, it asks whether there is an assignment of truth values to the variables appearing in the clauses that makes all the clauses true. Checking whether a proposed assignment makes all the clauses true is easy; the hard part is deciding whether any satisfying assignment exists.
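A minimal Python sketch of such a check is shown below; the encoding of literals as strings such as "x" and "-x" is an assumption made for the example.

def satisfies(clauses, assignment):
    # Check whether a truth assignment makes every clause of a CNF formula true.
    # Each clause is a list of literals: "x" means variable x, "-x" its negation.
    # The check runs in time polynomial in the size of the formula.
    def literal_value(lit):
        return not assignment[lit[1:]] if lit.startswith("-") else assignment[lit]
    return all(any(literal_value(lit) for lit in clause) for clause in clauses)

clauses = [["x", "y"], ["-x", "z"], ["-y", "-z"]]   # (x OR y) AND (NOT x OR z) AND (NOT y OR NOT z)
print(satisfies(clauses, {"x": True, "y": False, "z": True}))    # True
print(satisfies(clauses, {"x": True, "y": True, "z": True}))     # False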
To prove this, we need a uniform way of representing NP problems. What makes a problem belong to NP is the existence of a polynomial-time algorithm, more specifically a Turing machine, for checking candidate certificates. Cook used a method similar to the one with which Turing showed that the Entscheidungsproblem is equivalent to the Halting Problem. He showed how to encode, as clauses of propositional calculus, both the relevant facts about the problem instance and the working of the Turing machine that checks the certificate, in such a way that the resulting set of clauses is satisfiable if and only if the original problem instance is a positive one. Thus the problem of deciding the latter is reduced to the problem of deciding the former.
Proof of Cook’s theorem
Assume, then, that we are given an NP decision problem D. From the
definition of NP, there is a polynomial function P and a Turing machine M
which, when given any instance I for D, together with a candidate certificate
c, will check in time no greater than P(n), where n is the length of I, whether
or not c is a certificate of I.
Let us say that M has q states numbered 0, 1, 2, ..., q - 1, and a tape alphabet a1, a2, ..., as. We shall assume that the operation of the Turing machine is governed by the functions T, U and D. We also assume that the tape is inscribed with the problem instance on squares 1, 2, 3, ..., n and the putative certificate on squares -m, ..., -2, -1.
Square zero can be taken to contain a designated separator symbol. We also assume that the machine halts scanning square 0, and that the symbol in this square at that stage will be a1 if and only if the candidate certificate is a genuine certificate for I. Note that we must have m ≤ P(n): with a problem instance of length n the computation is completed in at most P(n) steps, and during this process the Turing machine head cannot move more than P(n) squares to the left of its starting point.
We define some propositions, with their intended interpretations, as follows:
1) For i = 0, 1, ..., P(n) and j = 0, 1, ..., q - 1, the proposition Qij says that after i computation steps, M is in state j.
2) For i = 0, 1, ..., P(n), j = -P(n), ..., P(n), and k = 1, 2, ..., s, the proposition Sijk says that after i computation steps, square j of the tape contains the symbol ak.
3) For i = 0, 1, ..., P(n) and j = -P(n), ..., P(n), the proposition Tij says that after i computation steps, the machine M is scanning square j of the tape.
Now, we define some clauses to describe the computation executed by M:
1) At each computation step, M is in at least one state. For each i = 0, ..., P(n) we have the clause
   Qi0 ∨ Qi1 ∨ ... ∨ Qi(q-1),
   which gives (P(n) + 1)q = O(P(n)) literals altogether.
2) At each computation step, M is in at most one state. For each i = 0, ..., P(n) and for each pair j, k of distinct states, we have the clause
   ¬(Qij ∧ Qik),
   which gives a total of q(q - 1)(P(n) + 1) = O(P(n)) literals.
3) At each step, each tape square contains at least one alphabet symbol. For each i = 0, ..., P(n) and -P(n) ≤ j ≤ P(n) we have the clause
   Sij1 ∨ Sij2 ∨ ... ∨ Sijs,
   which gives (P(n) + 1)(2P(n) + 1)s = O(P(n)²) literals.
4) At each step, each tape square contains at most one alphabet symbol. For each i = 0, ..., P(n) and -P(n) ≤ j ≤ P(n), and each distinct pair ak, al of symbols, we have the clause
   ¬(Sijk ∧ Sijl),
   which gives a total of (P(n) + 1)(2P(n) + 1)s(s - 1) = O(P(n)²) literals altogether.

5) At each step, the machine is scanning at least one square. For each i = 0, ..., P(n), we have the clause
   Ti(-P(n)) ∨ Ti(1-P(n)) ∨ ... ∨ Ti(P(n)-1) ∨ TiP(n),
   which gives (P(n) + 1)(2P(n) + 1) = O(P(n)²) literals.
6) At each step, the machine is scanning at most one square. For each i = 0, ..., P(n), and each distinct pair j, k of tape squares from -P(n) to P(n), we have the clause
   ¬(Tij ∧ Tik),
   which gives a total of 2P(n)(2P(n) + 1)(P(n) + 1) = O(P(n)³) literals.
7) Initially, the machine is in state 1 scanning square 1. This is expressed by the two clauses
   Q01, T01,
   which give just two literals.
8) The configuration at each step after the first is determined from the configuration at the previous step by the functions T, U and D defining the machine M. For each i = 0, ..., P(n), -P(n) ≤ j ≤ P(n), k = 0, ..., q - 1, and l = 1, ..., s, we have the clauses
   Tij ∧ Qik ∧ Sijl → Q(i+1)T(k,l)
   Tij ∧ Qik ∧ Sijl → S(i+1)jU(k,l)
   Tij ∧ Qik ∧ Sijl → T(i+1)(j+D(k,l))
   Sijk → Tij ∨ S(i+1)jk
   The fourth of these clauses ensures that the contents of any tape square other than the currently scanned square remain the same (to see this, note that the clause is equivalent to the formula Sijk ∧ ¬Tij → S(i+1)jk). These clauses contribute a total of (12s + 3)(P(n) + 1)(2P(n) + 1)q = O(P(n)²) literals.
9) Initially, the string ai1, ai2, ..., ain defining the problem instance I is inscribed on squares 1, 2, ..., n of the tape. This is expressed by the n clauses
   S01i1, S02i2, ..., S0nin,
   a total of n literals.

10) By the P(n)th step, the machine has arrived at the halt state and is scanning square 0, which contains the symbol a1. This is expressed by the three clauses
   QP(n)0, SP(n)01, TP(n)0,
   giving another three literals.
Altogether, the number of literals involved in these clauses is O(P(n)³). Note that q and s are constants: they depend only on the machine and do not vary with the problem instance, so they do not contribute to the growth of the number of literals with increasing problem size, which is what the O notation captures. The procedure for setting up these clauses, given the original machine M and the instance I of problem D, can be carried out in polynomial time.
We can now see that the construction succeeds in converting D into SAT. Suppose I is a positive instance of problem D. Then there is a certificate c such that M, when run with inputs c and I, halts scanning symbol a1 on square 0. This means there is some sequence of symbols that can be placed initially on squares -P(n), ..., -1 of the tape so that all the above clauses are satisfied; the clauses form a positive instance of SAT.
Conversely, if I is a negative instance of problem D then there is no certificate for I, which means that whatever symbols are placed on squares -P(n), ..., -1 of the tape, the machine will not be scanning a1 on square 0 when the computation halts. Hence the set of clauses cannot be satisfied, and it forms a negative instance of SAT.
We can therefore conclude the following: from an instance I of problem D we can construct, in polynomial time, a set of clauses that is a positive instance of SAT if and only if I is a positive instance of D. In other words, problem D is converted into SAT in polynomial time. Since D was an arbitrary NP problem, it follows that any NP problem can be converted to SAT in polynomial time.

Activity 2
Find some examples of NP–hard and NP–complete problems from the
Internet and analyze how they are solved.

Self Assessment Questions


7. Problems that are solved within polynomial time are called
_______________.
8. ____________ problem finds the chromatic number of the given graph.
9. An algorithm in which the result of every operation is uniquely defined is called a ______________ algorithm.

13.5 Summary
Let us summarize what we have discussed in this unit.
The limitations of algorithm power were examined through lower-bound arguments, decision trees, and P, NP and NP-complete problems.
Lower-bound arguments include different methods of obtaining lower bounds, such as trivial lower bounds, information-theoretic arguments, adversary arguments and problem reduction.
Decision trees are used to analyze sorting and searching algorithms, which have to compare their input elements.
We also analyzed P, NP and NP-complete problems and discussed the proof of Cook's theorem.

13.6 Glossary
Term – Description
Node – A basic unit used to build linked data structures such as trees, linked lists and computer-based representations of graphs.
Polynomial time – An algorithm runs in polynomial time if its running time is upper-bounded by a polynomial in the size of its input.
Turing machine – A theoretical machine that manipulates symbols contained on a strip of tape.
Chromatic number – The minimum number of colors needed for the vertices of a given graph such that no two adjacent vertices have the same color.

13.7 Terminal Questions


1. What are the different types of lower bounds?
2. Explain trivial lower bound with example.
3. Explain the sorting problem with the help of a decision tree
4. What are non-deterministic algorithms?
5. Explain Cook’s theorem.

13.8 Answers
Self Assessment Questions
1. Lower – bound
2. True
3. Information – theoretic
4. Sorting
5. True
6. Ternary
7. Tractable
8. Graph coloring
9. Deterministic

Terminal Questions
1. Refer section 13.2 – Lower bound arguments
2. Refer section 13.2.1 – Trivial lower bound arguments
3. Refer section 13.3.1 – Decision tree for sorting algorithm
4. Refer section 13.4.1 – Non-deterministic algorithms
5. Refer section 13.4.3 – Cook's theorem

