
Complexity Theory – Chapter 01

Computational Problem

Computational complexity theory is a branch of the theory of computation in
theoretical computer science that focuses on classifying computational problems
according to their inherent difficulty, and relating those classes to each other. A
computational problem is understood to be a task that is in principle amenable to
being solved by a computer, which is equivalent to stating that the problem may be
solved by mechanical application of mathematical steps, such as an algorithm.
A problem is regarded as inherently difficult if its solution requires significant
resources, whatever the algorithm used. The theory formalizes this intuition, by
introducing mathematical models of computation to study these problems and
quantifying the amount of resources needed to solve them, such as time and
storage. Other complexity measures are also used, such as the amount of
communication (used in communication complexity), the number of gates in a
circuit (used in circuit complexity) and the number of processors (used in parallel
computing). One of the roles of computational complexity theory is to determine
the practical limits on what computers can and cannot do.
Closely related fields in theoretical computer science are analysis of algorithms and
computability theory. A key distinction between analysis of algorithms and
computational complexity theory is that the former is devoted to analyzing the
amount of resources needed by a particular algorithm to solve a problem, whereas
the latter asks a more general question about all possible algorithms that could be
used to solve the same problem. More precisely, computational complexity theory
tries to classify problems that can or cannot be solved with appropriately restricted
resources. In turn, imposing restrictions on the available resources is what

distinguishes computational complexity from computability theory: the latter
theory asks what kind of problems can, in principle, be solved algorithmically.

Problem instances

A computational problem can be viewed as an infinite collection of instances
together with a solution for every instance. The input string for a computational
problem is referred to as a problem instance, and should not be confused with the
problem itself. In computational complexity theory, a problem refers to the
abstract question to be solved. In contrast, an instance of this problem is a rather
concrete utterance, which can serve as the input for a decision problem. For
example, consider the problem of primality testing. The instance is a number (e.g.
15) and the solution is "yes" if the number is prime and "no" otherwise (in this case
"no"). Stated another way, the instance is a particular input to the problem, and the
solution is the output corresponding to the given input.
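The primality example can be made concrete with a short program. The sketch below is our own illustration (the function name is a choice, and the trial-division method is for clarity, not efficiency); it decides one instance of the problem:

```python
def is_prime(n):
    """Decide the primality problem for a single instance n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:      # a divisor, if any exists, appears by sqrt(n)
        if n % d == 0:
            return False
        d += 1
    return True

# The instance 15 yields the answer "no", since 15 = 3 * 5.
print("yes" if is_prime(15) else "no")
```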
To further highlight the difference between a problem and an instance, consider the
following instance of the decision version of the traveling salesman problem: Is
there a route of at most 2000 kilometres passing through all of Germany's 15
largest cities? The quantitative answer to this particular problem instance is of little
use for solving other instances of the problem, such as asking for a round trip
through all sites in Milan whose total length is at most 10 km. For this reason,
complexity theory addresses computational problems and not particular problem
instances.

Decision problems as formal languages

A decision problem has only two possible outputs, yes or no (or alternately 1 or 0)
on any input. Decision problems are one of the central objects of study in
computational complexity theory. A decision problem is a special type of
computational problem whose answer is either yes or no, or alternately either 1 or
0. A decision problem can be viewed as a formal language, where the members of
the language are instances whose output is yes, and the non-members are those
instances whose output is no. The objective is to decide, with the aid of an
algorithm, whether a given input string is a member of the formal language under
consideration. If the algorithm deciding this problem returns the answer yes, the
algorithm is said to accept the input string, otherwise it is said to reject the input.
An example of a decision problem is the following. The input is an arbitrary
graph. The problem consists in deciding whether the given graph is connected, or
not. The formal language associated with this decision problem is then the set of all
connected graphs—of course, to obtain a precise definition of this language, one
has to decide how graphs are encoded as binary strings.
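A decider for the connectivity language can be sketched directly. The program below is a minimal illustration (the function name and the edge-list encoding of graphs are our own choices; a precise definition would fix a binary encoding): it accepts exactly the connected graphs.

```python
from collections import deque

def is_connected(n, edges):
    """Decide membership in the language of connected graphs:
    is the undirected graph on vertices 0..n-1 connected?"""
    if n == 0:
        return True
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    # Breadth-first search from vertex 0; the graph is connected
    # iff the search reaches every vertex.
    seen = {0}
    queue = deque([0])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == n

print(is_connected(3, [(0, 1), (1, 2)]))  # accept: True
print(is_connected(3, [(0, 1)]))          # reject: False
```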
Function problems
A function problem is a computational problem where a single output (of a total
function) is expected for every input, but the output is more complex than that of a

decision problem, that is, it isn't just yes or no. Notable examples include the
traveling salesman problem and the integer factorization problem.
It is tempting to think that the notion of function problems is much richer than the
notion of decision problems. However, this is not really the case, since function
problems can be recast as decision problems. For example, the multiplication of
two integers can be expressed as the set of triples (a, b, c) such that the relation a ×
b = c holds. Deciding whether a given triple is a member of this set corresponds to
solving the problem of multiplying two numbers.
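This recasting can be sketched in a couple of lines (the function name is ours): deciding membership in the set of triples is just as informative as computing the product, since for fixed a and b there is a unique c with (a, b, c) in the set.

```python
def in_mult_language(a, b, c):
    """Decide membership in the set { (a, b, c) : a * b = c }."""
    return a * b == c

print(in_mult_language(3, 5, 15))  # True: (3, 5, 15) is in the set
print(in_mult_language(3, 5, 16))  # False
```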

Complexity Theory in Business


Complexity theory is used in business as a way to encourage innovative thinking
and real-time responses to change by allowing business units to self-organize.
Sherman and Schultz (as related by Hout) argue that modern business moves in a
nonlinear fashion, with no continuity in the flow of competitive events, except
when observed from hindsight. In order to effectively put complexity theory to
work, however, organization leaders need to give up rigid control of these systems
from above. Far more can be learned by stepping back from the day-to-day running
of the organization and watching for emergent properties and organizational
patterns. Those conditions or patterns that bring about the best solutions should be
preserved whenever possible. Managers also need to allow organizations to evolve
in response to ongoing messages from customers. As Hout states:
No intelligence from on high can match the quality of solutions to market problems
that arise from players who are constantly communicating with one another on the
ground level. The invisible hand of the marketplace should displace the visible
hand of the manager. The markets can determine where one team or initiative or
company ends and another begins. Managers interfere at their peril.

Turing Machine

A Turing machine is the original idealized model of a computer, invented by Alan
Turing in 1936. Turing machines are equivalent to modern electronic computers at
a certain theoretical level, but differ in many details.

A Turing machine consists of a line of cells known as the "tape", together with a
single active cell, known as the "head". The cells on the tape can have a certain set
of possible colors, and the head can be in a certain set of possible states.

Any particular Turing machine is defined by a rule which specifies what the head
should do at each step. The rule looks at the state of the head, and the color of the
cell that the head is on. Then it specifies what the new state of the head should be,
what color the head should "write" onto the tape, and whether the head should
move left or right.

A simple example is a Turing machine with two possible states of its head and
three possible colors on its tape, with the states of the head represented by the
orientation of arrows. Suppose such a Turing machine starts from a
"blank" tape in which every cell is white. In the analogy with a computer, the "tape" of the
Turing machine is the computer memory, idealized to extend infinitely in each direction.

The initial arrangement of colors of cells on the tape corresponds to the input given
to the computer. This input can contain both a "program" and "data". The steps of
the Turing machine correspond to the running of the computer.

The rules for the Turing machine are analogous to machine-code instructions for
the computer. Given particular input, each part of the rule specifies what
"operation" the machine should perform.

The remarkable fact is that certain Turing machines are "universal", in the sense
that with appropriate input, they can be made to perform any ordinary computation.

Not every Turing machine has this property; many can only behave in very simple
ways. In effect, they can only do specific computations; they cannot act as
"general-purpose computers".
A natural question is how simple the rules for a Turing machine can be,
while still allowing the Turing machine to be "universal".

A universal Turing machine has the property that it can emulate any other Turing
machine, or indeed any computer or software system. Given rules for the thing to
be emulated, there is a way to create initial conditions for the universal Turing
machine that will make it do the emulation.

Turing machines are widely used in theoretical computer science for proving
abstract theorems. Studying specific Turing machines has been rare.

Other machine models


Many machine models different from the standard multi-tape Turing machines
have been proposed in the literature, for example random access machines. Perhaps
surprisingly, each of these models can be converted to another without providing
any extra computational power. The time and memory consumption of these
alternate models may vary.[1] What all these models have in common is that the
machines operate deterministically.
However, some computational problems are easier to analyze in terms of more
unusual resources. For example, a non-deterministic Turing machine is a
computational model that is allowed to branch out to check many different
possibilities at once. The non-deterministic Turing machine has very little to do
with how we physically want to compute algorithms, but its branching exactly
captures many of the mathematical models we want to analyze, so that
nondeterministic time is a very important resource in analyzing computational
problems.

Complexity measures
For a precise definition of what it means to solve a problem using a given amount
of time and space, a computational model such as the deterministic Turing machine
is used. The time required by a deterministic Turing machine M on input x is the
total number of state transitions, or steps, the machine makes before it halts and
outputs the answer ("yes" or "no"). A Turing machine M is said to operate within
time f(n), if the time required by M on each input of length n is at most f(n). A
decision problem A can be solved in time f(n) if there exists a Turing machine
operating in time f(n) that solves the problem. Since complexity theory is interested
in classifying problems based on their difficulty, one defines sets of problems
based on some criteria. For instance, the set of problems solvable within time f(n)
on a deterministic Turing machine is then denoted by DTIME(f(n)).
Analogous definitions can be made for space requirements. Although time and
space are the most well-known complexity resources, any complexity measure can
be viewed as a computational resource. Complexity measures are very generally
defined by the Blum complexity axioms. Other complexity measures used in
complexity theory include communication complexity, circuit complexity, and
decision tree complexity. The complexity of an algorithm is often expressed using
big O notation.

The notion of communication complexity was introduced by Yao in 1979, who
investigated the following problem involving two separated parties (Alice and
Bob). Alice receives an n-bit string x and Bob another n-bit string y, and the goal is
for one of them (say Bob) to compute a certain function f(x,y) with the least
amount of communication between them. Note that here we are not concerned
about the number of computational steps, or the size of the computer memory used.

Communication complexity tries to quantify the amount of communication
required for such distributed computations.
Of course they can always succeed by having Alice send her whole n-bit string to
Bob, who then computes the function, but the idea here is to find clever ways of
calculating f with fewer than n bits of communication.
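For some functions far fewer than n bits suffice. A standard illustration (our own addition, not from the text) is the parity of all the bits of x and y together: Alice needs to send only a single bit, the parity of her own string, no matter how large n is.

```python
def parity_protocol(x, y):
    """One-bit protocol for f(x, y) = parity of all bits of x and y.
    Alice sends one bit (the parity of x); Bob completes the computation."""
    alice_message = sum(x) % 2            # the single bit Alice transmits
    return (alice_message + sum(y)) % 2   # Bob's output

print(parity_protocol([1, 0, 1], [1, 1, 0]))  # 0: four 1-bits in total
```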
This abstract problem is relevant in many contexts: in VLSI circuit design, for
example, one wants to minimize energy used by decreasing the amount of electric
signals required between the different components during a distributed
computation. The problem is also relevant in the study of data structures, and in the
optimization of computer networks.

Alan Turing’s Analysis of Computation

Alan Turing’s analysis attempting to formalize the class of all effective procedures
was carried out in 1936, resulting in the notion of a Turing machine. Its importance
is that it was the first really general analysis to understand how it is that
computation takes place, and that it led to a convincing and widely accepted
abstraction of the concept of effective procedure.

It is worth noting that Turing’s analysis was done before any computers more
powerful than desk calculators had been invented. His insights led, more or less
directly, to John von Neumann’s invention in the 1940’s of the stored program
digital computer, a machine with essentially the same underlying architecture as
today’s computers.

By a “computer”, Turing means a human who is solving a computational problem
in a mechanical way, not a machine. Turing argues:

Any other changes can be split up into simple changes of this kind. The situation in
regard to the squares whose symbols may be altered in this way is the same as in
regard to the observed squares. We may, therefore, without loss of generality,
assume that the squares whose symbols are changed are always “observed”
squares. Besides these changes of symbols, the simple operations must include
changes of distribution of observed squares. The new observed squares must be
immediately recognizable by the computer.

In particular, squares marked by special symbols might be taken as immediately
recognizable. Now if these squares are marked only by single symbols there can be
only a finite number of them, and we should not upset our theory by adjoining
these marked squares to the observed squares. If, on the other hand, they are
marked by a sequence of symbols, we cannot regard the process of recognition as a
simple process. This is a fundamental point and should be illustrated.

- The term computation refers to the process of producing output from a set of
inputs in a finite number of steps.

 An alphabet is a finite, non-empty set of symbols.

We use the symbol Σ (sigma) to denote an alphabet.
Examples: Binary: Σ = {0, 1}
All lower case letters: Σ = {a, b, c, …, z}
Alphanumeric: Σ = {a-z, A-Z, 0-9}
DNA molecule letters: Σ = {a, c, g, t}

 A string or word is a finite sequence of symbols chosen from Σ.

The empty string is ε (or “epsilon”).
The length of a string w, denoted by |w|, is the number of (non-ε)
characters in the string.
E.g., x = 010100, |x| = 6
x = 01ε0ε1ε00ε, |x| = 6 (the ε’s add nothing to the length)
xy denotes the concatenation of the two strings x and y.
 L is said to be a language over alphabet Σ only if L ⊆ Σ*;
this is because Σ* is the set of all strings (of all possible lengths,
including 0) over the given alphabet Σ. Examples:
1. Let L be the language of all strings consisting of n 0’s followed by n 1’s:
L = {ε, 01, 0011, 000111, …}
2. Let L be the language of all strings with an equal number of 0’s and 1’s:
L = {ε, 01, 10, 0011, 1100, 0101, 1010, 1001, …}

 Given a string w ∈ Σ* and a language L over Σ, decide whether or not w ∈ L.

Let w = 100011.
Q) Is w ∈ the language of strings with an equal number of 0s and 1s?
(Yes: w contains three 0s and three 1s.)
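This membership question can be decided in one line; the sketch below (the function name is our own) checks the string w = 100011:

```python
def in_equal_01_language(w):
    """Decide membership in L = { w in {0,1}* : w has equally many 0s and 1s }."""
    return w.count("0") == w.count("1")

print(in_equal_01_language("100011"))  # True: three 0s and three 1s
```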
 Computation is not just a practical tool but also a major scientific concept,
e.g., in DNA, physical theories, biology, etc.
 In the 1930s–50s, researchers focused on computability and showed that
many natural tasks are uncomputable.
 Complexity theory is concerned with how much computational resource is
required to solve a given task.
o Many computational tasks involve searching for a solution.
o Searching tasks arise in a host of disciplines, including the life sciences,
social sciences and operations research.
o Conjecturally, randomness does not help speed up computation: any
probabilistic algorithm can be replaced with a deterministic algorithm.
o Theory of error-correcting codes.

Existing Problems

- Decision Problem
- Search Problem
- Optimization Problem

- Counting Problem

There are many questions about computation whose answers bear on practical and
social life, such as:

- Is there any use for computationally hard problems?
- Can we construct secret codes that are unbreakable in a lifetime? (Digital
cryptography, as used for e-commerce and security, depends on this; it is
intimately related to P vs NP.)
- Can we use quantum mechanical properties to solve hard problems?
- Can we generate mathematical proofs automatically?

Meaning of Efficiency as Complexity

Let’s start with the multiplication of two integers. There are two obvious methods:
the first is repeated addition, and the second is the grade school algorithm.

In the repeated addition method, we add the value a to itself b − 1 times.

Example: multiply 577 by 423.

This needs 422 additions using the first method, but using the grade school
algorithm it needs only 3 (multi-digit) multiplications and 3 additions.

This suggests the right measure: the size of the input is the number of digits in
the numbers.

The number of basic operations used to multiply two n-digit numbers is at most
2n^2 for the grade school algorithm, and at least 10^(n−1) for repeated addition.
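The gap can be seen by counting operations directly. The sketch below (names are our own) performs the repeated-addition method on the example above and counts the additions it uses:

```python
def repeated_addition(a, b):
    """Multiply a by b using repeated addition; also count the additions."""
    result, additions = a, 0
    for _ in range(b - 1):   # add a to itself b - 1 times
        result += a
        additions += 1
    return result, additions

product, additions = repeated_addition(577, 423)
print(product, additions)  # 244071 after 422 additions
```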

Model of Computation

• Input/output format defined. What else?

• A model of computation should:
 Define what we mean by “computation” and “actions”
 Be simple and easy to use
 but yet powerful
• Models: recursive functions, rewriting systems, Turing machines.
 All equally powerful.

Turing Machine Structure


A Turing Machine is like a Pushdown Automaton. Both have a finite-state machine
as a central component, and both have additional storage. A Pushdown Automaton
uses a “stack” for storage, whereas a Turing Machine uses a “tape”, which is
infinite in both directions. The tape consists of a series of “squares”, each of
which can hold a single symbol. The “tape-head”, or “read-write head”, can read a
symbol from the tape, write a symbol to the tape, and move one square in either
direction. There are two kinds of Turing Machine:

(a) Deterministic Turing Machine

(b) Non-deterministic Turing Machine

We will discuss deterministic machines only. A Turing Machine does not
read “input”, unlike the other automata. Instead, there are usually symbols on the
tape before the Turing Machine begins; the Turing Machine might read some, all,
or none of these symbols. The initial tape may, if desired, be thought of as “input”.
Turing machines are useful in several ways. As an automaton, the Turing machine
is the most general model. It accepts type-0 languages. It can also be used for
computing functions. It turns out to be a mathematical model of partial recursive
functions. Turing machines are also used for determining the decidability of certain
languages and measuring the space and time complexity of problems. These are the
topics of discussion in this chapter and some of the subsequent chapters.

In formalizing computability, Turing assumed that, while computing, a person
writes symbols on a one-dimensional paper (instead of a two-dimensional paper as
is usually done), which can be viewed as a tape divided into cells.
One scans the cells one at a time and usually performs one of three simple
operations, namely (i) writing a new symbol in the cell being currently scanned,
(ii) moving to the cell left of the present cell and (iii) moving to the cell right of
the present cell. With these observations in mind, Turing proposed his 'computing
machine.'

“Acceptors” produce only a binary (accept/reject) output. “Transducers” can
produce more complicated results. So far all our previous discussions were only
with acceptors. A Turing Machine also accepts or rejects its input. The results left
on the tape when the Turing Machine finishes can be regarded as the “output” of
the computation. Therefore a Turing Machine is a “Transducer”.

Transducer and Acceptors

• Definition so far: receive input, compute output

 Call this a transducer: interpret a TM M as a function f : Σ* → Σ*
 All such f are called computable functions
 Partial functions may be undefined for some inputs w,
in case M does not halt on them (M(w) is undefined)
 Total functions are defined for all inputs
• For decision problems L: only want a positive or negative answer
 Call this an acceptor:
 Interpret M as halting in
either state qyes for positive instances w ∈ L
or state qno for negative instances w ∉ L
 Output does not matter, only the final state
 M accepts the language L(M):
L(M) := { w ∈ Σ* | there exist y, z ∈ Γ* : (ε, q0, w) ⊢* (y, qyes, z) }

Turing Machine Model


The Turing machine can be thought of as a finite control connected to a R/W
(read/write) head. It has one tape, which is divided into a number of cells. The
block diagram of the basic model for the Turing machine is given in Fig. 1.

Fig. 1 Turing machine model: a tape of cells holding a1 a2 a3 … b b …, scanned
by the R/W head, which is connected to the finite control.
Each cell can store only one symbol. The input to and the output from the finite
state automaton are effected by the R/W head which can examine one cell at a
time. In one move, the machine examines the present symbol under the R/W head
on the tape and the present state of the automaton to determine
(i) a new symbol to be written on the tape in the cell under the R/W head,
(ii) a motion of the R/W head along the tape: either the head moves one cell left
(L), or one cell right (R),
(iii) the next state of the automaton, and (iv) whether to halt or not.
The above model can be rigorously defined as follows:

Definition of Turing Machines
A Turing machine M is a 7-tuple
(Q, Σ, Γ, δ, q0, #, F)
where
Q is a finite set of states;
Σ is a finite set of symbols, the “input alphabet”;
Γ is a finite set of symbols, the “tape alphabet”, including the blank # ∈ Γ;
δ is the partial transition function, mapping (q, x) onto (q′, y, D), where D denotes
the direction of movement of the R/W head: D = L or R according as the movement
is to the left or right (N for no movement),
δ : (Q − F) × Γ → Q × Γ × {L, N, R};
q0 is the initial state, q0 ∈ Q;
F is the set of final states, F ⊆ Q.
 Turing Machines are like simplified computers, containing:
 A tape to read/write on
Contains squares with one symbol each
Is used for input, output and temporary storage
Unbounded
 A read/write head
Can change the symbol on the tape at the current position
Moves step by step in either direction
 A finite state machine
Including an initial state and final states
• Looks simple, but is very powerful
• Standard model for the rest of the course

Operation:

• Start in state q0, input w is on the tape, head over its first symbol

• Each step:
Read current state q and symbol a at the current position
Look up δ(q, a) = (p, b, D)
Change to state p, write b, move according to D
• Stop as soon as q ∈ F. Left on the tape: the output
- Configuration (w, q, v) denotes the status after each step:
 Tape contains wv (with infinitely many blanks around it)
 Head is over the first symbol of v
 Machine is in state q
- Start configuration: (ε, q0, w) if input is w
- End configuration: (v, q, z) for a q ∈ F
 Output is z, denoted by M(w)
 In case the machine doesn’t halt: M(w) is undefined
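The step loop above can be turned into a small simulator. The sketch below is our own illustration, not part of the text: the dictionary-based sparse tape and the example machine (which walks right, flipping 0s and 1s, and halts at the first blank) are assumptions made for the demonstration.

```python
def run_tm(delta, q0, final_states, w, blank="#"):
    """Simulate the step loop: read (state, symbol), look up delta(q, a) = (p, b, D),
    write b, move L/N/R, and repeat until the state is final. Returns the output."""
    tape = dict(enumerate(w))            # sparse tape: blank everywhere else
    pos, q = 0, q0                       # head over the first symbol of w
    while q not in final_states:
        a = tape.get(pos, blank)
        p, b, d = delta[(q, a)]
        tape[pos] = b
        pos += {"L": -1, "N": 0, "R": 1}[d]
        q = p
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip(blank)   # the non-blank tape content is the output

# Example machine: flip 0 <-> 1 while moving right; halt on the first blank.
delta = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
    ("q0", "#"): ("qf", "#", "N"),
}
print(run_tm(delta, "q0", {"qf"}, "0110"))  # prints 1001
```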

Turing Machine: The Universal Machine

• The Turing machine model is quite simple

• It can be easily simulated by a human
 Provided enough pencils, tape space and patience
• Important result: machines can simulate machines
 Turing machines are finite objects!
 Effective encoding into words over an alphabet
 Also configurations are finite! Encode them too
• A simulator machine U only needs to
 Receive an encoded M as input
 The input of M is w; give that also to U
 U maintains encoded configurations of M and applies steps
 Let ⟨M⟩ be the encoding of machine M.

Theorem (The Universal Machine)


There exists a universal Turing machine U such that, for all Turing machines
M and all words w ∈ Σ*:

U(⟨M⟩, w) = M(w)

In particular, U does not halt iff M does not halt.

Transition Function, Instantaneous Description and Moves


'Snapshots' of a Turing machine in action can be used to describe a Turing machine.
These give 'instantaneous descriptions' (IDs) of a Turing machine. We defined
instantaneous descriptions of a pda in terms of the current state, the input string to
be processed, and the topmost symbol of the pushdown store.
But the input string to be processed is not sufficient to define the ID of a
Turing machine, for the R/W head can move to the left as well. So an ID of a
Turing machine is defined in terms of the entire input string and the current state.
Definition. An ID of a Turing machine M is a string αβγ, where β is the present
state of M, the entire input string is split as αγ, the first symbol of γ is the current
symbol a under the R/W head, γ has all the subsequent symbols of the input
string, and the string α is the substring of the input string formed by all the
symbols to the left of a.

Example

A Turing machine is shown below; obtain its instantaneous description.

Tape: b a4 a1 a2 a1 a2 a2 [a1] a4 a2 b b

The R/W head is on the bracketed a1, and the present state is q3.

Solution
The present symbol under the R/W head is a1. The present state is q3, so a1 is
written to the right of q3. The nonblank symbols to the left of a1 form the string
a4a1a2a1a2a2, which is written to the left of q3. The sequence of nonblank
symbols to the right of a1 is a4a2. Thus the ID is
a4a1a2a1a2a2 q3 a1a4a2

Figure: representation of the ID (left sequence, present state, symbol under the
R/W head, right sequence).

Notes: (1) For constructing the ID, we simply insert the current state in the input
string to the left of the symbol under the R/W head.
(2) We observe that the blank symbol may occur as part of the left or right
substring.

Moves in a TM

As in the case of pushdown automata, δ(q, x) induces a change in the ID of the
Turing machine. We call this change in ID a move.
Suppose δ(q, xi) = (p, y, L). The input string to be processed is x1 x2 … xn, and
the present symbol under the R/W head is xi. So the ID before processing xi is

x1 x2 … x(i−1) q xi … xn

After processing xi, the resulting ID is

x1 … x(i−2) p x(i−1) y x(i+1) … xn
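The move rule can be sketched in code, using the ID convention described above (left string, current state, right string, with the head on the first symbol of the right string). The helper below is our own illustration; it assumes the head is not at the left end of the tape when moving left.

```python
def apply_move(left, q, right, delta):
    """Perform one move on the ID (left, q, right).
    The head reads the first symbol of `right`; delta maps (state, symbol)
    to (new_state, written_symbol, direction)."""
    a = right[0]
    p, y, d = delta[(q, a)]
    if d == "R":
        return left + y, p, right[1:]
    # d == "L": the head moves back onto the last symbol of `left`
    return left[:-1], p, left[-1] + y + right[1:]

# A left move rewrites ... b q c d ...  into  ... p b y d ...
print(apply_move("ab", "q", "cd", {("q", "c"): ("p", "y", "L")}))
```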

Complexity theory and knowledge management

Complexity theory also relates to knowledge management (KM) and organizational
learning (OL). "Complex systems are, by any other definition, learning
organizations. Complexity theory is, therefore, on the verge of making a huge
contribution to both KM and OL." Complexity theory, KM, and OL are all
complementary and co-dependent. "KM and OL each lack a theory of how
cognition happens in human social systems; complexity theory offers this missing
piece." In 1997, a think tank called the Knowledge Management Consortium
International (KMCI) was formed in Washington, DC. The formation of the group
acknowledged "the profound connection between complexity theory and knowledge
management". Complexity theory offers new approaches to some of the questions
that Peter Senge has posed in the field of KM. "In what has only recently become
apparent, the issues Senge speaks of are precisely those that scholars and
researchers of complexity theory have been dealing with for the past 15 years."

Complexity theory in organizations

Beginning in the early 1990s, theorists began linking complexity theory to
organizational change. Complexity theory rejects the idea of organizations as
machines, as well as a planned approach to organizational change. Rather, it agrees
with the emergent approach that power and constant change are crucial elements of
with the emergent approach that power and constant change are crucial elements of
organizational life. It can also be used to explain the often paradoxical nature of
organizations.
