
Theory of computation

Rakesh KR

PDF generated using the open source mwlib toolkit. See http://code.pediapress.com/ for more information. PDF generated at: Sat, 20 Oct 2012 01:16:54 UTC

Contents
Articles
Theory of computation
Automata theory
Regular language
Deterministic finite automaton
Context-free language
Pushdown automaton
Recursively enumerable set
Turing machine
Undecidable problem

References
Article Sources and Contributors
Image Sources, Licenses and Contributors

Article Licenses
License

Theory of computation
In theoretical computer science and mathematics, the theory of computation is the branch that deals with whether and how efficiently problems can be solved on a model of computation, using an algorithm. The field is divided into three major branches: automata theory, computability theory and computational complexity theory.[1]

In order to perform a rigorous study of computation, computer scientists work with a mathematical abstraction of computers called a model of computation. There are several models in use, but the most commonly examined is the Turing machine. Computer scientists study the Turing machine because it is simple to formulate, can be analyzed and used to prove results, and because it represents what many consider the most powerful possible "reasonable" model of computation (see Church-Turing thesis). It might seem that the potentially infinite memory capacity is an unrealizable attribute, but any decidable problem solved by a Turing machine will always require only a finite amount of memory. So in principle, any problem that can be solved (decided) by a Turing machine can be solved by a computer that has a bounded amount of memory.

History
The theory of computation can be considered the creation of models of all kinds in the field of computer science, and it therefore draws heavily on mathematics and logic. In the last century it became an independent academic discipline, separate from mathematics.

Some pioneers of the theory of computation were Alonzo Church, Alan Turing, Stephen Kleene, John von Neumann and Claude Shannon.

[Figure: Relationship between computability theory, complexity theory and formal language theory.]

Branches
Automata theory
Automata theory is the study of abstract machines (or more appropriately, abstract 'mathematical' machines or systems) and the computational problems that can be solved using these machines. These abstract machines are called automata. "Automata" comes from the Greek word αὐτόματα, which means that something is doing something by itself. Automata theory is also closely related to formal language theory, as the automata are often classified by the class of formal languages they are able to recognize. An automaton can be a finite representation of a formal language that may be an infinite set.

Computability theory
Computability theory deals primarily with the question of the extent to which a problem is solvable on a computer. The statement that the halting problem cannot be solved by a Turing machine is one of the most important results in computability theory, as it is an example of a concrete problem that is both easy to formulate and impossible to solve using a Turing machine. Much of computability theory builds on the halting problem result. Another important step in computability theory was Rice's theorem, which states that for all non-trivial properties of partial functions, it is undecidable whether a Turing machine computes a partial function with that property.

Computability theory is closely related to the branch of mathematical logic called recursion theory, which removes the restriction of studying only models of computation which are reducible to the Turing model. Many mathematicians and computational theorists who study recursion theory will refer to it as computability theory.

Computational complexity theory


Complexity theory considers not only whether a problem can be solved at all on a computer, but also how efficiently the problem can be solved. Two major aspects are considered: time complexity and space complexity, which are respectively how many steps it takes to perform a computation, and how much memory is required to perform that computation.

In order to analyze how much time and space a given algorithm requires, computer scientists express the time or space required to solve the problem as a function of the size of the input problem. For example, finding a particular number in a long list of numbers becomes harder as the list of numbers grows larger. If we say there are n numbers in the list, then if the list is not sorted or indexed in any way we may have to look at every number in order to find the number we're seeking. We thus say that in order to solve this problem, the computer needs to perform a number of steps that grows linearly in the size of the problem.

To simplify this problem, computer scientists have adopted Big O notation, which allows functions to be compared in a way that ensures that particular aspects of a machine's construction do not need to be considered, but rather only the asymptotic behavior as problems become large. So in our previous example we might say that the problem requires O(n) steps to solve.

Perhaps the most important open problem in all of computer science is the question of whether a certain broad class of problems denoted NP can be solved efficiently. This is discussed further at Complexity classes P and NP.
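The linear-search example above can be made concrete. The following sketch (not from the article; the function name and explicit step counter are illustrative) counts how many steps the search performs, making the linear growth in n visible:

```python
def linear_search(numbers, target):
    """Scan an unsorted list left to right.

    Returns (index, steps); in the worst case every one of the
    n elements is inspected, so the running time is O(n)."""
    steps = 0
    for i, x in enumerate(numbers):
        steps += 1
        if x == target:
            return i, steps
    return -1, steps

# Worst case: the target is absent, so all n elements are examined.
print(linear_search(list(range(1000)), -1))  # (-1, 1000)
```

Doubling the list length doubles the worst-case step count, which is exactly what "grows linearly in the size of the problem" means.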

Models of computation
Aside from a Turing machine, other equivalent (see: Church-Turing thesis) models of computation are in use.

Lambda calculus
A computation consists of an initial lambda expression (or two, if you want to separate the function and its input) plus a finite sequence of lambda terms, each deduced from the preceding term by one application of beta reduction.

Combinatory logic
A concept which has many similarities to λ-calculus, but important differences also exist (e.g. the fixed point combinator Y has a normal form in combinatory logic but not in λ-calculus). Combinatory logic was developed with great ambitions: understanding the nature of paradoxes, making the foundations of mathematics more economic (conceptually), and eliminating the notion of variables (thus clarifying their role in mathematics).

mu-recursive functions
A computation consists of a mu-recursive function, i.e. its defining sequence, any input value(s), and a sequence of recursive functions appearing in the defining sequence with inputs and outputs. Thus, if the functions g and h appear in the defining sequence of a recursive function f, then terms of the form 'g(5)=7' or 'h(3,2)=10' might appear. Each entry in this sequence needs to be an application of a basic function or follow from the entries above by using composition, primitive recursion or mu recursion. For instance, if f is defined by composition as f(x) = h(x, g(x)), then for 'f(5)=3' to appear, terms like 'g(5)=6' and 'h(5,6)=3' must occur above. The computation terminates only if the final term gives the value of the recursive function applied to the inputs.

Markov algorithm
A string rewriting system that uses grammar-like rules to operate on strings of symbols.

Register machine
A theoretically interesting idealization of a computer. There are several variants. In most of them, each register can hold a natural number (of unlimited size), and the instructions are simple (and few in number), e.g. only decrementation (combined with conditional jump) and incrementation exist (and halting). The lack of the infinite (or dynamically growing) external store (seen at Turing machines) can be understood by replacing its role with Gödel numbering techniques: the fact that each register holds a natural number allows the possibility of representing a complicated thing (e.g. a sequence, or a matrix, etc.) by an appropriately huge natural number; unambiguity of both representation and interpretation can be established by the number-theoretical foundations of these techniques.

P′′
Like Turing machines, P′′ uses an infinite tape of symbols (without random access) and a rather minimalistic set of instructions. But these instructions are very different; thus, unlike Turing machines, P′′ does not need to maintain a distinct state, because all memory-like functionality can be provided only by the tape. Instead of rewriting the current symbol, it can perform a modular arithmetic incrementation on it. P′′ also has a pair of instructions for a cycle, inspecting the blank symbol. Despite its minimalistic nature, it has become the parental formal language of an implemented programming language (used for entertainment) called Brainfuck.

In addition to the general computational models, some simpler computational models are useful for special, restricted applications. Regular expressions, for example, specify string patterns in many contexts, from office productivity software to programming languages. Finite automata, another formalism mathematically equivalent to regular expressions, are used in circuit design and in some kinds of problem-solving. Context-free grammars specify programming language syntax; non-deterministic pushdown automata are another formalism equivalent to context-free grammars. Primitive recursive functions are a defined subclass of the recursive functions.

Different models of computation have the ability to do different tasks. One way to measure the power of a computational model is to study the class of formal languages that the model can generate; in such a way the Chomsky hierarchy of languages is obtained.
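As an illustration of the Markov algorithm model mentioned above, here is a minimal interpreter sketch (the rule format and the unary-addition example are my own, not from the article): rules are tried in order, the first applicable rule rewrites the leftmost occurrence of its pattern, and the process repeats until a terminal rule fires or no rule applies.

```python
def markov(rules, s, max_steps=10_000):
    """Run an ordered Markov rewriting system on the string s.

    rules is a list of (pattern, replacement, is_terminal) triples.
    At each step the first rule whose pattern occurs in s rewrites
    the leftmost occurrence; a terminal rule halts the computation."""
    for _ in range(max_steps):
        for pattern, replacement, terminal in rules:
            i = s.find(pattern)
            if i >= 0:
                s = s[:i] + replacement + s[i + len(pattern):]
                if terminal:
                    return s
                break
        else:
            return s  # no rule applies: the computation halts
    raise RuntimeError("step limit exceeded")

# Unary addition: repeatedly erasing '+' turns "111+11+1" into "111111".
print(markov([("+", "", False)], "111+11+1"))  # 111111
```

Despite the simplicity of the rule set, such string rewriting systems are Turing-complete.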

References
[1] Sipser

Further reading
Textbooks aimed at computer scientists (there are many textbooks in this area; this list is by necessity incomplete):

- Hopcroft, John E., and Jeffrey D. Ullman (2006). Introduction to Automata Theory, Languages, and Computation. 3rd ed. Reading, MA: Addison-Wesley. ISBN 978-0-321-45536-9. One of the standard references in the field.
- Michael Sipser (2006). Introduction to the Theory of Computation. 2nd ed. PWS Publishing. ISBN 0-534-95097-3.
- Eitan Gurari (1989). An Introduction to the Theory of Computation (http://www.cse.ohio-state.edu/~gurari/theory-bk/theory-bk.html). Computer Science Press. ISBN 0-7167-8182-4.
- Hein, James L. (1996). Theory of Computation. Sudbury, MA: Jones & Bartlett. ISBN 978-0-86720-497-1. A gentle introduction to the field, appropriate for second-year undergraduate computer science students.
- Taylor, R. Gregory (1998). Models of Computation and Formal Languages. New York: Oxford University Press. ISBN 978-0-19-510983-2. An unusually readable textbook, appropriate for upper-level undergraduates or beginning graduate students.
- Lewis, F. D. (2007). Essentials of Theoretical Computer Science. A textbook covering the topics of formal languages, automata and grammars. The emphasis appears to be on presenting an overview of the results and their applications rather than providing proofs of the results.
- Martin Davis, Ron Sigal, Elaine J. Weyuker (1994). Computability, Complexity, and Languages: Fundamentals of Theoretical Computer Science. 2nd ed. Academic Press. ISBN 0-12-206382-1. Covers a wider range of topics than most other introductory books, including program semantics and quantification theory. Aimed at graduate students.

Books on computability theory from the (wider) mathematical perspective:

- Hartley Rogers, Jr. (1987). Theory of Recursive Functions and Effective Computability. MIT Press. ISBN 0-262-68052-1.
- S. Barry Cooper (2004). Computability Theory. Chapman and Hall/CRC. ISBN 1-58488-237-9.
- Carl H. Smith (1994). A Recursive Introduction to the Theory of Computation. Springer. ISBN 0-387-94332-3. A shorter textbook suitable for graduate students in computer science.

Historical perspective:

- Richard L. Epstein and Walter A. Carnielli (2000). Computability: Computable Functions, Logic, and the Foundations of Mathematics, with Computability: A Timeline. 2nd ed. Wadsworth/Thomson Learning. ISBN 0-534-54644-7.

External links
Computability Logic (http://www.cis.upenn.edu/~giorgi/cl.html) - A theory of interactive computation. The main web source on this subject.

Automata theory
In theoretical computer science, automata theory is the study of mathematical objects called abstract machines or automata and the computational problems that can be solved using them. "Automata" comes from the Greek word αὐτόματα meaning "self-acting". The figure at right illustrates a finite state machine, which belongs to one well-known variety of automaton. This automaton consists of states (represented in the figure by circles) and transitions (represented by arrows). As the automaton sees a symbol of input, it makes a transition (or jump) to another state, according to its transition function (which takes the current state and the recent symbol as its inputs).

[Figure: An example of an automaton. The study of the mathematical properties of such automata is automata theory.]

Automata theory is also closely related to formal language theory. An automaton is a finite representation of a formal language that may be an infinite set. Automata are often classified by the class of formal languages they are able to recognize. Automata play a major role in theory of computation, compiler design, parsing and formal verification.

Automata
Following is an introductory definition of one type of automaton, intended to convey the essential concepts involved in automata theory.

Informal description
An automaton runs on some given sequence of inputs in discrete time steps. At each time step, an automaton gets one input that is picked from a set of symbols or letters, which is called an alphabet. At any time, the symbols so far fed to the automaton as input form a finite sequence of symbols, which is called a word. An automaton contains a finite set of states. At each instant in time of some run, the automaton is in one of its states. At each time step when the automaton reads a symbol, it jumps (or transits) to a next state that is decided by a function that takes the current state and the symbol currently read as parameters. This function is called the transition function. The automaton reads the symbols of the input word one after another and transits from state to state according to the transition function, until the word is read completely.

Once the input word has been read, the automaton is said to have stopped, and the state at which it has stopped is called the final state. Depending on the final state, the automaton either accepts or rejects the input word. There is a subset of states of the automaton which is defined as the set of accepting states. If the final state is an accepting state, then the automaton accepts the word; otherwise, the word is rejected. The set of all the words accepted by an automaton is called the language recognized by the automaton.

In short, an automaton is a mathematical object that takes a word as input and decides either to accept it or reject it. Since all computational problems are reducible to the accept/reject question on words (all problem instances can be represented in a finite length of symbols), automata theory plays a crucial role in computational theory.

Formal definition
Automaton
An automaton is represented formally by a 5-tuple (Q, Σ, δ, q0, F), where:
- Q is a finite set of states.
- Σ is a finite set of symbols, called the alphabet of the automaton.
- δ is the transition function, that is, δ : Q × Σ → Q.
- q0 is the start state, that is, the state of the automaton before any input has been processed, where q0 ∈ Q.
- F is a set of states of Q (i.e. F ⊆ Q) called accept states.

Input word
An automaton reads a finite string of symbols a1, a2, ..., an, where ai ∈ Σ, which is called an input word. The set of all words is denoted by Σ*.

Run
A sequence of states q0, q1, q2, ..., qn, where qi ∈ Q such that q0 is the start state and qi = δ(qi-1, ai) for 0 < i ≤ n, is a run of the automaton on an input word w = a1, a2, ..., an ∈ Σ*. In other words, at first the automaton is at the start state q0, and then the automaton reads symbols of the input word in sequence. When the automaton reads symbol ai it jumps to state qi = δ(qi-1, ai). qn is said to be the final state of the run.

Accepting word
A word w ∈ Σ* is accepted by the automaton if qn ∈ F.

Recognized language
An automaton can recognize a formal language. The language L ⊆ Σ* recognized by an automaton is the set of all the words that are accepted by the automaton.

Recognizable languages
The recognizable languages are the set of languages that are recognized by some automaton. For the above definition of automata the recognizable languages are regular languages. For different definitions of automata, the recognizable languages are different.
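The formal definition can be transcribed almost literally into code. The sketch below (state names and choice of language are illustrative, not from the article) encodes the 5-tuple (Q, Σ, δ, q0, F) of a DFA over {a, b} that accepts words with an even number of a's:

```python
Q = {"even", "odd"}                                    # states
Sigma = {"a", "b"}                                     # alphabet
delta = {("even", "a"): "odd", ("odd", "a"): "even",   # transition function
         ("even", "b"): "even", ("odd", "b"): "odd"}
q0 = "even"                                            # start state
F = {"even"}                                           # accept states

def run(word):
    """Return the run q0, q1, ..., qn of the automaton on the word."""
    states = [q0]
    for symbol in word:
        states.append(delta[(states[-1], symbol)])
    return states

def accepts(word):
    """The word is accepted iff the final state of the run lies in F."""
    return run(word)[-1] in F

print(accepts("abab"))  # True: two a's
print(accepts("ab"))    # False: one a
```

Note how each component of the 5-tuple appears once: the dict delta is exactly the function δ : Q × Σ → Q, and acceptance is the test qn ∈ F.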

Variant definitions of automata


Automata are defined to study useful machines under mathematical formalism. So the definition of an automaton is open to variations according to the "real world machine" which we want to model using the automaton. People have studied many variations of automata. The most standard variant, which is described above, is called a deterministic finite automaton. The following are some popular variations in the definition of different components of automata.

Input
- Finite input: An automaton that accepts only finite sequences of symbols. The above introductory definition only encompasses finite words.
- Infinite input: An automaton that accepts infinite words (ω-words). Such automata are called ω-automata.
- Tree word input: The input may be a tree of symbols instead of a sequence of symbols. In this case, after reading each symbol, the automaton reads all the successor symbols in the input tree. It is said that the automaton makes one copy of itself for each successor, and each such copy starts running on one of the successor symbols from the state given by the transition relation of the automaton. Such an automaton is called a tree automaton.
- Infinite tree input: The two extensions above can be combined, so the automaton reads a tree structure with (in)finite branches. Such an automaton is called an infinite tree automaton.

States
- Finite states: An automaton that contains only a finite number of states. The above introductory definition describes automata with finite numbers of states.
- Infinite states: An automaton that may not have a finite number of states, or even a countable number of states. For example, the quantum finite automaton or topological automaton has an uncountable infinity of states.
- Stack memory: An automaton may also contain some extra memory in the form of a stack in which symbols can be pushed and popped. This kind of automaton is called a pushdown automaton.

Transition function
- Deterministic: For a given current state and an input symbol, if an automaton can only jump to one and only one state then it is a deterministic automaton.
- Nondeterministic: An automaton that, after reading an input symbol, may jump into any of a number of states, as licensed by its transition relation. Notice that the term transition function is replaced by transition relation: the automaton non-deterministically decides to jump into one of the allowed choices. Such automata are called nondeterministic automata.
- Alternation: This idea is quite similar to the tree automaton, but orthogonal. The automaton may run its multiple copies on the same next read symbol. Such automata are called alternating automata. The acceptance condition must be satisfied by all runs of such copies to accept the input.

Acceptance condition
- Acceptance of finite words: Same as described in the informal definition above.
- Acceptance of infinite words: An ω-automaton cannot have final states, as infinite words never terminate. Rather, acceptance of the word is decided by looking at the infinite sequence of states visited during the run.
- Probabilistic acceptance: An automaton need not strictly accept or reject an input. It may accept the input with some probability between zero and one. For example, the quantum finite automaton, geometric automaton and metric automaton have probabilistic acceptance.

Different combinations of the above variations produce many classes of automaton.

Automata theory
Automata theory is a subject matter that studies properties of various types of automata. For example, the following questions are studied about a given type of automata:

- Which class of formal languages is recognizable by some type of automata? (Recognizable languages)
- Are certain automata closed under union, intersection, or complementation of formal languages? (Closure properties)
- How expressive is a type of automata in terms of recognizing a class of formal languages? And what is their relative expressive power? (Language hierarchy)

Automata theory also studies whether or not an effective algorithm exists to solve problems similar to the following list:

- Does an automaton accept any input word? (Emptiness checking)
- Is it possible to transform a given non-deterministic automaton into a deterministic automaton without changing the recognized language? (Determinization)
- For a given formal language, what is the smallest automaton that recognizes it? (Minimization)
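Emptiness checking, for instance, reduces to graph reachability: a DFA's language is empty exactly when no accept state is reachable from the start state. A small sketch (the dict-based DFA encoding is my own convention, not from the article):

```python
from collections import deque

def is_empty(delta, q0, F):
    """Breadth-first search over the transition graph: return True
    iff no accepting state is reachable from the start state q0."""
    seen, frontier = {q0}, deque([q0])
    while frontier:
        q = frontier.popleft()
        if q in F:
            return False
        for (state, _symbol), target in delta.items():
            if state == q and target not in seen:
                seen.add(target)
                frontier.append(target)
    return True

# A DFA whose only accept state "t" is unreachable from "s":
delta = {("s", "a"): "s", ("t", "a"): "t"}
print(is_empty(delta, "s", {"t"}))  # True: the language is empty
```

The same reachability idea underlies many decision procedures for automata; emptiness for DFAs is decidable in time linear in the number of transitions.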

Classes of automata
The following is an incomplete list of types of automata.
Automaton : Recognizable language

Finite state machine (FSM), deterministic or nondeterministic : regular languages
Deterministic pushdown automaton (DPDA) : deterministic context-free languages
Pushdown automaton (PDA) : context-free languages
Linear bounded automaton (LBA) : context-sensitive languages
Turing machine : recursively enumerable languages
Deterministic Büchi automaton : ω-limit languages
Nondeterministic Büchi automaton : ω-regular languages
Rabin automaton, Streett automaton, parity automaton, Muller automaton : ω-regular languages

Discrete, continuous, and hybrid automata


Normally automata theory describes the states of abstract machines but there are analog automata or continuous automata or hybrid discrete-continuous automata, which use analog data, continuous time, or both.

Applications
Each model in automata theory plays important roles in several applied areas. Finite automata are used in text processing, compilers, and hardware design. Context-free grammars (CFGs) are used in programming languages and artificial intelligence. Originally, CFGs were used in the study of human languages. Cellular automata are used in the field of biology, the most common example being John Conway's Game of Life. Some other examples which could be explained using automata theory in biology include mollusk and pine cone growth and pigmentation patterns. Going further, a theory suggesting that the whole universe is computed by some sort of discrete automaton is advocated by some scientists. The idea originated in the work of Konrad Zuse, and was popularized in America by Edward Fredkin.

Automata Simulators
Automata simulators are pedagogical tools used to teach, learn and research automata theory. An automata simulator takes as input the description of an automaton and then simulates its working for an arbitrary input string. The description of the automaton can be entered in several ways. An automaton can be defined in a symbolic language, or its specification may be entered in a predesigned form, or its transition diagram may be drawn by clicking and dragging the mouse. Well-known automata simulators include Turing's World, JFLAP, VAS, TAGS and SimStudio.[1]

Connection to Category theory


One can define several distinct categories of automata[2] following the automata classification into different types described in the previous section. The mathematical category of deterministic automata, sequential machines or sequential automata, and Turing machines, with automata homomorphisms defining the arrows between automata, is a Cartesian closed category;[3][4] it has both categorical limits and colimits. An automata homomorphism maps a quintuple of an automaton Ai onto the quintuple of another automaton Aj.[5] Automata homomorphisms can also be considered as automata transformations or as semigroup homomorphisms, when the state space, S, of the automaton is defined as a semigroup Sg. Monoids are also considered as a suitable setting for automata in monoidal categories.[6][7][8]

Categories of variable automata

One could also define a variable automaton, in the sense of Norbert Wiener in his book The Human Use of Human Beings, via the endomorphisms Ai → Ai. Then one can show that such variable automata homomorphisms form a mathematical group. In the case of non-deterministic, or other complex kinds of automata, the latter set of endomorphisms may become, however, a variable automaton groupoid. Therefore, in the most general case, categories of variable automata of any kind are categories of groupoids[9] or groupoid categories. Moreover, the category of reversible automata is then a 2-category, and also a subcategory of the 2-category of groupoids, or the groupoid category.

References
[1] Chakraborty, P., Saxena, P. C., Katti, C. P. 2011. Fifty Years of Automata Simulation: A Review. ACM Inroads, 2(4):59-70. http://dl.acm.org/citation.cfm?id=2038893
[2] Jiří Adámek and Věra Trnková. 1990. Automata and Algebras in Categories. Kluwer Academic Publishers: Dordrecht and Prague
[3] S. Mac Lane, Categories for the Working Mathematician, Springer, New York (1971)
[4] http://planetmath.org/encyclopedia/CartesianClosedCategory.html Cartesian closed category
[5] http://planetmath.org/encyclopedia/SequentialMachine3.html The Category of Automata
[6] http://www.csee.wvu.edu/~jworthing/asl2010.pdf James Worthington. 2010. Determinizing, Forgetting, and Automata in Monoidal Categories. ASL North American Annual Meeting, March 17, 2010
[7] Aguiar, M. and Mahajan, S. 2010. "Monoidal Functors, Species, and Hopf Algebras".
[8] Meseguer, J., Montanari, U.: 1990. Petri nets are monoids. Information and Computation 88:105-155
[9] http://en.wikipedia.org/wiki/Groupoid#Category_of_groupoids Category of groupoids

- John E. Hopcroft, Rajeev Motwani, Jeffrey D. Ullman (2000). Introduction to Automata Theory, Languages, and Computation (2nd ed.). Pearson Education. ISBN 0-201-44124-1.
- Michael Sipser (1997). Introduction to the Theory of Computation. PWS Publishing. ISBN 0-534-94728-X. Part One: Automata and Languages, chapters 1-2, pp. 29-122. Section 4.1: Decidable Languages, pp. 152-159. Section 5.1: Undecidable Problems from Language Theory, pp. 172-183.
- James P. Schmeiser, David T. Barnard (1995). Producing a top-down parse order with bottom-up parsing. Elsevier North-Holland.
- D'Souza, D. and Shankar, P. (2012). Modern Applications of Automata Theory. World Scientific Publishing, Singapore.

Further reading
- Salomaa, Arto (1985). Computation and Automata. Encyclopedia of Mathematics and Its Applications. 25. Cambridge University Press. ISBN 0-521-30245-5. Zbl 0565.68046.
- Sakarovitch, Jacques (2009). Elements of Automata Theory. Translated from the French by Reuben Thomas. Cambridge University Press. ISBN 978-0-521-84425-3. Zbl 1188.68177.

External links
- Visual Automata Simulator (http://www.cs.usfca.edu/~jbovet/vas.html), a tool for simulating, visualizing and transforming finite state automata and Turing machines, by Jean Bovet
- JFLAP (http://www.jflap.org)
- dk.brics.automaton (http://www.brics.dk/automaton)
- libfa (http://www.augeas.net/libfa/index.html)
- Proyecto SEPa (in Spanish) (http://www.ucse.edu.ar/fma/sepa/)
- Exorciser (in German) (http://www.swisseduc.ch/informatik/exorciser/index.html)
- Automata simulators written in Java, with sources (http://torturo.com/programas-hechos-en-java/)

Regular language
In theoretical computer science and formal language theory, a regular language is a formal language that can be expressed using a regular expression. Note that the "regular expression" features provided with many programming languages are augmented with features that make them capable of recognizing languages that cannot be expressed by the formal regular expressions (as formally defined below). In the Chomsky hierarchy, regular languages are defined to be the languages that are generated by Type-3 grammars (regular grammars). Regular languages are very useful in input parsing and programming language design.

Formal definition
The collection of regular languages over an alphabet Σ is defined recursively as follows:

- The empty language Ø is a regular language.
- For each a ∈ Σ, the singleton language {a} is a regular language.
- If A and B are regular languages, then A ∪ B (union), A·B (concatenation), and A* (Kleene star) are regular languages.
- No other languages over Σ are regular.

See regular expression for its syntax and semantics. Note that the above cases are in effect the defining rules of regular expressions.

Examples

All finite languages are regular; in particular the empty string language {ε} = Ø* is regular. Other typical examples include the language consisting of all strings over the alphabet {a, b} which contain an even number of a's, or the language consisting of all strings of the form: several a's followed by several b's.

A simple example of a language that is not regular is the set of strings {a^n b^n : n ≥ 0}.[1] Intuitively, it cannot be recognized with a finite automaton, since a finite automaton has finite memory and it cannot remember the exact number of a's. Techniques to prove this fact rigorously are given below.
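The even-number-of-a's example is regular and can be written as a formal regular expression using only union, concatenation and star, which Python's re module can then recognize. (The pattern b*(ab*ab*)* is my own rendering, not from the article.) By contrast, no such pattern exists for {a^n b^n}, since matching it would require unbounded counting.

```python
import re

# b*(ab*ab*)* : zero or more b's, then pairs of a's separated by b's,
# i.e. exactly the words over {a, b} with an even number of a's.
even_as = re.compile(r"b*(ab*ab*)*")

def is_even_as(word):
    return even_as.fullmatch(word) is not None

print(is_even_as("babab"))  # True: two a's
print(is_even_as("ab"))     # False: one a
```

The pattern uses only the three regular operations from the definition above, so it denotes a regular language in the formal sense as well.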

Equivalence to other formalisms


A regular language satisfies the following equivalent properties:

- it can be accepted by a nondeterministic finite automaton, but also by the restricted deterministic finite automaton, or the more general alternating finite automaton
- it can be generated by a regular grammar
- it can be generated by a prefix grammar
- it can be accepted by a read-only Turing machine
- it can be defined in monadic second-order logic
- it is recognized by some finite monoid, meaning it is the preimage of a subset of a finite monoid under a homomorphism from the free monoid on its alphabet

The above properties are sometimes used as an alternative definition of regular languages.


Closure properties
The regular languages are closed under various operations; that is, if the languages K and L are regular, so is the result of the following operations:

- the set-theoretic Boolean operations: union K ∪ L, intersection K ∩ L, and complement of L. From this also the relative complement K − L follows.
- the regular operations: union K ∪ L, concatenation K·L, and Kleene star L*.
- the trio operations: string homomorphism, inverse string homomorphism, and intersection with regular languages. As a consequence they are closed under arbitrary finite state transductions, like the quotient with a regular language. Even more, regular languages are closed under quotients with arbitrary languages: if L is regular then L/K is regular for any K.
- the reverse (or mirror image) of L.
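Closure under intersection is witnessed constructively by the classical product construction: run both DFAs in parallel on paired states. A sketch (the triple encoding of a DFA is my own convention, not from the article):

```python
def product_dfa(d1, d2, alphabet):
    """Build a DFA for the intersection of two regular languages.

    Each DFA is a triple (delta, start, accept) with delta a dict
    mapping (state, symbol) -> state; both share the same alphabet."""
    (t1, s1, f1), (t2, s2, f2) = d1, d2
    states1 = {q for q, _ in t1}   # states appearing in delta1
    states2 = {q for q, _ in t2}
    delta = {((p, q), a): (t1[(p, a)], t2[(q, a)])
             for p in states1 for q in states2 for a in alphabet}
    return delta, (s1, s2), {(p, q) for p in f1 for q in f2}

def accepts(dfa, word):
    delta, q, accept = dfa
    for a in word:
        q = delta[(q, a)]
    return q in accept

# Even number of a's, and even number of b's, over {a, b}:
even_a = ({("e", "a"): "o", ("o", "a"): "e",
           ("e", "b"): "e", ("o", "b"): "o"}, "e", {"e"})
even_b = ({("e", "a"): "e", ("o", "a"): "o",
           ("e", "b"): "o", ("o", "b"): "e"}, "e", {"e"})
both = product_dfa(even_a, even_b, {"a", "b"})
print(accepts(both, "abab"))  # True: two a's and two b's
print(accepts(both, "aab"))   # False: one b
```

Union and relative complement follow from the same construction by changing which paired states are accepting.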

Deciding whether a language is regular


To locate the regular languages in the Chomsky hierarchy, one notices that every regular language is context-free. The converse is not true: for example the language consisting of all strings having the same number of a's as b's is context-free but not regular. To prove that a language such as this is not regular, one often uses the Myhill-Nerode theorem or the pumping lemma among other methods.[2]

[Figure: Regular language in classes of the Chomsky hierarchy.]

There are two purely algebraic approaches to define regular languages. If:

- Σ is a finite alphabet,
- Σ* denotes the free monoid over Σ consisting of all strings over Σ,
- f : Σ* → M is a monoid homomorphism where M is a finite monoid,
- S is a subset of M,

then the set f^(-1)(S) is regular. Every regular language arises in this fashion.

If L is any subset of Σ*, one defines an equivalence relation ~ (called the syntactic relation) on Σ* as follows: u ~ v is defined to mean

uw ∈ L if and only if vw ∈ L for all w ∈ Σ*.

The language L is regular if and only if the number of equivalence classes of ~ is finite (a proof of this is provided in the article on the syntactic monoid). When a language is regular, then the number of equivalence classes is equal to the number of states of the minimal deterministic finite automaton accepting L.

A similar set of statements can be formulated for a monoid. In this case, equivalence over M leads to the concept of a recognizable language.
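The syntactic relation can be explored mechanically. For L = {a^n b^n}, the prefixes a, aa, aaa, ... fall into pairwise distinct equivalence classes, because the suffix b^i separates a^i from a^j; infinitely many classes means L is not regular by the Myhill-Nerode theorem. A brute-force sketch (function names and the search bound are my own, not from the article):

```python
from itertools import product

def in_L(w):
    """Membership in L = {a^n b^n : n >= 0}."""
    n = len(w) // 2
    return len(w) % 2 == 0 and w == "a" * n + "b" * n

def distinguishing_suffix(u, v, alphabet="ab", max_len=6):
    """Search for a suffix w with (uw in L) != (vw in L).

    Such a w proves that u and v are inequivalent under the
    syntactic relation ~, i.e. they lie in different classes."""
    for k in range(max_len + 1):
        for w in map("".join, product(alphabet, repeat=k)):
            if in_L(u + w) != in_L(v + w):
                return w
    return None

print(distinguishing_suffix("a", "aaa"))  # 'b': "ab" is in L, "aaab" is not
```

Running this on any two distinct prefixes a^i and a^j finds a separating suffix, illustrating why no finite automaton can accept L.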


Complexity results
In computational complexity theory, the complexity class of all regular languages is sometimes referred to as REGULAR or REG and equals DSPACE(O(1)), the decision problems that can be solved in constant space (the space used is independent of the input size). REGULAR ≠ AC0, since REGULAR (trivially) contains the parity problem of determining whether the number of 1 bits in the input is even or odd, and this problem is not in AC0.[3] On the other hand, REGULAR does not contain AC0, because the nonregular language of palindromes, or the nonregular language {0^n 1^n : n ≥ 0}, can both be recognized in AC0.[4]

If a language is not regular, it requires a machine with at least Ω(log log n) space to recognize it (where n is the input size).[5] In other words, DSPACE(o(log log n)) equals the class of regular languages. In practice, most nonregular problems are solved by machines taking at least logarithmic space.

Subclasses
Important subclasses of regular languages include:

Finite languages, those containing only a finite number of words. These are regular languages, as one can create a regular expression that is the union of every word in the language.
Star-free languages, those that can be described by a regular expression constructed from the empty symbol, letters, concatenation and all Boolean operators including complementation but not the Kleene star: this class includes all finite languages.[6]
Cyclic languages, satisfying the conditions uv ∈ L ⇔ vu ∈ L and w ∈ L ⇔ wⁿ ∈ L for n > 0.[7]

The number of words in a regular language


Let s_L(n) denote the number of words of length n in L. The ordinary generating function for L is the formal power series

S_L(z) = Σ_{n ≥ 0} s_L(n) z^n.

If L is regular, its generating function S_L(z) is a rational function.[7] Hence for any infinite regular language L there exist constants λ_1, …, λ_k and polynomials p_1(x), …, p_k(x) such that for every n the number s_L(n) of words of length n in L satisfies the equation

s_L(n) = p_1(n) λ_1^n + ⋯ + p_k(n) λ_k^n.[8][9]

Thus, non-regularity of certain infinite languages can be proved by counting the words of a given length in L. Consider, for example, the Dyck language of strings of balanced parentheses. The number of words of length 2n in the Dyck language is equal to the Catalan number C_n ~ 4^n / (n^{3/2} √π), which is not of the form p(n) λ^n, witnessing the non-regularity of the Dyck language.

The zeta function of a language L is[7]

ζ_L(z) = exp( Σ_{n ≥ 1} s_L(n) z^n / n ).

The zeta function of a regular language is not in general rational, but that of a cyclic language is.[10]
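The word-counting function s_L(n) is straightforward to compute for any regular language once a DFA for it is fixed: dynamic programming over the transitions is exactly iterating the transfer matrix whose rationality underlies the statements above. A small illustrative sketch (the DFA below, even number of 0s, is an assumption of this example, for which s_L(n) = 2^(n-1) when n ≥ 1):

```python
# Count the words of length n accepted by a DFA by dynamic programming
# over its transitions; s_L(n) then satisfies a linear recurrence,
# in line with the rational generating function described above.

def count_words(dfa, n):
    states, sigma, delta, start, accepting = dfa
    # counts[q] = number of length-k words driving the DFA from start to q
    counts = {q: (1 if q == start else 0) for q in states}
    for _ in range(n):
        nxt = {q: 0 for q in states}
        for q in states:
            for c in sigma:
                nxt[delta[(q, c)]] += counts[q]
        counts = nxt
    return sum(counts[q] for q in accepting)

# DFA over {0,1} accepting strings with an even number of 0s
even0 = ({'e', 'o'}, {'0', '1'},
         {('e', '0'): 'o', ('e', '1'): 'e',
          ('o', '0'): 'e', ('o', '1'): 'o'},
         'e', {'e'})
```

Comparing such computed counts against a candidate closed form p(n)λ^n is a concrete version of the non-regularity argument sketched for the Dyck language.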


Generalizations
The notion of a regular language has been generalized to infinite words (see ω-automata) and to trees (see tree automaton).

References
Michael Sipser (1997). Introduction to the Theory of Computation. PWS Publishing. ISBN 0-534-94728-X. Chapter 1: Regular Languages, pp. 31–90. Subsection "Decidable Problems Concerning Regular Languages" of section 4.1: Decidable Languages, pp. 152–155.
Eilenberg, Samuel (1974). Automata, Languages, and Machines. Vol. A. New York: Academic Press.

[1] Eilenberg (1974), p. 16 (Example II, 2.8) and p. 25 (Example II, 5.2).
[2] How to prove that a language is not regular? (http://cs.stackexchange.com/questions/1031/how-to-prove-that-a-language-is-not-regular)
[3] M. Furst, J. B. Saxe, and M. Sipser. Parity, circuits, and the polynomial-time hierarchy. Math. Systems Theory, 17:13–27, 1984.
[4] Cook, Stephen; Nguyen, Phuong (2010). Logical Foundations of Proof Complexity. Ithaca, NY: Association for Symbolic Logic. p. 75. ISBN 0-521-51729-X.
[5] J. Hartmanis, P. L. Lewis II, and R. E. Stearns. Hierarchies of memory-limited computations. Proceedings of the 6th Annual IEEE Symposium on Switching Circuit Theory and Logic Design, pp. 179–190. 1965.
[6] Volker Diekert, Paul Gastin (2008). "First-order definable languages" (http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/DG-WT08.pdf). In Jörg Flum, Erich Grädel, Thomas Wilke. Logic and Automata: History and Perspectives. Amsterdam University Press. ISBN 978-90-5356-576-6.
[7] Honkala, Juha (1989). "A necessary condition for the rationality of the zeta function of a regular language". Theor. Comput. Sci. 66 (3): 341–347. Zbl 0675.68034.
[8] Proof of theorem for irreducible DFAs (http://cs.stackexchange.com/a/1048/55)
[9] Number of words of a given length in a regular language (http://cs.stackexchange.com/q/1045/55)
[10] Berstel, Jean; Reutenauer, Christophe (1990). "Zeta functions of formal languages". Trans. Am. Math. Soc. 321 (2): 533–546. Zbl 0797.68092.

External links
Complexity Zoo: Class REG (http://qwiki.stanford.edu/index.php/Complexity_Zoo:R#reg)


Deterministic finite automaton


In automata theory, a branch of theoretical computer science, a deterministic finite automaton (DFA), also known as a deterministic finite state machine, is a finite state machine that accepts/rejects finite strings of symbols and produces a unique computation (or run) of the automaton for each input string.[1] 'Deterministic' refers to the uniqueness of the computation. In search of the simplest models to capture finite state machines, McCulloch and Pitts were among the first researchers to introduce a concept similar to finite automata in 1943.[2][3]

An example of a deterministic finite automaton that accepts only binary numbers that are multiples of 3. The state S0 is both the start state and an accept state.

The figure at right illustrates a deterministic finite automaton using a state diagram. In the automaton, there are three states: S0, S1, and S2 (denoted graphically by circles). The automaton takes a finite sequence of 0s and 1s as input. For each state, there is a transition arrow leading out to a next state for both 0 and 1. Upon reading a symbol, a DFA jumps deterministically from one state to another by following the transition arrow. For example, if the automaton is currently in state S0 and the current input symbol is 1, then it deterministically jumps to state S1. A DFA has a start state (denoted graphically by an arrow coming in from nowhere) where computations begin, and a set of accept states (denoted graphically by a double circle) which help define when a computation is successful.

A DFA is defined as an abstract mathematical concept, but due to the deterministic nature of a DFA, it is implementable in hardware and software for solving various specific problems. For example, a DFA can model software that decides whether or not online user input such as email addresses is valid.[4] (See finite state machine for more practical examples.)

DFAs recognize exactly the set of regular languages,[5] which are, among other things, useful for doing lexical analysis and pattern matching. DFAs can be built from nondeterministic finite automata through the powerset construction.

Formal definition
A deterministic finite automaton M is a 5-tuple, (Q, Σ, δ, q0, F), consisting of

a finite set of states (Q)
a finite set of input symbols called the alphabet (Σ)
a transition function (δ : Q × Σ → Q)
a start state (q0 ∈ Q)
a set of accept states (F ⊆ Q)

Let w = a1a2 ... an be a string over the alphabet Σ. The automaton M accepts the string w if a sequence of states, r0, r1, ..., rn, exists in Q with the following conditions:

1. r0 = q0
2. ri+1 = δ(ri, ai+1), for i = 0, ..., n−1
3. rn ∈ F.

In words, the first condition says that the machine starts in the start state q0. The second condition says that given each character of string w, the machine will transition from state to state according to the transition function δ. The last condition says that the machine accepts w if the last input of w causes the machine to halt in one of the accepting states. Otherwise, it is said that the automaton rejects the string. The set of strings M accepts is the language

recognized by M, and this language is denoted by L(M).

A deterministic finite automaton without accept states and without a starting state is known as a transition system or semiautomaton.

For a more comprehensive introduction to the formal definition, see automata theory.

Example
The following example is of a DFA M, with a binary alphabet, which requires that the input contains an even number of 0s.

M = (Q, Σ, δ, q0, F) where

Q = {S1, S2},
Σ = {0, 1},
q0 = S1,
F = {S1}, and
δ is defined by the following state transition table:

The state diagram for M

      0    1
S1    S2   S1
S2    S1   S2

The state S1 represents that there has been an even number of 0s in the input so far, while S2 signifies an odd number. A 1 in the input does not change the state of the automaton. When the input ends, the state will show whether the input contained an even number of 0s or not. If the input did contain an even number of 0s, M will finish in state S1, an accepting state, so the input string will be accepted. The language recognized by M is the regular language given by the regular expression 1*( 0 (1*) 0 (1*) )*, where "*" is the Kleene star, e.g., 1* denotes any non-negative number (possibly zero) of symbols "1".
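The example M translates almost directly into code. A minimal sketch, with the transition table above stored as a dictionary (the representation is an assumption of this example, not from the text):

```python
# Direct implementation of the example DFA M: binary strings with an
# even number of 0s. State names follow the transition table above.

DELTA = {('S1', '0'): 'S2', ('S1', '1'): 'S1',
         ('S2', '0'): 'S1', ('S2', '1'): 'S2'}

def m_accepts(w):
    state = 'S1'                    # start state q0 = S1
    for symbol in w:
        state = DELTA[(state, symbol)]
    return state == 'S1'            # accept states F = {S1}
```

For instance, m_accepts("1101") is False (one 0), while m_accepts("0110") is True (two 0s), matching the regular expression 1*( 0 (1*) 0 (1*) )*.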

Closure properties
If DFAs recognize the languages that are obtained by applying an operation on the DFA-recognizable languages, then DFAs are said to be closed under the operation. The DFAs are closed under the following operations:

Union
Intersection
Concatenation
Negation
Kleene closure

Since DFAs are equivalent to nondeterministic finite automata (NFAs), these closures may be proved using closure properties of NFAs.


Accept and Generate modes


A DFA representing a regular language can be used either in an accepting mode to validate that an input string is part of the language, or in a generating mode to generate a list of all the strings in the language.

In the accepting mode an input string is provided which the automaton can read in left to right, one symbol at a time. The computation begins at the start state and proceeds by reading the first symbol from the input string and following the state transition corresponding to that symbol. The system continues reading symbols and following transitions until there are no more symbols in the input, which marks the end of the computation. If after all input symbols have been processed the system is in an accept state then we know that the input string was indeed part of the language, and it is said to be accepted; otherwise it is not part of the language and it is not accepted.

The generating mode is similar except that rather than validating an input string its goal is to produce a list of all the strings in the language. Instead of following a single transition out of each state, it follows all of them. In practice this can be accomplished by massive parallelism (having the program branch into two or more processes each time it is faced with a decision) or through recursion. As before, the computation begins at the start state and then proceeds to follow each available transition, keeping track of which branches it took. Every time the automaton finds itself in an accept state it knows that the sequence of branches it took forms a valid string in the language and it adds that string to the list that it is generating. If the language this automaton describes is infinite (i.e. contains an infinite number of strings, such as "all the binary strings with an even number of 0s") then the computation will never halt. Given that regular languages are, in general, infinite, automata in the generating mode tend to be more of a theoretical construct.
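The generating mode can be sketched as a breadth-first search over the automaton, with the caller bounding the output since the language may be infinite. All names here are illustrative, and the even-number-of-0s DFA is reused as the example:

```python
# Generating mode as breadth-first search: follow every transition,
# emitting a word each time a branch is in an accept state. The output
# is capped at `limit` strings because the language may be infinite.

from collections import deque

DELTA = {('S1', '0'): 'S2', ('S1', '1'): 'S1',
         ('S2', '0'): 'S1', ('S2', '1'): 'S2'}

def generate(delta, start, accepting, alphabet, limit):
    out, queue = [], deque([("", start)])
    while queue and len(out) < limit:
        word, state = queue.popleft()
        if state in accepting:
            out.append(word)
        for c in alphabet:          # branch on every available transition
            queue.append((word + c, delta[(state, c)]))
    return out
```

Breadth-first order means shorter strings are produced before longer ones, which is one standard way to make the enumeration fair despite the infinite language.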

DFA as a transition monoid


Alternatively a run can be seen as a sequence of compositions of the transition function with itself. Given an input symbol a ∈ Σ, one may write the transition function as δ_a : Q → Q, using the simple trick of currying, that is, writing δ(q, a) = δ_a(q) for all q ∈ Q. This way, the transition function can be seen in simpler terms: it's just something that "acts" on a state in Q, yielding another state. One may then consider the result of function composition repeatedly applied to the various functions δ_a, δ_b, and so on. Given a pair of letters a, b ∈ Σ, one may define a new function δ̂_ab by composing the one-letter transitions: δ̂_ab(q) = δ_b(δ_a(q)). Clearly, this process can be recursively continued. So, we have the following recursive definition

δ̂(q, ε) = q and δ̂(q, wa) = δ_a(δ̂(q, w)),

where ε is the empty string and w ∈ Σ*, a ∈ Σ. δ̂ is defined for all words w ∈ Σ*. Repeated function composition forms a monoid. For the transition functions, this monoid is known as the transition monoid, or sometimes the transformation semigroup. The construction can also be reversed: given a δ̂, one can reconstruct a δ, and so the two descriptions are equivalent.
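The curried view and the recursive definition of δ̂ can be written down almost verbatim; this sketch reuses the even-number-of-0s transition table as an illustrative example:

```python
# The extended transition function delta_hat as repeated composition of
# the per-letter functions delta_a, following the recursive definition
# above: delta_hat(q, epsilon) = q; delta_hat(q, wa) = delta_a(delta_hat(q, w)).

from functools import reduce

DELTA = {('S1', '0'): 'S2', ('S1', '1'): 'S1',
         ('S2', '0'): 'S1', ('S2', '1'): 'S2'}

def delta_letter(a):
    """Curried transition: the function delta_a acting on states."""
    return lambda q: DELTA[(q, a)]

def delta_hat(q, word):
    # fold the word through the one-letter actions, left to right
    return reduce(lambda state, a: delta_letter(a)(state), word, q)
```

Each word thus names an element of the transition monoid, namely the state-to-state function it induces; two words are interchangeable for the automaton exactly when they induce the same function.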


Advantages and disadvantages


DFAs were invented to model real-world finite state machines in contrast to the concept of a Turing machine, which was too general to study properties of real-world machines.

DFAs are one of the most practical models of computation, since there is a trivial linear-time, constant-space, online algorithm to simulate a DFA on a stream of input. Also, there are efficient algorithms to find a DFA recognizing:

the complement of the language recognized by a given DFA.
the union/intersection of the languages recognized by two given DFAs.

Because DFAs can be reduced to a canonical form (minimal DFAs), there are also efficient algorithms to determine:

whether a DFA accepts any strings
whether a DFA accepts all strings
whether two DFAs recognize the same language
the DFA with a minimum number of states for a particular regular language

DFAs are equivalent in computing power to nondeterministic finite automata (NFAs). This is because, firstly, any DFA is also an NFA, so an NFA can do what a DFA can do. Also, given an NFA, using the powerset construction one can build a DFA that recognizes the same language as the NFA, although the DFA could have an exponentially larger number of states than the NFA.

On the other hand, finite state automata are of strictly limited power in the languages they can recognize; many simple languages, including any problem that requires more than constant space to solve, cannot be recognized by a DFA. The classical example of a simply described language that no DFA can recognize is the bracket language, i.e., the language that consists of properly paired brackets such as the word "(()())". No DFA can recognize the bracket language because there is no limit to recursion, i.e., one can always embed another pair of brackets inside; it would require an infinite number of states to recognize. Another simpler example is the language consisting of strings of the form a^n b^n: some finite number of a's, followed by an equal number of b's.

Notes
[1] Hopcroft 2001
[2] McCulloch and Pitts (1943)
[3] Rabin and Scott (1959)
[4] Gouda, Prabhakar, Application of Finite Automata
[5] Hopcroft 2001

References
Hopcroft, J. E.; Motwani, R.; Ullman, J. D. (2001). Introduction to Automata Theory, Languages and Computation. Addison Wesley. ISBN 0-201-44124-1.
McCulloch, W. S.; Pitts, W. (1943). "A logical calculus of the ideas immanent in nervous activity". Bulletin of Mathematical Biophysics: 541–544.
Rabin, M. O.; Scott, D. (1959). "Finite automata and their decision problems". IBM J. Res. Develop.: 114–125.
Michael Sipser, Introduction to the Theory of Computation. PWS, Boston. 1997. ISBN 0-534-94728-X. Section 1.1: Finite Automata, pp. 31–47. Subsection "Decidable Problems Concerning Regular Languages" of section 4.1: Decidable Languages, pp. 152–155.


External links
DFA Simulator - an open source graphical editor and simulator of DFA (http://home.arcor.de/kai.w1986/ dfasimulator/)

Context-free language
In formal language theory, a context-free language is a language generated by some context-free grammar. The set of all context-free languages is identical to the set of languages accepted by pushdown automata.

Examples
An archetypical context-free language is L = {aⁿbⁿ : n ≥ 1}, the language of all non-empty even-length strings, the entire first halves of which are a's, and the entire second halves of which are b's. L is generated by the grammar S → aSb | ab, and is accepted by a pushdown automaton.

Context-free languages have many applications in programming languages; for example, the language of all properly matched parentheses is generated by the grammar S → SS | (S) | ε. Also, most arithmetic expressions are generated by context-free grammars.
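For the matched-parentheses language, nesting depth is the only information a recognizer needs, so a single counter, a degenerate case of the stack a pushdown automaton provides, suffices. A minimal sketch (the function name is illustrative):

```python
# Recognize the language of properly matched parentheses with a depth
# counter: increment on '(', decrement on ')', and reject if the depth
# ever goes negative or does not return to zero at the end.

def balanced(s):
    depth = 0
    for c in s:
        depth += 1 if c == '(' else -1
        if depth < 0:        # a ')' with no matching open '('
            return False
    return depth == 0
```

This also illustrates why the language is context-free but not regular: the counter is unbounded, so no fixed finite set of states can replace it.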

Closure properties
Context-free languages are closed under the following operations. That is, if L and P are context-free languages, the following languages are context-free as well:

the union of L and P
the reversal of L
the concatenation of L and P
the Kleene star of L
the image of L under a homomorphism
the image of L under an inverse homomorphism
the cyclic shift of L (the language {vu : uv ∈ L})

Context-free languages are not closed under complement, intersection, or difference. However, if L is a context-free language and D is a regular language then both their intersection and their difference are context-free languages.


Nonclosure under intersection and complement


The context-free languages are not closed under intersection. This can be seen by taking the languages A = {aⁿbⁿcᵐ : m, n ≥ 0} and B = {aᵐbⁿcⁿ : m, n ≥ 0}, which are both context-free. Their intersection is A ∩ B = {aⁿbⁿcⁿ : n ≥ 0}, which can be shown to be non-context-free by the pumping lemma for context-free languages. Context-free languages are also not closed under complementation, as for any languages A and B: A ∩ B = ¬(¬A ∪ ¬B).

Decidability properties
The following problems are undecidable for arbitrary context-free grammars A and B:

Equivalence: is L(A) = L(B)?
Disjointness: is L(A) ∩ L(B) = ∅? (However, the intersection of a context-free language and a regular language is context-free, so if B were a regular language, this problem becomes decidable.)
Containment: is L(A) ⊆ L(B)?
Universality: is L(A) = Σ*?

The following problems are decidable for arbitrary context-free languages:

Emptiness: is L(A) = ∅?
Finiteness: is L(A) finite?
Membership: given any word w, does w ∈ L(A)? (The membership problem is even polynomially decidable; see the CYK algorithm and Earley's algorithm.)
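The decidability of membership can be made concrete with the CYK algorithm, which requires the grammar in Chomsky normal form. The toy CNF grammar below (an assumption of this example, not taken from the text) generates the non-empty balanced-parenthesis strings:

```python
# CYK membership test for a grammar in Chomsky normal form.
# CNF rules used here: S -> SS | LR | LT, T -> SR, L -> '(', R -> ')'
# (a CNF version of S -> SS | () | (S), i.e. non-empty balanced parens).

UNARY = {'(': {'L'}, ')': {'R'}}
BINARY = {('S', 'S'): {'S'}, ('L', 'R'): {'S'},
          ('L', 'T'): {'S'}, ('S', 'R'): {'T'}}

def cyk(word, start='S'):
    n = len(word)
    if n == 0:
        return False
    # table[i][l] = nonterminals deriving the substring of length l at i
    table = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, c in enumerate(word):
        table[i][1] = set(UNARY.get(c, ()))
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            for split in range(1, length):
                for a in table[i][split]:
                    for b in table[i + split][length - split]:
                        table[i][length] |= BINARY.get((a, b), set())
    return start in table[0][n]
```

The triple loop over length, position, and split point is what gives the classical O(n³·|G|) bound, the "polynomially decidable" claim above.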

Properties of context-free languages


The reverse of a context-free language is context-free, but the complement need not be.
Every regular language is context-free because it can be described by a context-free grammar.
The intersection of a context-free language and a regular language is always context-free.
There exist context-sensitive languages which are not context-free.
To prove that a given language is not context-free, one may employ the pumping lemma for context-free languages or a number of other methods, such as Ogden's lemma, Parikh's theorem, or using closure properties.[1]
Context-free languages are closed under union, concatenation, and Kleene star.[2]

Parsing
Determining an instance of the membership problem, i.e. given a string w, determining whether w ∈ L(G) where L(G) is the language generated by some grammar G, is also known as parsing.

Formally, the set of all context-free languages is identical to the set of languages accepted by pushdown automata (PDA). Parser algorithms for context-free languages include the CYK algorithm and Earley's algorithm.

A special subclass of context-free languages are the deterministic context-free languages, which are defined as the set of languages accepted by a deterministic pushdown automaton and can be parsed by an LR(k) parser.[3]

See also parsing expression grammar as an alternative approach to grammar and parser.


References
[1] How to prove that a language is not context-free? (http://cs.stackexchange.com/questions/265/how-to-prove-that-a-language-is-not-context-free)
[2] (http://infolab.stanford.edu/~ullman/ialc/spr10/slides/cfl5.pdf)
[3] Knuth, Donald (July 1965). "On the Translation of Languages from Left to Right" (http://www.cs.dartmouth.edu/~mckeeman/cs48/mxcom/doc/knuth65.pdf). Information and Control 8: 607–639. Retrieved 29 May 2011.

Seymour Ginsburg (1966). The Mathematical Theory of Context-Free Languages. New York, NY, USA: McGraw-Hill.
Michael Sipser (1997). Introduction to the Theory of Computation. PWS Publishing. ISBN 0-534-94728-X. Chapter 2: Context-Free Languages, pp. 91–122.
Jean-Michel Autebert, Jean Berstel, Luc Boasson, Context-Free Languages and Push-Down Automata (http://www-igm.univ-mlv.fr/~berstel/Articles/1997CFLPDA.pdf), in: G. Rozenberg, A. Salomaa (eds.), Handbook of Formal Languages, Vol. 1, Springer-Verlag, 1997, 111–174.

Pushdown automaton
In computer science, a pushdown automaton (PDA) is a type of automaton that employs a stack. The PDA is used in theories about what can be computed by machines. It is more capable than a finite-state machine but less capable than a Turing machine. Because its input can be described with a formal grammar, it can be used in parser design. The deterministic pushdown automaton can handle all deterministic context-free languages while the nondeterministic version can handle all context-free languages. The term "pushdown" refers to the fact that the stack can be regarded as being "pushed down" like a tray dispenser at a cafeteria, since the operations never work on elements other than the top element. A stack automaton, by contrast, does allow access to and operations on deeper elements. Stack automata can recognize a strictly larger set of languages than deterministic pushdown automata. A nested stack automaton allows full access, and also allows stacked values to be entire sub-stacks rather than just single finite symbols. The remainder of this article describes the nondeterministic pushdown automaton.

Operation
Pushdown automata differ from finite state machines in two ways:

1. They can use the top of the stack to decide which transition to take.
2. They can manipulate the stack as part of performing a transition.

a diagram of the pushdown automaton

Pushdown automata choose a transition by indexing a table by input signal, current state, and the symbol at the top of the stack. This means that those three parameters completely determine the transition path that is chosen. Finite state machines just look at the input signal and the current state: they have no stack to work with. Pushdown automata add the stack as a parameter for choice.

Pushdown automata can also manipulate the stack, as part of performing a transition. Finite state machines choose a new state, the result of following the transition. The manipulation can be to push a particular symbol to the top of the

stack, or to pop off the top of the stack. The automaton can alternatively ignore the stack, and leave it as it is. The choice of manipulation (or no manipulation) is determined by the transition table. Put together: given an input signal, current state, and stack symbol, the automaton can follow a transition to another state, and optionally manipulate (push or pop) the stack.

In general, pushdown automata may have several computations on a given input string, some of which may be halting in accepting configurations. If in every situation only one transition is available as continuation of the computation, the result is a deterministic pushdown automaton (DPDA); otherwise it is a nondeterministic PDA (NDPDA or NPDA). Only context-free languages for which an unambiguous grammar exists can be recognized by a DPDA, namely the deterministic context-free languages. Not all context-free languages are deterministic, and the problem of deciding whether a context-free language is deterministic is unsolvable.[1] As a consequence of the above, the DPDA is a strictly weaker variant of the PDA, and there exists no algorithm for converting a PDA to an equivalent DPDA, if such a DPDA exists.

If we allow a finite automaton access to two stacks instead of just one, we obtain a more powerful device, equivalent in power to a Turing machine. A linear bounded automaton is a device which is more powerful than a pushdown automaton but less so than a Turing machine.


Relation to backtracking
Nondeterministic PDAs are able to handle situations where more than one choice of action is available. In principle it is enough to create, in every such case, new automaton instances that will handle the extra choices. The problem with this approach is that in practice most of these instances quickly fail. This can severely affect the automaton's performance, as the execution of multiple instances is a costly operation. Situations such as these can be identified in the design phase of the automaton by examining the grammar the automaton uses. This makes possible the use of backtracking in every such case in order to improve performance.

Formal Definition
We use standard formal language notation: Σ* denotes the set of strings over alphabet Σ, and ε denotes the empty string.

A PDA is formally defined as a 7-tuple

M = (Q, Σ, Γ, δ, q0, Z, F), where

Q is a finite set of states
Σ is a finite set which is called the input alphabet
Γ is a finite set which is called the stack alphabet
δ is a finite subset of Q × (Σ ∪ {ε}) × Γ × Q × Γ*, the transition relation
q0 ∈ Q is the start state
Z ∈ Γ is the initial stack symbol
F ⊆ Q is the set of accepting states

An element (p, a, A, q, α) ∈ δ is a transition of M. It has the intended meaning that M, in state p, with a on the input and A as topmost stack symbol, may read a, change the state to q, and pop A, replacing it by pushing α. The ε component of the transition relation is used to formalize that the PDA can either read a letter from the input, or proceed leaving the input untouched.

In many texts the transition relation is replaced by an (equivalent) formalization, where δ is the transition function, mapping Q × (Σ ∪ {ε}) × Γ into finite subsets of Q × Γ*. Here δ(p, a, A) contains all possible actions in state p with A on the stack, while reading a on the input. One writes (q, α) ∈ δ(p, a, A) for the function precisely when (p, a, A, q, α) ∈ δ for the relation. Note that finiteness

in this definition is essential.

Computations

In order to formalize the semantics of the pushdown automaton, a description of the current situation is introduced. Any 3-tuple (p, w, β) ∈ Q × Σ* × Γ* is called an instantaneous description (ID) of M, which includes the current state, the part of the input tape that has not been read, and the contents of the stack (topmost symbol written first). The transition relation δ defines the step-relation ⊢ of M on instantaneous descriptions. For instruction (p, a, A, q, α) ∈ δ there exists a step (p, ax, Aγ) ⊢ (q, x, αγ), for every x ∈ Σ* and every γ ∈ Γ*.

In general pushdown automata are nondeterministic, meaning that in a given instantaneous description there may be several possible steps. Any of these steps can be chosen in a computation. With the above definition, in each step always a single symbol (top of the stack) is popped, replacing it with as many symbols as necessary. As a consequence no step is defined when the stack is empty.

a step of the pushdown automaton

Computations of the pushdown automaton are sequences of steps. The computation starts in the initial state q0 with the initial stack symbol Z on the stack, and a string w on the input tape, thus with initial description (q0, w, Z).

There are two modes of accepting. The pushdown automaton either accepts by final state, which means that after reading its input the automaton reaches an accepting state (in F), or it accepts by empty stack, which means that after reading its input the automaton empties its stack. The first acceptance mode uses the internal memory (state), the second the external memory (stack). Formally one defines

1. L(M) = { w ∈ Σ* : (q0, w, Z) ⊢* (f, ε, γ) with f ∈ F and γ ∈ Γ* }  (final state)
2. N(M) = { w ∈ Σ* : (q0, w, Z) ⊢* (q, ε, ε) with q ∈ Q }  (empty stack)

Here ⊢* represents the reflexive and transitive closure of the step relation ⊢, meaning any number of consecutive steps (zero, one or more).

For each single pushdown automaton these two languages need have no relation: they may be equal, but usually this is not the case. A specification of the automaton should also include the intended mode of acceptance. Taken over all pushdown automata, both acceptance conditions define the same family of languages.

Theorem. For each pushdown automaton M one may construct a pushdown automaton M' such that L(M') = N(M), and vice versa: for each pushdown automaton M one may construct a pushdown automaton M' such that N(M') = L(M).


Example
The following is the formal description of the PDA which recognizes the language {0ⁿ1ⁿ : n ≥ 0} by final state:

M = ({p, q, r}, {0, 1}, {A, Z}, δ, p, Z, {r})

PDA for {0ⁿ1ⁿ : n ≥ 0} (by final state)

δ consists of the following six instructions:

(p, 0, Z, p, AZ), (p, 0, A, p, AA), (p, ε, Z, q, Z), (p, ε, A, q, A), (q, 1, A, q, ε), and (q, ε, Z, r, Z).

In words, in state p for each symbol 0 read, one A is pushed onto the stack. Pushing symbol A on top of another A is formalized as replacing top A by AA. In state q for each symbol 1 read one A is popped. At any moment the automaton may move from state p to state q, while it may move from state q to accepting state r only when the stack consists of a single Z.

There seems to be no generally used representation for PDA. Here we have depicted the instruction (p, a, A, q, α) by an edge from state p to state q labelled by a; A/α (read a; replace A by α).

Understanding the computation process


The following illustrates how the above PDA computes on different input strings. The subscript M from the step symbol ⊢ is here omitted.

(a) Input string = 0011. There are various computations, depending on the moment the move from state p to state q is made. Only one of these is accepting.

(i) (p, 0011, Z) ⊢ (q, 0011, Z) ⊢ (r, 0011, Z). The final state is accepting, but the input is not accepted this way as it has not been read.
(ii) (p, 0011, Z) ⊢ (p, 011, AZ) ⊢ (q, 011, AZ). No further steps possible.
(iii) (p, 0011, Z) ⊢ (p, 011, AZ) ⊢ (p, 11, AAZ) ⊢ (q, 11, AAZ) ⊢ (q, 1, AZ) ⊢ (q, ε, Z) ⊢ (r, ε, Z). Accepting computation: ends in accepting state, while complete input has been read.

accepting computation for 0011

(b) Input string = 00111. Again there are various computations. None of these is accepting.

(i) (p, 00111, Z) ⊢ (q, 00111, Z) ⊢ (r, 00111, Z). The final state is accepting, but the input is not accepted this way as it has not been read.
(ii) (p, 00111, Z) ⊢ (p, 0111, AZ) ⊢ (q, 0111, AZ). No further steps possible.
(iii) (p, 00111, Z) ⊢ (p, 0111, AZ) ⊢ (p, 111, AAZ) ⊢ (q, 111, AAZ) ⊢ (q, 11, AZ) ⊢ (q, 1, Z) ⊢ (r, 1, Z). The final state is accepting, but the input is not accepted this way as it has not been (completely) read.
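The nondeterministic exploration of all computations can be sketched as a breadth-first search over instantaneous descriptions. The six instructions below encode the example PDA for 0ⁿ1ⁿ (acceptance by final state); the state names p, q, r and the string encoding of the stack (topmost symbol first) are representational choices of this sketch:

```python
# Breadth-first simulation of a nondeterministic PDA, acceptance by
# final state. A configuration is (state, remaining input, stack);
# '' stands for an epsilon move, and a visited-set avoids reexploring
# configurations.

from collections import deque

DELTA = [('p', '0', 'Z', 'p', 'AZ'), ('p', '0', 'A', 'p', 'AA'),
         ('p', '',  'Z', 'q', 'Z'),  ('p', '',  'A', 'q', 'A'),
         ('q', '1', 'A', 'q', ''),   ('q', '',  'Z', 'r', 'Z')]

def pda_accepts(word, start='p', stack='Z', final={'r'}):
    seen, queue = set(), deque([(start, word, stack)])
    while queue:
        state, rest, st = queue.popleft()
        if (state, rest, st) in seen:
            continue
        seen.add((state, rest, st))
        if state in final and rest == '':
            return True                    # accepting ID reached
        for (p, a, top, q, push) in DELTA:
            if p != state or not st or st[0] != top:
                continue
            if a == '':                    # epsilon move: input untouched
                queue.append((q, rest, push + st[1:]))
            elif rest and rest[0] == a:    # read one input symbol
                queue.append((q, rest[1:], push + st[1:]))
    return False
```

On input 0011 the search finds computation (iii) above; on 00111 every branch dies out, mirroring the case analysis in the text.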


PDA and Context-free Languages


Every context-free grammar can be transformed into an equivalent pushdown automaton. The derivation process of the grammar is simulated in a leftmost way. Where the grammar rewrites a nonterminal, the PDA takes the topmost nonterminal from its stack and replaces it by the right-hand part of a grammatical rule (expand). Where the grammar generates a terminal symbol, the PDA reads a symbol from input when it is the topmost symbol on the stack (match). In a sense the stack of the PDA contains the unprocessed data of the grammar, corresponding to a pre-order traversal of a derivation tree.

Technically, given a context-free grammar, the PDA is constructed as follows.

1. (q, ε, A, q, α) for each rule A → α (expand)
2. (q, a, a, q, ε) for each terminal symbol a (match)

As a result we obtain a single-state pushdown automaton, the state here is q, accepting the context-free language by empty stack. Its initial stack symbol equals the axiom of the context-free grammar.

The converse, finding a grammar for a given PDA, is not that easy. The trick is to code two states of the PDA into the nonterminals of the grammar.

Theorem. For each pushdown automaton M one may construct a context-free grammar G such that N(M) = L(G).
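The expand/match construction can be simulated directly. This sketch explores the single-state PDA's nondeterminism breadth-first; to keep the search finite it prunes any branch whose stack already holds more terminals than input remains (a heuristic of this sketch, sound because expansion only adds symbols). The lowercase-terminal convention and the example grammar S → aSb | ab are assumptions of the example:

```python
# Simulate the grammar-to-PDA construction: stack starts with the
# axiom; a nonterminal on top is expanded by some rule's right-hand
# side, a terminal on top is matched against (and consumes) the input.
# Acceptance is by empty stack with all input read.

from collections import deque

def cfg_pda_accepts(word, rules, axiom='S'):
    # convention of this sketch: lowercase symbols are terminals
    terminals = set(word) | {c for _, rhs in rules for c in rhs if c.islower()}
    queue, seen = deque([(word, axiom)]), set()
    while queue:
        rest, stack = queue.popleft()
        if (rest, stack) in seen:
            continue
        seen.add((rest, stack))
        if rest == '' and stack == '':
            return True            # input consumed, stack empty: accept
        if stack == '':
            continue
        top, below = stack[0], stack[1:]
        if top in terminals:       # match: pop the terminal while reading it
            if rest and rest[0] == top:
                queue.append((rest[1:], below))
        else:                      # expand: replace the nonterminal by a rhs
            for lhs, rhs in rules:
                if lhs == top and sum(c in terminals for c in rhs + below) <= len(rest):
                    queue.append((rest, rhs + below))
    return False

RULES = [('S', 'aSb'), ('S', 'ab')]   # the a^n b^n grammar, n >= 1
```

The stack contents at any point are exactly the unprocessed frontier of a leftmost derivation, as the text describes.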

Generalized Pushdown Automaton (GPDA)


A GPDA is a PDA which writes an entire string of some known length to the stack or removes an entire string from the stack in one step.

A GPDA is formally defined as a 6-tuple

M = (Q, Σ, Γ, δ, q0, F)

where Q, Σ, Γ, q0 and F are defined the same way as for a PDA, and

δ : Q × Σ* × Γ* → finite subsets of Q × Γ*

is the transition function.

Computation rules for a GPDA are the same as for a PDA, except that the ai+1's and bi+1's are now strings instead of symbols.

GPDAs and PDAs are equivalent, in that if a language is recognized by a PDA, it is also recognized by a GPDA and vice versa. One can formulate an analytic proof for the equivalence of GPDAs and PDAs using the following simulation:

Let (q1, w, x1x2...xm) → (q2, y1y2...yn) be a transition of the GPDA, where w ∈ Σ*, x1, ..., xm ∈ Γ, and y1, ..., yn ∈ Γ.

Construct the following transitions for the PDA:

(q1, w, x1) → (p1, ε)
(p1, ε, x2) → (p2, ε)
...
(pm−1, ε, xm) → (pm, ε)
(pm, ε, ε) → (pm+1, yn)
(pm+1, ε, ε) → (pm+2, yn−1)
...
(pm+n−1, ε, ε) → (q2, y1)


References
[1] Greibach, Sheila (October 1966). "The Unsolvability of the Recognition of Linear Context-Free Languages". Journal of the ACM 13 (4).

Michael Sipser (1997). Introduction to the Theory of Computation. PWS Publishing. ISBN 0-534-94728-X. Section 2.2: Pushdown Automata, pp. 101–114.
Jean-Michel Autebert, Jean Berstel, Luc Boasson, Context-Free Languages and Push-Down Automata (http://www-igm.univ-mlv.fr/~berstel/Articles/1997CFLPDA.pdf), in: G. Rozenberg, A. Salomaa (eds.), Handbook of Formal Languages, Vol. 1, Springer-Verlag, 1997, 111–174.

External links
Non-deterministic pushdown automaton (http://planetmath.org/encyclopedia/PushdownAutomaton.html), on PlanetMath.
JFLAP (http://www.jflap.org), a simulator for several types of automata, including nondeterministic pushdown automata.

Recursively enumerable set


In computability theory, traditionally called recursion theory, a set S of natural numbers is called recursively enumerable, computably enumerable, semidecidable, provable or Turing-recognizable if:

- There is an algorithm such that the set of input numbers for which the algorithm halts is exactly S.

Or, equivalently,

- There is an algorithm that enumerates the members of S. That means that its output is simply a list of the members of S: s1, s2, s3, ... . If necessary, this algorithm may run forever.

The first condition suggests why the term semidecidable is sometimes used; the second suggests why computably enumerable is used. The abbreviations r.e. and c.e. are often used, even in print, instead of the full phrase. In computational complexity theory, the complexity class containing all recursively enumerable sets is RE. In recursion theory, the lattice of r.e. sets under inclusion is denoted ℰ.

Formal definition
A set S of natural numbers is called recursively enumerable if there is a partial recursive function (synonymously, a partial computable function) whose domain is exactly S, meaning that the function is defined if and only if its input is a member of S.

Equivalent formulations
The following are all equivalent properties of a set S of natural numbers:

Semidecidability:
- The set S is recursively enumerable. That is, S is the domain (co-range) of a partial recursive function.
- There is a partial recursive function f such that f(x) = 0 if x is in S, and f(x) is undefined if x is not in S.

Enumerability:
- The set S is the range of a partial recursive function.

- The set S is the range of a total recursive function, or empty. If S is infinite, the function can be chosen to be injective.
- The set S is the range of a primitive recursive function, or empty. Even if S is infinite, repetition of values may be necessary in this case.

Diophantine:
- There is a polynomial p with integer coefficients and variables x, a, b, c, d, e, f, g, h, i ranging over the natural numbers such that x is in S if and only if there exist a, b, c, d, e, f, g, h, i with p(x, a, b, c, d, e, f, g, h, i) = 0.
- There is a polynomial from the integers to the integers such that the set S contains exactly the non-negative numbers in its range.

The equivalence of semidecidability and enumerability can be obtained by the technique of dovetailing. The Diophantine characterizations of a recursively enumerable set, while not as straightforward or intuitive as the first definitions, were found by Yuri Matiyasevich as part of the negative solution to Hilbert's Tenth Problem. Diophantine sets predate recursion theory and are therefore historically the first way to describe these sets (although this equivalence was only remarked more than three decades after the introduction of recursively enumerable sets). The number of bound variables in the above Diophantine definition is the best known so far; it might be that a lower number can be used to define all Diophantine sets.
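The dovetailing technique mentioned above can be made concrete in a short sketch. The names here are hypothetical: `accepts_within(n, t)` stands in for "run the semidecision procedure on input n for t steps and report whether it has halted", faked below with a toy set (the perfect squares). Dovetailing runs every input for every step bound, so each member of the set is eventually emitted, which turns a semidecider into an enumerator.

```python
import math

def accepts_within(n, t):
    # Toy stand-in for a semidecision procedure: pretend the machine on
    # input n halts after n steps exactly when n is a perfect square.
    return math.isqrt(n) ** 2 == n and t >= n

def enumerate_set(bound):
    """Dovetail over pairs (n, t): at stage s, run every input n <= s
    for s steps. No single non-halting input can block the enumeration."""
    emitted = []
    for s in range(bound):
        for n in range(s + 1):
            if n not in emitted and accepts_within(n, s):
                emitted.append(n)
    return emitted

print(enumerate_set(30))  # [0, 1, 4, 9, 16, 25]
```

Running a real semidecider in place of the toy `accepts_within` requires a step-bounded simulator, but the stage-by-stage schedule is exactly the same.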

Examples
- Every recursive set is recursively enumerable, but it is not true that every recursively enumerable set is recursive.
- A recursively enumerable language is a recursively enumerable subset of a formal language.
- The set of all provable sentences in an effectively presented axiomatic system is a recursively enumerable set.
- Matiyasevich's theorem states that every recursively enumerable set is a Diophantine set (the converse is trivially true).
- The simple sets are recursively enumerable but not recursive.
- The creative sets are recursively enumerable but not recursive.
- Any productive set is not recursively enumerable.
- Given a Gödel numbering φ of the computable functions, the set {⟨i, x⟩ : φi(x) is defined} (where ⟨i, x⟩ is the Cantor pairing function applied to i and x) is recursively enumerable. This set encodes the halting problem, as it describes the input parameters for which each Turing machine halts.
- Given a Gödel numbering φ of the computable functions, the set {⟨x, y, z⟩ : φx(y) = z} is recursively enumerable. This set encodes the problem of deciding a function value.
- Given a partial function f from the natural numbers into the natural numbers, f is a partial recursive function if and only if the graph of f, that is, the set of all pairs ⟨x, f(x)⟩ such that f(x) is defined, is recursively enumerable.

Properties
- If A and B are recursively enumerable sets then A ∩ B, A ∪ B and A × B (with the ordered pair of natural numbers mapped to a single natural number with the Cantor pairing function) are recursively enumerable sets.
- The preimage of a recursively enumerable set under a partial recursive function is a recursively enumerable set.
- A set is recursively enumerable if and only if it is at level Σ⁰₁ of the arithmetical hierarchy.
- A set is called co-recursively enumerable or co-r.e. if its complement is recursively enumerable. Equivalently, a set is co-r.e. if and only if it is at level Π⁰₁ of the arithmetical hierarchy.
- A set A is recursive (synonym: computable) if and only if both A and the complement of A are recursively enumerable.
- A set is recursive if and only if it is either the range of an increasing total recursive function or finite.

- Some pairs of recursively enumerable sets are effectively separable and some are not.
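Closure under union, for instance, follows by interleaving two enumerations. The generators below are hypothetical examples standing in for enumerators of two r.e. sets; taking items from each alternately never waits forever on one side, so every member of either set eventually appears.

```python
# Sketch: A union B is r.e. when A and B are, shown by interleaving
# two (possibly never-ending) enumerators.

def evens():
    n = 0
    while True:
        yield n
        n += 2

def powers_of_three():
    p = 1
    while True:
        yield p
        p *= 3

def union_enumerator(a, b):
    # Alternate strictly between the two enumerations.
    while True:
        yield next(a)
        yield next(b)

gen = union_enumerator(evens(), powers_of_three())
first_ten = [next(gen) for _ in range(10)]
print(first_ten)  # [0, 1, 2, 3, 4, 9, 6, 27, 8, 81]
```

The same alternating schedule, with a step-bounded simulation in place of the generators, is the standard proof that the union of two r.e. sets is r.e.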


Remarks
According to the Church-Turing thesis, any effectively calculable function is calculable by a Turing machine, and thus a set S is recursively enumerable if and only if there is some algorithm which yields an enumeration of S. This cannot be taken as a formal definition, however, because the Church-Turing thesis is an informal conjecture rather than a formal axiom.

The definition of a recursively enumerable set as the domain of a partial function, rather than the range of a total recursive function, is common in contemporary texts. This choice is motivated by the fact that in generalized recursion theories, such as α-recursion theory, the definition corresponding to domains has been found to be more natural. Other texts use the definition in terms of enumerations, which is equivalent for recursively enumerable sets.

References
Rogers, H. The Theory of Recursive Functions and Effective Computability, MIT Press. ISBN 0-262-68052-1; ISBN 0-07-053522-1.
Soare, R. Recursively Enumerable Sets and Degrees. Perspectives in Mathematical Logic. Springer-Verlag, Berlin, 1987. ISBN 3-540-15299-7.
Soare, Robert I. "Recursively enumerable sets and degrees." Bull. Amer. Math. Soc. 84 (1978), no. 6, 1149-1181.

Turing machine
A Turing machine is a device that manipulates symbols on a strip of tape according to a table of rules. Despite its simplicity, a Turing machine can be adapted to simulate the logic of any computer algorithm, and is particularly useful in explaining the functions of a CPU inside a computer.

[Figure: An artistic representation of a Turing machine (rules table not represented).]

The "Turing" machine was described in 1936 by Alan Turing,[1] who called it an "a-machine" (automatic machine). The Turing machine is not intended as practical computing technology, but rather as a hypothetical device representing a computing machine. Turing machines help computer scientists understand the limits of mechanical computation.

Turing gave a succinct definition of the experiment in his 1948 essay, "Intelligent Machinery". Referring to his 1936 publication, Turing wrote that the Turing machine, here called a Logical Computing Machine, consisted of:

...an unlimited memory capacity obtained in the form of an infinite tape marked out into squares, on each of which a symbol could be printed. At any moment there is one symbol in the machine; it is called the scanned symbol. The machine can alter the scanned symbol and its behaviour is in part determined by that symbol, but the symbols on the tape elsewhere do not affect the behaviour of the machine. However, the tape can be moved back and forth through the machine, this being one of the elementary operations of the machine. Any symbol on the tape may therefore eventually have an innings.[2] (Turing 1948, p. 61)

A Turing machine that is able to simulate any other Turing machine is called a universal Turing machine (UTM, or simply a universal machine). A more mathematically oriented definition with a similar "universal" nature was introduced by Alonzo Church, whose work on lambda calculus intertwined with Turing's in a formal theory of computation known as the Church-Turing thesis. The thesis states that Turing machines indeed capture the informal notion of effective method in logic and mathematics, and provide a precise definition of an algorithm or "mechanical procedure". Studying their abstract properties yields many insights into computer science and complexity theory.

Informal description
For visualizations of Turing machines, see Turing machine gallery.

The Turing machine mathematically models a machine that mechanically operates on a tape. On this tape are symbols, which the machine can read and write, one at a time, using a tape head. Operation is fully determined by a finite set of elementary instructions such as "in state 42, if the symbol seen is 0, write a 1; if the symbol seen is 1, change into state 17; in state 17, if the symbol seen is 0, write a 1 and change to state 6;" etc. In the original article ("On computable numbers, with an application to the Entscheidungsproblem", see also references below), Turing imagines not a mechanism, but a person whom he calls the "computer", who executes these deterministic mechanical rules slavishly (or as Turing puts it, "in a desultory manner").

More precisely, a Turing machine consists of:

1. A tape which is divided into cells, one next to the other. Each cell contains a symbol from some finite alphabet. The alphabet contains a special blank symbol (here written as 'B') and one or more other symbols. The tape is assumed to be arbitrarily extendable to the left and to the right, i.e., the Turing machine is always supplied with as much tape as it needs for its computation. Cells that have not been written to before are assumed to be filled with the blank symbol. In some models the tape has a left end marked with a special symbol; the tape extends or is indefinitely extensible to the right.

[Figure: The head is always over a particular square of the tape; only a finite stretch of squares is shown. The instruction to be performed (q4) is shown over the scanned square. (Drawing after Kleene (1952) p. 375.)]

[Figure: Here, the internal state (q1) is shown inside the head, and the illustration describes the tape as being infinite and pre-filled with "0", the symbol serving as blank. The system's full state (its "configuration") consists of the internal state, the contents of the shaded squares including the blank scanned by the head ("11B"), and the position of the head. (Drawing after Minsky (1967) p. 121.)]

2. A head that can read and write symbols on the tape and move the tape left and right one (and only one) cell at a time. In some models the head moves and the tape is stationary.

3. A state register that stores the state of the Turing machine, one of finitely many. There is one special start state with which the state register is initialized. These states, writes Turing, replace the "state of mind" a person performing computations would ordinarily be in.

4. A finite table (occasionally called an action table or transition function) of instructions (usually quintuples [5-tuples]: qi aj → qi1 aj1 dk, but sometimes 4-tuples) that, given the state (qi) the machine is currently in and the symbol (aj) it is reading on the tape (the symbol currently under the head), tells the machine to do the following in sequence (for the 5-tuple models):
- Either erase or write a symbol (replacing aj with aj1), and then
- Move the head (which is described by dk and can have values: 'L' for one step left, 'R' for one step right, or 'N' for staying in the same place), and then
- Assume the same or a new state as prescribed (go to state qi1).

In the 4-tuple models, erasing or writing a symbol (aj1) and moving the head left or right (dk) are specified as separate instructions. Specifically, the table tells the machine to (ia) erase or write a symbol or (ib) move the head left or right, and then (ii) assume the same or a new state as prescribed, but not both actions (ia) and (ib) in the same instruction. In some models, if there is no entry in the table for the current combination of symbol and state, then the machine will halt; other models require all entries to be filled.

Note that every part of the machine (its state and symbol-collections) and its actions (printing, erasing and tape motion) is finite, discrete and distinguishable; it is the potentially unlimited amount of tape that gives it an unbounded amount of storage space.

Formal definition
Hopcroft and Ullman (1979, p. 148) formally define a (one-tape) Turing machine as a 7-tuple M = ⟨Q, Γ, b, Σ, δ, q0, F⟩ where:
- Q is a finite, non-empty set of states
- Γ is a finite, non-empty set of the tape alphabet/symbols
- b ∈ Γ is the blank symbol (the only symbol allowed to occur on the tape infinitely often at any step during the computation)
- Σ ⊆ Γ \ {b} is the set of input symbols
- q0 ∈ Q is the initial state
- F ⊆ Q is the set of final or accepting states
- δ : (Q \ F) × Γ → Q × Γ × {L, R} is a partial function called the transition function, where L is left shift, R is right shift. (A relatively uncommon variant allows "no shift", say N, as a third element of the latter set.)

Anything that operates according to these specifications is a Turing machine.

The 7-tuple for the 3-state busy beaver looks like this (see more about this busy beaver at Turing machine examples):
- Q = {A, B, C, HALT}
- Γ = {0, 1}
- b = 0 ("blank")
- Σ = {1}
- q0 = A (the initial state)
- F = {HALT}
- δ = see state table below

Initially all tape cells are marked with 0.

State table for 3 state, 2 symbol busy beaver


Tape symbol   Current state A           Current state B           Current state C
              Write  Move  Next        Write  Move  Next        Write  Move  Next
0             1      R     B           1      L     A           1      L     B
1             1      L     C           1      R     B           1      R     HALT
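A minimal simulator makes the formal definition concrete. The sketch below is not from the article: it runs the 3-state busy beaver using the common "move the head" convention (L = head left, R = head right) and the 5-tuple version of the halting transition, (C, 1, 1, N, H), whose final move is immaterial to the result.

```python
# Sketch: a one-tape Turing machine simulator; the tape is a dict of
# cell -> symbol, with unwritten cells defaulting to the blank 0.

def run_turing_machine(table, start, halt, max_steps=10_000):
    tape, head, state, steps = {}, 0, start, 0
    while state != halt and steps < max_steps:
        symbol = tape.get(head, 0)                  # read the scanned symbol
        write, move, state = table[(state, symbol)] # look up the 5-tuple
        tape[head] = write                          # print
        head += {"L": -1, "R": 1, "N": 0}[move]     # move the head
        steps += 1
    return tape, steps

busy_beaver = {
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "C"),
    ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "B"),
    ("C", 0): (1, "L", "B"), ("C", 1): (1, "N", "HALT"),  # halting transition
}

tape, steps = run_turing_machine(busy_beaver, "A", "HALT")
print(sum(tape.values()), steps)  # 6 13: six 1s written, halting after 13 steps
```

Running it reproduces the known behaviour of this machine: it writes six 1s and halts.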


Additional details required to visualize or implement Turing machines


In the words of van Emde Boas (1990), p. 6: "The set-theoretical object [his formal seven-tuple description similar to the above] provides only partial information on how the machine will behave and what its computations will look like." For instance:
- There will need to be many decisions on what the symbols actually look like, and a failproof way of reading and writing symbols indefinitely.
- The shift left and shift right operations may shift the tape head across the tape, but when actually building a Turing machine it is more practical to make the tape slide back and forth under the head instead.
- The tape can be finite, and automatically extended with blanks as needed (which is closest to the mathematical definition), but it is more common to think of it as stretching infinitely at both ends and being pre-filled with blanks except on the explicitly given finite fragment the tape head is on. (This is, of course, not implementable in practice.) The tape cannot be fixed in length, since that would not correspond to the given definition and would seriously limit the range of computations the machine can perform to those of a linear bounded automaton.

Alternative definitions
Definitions in the literature sometimes differ slightly, to make arguments or proofs easier or clearer, but this is always done in such a way that the resulting machine has the same computational power. For example, changing the set {L, R} to {L, R, N}, where N ("None" or "No-operation") would allow the machine to stay on the same tape cell instead of moving left or right, does not increase the machine's computational power.

The most common convention represents each "Turing instruction" in a "Turing table" by one of nine 5-tuples, per the convention of Turing/Davis (Turing (1936) in Undecidable, p. 126-127 and Davis (2000) p. 152):

(definition 1): (qi, Sj, Sk/E/N, L/R/N, qm)
( current state qi, symbol scanned Sj, print symbol Sk / erase E / none N, move_tape_one_square left L / right R / none N, new state qm )

Other authors (Minsky (1967) p. 119, Hopcroft and Ullman (1979) p. 158, Stone (1972) p. 9) adopt a different convention, with the new state qm listed immediately after the scanned symbol Sj:

(definition 2): (qi, Sj, qm, Sk/E/N, L/R/N)
( current state qi, symbol scanned Sj, new state qm, print symbol Sk / erase E / none N, move_tape_one_square left L / right R / none N )

For the remainder of this article "definition 1" (the Turing/Davis convention) will be used.

Example: state table for the 3-state 2-symbol busy beaver reduced to 5-tuples
Current state   Scanned symbol   Print symbol   Move tape   Final (i.e. next) state   5-tuple
A               0                1              R           B                         (A, 0, 1, R, B)
A               1                1              L           C                         (A, 1, 1, L, C)
B               0                1              L           A                         (B, 0, 1, L, A)
B               1                1              R           B                         (B, 1, 1, R, B)
C               0                1              L           B                         (C, 0, 1, L, B)
C               1                1              N           H                         (C, 1, 1, N, H)

In the following table, Turing's original model allowed only the first three lines, which he called N1, N2, N3 (cf Turing in Undecidable, p. 126). He allowed for erasure of the "scanned square" by naming a 0th symbol S0 = "erase" or "blank", etc. However, he did not allow for non-printing, so every instruction-line includes "print symbol Sk" or "erase" (cf footnote 12 in Post (1947), Undecidable p. 300). The abbreviations are Turing's (Undecidable p. 119). Subsequent to Turing's original paper in 1936-1937, machine-models have allowed all nine possible types of five-tuples:

     Current m-config  Tape    Print-     Tape-     Final m-config  5-tuple              4-tuple           Comment
     (Turing state)    symbol  operation  motion    (Turing state)
N1   qi                Sj      Print(Sk)  Left L    qm              (qi, Sj, Sk, L, qm)                    "blank" = S0, 1 = S1, etc.
N2   qi                Sj      Print(Sk)  Right R   qm              (qi, Sj, Sk, R, qm)                    "blank" = S0, 1 = S1, etc.
N3   qi                Sj      Print(Sk)  None N    qm              (qi, Sj, Sk, N, qm)  (qi, Sj, Sk, qm)  "blank" = S0, 1 = S1, etc.
4    qi                Sj      None N     Left L    qm              (qi, Sj, N, L, qm)   (qi, Sj, L, qm)
5    qi                Sj      None N     Right R   qm              (qi, Sj, N, R, qm)   (qi, Sj, R, qm)
6    qi                Sj      None N     None N    qm              (qi, Sj, N, N, qm)   (qi, Sj, N, qm)   Direct "jump"
7    qi                Sj      Erase      Left L    qm              (qi, Sj, E, L, qm)
8    qi                Sj      Erase      Right R   qm              (qi, Sj, E, R, qm)
9    qi                Sj      Erase      None N    qm              (qi, Sj, E, N, qm)   (qi, Sj, E, qm)

Any Turing table (list of instructions) can be constructed from the above nine 5-tuples. For technical reasons, the three non-printing or "N" instructions (4, 5, 6) can usually be dispensed with. For examples see Turing machine examples. Less frequently, the use of 4-tuples is encountered: these represent a further atomization of the Turing instructions (cf Post (1947), Boolos & Jeffrey (1974, 1999), Davis-Sigal-Weyuker (1994)); see also more at Post-Turing machine.

The "state"
The word "state" used in the context of Turing machines can be a source of confusion, as it can mean two things. Most commentators after Turing have used "state" to mean the name/designator of the current instruction to be performed, i.e. the contents of the state register. But Turing (1936) made a strong distinction between a record of what he called the machine's "m-configuration" (its internal state) and the machine's (or person's) "state of progress" through the computation, the current state of the total system. What Turing called "the state formula" includes both the current instruction and all the symbols on the tape:

Thus the state of progress of the computation at any stage is completely determined by the note of instructions and the symbols on the tape. That is, the state of the system may be described by a single expression (sequence of symbols) consisting of the symbols on the tape followed by [a special symbol] (which we suppose not to appear elsewhere) and then by the note of instructions. This expression is called the 'state formula'. (Undecidable, p. 139-140, emphasis added)

Earlier in his paper Turing carried this even further: he gives an example where he places a symbol of the current "m-configuration" (the instruction's label) beneath the scanned square, together with all the symbols on the tape

(Undecidable, p. 121); this he calls "the complete configuration" (Undecidable, p. 118). To print the "complete configuration" on one line, he places the state-label/m-configuration to the left of the scanned symbol.

A variant of this is seen in Kleene (1952), where Kleene shows how to write the Gödel number of a machine's "situation": he places the "m-configuration" symbol q4 over the scanned square in roughly the center of the 6 non-blank squares on the tape (see the Turing-tape figure in this article) and puts it to the right of the scanned square. But Kleene refers to "q4" itself as "the machine state" (Kleene, p. 374-375). Hopcroft and Ullman call this composite the "instantaneous description" and follow the Turing convention of putting the "current state" (instruction-label, m-configuration) to the left of the scanned symbol (p. 149).

Example: total state of the 3-state 2-symbol busy beaver after 3 "moves" (taken from the example "run" in the figure below):

1A1

This means: after three moves the tape has ... 000110000 ... on it, the head is scanning the right-most 1, and the state is A. Blanks (in this case represented by "0"s) can be part of the total state, as shown here: B01; the tape has a single 1 on it, but the head is scanning the 0 ("blank") to its left and the state is B.

"State" in the context of Turing machines should be clarified as to which is being described: (i) the current instruction, or (ii) the list of symbols on the tape together with the current instruction, or (iii) the list of symbols on the tape together with the current instruction placed to the left of the scanned symbol or to the right of the scanned symbol. Turing's biographer Andrew Hodges (1983: 107) has noted and discussed this confusion.


Turing machine "state" diagrams

The table for the 3-state busy beaver ("P" = print/write a "1"):

Tape symbol   Current state A           Current state B           Current state C
              Write  Move  Next        Write  Move  Next        Write  Move  Next
0             P      R     B           P      L     A           P      L     B
1             P      L     C           P      R     B           P      R     HALT

To the right: the above TABLE as expressed as a "state transition" diagram. Usually large TABLES are better left as tables (Booth, p.74). They are more readily simulated by computer in tabular form (Booth, p.74). However, certain conceptse.g. machines with "reset" states and machines with repeating patterns (cf Hill and Peterson p.244ff)can be more readily seen when viewed as a drawing. Whether a drawing represents an improvement on its TABLE must be decided by the reader for the particular context. See Finite state machine for more.

[Figure: The "3-state busy beaver" Turing machine in a finite state representation. Each circle represents a "state" of the TABLE, an "m-configuration" or "instruction". The "direction" of a state transition is shown by an arrow. The label (e.g. 0/P,R) near the outgoing state (at the "tail" of the arrow) specifies the scanned symbol that causes a particular transition (e.g. 0), followed by a slash /, followed by the subsequent "behaviors" of the machine, e.g. "P print" then move tape "R right". No generally accepted format exists. The convention shown is after McClusky (1965), Booth (1967), Hill, and Peterson (1974).]


The reader should again be cautioned that such diagrams represent a snapshot of their TABLE frozen in time, not the course ("trajectory") of a computation through time and/or space. While every time the busy beaver machine "runs" it will always follow the same state-trajectory, this is not true for the "copy" machine that can be provided with variable input "parameters".

[Figure: The evolution of the busy beaver's computation starts at the top and proceeds to the bottom.]

The diagram "Progress of the computation" shows the 3-state busy beaver's "state" (instruction) progress through its computation from start to finish. On the far right is the Turing "complete configuration" (Kleene "situation", Hopcroft-Ullman "instantaneous description") at each step. If the machine were to be stopped and cleared to blank both the "state register" and entire tape, these "configurations" could be used to rekindle a computation anywhere in its progress (cf Turing (1936) Undecidable pp. 139-140).

Models equivalent to the Turing machine model


Many machines that might be thought to have more computational capability than a simple universal Turing machine can be shown to have no more power (Hopcroft and Ullman p. 159, cf Minsky (1967)). They might compute faster, perhaps, or use less memory, or their instruction set might be smaller, but they cannot compute more powerfully (i.e. more mathematical functions). (Recall that the Church-Turing thesis hypothesizes this to be true for any kind of machine: that anything that can be "computed" can be computed by some Turing machine.)

A Turing machine is equivalent to a pushdown automaton that has been made more flexible and concise by relaxing the last-in-first-out requirement of its stack.

At the other extreme, some very simple models turn out to be Turing-equivalent, i.e. to have the same computational power as the Turing machine model.

Common equivalent models are the multi-tape Turing machine, the multi-track Turing machine, machines with input and output, and the non-deterministic Turing machine (NDTM), as opposed to the deterministic Turing machine (DTM), for which the action table has at most one entry for each combination of symbol and state.

Read-only, right-moving Turing machines are equivalent to NDFAs (as well as to DFAs, by conversion using the NDFA-to-DFA conversion algorithm).

For practical and didactic purposes, the equivalent register machine can be used as a conventional assembly-like programming language.


Choice c-machines, Oracle o-machines


Early in his paper (1936) Turing makes a distinction between an "automatic machine", whose "motion ... [is] completely determined by the configuration", and a "choice machine":

...whose motion is only partially determined by the configuration ... When such a machine reaches one of these ambiguous configurations, it cannot go on until some arbitrary choice has been made by an external operator. This would be the case if we were using machines to deal with axiomatic systems. (Undecidable, p. 118)

Turing (1936) does not elaborate further except in a footnote in which he describes how to use an a-machine to "find all the provable formulae of the [Hilbert] calculus" rather than use a choice machine. He "suppose[s] that the choices are always between two possibilities 0 and 1. Each proof will then be determined by a sequence of choices i1, i2, ..., in (i1 = 0 or 1, i2 = 0 or 1, ..., in = 0 or 1), and hence the number 2^n + i1·2^(n-1) + i2·2^(n-2) + ... + in completely determines the proof. The automatic machine carries out successively proof 1, proof 2, proof 3, ..." (Footnote, Undecidable, p. 138)

This is indeed the technique by which a deterministic (i.e. a-) Turing machine can be used to mimic the action of a nondeterministic Turing machine; Turing solved the matter in a footnote and appears to dismiss it from further consideration.

An oracle machine or o-machine is a Turing a-machine that pauses its computation at state "o" while, to complete its calculation, it "awaits the decision" of "the oracle", an unspecified entity "apart from saying that it cannot be a machine" (Turing (1939), Undecidable p. 166-168). The concept is now actively used by mathematicians.
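Turing's footnote encoding can be checked in a few lines. This toy sketch (not from the article) maps each sequence of binary choices (i1, ..., in) to the number 2^n + i1·2^(n-1) + ... + in; that is just "1" followed by the choice bits, read in binary, so distinct sequences, even of different lengths, get distinct numbers, and enumerating 1, 2, 3, ... visits every proof.

```python
from itertools import product

def proof_number(choices):
    # 2^n + i1*2^(n-1) + ... + in for a choice sequence (i1, ..., in).
    n = len(choices)
    return 2 ** n + sum(i * 2 ** (n - 1 - k) for k, i in enumerate(choices))

# All choice sequences of lengths 1..3 give exactly the integers 2..15.
all_numbers = [proof_number(c)
               for n in range(1, 4)
               for c in product([0, 1], repeat=n)]
print(sorted(all_numbers))  # [2, 3, 4, 5, ..., 15]
```

This is why the automatic machine can simply "carry out successively proof 1, proof 2, proof 3, ...": the encoding is a bijection between choice sequences and integers greater than 1.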

Universal Turing machines


As Turing wrote in Undecidable, p. 128 (italics added):

It is possible to invent a single machine which can be used to compute any computable sequence. If this machine U is supplied with the tape on the beginning of which is written the string of quintuples separated by semicolons of some computing machine M, then U will compute the same sequence as M.

This finding is now taken for granted, but at the time (1936) it was considered astonishing. The model of computation that Turing called his "universal machine" ("U" for short) is considered by some (cf Davis (2000)) to have been the fundamental theoretical breakthrough that led to the notion of the stored-program computer.

Turing's paper ... contains, in essence, the invention of the modern computer and some of the programming techniques that accompanied it. (Minsky (1967), p. 104)

In terms of computational complexity, a multi-tape universal Turing machine need only be slower by a logarithmic factor compared to the machines it simulates. This result was obtained in 1966 by F. C. Hennie and R. E. Stearns. (Arora and Barak, 2009, theorem 1.9)


Comparison with real machines


It is often said that Turing machines, unlike simpler automata, are as powerful as real machines, and are able to execute any operation that a real program can. What is missed in this statement is that, because a real machine can only be in finitely many configurations, in fact this "real machine" is nothing but a linear bounded automaton. On the other hand, Turing machines are equivalent to machines that have an unlimited amount of storage space for their computations. In fact, Turing machines are not intended to model computers, but rather they are intended to model computation itself; historically, computers, which compute only on their (fixed) internal storage, were developed only later.

[Figure: A Turing machine realization in LEGO.]

There are a number of ways to explain why Turing machines are useful models of real computers:

1. Anything a real computer can compute, a Turing machine can also compute. For example: "A Turing machine can simulate any type of subroutine found in programming languages, including recursive procedures and any of the known parameter-passing mechanisms" (Hopcroft and Ullman p. 157). A large enough FSA can also model any real computer, disregarding IO. Thus, a statement about the limitations of Turing machines will also apply to real computers.

2. The difference lies only with the ability of a Turing machine to manipulate an unbounded amount of data. However, given a finite amount of time, a Turing machine (like a real machine) can only manipulate a finite amount of data.

3. Like a Turing machine, a real machine can have its storage space enlarged as needed, by acquiring more disks or other storage media. If the supply of these runs short, the Turing machine may become less useful as a model. But the fact is that neither Turing machines nor real machines need astronomical amounts of storage space in order to perform useful computation. The processing time required is usually much more of a problem.

4. Descriptions of real machine programs using simpler abstract models are often much more complex than descriptions using Turing machines. For example, a Turing machine describing an algorithm may have a few hundred states, while the equivalent deterministic finite automaton on a given real machine has quadrillions. This makes the DFA representation infeasible to analyze.

5. Turing machines describe algorithms independent of how much memory they use. There is a limit to the memory possessed by any current machine, but this limit can rise arbitrarily in time. Turing machines allow us to make statements about algorithms which will (theoretically) hold forever, regardless of advances in conventional computing machine architecture.

6. Turing machines simplify the statement of algorithms. Algorithms running on Turing-equivalent abstract machines are usually more general than their counterparts running on real machines, because they have arbitrary-precision data types available and never have to deal with unexpected conditions (including, but not limited to, running out of memory).

One way in which Turing machines are a poor model for programs is that many real programs, such as operating systems and word processors, are written to receive unbounded input over time, and therefore do not halt. Turing machines do not model such ongoing computation well (but can still model portions of it, such as individual procedures).


Limitations of Turing machines


Computational complexity theory

A limitation of Turing machines is that they do not model the strengths of a particular arrangement well. For instance, modern stored-program computers are actually instances of a more specific form of abstract machine known as the random-access stored-program machine or RASP machine model. Like the universal Turing machine, the RASP stores its "program" in "memory" external to its finite-state machine's "instructions". Unlike the universal Turing machine, the RASP has an infinite number of distinguishable, numbered but unbounded "registers": memory "cells" that can contain any integer (cf. Elgot and Robinson (1964), Hartmanis (1971), and in particular Cook and Reckhow (1973); references at random access machine). The RASP's finite-state machine is equipped with the capability for indirect addressing (e.g. the contents of one register can be used as an address to specify another register); thus the RASP's "program" can address any register in the register sequence. The upshot of this distinction is that there are computational optimizations that can be performed based on the memory indices, which are not possible in a general Turing machine; thus when Turing machines are used as the basis for bounding running times, a "false lower bound" can be proven on certain algorithms' running times (due to the false simplifying assumption of a Turing machine). An example of this is binary search, an algorithm that can be shown to perform more quickly when using the RASP model of computation rather than the Turing machine model.

Concurrency

Another limitation of Turing machines is that they do not model concurrency well. For example, there is a bound on the size of integer that can be computed by an always-halting nondeterministic Turing machine starting on a blank tape. (See the article on unbounded nondeterminism.) By contrast, there are always-halting concurrent systems with no inputs that can compute an integer of unbounded size.
(A process can be created with local storage initialized to a count of 0 that concurrently sends itself both a stop and a go message. When it receives a go message, it increments its count by 1 and sends itself another go message; when it receives a stop message, it stops, with an unbounded number in its local storage.)
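The always-halting but unbounded behavior described in the parenthetical above can be sketched as a scheduler simulation. This is a loose illustration, not a faithful actor-model implementation; the mailbox, message names, and delivery rule are this sketch's own assumptions:

```python
# Illustrative sketch: a "process" whose mailbox initially holds both a
# go and a stop message. Each delivered go increments the counter and
# re-enqueues another go; whenever the scheduler happens to deliver the
# stop, the run ends. Any single run halts with a finite count, but no
# bound on that count holds across all runs.
import random
from collections import deque

def run_process(rng):
    mailbox = deque(["go", "stop"])
    count = 0
    while True:
        # Nondeterministic delivery: the scheduler picks either end.
        msg = mailbox.popleft() if rng.random() < 0.5 else mailbox.pop()
        if msg == "stop":
            return count  # halts with whatever count it reached
        count += 1
        mailbox.append("go")

rng = random.Random(42)
results = [run_process(rng) for _ in range(5)]
print(results)  # varying finite counts, one per run
```

Because the stop message is eventually delivered with probability 1, every run halts; because its delivery can be postponed arbitrarily long, the final count is unbounded over the set of runs, which is the behavior an always-halting Turing machine cannot exhibit.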

History
Turing machines were first described by Alan Turing in 1936.

Historical background: computational machinery


Robin Gandy (1919–1995), a student of Alan Turing (1912–1954) and his lifelong friend, traces the lineage of the notion of "calculating machine" back to Babbage (circa 1834) and actually proposes "Babbage's Thesis":

That the whole of development and operations of analysis are now capable of being executed by machinery. (italics in Babbage as cited by Gandy, p. 54)

Gandy's analysis of Babbage's Analytical Engine describes the following five operations (cf. pp. 52–53):
1. The arithmetic functions +, −, ×, where − indicates "proper" subtraction: x − y = 0 if y ≥ x.
2. Any sequence of operations is an operation.
3. Iteration of an operation (repeating n times an operation P).
4. Conditional iteration (repeating n times an operation P conditional on the "success" of test T).
5. Conditional transfer (i.e., conditional "goto").

Gandy states that "the functions which can be calculated by (1), (2), and (4) are precisely those which are Turing computable" (p. 53). He cites other proposals for "universal calculating machines", including those of Percy Ludgate (1909), Leonardo Torres y Quevedo (1914), Maurice d'Ocagne (1922), Louis Couffignal (1933), Vannevar Bush (1936), and Howard Aiken (1937). However:

...the emphasis is on programming a fixed iterable sequence of arithmetical operations. The fundamental importance of conditional iteration and conditional transfer for a general theory of calculating machines is not recognized ... (Gandy p. 55)


The Entscheidungsproblem (the "decision problem"): Hilbert's tenth question of 1900


With regard to Hilbert's problems posed by the famous mathematician David Hilbert in 1900, an aspect of problem #10 had been floating about for almost 30 years before it was framed precisely. Hilbert's original expression for #10 is as follows:

10. Determination of the solvability of a Diophantine equation. Given a Diophantine equation with any number of unknown quantities and with rational integral coefficients: To devise a process according to which it can be determined in a finite number of operations whether the equation is solvable in rational integers. The Entscheidungsproblem [decision problem for first-order logic] is solved when we know a procedure that allows for any given logical expression to decide by finitely many operations its validity or satisfiability ... The Entscheidungsproblem must be considered the main problem of mathematical logic. (quoted, with this translation and the original German, in Dershowitz and Gurevich, 2008)

By 1922, this notion of "Entscheidungsproblem" had developed a bit, and H. Behmann stated that

...the most general form of the Entscheidungsproblem [is] as follows: A quite definite generally applicable prescription is required which will allow one to decide in a finite number of steps the truth or falsity of a given purely logical assertion ... (Gandy p. 57, quoting Behmann)

Behmann remarks that

... the general problem is equivalent to the problem of deciding which mathematical propositions are true. (ibid.)

If one were able to solve the Entscheidungsproblem then one would have a "procedure for solving many (or even all) mathematical problems" (ibid., p. 92).

By the 1928 international congress of mathematicians Hilbert "made his questions quite precise. First, was mathematics complete ... Second, was mathematics consistent ... And thirdly, was mathematics decidable?" (Hodges p. 91, Hawking p. 1121).
The first two questions were answered in 1930 by Kurt Gödel at the very same meeting where Hilbert delivered his retirement speech (much to the chagrin of Hilbert); the third, the Entscheidungsproblem, had to wait until the mid-1930s. The problem was that an answer first required a precise definition of "definite general applicable prescription", which Princeton professor Alonzo Church would come to call "effective calculability", and in 1928 no such definition existed. But over the next 6–7 years Emil Post developed his definition of a worker moving from room to room writing and erasing marks per a list of instructions (Post 1936), as did Church and his two students Stephen Kleene and J. B. Rosser by use of Church's lambda-calculus and Gödel's recursion theory (1934). Church's paper (published 15 April 1936) showed that the Entscheidungsproblem was indeed "undecidable" and beat Turing to the punch by almost a year (Turing's paper submitted 28 May 1936, published January 1937). In the meantime, Emil Post submitted a brief paper in the fall of 1936, so Turing at least had priority over Post. While Church refereed Turing's paper, Turing had time to study Church's paper and add an Appendix where he sketched a proof that Church's lambda-calculus and his machines would compute the same functions.

But what Church had done was something rather different, and in a certain sense weaker. ... the Turing construction was more direct, and provided an argument from first principles, closing the gap in Church's demonstration. (Hodges p. 112)

And Post had only proposed a definition of calculability and criticized Church's "definition", but had proved nothing.

Alan Turing's a- (automatic-)machine


In the spring of 1935, Turing, as a young Master's student at King's College, Cambridge, UK, took on the challenge; he had been stimulated by the lectures of the logician M. H. A. Newman "and learned from them of Gödel's work and the Entscheidungsproblem ... Newman used the word 'mechanical' ..." In his 1955 obituary of Turing, Newman writes:

To the question 'what is a "mechanical" process?' Turing returned the characteristic answer 'Something that can be done by a machine' and he embarked on the highly congenial task of analysing the general notion of a computing machine. (Gandy, p. 74)

Gandy states that:

I suppose, but do not know, that Turing, right from the start of his work, had as his goal a proof of the undecidability of the Entscheidungsproblem. He told me that the 'main idea' of the paper came to him when he was lying in Grantchester meadows in the summer of 1935. The 'main idea' might have either been his analysis of computation or his realization that there was a universal machine, and so a diagonal argument to prove unsolvability. (ibid., p. 76)

While Gandy believed that Newman's statement above is "misleading", this opinion is not shared by all. Turing had a lifelong interest in machines: "Alan had dreamt of inventing typewriters as a boy; [his mother] Mrs. Turing had a typewriter; and he could well have begun by asking himself what was meant by calling a typewriter 'mechanical'" (Hodges p. 96). While at Princeton pursuing his PhD, Turing built a Boolean-logic multiplier (see below). His PhD thesis, titled "Systems of Logic Based on Ordinals", contains the following definition of "a computable function":

It was stated above that 'a function is effectively calculable if its values can be found by some purely mechanical process'. We may take this statement literally, understanding by a purely mechanical process one which could be carried out by a machine.
It is possible to give a mathematical description, in a certain normal form, of the structures of these machines. The development of these ideas leads to the author's definition of a computable function, and to an identification of computability with effective calculability. It is not difficult, though somewhat laborious, to prove that these three definitions [the 3rd is the λ-calculus] are equivalent. (Turing (1939) in The Undecidable, p. 160)

When Turing returned to the UK he ultimately became jointly responsible for breaking the German secret codes created by encryption machines called "The Enigma"; he also became involved in the design of the ACE (Automatic Computing Engine): "[Turing's] ACE proposal was effectively self-contained, and its roots lay not in the EDVAC [the USA's initiative], but in his own universal machine" (Hodges p. 318). Arguments still continue concerning the origin and nature of what has been named by Kleene (1952) Turing's Thesis. But what Turing did prove with his computational-machine model appears in his paper "On Computable Numbers, with an Application to the Entscheidungsproblem" (1937):

[that] the Hilbert Entscheidungsproblem can have no solution ... I propose, therefore to show that there can be no general process for determining whether a given formula U of the functional calculus K is provable, i.e. that there can be no machine which, supplied with any one U of these formulae, will eventually say whether U is provable. (from Turing's paper as reprinted in The Undecidable, p. 145)

Turing's example (his second proof): If one is to ask for a general procedure to tell us: "Does this machine ever print 0", the question is "undecidable".


1937–1970: The "digital computer", the birth of "computer science"


In 1937, while at Princeton working on his PhD thesis, Turing built a digital (Boolean-logic) multiplier from scratch, making his own electromechanical relays (Hodges p. 138). "Alan's task was to embody the logical design of a Turing machine in a network of relay-operated switches ..." (Hodges p. 138). While Turing might initially have been just curious and experimenting, quite earnest work in the same direction was going on in Germany (Konrad Zuse (1938)) and in the United States (Howard Aiken and George Stibitz (1937)); the fruits of their labors were used by both the Axis and Allied militaries in World War II (cf. Hodges pp. 298–299). In the early to mid-1950s Hao Wang and Marvin Minsky reduced the Turing machine to a simpler form (a precursor to the Post–Turing machine of Martin Davis); simultaneously European researchers were reducing the new-fangled electronic computer to a computer-like theoretical object equivalent to what was now being called a "Turing machine". In the late 1950s and early 1960s, the coincidentally parallel developments of Melzak and Lambek (1961), Minsky (1961), and Shepherdson and Sturgis (1961) carried the European work further and reduced the Turing machine to a more friendly, computer-like abstract model called the counter machine; Elgot and Robinson (1964), Hartmanis (1971), and Cook and Reckhow (1973) carried this work even further with the register machine and random-access machine models, but basically all are just multi-tape Turing machines with an arithmetic-like instruction set.

1970–present: the Turing machine as a model of computation


Today the counter, register and random-access machines and their sire, the Turing machine, continue to be the models of choice for theorists investigating questions in the theory of computation. In particular, computational complexity theory makes use of the Turing machine:

Depending on the objects one likes to manipulate in the computations (numbers like nonnegative integers or alphanumeric strings), two models have obtained a dominant position in machine-based complexity theory: the off-line multitape Turing machine ..., which represents the standard model for string-oriented computation, and the random access machine (RAM) as introduced by Cook and Reckhow ..., which models the idealized Von Neumann style computer. (van Emde Boas 1990:4)

Only in the related area of analysis of algorithms this role is taken over by the RAM model. (van Emde Boas 1990:16)


Notes
[1] The idea came to him in mid-1935 (perhaps, see more in the History section) after a question posed by M. H. A. Newman in his lectures: "Was there a definite method, or as Newman put it, a mechanical process which could be applied to a mathematical statement, and which would come up with the answer as to whether it was provable" (Hodges 1983:93). Turing submitted his paper on 31 May 1936 to the London Mathematical Society for its Proceedings (cf. Hodges 1983:112), but it was published in early 1937 and offprints were available in February 1937 (cf. Hodges 1983:129).
[2] See the definition of "innings" on Wiktionary

References
Primary literature, reprints, and compilations
B. Jack Copeland (ed.) (2004), The Essential Turing: Seminal Writings in Computing, Logic, Philosophy, Artificial Intelligence, and Artificial Life plus The Secrets of Enigma, Clarendon Press (Oxford University Press), Oxford UK, ISBN 0-19-825079-7. Contains the Turing papers plus a draft letter to Emil Post regarding his criticism of "Turing's convention", and Donald W. Davies' Corrections to Turing's Universal Computing Machine.
Martin Davis (ed.) (1965), The Undecidable, Raven Press, Hewlett, NY.
Emil Post (1936), "Finite Combinatory Processes – Formulation 1", Journal of Symbolic Logic, 1, 103–105. Reprinted in The Undecidable, pp. 289ff.
Emil Post (1947), "Recursive Unsolvability of a Problem of Thue", Journal of Symbolic Logic, vol. 12, pp. 1–11. Reprinted in The Undecidable, pp. 293ff. In the Appendix of this paper Post comments on and gives corrections to Turing's paper of 1936–1937. In particular see footnote 11, with corrections to the universal computing machine coding, and footnote 14, with comments on Turing's first and second proofs.
Turing, A. M. (1936). "On Computable Numbers, with an Application to the Entscheidungsproblem". Proceedings of the London Mathematical Society, series 2, 42: 230–265, 1937. doi:10.1112/plms/s2-42.1.230. (And Turing, A. M. (1938). "On Computable Numbers, with an Application to the Entscheidungsproblem: A correction". Proceedings of the London Mathematical Society, series 2, 43 (6): 544–546. doi:10.1112/plms/s2-43.6.544.) Reprinted in many collections, e.g. in The Undecidable, pp. 115–154; available on the web in many places, e.g. at Scribd (http://www.scribd.com/doc/2937039/Alan-M-Turing-On-Computable-Numbers).
Alan Turing (1948), "Intelligent Machinery." Reprinted in Cybernetics: Key Papers, ed. C. R. Evans and A. D. J. Robertson, Baltimore: University Park Press, 1968, p. 31.
F. C. Hennie and R. E. Stearns (1966), "Two-tape simulation of multitape Turing machines", JACM, 13(4): 533–546.

Computability theory
Boolos, George; Richard Jeffrey (1989, 1999). Computability and Logic (3rd ed.). Cambridge UK: Cambridge University Press. ISBN 0-521-20402-X.
Boolos, George; John Burgess; Richard Jeffrey (2002). Computability and Logic (4th ed.). Cambridge UK: Cambridge University Press. ISBN 0-521-00758-5 (pbk.). Some parts have been significantly rewritten by Burgess. Presents Turing machines in the context of Lambek "abacus machines" (cf. register machine) and recursive functions, showing their equivalence.
Taylor L. Booth (1967), Sequential Machines and Automata Theory, John Wiley and Sons, Inc., New York. Graduate-level engineering text; ranges over a wide variety of topics; Chapter IX, Turing Machines, includes some recursion theory.
Martin Davis (1958). Computability and Unsolvability. McGraw-Hill Book Company, Inc., New York. On pages 12–20 he gives examples of 5-tuple tables for Addition, The Successor Function, Subtraction (x − y), Proper Subtraction (0 if x < y), The Identity Function and various identity functions, and Multiplication.

Davis, Martin; Ron Sigal; Elaine J. Weyuker (1994). Computability, Complexity, and Languages: Fundamentals of Theoretical Computer Science (2nd ed.). San Diego: Academic Press, Harcourt, Brace & Company. ISBN 0-12-206382-1.
Hennie, Fredrick (1977). Introduction to Computability. Addison-Wesley, Reading, Mass. On pages 90–103 Hennie discusses the UTM with examples and flow-charts, but no actual 'code'.
John Hopcroft and Jeffrey Ullman (1979). Introduction to Automata Theory, Languages and Computation (1st ed.). Addison-Wesley, Reading, Mass. ISBN 0-201-02988-X. A difficult book, centered around the issues of machine interpretation of "languages", NP-completeness, etc.
Hopcroft, John E.; Rajeev Motwani; Jeffrey D. Ullman (2001). Introduction to Automata Theory, Languages, and Computation (2nd ed.). Reading, Mass.: Addison-Wesley. ISBN 0-201-44124-1. Distinctly different and less intimidating than the first edition.
Stephen Kleene (1952), Introduction to Metamathematics, North-Holland Publishing Company, Amsterdam, Netherlands, 10th impression (with corrections of 6th reprint 1971). Graduate-level text; most of Chapter XIII, Computable functions, is on Turing machine proofs of computability of recursive functions, etc.
Knuth, Donald E. (1973). The Art of Computer Programming, Volume 1: Fundamental Algorithms (2nd ed.). Reading, Mass.: Addison-Wesley Publishing Company. With reference to the role of Turing machines in the development of computation (both hardware and software) see 1.4.5 History and Bibliography, pp. 225ff, and 2.6 History and Bibliography, pp. 456ff.
Zohar Manna (1974), Mathematical Theory of Computation. Reprinted, Dover, 2003. ISBN 978-0-486-43238-0.
Marvin Minsky (1967), Computation: Finite and Infinite Machines, Prentice-Hall, Inc., N.J. See Chapter 8, Section 8.2, "Unsolvability of the Halting Problem." Excellent, i.e. relatively readable, sometimes funny.
Christos Papadimitriou (1993). Computational Complexity (1st ed.). Addison-Wesley. ISBN 0-201-53082-1. Chapter 2: Turing machines, pp. 19–56.
Michael Sipser (1997). Introduction to the Theory of Computation. PWS Publishing. ISBN 0-534-94728-X. Chapter 3: The Church–Turing Thesis, pp. 125–149.
Stone, Harold S. (1972). Introduction to Computer Organization and Data Structures (1st ed.). New York: McGraw-Hill Book Company. ISBN 0-07-061726-0.
Peter van Emde Boas (1990), "Machine Models and Simulations", pp. 3–66, in Jan van Leeuwen (ed.), Handbook of Theoretical Computer Science, Volume A: Algorithms and Complexity, The MIT Press/Elsevier, ISBN 0-444-88071-2 (Volume A). QA76.H279 1990. Valuable survey, with 141 references.


Church's thesis
Nachum Dershowitz; Yuri Gurevich (September 2008). "A natural axiomatization of computability and proof of Church's Thesis" (http://research.microsoft.com/en-us/um/people/gurevich/Opera/188.pdf). Bulletin of Symbolic Logic 14 (3). Retrieved 2008-10-15.
Roger Penrose (1989, 1990). The Emperor's New Mind (2nd ed.). Oxford University Press, New York. ISBN 0-19-851973-7.


Small Turing machines


Rogozhin, Yurii (1998), "A Universal Turing Machine with 22 States and 2 Symbols" (http://web.archive.org/web/20050308141040/http://www.imt.ro/Romjist/Volum1/Vol1_3/turing.htm), Romanian Journal of Information Science and Technology, 1(3), 259–265. (Surveys known results about small universal Turing machines.)
Stephen Wolfram (2002), A New Kind of Science (http://www.wolframscience.com/nksonline/page-707), Wolfram Media, ISBN 1-57955-008-8.
Brunfiel, Geoff, "Student snags maths prize" (http://www.nature.com/news/2007/071024/full/news.2007.190.html), Nature, October 24, 2007.
Jim Giles (2007), "Simplest 'universal computer' wins student $25,000" (http://technology.newscientist.com/article/dn12826-simplest-universal-computer-wins-student-25000.html), New Scientist, October 24, 2007.
Alex Smith, "Universality of Wolfram's 2,3 Turing Machine" (http://www.wolframscience.com/prizes/tm23/TM23Proof.pdf), submission for the Wolfram 2,3 Turing Machine Research Prize.
Vaughan Pratt (2007), "Simple Turing machines, Universality, Encodings, etc." (http://cs.nyu.edu/pipermail/fom/2007-October/012156.html), FOM email list, October 29, 2007.
Martin Davis (2007), "Smallest universal machine" (http://cs.nyu.edu/pipermail/fom/2007-October/012132.html) and "Definition of universal Turing machine" (http://cs.nyu.edu/pipermail/fom/2007-October/012145.html), FOM email list, October 26–27, 2007.
Alasdair Urquhart (2007), "Smallest universal machine" (http://cs.nyu.edu/pipermail/fom/2007-October/012140.html), FOM email list, October 26, 2007.
Hector Zenil (Wolfram Research) (2007), "smallest universal machine" (http://cs.nyu.edu/pipermail/fom/2007-October/012163.html), FOM email list, October 29, 2007.
Todd Rowland (2007), "Confusion on FOM" (http://forum.wolframscience.com/showthread.php?s=&threadid=1472), Wolfram Science message board, October 30, 2007.

Other
Martin Davis (2000). Engines of Logic: Mathematicians and the Origin of the Computer (1st ed.). W. W. Norton & Company, New York. ISBN 0-393-32229-7 (pbk.).
Robin Gandy, "The Confluence of Ideas in 1936", pp. 51–102 in Rolf Herken, see below.
Stephen Hawking (editor) (2005), God Created the Integers: The Mathematical Breakthroughs that Changed History, Running Press, Philadelphia, ISBN 978-0-7624-1922-7. Includes Turing's 1936–1937 paper, with brief commentary and biography of Turing as written by Hawking.
Rolf Herken (1995). The Universal Turing Machine: A Half-Century Survey. Springer Verlag. ISBN 3-211-82637-8.
Andrew Hodges, Alan Turing: The Enigma, Simon and Schuster, New York. Cf. the chapter "The Spirit of Truth" for a history leading to, and a discussion of, his proof.
Ivars Peterson (1988). The Mathematical Tourist: Snapshots of Modern Mathematics (1st ed.). W. H. Freeman and Company, New York. ISBN 0-7167-2064-7 (pbk.).
Paul Strathern (1997). Turing and the Computer: The Big Idea. Anchor Books/Doubleday. ISBN 0-385-49243-X.
Hao Wang, "A variant to Turing's theory of computing machines", Journal of the Association for Computing Machinery (JACM) 4, 63–92 (1957).
Charles Petzold, The Annotated Turing (http://www.theannotatedturing.com/), John Wiley & Sons, Inc., ISBN 0-470-22905-5.
Arora, Sanjeev; Barak, Boaz, Computational Complexity: A Modern Approach (http://www.cs.princeton.edu/theory/complexity/), Cambridge University Press, 2009, ISBN 978-0-521-42426-4; see section 1.4, "Machines as strings and the universal Turing machine", and 1.7, "Proof of theorem 1.9".
Isaiah Pinchas Kantorovitz, "A note on Turing machine computability of rule driven systems", ACM SIGACT News, December 2005.


External links
Turing Machine on the Stanford Encyclopedia of Philosophy (http://plato.stanford.edu/entries/turing-machine/)
Detailed info on the Church–Turing Hypothesis (http://plato.stanford.edu/entries/church-turing/) (Stanford Encyclopedia of Philosophy)
Turing Machine-Like Models (http://www.weizmann.ac.il/mathusers/lbn/new_pages/Research_Turing.html) in Molecular Biology, to understand life mechanisms with a DNA-tape processor.
The Turing machine (http://www.SaschaSeidel.de/html/programmierung/download_The_Turing_machine.php): a summary of the Turing machine, its functionality and historical facts.
The Wolfram 2,3 Turing Machine Research Prize (http://www.wolframscience.com/prizes/tm23/): Stephen Wolfram's $25,000 prize for the proof or disproof of the universality of the potentially smallest universal Turing machine. The contest has ended, with the proof affirming the machine's universality.
"Turing Machine Causal Networks" (http://demonstrations.wolfram.com/TuringMachineCausalNetworks/) by Enrique Zeleny, Wolfram Demonstrations Project.
Turing Machines (http://www.dmoz.org/Computers/Computer_Science/Theoretical/Automata_Theory/Turing_Machines/) at the Open Directory Project
Purely mechanical Turing Machine (http://www.turing2012.fr/?p=530&lang=en)

Undecidable problem
In computability theory and computational complexity theory, an undecidable problem is a decision problem for which it is impossible to construct a single algorithm that always leads to a correct yes-or-no answer. A decision problem is any arbitrary yes-or-no question on an infinite set of inputs. Because of this, it is traditional to define the decision problem equivalently as the set of inputs for which the problem returns yes. These inputs can be natural numbers, but can also be values of some other kind, such as strings of a formal language. Using some encoding, such as a Gödel numbering, the strings can be encoded as natural numbers. Thus, a decision problem informally phrased in terms of a formal language is also equivalent to a set of natural numbers. To keep the formal definition simple, it is phrased in terms of subsets of the natural numbers: formally, a decision problem is a subset of the natural numbers, and the corresponding informal problem is that of deciding whether a given number is in the set.
A decision problem A is called decidable or effectively solvable if A is a recursive set. A problem is called partially decidable, semidecidable, solvable, or provable if A is a recursively enumerable set. Partially decidable problems and any other problems that are not decidable are called undecidable.
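As a sketch of the formal convention above, here is a decidable problem presented as a membership test for a subset of the natural numbers. Primality is just an illustrative choice; what matters is that the test is total, i.e. halts on every input:

```python
# Illustrative sketch: a decision problem as a subset of the natural
# numbers, decided by a total (always-halting) membership test. Because
# is_member halts on every input with a correct yes/no answer, the
# corresponding set is recursive, i.e. the problem is decidable.
def is_member(n: int) -> bool:
    """Decide membership of n in the set of prime numbers."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print([n for n in range(20) if is_member(n)])  # -> [2, 3, 5, 7, 11, 13, 17, 19]
```

An undecidable problem is exactly a subset of the naturals for which no such total membership test can exist.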

The undecidable problem in computability theory


In computability theory, the halting problem is a decision problem which can be stated as follows: Given a description of a program and a finite input, decide whether the program finishes running or will run forever. Alan Turing proved in 1936 that a general algorithm running on a Turing machine that solves the halting problem for all possible program-input pairs necessarily cannot exist. Hence, the halting problem is undecidable for Turing machines.
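Although undecidable, the halting problem is semidecidable: one can simulate a program for a bounded number of steps and report "halts" if it does, but a "still running" verdict after any finite bound proves nothing. The toy instruction set (JMP/INC/HALT) in the following sketch is this example's own invention:

```python
# Illustrative sketch of semidecidability: bounded simulation can
# confirm halting but can never confirm non-halting.
def bounded_halts(program, max_steps):
    """Return True if program halts within max_steps, else None (unknown)."""
    pc, acc, steps = 0, 0, 0
    while steps < max_steps:
        if pc >= len(program):
            return True          # fell off the end: halted
        op, arg = program[pc]
        if op == "HALT":
            return True
        elif op == "INC":
            acc += arg
            pc += 1
        elif op == "JMP":
            pc = arg
        steps += 1
    return None                  # inconclusive: may or may not halt

looper = [("JMP", 0)]            # an obvious infinite loop
finisher = [("INC", 1), ("HALT", 0)]
print(bounded_halts(finisher, 100))  # -> True
print(bounded_halts(looper, 100))    # -> None
```

Turing's theorem says precisely that no amount of cleverness turns the `None` case into a reliable "runs forever" for all programs.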


Relationship with Gödel's incompleteness theorem


The concepts raised by Gödel's incompleteness theorems are very similar to those raised by the halting problem, and the proofs are quite similar. In fact, a weaker form of the First Incompleteness Theorem is an easy consequence of the undecidability of the halting problem. This weaker form differs from the standard statement of the incompleteness theorem by asserting that a complete, consistent and sound axiomatization of all statements about natural numbers is unachievable. The "sound" part is the weakening: it means that we require the axiomatic system in question to prove only true statements about natural numbers. It is important to observe that the statement of the standard form of Gödel's First Incompleteness Theorem is completely unconcerned with the question of truth; it concerns only the issue of whether a statement can be proven.
The weaker form of the theorem can be proved from the undecidability of the halting problem as follows. Assume that we have a consistent and complete axiomatization of all true first-order logic statements about natural numbers. Then we can build an algorithm that enumerates all these statements. This means that there is an algorithm N(n) that, given a natural number n, computes a true first-order logic statement about natural numbers, such that for every true statement there is at least one n such that N(n) yields that statement. Now suppose we want to decide whether the algorithm with representation a halts on input i. We know that this statement can be expressed with a first-order logic statement, say H(a, i). Since the axiomatization is complete, it follows that either there is an n such that N(n) = H(a, i), or there is an n′ such that N(n′) = ¬H(a, i). So if we iterate over all n until we either find H(a, i) or its negation, we will always halt. This gives us an algorithm to decide the halting problem.
Since we know that there cannot be such an algorithm, it follows that the assumption that there is a consistent and complete axiomatization of all true first-order logic statements about natural numbers must be false.
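The search loop in this argument can be sketched with a toy stand-in for the theorem enumerator N(n). A real axiomatization would enumerate infinitely many theorems; here a small hard-coded list and plain-string "statements" are illustrative assumptions, not part of the actual construction:

```python
# Illustrative toy model of the search in the proof above. If the theory
# were complete, this loop would always terminate with an answer to
# "does machine a halt on input i?" -- which is exactly why no complete,
# consistent, sound axiomatization can exist.
def decide_halting_via_theory(enumerate_theorems, a, i):
    target, negation = f"H({a},{i})", f"not H({a},{i})"
    n = 0
    while True:
        theorem = enumerate_theorems(n)
        if theorem == target:
            return True
        if theorem == negation:
            return False
        n += 1

# Stand-in enumerator: pretends the theory proves machine 3 halts on
# input 5 and machine 2 does not halt on input 7.
toy_theorems = ["0=0", "1+1=2", "not H(2,7)", "H(3,5)"]
print(decide_halting_via_theory(lambda n: toy_theorems[n % len(toy_theorems)], 3, 5))  # -> True
```

With a genuine complete theory the loop would be a halting decider, contradicting Turing's theorem; with the toy list it merely shows the mechanics of the search.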

Examples of undecidable problems


Undecidable problems can be related to different topics, such as logic, abstract machines or topology. Note that since there are uncountably many undecidable problems, any list is necessarily incomplete.

Examples of undecidable statements


There are two distinct senses of the word "undecidable" in contemporary use. The first of these is the sense used in relation to Gdel's theorems, that of a statement being neither provable nor refutable in a specified deductive system. The second sense is used in relation to computability theory and applies not to statements but to decision problems, which are countably infinite sets of questions each requiring a yes or no answer. Such a problem is said to be undecidable if there is no computable function that correctly answers every question in the problem set. The connection between these two is that if a decision problem is undecidable (in the recursion theoretical sense) then there is no consistent, effective formal system which proves for every question A in the problem either "the answer to A is yes" or "the answer to A is no". Because of the two meanings of the word undecidable, the term independent is sometimes used instead of undecidable for the "neither provable nor refutable" sense. The usage of "independent" is also ambiguous, however. It can mean just "not provable", leaving open whether an independent statement might be refuted. Undecidability of a statement in a particular deductive system does not, in and of itself, address the question of whether the truth value of the statement is well-defined, or whether it can be determined by other means. Undecidability only implies that the particular deductive system being considered does not prove the truth or falsity of the statement. Whether there exist so-called "absolutely undecidable" statements, whose truth value can never be known or is ill-specified, is a controversial point among various philosophical schools. One of the first problems suspected to be undecidable, in the second sense of the term, was the word problem for groups, first posed by Max Dehn in 1911, which asks if there is a finitely presented group for which no algorithm exists to determine whether two words are equivalent. 
This was shown to be the case in 1952.

The combined work of Gödel and Paul Cohen has given two concrete examples of undecidable statements (in the first sense of the term): the continuum hypothesis can neither be proved nor refuted in ZFC (the standard axiomatization of set theory), and the axiom of choice can neither be proved nor refuted in ZF (which is all the ZFC axioms except the axiom of choice). These results do not require the incompleteness theorem. Gödel proved in 1940 that neither of these statements could be disproved in ZF or ZFC set theory. In the 1960s, Cohen proved that neither is provable from ZF, and that the continuum hypothesis cannot be proven from ZFC.
In 1970, Soviet mathematician Yuri Matiyasevich showed that Hilbert's Tenth Problem, posed in 1900 as a challenge to the next century of mathematicians, cannot be solved. Hilbert's challenge sought an algorithm which finds all solutions of a Diophantine equation. A Diophantine equation is a more general case of Fermat's Last Theorem; we seek the integer roots of a polynomial in any number of variables with integer coefficients. Since we have only one equation but n variables, infinitely many solutions exist (and are easy to find) in the complex plane; the problem becomes difficult (impossible) by constraining solutions to integer values only. Matiyasevich showed this problem to be unsolvable by mapping a Diophantine equation to a recursively enumerable set and invoking Gödel's Incompleteness Theorem.[1]
In 1936, Alan Turing proved that the halting problem (the question of whether or not a Turing machine halts on a given program) is undecidable, in the second sense of the term. This result was later generalized by Rice's theorem.
In 1973, the Whitehead problem in group theory was shown to be undecidable, in the first sense of the term, in standard set theory.
In 1977, Paris and Harrington proved that the Paris–Harrington principle, a version of the Ramsey theorem, is undecidable in the axiomatization of arithmetic given by the Peano axioms but can be proven to be true in the larger system of second-order arithmetic.

Kruskal's tree theorem, which has applications in computer science, is also undecidable from the Peano axioms but provable in set theory. In fact Kruskal's tree theorem (or its finite form) is undecidable in a much stronger system codifying the principles acceptable on the basis of a philosophy of mathematics called predicativism.

Goodstein's theorem is a statement about the Ramsey theory of the natural numbers that Kirby and Paris showed is undecidable in Peano arithmetic.

Gregory Chaitin produced undecidable statements in algorithmic information theory and proved another incompleteness theorem in that setting. Chaitin's theorem states that for any theory that can represent enough arithmetic, there is an upper bound c such that no specific number can be proven in that theory to have Kolmogorov complexity greater than c. While Gödel's theorem is related to the liar paradox, Chaitin's result is related to Berry's paradox.

Douglas Hofstadter gives a notable alternative proof of incompleteness, inspired by Gödel, in his book Gödel, Escher, Bach.

In 2006, researchers Kurtz and Simon, building on earlier work by J. H. Conway in the 1970s, proved that a natural generalization of the Collatz problem is undecidable.[2]
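Goodstein sequences, whose eventual termination is the content of Goodstein's theorem, are entirely computable even though their termination cannot be proved in Peano arithmetic. A small illustrative implementation (my own sketch, not from the article):

```python
def hereditary_bump(n, base):
    """Rewrite n in hereditary base-`base` notation, then replace
    every occurrence of `base` (including in exponents) by base + 1."""
    if n == 0:
        return 0
    result, power = 0, 0
    while n > 0:
        digit = n % base
        if digit:
            # the exponent `power` is itself rewritten recursively
            result += digit * (base + 1) ** hereditary_bump(power, base)
        n //= base
        power += 1
    return result

def goodstein(n, steps):
    """First terms of the Goodstein sequence starting at n,
    computed for at most `steps` steps or until it reaches 0."""
    seq, base = [n], 2
    for _ in range(steps):
        if n == 0:
            break
        n = hereditary_bump(n, base) - 1  # bump the base, subtract 1
        base += 1
        seq.append(n)
    return seq
```

The sequence starting at 3 terminates almost immediately (`goodstein(3, 10)` gives `[3, 3, 3, 2, 1, 0]`), while the one starting at 4 climbs to 26, 41, 60, ... and takes an astronomically long time to reach 0, which hints at why Peano arithmetic cannot keep up with the growth these sequences encode.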

References
[1] Matiyasevich, Yuri (1970). "[Enumerable sets are Diophantine]" (in Russian). Doklady Akademii Nauk SSSR 191: 279–282.
[2] Kurtz, Stuart A.; Simon, Janos. "The Undecidability of the Generalized Collatz Problem" (http://books.google.com/books?id=mhrOkx-xyJIC&lpg=PA542&dq=Kurtz+collatz+problem&pg=PA542#v=onepage&q=&f=false). In Proceedings of the 4th International Conference on Theory and Applications of Models of Computation, TAMC 2007, Shanghai, China, May 2007. ISBN 3-540-72503-2. doi:10.1007/978-3-540-72504-6_49.



License
Creative Commons Attribution-Share Alike 3.0 Unported (//creativecommons.org/licenses/by-sa/3.0/)
