Models of Language Generation: Grammars

11. Ambiguity of Context-Free Grammars
Ambiguity in a language occurs either when a symbol or an expression has more than one meaning (e.g., the word "story"), or when an expression can be (grammatically) parsed in two different ways. The former is called lexical (or semantic) ambiguity, and the latter syntactic (or structural) ambiguity. For example, in natural language, the sentence "A man entered the room with a picture" can be interpreted (i.e., parsed) into two different grammatical structures, as follows.
[Figure: two parse trees for the sentence, one attaching "with a picture" to "entered," the other to "the room."]
This sentence is syntactically ambiguous. With no further information, it is impossible to know which way the sentence should be translated. In formal language theory, given a grammar G and a sentence x (i.e., a string, in the formal-language jargon), parsing shows how x can be derived by the grammar. If x can be derived in two different ways, the grammar G is ambiguous. Parsing is one of the main functions of the compiler of a programming language. In this chapter we will study syntactic ambiguity.
Ambiguity
Dear Dad & Dear Son
Dear Dad,
$chool i$ really great. I am making lot$ of friend$ and $tudying very hard. With all my $tuff, I $imply can't
think of anything I need, $o if you would like, you can ju$t $end me a card, a$ I would love to hear from you.
Love,
Your $on
The Reply:
Dear Son,
I kNOw that astroNOmy, ecoNOmics, and oceaNOgraphy are eNOugh to keep even an hoNOr student busy.
Do NOt forget that the pursuit of kNOwledge is a NOble task, and you can never study eNOugh.
Love,
Dad
- Adrea -
Break Time
11.1 Parse Tree
11.2 Parse Tree and Ambiguity
11.3 Eliminating Ambiguity of an Ambiguous CFG
- Using parentheses; fixing the order of rule applications
- Eliminating redundant rules
- Setting up precedence and associativity
Rumination
Exercises

11.1 Parse Tree
In formal language theory, the syntax (i.e., structure) of a string generated by a grammar depends on the rules applied as well as on their order of application. Syntax provides critical information for the compiler to translate the string (i.e., a program) into object code. Thus, if a string can be derived in two different ways, it is impossible to give it a unique translation.

A parse tree is an efficient data structure for representing the syntax of a string derived by the grammar and for translating the string by the compiler. For example, figure (b) below shows a parse tree for the string pvq.r generated by the grammar G in figure (a).
(a) G: S → SvS | S.S | ~S | A
       A → p | q | r

(b) [Parse tree for pvq.r: the root applies S → SvS; the left subtree derives p through A, and the right subtree applies S → S.S, deriving q and r through A.]
Given a derivation (i.e., a sequence of rules applied) for a string w, the parse tree for w with respect to the derivation is constructed as follows. First, put the root node with the start symbol S. Then, for each leaf node of the current tree with a nonterminal label, say A, recursively expand the tree as follows:

Suppose that A → β is the rule applied next to derive w. For each symbol X appearing in β, in order from left to right, a child node with label X is added to A. This procedure repeats until the tree has no leaf nodes labeled with a nonterminal symbol left to expand. Reading all the leaf nodes left to right on the final tree gives the string w. This string of terminal symbols is called the yield of the parse tree.
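The construction above can be sketched in a few lines of Python. The grammar encoding, the nested-tuple tree representation, and all helper names are illustrative choices, not from the text; the derivation shown is a leftmost derivation of pvq.r in the grammar G of figure (a).

```python
def build_parse_tree(derivation, nonterminals):
    """Build a parse tree as nested (label, children) tuples from a
    leftmost derivation given as a list of (lhs, rhs) rule applications."""
    steps = iter(derivation)

    def expand(label):
        if label not in nonterminals:          # terminal symbol: a leaf
            return (label, [])
        lhs, rhs = next(steps)                 # the rule applied next
        assert lhs == label
        # add one child per right-hand-side symbol, left to right
        return (label, [expand(sym) for sym in rhs])

    return expand("S")                         # root node: start symbol S

def yield_of(tree):
    """Read the leaf nodes left to right: the yield of the parse tree."""
    label, children = tree
    if not children:
        return label
    return "".join(yield_of(c) for c in children)

# Leftmost derivation of pvq.r: S => SvS => AvS => pvS => pvS.S => ...
derivation = [
    ("S", ["S", "v", "S"]),
    ("S", ["A"]), ("A", ["p"]),
    ("S", ["S", ".", "S"]),
    ("S", ["A"]), ("A", ["q"]),
    ("S", ["A"]), ("A", ["r"]),
]
tree = build_parse_tree(derivation, nonterminals={"S", "A"})
print(yield_of(tree))   # the yield is pvq.r
```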
In general, given a source code in a programming environment, the compiler constructs a parse tree of the code and then, traversing the tree bottom up, left to right, generates object code (a machine-language program). For example, suppose that for the string pvq.r the compiler has generated the parse tree in figure (b) above.

The compiler generates machine-language instructions that will access the values of variables q and r, compute q.r and store the result (usually in a register), access the value of p, and finally execute the OR operation with the stored result of q.r. Because the tree is traversed bottom up, left to right, to generate the object code, the order of the logical operations depends on the tree. In this example, the compiler generates object code that evaluates the logical expression pvq.r in the order pv(q.r).
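The bottom-up, left-to-right traversal described above can be sketched by evaluating the tree directly instead of emitting machine instructions. The (label, children) tree encoding and all names below are illustrative assumptions, not from the text; only the binary operators v (OR) and . (AND) are handled.

```python
def evaluate(tree, values):
    """Evaluate a logical-expression parse tree bottom up, left to right."""
    label, children = tree
    if not children:                    # leaf: a variable or an operator
        return values.get(label, label)
    # children are evaluated first, left to right (bottom-up traversal)
    results = [evaluate(c, values) for c in children]
    if len(results) == 1:               # S -> A or A -> p: pass value up
        return results[0]
    left, op, right = results           # S -> S v S  or  S -> S . S
    return (left or right) if op == "v" else (left and right)

# Parse tree for pvq.r with structure pv(q.r), as in figure (b):
tree = ("S", [
    ("S", [("A", [("p", [])])]),
    ("v", []),
    ("S", [("S", [("A", [("q", [])])]),
           (".", []),
           ("S", [("A", [("r", [])])])]),
])
# q.r is computed first (False), then p v (q.r) = True:
print(evaluate(tree, {"p": True, "q": True, "r": False}))   # True
```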
11.2 Parse Tree and Ambiguity

(a) G: S → SvS | S.S | ~S | A
       A → p | q | r

(b) [Parse tree yielding pvq.r with structure pv(q.r): the root applies S → SvS, and the right subtree applies S → S.S.]
(c) [Parse tree yielding pvq.r with structure (pvq).r: the root applies S → S.S, and the left subtree applies S → SvS.]
The two parse trees in figures (b) and (c) above yield the same string pvq.r. In other words, the string can be derived by the grammar G in figure (a) in two different ways. Consequently, the two parse trees imply that the expression pvq.r can be evaluated in two different ways, i.e., pv(q.r) and (pvq).r. This implies that for grammar G, the operator precedence between the two logical operations v (OR) and . (AND) is ambiguous.
As we saw in the previous example, the existence of two parse trees yielding the same string is a problematic property of a CFG, called ambiguity, that should be eliminated. In real applications, we cannot expect correct results from a program written in the language of an ambiguous grammar. Before we discuss how to eliminate ambiguity from a CFG, we need a formal definition of it.

Definition (Ambiguity): A CFG G is ambiguous if there is a string x ∈ L(G) for which there are two parse trees yielding x.

Unfortunately, it is an unsolvable problem to decide whether an arbitrary CFG is ambiguous or not. Also, there is no algorithm that, given an ambiguous CFG, converts it into an unambiguous grammar. However, for certain restricted constructs, it is possible to solve these problems. In this section we will present several techniques with some examples.
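For a small grammar, the definition can be checked by brute force: distinct parse trees correspond one-to-one to distinct leftmost derivations, so counting the leftmost derivations of a string detects ambiguity. A minimal sketch; the grammar encoding and the pruning bound are illustrative choices, not from the text.

```python
def count_leftmost_derivations(grammar, start, target):
    """Count leftmost derivations of target; 2 or more means ambiguous."""
    def terminals(form):
        return [s for s in form if s not in grammar]

    def count(form):
        for i, sym in enumerate(form):
            if sym in grammar:                 # expand leftmost nonterminal
                total = 0
                for rhs in grammar[sym]:
                    new = form[:i] + rhs + form[i + 1:]
                    # prune branches that already carry too many terminals
                    if len(terminals(new)) <= len(target):
                        total += count(new)
                return total
        return 1 if "".join(form) == target else 0

    return count([start])

G1 = {"S": [["S", "v", "S"], ["S", ".", "S"], ["~", "S"], ["A"]],
      "A": [["p"], ["q"], ["r"]]}
print(count_leftmost_derivations(G1, "S", "pvq.r"))   # 2: G1 is ambiguous
```

Every rule of G1 either adds a terminal or moves strictly toward one, so the pruned search terminates.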
11.3 Eliminating Ambiguity of a CFG

(1) Binding with parentheses.

Example: We know that the CFG G1 below is ambiguous because there are two parse trees yielding the same string pvq.r. The ambiguity occurs because the grammar can generate the same string by applying S → SvS followed by S → S.S, or vice versa, as shown in figures (a) and (b), respectively.

G1: S → SvS | S.S | ~S | A
    A → p | q | r
(a) [Parse tree for pvq.r applying S → SvS at the root: structure pv(q.r).]
(b) [Parse tree for pvq.r applying S → S.S at the root: structure (pvq).r.]
Eliminating Ambiguity

Ambiguous G1: S → SvS | S.S | ~S | A    A → p | q | r
Unambiguous G2: S → (SvS) | (S.S) | ~S | A    A → p | q | r

This ambiguity can be eliminated by parenthesizing the right sides of those two rules, as shown in G2 above. The parentheses make the yields of the two parse trees different, as shown in figures (a) and (b).

(a): (p v (q . r))  [parse tree of G2 yielding (pv(q.r))]
(b): ((p v q) . r)  [parse tree of G2 yielding ((pvq).r)]
The parenthesizing technique is simple but has a serious drawback: we are altering the language by adding new terminal symbols, i.e., the parentheses. Nevertheless, this is a popular technique in programming languages. Instead of parentheses they use other notations, for example, "begin" and "end" in Pascal, and the braces '{' and '}' in C and C++.
(2) Fixing the order of applying rules.
Example 1. The language generated by the CFG G3 below is {b^i c b^j | i, j ≥ 0}. This grammar is ambiguous because, for example, the string bcb can be derived by generating the left-side b first and then the right-side b, or vice versa, as shown below.

G3: S → bS | Sb | c

[Two parse trees for bcb: one applies S → bS first and then S → Sb; the other applies S → Sb first and then S → bS.]
Ambiguous G3: S → bS | Sb | c
Unambiguous G4: S → bS | A    A → Ab | c

We can simply modify grammar G3 into G4, as shown above, so that the left-side b's, if any, are always generated first. Figure (b) below shows the only parse tree for the string bcb. Grammar G4 is unambiguous.
(a) [The two parse trees of G3 for bcb, as before.]
(b) [The unique parse tree of G4 for bcb: S → bS, S → A, A → Ab, A → c.]
Example 2. Using the technique of fixing the order of derivation, the ambiguous CFG G1 that we examined earlier can be converted into the unambiguous grammar G5, shown in figure (a). Notice that this grammar generates the operators left to right, in the order they appear in the string.

(a) Ambiguous G1: S → SvS | S.S | ~S | A    A → p | q | r
    Unambiguous G5: S → AvS | A.S | ~S | A    A → p | q | r

(b) [The parse tree of G5 yielding ~pvq.qvp.]
(3) Eliminating redundant rules

The CFG G6 below is ambiguous because it can generate ab either by B or by D. We can simply delete one of the two alternatives and make the grammar unambiguous (see G7).

Ambiguous G6: S → B | D    B → ab | b    D → ab | d
Unambiguous G7: S → B | D    B → ab | b    D → d

The CFG G8 below is ambiguous because it can generate c in two ways. Applying the technique for minimizing the number of c-production rules, we can convert it into the unambiguous grammar G9.

Ambiguous G8: S → B | D    B → bBc | c    D → dDe | c
Unambiguous G9: S → B | D | c    B → bBc | bc    D → dDe | de
(4) Implementing operator precedence and associativity

Operator precedence and associativity are important rules for evaluating mathematical expressions. In programming languages, these rules are defined by the grammar. As we know, multiplication (*) and division (/) are given higher precedence than addition (+) and subtraction (-). The assignment operator (=) is given the lowest precedence. Operator = is right associative, and all the others are left associative. According to this order of precedence and associativity, the mathematical expression in figure (a) below will be evaluated as shown in figure (b).

(a) a = b + c * d - e / f
(b) a = ((b + (c * d)) - (e / f)), evaluated in the order * (1), / (2), + (3), - (4), = (5)
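The grouping in figure (b) can be reproduced with the classic precedence-climbing technique: each operator carries a precedence level, and a right-associative operator recurses at the same level while a left-associative one recurses one level higher. The operator table, token handling, and variable values below are illustrative assumptions, not from the text.

```python
PREC  = {"=": 1, "+": 2, "-": 2, "*": 3, "/": 3}
RIGHT = {"="}                          # only = is right associative

def evaluate(tokens, env, min_prec=1):
    """Evaluate a token list by precedence climbing, mutating env on =."""
    def atom():
        tok = tokens.pop(0)
        if tok in env:
            return env[tok]
        if tok.replace(".", "").isdigit():
            return float(tok)
        return 0.0                     # unassigned variable (target of =)

    left_tok = tokens[0]               # remembered in case of assignment
    value = atom()
    while tokens and PREC.get(tokens[0], 0) >= min_prec:
        op = tokens.pop(0)
        # right-assoc: recurse at the same level; left-assoc: one higher
        nxt = PREC[op] if op in RIGHT else PREC[op] + 1
        right = evaluate(tokens, env, nxt)
        if op == "=":
            env[left_tok] = value = right
        elif op == "+": value += right
        elif op == "-": value -= right
        elif op == "*": value *= right
        else:           value /= right
    return value

env = {"b": 1.0, "c": 2.0, "d": 3.0, "e": 8.0, "f": 4.0}
evaluate("a = b + c * d - e / f".split(), env)
print(env["a"])                        # ((1 + (2*3)) - (8/4)) = 5.0
```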
Example. Assume that the logical operators ~, ., and v are right associative, with precedence given in that order (i.e., ~ is at the top, followed by ., with v at the bottom). The ambiguous CFG G1 (repeated below) can be modified into an unambiguous grammar G10 by implementing the precedence and associativity in the production rules.

(a) Ambiguous G1: S → SvS | S.S | ~S | A    A → p | q | r
    Unambiguous G10: S → DvS | D    D → C.D | C    C → ~C | A    A → p | q | r

(b) [Parse tree of G10 yielding ~pvqvr.p]

Notice that every OR (v) operator in the string must be generated by S → DvS before any others are generated. Then the AND (.) operators, if any, are generated by D → C.D, and finally the NOT (~) operators. Also notice that from D there is no way to derive v, and from C neither v nor . can be derived.
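G10 maps directly onto a recursive-descent parser: one function per nonterminal, with right associativity falling out of S → DvS calling the S-parser again after consuming v. A minimal sketch; the tuple tree representation is an illustrative choice, not from the text.

```python
def parse_S(s, i):                 # S -> DvS | D
    left, i = parse_D(s, i)
    if i < len(s) and s[i] == "v":
        right, i = parse_S(s, i + 1)   # recurse on S: v is right associative
        return ("v", left, right), i
    return left, i

def parse_D(s, i):                 # D -> C.D | C
    left, i = parse_C(s, i)
    if i < len(s) and s[i] == ".":
        right, i = parse_D(s, i + 1)   # recurse on D: . is right associative
        return (".", left, right), i
    return left, i

def parse_C(s, i):                 # C -> ~C | A
    if s[i] == "~":
        sub, i = parse_C(s, i + 1)
        return ("~", sub), i
    if s[i] in "pqr":              # A -> p | q | r
        return s[i], i + 1
    raise SyntaxError(s[i])

tree, _ = parse_S("~pvqvr.p", 0)
print(tree)    # ('v', ('~', 'p'), ('v', 'q', ('.', 'r', 'p')))
```

The result matches the grouping ((~p) v (q v (r . p))) dictated by the precedence and right associativity.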
This fixed order of rule applications lets the compiler generate object code that evaluates the string (i.e., a logical expression) according to the operator precedence and associativity.

For example, because the NOT (~) operators are generated last, each appears in a subtree below the parent node of any leaf labeled . or v. Hence the compiler, traversing the tree bottom up, left to right, generates the instructions that execute the NOT (~) operators before the instructions executing the other operators. Similarly, we can see how the compiler generates the instructions that execute AND (.) operators before OR (v) operators.
(a) ~p v q v r . p is evaluated as ((~p) v (q v (r . p))), in the order ~ (1), . (2), the inner v (3), the outer v (4).
Now, for the associativity, notice that identical operators are generated left to right in the order they appear in the string. For example, all OR (v) operators are derived by applying S → DvS recursively. Thus the compiler, traversing the parse tree bottom up, left to right, will generate object code that evaluates the logical expression in right-associative order.

If we want to implement left associativity for an operator used in the language, we can simply reverse the right side of the rule that derives it. For example, we use S → SvD instead to make operator v left associative.
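Note that S → SvD is left-recursive, so it cannot be coded as naive recursion (an S-parser would call itself without consuming input). The standard realization is a loop that folds operands into a left-leaning tree. A sketch under that assumption; the operand parser stands in for G10's D and is simplified to single letters for brevity.

```python
def parse_operand(s, i):           # stands in for the D of G10
    if s[i] in "pqr":
        return s[i], i + 1
    raise SyntaxError(s[i])

def parse_S_left(s, i):            # S -> SvD, realized as a loop
    tree, i = parse_operand(s, i)
    while i < len(s) and s[i] == "v":
        right, i = parse_operand(s, i + 1)
        tree = ("v", tree, right)  # fold to the left: ((p v q) v r)
    return tree, i

tree, _ = parse_S_left("pvqvr", 0)
print(tree)    # ('v', ('v', 'p', 'q'), 'r')
```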
Rumination (1): Ambiguity

We learned that for a simple CFG, it is possible to analyze it to decide whether it is ambiguous or not, and if it is ambiguous, convert it into an unambiguous one. However, as we mentioned before, most problems concerning the ambiguity of CFG's in general are very difficult or even unsolvable. Here are some interesting facts.

• There are some languages that can only be generated by ambiguous context-free grammars. Such languages are called inherently ambiguous. The following language L is an example.
L = {a^n b^n c^m | m, n ≥ 1} ∪ {a^n b^m c^m | m, n ≥ 1}
• It is an unsolvable problem to tell whether an arbitrary CFG is ambiguous or not.
• It is an unsolvable problem to convert an arbitrary ambiguous CFG, whose language is not inherently ambiguous, to an
unambiguous CFG.
Today's Quote

Be slow in choosing a friend, slower in changing.
- Benjamin Franklin -

Break Time
Exercises
11.1 Show that each of the following CFG's is ambiguous, and convert it into an unambiguous CFG.
(a) S → Sa | A | a    A → a | bA
(b) S → AB | c    A → a | c    B → b | c

11.2 (a) Show that the following CFG is ambiguous.
G: S → S + S | S * S | T    T → a | b
(b) In the CFG G above, let + and * be the addition and multiplication operators, respectively, and let a and b be integer variables. Convert G into a CFG G' that satisfies the following conditions.
(i) L(G) = L(G')
(ii) Operator * has higher precedence than operator +.
(iii) Both operators are left associative.
You should also present a parse tree showing that your grammar meets the required order of operator precedence and associativity.
11.3 The syntax flow graph below defines a simplified <If-statement> of the Pascal programming language. Following the convention, symbols in a circle or oval are terminals, and the words in rectangles correspond to nonterminal symbols. The following problem is concerned with the ambiguity of the if-statement, which appears often in the text. Question (a) is easy, but question (b) is challenging.

(a) Transform the flow graph into a CFG G and show that G is ambiguous.
(b) Convert grammar G into an unambiguous CFG and explain in detail how you got your answer.

[Syntax flow graph for <IF-statement>: IF ( <bool> ) then <statement>, optionally followed by else <statement>; <bool> is a + b or c; <statement> is a - b, d, or an <IF-statement>.]
Hierarchy of the Models

12. Hierarchy of the Models: Proper Containment

In Chapter 7 we saw the Chomsky hierarchy among languages, automata, and other models, and proved the horizontal relations (i.e., the characterization) at the lowest level of the hierarchy. Figures (a) and (b) below, respectively, reproduce the relations among the classes of languages and automata presented in Chapter 7. Recall that if a language is a member of a class of languages at a lower level, then it is also a member of an upper class, and if a language is recognizable by an automaton at a lower level, then it is also recognizable by an automaton at an upper level. But the reverse of these relations does not hold: an upper-level language class has a member that does not belong to a lower-level class, and there is an automaton belonging to an upper level whose language cannot be recognized by any automaton belonging to a lower level of the hierarchy.

(a) Containment relations of the language classes: Type0L ⊇ Type1L ⊇ Type2L ⊇ Type3L
(b) Containment relations of automata capability: TM ⊇ LBA ⊇ PDA ⊇ FA
The Chomsky Hierarchy (Review)

Languages (grammars)                   Machines                        Other Models
Recursively Enumerable Sets (type 0)   Turing Machines (TM)            Post Systems, Markov Algorithms, μ-recursive Functions
Context-sensitive Languages (type 1)   Linear-bounded Automata (LBA)
Context-free Languages (type 2)        Pushdown Automata (PDA)
Regular Languages (type 3)             Finite State Automata (FA)      Regular Expressions

(Each level contains the one below it; each language class is characterized by the machine model on its row.)
Proving the proper containment relations of the hierarchy requires the in-depth logical arguments that we developed in Chapter 1. In this chapter we will prove the containment relations up to the class of context-sensitive languages. In Chapter 15 we will complete the proof of the hierarchy by proving the characterizations of context-free, context-sensitive and phrase-structured languages, together with the proof of the last proper containment relation, between type 0 and type 1. The logic involved in these proofs is so elegant that it is worth the challenge.
12.1 Relationship of the language classes: proper containment
12.2 The pumping lemma
- The pumping lemma and proof
12.3 Applications of the pumping lemma
- Examples
12.4 The pumping lemma for context-free languages
12.5 Application of the pumping lemma for context-free languages
12.6 Ogden's lemma
- The lemma and an application
12.7 A proof of the pumping lemma for context-free languages
Rumination
Exercises
12.1 The Containment Relations

Before proving the proper containment relations (⊃) between the levels of the hierarchy, we prove the containment relations (⊇) by the following theorem.

Theorem 12.1 Let TypeiL denote the type i (i = 0, 1, 2, 3) language class. Then for all i < 3, the following relation holds.

TypeiL ⊇ Type(i+1)L

Proof. Let α → β be a rule of a grammar G = (V_T, V_N, P, S). We know that, by definition, type 3 (regular) grammar rules have the form of type 2 (context-free) grammar rules (i.e., |α| = 1) with the additional restriction that either β = xB or β = x, where x ∈ (V_T)* and B ∈ V_N. It follows that every type 3 grammar is a type 2 grammar, and hence all type 3 languages belong to the class of type 2 languages, i.e., Type2L ⊇ Type3L.
313
Containment Relations
Proof (continued). By definition, type 1 grammar (CSG) rules are type 0 grammar
rules with the additional restriction that the left side of each rule cannot be longer
than the right side. It follows that every type 1 (contextsensitive) grammar is also a
type 0 (phrase structured) grammar. That is,
Type0L _ Type1L
Now, we prove that type 1 language class contains type 2 (contextfree)
language class. Recall that type 1 grammar rules are noncontracting. In other
words, type 1 grammar rules are type 0 grammar rules with the restriction that for
any rule o÷, the left side cannot be longer than the right side, i.e., o s . Type
2 grammar rules are type 0 grammar rules having one nonterminal symbol on their
left side, i.e., o = 1.
Both type 1 and type 2 grammars are defined by adding a restriction on type 0
grammar rules. So for the proof of the containment relation between type 1 and
type 2 language classes, we cannot apply the simple logic that we used for the
other relations between other language classes.
Hierarchy: Proper Containment
Clearly, since every CFG rule has exactly one nonterminal symbol on its left side, as far as the left sides of grammar rules are concerned, CFG rules qualify as CSG rules. The only problem is the ε-production rules allowed in CFG's, which violate the requirement that the right side of a CSG rule cannot be shorter than the left side, except for S → ε. All the other CFG rules, which do not produce ε, are CSG rules. Fortunately, we have a way to overcome this problem. Recall Theorem 10.1, repeated below.

Theorem 10.1 Given a CFG G, we can construct a CFG G' that satisfies the following conditions.
• L(G) = L(G').
• If ε ∉ L(G), G' has no ε-production rule. Otherwise, S → ε is the only ε-production rule in G'.

Grammar G' is a CSG, because every rule in G' is non-contracting except for the rule S → ε, if any. This implies that every CFL can be generated by a CSG. It follows that Type1L ⊇ Type2L.
In summary, we have shown the following relations.

Type0L ⊇ Type1L ⊇ Type2L ⊇ Type3L

Now we will show that all these containment relations are proper, i.e.,

Type0L ⊃ Type1L ⊃ Type2L ⊃ Type3L

For the proof it is enough to show that for each i = 0, 1, 2, there is a language in class TypeiL that does not belong to class Type(i+1)L. In this chapter we will first show that the familiar type 2 language {a^i b^i | i ≥ 0} does not belong to Type3L, i.e., the class of regular languages. Then we will show that the language {a^i b^i c^i | i ≥ 0}, which was presented as a typical type 1 language in Chapter 2, does not belong to Type2L. These two results prove the following relations.

Type1L ⊃ Type2L ⊃ Type3L

We will defer the proof of the last part, Type0L ⊃ Type1L, until Chapter 15.
12.2 The Pumping Lemma

To prove the relation Type2L ⊃ Type3L, it is enough to find a context-free language that is not regular. Our approach is to first find a common property of all regular languages, and then show a context-free language that does not satisfy this property. Since every finite language is regular (we leave the proof of this claim to the reader), the property we are looking for must be found among infinite regular languages. The following lemma presents such a property.

Pumping Lemma: For every infinite regular language L, there is a constant integer n that satisfies the following conditions:

Let Σ be the alphabet of the language. Then for every string z ∈ L whose length is greater than or equal to n, there are u, v, w ∈ Σ* such that
• z = uvw, |uv| ≤ n, |v| ≥ 1, and
• for all i ≥ 0, uv^i w ∈ L
Proof: For the proof we need the following two terminologies: a transition path is a directed path on a state transition graph, and a path label is the string constructed by picking up the input symbols along a transition path. For example, in the state transition graph shown below, aabba is the path label on the transition path 1 → 2 → 4 → 3 → 2 → 4. Given the string aabba as input, the DFA will take the sequence of transitions along this path.

To help the reader understand the argument, we will use the following state transition graph throughout the proof, together with other figures when needed.

[Figure: a four-state DFA with start state 1, on which the input aabba follows the transition path 1 → 2 → 4 → 3 → 2 → 4.]
318
Given an infinite regular language L, let M be a DFA that recognizes L and let n
be the number of states of this automaton. For an input string z e L, whose length is
greater than or equal to n (i.e., z > n), examine the state transition path with path
label z.
Notice that because M is deterministic, there is only one transition path with path
label z. Since L is infinite and the number of states n is finite, there must be a string
z in L such that z > n. Otherwise, L cannot be infinite.
Let m be the length of z (i.e., z = m > n ) and let z = a
1
a
2
…a
m
.
Pumping Lemma
a
b
b
a
a
b
start
1
3
2
4
3
1
Hierarchy: Proper Containment
319
Since z e L, the transition path with label z must end in an accepting state.
Because z ≥ n, the path involves a cycle and hence, there should be a state,
say q, visited a second time by reading some jth symbol a
j
(j s n) in z as
illustrated in figure (b) below. (This can be proved by the pigeonhole principle.
Refer the application example of the pigeonhole principle in Chapter 1.)
(b) Transition path with path label
z = a
1
a
2
…a
m
, m > n
q
start
a
1
M
a
2
a
i
.
.
.
a
i+1
a
j
a
j+1
.
.
.
a
m
z = m > n
(a) Transition path with path
label aabba
Pumping Lemma
a
b
b
a
a
b
start
1
3
2
4
3
1
Hierarchy: Proper Containment
As shown in figure (b), let v be the path label of the cyclic transition path, and let u and w be, respectively, the prefix and the suffix of z partitioned by v. (If q is the start state, then u = ε, and if q is the last state on the path, then w = ε.) Since the cycle involves at least one edge, |v| ≥ 1. Since j ≤ n, we have |uv| ≤ n.

z = a_1 a_2 ... a_i  a_{i+1} ... a_j  a_{j+1} ... a_m
    |----- u -----|  |----- v -----|  |----- w -----|

|z| = m ≥ n, |v| ≥ 1, |uv| ≤ n. (In the example of figure (a): u = a, v = abb, w = a.)
321
Sine uvw is in L, M accepts uvw. It follows that all the following strings
corresponding to the path labels of the transition paths with the cycle repeated
zero or more times are also accepted by M.
uw = uv
0
w, uvvw = uv
2
w, uvvvw = uv
3
w, . . . .
That is, for all i > 0, uv
i
w e L. (Refer to figure (a) for an example.)
Pumping Lemma
z = a
1
a
2
…a
i
a
i+1
. . . . a
j
a
j+1
. . . .a
m
u v
w
M
q
start
a
1
a
2
a
i
.
.
.
a
i+1
a
j
a
j+1
.
.
.
a
m
u
v
w
For all i > 0, uv
i
w e L
z = m > n v > 1 uv s n
(b) Transition path with path label
z = a
1
a
2
…a
m
, m > n
For all i > 0, a(abb)
i
a e L
u = a, v = abb, w = a
a
b
b
a
a
b
start
1
3
2
4
3
1
(a) Transition path with path
label aabba
Hierarchy: Proper Containment
322
Summarizing the observations, we get the pumping lemma, repeated here.

Pumping Lemma: For every infinite regular language L, there is a constant integer n that satisfies the following conditions:

Let Σ be the alphabet of the language. Then for every string z ∈ L whose length is greater than or equal to n, there are u, v, w ∈ Σ* such that
• z = uvw, |uv| ≤ n, |v| ≥ 1, and
• for all i ≥ 0, uv^i w ∈ L
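The proof is constructive and can be sketched for a concrete automaton: run the DFA on z, record the states visited, cut z at the first repeated state, and pump the cycle's path label. The 2-state DFA below, for the regular language (ab)*, is an illustrative stand-in for the example automaton in the figures.

```python
DELTA = {(0, "a"): 1, (1, "b"): 0}     # transitions; start 0, accepting {0}

def accepts(z):
    state = 0
    for ch in z:
        if (state, ch) not in DELTA:
            return False               # no such transition: reject
        state = DELTA[(state, ch)]
    return state == 0

def split_uvw(z):
    """Cut z = uvw, where v labels the first cycle on z's transition path."""
    state, seen = 0, {0: 0}            # state -> input position first seen
    for pos, ch in enumerate(z, 1):
        state = DELTA[(state, ch)]
        if state in seen:              # pigeonhole: some state repeats
            i = seen[state]
            return z[:i], z[i:pos], z[pos:]
        seen[state] = pos
    return z, "", ""                   # no repeat (|z| < number of states)

z = "ababab"                           # a string of L with |z| >= n = 2
u, v, w = split_uvw(z)
print(v)                               # the cycle's path label: ab
print(all(accepts(u + v * i + w) for i in range(6)))   # True: uv^iw stays in L
```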
Job Applicant Bloopers
Interviewer: “Do you think you can handle a variety of tasks?”
Applicant: "I should say so. I've had nine totally different jobs in the past five months."
- Jim -
Break Time
12.3 Application of the Pumping Lemma

Now, as an application of the pumping lemma, we will show that there is a CFL which is not regular. Specifically, using the lemma, we will prove that the language {a^i b^i | i ≥ 1} does not satisfy the pumping lemma, thereby showing Type2L ⊃ Type3L, the proper containment relation between the classes of type 2 and type 3 in the Chomsky hierarchy.

For the convenience of the application, the lemma is rewritten below to expose its logical structure. (Notice the quantified parts in the comments.)

(1) For every infinite regular language L,        // ∀ infinite regular language L
(2) there exists a constant n such that           // ∃ a constant n such that . . .
(3) for every string z ∈ L such that |z| ≥ n,     // ∀ string z ∈ L, |z| ≥ n
(4) there exist strings u, v, w ∈ Σ* such that    // ∃ u, v, w that satisfy . . .
    (i) z = uvw, (ii) |uv| ≤ n, (iii) |v| ≥ 1,
    (iv) for all i ≥ 0, uv^i w ∈ L.               // ∀ i ≥ 0, uv^i w ∈ L.
324
Before using the pumping lemma for the proof, recall the techniques (in Section 1.3)
for dealing with existential quantification () and universal quantification (¬) in a
statement to prove. In this example, we are going to prove that for the given context
free language, the pumping lemma is not true. Hence, for the existentially quantified
parts of the lemma, i.e., the constant n and strings u, v, and w, we should consider all
possible cases. For the universally quantified parts, i.e., regular language L, string z e
L, and integer i > 0, it is enough to consider a single case, which can be chosen for our
convenience of the proof.
The argument of the proof will run as a game between the proponent of the lemma
and us who are going to show that the lemma does not hold for the given language.
Application of the Pumping Lemma
(1) For every infinite regular language L, // ¬ infinite regular language L,
(2) there exists a constant n such that //  a constant n such that . . .
(3) for every string z e L, such that  z  > n, // ¬ string z e L,  z  > n
(4) there exist strings u, v, w e ¯
*
such that //  u, v, w that satisfy . . .
(i) z = uvw, (ii) uv s n, (iii) v > 1,
(iv) for all i > 0, uv
i
w e L. // ¬ i > 0, uv
i
w e L.
Hierarchy: Proper Containment
Now we show that our familiar CFL L = {a^i b^i | i ≥ 1} does not satisfy the pumping lemma, and hence is not regular. Suppose that L is a regular language. (By this sentence the reader will promptly recognize that we are going to use the proof-by-contradiction technique.) We will carefully analyze the pumping lemma to find a part that contradicts our supposition. (The numbers in the following arguments match the ones that we put in the lemma.)

The pumping lemma says: (1) for every infinite regular language L, (2) there exists a constant n such that . . .

Our argument: L is an infinite regular language. Hence there should be a constant n such that conditions (3) and (4) are satisfied. Let n be just that constant. Using n as a variable, we consider all possible constant values of n in our argument.
The pumping lemma says: (3) for all strings z ∈ L such that |z| ≥ n, (4) there exist strings u, v and w that satisfy (i) z = uvw, (ii) |uv| ≤ n, (iii) |v| ≥ 1, and (iv) for all i ≥ 0, uv^i w ∈ L.

Our argument: We choose the string z = a^n b^n in L and examine whether, for all strings u, v, w satisfying conditions (i) - (iii), condition (iv) is satisfied.

Let z = a^n b^n = uvw according to (i). Then by (ii) and (iii), v contains only (and at least one) a's, because the condition |uv| ≤ n confines u and v to the block of n a's. It follows that the string uv^2 w contains more a's than b's. That is, for i = 2, we claim that uv^2 w ∉ L.

The language L thus does not satisfy the pumping lemma. This contradicts the assumption that L is regular. Therefore, L is not regular.
We know that L = {a^i b^i | i ≥ 1} is a CFL. Since we have just shown that L is not regular, the containment relation Type2L ⊇ Type3L that we established in the previous section can be refined to the proper containment relation Type2L ⊃ Type3L.
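The argument above can also be checked mechanically for concrete values of n: for every split z = uvw of a^n b^n with |uv| ≤ n and |v| ≥ 1, pumping with i = 2 must leave the language. A sketch; the function names are illustrative, not from the text.

```python
def in_L(s):                           # membership in L = {a^i b^i | i >= 1}
    k = len(s) // 2
    return len(s) >= 2 and s == "a" * k + "b" * k

def pumping_fails_everywhere(n):
    """True if no legal split of a^n b^n survives pumping with i = 2."""
    z = "a" * n + "b" * n
    for i in range(n + 1):                 # u = z[:i]
        for j in range(i + 1, n + 1):      # v = z[i:j]: |v| >= 1, |uv| <= n
            u, v, w = z[:i], z[i:j], z[j:]
            if in_L(u + v * 2 + w):        # does pumping stay in L?
                return False
    return True

# Every candidate constant n is defeated by the string a^n b^n:
print(all(pumping_fails_everywhere(n) for n in range(1, 8)))   # True
```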
Secrets of Success
The other day I had the opportunity to drop by my department head's office. He's a friendly guy and on the rare
opportunities that I have to pay him a visit, we have had enjoyable conversations. While I was in his office
yesterday I asked him, "Sir, what is the secret of your success?"
He said, "Two words."
"And, Sir, what are they?"
"Right decisions."
"But how do you make right decisions?"
"One word." He responded.
"And, Sir, what is that?"
"Experience."
"And how do you get Experience?"
"Two words."
"And, Sir, what are they?"
"Wrong decisions."
- Rubin -
Break Time
12.4 The Pumping Lemma for Context-Free Languages

Now we prove the proper containment relation between the next two levels of the language classes, i.e., Type1L ⊃ Type2L. We use the same approach that we took for proving the relation Type2L ⊃ Type3L. With no proof yet, we first present a common property of infinite context-free languages, called the pumping lemma for context-free languages (see below). Then, as an application of the lemma, we will show that the context-sensitive language {a^i b^i c^i | i ≥ 1}, which we introduced in Section 2.3, does not satisfy the lemma. We shall present the proof of the lemma after showing the application.

The pumping lemma for context-free languages: For every infinite CFL L, there is a constant integer p that satisfies the following conditions:

Let Σ be the alphabet of the language. Then for every string z ∈ L whose length is greater than or equal to p, there are u, v, w, x, y ∈ Σ* such that
• z = uvwxy, |vwx| ≤ p, |vx| ≥ 1, and
• for all i ≥ 0, uv^i wx^i y ∈ L
329
Pumping Lemma for CFL’s
• z = uvw, uv s n, v > 1
• for all i > 0, uv
i
w e L
The lemma for regular language
u v w
uv s n
• z = uvwxy, vwx s p, vx > 1
• for all i > 0, uv
i
wx
i
y e L
The lemma for CFL‟s
u v w x y
vwx s p
z
The pumping lemma for CFL's looks very similar to the one for regular languages; the comparison above shows the difference. Both lemmas claim the existence of a constant (i.e., n for regular languages and p for CFL's). The lemma for regular languages divides string z into three segments (i.e., z = uvw), while the lemma for CFL's divides it into five segments (i.e., z = uvwxy). The lemma for regular languages provides one site, named v, to pump, while the lemma for CFL's provides two sites v and x to pump simultaneously.
Recall that when we applied the pumping lemma for regular languages, we took advantage of the fact that the pump site v is restricted to the prefix of length n of string z, given by the condition |uv| ≤ n.
Unfortunately, for the pumping lemma for CFL's the segment vwx, which contains the two pump sites, can be flanked by u to the left and by y to the right. So the pump sites can be located anywhere in z, in a window of width |vwx| ≤ p. When we apply the lemma to prove a language is not context-free, we must consider all possible locations of the window vwx, from the left end of z up to the right end.
12.5 Application of the pumping lemma for CFL’s
Again, for the clarity of the argument, we rewrite the lemma as follows.
(1) For every infinite CFL L, // ∀ infinite CFL L,
(2) there exists a constant p such that // ∃ a constant p such that . . .
(3) for every string z ∈ L such that |z| ≥ p, // ∀ string z ∈ L, |z| ≥ p
(4) there exist strings u, v, w, x and y that // ∃ u, v, w, x, y that . . .
satisfy the following conditions.
(i) z = uvwxy,
(ii) |vwx| ≤ p,
(iii) |vx| ≥ 1,
(iv) for all i ≥ 0, uv^i wx^i y ∈ L. // ∀ i ≥ 0, uv^i wx^i y ∈ L.
Now, we show that the CSL L = {a^i b^i c^i | i ≥ 1}, which we introduced in Section 2.3, does not satisfy the pumping lemma for CFL's. Our argument is similar to the one for proving that language {a^i b^i | i ≥ 1} is not regular.
Suppose L is a CFL. It should satisfy the lemma. With L we walk along the lemma and come out with a contradiction.
Application of CFL Pumping Lemma
Pumping Lemma's claim:
(1) For every infinite CFL L,
(2) there exists a constant p s.t. (3) . . . (4) . . .
Our claim: L is an infinite CFL. Hence, there should be a constant p such that conditions (3) and (4) are satisfied. Let p be the constant. Treating p as a variable, we are going to consider all possible constant values of p in our argument.
Pumping Lemma's claim:
(3) For all strings z ∈ L such that |z| ≥ p,
(4) there exist strings u, v, w, x, y that satisfy the following.
(i) z = uvwxy,
(ii) |vwx| ≤ p,
(iii) |vx| ≥ 1,
(iv) for all i ≥ 0, uv^i wx^i y ∈ L.
Our claim: We choose string z = a^p b^p c^p in L and examine whether, for all strings u, v, w, x, y satisfying conditions (i)-(iii), it satisfies condition (iv). By (i) let z = a^p b^p c^p = uvwxy, and examine what kinds of symbols can be in the two pump sites v and x. Recall that vwx, whose length is at most p, can be anywhere in z.
(Figure: z = aa…abb…bcc…c, with p of each symbol; the window vwx of width ≤ p can slide anywhere in z.)
(i) z = uvwxy, (ii) |vwx| ≤ p, (iii) |vx| ≥ 1, (iv) for all i ≥ 0, uv^i wx^i y ∈ L.
As we can see in the figure above, by condition (ii) |vwx| ≤ p, the pump sites v and x together can contain no more than two different symbols (a's and b's, or b's and c's). By condition (iii) |vx| ≥ 1, strings v and x together should contain at least one symbol. It follows that in uv^2 wx^2 y the numbers of a's, b's, and c's cannot all be the same, which implies that uv^2 wx^2 y ∉ L. The language does not satisfy condition (iv) of the lemma when i = 2. Hence, L is not context-free.
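The case analysis above can also be checked mechanically for a small value of p. The sketch below (function and constant names are ours, not the text's) tries every decomposition z = uvwxy with |vwx| ≤ p and |vx| ≥ 1 and confirms that pumping with i = 2 always leaves the language:

```python
def in_abc(s):
    """Membership in {a^i b^i c^i | i >= 1}."""
    i = len(s) // 3
    return i > 0 and s == "a" * i + "b" * i + "c" * i

def refutes_lemma(z, p):
    """True iff NO decomposition z = uvwxy with |vwx| <= p, |vx| >= 1
    keeps uv^2 wx^2 y inside the language (i.e., the lemma fails on z)."""
    n = len(z)
    for j in range(n):                       # window vwx starts at position j
        for l in range(1, p + 1):            # window length |vwx| = l
            if j + l > n:
                break
            for a in range(l + 1):           # v = z[j:j+a]
                for b in range(a, l + 1):    # w = z[j+a:j+b], x = z[j+b:j+l]
                    v, w, x = z[j:j+a], z[j+a:j+b], z[j+b:j+l]
                    if not v and not x:
                        continue             # condition |vx| >= 1
                    if in_abc(z[:j] + v*2 + w + x*2 + z[j+l:]):
                        return False         # this decomposition pumps: lemma would hold
    return True

p = 4
print(refutes_lemma("a"*p + "b"*p + "c"*p, p))   # True: no window of width <= p can pump
```

Since vwx never spans all three symbol blocks, every pumped string breaks either the counts or the a*b*c* ordering, exactly as argued above.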
For the language {a^i b^i c^i | i ≥ 1}, it was fairly straightforward to apply the lemma and show that the language is not context-free. However, this lemma is so weak that there are many non-context-free languages for which it is either hard or impossible to apply it. For example, consider the following language L, which is not context-free. Whichever string z (of length greater than or equal to p) we pick from this language, it is impossible to pump and come to a contradiction.
L = {xa^i x | i ≥ 1, x ∈ {0,1}+} ∪ {x | x ∈ {0,1}+}
For example, suppose that we picked string z = 1^p 0^p aaa 1^p 0^p to see if z = uvwxy can be pumped to come up with a contradiction. For the case of v = a, w = a, and x = a, it satisfies that uv^i wx^i y ∈ L for all i ≥ 0. Even when there is one a (i.e., z = 1^p 0^p a 1^p 0^p), if we set v = a and w = x = ε, we get uv^i wx^i y ∈ L for all i ≥ 0. Notice that picking a binary string for z from the second part of the language doesn't work either. Ogden has introduced a lemma to overcome this weakness.
12.6 Ogden’s Lemma
Ogden's lemma, shown below, is very similar to the original lemma except for the additional parts (in conditions (3) and (iii)) restricting the locations of the pump sites.
(1) For every infinite CFL L,
(2) there exists a constant p such that
(3) for every string z ∈ L such that |z| ≥ p, if we mark p symbols in z,
(4) there exist strings u, v, w, x and y that satisfy the following conditions.
(i) z = uvwxy,
(ii) |vwx| ≤ p,
(iii) |vx| ≥ 1, and string vx contains at least one marked symbol,
(iv) for all i ≥ 0, uv^i wx^i y ∈ L.
We omit the proof of this lemma. Interested readers are referred to the book "Introduction to Automata Theory, Languages, and Computation" by Hopcroft, Motwani, and Ullman (2001, Addison-Wesley).
Now, we can apply Ogden's lemma and show that the following language is not context-free, which we failed to prove with the original lemma.
L = {xa^i x | i ≥ 1, x ∈ {0,1}+} ∪ {x | x ∈ {0,1}+}
Suppose L is a CFL. Then it should satisfy Ogden's lemma. Let p be the constant. Pick string z = 1^p 0^p a 1^p 0^p, let z = uvwxy, and mark the prefix 1^p. According to the lemma, |vwx| ≤ p, and string vx should contain at least one marked symbol. It follows that string vx can never include the symbol a in the middle. We see that string z' = uv^2 wx^2 y ∉ L, because in z' the binary string on the left side of a must be different from the string on the right side of a. We have a contradiction. It follows that L is not context-free.
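The contradiction can be illustrated concretely. The membership tester below is our own sketch of L (assuming i ≥ 1 in the first set; the names are hypothetical): any decomposition whose vx touches the marked prefix 1^p pumps only the left side of the middle a.

```python
import re

def in_L(s):
    """L = {x a^i x | i >= 1, x in {0,1}+}  union  {x | x in {0,1}+}."""
    if s and set(s) <= {"0", "1"}:
        return True                           # second part: any binary string
    m = re.fullmatch(r"([01]+)a+([01]+)", s)  # first part: x a^i x
    return bool(m) and m.group(1) == m.group(2)

p = 3
z = "1"*p + "0"*p + "a" + "1"*p + "0"*p
print(in_L(z))                                # True: z has the form x a x

# Mark the prefix 1^p.  Ogden's lemma forces vx to contain a marked 1,
# so vx lies entirely to the left of the middle 'a'.  One legal choice:
u, v, w, x, y = "", "1", "", "", z[1:]
print(in_L(u + v*2 + w + x*2 + y))            # False: left copy of x no longer equals the right
```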
By the two application examples of the pumping lemmas for regular languages and context-free languages, we have proven the relations of proper containment between the lower levels of the Chomsky hierarchy (repeated below). We will complete the remaining proofs of the other relations in Chapter 15.
Chomsky Hierarchy (Review)

Languages (grammars) | Machines | Other models
Recursively enumerable sets (type 0) | Turing machines (TM) | Post systems, Markov algorithms, μ-recursive functions
Context-sensitive languages (type 1) | Linear-bounded automata (LBA) |
Context-free languages (type 2) | Pushdown automata (PDA) |
Regular languages (type 3) | Finite state automata (FA) | Regular expressions

(In the original figure, colored arrows indicate the proven relations: ⊃ for containment, ↔ for characterization.)
12.7 Proof of the pumping lemma for CFL’s
In this section, we will present a proof of the lemma with a CFG to support the argument with some illustrations. (A detailed formal proof is omitted.)
Consider the CFG below, which is in Chomsky normal form (CNF) and has no ε-production rule. (Recall that every CFL can be generated by a CFG in CNF with no ε-production rules, except for S → ε in the case when the language contains ε.)
G: S → AB  A → DE  B → FD  E → FC
C → DB | c  F → HC  D → d  H → h
Let's examine the parse tree of this grammar yielding the string d(hd)^3 hc(d)^3 dhcdhcd.
Proof of the CFL Pumping Lemma
(Figure: the parse tree of G yielding d(hd)^3 hc(d)^3 dhcdhcd; its longest path runs S A E F C B F C B F C B F C, with the three repeating F C B segments highlighted.)
Notice that on the longest trunk there appears a repeating subsequence of nonterminals, S A E F C B F C B F C B F C (see the highlighted part). Also notice that in the yield of this tree, d(hd)^3 hc(d)^3 dhcdhcd, the highlighted substrings that are produced by the repeating nonterminals also repeat.
Since every nonterminal has two children, it can produce "fruit" (i.e., a terminal string) either on its left side branches or on its right side branches. In the example, F, C and B, respectively, produce h, d and d.
z = d (hd)^3 hc (d)^3 dhcdhcd = u v^3 w x^3 y, with u = d, v = hd, w = hc, x = d, y = dhcdhcd.
On the longest trunk, the segment F C B repeats. This repeating segment can be inserted (or deleted), proportionally increasing (or decreasing) the "fruits" on both sides of the trunk.
Notice that each repeating segment F C B bears hd to the left and d to the right.
For CFG G, it satisfies the condition that for all i ≥ 0, z = d(hd)^i hc(d)^i dhcdhcd is in the language L(G).
We can make the trunk of the parse tree grow by inserting more repeating subsequences of F C B, or cut it short by deleting them. Since the number of repeats can be arbitrary, there can be an infinite number of parse trees. Every repeating trunk has a repeating pattern of branches to its left or right, and every branch must bear "fruit" (i.e., a terminal symbol). (Recall that there is no ε-production rule except S → ε.) Hence, the parse tree can yield a repeating substring either to the left (hd in the example) or to the right (d in the example) of the trunk. For CFG G, we can claim that for all i ≥ 0, string d(hd)^i hc(d)^i dhcdhcd is a yield of a parse tree of G, i.e., d(hd)^i hc(d)^i dhcdhcd ∈ L(G).
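This claim can be verified mechanically with the CYK algorithm, the standard membership test for CNF grammars (the code below is our own sketch; function and table names are ours):

```python
from itertools import product

# The CNF grammar G from the text: S->AB, A->DE, B->FD, E->FC, C->DB|c, F->HC, D->d, H->h
BINARY = {("A", "B"): "S", ("D", "E"): "A", ("F", "D"): "B",
          ("F", "C"): "E", ("D", "B"): "C", ("H", "C"): "F"}
UNIT = {"c": "C", "d": "D", "h": "H"}

def cyk(w, start="S"):
    """CYK membership test: does the CNF grammar derive w from `start`?"""
    n = len(w)
    if n == 0:
        return False
    table = {(i, 1): ({UNIT[ch]} if ch in UNIT else set()) for i, ch in enumerate(w)}
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            cell = set()
            for k in range(1, length):        # split w[i:i+length] at position i+k
                for pair in product(table[(i, k)], table[(i + k, length - k)]):
                    if pair in BINARY:
                        cell.add(BINARY[pair])
            table[(i, length)] = cell
    return start in table[(0, n)]

for i in range(4):
    z = "d" + "hd" * i + "hc" + "d" * i + "dhcdhcd"
    assert cyk(z), z          # every pumped string is in L(G)
assert not cyk("dd")          # sanity check: a string not in L(G)
```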
We can generalize the above observation as follows. Let G be a CFG in Chomsky normal form whose language is infinite. Consider a string z in L(G) whose length is long enough (i.e., at least some constant determined by the size of the nonterminal alphabet) that the parse tree yielding z has a trunk on which a nonterminal symbol A repeats.
Let AαAβ be a pattern appearing on a trunk of a parse tree, where A is a repeating nonterminal symbol, and α and β are strings of nonterminals. In general this parse tree will have the structure shown below.
Let v and x be the yields, respectively, from the left branches and the right branches of the segment Aα, as illustrated in the figure below. Let w be the yield generated by the branches below the "tip" A, and let u and y be, respectively, the yields from the left subtrees and the right subtrees, if any, of the rest of the trunk.
(Figure: a parse tree rooted at S whose trunk contains the pattern A α A β; reading the yield left to right gives u v w x y.)
(Figure: replacing the lower A's subtree by another copy of the Aα segment pumps the tree; the yield u v w x y becomes u v v w x x y, and repeating the insertion i times yields u v^i w x^i y.)
We see that string uvwxy ∈ L(G) and, for all i ≥ 0, uv^i wx^i y ∈ L(G), because if AαAβ is a legal pattern on a trunk, then for all i ≥ 0, (Aα)^i Aβ is also a legal pattern.
Let V_N be the nonterminal alphabet of the grammar. The length that forces every parse tree of z to have a trunk with a repeating pattern is determined by |V_N|; for a CNF grammar it suffices to take the constant p = 2^|V_N|, so that |z| ≥ p guarantees a repetition. It follows that |vwx| ≤ p and |vx| ≥ 1.
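The constant can be pinned down with a standard counting argument (our notation; the text leaves the exact constant open). In a CNF parse tree of height h, measured in edges, each internal level at most doubles the number of leaves, and the last step of every path is a unary terminal rule, so:

```latex
|z| \;\le\; 2^{\,h-1}
\qquad\text{hence}\qquad
|z| \;\ge\; 2^{\,|V_N|} \;\Longrightarrow\; h \;\ge\; |V_N| + 1 .
```

A root-to-leaf path with at least |V_N| + 1 nonterminal nodes must repeat some nonterminal, so p = 2^|V_N| suffices.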
Summarizing the above observations, we have the following pumping lemma for CFL's.
The pumping lemma for context-free languages. For every infinite CFL L, there is a constant integer p that satisfies the following conditions: let Σ be the alphabet of the language. Then for every string z ∈ L whose length is greater than or equal to p, there are u, v, w, x, y ∈ Σ* such that
• z = uvwxy, |vwx| ≤ p, |vx| ≥ 1
• for all i ≥ 0, uv^i wx^i y ∈ L
Rumination (1): Application of the Pumping Lemma
• To prove that {a^i b^i | i ≥ 1} is not regular, we showed that for this language the pumping lemma does not hold. Let's review how we dealt with the parts of the lemma quantified existentially (i.e., prefixed with "there is," "there exist" or "there are") or universally (i.e., prefixed with "for all" or "for every"). To show that the lemma does not hold for the language, we must consider all possible cases for each part existentially quantified. On the contrary, for each part universally quantified, it is enough for us to pick just one case (for our convenience) that will lead to a contradiction.
The lemma uses an existential quantification in the following two parts:
(2) There is a constant n . . .
(4) There are strings u, v and w . . .
So, in the proof, by using n as a variable, we took into consideration all possible constant values of n, and examined all possible strings u, v, and w that satisfy the conditions given in (i)-(iii) of the lemma.
In the following three parts, the lemma uses a universal quantification:
(1) For every infinite regular language,
(3) For every string z ∈ L,
(4)(iv) For all i ≥ 0,
For part (1) we picked language {a^i b^i | i ≥ 1}, which we supposed to be a regular language; for part (3) we picked z = a^n b^n; and for part (4)(iv) we picked i = 2. (If we wanted, we could have chosen others than z = a^n b^n and i = 2.)
• Here is an application example of the pumping lemma with a logical bug that we often come across while grading homework assignments.
Question: Is the language L = {abca^i abc | i ≥ 0} regular? Justify your answer.
Answer: L is not regular. Suppose L is regular. Then L should satisfy the pumping lemma. Let n be the constant of the lemma, and choose string z = abca^n abc, which is in L. Since |z| ≥ n, string z satisfies condition (3) of the lemma. Let z = uvw, and consider the case where u = a, v = b, and w = ca^n abc. Then we get uv^2 w = abbca^n abc, which is not in L. It follows that L does not satisfy the pumping lemma. L is not a regular language!!
We can show that L is regular by exhibiting an FA which recognizes it (see the figure below). What is wrong in the above argument? It is in the argument dealing with the existential quantification for strings u, v, and w in part (4) of the lemma. The argument took u, v, and w only for the specific values u = a, v = b, and w = ca^n abc, not considering the other cases satisfying conditions (i)-(iii). We should show that uv^i w ∉ L for all possible locations of the "pump site" v in the prefix of length n of string z and for all possible lengths of v (i.e., 1 ≤ |v| ≤ n). Actually, for the case where v contains some a's from the substring a^n, it satisfies that for all i ≥ 0, uv^i w ∈ L. The proof fails.
(Figure: an FA recognizing L = {abca^i abc | i ≥ 0}, reading abc, looping on a, then reading abc.)
(4) There exist strings u, v and w that satisfy the following conditions.
(i) z = uvw,
(ii) |uv| ≤ n,
(iii) |v| ≥ 1,
(iv) for all i ≥ 0, uv^i w ∈ L.
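That L really is regular is easy to confirm with a regular expression (a quick sketch; the variable names are ours), and the same sketch shows why the chosen v = b refutes nothing: a v taken inside the a-block pumps without ever leaving L.

```python
import re

L = re.compile(r"abca*abc")        # L = {abc a^i abc | i >= 0} is regular

n = 10
z = "abc" + "a" * n + "abc"
assert L.fullmatch(z)

# The lemma only promises SOME decomposition; take v inside the a-block:
u, v, w = "abc", "a", "a" * (n - 1) + "abc"
assert all(L.fullmatch(u + v * i + w) for i in range(5))   # every pump stays in L
```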
• We know that every infinite regular language satisfies the pumping lemma. Is the converse of this statement true? In other words, can we say that if an infinite language satisfies the pumping lemma, then the language is regular? We didn't prove that every infinite language satisfies the pumping lemma; we proved it only for infinite regular languages. Actually, Appendix E shows a nonregular CFL which satisfies the lemma. Thus, it is wrong to prove the regularity of a language by showing that it satisfies the pumping lemma.
• The pumping lemma presents a common property of all infinite regular languages. Recall the last claim of the lemma, i.e., "for all i ≥ 0, uv^i w ∈ L," which is based on the existence of a cyclic path in the state transition graph of every DFA which recognizes an infinite regular language. Notice that every finite language is regular: let L = {w_1, w_2, . . . , w_k} be a finite language. Then we can simply construct a regular grammar with the rules S → w_1 | w_2 | . . . | w_k. Therefore, every nonregular language is infinite.
Application of the pumping lemma for CFL's
• When we apply the pumping lemma for CFL's, we should keep in mind the difference between the lemmas for regular languages and CFL's. Recall that for regular languages, the pump site v is confined to the prefix of length n of z, while for CFL's the two pump sites v and x can be anywhere in z within a window of width |vwx| ≤ p. Thus, for CFL's there are more cases to analyze.
Rumination (2): Application of Ogden’s Lemma
• Often we hear that programming languages are context-free. However, they are not purely context-free, and carry some context-dependency in their syntax. For example, in C it is not allowed to declare a variable more than once in a block, and in FORTRAN there are statements requiring a matching label (e.g., 10 in the following DO-loop).
. . . .
DO 10 I = 1, 100, 1
SUM = SUM + I
10 PRO = PRO * I
. . . .
The formal language L below has a context-dependency similar to that of programming languages. String x corresponds to the label (i.e., a string of decimal digits) and a^i corresponds to the loop body. Using Ogden's lemma we showed that L is not a context-free language.
L = {xa^i x | i ≥ 1, x ∈ {0, 1}+} ∪ {x | x ∈ {0, 1}+}
12.1 Using the pumping lemma for regular languages, we are going to prove that L = {xx^R | x ∈ {a, b, c}*} is not regular as follows. Complete the proof by answering each of the following questions.
Proof. Suppose that L is regular, and let n be the constant of the lemma. All the following strings are in L.
(a) abccba (b) a^100 b^100 b^100 a^100 (c) a^(n/2) b^(n/2) b^(n/2) a^(n/2) (d) a^n b^n b^n a^n
Question 1: Which of the strings above are you going to pick as the string z for your proof? Why are the other strings not your choice? If you don't like any, choose another string in L and explain why.
Question 2: Consider strings u, v, w ∈ {a, b, c}* that satisfy the conditions (i) z = uvw, (ii) |uv| ≤ n, and (iii) |v| ≥ 1. What will be in v? Briefly explain all possible contents of string v.
Question 3: To show that L is not regular, how are you going to use the last part of the lemma, "For all i ≥ 0, string uv^i w ∈ L"? Write your argument.
12.2 Which of the following languages are regular? Justify your answer.
L_1 = {a^i b^j c^k | i, j, k ≥ 0}  L_2 = {aaaa^i b^j c^i | i, j ≥ 0}
12.3 For the symbols a, b and c, and a string x, let #a(x), #b(x) and #c(x), respectively, be the number of a's, b's and c's in string x. Which of the following languages are context-free? Justify your answer.
L_3 = {x | x ∈ {a, b, c}+, and #a(x) > #b(x)}
L_4 = {x | x ∈ {a, b, c}+, and #a(x) = #b(x) = #c(x)}
L_5 = {x | x ∈ {a, b, c}+, and #a(x) < #b(x) < #c(x)}
12.4 We learned that language {w#w^R | w ∈ {a, b}*} is context-free. The following languages are not context-free. Prove why.
L_6 = {w#w | w ∈ {a, b}*}  L_7 = {xyx | x, y ∈ {a, b}*}
Exercises
Practical Application:
Parsing
13. Parsing
Parsing is one of the major functions of the compiler of a programming language. Given a source code w, the parser examines w to see whether it can be derived by the grammar of the programming language, and, if it can be, the parser constructs a parse tree yielding w. Based on this parse tree, the compiler generates object code. So, the parser acts as a membership test algorithm designed for a given grammar G that, given a string w, tells us whether w is in L(G) or not, and, if it is, outputs a parse tree.
Notice that the parser tests membership based on the given grammar. Recall that when we practiced constructing a PDA for a given language, say {a^i b^i | i ≥ 1}, we used the structural information of the language, such as the fact that a's come first, then b's, and the numbers of a's and b's are the same. Consider the two CFG's G_1 and G_2 shown below in figure (a), which generate the same language {a^i b^i | i ≥ 1}. Figure (b) shows a PDA that recognizes this language. For an input string w, this PDA does not give any information about the grammar and how the string w is derived. Hence, we need a different approach to construct a parser based on the grammar, not the language.
There are several algorithms available for parsing that, given an arbitrary CFG G and a string x, tell whether x ∈ L(G) or not, and if it is, output how x is derived. (The CYK algorithm is a typical example, which is shown in Appendix F.) However, these algorithms are too slow to be practical. (For example, the CYK algorithm takes O(n^3) time for an input string of length n.) Thus, we restrict CFG's to a subclass for which we can build a fast practical parser. This chapter presents two parsing strategies applicable to such restricted grammars, together with several design examples. Finally, the chapter briefly introduces Lex (the lexical analyzer generator) and YACC (the parser generator).
G_1: S → aSb | ab
G_2: S → aA  A → Sb | b
(a)
(Figure (b): a PDA recognizing {a^i b^i | i ≥ 1}, with transitions (a, Z_0/aZ_0) from the start state, a loop (a, a/aa), transitions (b, a/ε) into and within a second state, and acceptance on (ε, Z_0/Z_0).)
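The PDA of figure (b) can be sketched as a short simulation (state and symbol names are ours): push an a for each input a, pop one a per b, and accept when the stack is back to Z_0.

```python
def accepts(w):
    """DPDA sketch for {a^i b^i | i >= 1}:
    (a, Z0/aZ0), (a, a/aa), (b, a/eps), accept on (eps, Z0/Z0)."""
    stack = ["Z0"]
    state = 0                      # 0: reading a's, 1: reading b's
    for ch in w:
        if state == 0 and ch == "a":
            stack.append("a")      # (a, Z0/aZ0) and (a, a/aa)
        elif ch == "b" and stack[-1] == "a":
            stack.pop()            # (b, a/eps)
            state = 1
        else:
            return False           # no applicable transition
    return state == 1 and stack == ["Z0"]   # (eps, Z0/Z0)

assert accepts("aabb") and accepts("ab")
assert not accepts("aab") and not accepts("abb") and not accepts("")
```

As the text notes, this machine reveals nothing about how an accepted string would be derived by G_1 or G_2; that is exactly why a parser must be built from the grammar itself.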
Memories
Two very elderly ladies were enjoying the sunshine on a park bench in Miami. They had been meeting at the park every sunny day for over 12 years, chatting and enjoying each other's friendship. One day, the younger of the two ladies turns to the other and says, "Please don't be angry with me, dear, but I am embarrassed; after all these years... What is your name? I am trying to remember, but I just can't."
The older friend stares at her, looking very distressed, says nothing for 2 full minutes, and finally, with tearful eyes, says, "How soon do you have to know?"
- overheard by Rubin -
Break Time
13.1 Derivation 354
Leftmost derivation, Rightmost derivation
Derivations and parse trees
13.2 LL(k) parsing strategy 357
13.3 Designing an LL(k) parser 367
Examples
Definition of LL(k) grammars
13.4 LR(k) parsing strategy 379
13.5 Designing LR(k) parsers 387
Examples
Definition of LR(k) grammars
13.6 Lex and YACC 404
Rumination 409
Exercises 412
13.1 Derivation
The parser of a grammar generates a parse tree for a given input string. For convenience, the tree is commonly presented as the sequence of rules applied in one of the following two ways to derive the input string starting with S.
• Leftmost derivation: Each rule is applied to the leftmost nonterminal symbol in the current sentential form.
• Rightmost derivation: Each rule is applied to the rightmost nonterminal symbol in the current sentential form.
Example: G: S → ABC  A → aa  B → a  C → cC | c
Leftmost derivation: S ⇒ ABC ⇒ aaBC ⇒ aaaC ⇒ aaacC ⇒ aaacc
Rightmost derivation: S ⇒ ABC ⇒ ABcC ⇒ ABcc ⇒ Aacc ⇒ aaacc
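The two derivations can be replayed mechanically. The helper below (our own sketch; names are hypothetical) substitutes each rule at the leftmost or rightmost nonterminal and reproduces the same terminal string aaacc either way:

```python
def derive(seq, leftmost=True):
    """Replay a derivation: apply each (lhs, rhs) rule at the
    leftmost (or rightmost) nonterminal of the sentential form."""
    s = "S"
    for lhs, rhs in seq:
        idxs = [i for i, ch in enumerate(s) if ch.isupper()]   # nonterminals are uppercase
        i = idxs[0] if leftmost else idxs[-1]
        assert s[i] == lhs, f"rule {lhs}->{rhs} does not apply to {s}"
        s = s[:i] + rhs + s[i + 1:]
    return s

left  = [("S", "ABC"), ("A", "aa"), ("B", "a"), ("C", "cC"), ("C", "c")]
right = [("S", "ABC"), ("C", "cC"), ("C", "c"), ("B", "a"), ("A", "aa")]
assert derive(left) == derive(right, leftmost=False) == "aaacc"
```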
G: S → ABC  A → aa  B → bD  C → cC | c  D → bd
The derivation sequences, either leftmost or rightmost, are more convenient to deal with than the tree data structure. However, to generate object code there must be a simple way to translate the derivation sequence into its unique parse tree. The following two observations show how it can be done.
Observation 1: The sequence of rules applied according to the leftmost derivation corresponds to the order of the internal nodes visited when you traverse the parse tree top-down (i.e., depth first), left-to-right. (See the following example.)
Leftmost derivation: S ⇒ ABC ⇒ aaBC ⇒ aabDC ⇒ aabbdC ⇒ aabbdcC ⇒ aabbdcc (rules applied in steps 1, 2, 3, 4, 5, 6)
(Figure: the parse tree of aabbdcc with each internal node labeled by its derivation step: S = 1, A = 2, B = 3, D = 4, C = 5, and the inner C = 6; a top-down, left-to-right traversal visits them in the order 1, 2, 3, 4, 5, 6.)
G: S → ABC  A → aa  B → bD  C → cC | c  D → bd
Observation 2: The reverse of the sequence of rules applied according to the rightmost derivation corresponds to the order of the internal nodes visited when you traverse the parse tree bottom-up, left-to-right. (See the following example.)
Rightmost derivation: S ⇒ ABC ⇒ ABcC ⇒ ABcc ⇒ AbDcc ⇒ Abbdcc ⇒ aabbdcc (rules applied in steps 1, 2, 3, 4, 5, 6; read in reverse: 6, 5, 4, 3, 2, 1)
(Figure: the same parse tree with each internal node labeled by its derivation step: S = 1, outer C = 2, inner C = 3, B = 4, D = 5, A = 6; a bottom-up, left-to-right traversal visits them in the order 6, 5, 4, 3, 2, 1.)
13.2 LL(k) parsing strategy
We know that parsers are different from PDA's, because their membership test should be based on the given CFG. Let's try to build a conventional DPDA which, with the grammar G stored in the finite control, tests whether the input string x is in L(G) and, if it is, outputs the sequence of rules applied to derive x. We equip the finite control with an output port (see figure (b) below).
Our first strategy is to derive the same input string x in the stack. Because any string must be derived starting with the start symbol, we let the machine push S onto the stack and enter state q_1 for the next move. For convenience, we assign a rule number to each rule as shown in figure (a).
(a) G: (1) S → AB (2) S → AC (3) A → aaaaaaaaaa (4) B → bB (5) B → b (6) C → cC (7) C → c
L(G) = {a^10 x | x = b^i or x = c^i, i ≥ 1}
(Figure (b): the DPDA with input tape aaaaaaaaaabbb, stack contents SZ_0, state q_1, grammar G in the finite control, and an output port; it must now decide which S-rule to apply.)
Now, we ask which rule, (1) or (2), the machine should apply to S to eventually derive the string on the input tape. If the input string is derived using rule (1) (rule (2)) first, then there should be the symbol b (respectively, the symbol c) after the 10th a. Unfortunately, our conventional DPDA model cannot look ahead at the input before reading it. Recall that conventional DPDA's decide whether they will read the input or not depending on the stack top symbol. Only after reading an input symbol does the machine know what it is. Thus, without reading up to the 11th input symbol, there is no way for the machine in the figure to identify the symbol at that position.
To overcome this problem, we equip the finite state control with a "telescope" with which the machine can look some finite k cells ahead on the input tape. For the grammar G, it is enough to have a telescope with a range of 11 cells. (Notice that the range to look ahead also includes the cell under the head.) With this new capability, the machine scans the input string ahead in the range and, based on what it sees ahead, takes the next move. While looking ahead, the input head does not move.
Now the parser, looking ahead 11 cells, sees aaaaaaaaaab. Since there is b at the end, the machine chooses rule (1) (i.e., S → AB), rewrites the stack top S with AB, and outputs rule number (1) as shown in figure (a).
Let q, σ, and γ be, respectively, the current state, the remaining input portion to read, and the current stack contents. From now on, for convenience we shall use the triple (q, σ, γ), called the configuration, instead of drawing the cumbersome diagram to show the parser.
G: (1) S → AB (2) S → AC (3) A → aaaaaaaaaa (4) B → bB (5) B → b (6) C → cC (7) C → c
(Figure (a): applying rule S → AB rewrites the stack contents SZ_0 to ABZ_0 while rule number (1) is sent to the output port; figure (b) shows the configuration triple (q, σ, γ).)
Looking ahead 11 cells in the current configuration (q_1, aaaaaaaaaabbb, SZ_0), the parser applies rule (1) by rewriting the stack top S with the rule's right side AB. Consequently, the configuration changes as follows.
G: (1) S → AB (2) S → AC (3) A → aaaaaaaaaa (4) B → bB (5) B → b (6) C → cC (7) C → c
(q_0, aaaaaaaaaabbb, Z_0) ⇒ (q_1, aaaaaaaaaabbb, SZ_0) ⇒(1) (q_1, aaaaaaaaaabbb, ABZ_0)   [lookahead 11 cells]
Now, with the nonterminal symbol A at the stack top, the parser must find a rule to apply. Since A has only one rule, i.e., rule (3), there is no choice. So the parser applies rule (3), consequently changing the configuration as follows.
(q_1, aaaaaaaaaabbb, ABZ_0) ⇒(3) (q_1, aaaaaaaaaabbb, aaaaaaaaaaBZ_0)
G: (1) S → AB (2) S → AC (3) A → aaaaaaaaaa (4) B → bB (5) B → b (6) C → cC (7) C → c
(q_0, aaaaaaaaaabbb, Z_0) ⇒ (q_1, aaaaaaaaaabbb, SZ_0) ⇒(1) (q_1, aaaaaaaaaabbb, ABZ_0)
⇒(3) (q_1, aaaaaaaaaabbb, aaaaaaaaaaBZ_0) ⇒ . . . ⇒ (q_1, abbb, aBZ_0) ⇒ (q_1, bbb, BZ_0)
Notice that the terminal symbols appearing at the stack top after applying rule (3) correspond to the leftmost terminal symbols appearing in the leftmost derivation. Thus, the terminal symbol appearing at the stack top must match the next input symbol, if the input string is generated by the grammar.
So the parser, seeing a terminal symbol at the stack top, reads the input and, if they match, pops the stack top. The following sequence of configurations shows how the parser successfully pops all the terminal symbols pushed onto the stack by applying rule (3).
(q_0, aaaaaaaaaabbb, Z_0) ⇒ (q_1, aaaaaaaaaabbb, SZ_0) ⇒(1) (q_1, aaaaaaaaaabbb, ABZ_0) ⇒(3)
(q_1, aaaaaaaaaabbb, aaaaaaaaaaBZ_0) ⇒ . . . ⇒ (q_1, abbb, aBZ_0) ⇒ (q_1, bbb, BZ_0) ⇒ ?
G: (1) S → AB (2) S → AC (3) A → aaaaaaaaaa (4) B → bB (5) B → b (6) C → cC (7) C → c
Now the parser must choose one of B's rules, either (4) or (5). If there remains only one b on the input tape, rule (5) is the choice. Otherwise (i.e., if there is more than one b), rule (4) must be applied. It follows that the parser needs to look two cells ahead, and it proceeds as follows.
(q_1, bbb, BZ_0) ⇒(4) (q_1, bbb, bBZ_0) ⇒ (q_1, bb, BZ_0) ⇒(4) (q_1, bb, bBZ_0) ⇒
(q_1, b, BZ_0) ⇒(5) (q_1, b, bZ_0) ⇒ (q_1, ε, Z_0)   [lookahead 2 cells]
Notice that the last configuration above implies successful parsing. It shows that the sequence of rules applied on the stack generates exactly the same string as the one originally written on the input tape. If the parser fails to reach the accepting configuration, we say the input is rejected. In the above example, the sequence of rules applied to the nonterminal symbols appearing at the stack top matches the sequence of rules applied in the leftmost derivation of the input string shown below.
S ⇒(1) AB ⇒(3) aaaaaaaaaaB ⇒(4) aaaaaaaaaabB ⇒(4) aaaaaaaaaabbB ⇒(5) aaaaaaaaaabbb
In summary, our parser works as follows, where the numbers above the arrows are the rules in the order applied during the parsing, each chosen from the lookahead contents.
(q_0, aaaaaaaaaabbb, Z_0) ⇒ (q_1, aaaaaaaaaabbb, SZ_0) ⇒(1) (q_1, aaaaaaaaaabbb, ABZ_0) ⇒(3)
(q_1, aaaaaaaaaabbb, aaaaaaaaaaBZ_0) ⇒ . . . ⇒ (q_1, abbb, aBZ_0) ⇒
(q_1, bbb, BZ_0) ⇒(4) (q_1, bbb, bBZ_0) ⇒ (q_1, bb, BZ_0) ⇒(4) (q_1, bb, bBZ_0) ⇒
(q_1, b, BZ_0) ⇒(5) (q_1, b, bZ_0) ⇒ (q_1, ε, Z_0)
For the other input strings, ending with c's, the parser can apply the same strategy and successfully parse them by looking ahead at most 11 cells (see below). This parser is called an LL(11) parser, named after the following property of the parser: the input is read Left-to-right, the order of rules applied matches the order of the Leftmost derivation, and the longest lookahead range is 11 cells. For a grammar G, if we can build an LL(k) parser for some constant k, we call G an LL(k) grammar.
G: (1) S → AB (2) S → AC (3) A → aaaaaaaaaa (4) B → bB (5) B → b (6) C → cC (7) C → c
(q_0, aaaaaaaaaaccc, Z_0) ⇒ (q_1, aaaaaaaaaaccc, SZ_0) ⇒(2) (q_1, aaaaaaaaaaccc, ACZ_0) ⇒(3)
(q_1, aaaaaaaaaaccc, aaaaaaaaaaCZ_0) ⇒ . . . ⇒ (q_1, accc, aCZ_0) ⇒
(q_1, ccc, CZ_0) ⇒(6) (q_1, ccc, cCZ_0) ⇒ (q_1, cc, CZ_0) ⇒(6) (q_1, cc, cCZ_0) ⇒
(q_1, c, CZ_0) ⇒(7) (q_1, c, cZ_0) ⇒ (q_1, ε, Z_0)
Parsing
Formally, an LL(k) parser is defined by a parse table with the nonterminal
symbols on the rows and the lookahead contents on the columns. The table entries
are the right sides of the rules applied. Blank entries are for the rejecting cases.
The parse table below is constructed based on our observations, while analyzing
how the parser should work for the given input string. In the lookahead contents, X
is a don‟tcare symbol, and c means no lookahead is needed.
a
10
b a
10
c bbX
9
bB
10
ccX
9
cB
10
c
S
A
B
C
AB AC
a
10
bB b
cC c
Contents of 11 lookahead
Stack
top
Parse Table
(1) (2) (3) (4) (5) (6) (7)
G: S ÷AB  AC A÷aaaaaaaaaa B ÷bB  b C ÷cC  c
LL(k) Parsing
367
13.3 Designing an LL(k) Parser

Example 1. Design an LL(k) parser with minimum k for the following CFG.

(1) S → aSb, (2) S → aabbb

The language of this grammar is {a^i aabbb b^i | i ≥ 0}. Every string generated by this grammar has aabbb at the center. As we did in the preceding section, let's examine how an LL(k) parser will parse the input aaaaabbbbbb with the shortest possible lookahead range k.

To parse the input string successfully, the machine should apply the rules in the order (1), (1), (1), (2), which is the same order applied for the following leftmost derivation.

S ⇒(1) aSb ⇒(1) aaSbb ⇒(1) aaaSbbb ⇒(2) aaaaabbbbbb
Pushing the start symbol S onto the stack in the initial configuration, the parser gets ready to parse the string as shown below. With S on the stack top, it must apply one of S's two rules. To choose one of them, the parser needs to look ahead for supporting information. What could be the shortest range to look ahead?

(q0, aaaaabbbbbb, Z0) ⊢ (q1, aaaaabbbbbb, SZ0) ⊢ ?

If there is aabbb ahead, rule (2) must be applied, so it appears that k = 5. But the parser does not have to see that whole substring. If there is aaa ahead, the leftmost symbol a must have been generated by rule (1); otherwise, if there is aab ahead, the leftmost a must have been generated by rule (2). It is enough to look ahead 3 cells (i.e., k = 3). Thus, in the current configuration, since the content of the 3-cell lookahead is aaa, the parser applies rule (1), then reads the input to match and pop the terminal symbol a from the stack top as follows.
(q1, aaaaabbbbbb, SZ0) ⊢(1) (q1, aaaaabbbbbb, aSbZ0) ⊢ (q1, aaaabbbbbb, SbZ0)
Again, with S on the stack top, the parser looks ahead 3 cells and, seeing aaa, applies rule (1); it repeats the same procedure until the lookahead content is aab, as follows.

(q1, aaaaabbbbbb, SZ0) ⊢(1) (q1, aaaaabbbbbb, aSbZ0) ⊢
(q1, aaaabbbbbb, SbZ0) ⊢(1) (q1, aaaabbbbbb, aSbbZ0) ⊢
(q1, aaabbbbbb, SbbZ0) ⊢(1) (q1, aaabbbbbb, aSbbbZ0) ⊢
(q1, aabbbbbb, SbbbZ0) ⊢ ?
Now the parser finally applies rule (2), and keeps reading and match-and-popping until it enters the accepting configuration as follows.

(q1, aabbbbbb, SbbbZ0) ⊢(2) (q1, aabbbbbb, aabbbbbbZ0) ⊢ … ⊢ (q1, ε, Z0)
The parser applied the rules in the order (1), (1), (1), (2), which is the same order applied for the leftmost derivation of the input string aaaaabbbbbb.

S ⇒(1) aSb ⇒(1) aaSbb ⇒(1) aaaSbbb ⇒(2) aaaaabbbbbb

Given an arbitrary input string, the parser, applying the same procedure, will end up in the final accepting configuration if and only if the input belongs to the language of the grammar. The parser needs to look ahead at least 3 cells; hence, the grammar is LL(3). The parse table is shown below.
Parse Table

              3 lookahead
Stack top |  aaa  |  aab
    S     |  aSb  | aabbb
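The parsing procedure just traced can be sketched in code. The following is a minimal illustration of ours (the function name is hypothetical, not from the text) of the LL(3) strategy for this grammar: with S on the stack top, a 3-cell lookahead selects the rule, and terminals are matched and popped.

```python
# LL(3) parser sketch for the grammar (1) S -> aSb, (2) S -> aabbb.
def ll3_parse(s: str) -> bool:
    stack = ["S"]                # start symbol on top; Z0 is left implicit
    i = 0                        # position of the read head
    while stack:
        top = stack.pop()
        if top == "S":
            la = s[i:i+3]        # look ahead 3 cells
            if la == "aaa":
                stack.extend(reversed("aSb"))    # rule (1)
            elif la == "aab":
                stack.extend(reversed("aabbb"))  # rule (2)
            else:
                return False     # blank table entry: reject
        elif i < len(s) and s[i] == top:
            i += 1               # terminal on top: match and pop
        else:
            return False
    return i == len(s)           # accept iff the whole input was consumed
```

For instance, ll3_parse("aaaaabbbbbb") succeeds, while ll3_parse("aabb") fails.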
Example 2. Construct an LL(k) parser with minimum k for the following CFG.

(1) S → abA, (2) S → ε, (3) A → Saa, (4) A → b

As we did for Example 1, we pick a typical string, ababaaaa, derivable by the grammar, and examine how it can be parsed according to the LL(k) parsing strategy with minimum k. Then, based on the analysis, we will construct a parse table. The order of the rules applied by the parser should be the same as the one applied in the following leftmost derivation.

S ⇒(1) abA ⇒(3) abSaa ⇒(1) ababAaa ⇒(3) ababSaaaa ⇒(2) ababaaaa

Pushing the start symbol S onto the top of the stack, the parser must choose either rule (1) or rule (2), whichever will lead to deriving the input string. For the choice, is there any useful information ahead on the input tape?

(q0, ababaaaa, Z0) ⊢ (q1, ababaaaa, SZ0) ⊢ ?
If the input is not empty, the parser, with S at the stack top, should choose rule (1) to apply. Then, as shown below, for each terminal symbol appearing at the stack top, the parser reads the next input symbol and, if they match, pops the stack top until A appears. If the input tape were empty, the parser would simply pop S (i.e., rewrite S with ε) and enter the accepting configuration.

(q1, ababaaaa, SZ0) ⊢(1) (q1, ababaaaa, abAZ0) ⊢ . . ⊢ (q1, abaaaa, AZ0) ⊢ ?

Now, with A at the stack top, the parser should choose between rules (3) and (4). If rule (4) had been used to derive the input, the next input symbol ahead would be b, not a. Looking a ahead, the parser applies rule (3), and consequently, having S on the stack top as before, it needs to look ahead to choose the next rule. Up to this point, a 1-cell lookahead appears to be an appropriate range.

(q1, abaaaa, AZ0) ⊢(3) (q1, abaaaa, SaaZ0) ⊢ ?
But this time, with S at the stack top, it is uncertain which rule to apply. Looking a ahead, the parser can apply either rule (1) or rule (2), because in either case it will successfully match the stack top a with the next input symbol a (see below).

(q1, abaaaa, SaaZ0) ⊢(1) (q1, abaaaa, abAaaZ0)      (q1, abaaaa, SaaZ0) ⊢(2) (q1, abaaaa, aaZ0)

To resolve this uncertainty, the parser needs one more cell of lookahead. (We could instead have the parser look down the stack, but we have chosen to extend the lookahead range, a straightforward solution. Later in this chapter, we will discuss parsers that are allowed to look down the stack to some finite depth.)

Now, looking ab ahead in the extended range, which must have been generated by rule (1), the parser applies that rule and repeats the previous procedure, as follows, until S appears at the stack top again.

(q1, abaaaa, SaaZ0) ⊢(1) (q1, abaaaa, abAaaZ0) ⊢ . . ⊢ (q1, aaaa, AaaZ0) ⊢(3) (q1, aaaa, SaaaaZ0) ⊢ ?
Looking aa ahead with S on the stack top, the parser applies rule (2). Then, for each a appearing at the stack top, it keeps reading the next input symbol, matching it and popping the stack top, eventually entering the accepting configuration. In summary, the parser parses the input string ababaaaa as follows.

(q1, ababaaaa, SZ0) ⊢(1) (q1, ababaaaa, abAZ0) ⊢ . . ⊢ (q1, abaaaa, AZ0) ⊢(3)
(q1, abaaaa, SaaZ0) ⊢(1) (q1, abaaaa, abAaaZ0) ⊢ . . ⊢ (q1, aaaa, AaaZ0) ⊢(3)
(q1, aaaa, SaaaaZ0) ⊢(2) (q1, aaaa, aaaaZ0) ⊢ . . . ⊢ (q1, ε, Z0)
The input string that we have just examined is one derived by applying rule (2) last. For the other typical string, ababbaa, which can be derived by applying rule (4) last, the LL(2) parser will parse as follows.

(q1, ababbaa, SZ0) ⊢(1) (q1, ababbaa, abAZ0) ⊢ . . ⊢ (q1, abbaa, AZ0) ⊢(3)
(q1, abbaa, SaaZ0) ⊢(1) (q1, abbaa, abAaaZ0) ⊢ . . ⊢ (q1, baa, AaaZ0) ⊢(4)
(q1, baa, baaZ0) ⊢ . . . ⊢ (q1, ε, Z0)

From the analysis of the two parsing examples, we construct the following parse table. (Notice that with A at the stack top, though a 1-cell lookahead is enough, the entries are placed under the columns of the 2-cell lookahead.)
Parse Table

              2 lookahead
Stack top |  ab  |  aa  |  bX  |  BB
    S     | abA  |  ε   |      |  ε
    A     | Saa  | Saa  |  b   |

(B: blank, X: don't care)
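A table-driven reading of this parse table can be sketched as follows. This is an illustration of ours, not part of the text: the table is stored as a dictionary keyed by (stack top, lookahead), with a shortened key standing in for a don't-care column and the empty string standing in for the all-blank column.

```python
# LL(2) parse table for (1) S -> abA, (2) S -> e, (3) A -> Saa, (4) A -> b.
# Keys are (stack top, lookahead); "" plays the role of the blank columns.
TABLE = {
    ("S", "ab"): "abA", ("S", "aa"): "", ("S", ""): "",
    ("A", "ab"): "Saa", ("A", "aa"): "Saa", ("A", "b"): "b",
}

def lookup(top, la):
    # try the exact 2-cell lookahead, then the don't-care (1-cell) and
    # all-blank forms of the column label
    for key in (la, la[:1], ""):
        if (top, key) in TABLE:
            return TABLE[(top, key)]
    return None

def ll2_parse(s):
    stack, i = ["S"], 0
    while stack:
        top = stack.pop()
        if top.isupper():                    # nonterminal: consult the table
            rhs = lookup(top, s[i:i+2])
            if rhs is None:
                return False                 # blank entry: reject
            stack.extend(reversed(rhs))
        elif i < len(s) and s[i] == top:     # terminal: match and pop
            i += 1
        else:
            return False
    return i == len(s)
```

With this table, ll2_parse accepts both sample strings, ababaaaa and ababbaa, as well as the empty string.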
For a given input string, the basic strategy of LL(k) parsing is to generate the same string on the top of the stack by rewriting every nonterminal symbol appearing at the stack top with the right side of one of that nonterminal's rules. If the nonterminal has more than one rule, the parser picks the right one based on the prefix of the input string appearing in the k cells looked ahead. Whenever a terminal symbol appears on the stack top, the machine reads the next input symbol and pops the stack top if they match. The sequence of rules applied in a successful parsing according to this strategy is the same as the one applied for the leftmost derivation of the input string.
The class of CFG's that can be parsed by the LL(k) strategy is limited. The CFG G1 below is an example for which no LL(k) parser exists. However, G2, which generates the same language, is an LL(k) grammar. We will shortly explain why.

G1: S → A | B   A → aA | 0   B → aB | 1
G2: S → aS | D   D → 0 | 1
L(G1) = L(G2) = {a^i t | i ≥ 0, t ∈ {0, 1}}
Consider the first working configuration of an LL(k) parser for G1, illustrated below, with the start symbol S on top of the stack and the input a a a a . . . a a 0 on the tape.

(q1, aaaa . . . aa0, SZ0)   S → ?

The parser should choose one of S's two rules, S → A and S → B. But it is impossible to make the correct choice, because the right-end symbol 0 (or 1), which is essential for the choice, can be located arbitrarily far to the right. It is impossible for any LL(k) parser to identify it with its "telescope" of finite range k. For the grammar G2, in contrast, we can easily design an LL(1) parser.
We have just seen two CFG's that generate the same language, where for one no LL(k) parser exists, while for the other we can design one. So we may ask: what is the property of LL(k) grammars?

For a string x, let (k)x denote the prefix of length k of string x; if |x| < k, then (k)x = x. For example, (2)ababaa = ab and (3)ab = ab.

Definition (LL(k) grammar). Let G = (V_T, V_N, P, S) be a CFG. Grammar G is an LL(k) grammar if it satisfies the following condition. Consider two arbitrary leftmost derivations of the following forms, where α, β, γ ∈ (V_T ∪ V_N)*, w, x, y ∈ V_T*, and A ∈ V_N:

S ⇒* wAα ⇒ wβα ⇒* wy
S ⇒* wAα ⇒ wγα ⇒* wx

If (k)x = (k)y, then it must be that β = γ. That is, in the two derivations above, the same rule of A must have been applied whenever (k)x = (k)y.

This condition implies that with a nonterminal symbol A on the stack top, the parser can identify which of A's rules to apply by looking ahead k cells. If G has this property, we can build an LL(k) parser.
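The prefix notation can be stated directly in code; the helper below (our own naming, for illustration only) mirrors the definition and the two examples above.

```python
def k_prefix(x: str, k: int) -> str:
    # (k)x: the prefix of length k of x; x itself when |x| < k
    return x[:k]

# the examples from the text
assert k_prefix("ababaa", 2) == "ab"
assert k_prefix("ab", 3) == "ab"
```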
13.4 LR(k) Parsing

Recall that whenever a nonterminal symbol A appears on the stack top, an LL(k) parser replaces it with the right side of one of A's rules. This LL(k) parsing strategy is not powerful enough to parse commonly used programming languages. LR(k) parsing (also called bottom-up parsing) uses a more powerful strategy that is applicable to parsing programming languages.

In a sense, the LR(k) strategy works in the reverse way of LL(k). LR(k) parsers try to create the right side of a rule, say A → α, on the stack top portion, and then replace it (α) by the left side (A), as illustrated in the figure below. (Notice that because the string α is pushed into the stack so that its right end appears at the stack top, it is written as α^R in the figure.)

[Figure: a parser in state q with α^R on the stack top portion rewrites it by A, applying the rule A → α.]
If there is uncertainty because another rule generates the same string α, for example B → α, the parser looks some constant k cells ahead on the input string to resolve the uncertainty. The following figures show the difference between the two parsing strategies.

[Figure: In LL(k) parsing, the parser with A on the stack top asks which right side of A → α | β to push, and rewrites A by the chosen right side. In LR(k) parsing, the parser with α^R on the stack top portion asks whether to reduce by A → α or B → α, and rewrites it by the chosen left side.]
Let's take a simple example of designing an LR(k) parser with the CFG shown below. As we did for designing LL(k) parsers, we shall first pick a typical string that can be derived by the grammar and, with this string, observe how the machine should work according to the LR(k) strategy. Based on the observation, we will build an LR(k) parser.

(1) S → AC, (2) S → BD, (3) A → aab, (4) B → aab, (5) C → c, (6) D → d

Suppose the string aabc is given on the input tape. The basic strategy of LR(k) parsers is to shift in the input (i.e., read the input and push it) onto the stack until the right side of a rule of the grammar appears on some stack top portion.

(q0, aabc, Z0) ⊢ ?
For this example, the parser will read up to aab, until it sees baa (notice that it is reversed) on the stack top portion. Two rules, (3) and (4), are applicable: string baa must be replaced by either A or B. To resolve this uncertainty, the machine needs to look ahead, and for this example it is enough to look one cell ahead. Looking c ahead, the parser applies rule (3), rewriting baa on the top portion of the stack by A as follows.

(q0, aabc, Z0) ⊢ (q1, c, baaZ0) ⊢(3) (q1, c, AZ0) ⊢ ?      (lookahead: c)
After applying rule (3), the parser sees no string on the stack top portion that is reducible (i.e., generated by a rule). So the parser shifts in the next symbol c from the input and applies rule (5) to rewrite it with C. (This time no lookahead is needed.) Finally, the parser applies rule (1), S → AC, and rewrites AC, appearing on the stack top portion (the parser reads the stack from the inside out to the top), by the start symbol S, entering the final accepting configuration (q1, ε, SZ0).

(q0, aabc, Z0) ⊢ (q1, c, baaZ0) ⊢(3) (q1, c, AZ0) ⊢
(q1, ε, cAZ0) ⊢(5) (q1, ε, CAZ0) ⊢(1) (q1, ε, SZ0)
The following figure shows an LR(1) parsing history for the other string, aabd, that can be generated by the grammar.

(q0, aabd, Z0) ⊢ (q1, d, baaZ0) ⊢(4) (q1, d, BZ0) ⊢
(q1, ε, dBZ0) ⊢(6) (q1, ε, DBZ0) ⊢(2) (q1, ε, SZ0)      (lookahead: d)

Notice that the sequence of rules applied by the parser, (4)(6)(2), is exactly the reverse of the order of the rules in the rightmost derivation of aabd (see below). The same holds for the former example aabc, and in fact for every LR(k) parsing. (Recall that the reverse order of the rules applied in the rightmost derivation of a string x corresponds to the bottom-up, left-to-right traversal of the parse tree yielding x.)

S ⇒(2) BD ⇒(6) Bd ⇒(4) aabd
S ⇒(1) AC ⇒(5) Ac ⇒(3) aabc
Formally, an LR(k) parser is defined by a parse table (also called a reduction table), where each row corresponds to the right side of a rule and each column corresponds to a lookahead content, as in the LL(k) parse tables. The table entries are the left sides of the rules applied.

Based on our observations while tracing the parser on the two sample strings according to the LR strategy, we can build the parse table of an LR(1) parser as shown below. The two parsing histories are copied here for review.

S → AC | BD   A → aab   B → aab   C → c   D → d

(q0, aabc, Z0) ⊢ (q1, c, baaZ0) ⊢(3) (q1, c, AZ0) ⊢ (q1, ε, cAZ0) ⊢(5) (q1, ε, CAZ0) ⊢(1) (q1, ε, SZ0)
(q0, aabd, Z0) ⊢ (q1, d, baaZ0) ⊢(4) (q1, d, BZ0) ⊢ (q1, ε, dBZ0) ⊢(6) (q1, ε, DBZ0) ⊢(2) (q1, ε, SZ0)

Parse (reduction) Table

Stack top |    1 lookahead
portion   |  c  |  d  |  ε
   AC     |     |     |  S
   BD     |     |     |  S
   aab    |  A  |  B  |
    c     |     |     |  C
    d     |     |     |  D
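The shift-reduce behavior traced above can be sketched in a few lines of code. The following is our own illustration, not the text's construction: it tries each rule's right side against the top of the stack (together with its 1-cell lookahead condition, None meaning don't-care), reduces while possible, and otherwise shifts.

```python
# LR(1) sketch for S -> AC | BD, A -> aab, B -> aab, C -> c, D -> d.
# Each entry is (right side, required 1-cell lookahead or None, left side).
RULES = [
    ("aab", "c", "A"), ("aab", "d", "B"),
    ("c", None, "C"), ("d", None, "D"),
    ("AC", None, "S"), ("BD", None, "S"),
]

def lr1_parse(s: str) -> bool:
    stack, i = [], 0
    while True:
        reduced = True
        while reduced:                      # reduce as long as possible
            reduced = False
            for rhs, la, lhs in RULES:
                top = "".join(stack[-len(rhs):])
                if top == rhs and (la is None or s[i:i+1] == la):
                    del stack[-len(rhs):]   # pop the right side ...
                    stack.append(lhs)       # ... and push the left side
                    reduced = True
                    break
        if i < len(s):
            stack.append(s[i])              # shift in the next input symbol
            i += 1
        else:
            return stack == ["S"]           # accept iff only S remains
```

For this small grammar a linear scan over the rules suffices; a real LR parser drives the same shift/reduce decisions from precomputed parser states.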
With a simple modification to the LR(1) grammar that we have just examined, we can change it into an LR(k) grammar for any given constant k. For example, by prefixing eeee to the right sides of the rules of C and D as shown below, we change the grammar to LR(5). Below are the parsing histories of an LR(5) parser for two typical strings derivable by the modified grammar, and the parse table constructed from the histories.

S → AC | BD   A → aab   B → aab   C → eeeec   D → eeeed

(q0, aabeeeec, Z0) ⊢ (q1, eeeec, baaZ0) ⊢ (q1, eeeec, AZ0) ⊢ . . . ⊢ (q1, ε, ceeeeAZ0) ⊢ (q1, ε, CAZ0) ⊢ (q1, ε, SZ0)
(q0, aabeeeed, Z0) ⊢ (q1, eeeed, baaZ0) ⊢ (q1, eeeed, BZ0) ⊢ . . . ⊢ (q1, ε, deeeeBZ0) ⊢ (q1, ε, DBZ0) ⊢ (q1, ε, SZ0)

Parse (reduction) Table

Stack top |       5 lookahead
portion   | eeeec | eeeed |  ε
   AC     |       |       |  S
   BD     |       |       |  S
   aab    |   A   |   B   |
  eeeec   |       |       |  C
  eeeed   |       |       |  D
13.5 Designing LR(k) Parsers

We shall now practice designing LR(k) parsers for a couple of grammars. As with LL(k), we will first pick a typical string that can be generated by the given grammar and observe how an LR(k) parser parses it with minimum k. Based on the observation, we will build the parse table.

Example 1. Construct an LR(k) parser for the following CFG with minimum k.

(1) S → ADC, (2) S → aaaddd, (3) A → aaa, (4) D → ddd, (5) C → Cc, (6) C → c

The language of this grammar is {aaaddd} ∪ {aaadddc^i | i ≥ 1}. We shall pick aaadddccc, which can be derived by the rightmost derivation shown below. For a successful LR parsing, which ends in the accepting configuration (q1, ε, SZ0), our LR(k) parser should apply the rules in the reverse of the order applied in this derivation, i.e., (3), (4), (6), (5), (5), (1).

Rightmost derivation: S ⇒(1) ADC ⇒(5) ADCc ⇒(5) ADCcc ⇒(6) ADccc ⇒(4) Adddccc ⇒(3) aaadddccc
The production rule applied last in the rightmost derivation of a string generated by this grammar is either rule (3) or rule (2). So, before applying one of these rules, the parser should shift in the input until it sees either aaa or aaaddd on the stack top portion. But we have a problem. Consider the following two configurations with aaa shifted onto the stack; cases (a) and (b) are, respectively, for the input strings aaaddd and aaadddccc.

(a) (q0, aaaddd, Z0) ⊢ (q1, aaddd, aZ0) ⊢ . . . ⊢ (q1, ddd, aaaZ0) ⊢ ?
(b) (q0, aaadddccc, Z0) ⊢ (q1, aadddccc, aZ0) ⊢ . . . ⊢ (q1, dddccc, aaaZ0) ⊢ ?

For case (a), it is wrong to apply rule (3) and reduce aaa to A; here the parser should shift in the remaining ddd onto the stack and then apply rule (2). For case (b), rule (3) must be applied in the current configuration. How can the parser make the right move? It needs to look ahead.
The parser needs to look 4 cells ahead to see whether there is dddB or dddc ahead (where B denotes the blank symbol). If it is dddB (i.e., case (a) below), the parser keeps shifting in (i.e., reading and pushing) the input symbols until it sees aaaddd on the top portion of the stack (read from the inside out), and then applies rule (2), as shown below, to enter the accepting configuration. If the 4-cell lookahead is dddc (i.e., case (b) below), the parser applies rule (3) in the current configuration.

(a) (q0, aaaddd, Z0) ⊢ (q1, aaddd, aZ0) ⊢ . . . ⊢ (q1, ddd, aaaZ0) ⊢ . . . ⊢ (q1, ε, dddaaaZ0) ⊢(2) (q1, ε, SZ0)
(b) (q0, aaadddccc, Z0) ⊢ (q1, aadddccc, aZ0) ⊢ . . . ⊢ (q1, dddccc, aaaZ0) ⊢(3) (q1, dddccc, AZ0) ⊢ ?
Now the parser sees no content on the stack top that can be reduced, so it resumes shifting in the input until it sees ddd on the stack top portion, which can be reduced by applying rule (4). But it must take caution before applying rule (4), because in case (a) above it also has ddd on the stack right before applying rule (2). This confusion can be resolved by looking one cell ahead to see whether there is a symbol c ahead.

(q1, dddccc, AZ0) ⊢ . . . ⊢ (q1, ccc, dddAZ0) ⊢(4) (q1, ccc, DAZ0) ⊢ ?
After applying rule (4), the parser, seeing no string on the stack that is reducible by a rule, shifts in the next input symbol c. This first c from the input is the one generated by rule (6). (Note that if rule (5) were C → cC, with the right side reversed, it would be the last c that was generated by rule (6).) Thus, the parser applies rule (6) and reduces the c on the stack top to C as follows.

(q0, aaadddccc, Z0) ⊢ (q1, aadddccc, aZ0) ⊢ . . . ⊢ (q1, dddccc, aaaZ0) ⊢(3)
(q1, dddccc, AZ0) ⊢ . . . ⊢ (q1, ccc, dddAZ0) ⊢(4) (q1, ccc, DAZ0) ⊢
(q1, cc, cDAZ0) ⊢(6) (q1, cc, CDAZ0) ⊢ ?
Now the parser has ADC on the stack, which would be reducible by applying rule (1). But it is too early to do so, because there are more c's to be processed. Here it needs to look one cell ahead and shift in the next c, in order to apply rule (5) and reduce Cc on the stack to C. The parser repeats this reduction until it sees no more c's ahead, and finally applies rule (1) to reduce ADC to S, entering the accepting configuration.

(q1, cc, CDAZ0) ⊢ (q1, c, cCDAZ0) ⊢(5) (q1, c, CDAZ0) ⊢
(q1, ε, cCDAZ0) ⊢(5) (q1, ε, CDAZ0) ⊢(1) (q1, ε, SZ0)
Below is the summary of our observations. The order of the rules applied is exactly the reverse of the order of the rules applied in the rightmost derivation of the input string (see below).

(q0, aaadddccc, Z0) ⊢ . . . ⊢ (q1, dddccc, aaaZ0) ⊢(3)
(q1, dddccc, AZ0) ⊢ . . . ⊢ (q1, ccc, dddAZ0) ⊢(4) (q1, ccc, DAZ0) ⊢
(q1, cc, cDAZ0) ⊢(6) (q1, cc, CDAZ0) ⊢ (q1, c, cCDAZ0) ⊢(5) (q1, c, CDAZ0) ⊢
(q1, ε, cCDAZ0) ⊢(5) (q1, ε, CDAZ0) ⊢(1) (q1, ε, SZ0)

Rightmost derivation: S ⇒(1) ADC ⇒(5) ADCc ⇒(5) ADCcc ⇒(6) ADccc ⇒(4) Adddccc ⇒(3) aaadddccc
Now, based on the observations made while tracing how an LR(k) parser can successfully parse the chosen input strings, we can construct the following parse table for an LR(4) parser.

Parse Table

Stack top |              4 lookahead
portion   |  dddc  |   dddB   | BBBB |   cXXX   |  ε
   aaa    |   A    | Shift-in |      |          |
   ddd    |        |          |      |    D     |
  aaaddd  |        |          |  S   |          |
    c     |        |          |      |          |  C
    Cc    |        |          |      |          |  C
   ADC    |        |          |  S   | Shift-in |

(X: don't care, B: blanks, ε: no lookahead)
Example 2. Construct an LR(k) parser for the following CFG with minimum k.

(1) S → EAF, (2) S → EBF, (3) E → aE, (4) E → a, (5) A → aaab, (6) B → aaac, (7) F → d

This is not an LL(k) grammar. (We leave the proof to the reader; for the proof, recall the non-LL(k) grammar that we discussed at the end of the last section.) The language of this grammar is {a^i xd | i ≥ 1, x ∈ {aaab, aaac}}. Again, we will first trace how an LR parser should parse a typical string from this language, and then construct a parse table.

Let's choose the string aaaaaabd for the analysis. The rightmost derivation of this string is shown below. Our LR(k) parser, given this string as input, should parse it by applying the rules in the reverse order of the derivation, i.e., (4), (3), (3), (5), (7), (1).

S ⇒(1) EAF ⇒(7) EAd ⇒(5) Eaaabd ⇒(3) aEaaabd ⇒(3) aaEaaabd ⇒(4) aaaaaabd
Every rightmost derivation in this grammar must end by applying rule (4), and the a generated by this rule is the rightmost a in the prefix a^i of the string. Thus, our LR(k) parser should shift in the whole prefix a^i to bring that rightmost a onto the top of the stack, where it can be reduced. But here we have a problem, because there are more a's to the right, belonging to x ∈ {aaab, aaac}.

To identify the last a in a^i, the parser needs to look ahead. If it sees either aaab or aaac ahead, then the a on top of the stack is the last a in a^i, and it should be reduced by applying rule (4) as follows.

(q0, aaaaaabd, Z0) ⊢ (q1, aaaaabd, aZ0) ⊢ … ⊢ (q1, aaabd, aaaZ0) ⊢(4) (q1, aaabd, EaaZ0) ⊢ ?
Now we have aE on the stack top portion, which is reducible by applying rule (3). The parser does this reduction twice. Then, with nothing on the stack top reducible, it shifts in the input until aaab appears on the stack top portion and reduces it to A by rule (5). Next, the symbol d shifted onto the stack is reduced to F by rule (7), and the resulting EAF is in turn reduced by rule (1), entering the final accepting configuration (q1, ε, SZ0).

(q1, aaabd, EaaZ0) ⊢(3) (q1, aaabd, EaZ0) ⊢(3) (q1, aaabd, EZ0) ⊢
(q1, aabd, aEZ0) ⊢ … ⊢ (q1, d, baaaEZ0) ⊢(5) (q1, d, AEZ0) ⊢
(q1, ε, dAEZ0) ⊢(7) (q1, ε, FAEZ0) ⊢(1) (q1, ε, SZ0)
Here are the summary of our analysis and the parse table. The order of the rules applied, (4), (3), (3), (5), (7), (1), matches the reverse order of the rightmost derivation.

(q0, aaaaaabd, Z0) ⊢ (q1, aaaaabd, aZ0) ⊢ … ⊢ (q1, aaabd, aaaZ0) ⊢(4)
(q1, aaabd, EaaZ0) ⊢(3) (q1, aaabd, EaZ0) ⊢(3) (q1, aaabd, EZ0) ⊢
(q1, aabd, aEZ0) ⊢ … ⊢ (q1, d, baaaEZ0) ⊢(5) (q1, d, AEZ0) ⊢
(q1, ε, dAEZ0) ⊢(7) (q1, ε, FAEZ0) ⊢(1) (q1, ε, SZ0)

Parse Table

Stack top |        4 lookahead
portion   | aaab | aaac |  ε  |  others
    a     |  E   |  E   |     | Shift-in
   aE     |      |      |  E  |
  aaab    |      |      |  A  |
  aaac    |      |      |  B  |
    d     |      |      |  F  |
  EAF     |      |      |  S  |
  EBF     |      |      |  S  |
Example 3. Construct an LR(k) parser for the following CFG with minimum k.

(1) S → bS, (2) S → Accc, (3) A → bAc, (4) A → bc

The language of the grammar is {b^i b^n c^n ccc | i ≥ 0, n ≥ 1}, and every rightmost derivation ends with rule (4), which generates bc. Notice that this bc is located at the center of the b^n c^n part and can be brought onto the stack top by simply shifting in the input until bc appears on the stack. Then the LR(k) parser reduces it by applying rule (4) as follows.

We shall trace the parser with the input string bbbbbcccccc, which can be derived by the rightmost derivation shown below; thus the LR(k) parser should parse it by applying the rules in the order (4), (3), (3), (2), (1), (1).

S ⇒(1) bS ⇒(1) bbS ⇒(2) bbAccc ⇒(3) bbbAcccc ⇒(3) bbbbAccccc ⇒(4) bbbbbcccccc

(q0, bbbbbcccccc, Z0) ⊢ … ⊢ (q1, ccccc, cbbbbbZ0) ⊢(4) (q1, ccccc, AbbbbZ0)
After rule (4) is applied, with no stack top portion reducible, the parser shifts in the next input symbol c as shown below. Now the parser sees bAc on the stack (read from the inside out), which is reducible by rule (3).

(q1, ccccc, AbbbbZ0) ⊢ (q1, cccc, cAbbbbZ0) ⊢ ?

However, we should be careful: we cannot let the parser repeat this reduction without looking ahead, because it needs to save the last three c's of the input string for the final reduction by rule (2). Thus the parser needs to look 3 cells ahead. In the current configuration, since there is ccc ahead, it can reduce the bAc on the stack top portion by rule (3). The parser can do this shift-in and reduction once more, as shown below.

(q1, cccc, cAbbbbZ0) ⊢(3) (q1, cccc, AbbbZ0) ⊢
(q1, ccc, cAbbbZ0) ⊢(3) (q1, ccc, AbbZ0) ⊢ (q1, cc, cAbbZ0) ⊢ ?
Now the parser sees only two c's ahead. These must be shifted onto the stack for the reduction by rule (2), because the last three c's, together with A, were generated by rule (2). After this reduction, the parser reduces the stack top twice by applying rule (1) and enters the final configuration as follows.

(q1, cc, cAbbZ0) ⊢ . . . ⊢ (q1, ε, cccAbbZ0) ⊢(2)
(q1, ε, SbbZ0) ⊢(1) (q1, ε, SbZ0) ⊢(1) (q1, ε, SZ0)
Below are the summary of our analysis and the parse table of the LR(3) parser.

(1) S → bS, (2) S → Accc, (3) A → bAc, (4) A → bc

(q0, bbbbbcccccc, Z0) ⊢ … ⊢ (q1, ccccc, cbbbbbZ0) ⊢(4)
(q1, ccccc, AbbbbZ0) ⊢ (q1, cccc, cAbbbbZ0) ⊢(3) (q1, cccc, AbbbZ0) ⊢
(q1, ccc, cAbbbZ0) ⊢(3) (q1, ccc, AbbZ0) ⊢ (q1, cc, cAbbZ0) ⊢ . . . ⊢
(q1, ε, cccAbbZ0) ⊢(2) (q1, ε, SbbZ0) ⊢(1) (q1, ε, SbZ0) ⊢(1) (q1, ε, SZ0)

Parse Table

Stack top |       3 lookahead
portion   | ccc |   ccB    | xxx
   bc     |     |          |  A
  bAc     |  A  | Shift-in |
  Accc    |     |          |  S
   bS     |     |          |  S

(x: don't care, B: blank)
Definition of LR(k) Grammars

LR(k) grammars are defined as follows. The definition is quite similar to that of LL(k) grammars, except that here we are concerned with rightmost derivations instead of leftmost derivations.

Definition (LR(k) grammar). Let G = (V_T, V_N, P, S) be a CFG, and let αβx and αγy be two sentential forms appearing in rightmost derivations of strings in L(G), where α, β, γ ∈ (V_T ∪ V_N)*, x, y ∈ V_T*, and A ∈ V_N. If there is a constant k for which the following condition is satisfied, then grammar G is an LR(k) grammar (i.e., we can construct an LR(k) parser for G):

(i) if S ⇒* αAx ⇒ αβx and S ⇒* αAy ⇒ αγy such that (k)x = (k)y,
(ii) then β = γ.

In other words, if the above condition is satisfied, it is possible for an LR(k) parser to establish β (= γ) on the stack top portion and reduce it with the rule A → β by looking at most k cells ahead. (The proof, which can be done by induction, is a little challenging for the level of this text, and we omit it.)
13.6 Lex and YACC

The lexical analyzer and the parser are the major components of a compiler. A lexical analyzer, scanning the source program, identifies the basic program components, called tokens, such as reserved words, operators, and variable names. A parser, based on the grammar of the programming language, constructs the parse tree (see Section 11.1) whose yield is the sequence of tokens found by the lexical analyzer. This section briefly introduces two software tools, Lex and YACC, which automatically generate a lexical analyzer and a parser, respectively.

Lex

Given tokens in terms of regular expressions, Lex constructs a lexical analyzer. (Actually, this lexical analyzer is a software implementation of an FA that recognizes the set of tokens.) The input to Lex consists of the following three parts, each delimited by %%.

(1) Token definitions
(2) Token descriptions and actions
(3) User-written code
(1) Token definitions

Tokens are defined in terms of a variation of the regular expressions introduced in the rumination section of Chapter 3. Here we repeat the examples presented in that section.

(a|b)* = (a+b)*     (a|b)+ = (a+b)(a+b)*     [ab] = (a+b)
[a-z] = a + b + . . . + z     (ab\.c)? = (ab.c + ε)     abc?d = {abcd, abd}

(2) Token descriptions and actions

Lex reports each token identified to the parser. The input part for token descriptions and actions defines what to report for each token. For example,

{real}      return FLOAT;
{integer}   return INTEGER;

(3) User-written code

When the action for a token is too complex to describe in the action part, it is written as a C program and put in the part for the user-written code. (For further details, see a compiler text.)
406
YACC
Given a CFG, YACC constructs a look-ahead LR (LALR) parser, a practical variant of
the LR(k) parsers that we have studied in this chapter. The input includes actions
to be taken depending on the semantics of each rule, and YACC also produces code for
such actions. The input to YACC consists of the following three parts.
(1) Declarations and definitions, (2) Grammar and actions, (3) User-written code
(1) Declarations and definitions
This part defines all tokens (except for single-symbol operators), operator
precedence, and associativity. It also declares the variable names used in the
program and their data types, so that proper links can be established among their
occurrences in different parts of the program. Here are some simple examples.
%token ID           /* token for identifiers */
%token NUMBER       /* token for numbers */
%token BEGIN END ARRAY FILE
#include "yylex.c"  /* include lexical scanner */
extern int yylval;  /* token values from yylex */
int tcount = 0;     /* a temporary integer variable */
%start S            /* grammar's start symbol */
Parsing
407
(2) Grammar and Actions
The input grammar is written in BNF (Backus-Naur form) as follows.

• Every terminal symbol is put between a pair of single quotation marks, and
nonterminal symbols are denoted by a word (with no quotation marks).
• Instead of the arrow (→) of CFG rules, a colon ( : ) is used, and each rule is
terminated by a semicolon.
• A blank denotes ε.

Here are some simple examples together with the corresponding CFG rules,
where i is a variable name.

E → E + T | E - T | T
T → T * F | T / F | F
F → (E) | i

expr : expr '+' term | expr '-' term | term ;
term : term '*' fact | term '/' fact | fact ;
fact : '(' expr ')' | ID ;
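The E/T/F grammar builds the precedence of * and / over + and -, and left associativity, directly into the rule structure. As a cross-check of that claim, here is a minimal sketch of a recursive-descent evaluator (not YACC output; the left-recursive rules are rewritten as loops, as recursive descent requires, and one-digit numbers stand in for the variable i):

```python
# Evaluate an expression according to the grammar
#   E -> E + T | E - T | T,  T -> T * F | T / F | F,  F -> ( E ) | i
# with left recursion rewritten as iteration.
def parse_expr(tokens):
    def factor(i):                         # F -> ( E ) | i
        if tokens[i] == "(":
            v, i = expr(i + 1)
            assert tokens[i] == ")"
            return v, i + 1
        return float(tokens[i]), i + 1     # 'i' is a one-digit number here

    def term(i):                           # T -> T * F | T / F | F
        v, i = factor(i)
        while i < len(tokens) and tokens[i] in "*/":
            op, (w, i) = tokens[i], factor(i + 1)
            v = v * w if op == "*" else v / w
        return v, i

    def expr(i):                           # E -> E + T | E - T | T
        v, i = term(i)
        while i < len(tokens) and tokens[i] in "+-":
            op, (w, i) = tokens[i], term(i + 1)
            v = v + w if op == "+" else v - w
        return v, i

    value, i = expr(0)
    assert i == len(tokens)                # all input consumed
    return value

assert parse_expr(list("1+2*3")) == 7.0    # * binds tighter than +
```

Because * and / are recognized one level deeper (in term) than + and -, precedence needs no extra machinery; YACC's %left/%right declarations achieve the same effect table-driven.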
408
(3) User-written code
This part, which has the following form, must include the function call yyparse(),
together with other C code if needed. (For further details, refer to a compiler text.)

main ( )
{
. . . .
yyparse( );
. . . .
}
Power Memory Class
An elderly couple were experiencing declining memories, so they decided to take a power memory class, where
they teach one to remember things by association. Later, the man was talking to a neighbor about how much the
class helped them.
“Who was the instructor?” the neighbor asked.
“Oh, let's see,” pondered the man. “Umm… what's that flower, you know, the one that smells real nice but has
those thorns…?”
“A rose?” offered the neighbor.
“Right,” said the man. He then turned toward his house and shouted,
“Hey, Rose, what's the name of the guy we took that memory class from?”
 DonTravels 
Break Time
409
G1: S → Ab | Bc    A → Aa | a    B → Ba | a
G2: S → Ab | Bc    A → aA | a    B → aB | a
G3: S → aS | aA    A → b | c
Rumination (1): Parsing
• The LL and LR parsers work based on a given grammar. The following three context-free grammars generate the same
language {a^i X | i ≥ 1, X ∈ {b, c}}. Grammar G1 is neither an LL(k) nor an LR(k) grammar for any k, G2 is an LR(1)
grammar but not LL(k) for any k, and G3 is LL(2) and LR(0). (We leave the proofs for the reader.) Thus, LL(k) and
LR(k) refer to the grammar, not to the language.
• Ambiguous CFGs have more than one parse tree that yields the same string x, and hence there is more than one leftmost
or rightmost derivation for x. Thus, it is impossible to parse x and give the exact sequence of rules that have been applied
in the leftmost or rightmost derivation. However, if we allow the parser to arbitrarily choose one of the multiple
derivations, we can design a parser. As an example, consider the following grammar, which is ambiguous. We can make an
LL(0) or LR(0) parser for this grammar by letting it choose either one of the two derivations S ⇒ A ⇒ a and S ⇒ B ⇒ a.

S → A | B    A → a    B → a
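The claim that the string a has exactly two leftmost derivations under this grammar can be verified by brute force. The sketch below enumerates all leftmost derivations of a target string; it is practical only for tiny grammars like this one:

```python
# Grammar S -> A | B, A -> a, B -> a, written as a rule table.
rules = {"S": ["A", "B"], "A": ["a"], "B": ["a"]}

def leftmost_derivations(form, target):
    """Return every leftmost derivation from `form` to `target`,
    each as the list of sentential forms passed through."""
    if form == target:
        return [[form]]
    for i, sym in enumerate(form):          # find the leftmost nonterminal
        if sym in rules:
            out = []
            for rhs in rules[sym]:          # expand it in every possible way
                rest = form[:i] + rhs + form[i + 1:]
                for tail in leftmost_derivations(rest, target):
                    out.append([form] + tail)
            return out
    return []                               # all terminals, but not the target

# S => A => a  and  S => B => a: two distinct leftmost derivations of "a"
assert len(leftmost_derivations("S", "a")) == 2
```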
410
• One extra working state (q1) was enough for all the LL(k) parsers that we have designed in this chapter, and we can show
that this is true for every LL(k) grammar. However, this is not true for LR(k) parsing, though all the LR(k) parsers that we
have presented in this chapter use two states, including the start state q0. The following grammar is an example that
requires three states.

S → aaA | baB | caC
A → aA | a    B → aB | a    C → aC | a
The language of this grammar is {xa^i | i ≥ 1, x ∈ {a, b, c}}. Every string in this language begins with either a, b, or c,
and depending on that first symbol, the last a in a^i is generated by A → a, B → a, or C → a, respectively. Notice that for
any rightmost derivation, one of these rules is the last one applied. It follows that the parser must shift in the whole
input string to bring the last a onto the top of the stack, which can be done by looking one cell ahead (see the
configuration below). Now the problem is to correctly choose one of the three rules A → a, B → a, and C → a to reduce
the a on the stack top.

(q0, baa....aa, Z0) ⇒ ..... ⇒ (q1, ε, aa...aabZ0) ⇒ ?

The only information needed for the correct reduction is the symbol pushed in first, on top of the bottom-of-stack
symbol Z0 (b in the configuration above). This symbol can be arbitrarily far from the top. (Recall that the parser is
allowed to look down the stack by only a finite depth.) So the parser, reading the first input symbol, needs to remember
it in its finite state control and use it to choose the correct rule for the first reduction. The parser needs three
states, say qa, qb, and qc, corresponding to keeping the first symbol a, b, or c, respectively, in its memory.
411

S → aaA | baB | caC
A → aA | a    B → aB | a    C → aC | a

(q0, aaa....aa, Z0) ⇒ (qa, aa....aa, aZ0) ⇒ ..... ⇒
(qa, ε, aaa...aaaZ0) ⇒ (qa, ε, Aaa...aaaZ0) ⇒ (qa, ε, Aa...aaaZ0) ⇒
.... ⇒ (qa, ε, AaaZ0) ⇒ (qa, ε, SZ0)
Below are the LR(1) parsing histories showing how the parser, with the three states qa, qb, and qc,
parses strings beginning with b and c.

(q0, baa....aa, Z0) ⇒ (qb, aa....aa, bZ0) ⇒ ..... ⇒
(qb, ε, aa...aabZ0) ⇒ (qb, ε, Baa...aabZ0) ⇒ (qb, ε, Ba...aabZ0) ⇒
...... ⇒ (qb, ε, BabZ0) ⇒ (qb, ε, SZ0)

(q0, caa....aa, Z0) ⇒ (qc, aa....aa, cZ0) ⇒ ..... ⇒
(qc, ε, aa...aacZ0) ⇒ (qc, ε, Caa...aacZ0) ⇒ (qc, ε, Ca...aacZ0) ⇒
...... ⇒ (qc, ε, CacZ0) ⇒ (qc, ε, SZ0)
412
13.1 Following the steps (i) and (ii) below, construct an LL(k) parser with minimum k for each of the following grammars.
(i) Choose a typical string w that is derivable by the grammar, and trace the LL(k) parsing for the input w with clear indication
of where and why your parser needs to look k cells ahead.
(ii) Based on the analysis in step (i) above, construct an LL(k) parse table of your parser.
(a) S → aS | D    D → 0 | 1    (b) S → aaA    A → bbS | bbb
(c) S → BA | aaab    A → aAb | aaaaaaab    B → bBa | bbbb
13.2 (a) Why can CFG G below not be an LL(k) grammar for any constant k?
(b) Construct an LL(k) grammar G’ whose language is L(G), and show an LL(k) parse table for G’, together with the analysis
that you took to build your parse table according to step (i) in problem 13.1 above.
G: S → aAb | aBb    A → aAb | c    B → aBb | d
13.3 Following steps (i) and (ii) below, construct an LR(k) parser with minimum k for each of the following grammars.
(i) Choose a typical string w that is derivable by the grammar, and trace the LR(k) parsing for the input w with clear
indication of where and why your parser needs to look k cells ahead.
(ii) Based on the analysis in step (i) above, construct an LR(k) parse table of your LR(k) parser.
Exercises
(a) S → ABC | BC | C    A → aaa    B → aa    C → a
(b) S → aSBb | a    B → aaaaab
413
Practical Application:
Web Programming and Bioinformatics
414
14. Web Programming and Bioinformatics
In Section 3.2, we learned how the syntax of the Pascal programming language can be defined with a CFG. (Actually,
Appendix A shows the whole definition of the Pascal programming language in terms of a syntax flow graph.) Other
programming languages can also be defined formally with a grammar, except for a few cases of context
dependency, such as the double declaration of a variable in a block, or the use of numeric labels in the DO-loop of
FORTRAN. This chapter first shows that major parts of the popular Web programming languages HTML (Hypertext Markup
Language) and XML (Extensible Markup Language), the latter designed for e-commerce, can also be defined in terms
of a CFG. Then the chapter presents a potential application of formal languages to molecular biology.
14.1 Hyper Text Markup Language (HTML) 415
14.2 Document Type Definition (DTD) and XML 418
14.3 Genetic code and grammar 425
415
14.1 Hyper Text Markup Language (HTML)
In this section, to see how our knowledge of formal languages can be applied
to investigating the properties of programming languages, we first examine
HTML. The left box below shows an informal definition (itemized for convenience) of
an HTML list, of the kind that commonly appears in a text. To the right are CFG rules
translated from the informal definition.
Web Programming and Bioinformatics
(1) Char denotes a character.
(2) Text is a sequence of characters.
(3) Doc is a sequence of Elements.
(4) For a string x, we call <x> a tag, and </x> the matching tag of <x>.
(5) Element is a Text, or a Doc in between a matching tag, or a Doc with a tag at the front.
(6) ListItem is a document with the tag <LI> at the front. A ListItem is an item of a list.
(7) List is a sequence of zero or more ListItems.

(1) Char → a | A | . . . . . | z | Z | . .
(2) Text → Char Text | ε
(3) Doc → Element Doc | ε
(4) Tag → <Text> | </Text>
(5) Element → Text | a Doc in between a matching tag | <Text>Doc
(6) ListItem → <LI>Doc
(7) List → ListItem List | ε
416
Notice that in the CFG rules, the boldface italic words denote nonterminal symbols.
The blank between two nonterminals (e.g., Char Text) is used to delimit the two
nonterminals (it does not stand for the terminal blank symbol) and hence can be ignored. While
translating, we found a context-dependent part of the language, i.e., "Element is a Doc
in between a matching tag," which is impossible to translate into a context-free grammar
rule (see rule (5)).
417
HTML
According to the informal definition, a document Doc should appear between a
matching tag pair, like <x>Doc</x>, where x is a Text (i.e., a string). In the formal
definition, we need a rule for Element that can generate <x>Doc</x> for an
arbitrary Text x. Notice that Element → <Text>Doc</Text> does not work, because
we cannot guarantee that the left Text and the right Text generate identical
strings.

Actually, it is not hard to see that the set of documents with matching tags has the
same context dependency as the language L = {xcx | x ∈ {a, b}*}. Using the
pumping lemma for context-free languages, we can prove that this language L is not
context-free (see Exercise 12.4). This implies that the list part of HTML is not
context-free. However, if we restrict the matching tags to a finite set of strings, say
{w1, w2, . . , wk} (e.g., <EM>Doc</EM>, <OL>Doc</OL>, etc.), the HTML list can be
defined in terms of a "pure" CFG as shown below.
(1) Char → a | A | . . . . . | z | Z | . . .    (2) Text → Char Text | ε
(3) Doc → Element Doc | ε    (4) Tag → <Text> | </Text>
(5) Element → Text | <Text>Doc | <w1>Doc</w1> | . . . . | <wk>Doc</wk>
(6) ListItem → <LI>Doc    (7) List → ListItem List | ε
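With the tags restricted to a finite set, checking that tags match becomes an ordinary bracket-matching problem, which is why the restriction keeps the language context-free. The sketch below illustrates this; TAGS plays the role of the finite set {w1, ..., wk}:

```python
import re

TAGS = ["EM", "OL", "LI"]     # the finite tag set {w1, ..., wk}

def matches(doc):
    """Check well-matched tags by repeatedly erasing innermost
    pairs <w>...</w> whose content contains no further tags."""
    pattern = "|".join(f"<{w}>[^<>]*</{w}>" for w in TAGS)
    prev = None
    while prev != doc:
        prev, doc = doc, re.sub(pattern, "", doc)
    return "<" not in doc     # well-matched iff no tags remain

assert matches("<OL><LI>milk</LI><LI>eggs</LI></OL>")
assert not matches("<OL><LI>milk</OL></LI>")    # crossed tags
```

Each pair <wi>...</wi> behaves like a fixed pair of brackets, and nested fixed brackets are the classic context-free pattern; only unboundedly many distinct tag names force the xcx-style context dependency.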
418
14.2 DTD and XML
The major objective of HTML is to define the format of a document, so it is
not easy to describe the semantics of the text, which is needed in the area of e-commerce.
XML was introduced to overcome this drawback. Actually, XML is the language
(i.e., the set of documents) written according to a DTD (Document Type Definition),
which can be transformed into a CFG. All DTDs have the following form,

<!DOCTYPE name-of-DTD [ list-of-element-definitions ]>

where each element definition has the following format.

<!ELEMENT element-name (element-description)>

This format can be transformed into the following CFG rules, where DTD corresponds
to the start symbol S.

DTD → <!DOCTYPE name-of-DTD [ list-of-element-definitions ]>
list-of-element-definitions → element-definition list-of-element-definitions | element-definition
element-definition → <!ELEMENT element-name (element-description)>
419
The list-of-element-definitions and the element-description are, respectively,
delimited by the pairs of matching tags <name-of-DTD> and </name-of-DTD>,
and <element-name> and </element-name>. An element description is a variation of the
regular expression, defined as follows.

(1) Both (a) and (b) below are element descriptions.
(a) An element-name.
(b) Any text (with no tag), denoted by the special term #PCDATA.
(2) If E1 and E2 are element descriptions, then E1*, E1+, E1?, (E1, E2), and E1 | E2
are element descriptions.

Recall that E1+ = E1(E1)*, E1? = E1 + ε, E1 | E2 = E1 + E2, and E1, E2 corresponds
to the concatenation of the regular expressions E1 and E2. We know that every language
expressible with a regular expression can also be defined in terms of a regular grammar,
and every regular grammar is a CFG. Hence, element descriptions are context-free.
420
Here is a popular example of a DTD, which defines the specification of PCs.

<!DOCTYPE PcSpecs [
<!ELEMENT PCS (PC*)>
<!ELEMENT PC (MODEL, PRICE, PROCESSOR, RAM, DISK+)>
<!ELEMENT MODEL (#PCDATA)>
<!ELEMENT PRICE (#PCDATA)>
<!ELEMENT PROCESSOR (MANF, MODEL, SPEED)>
<!ELEMENT MANF (#PCDATA)>
<!ELEMENT SPEED (#PCDATA)>
<!ELEMENT RAM (#PCDATA)>
<!ELEMENT DISK (HARDDISK | CD | DVD)>
<!ELEMENT HARDDISK (MANF, MODEL, SIZE)>
<!ELEMENT SIZE (#PCDATA)>
<!ELEMENT CD (SPEED)>
<!ELEMENT DVD (SPEED)>
]>
421
Here we show how to transform each element definition of PcSpecs into CFG
rules (shown below each definition).

<!ELEMENT PC (MODEL, PRICE, PROCESSOR, RAM, DISK+)>
PC → <PC>MODEL PRICE PROCESSOR RAM DISK+</PC>
DISK+ → <DISK+>DISKS</DISK+>    DISKS → DISK DISKS | DISK

<!ELEMENT MODEL (#PCDATA)>    MODEL → <MODEL>Text</MODEL>
<!ELEMENT PRICE (#PCDATA)>    PRICE → <PRICE>Text</PRICE>

<!ELEMENT PCS (PC*)>
PCS → <PCS>PCSS</PCS>    PCSS → PC PCSS | ε
422
<!ELEMENT MANF (#PCDATA)>    MANF → <MANF>Text</MANF>
<!ELEMENT MODEL (#PCDATA)>    MODEL → <MODEL>Text</MODEL>

<!ELEMENT DISK (HARDDISK | CD | DVD)>
DISK → <DISK>HARDDISK</DISK> | <DISK>CD</DISK> | <DISK>DVD</DISK>

. . . . .

<!ELEMENT PROCESSOR (MANF, MODEL, SPEED)>
PROCESSOR → <PROCESSOR>MANF MODEL SPEED</PROCESSOR>
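The pattern in these transformations, wrapping the element description in the matching tags for the element name, is mechanical enough to sketch as a one-line rewrite. The helper below is hypothetical and covers only the simple cases (no *, +, or | inside the description):

```python
import re

def element_to_rule(line):
    """Turn  <!ELEMENT name (description)>  into the CFG rule
    name -> <name>description</name>  (simple cases only)."""
    name, desc = re.fullmatch(r"<!ELEMENT (\w+) \((.+)\)>", line).groups()
    return f"{name} -> <{name}>{desc}</{name}>"

assert element_to_rule("<!ELEMENT PROCESSOR (MANF, MODEL, SPEED)>") == \
       "PROCESSOR -> <PROCESSOR>MANF, MODEL, SPEED</PROCESSOR>"
```

Descriptions using the regular-expression operators would additionally expand into auxiliary rules, as DISK+ did above.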
423
Here is an example of an XML document written according to the DTD PcSpecs.

<PCS>
<PC>
<MODEL>1234</MODEL>
<PRICE>$3000</PRICE>
<PROCESSOR> . . . . </PROCESSOR>
<RAM>512</RAM>
<DISK><HARDDISK>
<MANF>Superdc</MANF>
<MODEL>xx1000</MODEL>
<SIZE>62Gb</SIZE>
</HARDDISK></DISK>
<DISK><CD>
<SPEED>32x</SPEED>
</CD></DISK>
</PC>
<PC> . . . . </PC>
</PCS>
424
Again, in XML we see the same context dependency caused by the matching tags.
As in the previous example, if we restrict the matching tags to those chosen from a
finite set of reserved words, such as MODEL, PRICE, DISK, etc., XML turns out to
be a CFL.
Warning Signs
• On children's alphabet blocks: Letters may be used to construct words, phrases and sentences that may be
deemed offensive.
• On a microscope: Objects are smaller and less alarming than they appear.
• On a revolving door: Passenger compartments for individual use only.
• On work gloves: For best results, do not leave at crime scene.
• On a palm sander: Not to be used to sand palms.
• On a calendar: Use of term "Sunday" for reference only. No meteorological warranties express or implied.
• On a blender: Not for use as an aquarium.
 Dennis 
Break Time
425
14.3 Genetic Code and Grammar
Quite a few linguistic terms, such as code, express, translate, edit, etc., have
been adopted by biologists, especially in the field of molecular biology, ever since
James Watson and Francis Crick proposed the structure of DNA (deoxyribonucleic acid)
strands in 1953. This implies that in biology there are problems that we can approach
with our knowledge of automata and formal languages. As an example, this section
shows how it is possible to express genes in terms of a grammar.
Before the example, we need a very brief introduction to the process of protein
synthesis. A genome is a long, double-helix DNA strand consisting of four chemical
bases: adenine, thymine, cytosine, and guanine, denoted by A, T, C, and G, respectively.

[Figure: a DNA molecule — the genome is a double helix whose backbones carry paired bases.]
426
(1) RNA Transcription

Each base in a double helix forms one of the four complementary pairs, A-T, T-A,
C-G, and G-C, as the following example shows. (Thus, given a single DNA strand,
we can figure out its complementary strand.)

A C T T A A C G G C G A T
T G A A T T G C C G C T A

[Figure: transcription — from the DNA double strand TCAAGCT (5' end to 3' end) and its
complement AGTTCGA, the RNA UCAAGCU is transcribed.]

Proteins are synthesized in the following three steps. (1) A single-strand segment
(i.e., a gene) of the genome is transcribed into an RNA. The RNA is the complementary
strand of the source strand, with every base T replaced by U (uracil). Hence, RNAs
consist of the four bases A, C, G, and U.
427
[Figure: the transcribed RNA, consisting of exons and introns, is (2) spliced into a
messenger RNA, which is (3) translated into a protein.]

(2) In eukaryotes, only the substrings useful for protein synthesis, called exons, are
spliced (i.e., cut out and concatenated) to form an mRNA (messenger RNA). (3) Finally,
ribosomes, large molecular assemblies traveling along the mRNA, translate the mRNA
into a protein, which is an amino-acid sequence.
428
There are 20 different amino acids. Hence, proteins can be represented as strings over
20 characters. The translation is done by triplets (i.e., groups of three adjacent bases),
also called codons, according to the table shown below. The translation begins with the
start codon ATG, and continues until it meets one of the stop codons, TAA, TAG, or
TGA (see the table). For every codon scanned, a specific amino acid is attached to
the next position of the protein being synthesized. Notice that because there are 4×4×4 =
64 different codons and only 20 amino acids, several codons are translated into the same
amino acid. (Following convention, we shall represent codons using the character T rather
than U.)

Ala: GCA, GCG, GCT, GCC         Arg: AGA, AGG, CGA, CGG, CGT, CGC
Asp: GAT, GAC                   Asn: AAT, AAC
Cys: TGT, TGC                   Glu: GAA, GAG
Gln: CAA, CAG                   Gly: GGA, GGG, GGT, GGC
His: CAT, CAC                   Ile: ATA, ATT, ATC
Leu: TTA, TTG, CTA, CTG, CTT, CTC
Lys: AAA, AAG                   Met: ATG (start)
Phe: TTT, TTC                   Pro: CCA, CCG, CCT, CCC
Ser: AGT, AGC, TCA, TCG, TCT, TCC
Thr: ACA, ACG, ACT, ACC         Trp: TGG
Tyr: TAT, TAC                   Val: GTA, GTG, GTT, GTC
Stop: TAA, TAG, TGA
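The translation procedure just described — start at ATG, read triplet by triplet, stop at a stop codon — can be sketched as follows (only a few of the 64 table entries are included, for brevity):

```python
# A fragment of the codon table above.
CODON = {
    "GCA": "Ala", "GCG": "Ala", "GCT": "Ala", "GCC": "Ala",
    "GAT": "Asp", "GAC": "Asp", "GAA": "Glu", "GAG": "Glu",
    "ATG": "Met", "TGG": "Trp",
    "TAA": "Stop", "TAG": "Stop", "TGA": "Stop",
}

def translate(dna):
    """Translate a coding region, written in DNA characters as the
    convention above does, into a list of amino acids."""
    assert dna.startswith("ATG")           # translation begins at ATG
    protein = []
    for i in range(0, len(dna) - 2, 3):    # scan triplet by triplet
        aa = CODON[dna[i:i + 3]]
        if aa == "Stop":                   # TAA, TAG, or TGA ends translation
            break
        protein.append(aa)
    return protein

assert translate("ATGGCAGATTAA") == ["Met", "Ala", "Asp"]
```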
429
This three-step process of protein synthesis is called the central dogma of
molecular biology, though, with so many exceptional cases, it is no longer accepted as
a universally valid theory. However, to show a potential application of formal language and
automata theory to molecular biology, we shall present an example based on this
dogma. As we did in Section 14.1 for HTML, we will transform an informal definition
of genes into a formal definition in terms of a grammar. (This example is an excerpt,
lightly edited, from David Searls' paper "The Linguistics of DNA," which appeared in
the journal American Scientist, Vol. 80, 1992.)

• The transcript, which is the part of the gene that is copied into a messenger RNA, has
flanking 5'- and 3'-untranslated-regions around a coding-region initiated by a start
codon.

<gene> → <transcript>
<transcript> → <5'-untranslated-region><start-codon><coding-region><3'-untranslated-region>
430
Gene Grammar

• A stop-codon is any of the three triplets TAA, TAG, and TGA, whereas a codon is any
of the remaining 61 possible triplets, which are translated into amino acids according to
the codon table.

<start-codon> → <Met>    <stop-codon> → TAA | TAG | TGA
<codon> → <Ala> | <Arg> | <Asp> | <Asn> | <Cys> | <Glu> | <Gln> |
          <Gly> | <His> | <Ile> | <Leu> | <Lys> | <Met> | <Phe> |
          <Pro> | <Ser> | <Thr> | <Trp> | <Tyr> | <Val>

• The coding-region consists of a codon followed by more coding-region, a recursion
that ultimately terminates in a stop-codon. The coding-region also allows for a splice at
any point in the recursion, resulting in the insertion of an intron, which is bracketed by
the splicing signals GT and AG.

<coding-region> → <codon><coding-region> | <splice><coding-region> | <stop-codon>
<splice> → <intron>    <intron> → GT<intron-body>AG
431
• The first two bases of a codon partially specify the amino acid into which it is
translated. Every codon starting with GC (CG) is translated into alanine (respectively,
arginine), independent of the third base. Every codon starting with GA is translated into
aspartic acid if the third base is a pyrimidine (i.e., C or T); otherwise, if the third base is
a purine (i.e., A or G), it is translated into glutamic acid. Here are some formal
representations of such translation rules specified in the codon table.

<Ala> → GC<base>    <Arg> → CG<base>    <Asp> → GA<pyrimidine>
<Cys> → TG<pyrimidine>    <Glu> → GA<purine>    <Gln> → CA<purine>
<base> → A | C | G | T    <pyrimidine> → C | T    <purine> → A | G

All the grammar rules presented up to this point are context-free. But the following
biological fact requires a rule from a higher level in the Chomsky hierarchy.

• Introns can be transposed to the left or right.

<base><intron> → <intron><base>    <intron><base> → <base><intron>

The following slide shows a grammar that specifies the set of "genes" that we have
developed in this section.
432
<gene> → <transcript>
<transcript> → <5'-untranslated-region><start-codon><coding-region><3'-untranslated-region>
<coding-region> → <codon><coding-region> | <splice><coding-region> | <stop-codon>

<codon> → <Ala> | <Arg> | <Asp> | <Asn> | <Cys> | <Glu> | <Gln> | <Gly> | <His> | <Ile> |
          <Leu> | <Lys> | <Met> | <Phe> | <Pro> | <Ser> | <Thr> | <Trp> | <Tyr> | <Val>
<start-codon> → <Met>    <stop-codon> → TAA | TAG | TGA

<Ala> → GC<base>    <Arg> → CG<base>    <Asp> → GA<pyrimidine>    <Cys> → TG<pyrimidine>
<Glu> → GA<purine>    <Gln> → CA<purine>    <Gly> → GG<base>    <His> → CA<pyrimidine>
<Ile> → AT<pyrimidine> | ATA    <Leu> → TT<purine> | CT<base>    <Lys> → AA<purine>
<Met> → ATG    <Phe> → TT<pyrimidine>    <Pro> → CC<base>    <Thr> → AC<base>
<Ser> → AG<pyrimidine> | TC<base>    <Trp> → TGG    <Tyr> → TA<pyrimidine>    <Val> → GT<base>

<base> → A | T | G | C    <pyrimidine> → C | T    <purine> → A | G

<splice> → <intron>    <intron> → GT<intron-body>AG

<intron><base> → <base><intron>    <base><intron> → <intron><base>
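The two transposition rules are what push the grammar above the context-free level: each rewriting step depends on two adjacent symbols. The sketch below applies <intron><base> → <base><intron> repeatedly, moving an intron rightward through a sequence of bases:

```python
def move_intron_right(seq):
    """Repeatedly apply the rule  <intron><base> -> <base><intron>
    until no occurrence of an intron followed by a base remains."""
    seq = list(seq)                       # e.g. ["intron", "A", "C"]
    while True:
        for i in range(len(seq) - 1):
            if seq[i] == "intron" and seq[i + 1] != "intron":
                # swap the adjacent pair: one application of the rule
                seq[i], seq[i + 1] = seq[i + 1], seq[i]
                break
        else:
            return seq                    # no rule applies any more

assert move_intron_right(["intron", "A", "C"]) == ["A", "C", "intron"]
```

A context-free rule rewrites a single nonterminal regardless of its neighbors; here the left side is two symbols, which is exactly the type 1 (context-sensitive) pattern of the Chomsky hierarchy.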
433
The grammar that we have just developed is incomplete, because no rules are defined
for the nonterminal symbols <5'-untranslated-region>, <3'-untranslated-region>, and
<intron-body>. The two untranslated regions flanking the coding-region usually
contain segments regulating gene expression; their contents and functions are not yet
completely understood. The intron-body parts generally consist of a long repeating
string, whose exact role is also under investigation. We need more research before we
can present a formal model for these regions.
Love
What else is love but understanding and rejoicing in the fact that another person lives, acts, and experiences
otherwise than we do…?  Friedrich Nietzsche 
Just because you love someone doesn't mean you have to be involved with them. Love is not a bandage to cover
wounds.  Hugh Elliot 
Break Time
434
Hierarchy of the Models
435
Chomsky Hierarchy

[Figure: the Chomsky hierarchy. Each class of languages (grammars) is matched with the
class of machines that characterizes it, together with other equivalent models:
• Recursively enumerable sets (type 0) — Turing machines (TM); Post systems, Markov
  algorithms, μ-recursive functions
• Context-sensitive languages (type 1) — linear-bounded automata (LBA)
• Context-free languages (type 2) — pushdown automata (PDA)
• Regular languages (type 3) — finite state automata (FA); regular expressions
Colored arrows indicate proven relations: containment between successive classes, and
characterization between languages and machines.]

Review
436
In Chapters 7 and 12, respectively, we partially proved the characterization relations and the containment relations in
the hierarchy of languages and automata (i.e., the Chomsky hierarchy, repeated in the preceding slide, where the colored
arrows indicate the proven parts).

This chapter introduces two additional interesting classes of languages into the hierarchy. One is the class of languages that
cannot be recognized by any TM, and the other is the class of languages that can be recognized by a TM that eventually halts.
(The latter are called recursive languages.) Then the chapter completes the proof of the hierarchy, including these new
classes of languages. The proofs are quite challenging for undergraduate students. However, they are worth the challenge,
because the techniques involve elegant logic that can be followed with the knowledge we gained in Chapter 1.
15. Hierarchy of the Models: Final Proof
15.1 Languages that TMs cannot recognize 437
Enumeration
Enumerating TMs
15.2 Universal TM 448
15.3 Enumerable Languages 450
15.4 Recursive Languages 459
15.5 Proof of characterizations 471
Type 0 grammar and TM
CSG and LBA
CFG and PDA
Rumination 506
Exercises 507
437
15.1 Languages that TMs cannot recognize

At the top level of the Chomsky hierarchy are the type 0 languages. In this
chapter we will prove that a language is type 0 if and only if it is recognizable
by a TM. Before we prove this characterization, we may ask the following question:
is there a language that cannot be recognized by any TM? The answer is positive, and
we will prove it first. A popular approach, which we will use in this chapter, is the
diagonalization technique introduced in Chapter 1. This technique requires an
enumeration (i.e., a listing out) of all possible target objects, such as certain types of
grammars, automata, or strings. Before using the diagonalization technique for the
proof, we need to formally define the term enumeration.

Definition 15.1 (Enumeration). For a given set S, to enumerate S is to list out every
element in S on a line. We call a TM that is capable of enumerating a set an
enumerator. If a set is enumerable, then we call it an enumerable set.

For example, the set of positive integers and the set of character strings over a finite
alphabet are enumerable, because we can build a TM that lists all the elements of these
sets on its tape (see the illustration on the next page). Notice that because these sets
are infinite, the enumerator never stops.
Hierarchy
438
The figure below shows how we can build a TM that enumerates the set of
positive integers in binary. (The set Σ*, for a finite alphabet Σ, is also enumerable,
because we can build a TM that enumerates the set in lexicographic order.)

[Figure: the enumerating TM starts by writing 1 on the blank tape, then repeatedly
writes a copy of the rightmost number in the list to its right, separated by two cells,
and increments the copy by one.]

Enumerable sets are also called countable, because all the elements in an enumerable
set can be counted, i.e., they can be put into one-to-one correspondence with the set of
positive integers. There are uncountable sets; the following theorem shows an example.

Non-type 0 Languages
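The behavior of the enumerator in the figure can be mimicked with a generator. This is a sketch only: a real TM would copy and increment digits on its tape as the figure describes, rather than use built-in arithmetic:

```python
from itertools import islice

def enumerate_binary():
    """Yield the positive integers in binary, in the order the
    enumerating TM writes them: 1, 10, 11, 100, ..."""
    n = "1"                               # write 1 on the blank tape
    while True:
        yield n
        n = bin(int(n, 2) + 1)[2:]        # copy the last number, increment by 1

assert list(islice(enumerate_binary(), 5)) == ["1", "10", "11", "100", "101"]
```

Like the TM, the generator never stops on its own; we only ever observe a finite prefix of the infinite listing.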
439
Theorem 15.1 For a given finite alphabet Σ, the set of all languages over Σ (i.e., the set
of all subsets of Σ*) is not enumerable.

Proof. We use the diagonalization technique. Suppose that all the languages over Σ were
enumerable, i.e., that we could list them out along an infinite line. Let Li be the i-th language
in the list. As illustrated in the figure below, consider a matrix M, where M[i, j] = 1 if
wi is in language Lj, and M[i, j] = 0 otherwise. (Notice that this is a conceptual setting. We
don't actually have to construct the whole matrix.)

[Figure: matrix M with rows labeled w1, w2, w3, w4, ... and columns labeled
L1, L2, L3, L4, L5, ...; entry M[i, j] is 1 if wi ∈ Lj and 0 otherwise.]
440
Now, with the diagonal entries we construct a language LD defined as follows.

LD = { wi | M[i, i] = 0 }

We claim that LD is a language that does not appear in the list. Suppose that the i-th
language Li = LD. If wi ∈ Li, then wi ∉ LD. Otherwise, if wi ∉ Li, then wi ∈ LD. In
either case wi belongs to exactly one of Li and LD, so it must be that Li ≠ LD. Notice
that this proof holds even for a singleton alphabet, for example, Σ = {a}.
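A finite analogue of the diagonal construction makes the argument concrete: for any 0/1 membership matrix M (rows for strings, columns for languages), the language built from the complemented diagonal disagrees with every column:

```python
# Row i gives the membership of w_i in L_1, ..., L_4.
M = [
    [0, 1, 0, 1],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 1, 1, 0],
]

# w_i is in L_D  iff  M[i][i] == 0  (complement of the diagonal)
LD = [1 - M[i][i] for i in range(4)]

# L_D disagrees with every listed language L_j on the string w_j,
# so L_D cannot equal any column of M.
for j in range(4):
    column_j = [M[i][j] for i in range(4)]
    assert LD[j] != column_j[j]
```

In the theorem the matrix is infinite, but the disagreement argument is identical: LD differs from the j-th listed language at least on the string wj.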
441
Now we go back to the question of whether there is a language that cannot be
recognized by any TM. We have learned that for a given alphabet Σ, the set of languages in
Σ* is not enumerable. If we can show that the set of all TMs is enumerable, then we
have proved that there is a language that cannot be recognized by any TM, because for every
TM, there is only one language that the machine recognizes.

Enumerating All TMs

Let M = (Q, Σ, Γ, δ, q0, F) be a TM, where Q, Γ, and Σ ⊆ Γ are, respectively, the set
of states, the tape alphabet, and the input alphabet, all finite. We assume that M has
only one accepting state, i.e., |F| = 1, and that Σ = {a, b}. (Once in an accepting state, a
TM does not have to move. Hence, all accepting states can be merged into a single
state without affecting the language accepted by the TM; see the rumination section at
the end of Chapter 4.) We may use other alphabet sizes, but two symbols are enough for our proof.

Notice that to list all TMs accepting languages in Σ*, we must consider all possible
finite sets of states and tape alphabets. Here, finiteness implies that the size is fixed
for each TM, but there is no upper bound. Thus, there are infinitely many TMs,
each recognizing a language in {a, b}*. If they are enumerable, how can we list them
with an enumerator?
442
We further assume that q0 is the start state, q1 is the accepting state, t0 is the blank
symbol (B), t1 = a, and t2 = b. (Recall from the ruminations at the end of Chapters 4
and 5 that every TM can accept its language with one accepting state.) Finally, we
choose a format for describing the state transition function δ. For example, we may list δ as
follows.

δ(q0, a) = (q2, b, R); δ(q2, a) = (q5, a, L); δ(q0, b) = (q3, b, R); . . . . .

With no loss of generality, assume that every TM uses a finite number of state names
and tape symbols, chosen, respectively, from the left end of the following lists (pools)
of state names and tape symbols. (Notice that both lists are infinitely long.) As usual,
we will use R, L, and N for the direction of the tape head move.

pool of state names: {q0, q1, q2, . . . . }
pool of tape symbols: {t0, t1, t2, . . . . . }
directions of a move: {R, L, N}
443
For simplicity, let's delete the parentheses and commas from the list, as follows. In this shortened list, every transition consists of six symbols and is delimited by a semicolon. Though poorly legible, it carries the same information as the original list. We are going to represent a TM's transition function δ written in this format.
Recall that once all the symbols in the set of states Q, the input alphabet Σ, the tape alphabet Γ, the start state q0, and the accepting state in F are defined, we can uniquely represent a TM with the transition function δ alone (or with the state transition graph). We will use this list for the convenience of converting it into a binary number in the next step.
The transition function in conventional notation:
δ(q0, t1) = (q2, t2, R);  δ(q2, t2) = (q5, t1, L);  δ(q0, t2) = (q3, t3, R);  . . .

The transition function in abbreviated notation:
q0t1=q2t2R; q2t2=q5t1L; q0t2=q3t3R; . . .
Let E be an encoding function (i.e., a homomorphism) such that, given a symbol used in the transition function of a TM, E returns a binary code as follows.

E(;) = 010,  E(=) = 01^2 0,
E(L) = 01^3 0,  E(R) = 01^4 0,  E(N) = 01^5 0,
E(qi) = 01^(6+2i) 0,  E(ti) = 01^(7+2i) 0,  for i ≥ 0.

Notice that every binary code is uniquely identified by the number of 1's between the two 0's. State codes have an even number of 1's and tape symbols an odd number of 1's. Here are some examples of the binary encoding according to the above encoding function E.

E(q0t1=q2t2R;) = 01^6 0 · 01^9 0 · 01^2 0 · 01^10 0 · 01^11 0 · 01^4 0 · 010
E(q2t2=q5t1L;) = 01^10 0 · 01^11 0 · 01^2 0 · 01^16 0 · 01^9 0 · 01^3 0 · 010
E(q0t2=q3t3R;) = 01^6 0 · 01^11 0 · 01^2 0 · 01^12 0 · 01^13 0 · 01^4 0 · 010

We extend the function E such that, given a TM M, E(M) gives the binary encoding of M's transition function δ according to the above definition.
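As a concrete illustration, the encoding function E is easy to sketch in Python. The helper names below (FIXED, encode_transition) are our own, not part of the text:

```python
# Sketch of the encoding homomorphism E. States q_i map to 01^(6+2i)0,
# tape symbols t_i to 01^(7+2i)0; the remaining symbols have fixed codes.
FIXED = {';': '0' + '1' * 1 + '0', '=': '0' + '1' * 2 + '0',
         'L': '0' + '1' * 3 + '0', 'R': '0' + '1' * 4 + '0',
         'N': '0' + '1' * 5 + '0'}

def E(symbol: str) -> str:
    if symbol in FIXED:
        return FIXED[symbol]
    kind, i = symbol[0], int(symbol[1:])
    if kind == 'q':                       # state q_i: even number of 1's
        return '0' + '1' * (6 + 2 * i) + '0'
    if kind == 't':                       # tape symbol t_i: odd number of 1's
        return '0' + '1' * (7 + 2 * i) + '0'
    raise ValueError(symbol)

def encode_transition(tokens):
    """Encode one abbreviated transition such as q0 t1 = q2 t2 R ;"""
    return ''.join(E(tok) for tok in tokens)

code = encode_transition(['q0', 't1', '=', 'q2', 't2', 'R', ';'])
```

Because E is a homomorphism, E(M) is just the concatenation of the codes of all of M's transitions.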
It is possible to construct a TM MD that, given a binary string w, decides whether or not w is the binary encoding E(M) of some TM M. MD simply checks whether every substring in w delimited by 010 (the code for ;) is the binary encoding of a transition pa=qbX, for some states p and q, some tape symbols a and b, and X ∈ {R, L, N}, each belonging to the sets we have defined.
Now we are ready to construct an enumerator TE (i.e., a TM) that lists all TM's on its tape: M0, M1, M2, . . . . To do this, our enumerator TE generates the binary numbers on the tape in the order of their length and value (0, 1, 00, 01, 10, 11, . . .), one by one, checking whether the current binary number written on the tape is the binary encoding of the transition function of a TM. If it is, the enumerator generates the next binary number to its right. Otherwise, it erases the number before the next number is written. The enumerator repeats this process forever. Since for every TM M the length of E(M) is finite, E(M) will eventually appear on the list.
[Figure: the enumerator TE writing the list M0, M1, M2, . . . , Mi, . . . on its tape]
Now we can claim the following theorem.

Theorem 15.2 The set of all TM's, each encoded by the function E, is enumerable.

The binary encoding function E is a one-to-one function, i.e., no two TM's can be encoded to the same binary number. For each TM M, there is only one language accepted by M. Thus, the fact that all TM's are enumerable implies that all TM languages are enumerable. (Since a language can be recognized by more than one TM, a language may appear in the enumeration multiple times.)

Corollary. The set of all type 0 languages on a given finite alphabet Σ is enumerable.

Type 0 languages are also called recursively enumerable (R.E.) languages. By Theorem 15.1, we know that for a given finite alphabet Σ, it is impossible to enumerate all languages in Σ*. With Theorem 15.2 and the above Corollary, we can claim the following theorem.

Theorem 15.3 Let Σ be a finite alphabet. There is a language in Σ* that cannot be recognized by a TM.
Notice that this theorem also holds for a singleton alphabet, for example Σ = {a}. The same binary encoding technique that we have introduced for TM's is also applicable to grammars and other automata. Thus, we can also claim the following.

Theorem 15.4 The set of all binary encoded CSG's is enumerable.
Life
 We make a living by what we get, we make a life by what we give.  Sir Winston Churchill 
 Life is something that happens when you can‟t get to sleep.  Fran Lebowitz –
 Life is something that everyone should try at least once.  Henry J. Tillman 
 Life is pleasant. Death is peaceful. It‟s the transition that‟s troublesome.  Isaac Asimov –
 Life is what happens to you while you‟re busy making other plans.  John Lennon –
 The secret of a good life is to have the right loyalties and hold them in the right scale of values.  Norman Thomas 
Break Time
15.2 Universal TM

Definition 15.2 (Universal TM). We call a TM Tu universal if Tu, given an arbitrary TM M and an input x, simulates the computation of M on the input x.

Here is an idea of how to construct a universal TM Tu. The input M and x given to Tu must be represented according to an agreed format. Suppose that they are binary encoded as E(M) and E(x), using the same function E that we developed in the previous section. Tu records E(M), E(x), the current state E(qi) of M, and the head position, as the following figure illustrates. Using this information on its tape, Tu follows M's moves one by one. If M enters an accepting state, so does Tu. If M meets an undefined transition, Tu terminates the simulation.

[Figure: Tu's tape holding E(M), E(x), the current state E(qi) of M, and a record of the head position]
A TM M recognizes only its language L(M). Thus, M is a special-purpose computer built to recognize L(M). In contrast, the universal TM is a general-purpose computer in the sense that, given E(M) and E(x), the machine gives the result of the computation of M on input x. Here E(M) and E(x) correspond, respectively, to a program and an input represented in an agreed form, i.e., a programming language.
Science & Research
The whole science is nothing more than a refinement of everyday thinking.  Albert Einstein –
Research is of mind, not of hands, a concentration of thought and not a process of experimentation. Research is the
effort of the mind to comprehend relationships which no one had previously known.  DeForest Arnold 
Break Time
Now we turn our attention from the topic of enumerating the sets of automata and languages to enumerating a language itself. We first prove the following theorem, which claims that enumerability is another characterization of type 0 languages.

Theorem 15.5 (1) If a language L is enumerable, then L is a type 0 (R.E.) language. (2) If a language L is type 0 (R.E.), then L is enumerable.

Proof (1). Let Me be a TM (i.e., an enumerator) that enumerates L. With Me we can build a two-tape TM M recognizing L as follows. Given an input x, using Me in its finite state control, M writes the next string wi in the enumeration and checks whether x = wi. If it does, M enters an accepting state and halts. Otherwise, it writes the next string in the enumeration and repeats the process. M accepts x if and only if x ∈ L.
15.3 Enumerable Languages
[Figure: TM M with Me in its finite state control, comparing the input x against each enumerated string wi]
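The proof idea of part (1), recognition by running the enumerator, can be sketched as follows; the sample enumerator is a finite stand-in of our own, since a real Me runs forever:

```python
def recognize(enumerate_L, x):
    """Semi-decide membership: accept as soon as x appears in the
    enumeration. If x is not in L, this loops for as long as the
    enumerator keeps producing strings."""
    for w in enumerate_L():
        if w == x:
            return True
    return False   # reachable only for a finite (toy) enumerator

def sample_enumerator():
    # Toy enumerator for { a^n b^n | 1 <= n <= 50 }.
    for n in range(1, 51):
        yield 'a' * n + 'b' * n
```

For a string outside L, the loop never terminates (with a genuine, infinite enumerator), which is exactly the semi-decision behavior of a type 0 recognizer.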
Proof (2). This part of the proof is not as simple as the proof for part (1). Let M be a TM recognizing L. We build an enumerator Me for L with M. Let Σ be the alphabet of language L.

We may think of the following straightforward idea. We let Me, while writing the strings wi ∈ Σ* in the lexicographic order on one of its tape tracks, check whether wi is accepted by M. If it is accepted, we let it be the next string in the enumeration. Otherwise, we erase it and repeat the process with the next string in Σ*.

There is a serious problem with this idea. What will happen to Me if M enters an infinite loop without accepting wi? We cannot make Me leave wi out of the enumeration, because it is impossible for Me to decide whether M will eventually halt or not (i.e., it is an unsolvable decision problem). Thus, the idea fails.

[Figure: Me enumerating the strings w1, w2, w3, w4, . . . in Σ* and passing each to M; only the accepted strings (e.g., w2, w4, w7, . . .) go into the enumeration]
So we need a different approach. As before, while enumerating the strings in Σ* one by one in the lexicographic order, we let Me progressively simulate the computation of M on inputs w1, w2, . . . , as the figure below illustrates.

We let Me, writing w1, simulate the first move of M on w1. Writing w2, we let it simulate the first move of M on w2, followed by the second move on w1. Writing w3, we let it simulate the first move of M on w3, followed by the second move on w2, followed by the third move on w1, and so on. Any string accepted during this procedure is moved onto the lower track (not shown), and we let it be the next string in the enumeration.
[Figure: the dovetailed simulation. Rows list the strings w1, w2, w3, w4, . . . ; columns count the simulation steps 1, 2, 3, 4, . . . of M on each string]
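This dovetailing schedule can be sketched as follows. Here step(w, n) stands for "simulate M on w for n steps in total and report the outcome"; the sample machine is a stand-in that accepts even-length strings and loops on odd-length ones:

```python
def dovetail(strings, step, rounds):
    """Dovetailed simulation from the proof of part (2): in each round,
    start one new string and advance every still-running string by one
    step. step(w, n) returns 'accept' or 'running' after n total steps."""
    active, accepted = [], []
    it = iter(strings)
    for _ in range(rounds):
        w = next(it, None)
        if w is not None:
            active.append([w, 0])         # [string, steps simulated so far]
        still_running = []
        for pair in active:
            pair[1] += 1
            if step(pair[0], pair[1]) == 'accept':
                accepted.append(pair[0])  # next string of the enumeration
            else:
                still_running.append(pair)
        active = still_running
    return accepted

# Stand-in machine: accepts an even-length string after |w| steps,
# and runs forever on odd-length strings.
def step(w, n):
    return 'accept' if len(w) % 2 == 0 and n >= len(w) else 'running'
```

Because every string gets unboundedly many steps, no accepting computation is starved by another string's infinite loop.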
According to Church's hypothesis, no computational model exists that is more powerful than Turing machines. In other words, any computational model computes at best what Turing machines can compute. Then we may ask the following: Is there any limit on what Turing machines can compute? We learned a proof that for a given alphabet Σ, there is a language in Σ* that cannot be recognized by any TM. That proof is nonconstructive. The proof of the following theorem (constructively) shows a language that cannot be recognized by any TM.

Theorem 15.6 Let Σ be an arbitrary finite alphabet. There is a language in Σ* that cannot be recognized by any TM.

Proof. We know that the set Σ* and the set of all TM's are enumerable (Theorem 15.2). With these facts, we will prove the theorem using the diagonalization technique. We assume that every TM M and every string x ∈ Σ* are, respectively, expressed by the binary encodings E(M) and E(x) that we have introduced in the previous section.
Proof (cont'd). We construct a matrix M such that for every i ≥ 0, the ith row corresponds to the ith string wi, and the ith column corresponds to the ith TM Mi in the enumeration. This is possible because the set of TM's and the set Σ* are both enumerable. For the matrix entries, we let M[i, j] = 1 if Mj accepts string wi. Otherwise, we let M[i, j] = 0.

Now we claim that no TM recognizes the language L0 below, which contains string wi if the ith TM Mi in the enumeration does not accept the ith string wi.
L0 = { wi | M[i, i] = 0, i ≥ 0 }

[Figure: the matrix M, with columns M0, M1, M2, M3, . . . , rows w0, w1, w2, w3, w4, . . . , and 0/1 entries; L0 is read off the diagonal]
L0 = { wi | M[i, i] = 0, i ≥ 0 } = { w0, w2, w4, . . . }

[Figure: the same matrix M, with the diagonal entries highlighted]
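The diagonal argument can be replayed on any finite fragment of the matrix. The entries below are illustrative, chosen so that the diagonal gives L0 = {w0, w2, w4} as in the text:

```python
# An invented finite fragment of the matrix M (diagonal: 0, 1, 0, 1, 0).
M = [[0, 1, 1, 1, 0],
     [1, 1, 0, 0, 0],
     [1, 0, 0, 1, 1],
     [0, 0, 1, 1, 1],
     [1, 1, 1, 0, 0]]
n = len(M)

# Characteristic vector of L0: w_i is in L0 iff M[i][i] == 0.
L0 = [1 if M[i][i] == 0 else 0 for i in range(n)]

# Column j is the language of M_j, restricted to w_0 .. w_{n-1}.
# No column can equal L0: they are forced to differ at entry j.
columns = [[M[i][j] for i in range(n)] for j in range(n)]
```

The point of the construction is visible in the last comment: L0 disagrees with every column exactly on the diagonal, so no Mj can recognize it.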
Suppose that there is a TM, say Mi in the enumeration, that recognizes L0. (We are using the proof-by-contradiction technique.) Consider what will happen if we give the ith string wi as an input to Mi. (Notice that Mi and wi have the same subscript.) Either wi ∈ L0 or wi ∉ L0. We examine these two cases.
(1) When wi ∈ L0: By the definition of the matrix, this case implies that M[i, i] = 0, i.e., Mi does not accept wi. It follows that wi ∉ L(Mi) = L0. We have a contradiction.

(2) When wi ∉ L0: By definition, this case implies that M[i, i] = 1, that is, Mi accepts wi. It follows that wi ∈ L(Mi) = L0. Again, we have a contradiction.

So we should give up the assumption that L0 = L(Mi) for some i ≥ 0. No TM exists that recognizes L0. Language L0 is not type 0 (R.E.).
Now consider L1 = { wi | M[i, i] = 1 }, which is the complement of L0 (see below). We will show that, in contrast to L0, language L1 is R.E., i.e., type 0.

Theorem 15.7 L1 is a type 0 (R.E.) language.
L1 = { wi | M[i, i] = 1, i ≥ 0 } = { w1, w3, . . . }

[Figure: the matrix M again; L1 collects the strings whose diagonal entry is 1]
Proof. We construct a TM MD that recognizes L1 as follows. Suppose an input x is given to MD. MD is equipped with two enumerators, one enumerating the set of all strings in Σ* and the other enumerating the set of all TM's (see the figure below). MD alternately enumerates M0, w0, M1, w1, . . . on its two extra tapes until the TM Mi and the string wi (= x) appear. Then MD simulates the computation of Mi on input wi. If Mi accepts x, so does MD, and it halts. (Notice that Mi may run forever without accepting. Then so does MD.) It follows that MD is a TM which recognizes L1. Language L1 is type 0 (R.E.).
[Figure: MD with a TM enumerator and a Σ* enumerator; when x = wi appears, MD simulates Mi on input x]
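A sketch of MD's loop, with the two enumerations modeled as Python lists and the simulation of Mi on x modeled as a total stand-in predicate:

```python
def recognize_L1(machines, strings, x, accepts):
    """Sketch of M_D from the proof of Theorem 15.7: pair up the two
    enumerations and, when w_i == x, simulate M_i on x. 'accepts' is a
    stand-in for the simulation, which on a real TM may never halt."""
    for m, w in zip(machines, strings):
        if w == x:
            return accepts(m, x)
    return False   # unreachable when 'strings' enumerates all of Sigma*

# Toy enumerations of our own: machine i accepts everything iff i is odd.
machines = [0, 1, 2, 3]
strings = ['a', 'b', 'aa', 'ab']
verdict = recognize_L1(machines, strings, 'aa', lambda m, w: m % 2 == 1)
```

The lockstep pairing guarantees that the machine simulated on x is exactly the one sharing x's index in the enumeration.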
Since L0 is the complement of L1, we have the following corollary.

Corollary. There is a type 0 language whose complement is not type 0.
Good life
When you were born, you were crying and everyone around you was smiling.
Live your life so that when you die, you're smiling and everyone around you is crying, with tears of joy for having
known you.  Anonymous 
Break Time
15.4 Recursive Language

The Turing machine model that we have defined halts if the input is accepted by the machine. The definition does not explicitly say what the machine should do if the input is not accepted. The machine may either enter a dead state (if an undefined input symbol is read in a state) or never halt, going around an indefinite loop of non-accepting states. (Recall that for convenience, we do not show transitions entering the dead state.)

How about defining a TM model, as the following figure illustrates, which after some finite number of moves always halts in either an accepting state or a non-accepting state? Such TM's and their languages are called recursive.

[Figure: a recursive TM, whose every computation ends with either "accept and halt" or "reject and halt"]
Definition 15.3 (Recursive TM, Recursive Language). A recursive TM is a TM that eventually (i.e., after a finite number of moves) halts in either an accepting or a non-accepting state. The language accepted by such a TM is called a recursive language.

This model could be more user-friendly than the original one. We may let it output a kind message before halting, like "Yes, I accept the input!" or "No, I regret that I have to reject it." For the original TM model, it is impossible to give any specific message when it does not accept the input, because there is no way to know whether it will enter an infinite loop or not.

Interestingly, we will show that the recursive (i.e., halting) TM model is weaker than the original model in the sense that there is a type 0 language that cannot be recognized by any recursive TM. In contrast to type 0 languages, the complement of a recursive language is also recursive, as the following theorem claims.
Theorem 15.8 The complement of a recursive language is also recursive.

Proof. Let L be a recursive language accepted by a recursive TM M. We simply change the accepting states of M to non-accepting states, and vice versa. This modified TM accepts the complement of L.

Let TypeRec be the class of recursive languages. We know that every recursive language is type 0. According to the Corollary of Theorem 15.7, there is a type 0 language whose complement is not type 0. Thus, Theorems 15.7 and 15.8 imply the following theorem.

Theorem 15.9 TypeRec is properly contained in Type0L.

[Figure: TypeRec properly contained in Type0L]
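Theorem 15.8 in miniature: if we model a recursive TM as a total Python predicate (it always halts with a verdict), complementation is just flipping that verdict:

```python
def complement(decider):
    """If 'decider' always halts with a True/False verdict (a recursive
    TM), swapping the verdicts decides the complement language."""
    return lambda w: not decider(w)

# A total (always-halting) stand-in decider and its complement.
even_length = lambda w: len(w) % 2 == 0
odd_length = complement(even_length)
```

Note why the same trick fails for a mere recognizer: on a non-accepted input it may never halt, so there is no "reject" verdict to flip.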
Now that the top level (i.e., type 0) of the Chomsky hierarchy has been refined, we move down to the next lower level (i.e., type 1). We will first prove Theorem 15.10 below, which claims that every context-sensitive language is recursive. Then, using this theorem and Theorem 15.9, we will show that Type1L ⊂ Type0L.

Theorem 15.10 Every context-sensitive language is recursive.

Proof. Let G = (VT, VN, P, S) be an arbitrary context-sensitive grammar. We construct a recursive TM M that recognizes L(G) as follows. M has a two-track tape, with G stored in its finite state control, and the input is given on the top track, as the following figure illustrates. The idea is to let M, using G, nondeterministically generate a terminal string on its second track and see whether it is equal to x. If it is, M accepts x and halts. Otherwise, M rejects it and halts.

[Figure: TM M, holding G in its finite state control, with the input x on the top track, testing whether S ⇒ . . . ⇒ x]
[Figure: the recursive TM M, with the input x on the top track and the derivation S ⇒ w1 ⇒ w2 ⇒ . . . written on the lower track]

On its lower track M writes a derivation sequence, i.e., a sequence of sentential forms w0, w1, w2, . . . , derivable starting with the start symbol S of G, as the figure above illustrates. In the list, w0 = S, and w1 is a sentential form that can be generated by applying a rule of S; string w2 is generated by applying a rule whose left side is a substring in w1, and so on. If wi contains more than one substring corresponding to the left side of some rules of G, then M nondeterministically chooses one of the substrings to apply its rule. If the chosen substring has more than one rule (for example, bA → Ab | ba), M nondeterministically chooses one of those rules. M repeats this procedure until it sees that one of the following conditions is satisfied.

(1) wi = x
(2) wi = wj, for some j < i
(3) |wi| > |x|, or (wi ∈ (VT)+ and wi ≠ x)
If condition (1) is satisfied, M halts in an accepting state. Case (2) implies that the derivation got into a loop, so M halts in a non-accepting state. Case (3) implies that, by the noncontracting property of context-sensitive grammar rules, it is impossible to derive the string x; thus, in this case too, M halts in a non-accepting state. M is a recursive (i.e., halting) TM that recognizes L(G). It follows that every context-sensitive language is recursive.

Now we may ask the following question: Is the converse of Theorem 15.10 true? (That is, is every recursive language context-sensitive?) In terms of automata, this is equivalent to the question of whether every language recognized by a recursive TM can also be recognized by an LBA. The next theorem shows that there is a recursive language that is not context-sensitive.
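The decision procedure behind Theorem 15.10 can be sketched as a finite search: because CSG rules are noncontracting, only sentential forms of length at most |x| matter, so the search space is finite. The grammar below (for {a^n b^n}) is a toy stand-in:

```python
def csg_derives(rules, start, x):
    """Decide whether a noncontracting grammar derives x by exhaustive
    search over sentential forms of length <= len(x). rules is a list
    of (lhs, rhs) pairs with len(lhs) <= len(rhs)."""
    frontier, seen = {start}, set()
    while frontier:
        w = frontier.pop()
        if w == x:
            return True
        seen.add(w)
        for lhs, rhs in rules:
            i = w.find(lhs)
            while i != -1:              # try every occurrence of lhs in w
                nxt = w[:i] + rhs + w[i + len(lhs):]
                if len(nxt) <= len(x) and nxt not in seen:
                    frontier.add(nxt)   # prune forms longer than x
                i = w.find(lhs, i + 1)
    return False

# Toy noncontracting grammar for { a^n b^n | n >= 1 }.
rules = [('S', 'aSb'), ('S', 'ab')]
```

Termination is guaranteed for exactly the reason given in the proof: there are only finitely many strings of length at most |x| over a finite alphabet, and each is visited at most once.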
Theorem 15.11 There is a recursive language that is not context-sensitive.

Proof. Again, we will use the diagonalization technique. We know that the set of all CSG's with terminal alphabet VT = {a, b} is enumerable (see Theorem 15.4). As illustrated below, construct a matrix M such that the ith column corresponds to the ith grammar Gi in the enumeration, and the ith row corresponds to the ith string wi ∈ {a, b}* in the lexicographic order. For the entries of the matrix, we let M[i, j] = 1 if wi ∈ L(Gj). Otherwise, M[i, j] = 0. We have L(Gi) = { wj | M[j, i] = 1, j ≥ 0 }.
[Figure: the matrix M with columns G0, G1, G2, G3, . . . and rows w0, w1, w2, w3, w4, . . .]
Now, going along the diagonal of the matrix M, we pick all wi ∉ L(Gi) and construct the language L0.

L0 = { wi | M[i, i] = 0, i ≥ 0 } = { w0, w2, w4, . . . }

[Figure: the matrix M with the diagonal entries marked]
We first prove that L0 is not context-sensitive. This proof is very similar to the proof of Theorem 15.6, where we showed a language that cannot be recognized by any TM. Suppose that L0 is the language of a CSG Gi, i.e., L0 = L(Gi). Consider the string wi. (Notice that wi and Gi have the same subscript.) Either wi ∈ L0 or wi ∉ L0. We examine each of these cases.
(1) Case of wi ∈ L0: By the definition of L0, we have M[i, i] = 0, which implies that wi ∉ L(Gi) = L0. We are in a contradiction.

(2) Case of wi ∉ L0: Again by the definition of L0, we have M[i, i] = 1, which implies wi ∈ L(Gi) = L0. Again, we are in a contradiction.

So we must give up the supposition that L0 = L(Gi). Language L0 is not context-sensitive.

Now we will show that L0 is recursive. The proof is similar to the proof of Theorem 15.7. We construct a recursive TM M0 which recognizes L0 as follows. Suppose that a string x is given on the input tape of M0.
[Figure: M0 with a CSG enumerator and a Σ* enumerator, the input x, and the pair Gi, wi]
The machine M0 has two enumerators, one for the set of context-sensitive grammars and another for the set Σ*. Using these two enumerators, M0 keeps alternately generating grammars and strings on two tapes until a string wi appears that is equal to x. Then M0 tests whether Gi can derive x, using the idea presented in the proof of Theorem 15.10. If this test shows that Gi can derive x, M0 rejects the input x and halts. Otherwise, M0 accepts x and halts. It follows that M0 is a recursive TM that recognizes L0. Language L0 is recursive.
In this chapter we have introduced two additional classes of languages and identified their levels in the Chomsky hierarchy: the class of languages that cannot be recognized by any TM, and the class of languages recognized by recursive TM's. The next page shows the Chomsky hierarchy, including the class of recursive languages. (The class of non-type 0 languages is not shown.) In the figure, the colored arrows show the characterization relations (→) and the proper containment relations (⊃) that we have proved so far.
Funny ads
 Dinner Special: Turkey $2.35; Chicken or Beef $2.25; Children $2.00.
 For Sale: Antique desk suitable for lady with thick legs and large drawers.
 Now is your chance to have your ears pierced and get an extra pair to take home too!
 No matter what your topcoat is made of, this miracle spray will make it really repellent.
 Dog for Sale: Eats anything and is fond of children.
 Tired of cleaning yourself? Let me do it!  Jason 
Break Time
Chomsky Hierarchy

Languages                                 Automata
Recursively Enumerable Sets (type 0)      Turing Machines
Recursive Languages                       Recursive Turing Machines
Context-sensitive Languages (type 1)      Linear-bounded Automata
Context-free Languages (type 2)           Pushdown Automata
Regular Languages (type 3)                Finite State Automata / Regular Expressions

* Colored arrows (in the original figure) indicate proven relations:
  ⊃ : Containment
  → : Characterization
15.5 Proofs of Characterizations

Now, finally, we will complete the proof of the Chomsky hierarchy by proving the top three characterization relations between the types of grammars and automata. Since there is no known class of grammars that generates exactly the recursive languages, no characterization proof is given for this level.

Type 0 grammars and the Turing machines

Theorem 15.12 (1) Every language generated by a type 0 grammar can be recognized by a TM. (2) Every language recognized by a TM can be generated by a type 0 grammar.

Proof (1). Let G be an arbitrary type 0 grammar, and let {α1, α2, . . . , αn} be the set of strings used for the left sides of the rules of G. Let {βi1, βi2, . . . , βik} be the set of strings appearing on the right side of the rules with left side αi, i.e.,

αi → βi1 | βi2 | . . . | βik

(Notice that n and k are some finite integers.) We construct a TM M that recognizes L(G) as follows. M has G in its finite state control, stored as a lookup table.
Suppose that a string x is given on M's input tape, as illustrated in the figure below. The machine M, using an extra work tape, tests whether x can be derived by the grammar G. M starts the work by writing the start symbol S on the work tape. Suppose w is a sentential form that has been derived starting with S. M, reading w from left to right, nondeterministically chooses a substring αi and replaces it with one of its right sides βij, also chosen nondeterministically.

M repeats this process until it sees that the current sentential form w is equal to the input string x, and if it is, M halts in an accepting state. Notice that if x ∉ L(G), M may repeat the process forever, or see a sentential form w on which no rule is applicable and halt. M accepts x if and only if it is generated by G. It follows that M recognizes L(G), and hence every type 0 language is recognized by a TM.

[Figure: M with the input x and a work tape, rewriting the sentential form by replacing an occurrence of αi with βij]
Type 0 Grammar ↔ TM

Proof (2). For the proof, let M = (Q, Σ, Γ, q0, δ, F) be an arbitrary TM. We construct a type 0 grammar G that generates L(M). Let Q = {q0, q1, . . . , qn}, and let qf ∈ F. For the construction we shall use two distinct symbols #, $ ∉ Γ.

Grammar G generates L(M) in the following three phases (see the derivation scheme below): (1) for an arbitrary string x ∈ Σ*, grammar G generates the string x#Q0x$. (2) Starting with the start state q0, G simulates M's computation on input x, using the copy of x generated to the right of Q0 in the first phase. (3) If M enters the accepting state qf in the second phase, G erases all symbols except the left copy of x generated in the first phase.

We will show how G carries out the three phases. According to the convention of using upper-case letters for the nonterminal symbols of a grammar, we shall use Qi instead of the state qi. In the scheme below, the concatenation of the strings u and v corresponds to the final tape content of M when it accepts x.

S ⇒ . . . ⇒ x#Q0x$ ⇒ . . . ⇒ x#uQfv$ ⇒ . . . ⇒ x
       (1)              (2)               (3)
Let Σ = {a1, a2, . . . , ak} be the input alphabet of M. The following set of rules carries out phase (1) to generate the string x#Q0x$ for every x ∈ Σ*. Notice that for x = ε, the rules generate #Q0$.

S → A$
A → a1Aa1 | a2Aa2 | . . . | akAak | #Q0
For phase (2), we set up a one-to-one correspondence between G's sentential forms and M's configurations, as follows. Symbol a ∈ Γ is positioned to the right of Qi in a sentential form of G if and only if M reads a in state qi (case (a)). Symbol $ (respectively, symbol #) is to the right of Qi in a sentential form of G if and only if M, in state qi, reads the blank symbol next to the right end (respectively, the left end) of the current tape contents (cases (b) - (d)).

(a) . . . bQia . . . : M, in state qi, reads a, with b to its left
(b) . . . Qi#a . . . : M, in state qi, reads the blank next to the left end of the tape contents
(c) . . . #Qia . . . : M, in state qi, reads a at the left end of the tape contents
(d) . . . aQi$ . . . : M, in state qi, reads the blank next to the right end of the tape contents
Depending on M's transitions, we design G's rules as follows. Part (a) below is for the case of M reading a non-blank symbol a, and part (b) is for the case of reading the blank symbol next to either end of the current tape contents. Grammar G uses the two special symbols $ and # as boundary markers to expand its sentential form to the right and to the left, respectively, to accommodate M's current tape contents.

(a) M's transition: δ(qi, a) = (qj, b, D).  G's rule (where c ∈ Γ ∪ {#}):
    if D = R:  cQia → cbQj
    if D = L:  cQia → Qjcb
    if D = N:  cQia → cQjb

(b) M's transition: δ(qi, B) = (qj, b, D).  G's rules (where c ∈ Γ):
    if D = R:  Qi# → #bQj        Qi$ → bQj$
    if D = L:  Qi# → Qj#b        #Qi$ → Qj#b$        cQi$ → Qjcb$
    if D = N:  Qi# → #Qjb        Qi$ → Qjb$
It is simple to design a set of rules for phase (3). By the two rules in (a) below, we let the nonterminal symbol Qf, which denotes the accepting state of M, move to the right till it meets $. Then, by the rules in part (b), we let QE keep moving to the left, erasing every symbol it meets, up to and including #.

For every symbol a ∈ Γ, include the following rules in G.

(a) Qf# → #Qf        Qfa → aQf
(b) Qf$ → QE         aQE → QE         #QE → ε

S ⇒ . . . ⇒ x#Q0x$ ⇒ . . . ⇒ x#uQfv$ ⇒ . . . ⇒ x
       (1)              (2)               (3)

Grammar G is type 0, and it derives a string x if and only if x is accepted by M. We have proved that L(G) = L(M). It follows that every language recognized by a TM can be generated by a type 0 grammar.
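The phase-(3) erasing rules can be exercised mechanically. In the sketch below, 'F' stands for the nonterminal Qf and 'E' for QE (single-character stand-ins of our own):

```python
def phase3(form, tape_syms):
    """Apply the phase-(3) rules until none applies:
    (a) F# -> #F, Fa -> aF   (b) F$ -> E, aE -> E, #E -> (empty)."""
    rules = [('F$', 'E')]
    rules += [('F' + a, a + 'F') for a in tape_syms + '#']
    rules += [(a + 'E', 'E') for a in tape_syms]
    rules += [('#E', '')]
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            if lhs in form:
                form = form.replace(lhs, rhs, 1)  # rewrite leftmost match
                changed = True
                break
    return form

# x#uQf v$ with x = 'ab', u = 'a', v = 'bb' collapses to x.
result = phase3('ab#aFbb$', 'ab')
```

Tracing the rewriting shows F sweeping right to $, turning into E, and E then consuming everything back to and including #, leaving exactly the left copy x.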
CSG and LBA

Now we prove the characterization relation between CSG's and LBA's. The approach is similar to the one for the characterization between TM's and type 0 grammars that we have just completed. For the proof we should take into consideration two restrictions: the tape space of an LBA is bounded by the input length, and the grammar rules are noncontracting.

Theorem 15.13 (1) Every language generated by a CSG can be recognized by an LBA. (2) For every language L recognizable by an LBA, there is a CSG G such that L(G) = L - {ε}.

Proof (1). The LBA uses a two-track tape, instead of two tapes, to conveniently limit the tape space. If the LBA sees that the next sentential form derived can be longer than x, it stops the derivation and rejects the input string x, because it is then impossible to derive x by applying noncontracting CSG rules.

[Figure: the LBA's two-track tape, with the input [x] on the top track and the sentential form, rewritten by replacing αi with βij, on the bottom track]
CSG ↔ LBA

Proof (2). Suppose L is the language of an LBA M = (Q, Σ, Γ, δ, q0, [, ], F), where Q = {q0, q1, . . . , qm} and Σ = {a1, a2, . . . , ak}, for some constants m and k. For the proof we will construct a CSG G such that L(G) = L(M) - {ε}.

For the construction, we are inclined to use the same three-phase approach (repeated below) that we used to construct a type 0 grammar which can derive every input string x accepted by a TM.

S ⇒ . . . ⇒ x#Q0x$ ⇒ . . . ⇒ x#uQfv$ ⇒ . . . ⇒ x
       (1)              (2)               (3)

However, we have a problem following this approach, because CSG rules must be noncontracting. Reviewing the type 0 grammar rules for the three-phase procedure, we find that, among the rules for phase (3) (repeated below), all the rules in line (b) are contracting. These rules are used to erase the tape, leaving only the accepted input string x.

(a) Qf# → #Qf        Qfa → aQf
(b) Qf$ → QE         aQE → QE         #QE → ε

For the special case when the input is ε (i.e., the tape has [ ]): if the machine enters an accepting state, the grammar has the rule S → ε. (The details are omitted.)
To overcome this problem, instead of generating two copies of x in phase (1), we merge them into one string of composite symbols, as shown below. Let x = x1x2 . . . xn be the input string. CSG G derives the following sentential form, where each part delimited by either the angle brackets '<' and '>' or the parentheses '(' and ')' is a composite symbol, and '[' and ']' are the input boundary markers.

<x1, q0, ([, x1)> <x2, x2> . . . <xn, (xn, ])>

Now we are ready to show how G can derive the string x if and only if M accepts it. After deriving a sentential form of the above format, CSG G begins to "simulate" the computation of M on the string [x1x2 . . . xn], going along the sentential form of composite symbols. The leftmost elements in the composite symbols delimited by the angle brackets are reserved for the final derivation, when M accepts the input. Seeing that M is in an accepting state, G erases all symbols in the current sentential form except for the reserved input symbols, consequently deriving the input string x = x1x2 . . . xn.
<x1, q0, ([, x1)> <x2, x2> . . . <xn, (xn, ])>

Below, (1) - (4) are the rules for deriving all possible sentential forms of the format shown above. (Recall that in the above sentential form, each xi is a member of the input alphabet Σ = {a1, a2, . . . , ak}.) The rules in line (2) generate the initial sentential forms of all possible input strings of length one. We need the rule in line (5) for the case when M accepts ε. Notice that we can safely add this rule, because no rule has S on its right side. (Recall that CSG's are allowed to use this contracting rule under the condition that no rule has S on its right side.)

(1) S → A<a1, (a1, ])> | A<a2, (a2, ])> | . . . | A<ak, (ak, ])>
(2)   | <a1, q0, ([, ], a1)> | <a2, q0, ([, ], a2)> | . . . | <ak, q0, ([, ], ak)>
(3) A → A<a1, a1> | A<a2, a2> | . . . | A<ak, ak>
(4)   | <a1, q0, ([, a1)> | <a2, q0, ([, a2)> | . . . | <ak, q0, ([, ak)>
(5) If the start state q0 is an accepting state, add S → ε.
482
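As a sanity check on the construction, the rule schemas (1)–(5) can be generated mechanically from the input alphabet. The sketch below is a hypothetical rendering (the function name and string encoding of composite symbols are not from the text); it prints the phase-(1) rules for Σ = {a, b}.

```python
# Sketch: generate the phase-(1) rules (1)-(5) of the CSG for input
# alphabet sigma = {a1, ..., ak}.  Composite symbols are rendered as
# plain strings; q0 is the start state of the LBA being simulated.

def phase1_rules(sigma, q0="q0", q0_accepting=False):
    rules = []
    # (1)/(2): the rightmost composite symbol, and the whole sentential
    # form for inputs of length one.
    for a in sigma:
        rules.append(f"S -> A<{a},({a},])>")
        rules.append(f"S -> <{a},{q0},([,],{a})>")
    # (3)/(4): extend the middle, or produce the leftmost composite
    # symbol (which carries the start state q0).
    for a in sigma:
        rules.append(f"A -> A<{a},{a}>")
        rules.append(f"A -> <{a},{q0},([,{a})>")
    # (5): only if the start state is accepting (M accepts epsilon).
    if q0_accepting:
        rules.append("S -> epsilon")
    return rules

rules = phase1_rules(["a", "b"])
for r in rules:
    print(r)
```

With a two-symbol alphabet this yields eight rules, two per symbol for each of the schemas (1)/(2) and (3)/(4).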
For the simulation, G assumes that M is currently reading (in a state qi) the rightmost element in the composite symbol (delimited by parentheses) appearing next to the right of the symbol qi. The following figures show the correspondence between the various contexts involving the state symbol qi in a sentential form and the positions of M's tape head. (Recall that M doesn't keep the original input string x and may change it while computing.)
[Figure: tape-head positions corresponding to the sentential-form contexts <xi, a><qi, (xi+1, b)>, <x1, qi, ([, a)>, <x1, qi, (a, [)>, <xn, qi, (], a)>, <x1, qi, ([, ], a)>, <xn, qi, (a, ])>, <x1, qi, ([, a, ])>, and <x1, qi, (], a, [)>.]
Below is the set of rules of G simulating each move of M. Readers are strongly recommended to refer to the figures above showing the correspondence between the context of qi and M's local configuration.

M's transition → G's rules (a, b ∈ Σ; c, d, e ∈ Γ)

δ(qi, d) = (qj, e, D), if D = R:
  <qi, (a, d)><b, c> → <(a, e)><qj, (b, c)>
  <a, qi, (], d)> → <a, qj, (e, ])>
  <a, qi, ([, ], d)> → <a, qj, ([, e, ])>

δ(qi, e) = (qj, d, D), if D = N:
  <qi, (b, e)> → <qj, (b, d)>
  <a, qi, ([, ], e)> → <a, qj, ([, ], d)>

δ(qi, c) = (qj, d, D), if D = L:
  <a, e><qi, (b, c)> → <qj, (a, e)><b, d>
  <a, e><b, qi, (], c)> → <qj, (a, e)><b, (], d)>
  <a, qi, ([, ], c)> → <a, qj, (], d, [)>
Finally, to complete the construction for phase (2), we add the rules shown below for the special cases when M reads the boundary markers.

M's transition → G's rules (a, b ∈ Σ; c ∈ Γ)

δ(qi, ]) = (qj, ], L):
  <a, qi, (c, ])> → <a, qj, (], c)>
  <a, qi, ([, c, ])> → <a, qj, ([, ], c)>

δ(qi, [) = (qj, [, R):
  <b, qi, (c, [)> → <b, qj, ([, c)>
  <a, qi, (], c, [)> → <a, qj, ([, ], c)>
Now we present the set of rules needed for the final phase. With qf indicating that M has accepted the input, the grammar erases all the symbols in the sentential form except for the leftmost elements reserved to generate the input string. Notice that all the rules satisfy the noncontracting condition. (Recall from Section 1.1 that composite symbols were introduced for ease of comprehension; they are no different from regular symbols. We are just extending the alphabet size.)

In the following rules, a, b ∈ Σ and c, d ∈ Γ.

(1) <qf, (a, c)><b, d> → <a, c><qf, (b, d)>
(2) <a, qf, (c, ])> → a
(3) <b, c>a → ba
(4) <b, ([, c)>a → ba
(5) <a, qf, ([, c, ])> → a    <a, qf, ([, ], c)> → a    <a, qf, (], c, [)> → a

Grammar G derives a string x ∈ Σ* if and only if M accepts it, implying that L(G) = L(M).

The rule in part (1) keeps shifting qf to the right until it reaches the rightmost composite symbol. Rule (2) extracts the rightmost input symbol (a in the rule). Rules (3) and (4) let the grammar, moving left along the sentential form, collect the reserved input symbols and finally derive the input string accepted by M. The rules in part (5) handle the special cases when the input string has length 1.
CFG's and PDA's

Now we finally complete the proofs of the characterizations in the Chomsky hierarchy by proving the characterization relation between CFG's and PDA's.

Theorem 15.14 (1) Every language generated by a CFG can be recognized by a PDA. (2) Every language recognized by a PDA can be generated by a CFG.

Proof (1). Let G be an arbitrary CFG. We construct an NPDA M which recognizes L(G) as follows. The basic approach is the same as the one we took for the proof of the characterization at the upper level. Using G, the machine nondeterministically derives a terminal string x in the stack to match the input string.

M starts by pushing the start symbol S onto the stack and does the following until it sees that the stack is empty. Let A be a nonterminal symbol which has some constant k rules:

A → α1 | α2 | … | αk

(a) If A is at the stack top, M nondeterministically chooses a rule A → αi, 1 ≤ i ≤ k, pops A, and pushes αi.
(b) If a terminal symbol, say a, appears at the stack top, M reads the input and pops the stack top if the input symbol and the stack top match.
(c) If the stack is empty (i.e., the stack top is Z0), M enters an accepting state.

The NPDA M enters an accepting state if G derives the string given on the input tape. It follows that L(M) = L(G).
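Steps (a)–(c) can be sketched as a backtracking recognizer, with nondeterministic rule choice emulated by trying every alternative. This is an illustrative sketch, not from the text; the toy grammar S → aSb | ab and the name `accepts` are hypothetical.

```python
# Sketch of steps (a)-(c): the nondeterministic NPDA is emulated by
# backtracking over the rules for the nonterminal at the stack top.

def accepts(rules, start, w, fuel=1000):
    """True iff the grammar derives w, simulating the NPDA of Proof (1)."""
    def run(stack, i, fuel):
        if fuel <= 0:
            return False
        if not stack:                      # step (c): empty stack
            return i == len(w)
        top, rest = stack[0], stack[1:]
        if top in rules:                   # step (a): expand nonterminal
            return any(run(list(alt) + rest, i, fuel - 1)
                       for alt in rules[top])
        # step (b): match a terminal at the stack top against the input
        return i < len(w) and w[i] == top and run(rest, i + 1, fuel - 1)
    return run([start], 0, fuel)

toy = {"S": [["a", "S", "b"], ["a", "b"]]}   # hypothetical: S -> aSb | ab
print(accepts(toy, "S", "aabb"))  # True
print(accepts(toy, "S", "aab"))   # False
```

The `fuel` bound simply cuts off unboundedly deep guessing, which a real NPDA avoids by construction.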
CFG → PDA

Example: Suppose that the CFG G shown in figure (a) below is given. We construct an NPDA as shown in figure (b), which recognizes L(G). Interestingly, we can show that 3 states are enough to construct an NPDA recognizing L(G) for any CFG G.

(a) G: S → aBC    B → bBc | bc    C → bCc | c
    L(G) = {ab^n c^n b^m c^m | n ≥ 1, m ≥ 0}

(b) [Figure: NPDA with start state 1 and state 2; transition (ε, Z0/SZ0) from 1 to 2; self-loop transitions at 2: (ε, S/aBC), (a, a/ε), (c, c/ε), (ε, B/bBc), (ε, B/bc), (b, b/ε), (ε, C/bCc), (ε, C/c); and a transition (ε, Z0/Z0) into the accepting state.]
Proof (2). This proof is the last and most challenging among the four characterizations between the models for grammars and automata. However, once we understand the underlying concept of the approach, it requires only perseverance. Here again we will show how to construct a CFG G for a given PDA M such that L(G) = L(M). For the proof we assume that M has the following properties.

(1) M accepts the input by empty stack (i.e., Z0 is popped from the stack).
(2) In each move, M either pops the stack or pushes exactly one symbol onto it.

In Section 6.3, we showed that every context-free language can be recognized by a PDA that accepts with an empty stack (condition (1)). With an empty stack, no move is possible, so we assume that there is one distinguished state that M enters after emptying the stack. (If there is more than one such state, we can merge them all into one.)

In the rumination section of Chapter 4, we showed that every context-free language can be recognized by a DPDA with stack operations restricted to either popping or pushing exactly one symbol in a move (condition (2)). (The same claim applies to NPDA's.)

Thus, without loss of generality, we can assume that M meets the two conditions (1) and (2) above. Otherwise, we can modify it without affecting the language.
Suppose that M, in a state p, with a symbol A on top of its stack, takes some number of moves until the next stack symbol below A (symbol B in figure (a) below) appears on the stack top for the first time, as shown in figure (b). Let q be the state of M when B appears for the first time on the stack top, but before any operation with B. Notice that B may never appear on the stack top, or the stack height may change many times before B appears.

Let [p, A, q], called the stack-top erasing set (STES), denote the set of all input string segments that M will read from the input tape from the time when the machine sees A at the stack top in state p until it first sees B under A, as the figures illustrate. (Notice that [p, A, q] may be empty.)

[Figure: (a) stack Z0…BA in state p; (b) stack Z0…B in state q after reading x; (c) [p, A, q] = {x, y, …}.]
Figure (b) below shows the nonempty STES's for the PDA in figure (a). Notice that, among many others, [2, A, 5] and [1, Z0, 3] are empty STES's.

(a) [Figure: PDA with states 1–7 and transitions 1→2: (a, Z0/AZ0), 2→5: (a, A/aA), 5→6: (b, a/ε), 6→7: (b, A/ε), 7→4: (ε, Z0/ε), 2→3: (b, A/ε), 3→4: (ε, Z0/ε).]

(b) [5, a, 6] = {b}    [6, A, 7] = {b}    [7, Z0, 4] = {ε}
    [2, A, 7] = {abb}    [2, A, 3] = {b}    [1, Z0, 4] = {aabb, ab}
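The STES's above can be computed mechanically by a bounded search over configurations. The sketch below is illustrative: the move table encodes figure (a), with the state-to-state assignments inferred from the listed STES values, and the function name `stes` is hypothetical.

```python
# Sketch: compute the STES's [p, A, q] of a push/pop-only PDA by
# breadth-first search over configurations.  "" stands for epsilon
# input; each (state, input, stack top) has one move in this PDA.

from collections import defaultdict, deque

EPS = ""
moves = {  # (state, input, stack top) -> (next state, symbols pushed)
    (1, "a", "Z0"): (2, ("A", "Z0")),
    (2, "a", "A"):  (5, ("a", "A")),
    (5, "b", "a"):  (6, ()),
    (6, "b", "A"):  (7, ()),
    (7, EPS, "Z0"): (4, ()),
    (2, "b", "A"):  (3, ()),
    (3, EPS, "Z0"): (4, ()),
}

def stes(p, A, max_len=6):
    """Map q -> set of strings x with x in [p, A, q]: starting in p with
    A on top, M reads x and first exposes the symbol below A in q."""
    result = defaultdict(set)
    queue = deque([(p, (A,), "")])
    seen = set()
    while queue:
        state, stack, read = queue.popleft()
        if not stack:                      # symbol below A exposed
            result[state].add(read)
            continue
        if (state, stack, read) in seen or len(read) > max_len:
            continue
        seen.add((state, stack, read))
        for (s, a, top), (t, push) in moves.items():
            if s == state and top == stack[0]:
                queue.append((t, push + stack[1:], read + a))
    return result

print(dict(stes(2, "A")))   # {3: {'b'}, 7: {'abb'}}
print(dict(stes(1, "Z0")))  # strings 'ab' and 'aabb', both ending in state 4
```

Running it reproduces figure (b), including the empty STES's ([2, A, 5] and [1, Z0, 3] simply never show up in the result).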
Let M = (Q, Σ, Γ, δ, q0, Z0, ∅) be the NPDA (accepting by empty stack, so the set of final states is empty). Here is an algorithm for constructing a context-free grammar G = (VT, VN, S, P) such that L(G) = L(M).

(1) Let VT = Σ and VN = {[p, A, q] | p, q ∈ Q, A ∈ Γ} ∪ {S}; i.e., the nonterminal alphabet of G consists of all the composite symbols, each denoting an STES of M, together with the start symbol S.

(2) Let t be the state M enters in the move emptying the stack (i.e., popping Z0). Put the rule S → [q0, Z0, t] in P.

(3) Suppose that a rule with a nonterminal symbol [p, A, q] has been newly included in P, where p, q ∈ Q and A ∈ Γ. (If there is no such rule, the algorithm terminates.)

(a) If M has a move δ(p, a, A) = (r, BC) (where a ∈ Σ ∪ {ε}, r ∈ Q, and B, C ∈ Γ), then for every s ∈ Q, put the following rule in P:

[p, A, q] → a[r, B, s][s, C, q]

(b) If M has a move δ(p, a, A) = (q, ε) (where a ∈ Σ ∪ {ε}), put the following rule in P:

[p, A, q] → a
Now we need to understand the idea underlying the construction of the CFG rules from the PDA M. The main objective is to design the grammar rules such that x ∈ [p, A, q] if and only if string x is derivable starting with the nonterminal symbol [p, A, q], as the following figures illustrate.

(a) M's STES: x ∈ [p, A, q]    (b) G's derivation: [p, A, q] ⇒ … ⇒ x

STES [q0, Z0, t] contains all the strings that M reads, starting in the start state and continuing until it enters the state t by emptying the stack (i.e., popping Z0). Hence, we get L(M) = [q0, Z0, t]. It follows that G should be able to derive L(M) starting with the nonterminal symbol [q0, Z0, t]. However, to use S as the start symbol following the convention, in step (2) of the algorithm we put the rule S → [q0, Z0, t] in P.
Next let us examine step (3) of the algorithm. Recall that [p, A, q] is the set of all possible input segments that M will read, starting in state p with A on the stack top, until the machine, in state q, first sees the symbol under A (B in figure (a) below) appear at the stack top. For example, if M has a move δ(p, a, A) = (q, ε), as shown in figure (b) below, then a ∈ [p, A, q].

[Figure: (a) stack Z0…BA in state p; after reading a, stack Z0…B in state q. (b) Transition p → q labeled (a, A/ε).]
Now, suppose that M has a pushing move, say δ(p, a, A) = (r1, DC). (Note that in this move, A is changed to C, and D is pushed on top of it.) Let s1 be the state M enters the first time it sees C at the stack top, and let q be the state M enters the first time it sees B (which was under A) at the stack top, as shown below. The set [p, A, q] must contain all the strings in a[r1, D, s1][s1, C, q] (for example, axy in the figure); i.e.,

[p, A, q] ⊇ a[r1, D, s1][s1, C, q]

[Figure: in state p the stack is Z0…BA; after reading a (state r1) it is Z0…BCD; after reading x (state s1) it is Z0…BC; after reading y (state q) it is Z0…B.]
The following state transition graph will help us understand why this containment relation holds. Notice that, as the figure illustrates, there can be more than one state (s1i in the figure) that M may enter the first time it sees C at the stack top. So the following relation holds:

[p, A, q] ⊇ a[r1, D, s11][s11, C, q] ∪ a[r1, D, s12][s12, C, q] ∪ …

[Figure: state graph with p →(a, A/DC)→ r1, paths consuming D leading to s11, s12, …, then paths consuming C leading to q, with B the symbol under A.]
Suppose that M has multiple pushing moves in the same state p with the same stack top A, like δ(p, a1, A) = (r1, D1C1) and δ(p, a2, A) = (r2, D2C2). We apply the same idea and let [p, A, q] include all the segments of the input strings that M will read until it first sees the symbol (B in the figure below) under DiCi. Thus, for this more general case, we have the following:

[p, A, q] ⊇ a1[r1, D1, s11][s11, C1, q] ∪ a1[r1, D1, s12][s12, C1, q] ∪ …
         ∪ a2[r2, D2, s21][s21, C2, q] ∪ a2[r2, D2, s22][s22, C2, q] ∪ …

[Figure: state graph with p →(ai, A/DiCi)→ ri, paths consuming Di leading to states sij, then paths consuming Ci leading to q, with B the symbol under A.]
Now, suppose that M also has some popping moves in state p with A at the stack top, for example, δ(p, b1, A) = (q, ε), δ(p, b2, A) = (q, ε), …. Since M has only popping and pushing moves on the stack, we finally get the following:

[p, A, q] = a1[r1, D1, s11][s11, C1, q] ∪ a1[r1, D1, s12][s12, C1, q] ∪ …
          ∪ a2[r2, D2, s21][s21, C2, q] ∪ a2[r2, D2, s22][s22, C2, q] ∪ …
          ∪ {b1} ∪ {b2} ∪ …

Suppose that in state p, M has k1 pushing moves and k2 popping moves, and let n be the number of states of M. The above equation can be expressed as follows:

[p, A, q] = ( ∪_{i=1}^{k1} ∪_{j=1}^{n} a_i [r_i, D_i, s_ij][s_ij, C_i, q] ) ∪ ( ∪_{i=1}^{k2} {b_i} ),

where p, q, r_i, s_ij ∈ Q; a_i, b_i ∈ Σ; and A, D_i, C_i ∈ Γ.
We know that for each triple p, q, and A, the equation above holds. Thus, for every nonterminal [p, A, q] in grammar G, we make the following rules. This is exactly what step (3) of the algorithm does.

[p, A, q] → a1[r1, D1, s11][s11, C1, q] | a1[r1, D1, s12][s12, C1, q] | …
          | a2[r2, D2, s21][s21, C2, q] | a2[r2, D2, s22][s22, C2, q] | …
          | b1 | b2 | …
Let's see a couple of application examples of the algorithm.

Example 1. For the PDA M shown in figure (a) below, figure (b) shows the CFG constructed with the algorithm. Rule (1) is constructed by step (2) of the algorithm, rule (2) by step (3)(a), and rules (3) and (4) by step (3)(b). We can show that L(M) = {aba} = L(G).

(a) PDA M: p →(a, Z0/AZ0)→ r →(b, A/ε)→ s →(a, Z0/ε)→ q

(b) CFG G:
(1) S → [p, Z0, q]
(2) [p, Z0, q] → a[r, A, s][s, Z0, q]
(3) [r, A, s] → b
(4) [s, Z0, q] → a
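Under the stated assumptions (push/pop-only moves, acceptance by empty stack), the whole construction can be sketched and checked against this example. The helper names `build_rules` and `language` are illustrative, not from the text.

```python
# Sketch: the triple-construction algorithm applied to the PDA M of
# figure (a).  A push move (p, a, A) -> (r, (B, C)) puts B on top of C;
# a pop move is (p, a, A) -> (q, ()).  "" stands for epsilon input.

from collections import defaultdict, deque

states = ["p", "r", "s", "q"]
moves = [
    (("p", "a", "Z0"), ("r", ("A", "Z0"))),  # (a, Z0/AZ0)
    (("r", "b", "A"),  ("s", ())),           # (b, A/eps)
    (("s", "a", "Z0"), ("q", ())),           # (a, Z0/eps)
]

def build_rules(states, moves, q0, Z0, t):
    """Steps (2), (3a), (3b); for a small PDA we simply generate the
    rules for every triple rather than on demand."""
    rules = defaultdict(list)
    rules["S"].append([(q0, Z0, t)])                     # step (2)
    for (p, a, A), (r, push) in moves:
        if len(push) == 2:                               # step (3)(a)
            B, C = push
            for q in states:
                for s in states:
                    rules[(p, A, q)].append(
                        ([a] if a else []) + [(r, B, s), (s, C, q)])
        elif not push:                                   # step (3)(b)
            rules[(p, A, r)].append([a] if a else [])
    return rules

def language(rules, max_form=5):
    """Enumerate the terminal strings derivable from S (leftmost BFS)."""
    out, seen, queue = set(), set(), deque([("S",)])
    while queue:
        form = queue.popleft()
        if form in seen or len(form) > max_form:
            continue
        seen.add(form)
        idx = next((i for i, x in enumerate(form)
                    if x == "S" or isinstance(x, tuple)), None)
        if idx is None:
            out.add("".join(form))
            continue
        for rhs in rules.get(form[idx], []):
            queue.append(form[:idx] + tuple(rhs) + form[idx + 1:])
    return out

rules = build_rules(states, moves, "p", "Z0", "q")
print(language(rules))  # {'aba'}
```

Note how step (3)(a) generates a rule for every state s, exactly the "flood" of rules discussed in the rumination below; only the choice s = s survives a derivation here.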
Example 2. The PDA below is a little more complex than the one in Example 1. Notice that this is an NPDA because of the two transitions in state 2.

[Figure: PDA with states 1–7 and transitions 1→2: (a, Z0/AZ0), 2→5: (a, A/aA), 5→6: (b, a/ε), 6→7: (b, A/ε), 7→4: (ε, Z0/ε), 2→3: (ε, A/ε), 3→4: (b, Z0/ε).]

S → [1, Z0, 4]
[1, Z0, 4] → a[2, A, 7][7, Z0, 4] | a[2, A, 3][3, Z0, 4]
[2, A, 7] → a[5, a, 6][6, A, 7]
[5, a, 6] → b    [6, A, 7] → b    [2, A, 3] → ε
[7, Z0, 4] → ε    [3, Z0, 4] → b

L(M) = L(G) = {aabb, ab}
The grammar that we have just constructed has too many rules for such a simple language. If we apply the techniques for minimizing the number of ε-production rules and for eliminating the unit production rules (recall Sections 10.1 and 10.2), we can drastically simplify the grammar as shown below.

S → aabb | ab
Example 3. Figure (a) below shows a PDA accepting (by empty stack) our familiar language L = {a^i b^i | i ≥ 1}. Figure (b) shows the grammar G generated by the algorithm. Again, it has too many rules to comprehend; we need to clean it up.

(a) PDA M: 1 →(a, Z0/AZ0)→ 2, self-loop at 2: (a, A/AA), 2 →(b, A/ε)→ 3, self-loop at 3: (b, A/ε), 3 →(ε, Z0/ε)→ 4.

(b) CFG G:
(1) S → [1, Z0, 4]
(2) [1, Z0, 4] → a[2, A, 3][3, Z0, 4] | a[2, A, 2][2, Z0, 4]
(3) [2, A, 4] → a[2, A, 3][3, A, 4] | a[2, A, 2][2, A, 4]
(4) [2, A, 3] → a[2, A, 3][3, A, 3] | a[2, A, 2][2, A, 3] | b
(5) [2, A, 2] = ∅    (6) [3, A, 4] = ∅
(7) [3, A, 3] → b    (8) [3, Z0, 4] → ε
Lines (5) and (6) are useless, because the two nonterminals derive nothing. In state 2, M makes only pushing moves, so it never sees (while still in state 2) the stack symbol under the current stack top A; hence [2, A, 2] is empty. Set [3, A, 4] is empty, since the machine empties the stack and terminates without reading the input. Thus we can eliminate from the grammar all rules involving the two nonterminals [2, A, 2] and [3, A, 4].
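This emptiness argument is an instance of the standard "generating symbols" fixpoint from Chapter 10, which can be sketched directly. The encoding below is illustrative: composite nonterminals are abbreviated to plain strings (e.g. "2A3" for [2, A, 3]), and nonterminals with empty STES's, like [2, A, 2] and [3, A, 4], simply have no rules.

```python
# Sketch: find the nonterminals of the Example 3 grammar that derive at
# least one terminal string, via the usual fixpoint iteration.

grammar = {
    "S":    [["1Z04"]],
    "1Z04": [["a", "2A3", "3Z04"], ["a", "2A2", "2Z04"]],
    "2A4":  [["a", "2A3", "3A4"], ["a", "2A2", "2A4"]],
    "2A3":  [["a", "2A3", "3A3"], ["a", "2A2", "2A3"], ["b"]],
    "3A3":  [["b"]],
    "3Z04": [[]],   # [3, Z0, 4] -> epsilon
    # "2A2", "3A4", "2Z04" have no rules: their STES's are empty
}

TERMINALS = {"a", "b"}

def generating(grammar):
    """Nonterminals that derive at least one terminal string."""
    gen = set()
    while True:
        new = {lhs for lhs, alts in grammar.items() if lhs not in gen
               and any(all(x in TERMINALS or x in gen for x in rhs)
                       for rhs in alts)}
        if not new:
            return gen
        gen |= new

gen = generating(grammar)
print(sorted(gen))  # ['1Z04', '2A3', '3A3', '3Z04', 'S']
```

Every rule mentioning a nonterminal outside this set can be dropped, which is exactly the cleanup performed next.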
Cleaning up those useless symbols from the grammar (see Section 10.3), we get the following simplified grammar.

(1) S → [1, Z0, 4]
(2) [1, Z0, 4] → a[2, A, 3][3, Z0, 4]
(4) [2, A, 3] → a[2, A, 3][3, A, 3] | b
(7) [3, A, 3] → b    (8) [3, Z0, 4] → ε
To further simplify the grammar, let us replace each of the composite symbols with a unique nonterminal symbol, as shown in figure (c) below. Eliminating the unit productions and minimizing the number of ε-production rules (presented in Sections 10.1 and 10.2), we finally get the CFG shown in figure (d). Now we see that this grammar generates the language L = {a^i b^i | i ≥ 1}.

(c) (1) S → A    (2) A → aBC    (4) B → aBD | b    (7) D → b    (8) C → ε

(d) (1) S → aB    (4) B → aBb | b
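As a quick check, enumerating short derivations of the grammar in (d) confirms that, up to the length bound, it generates exactly the strings a^i b^i. The sketch is illustrative (the name `derive` and the bound are ours):

```python
# Sketch: breadth-first enumeration of the terminal strings of the
# simplified grammar (d): S -> aB, B -> aBb | b.

from collections import deque

rules = {"S": [["a", "B"]], "B": [["a", "B", "b"], ["b"]]}

def derive(rules, max_len=8):
    """All terminal strings of length <= max_len derivable from S."""
    out, seen, queue = set(), set(), deque([("S",)])
    while queue:
        form = queue.popleft()
        if form in seen or len(form) > max_len:
            continue
        seen.add(form)
        idx = next((i for i, x in enumerate(form) if x in rules), None)
        if idx is None:                    # no nonterminal left
            out.add("".join(form))
            continue
        for rhs in rules[form[idx]]:
            queue.append(form[:idx] + tuple(rhs) + form[idx + 1:])
    return out

print(sorted(derive(rules)))  # ['aaaabbbb', 'aaabbb', 'aabb', 'ab']
```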
Rumination (1): Constructing a CFG from a PDA

Step (3)(a) of the algorithm (copied below) for constructing a CFG from a given PDA is the single source of the "flood" of rules entering the grammar. (In particular, note the quantification "for every s ∈ Q.") This step is executed for every pushing move of the automaton. As we saw in Example 3, the algorithm may generate many useless nonterminals. The algorithm for eliminating useless symbols from a CFG that we learned in Chapter 10 can be used to remove them efficiently.

(a) If M has a move δ(p, a, A) = (r, BC) (where a ∈ Σ ∪ {ε}, r ∈ Q, and B, C ∈ Γ), then for every s ∈ Q, put the following rule in P:

[p, A, q] → a[r, B, s][s, C, q]
Our Language
Did you know that "verb" is a noun?
How can you look up words in a dictionary if you can't spell them?
If a word is misspelled in a dictionary, how would we ever know?
If two mouses are mice and two louses are lice, why aren't two houses hice?
If Webster wrote the first dictionary, where did he find the words?
If you wrote a letter, perhaps you bote your tongue?
If you've read a book, you can reread it. But wouldn't this also mean that you would have to "member" somebody
in order to remember them?
 MoodyFan 
Break Time
Exercises

15.1 Prove that the union S1 ∪ S2 of two enumerable sets S1 and S2 is also enumerable.

15.2 Show that the set of all real numbers is not enumerable.

15.3 Prove that the class of languages which are not type 0 is not enumerable.