Lecture Notes on
Regular Languages and Finite Automata
for Part IA of the Computer Science Tripos
Prof. Andrew M. Pitts Cambridge University Computer Laboratory
© A. M. Pitts, 1998–2003
First Edition 1998. Revised 1999, 2000, 2001, 2002, 2003.
Contents

Learning Guide  ii
1 Regular Expressions  1
  1.1 Alphabets, strings, and languages  1
  1.2 Pattern matching  4
  1.3 Some questions about languages  6
  1.4 Exercises  8
2 Finite State Machines  11
  2.1 Finite automata  11
  2.2 Determinism, non-determinism, and ε-transitions  14
  2.3 A subset construction  17
  2.4 Summary  20
  2.5 Exercises  20
3 Regular Languages, I  21
  3.1 Finite automata from regular expressions  21
  3.2 Decidability of matching  26
  3.3 Exercises  28
4 Regular Languages, II  29
  4.1 Regular expressions from finite automata  29
  4.2 An example  30
  4.3 Complement and intersection of regular languages  32
  4.4 Exercises  34
5 The Pumping Lemma  37
  5.1 Proving the Pumping Lemma  38
  5.2 Using the Pumping Lemma  39
  5.3 Decidability of language equivalence  42
  5.4 Exercises  43
6 Grammars  45
  6.1 Context-free grammars  45
  6.2 Backus–Naur Form  47
  6.3 Regular grammars  49
  6.4 Exercises  51
Learning Guide
The notes are designed to accompany six lectures on regular languages and finite automata for Part IA of the Cambridge University Computer Science Tripos. The aim of this short course will be to introduce the mathematical formalisms of finite state machines, regular expressions and grammars, and to explain their applications to computer languages. As such, it covers some basic theoretical material which Every Computer Scientist Should Know. Direct applications of the course material occur in the various CST courses on compilers. Further and related developments will be found in the CST Part IB courses Computation Theory and Semantics of Programming Languages and the CST Part II course Topics in Concurrency.

This course contains the kind of material that is best learned through practice. The books mentioned below contain a large number of problems of varying degrees of difficulty, and some contain solutions to selected problems. A few exercises are given at the end of each section of these notes and relevant past Tripos questions are indicated there. At the end of the course students should be able to explain how to convert between the three ways of representing regular sets of strings introduced in the course, and be able to carry out such conversions by hand for simple cases. They should also be able to prove whether or not a given set of strings is regular.

Recommended books

Textbooks which cover the material in this course also tend to cover the material you will meet in the CST Part IB courses on Computation Theory and Complexity Theory, and the theory underlying parsing in various courses on compilers. There is a large number of such books. Three recommended ones are listed below.

• D. C. Kozen, Automata and Computability (Springer-Verlag, New York, 1997).
• T. A. Sudkamp, Languages and Machines (Addison-Wesley Publishing Company, Inc., 1988).
• J. E. Hopcroft, R. Motwani and J. D. Ullman, Introduction to Automata Theory, Languages, and Computation, Second Edition (Addison-Wesley, 2001).

Note

The material in these notes has been drawn from several different sources, including the books mentioned above and previous versions of this course by the author and by others. Any errors are of course all the author's own work. A list of corrections will be available from the course web page (follow links from www.cl.cam.ac.uk/Teaching/). A lecture(r) appraisal form is included at the end of the notes. Please take time to fill it in and return it. Alternatively, fill out an electronic version of the form via the URL www.cl.cam.ac.uk/cgi-bin/lr/login.

Andrew Pitts
Andrew.Pitts@cl.cam.ac.uk
1 Regular Expressions
Doubtless you have used pattern matching in the command-line shells of various operating systems (Slide 1) and in the search facilities of text editors. Another important example of the same kind is the 'lexical analysis' phase in a compiler during which the text of a program is divided up into the allowed tokens of the programming language. The algorithms which implement such pattern-matching operations make use of the notion of a finite automaton (which is Greeklish for finite state machine). This course reveals (some of!) the beautiful theory of finite automata (yes, that is the plural of 'automaton') and their use for recognising when a particular string matches a particular pattern.
Pattern matching

What happens if, at a Unix/Linux shell prompt, you type

ls *

and press return? Suppose the current directory contains files called regfla.aux, regfla.log, regfla.dvi, and (strangely) .aux. What happens if you type

ls *.aux

and press return?

Slide 1
1.1 Alphabets, strings, and languages
The purpose of Section 1 is to introduce a particular language for patterns, called regular expressions, and to formulate some important problems to do with pattern-matching which will be solved in the subsequent sections. But first, here is some notation and terminology to do with character strings that we will be using throughout the course.
Alphabets

An alphabet is specified by giving a finite set, Σ, whose elements are called symbols. For us, any set qualifies as a possible alphabet, so long as it is finite.

Examples:
• {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}: the 10-element set of decimal digits.
• {a, b, c, ..., x, y, z}: the 26-element set of lower-case characters of the English language.
• {S | S ⊆ {0, 1, 2, ..., 9}}: the 2^10-element set of all subsets of the alphabet of decimal digits.

Non-example:
• N = {0, 1, 2, 3, ...}: the set of all non-negative whole numbers is not an alphabet, because it is infinite.

Slide 2

Strings over an alphabet

A string of length n (≥ 0) over an alphabet Σ is just an ordered n-tuple of elements of Σ, written without punctuation.

Example: if Σ = {a, b, c}, then a, ab, aac, and bbac are strings over Σ of lengths one, two, three and four respectively.

Σ* denotes the set of all strings over Σ of any finite length.

N.B. there is a unique string of length zero over Σ, called the null string (or empty string) and denoted ε (no matter which Σ we are talking about).

Slide 3
Concatenation of strings

The concatenation of two strings u, v ∈ Σ* is the string uv obtained by joining the strings end-to-end. This generalises to the concatenation of three or more strings.

Examples: if u = ab, v = ra and w = cad, then, for instance, uv = abra, vu = raab, wv = cadra and uvwuv = abracadabra.

Slide 4

Slides 2 and 3 define the notions of an alphabet Σ, and the set Σ* of finite strings over an alphabet. The length of a string u will be denoted by length(u). Slide 4 defines the operation of concatenation of strings. We make no notational distinction between a symbol a ∈ Σ and the corresponding string of length one over Σ: so Σ can be regarded as a subset of Σ*. Note that Σ* is never empty: it always contains the null string, ε, the unique string of length zero. Note also that for any u, v, w ∈ Σ*

uε = u = εu and (uv)w = uvw = u(vw),

i.e. the null string is a unit for concatenation and concatenation is associative.

Example 1.1.1. Examples of Σ* for different Σ:

(i) If Σ = {a}, then Σ* contains ε, a, aa, aaa, aaaa, ...

(ii) If Σ = {a, b}, then Σ* contains ε, a, b, aa, ab, ba, bb, aaa, aab, aba, abb, baa, bab, bba, bbb, ...

(iii) If Σ = ∅ (the empty set, the unique set with no elements), then Σ* = {ε}, the set just containing the null string.
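Since a string over an alphabet is just a finite sequence of symbols, ordinary Python strings can stand in for elements of Σ*, with the empty Python string '' playing the role of the null string ε. A quick sketch of the concatenation operation (an illustration, not part of the notes):

```python
# Strings over an alphabet modelled as Python strings;
# '' plays the role of the null string (epsilon in the notes).
u, v, w = "ab", "ra", "cad"

# Concatenation is written uv in the notes; in Python it is u + v.
print(u + v)              # abra
print(u + v + w + u + v)  # abracadabra

# The null string is a two-sided identity, and concatenation is associative:
assert "" + u == u + "" == u
assert (u + v) + w == u + (v + w)
```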
1.2 Pattern matching
Slide 5 defines the patterns, or regular expressions, over an alphabet Σ that we will use. Each such regular expression, r, represents a whole set (possibly an infinite set) of strings in Σ* that match r. The precise definition of this matching relation is given on Slide 6. It might seem odd to include a regular expression ∅ that is matched by no strings at all, but it is technically convenient to do so. Note that the regular expression ε is in fact equivalent to ∅*, in the sense that a string u matches ∅* iff it matches ε (iff u = ε).

Regular expressions over an alphabet Σ:

• each symbol a ∈ Σ is a regular expression
• ε is a regular expression
• ∅ is a regular expression
• if r and s are regular expressions, then so is (r|s)
• if r and s are regular expressions, then so is rs
• if r is a regular expression, then so is (r)*

Every regular expression is built up inductively, by finitely many applications of the above rules.

(N.B. we assume ε, ∅, (, ), |, and * are not symbols in Σ.)

Slide 5

Remark 1.2.1 (Binding precedence in regular expressions). In the definition on Slide 5 we assume implicitly that the alphabet Σ does not contain the six symbols

ε ∅ ( ) | *.

Then, concretely speaking, the regular expressions over Σ form a certain set of strings over the alphabet obtained by adding these six symbols to Σ. However it makes things more readable if we adopt a slightly more abstract syntax, dropping as many brackets as possible and using the convention that

(−)* binds more tightly than concatenation, which binds more tightly than |.

So, for example, r|st* means (r|(s((t)*))), not ((r|s)((t)*)), etc.
Matching strings to regular expressions

• u matches a ∈ Σ iff u = a
• u matches ε iff u = ε
• no string matches ∅
• u matches r|s iff u matches either r or s
• u matches rs iff it can be expressed as the concatenation of two strings, u = vw, with v matching r and w matching s
• u matches r* iff either u = ε, or u matches r, or u can be expressed as the concatenation of two or more strings, each of which matches r

Slide 6

The definition of 'u matches r*' on Slide 6 is equivalent to saying:

u matches r* iff, for some n ≥ 0, u can be expressed as a concatenation of n strings, u = u1 u2 ... un, where each ui matches r.

The case n = 0 just means that u = ε (so ε always matches r*); and the case n = 1 just means that u matches r (so any string matching r also matches r*). For example, if Σ = {a, b, c} and r = ab, then the strings matching r* are ε, ab, abab, ababab, etc.

Note that we didn't include a regular expression for the '∗' occurring in the UNIX examples on Slide 1. However, once we know which alphabet we are referring to, Σ = {a1, a2, ..., an} say, we can get the effect of ∗ using the regular expression

(a1|a2|...|an)*

which is indeed matched by any string in Σ* (because a1|a2|...|an is matched by any symbol in Σ).
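The inductive matching relation of Slide 6 can be transcribed almost literally into code. Below is a small Python sketch (the tuple encoding of regular expressions and the function name are my own choices, not notation from the notes): the concatenation and star clauses try every way of splitting the string, exactly as in the definition.

```python
# Regular expressions encoded as nested tuples:
#   ('sym', a)     a single symbol a
#   ('eps',)       the null-string expression epsilon
#   ('empty',)     the expression matched by no string
#   ('alt', r, s)  r|s
#   ('cat', r, s)  rs
#   ('star', r)    r*
def matches(r, u):
    tag = r[0]
    if tag == 'sym':
        return u == r[1]
    if tag == 'eps':
        return u == ''
    if tag == 'empty':
        return False
    if tag == 'alt':
        return matches(r[1], u) or matches(r[2], u)
    if tag == 'cat':
        # try every split u = vw with v matching r[1] and w matching r[2]
        return any(matches(r[1], u[:i]) and matches(r[2], u[i:])
                   for i in range(len(u) + 1))
    if tag == 'star':
        # u = epsilon, or a non-empty prefix matches r[1] and the rest matches r*
        return u == '' or any(matches(r[1], u[:i]) and matches(r, u[i:])
                              for i in range(1, len(u) + 1))
    raise ValueError(f'bad expression: {r!r}')

# (a|b)*ab: matched by e.g. 'ab' and 'bbab', but not by 'ba'
ab_star = ('star', ('alt', ('sym', 'a'), ('sym', 'b')))
r = ('cat', ab_star, ('cat', ('sym', 'a'), ('sym', 'b')))
assert matches(r, 'ab') and matches(r, 'bbab')
assert not matches(r, 'ba')
# epsilon is equivalent to (emptyset)*: both match only the null string
assert matches(('star', ('empty',)), '')
assert not matches(('star', ('empty',)), 'a')
```

Trying all splits is exponentially inefficient; the whole point of Sections 2 and 3 is that finite automata give a much better algorithm.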
Examples of matching, with Σ = {0, 1}

• 0|1 is matched by each symbol in Σ
• 1(0|1)* is matched by any string in Σ* that starts with a '1'
• ((0|1)(0|1))* is matched by any string of even length in Σ*
• (0|1)*(0|1)* is matched by any string in Σ*
• (ε|1)(ε|0) is matched by just the strings ε, 1, 0, and 10
• ∅1|0 is just matched by 0

Slide 7

Notation 1.2.2. The notation r + s is quite often used for what we write as r|s. The notation r^n, for n ≥ 0, is an abbreviation for the regular expression obtained by concatenating n copies of r. Thus:

r^0 := ε and r^(n+1) := r(r^n).

Thus u matches r* iff u matches r^n for some n ≥ 0. We use r^+ as an abbreviation for rr*. Thus u matches r^+ iff it can be expressed as the concatenation of one or more strings, each one matching r.
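The Slide 7 examples can be spot-checked with Python's standard re module, whose fullmatch function corresponds to our 'matches' relation. The translation is imperfect (ε has to be written as an empty alternative, and ∅ has no direct counterpart), but most of the examples go through directly:

```python
import re

# 0|1 : matched by each symbol of {0,1}, and nothing else
assert re.fullmatch(r'0|1', '0') and re.fullmatch(r'0|1', '1')
assert not re.fullmatch(r'0|1', '01')

# 1(0|1)* : any string starting with a '1'
assert re.fullmatch(r'1(0|1)*', '10110')
assert not re.fullmatch(r'1(0|1)*', '010')

# ((0|1)(0|1))* : any string of even length (including the null string)
assert re.fullmatch(r'((0|1)(0|1))*', '') and re.fullmatch(r'((0|1)(0|1))*', '0110')
assert not re.fullmatch(r'((0|1)(0|1))*', '011')

# (eps|1)(eps|0), written here with empty alternatives: just eps, 1, 0, 10
r = r'(?:|1)(?:|0)'
assert all(re.fullmatch(r, u) for u in ['', '1', '0', '10'])
assert not re.fullmatch(r, '01')
```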
1.3 Some questions about languages
Slide 8 deﬁnes the notion of a formal language over an alphabet. We take a very extensional view of language: a formal language is completely determined by the ‘words in the dictionary’, rather than by any grammatical rules. Slide 9 gives some important questions about languages, regular expressions, and the matching relation between strings and regular expressions.
Languages

A (formal) language L over an alphabet Σ is just a set of strings in Σ*. Thus any subset L ⊆ Σ* determines a language over Σ.

The language determined by a regular expression r over Σ is

L(r) := {u ∈ Σ* | u matches r}.

Two regular expressions r and s (over the same alphabet) are equivalent iff L(r) and L(s) are equal sets (i.e. have exactly the same members).

Slide 8

Some questions

(a) Is there an algorithm which, given a string u and a regular expression r (over the same alphabet), computes whether or not u matches r?

(b) In formulating the definition of regular expressions, have we missed out some practically useful notions of pattern?

(c) Is there an algorithm which, given two regular expressions r and s (over the same alphabet), computes whether or not they are equivalent? (Cf. Slide 8.)

(d) Is every language of the form L(r)?

Slide 9
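Question (c) asks for an algorithm deciding equivalence. A brute-force semi-check is easy to sketch: enumerate all strings over Σ up to some length bound and compare which ones each expression matches. Agreement up to the bound proves nothing in general (the real algorithm comes in Section 5.3), but disagreement refutes equivalence. A Python sketch using the standard re module (the function name and bound are my own choices):

```python
import re
from itertools import product

def language_up_to(regex, alphabet, max_len):
    """All strings over `alphabet` of length <= max_len matching `regex`."""
    strings = (''.join(tup)
               for n in range(max_len + 1)
               for tup in product(alphabet, repeat=n))
    return {u for u in strings if re.fullmatch(regex, u)}

sigma = ['a', 'b']
# (a|b)* and (a*|b*)* agree on every string tested...
assert language_up_to(r'(a|b)*', sigma, 4) == language_up_to(r'(?:a*|b*)*', sigma, 4)
# ...whereas a(a|b)* and (a|b)*a are not equivalent: they disagree on 'ab' and 'ba'
l1 = language_up_to(r'a(a|b)*', sigma, 3)
l2 = language_up_to(r'(a|b)*a', sigma, 3)
assert 'ab' in l1 - l2 and 'ba' in l2 - l1
```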
The answer to question (a) on Slide 9 is 'yes'. Algorithms for deciding such pattern-matching questions make use of finite automata. We will see this during the next few sections.

If you have used the UNIX utility grep, or a text editor with good facilities for regular expression based search, like emacs, you will know that the answer to question (b) on Slide 9 is also 'yes': the regular expressions defined on Slide 5 leave out some forms of pattern that one sees in such applications. However, the answer to the question is also 'no', in the sense that (for a fixed alphabet) these extra forms of regular expression are definable, up to equivalence, from the basic forms given on Slide 5. For example, if the symbols of the alphabet are ordered in some standard way, it is common to provide a form of pattern for naming ranges of symbols; for example [a-z] might denote a pattern matching any lower-case letter. It is not hard to see how to define a regular expression (albeit a rather long one) which achieves the same effect. However, some other commonly occurring kinds of pattern are much harder to describe using the rather minimalist syntax of Slide 5. The principal example is complementation, ~r:

u matches ~r iff u does not match r.

It will be a corollary of the work we do on finite automata (and a good measure of its power) that every pattern making use of the complementation operation can be replaced by an equivalent regular expression just making use of the operations on Slide 5. But why do we stick to the minimalist syntax of regular expressions on that slide? The answer is that it reduces the amount of work we will have to do to show that, in principle, matching strings against patterns can be decided via the use of finite automata.

The answer to question (c) on Slide 9 is 'yes' and once again this will be a corollary of the work we do on finite automata. (See Section 5.3.)

Finally, the answer to question (d) is easily seen to be 'no', provided the alphabet Σ contains at least one symbol. For in that case Σ* is countably infinite; and hence the number of languages over Σ, i.e. the number of subsets of Σ*, is uncountable. (Recall Cantor's diagonal argument.) But since Σ is a finite set, there are only countably many regular expressions over Σ. (Why?) So the answer to (d) is 'no' for cardinality reasons. However, even amongst the countably many languages that are 'finitely describable' (an intuitive notion that we will not formulate precisely) many are not of the form L(r) for any regular expression r. For example, in Section 5.2 we will use the 'Pumping Lemma' to see that

{a^n b^n | n ≥ 0}

is not of this form.
1.4 Exercises

Exercise 1.4.1. Write down an ML data type declaration for a type constructor whose values correspond to the regular expressions over an alphabet.

Exercise 1.4.2. Find regular expressions over {0, 1} that determine the following languages:

(a) {u | u contains an even number of 1's}

(b) {u | u contains an odd number of 0's}

Exercise 1.4.3. For which alphabets Σ is the set Σ* of all finite strings over Σ itself an alphabet?

Exercise 1.4.4. If an alphabet Σ contains only one symbol, what does Σ* look like? (Answer: see Example 1.1.1(i).) So if Σ is the set consisting of just the null string, what is Σ*? (The answer is not {ε}.) Moral: the punctuation-free notation we use for the concatenation of two strings can sometimes be confused with the punctuation-free notation we use to denote strings of individual symbols.

Tripos questions 1999.2.1(s) 1997.2.1(q) 1996.2.1(i) 1993.5.12
2 Finite State Machines
We will be making use of mathematical models of physical systems called finite automata, or finite state machines, to recognise whether or not a string is in a particular language. This section introduces this idea and gives the precise definition of what constitutes a finite automaton. We look at several variations on the definition (to do with the concept of determinism) and see that they are equivalent for the purpose of recognising whether or not a string is in a given language.
2.1 Finite automata
Example of a finite automaton

[diagram: the automaton M over the alphabet {a, b}, with states q0, q1, q2, q3; transitions q0 -a-> q1, q1 -a-> q2, q2 -a-> q3, with each of q0, q1, q2 returning to q0 on input b and q3 looping to itself on both a and b; an unlabelled arrow marks q0 as the start state and a double circle marks q3 as accepting]

States: q0, q1, q2, q3.
Input symbols: a, b.
Transitions: as indicated above.
Start state: q0.
Accepting state(s): q3.

Slide 10

The key features of this abstract notion of 'machine' are listed below and are illustrated by the example on Slide 10.

• There are only finitely many different states that a finite automaton can be in. In the example there are four states, labelled q0, q1, q2, and q3.

• We do not care at all about the internal structure of machine states. All we care about is which transitions the machine can make between the states. A symbol from some fixed alphabet Σ is associated with each transition: we think of the elements of Σ as input symbols. Thus all the possible transitions of the finite automaton can be specified by giving a finite graph whose vertices are the states and whose edges have both a direction and a label (drawn from Σ). In the example Σ = {a, b} and the only possible transitions from state q1 are

q1 -b-> q0 and q1 -a-> q2.

In other words, in state q1 the machine can either input the symbol b and enter state q0, or it can input the symbol a and enter state q2. (Note that transitions from a state back to the same state are allowed: e.g. q3 -a-> q3 in the example.)

• There is a distinguished start state.¹ In the example it is q0. In the graphical representation of a finite automaton, the start state is usually indicated by means of an unlabelled arrow.

• The states are partitioned into two kinds: accepting states² and non-accepting states. In the graphical representation of a finite automaton, the accepting states are indicated by double circles round the name of each such state, and the non-accepting states are indicated using single circles. In the example there is only one accepting state, q3; the other three states are non-accepting. (The two extreme possibilities that all states are accepting, or that no states are accepting, are allowed; it is also allowed for the start state to be accepting.)

The reason for the partitioning of the states of a finite automaton into 'accepting' and 'non-accepting' has to do with the use to which one puts finite automata, namely to recognise whether or not a string u ∈ Σ* is in a particular language (= subset of Σ*). Given u, we begin in the start state of the automaton and traverse its graph of transitions, using up the symbols in u in the correct order, reading the string from left to right. If we can use up all the symbols in u in this way and reach an accepting state, then u is in the language 'accepted' (or 'recognised') by this particular automaton; otherwise u is not in that language. This is summed up on Slide 11.

¹The term initial state is a common synonym for 'start state'.
²The term final state is a common synonym for 'accepting state'.
L(M), language accepted by a finite automaton M

consists of all strings u over its alphabet Σ of input symbols satisfying q0 -u->* q, with q0 the start state and q some accepting state. Here q0 -u->* q means, if u = a1 a2 ... an say, that for some states q1, q2, ..., qn = q (not necessarily all distinct) there are transitions of the form

q0 -a1-> q1 -a2-> q2 -a3-> ... -an-> qn = q.

N.B.
case n = 0: q -ε->* q' iff q = q';
case n = 1: q -a->* q' iff q -a-> q'.

Slide 11

Example 2.1.1. Let M be the finite automaton pictured on Slide 10. Using the notation introduced on Slide 11 we have, for example:

q0 -aaab->* q3 (so aaab ∈ L(M));
q0 -abaa->* q2 (which by itself gives no conclusion about whether abaa ∈ L(M); but since M is deterministic this is the only state reachable by reading abaa, and q2 is not accepting, so in fact abaa ∉ L(M)).

In fact in this case

L(M) = {u | u contains three consecutive a's}.

(For 0 ≤ i ≤ 3, q_i corresponds to the state in the process of reading a string in which the last i symbols read were all a's.) So L(M) coincides with the language determined by the regular expression

(a|b)* aaa (a|b)*

(cf. Slide 8).
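The automaton M of Slide 10 is small enough to run directly. A Python sketch (the dictionary representation of the next-state function is my own choice; the states and transitions follow the description above):

```python
# DFA of Slide 10: reading an 'a' advances one step towards q3,
# reading a 'b' resets to q0, except in the accepting sink state q3.
delta = {
    ('q0', 'a'): 'q1', ('q0', 'b'): 'q0',
    ('q1', 'a'): 'q2', ('q1', 'b'): 'q0',
    ('q2', 'a'): 'q3', ('q2', 'b'): 'q0',
    ('q3', 'a'): 'q3', ('q3', 'b'): 'q3',
}
start, accepting = 'q0', {'q3'}

def accepts(u):
    q = start
    for a in u:
        q = delta[(q, a)]   # deterministic: the next state is unique
    return q in accepting

# L(M) = strings over {a,b} containing three consecutive a's
assert accepts('aaa') and accepts('babaaab')
assert not accepts('aabaab') and not accepts('')
```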
A non-deterministic finite automaton (NFA), M, is specified by

• a finite set States_M (of states)
• a finite set Σ_M (the alphabet of input symbols)
• for each q ∈ States_M and each a ∈ Σ_M, a subset Δ_M(q, a) ⊆ States_M (the set of states that can be reached from q with a single transition labelled a)
• an element s_M ∈ States_M (the start state)
• a subset Accept_M ⊆ States_M (of accepting states)

Slide 12

2.2 Determinism, non-determinism, and ε-transitions

Slide 12 gives the formal definition of the notion of finite automaton. Note that the function Δ_M gives a precise way of specifying the allowed transitions of M, via: q -a-> q' iff q' ∈ Δ_M(q, a). The reason for the qualification 'non-deterministic' on Slide 12 is because in general, for each state q and each input symbol a, we allow the possibilities that there are no, one, or many states that can be reached in a single transition labelled a from q, corresponding to the cases that Δ_M(q, a) has no, one, or many elements. For example, if M is the NFA pictured on Slide 13, then

Δ_M(q1, b) = ∅, i.e. in M, no state can be reached from q1 with a transition labelled b;
Δ_M(q1, a) = {q2}, i.e. in M, precisely one state can be reached from q1 with a transition labelled a;
Δ_M(q0, a) = {q0, q1}, i.e. in M, precisely two states can be reached from q0 with a transition labelled a.
Example of a non-deterministic finite automaton

Input alphabet: {a, b}. States, transitions, start state, and accepting states as shown:

[diagram: states q0, q1, q2, q3; q0 loops to itself on both a and b and also has a transition q0 -a-> q1; then q1 -a-> q2 and q2 -a-> q3, with q3 looping to itself on both a and b; q0 is the start state and q3 is accepting]

The language accepted by this automaton is the same as for the automaton on Slide 10, namely

{u ∈ {a, b}* | u contains three consecutive a's}.

Slide 13

When each subset Δ_M(q, a) has exactly one element we say that M is deterministic. This is a particularly important case and is singled out for definition on Slide 14. The finite automaton pictured on Slide 10 is deterministic. But note that if we took the same graph of transitions but insisted that the alphabet of input symbols was {a, b, c} say, then we would have specified an NFA, not a DFA, since for example Δ_M(q0, c) = ∅. The moral of this is: when specifying an NFA, as well as giving the graph of state transitions, it is important to say what the alphabet of input symbols is (because some input symbols may not appear in the graph at all).

When constructing machines for matching strings with regular expressions (as we will do in Section 3) it is useful to consider finite state machines exhibiting an 'internal' form of non-determinism in which the machine is allowed to change state without consuming any input symbol. One calls such transitions ε-transitions and writes them as

q -ε-> q'.

This leads to the definition on Slide 15. Note that in an NFAε, we always assume that ε is not an element of the alphabet of input symbols.
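Non-determinism costs little extra to simulate: instead of tracking one current state we track the set of all states reachable so far, applying Δ elementwise. A sketch for the Slide 13 NFA (transition structure as described above; missing dictionary entries stand for the empty set Δ_M(q, a) = ∅), which is already the idea behind the subset construction of Section 2.3:

```python
# NFA of Slide 13 over {a,b}: q0 loops on a and b, and on an 'a' can also
# guess that a run of three a's starts now.
Delta = {
    ('q0', 'a'): {'q0', 'q1'}, ('q0', 'b'): {'q0'},
    ('q1', 'a'): {'q2'},
    ('q2', 'a'): {'q3'},
    ('q3', 'a'): {'q3'},       ('q3', 'b'): {'q3'},
}
start, accepting = 'q0', {'q3'}

def accepts(u):
    current = {start}
    for a in u:
        # all states reachable from the current set by one a-labelled transition
        current = {q2 for q in current for q2 in Delta.get((q, a), set())}
    return bool(current & accepting)

# same language as the DFA on Slide 10: three consecutive a's
assert accepts('baaab') and not accepts('aabab')
```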
A deterministic finite automaton (DFA) is an NFA M with the property that for each q ∈ States_M and a ∈ Σ_M, the finite set Δ_M(q, a) contains exactly one element, call it δ_M(q, a).

Thus in this case the transitions in M are essentially specified by a next-state function, δ_M, mapping each (state, input symbol)-pair (q, a) to the unique state δ_M(q, a) which can be reached from q by a transition labelled a:

q -a-> q' iff q' = δ_M(q, a).

Slide 14

An NFA with ε-transitions (NFAε) is specified by an NFA M together with a binary relation, called the ε-transition relation, on the set States_M. We write

q -ε-> q'

to indicate that the pair of states (q, q') is in this relation.

Example (with input alphabet = {a, b}):

[diagram: an example NFAε over {a, b}; its accepted language is identified in Section 2.2 below]

Slide 15
L(M), language accepted by an NFAε M

consists of all strings u over the alphabet Σ of input symbols satisfying q0 =u=> q, with q0 the initial state and q some accepting state. Here =u=> is defined by:

q =ε=> q' iff q = q' or there is a sequence of one or more ε-transitions in M from q to q';
q =a=> q' (for a ∈ Σ) iff there are states q1, q2 with q =ε=> q1, q1 -a-> q2 and q2 =ε=> q';
q =ab=> q' (for a, b ∈ Σ) iff there is a state q1 with q =a=> q1 and q1 =b=> q';

and similarly for longer strings.

Slide 16

When using an NFAε M to accept a string u of input symbols, we are interested in sequences of transitions in which the symbols in u occur in the correct order, but with zero or more ε-transitions before or after each one. We write q =u=> q' to indicate that such a sequence exists from state q to state q' in the NFAε M. Then, by definition, u is accepted by the NFAε M iff q0 =u=> q holds for the start state q0 and some accepting state q: see Slide 16. For example, for the NFAε on Slide 15, it is not too hard to see that the language accepted consists of all strings which either contain two consecutive a's or contain two consecutive b's, i.e. the language determined by the regular expression (a|b)*(aa|bb)(a|b)*.
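Accepting a string with an NFAε adds one ingredient to the NFA simulation: before reading anything, and again after each input symbol, we close the current set of states under ε-transitions. The machine coded below is an assumption of mine, not necessarily the exact diagram of Slide 15, but it accepts the same language: two consecutive a's or two consecutive b's.

```python
# A hypothetical NFA^eps for (a|b)*(aa|bb)(a|b)*: from q0 (which loops on
# a and b) the machine can silently commit to looking for 'aa' (via q1)
# or for 'bb' (via q3); q5 is an accepting sink.
Delta = {
    ('q0', 'a'): {'q0'}, ('q0', 'b'): {'q0'},
    ('q1', 'a'): {'q2'}, ('q2', 'a'): {'q5'},
    ('q3', 'b'): {'q4'}, ('q4', 'b'): {'q5'},
    ('q5', 'a'): {'q5'}, ('q5', 'b'): {'q5'},
}
eps = {'q0': {'q1', 'q3'}}   # the epsilon-transition relation

def eps_closure(states):
    """All states reachable from `states` by zero or more eps-transitions."""
    stack, seen = list(states), set(states)
    while stack:
        q = stack.pop()
        for q2 in eps.get(q, set()):
            if q2 not in seen:
                seen.add(q2)
                stack.append(q2)
    return seen

def accepts(u):
    current = eps_closure({'q0'})
    for a in u:
        step = {q2 for q in current for q2 in Delta.get((q, a), set())}
        current = eps_closure(step)
    return bool(current & {'q5'})

assert accepts('aa') and accepts('abba')
assert not accepts('abab') and not accepts('')
```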
2.3 A subset construction
Note that every DFA is an NFA (whose transition relation is deterministic) and that every NFA is an NFAε (whose ε-transition relation is empty). It might seem that non-determinism and ε-transitions allow a greater range of languages to be characterised as recognisable by a finite automaton, but this is not so. We can use a construction, called the subset construction, to convert an NFAε M into a DFA PM accepting the same language (at the expense of increasing the number of states, possibly exponentially). Slide 17 gives an example of this construction. The name 'subset construction' refers to the fact that there is one state of PM for each subset of the set States_M of states of M. Given two subsets S, S' ⊆ States_M, there is a transition S -a-> S' in PM just in case S' consists of all the states reachable from states in S via the relation =a=> defined on Slide 16, i.e. such that we can get from a state in S to a state in S' in M via finitely many ε-transitions followed by an a-transition followed by finitely many ε-transitions.
Example of the subset construction

[diagram and table: the NFAε of Slide 15 together with the next-state table of the resulting DFA PM, with one row for each subset S of states and columns giving δ_PM(S, a) and δ_PM(S, b); the details of the table are not legible in this copy]

Slide 17

By definition, the start state of PM is the subset of States_M whose elements are the states reachable by ε-transitions from the start state of M; and a subset S ⊆ States_M is an accepting state of PM iff some accepting state of M is an element of S. Indeed, in this case L(PM) = L(M). The fact that M and PM accept the same language in this case is no accident, as the Theorem on Slide 18 shows. That slide also gives the definition of the subset construction in general.
Theorem. For each NFAε M there is a DFA PM with the same alphabet of input symbols and accepting exactly the same strings as M, i.e. with L(PM) = L(M).

Definition of PM (refer to Slide 12):

• States_PM := {S | S ⊆ States_M}
• Σ_PM := Σ_M
• δ_PM(S, a) := {q' | ∃ q ∈ S. q =a=> q' in M}
• s_PM := {q | s_M =ε=> q}
• Accept_PM := {S ∈ States_PM | ∃ q ∈ S. q ∈ Accept_M}

Slide 18

To prove the theorem on Slide 18, given any NFAε M we have to show that L(M) = L(PM). We split the proof into two halves.
Proof that L(M) ⊆ L(PM). Consider the case of the null string first: if ε ∈ L(M), then s_M =ε=> q for some q ∈ Accept_M, hence q ∈ s_PM and thus s_PM ∈ Accept_PM, i.e. ε ∈ L(PM). Now given any non-null string u = a1 a2 ... an, if u is accepted by M then there is a sequence of transitions in M of the form

(1)  s_M =a1=> q1 =a2=> q2 ... =an=> qn ∈ Accept_M.

Since it is deterministic, feeding a1 a2 ... an to PM results in the sequence of transitions

(2)  s_PM -a1-> S1 -a2-> S2 ... -an-> Sn

where S1 = δ_PM(s_PM, a1), S2 = δ_PM(S1, a2), etc. By definition of δ_PM (Slide 18), from (1) we deduce

q1 ∈ S1, so q2 ∈ S2, ..., so qn ∈ Sn

and hence Sn ∈ Accept_PM (because qn ∈ Accept_M). Therefore (2) shows that u is accepted by PM.

Proof that L(PM) ⊆ L(M). Consider the case of the null string first: if ε ∈ L(PM), then s_PM ∈ Accept_PM and so there is some q ∈ s_PM with q ∈ Accept_M, i.e. s_M =ε=> q and thus ε ∈ L(M). Now given any non-null string u = a1 a2 ... an, if u is accepted by PM then there is a sequence of transitions in PM of the form (2) with Sn ∈ Accept_PM, i.e. with Sn containing some qn ∈ Accept_M. Now since qn ∈ Sn = δ_PM(S_{n-1}, an), by definition of δ_PM there is some q_{n-1} ∈ S_{n-1} with q_{n-1} =an=> qn in M. Then since q_{n-1} ∈ S_{n-1} = δ_PM(S_{n-2}, a_{n-1}), there is some q_{n-2} ∈ S_{n-2} with q_{n-2} =a_{n-1}=> q_{n-1}. Working backwards in this way we can build up a sequence of transitions like (1) until, at the last step, from the fact that q1 ∈ S1 = δ_PM(s_PM, a1), we deduce that there is some q0 ∈ s_PM with q0 =a1=> q1 in M; and q0 ∈ s_PM means s_M =ε=> q0. So we get a sequence of transitions (1) with qn ∈ Accept_M, and hence u is accepted by M.
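The definition of PM on Slide 18 can be turned into a small program: starting from the ε-closure of M's start state, repeatedly apply δ_PM(S, a) = {q' | ∃ q ∈ S. q =a=> q'} to discover new subset-states. Building only the reachable subsets, rather than all of them, is a standard optimisation; the test machine below is my earlier hypothetical NFAε for 'two consecutive a's or two consecutive b's', an assumption rather than the exact Slide 15 diagram.

```python
def subset_construction(alphabet, Delta, eps, start, accept):
    """Convert an NFA^eps to a DFA over frozensets of states (reachable part)."""
    def eps_closure(S):
        stack, seen = list(S), set(S)
        while stack:
            q = stack.pop()
            for q2 in eps.get(q, set()):
                if q2 not in seen:
                    seen.add(q2)
                    stack.append(q2)
        return frozenset(seen)

    start_d = eps_closure({start})
    delta_d, todo, seen = {}, [start_d], {start_d}
    while todo:
        S = todo.pop()
        for a in alphabet:
            # delta_PM(S, a): states reachable from S via eps* a eps*
            T = eps_closure({q2 for q in S for q2 in Delta.get((q, a), set())})
            delta_d[(S, a)] = T
            if T not in seen:
                seen.add(T)
                todo.append(T)
    accept_d = {S for S in seen if S & accept}
    return start_d, delta_d, accept_d

# the assumed NFA^eps for 'two consecutive a's or two consecutive b's'
Delta = {('q0', 'a'): {'q0'}, ('q0', 'b'): {'q0'},
         ('q1', 'a'): {'q2'}, ('q2', 'a'): {'q5'},
         ('q3', 'b'): {'q4'}, ('q4', 'b'): {'q5'},
         ('q5', 'a'): {'q5'}, ('q5', 'b'): {'q5'}}
eps = {'q0': {'q1', 'q3'}}
start_d, delta_d, accept_d = subset_construction('ab', Delta, eps, 'q0', {'q5'})

def dfa_accepts(u):
    S = start_d
    for a in u:
        S = delta_d[(S, a)]   # deterministic: one subset-state per input
    return S in accept_d

assert dfa_accepts('aa') and dfa_accepts('babba')
assert not dfa_accepts('abab')
```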
2.4 Summary
The important concepts in Section 2 are those of a deterministic ﬁnite automaton (DFA) and the language of strings that it accepts. Note that if we know that a language is of the form for some DFA , then we have a method for deciding whether or not any given string (over the alphabet of ) is in or not: begin in the start state of and carry out the sequence of transitions given by reading from left to right (at each step the next state is uniquely determined because is deterministic); if the ﬁnal state reached is accepting, then is in , otherwise it is not. We also introduced other kinds of ﬁnite automata (with nondeterminism and transitions) and proved that they determine exactly the same class of languages as DFAs.
2.5 Exercises
Exercise 2.5.1. For each of the two languages mentioned in Exercise 1.4.2 find a DFA that accepts exactly that set of strings.

Exercise 2.5.2. The example of the subset construction given on Slide 17 constructs a DFA with eight states. Give a DFA with the same language of accepted strings, but fewer states. Give an NFA with even fewer states that does the same job.
Exercise 2.5.3. Given a DFA M, construct a new DFA with the same alphabet of input symbols Σ and with the property that for all u ∈ Σ*, u is accepted by the new DFA iff u is not accepted by M.

Exercise 2.5.4. Given two DFAs M1 and M2 with the same alphabet of input symbols, construct a third such DFA M with the property that u is accepted by M iff it is accepted by both M1 and M2. [Hint: take the states of M to be ordered pairs (q1, q2) of states with q1 ∈ States_M1 and q2 ∈ States_M2.]

Tripos questions 2001.2.1(d) 2000.2.1(b) 1998.2.1(s) 1995.2.19 1992.4.9(a)
3 Regular Languages, I
Slide 19 defines the notion of a regular language, which is a set of strings of the form L(M) for some DFA M (cf. Slides 11 and 14). The slide also gives the statement of Kleene's Theorem, which connects regular languages with the notion of matching strings to regular expressions introduced in Section 1: the collection of regular languages coincides with the collection of languages determined by matching strings against regular expressions. The aim of this section is to prove part (a) of Kleene's Theorem. We will tackle part (b) in Section 4.
Deﬁnition A language is regular iff it is the set of strings accepted by some deterministic ﬁnite automaton.
Kleene's Theorem (cf. Slide 8).
(a) For any regular expression r, L(r) is a regular language.
(b) Conversely, every regular language is of the form L(r) for some regular expression r.

Slide 19
3.1 Finite automata from regular expressions
Given a regular expression r, over an alphabet Σ say, we wish to construct a DFA M with alphabet of input symbols Σ and with the property that for each u ∈ Σ*, u matches r iff u is accepted by M, so that L(r) = L(M). Note that by the Theorem on Slide 18 it is enough to construct an NFAε N with the property L(r) = L(N). For then we can apply the subset construction to N to obtain a DFA with the same language of accepted strings. Working with finite automata that are non-deterministic and have ε-transitions simplifies the construction of a suitable finite automaton from r. Let us fix on a particular alphabet Σ and from now on only consider finite automata whose set of input symbols is Σ. The construction of an NFAε for each regular expression r over Σ proceeds by recursion on the syntactic structure of the regular expression, as follows.
(i) For each atomic form of regular expression — a (for a ∈ Σ), ε, and ∅ — we give an NFAε accepting just the strings matching that regular expression.

(ii) Given any NFAεs M1 and M2, we construct a new NFAε, Union(M1, M2), with the property

L(Union(M1, M2)) = { u | u ∈ L(M1) or u ∈ L(M2) }.

Thus L(Union(M1, M2)) = L(r1|r2) when L(M1) = L(r1) and L(M2) = L(r2).

(iii) Given any NFAεs M1 and M2, we construct a new NFAε, Concat(M1, M2), with the property

L(Concat(M1, M2)) = { u1 u2 | u1 ∈ L(M1) and u2 ∈ L(M2) }.

Thus L(Concat(M1, M2)) = L(r1 r2) when L(M1) = L(r1) and L(M2) = L(r2).

(iv) Given any NFAε M, we construct a new NFAε, Star(M), with the property

L(Star(M)) = { u1 u2 ··· un | n ≥ 0 and each ui ∈ L(M) }.

Thus L(Star(M)) = L(r*) when L(M) = L(r).

Thus starting with step (i) and applying the constructions in steps (ii)–(iv) over and over again, we eventually build NFAεs with the required property for every regular expression r. Put more formally, one can prove the statement

for all n ≥ 0, and for all regular expressions r of size n, there exists an NFAε M such that L(r) = L(M)

by mathematical induction on n, using step (i) for the base case and steps (ii)–(iv) for the induction steps. Here we can take the size of a regular expression to be the number of occurrences of union (|), concatenation, or star (*) in it.

Step (i) Slide 20 gives NFAεs whose languages of accepted strings are respectively L(a) = {a} (for each a ∈ Σ), L(ε) = {ε}, and L(∅) = ∅ (this last automaton accepts no strings).

NFAs for atomic regular expressions

Slide 20
Step (ii) Given NFAεs M1 and M2, the construction of Union(M1, M2) is pictured on Slide 21. First, renaming states if necessary, we assume that the state sets of M1 and M2 are disjoint. Then the states of Union(M1, M2) are all the states in either M1 or M2, together with a new state, called q0 say. The start state of Union(M1, M2) is this q0 and its accepting states are all the states that are accepting in either M1 or M2. Finally, the transitions of Union(M1, M2) are given by all those in either M1 or M2, together with two new ε-transitions out of q0 to the start states of M1 and M2. Thus if u ∈ L(M1), i.e. if we have a transition sequence from the start state of M1 to some accepting state q of M1, then prefixing the ε-transition from q0 yields a sequence showing that u ∈ L(Union(M1, M2)). Similarly for M2. So L(Union(M1, M2)) contains the union of L(M1) and L(M2). Conversely, if u is accepted by Union(M1, M2), there is a transition sequence from q0 ending in a state accepting in M1 or in M2. Clearly, in either case this transition sequence has to begin with one or other of the ε-transitions from q0, and thereafter we get a transition sequence entirely in one or other of M1 or M2, finishing in an acceptable state for that one. So if u ∈ L(Union(M1, M2)), then either u ∈ L(M1) or u ∈ L(M2). So we do indeed have L(Union(M1, M2)) = { u | u ∈ L(M1) or u ∈ L(M2) }.
Slide 21
Step (iii) Given NFAεs M1 and M2, the construction of Concat(M1, M2) is pictured on Slide 22. First, renaming states if necessary, we assume that the state sets of M1 and M2 are disjoint. Then the states of Concat(M1, M2) are all the states in either M1 or M2. The start state of Concat(M1, M2) is the start state of M1. The accepting states of Concat(M1, M2) are the accepting states of M2. Finally, the transitions of Concat(M1, M2) are given by all those in either M1 or M2, together with new ε-transitions from each accepting state of M1 to the start state of M2 (only one such new transition is shown in the picture). Thus if u1 ∈ L(M1) and u2 ∈ L(M2), there are transition sequences for u1 in M1 ending in an accepting state of M1, and for u2 in M2 ending in an accepting state of M2. These combine, via one of the new ε-transitions, to yield a transition sequence in Concat(M1, M2) witnessing the fact that u1 u2 is accepted by Concat(M1, M2). Conversely, it is not hard to see that every u ∈ L(Concat(M1, M2)) is of this form. For any transition sequence witnessing the fact that u is accepted starts out in the states of M1 but finishes in the disjoint set of states of M2. At some point in the sequence one of the new ε-transitions occurs, to get from M1 to M2, and thus we can split u as u = u1 u2 with u1 accepted by M1 and u2 accepted by M2. So we do indeed have L(Concat(M1, M2)) = { u1 u2 | u1 ∈ L(M1) and u2 ∈ L(M2) }.
Slide 22
Step (iv) Given an NFAε M, the construction of Star(M) is pictured on Slide 23. The states of Star(M) are all those of M together with a new state, called q0 say. The start state of Star(M) is q0 and this is also the only accepting state of Star(M). Finally, the transitions of Star(M) are all those of M together with new ε-transitions from q0 to the start state of M and from each accepting state of M to q0 (only one of this latter kind of transition is shown in the picture). Clearly, Star(M) accepts ε (since its start state is accepting) and any concatenation of one or more strings accepted by M. Conversely, if u is accepted by Star(M), the occurrences of q0 in a transition sequence witnessing this fact allow us to split u into the concatenation of zero or more strings, each of which is accepted by M. So we do indeed have L(Star(M)) = { u1 u2 ··· un | n ≥ 0 and each ui ∈ L(M) }.
The only accepting state of Star(M) is the new state q0.

Slide 23
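Steps (i)–(iv) are directly implementable. The sketch below (an illustrative encoding, not code from the notes) builds an NFAε for a regular expression by composing the four constructions; acceptance is then checked by searching for an accepting path, treating ε-transitions as free moves:

```python
from itertools import count

_fresh = count()  # supply of new state names, so renaming keeps state sets disjoint

def atom(a):
    """Step (i): NFA accepting just the one-symbol string a (use a == "" for epsilon)."""
    s, t = next(_fresh), next(_fresh)
    return {s, t}, {(s, a, t)}, s, {t}

def union(m1, m2):
    """Step (ii): new start state q0 with epsilon-transitions to both machines."""
    (q1, t1, s1, f1), (q2, t2, s2, f2) = m1, m2
    q0 = next(_fresh)
    return q1 | q2 | {q0}, t1 | t2 | {(q0, "", s1), (q0, "", s2)}, q0, f1 | f2

def concat(m1, m2):
    """Step (iii): epsilon-transitions from M1's accepting states to M2's start."""
    (q1, t1, s1, f1), (q2, t2, s2, f2) = m1, m2
    return q1 | q2, t1 | t2 | {(q, "", s2) for q in f1}, s1, f2

def star(m):
    """Step (iv): new start/accept state q0 looping through M."""
    q, t, s, f = m
    q0 = next(_fresh)
    return q | {q0}, t | {(q0, "", s)} | {(p, "", q0) for p in f}, q0, {q0}

def accepts(m, u):
    """Search the NFA for an accepting path on u (epsilon moves consume no input)."""
    states, trans, start, final = m
    seen, stack = set(), [(start, 0)]
    while stack:
        q, i = stack.pop()
        if (q, i) in seen:
            continue
        seen.add((q, i))
        if i == len(u) and q in final:
            return True
        for (p, a, r) in trans:
            if p == q:
                if a == "":
                    stack.append((r, i))
                elif i < len(u) and a == u[i]:
                    stack.append((r, i + 1))
    return False

# (a|b)* a : strings over {a, b} ending in a.
m = concat(star(union(atom("a"), atom("b"))), atom("a"))
print(accepts(m, "abba"))  # True
print(accepts(m, "ab"))    # False
```

As the notes observe, the automata this produces are far from minimal; the point is that the translation is completely mechanical.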
This completes the proof of part (a) of Kleene's Theorem (Slide 19). Figure 1 shows how the step-by-step construction applies in the case of a particular regular expression r to produce an NFAε M satisfying L(r) = L(M). Of course an automaton with fewer states and transitions doing the same job can be crafted by hand. The point of the construction is that it provides an automatic way of producing automata for any given regular expression.
3.2 Decidability of matching
The proof of part (a) of Kleene's Theorem provides us with a positive answer to question (a) on Slide 9. In other words, it provides a method that, given any string u and regular expression r, decides whether or not u matches r. The method is:

- construct a DFA M satisfying L(r) = L(M);
- beginning in M's start state, carry out the sequence of transitions in M corresponding to the string u, reaching some state q of M (because M is deterministic, there is a unique such transition sequence);
- check whether q is accepting or not: if it is, then u ∈ L(M) = L(r), so u matches r; otherwise u ∉ L(M) = L(r), so u does not match r.

Note. The subset construction used to convert the NFAε N resulting from steps (i)–(iv) of Section 3.1 to a DFA produces an exponential blow-up of the number of states.
Figure 1: Steps in constructing an NFAε for the example regular expression (one step of type (i) for each atomic subexpression, followed by steps of types (ii), (iii), and (iv)).
(The resulting DFA has 2^n states if the NFAε has n.) This makes the method described above very inefficient. (Much more efficient algorithms exist.)
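The subset construction itself can be sketched as follows (an illustrative encoding: the NFAε's transitions are a set of triples as in the earlier sketch, and each DFA state is an ε-closed set of NFA states). In the worst case the set of reachable subsets really can be exponential in the number of NFA states:

```python
def eps_closure(trans, S):
    """All states reachable from the set S by epsilon-transitions alone."""
    S = set(S)
    changed = True
    while changed:
        changed = False
        for (p, a, r) in trans:
            if a == "" and p in S and r not in S:
                S.add(r)
                changed = True
    return frozenset(S)

def determinise(trans, start, final, alphabet):
    """Subset construction: DFA states are epsilon-closed sets of NFA states."""
    start_set = eps_closure(trans, {start})
    dfa_trans, todo, seen = {}, [start_set], {start_set}
    while todo:
        S = todo.pop()
        for a in alphabet:
            step = {r for (p, x, r) in trans if x == a and p in S}
            T = eps_closure(trans, step)
            dfa_trans[(S, a)] = T
            if T not in seen:
                seen.add(T)
                todo.append(T)
    accepting = {S for S in seen if S & final}
    return dfa_trans, start_set, accepting

# Tiny NFA over {a} accepting strings containing at least one a.
nfa_trans = {(0, "a", 1), (1, "a", 1)}
dt, s0, acc = determinise(nfa_trans, 0, {1}, {"a"})
q = s0
for ch in "aa":
    q = dt[(q, ch)]
print(q in acc)  # True
```

Only subsets reachable from the start set are constructed here, which in practice often keeps the state count well below the 2^n worst case.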
3.3 Exercises
Exercise 3.3.1. Why can't the automaton Star(M) required in step (iv) of Section 3.1 be constructed simply by taking M, making its start state the only accepting state, and adding new ε-transitions back from each old accepting state to the start state?
Exercise 3.3.2. Work through the steps in Section 3.1 to construct an NFAε whose language of accepted strings is that of a given regular expression. Do the same for some other regular expressions.

Exercise 3.3.3. Show that any finite set of strings is a regular language.

Tripos question 1992.4.9(b) [needs Slide 26 from Section 4]
4 Regular Languages, II
The aim of this section is to prove part (b) of Kleene’s Theorem (Slide 19).
4.1 Regular expressions from ﬁnite automata
Given any DFA M, we have to find a regular expression r (over the alphabet of input symbols of M) satisfying L(r) = L(M). In fact we do something more general than this, as described in the Lemma on Slide 24.¹ Note that if we can find such regular expressions r^S_{q,q'} for any choice of S, q, and q', then the problem is solved. For taking S to be the whole of States_M and q to be the start state s, then by definition of r^S_{s,q'}, a string u matches this regular expression iff there is a transition sequence taking s to q' in M. As q' ranges over the finitely many accepting states, q1, …, qk say, we match exactly all the strings accepted by M. In other words the regular expression r^S_{s,q1}|···|r^S_{s,qk} has the property we want for part (b) of Kleene's Theorem. (In case k = 0, i.e. there are no accepting states in M, then L(M) is empty and so we can use the regular expression ∅.)
Lemma. Given an NFA M, for each subset S ⊆ States_M and each pair of states q, q' ∈ States_M, there is a regular expression r^S_{q,q'} satisfying

L(r^S_{q,q'}) = { u ∈ Σ* | q -u-> q' in M, with all intermediate states of the sequence in S }.

Hence if the start state is s and the accepting states are q1, …, qk (k = number of accepting states), then taking S to be the whole of States_M we have L(M) = L(r^S_{s,q1}|···|r^S_{s,qk}). (In case k = 0, take the regular expression to be ∅.)

Slide 24
¹The Lemma works just as well whether M is deterministic or non-deterministic; it also works for NFAεs, provided we replace the transition relation -a-> by =a=> (cf. Slide 16).
Proof of the Lemma on Slide 24. The regular expression r^S_{q,q'} can be constructed by induction on the number of elements in the subset S.
Base case, S = ∅. In this case, for each pair of states q, q', we are looking for a regular expression to describe the set of strings

{ u | q -u-> q' in M with no intermediate states }.

Each element of this set is either a single input symbol a (if q -a-> q' holds in M) or possibly ε, in case q = q'. If there are no input symbols that take us from q to q' in M, we can simply take r^∅_{q,q'} to be ∅ if q ≠ q', and ε if q = q'. On the other hand, if there are some such input symbols, a1, …, ak say, we can take r^∅_{q,q'} to be a1|···|ak if q ≠ q', and a1|···|ak|ε if q = q'.

Induction step. Suppose we have defined the required regular expressions for all subsets of states with n elements. If S is a subset with n+1 elements, choose some element q0 ∈ S and consider the n-element set S \ {q0}. Then for any pair of states q, q', by inductive hypothesis we have already constructed the regular expressions A = r^{S\{q0}}_{q,q'}, B = r^{S\{q0}}_{q,q0}, C = r^{S\{q0}}_{q0,q0}, and D = r^{S\{q0}}_{q0,q'}. Consider the regular expression A|B C* D. Clearly every string matching A|B C* D is in the set

{ u | q -u-> q' in M with all intermediate states in S }.

Conversely, if u is in this set, consider the number of times the sequence of transitions q -u-> q' passes through state q0. If this number is zero then u is in L(A) (by definition of A). Otherwise this number is k ≥ 1 and the sequence splits into k+1 pieces: the first piece is in L(B) (as the sequence goes from q to the first occurrence of q0), the next k−1 pieces are in L(C) (as the sequence goes from one occurrence of q0 to the next), and the last piece is in L(D) (as the sequence goes from the last occurrence of q0 to q'). So in this case u is in L(B C* D). So in either case u is in L(A|B C* D). So to complete the induction step we can define r^S_{q,q'} to be this regular expression A|B C* D.
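Written in the notation of the Lemma, the induction step just described can be summarised by the standard formula (a LaTeX rendering, with q0 the state removed from S):

```latex
r^{S}_{q,q'} \;=\; r^{S\setminus\{q_0\}}_{q,q'}
  \;\mid\; r^{S\setminus\{q_0\}}_{q,q_0}
  \,\bigl(r^{S\setminus\{q_0\}}_{q_0,q_0}\bigr)^{*}
  \, r^{S\setminus\{q_0\}}_{q_0,q'}
```

Reading left to right: either the path from q to q' avoids q0 altogether, or it reaches q0, loops on q0 some number of times, and finally leaves for q'.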
4.2 An example
Perhaps an example will help in understanding the rather clever argument in Section 4.1. The example will also demonstrate that we do not have to pursue the inductive construction of the regular expression down to the bitter end (the base case S = ∅): often it is possible to find some of the regular expressions r^S_{q,q'} one needs by ad hoc arguments.
Note also that at the inductive steps in the construction of a regular expression for L(M) we are free to choose which state q0 to remove from the current state set S. A good rule of thumb is: choose a state that disconnects the automaton as much as possible.
Some of the component regular expressions can be found by direct inspection.

Slide 25

As an example, consider the NFA shown on Slide 25. Since the start state is also the only accepting state, the language of accepted strings is that determined by a single regular expression of the kind given by the Lemma on Slide 24, with S the whole set of states. Choosing to remove one state from the state set, the induction step yields an instance

(3)  A|B C* D

of the formula from Section 4.1. Some of the component expressions can be determined by direct inspection; to calculate the others, we choose to remove a further state. These regular expressions can all be determined by inspection, as shown on Slide 25. Substituting all these values into (3), we get a regular expression whose matching strings comprise the language accepted by the NFA on Slide 25. (Clearly, one could simplify it to a smaller but equivalent regular expression, in the sense of Slide 8, but we do not bother to do so.)
4.3 Complement and intersection of regular languages
We saw in Section 3.2 that part (a) of Kleene's Theorem allows us to answer question (a) on Slide 9. Now that we have proved the other half of the theorem, we can say more about question (b) on that slide.

Complementation. Recall that on page 8 we mentioned that for each regular expression r over an alphabet Σ, we can find a regular expression ~r that determines the complement of the language determined by r:

L(~r) = { u ∈ Σ* | u ∉ L(r) }.

As we now show, this is a consequence of Kleene's Theorem.
Not(M):
- states of Not(M) = states of M
- transitions of Not(M) = transitions of M
- start state of Not(M) = start state of M
- accepting states of Not(M) = { q ∈ States_M | q is not accepting in M }.

Provided M is a deterministic finite automaton, then u is accepted by Not(M) iff it is not accepted by M:

L(Not(M)) = { u ∈ Σ* | u ∉ L(M) }.

Slide 26
Lemma 4.3.1. If L is a regular language over an alphabet Σ, then its complement { u ∈ Σ* | u ∉ L } is also regular.

Proof. Since L is regular, by definition there is a DFA M such that L = L(M). Let Not(M) be the DFA constructed from M as indicated on Slide 26. Then { u ∈ Σ* | u ∉ L } is the set of strings accepted by Not(M) and hence is regular.

Given a regular expression r, by part (a) of Kleene's Theorem there is a DFA M such that L(r) = L(M). Then by part (b) of the theorem applied to the DFA Not(M), we can find a regular expression ~r with L(~r) = L(Not(M)). Since L(Not(M)) = { u ∈ Σ* | u ∉ L(M) } = { u ∈ Σ* | u ∉ L(r) }, this ~r is the regular expression we need for the complement of r.

Note. The construction given on Slide 26 can be applied to a finite automaton M whether or not it is deterministic. However, for L(Not(M)) to equal { u ∈ Σ* | u ∉ L(M) } we need M to be deterministic. See Exercise 4.4.2.

Intersection. As another example of the power of Kleene's Theorem, given regular expressions r1 and r2 we can show the existence of a regular expression r1 & r2 with the property:

u matches r1 & r2 iff u matches r1 and u matches r2.

This can be deduced from the following lemma.

Lemma 4.3.2. If L1 and L2 are regular languages over an alphabet Σ, then their intersection L1 ∩ L2 = { u ∈ Σ* | u ∈ L1 and u ∈ L2 } is also regular.

Proof. Since L1 and L2 are regular languages, there are DFAs M1 and M2 such that Li = L(Mi) (i = 1, 2). Let And(M1, M2) be the DFA constructed from M1 and M2 as on Slide 27. It is not hard to see that And(M1, M2) has the property that any u ∈ Σ* is accepted by And(M1, M2) iff it is accepted by both M1 and M2. Thus L1 ∩ L2 = L(And(M1, M2)) is a regular language.
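The complement construction of Slide 26 is just a matter of flipping the accepting states. A minimal sketch (illustrative encoding, as in the earlier sketches, with the state set carried explicitly):

```python
def complement_dfa(states, delta, start, accepting):
    """Not(M): same states, transitions and start state; accepting states flipped.
    Correct only because M is deterministic and total (every (state, symbol)
    has exactly one next state)."""
    return states, delta, start, states - accepting

def run(delta, start, accepting, u):
    q = start
    for a in u:
        q = delta[(q, a)]
    return q in accepting

# M: DFA over {a} accepting strings of even length.
states = {0, 1}
delta = {(0, "a"): 1, (1, "a"): 0}
_, d2, s2, acc2 = complement_dfa(states, delta, 0, {0})
print(run(d2, s2, acc2, "aaa"))  # True: odd length, so rejected by M
print(run(d2, s2, acc2, "aa"))   # False
```

Applied to a non-deterministic automaton the same flip fails (Exercise 4.4.2): a string may have both an accepting and a rejecting run, and then it is accepted by both M and the flipped machine.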
And(M1, M2):
- states of And(M1, M2) are all ordered pairs (q1, q2) with q1 ∈ States_M1 and q2 ∈ States_M2
- the alphabet of And(M1, M2) is the common alphabet Σ of M1 and M2
- (q1, q2) -a-> (q1', q2') in And(M1, M2) iff q1 -a-> q1' in M1 and q2 -a-> q2' in M2
- the start state of And(M1, M2) is the pair of start states of M1 and M2
- (q1, q2) is accepting in And(M1, M2) iff q1 is accepting in M1 and q2 is accepting in M2.

Slide 27
Thus given regular expressions r1 and r2, by part (a) of Kleene's Theorem we can find DFAs M1 and M2 with L(ri) = L(Mi) (i = 1, 2). Then by part (b) of the theorem we can find a regular expression r1 & r2 so that L(r1 & r2) = L(And(M1, M2)). Thus u matches r1 & r2 iff And(M1, M2) accepts u, iff both M1 and M2 accept u, iff u matches both r1 and r2, as required.
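The product construction And(M1, M2) from Slide 27 runs the two DFAs in lockstep on pairs of states, exactly as in the hint to Exercise 2.5.4. A sketch (illustrative encoding):

```python
def and_dfa(d1, s1, acc1, d2, s2, acc2, alphabet, states1, states2):
    """And(M1, M2): states are ordered pairs; accept iff both components accept."""
    delta = {((q1, q2), a): (d1[(q1, a)], d2[(q2, a)])
             for q1 in states1 for q2 in states2 for a in alphabet}
    start = (s1, s2)
    accepting = {(q1, q2) for q1 in acc1 for q2 in acc2}
    return delta, start, accepting

def run(delta, start, accepting, u):
    q = start
    for a in u:
        q = delta[(q, a)]
    return q in accepting

# M1: even number of a's.  M2: length divisible by 3.  Common alphabet {a, b}.
d1 = {(0, "a"): 1, (0, "b"): 0, (1, "a"): 0, (1, "b"): 1}
d2 = {(i, c): (i + 1) % 3 for i in range(3) for c in "ab"}
delta, s, acc = and_dfa(d1, 0, {0}, d2, 0, {0}, "ab", {0, 1}, {0, 1, 2})
print(run(delta, s, acc, "aab"))  # True: two a's, length 3
print(run(delta, s, acc, "ab"))   # False
```

Taking the accepting pairs to be those where either component accepts would instead give an automaton for the union of the two languages.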
4.4 Exercises
Exercise 4.4.1. Use the construction in Section 4.1 to find a regular expression for the DFA whose start state is its only accepting state and whose next-state function is given by the following table.

Exercise 4.4.2. The construction given on Slide 26 applies to both DFAs and NFAs; but for L(Not(M)) to be the complement of L(M) we need M to be deterministic. Give an example of an alphabet Σ and an NFA M with set of input symbols Σ, such that L(Not(M)) is not the same set as { u ∈ Σ* | u ∉ L(M) }.

Exercise 4.4.3. Find a complement for the given language over its alphabet Σ, i.e. a regular expression ~r over Σ satisfying L(~r) = { u ∈ Σ* | u ∉ L(r) }. Do the same for a larger alphabet.

Tripos questions 1995.2.20 1994.3.3 1988.2.3 2000.2.7
5 The Pumping Lemma
In the context of programming languages, a typical example of a regular language (Slide 19) is the set of all strings of characters which are well-formed tokens (basic keywords, identifiers, etc.) in a particular programming language, Java say. By contrast, the set of all strings which represent well-formed Java programs is a typical example of a language that is not regular. Slide 28 gives some simpler examples of non-regular languages. For example, there is no way to use a search based on matching a regular expression to find all the palindromes in a piece of text (although of course there are other kinds of algorithm for doing this).
Examples of non-regular languages:
- The set of strings in which the parentheses '(' and ')' occur well-nested.
- The set of strings over {a, b, …} which are palindromes, i.e. which read the same backwards as forwards.
- { a^n b^n | n ≥ 0 }.

Slide 28
The intuitive reason why the languages listed on Slide 28 are not regular is that a machine for recognising whether or not any given string is in the language would need infinitely many different states (whereas a characteristic feature of the machines we have been using is that they have only finitely many states). For example, to recognise that a string is of the form a^n b^n one would need to remember how many a's had been seen before the first b is encountered, requiring countably many states of the form 'just seen n a's'. This section makes this intuitive argument rigorous and describes a useful way of showing that languages such as these are not regular. The fact that a finite automaton has only finitely many states means that as we look at longer and longer strings that it accepts, we see a certain kind of repetition: the pumping lemma property given on Slide 29.
The Pumping Lemma

For every regular language L, there is a number ℓ ≥ 1 satisfying the pumping lemma property: every w ∈ L with length(w) ≥ ℓ can be expressed as a concatenation of three strings, w = u1 v u2, where u1, v and u2 satisfy:
- length(v) ≥ 1 (i.e. v ≠ ε)
- length(u1 v) ≤ ℓ
- for all n ≥ 0, u1 v^n u2 ∈ L (i.e. u1 u2 ∈ L, u1 v u2 ∈ L [but we knew that anyway], u1 v v u2 ∈ L, etc.)

Slide 29
5.1 Proving the Pumping Lemma
Since L is regular, it is equal to the set L(M) of strings accepted by some DFA M. Then we can take the number ℓ mentioned on Slide 29 to be the number of states in M. For suppose w = a1 a2 ··· an with n ≥ ℓ. If w ∈ L, then there is a transition sequence as shown at the top of Slide 30, and w can be split into three pieces as shown on that slide. Note that, by choice of i and j, length(v) = j − i ≥ 1 and length(u1 v) = j ≤ ℓ. So it just remains to check that u1 v^n u2 ∈ L for all n ≥ 0. As shown on the lower half of Slide 30, the string v takes the machine M from the repeated state back to the same state. So for any n ≥ 0, u1 v^n u2 takes us from the start state to that state, then n times round the loop from the state to itself, and then on to an accepting state. Therefore for any n ≥ 0, u1 v^n u2 is accepted by M, i.e. u1 v^n u2 ∈ L.
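The proof is constructive: given a DFA and an accepted string at least as long as the number of states, we can compute the split w = u1 v u2 by recording the states visited and finding the first repeat. A sketch (illustrative encoding):

```python
def pumping_split(delta, start, w):
    """Return (u1, v, u2) where v labels a loop on the first repeated state
    in the run of the DFA on w.  Assumes some state repeats, which the
    Pumping Lemma proof guarantees when len(w) >= number of states."""
    q, visited = start, {start: 0}
    for j, a in enumerate(w, start=1):
        q = delta[(q, a)]
        if q in visited:          # the state after step j equals the one after step i
            i = visited[q]
            return w[:i], w[i:j], w[j:]
        visited[q] = j
    raise ValueError("no repeated state: string shorter than the state count")

# DFA over {a, b} (2 states) accepting strings with an even number of a's.
delta = {(0, "a"): 1, (0, "b"): 0, (1, "a"): 0, (1, "b"): 1}
u1, v, u2 = pumping_split(delta, 0, "aba")
print((u1, v, u2))  # ('a', 'b', 'a')

# Every pumped string u1 v^n u2 is still accepted:
def run(q, s):
    for a in s:
        q = delta[(q, a)]
    return q
print(all(run(0, u1 + v * n + u2) == 0 for n in range(5)))  # True
```

The repeat is found within the first len(states) steps, which is why the split also satisfies length(u1 v) ≤ ℓ.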
If n ≥ ℓ = number of states of M, then the ℓ+1 states occurring in the transition sequence

s = q0 -a1-> q1 -a2-> ··· -an-> qn   (qn accepting)

can't all be distinct. So qi = qj for some 0 ≤ i < j ≤ ℓ, and the above transition sequence looks like

s = q0 -u1-> qi -v-> qi -u2-> qn,

where u1 = a1 ··· ai, v = a(i+1) ··· aj, and u2 = a(j+1) ··· an.

Slide 30

Note. In the above construction it is perfectly possible that i = 0, in which case u1 is the null string, ε.
Remark 5.1.1. One consequence of the pumping lemma property of L and ℓ is that if there is any string w in L of length ≥ ℓ, then L contains arbitrarily long strings. (We just 'pump up' w by increasing n.) If you did Exercise 3.3.3, you will know that if L is a finite set of strings then it is regular. In this case, what is the number ℓ with the property on Slide 29? The answer is that we can take any ℓ strictly greater than the length of the longest string in the finite set L. Then the pumping lemma property is trivially satisfied, because there are no w ∈ L with length(w) ≥ ℓ for which we have to check the condition!
5.2 Using the Pumping Lemma
The Pumping Lemma (Slide 29) says that every regular language has a certain property, namely that there exists a number ℓ with the pumping lemma property. So to show that a language L is not regular, it suffices to show that no ℓ ≥ 1 possesses the pumping lemma property for L. Because the pumping lemma property involves quite a complicated alternation of quantifiers, it will help to spell out explicitly what its negation is. This is done on Slide 31. Slide 32 gives some examples.
How to use the Pumping Lemma to prove that a language L is not regular:

for each ℓ ≥ 1, find some w ∈ L of length ≥ ℓ so that

(†)  no matter how w is split into three, w = u1 v u2, with length(u1 v) ≤ ℓ and length(v) ≥ 1, there is some n ≥ 0 for which u1 v^n u2 is not in L.

Slide 31

Examples:
(i) L1 = { a^n b^n | n ≥ 0 } is not regular. [For each ℓ ≥ 1, a^ℓ b^ℓ ∈ L1 is of length ≥ ℓ and has property (†) on Slide 31.]
(ii) L2 = { w ∈ {a, b}* | w a palindrome } is not regular. [For each ℓ ≥ 1, a^ℓ b a^ℓ ∈ L2 is of length ≥ ℓ and has property (†).]
(iii) L3 = { a^p | p prime } is not regular. [For each ℓ ≥ 1, we can find a prime p with p > 2ℓ; then a^p ∈ L3 is of length ≥ ℓ and has property (†).]

Slide 32
Proof of the examples on Slide 32. We use the method on Slide 31.
(i) For any ℓ ≥ 1, consider the string w = a^ℓ b^ℓ. It is in L1 and has length ≥ ℓ. We show that property (†) holds for this w. For suppose w = a^ℓ b^ℓ is split as w = u1 v u2 with length(u1 v) ≤ ℓ and length(v) ≥ 1. Then u1 v must consist entirely of a's, so u1 = a^r and v = a^s say, with s ≥ 1, and hence u2 = a^(ℓ−r−s) b^ℓ. Then the case n = 0 of u1 v^n u2 is not in L1, since u1 v^0 u2 = a^(ℓ−s) b^ℓ and ℓ − s ≠ ℓ (because s ≥ 1).

(ii) The argument is very similar to that for example (i), but starting with the palindrome a^ℓ b a^ℓ. Once again, the case n = 0 of u1 v^n u2 yields a string a^(ℓ−s) b a^ℓ which is not a palindrome (because s ≥ 1).

(iii) Given ℓ ≥ 1, since there are infinitely many primes, we can certainly find one p satisfying p > 2ℓ. I claim that w = a^p has property (†). For suppose w = a^p is split as w = u1 v u2 with length(u1 v) ≤ ℓ and length(v) ≥ 1. Letting s = length(v) and r = length(u1 u2), so that p = r + s, we have s ≥ 1 and also r ≥ 2 (since r = p − s ≥ p − ℓ > 2ℓ − ℓ = ℓ ≥ 1). Now consider the case n = r of u1 v^n u2: its length is r + r s = r(1 + s), which is not prime, because r ≥ 2 and 1 + s ≥ 2. Therefore u1 v^r u2 is not in L3.
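The case analysis in (i) can be checked mechanically for small ℓ: every admissible split of a^ℓ b^ℓ fails to pump. A brute-force sketch (not part of the proof, just a sanity check for small values of n):

```python
def pumps(w, lang, ell, max_n=3):
    """Is there a split w = u1 v u2 with len(u1 v) <= ell and len(v) >= 1
    such that u1 v^n u2 stays in the language for all tested n?"""
    for j in range(1, ell + 1):          # j = len(u1 v)
        for i in range(j):               # i = len(u1)
            u1, v, u2 = w[:i], w[i:j], w[j:]
            if all(lang(u1 + v * n + u2) for n in range(max_n + 1)):
                return True
    return False

# Membership test for L1 = { a^n b^n | n >= 0 }.
anbn = lambda s: len(s) % 2 == 0 and s == "a" * (len(s) // 2) + "b" * (len(s) // 2)

for ell in range(1, 6):
    assert not pumps("a" * ell + "b" * ell, anbn, ell)
print("no admissible split of a^l b^l pumps, for l = 1..5")
```

By contrast, for a genuinely regular language such as a* the same search does find a pumpable split, as the Pumping Lemma guarantees.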
Remark 5.2.1. Unfortunately, the method on Slide 31 can't cope with every non-regular language. This is because the pumping lemma property is a necessary, but not a sufficient, condition for a language to be regular. In other words, there do exist languages L for which a number ℓ ≥ 1 can be found satisfying the pumping lemma property on Slide 29, but which nonetheless are not regular. Slide 33 gives an example of such an L.
Example of a non-regular language that satisfies the 'pumping lemma property':

L = { c^m a^n b^n | m ≥ 1 and n ≥ 0 } ∪ { a^m b^n | m, n ≥ 0 }

satisfies the pumping lemma property on Slide 29 with ℓ = 1. [For any w ∈ L of length ≥ 1, we can take u1 = ε, v = first letter of w, and u2 = rest of w.] But L is not regular. [See Exercise 5.4.2.]
Slide 33
5.3 Decidability of language equivalence
The proof of the Pumping Lemma provides us with a positive answer to question (c) on Slide 9. In other words, it provides a method that, given any two regular expressions r1 and r2 (over the same alphabet Σ), decides whether or not the languages they determine are equal, L(r1) = L(r2). First note that this problem can be reduced to deciding whether or not the set of strings accepted by any given DFA is empty. For L(r1) = L(r2) iff L(r1) ⊆ L(r2) and L(r2) ⊆ L(r1). Using the results about complementation and intersection in Section 4.3, we can reduce the question of whether or not L(r1) ⊆ L(r2) to the question of whether or not a certain language is empty, since

L(r1) ⊆ L(r2)  iff  L(r1) ∩ { u ∈ Σ* | u ∉ L(r2) } = ∅.
By Kleene's Theorem, given r1 and r2 we can first construct regular expressions determining L(r1) ∩ { u | u ∉ L(r2) } and L(r2) ∩ { u | u ∉ L(r1) }, then construct DFAs M1 and M2 accepting these two languages. Then r1 and r2 are equivalent iff the languages accepted by M1 and by M2 are both empty. The fact that, given any DFA M, one can decide whether or not L(M) is empty follows from the Lemma on Slide 34. For then, to check whether or not L(M) is empty, we just have to check whether or not any of the finitely many strings of length less than the number of states of M is accepted by M.
y $ ~ y fXqIU9t ~ y qxUV& { $¡ 9It { y Vt9It n¥It ~ ~ q y t y y ~ y ~ t { ~ faxU q9I¥It y ~ VUq
~ t y q9IpV9t
q
~ t { y q9I8V9It
{ R} tzx  { y
~ t y Xq9sxV&¥It
w v
~ t y q949It
q t r usq
[For any
of length
, can take
,
ﬁrst letter of
g p o
satisﬁes the pumping lemma property on Slide 29 with
j i kGf h
.
,
j i knf
g wf
h v6X m e d { ~ y $fXqB¥It { ~ kx ~ t { y q9 V¥It ~ q y t ~ 9Iq9t
e d &Yq l
and
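The emptiness check just described (test every string of length less than the number of states, as justified by the Lemma on Slide 34) can be sketched directly. This is a minimal sketch with our own DFA encoding, not code from the notes:

```python
# Deciding whether a DFA accepts any string at all: if L(M) is non-empty,
# M accepts some string shorter than its number of states, so it suffices
# to test those finitely many strings.
from itertools import product

def accepts(delta, start, accept, s):
    """Run a DFA (delta: dict (state, symbol) -> state) on the string s."""
    q = start
    for a in s:
        q = delta[(q, a)]
    return q in accept

def is_empty(states, alphabet, delta, start, accept):
    k = len(states)
    for n in range(k):  # all lengths strictly less than the number of states
        for s in product(alphabet, repeat=n):
            if accepts(delta, start, accept, s):
                return False
    return True

# Example: a 2-state DFA over {a, b} accepting strings with an odd number of a's.
delta = {("even", "a"): "odd", ("even", "b"): "even",
         ("odd", "a"): "even", ("odd", "b"): "odd"}
print(is_empty({"even", "odd"}, "ab", delta, "even", {"odd"}))  # False: "a" is accepted
print(is_empty({"even", "odd"}, "ab", delta, "even", set()))    # True: no accepting states
```

Combined with the product constructions of Section 4.3, this gives the decision procedure for regular-expression equivalence.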
Lemma. If a DFA M accepts any string at all, it accepts one whose length is less than the number of states in M.

Proof. Suppose M has k states (so k >= 1). If L(M) is not empty, then we can find an element of it of shortest length, a1 a2 ... an say (where n >= 0). Thus there is a transition sequence

s_M = q0 --a1--> q1 --a2--> q2 --> ... --an--> qn ∈ Accept_M.

If n >= k, then not all the n+1 states in this sequence can be distinct and we can shorten it as on Slide 30. But then we would obtain a strictly shorter string in L(M), contradicting the choice of a1 a2 ... an. So we must have n < k.

Slide 34

5.4 Exercises

Exercise 5.4.1. Show that the first language mentioned on Slide 28 is not regular.

Exercise 5.4.2. Show that there is no DFA M for which L(M) is the language on Slide 33. [Hint: argue by contradiction. If there were such an M, consider the DFA M' with the same states as M, with alphabet of input symbols just consisting of a and b, with transitions all those of M which are labelled by a or b, with start state δ_M(s_M, c) (where s_M is the start state of M), and with the same accepting states as M. Show that the language accepted by M' has to be {a^n b^n | n >= 0} and deduce that no such M can exist.]

Exercise 5.4.3. Check the claim made on Slide 33 that the language mentioned there satisfies the pumping lemma property of Slide 29 with ℓ = 1.

Tripos questions 2002.2.9 2001.2.7 1999.2.7 1998.2.7 1996.2.1(j) 1996.2.8 1995.2.27 1993.6.12 1991.4.6 1989.6.12
6 Grammars
We have seen that regular languages can be speciﬁed in terms of ﬁnite automata that accept or reject strings, and equivalently, in terms of patterns, or regular expressions, which strings are to match. This section brieﬂy introduces an alternative, ‘generative’ way of specifying languages.
6.1 Context-free grammars
Some production rules for 'English' sentences

sentence -> subject verb object
subject -> article noun-phrase
object -> article noun-phrase
article -> a
article -> the
noun-phrase -> noun
noun-phrase -> adjective noun-phrase
adjective -> big
adjective -> small
noun -> cat
noun -> dog
verb -> eats

Slide 35

Slide 35 gives an example of a context-free grammar for generating strings over the seven-element alphabet

{a, the, big, small, cat, dog, eats}.

The elements of the alphabet are called terminals for reasons that will emerge below. The grammar uses finitely many extra symbols, called non-terminals, namely the eight symbols

sentence, subject, object, article, noun-phrase, adjective, noun, verb.

One of these is designated as the start symbol. In this case it is sentence (because we are interested in generating sentences). Finally, the context-free grammar contains a finite set of production rules, each of which consists of a pair, written x -> u, where x is one of the non-terminals and u is a string of terminals and non-terminals. In this case there are twelve productions, as shown on the slide.
The idea is that we begin with the start symbol and use the productions to continually replace non-terminal symbols by strings. At successive stages in this process we have a string which may contain both terminals and non-terminals. We choose one of the non-terminals in the string and a production which has that non-terminal as its left-hand side. Replacing the non-terminal by the right-hand side of the production, we obtain the next string in the sequence, or derivation as it is called. The derivation stops when we obtain a string containing only terminals. The set of strings over the alphabet of terminals that may be obtained in this way from the start symbol is by definition the language generated by the context-free grammar.
A derivation

sentence -> subject verb object
         -> article noun-phrase verb object
         -> the noun-phrase verb object
         -> the adjective noun-phrase verb object
         -> the big noun-phrase verb object
         -> the big noun verb object
         -> the big cat verb object
         -> the big cat eats object
         -> the big cat eats article noun-phrase
         -> the big cat eats a noun-phrase
         -> the big cat eats a adjective noun-phrase
         -> the big cat eats a small noun-phrase
         -> the big cat eats a small noun
         -> the big cat eats a small dog

Slide 36

For example, the string

the big cat eats a small dog

is in this language, as witnessed by the derivation on Slide 36, in which we have indicated left-hand sides of production rules by underlining. On the other hand, the string labelled (4) is not in the language, because there is no derivation from sentence to the string. (Why?)

Remark 6.1.1. The phrase 'context-free' refers to the fact that in a derivation we are allowed to replace an occurrence of a non-terminal by the right-hand side of a production without regard to the strings that occur on either side of the occurrence (its 'context'). A more general form of grammar (a 'type 0' grammar in the Chomsky hierarchy; see page 257 of Kozen's
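The derivation process just described can be mechanised. The sketch below encodes the toy 'English' grammar of Slide 35 (as reconstructed here; all Python names are ours) and searches for a leftmost derivation of a given string; leftmost derivations suffice, since a context-free production can be applied regardless of context.

```python
# A leftmost-derivation search for the toy 'English' grammar of Slide 35.
productions = {
    "sentence":    [["subject", "verb", "object"]],
    "subject":     [["article", "noun-phrase"]],
    "object":      [["article", "noun-phrase"]],
    "article":     [["a"], ["the"]],
    "noun-phrase": [["noun"], ["adjective", "noun-phrase"]],
    "adjective":   [["big"], ["small"]],
    "noun":        [["cat"], ["dog"]],
    "verb":        [["eats"]],
}

def in_language(target, start="sentence", depth=20):
    """Search for a leftmost derivation of target (a list of terminals)."""
    def search(form, d):
        if all(x not in productions for x in form):
            return form == target        # only terminals left: compare
        if d == 0 or len(form) > len(target):
            return False                 # no production shrinks a form
        i = next(j for j, x in enumerate(form) if x in productions)
        return any(search(form[:i] + rhs + form[i + 1:], d - 1)
                   for rhs in productions[form[i]])
    return search([start], depth)

print(in_language("the big cat eats a small dog".split()))  # True
print(in_language("dog eats the cat".split()))              # False
```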
book, for example) has productions of the form α -> β where α and β are arbitrary strings of terminals and non-terminals. For example a production of the form a X b -> c would allow occurrences of 'X' that occur between 'a' and 'b' to be replaced by 'c', deleting the surrounding symbols at the same time. This kind of production is not permitted in a context-free grammar.

Example of Backus-Naur Form (BNF)

Terminals: x ' ( ) + - *
Non-terminals: id op exp
Start symbol: exp
Productions:
id ::= x | id'
op ::= + | - | *
exp ::= id | exp op exp | (exp)

Slide 37

6.2 Backus-Naur Form

It is quite likely that the same non-terminal will appear on the left-hand side of several productions in a context-free grammar. Because of this, it is common to use a more compact notation for specifying productions, called Backus-Naur Form (BNF), in which all the productions for a given non-terminal are specified together, with the different right-hand sides being separated by the symbol '|'. BNF also tends to use the symbol '::=' rather than '->' in the notation for productions. An example of a context-free grammar in BNF is given on Slide 37. Written out in full, the context-free grammar on this slide has eight productions, namely:

id -> x
id -> id'
op -> +
op -> -
op -> *
exp -> id
exp -> exp op exp
exp -> (exp)

The language generated by this grammar is supposed to represent certain arithmetic expressions. For example (5) is in the language, but (6) is not. (See Exercise 6.4.2.)
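BNF's '|' notation is just an abbreviation for several productions with the same left-hand side, and expanding it mechanically recovers the full list. A small sketch (the dictionary encoding is ours; the grammar is the Slide 37 example as reconstructed here):

```python
# BNF clauses, with '|' rendered as a list of alternative right-hand sides.
bnf = {
    "id":  [["x"], ["id", "'"]],
    "op":  [["+"], ["-"], ["*"]],
    "exp": [["id"], ["exp", "op", "exp"], ["(", "exp", ")"]],
}

# Written out in full: one production per alternative.
productions = [(lhs, rhs) for lhs, alts in bnf.items() for rhs in alts]
for lhs, rhs in productions:
    print(lhs, "->", " ".join(rhs))
print(len(productions))  # 8, matching the count in the text
```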
A context-free grammar for the language {a^n b^n | n >= 0}

Terminals: a b
Non-terminal: I
Start symbol: I
Productions: I ::= ε | a I b

Slide 38
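The single BNF clause on Slide 38 translates directly into a recursive membership test: a string is derivable from I iff it is empty (production I -> ε) or has the form a...b with a derivable core (production I -> a I b). A sketch (the function name is ours):

```python
# Direct recursive membership test for the language generated by
# I ::= epsilon | a I b, i.e. {a^n b^n | n >= 0}.
def derivable(s: str) -> bool:
    if s == "":
        return True                      # production I -> epsilon
    return (s.startswith("a") and s.endswith("b")
            and derivable(s[1:-1]))      # production I -> a I b

assert derivable("aaabbb") and not derivable("aabbb") and not derivable("abab")
```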
6.3 Regular grammars
A language L over an alphabet Σ is context-free iff L is the set of strings generated by some context-free grammar (with set of terminals Σ). The context-free grammar on Slide 38 generates the language {a^n b^n | n >= 0}. We saw in Section 5.2 that this is not a regular language. So the class of context-free languages is not the same as the class of regular languages. Nevertheless, as Slide 39 points out, every regular language is context-free. For the grammar defined on that slide clearly has the property that derivations from the start symbol to a string in Σ* must be of the form of a finite number of productions of the first kind followed by a single production of the second kind, i.e.
Every regular language is context-free

Given a DFA M, the set L(M) of strings accepted by M can be generated by the following context-free grammar:

set of terminals = the alphabet Σ of input symbols of M
set of non-terminals = the states of M
start symbol = start state of M
productions of two kinds:

q -> a q'   whenever q --a--> q' in M
q -> ε      whenever q ∈ Accept_M

Slide 39

s -> a1 q1 -> a1 a2 q2 -> ... -> a1 a2 ... an qn -> a1 a2 ... an

where in M the following transition sequence holds

s --a1--> q1 --a2--> q2 --> ... --an--> qn ∈ Accept_M.

Thus a string is in the language generated by the grammar iff it is accepted by M.
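The grammar construction of Slide 39 is direct to implement. A hedged sketch (function and state names are ours, and the production encoding is our own choice):

```python
# From a DFA, build the context-free grammar of Slide 39 whose
# non-terminals are the DFA's states.
def dfa_to_grammar(states, alphabet, delta, start, accept):
    prods = []
    for q in states:
        for a in alphabet:
            prods.append((q, [a, delta[(q, a)]]))  # q -> a q'  whenever delta(q, a) = q'
        if q in accept:
            prods.append((q, []))                  # q -> epsilon  whenever q is accepting
    return prods, start

# Example: the odd-number-of-a's DFA over {a, b}.
delta = {("even", "a"): "odd", ("even", "b"): "even",
         ("odd", "a"): "even", ("odd", "b"): "odd"}
prods, start = dfa_to_grammar(["even", "odd"], "ab", delta, "even", {"odd"})
assert ("odd", []) in prods and ("even", ["a", "odd"]) in prods
```

Every derivation in the resulting grammar mirrors a run of the DFA, which is exactly the point made on the slide.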
Definition. A context-free grammar is regular iff all its productions are of the form

x -> u y   or   x -> u

where u is a string of terminals and x and y are non-terminals.

Theorem.
(a) Every language generated by a regular grammar is a regular language (i.e. is the set of strings accepted by some DFA).
(b) Every regular language can be generated by a regular grammar.

Slide 40
It is possible to single out context-free grammars of a special form, called regular (or right-linear), which do generate regular languages. The definition is on Slide 40. Indeed, as the theorem on that slide states, this type of grammar generates all possible regular languages.

Proof of the Theorem on Slide 40. First note that part (b) of the theorem has already been proved, because the context-free grammar on Slide 39 generating L(M) is a regular grammar (of a special kind). To prove part (a), given a regular grammar we have to construct a DFA whose set of accepted strings coincides with the set of strings generated by the grammar. By the Subset Construction (Theorem on Slide 18), it is enough to construct an NFA N with this property. This makes the task much easier. We take the states of N to be the non-terminals, augmented by some extra states described below. Of course the alphabet of input symbols of N should be the set of terminal symbols of the grammar. The start state is the start symbol. Finally, the transitions and the accepting states of N are defined as follows.
(i) For each production of the form x -> a1 a2 ... an y with n >= 1, we add fresh states q1, ..., q(n-1) to the automaton and transitions

x --a1--> q1 --a2--> q2 --> ... --> q(n-1) --an--> y.

(ii) For each production of the form x -> y (i.e. with u = ε), we add an ε-transition x --ε--> y.

(iii) For each production of the form x -> a1 a2 ... an with n >= 1, we add fresh states q1, ..., qn to the automaton and transitions

x --a1--> q1 --a2--> q2 --> ... --an--> qn.

Moreover we make the state qn accepting.

(iv) For each production of the form x -> ε, we do not add any new states or transitions, but we do make x an accepting state.

If we have a transition sequence in N from the start state to an accepting state, we can divide it up into pieces according to where non-terminals occur and then convert each piece into a use of one of the production rules, thereby forming a derivation of the corresponding string in the grammar. Reversing this process, every derivation of a string of terminals can be converted into a transition sequence in the automaton from the start state to an accepting state. Thus this NFA does indeed accept exactly the set of strings generated by the given regular grammar.

6.4 Exercises

Exercise 6.4.1. Why is the string (4) not in the language generated by the context-free grammar in Section 6.1?

Exercise 6.4.2. Give a derivation showing that (5) is in the language generated by the context-free grammar on Slide 37. Prove that (6) is not in that language. [Hint: show that if u is a string of terminals and non-terminals occurring in a derivation of this grammar and '(' occurs in u, then it does so in a substring of the form (E, or (E), etc., where E is either exp or a string derivable from exp.]

Exercise 6.4.3. Give a context-free grammar generating all the palindromes over the alphabet {a, b} (cf. Slide 28).

Exercise 6.4.4. Give a context-free grammar generating all the regular expressions over the alphabet {a, b}.

Exercise 6.4.5. Using the construction given in the proof of part (a) of the Theorem on Slide 40, convert the regular grammar (with the start symbol and productions given) into an NFA whose language is that generated by the grammar.

Exercise 6.4.6. Is the language generated by the context-free grammar on Slide 35 a regular language? What about the one on Slide 37?

Tripos questions 2002.2.1(d) 1997.2.7 1996.2.1(k) 1994.4.3 1991.3.8 1990.6.10
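The NFA construction in cases (i)-(iv) of the proof above can be sketched in code. This is a sketch under our own encoding (not from the notes): a production x -> u y is a triple (x, u, y) with u a string of terminals and y a non-terminal, and y = None when there is no trailing non-terminal.

```python
# Regular grammar -> NFA, following cases (i)-(iv) of the proof.
# A production x -> u y is encoded as (x, u, y); y is None for x -> u.
def grammar_to_nfa(nonterminals, productions, start):
    states = set(nonterminals)
    trans = []            # (state, symbol, state); symbol None means epsilon
    accepting = set()
    fresh = 0
    for lhs, u, tail in productions:
        src = lhs
        for i, a in enumerate(u):
            last = (i == len(u) - 1)
            if last and tail is not None:
                dst = tail                       # case (i): chain ends at non-terminal y
            else:
                dst = f"q{fresh}"; fresh += 1    # fresh intermediate (or final) state
                states.add(dst)
            trans.append((src, a, dst))
            src = dst
        if not u and tail is not None:
            trans.append((lhs, None, tail))      # case (ii): epsilon-transition x -> y
        if tail is None:
            accepting.add(src)                   # cases (iii)/(iv): make the end state accepting
    return states, trans, accepting, start

# Example: S -> ab S | epsilon, generating (ab)^n.
states, trans, acc, start = grammar_to_nfa(["S"], [("S", "ab", "S"), ("S", "", None)], "S")
assert "S" in acc and ("S", "a", "q0") in trans and ("q0", "b", "S") in trans
```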