This action might not be possible to undo. Are you sure you want to continue?

BooksAudiobooksComicsSheet Music### Categories

### Categories

### Categories

Editors' Picks Books

Hand-picked favorites from

our editors

our editors

Editors' Picks Audiobooks

Hand-picked favorites from

our editors

our editors

Editors' Picks Comics

Hand-picked favorites from

our editors

our editors

Editors' Picks Sheet Music

Hand-picked favorites from

our editors

our editors

Top Books

What's trending, bestsellers,

award-winners & more

award-winners & more

Top Audiobooks

What's trending, bestsellers,

award-winners & more

award-winners & more

Top Comics

What's trending, bestsellers,

award-winners & more

award-winners & more

Top Sheet Music

What's trending, bestsellers,

award-winners & more

award-winners & more

Welcome to Scribd! Start your free trial and access books, documents and more.Find out more

A Dissertation

Presented to The Faculty of the Graduate School of Arts and Sciences Brandeis University Department of Computer Science James Pustejovsky, Advisor

In Partial Fulﬁllment of the Requirements for the Degree Doctor of Philosophy

by Jos´ M. Casta˜o e n February, 2004

The signed version of this dissertation page is on ﬁle at the Graduate School of Arts and Sciences at Brandeis University. This dissertation, directed and approved by Jos´ M. Casta˜o’s committee, has been accepted and e n approved by the Graduate Faculty of Brandeis University in partial fulﬁllment of the requirements for the degree of:

DOCTOR OF PHILOSOPHY

Adam Jaﬀe, Dean of Arts and Sciences

Dissertation Committee: James Pustejovsky, Dept. of Computer Science, Chair. Martin Cohn, Dept. of Computer Science Timothy J. Hickey, Dept. of Computer Science Aravind K. Joshi, University of Pennsylvania

.

c Copyright by Jos´ M. Casta˜o e n 2004 .

.

no lo emprendas. que ser´ en vano a S. Rodr´ ıguez. Debes amar la arcilla que va en tus manos debes amar su arena hasta la locura y si no. A la Pipi y su gran amor. vii .Al Campris y su eterna perserverancia.

.

Piroska Cs´ri. some I don’t know. Jos´ Alvarez. Mar´ Pi˜ango. I would like to express my gratitude to James Pustejovsky who supported and encouraged me during these years. Francesco Rizzo. I am extremely grateful to Aravind Joshi for being part of my committee and for his precise comments and suggestions. Anna ıa n u Rumshisky. Roser Saur´ and Bob Knippen. I would like to thank Martin Cohn for introducing me to Formal Languages and having always time for me. Alejandro Renato and the extraordinary friends I met at Brandeis. First and foremost. Domingo Medina. Marc Verhagen. Ricardo e Rodr´ ıguez.Acknowledgments Many people contributed to this work. Ana Arregui. Edward Stabler and Stuart Shieber helped me to ﬁnd a way at the initial stages of this endeavour. It is my belief that every intellectual work is fruit of a community. ı I cannot express how grateful I am to my compa˜era Daniela in this and every route. in many diﬀerent ways. my American friend. far away from my initial interests when I arrived at Brandeis. My family n has a very special dedication. Aldo Blanco. This work was inﬂuenced by many people. Also many circumstances led me to this unexpected path. Many thanks to my Argentinian friends and colleagues. Pablo Funes. My sincere thanks to Tim Hickey and Ray Jackendoﬀ. My highest appreciation to those anonymous reviewers who made an eﬀort to understand and comment on my submissions during these years. ix .

.

ABSTRACT Global Index Languages A dissertation presented to the Faculty of the Graduate School of Arts and Sciences of Brandeis University. An Earley type algorithm to solve the recognition problem is discussed in Chapter 5. One of those problems concerns natural language phenomena. we discuss in Chapter 6 the relevance of Global Index Languages for natural language phenomena. This work takes these ideas as its goal. as well as the structural descriptions generated by the corresponding grammars. named Global Index Languages (GILs). This family of languages preserves the desirable properties mentioned above: bounded polynomial parsability. Waltham. Then characterization and representation theorems as well as closure properties are also presented. Casta˜o e n Context-free grammars (CFGs) is perhaps the best understood and most applicable part of formal language theory. However there are many problems that cannot be described by context-free languages: “the world is not context-free” [25]. Mildly context-sensitive languages and grammars emerged as a paradigm capable of modeling the requirements of natural languages. and at the same time has an “extra” context-sensitive descriptive power (e.g the “multiple-copies” language is a GI language). We present a two-stack automaton model for GILs and a grammar model (Global Index Grammars) in Chapter 2 and 3. In order to model coordination. as well as an LR-parsing algorithm for the deterministic version of GILs. There is increasing consensus that modeling certain natural language problems would require more descriptive power than that allowed by context-free languages. xi . GIls descriptive power is shown both in terms of the set of string languages included in GILs. One of the hard problems in natural language is coordination. Innumerable formalisms have been proposed to extend the power of context-free grammars. At the same time there is a special concern to preserve the ‘nice’ properties of context-free languages: for instance. Massachusetts by Jos´ M. This thesis presents an abstract family of languages. Finally. it might be necessary to deal with the multiple copies language: {ww+ |w ∈ Σ∗ } [29]. semi-linearity. polynomial parsability and semi-linearity.

.

. . . . . . . . .1 Indexed Grammars and Linear Indexed Grammars 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . .4 Pumping Lemmas and GILs . . . . 2. . .1 Deterministic LR-2PDA . . . . . . . . . . . . . . . .5. . . .3 Our proposal . . . . . . . . . . . . . . . . .Contents vii Acknowledgments Abstract List of Figures 1 Introduction 1. . . . . . 2. . .2. . . . . . . . . . . . . . . . . . . . . . . . .2 Left Restricted 2 Stack Push Down Automatas (LR-2PDAs) . . . 4. . . . . . . . . . . . . . . . . . . . . . . . . . . 2. . . . . . .1 Relations to other families of languages . . . . . . . . . . . . . .3 Semilinearity of GILs and the Constant Growth property 4. . . 3. . . . . . . . . 4. . . . . 2.4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1 Natural Language .2 Outline of the Thesis and Description of the . . . . . . . .1.2 GIGs examples . . . . . . . . . . . . . . . .2 LR-2PDA with deterministic auxiliary stack . . . . . . . . . . . . .2 Shrinking Two Pushdown Automata . . . . . . . 1. . . . . . . . PDA and LR-2PDA . . . . . . . . . . . . . . 4. . . . . . . . . . . . . . . . . .5 Determinism in LR-2PDA . . . . . .2 Closure Properties of GILs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix xi xv 1 1 2 4 5 6 9 9 10 12 13 15 17 20 20 21 23 23 24 26 28 32 35 41 41 42 44 46 47 2 Two Stack Push Down Automata 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1 Automata with two Stacks . . . . . . . . . . . . . . . . . . . . . ﬁnite turn CFGs and Ultralinear GILs . . . . . . . . .1 trGIGS .1 EPDA and equivalent 2 Stack Machines . . . . . . . . . 1. 1. . . . . . . . . . . . xiii .2 Biology and the Language of DNA . . . . . . . . . . . . . . . 3 Global Index Grammars 3. . . . . . . . . . . . . .1. . . Results . . . . . . . . . . . .1. 2. . . . . . . .3 Comparison with TM. . . . . . . . . . . . 3. . . . . . . . .1. . . . . 2. . 1. . . . . . . . . . . . . . . . . . . . . . . . . 3. . . . . .4 Equivalence of GIGs and LR-2PDA . 3. . . . . . . .1. . . . . . . . . . . . 2. 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1 GILs. . . . . . . . 4 Global Index Languages Properties 4. . . . .2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2 Global Index Grammars . . . . . . . . . . . .1 Introduction . . . . . . . .3 GILs and Dyck Languages . . . .4 Examples of LR-2PDAs . . . . . . . . . . . . . . . . . .5. . . . . . . . . .

. . . . . . . . . . . . . . . . . . .1. . . . . . . . . . . . . . . . . . . . . 51 . . . . . . . . . . 5. . 73 . . . . . . . . . . . . . . . . . . . 79 . . 6. . . . . . . . . . 5. . . . . . . . . . . . . . . . . . . . . . . 126 xiv . .3. . . . . . . . . . . . . . . . . . . . . . . . . . . . .2. . . . . . . . . .1 Control of the derivation in LIGs and GIGs . . . . . . . . 6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6. .2 Time and Space Complexity . . . . .1 Graph-structured Stacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.1. . . 52 . . . . . . . . . . . . . . 5. . . . . .1 LR items and viable preﬁxes . . . . . . . . . . . GILs. . . 5. .4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1. . . . . . . . . . . .5 Implementation . . . . . . . . . 5. . . .3 Parsing Schemata for GIGs . . . . . . . . . . . . . . . . . .3 The Copy Language . . . . . . . . . 53 . . . . . . .3 Reﬁning the Earley Algorithm for GIGs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5. . . . . . . . . . . . . . . . . . . . . 5. . . . . . . . .1 LIGs in push-Greibach Normal Form . . . . . . . . . .1. . . . . .2 Set Inclusion between GIGs and LIGs . . . . .1 Control Languages . . . . . . . . . .2. . . . 100 . . . . . . . . . . . . .2 The Closure Operation. .2. . . . . . . . . . . . . . . . . . . . .2 First approach to an Earley Algorithm for 5. . . . .2 Parsing Schemata . . . .1 Overview . . . . . . . . . . . 81 . .1. 98 . . 5. . . . . 73 . . . . . . . . . . . . . . . .1 Earley Algorithm.5 Derivation order and Normal Form .2. . . . . . .4 Proof of the Correctness of the Algorithm . . . . . . . . . . . . . . 5. . . . . . . . . . . . .4. . . . . . . . . . . .2 The palindrome language . . . . . . . . . . . . . . . . . . . . . . 98 . . . . . . . . . . . . . 48 5 Recognition and Parsing of GILs 5. . . . . . 5. 61 .5 Crossing dependencies . . . . . 57 . . . . . . . 52 . . . . . . . . 5. . 51 . . . . 7 Conclusions and Future Work 125 7. . . . . . . . 6.4 Multiple dependencies . . . . . . . . . . . . . . . 83 . . .2 GILs Recognition using Earley Algorithm . . . . . . . . . . . . . . . . 96 . . . . . . 5. 99 . . . .2. . . . . . . 5. . . . .4 Earley Algorithm for GIGs: ﬁnal version . . . . . . . . . . . 6. . . . . .1 Linear Indexed Grammars (LIGs) and Global Index Grammars (GIGs) 6. . . .4. . . .3 Complexity Analysis . . . . 6. . . 75 . . . . . . . . . . . . . .4. . . . . . . . . . . . . 107 107 108 108 110 112 113 115 116 119 6 Applicability of GIGs 6. . 5. 5. . . . . . . . . . .6 LR parsing for GILs . .6. . . . . . . . . . . . . . . . 80 . . . . . . . . . . . . . .6. . . . . . .3. . . . . . . . . 6. . . . . . .3 The Goto Operation. . . . . . . . . . . . . . . .3 GIGs and HPSGs . . . . . . . . . . . .1 Future Work . . . . . . . . . . . . . . 6. . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . .21 5. . . . . . . . . . . . . . . . .7 5. . . . . . . . . . . . . . 62 i A graph structured stack with i and ¯ nodes . . . .2 6.4 5. . . 59 A complete steps in the graph . . . . . . . Two LIG structural descriptions for the copy language . . . . . . . . . . . . . . . . . . . . . . . . . . 81 Path invariant . . . . 8 11 15 15 17 17 18 Two graph-structured stacks . . . . . . . . . . . . . . . . . . . . . . . . . 88 Set of States for Gwcw . . . . .9 5. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3 5. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .12 5. . . . . . . . . . . . . . . . . . . . . . . . .6 5. . . . . . . . . . 68 Completer B . . . . . . . . . . . . . . . . . . . . . . . . . . . . .13 5. . . . . . . . . . . . . . . . . . . . . . . . .20 5. . . . . . 60 Ambiguity on diﬀerent spines . . . . . . . . . . . . . . Dutch Crossed Dependencies (Joshi & Schabes 97) A Turing Machine . . . . . . . . . . . . . . . . .14 5. . . . . . . . . . . . .2 5. . . . .4 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . .16 5. . . . . . . . . . . . . . . . . . . . . .5 2. . . . . . . . . . . . . . . . . .5 5. . . . . . . . . . last case . . . . . . . . . . A non context-free structural description for the language wwR . . . . . 69 Completer D . . . . . . . 63 i I-edges . . . . . 71 Completer F . . . . .8 5. . . . . . . .3 2. . . . . A PDA . . . . . .List of Figures 1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6 5. . . . .5 Language relations . . . . . . . . . 100 Transition diagram for the PDA recognizing the set of valid preﬁxes. . . LR-2PDA . . . 70 Completer E. . . . 79 GIG invariant . . . . . . .22 5. . . . . . . . . . . . . . . . . . . . . . . . . . . .3 6. . . . . . . . . . . . .11 5. . . . . . . . . 64 Completer A . . . . . . . . . . . .10 5. . . . . . . . . 68 Completer C . . . . . . . . . . . . . . . . . . . . . . . . .2 2. . . . . . . . . . . . . . . . . . . . 60 Two alternative graph-structured stacks distinguished by position . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60 Paths and reductions in a graph structured stack with i and ¯ nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102 LIGs: multiple spines (left) and GIGs: leftmost derivation . . . . . . . . 108 109 109 110 110 xv . annotated with indices for the grammar Gwcw . . Multiple agreement in an sLR-2PDA . . . 52 Push and Pop moves on the same spine . . . . . .24 6. . . . . . 58 i Push and Pop moves on diﬀerent spines . . . . . . . . . . . . . . .18 5. . 59 Two alternative graph-structured stacks . . . . . . . .17 5. . . . . . .1 6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . sLR-2PDA . . . . . . . . . . . . . . . .15 5. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72 Completer G . . . Another non context-free structural description for the language wwR GIG structural descriptions for the language wwR . . . . . . . . . . . . . . . . . . . . . . . . 57 A graph structured stack with i and ¯ nodes . . . . . . . . . . . .19 5. . . . . . . . . . . . . . . . . . . . . . . . . 58 A predict time graph . . . . . . . . . . . . . . . . . . . . . . .1 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73 Earley CFG invariant . . . . . . . . . .1 2.1 5.23 5. . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4 6. . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . .10 6. . . Left and right recursive equivalence in LIGs . SLASH and coordination in GIG format . . .11 6. . . . . . . . . . . . . . . . . . . . . . . . . .18 6. . . . . . . . . . . SLASH in GIG format . . . . . . . . . .20 6. . . . . . . . . . . . . . . . . . . . . . . . . . . LIGs not in push-GNF II . . LIGs not in push-GNF II in GIGs . . . . . . . . . . . . LIG and GIG structural descriptions of 4 and 3 dependencies A LIG and a GIG structural descriptions of 3 dependencies . . . . . . . Left and right recursive equivalence in LIGs. . . . . . . . . . . . . . . . . . . . . . Head-Comp in GIG format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .16 6. .17 6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .21 6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6 6. Head-Subject in GIG format . second case . . GIG structural descriptions of 4 dependencies . . . . .9 6. . . . . . . . . . . . . . . . . . . . . . . . . LIGs not in push-GNF I . . . . . . . . . . 111 111 112 112 113 113 117 117 118 118 119 120 121 121 122 123 124 xvi . . . . . . Head-Comps Schema tree representation .7 6. . . . . . . . . . . . . .8 6. . . . . . . . . . . . . . . . . .22 An IG and GIG structural descriptions of the copy language . Head-Subject Schema . . . . . . . .6. . . . . . . . . . . . . . . . . . . . . GIG structural descriptions of 3 dependencies . A GIG structural description for the multiple copy language . . . . .19 6. . . . . . . . . . . . . . . . . . . . . . . . . . . .14 6. . . . . . . .15 6. . . . .13 6. . . . . . .12 6. . .

Chapter 1 Introduction 1. at least. semilinearity property. 81. • Computational Biology • Possibly also relevant to programming languages which might use context sensitive features (like Algol 60). We try to obtain a model with eﬃcient computation properties. well understood and most widely applicable part of formal language theory. 1 . where the control device is a context-free grammar (see [87] and [26] regarding control languages). polynomial parsability. However there are many problems that cannot be described by context-free languages.1 Introduction Context-free grammars (CFGs) constitute perhaps the most developed. for the following areas: • Natural Language modeling and processing. Such a model is relevant. The appeal of this approach is that many of the attractive properties of context-free languages are preserved (e. In [25] seven situations which cannot be described with a context-free framework are given. The purpose of this thesis is to explore an alternative formalism with restricted context-sensitivity and computational tractability. In an automaton model those extensions either use constrained embedded or additional stacks (cf. closure properties).g. 59]). [7. Many formalisms have been proposed to extend the power of context-free grammars using control devices.

or additional stacks can be added. multiple context free grammars [71]. Georgian Case and Chinese numbers) might be considered 3 However to be beyond certain mildly context-sensitive formalisms. These constructions are exempliﬁed as [26]:2 – reduplication. there is increasing consensus1 on the necessity of formalisms more powerful than CFGs to account certain typical natural languages constructions. scrambling. following type of languages are also relevant to model molecular biology phenomena [70] there are other phenomena (e. 23. in particular related to the so called Mildly-Context Sensitive formalisms. 1. 4 See for example. etc. where a language of level k properly includes a language of level k − 1. – crossed agreements. As a consequence those generalizations provide more expressive power but at a computational cost. {an bn cn dn | n ≥ 1}. 47] on semi-linearity. Within Mildly-Context Sensitive formalisms we focus on Linear Indexed Grammars (LIGs) and Tree Adjoining Grammars (TAGs) which are weakly equivalent.1. among others. that can be modeled with GILs. In general. and multi-push-down grammars [21].2 CHAPTER 1.3 It has been claimed that a NL model must have the following properties:4 1 See 2 The for example [73. corresponding to languages of the form {an bn cn | n ≥ 1}. 86] and [55. leading to languages of the form {ww |w ∈ Σ∗ } – multiple agreements. INTRODUCTION Such models can be generalized and additional control levels. as modeled by {an bm cn dm | n m ≥ 1} Mildly context-sensitive grammars [43] have been proposed as capable of modeling the above mentioned phenomena.1 Natural Language Regarding natural languages (one of the seven cases mentioned above). They form hierarchies of levels of languages.g. [46. The complexity of the recognition problem is dependent on the language level. We will focus in natural language problems from a language theoretic point of view. We will exemplify this issue with some properties of natural language and we will consider some biology phenomena. 29]. Examples of those related hierarchies are: Khabbaz’s [49]. for a level k language the complexity of the recognition problem is a polynomial where the degree of the polynomial is a function of k. Weir’s [87]. . additional levels of embeddedness.

.a2 k +1 | n ≥ 0} and {w2 k −1 +1 |w ∈ Σ∗ } . and combinatory categorial grammars (CCGs) which are weakly equivalent. The equivalent machine model is a nested or embedded push down automata (NPDA or EPDA). A TAG/LIG is a level-2 control grammar. Tree Adjoining languages belong to the class L2 in Weir’s hierarchy. In such hierarchies there is a correlation between the complexity of the formalism (grammar or automata). This is reﬂected in the pumping lemma. 62]). its descriptive power. However. where a CFG controls another CFG.1. Many diﬀerent proposals for extending the power of LIGs/TAGs have been presented in the recent past.. Their expressive power regarding multiple agreements is limited to 4 counting dependencies (e. and the time and space complexity of the recognition problem. L = {an bn cn dn |n ≥ 1}). Therefore the following relations follow: The family of level-k languages contains the languages n n {a1 . These formalisms have long been used to account for natural language phenomena. a stronger version of this requirement ) – Polynomial parsability – Limited cross-serial dependencies – Proper inclusion of context-free languages 3 The most restricted mildly context-sensitive grammar formalisms are linear indexed Grammars (LIGs).1.g. tree adjoining grammars (TAGs).. an increase in descriptive power is correlated with an increase in time and space parsing complexity... A lemma for level k of the hierarchy involves “pumping” 2k positions in the string. head grammars (HGs). They have been proven to have recognition algorithms with time complexity O(n6 ) considering the size of the grammar a constant factor ( [67. INTRODUCTION – Constant growth property (or semi-linearity.a2 k | n ≥ 0} and {w2 k −1 |w ∈ Σ∗ } but does not contain the languages n n {a1 . More expressive power can be gained in a mildly context-sensitive model by increasing the progression of this mechanism: in the grammar model adding another level of grammar control. and in the machine model allowing another level of embeddedness in the stack. One of the characteristics of mildly context sensitive formalisms is that they can be described by a geometric hierarchy of language classes.

There are other formalisms related to mildly context-sensitive languages that are not characterized by a hierarchy of grammar levels. recombination. However. u ¯ . Tree Description Grammars [47] are other such examples. Searls [70. that it is ambiguous and that it is not context-free. or ordering) the recognition problem is in O(n3 ·2 k −1 ) [86.4 CHAPTER 1. 10.1. generative grammars have been proposed as models of biological phenomena and the connections between linguistics and biology have been exploited in diﬀerent ways (see [40. Contextual Languages do not properly include context-free languages. He shows that the language of nucleic acids is not regular. nor deterministic nor linear. 54]. gene structure and expression. 69]). Those problems which are beyond context free are: Tandem Repeats (multiple copies). Contextual Grammars [52. conformation of macromolecules and the computational analysis of sequence data are problems which have been approached from such a perspective. 68] proposes a grammar model to capture the language of DNA. Some of those formalisms are: Minimalist grammars [78. In Coupled Context-Free Grammars the recognition problem is in O(|P | · n3l ) where l is the rank of the grammar [28] and Multiple Context Free Grammars is O(ne ) where e is the degree of the grammar. Range Concatenative Grammars [34. 12]. INTRODUCTION An increase in the language level increases the time (and space) complexity but it still is in P : for a level-k control grammar (or a level-k of stack embeddedness. 66] is another formalism with some mildly context sensitive characteristics. Direct Repeats (the copy language). 87]. Multivalued Linear Indexed Grammars [63]. 1. mutation and rearrangements.2 Biology and the Language of DNA Recently. and m the number of terms in the productions. Gene regulation. Pseudoknots (crossing dependencies) represented as: Lk = {uv¯R v R }. 38. In this case the complexity of the recognition problem is a bound polynomial: O(n6 ) space and O(n9 ) time [36]. For LMGs the complexity of the recognition problem is O(|G|m(+p)n1 +m+2p ) where p is the arity of the predicates. but their complexity and expressive power is bound to some parameter in the grammar (in other words the expressive power is bound to a parameter of the grammar itself). and the related Literal Movement Grammars (LMGs).

INTRODUCTION 5 Unbounded number of repeats. dependent branches There are many similarities between Linear Indexed Grammars and Global Index Grammars.1. which we call Global Index Languages. are characteristic of the language of DNA. and they are able to describe those phenomena. b.. (GILs). which is in time O(n6 ). Recently Rivas and Eddy [64] proposed an algorithm which is O(n6 ) to predict RNA structure including pseudoknots. We will show that the automaton and grammar model proposed here extends in a natural way the properties of Push Down Automata and context-free languages. b}∗ } unbounded number of copies or repeats (which allows also an unbounded number of crossing dependencies). and proposed a formal grammar [65] corresponding to the language that can be parsed using such an algorithm. in the model we present here. 1. k ≥ 2} any number of agreements or dependencies. The following languages can be generated by a Global Index Grammar: • Lm = {a1 n a2 n .1. String Variable Languages are a subset of Indexed Languages.ak n | n ≥ 1.3 Our proposal Unlike mildly context-sensitive grammars and related formalisms. inverted repeats. • L(Gmix ) = {w|w ∈ {a. • L(Ggenabc ) = {an (bn cn )+ | n ≥ 1} unbounded number of agreements. • L(Gsum ) = { an bm cm dl el f n | n = m + l ≥ 1}.. c}∗ and |a|w = |b|w = |c|w ≥ 1} The mix language. or interleaved repeats. However there is no known algorithm that decides the recognition problem for String Variable Languages. • L(Gwwn ) = {ww+ |w ∈ {a. however there are many GI languages which do not belong to the set of Linear Indexed Languages . there is no correlation between the descriptive power (regarding the three phenomena we mentioned) and the time complexity of the recognition problem. combinations. We will also show that it is able to describe phenomena relevant both to natural language and biology. which is conjectured not to be an Indexed Language. Probably the work by Searls is the most salient in this ﬁeld.1. He proposed String Variable Grammars to model the language of DNA.

We also discuss the structural descriptions generated by GIGs along the lines of Gazdar [29] and Joshi [82. The recognition problem for Global Index Languages (GILs) has a bounded polynomial complexity. An abstract family of languages (AFL) is a family of languages closed under the following . In addition. given they contain the MIX (or Bach) language [11]. However GIGs can generate structural descriptions where the dependencies are expressed in dependent paths of the tree. although no further implications are pursued. In this work we will address three aspects of the problem: language theoretic issues. which we call LR-2PDAs in Chapter 2 (a signiﬁcant part of this chapter was published in [18]). We detail the properties of GILs in the following section. a proof of equivalence between the grammar model and the automaton model (the characterization Theorem) and a Chomsky-Sch¨tzenberger representation Theorem are presented in Chapter 3. Therefore GILs have at least three of the four properties required for mild context sensitivity: a) semi-linearity b) polynomial parsability and c) proper inclusion of context free languages. a property which can be proved following the proof presented in [37] for counter automata.6 CHAPTER 1. We show in Chapter 6 that the relevant structural descriptions that can be generated by LIGs/TAGs can also be generated by a GIG. A subset of GIGs (trGIGs). GILs are also semi-linear. This suggests that Linear Indexed Languages might be properly included in the family of Global Index Languages. The set of Global Index Languages is an Abstract Family of Languages. and the family of Global Index Languages (GILs) in Chapter 3. 1. GILs include multiple copies languages and languages with any ﬁnite number of dependencies as well as the MIX language. 44]. and there is no correlation between the recognition problem complexity and the expressive power. The initial presentation of Global Index Languages was published in [17]. We present the corresponding grammar model : Global Index Grammars (GIGs). INTRODUCTION (LILs). The fourth property.2 Outline of the Thesis and Description of the Results Language Theoretic Aspects (Chapters 2-4) We deﬁne the corresponding Automaton model. limited cross-serial dependencies does not hold for GILs. u In Chapter 4 we discuss some of the closure properties of Global Index Languages and their relationship with other languages. parsing and modeling of natural language phenomena. is also presented in Chapter 3.

for higher level programming languages or deterministic approaches for shallow parsing in natural language. We also show that GILs are closed under intersection with regular languages and inverse homomorphism (proved by construction with LR-2PDAs). shows that the size of the grammar has the same impact as in the CFG case. We also give an LR parsing algorithm for for deterministic GILs. The graph-structured stack enables the computations of a Dyck language over the set of indices and their complements. We show by construction using GIGs that GILs are closed under union. This is due to the fact that GILs inherit most of CFL properties. Tree Adjoining Grammars and Head Grammars are weakly . e-free regular sets. We also prove that GILs are semi-linear. However it is O(n) for LR or state bound grammars and is O(n3 ) for constant indexing grammars which include a large set of ambiguous indexing grammars. The space complexity is O(n4 ). Natural Language Modeling (Chapter 6) We present a comparison of LIGs and GIGs (some of these results were presented in [20]).2. We also show that GILs are closed under substitution by e-free CFLs. concatenation and Kleene star. The result. They are also the basis for non-deterministic approaches. The construction of the Goto function shows an interesting (though not surprising) fact about GILs: the Goto function deﬁnes a PDA that recognizes the set of viable preﬁxes. free morphisms and inverse morphisms. Combinatory Categorial Grammars. intersection with regular languages.1. catenation. LIGs. but the indexing mechanism could have a higher impact. Most of the properties of GILs shown in these chapters are proved using straightforward extensions of CFL proofs. OUTLINE OF THE THESIS AND DESCRIPTION OF THE RESULTS 7 six operations: union. O(|I|3 · |G|2 · n6 ). Deterministic LR parsing techniques are relevant. We evaluated the impact of the size of the grammar and the indexing mechanism separately. for example. such as a non-deterministic Generalized LR parsing. e-free homomorphism. Parsing (Chapter 5) We present a recognition algorithm that extends Earley’s algorithm for CFGs using a graphstructured stack [80] to compute the operations corresponding to the index operations in a GIG. The time complexity of the algorithm is O(n6 ). Kleene star.

8 CHAPTER 1. We discuss the relationship of LIGs with GILs in depth in section 6. Our conjecture is that it is. GILs include CFLs by deﬁnition. The set inclusion relation between CFLs. Finally.2. The question still to be answered is whether the area with lines in the LILs and LCFR languages set is empty or not. There is a long and fruitful tradition that has used those formalisms for Natural Language modeling and processing. we disscuss how some properties of HPSGs can be modeled with GIGs (presented in [19]. We also show that GIGs can generate trees with dependent paths.1.1: Language relations . so the descriptive power of CFGs regarding by natural language phenomena is inherited by GIGs. Those natural language phenomena that require an extra power beyond CFLs can be dealt with GILs with much more ﬂexibility than with LILs/TALs. and GILs can be represented as in ﬁgure 1. Other formalisms that might share structural similarities with GIGs could be Minimalist Grammars and Categorial Grammars. LCFRLs LILs CFLs ww anbncn ww + an(bncn ) + GILs Figure 1. INTRODUCTION equivalent. LILs/TALs.

g.2 It is well known that an unrestricted 2-PDA is equivalent to a Turing Machine (cf. [41]).g..Chapter 2 Two Stack Push Down Automata 2. (e.[7. Consequently it is possible to simulate the capacity that a TM 1 See [7. 84]). 84..1 Automata with two Stacks Diﬀerent models of 2-Stack Push Down Automata have been proposed. but behaves like a PDA.g. 59] [21. 2 E. The model of 2-PDA that we want to explore here is a model which preserves the main characteristics of a PDA but has a restricted additional storage capacity: allows accessing some limited left context information. also generalized models of multistacks (e. 85]. 21. and n-Stack PDA.. or multi-store PDAs as equivalent to n-order EPDA. 83. 81.[31. We need to guarantee that the proposed automaton model does not have the power of a Turing Machine. 9 . The simulation of a TM by a 2-PDA is enabled by the possibility of representing the head position and the tape to the left side of the head in a TM with one stack and the tape to the right side with the second stack. Restricted classes of PDA with two stacks (2-PDA) were proposed as equivalent to a second order EPDA1 . 9]. Those models restrict the operation of additional stacks (be embedded or ordered) allowing to read or pop the top of an additional stack only if the preceding stacks are empty. in the sense that it is forced to consume the input from left to right and has a limited capacity to go back from right to left.

except that the push-down store may be a sequence of stacks. . sb2 . a 2nd order PDA in the progression hierarchy presented in [86]. 3 We might restrict even more the operation of the auxiliary stack disallowing moves that pop symbols from the auxiliary Stack. st2 . Then a transition function looks like (from [45]): δ (input symbol. either reading or replacing symbols.. Given that it is not possible to re-store them again in the auxiliary stack.1 EPDA and equivalent 2 Stack Machines An EPDA.. the input might be initially stored in the second stack (e.st1 .... In the next two subsections we review some models that proposed 2-stack or nested stack automata.. sbm are a ﬁnite number of stacks introduced below the current stack. current state.3 we deﬁne the LR-2PDA.1. Its expressive power is beyond PDA (context free grammars) and possibly beyond EPDA (e. and st1 . sbm .3 These constraints force a left to right operation mode on the input in the LR-2PDA. We adopt the following restrictions to the transitions performed by the kind of 2-PDA we propose here: the Main stack behaves like an ordinary PDA stack (allows the same type of transitions that a PDA allows). sb1 . “erases” the symbols as they are removed from the stack. they cannot be recovered (unless some more input is consumed). because once the input is consumed..g. Alternatively. . Consequently it is possible to “use” the information stored in the auxiliary stack only once. st2 .3... the input must be consumed and it must be possible to transfer symbols from one stack to the other. push/pop on current stack. However transitions that push a symbol into the second auxiliary stack do not allow moves. Consequently this would produce a more restricted set of languages.. . PDAs.. sb2 . A more detailed comparison of TMs. stn are stacks introduced above the current stack. Moving back to the left. In order to simulate these movements in a 2-PDA. M’. . It is initiated as a PDA but may create new stacks above and below the current stack. is similar to a PDA. stack symbol)= (new state..g. 13]). In 2. TAGs). 2. [83. . and the LR-2-PDA is presented in section 2. stn ) where sb1 ... it is not possible to write into the auxiliary stack. The number of stacks is speciﬁed in each move.10 CHAPTER 2. TWO STACK PUSH DOWN AUTOMATA has to move from left to right and from right to left.

59]. It allows the NPDA. 81. so as to separate diﬀerent sub-stacks.2. They also require the use of a symbol for the bottom of the stack. AUTOMATA WITH TWO STACKS 11 The possibility of swapping symbols from the top stack to embedded stacks produces the context sensitive power of L2 languages in Weir’s hierarchy.1: Dutch Crossed Dependencies (Joshi & Schabes 97) 2 Stack automata (2-SA) were shown to be equivalent to EPDA [7. The possible moves for a 2-SA automaton are the following [7]: • Read a symbol from the input tape . they use the additional stack in the same way as the embedded stacks are used in the EPDA. to recognize crossed dependencies as in the example depicted in the next ﬁgure: Figure 2. for instance. Symbols written in the second stack can be accessed only after the preceding stack is empty.1.

Growing Context Sensitive Languages (deﬁned by a syntactic restriction on context sensitive grammars) and the corresponding model of two pushdown automata (see [24. The reading/writing restriction is related to the requirement that the preceding stacks be empty. 2. and the alternative restriction with respect to writing.12 CHAPTER 2. The kind of automata we present implements a restriction on the writing conditions on the additional stack. Γ. GCSLs are a proper subclass of CSLs: The Gladkij language {w#wR #w | w ∈ {a.2 Shrinking Two Pushdown Automata Recently there has been interest on Church-Rosser Languages. q 0 . or pop a symbol from. where • Q is the ﬁnite set of states. is deﬁned as follows: Deﬁnition 1 a) A TPDA is a 7-tuple M = (Q. TWO STACK PUSH DOWN AUTOMATA • push a string onto. 53. 15. F ). But this restriction is not related to conditions on the the main stack but to conditions on the input. Σ.1. n . b}∗ } belongs to CSL \ GCSLs. • remove bottom of the stack symbols. ⊥. Wartena [85] shows the equivalence of the concatenation of one-turn multi-pushdown automata restricted with respect to reading (as in the 2-SA case). • pop a symbol from the auxiliary stack provided the main stack contains the bottom of the stack symbol • create a new set of empty stacks (add a string of bottom of the stack symbols). GCSLs contain the non-context-free and non-semi-linear {a2 |n ≥ 0} [16] The copy language is not a GCSL {ww|w ∈ {a. 16. The TPDA initializes the input in one of the stacks. but {an bn cn |n ≥ 1} is a GCSL. In [16] Buntrock and Otto’s Growing Context Sensitive Languages (GCSLs) are characterized by means of the Shrinking Two Pushdown Automata (sTPDA). The shrinking-TPDA. the main stack • push a string onto the auxiliary stack. b}∗ }. 58]. The recognition problem for growing context sensitive grammars is NP-complete ( [14]). δ.

LEFT RESTRICTED 2 STACK PUSH DOWN AUTOMATAS (LR-2PDAS) • Σ is the ﬁnite input alphabet. The deﬁnition of the transition function δ ensures that transitions with epsilon moves are allowed only if they do not push a symbol into the auxiliary stack. and • d : Q × Γ × Γ → P(Q × Γ∗ × Γ∗ ) Σ and Γ ∩ Q = ∅ 13 It can be seen that this kind of automaton needs some restriction. • F ⊆ Q is the set of ﬁnal (or accepting) states. The restriction is implemented using a weight function over the automaton conﬁguration. v) ∈ (δ. otherwise it is equivalent to a TM.e. Part a) in the deﬁnition of the transition function is equivalent to a PDA. causing the machine to move without reading a symbol from the input or from the stacks. • ⊥ ∈ Γ \ Σ is the bottom marker of pushdown store. the next symbol read and the top symbols of the stacks determine the next move of a LR-2PDA.2.2. b) then φ(puv) < φ(qab). Either of these symbols may be . a. • Γ is the ﬁnite tape alphabet with Γ • q 0 ∈ Q is the initial state. 2.e.2 Left Restricted 2 Stack Push Down Automatas (LR2PDAs) We deﬁne here the model of automaton that we described and proposed at the beginning of this section. We can consider the automaton model we present in the next section as preserving a shrinking property that conditions the use of the additional stack: it has to consume input. b ∈ Γ if (p. only if they are equivalent to a PDA transition function. u. if it is not ). Part b) enables transitions that aﬀect the auxiliary stack if and only if an input symbol is consumed (i. i. b) A TPDA is M is called shrinking if there exists a weight function φ : Q ∪ Γ → N+ such that for all q ∈ Q and a. and the requirement that every move must be weight reducing [57]. We use a standard notation similar to the one used in [77] or [41].. Part c) is a relaxation of the previous one and enables . The current state.

Σ. ⊥. If part c) is removed from the deﬁnition a diﬀerent class of automata is deﬁned: we will call it sLR-2PDA. Q × (Σ ∪ { }) × Γ × Γ → P F (Q × Γ∗ × { }) . Q × (Σ ∪ { }) × Γ × { } → P F (Q × Γ∗ × { }) and 4 The two transition types a. and F are all ﬁnite sets and 1. δ. Deﬁnition 2 A LR-2PDA is a 7-tuple (Q. Deﬁnition 3 A sLR-2PDA is a LR-2PDA where δ is: a. Σ. and c. Q × (Σ ∪ { }) × Γ × { } → P F (Q × Γ∗ × { }) and b. Σ is the input alphabet. 2. F ) where Q. 3. Such a restriction could be accomplished in the sLR-2PDA case because the auxiliary stack cannot be emptied using -moves. The set of languages that can be recognized by a sLR-2PDA is therefore a proper subset of the set of languages that can be recognized by a LR-2PDA. Γ. Γ. TWO STACK PUSH DOWN AUTOMATA -moves that pop elements from the second stack. δ is the transition function ( where P F denotes a ﬁnite power set.14 CHAPTER 2. ⊥ ∈ Γ \ Σ is the bottom marker of the stacks 7. F ⊆ Q is the set of accept states. The auxiliary stack should be emptied only consuming input. q 0 . Γ is the stacks alphabet. For Natural Language purposes it might be desirable to keep stronger restrictions on the auxiliary stack. Q × Σ × Γ × Γ → P F (Q × Γ∗ × Γ2 ) c. Q × (Σ ∪ { }) × Γ × Γ → P F (Q × Γ∗ × { })4 5. Q is the set of states. but this would rule out reduplication (copy) languages (See the introduction) as we will see below. and Γ is Γ ∪ { } ) : a. We keep them separate to show the parallelism with the type of productions in the grammar model. . 4. can be conﬂated in: a’. q 0 ∈ Q is the start state 6.

Σ. ⊥. η). Writes left to right or same cell Independent operation: allows e-moves Moves to the left erasing. w. whatever remains at the right is forgotten). An ID is a 4-tuple (q. A clear example of maximum transversal and input storage is the language Lr = {wwR | w ∈ Σ∗ } which can be recognized by a PDA. ⊥. state and stack contents. Figure 2. Γ. PDA and LR-2PDA A Turing Machine: the input tape is also a storage tape. The input can be transversed as many times as desired. They specify the current input. O) M. 2. that can be also used and reused at any time. ∗ is denoted by ∗ The language ac- cepted by a LR-2PDA M = (Q. Oβ) contains (p. PDA AND LR-2PDA b. There is a work space (storage data structure) that also can be transversed only once (as soon there is a move to the left. γ 2 ) where q is a state w is a string of input symbols and γ 1 . If M = (Q. ⊥) (Acceptance by empty stacks) (p.3. δ. then (q. Reads left to right only. aw. ⊥. q 0 . F ) is {w | (q 0 . ζ.3 Comparison with TM. ⊥. δ. a. ηβ) if δ(q. from left to right.3: A PDA . but it cannot recognize Lc = {ww | w ∈ Σ∗ } which would require additional use of the storage capacity. ζα. Z. Read and Write one cell One head Figure 2. ⊥)} for some p in F .2: A Turing Machine A PDA: The input can be transversed only once. Γ. F ) is a LR-2PDA. .2. q 0 . Q × Σ × Γ × Γ → P F (Q × Γ∗ × Γ∗ ) 15 The instantaneous descriptions (IDs) of a LR-2PDA describe its conﬁguration at a given moment. COMPARISON WITH TM. plus it has an unlimited work-space. w. w. Σ. γ 1 . Zα. The reﬂexive and transitive closure of M M (p. γ 2 are strings of stack symbols. It has the capacity of remembering a sequence of symbols and matching it with another equal length sequence.

. ) = (q 1 . A sLR-2PDA cannot recognize the copy language: if the auxiliary stack is used to store all the symbols of the input tape then it would be of no more use than storing all the symbols in the main stack: the symbols can only be transferred from the auxiliary stack to the main stack only if the input tape is being read. u) = (q 3 .4. whatever remains at the right is forgotten). This capacity is reduced even further in the case of sLR-2PDA: both types of operations on the auxiliary stack are made dependent on consuming input. TWO STACK PUSH DOWN AUTOMATA A LR-2PDA: The input can be transversed only once. This exempliﬁes how many times (and the directions in which) the same substring can be transversed. but unlike a TM. then the possibility of storing information from the input seen at this move is lost. There are two work- spaces which also can be transversed only once (as soon there is a move to the left. Transfer from one stack to the other is restricted. The following languages which can be recognized by an LR-2PDA are examples of a maximum use of the storage of the input: L2ra = {wwR wR w | w ∈ Σ∗ } and L2rb = {wwR wwR | w ∈ Σ∗ }. consequently what is being read cannot be stored in the main or the auxiliary stack if there is a transferral occurring. a. . The crucial fact is that transitions type c. (above) . . (q 2 . the second storage can be written to only when consuming input. (q 1 . a) (store the symbols of the ﬁrst copy) 2. (q 3 . and store the symbols of the second copy back A loop back from state q 3 to state q 2 allows recognition of the language of any ﬁnite number of copies (example 2). from left to right. ) (transfer symbols from auxiliary to main stack) 3. but it can store again in the second stack (the second copy). Consider the following transitions (where a is a symbol in Σ). a LR-2PDA is not able to transverse a substring indeﬁnitely. 1. While recognizing the second copy all the information stored about the ﬁrst copy is lost. a. In order to avoid the possibility of transferring symbols from one stack to the other indeﬁnitely. au) (recognize the second copy using the symbols stored in the main stack. a) = (q 2 . . However. The corresponding machine is detailed below in the exsection 2. an LR-PDA can recognize the language L3 = {www | w ∈ Σ∗ }. given the constraints imposed on LR-2PDAs.16 CHAPTER 2. a. if information from the ﬁrst ordered stack is transferred to the second auxiliary stack. . a.

q 2 . Independent both operations. e}. c. Reads left to right only. Moves to the left erasing.2.4: LR-2PDA are not allowed. Examples 2 and 3 are LR-2PDAs. Transfer possible Dependent writing from 1. Independent moving. q 3 ) where δ: . Example 1 (Multiple Agreements) . {a. Writes left to right or same cell Independent both operations. 17 Writes left to right or same cell Moves to the left erasing.4 Examples of LR-2PDAs The ﬁrst example is a sLR-2PDA. M 5 = ({q 0 . q 0 . {x}. However the language L = {wuw | u. w ∈ Σ∗ and |w| ≤ |u|} is recognized by a sLR-2PDA. δ.4. ⊥. Writes left to right or same cell Transfer NOT possible Dependent writing from 1. q 3 }.5: sLR-2PDA 2. EXAMPLES OF LR-2PDAS Reads left to right only. Writes left to right or same cell Moves to the left erasing. Dependent moving (erasing) Figure 2. d. b. Figure 2. Moves to the left erasing. q 1 .

)} 10.$. ⊥. $) = ( ε.a. )} L(M 5 ) = {an bn cn | n ≥ 1} ∪ {an bn cn dn | n ≥ 1} ∪ {an bn cn dn en | n ≥ 1} It can be easily seen that the following is a proper generalization: Claim 1 For any alphabet Σ = {a1 . x⊥. $ . )} 6. ⊥. $ . ⊥)} 12. δ(q 1 . c . (q 1 . c . $ . ⊥) = {(q 1 . ⊥. ⊥) = {(q 1 . ⊥. δ(q 1 . $ . d. b. ε ) = ( ε.a.. ⊥. c . ε) = (a$. c . $) = accept 5 5 ( e. δ(q 1 . ⊥) = {(q 2 . TWO STACK PUSH DOWN AUTOMATA Four Counting Dependencies M2 Five Counting dependencies M3 (a. ε ) ( [] . c’$) 4 4 ( d . ε ) (c. . xx. $ . ak }.$. ε ) ( [] . c . δ(q 1 .a. b. . there is an sLR-2PDA that recognizes the language L(M k ) = {an an .. a. e. x⊥)} 4. ⊥)} 3.an | n ≥ 1} 1 2 k . . c. ε ) 4 ( d .a’) = (cc. ε ) 1 (a. x. d. $ .18 Three counting dependencies M1 0 CHAPTER 2. x. ε) ( d . x) = {(q 1 . x) = {(q 1 . $) ( d . c’) = ( $ . ⊥) = {(q 1 . . ⊥. δ(q 2 .. ⊥)} 8.a’$) Idem 2 (b. δ(q 0 .a’) = ($. ). ⊥)} 2. xx}) 5. ε ) = (aa. x.a’) = (c$. c’c’) ( e . δ(q 1 . x) = {(q 2 . x. ⊥. x. ⊥. xx} 9. ε ) Idem 3 (c. δ(q 1 . x. x) = {(q 2 . c. x⊥).. a’ ) = ( ε. $ ) = (ε . . )} 7. e. where k > 3.a’) = ($. a. (q 2 . $) = ( ε.$. x) = {(q 2 . . ⊥) = {(q 1 . c’) = ( $ . δ(q 2 .6: Multiple agreement in an sLR-2PDA 1. x. d. ε ) 3 ( [] . ε ) (b.. xx. x) = {(q 1 .a’a’) (c. c’) = ( ε. . c. δ(q 1 . x⊥. )} 11. $) = accept 6 Figure 2. x) = {(q 2 . δ(q 2 . $) = accept (c. ⊥. δ(q 1 .

⊥. . ⊥) = {(q 3 . δ(q 0 . y. . δ(q 1 . (q 3 . ⊥)} 13. ⊥. ⊥. δ(q 3 . In the next example. a1 ⊥. {x. ⊥. )} 14. x) = {(q 2 . EXAMPLES OF LR-2PDAS Proof Construct the sLR-2PDA M k as follows: 19 M k = ({q 0 . x⊥. Example 2 (The language of multiple copies) M wwn = ({q 0 . ae−1 . q 2 }. . w. . ⊥. ⊥)} 2. ao ao . ao . δ(q 0 . w. y⊥). y}. δ(q 2 . b. q 3 }. x. ⊥. δ(q 1 . ae . a1 a1 . b}. a1 . δ(q 3 . )} 9. u) = {(q 3 . ⊥. (q 3 . e ≥ 3 < k and o is odd and e is even: 1. If an intermediate copy is recognized transitions 13 and 14 allow going back to state 2 and reverse the order of the stored symbols. x. ⊥)} 12. ⊥. )} 6. a1 . u) = {(q 1 . q 2 ) where δ is as follows. b}+ } 8. q 1 . ⊥. ⊥. y⊥. ⊥)} if k is even. y⊥)} 3. ⊥)} 10. δ(q 1 . ⊥. )} . ao . transitions 1 to 4 recognize the ﬁrst occurrence of w. a. δ(q 1 . δ. x⊥). δ(q 1 . δ(q 2 . δ(q 1 . y) = {(q 2 .2. . such that o. b. x⊥)} 2. . a. δ(q 2 . . xu)} 4. . ao−1 ) = {(q 1 . ae−1 . . Σ. (q 3 . δ(q 0 . ⊥) = {(q 1 . q 0 . ak −1 ) = {(q 2 . ⊥) = {(q 1 . q 1 . b. )} 7. ao . ⊥. . y) = {(q 2 . u is any symbol in the vocabulary): 1. . . y. . ⊥) = {(q 1 . reverse the order of the stored symbols. xu). a2 a2 }) 5. δ(q 1 . x) = {(q 2 . ak . . δ(q 1 . xw. .t. a2 . transitions 5 to 8. ao ⊥. a. a2 ) = {(q 1 . yu). . a1 . δ(q 2 . . ae ⊥)} 8. δ(q 1 . ⊥) = {(q 1 . ⊥. y⊥. ⊥. δ(q 3 . δ(q 3 . Σ. ⊥)} 3. )} 7. {a. ⊥) = {(q 1 . q 0 . . yw. q 2 . ⊥. a. δ(q 1 . ⊥)} 11. ⊥. ⊥) = {(q 1 . ae ae } and 9. b. ak −1 . δ(q 1 . u) = {(q 3 . a2 . ae ) = {(q 1 . transitions 9 to 12 recognize either the following copy or the last copy. δ(q 1 .4. ⊥) = {(q 2 . a1 . yu)} 5. ao−1 ) = {(q 1 . )} 6. a1 . ae . )} L(M 4 ) = {ww+ |w ∈ {a. a2 ⊥)} 4. . ⊥) = {(q 3 . x⊥. δ(q 1 . u) = {(q 1 . q 3 ) where δ (s. ⊥. x) = {(q 2 . ak . y) = {(q 2 . ⊥. (q 3 . δ. )} if k is odd or 10.

I) is empty for all I ∈ Γ. {a. f }. a. ⊥. a. yz. ) is nonempty. δ(q 2 . . y) = {(q 1 . zy)} 6. ⊥. a. xx)} 3. . xy. ⊥. y. a. ⊥)} 14. . then δ(q. ⊥. ⊥)} 15. A ∈ Σ and Z. q 3 ) where δ: 1. ⊥. ⊥. b. I ∈ Γ. y) = {(q 1 . Z. z. TWO STACK PUSH DOWN AUTOMATA Example 3 (Crossed dependencies) M cr3 = ({q 0 . . ⊥)} 2. q 2 . Z.5. A. δ(q 1 . δ(q 2 . x. δ(q 1 . . e. ⊥. . y) = {(q 1 . ⊥)} 5. ⊥. )} 13. z) = {(q 2 . whenever δ(q. whenever δ(q. x) = {(q 1 . ⊥. x⊥)} 2. m. ⊥. ⊥) = {(q 3 . l ≥ 1} n m l n m l 9. δ(q 3 . zz)} 7. ⊥. d. δ(q 3 . q 3 }. ⊥. y. I) contain more than one element. .1 Deterministic LR-2PDA Deﬁnition 4 A LR-2PDA is deterministic iﬀ: 1. z}. z) = {(q 1 . for no q ∈ Q. d. . z. . {x. and a ∈ Σ ∪ { } does δ(q. d. x) = {(q 1 . b. zz. for each q ∈ Q. z) = {(q 1 . for each q ∈ Q and Z. f. )} 8. δ(q 0 . δ(q 2 . ⊥) = {(q 1 . I ∈ Γ. yx)} 4. . c. δ(q 1 . q 0 . x) = {(q 1 . then δ(q. Z. δ(q 1 . c. The corresponding deﬁnition of a deterministic LR-2PDA is as follows: 2. ⊥) = {(q 3 . δ. δ(q 1 . )} L(M cr3 = {a b c d e f | n.5 Determinism in LR-2PDA An LR parser for a CFL is essentially a compiler that converts an LR CFG into a DPDA automata [41. 3. ⊥) = {(q 3 . z.20 CHAPTER 2. Therefore understanding the deterministic LR-2PDA is crucial for obtaining a deterministic LR parser for GILs deﬁned in chapter 6. yy)} 12. . Z. z⊥. δ(q 3 . e. A. I) is nonempty. δ(q 2 . y. Z. ⊥. δ(q 1 . 2]. b. x. y. Deterministic GILs are recognized in linear time as we mentioned in the introduction. yy. Z. δ(q 2 . 2. ⊥. I ∈ Γ. y) = {(q 1 . I) is empty for all a ∈ Σ. q 1 . )} 10. ⊥) = {(q 3 . )} 11. c.

whenever δ(q. . ) 2. a. 3. Z i . whenever δ(q. Z. for each q ∈ Q and Z. I i ) and (q j . . . I) is empty for all a ∈ Σ. ) = (q. then δ(q. in the following examples of transitions.2. I ∈ Γ. for each q ∈ Q. Z j . I) is empty for all I ∈ Γ. Deﬁnition 5 A LR-2PDA has a deterministic indexing. . (q. 1. A. Therefore. ) = (q. I ∈ Γ. ) is nonempty. i) = (q. nor with the fourth. b. j) 2. a. or is a deterministic indexing LR-2PDA if and only if: 1. The construction of the LR(k) parsing tables would show whether the indexing mechanism for a given GIG is non-deterministic. it might be an advantage to be able to determine whether the non-determinism lies in the CFG back-bone only and the indexing mechanism does not introduce non-determinism. .2 LR-2PDA with deterministic auxiliary stack Even if a GIG grammar is non-deterministic. then δ(q. a.5. . because they imply a non-deterministic choice. i) 3. for no q ∈ Q. I ∈ Γ. . Z. (q. j) 4. I j ) such that I i = I j . Z. We can regard to be an abbreviation for all the possible symbols in the top of the stack. DETERMINISM IN LR-2PDA 21 Condition 1 prevents the possibility of a choice between a move independent of the input symbol ( -move) and one involving an input symbol. I) contain two elements = (q i . . Z. . I) is nonempty. j) = (q. (q. A ∈ Σ and Z. a. Condition 2 prevents the choice of move for any equal 4-tuple. Z. and a ∈ Σ ∪ { } does δ(q. Z. If the non-determinism is in the CFG backbone only we can guarantee that the recognition problem is in O(n3 ). the ﬁrst case is not compatible with the third. 2. (q.5. . Condition 3 prevents the possibility of a choice between a move independent of the second stack symbol ( -move) and one involving a symbol from the stack. a. A.

22 CHAPTER 2. TWO STACK PUSH DOWN AUTOMATA .

As a consequence they are semi-linear and belong to the class of MCSGs. is deﬁned as follows.. The class of LILs contains L4 but not L5 (see Chapter 1). S). and X i in V ∪ T .] → B[i. on sentential forms. The relation derives. a string in I ∗ .]α n where A and B are in V . and α is in (V ∪ T )∗ . and P is a ﬁnite set of productions of the form: a. have the capability to associate stacks of indices with symbols in the grammar rules. i is in I .. i.. A[. strings in (V I ∗ ∪ T )∗ . I the set of indices.] c. A[i. (LIGs. and Linear Index Grammars. P. δ be in I ∗ .1 Indexed Grammars and Linear Indexed Grammars Indexed grammars. ⇒..Chapter 3 Global Index Grammars 3. 23 .] → [. An Indexed Grammar is a 5-tuple (N. where N is the set of non-terminal symbols.LILs) [29].] represents a stack of indices. I. A → α b. T the set of terminals. LIGs are Indexed Grammars with an additional constraint on the form of the productions: the stack of indices can be “transmitted” only to one non-terminal. [.. (IGs) [1].. T. Let β and γ be in (V I ∗ ∪ T )∗ . S in V is the start symbol.e. IGs are not semi-linear (the class of ILs contains the language {a2 | n ≥ 0}).

The proposal we present here uses the stack of indices as a unique designated global control structure. It is a grammar that controls the derivation through the variables of a CFG. The introduction of indices in the derivation is restricted to rules that have terminals in the right-hand side. {a}.. b... S) where P : S[.X k is a is a production of type c. B[ ] → 3.. the stack of indices is associated with variables...] → B[. This feature makes the use .]A → [. G1 = ({S.] A[i.. where A and B are non terminals and α and γ are (possible empty) sequences of terminals and non terminals: a. 2..] → bB[.. B[i. In this sense. then βAδγ ⇒ βX 1 δ 1 ..] A[. then βAδγ ⇒ βBxδγ 3.] → αB[i. GLOBAL INDEX GRAMMARS 1.] X 1 ... P. {i}.. S[..] → A[.] → aS[i.. Example 4 L(G1 ) = {a2 n ≥ 0}.] A[.. {a. If [x.X k is a production of type a... A[.]c...] → α B[..24 CHAPTER 3. {i}..] γ Example 5 L(G1 ) = {an bn cn dn |n ≥ 0}. A[.. then βAxδγ ⇒ βX 1 δ 1 ...] → A[..] → B[x.]. A}. A[i.2 Global Index Grammars In the IG or LIG case. c. B}..] γ b. these grammars provide a global but restricted context that can be updated at any local point in the derivation.] → α B[.. S).] γ c.] → S[i.].] is a production of type b.X k δ k γwhere δ i = δ if X i is in V and δ i = n if X i is in T. d}.] A[] → a A Linear Indexed Grammar is an Indexed Grammar where the set of productions have the following form.. where P is: S[... G1 = ({S. GIGs are a kind of regulated rewriting mechanism [25] with global context and history of the derivation (or ordered derivation) as the main characteristics of its regulating device.X k δ k γ where δ i = δ if X i is in V and δ i = if X i is in T.. If A[. P. If A → X 1 . S[.]d.

I are ﬁnite pairwise disjoint sets and 1) N are non-terminals 2) T are terminals 3) I a set of stack indices 4) S ∈ N is the start symbol 5) # is the start stack symbol (not in I. An additional constraint imposed on GIGs is strict leftmost derivation whenever indices are introduced or removed by the derivation. .]A → [x. so we can consider them global. T. If A → X 1 . x in I.X k γ (production type a. A ∈ N . a.N .X k is a production of type a. T. x ∈ I) then: δ#βAγ ⇒ δ#βX 1 . Deﬁnition 6 A GIG is a 6-tuple G = (N.1 A → α a. β ∈ (N ∪ T )∗ and a ∈ T . [39]). δ be in I ∗ . P ) where N... Derivations in a GIG are similar to those in a CFG except that it is possible to modify a string of indices. #. Pop rules do not require a terminal at all. This string of indices is not associated with variables.2 A → α [y] or or or or A→α [..1 where x ∈ I. The notation at the right is intended to be more similar to the notation used in IGs and LIGs.. w be in T ∗ and X i in (N ∪ T ).X k γ) (production type a.. S. (i.T ) and 6) P is a ﬁnite set of productions.g.e.]A → [.. GLOBAL INDEX GRAMMARS 25 of indices dependent on lexical information..]a β [x.3.. µ = µ or µ = [x]. 1. The constraint on push rules is a crucial property of GIGs. We deﬁne the derives relation ⇒ on sentential forms.1 or context free ) xδ#βAγ ⇒ xδ#βX 1 . without it GIGs would be equivalent to a Turing Machine. A → α x ¯ Note the diﬀerence between push (type b) and pop rules (type c): push rules require the righthand side of the rule to contain a terminal in the ﬁrst position.X k γ Using a diﬀerent notation: δ#βAγ ⇒ δ#βX 1 . having the following form. Let β and γ be in (N ∪ T )∗ .. y ∈ {I ∪ #}. A → a β x c.. α..]α (epsilon rules or context free (CF) rules ) (epsilon rules or CF with constraints) (push rules or opening parenthesis) (pop rules or closing parenthesis) b. In the next subsection we present an even more restricted type of GIG that requires both push and pop rules in Greibach Normal Form. It also makes explicit that the operations are associated to the computation of a Dyck language (using such notation as used in e.2) [x] 1 The notation at the left makes explicit that operation on the stack is associated to the production and not to terminals or non-terminals.]α [.. I.2..]A → [. in a linguistic sense. and allows the kind of recognition algorithm we propose in Chapter 5.. which are strings in I ∗ #(N ∪ T )∗ as follows.

are as follows.) are constrained to start with a terminal in a similar way as push (type b.. x ∈ I. α. then: µ δ#wAγ ⇒ xδ#waX k . We deﬁne the language of a GIG G.1 trGIGS We now deﬁne trGIGs as GIGs where the pop rules (type c. GIGs can have rules where the right-hand side of the rule is composed only of terminals and aﬀects the stack of indices. LIGs and GIGs.. IGs. and a ∈ T : c. β ∈ (N ∪ T )∗ . A ∈ N .]A → [. We also introduce some examples of trGIGs modeling multiple agreements and crossed dependencies.. Deﬁnition 7 A trGIG is a GIG where the set of production of types (c) (pop) rules..b) (in other words push productions must be in Greibach Normal Form)... with the consequence that LILs are semilinear.X n γ ) x 3. x ∈ I.. A → a β x ¯ or [x.) or push: µ = x. The derives deﬁnition requires a leftmost derivation for those rules (push and pop rules) that aﬀect the stack of indices.) or pop : µ = x. the stack of indices in a GIG is independent of non-terminals in the GIG case.. x ∈ I. is the interpretation of the derives relation relative to the behavior of the stack of indices. where x ∈ I. L(G) to be: {w|#S ⇒ #w and w is in T ∗ } It can be observed that the main diﬀerence between..2.X n γ ) x ¯ ∗ The reﬂexive and transitive closure of ⇒ is denoted as usual by ⇒. In LIGs indices are associated with only one non-terminal at the right-hand side of the rule. but to an extreme: there is only one stack to be considered. If A → aX 1 . Unlike LIGs and IGs.26 CHAPTER 3.X n γ µ ( or δ#wAγ ⇒ xδ#waX 1 . GIGs share this uniqueness of the stack with LIGs.X n γ ( or xδ#wAγ ⇒ δ#wX 1 . This produces the eﬀect that there is only one stack aﬀected at each derivation step.. If A → X 1 ... GLOBAL INDEX GRAMMARS This is equivalent to a CFG derives relation in the sense that it does not aﬀect the stack of indices (push and pop rules). In IGs the stacks of indices are distributed over the non-terminals on the right-hand side of the rule.X n is a production of type (c.) rules in GIGs..] a β .. ∗ 3.X n is a production of type (b. 2. Indeed push rules (type b) are constrained to start the right-hand side with a terminal as speciﬁed in GIG’s deﬁnition (6.... then: ¯ xδ#wAγ ⇒ δ#wX 1 .

4 ≤ k} We can generalize the same mechanism even further as follows: Example 8 (Multiple dependencies) L(Ggdp ) = { an (bn cn )+ | n ≥ 1}. R. S → A D 2. . C}. e}. #. c. {i}. {a. #. S→ a1 E 2 4a. E. E}. C.. A. and P is composed by: 1. Ak }. S. {a . a j }.. S. ak }.E i → ai E i ai ai−1 ¯ 2. S → a1 S E 2 3a. . D → d D | d x ¯ Now we generalize example 6 and show how to construct a trGIG that recognizes any ﬁnite number of dependencies (similar to the LR-2PDA automaton presented in the previous chapter): Claim 2 The language Lm = {an an . {a. b. O. . P ) and P is: S → ACE C → cD ¯ i 27 A → aAb i A → ab i ¯ j ¯ j C → cD1 ¯ i D1 → CD D→d j E → eE E→e The derivation of w = aabbccddee: #S ⇒ #ACE ⇒ i#aAbCE ⇒ ii#aabbCE ⇒ i#aabbcD1 E ⇒ i#aabbcCDE ⇒ #aabbccDDE ⇒ j#aabbccdDE ⇒ jj#aabbccddE ⇒ j#aabbccddeE ⇒ #aabbccddee Example 7 (crossing dependencies) L(Gcr ) = {an bm cn dm | n. d}... #. B → b B | b x 4. c}. B.. A.ak | n ≥ 1.. E i → ai Oi+1 ai and for every E i and Oi such that i is odd in Oi and i is even in E i add the following rules: 5a. a 4 . L}. Ak → ak 6b..2. k ≥ 4} is in trGIL 1 2 k Proof Construct the GIG grammar Gk = ({S. S. Ak → ak Ak ak −1 ¯ n 4b. {a 2 . Ggdp = ({S. A. d.. Ok → ak ak −1 ¯ L(Gn ) = {a1 a2 . O3 .3.an | n ≥ 1. Oi → ai E i+1 If k is even add: If k is odd add: n n 3b.. D. A → a A c | a B c 3. b. where Gc r = ({S.. P ) such that k ≥ 4 and j = k if k is even or j = k − 1 if k is odd. G5 = ({S.. c. g }. b. m ≥ 1}. Oi → ai Oi E i+1 ai−1 ¯ 6a.. {a1 .. P ) and P is: S → AR L → OR | C A → aAE ¯ i A→a ¯ i E→b i R→bL i C→cC|c O → c OE | c . D1 . E 2 . {x and P is: 1. GLOBAL INDEX GRAMMARS Example 6 (multiple agreements) L(G5 ) = {an bn cn dn en | n ≥ 1}. {a.Ak → ak Ak 5b.

we can see that by using left recursive pop productions we can generate a language with a higher number of crossing dependencies than we can by using a trGIG. Example 11 (Crossing Dependencies) L(Gcr3 ) = {an bm cl dn em f l } Gcr3 = ({S. {i. D. It is easy to see that by generalizing the same mechanism any ﬁnite number of crossing dependencies can be generated. #. F. S. {a. j}. P ) and P = 1. S → aS i 2. #. we mentioned the diﬀerence between push and pop rules in GIGs. P ) and P = . C}. c. This is not possible in trGIGs because pop rules cannot be left recursive. b}∗ } Gwwn = ({S. S → bS j 3. {a. Example 9 (Copy Language) L(Gww ) = {ww |w ∈ {a. d. B. f }. j. L. S. A. B. P ) and P = S → AS | BS | C B→b j ¯ i C → RC | L L → Lb | b ¯ j R → RA ¯ i R → RB ¯ j R→ [#] A→a i L → La | a The derivation of ababab: #S ⇒ #AS ⇒ i#aS ⇒ i#aBS ⇒ ji#abS ⇒ ji#abC ⇒ ji#abRC ⇒ i#abRBC ⇒ #abRABC ⇒ #abABC ⇒ i#abaBC ⇒ ji#ababS ⇒ ji#ababL ⇒ i#ababLb ⇒ #ababab In the next example. E}. j}. {i. C.28 CHAPTER 3. This diﬀerence enables the use of left recursive pop rules so that the order of the derivation is a mirror image of the left-right order of the input string. e. b}. k}.2 GIGs examples We conjecture that the following languages can be deﬁned with a GIG and cannot be deﬁned using a trGIG. R → Ra | a ¯ i 5. b}∗ } Gww = ({S. b}. #. R}. R → Rb | b ¯ j The derivation of the string abbabb: #S ⇒ i#aS ⇒ ji#abS ⇒ jji#abbS ⇒ ji#abbRb ⇒ i#abbRbb ⇒ #abbRabb ⇒ #abbabb Example 10 (Multiple Copies) L(Gwwn ) = {ww+ | w ∈ {a. S → R 4. #S ⇒ #AR ⇒ #aAER ⇒ #aaER ⇒ i#aabR ⇒ ii#aabbL ⇒ ii#aabbOR ⇒ i#aabbcOER ⇒ #aabbccER ⇒ i#aabbccbRii#aabbccbbL ⇒ ii#aabbccbbC ⇒ i#aabbccbbcC ⇒ #aabbccbbcc 3. b. R. S. {a.2. Above. GLOBAL INDEX GRAMMARS The derivation of the string aabbccbbcc shows ﬁve dependencies. {i.

c. L → Lf | Ef 6. [11] ). {i. S. P ) and P is: S → cS i ¯ i ¯ j S → bS j S → aS k S → SaSbS | SbSaS S → SaScS | ScSaS S → SbScS | ScSbS ¯ k S→ L(Gmix ) = Lmix = {w|w ∈ {a. and at the same time it requires that three dependencies be observed. The diﬃculty is due to the number of strings in the language at the basic combination level (there are 90 strings of length 6. It allows an incredible number of possible strings of any given length in the language. The number of permutations for a string of length n is n!. c}∗ and |a|w = |b|w = |c|w ≥ 1} Example of a derivation for the string abbcca. however it is not so easy to prove or assess whether the 2 A. as exempliﬁed by Lmcr = {an bm cl dn em f l g n hm il } or {an bm cl (dn em f l )+ } The next example shows the MIX (or Bach) language (the language with equal number of a’s b’s and c’s in any order. It is easy to design GIG grammars where the respective languages are in Lmix . and 1680 strings of length 9). There are also factorial duplicates for the number of times a character occurs in the string. GLOBAL INDEX GRAMMARS 1. j.3. b. c}. We show that GILs are semilinear. This language though conceptually simple. #. This is important because it is very diﬃcult to build a proof for the claim that the MIX language belongs (or does not belong to) to a family of languages 2 . In the mix language each character occurs in one third of the string and there are three characters. . S → F L ¯ k 29 3. {a.2. obtaining sets of crossing dependencies. The number of strings of length 3n is (3n)!/(n!)3 . Example 12 (MIX language) Gmix = ({S}. k}. Joshi p. therefore ILs and GILs could be incomparable under set inclusion. It was conjectured in [29] that the MIX language is not an IL. F → aF | aB i ¯ j 4. B → bB | bC j ¯ i 2. see e.g. b. C → cC | c k 5. S ⇒ k#aS ⇒ #aSbScS ⇒ #abScS ⇒ j#abbcS ⇒ #abbcScSaS ⇒ #abbccSaS ⇒ #abbccaS ⇒ #abbcca We present here the proof that L(Gmix ) = Lmix . b’s and c’s) are observed. turns out to be a very challenging problem. E → Ee | De 7. and the need to ensure that the three dependencies (between a’s. D → Dd | d This can be even further generalized.

30 CHAPTER 3. with the one proposed in 2. w = w c2 and y = a2 y b2 trigger the following derivation (perhaps the only one). b. First direction: L(Gmix ) is in Lmix = {w|w ∈ {a. S ⇒ k#a ⇒ #acb k j j i i S ⇒ j#b ⇒ #bac ¯ j ¯ j S ⇒ j#b ⇒ #bca S ⇒ i#b ⇒ #cba ¯ i ¯ i S ⇒ i#b ⇒ #cab Inductive Hypothesis: Assume for any string uwyz in Lmix then uwyz is in L(GMIX ). We need to prove that ut1 wt2 y t3 z is in L(GMIX ) ( |uti wti y ti z| = n + 3). 1. S ⇒ jδ #u bS ⇒ δ#u bSaScS ⇒ iδ #u bw caScS ⇒ δ #u bw caay bcS If we try to combine the derivation proposed in 1. It can be seen that all the derivations allow only equal number of a s. Where ¯ ¯ t1 t2 t3 is any string of length three in Lmix and t designates the terminals that introduce the indices ¯ in the derivation. This blocking eﬀect can be illustrated by the following hypothetical situation: suppose uwyz might be composed by u = u b1 . we reach the following point: ∗ ∗ ∗ ∗ . Proof . We show ¯ ¯ that Gmix generates any uti wti y ti z It is trivial to see that the following derivation is obtained (where a corresponds to tk and b and c ¯ to the tk ’s. b s and c s due to the constraint imposed by the indexing mechanism. n ≥ 0. Second direction: Lmix is in L(Gmix ) Basis n = 0. S ⇒ δ#uS ⇒ kδ#uaS ⇒ δ#uaSbScS ⇒ δ #uawbScS But at this point the strings b and c might block a possible derivation. It might just turn out to be a signiﬁcant subset of Lmix (in other words it is not so easy to ﬁgure out if there are some set of strings in Lmix that might not be generated by a particular grammar). GLOBAL INDEX GRAMMARS language is indeed Lmix . c}∗ and |a|w = |b|w = |c|w ≥ 0}. and t are members of the pairs that remove indices in the derivation. y = a1 y c1 z. 2. n = 3: S ⇒ k#a ⇒ #abc k ¯ k ¯ k is clearly in the language. where ¯ ¯ ¯ ¯ |uwyz| = n.

tj . This in turn can be easily derived by: δ#S ⇒ jδ#bS ⇒ δ#baScS ⇒ in−1 δ#baan−1 cS ⇒ kin−1 δ#baan−1 caS ⇒ in−1 δ#baan−1 cacbS ⇒ δ#baan−1 cacb(cb)n−1 Claim 4 For any sequence of two possible conﬂicting indices there is always at least a derivation in Gmix . We show now that this situation is impossible and that therefore Lmix is in L(GMIX ). removing the index j.. tk . is there any possible combination of indices : ¯ ¯ ¯ ¯ ¯ ¯ ti · · · tj · · · tk · · · tj · · · tk · · · ti · · · tj · · · tk · · · ti that cannot be derived by GMIX ? Remember t designates the terminals that introduce the indices ¯ in the derivation.3. This cannot be done because the derivation has to proceed with the preceding S (at a0 Sb0 ) due to the leftmost derivation constraint for GIGs. there are three diﬀerent indices (ti = tj = tk ): ti utj vtk . Case 2. tj are reduced to the case of two conﬂicting indices. GLOBAL INDEX GRAMMARS 3....(ti . Case 1. there is more than one element in one of the indices : ¯¯ ¯¯ ti tj (ti )n [tj tj (ti ti )n+1 ] for example: ab(a)n [cacb](cb)n ab(a)n [accb](cb)n S ⇒ ti S[ti ti ] reduces to the following problem which is derived by the embedded S ¯ ¯ ¯ ¯ tj (ti )n [tj . Claim 3 For any sequence of possibly conﬂicting indices the conﬂicting indices can be reduced to two conﬂicting indices in a derivation of the grammar Gmix . Therefore.. there is no guarantee that the necessary sequence of indices are removed. because the grammar is highly ambiguous.. . and t are members of the pairs that remove indices in the derivation. clearly S ⇒ i#ti S ⇒ #ti Stj Stk S and the complements of ti .e.: baScacbS where the two non terminal S expand the ti n−1 triples..ti )n ] And this problem is reduced to: ¯¯¯¯ tj ti S tj tj ti ti S i. We prove now that such a blocking eﬀect in the indexing is not possible: there are always alternative derivations. We need to account whether there is a possible conﬁguration such that a conﬂict of indices arises.2. In other words. S ⇒ jδ #u b1 S ⇒ kjδ #u b1 a0 S ⇒ jδ #u b1 a0 Sb0 Sc0 S ∗ 31 At this point the next step should be the derivation of the non-terminal S at b0 Sc0 .

Formally: S ⇒ #uS ⇒ #uwS ⇒ #uwxS ⇒ #uwxzS ⇒ #uwxzy Therefore for the following combination: ¯ ¯ ¯ ¯ ti utj wti xti z tj y tj : the following derivation is possible. shows that any CFG is u . if indices are introduced. GLOBAL INDEX GRAMMARS Assume the possible empty strings u. The tests were performed on the strings of length 6 and 9. given we are considering just two possible conﬂicting indices. w. x. The well-known Chomsky-Sch¨tzenberger theorem [22].32 CHAPTER 3. etc. Let us assume the ti triple corresponds to a(bc) and tj triple corresponds to b(ca) therefore it is: uai vbj wbi xcj zci yaj (or uai vbj ci xcj zbi yaj . z and y do not introduce possible blocking indices (i.) It can be seen that there is an alternative derivation: S ⇒ k#uai vS ⇒ #uai vbj Scj S ⇒ j#uai vbj wbi xcj S ⇒ #uai vbj wbi xcj Sci Saj ⇒ #uai vbj wbi xcj zci yaj .3 GILs and Dyck Languages We argue in this section that GILs correspond to the result of the “combination” of a CFG with a Dyck language. ¯ ¯ ¯ ¯ S ⇒ i#uti S ⇒ #uti S ti S ti S ⇒ j#uti vtj ti S ti S ⇒ ¯ ¯ ¯ ¯ ¯ ¯ ∗ ¯ ¯ ¯ ¯ j#uti vtj ti xti S ⇒ #uti vtj ti xti S tj S tj ⇒ #uti vtj ti xti z tj y tj Similar alternatives are possible for the remaining combinations: ¯ ¯ ¯ ¯ uti v ti wtj xti z tj y tj ¯ ¯ ¯ ¯ uti vtj wtj xti z tj y ti ¯ ¯ ¯ ¯ uti vtj wti xtj z tj y ti ¯ ¯ ¯ ¯ uti v ti wtj xtj z ti y tj ¯ ¯ ¯ ¯ uti vtj wtj xti z ti y tj ¯ ¯ ¯ ¯ uti vtj wti xtj z ti y tj ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ However the last one requires some additional explanation.e. they are also removed in the derivation of the substring). 3. Therefore we conclude that there is no possible blocking eﬀect and there is always a derivation: S ⇒ #uawbycz ∗ ∗ ∗ ∗ We also used the generate-and-test strategy to verify the correctness of the above grammar Gmix for Lmix and two other grammars for the same language. This assumption is made based on the previous two claims.

Dyck languages can also provide a representation of GILs. As we have seen above. and α. if i ∈ I. GILs can be represented using a natural extension of Chomsky-Sch¨tzenberger u theorem (we will follow the notation used in [39]). This stack of indices. T and I are pairwise disjoint and |T ∪ I| = r ¯ ¯ The alphabet for the semi-Dyck set will be Σ0 = (T ∪ I) ∪ (T ∪ I). where D is a semi-Dyck set. if p = A → αaβ then p1 = A1 → α1 a¯β 1 a 2. R is a regular set and φ is a homomorphism [39]. has the power to compute the Dyck language of the vocabulary of indices relative to their derivation order. P ). The analogy in the automaton model is the combination of a Finite State Automaton with a stack. T. Let Dr be the semi-Dyck set over Σ0 . φ(¯) = a φ(i) = . Deﬁne a CFG grammar G1 = (N. as we said. #. I. L = φ(Dr ∩ R). which results in a PDA. S 1 . This “combination” is formally deﬁned in the CFG case as follows: each context-free language L is of the form. there is an integer r. S. if p = A → α then p1 = A1 → iα1 i 3 We 3 use a subscript 1 to make clear that either productions and non-terminals with such subscript belong to the CFG G1 and not to the source GIG. i ∈ I.3. β ∈ (N ∪ T )∗ create a new ¯ production p1 ∈ P 1. Σ0 . Proof Let L = L(G). φ(¯ = i) if a ∈ T . Theorem 1 (Chomsky-Sch¨ tzenberger for GILs) . a CFL and a homomorphism φ such that L = φ(Dr ∩CF L). GILs include languages that are not context free. GILS AND DYCK LANGUAGES 33 the result of the “combination” of a Regular Language with a Dyck Language. u For each GIL L. A GIG is the “combination” of a CFG with a Dyck language: GIGs are CFG-like productions that may have an associated stack of indices. Deﬁne φ as the homomorphism from Σ0 ∗ into T ∗ determined by φ(a) = a. such that α1 and β 1 ∈ (N ∪ T T )∗ as follows: 1. For every production p ∈ P where a ∈ T. . P 1) such that P1 is given according to the following conditions. Push and pop moves of the stack have the power to compute a Dyck language using the stack symbols.3. where G = (N T.

n > 1 then: Case (A) If: #S ⇒ #yB ⇒ #ya S 1 ⇒ zA1 ⇒ za¯ a It is clear that if z ∈ Dr . if p = A → α then p1 = A1 → ¯ 1 ¯ i CHAPTER 3. so is za¯. y ∈ k ¯ ¯ T ∗ ) implies there is a L(G1) derivation u A ⇒ u zB (where u . GLOBAL INDEX GRAMMARS 4. We have to show that L = φ(Dr ∩ L(G1)). (2) and (3) i k n−1 n i n n−1 n−1 by Induction Hypothesis and (1) S ⇒ u A1 ⇒ u ia¯B 1 ⇒ u ia¯zC 1 ⇒ u ia¯z¯ a a a a ia¯ If u . And if φ(z) = y then φ(za¯) = ya a a Case (B) If: #S ⇒ #uA ⇒ i#uaB ⇒ i#uayC ⇒ #uaya by Induction Hypothesis. z ∈ (T T ∪ I ∪ I)∗ ) such that z ∈ Dr k and φ(z) = y for every k < n. if p = A → α then p1 = A1 → ¯ 1 iiα [i] L(G1) is a CFL. z ∈ Dr so is u ia¯z¯ a.34 iα 3. Basis a) w = then #S ⇒ # and S 1 ⇒ b) |w| = 1 then #S ⇒ #a and S 1 ⇒ a¯ a Both and a¯ are in Dr ∩ L(G1) a The induction hypothesis exploits the fact that any computation starting at a given conﬁguration of the index stack and returning to that initial conﬁguration is computing a Dyck language. φ and the corresponding CFL. Suppose w ∈ L. Basis: a) w = then S 1 ⇒ and #S ⇒ # k k n−1 b) |w| = 1 then S 1 ⇒ a¯ and #S ⇒ #a and φ(a¯) = a a a Induction Hypothesis Suppose that if there is a CFL derivation uA ⇒ uzB and z ∈ Dr then there is a GIL derivation δ#u ⇒ δ#u yA such that φ(z) = y Case (A) If: S 1 ⇒ zA1 ⇒ za¯ then a n−1 k . Suppose that if there is a GIL derivation δ#uA ⇒ δ#uyB (where u. Induction Hypothesis. And if φ(z) = y and φ(u ) = u then φ(u ia¯z¯ a) = uaza a ia¯ a ia¯ The reverse direction φ(Dr ∩ L(G1)) ⊆ L. First direction L ⊆ φ(Dr ∩ L(G1)). The proof is an induction on the length of the computation sequence. so we have deﬁned Dr . Suppose w ∈ φ(Dr ∩ L(G1)).

S →S b¯¯ bj 4. The proof is an extension of the equivalence proof between CFGs and PDAs. . b}∗ } Given Gww = ({S. j}. Σ = T = {a. S → S a ¯ i 7. b}∪I = {i. #. then there exists a LR-2PDA M such that L= L(M). z ∈ Dr so is u ia¯z¯ a and if φ(z) = y and φ(u ) = u then φ(u ia¯z¯ a) = uaza a ia¯ a ia¯ If other (possible) derivations were applied to u ia¯zC then the corresponding string would not a be in Dr . i. b}. S → 3. S }. and we construct GwwD from Gww as follows: GwwD = ({S. 2. S → aS i 2. #. ¯ S. ¯ = Σ∪Σ. j}. j} CF Lww = L(GwwD ). b. ¯ i. S →S a¯¯ ai 3. EQUIVALENCE OF GIGS AND LR-2PDA #S ⇒ #yA ⇒ #ya If z ∈ Dr so is za¯a¯ and if φ(z) = y then φ(za¯a¯) = yaa a a a a Case (B) If: S 1 ⇒ u A1 ⇒ u ia¯B 1 ⇒ u ia¯zC 1 ⇒ u ia¯z¯ a then a a a ia¯ #S ⇒ #yA ⇒ i#yaB ⇒ i#uayC ⇒ #uaya i k n−1 k n−1 n−1 35 If u . S → bS j 3. P ) and P = ¯ b. S. S → S a ¯ i 5. b. S → S b¯¯ bj 1.3. S → a¯iS a 6. a. a. {a. Theorem 2 If L is a GIL. This is on our view a consequence of the fact that GIGs are a natural extension of CFGs. ¯ j.4.4 Equivalence of GIGs and LR-2PDA In this section we show the equivalence of GIGs and LR-2PDA. S → S ¯ a ia¯ 5. ¯ j. S → S b ¯ j ¯ Let D4 be the semi-Dyck set deﬁned over Σ0 = {a. {a. S → 4. i. P ) and P = 1. ¯ i We exemplify how the Chomsky-Sch¨tzenberger equivalence of GIGs can be obtained constructu ing the grammar for the language L1: Example 13 () L(Gww ) = {ww |w ∈ {a. S → b¯ bjS 7. S }. ¯ i. j} ¯ b. {i. S → S b ¯ j 6.

c. )} Now we have to prove that L(G) = L(M). Γ. A. Remember GIGs require leftmost derivations for the rules that aﬀect the stack to apply. w. S. the only change that we require is to take into account the operations on the indices stated in the grammar rules. α. Therefore each sentential must be of the form xα where x ∈ T and alpha ∈ V ∪ T Claim 5 #S ⇒ #xα iﬀ (q1. If the top of the stack is ⊥ accept. ) = {(q 1 . ) = {(q 1 . I. ⊥) ∗ Notice that the proof is trivial if all the derivations are context free equivalent. . ⊥. The corresponding formalization is: Let G = (V. T. If the top of the stack is a terminal a.36 CHAPTER 3. q 2 }. x. δ(q 1 . If the top of the stack is a variable A. ju)| where A → aw is a rule in P } j 5. a. . Given a GIG G: 1. δ(q 0 . δ 2 = ⊥ . .e. u) = {(q 1 . δ. δ(q 1 . ⊥) or δ = [#]. q 0 . b. a. i. #. if #S ⇒ #xα if and only if (q1. and aﬀect the second stack accordingly. If they match repeat. x. S⊥. A. w. S. ) = {(q 1 . . . a. . compare it to the next symbol in the input. Construct the LR-2PDA M = ({q 0 . use G to construct an equivalent LR-2PDA. )| where A → w is a rule in P } ¯ j 6. j) = {(q 1 . select one of the rules for A and substitute A by the string on the right-hand side of the rule. q 2 ) where Γ = V ∪ T ∪ I and δ: 1. w. ⊥)} 2. 2. Otherwise reject this branch of the non-determinism. )| where A → w is a rule in P } 3. δ 2 = ∗ δ2 δ1 (q1. α. We use as a basis the algorithm used in [77] to convert a CFG into an equivalent PDA. A. δ(q 1 . j) = {(q 1 . δ(q 1 . Repeat the following steps forever. Place the marker symbol ⊥ and the start variable on the stack. δ(q 1 . S. ⊥) ∗ ∗ (q1. . a. ⊥) where δ 1 . T. w. . P ) be a GIG. The general description of the algorithm is as follows. j)| where A → w is a rule in P } [j] 4. q 1 . GLOBAL INDEX GRAMMARS Proof Assume a GIG G that generates L. A.

x. EQUIVALENCE OF GIGS AND LR-2PDA 37 This holds true by the equivalence of PDA and CFG. α. . α. Aγ. By (1. Assume (q1. ⊥) k−1 (q 1 . ηγ. S. a. ⊥) k≥1 Case A context free If #S ⇒ δ#yAγ ⇒ δ#yaηγ = δ#xα where x = ya and α = ηγ then by IH and 2 or 3: (q 1 . ⊥) Case C Pop #S ⇒ jδ#yAγ ⇒ δ#yaηγ = δ#xα then by IH and 5. . S. δ) for any (q 1 . a. ⊥) ∗ k−1 (q 1 . x. ⊥) First direction (if) Basis: #S ⇒ #x if (q1. α. β. ⊥) ∗ (q1. β. jµ) where δ = jµ. S. S.4. Aγ. (q 1 . . ⊥) where x ∈ T ∪ { } and δ = (q1. a. w. a. α. the corresponding equivalence holds. i. (q 1 . ∗ ¯ j The last step implies A → aη ∈ P Hence: #S ⇒ jδ#yβ ⇒ δ#yaηγ = xα The only if direction: Suppose that #S → δ#wα implies that (q 1 . ι Given the last step implies β = Aγ for some A ∈ V . S. a. that the claim above is a particular case of: #S ⇒ δ#xα iﬀ (q1. S. . δ). ya.) or (2. Pop moves in the auxiliary stack. α. ya. δ) . Push moves in the auxiliary stack. S. δ) (q 1 . ⊥) or δ = [#]. . This case is a standard CFL derivation. jδ) ¯ j ¯ j (q 1 . δ) (q 1 . ⊥. k ∗ ∗ ∗ (q1. ⊥) Case B Push #S ⇒ µ#yAγ ⇒ jµ#yaηγ = jµ#xα then by IH and 4. .) respectively. S. The last step implies A → aη ∈ P Hence: #S ⇒ γ#yβ ⇒ jγ#yaηγ = xα j ∗ Case C. β. α. (q 1 . x. δ) Induction Hypothesis. ya. ¯ j ∗ j k−1 ∗ ∗ k k ∗ (q 1 .e. . . S. A → aη ∈ P where ι = #S ⇒ δ#yβ ⇒ yaηγ = xα Case B. δ) (q 1 . Therefore we need to prove that whenever the stacks are used. µ) j (q 1 . . jγ) where δ = jγ. γ) j j (q 1 . ya. α. ⊥) k−1 and α = ηγ Hence: (q 1 . (q 1 . ya. δ) implies #S ⇒ δ#yβ for any k ≥ 1 and show by induction on k that #S ⇒ δ#xα Let x = ya Case A.3.

. This construction is deﬁned so that a leftmost derivation in G of a string w is a simulation of the LR-2PDA M when processing the input w. We follow the proof from [41]. q and p in Q. α = (which excludes Case B Push to be the last step or move but not as a (q 1 . [qAq n ] → a [q 1 B 1 q 2 ] [q 2 B 2 q 3 ] . such that Γ1 is used in the main stack and Γ2 is used in the auxiliary stack. 2. j) contains (q 1 . δ) = (q 1 . Σ.e.. ⊥) Theorem 3 If L is recognized by a LR-2PDA M. Notice that in case 4. F ). ⊥) ∗ ∗ (p. and P is the set of productions..B n−1 . B 1 B 2 .[q n−1 B n−1 q n ] such that δ(q. ⊥) .38 (q 1 . I. the only modiﬁcation is to mimic the operation of the auxiliary stack. B 1 . (push) 5. a. each a in Σ ∪ { }.B n−1 . ⊥. This conversion algorithm to construct a CFG from a PDA follows the one in [41] (including the notation). proving by induction on the steps in a derivation of G or moves of M that: #[qAp] ⇒ #w if and only if (q.. 3. I = Γ2 . q 0 . S → q 0 ⊥q for each q in Q And for each q. q 1 . S. . S. In all the cases if n = 1. a. a. j in I. A in Γ.[q n−1 B n−1 q n ] such that j δ(q. δ. then L is a GIL. q 2 . ⊥.. i) contains (q 1 . a. items 3. Γ2 .. B 1 B 2 . ⊥) k−1 CHAPTER 3. . 4. A. α. . Aγ.. then the production is [qAq 1 ] → a.B n−1 . Σ.. A. plus the new symbol S. [qAq n ] → a [q 1 B 1 q 2 ] [q 2 B 2 q 3 ] .. S. ⊥. B n−1 in Γ and i.. Γ.. B 1 B 2 . Let M be the LR-2PDA (Q. ya. ).. Let G be a GIG = (V. δ) Given the values δ...[q n−1 B n−1 q n ] (pop) such that ¯ j δ(q. ).. a must be in δ Σ. B 1 B 2 . 1. q n in Q. α.B n−1 . P ) where V is the set of variables of the form qAp. B 2 . ji). We need to show that L(G) = L(M ). and 4. a... Assume Γ is composed of two disjoint subsets Γ1 .[q n−1 B n−1 q n ] such that [j] δ(q. . A. ⊥) step ≤ k − 1) we obtain #S ⇒ δ#xα = #x iﬀ (q 1 . w. A. and cannot be given the constraints imposed on the possible transitions in a LR-2PDA. A. jδ) ∗ ¯ j (q 1 . i. ) contains (q 1 . x. j). j) contains (q 1 . GLOBAL INDEX GRAMMARS (q 1 . A.. Proof Given a LR-2PDA construct an equivalent GIG as follows. [qAq n ] → a [q 1 B 1 q 2 ] [q 2 B 2 q 3 ] . #. [qAq n ] → a [q 1 B 1 q 2 ] [q 2 B 2 q 3 ] . . with the stack of indices in the grammar..

(q i . B m . w. ie.. γ k −1 ) (p.3. y i . so the induction hypothesis is: If γ 1 #u[qAp] ⇒ γ k #uw then (q.. q m + 1 = p. ⊥. ... ⊥) ∗ k−1 ∗ (q n−1 . y = y 1 .4.: (q.. ay. γ 1 ) k−1 (p. where γ i ∈ I ∗ and ∗ u. B 1 B 2 . . ⊥. γ k ) then γ 1 #u[qAp] ⇒ γ k #uw. γ k −1 = ⊥ and δ = or [#] or 2... A. ⊥. γ k ) implies: δ #[qAq n ] ⇒ γ k −1 #ay 1 y 2 . .y m−1 [q n−1 B n−1 q n ] ⇒ γ k #ay 1 y 2 .. B 1 B 2 .. γ i ) for 1 ≤ j ≤ m The ﬁrst move in I.. γ k ) for any k ≥ 1. such that the inductive hypothesis holds for each: II. these are a CFG and a PDA and therefore follows from the standard CFG.y m where each y i has the eﬀect of popping B i from the stack (see the details corresponding to the PDA-CFG case in [41]) then there exist states q 2 . y m .. B i . γ 1 ) k (p. (q. A. given the only type of rules/moves applicable are 1 and 2 (in this case where δ = [⊥]).. (q. γ k ) Decomposing y in m substrings... the induction hypothesis states that whenever the LR-2PDA M recognizes a string emptying the main stack and changing the auxiliary stack from a string of indices γ 1 to a string of indices γ k . EQUIVALENCE OF GIGS AND LR-2PDA 39 If only rules created with step 1 or 2 and the corresponding transitions are used then.. w..B m . In other words. . q m +1. ⊥) δ ∗ (q i+1 . then either 1. The basis is also proven for both directions. ⊥) (q 1 . A..y m = #w where γ k = ⊥.. .B m . A. then γ 1 = . Therefore I. y. ay. γ 1 ) implies #[qAq n ] ⇒ γ 1 #a [q 1 B 1 q 2 ] [q 2 B 2 q 3 ] .. implies: #[qAq n ] ⇒ γ 1 #a[q 1 B 1 q 2 ] [q 2 B 2 q 3 ] . ay. PDA equivalence. . in k moves (for any k ≥ 1) then the GIG G will derive the same string w and string of indices γ k I.[q n−1 B n−1 q n ] where δ can be only be some j in I then γ 1 = j or δ is [⊥]. γ i+1 ) then γ i #[q i B i q i+1 ] ⇒ γ i+1 #y i ∗ (q 1 . γ 1 ) Suppose: k ∗ (p.y m = #w The same holds for the last symbol that is extracted from the stack (from II again): III.[q n−1 B n−1 q n ] ⇒ γ k #ay 1 y 2 . γ k −1 = j ∈ I and δ = ¯ j Now the “only if” part (the basis was proven above). ⊥.. w ∈ Σ∗ . A.. First (if) direction: Induction Hypothesis: If (q. y. .

is If we consider the last step from I. γ 1 ) or [⊥].. then (q..B n .. B 1 B 2 .xm [q n−1 B n−1 q n ] ⇒ γ k #uw which in turn implies: δ (q. B i B i+1 . w. γ i ) ∗ (q i+1 . we can decompose w as ax1 x2 . Z 0 ) ∗ ∗ (p. ⊥) k−1 ∗ (p...n. ⊥. w. ∗ where γ 1 is ⊥ if δ in the ﬁrst step of I.. γ k − 1) (q n . .. ⊥) or xm = (q n−1 .B n to each stack in the sequence of ID’s so that: II.. x1 x2 . A. . Again. ⊥). we obtain (remember w = ax1 x2 . ... γ k ) where the possibilities are either xm = xn or δ = [⊥] or and given we require γ k to be ⊥ then the only possibilities are: (A) either γ k = γ k −1 = ⊥ and δ = (B) δ = γ k −1 then γ k = ⊥ ¯ The ﬁnal observation is that we satisfy the ﬁrst claim assigning the following values: q = q O and A = Z 0 . . γ i+1 ) From the ﬁrst step of the derivation in I. B i+1 .. B i .. B n . where i = 1.[q n−1 B n−1 q n ] ⇒ γ k #uw δ k−1 where q n = p. .. xm . γ k ) #u[qAq n ] ⇒ γ k −1 #uax1 . #u[qAq n ] ⇒ γ 1 #ua [q 1 B 1 q 2 ] [q 2 B 2 q 3 ] . ⊥. GLOBAL INDEX GRAMMARS I. 2. (q i . So according to 1.. A.: and from this move and II.. xi . γ i ) ∗ ∗ (q i+1 ... w. ⊥. w.xn .xn such that γ i #ui [q i B i q i+1 ] ⇒ γ i+1 #ui xi therefore by the induction hypothesis: (q i .. ⊥.B n . in the construction of the GIG G.. A. .B n .40 CHAPTER 3.. .. xi . ⊥) (q 1 .xn ): (q. γ i+1 ) for 1 ≤ i ≤ n Now we can add B i+1 . we obtain: #S ⇒ #w iﬀ (q 0 .

The relation to Growing Context Sensitive languages is interesting for a similar reason. and the constraints imposed on the use of the additional stack in the GIL automaton might be seen as a shrinking property.1 Relations to other families of languages There are two clear cut relations that place GILs between CFLs and CSLs: GILs are included in CSLs and GILs include CFLs. The corresponding automaton is another two stack machine. Therefore a LBA can simulate a LR-2PDA. GILs and ILs might be incomparable. It is interesting to establish the relation of GILs to LILs and LCFRLs because there is an extensive tradition that relates these families of languages to natural language phenomena. However GCSLs might include trGILs. We do not address possible relationship with the counter languages [42] however. GILs are incomparable to the Growing Context Sensitive Languages (GCSLs). GILs might include LILs and perhaps LCFRLs. The additional stack in GIGs/LR-2PDA. the relation to LILs is interesting because of their similarities both in the grammar and the automaton model.Chapter 4 Global Index Languages Properties 4. 41 . Most of other relations are more diﬃcult to determine. • Proper inclusion of GILs in CSLs. corresponds to an amount of tape equal to the input in a LBA. Due to the constraints on the push productions at most the same amount of symbols as the size of the input can be introduced in the additional stack. Moreover. another family of languages that has some similarities with GILs.

#.2 Closure Properties of GILs The family of GILs is an Abstract Family of Languages. A string w is in L1 if and only if the derivation #S 3 ⇒ #S 1 ⇒ #w is a derivation in G3 . Assume also that S 3 . care should be taken that the two sets of stack indices be disjoint. such as the MIX language. This is an open question. Proof Let L1 and L2 be GILs generated respectively by the following GIGs: G1 = (N 1 . There are no other derivations from S 3 than #S 3 ⇒ #S 1 and #S 3 ⇒ #S 2 Therefore L1 ∪ L2 = L3 . [41]). ∗ . This can be proved using the same type of construction as in the CFL case (e. S 5 are not in N 1 or N 2 . T 1 . I 1 . The copy language {ww|w ∈ Σ∗ } is not a GCSL [53]. Construct the grammar G3 = (N 1 ∪ N 2 . using two GIG grammars. • Proper inclusion of LILs in GILs. GILs contain some languages that might not be ILs. For concatenation.g. I 1 ∪ I 2 . we assume that N 1 . I 2 . The same argument holds for w in L2 starting the derivation #S 3 ⇒ #S 2 . We consider this quite possible. S 2 ) Since we may rename variables at will without changing the language generated. This result is obtained from the deﬁnition of GIGs. P 1 . S 4 . Proposition 1 GILs are closed under: union. P 2 . Union L1 ∪ L2 :. P 3 . #. #. • GILs and Growing Context Sensitive Languages are incomparable. T 2 . concatenation and Kleene closure. GCSLs include the language {ba2 |n ≥ 1} which is not semi-linear. I 2 are disjoint. S 1 ) and G2 = (N 2 . I 1 . GLOBAL INDEX LANGUAGES PROPERTIES • Proper inclusion of GILs in ILs. S 3 ) where P 3 is P 1 ∪ P 2 plus the productions S 3 → S 1 | S 2 . T 1 ∪ T 2 .42 CHAPTER 4. N 2 . We discuss this issue in Chapter 6. n 4. However it remains as an open question. ILs are not semi-linear and GILs are semi-linear. instead of two CFG grammars. • CFLs are properly included in GILs.

(e) The productions of G1 are all the productions of each Ga together with all the productions from G modiﬁed as follows: replace each instance of a terminal a in each production by the start symbol of Ga . S 4 ) such that P 4 is P 1 ∪ P 2 plus the production S 4 → S 1 S 2 . I 1 ∪ I 2 . (c) the set of indices I and # are the set of indices and # from G (d) the start symbol of G1 is the start symbol of G. Let L be L(G) and for each a in Σ let La be L(Ga ). A string w is in L1 if and only if the derivation #S 4 ⇒ #S 1 ⇒ #wS 2 is a derivation in G4 . given the only production that uses S 1 and S 2 is S 4 L(G4 ) = L1 · L2 . where each Ga is in GNF. S 1 ) where P 5 is P 1 plus the productions: S5 → S1 S5 | . Proposition 2 GILs are closed under substitution by -free CFLs. T 1 ∪ T 2 . therefore also no δ ∈ I 2 can be generated at this position. [#] ∗ ∗ ∗ The proof is similar to the previous one. #. T 1 . P 4 . This proof adapts the substitution closure algorithm given in [41] to prove substitution closure by CFLs.Let G5 = (N 1 ∪ {S 5 }. Assume that the non-terminals of G an Ga are disjoint. CLOSURE PROPERTIES OF GILS 43 Concatenation:. -free homomorphisms. -free regular sets.4. Construct a GIG grammar G1 such that: (a) the non terminals of G1 are all the non-terminals of G and of each Ga . Proof Let L be a GIL.2. P 5 . Closure:. Construct G4 = (N 1 ∪ N 2 . Notice that #S 4 ⇒ #S 1 ⇒ δ#wS 2 for some δ in I 1 does not produce any further derivation because the I 1 and I 2 are disjoint sets. (b) the terminals of G1 are the terminals of the substituting grammars Ga . The proof that L(G4 ) = L(G1 )L(G2 ) is similar to the previous one. is a derivation in G4 for some w. The substituting CFG must be in Greibach Normal Form (so as to respect the constraints on productions that push indices in the stack). and a string u is in L2 if and only if the derivation #wS 2 ⇒ #wu. L ⊆ Σ∗ . #. I 1 . and for each a in Σ let La be a CFL.

Y ) contains ([p . γ. δ. q 0 ]. a) = p and δ G (q. ⊥. GILs are also closed under intersection with regular languages and inverse homomorphism. (push) productions which require Greibach normal form. X. Again. a. Note that a may be . q].44 CHAPTER 4. F A ×F G ) such that δ is deﬁned as follows: δ([p. GLOBAL INDEX LANGUAGES PROPERTIES Note that the requirement that the substituting grammar be in GNF implies that it is an -free CFL. Γ. Parikh [60] showed that CFLs are semilinear. η) if and only if (an induction proof is given by [41] for the CFL case) (q 0 . On input a M 1 generates the string h(a) stores it in a buﬀer and simulates M on h(a). η) if and only if: δ A (p. F G ) and let R be L(A) for a DFA A = (QA . [p0 . . w. ⊥) ([p. q]. q 0 ]. Therefore L(M 1) = {w | h(w) ∈ L}. γ. Y ) contains (q . Σ. δ G . Proposition 3 If L is a Global Index Language and R is a regular set. Γ. δ. q 0 . η) and δ(p0 . q ]. ⊥) (q. w) = p i Proposition 4 Global Index Languages are closed under inverse homomorphism. . p0 . . ⊥. Construct M’ = (QA ×QG . ⊥. δ A . F ). X. γ. L ∩ R is a Global Index Language. 4. q 0 . Σ. Construct an LR-2PDA M 1 accepting h−1 (L) in the following way. a similar proof to that used in the CFL case works for these closure properties: in both cases a LR-2PDA with a modiﬁed control state is used (again see [41] for the CFL case). GILs do not tolerate an substitution due to the constraints on type b. Γ. Let L = L(M ) where M is the LR-2PDA (Q. Σ. γ. Z. η).3 Semilinearity of GILs and the Constant Growth property The property of semilinearity in a language requires that the number of occurrences of each symbol in any string (also called the commutative or Parikh image of a language) to be a linear combination of the number of occurrences in some ﬁnite set of strings. in which case p = p and η = i or η = Y It is easy to see that ([p0 . w. Σ. F A ). a. Proof Let h : Σ → ∆ be a homomorphism and let L be a GIL. Proof Let L be L(M ) for an LR-2PDA M =(QG .

Two languages L1 .. . The proof of this lemma is in [30]. is semilinear provided that the languages corresponding to C are semilinear.. |an |(w)) where |ai |(w) denotes the number of occurrences of ai in w. Semilinearity has been assumed as a good formulation of the constant growth property of natural languages (e. . We will use the following lemma in the proof of semilinearity for GIGs.3. concept.g.1.6. L2 in Σ∗ are letter-equivalent if ψ(L1 ) = ψ(L2 ). Intuitively.. 37]).e. SEMILINEARITY OF GILS AND THE CONSTANT GROWTH PROPERTY 45 i.g. We will not discuss here whether semilinearity is the correct approach to formalize constant growth. ψ(L) = {ψ(w)|w ∈ L}.4. the constant growth property requires that there should be no big gaps in the growth of the strings in the languages. The Parikh image of a language is the set of all vectors obtained from this mapping to every string in the language. The Parikh mapping is a function that counts the number of occurrences in every word of each letter from an ordered alphabet. deﬁne a mapping from Σ∗ into N n as follows: ψ(w) = (|a1 |(w). Theorem 5 If L is a GIL then L is a semi-linear language. [46. recently it has been claimed that semilinearity might be too strong a requirement for models of natural language (e. as Theorem 5.. that CFLs are letter-equivalent to a regular set. The commutative or Parikh image of a language L ∈ Σ∗ .. 35]).g. [55. v i ∈ Nn for 1 ≤ i ≤ m} b) is semilinear if V is the ﬁnite union of linear sets. However. L is a semi-linear language. We follow the proof presented in [37] to prove that the class of languages L(M c ). Deﬁnition 9 Parikh Mapping (see [39]) : Let Σ be {a1 .. an }. + cm v m |ci ∈ N. Deﬁnition 8 A set V in Nn a) is linear if V = {v 0 + c1 v 1 + .. If ψ(L) is semi-linear. then their intersection X ∩Y is also semilinear. Lemma 4 If X and Y are semilinear subsets of Nn .. This property has been extensively used as a tool to characterize formal languages (e. 60]. [33. the property of semi-linearity of GILs follows from the properties of the control mechanism that GILs use. where M c is obtained from an automata class C by augmenting it with ﬁnitely many reversal-bounded counters. We assume a deﬁnition of semilinearity of languages such as that found in [39. 86]). Roughly speaking. .

46 CHAPTER 4. Let S 2 be the semilinear set {(a1 . The intersection of S 1 and S 2 is a computable semilinear set S 3 according to [30]. We can consider pumping lemmas for diﬀerent families of languages as specifying the following facts about them: . S. ¯n . I. #. We modify the construction of L1 in the proof of Theorem 1. in other words. if p = A → α then p1 = A → ¯ iα ¯ i 4. P 1) Deﬁne a CFG grammar G1 = (N. P ) such that ¯ Σ1 is (Σ ∪ I ∪ I) and P 1 is given according to the following conditions. ψ(L(G1)) is an eﬀectively computable semi-linear set S 1 ..4 Pumping Lemmas and GILs This section and the remainder of this chapter will present some exploratory ideas rather than established results. GLOBAL INDEX LANGUAGES PROPERTIES The proof exploits the properties of Theorem 1: every GIL L = φ(Dr ∩ L1) such that L1 is a CFL. in . . removing the pairs of ik and ¯k coordinates. ¯k ∈ N} where the pairs of ik and ¯k i i i i ¯ coordinates correspond to symbols in I and the corresponding element in I. This follows from Theorem 1. Proof Given a GIG G = (N. S. ¯p )|aj . if p = A → α then p1 = A → ¯ iiα [i] It can be seen that L(G1) ⊆ Σ1 ∗ and that for each w that is in the GIL L(G) there is a word w1 ∈ L(G1) such that |i|(w1) = |¯ i|(w1). Computing the intersection of S 1 and S 2 amounts to compute the properties that the control performed by the additional stack provides to GILs.. am . i Corollary 6 The emptiness problem is decidable for GILs. removing the illegal derivations. ip . Σ.. Σ1 .. note that S 2 is a subset of S 1 . For every production p ∈ P where a ∈ T. From S3 we obtain S 0 the semilinear set corresponding to ψ(L(G)). By the Parikh theorem [60] the Parikh map of the CFL L(G1)... and α. if p = A → α then p1 = A → α 2. Dr and L1 are semilinear by the Parikh theorem (L1 and Dr are CFLs). 4. as follows: 1. i ∈ I. ik . ∈ (N ∪ T )∗ create a new production p1 ∈ P 1. if p = A → α then p1 = A → iα i 3.

4.4. PUMPING LEMMAS AND GILS

47

• Regular Languages: Inserting any number of copies of some substring in a string always gives a regular set. • Context Free Languages: every string contains 2 substrings that can be repeated the same number of times as much as you like. • GCFLs, LCFRLs, others and GILs, there is always a number of substrings that can be repeated as many times as you like. Following upon this last observation, the following deﬁnition is given in [35] as a more adequate candidate to express properties of natural language. This is a candidate for a pumping lemma for GILs. Deﬁnition 10 (Finite Pumpability) . Let L be a language. Then L is ﬁnitely pumpable if there is a constant c0 such that for any w ∈ L with |w| > c0 , there are a ﬁnite number k and strings u0 , ..., uk and v 1 , ..., v k such that w = u0 v 1 u1 v 2 u2 ...uk −1 v k uk and for each i, 1 ≤ |v i | < c0 and for any p ≥ 0, u0 v 1 p u1 v 2 p u2 ...uk −1 v k p uk ∈ L. Compare the previous deﬁnition with the following k-pumpability property of LCFRS ([71], and also in [35]) which applies to any level-k language. Deﬁnition 11 (k-pumpability) Let L be a language. Then L is (universally) k − pumpable if there are constants c0 ,k such that for any w ∈ L with |w| > c0 , there are stringsu0 , · · · , uk and v 1 , · · · , v k such that u0 v 1 u1 v 2 u2 · · · uk −1 v k uk , for each i : 1 ≤ |v i | < c0 , and for any u0 v 1 p u1 v 2 p u2 ...uk −1 v k p uk ∈ L. In this case the k value is a constant determined by the k-level of the language.

4.4.1

GILs, ﬁnite turn CFGs and Ultralinear GILs

We showed that there is no boundary in the number of dependencies that can be described by a GIL (see the language Lm in the previous chapter). This fact and the candidate pumping lemma for GILs are related to the following observation. For any CFG that recognizes a number of pairs of dependencies, ( e.g., La = {an bn , cm dm |n ≥ 1, m ≥ 1} has 2 pairs of dependencies or turns [32, 39, 88, 5]) there is a GIG that generates the total number of dependencies included in the

48

CHAPTER 4. GLOBAL INDEX LANGUAGES PROPERTIES

pairs. This is done by “connecting” the CFG pairs of dependencies (turns) with “dependencies” (turns) introduced by the GIG extra stack. This “connection” is encoded in the productions of the corresponding terminals. For example, in La it is possible to “connect” the b’s and c’s, in a turn of the GIG stack: any production that has a b in the right-hand side also adds an index symbol to the GIG stack (our notation for this will be bx ), and every production that has a c in the right-hand side removes the same index symbol from the production stack (we will denote this by c¯ ). x La = {an bn cm dm |n ≥ 1, m ≥ 1} → Lb = {an bx n c¯ m dm |n ≥ 1, m ≥ 1} = {an bn cn dn |n ≥ 1, } x Conjecture 1 For any k-turn CFG, where k > 1 is ﬁnite, there is a GIG that generates a language with (2 · k) dependencies. These observations are also related to the fact that LIGs (and the equivalent automata), can connect one-turn for each stack [85]. Therefore the boundary of the number of dependencies is four.

4.5

Derivation order and Normal Form

We introduced GIGs as a type of control grammar, with leftmost derivation order as one of its characteristics. GIG derivations required leftmost derivation order. We did not address the issue of whether this is a normal form for unconstrained order in the derivation, or if an extension of GIGs with unconstrained order in the derivation would be more powerful than GIGs. In this section we will speculate about the possibility of alternative orders in the derivation, e.g. (rightmost) or free derivation order. The following grammar does not generate any language in either leftmost or rightmost derivation order: Example 14 L(G6 ) = {an bm cn dm }, where G6 = ({S, A, B, C, D}, {a, b, c, d}, {i, j}, S, #, P ) and P = 1. S → A B C D 6. C → cC

¯ i

2. A → aA

i

3. A → a

i ¯ j ¯ j

4. B → bB

j

5. B → b

j

7. C → c

¯ i

8. D → dD

9.D → d

The only derivation ordering is an interleaved one, for instance all the a’s are derived ﬁrst, then all the c’s, followed by the b’s and ﬁnally the d’s. However we know that such a language is a GIL. It is our belief that the following conjecture holds, though it might be diﬃcult to prove. Unrestricted derivation order would introduce only unnecessary additional ambiguity.

4.5. DERIVATION ORDER AND NORMAL FORM

49

Conjecture 2 Unrestricted order of the derivation in a GIG is not more powerful than leftmost derivation order. Following the previous example, the next example might seem impossible with leftmost/rightmost derivation. It looks like an example that might support the possibility that unrestricted order of the derivation might yield more powerful grammars. Example 15 L(G6b ) = {an bm cl dm en f l | n, m, n ≥ 1}, where G6b = ({S, A, B, C, D, E, F }, {a, b, c, d, e, f }, {i, j, k}, S, #, P ) and P is: 1. S → ABCDEF 5. B → b

j k

2. A → aA

i

3. A → a

i

4. B → bB

j

6. C → cC | c 10.E → e

¯ i

7. D → dD

¯ j

8.D → d

¯ j

9.E → eE

¯ i

11. F → f F

¯ k

12. F → f

¯ k

A derivation with unrestricted order would proceed as follows: S ⇒ ABCDEF ⇒ i#aABCDEF ⇒ ji#abCDEF ⇒ kji#abcDEF ⇒ ji#abcDEf ⇒ i#abcdEf ⇒ #abcdef However the following grammar generates the same language in leftmost derivation: Example 16 L(G6c = {an bm cl dm en f l | n, m, n ≥ 1}, where G6b = ({S, A, B, C, D, E, F }, {a, b, c, d, e, f }, {i, j, k}, S, #, P ) and P is: 1. S → A 5. B → bC

j

2. A → aA

i k

3. A → aB

i

4. B → bB

j

6. C → cCf | cM f 10.E → De

¯ i

M → DE

7. D → dD

¯ j

8.D → d

¯ j

9.E → Ee

¯ i

**Similarly the following language which contain unordered crossing dependencies is a GIL: {an bm cl dp bm an dp c l |n, m, l, p ≥ 1}
**

2 2 2 2

The next example also shows that leftmost and rightmost derivation ordering might produce two diﬀerent languages with the same grammar for each ordering, introducing therefore another dimension of ambiguity. Example 17 L(G7 ) = {ww|w ∈ {a, b}∗ } ∪ {wwR |w ∈ {a, b}∗ } derivation respectively) (in leftmost and rightmost

G7 = ({S, S , A, B}, {a, b}, {i, j}, S, #, P ) and P =

50 1. S → aS

i

**CHAPTER 4. GLOBAL INDEX LANGUAGES PROPERTIES 2. S → bS
**

j

3. S → S 8. A → a

¯ i

4. S → S A 9. B → b

¯ j

5. S → S B

6. S → A

7. S → B

**Rightmost derivation produces: #S → i#aS → ji#abS → ji#abS → ji#abS B → ji#abAB → i#abAb → #abab
**

i j ¯ j ¯ i

**Leftmost derivation produces: #S → i#aS → ji#abS → ji#abS → ji#abS A → ji#abBA → i#abbA → #abba
**

i j ¯ j ¯ i

(i.e. iii#. the active nodes are represented by circles and the inactive are represented by squares. As an initial approach.1 If all the possible stack conﬁgurations in a GIG derivation were to be represented. jij#. ij#. It is a device for eﬃciently handling of nondeterminism in stack operations. the number of possible conﬁgurations would grow exponentially with the length of the input. i#. in ﬁgure 5. ijj#.1.Chapter 5 Recognition and Parsing of GILs 5. and [50] for an approach to statistical parsing using graph algorithms. the number of edges connecting a node to others is limited. While the number of possible nodes increases. Each node at length n can have an edge only to a node at length n − 1. The set of nodes that represents the top of the stack will be called active nodes (following Tomita’s terminology). 1 See [51] for a similar approach to Tomita’s. iji#. ji#. 51 . etc. j with maximum length of stack 3. each node of the graph-structured stack will represent a unique index and a unique length of the stack.1 Graph-structured Stacks In this section we introduce the notion of a graph-structured stack [80] to compute the operations corresponding to the index operations in a GIG.) #. j#. For instance. This is a departure from Tomita’s graph-structured stack. jjj#. The one at the right is equivalent to the maximal combination of stack conﬁgurations for indices i. The numbers indicate the length. ii#. The graph-structured stack at the left represents the following possible stack conﬁgurations: iii#. jj#. jii#.

P ) be a CFG. Retrieves all the nodes that are connected by an edge from the currentnode.2 5. pj ].2 i.2 j. pj ] is inserted in the set of states S i if α corresponds to the recognition of the substring aj . S.3 i... currentnode connects only to nodes of current length-1. Algorithm 1 (Earley Recognition for GFGs) .2.3 i.. Create the Sets S i : 1 S0 = [S → • S$. T. Earley’s main data structures are states or “items”. Push(newnode. RECOGNITION AND PARSING OF GILS i. Let G = (N.2 i. 5.0 j. Earley items are inserted in sets of states. Therefore. n ≥ 0.. Therefore the number of edges connecting a node to a preceding one is bound by the size of the indexing vocabulary. Because at this point we use nodes of unambiguous length.oldnode). Earley’s parsing algorithm [27] computes a leftmost derivation using a combination of top-down prediction and bottom-up recognition.1 j.an with n ≥ 0 any integer i such that 0 ≤ i ≤ n is a position in w. given a string w = a1 a2 . 0] 2 For 0 ≤ i ≤ n do: Process each item s ∈ S1i in order performing one of the following: a) Predictor: (top-down prediction closure) .1 GILs Recognition using Earley Algorithm Earley Algorithm.1 $. which are composed of a ”dotted” rule and a starting position: [A → α • β. and ai ∈ T for 1 ≤ i ≤ n. Let w = a1 a2 · · · an be an input string.1 j.52 i. These items are used to represent intermediate steps in the recognition process.1: Two graph-structured stacks The following operations are possible on the graph-structured stack.1 $.3 CHAPTER 5.3 Figure 5.0 j. Creates a newnode if it is not in the graph and creates an edge from newnode to oldnode if necessary. An item [A → α • β. There are as many sets as positions in the input string. Pop(curretnode).2 j.ai .

O. I. S.2.2. #. j] ∈ S i and wi + 1 = a: add [A → αa • β. Initialize the graph-structured stack with the node (#. and O. j] to S i + 1 3 4 If S i+1 is empty. used to record the ordering of the rules aﬀecting the stack. Let w = a1 a2 · · · an be an input string.{#}.2 Therefore Earley items for GILs might be as follows: [∆. i] to S i b) Completer: (bottom-up recognition) If [B → γ • . 0. k] ∈ S j : add [A → αB • β. j] ∈ S i and (A → γ) ∈ P : add [A → • γ.2 First approach to an Earley Algorithm for GILs. n ≥ 0. 0] 2 For 0 ≤ i ≤ n do: Process each item s ∈ S i in order performing one of the following: a) Predictor b) Completer c) Scanner 2 Actually O ≤ 2n. GILS RECOGNITION USING EARLEY ALGORITHM If [B → α • Aβ. If i = n and Sn+1 = {[S → S$ • .{S}. pj ] Algorithm 2 (Earley Recognition for GIGs) . k] to S1i c) Scanner: equivalent to shift in a shift-reduce parser.5. if pop rules “erase” symbols from the stack. 0]} then accept 53 5. Create the Sets S i : 1 S0 = [(#.b}.{P}) with P: S → aS | bS S→S| ¯ i . such that O ≤ n where n is the length of the input. Reject. j] ∈ S i : for [A → α • Bβ. and ai ∈ T for 1 ≤ i ≤ n. Let G = (N. 0). T. S → • S$.{a. If [A → α • aβ. a pointer to an active node in the graph-structured stack. We use the graph-structured stack and we represent the stack nodes as pairs of indices and counters (the purpose of the counters is to keep track of the length of the stack for expository purposes). An example of such case would be the following i grammar: Gd = ({S}. We modify Earley items adding two parameters: ∆. 0).{i}. P ) be a GIG. A → α • Aβ.

This implementation used Shieber’s deduction engine backbone [72]. C 2 = C 1 − 1 if δ = then (δ 1 . O. CHAPTER 5. C 1 ) ) s. C 2 ). C 2 ) and O1 = O2 2.54 3 4 If S i+1 is empty. As we said. C 1 ) = (δ 2 . A → • γ. k] to S i δ 3. i] to S i δ such that: if δ ∈ I then δ 2 = δ. C 1 ) = (δ 2 . Scanner If [∆. The example . scanner. A → α • Bβ. 1. B → α • Aβ. Completer B If [∆1 . O. O2 . k] ∈ S j where δ 2 = i if µ = ¯ i: δ add [∆1 . O. completer used in the For loop in 2 are modiﬁed as follows. to perform the corresponding operations on the graph-structured stack. C 2 ). Predictor If [(δ 1 . C 2 ). C 1 )) s. O1 . O − 1. B → γ • . A → α • aβ. j] ∈ S i where µ = µ δ or µ = [i]: for [(δ 2 . C 2 ) ∈ pop((δ 1 . and have the corresponding initial order in the derivation. O. A → αB • β. A → αa • β. k] ∈ S j where δ 2 = i if µ = [i]: add [∆1 . 0]} then accept The structure of the main loop of Earley’s algorithm remains unchanged except for the require- ment that the initial item at S 0 (line 1) and the accepting item at S n + 1 (line 4) point to the empty stack node (#. 0. (δ 1 . C) such that C ≤ n. O − 1. j] ∈ S i : µ for [(δ 2 . B → γ • . The operations predictor. j] ∈ S i and (A → γ) ∈ P : µ δ 1. C 1 ). 0). j] to S i + 1 µ 3. j] ∈ S i and wi + 1 = a: µ add [∆.t. Reject. C 2 ). O2 = O1 + 1 and push( (δ 2 . O. RECOGNITION AND PARSING OF GILS If i = n and Sn+1 = {[(#. A → αB • β. k] to S i δ The following example shows the trace of the items added to a chart parser implementing the above algorithm. A → α • Bβ. Completer A If [∆1 .t.2 add every [(δ 2 . S → S$ • . C 2 = C 1 + 1 if δ = ¯ and i = δ 1 then O2 = O1 + 1 and i (δ 2 . 0). O. C 2 ) and O1 = O2 ( move) if δ = [δ 1 ] then (δ 1 . we represent nodes in the graph-structured stack with pairs (δ.

Adding to chart: <27798> item(B.[].2.[].1.2.c].[b. StackCounter.c].0) 3 Adding to chart: <23470> item(S.d].il].B.[].S. S).3) 16 Adding to chart: <23852> item(B.[i.1)) 7 Adding to chart: <23598> item(B.[i.[].2.0.3.1.0. GILS RECOGNITION USING EARLEY ALGORITHM 55 shows the recognition of the string aabbccdd with the grammar G2 transformed into a trGIG.[#.[b.0.4.[c].0) Adding to stack: <22605> item((i.ir].[b].5.[#.S.[].3.il]. B}.1.2.e].2.1.[].3.[i.1) 5. Example 18 (Trace of aabbccdd) L(G2 ) = {an bn cn dn |n ≥ 0}.[B.[i.[i.3.c].2) 13.Adding to chart: <23725> item(B.0.[a].2) 11. initialposition.1.1. c.[i.2.1.e].1.[#.(i.il].e].2.[#.0.1.[S].[b]. #.0.d].2)]) 12 Adding to chart: <23725> item(B.4.[b.[B. where P is: S → aSd.[].2) 14.1.[b. Postdotlist. ﬁnalposition) The stack operation is represented as follows: “il” is a push rule of the stack index “i” and “ir” is a pop rule.[i.B.Adding to chart: <23724> item(S.1.[B].S.c].0.[b].1) Adding to stack: <22732> item((i.[a.0.3.(i.0.[].Adding to chart: <27540> item(S.[#.3.3) 18.Adding to chart: <23599> item(S.d].[]. b.ir].1) 9 Adding to chart: <10640> item(S. [Stacksymbol.4.ir].2).ir]. Predotlist.[a].0.0) 2 Adding to chart: <27413> item(S. P.[a.0.0. {i}.[]. StackOperation].c]. {a.3.4) .0)) 4 Adding to chart: <10771> item(S. OrderCounter. Adding to chart: <26897> item(B.1.ir].1) 6.(#.2) Adding to stack: <22863> item((i.2.3) 17.1.[S.2.3.2.[b.3) 15.il].1.c].[i.ir].[B].[#.1.c]. Adding to chart: <23851> item(B.2.d].d].3.2.3).il]. Each item is represented as follows: item(LeftSymbol.ir].2.[]. i S→B B → bBc ¯ i B → bc ¯ i 1 Adding to chart: <10898> item(<start>.1) 8 Adding to chart: <23598> item(B.ir].0. d}.[].[B].[i.2.2) 10.[].2.[i.[#. Adding to chart: <23852> item(B.e].[a.B.c].[S.[b.[i. G2 = ({S.Adding to chart: <27671> item(S.ir].1).

[].[S.0.[c. the completer operation may jump up two steps and complete the “R” introduced after step 3. Consider the language Lww = {ww |w ∈ {a. 6. 0).[d.2.0.0.[#.3.8) 28. S → aS i 2.3.[].5) 21.a]. S → R 4.2.[#. j .il].e].4.0. 3] ∈ S 3 : ¯ j add [(#.[S. Adding to chart: <10982> item(<start>.2.0)aaba[a][b] ¯ i 6 After recognized this substring.a].[B.[]. R → a • .4. 2].[c.0) S ⇒ (i.S.a].2. Adding to chart: <23085> item(B.3) aabR⇒(i.2)aabR[b]⇒ i i j ¯ j ¯ i 1 2 3 4 5 (i.7) 26.[]. The pairs in parenthesis represent the nodes of the graph-structured stack: (#.[#.2. In other words.[#.b].S.6) 25.2) aaS ⇒ (j .5) 22.3.[].B. 2. instead of the one introduced at step 5. S → R • . Adding to chart: <10883> item(S.1. Adding to chart: <27783> item(B.[S].1) aS ⇒ (i.[#. the corresponding productions of the grammar Gww repeated below and the string aaba 1. aaRa ⇒ (?. Adding to chart: <23342> item(S. This bookkeeping is performed here by the order parameter.[d].8) yes The following example shows why some bookkeeping of the order of the derivation is required in Earley items. the following sequences can be generated using Earley’s Algorithm if no ordering constraint is enforced at the Completer Operation.ir].1. R → Rb | b ¯ j The following derivation is not possible (in particular step 4). S → bS j 3. 3] ∈ S 4 and [(j.[B]. b}∗ }.7) 27.2.0.0.e]. 3.[]. In the following example the square brackets represent substrings to be recognized.0.6) 23.[d].il].0.[#.0) S ⇒ (x.3) aabS ⇒(j . S → • R.[b].2) aaR⇒ (i.0. Adding to chart: <10625> item(S.[c].b]. R → Ra | a ¯ i 5.56 19 CHAPTER 5. Adding to chart: <10754> item(S.[c]. 3).ir].1) aS ⇒ (i.ir].3. S → bS • .1)aabR[a][b]⇒(#.1).0.0.0.[#.ir]. Adding to chart: <23215> item(S. the following complete operation would be performed : Given [(#.il].b]. 3.2) aaS ⇒(i. 3] to S 4 Then the following are sequentially added to S 4 : a) [(#.0.0.1.il].1. 0).?) aaba i i ¯ i ¯ j 1 2 3 4 However.[#. RECOGNITION AND PARSING OF GILS Adding to chart: <22956> item(B.[#.a]. 0). (#. Adding to chart: <27654> item(B.6) 24.[d.[#.4) 20.

i 57 c) [(#. c. S → aS • . 0). 4. the complete operation of the previous algorithm does not specify the correct constraints. . The ambiguity arises in the following conﬁguration S ⇒ aabR which would lead to the following items: [(#. 2). R → [(#. there is no way to control the correct derivation given no information on the pop moves has been stored.2. S → aS • . R → bRd ¯ j Grammar Gamb4 produces the following derivation: S ⇒ i#aSd ⇒ ji#aaScd ⇒ ji#aaRcd ⇒ i#aabRdcd ⇒ #aabbcdcd But it does not generate the string aabbcddc nor aabbdccd. #. R}. The exact prediction path that led to the current conﬁguration has to be transversed back in the complete steps. 0). The predict-complete ambiguity In this case. P ) where P is: 1.2. 2]. R → • ∗ • b c. b. 4. It cannot handle cases of what we call predict-complete ambiguity. S → aSd i 2. 2] and b d. push push pop pop Figure 5.2. and at completer cases. complete steps have to check that there is a corresponding pop-move item that has been completed for every item produced at a reduced push move. S → aSc j 3. R → bRc ¯ i 5. 2). 0] i 5. {i. 1. R → bd ¯ j 4. 1]. S. R → bc ¯ i 6. 0. Example 19 Gamb4 = ({S. At this point. completed-stack ambiguity and stack-position ambiguity. In other words. GILS RECOGNITION USING EARLEY ALGORITHM b) [(#.5. at some predict steps. {a.2: Push and Pop moves on the same spine The following grammar produces alternative derivations which illustrate this issue. This is depicted in the following ﬁgure 5. j}.3 Reﬁning the Earley Algorithm for GIGs The Algorithm presented in the preceding section does not specify all the necessary constraints. d}. S → R 7.

A solution is to extend the graph-structured stack to include Ī nodes. Whenever a push item is being completed it has to be matched against the complement node on the stack: the existence of a corresponding Ī node stored in the stack has to be checked whenever an item with a reduced push node is completed. Figure 5.3 depicts a graph that includes Ī nodes (pop nodes); it also shows some additional edges (the dashed edges) that depict the corresponding active nodes in the I domain. We will discuss this issue again in the next subsection.

Figure 5.3: A graph structured stack with i and ī nodes
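As an informal illustration of the matching requirement just described (a sketch under assumed data structures, not the dissertation's implementation), a completed push node can be checked against the pop nodes recorded in the graph as follows:

```python
def has_complement(push_node, nodes):
    """Check that a completed push node (idx, p), p > 0, has a matching
    pop node for the same index (recorded with a negative position)
    among the nodes stored in the graph-structured stack."""
    idx, p = push_node
    return any(i == idx and q < 0 for (i, q) in nodes)
```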

Completed stack ambiguity

The graph-structured stack we presented so far is built at prediction steps and no additional computation is performed on the graph at completion steps. Therefore there is no way to represent the difference between a configuration like the one depicted in figure 5.2 and the one we present in figure 5.4.

Figure 5.4: Push and Pop moves on different spines

Such a difference is relevant for a grammar like Gamb5.

Example 20 Gamb5 = ({S, A, R, B}, {a, b, c, d, e}, {i, j}, S, #, P) where P is:

S → AB
A → R
A → aAb | ab   [i]
R → aRd | ad   [j]
B → cBd | cd   [ī]
B → cBe | ce   [j̄]

Grammar Gamb5 generates the string aadbccde but it does not generate the string aadbcced. If we consider the recognition of the string aaaddbccdde, it will generate the following stack nodes for the derivation A ⇒ aaaddb: a(i, j, 1) a(i, j, 2) a(i, j, 3) d(→ j, 3) d(→ j, 2) b(→ i, 1). The notation (i, j, 1) indicates that two nodes, (i, 1) and (j, 1), were created at position a1, and (→ j, 3) indicates that the node (j, 3) was completed at the position d4. Consequently only the nodes (j, 3), (j, 2) and (i, 1) are completed at the derivation point S ⇒ aaaddbB. The edges depicted in figure 5.5 are all valid at position 3 (i.e. accessible from the item [∆, 3, R → aaa • d, 3] in S3). However, they would still be valid at position 6: [∆, 3, B → α • d, 6] in S6.

Figure 5.5: A predict time graph

Therefore, it is necessary to distinguish between prediction and complete stages. We need to introduce a new edge, a completion edge. The arrows in figure 5.6 are intended to designate this completion edge; they also indicate that there is a different kind of path between nodes. The simple lines indicate that there was a potential edge at the prediction step; only those paths with additional arrows were confirmed at a complete step.

Figure 5.6: Complete steps in the graph

Stack position ambiguity

Two (or more) different stacks might have the same active nodes available at the same position. However, those alternative stacks might correspond to alternative derivations. This issue is illustrated by the two stacks represented in figure 5.7.

Figure 5.7: Two alternative graph-structured stacks

The alternative graph-structured stacks depicted in figure 5.7 might be generated at different positions, e.g. in the alternative spines illustrated in figure 5.8.

Figure 5.8: Ambiguity on different spines

Notice that although these stacks have the same active nodes, they differ in the nodes that can be reached from those active nodes. They might even be continued by the same pop complement part of the graph. It must be noticed that the active nodes from the graph structures in figure 5.8 are indistinguishable from each other: some information must be added to make sure that they point to active nodes in different graphs. The information that will enable us to distinguish the corresponding nodes in the graph is the position at which the push nodes are generated. This information replaces the order parameter that was used in the preliminary version of the algorithm. In this case the graph-structured stack will be more like the one proposed in Tomita's algorithm, in the sense that there is not an unambiguous length of the stack at each node.

Figure 5.9: Two alternative graph-structured stacks distinguished by position
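To make this concrete, the following Python sketch (hypothetical names and data structures, not taken from the implementation) shows how identifying nodes by an (index, position) pair keeps the two stacks of figure 5.9 apart:

```python
from collections import namedtuple

# A stack node is identified by its index symbol and the input position
# at which the push production introduced it.
Node = namedtuple("Node", ["index", "position"])

# Edges of the graph-structured stack: each node points to the nodes
# that may lie below it; nodes from different spines never collide
# because their positions differ.
edges: dict[Node, set[Node]] = {}

def connect(new: Node, old: Node) -> None:
    """Connect a newly predicted push node to a preceding node."""
    edges.setdefault(new, set()).add(old)

# The two spines of figure 5.9 share the index sequence but not the
# positions, so (i, 1) and (i, 5) remain distinct nodes.
bottom = Node("#", 0)
connect(Node("i", 1), bottom)   # first spine
connect(Node("i", 5), bottom)   # second spine, same index, later position
```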

5.2.4 Earley Algorithm for GIGs: final version

Revised Operations on the stack

We will use the following notation. We say ∆+ ∈ I, i.e. a node ∆+ designates a node corresponding to a push move on the stack: it is a pair (δ, p) such that δ is an index and p > 0 (p's are integers). We say ∆− ∈ Ī, i.e. a node ∆− designates a node corresponding to a pop move in the stack: it is a pair (δ, p) such that δ ∈ Ī, p < 0. We will also use the complement notation to designate the corresponding complements; therefore ∆̄+ will designate a pop node which "removes" ∆+. We say ∆i > ∆j if ∆i = (δi, pi), ∆j = (δj, pj), and pi > pj.

The following operations are possible on the graph-structured stack. The modifications are necessary due to the fact that the graph-structured stack contains both nodes in I and Ī.

Push(newnode, oldnode, initialnode): creates newnode if it is not in the graph and creates an edge from newnode to oldnode if necessary. It also creates an i-edge from initialnode to newnode. These edges will be represented by triples: (newnode, oldnode, initialnode). Note that the position is determined by the corresponding position of the terminal introduced by the push production. A node can connect to any node at a preceding position; therefore the number of edges connecting a node to a preceding one is bound by the position of the current node, i.e. for a node at position pi there might be pi − 1 edges to other nodes.

Path(currentnode): after a reduce operation has been done, the path function obtains the nodes in I which can be active; it finds the preceding unreduced push node after one or more reductions have been performed. We will use a slightly more complicated version that takes into account completed nodes and the initial nodes introduced by the Push operation, as is specified in algorithm 5 and depicted in figure 5.10. We will discuss it again below.

PushC(currentnode, precednode): creates a complete edge from currentnode to precednode if necessary. This operation is the equivalent of the previous Pop function with a stack that keeps record of the nodes in Ī: it retrieves all the nodes in Ī that are connected by an edge to the push node that currentnode reduces. These operations enable the computation of the Dyck language over the graph-structured stack.
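The following Python sketch (assumed data structures, following the (index, position) node convention above) illustrates the three edge relations these operations maintain:

```python
# Minimal sketch of the revised stack operations: plain edges, i-edges
# and complete edges are kept as separate relations of node triples/pairs.
plain_edges = set()   # push/pop edges: (newnode, oldnode, initialnode)
i_edges = set()       # i-edges: (initialnode, newnode)
complete_edges = set()  # edges added by PushC: (currentnode, precednode)

def push(newnode, oldnode, initialnode):
    """Create the node's edge to a preceding node and its i-edge."""
    plain_edges.add((newnode, oldnode, initialnode))
    i_edges.add((initialnode, newnode))

def pushC(currentnode, precednode):
    """Record that currentnode completed (reduced) precednode."""
    complete_edges.add((currentnode, precednode))
```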

Algorithm 3. Path(∆−1):
where ∆−1, ∆−3 ∈ Ī and ∆+2 ∈ I

1  valueset = {}
2  For edge(∆−1, ∆+2)
3    add ∆+2 to valueset
4  For edge(∆−1, ∆−3)
5    add Path(∆−3) to valueset
6  return valueset

Path obtains the closest non-reduced nodes in I. Both Path and Path2 perform Dyck language reductions on the graph-structured stack: while Path performs them at predict time, Path2 performs them at complete time. The returned set of nodes represents points in the derivation at which to continue performing reductions at complete time. The portion of the graph that is already reduced corresponds to completed nodes which already matched their respective complements; completed nodes are designated by a particular kind of edge, named edgeC.

Figure 5.10: Paths and reductions in a graph structured stack with i and ī nodes

The following operation operates on the reverse direction of the Path function: Path2 returns a set of nodes in I accessible from another node in Ī.
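A sketch of the Path traversal in Python (assuming an adjacency-map representation of the edges over the Node pairs introduced earlier; this is an illustration, not the dissertation's code):

```python
def path(node, edges):
    """Collect the closest non-reduced push nodes reachable from a pop node.

    `edges` maps a node to the set of nodes below it; push nodes carry
    positive positions, pop nodes negative ones.
    """
    valueset = set()
    for target in edges.get(node, ()):
        if target.position > 0:          # a push node: still active
            valueset.add(target)
        else:                            # a pop node: keep reducing
            valueset |= path(target, edges)
    return valueset
```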

Algorithm 4. Path2(∆−1):
where ∆−1, ∆−3 ∈ Ī and ∆2 ∈ I ∪ {#}

1  valueset = {}
2  For edgeC(∆−1, ∆2)
3    add ∆2 to valueset
4  For edgeC(∆−1, ∆−3)
5    add Path2(∆−3) to valueset
6  return valueset

Figure 5.11: A graph structured stack with i and ī nodes

New Earley items for GIGs

Earley items have to be modified to represent the portion of the non-reduced graph-stack corresponding to a recognized constituent. Two parameters represent the portion of the stack: the current active node ∆Ac and the initial node ∆In:

[∆Ac, ∆In, A → α • aβ, j] ∈ Si

Nodes are represented by pairs of elements in I ∪ Ī ∪ {#}, the vocabulary corresponding to the set of indices (I), their complements (Ī) and the empty stack symbol (#): positive integers, negative integers and zero respectively. This vocabulary will be denoted by DI (i.e. DI = I ∪ Ī ∪ {#}). We say a node ∆+ is in I if it designates a pair (δ, p) such that δ ∈ I, p > 0; we say a node ∆− is in Ī if it designates a pair (δ, p) such that δ ∈ Ī, p < 0. The empty stack is then represented by the node (#, 0).

We need also to represent those constituents that did not generate a node of their own (a push or pop derivation) but inherit the last generated node; we refer to these as transmitted nodes. We use a substitution mapping cl from DI into DI ∪ {c}*, defined as follows: cl(δ) = cδ for any δ ∈ DI. ∆cl represents a node (cl(δ), p), i.e. a pointer (cl(δ), p) to the node (δ, p) in the stack.
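As a rough illustration, the extended items can be represented as tuples carrying the two stack nodes alongside the dotted rule (the field names below are hypothetical):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GIGItem:
    """An Earley item for GIGs: a dotted rule plus two stack nodes.

    active:  the current active node (Delta_Ac); ('#', 0) is the empty stack.
    initial: the initial node (Delta_In); a transmitted node is marked
             by wrapping the index with cl(), e.g. ('cl(i)', p).
    """
    active: tuple    # (index, position)
    initial: tuple   # (index, position)
    rule: str        # e.g. "A -> alpha . a beta"
    dot: int
    start: int       # origin position; the item lives in some set S_j
```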

As we mentioned when we introduced the Push operation, Push(newnode, oldactivenode, initialnode), we will make use of an additional type of edge; we will call these I-edges. This kind of edge relates ∆In nodes: the interpretation of these edges is that there is a path connecting the two ∆In nodes. These edges will be introduced in the predict operation. Notice that an I-edge can be identical to a push or pop edge. I-edges are depicted in figure 5.12.

Figure 5.12: I-edges

Now that we have introduced I-edges we present a second version of the path computation that takes into account I-edges (here ∆R) as boundary points to define a path. It also takes into account the difference between nodes corresponding to completed constituents (lines 7-12) and nodes corresponding to incomplete constituents (lines 1-6).

Algorithm 5. StorePath(∆−1, ∆R):
where ∆−1, ∆−3 ∈ Ī, ∆+2 ∈ I and ∆R, ∆R2 ∈ I ∪ Ī

1   If ∆̄−1 = ∆R:
2     For edge(∆−1, ∆+2, ∆R2) such that ∆+2 ≥ ∆R
3       add Path(∆−1, ∆+2, ∆R) to Pathset
4     For edge(∆−1, ∆−3, ∆R2) and for Path(∆−3, ∆+2, ∆R2)
5       add Path(∆−1, ∆+2, ∆R2) to Pathset
6       add Path(∆−1∆−3, ∆+2, ∆R2) to Pathset
7   else if ∆̄−1 > ∆R:
8     For edgeC(∆−1, ∆+2, ∆R2)
9       add Path(∆−1, ∆+2, ∆R2) to Pathset
10    For edgeC(∆−1, ∆−3) and for Path(∆−3, ∆+2, ∆R2)
11      add Path(∆−1, ∆+2, ∆R2) to Pathset
12      add Path(∆−1∆−3, ∆+2, ∆R2) to Pathset

Algorithm 6 (Earley Recognition for GIGs)

Let G = (N, T, I, #, S, P) be a GIG and let w = a1a2 · · · an be an input string, n ≥ 0, with ai ∈ T for 1 ≤ i ≤ n. Initialize the graph-structured stack with the node (#, 0). Create the sets Si:

1  S0 = [(#, 0), (#, 0), S → • S$, 0]
2  For 0 ≤ i ≤ n do:
   Process each item s ∈ Si in order, performing one of the following:
   a) Predictor  b) Completer  c) Scanner
3  If Si+1 is empty, Reject.
4  If i = n and Sn+1 = {[(#, 0), (#, 0), S → S$ •, 0]} then accept.

The structure of the main loop of Earley's algorithm remains unchanged, except for the requirement that the initial item at S0 (line 1) and the accepting item at Sn+1 (line 4) have an empty stack active and initial node: (#, 0).

Prediction and Scanner Operations

Prediction operations will be divided in different subcases. These subcases are constrained by the type of production applied (push, pop or e-move). In the pop case there are two subcases: (1.2.a) the active node in the triggering item is in I, or (1.2.b) the active node is in Ī. In the second case, the function path is used to retrieve the active nodes in I. This is also required for productions that check the top of the stack, (A → γ) [δ̄], when the triggering item's active node is in Ī.

As we said, we represent nodes in the graph-structured stack with pairs (δ±, p) such that −n ≤ p ≤ n, where δ ∈ DI and the absolute value of p (abs(p)) refers to the position in w = a1...an at which the index δ+ in I was introduced. Ac(δ1, p1) designates an item's active node (∆Ac) and I(δ0, p0) designates an item's initial node. The operations predictor, scanner and completer used in the For loop in step 2 are modified as follows to perform the corresponding operations on the graph-structured stack.
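A skeletal Python rendering of this control structure (assuming the GIGItem type sketched above; predict, complete and scan are placeholders for the operations defined next):

```python
def recognize(grammar, w):
    """Sketch of Algorithm 6: chart sets S[0..n+1] over input w.

    Only the control structure is shown; predict/complete/scan are
    assumed to implement the operations described in the text.
    """
    n = len(w)
    empty = ("#", 0)
    S = [set() for _ in range(n + 2)]
    S[0].add(GIGItem(empty, empty, "S -> . S $", 0, 0))
    for i in range(n + 1):
        agenda = list(S[i])
        while agenda:
            item = agenda.pop()
            for new in predict(item, i, grammar) | complete(item, i, S):
                if new not in S[i]:        # same-set consequences
                    S[i].add(new)
                    agenda.append(new)
            S[i + 1] |= scan(item, i, w)   # scanned items advance one set
        if i < n and not S[i + 1]:
            return False                   # reject on an empty set
    # accept iff the final item has empty active and initial nodes
    return any(it.rule == "S -> S $ ." and it.active == empty
               and it.initial == empty for it in S[n + 1])
```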

1.1 Predictor Push
If [Ac(δ1, p1), I(δ0, p0), B → α • Aβ, j] ∈ Si and (A → γ) ∈ P with push index δ2 ∈ I:
add every [Ac(δ2, p2), I(δ2, p2), A → • γ, i] to Si such that p2 = i + 1,
and push((δ2, p2), (δ1, p1), (δ0, p0)) and StorePath((δ2, p2), (δ0, p0)).

1.2.a Predictor Pop: Active Node is in I
If [Ac(δ1, p1), I(δ0, p0), B → α • Aβ, j] ∈ Si and (A → γ) ∈ P with pop index δ̄2 such that δ̄2 = δ̄1 and p2 = −p1:
add every [Ac(δ2, p2), I(δ2, p2), A → • γ, i] to Si,
and push((δ2, p2), (δ1, p1), (δ0, p0)) and StorePath((δ2, p2), (δ0, p0)).

1.2.b Predictor Pop: Active Node is in Ī
If [Ac(δ1, p1), I(δ0, p0), B → α • Aβ, j] ∈ Si and (A → γ) ∈ P with pop index δ̄2, δ2 ∈ I:
add every [Ac(δ2, p2), I(δ2, p2), A → • γ, i] to Si such that (δ3, p3) ∈ path((δ1, p1), (δr, pr)) and δ̄2 = δ̄3 and p3 = −p2,
and push((δ2, p2), (δ3, p3), (δr, pr)).

1.3 Predictor e-move
If [Ac(δ1, p1), I(δ0, p0), B → α • Aβ, j] ∈ Si and (A → γ) ∈ P with index δ such that δ = ε, or δ = [δ1], or δ = δ̄2 where (δ2, p2) ∈ path(δ1, p1) and δ1 ∈ Ī:
add every [Ac(δ1, p1), I(cl(δ0), p0), A → • γ, i] to Si.

2. Scanner
If [∆Ac, ∆In, A → α • aβ, j] ∈ Si and wi+1 = a:
add [∆Ac, ∆In, A → αa • β, j] to Si+1.

Completer Operations

The different subcases of the Completer operation are motivated by the contents of the active node ∆Ac and the initial node ∆In in the triggering completed item (the item in Si), and also by the contents of the initial node ∆I0 of the combination target item (the item in Sj). All the completer operations check properties of the combination target item in Sj against edges that relate its initial node ∆I0 to the initial node of the completed triggering item ∆In. These edges are of the form (∆In, _, ∆I0), where the omitted element is the active node ∆Ac of the combination target. We use the symbol '_' hereafter to indicate that the information stored in the corresponding field is not required in the computation.

The idea is to be able to compute whether a tree has a valid stack, i.e. a stack that is in the Dyck3 language of the indices and its complements (I ∪ Ī). However, we need to check also whether the stack associated with each subtree is a proper substring of the Dyck language over I ∪ Ī. For a subtree to be in the Dyck language over DI the condition is as follows: the initial node has to contain the node for the first index introduced in the derivation, and the active node has to contain its inverse. This is the equivalent of a production S → aSā for a Dyck language over the alphabet {a, ā}. If a subtree is in the Dyck language over DI, a reduction can be performed.

3.a Completer A. The first completer subcase involves a combination item in Sj that had been generated by an e-production (derived by Predictor 1.3, e-move), and therefore its initial node is a transmitted node, ∆I0 = (cl(δ), p). At this step the transmitted nodes are consumed, and the initial node ∆In is inherited by the result item. The active node ∆Ac is in I or it is the empty stack (i.e. ∆Ac = (δ2, p2) such that δ2 ∈ I ∪ {#} and 0 ≤ p2 ≤ n). No further constraints are checked.

Combination with an e-move item: δ0 ∈ cl(I ∪ Ī)
If [Ac(δ2, p2), I(δ3, p3), B → γ •, j] ∈ Si:
for edge((δ3, p3), _, (δ0, p0)) and for [Ac1(δ0, p0), I(δ0, p0), A → α • Bβ, k] ∈ Sj such that δ0 ∈ cl(I ∪ Ī):
add [Ac(δ2, p2), I(δ3, p3), A → αB • β, k] to Si.

The constraint that there is an I-edge between ∆In and ∆I0 is checked; therefore, if the information is stored in a three-dimension table, it means that only two dimensions are being used.

Subcase 3.b combines a completed triggering item generated by a push production (using the prediction operation 1.1); the initial node ∆In is in I. This is the reverse of predictor 1.1: a completion edge is introduced, recording the fact that this push node ∆In has been completed. The result item preserves the active node from the triggering item and the initial node from the combination target item.

3 We use the standard terminology concerning Dyck languages as found for instance in [39].
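For intuition, here is a small Python sketch of the Dyck-style validity test that the completers implement incrementally (an assumption-level illustration only): a sequence of stack indices forms a valid subtree stack when every pop matches the closest unmatched push.

```python
def is_dyck(indices):
    """Check that a sequence over I and its complements reduces to empty.

    Pushes are plain index names; pops are prefixed with '~'
    (a notational stand-in for the bar).
    """
    stack = []
    for x in indices:
        if x.startswith("~"):
            if not stack or stack[-1] != x[1:]:
                return False          # pop without a matching push
            stack.pop()               # reduction, as in S -> a S a-bar
        else:
            stack.append(x)
    return not stack

assert is_dyck(["i", "j", "~j", "~i"])      # valid subtree stack
assert not is_dyck(["i", "j", "~i", "~j"])  # crossed, not Dyck
```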

Figure 5.13: Completer A

3.b Completer B: Push with Active Node in I ∪ {#}, (p2 > −1)
If [Ac(δ2, p2), I(δ3, p3), B → γ •, j] ∈ Si such that δ3 ∈ I, δ2 ∈ I ∪ {#} and (p2 > −1):
for edge((δ3, p3), _, (δ0, p0)) such that δ0 ∈ I ∪ Ī and for [_, I(δ0, p0), A → α • Bβ, k] ∈ Sj:
add [Ac(δ2, p2), I(δ0, p0), A → αB • β, k] to Si and pushC((δ3, p3), (δ0, p0)).

Figure 5.14: Completer B

Subcase 3.c also combines a triggering item generated by a push production, and the existence of an edge between ∆In and ∆I0 is checked as in the previous cases. In this case, however, the active node ∆Ac is in Ī (i.e. ∆Ac = (δ2, p2) such that δ2 ∈ Ī and −n ≤ p2 < 0). There are two additional constraints. The first is −p2 < p3: in other words, the active node ∆Ac is a closing parenthesis of the Dyck language that has not yet exceeded its opening parenthesis. It should be noted that the string position at which an index in I is introduced constrains the possible orderings of the control language elements. For instance, two nodes ∆In = (i, 2) and ∆Ac = (j̄, −3) do not constitute a valid stack configuration for a subtree, because ∆Ac should have met the corresponding matching Dyck element and a reduction should have been performed. The substring of the Dyck language that corresponds to the completed subtree is indicated by the square brackets; this constraint expresses the fact that (Ac is outside the square brackets:

(Ac .. [(I0 .. (In .. )In .. ] .. )Ac

The second constraint requires that ∆In has to have a matching closing parenthesis. Following the preceding example, )Ī has to be inside the square brackets:

(Ac .. [(I0 .. (In .. )In .. )Ī .. ] .. )Ac

3.c Completer C: Push with Active Node in Ī, (p2 < 0) and −p2 < p3
If [Ac(δ2, p2), I(δ3, p3), B → γ •, j] ∈ Si such that δ3 ∈ I, δ2 ∈ Ī, (p2 < 0) and −p2 < p3:
for edge((δ3, p3), _, (δ0, p0)) and for [_, I(δ0, p0), A → α • Bβ, k] ∈ Sj such that δ0 ∈ I ∪ Ī:
add [Ac(δ2, p2), I(δ0, p0), A → αB • β, k] to Si.

Figure 5.15: Completer C

Subcase 3.d is the last case that combines a completed item generated by a push production. In this case the active node ∆Ac is also in Ī (like in subcase 3.c), but there is a different constraint: ∆Ac = ∆̄In, i.e. ∆Ac is the matching parenthesis of ∆In, so the node ∆̄In has to have been completed.

3.d Completer D: Push with Active Node in Ī, (p2 < 0) and ∆Ac = ∆̄In
If [Ac(δ2, p2), I(δ3, p3), B → γ •, j] ∈ Si such that δ3 ∈ I, δ2 ∈ Ī, −p2 = p3 and δ̄2 = δ3:
for edge((δ3, p3), _, (δ0, p0)) and for [Ac(δ1, p1), I(δ0, p0), A → α • Bβ, k] ∈ Sj such that δ0 ∈ I ∪ Ī:
add [Ac(δ1, p1), I(δ0, p0), A → αB • β, k] to Si.

Figure 5.16: Completer D

The following Completer cases involve completed triggering items generated by pop productions.

Subcase 3.e corresponds to edges (∆In, _, ∆I0) where ∆In = ∆Ac1, a node preceding the active node (∆Ac1) of the target item (δ1, p1). This is the base configuration to perform a Dyck reduction: the whole stack of the completed triggering item can be reduced, and the active node of the result is updated to a node that precedes ∆In in the stack. The node which is the product of the reduction becomes the updated active node of the result item. The alternatives for performing a Dyck reduction on the graph are three:
a) the reduction is performed by the path2 function: choose (δ5, p5) ∈ path2(δ2, p2);
b) the reduction is performed by a single edge from the complement of the active node (∆̄Ac) to a matching node of the initial node in the target combination item ∆I0;
c) the completed item's initial node and active node are equal (∆In = ∆̄Ac).6
Alternative b) is similar to alternative a); the only difference is that ∆Ac was not completed yet at this stage (the path2 function requires completed nodes).

3.e Completer E: Pop items with Push items and graph reduction
If [Ac(δ2, p2), I(δ3, p3), B → γ •, j] ∈ Si such that δ2, δ3 ∈ Ī, p2 < 0, p3 = −p1 and δ̄3 = δ1:
for edge((δ3, p3), _, (δ0, p0)) and for [Ac(δ1, p1), I(δ0, p0), A → α • Bβ, k] ∈ Sj such that δ0 ∈ I and p0 > 0,
compute path2(δ2, p2), and
if −p2 > p0 and (δ5, p5) ∈ path2(δ2, p2) with (δ5, p5) ≥ (δ0, −p0),4 or
else if −p2 > p0 and there is an edge (δ̄2, _, (δ0, −p0)),5 with (δ5, p5) = (δ0, −p0), or
else if p2 = p3 (i.e. ∆In = ∆̄Ac), with (δ5, p5) = (δ0, −p0):
add [Ac(δ5, p5), I(δ0, p0), A → αB • β, k] to Si.
The active node ∆Ac of the result is updated according to the performed reduction; the information that ∆In was completed is stored by the completed edge (∆In, ∆̄Ac). This case is relevant for the language Lgenabc.

Figure 5.17: Completer E, last case

Subcase 3.f considers the combination of a pop item with a push item and checks the properties of the active node (∆Ac) to verify that it satisfies the control properties required in 3.c. Either the active node is a closing parenthesis (∆Ac−) which has not yet exceeded the position of its opening parenthesis, or, alternatively, the active node is positive (∆Ac+), in other words ∆Ac > ∆In. These two alternatives are allowed by p2 > p3. Part of this constraint might be specified in the Completer Case G.

3.f Completer F: Pop items with Push items, p0 > 0
If [Ac(δ2, p2), I(δ3, p3), B → γ •, j] ∈ Si such that δ3 ∈ Ī and p2 > p3:
for edge((δ3, p3), _, (δ0, p0)) such that δ0 ∈ I, p0 > 0, and for [_, I(δ0, p0), A → α • Bβ, k] ∈ Sj:
add [Ac(δ2, p2), I(δ0, p0), A → αB • β, k] to Si and pushC edge ((δ3, p3), (δ0, p0)).

Figure 5.18: Completer F

The following subcases complete the remaining alternatives. Subcase 3.g combines completed items generated by pop productions with target items also generated by pop productions; therefore the target item has to have the same contents in the initial and active node as the completed item.7

3.g Completer G: Pop items combine with Pop items
If [Ac(δ2, p2), I(δ3, p3), B → γ •, j] ∈ Si such that δ3 ∈ Ī:
for [Ac(δ2, p2), I(δ3, p3), A → α • Bβ, k] ∈ Sj:
add [Ac(δ2, p2), I(δ3, p3), A → αB • β, k] to Si and pushC edge ((δ3, p3), _).

Subcase 3.h combines completed items generated by the 1.3 predictor.

3.h Completer H: e-move
If [Ac(δ2, p2), I(cl(δ3), p3), B → γ •, j] ∈ Si:
for [Ac(δ2, p2), I(δ0, p0), A → α • Bβ, k] ∈ Sj:
add [Ac(δ2, p2), I(δ0, p0), A → αB • β, k] to Si.

4 There is a complete path of Ī elements that reaches the node that complements δ0.
5 δ̄2 was introduced after δ0 was reduced.
6 An edge to itself.
7 Here we could state the constraint p2 > p0, or compute a graph reduction if the condition is not met.

The following table summarizes the different Completer cases.

Figure 5.19: Completer G

Summary of Completer Cases

CASE   COMPLETED ITEM   ACTIVE NODE          TARGET ITEM   Specific Actions
A      transmitted      +/−                  transmitted   ∆I0 ← ∆In
B      +                +/−                  +/−           pushC(∆In, −)
C      +                −                    +/−           pushC(∆In, −)
D      +                − (∆A = ∆̄In)         +/−           check for ∆̄In
E      −                −                    +/−           reduce: ∆A ← ∆In − 1; reduce with path2
F      −                +/− (∆A > ∆In)       +             pushC(∆In, ∆I0)
G      −                −                    −             pushC(∆In, −)
H      transmitted      +/−                  +/−

5.3 Complexity Analysis

5.3.1 Overview

The algorithm presented above has one sequential iteration (a for loop); it has a maximum of n iterations, where n is the length of the input. Each item in Si has the form:

[Ac(δ1, p1), I(δ2, p2), A → α • Bβ, j]  where 0 ≤ j, p1, p2 ≤ i.

Thus there might be at most O(i3) items in each set Si. Scanner, Predictor e-move and Predictor Push operations on an item each require constant time. Predictor Pop might require O(i2): for each active node there might be at most i edges, and a set Si may have O(i) active nodes, therefore the pop procedure might take time O(i2) for each set Si.

The Completer operations (3.a to 3.h) can combine an item from Si, [Ac(δ2, p2), I(δ3, p3), B → α • γ, k], with at most all items [_, I0(δ0, p0), A → α • Bβ, j] ∈ Sj, where 0 ≤ j, p0 ≤ j, such that there is an i-edge((δ3, p3), (δ0, p0)). There are at most O(j2) such possible combinations, provided the information in the node Ac1(δ1, p1) is not taken into account in the combination. This is in general the case, except for some restricted cases in which the node ∆Ac1 is bound to the node ∆In and therefore does not increase the combinatorial possibilities; also, only two dimensions are taken into account from the three-dimensional table where i-edges and edges are stored. We consider in detail each subcase of the Completer in the next subsection. Given that there are at most O(i3) items in each set, the complexity of this operation for each state set is O(i5). The required time for each iteration (Si) is thus O(i5); there are n iterations, so the time bound on the entire steps 1-4 of the algorithm is O(n6).

This is a general analysis. It can be observed that this algorithm keeps a CFG (O(n3)) complexity if the values of p1 and p2 are dependent on i, i.e. if for each item there is only one value for p1 and p2 for each i. In such cases the number of items in each set Si is proportional to i, i.e. O(i), and the complexity of the algorithm is O(n3). Languages that have this property will be called constant indexing languages. This is true even for some ambiguous grammars such as the following:

Gamb = ({S, B, C, D, E}, {a, b, c, d}, {i}, S, #, P) where P is:
S → aSd [i]    S → BC    S → DE
B → bB [ī]    B → b [ī]    C → cC    C → c
D → bD    D → b    E → cE [ī]    E → c [ī]
L(Gamb) = {anbncmdn | n, m ≥ 1} ∪ {anbmcndn | n, m ≥ 1}

In this case the ambiguity resides in the CFG backbone, while the indexing is, so to speak, deterministic: once the CF backbone chooses either the "BC" path or the "DE" path of the derivation, only one indexing alternative can apply at each step in the derivation.

A different case of ambiguity is Gamb2 = ({S, B, C}, {a, b, c, d}, {i}, S, #, P) where P is:
1. S → aSd [i]    2. S → BC
3. B → bB [ī]    4. B → b [ī]    5. C → cC [ī]    6. C → c [ī]
7. B → bB    8. B → b    9. C → cC    10. C → c
L(Gamb2) = {anbkcmdn | n, k, m ≥ 1, m + k ≥ n}

In this case (where the ambiguity is between a pop and an epsilon-move) the number of items corresponding to the rules 3, 4, 7 and 8 would be bounded by the square of the number of b's scanned so far. However, the similar grammar Gamb3 = ({S, B, C}, {a, b, c, d}, {i}, S, #, P) would produce items proportional to i in each set Si:
S → aSd [i]    S → BC    B → bB [ī]    B → ε    C → cC [ī]    C → ε
L(Gamb3) = {anbkcmdn | n ≥ 1, k, m ≥ 0, m + k = n}

Therefore, the sources of increase in the time complexity are a) ambiguity between modes of operation in the stack (ε, push or pop) through the same derivation CF path, and b) a different index value used in the same type of stack operation. The expensive one is the first case. Unambiguous indexing should be understood as those grammars that produce for each string in the language a unique indexing derivation. So the following results hold: O(n6) is the worst case; O(n3) holds for grammars with constant or unambiguous indexing; O(n2) for unambiguous context-free backbone grammars with unambiguous indexing; and O(n) for bounded-state8 context-free backbone grammars with unambiguous indexing. The properties of Earley's algorithm for CFGs remain unchanged.

Complexity results concerning formalisms related to natural language (including CFLs) are mostly worst-case results (upper bounds), and they do establish polynomial parsing of these grammars. There are no mathematical average-case results; all average-case results reported are empirical. In practice, most algorithms for CFGs, TAGs and other grammar formalisms run much better than the worst case. However, the real limiting factor in practice is the size of the grammar: there is an important multiplicative factor on the complexity computed in terms of the input size which depends on the size of the grammar (see [6]).

5.3.2 Time and Space Complexity

In this subsection we compute the time and space complexities in terms of the length of the input sentence (n) and the size of the grammar. The size of the grammar, designated by |G|, corresponds to the number of rules and the number of non-terminals. In the analysis we present here, we keep separate the size of the context-free backbone and the indexing mechanism; this will allow us to compare the impact of the indexing mechanism as an independent multiplicative factor.

8 Context-free grammars where the set of items in each state set is bounded by a constant.

Analysis of Scan, Predict, Complete

Scanner
If [∆A, ∆In, A → α • aβ, j] ∈ Si and wi+1 = a: add [∆A, ∆In, A → αa • β, j] to Si+1.
The scanner operation checks the current word (input aj) against the terminal a expected by the scanner, and requires constant time.

Predictor operations have in general the following shape:
If [Ac(δ1, p1), I(δ2, p2), B → α • Aβ, j] ∈ Si and (A → γ) ∈ P [δ]: add every [(δ2, p2), A → • γ, i] to Si.
A grammar rule is added to the state set; there is still the need to check for duplicates of the grammar rules, and the same grammar rule may have different values for δ. The push (1.1) and predictor 1.3 operations take constant time regarding the input size. The same grammar rules may apply at most n2 times (if it is a pop rule): there might be O(|I| · n) possible values for Ac(δ1, p1), and for each value there might be O(|I| · n) path values. The computation of the path in Predictor takes at most O(n3) for each vertex: there are at most O(|I| · n) edges in a path, and we consider edges to two vertices, the active node and the initial node. Thus the complexity of the predictor operation is O(|I|2 · |G| · n4).

Completer Case 1, no recalculation of ∆Ac (no reduction involved)
We consider now the simpler cases of the Completer operation, which have the following shape:
If [∆Ac, ∆In, B → γ •, j] ∈ Si: for every i-edge (∆In, ∆I0) and item [_, ∆I0, A → α • Bβ, k] ∈ Sj: add [∆Ac, ∆I0, A → αB • β, k] to Si.
The complexity is bound by the number of elements in Sj that satisfy the edge condition; it depends on k and p0: O(|I| · |G| · n2). There might be O(|I| · n) possible values for each edge and ∆In. Given there are two possible stack values (∆Ac and ∆In), and k is bounded by n, in each set Si there might be at most O(|I|2 · |G| · n3) items.

Total complexity for Case 1 is then n × O(|I|2 · |G| · n3) × O(|I| · |G| · n2) = O(|I|3 · |G|2 · n6).

The remaining cases of the Completer involve more complex operations because the value of ∆Ac has to be recalculated; these additional operations are performed provided certain constraints are met. We analyze each of them in turn.

Completer Case 2, recalculation of ∆Ac: Completer D and Completer E

3.d Completer D: Push with Active Node in Ī
If [∆Ac, ∆In, B → γ •, j] ∈ Si such that ∆In ∈ I and ∆Ac = ∆̄In:
for edge(∆In, _, ∆I0) and for [∆Ac1, ∆I0, A → α • Bβ, k] ∈ Sj such that ∆I0 ∈ I ∪ Ī and ∆̄In = ∆Ac1:
add [∆Ac1, ∆I0, A → αB • β, k] to Si.
In each set Si there might be at most O(|I| · |G| · n2) items that satisfy the condition ∆Ac = ∆̄In. Given that the node ∆Ac1 has to be considered to compute the result item added to the set Si (the values of ∆Ac, ∆I0 and k give O(n3)), each such item can be combined with at most O(|I|2 · |G| · n3) items in each set Sj. Total complexity is then n × O(|I| · |G| · n2) × O(|I|2 · |G| · n3) = O(|I|3 · |G|2 · n6).

3.e Completer E, Subcase 1: Pop items with Push items that re-compute the value of the active node
If [∆Ac, ∆In, B → γ •, j] ∈ Si such that ∆Ac, ∆In ∈ Ī, pAc, pIn < 0: compute path2(∆Ac);
for edge(∆In, _, ∆I0) and for [∆Ac1, ∆I0, A → α • Bβ, k] ∈ Sj such that the following conditions imposed on the edges are satisfied: if pAc1 > p̄I0 and −pAc > pI0 and ∆I0 ∈ path2(∆Ac), or else if −pAc > pI0 and there is an edge (∆̄Ac, _):
add [∆I0, ∆I0, A → αB • β, k] to Si.
Computing the set path2 for each ∆Ac takes O(|I| · n2) time; checking whether ∆I0 ∈ path2(∆Ac), or whether there is an edge (∆̄Ac, _), is done in constant time. There are O(n) possible values for each ∆Ac, so this bound is O(|I| · i3) for each set Si. There are at most O(|I| · |G| · n2) items in Sj that satisfy the edge condition. Total complexity is then n × O(|I|2 · |G| · n3) × O(|I| · |G| · n2) = O(|I|3 · |G|2 · n6).

3.e Completer E, Subcase 2: Pop items with Push items that re-compute the value of the active node
If [∆Ac, ∆In, B → γ •, j] ∈ Si such that ∆Ac = ∆In ∈ Ī:
for edge(∆In, _, ∆I0) where ∆̄In = ∆Ac1, for items [∆Ac1, ∆I0, A → α • Bβ, k] ∈ Sj and for edge(∆Ac1, _, ∆5):
add [∆5, ∆I0, A → αB • β, k] to Si.
There are at most O(|I| · |G| · n2) items in each set Si that satisfy the condition ∆Ac = ∆In; there are at most O(|I| · |G| · n2) items in each set Sj that satisfy the condition ∆̄In = ∆Ac1; and there are at most O(|I| · n) edges for each ∆Ac1. Total complexity is then n × O(|I| · n) × O(|I| · |G| · n2) × O(|I| · |G| · n2) = O(|I|3 · |G|2 · n6).

Finally, we have not considered the case of path2 with values different than ∆I0; these can be done as a separate filtering on the completed items in Si, within the same time bounds, adding an operation to update the active node by reducing all possible paths.

We have seen then that O(|I|3 · |G|2 · n6) is the asymptotic bound for this algorithm. If we compare the algorithm with LIG algorithms, the only change corresponds to the added size of I. If we compare the influence of the grammar size on the time complexity of the algorithm for GIGs and the one for CFGs, then, if the set of indices is relatively small, there is not a significant change in the influence of the grammar size. The impact of the grammar size might be higher for the LIG algorithms, and the impact of the indexing mechanism smaller. The higher impact of the grammar size would be motivated by the following kind of completer required by LIGs (from [4]), used to parse TAGs, which involves the combination of three constituents:

[A[..η] → Γ1 • B[..γ]Γ2, i, j, |C, p, q, −]    [B → Γ3 •, j, k, |D, q, r, s]    [C → Γ4 •, p, q, −, −, rs]
⊢ [A[..η] → Γ1 B[..γ]Γ2 •, i, k, |D, r, s]

However, the work load can be set either in the indexing mechanism, as is shown in the equivalent completor from [74], where the set of nonterminals is split into top and bottom nodes {t, b}:

Type 4 Completor:

t[..η] → • t[..ηηr], i, j, p, q    t[ηr] → Θ •, j, k, −, −    b[η] → ∆ •, k, l, p, q
⊢ t[..η] → t[..ηηr] •, i, l, p, q

A note on Space Complexity

The space complexity is determined by the storage of Earley items. Given there are four parameters bound by n in each item, its space complexity is O(n4). Storage of the graph-structured stack takes O(n3): storing the combination of i-edges and edges involves three nodes, and the number of possible nodes is bound by n, therefore it can be done in an n × n × n table.

5.4 Proof of the Correctness of the Algorithm

The correctness of Earley's algorithm for CFGs follows from the following two invariants (top-down prediction and bottom-up recognition) respectively:

Proposition 5 An item [A → α • β, i, j] is inserted in the set Sj if and only if the following holds:
1. S ⇒* a1...ai Aγ
2. α ⇒* ai+1...aj

Figure 5.20: Earley CFG invariant

The corresponding invariants of Earley's algorithm for GIGs can be stated in a similar way, taking into account the operations on the stack. This is specified in terms of the Dyck language over I ∪ Ī.

5.4.1 Control Languages

We defined the language of a GIG G, L(G), to be: {w | #S ⇒* #w and w is in T*}. We can obtain an explicit control language9 by modifying the definition of derivation for any GIG G = (N, T, I, S, #, P) as follows. Define C to be the Dyck language over the alphabet I ∪ Ī (the set of stack indices and their complements). Let β and γ be in (N ∪ T)*, δ be in I*, x in I, w be in T* and Xi in (N ∪ T):

1. If A → X1...Xn is a production of type (a.) (i.e. µ = ε or µ = [x], x ∈ I), then:
βAγ, #δ ⇒ βX1...Xnγ, #δ
2. If A → aX1...Xn is a production of type (b.) or push: µ = x, x ∈ I, then:
wAγ, #δ ⇒ waX1...Xnγ, #δx
3. If A → X1...Xn is a production of type (c.) or pop: µ = x̄, x ∈ I, then:
wAγ, #δx ⇒ wX1...Xnγ, #δxx̄
4. If A → X1...Xn is a production of type (c.) or pop: µ = x̄, x ∈ I, y ∈ I and δ'y ∈ C, then:
wAγ, #δxδ'y ⇒ wX1...Xnγ, #δxδ'yx̄

9 See [25] for control languages, pp. 173-185, and [87] for a control language representation for LIGs.

Then, taking into account the operations on the stack, define the language of a GIG G to be the control language L(G, C): {w | S, # ⇒* w, #δ, w is in T* and δ is in C}. It is apparent that both definitions of a GIG language are equivalent.

Assuming such an alternative definition of a GIG language, we address the correctness of the algorithm as follows:

Proposition 6 (Invariants) An item [Ac(y, p1), I(z, p0), A → α • β, i] is a valid item in the set Sj if and only if the following holds:
1. S, # ⇒* a1...ai Aγ, #ν for some γ ∈ V* and some ν ∈ (I ∪ Ī)*

2. (A → αβ) ∈ P [δ]
3. α, #νδ ⇒* ai+1...aj, #νz′µy′π
such that:
a. z̄, ȳ ∈ I ∪ Ī iff z = z′ and y = y′;
b. δ = z iff δ ∈ I ∪ Ī, else δ = ε;
c. p0 ≤ i + 1 and p1 ≤ j;
d. δ ∈ cl(δ′), δ′ ∈ DI iff z′µy′π = ε;
and ν, µπ ∈ (I ∪ Ī)*, π ∈ C and νz′µy′πρ ∈ C for some ρ ∈ (I ∪ Ī)*.

The remaining conditions specify the properties of the string νz′µy′π relative to the control language C. Condition a specifies the properties of the information of the graph-structured stack encoded in the item. Condition b specifies the properties for the different types of productions that generated the current item. Condition c specifies the constraints related to the position correlation with the strings in the generated language. Condition d specifies that the item has a transmitted node from the graph if the empty string was generated in the control language. Figure 5.21 depicts the properties of the invariant.

Figure 5.21: GIG invariant

We will use Parsing Schemata to prove this proposition.

5.4.2 Parsing Schemata

We use Parsing Schemata, a framework that allows high-level specification and analysis of parsing algorithms [75, 76]. This framework enables the analysis of relations between different parsing algorithms by means of the relations between their respective parsing schemata. It will also help to understand the implementation of the algorithm we detail in the appendix. Parsing schemata are closely related to the deductive parsing approach (e.g. [72]). We present parsing schemata rather informally here.

A parsing system P for a grammar G and a string a1...an is composed of three parameters: a) I, a set of items that represent partial specifications of parse results; b) H, a finite set of items (hypotheses) that encodes the sentence to be parsed; and c) D, a set of deduction steps that allow to derive new items from already known items. Deduction steps are of the form η1, ..., ηk ⊢ ξ, which means that if all the required antecedents ηi are present, then the consequent ξ is generated by the parser. A set of final items, F ∈ I, represents the recognition of a sentence. A Parsing Schema extends parsing systems for arbitrary grammars and strings; Parsing Schemata are generalizations (or uninstantiations) of a parsing system P.

As an example we introduce the Earley Schema for CFGs. Earley items, as we already saw, are stored in sets according to each position in the string, say j; the corresponding set of items in the schema is encoded by an additional parameter in the items.

Parsing Schema 1 (Earley Schema for CFGs) This parsing schema is defined by a parsing system PEarley for any CFG G and any a1...an ∈ Σ* by:

I Earley = {[A → α • β, i, j]} such that A → αβ ∈ P and 0 ≤ i ≤ j
H Earley = {[a, i − 1, i] | a = ai ∧ 1 ≤ i ≤ n}
DInit = { ⊢ [S → • S, 0, 0]}
DScan = {[A → α • aβ, i, j], [a, j, j + 1] ⊢ [A → αa • β, i, j + 1]}
DPred = {[A → α • Bβ, i, j] ⊢ [B → • γ, j, j]}
DComp = {[A → α • Bβ, i, k], [B → γ •, k, j] ⊢ [A → αB • β, i, j]}
DEarley = DInit ∪ DScan ∪ DPred ∪ DComp
F Earley = {[S → S •, 0, n]}

The set of valid items is defined by the invariant of the algorithm:
V(PEarley) = {[A → α • β, i, j] | S ⇒* a1...ai Aγ ∧ α ⇒* ai+1...aj}
C Earley = {[S → S •, 0, n] | S ⇒* a1...an}
A final item F(PEarley) ∈ I is correct if there is a parse tree for any string a1...an that conforms to the item in question.
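Deduction steps of this kind can be executed by a generic agenda-driven engine. A minimal Python sketch of such a chart-closure loop (the names are ours, not from the parsing-schemata literature):

```python
def deduce(hypotheses, steps):
    """Close a set of items under deduction steps.

    `steps` is a list of functions mapping (known item, chart) to the
    set of consequents it licenses; this is a generic closure loop,
    not tied to any particular schema.
    """
    chart = set(hypotheses)
    agenda = list(hypotheses)
    while agenda:
        item = agenda.pop()
        for step in steps:
            for consequent in step(item, chart):
                if consequent not in chart:
                    chart.add(consequent)
                    agenda.append(consequent)
    return chart
```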

Definition 12 A parsing system (P) is sound if F(P) ∩ V(P) ⊆ C(P) (all valid final items are correct). A parsing system (P) is complete if F(P) ∩ V(P) ⊇ C(P) (all correct final items are valid). A parsing system is correct if F(P) ∩ V(P) = C(P).

5.4.3 Parsing Schemata for GIGs

The parsing schema for GIGs presented here is an extension of the parsing schema for CFGs given in the previous subsection; it corresponds to the algorithm 6 presented in section 5.2. It can be observed that the major differences compared to the CFGs schema are the addition of different types of Prediction and Completer deduction steps. However, there is an additional type of deduction step performed in this parsing schema: the deduction steps that compute the control language, DPath a, DPath b, DPath2 a and DPath2 b. Therefore IC EGIG, a different set of items, will represent edges and paths in the graph, and this corresponds to the control language. Items in IC EGIG will participate in some restricted deduction steps and will be used to perform the corresponding deduction steps.

Prediction Push and Pop deduction steps (DPrPush, DPrPop1 and DPrPop2) introduce prediction items ηp in IC EarleyGIG and can be decomposed as:

ηij ⊢ ηjj ∧ ηp   (shortform for  ηij ⊢ ηjj  and  ηij ⊢ ηp)

Completers DCompB, DCompF and DCompG introduce completion items ηp in IC EarleyGIG and can be decomposed as:

ηik, ηkj ⊢ ηij ∧ ηp

This kind of deduction step can be understood as shortform for two deduction steps.

Parsing Schema 2 (Earley Schema for GIGs) This parsing schema is defined by a parsing system PEGIG for any GIG G = (N, T, I, #, S, P) and any a1...an ∈ Σ* by:

I EGIG = {[∆Ac, ∆In, A → α • β, i, j]} such that A → αβ ∈ P, 0 ≤ i, j ≤ n, and ∆Ac, ∆In ∈ {(δ, p) | δ ∈ DI, −j ≤ p ≤ j, or δ = cl(δ′)}

IC EGIG = {[P, ∆1, ∆2]} such that ∆i ∈ {(δ, p) | δ ∈ DI ∧ 0 ≤ p ≤ j} and P ∈ {pop, push, popC, pushC, path, path2}

H EGIG = {[a, i − 1, i] | a = ai ∧ 1 ≤ i ≤ n}

DInit = { ⊢ [(#, 0), (#, 0), S → • S$, 0, 0]}

DScan = {[∆Ac, ∆In, A → α • aβ, i, j], [a, j, j + 1] ⊢ [∆Ac, ∆In, A → αa • β, i, j + 1]}

DPrE = {[∆Ac, ∆In, A → α • Bβ, i, j] ⊢ [∆Ac, cl(∆In), B → • γ, j, j] | ν = ε or ν = [δAc]}

DPrPush = {[∆Ac1, ∆I0, A → α • Bβ, i, j] ⊢ [(δ2, j + 1), (δ2, j + 1), B → • γ, j, j] ∧ [push, (δ2, j + 1), ∆Ac1, ∆I0] | δ2 ∈ I}

DPrPop1 = {[(δ1, p1), ∆I0, A → α • Bβ, i, j] ⊢ [(δ̄2, −p1), (δ̄2, −p1), B → • γ, j, j] ∧ [pop, (δ̄2, −p1), (δ1, p1), ∆I0] | δ2 = δ1 ∈ I}

DPath a = {[pop, ∆1−, ∆1+, ∆R] ⊢ [path, ∆1−, ∆1+, ∆R] | (∆R = ∆1+ ∧ P = push) ∨ (∆R < ∆1+ ∧ P = pushC)}

DPath b = {[pop, ∆1−, ∆1+, ∆R1], [path, ∆1+, ∆3+, ∆R2] ⊢ [path, ∆1−, ∆3+, ∆R3] | (∆R1 = ∆1+ ∧ P = push) ∨ (∆R1 < ∆1+ ∧ P = pushC), ∆R3 = ∆R2}

DPath c = {[pop, ∆1−, ∆2−, ∆R1], [path, ∆2−, ∆3+, ∆R2] ⊢ [path, ∆1−, ∆3+, ∆R3] | (∆R1 = ∆1+ ∧ P = push) ∨ (∆R1 < ∆1+ ∧ P = pushC), ∆R3 = ∆R2}

DPath d = {[pop, ∆1−, ∆2−, ∆R1], [path, ∆2−, ∆3−, ∆R2], [path, ∆3−, ∆4+, ∆R3] ⊢ [path, ∆1−, ∆4+, ∆R4] | same conditions as DPath a, ∆R4 = ∆R3}

DPrPop2 = {[∆Ac−, ∆I0, A → α • Bβ, i, j], [path, ∆Ac−, ∆2+, ∆R] ⊢ [(δ̄2, −p2), (δ̄2, −p2), B → • γ, j, j] ∧ [pop, (δ̄2, −p2), ∆2+, ∆R] | ∆2+ = (δ2, p2)}

DCompA = {[∆Ac1, cl(∆I0), A → α • Bβ, i, k], [∆Ac, ∆In, B → γ •, k, j] ⊢ [∆Ac, ∆In, A → αB • β, i, j]}

DCompB = {[∆Ac1, ∆I0, A → α • Bβ, i, k], [∆Ac+, ∆In+, B → γ •, k, j] ⊢ [∆Ac+, ∆I0, A → αB • β, i, j] ∧ [pushC, ∆In+, ∆I0]}

DCompC = {[∆Ac1, ∆I0, A → α • Bβ, i, k], [∆Ac−, ∆In+, B → γ •, k, j], [pushC, ∆̄In, _] ⊢ [∆Ac−, ∆I0, A → αB • β, i, j]}

DCompD = {[∆Ac1, ∆I0, A → α • Bβ, i, k], [∆̄In−, ∆In+, B → γ •, k, j] ⊢ [∆Ac1, ∆I0, A → αB • β, i, j]}

DCompE = {[∆Ac1, ∆I0, A → α • Bβ, i, k], [∆Ac−, ∆In−, B → γ •, k, j], [path2, ∆Ac−, ∆Ac2−] ⊢ [∆Ac2−, ∆I0, A → αB • β, i, j]}

DCompF = {[∆Ac1, ∆I0+, A → α • Bβ, i, k], [∆Ac, ∆In−, B → γ •, k, j] ⊢ [∆Ac, ∆I0+, A → αB • β, i, j] ∧ [pushC, ∆In−, ∆I0+] | ∆Ac > ∆In}

DCompG = {[∆Ac, ∆In, A → α • Bβ, i, k], [∆Ac, ∆In, B → γ •, k, j] ⊢ [∆Ac, ∆In, A → αB • β, i, j]}

DCompH = {[∆Ac1, ∆I0, A → α • Bβ, i, k], [∆Ac1, cl(∆I0), B → γ •, k, j] ⊢ [∆Ac1, ∆I0, A → αB • β, i, j]}

DPath2 a = {[pushC, ∆1−, ∆2−] ⊢ [path2, ∆1−, ∆2−]}

DPath2 b = {[pushC, ∆1−, ∆2−], [path2, ∆2−, ∆3−] ⊢ [path2, ∆1−, ∆3−]}

A correctness proof method for parsing schemata

This method is defined in [76]:
• Define the set of viable items W ⊆ I as the set of items such that the invariant property holds.
• Proof of soundness for all the deduction steps: V(P) ⊆ W. In other words, show that the deduction steps preserve the invariant property: it has to be shown that for all η1 · · · ηk ⊢ ξ ∈ D with ηi ∈ H ∪ W, 1 ≤ i ≤ k, it holds that ξ ∈ W.
• Proof of completeness: W ⊆ V(P), i.e. show that all the viable items (all possible derivations) are deduced by the algorithm. Assume that any item η in W with derivation length less than m is obtained (if d(η) < m then η ∈ V(P)); then prove that all ξ with d(ξ) = m are obtained.

This proof uses a so-called deduction length function (dlf), defined according to the properties of the parsing algorithm and used in the induction proof.

Definition 13 (derivation length function)10 Let (P) be a parsing schema and W ⊆ I a set of items. A function d : H ∪ W → N is a dlf if
(i) d(h) = 0 for h ∈ H, and
(ii) for each ξ ∈ W there is some η1, · · ·, ηk ⊢ ξ ∈ D such that {η1, · · ·, ηk} ⊆ W and d(ηi) < d(ξ) for 1 ≤ i ≤ k.

Because we introduced another class of items, the items corresponding to the control language (IC EGIG), we need also an invariant for the computation of the control properties.

Proposition 7 (Invariant of the Path function) An item [path, (x, p2), (y, p1), (z, p0)] is valid for a string a1 · · · aj if and only if the following holds:
1. S, # ⇒* a1...aj γ, #νx for some γ ∈ NV*, some ν ∈ (I ∪ Ī)* and x ∈ I;
2. p2 > p1 ≥ p0;
3. νx = µyπ such that y ∈ I and π ∈ C*.

10 From [76].

Figure 5.22: Path invariant

Lemma 7 (GIG items soundness is preserved under Scan, Predict, Complete) Let G = (N, T, I, #, P, S) be a GIG and let w = a1 · · · an, n ≥ 0, where each ak ∈ T for 1 ≤ k ≤ n. Let 0 ≤ i, j ≤ n, and let correct mean that proposition 6 (the invariant property) holds. Then:
• If ηi is a correct item for grammar G and input w and ηi ⊢ ηj, then ηj is a correct item.
• If ηi, ..., ηk are correct items for grammar G and input w and ηi, ..., ηk ⊢ ηj, then ηj is a correct item.

This can be rephrased as follows in terms of the algorithm presented in section 5.2. The correctness of the push items is straightforward; regarding the pop items, it is derived from the path function.

Proof

• Case 1. Scanner. An item ηj: [Ac(y, p1), I(z, p0), A → αa • β, i] is in Sj if and only if ηj−1: [Ac(y, p1), I(z, p0), A → α • aβ, i] is in Sj−1 and ηj−1 ⊢scan ηj. Since ηj−1 is correct:
a. S, # ⇒* a1...ai Aγ, #ν for some γ ∈ V* and some ν ∈ (I ∪ Ī)*,
b. (A → αaβ) ∈ P [δ], and
c. α, #νδ ⇒* ai+1...aj−1, #νzµyπ.
Now αa, #νδ ⇒* ai+1...aj, #νzµyπ. This, together with a, b and c, allows us to conclude that ηj is a correct item in Sj, and we conclude the proof for Case 1.

• Case 2. Predictors. An item ηjj: [Ac(u, p3), I(x, p2), B → • ξ, j] is in Sj if and only if ηij: [Ac(y, p1), I(z, p0), A → α • Bβ, i] is in Sj and (B → ξ) ∈ P [δ]. Since ηij is correct:
a. S, # ⇒* a1 · · · ai ai+1 · · · aj Bβγ, #νzµyπ, and
b. (B → ξ) ∈ P [δ].
The following deduction steps apply according to the value of δ:

i. δ = ε: u = y, p3 = p1, x = cl(z), p2 = p0; ηij ⊢predict ηjj.

ii. δ = x ∈ I, for a push production (B → aξ) ∈ P [x]: u = x, p2 = p3 = j + 1;
ηij ⊢predict ηjj ∧ ηpush, where ηpush = [push, (x, j + 1), (y, p1), (z, p0)], and
#νzµyπ ⇒x #νzµyπx.

iii. δ = ȳ, a pop production with the active node (y, p1) in I: u = ȳ, p3 = −p1;
ηij ⊢predict ηjj ∧ ηpop, where ηpop = [pop, (ȳ, −p1), (y, p1), (z, p0)], and
#νzµyπ ⇒ȳ #νzµyπȳ. It should be noted that ηpop is obtained in this derivation.

iv. δ = x̄, with the active node (y′, p1) in Ī: then u = x̄, p3 < 0, and there must be a node (x, abs(p3)) such that (x, abs(p3)) ∈ path(y′, p1), x ∈ I and x̄ = δ. In this case #νzµyπ = #ν′x π′ such that π′ ∈ C, and the same deductions as in the previous cases are triggered through the path items:
ηpop, ηpush ⊢ ηpath (by DPath a), or ηpop, ηpush, ηpath1 ⊢ ηpath2 (by DPath b),
where ηpop = [pop, (ȳ, −p1), (y′, p1), (z, p0)], ηpush = [push, (y′, p1), (x, abs(p3)), (z′, p0)] and ηpath = [path, (ȳ, −p1), (x, abs(p3)), (z′, p0)] such that x ∈ I and z′ is either z or z″. Since ηpath = [path, ȳ, x, z′] is correct, ηpath ⊢predict ηjj ∧ ηpop.

• Case 3. Completer. An item ηij: [Ac(y′, p1), I(z, p0), A → αB • β, i] is in Sj if and only if an item ηik: [Ac(y, p1), I(z, p0), A → α • Bβ, i] is in Sk, an item ηkj: [Ac1(u, p2), I1(x, p3), B → ξ •, k] is in Sj, and an item ηp: [p, (x, abs(p3)), ...], where p is either push or pop (this item corresponds to the edges that relate the nodes in ηik and ηkj). Given that ηik and ηkj are valid items we obtain:
a. S, # ⇒* a1 · · · ai Aγ, #ν, and

b. α, #νδ ⇒* ai+1 · · · ak B, #νz′µ′uπ′ for some (A → αBβ) ∈ P [δ], and
c. ξ, #ν′δ′ ⇒* ak+1 · · · aj, #ν′x′µ″yπ″ for some (B → ξ) ∈ P [δ′].
Now compose some of the pieces:
d. S, # ⇒* a1 · · · ai Aγ, #ν and α, #νδ ⇒* ai+1 · · · ak B, #νz′µ′uπ′ from ηik,
e. B, #νz′µ′uπ′ ⇒ ξ, #νz′µ′uπ′δ′ ⇒* ak+1 · · · aj, #νz′µ′uπ′x′µ″yπ″ from (B → ξ) ∈ P [δ′], and therefore
f. αB, #νδ ⇒* ai+1 · · · ak ak+1 · · · aj, #νz′µ′uπ′x′µ″yπ″.
This derivation together with a. proves that ηj: [Ac(y′, p1), I(z, p0), A → αB • β, i] is a correct item in Sj, provided the control properties hold. The crucial step is checking the consistency of #νz′µ′uπ′x′µ″yπ″ in f.: both substrings are substrings of the control language, but their concatenation might not be. The individual properties of z′µ′u and x′µ″y are checked using the information stored at Ac1(u, p2), I1(x, p3) in ηkj and Ac(y, p1), I(z, p0) in ηik. Those conditions are checked as follows, according to the corresponding deduction step:

A. ηik, ηkj, ηp ⊢Complete ηij (ηik is a transmitted item) if and only if z ∈ cl(DI): the initial node is a transmitted node, therefore the result subtree contains #ν′x′µ″yπ″, which is in the control language.

B. ηik, ηkj, ηp ⊢Complete ηij ∧ ηpushC (ηkj has positive initial and active nodes) if and only if y, x ∈ I and p2 ≥ p3 > 0, where ηpushC = [pushC, (x, p3), (z, p0)].

C. ηik, ηkj, ηp ⊢Complete ηij (ηkj initial node positive, active node negative) if and only if x ∈ I, y ∈ Ī, p3 > 0, p2 < 0 and abs(p2) < p3. Result: νz′µ′uπ′(x′µ″y)π″.

D. ηik, ηkj, ηp ⊢Complete ηij (the active node of the completed item matches its initial node) if and only if y = x̄, p2 = −p3, p3 > 0: a reduction is performed, #νz′µ′uπ′(x′µ″x̄′)π″ = #νz′µ′uπ′.

E. ηik, ηkj, ηp, ηpath2 ⊢Complete ηij (x̄ reduces u) if and only if x = ū, y = x̄, p3 = −p1, with ηpath2 = [path2, (x̄, p3), ...]: the result string #νz′µ′(uπ′ū)µ″yπ″ is equivalent to #νz′µ′π‴µ″yπ″; ηpath2 guarantees that the result is in the control language.

F. ηik, ηkj, ηp ⊢Complete ηij (x′µ″yπ″ is in C, the current active node is negative) if and only if x ∈ Ī, p3 < 0, p0 < 0 and abs(p2) ≤ abs(p0). Result: νz′µ′uπ′x′)µ″yπ″.

G. ηik, ηkj, ηp ⊢Complete ηij if and only if the target item has the same active and initial nodes as the completed item, p3 < p2. Result: #νzµuπ.

H. ηik, ηkj, ηp ⊢Complete ηij if and only if x ∈ cl(DI): the initial node of the completed item is a transmitted node.

The following table 5.1 summarizes the conditions specified by the Completer operations.

Table 5.1: Conditions on the Completer deductions

CASE   COMPLETED ITEM (I1(x,p3) / Ac(y,p1))   TARGET ITEM (Ac1(u,p2) / I(z,p0))   Result
A      +/−  /  +/−                             transmitted                         x′µ″yπ″
B      +    /  +                               +/−                                 #νz′µ′uπ′(x′µ″(yπ″
C      +    /  −                               +/−                                 #νz′µ′uπ′(x′µ″y)π″
D      +    /  −  (y = x̄)                      +/−                                 reduce: #νz′µ′uπ′
E      −    /  −  (x = ū)                      +/−                                 reduce: #νz′µ′yπ″
F      −    /  +/−                             +  (p0 > 0)                         #νz′µ′uπ′x′)µ″yπ″, pushC(∆In, −)
G      −    /  −                               −  (same nodes)                     #νzµuπ
H      transmitted                             +/−                                 +: u

Lemma 8 (Second direction, Completeness) For each ξ ∈ W there is some η1 · · · ηk ⊢ ξ ∈ D such that {η1 · · · ηk} ∈ W and d(ηi) < d(ξ) for 1 ≤ i ≤ k.

This lemma is equivalent to the following claim in terms of the algorithm presented in section 5.2 and Proposition 6; it states that all the derivations in the grammar will have the corresponding item in the set of states. If
1. S, # ⇒* a1 · · · ai Aγ, #ν,
2. (A → αβ) ∈ P [δ], and
3. α, #νδ ⇒* ai+1 · · · aj, #νzν′yν″, where δ = z̄ if δ ∈ I ∪ Ī, else δ = ε,
such that #νzν′yν″ satisfies the conditions imposed in Proposition 6, then an item ξ = [Ac(y, p1), I(z, p0), A → α • β, i] is inserted in the set Sj.

p1 ). Therefore the derivation length function is the same as deﬁned by [76] as follows: d([Ac(y. p0 ). The conception of the derivation length function is similar to the kind of induction used by [39] on the order of the items to be introduced in the set S i . (A → αβ) ∈ P δ ∗ 3. p1 ). α. #ν zν uν xµyπ with minimal µ + ρ ¯ then if δ = x ∈ I ∪ I η = [Ac(u. #ν ∧ δ. d(η) = d(ξ) − 1 and ζ. # ⇒ δAγ. S. p1 ). #ν zν uν B. j]) = min{π + 2λ + 2µ + j | S.94 CHAPTER 5. p2 ). #ν zν uν δ ⇒ aj +1 · · · ak . α. #ν . p0 ). η ξ. 12 Notice that induction on the strict length of the derivation would not work. i] is in S j −1 and from ζ = [a. p0 ). i] Let 1. I(z. p0 ). j − 1. . #ν ⇒ ai+1 · · · aj . because of the Completer operation. #ν µ λ π 12 ∧ } ⇒ ai+1 · · · aj . I(z. I(z. such that the invariant holds for those items previously introduced in the item set. i] in S k Let S. p0 ). #νδ ⇒ ai+1 · · · aj . I(z. #νzν yν ∗ ¯ } if δ ∈ I ∪ I else δ = Then η = [Ac(y. i] is in S j . # ⇒ a1 · · · ai Aγ . A → α • β. A → αB • β. #ν 2. A → α • aβ. #ν ⇒ a1 · · · ai . i. • Completers: ξ = [Ac(y. A → αa • β. #ν α. ν zν yν • Scanner: ξ = [Ac(y. p1 ). A → α • Bβ. I(z. RECOGNITION AND PARSING OF GILS The proof by induction on the derivation length function d applied to the derivations speciﬁed in 1. #ν zν uν δ δ ρ µ ∗ and γ. and 3. # ⇒ a1 · · · ai Aγ . The modiﬁcation of the Earley algorithm for GIGs does not modify the treewalk that the algorithm performs. #ν zν uν ⇒ γ. j] ∈ HGIG d(ζ) = 0. The derivation length function is based on the treewalk that the Earley algorithm performs.

ζ if δ = ξ 95 and xµyπ = ζ = [Ac(u. I(z. #ν ⇒ a1 · · · ai . I(x. p3 ). ζ ξ • Predict: ξ = [Ac(y. p3 ). η. ζ ξ if δ = and xµyπ = η = [p. (u. λ. p0 )] such that p = push or p = pop. p3 ).5. Then η = [Ac(y. #ν and A. (x. #ν . (z. ζ = [Ac(y. p1 ). ν zν yν µ } with π + 2λ + 2µ minimal. p0 ). p0 )] d(η) = d(ξ) − 1 = d(η p ) . j] is in S k and d(η) < d(ζ) = d(ξ) − 1. p1 ). B → γ • . I(cl(z). PROOF OF THE CORRECTNESS OF THE ALGORITHM η = [p. j] in S j Let S. p1 ). I(x. p1 ). η . p1 ). δ. j] η η ξ if δ = ξ ∧ ηp and η p = [p. B → γ • . p0 )] such that p = push or p = pop. (x. (z. π. #ν δ δ λ π ∗ α. p3 ). (y.4. η. #ν and let δ. µ such that S. A → α • Bβ. I(z. j] is in S k and d(η) = d(η ) − 1 < d(ζ) = d(ξ) − 1. #ν ⇒ αBβ. p2 ). (u. p1 ). η . (δ. γ . #ν δ ⇒ ai+1 · · · aj . p2 ). η. j] is in S k and d(η) < d(η ) < d(ζ) = d(ξ) − 1. (z. p0 ). p3 ). # ⇒ a1 · · · aj Bγ . ζ = [Ac(y. η . B → • γ. p0 ). # ⇒ δAγ . B → γ • .

called the trigger item. Those new potential items are triggered by items added to the chart. RECOGNITION AND PARSING OF GILS Therefore from Lemma 7 ( V(P) ⊆ W). (b) Add the trigger item to the chart. agenda-based inference engine described in [72]. the prototype is not particularly eﬃcient. 2. The general form of an agenda-driven. . When an item is removed from the agenda and added to the chart. Initialize the chart to the empty set of items and the agenda to the axioms of the deduction system. However. the inference engine provides the control and data types. we have proved the correctness of the Parsing Schema V(P) = W.2 was implemented in Sicstus Prolog on top of a general-purpose. The Prolog database is used as a chart which plays the same role as the state sets in Earley’s algorithm. 5. Encodings of the parsing schema as inference rules are interpreted by the inference engine. Items should be added to the chart as they are proved. its consequences are computed and themselves added to the agenda for later consideration.96 CHAPTER 5. However. Repeat the following steps until the agenda is exhausted: (a) Select an item from the agenda. as a meta-interpreter. Any new potential items are added to an agenda of items for temporarily recording items whose consequences under the inference rules have not been generated yet. Given that the parsing schema abstracts from implementation details of an algorithm. each new item may itself generate new consequences. The chart holds unique items in order to avoid applying a rule of inference to items to which the rule of inference has already applied before. and remove it. the parsing algorithm described in section 5. the relation between the abstract inference rules described in the previous section as parsing schema and the implementation we used is extremely transparent. chart-based deduction procedure is as follows (from [72]): 1. if necessary. Because the inference rules are stated explicitly.5 Implementation In order to perform some tests and as a proof of concept. and Lemma 8 W ⊆ V(P). This is performed using the subsumption capabilities of prolog: those items not already subsumed by a previously generated item are asserted to the database (the chart).

(#. IMPLEMENTATION 97 (c) If the trigger item was added to the chart. {an bn cn dn |n ≥ 1}.5.5. 0]} and the encoding of the input string. S → • S$. This grammar has a high degree of ambiguity. b}+ } • Grammar Gamb4 which generates aabbcdcd but does not generate aabbcddc nor aabbdccd. The goal corresponds to the ﬁnal item in the Parsing Schema: F EGIG = {[(#. otherwise it is not. Tests on the following languages were performed: • {an bn cn |n ≥ 1}. {an bn cm dm en f n g m hm in j n |n ≥ 1} • {an bn wwcn dn |n ≥ 1. b}+ } {an (bn cn )+ |n ≥ 1}. • {an bn cm dm en f n g m hm |n ≥ 1}. • The mix language {w|w ∈ {a. 0. If a goal item is in the chart. w ∈ {a. {an bn cn de n |n ≥ 1}. 3. The axioms correspond to the initial item in the Parsing Schema: DInit = { [(#. 0). • Grammar Gamb5b which generates aacdbbcd and aadcbbdc but does not generate aacdbbdd nor aacdbbdc (a variation of Gamb5 presented above). In this implementation the control language was not represented by separate items but as side conditions on the inference rules. 0). generate all items that are new immediate consequences of the trigger item together with all items in the chart. b. (#. the goal is proved (and the string recognized). c}∗ and |a|w = |b|w = |c|w ≥ 1} Critically the most demanding was the mix language due to the grammar we chose: Gmix presented in chapter 2. which where performed by procedures which implemented the path function over the graph representation. 0). The number of items generated for the Mix and the copy languages were as follows: . 0. {ww|w ∈ {a. 0). and add these generated items to the agenda. n]}. S → S • . b}∗ } • {w(cw)+ |w ∈ {a.

[41].500 ≈ 2. It is also more complex because GIL derivation requires leftmost derivation for the stack operations. RECOGNITION AND PARSING OF GILS n4 1296 10.000 ≈ 9. The notion of valid items for GILs is more complex because a GIG LR-item has to encode the possible conﬁgurations of the auxiliary stack. The sets that deﬁne the states in GIG LR parsing are composed by a pair of sets (IT. We say a GIG item index pair [ δ.6..000 ≈6. Therefore the set of items has to encode the corresponding top-down prediction that aﬀects the stack of indices.1 LR items and viable preﬁxes We assume the reader has knowledge of LR parsing techniques (cf.000 20. [2]).98 Length 6 9/10 12 15/16 18 CHAPTER 5.500 ≈2.976 Mix L. Both [75] and [3] describe transformations of Earley Parsing Schemata to LR Parsing Schemata. The similarities of Earley Parsing and LR parsing are made explicit in those transformations..000 ≈ 16. ≈ 100 ≈ 400 ≈ 700 ≈ 1. if multiple values are allowed in the parsing table (cf. We will refer to the connections between Earley parsing and LR parsing to clarify this presentation.800 5.000 Copy L.100 MCopy L.ai Aβ ⇒ δγ j a1 .6 LR parsing for GILs An LR parser for a CFL is essentially a compiler that converts an LR CFG into a DPDA automaton (cf.000 ≈ 28. ≈ 300 ≈ 1. 5.625 104. IN ). [80]). An item A → β 1 • β 2 is valid for a viable preﬁx αβ 1 if there is a derivation S ⇒ αAw ⇒ αβ 1 β 2 w. In the GIG case. ≈ 500 ≈ 2.aj β 2 β.736 50. the technique we present here converts an LR GIG into a deterministic LR-2PDA according to the deﬁnition presented in Chapter 2. We will do this in such a way that the set of items contain information not only of what possible items belong to the set but also of the possible indices that might be associated to the set of parsing conﬁgurations that the items describe. This approach can be extended to a Generalized LR parsing approach.. We use this notion of viable preﬁxes to describe the computation of the rm rm ∗ main stack in a LR-2PDA.. The set of preﬁxes of right sentential forms that can appear on the stack of a shift-reduce parser are called viable preﬁxes in context free LR-parsing. ∗ ∗ rm rm ∗ .100 ≈5. [2]). A → β 1 • β 2 ] where δ is in I ∪ {#} is valid for a viable GIG µ preﬁx αβ 1 if there is a derivation using the GIG context free backbone: S ⇒ αAw⇒ αβ 1 β 2 w and there is a GIG derivation S ⇒ γ i a1 .

Example 21 L(Gwcw ) = {wcw |w ∈ {a. S → cR 6. S. R → Rb ¯ j The set of states for the grammar Gwcw should encode the possible conﬁgurations of the stack of indices as follows: . R → Ra ¯ i 5. b}∗ }. For CFG parsing (cf. R → [#] 4.5. the goto function and the parsing table. 2. j}.[3]). LR PARSING FOR GILS 99 5. If A → α • Bβ is in closure(IT ) and (i) B → γ is a production where δ is in I ∪ { }. P ) and P is: 1. then closure(IT ) is the set of items constructed from IT by the two rules: 1. ) |B → γ is a production } (a PDA transition) ¯ ι ¯ ι It is apparent that the two items should not be conﬂated because there is a change in the stack conﬁguration. S → aS i 2. {a.The transitions of this NFA are of the following form: δ(A → α • Bβ. #. [41]) the Closure operation is equivalent to obtaining the set of states of a NFA that recognizes the viable preﬁxes. However in Earley’s algorithm this computation is done in run-time while in LR parsing it is precompiled in the LR states. ) = {B → • γ |B → γ is a production } (a NFA transition) In the GIG case the transitions should be as follows. The following example uses a grammar for the language Lwcw to depict the construction of the set of items. S → bS j 3. They are also determined by the Goto operation. Apply this rule until no more new items can be added to the closure of IT . ι) = {(B → • γ. For the GIG algorithm the set of items and possible index values cannot be determined only by the Closure operation.6. R}. ¯ Note that the two conditions of the Closure operation rule out the possible index operations δ and [δ] when the right-hand side of the production starts with a non-terminal. Such productions are considered in case 2 of the Goto function. if the index operation is for instance ¯: ι δ(A → α • Bβ. or (ii) µ δ B → aγ then add the item B → • γ (case 1) or the item B → • aγ (case 2) to IT if it is not δ δ δ already there. b}. The Closure operation is equivalent to the Predict function in Earley parsing (cf. Gwcw = ({S. Initially.6. If IT is a set of items for a grammar G.2 The Closure Operation. . {i. every item in IT is added to closure(IT ).

The GIG function Goto has three parameters a) IT is a set of items b) X is a grammar symbol and c) IN is a set of pairs (i. X). if µ µ I is the set of items that are valid for some viable preﬁx γ. ιj ).3 The Goto Operation. We will say an index i is in IN if (i. s) is in IN . The set I6 requires that the stack be empty. We will describe the Goto function adding the possibility of computing the possible stack values of the PDA for each state. the Goto function in the GIG case is a PDA and the set of viable preﬁxes with associated indices is a context-free language. then goto(I. s) where i is an index symbol and s is a state deﬁned by a set of items closure.23: Set of States for Gwcw In the same way that the dot implies that some input symbols have been consumed.6. The CF Goto function is a DFA transition function where IT is in the state set and X is in the vocabulary. Intuitively. #} {i. j. IN ) is deﬁned to be the pair (IT 2 . if I is the set of items that are valid for some viable preﬁx-index pairs (γ. 5. is the set of items that are valid of the viable preﬁx-index pairs (γX.ι). #} {i. In the GIG case. then goto(I.100 I0 : # S → •S S → • aS i CHAPTER 5. RECOGNITION AND PARSING OF GILS I1 : i S → a•S i i I2 : j S → b•S j i I3 : I4 : I5 : I6 : I 10 : {i. X) is the set of items that are valid for the viable preﬁx γX.ιj ) in other words. X. the new sets I4 and I5 (as compared to context free sets) imply that an index i or j has been consumed from the stack. X. is deﬁned to be the closure of the set of all items [A → αX • β] such that [A → α • Xβ] is in IT . ι) = (I j . where IT is a set of items and X is a grammar symbol. j.IN 2 ) where either 1 or 2 applies: . The operation goto(IT. #} # # S → c•R R → • Ra R → • Rb ¯ j ¯ i S → • aS S → • bS j S → • aS S → • bS j S → • bS j S → • cR I7 : I 11 : # # S → aS • j S → • cR I8 : I 12 : # # S → bS • j S → • cR I9 : I 13 : # # S → cR • j R→ • [#] R → R•a ¯ i R → R• b ¯ j R → Ra • ¯ i R → Rb • ¯ j Figure 5. j. The CF function goto(IT.

IT i ) is in IN 2 . input (or ) and auxiliary stack. Case C: If α is not X is a terminal a then IN = IN 2 (i. . IT i ) is in IN 2 for every (δ.g. LR PARSING FOR GILS 1. [A → αa • β] is in IT 2 ) µ When the second parameter of the Goto function is 2. The indices associated in each state are those indices that are represented also in ﬁgure 5. [2]). IN ) is deﬁned to be: β] such that [A → α • Bβ] is in IT and δ is in IN and IN 2 is deﬁned such that µ predecessor(δ. Case B: If X is a non-terminal B then IN 3 is in IN 2 such that [B → γ • ] is in (IT 3 . .: i indicates a push. IT i ) in IN is in IN 2 ¯ If µ is δ and δ is in IN then predecessor(δ.6. [#] indicates a transition that requires an empty stack and indicates that i the content of the stack is ignored). ¯ indicates a pop. The ﬁrst element of the pair is the symbol X and the second indicates the operation on the PDA stack (e. If µ is then IN is in IN 2 If µ is [δ] and δ is in IN then every (δ. IT i ) in IN is in IN 2 The Goto function for the grammar Gwcw is depicted in ﬁgure 5. (e. state. the auxiliary stack has the conﬁguration corresponding to the completed item B.e. The construction of an LR parsing table is shown in the following algorithm. IT 2 is the closure of the set of all items [A → αX • β] such that [A → α • Xβ] is in IT µ µ 101 and IN 2 is deﬁned to be the set of indices with an associated state such that: Case A: α is and X is a terminal a (i.24 with a PDA. the possible values are: a) pairs of shift action and operations on the auxiliary stack: push. The action table has three parameters. [A → a • β] is in IT 2 ): µ If µ is δ in I then (δ. IN 3 ).23. IT i ) in IN [B → [δ] • β] such that [A → α • Bβ] is in IT and δ is in IN and IN 2 is deﬁned such that every µ (δ.g. IT 2 ) is in IN 2 and IN is in predecessor(δ.e. The construction of the collection of sets of items for a GIG grammar is made following the algorithm used for context free grammars. The only changes are those made to the Closure and Goto operations as described above. µ In other words. IT 2 is the closure of the set of all items [B → ¯ δ • then goto(IT. IT 2 ).5.

# I3 (ε . j ) (ε .#) # I 11 to I 6 i. ε) # I 12 ( ε.j. b) reduce action (in this case no operation on the auxiliary stack is performed).24: Transition diagram for the PDA recognizing the set of valid preﬁxes.102 (a.#) # I 10 to I 6 (a. δ. . RECOGNITION AND PARSING OF GILS (a. B) denote non-terminals lower case letters (a) terminals. i) (c. ε ) (S.j ) ( ε. ε) to I 1 to I 3 # I8 (ε .j. i ) i. ε ) i. j ) (ε .j. The Goto table has the usual properties.j) (b. annotated with indices for the grammar Gwcw pop or none. The notation used is a follows: capital letters (A.i) # I0 i I1 CHAPTER 5.ε ) # I 13 (ε .[#]) # I6 (R. and no operation on the auxiliary stack. ε ) (b. β is a possible empty sequence of non terminals and ¯ terminals. ε ) (ε . α is a non empty sequence of non terminals and terminals.j) to I 2 to I 3 # I7 (c.# I4 (R. [δ].i) (b. ε ) (S.j) j I2 (a . δ is an index and µ is either δ. i ) (c. ε ) # I9 Figure 5. i ) (R.# I5 (b. ε ) ( ε .

If [A → • aβ] is in I i and goto(I i .). Construct C = {I 0 . “pop δ”). IN j ). If [S → S • ] is in I i then set action[i. then µ set action[i. 2. the initial state of the parser is the one constructed from the set of items The output transition table for the grammar Gwcw is shown in ﬁgure 5. IN j ). IN i ) = (I j . State i is constructed from I i . h. ] to (“shift j”. a. If [A → • aβ] is in I i and goto(I i . IN j ). If [A → • Bβ] is in I i and goto(I i . e. adding the additional stack to compute the corresponding actions depicted in the table. inj ] = j. δ] to (“shift j”. If [A → β • ] is in I i . a. IN i ) = (I j . IN i ) = (I j . . and δ is in IN i . g. #] to “accept” If any conﬂicting actions are generated by the above rules. The parsing actions for state i are determined as follows: a. If [A → • Bβ] is in I i and goto(I i . The goto transitions for state i are constructed for all non-terminals A using the rule: If goto(I i . then ¯ δ set action[i. a. a. a.5. a. LR PARSING FOR GILS 103 Algorithm 7 (Constructing an SLR parsing table) Input.2. I n }. Here a must be a terminal and α must be non-empty. If [A → • aβ] is in I i and goto(I i . and δ is in IN i . $. then set action[i.6. “push δ”). i. . Output. IN j ). c. -). . a. and δ is in IN i and IN j [δ] then set action[i. but it is GLR(1). δ] to (shift j. 4. IN j ). ] to (“shift j”. then δ set action[i. then ¯ δ set action[i. the collection of sets of LR(0) items for G . then goto[i. then set action[i. . Here a must be a terminal. -). where A may not be S . An augmented GIG grammar G’. The SLR parsing table functions action and goto 1. f. δ] to (“shift j”. A. IN j ). Here B must be a non-terminal. IN i ) = (I j . . . for every index in IN j .. If [A → • aβ] is in I i and goto(I i . . and δ is in IN j .. IN i ) = (I j . IN i ) = (I j . and δ is in IN i and IN j then [δ] set action[i. d. I 1 . IN i ) = (I j .-). a. ] to “reduce A → α” for all a in µ µ FOLLOW(A). we say the grammars is not SLR(1). ] to (“shift j”. a. Here a must be a terminal. 3. “pop δ”). This table is used with an obvious extension of the context free algorithm. IN j ). IN j ). Here B must be a non-terminal. a. δ] to (shift j.. A. b. a. IN i ) = (I j . If [A → α • aβ] is in I i and goto(I i .

) ) ) ) ( .#) 14 7 8 9 10 11 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 (a. pop) (s5. ) (s2. ) (s13. ) r6 r1 r2 r3 (s12. We follow the notation from [2]. ) (s6. p i) (s1. pop) (s5. The only diﬀerence is that it takes into account the operations on the auxiliary stack. ) (s1. p i) (s1. . pop) (s4. pop) r6 r6 (s5. pop) (s4. (s3. p j) (s2.j) ( . p j) (c. ) r4 r5 acc Table 5.104 State CHAPTER 5. (s3. pop) (s6. RECOGNITION AND PARSING OF GILS Action ( .#) (s4. p i) (b. ) (s6.i) Goto (S.#) (R.2: Action/Goto Table for Gwcw The parsing algorithm for GIGs is essentially the same algorithm for CFGs. (s3. p j) (s2.#) ($.

pi )then begin push a then s on top of the Stack1 push i on top of Stack2 . . Initiate Stack2 with #. (i ∪ )] = shift (s . if action [s. Output. else if action [s. advance ip to the next input symbol. pop) then begin push a then s on top of the Stack1 pop i from Stack2 else if action [s. pop) then begin push a then s on top of the Stack1 pop i from Stack2 advance ip to the next input symbol. else if action [s. a. Input. a parse for w otherwise an error indication.6. let s be the state now on top of the stack. ) then begin push a then s on top of the Stack1 advance ip to the next input symbol. i] = shift (s . a. repeat forever begin let s be the state on top of the Stack1 and i the symbol on top of the Stack2 a the symbol pointed by ip. i] = reduce A → β then begin pop 2 ∗ |β| symbols from Stack1.5. a. Initiate Stack1 with s0 . . If w is in L(G). push A then goto[s . i] on top of Stack1 output the production A → β µ 105 else if action [s. (i ∪ )] = shift (s . a. An input string w and an LR parsing action − goto table for a grammar G. else if action [s. a. i] = accept then return else error end The following table shows the moves of the LR parser on the input string abcab. Set ip to point to the ﬁrst symbol of w$. LR PARSING FOR GILS Algorithm 8 (LR parsing for GIGs) . A. i] = shift (s .

RECOGNITION AND PARSING OF GILS 1 2 3 4 5 6 7 8 9 9 11 12 13 14 15 16 Stack 1 0 0a1 0a1b2 0a1b2c3 0a1b2c35 0a1b2c354 0a1b2c3546 0a1b2c354R10 0a1b2c354R10a 0a1b2c35R11 0a1b2c35R11b 0a1b2c3R9 0a1b2S8 0a1S7 0S14 - Stack2 # #i #ij #ij #i #i # # # # # # # # # # Remaining input abcab$ bcab$ cab$ ab$ ab$ ab$ ab$ ab$ b$ b$ $ $ $ $ $ $ Comments Initial ID Shift and push Shift and push Shift Shift and Pop Shift and Pop Shift Reduce by 6 Shift Reduce by 4 Shift Reduce by 5 Reduce by 3 Reduce by 2 Reduce by 1 Accept Table 5.3: Trace of the LR-parser on the input abcab and grammar Gwcw .106 CHAPTER 5.

the formalisms diﬀer in the kid of trees they generate. They also oﬀer additional descriptive power in terms of the structural descriptions they can generate for the same set of string languages. 107 . because they can produce dependent paths1 . In this section.Chapter 6 Applicability of GIGs 6. His discussion is addressed not in terms of weak generative capacity but in terms of strong-generative capacity. we review some of the abstract conﬁgurations that are argued for in [29]. 1 For the notion of dependent paths see for instance [82] or [44]. However those dependent paths are not obtained by encoding the dependency in the path itself. at the same computational cost in terms of asymptotic complexity. Though this mechanism looks similar to the control device in Linear Indexed Grammars .1 Linear Indexed Grammars (LIGs) and Global Index Grammars (GIGs) Gazdar [29] introduces Linear Indexed Grammars and discusses their applicability to Natural Language problems. The goal of this section is to show how the properties of GILs are related to the peculiarities of the control device that regulates the derivation. GIGs oﬀer additional descriptive power as compared to LIGs (and weakly equivalent formalisms) regarding the canonical NL problems mentioned above. Similar approaches are also presented in [82] and [44] (see [56] concerning weak and strong generative capacity).

ai ∈ T ∪ { }. as is depicted in the ﬁgure 6. G = (N. and every possible spine is independent.. It is easy to see that no control substring obtained in a derivation subtree is necessarily in the control language. In other words.3.. We gave an alternative deﬁnition of the language of a GIG in chapter 5 to be the control language: L(G.] B ∗ ∗ A B pop push C A [] B [.an | < S. T.1. B [. the control of the derivation can be distributed over diﬀerent paths. but every substring encoded in a spine has to belong to the control set language. Gazdar suggests that the conﬁguration in ﬁgure 6.1: LIGs: multiple spines (left) and GIGs: leftmost derivation 6. w1 ..g. CFGs cannot generate the structural descriptions depicted in ﬁgures 6. and C is a control set deﬁned by a CFG (a Dyck language): {a1 . (we follow Gazdar’s notation: the leftmost element within the brackets corresponds to the top of the stack). the language {wwR |w ∈ Σ∗ }) only of the nested (center embedded) sort.2 and 6.2 The palindrome language CFGs generate structural descriptions for palindrome languages (e. C).. P ). wn ∈ C}.. # ⇒ w.. where G is a labeled grammar (cf. L. Those control strings are encoded in the derivation of each spine as depicted in ﬁgure 6. < an .1 at the left. however those paths are connected transversely by the leftmost derivation order. In other words.2 would be necessary to represent Scandinavian . Those control words describe the properties of a path (a spine) in the tree generated by the grammar G. C): {w | S.1 Control of the derivation in LIGs and GIGs Every LIL can be characterized by a language L(G. >⇒< a1 ...108 CHAPTER 6. w1 > .1.1 at the right. wn >. the control strings wi are not necessarily connected to each other.. [87]). S. #δ and w is in T ∗ . . δ is in C } ¯ C is deﬁned to be the Dyck language over the alphabet I ∪ I (the set of stack indices and their complements). APPLICABILITY OF GIGS 6.i] C [] push pop Figure 6.

In such structure the dependencies are introduced in the leftmost derivation order..a] [a] [. [.6.b..e. c}. ¯ Gadj = ({AP. {a. #.b.a] [c.b. b. N P. the structure of ﬁgure 6.b.b.c.c..a] [d.4 at the left..] Figure 6.] a b c d d c b a Figure 6. in productions that are not in Greibach Normal Form).a] [a] [. A}.3) cannot be generated by a GIG because it would require push productions with a non terminal in the ﬁrst position of the right-hand side. A. The English adjective constructions that Gazdar uses to motivate the LIG derivation are generated by the following GIG grammar.3: Another non context-free structural description for the language wwR However GIGs generate a similar structural description as depicted in ﬁgure 6.b.a] [c. But the exact mirror image of that structure. {i. Such an structure can be obtained using a GIG (and of course an LIG).a] [b.] [a] a b c d d c b a [b. AP. The corresponding structural description is shown in ﬁgure 6.2: A non context-free structural description for the language wwR unbounded dependencies.] [a] [b. (i.1.4: Example 22 (Comparative Construction) .e. LINEAR INDEXED GRAMMARS (LIGS) AND GLOBAL INDEX GRAMMARS (GIGS) 109 [. j}.a] [d. (i.a] [c.a] [c. P ) where P is: AP → AP N P A→a i ¯ j ¯ AP → A A→c k ¯ ¯ A→AA NP → a NP ¯ i A→b j NP → b NP NP → c NP ¯ k .a] [b.

shown in ﬁgure 6..] A NP b [c] [a] [b.b.d] [c..a] [b.c] NP [d. APPLICABILITY OF GIGS [.a] c A A [a. However..b.3.. as in the LIG case.110 CHAPTER 6.] NP AP [.c.5: Two LIG structural descriptions for the copy language Gazdar further argues that the following tree structure. 6.c. the introduction of indices is dependent on the presence of lexical information and its transmission is not carried through a top-down spine.5).c] [c.a] [a] [.c.c] a b A c [c] b Figure 6.a] [a.] a A a b [.b.a] [d.1..b.3 The Copy Language Gazdar presents the following two possible LIG structural descriptions for the copy language (depicted in ﬁgure 6.a] [c...] [b.c.4: GIG structural descriptions for the language wwR It should be noted that the operations on indices are reversed as compared to the LIG case shown in ﬁgure 6. a similar one can be obtained using the same strategy as we used with the comparative construction.d] a [b.d] b [d] c d Figure 6.a] c d d a [b.b.] AP AP AP [. On the other hand.] [a] a b c d [c..b.d] [d] [.] NP NP c NP [.6 at the left..] a b c d a b c d [b.c. The structural description at the left can be generated by a GIG. but not the one at the right.a] [a] [b.a] [c.d] [c.] A [b.b. could . [..] [.a] [.b. The arrows show the leftmost derivation order that is required by the operations on the stack.

c. but they can generate the one presented in the ﬁgure 6.b.a] ε a d [b.b.a] c [a] b [] a [d.7.a] [c.a] [a] a b a c [b.6 at the left either.a] [d.a] d c b [c.b.a] [a] a [] b Figure 6.b.a] [c.a] [a.a] [c.b.a] [a] [a] [] [b.a] [b.a] a b c d [c.c.a] [a] [] [d.a] b c d ε a Figure 6. [d] [c.6 (right).c.b.c.c.b.c.c.d] [b.b.a] [b.d] [b. GIGs can also produce similar structural descriptions for the language of multiple copies (the language {ww+ | w ∈ Σ∗ } as shown in ﬁgure 6.a] [d.b.c.b.a] [b.a] [d. LINEAR INDEXED GRAMMARS (LIGS) AND GLOBAL INDEX GRAMMARS (GIGS) 111 be more appropriate for some Natural Language phenomenon that might be modeled with a copy language.a] c d d [c.c.6: An IG and GIG structural descriptions of the copy language GIGs cannot produce the structural description of ﬁgure 6.c.d] [d] [d.b.d] [a.d] [c.b. corresponding to the grammar shown in example 10. Such structure cannot be generated by a LIG.b.a] [c.b. [] [a] [b.7: A GIG structural description for the multiple copy language .1.6.b. where the arrows depict the leftmost derivation order.d] a b c d [c.c.c.c.c.b.b.d] [a. but can be generated by an IG.d] b c [d] d [b.b.a] [b.a] [a] [d.a] [c.b.d] [] [d.

. F... The corresponding structure is depicted in the ﬁgure 6. Gsum = ({S.9 (right).i] d [i] b c d a a [i] b b [.. The relevant structures that can be produced by a LIG are depicted in ﬁgures 6.11. as we show in ﬁgures 6. R.] b Figure 6. [.i] b [. f }. GIGs can generate the same structures as in 6.4 Multiple dependencies Gazdar does not discuss of the applicability of multiple dependency structures in [29].] d [.112 CHAPTER 6. #.1.] c c Figure 6.] [i] [.] [i..] c [i] [i. Example 23 (Dependent branches) . L(Gsum ) = { an bm cm dl el f n | n = m + l ≥ 1}.9 (left).10 (at the right).8 and 6. The following language cannot be generated by LIGs.8: LIG and GIG structural descriptions of 4 and 3 dependencies [.] [. P ) where P is: . {i}.i] [i] c c d [... b. d.] [i. L}. S.i] [i... APPLICABILITY OF GIGS 6. e.10 and 6.] c c b [i.9: A LIG and a GIG structural descriptions of 3 dependencies GIGs can also produce other structures that cannot be produced by a LIG. {a.] [i] a a [i. It is mentioned in [82] in relation to the deﬁnition of composition in [79] Categorial Grammars.8 and the somehow equivalent in 6.. which permits composition of functions with unbounded number of arguments and generates tree sets with dependent paths.i] d [i] b b [. c.i] [i] d [.

11: GIG structural descriptions of 3 dependencies 6..6. The ﬁrst example is generated by the grammars Gcr presented in Chapter 3 and repeated here...i] [i.i] [i. LINEAR INDEXED GRAMMARS (LIGS) AND GLOBAL INDEX GRAMMARS (GIGS) 113 S → aSf |R i R→F L|F |L F → b F c |b c ¯ i L → d L e |d e ¯ i The derivation of aabcdef f : #S ⇒ i#aSf ⇒ ii#aaSf f ⇒ ii#aaRf f ⇒ ii#aaF Lf f ⇒ i#aabcLf f ⇒ #aabcdef f [.1.] c [i.5 Crossing dependencies Crossing dependencies in LIGs and TAGs have the same structural description as the copy language.. .] [.] d d ε b Figure 6.i] f [..i] a [i] [i] [i.. GIGs can produce crossing dependencies with some alternative structures as depicted in the following structures..i] a b b c [i] [.i] a c c [.1.] [i..] d [i] a c d b d e Figure 6.] [i] a f [i.i] b [i] [i] c [.] c [i] a [i..10: GIG structural descriptions of 4 dependencies [.i] [i] a [i.] [i] a [.i] [i.] [.

h}. B → b B | b x D→dD|d x ¯ [] [ii] [] a a a b [ii] b b [iii] [i] c [] c c d d d [i] [] Additional number of crossing dependencies can be obtained (pairs or triples). c. B. C}. {a. B. f. A. S. m ≥ 1}. {x. L(Gcr8 ) = {an bn cm dm en f n g m hm | n. {a. c.114 Example 24 (Crossing Dependencies) . where Gcr = ({S. d}. b. S. CHAPTER 6. g. A. b. P ) and P is: S→AD A→aAc|aBc 3. {x}. #. The following kind of dependencies can be obtained: Example 25 (Crossing Dependencies 2) . y}. C}. m ≥ 1}. d. P ) and P is: S→AL A→aAf |aBf x B → b B e| b Ce x ¯ C → c C d| c d y L → g L h| g h y ¯ [i] a a b [ii] [i] [] e f f g g [k] h [] h b c [k] e d [kk] c d . APPLICABILITY OF GIGS L(Gcr ) = {an bm cn dm | n. e. #. where Gcr8 = ({S.

c2 . m ≥ 1} 1 2 1 1 2 2 2 1 The language Lcrm is generated by the following GIG: Example 26 (Crossing Dependencies 3) . L2 }.i] γ . we could proceed as follows. B. d1 . B ∈ N. Lcrm = {an am bn cn bm cm dm dn |n. related to the operation of generalized composition in CCGs.6. c1 . {x. a2 . LIGs and Greibach Normal Form The deﬁnition of GIGs requires push productions in Greibach Normal Form. A. D. y}... If an equivalence between LIGs and LIGs with push productions in GNF could be obtained. S.2 Set Inclusion between GIGs and LIGs The last section showed similarities and diﬀerences between LIGs and GIGs in terms of the structural descriptions. {a1 . #.2. A2 . b1 . Our conjecture is that LILs are properly included in GILs. Gcrm = ({S. C. P ) and P is: S→AL x ¯ A → a1 A B 1 | a1 A2 B 1 L2 → C D A2 → a2 A2 y y ¯ B 1 → b B| b x L → c1 L d1 | c1 L2 d1 D → d2 d2 y ¯ C → b2 C C 2 | b2 C 2 C 2 → c2 C 2 y The derivation of a1 a1 a2 a2 a2 b1 b1 c1 c1 b2 b2 b2 c2 c2 c2 d2 d2 d2 d1 d1 S ⇒ AL ⇒ #a1 AB 1 L ⇒ #a1 a1 A2 B 1 B 1 L ⇒ yyy#a1 a1 a2 a2 a2 B 1 B 1 L ⇒ xxyyy#a1 a1 a2 a2 a2 b1 b1 L ⇒ yyy#a1 a1 a2 a2 a2 b1 b1 c1 c1 CDd1 d1 ⇒ #a1 a1 a2 a2 a2 b1 b1 c1 c1 b2 b2 b2 C 2 C 2 C 2 Dd1 d1 ⇒ yyy#a1 a1 a2 a2 a2 b1 b1 c1 c1 b2 b2 b2 c2 c2 Dd1 d1 ⇒ #a1 a1 a2 a2 a2 b1 b1 c1 c1 b2 b2 b2 c2 c2 c2 d2 d2 d2 d1 d1 ∗ ∗ ∗ ∗ ∗ ∗ 6. d2 }. L. Deﬁnition 14 A LIG is in push Greibach Normal Formal if every push production is in Greibach Normal Form (where A.] → a B[. a ∈ T and γ ∈ (N ∪ T )∗ : • A[. SET INCLUSION BETWEEN GIGS AND LIGS 115 Such structures might be relevant to be able to generate the following language(discussed in [86]). In this section we are going to address the issue of whether LILs and GILs are comparable under set inclusion. b2 .

Lemma 11 Let G = (N.γ r ]αr be the set of A-productions for which A is the leftmost symbol of the right hand side. I. Let G1 = (N.1 LIGs in push-Greibach Normal Form We can apply the techniques for left recursion elimination used in the conversion to Greibach Normal Form in CFGs. from a LIG derivation to a GIG derivation and the equivalence is easy obtained. P 1.] such that 1 ≤ i ≤ s B[. if a lemma such as the following statement could be proved. Let G1 = (N ∪ {B}.2... I. Lemma 10 Deﬁne an A − production to be a production with variable A on the left. Let A[. B[. index γ to I and replacing the A−productions by the productions: A[. P. T. I ∪ {γ }.]β 1 |β 2 | · · · |β r be the set of all B-productions (they do not aﬀect the stack).] → B[]α be a production in P and B[.3 in [41]. just a change in the interpretation of the derivation.γ i ] → αi B[.. P..γ i ] → β 1 |β 2 | · · · |β s be the remaining A−productions. T..γ 2 ]α2 | · · · |A[. This is easy to prove. Let G = (N.116 CHAPTER 6. I. GILs contain languages which are proven not to be in the LIL set.. T. Let A[.] → A[.γ i ] → β i B[.. Let A[. An adaptation of the lemma 4. S) be a LIG. T. 6. Therefore. the proper inclusion of LILs in GILs would be obtained: • For any LIG there is an equivalent LIG in push-Greibach Normal Formal. However we show in the next section that such a claim does not seem to hold. APPLICABILITY OF GIGS Lemma 9 For any LIG in push-Greibach Normal Formal there is an Equivalent GIG that generates the same language...4 in [41] as follows also holds for LIGs and it would allow to transform left recursion into right recursion. Then L(G) = L(G1 ). A similar lemma for CFGs is lemma 4. S) be the LIG formed by adding the variable B to N . S) be obtained from G by deleting the production A → Bα2 from P and adding the productions A → β 1 α2 |β 2 α2 |β r α2 .. A[. S) be a LIG.] such that 1 ≤ i ≤ r .γ i ] → αi .γ i ] → β i . P...] → [. The following lemma applies to productions that do not carry stack information.. as long as the left recursion involves the whole spine.γ 1 ]α1 |A[.. The proof of this lemma holds for the GIG case because there is no stack information aﬀected.

14 would require even more additional machinery.. .c.13: Left and right recursive equivalence in LIGs.. second case The structures depicted in ﬁgure 6..d] a [b..b..b.a] [c.c.a] [b. such as index renaming: [.a] [d..12 involving the palindrome language would be obtained.d] [c. because it would need some additional changes. There is no way that a LIG like the one generating the three dependencies could be transformed into a LIG in push-GNF.b.] [a] a b c d [c.6.a] [a] a b c d d c b a Figure 6.a] [b.] [.a] [a.a] [c. SET INCLUSION BETWEEN GIGS AND LIGS 117 In this way.b.c.a] [c.] [b.a] [c.b.a] [a] [.c..a] [d.a] [b.] a b c d a b c d [b.d] [c. [.a] [a] [.d] b [d] c d Figure 6.b.d] [d] [.b.b.c.a] [c.] [a] [b.] [.2.a] [d.c. without some mayor changes in the mapping algorithm. the equivalence of the structures in ﬁgure 6.b.b.] [a] a b c d d c b a [b.12: Left and right recursive equivalence in LIGs The previous lemma would not work in the following structure.

APPLICABILITY OF GIGS [.16 It seems that some of the above-mentioned issues would still arise if we try to obtain the inclusion of LILs by simulating either an EPDA or Becker’s 2-SA with a LR-2PDA. They are however generated by GIGs as shown in ﬁgure 6..e.] [i] [i.] c c d d b Figure 6.14: LIGs not in push-GNF I Moreover some additional problems arise as in the following languages: {w1 w2 w1 w2 R |wi ∈ Σ∗ } {w1 w2 w2 w1 R |wi ∈ Σ∗ } which can be depicted by the following structures: push push push b1 b2 a1 push push push a2 b3 push a3 push push a1 a2 b1 b2 push push pop pop pop pop push a3 a3 a2 a1 pop b3 pop b3 b2 b1 pop pop pop b3 b 2 a3 a2 pop pop b1 a1 Figure 6. However..] c c d CHAPTER 6..15: LIGs not in push-GNF II These languages do not seem to be obtainable with a LIG in push-GNF..i] [i] [.i] d [i] b b [. this alternative .] [i] b [i. The problem is that they require the two directions of recursion (i.118 [. both right and left) for push productions.

This does not change the power of the formalism. Those structural descriptions corresponding to ﬁgure 6. [8]). We will introduce informally some slight modiﬁcations to the operations on the stacks performed ¯ by a GIG. and others which are beyond the descriptive power of LIGs and TAGs. [48]. It is a standard change in PDAs (cf. by performing HPSG derivations at compile time.6. Therefore. This follows from the ability of GIGs to capture dependencies through diﬀerent paths in the derivation. One of the motivations was the potential to improve the processing eﬃciency of HPSG. Such compilation process allowed the identiﬁcation of signiﬁcant parts of HPSG grammars that were mildly context-sensitive. We will allow the productions of a GIG to be annotated with ﬁnite strings in I ∪ I instead of single symbols.3. GIGS AND HPSGS 119 pop pop push b pop push 1 a1 a2 b2 push pop pop a3 pop b3 a3 a2 a1 push b 3 push b 2 push b1 Figure 6. Also the symbols will be interpreted relative to the elements in the top of the stack (as a Dyck set). In this section. [39]) to allow to push/pop several symbols from the stack. 6.16: LIGs not in push-GNF II in GIGs should be considered in detail.3 GIGs and HPSGs We showed in the last section how GIGs can produce structural descriptions similar to those of LIGs.2 were correlated to the use of the SLASH feature in GPSGs and HPSGs. There has been some work compiling HPSGs into TAGs (cf. diﬀerent derivations . we will show how the structural descriptive power of GIGs is not only able to capture those phenomena but also additional structural descriptions compatible with those generated by HPSGs [61] schemata.

We will ignore many details and try give an rough idea on how the transmission of features can be carried out from the lexical items by the GIG stack. This is exempliﬁed with the productions X → x and X → x.17: Head-Subject Schema 2 2 33 HEAD 1 6 77 6 6 6 77 6 6 77 6 S 6 L | C 6SUBJ 6 77 6 6 4 55 6 4 6 3 6 COMPS 6 2 6 2 6 6 6 6 6 6 6 6 6 6 HEAD DTR 6 S | L | 6 6 6 6 6 6 4 6D 6 6 6 6 6 6 6 4 4 COMP-DTR S 2 2 3 7 7 7 7 7 7 7 7 2 3337 7 HEAD 1 77 7 6 7777 6 77 6 7777 6SUBJ 2 7777 6 7777 4 5577 77 COMPS 3 77 77 55 (1) Figure 6. Example 27 (intransitive verb) XP → Y P XP XP → X YP →Y X→x nv ¯ Y →y n .120 CHAPTER 6.17 depicts the tree structure corresponding to the Head-Subject Schema in HPSG [61]. in particular in the ﬁrst three cases where nv ¯ [n]v diﬀerent actions are taken (the actions are explained in the parenthesis) : nnδ#wXβ ⇒ vnδ#wxβ (pop n and push v) nv ¯ n¯δ#wXβ ⇒ δ#wxβ v nv ¯ (pop n and v ) vnδ#wXβ ⇒ v¯ vnδ#wxβ (push n and v) nδ#wXβ ⇒ ¯ n ¯ nv ¯ [n]v vnδ#wxβ ( check and push) We exemplify how GIGs can generate similar structural descriptions as HPSGs do in a very oversimpliﬁed and abstract way. obtaining very similar structural descriptions. according to what is on the top of the stack. HEAD 1 SUBJ < > H SUBJ 2 1 HEAD SUBJ < 2 > Figure 6. an how the elements contained in the stack can be correlated to constituents or lexical items (terminal symbols) in a constituent recognition process. Head-Subj-Schema Figure 6. The arrows indicate how the transmission of features is encoded in the leftmost derivation order.18 shows an equivalent structural description corresponding to the GIG productions and derivation shown in the next example (which might correspond to an intransitive verb). APPLICABILITY OF GIGS might be produced using the same production.

20..6...] 121 YP [n..21.] y [v. XP → X CP The derivation: CP → Y CP X → x nv n ¯ ¯ CP → Y →y n n#XP ⇒ n#XCP ⇒ nv#xCP ⇒ nv#xY CP ⇒ v#xyCP ⇒ v#xy ¯ ¯ The productions of example 29 (which uses some of the productions corresponding to the previous schemas) generate the structural description represented in ﬁgure 6.18: Head-Subject in GIG format #XP ⇒ #Y P XP ⇒ #yXP ⇒ n#Y XP ⇒ n#yX ⇒ v#yx Head-Comps-Schema HEAD COMP 1 < 2 HEAD 1 6 77 7 6 6 7 6 S 4 L | C 4SUBJ 2 55 7 6 7 6 COMPS 3 6 2 2 2 3337 7 6 HEAD 1 7 6 6 6 6 6 7777 6 6 HEAD DTR 4 S | L | 4SUBJ 2 5577 6D 6 77 6 4 COMPS union 4 ..19: Head-Comps Schema tree representation Figure 6.] X x Figure 6.19 shows the tree structure corresponding to the Head-Complement schema in HPSG. 3 57 5 4 COMP-DTR S 4 2 2 33 3 2 > C n-2 3 n H C HEAD 1 COMP < 2 1 . where the initial conﬁguration of the stack is assumed to be [n]: Example 28 (transitive verb) . The following GIG productions generate the structural description corresponding to ﬁgure 6. corresponding to the deriva- . 3 n > Figure 6.3.] XP [v..] XP [v.] Y [n. GIGS AND HPSGS [.

22 shows how coordination combined with SLASH can be encoded by a GIG.20: Head-Comp in GIG format tion given in example 29. XP → Y P XP CP → Y P CP X → know nv ¯¯ n XP → X CP X → hates nv n ¯ ¯ nv¯ ¯ v XP → X XP CP → X → claims Y P → Kim|Sandy|Dana|we A derivation of ‘Kim we know Sandy claims Dana hates’: #XP ⇒ #Y P XP ⇒ n#Kim XP ⇒ n#Kim Y P XP ⇒ nn#Kim we XP ⇒ nn#Kim we X XP ⇒ v n#Kim we know XP ⇒ ¯ v n#Kim we know Y P XP ⇒ ¯ n¯n#Kim we know Sandy XP ⇒ v n¯n#Kim we know Sandy X XP ⇒ v v n#Kim we know Sandy claims XP ⇒ ¯ v n#Kim we know Sandy claims Y P XP ⇒ ¯ n¯n#Kim we know Sandy claims Dana XP ⇒ v #Kim we know Sandy claims Dana hates Finally the last example and ﬁgure 6.122 [n] CHAPTER 6. Example 29 (SLASH in GIG format) . We show the contents of the stack when each lexical item is introduced in the derivation. APPLICABILITY OF GIGS XP [ v ] X [n v] [n v] CP [ v ] Y [v] CP x y ε Figure 6. ∗ .

XP → Y P XP CP → Y P CP X → talk to nv n ¯ ¯ XP → X CP CP → C → and X → did nv ¯¯ XP → X XP X → visit [n¯n]c v CXP → XP CXP Y P → W ho|you n CXP → C XP A derivation of ‘Who did you visit and talk to’: #XP ⇒ #Y P XP ⇒ n#W ho XP ⇒ n#W ho Y P XP ⇒ v n#W ho did XP ⇒ ¯ v n#W ho did Y P XP ⇒ n¯n#W ho did you CXP ⇒ ¯ v n¯n#W ho did you XP CXP ⇒ v cn¯n#W ho did you visit CXP ⇒ v cn¯n#W ho did you visit C XP ⇒ v n¯n#W ho did you visit and XP ⇒ v ∗ .3.21: SLASH in GIG format Example 30 (SLASH and Coordination2) .6. GIGS AND HPSGS [] 123 XP YP [n] [n] Kim YP [nn] we XP XP X [vn] know YP [nvn] Sandy XP XP X XP [vn] claims YP [nvn] Dana [] hates XP X CP ε Figure 6.

22: SLASH and coordination in GIG format .124 #W ho did you visit and talk to XP CHAPTER 6. APPLICABILITY OF GIGS [] YP [n] Who X [nv] did XP XP YP [nvn] you XP [cnvn] visit ε CXP CXP C [ n v n ] XP [] [ n v n] and talk to CP ε Figure 6.

time-complexity and space complexity properties. We showed the equivalence of LR-2PDA and GIGs. it might be diﬃcult to prove it. We introduced trGIGs (and sLR-2PDAs) which are more restricted than GIGs. A proof of semilinearity showed that GILs share mildly context sensitive properties. multiple agreements and crossed agreements. or it might not be the case and both languages turn out to be incomparable. We conjectured that trGIGs are not able to capture reduplication phenomena. We have shown that GIGs generate structural descriptions for the copy and multiple dependency languages which can not be generated by LIGs. We presented a comparison of the structural descriptions that LIGs and GIGs can be generate. and that are more limited regarding crossed agreements. we have shown also that the extra power that characterizes GIGs. However. We analyzed the algorithm providing an account of grammar-size. GIGs. We presented a Chomsky-Sch¨tzenberger representation theorem for GILs. The strong similarity between GIGs and LIGs suggests that LILs might be included in GILs. GILs and their properties. We also showed that both the weak and the strong descriptive power of GIGs is beyond LIGs. Finally.Chapter 7 Conclusions and Future Work We have presented LR-2PDAS. corresponds to the ability of GIGs to 125 . We showed u the descriptive power of GIGs regarding the three phenomena concerning natural language context sensitivity: reduplication. An Earley type algorithm for the recognition problem of GILs was presented with a bounded polynomial time result O(n6 ) and O(n4 ) space. The proof of correctness of this algorithm was also presented using the Parsing Schemata framework. However we did not discuss the (possible) equivalence of sLR-2PDAs and trGIGs.

should be addressed.126 CHAPTER 7. CONCLUSIONS AND FUTURE WORK generate dependent paths without copying the stack but distributing the control in diﬀerent paths through the leftmost derivation order. 7. Although they do seem to be. coupled context-free grammars among other formalisms. We have established some coarse-grain relations with other families of languages (CFLs and CSLs). GILs and GCSLs are incomparable but trGILs might be a properly included in GCSLs. which follows as a corollary from the semi-linearity property. A candidate pumping lemma for GILs was suggested.1 Future Work The results described in the previous section are encouraging enough to continue further work concerning Global Index Languages. except the emptiness problem. We have shown also that those non-local relationships which are usually encoded in HPSGs as feature transmission can be encoded in GIGs using its stack. The similarity of GIGs and counter automata. exploiting the ability of GIGs to encode dependencies through connected dependent paths and not only through a spine. The relationships between GILs LILs and ILs is still an open question. this class might not have any theoretical import besides the fact that the recognition problem is in O(n3 ). In this case. We also mentioned the issue of ambiguity and we deﬁned GILs with deterministic indexing. We have not addressed most of the decision problems. the proof should be considered in detail. However. We mention now some of the issues that might be addressed in future research. however there are other variations on Earley’s algorithm that were not considered and that might improve the impact of the grammar size on the parsing . we did not explore the properties of the class of deterministic GILs. however. it is harder to determine whether GILs with deterministic indexing are a proper subset of the Global Index Languages. and the membership problem. We gave an Earley algorithm for GIGs . We deﬁned deterministic LR-2PDAs and deterministic GILs. They seem to be a proper subset of GILs and they also seem to be closed under complementation (which could be proved following the proof for CFLs). however it is not clear to us if such a pumping lemma would provide an additional tool beyond the result already obtained on semilinearity. and we gave an algorithm to parse LR(k) GILs.

7.1. FUTURE WORK

127

complexity. There are also some details on the introduction of indices which would improve its performance. Another alternative to be considered is extension of the LR(k) algorithm to generalized LR(k) ` la Tomita. If a Chomsky Normal Form for GILs is obtainable, a CYK bottom-up algorithm a could be considered. We considered some of the issues related to Natural Language from a formal and abstract point of view, i.e. language sets and structural descriptions. In the same perspective it might be interesting to see if GILs are able to describe some phenomena that are beyond mildly-context sensitive power (e.g. scrambling). Also, we only compared GILs with LIGs; there are other formalisms to be considered. Minimalist Grammars [78] share some common properties with GILs. It would be interesting to determine if minimalist grammars can be compiled into GILs. In the area of practical applications, it would be interesting to implement the HPSG compilation ideas. Another relevant question is how much of TAG systems can be compiled into GIGs and how do the performances compare. It should be noted that the capabilities of TAGs to deﬁne extended domains of locality might be lost in the compilation process. We mentioned in the introduction some of the approaches to use generative grammars as models of biological phenomena. The capabilities of GIGs to generate the multiple copies language (or multiple repeats in the biology terminology) as well as crossing dependencies (or pseudo-knots) might be used in the computational analysis of sequence data.

128

CHAPTER 7. CONCLUSIONS AND FUTURE WORK

Bibliography

[1] A. V. Aho. Indexed grammars - an extension of context-free grammars. Journal of the Association for Computing Machinery, 15(4):647–671, 1968. [2] A. V. Aho, R. Sethi, and J. D. Ullman. Compilers: Principles, Techniques, and Tools. AddisonWesley, Reading, MA, 1986. [3] M. A. Alonso, D. Cabrero, and M. Vilares. Construction of eﬃcient generalized LR parsers. In Derick Wood and Sheng Yu, editors, Automata Implementation, volume 1436, pages 7–24. Springer-Verlag, Berlin-Heidelberg-New York, 1998. [4] M. A. Alonso, E. de la Clergerie, J. Gra˜a, and M. Vilares. New tabular algorithms for LIG n parsing. In Proc. of the Sixth Int. Workshop on Parsing Technologies (IWPT’2000), pages 29–40, Trento, Italy, 2000. [5] J. Autebert, J. Berstel, and L. Boasson. Context-free languages and pushdown automata. In A. Salomaa and G. Rozenberg, editors, Handbook of Formal Languages, volume 1, Word Language Grammar, pages 111–174. Springer-Verlag, Berlin, 1997. [6] G. Barton, R. Berwick, and E. Ristad. Computational Complexity and Natural Language. MIT Press, Cambridge, 1987. [7] T. Becker. HyTAG: a new type of Tree Adjoining Grammars for Hybrid Syntactic Representation of Free Order Languages. PhD thesis, University of Saarbruecken, 1993. [8] T. Becker and P. Lopez. Adapting HPSG-to-TAG compilation to wide-coverage grammars. In Proceedings of TAG+5, pages 47–54. 2000.

129

2003. GIGs: Restricted context-sensitive descriptive power in bounded polynomial-time. Chicago Journal of Theoretical Computer Science. 14 pages. France. 1994. Boullier. Comput. [14] G. Otto. Buntrock and F.130 BIBLIOGRAPHY [9] R.. scrambling. Lecture Notes in Computer Science. February 2000. [12] P. 1995. [10] P. pages 77–88. [16] G. Philadelphia. Weakly growing context-sensitive grammars. In Proceedings of the Sixth International Workshop on Parsing Technologies (IWPT2000). Range concatenation grammars. n Santa Barbara. Trento. Niemann. Languages and Programming. January 1999.CA. Boullier. Buntrock and G. . Italy. [15] G. In Automata.. In In Proceeding of CIAA 2003. Growing context sensitive languages and church-rosser languages. February 16-22. Conﬂuent and other types of thue systems. INRIA. In Symposium on Theoretical Aspects of Computer Science. [19] J. 1982. 2003. Lorys. and range concatenation grammars. pages 15–22. A generalization of mildly context-sensitive formalisms. Assoc. Research Report RR-3614. Book. Casta˜o. mix. [11] P. [17] J. The variable membership problem: Succinctness versus complexity. Casta˜o. of Cicling 2003. LR Parsing for Global Index Languages (GILs). 2003. pages 595–606. [13] G. Boullier. Mexico City. In Sandra K n ubler Kotaro Funakoshi and Jahna Otterbacher. 1992. Chinese numbers. In Proceedings of the 12th Symposium on Theoretical Aspects of Computer Science. Mach. [18] J. volume 900. In TAG Workshop. Proceedings of the ACL-2003 Student ¨ Research Workshop. n In Proc. Lorys. Buntrock and K. editors. J. 1996. pages 53–64. Casta˜o. Rocquencourt. 1998. On the applicability of global index grammars. Buntrock and K. 29:171–182. On growing context-sensitive languages.

Casta˜o. 8:345–351. Linguistics and Philosophy. and S. Berlin. Earley. 1996. pages 118–161. and A. editors. 7(3):253–292. June. 1997. Bloomington. 1970. Rohrer.-P. [30] S. Hotz. Indiana. Communications of the ACM. Grammars with controlled derivations. Amsterdam. of Mathematics of Language. In U. The Netherlands. 1989. [24] E. Sch¨tzenberger. Ginsburg. Dahlhaus and M. Pitsch G. Multipushdown languages and grammars. Rozenberg a and A. Breveglieri. P˘un. Culy. Mc Graw Hill. pages 85–99. 13:94– 102. 1996. [23] C. Sci. 1985. a Heidelberg. P˘un. [25] J. In Colloquium on Trees in Algebra and Programming. pages 205– 233. G. 1966. n Proc. Handbook of Formal Languages. K. Membership for growing context sensitive grammars is polynomial. Computer Programming and Formal Systems. Dassow and G. [28] G. 1963. Chomsky and M. Berlin. In u P. Salomaa. L. C. D. An Eﬃcient Context-free Parsing Algorithm. pages 69–94. The complexity of the vocabulary of bambara. Reyle and C. Theor. [26] J. MOL 8. Dordrecht. The algebraic theory of context-free languages. 1988. [27] J. Comp. Hirschberg. New York. 2003b.BIBLIOGRAPHY 131 [20] J. [21] A. Regulated Rewriting in Formal Language Theory. Citrini. editors. Natural Language Parsing and Linguistic Theories. editors. Braﬀort and D. Dassow. [22] N. Salomaa. Springer. Global index grammars and descriptive power.. 2. In G. Applicability of indexed grammars to natural languages. International Journal of Foundations of Computer Science. Vol. Reidel. [29] G. editors. On parsing coupled-context-free languages. Rogers. Warmuth. . Cherubini. Springer. 1986. Gazdar. Oehrle and J. New York. Reghizzi. In R. The Mathematical Theory of Context Free Languages. North-Holland.

Words. [32] S. Inc. 1995. Reversal-bounded multicounter machines and their decision problems. 1978. 1971. Massachusetts. Harkema. pages 737–759. H. Journal of the ACM (JACM). Finite-turn pushdown automata. and Computation. Parsing contextual grammars with linear. [35] A. 30 1996. Harju. [41] J. Salomaa. In 104. 1995. Groenink. Ginsburg and M. Tokio. 1987. Springer. MA. Hopcroft and Jeﬀrey D. H. Decision questions concerning semilinearity. Reading. 25(1):116–133. Harrison. [37] T. H. [36] K. Literal movement grammars.132 BIBLIOGRAPHY [31] S. Introduction to Formal Language Theory. [33] S. Mitrana. [39] M. Bulletin of Mathematical Biology. In IWPT 2000. No. page 22. 2001. University College. Formal language theory and dna: An analysis of the generative capacity of speciﬁc recombinant behaviors. In Martin-Vide and V. 2000. biology and linguistics meet. Ullman. Head. Ibarra. Dublin. H. regular and context-free selectors. Mild context-sensitivity and tuple-based generalizations of context-free grammar. a morphisms and commutation of languages. Ginsburg and E. [34] A. 2000. E.. and A. [38] H. Harrison. Spanier. 1979. ISSN 0169-118X. 5(4):365–396. In LNCS 2076. Introduction to Automata Theory. Addison-Wesley. Ibarra. [42] O. Karhum¨ki. O. Berling. Groenink. J. languages: where computer science. Harbush. Addison-Wesley Publishing Company. editors. 1966. Reading. Journal of the Association for Computing Machinery. 15. sequences. SIAM Journal on Control. One-way nondeterministic real-time list-storage. page 579ﬀ. Centrum voor Wiskunde en Informatica (CWI). 1968. New York. 4(3):429–453. Springer. Journal of Computer and System Sciences. Languages. A recognizer for minimalist grammars. Spanier. AFL with the semilinear property. Ginsburg and E. 1978. . 3:428–446. [40] T. In Proceedings of the 7th EACL Conference.

Saarbr¨cken. Zwicky. 1974. Parsing and hypergraphs. [48] R. B. Compilation of HPSG into TAG. A. Mass. [47] Laura Kallmeyer. Handbook of Formal Languages. In Automata. Cambridge. Vijay-Shanker. 1985. Local tree description grammars: A local extension of tag allowing underspeciﬁed dominance relations. March 1999. Springer. [46] A. A geometric hierarchy of languages. Karttunen. Berlin. editors. Foundational issues in natural language processing. pages 206– 250. MA. 257.. Joshi and Y. editors. Joshi. Weir. Kasper. 8(2):142–157. editors. Manning. and K. Paris. Stuart Shieber. 2000. [49] N. Tree-adjoining grammars. L. Languages and Programming. and D. Journal of Computer and System Sciences. and A. pages 31–81. 2nd Colloquium. 1991. computational and theoretical perspectives. 2001. Deterministic techniques for eﬃcient non-deterministic parsers. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics. MIT Press. [51] B. Joshi. In Peter Sells. The convergence of mildly context-sensitive grammatical formalisms. Kiefer. Cambridge. pages 255–269. In Proceedings of the 7th International Workshop on Parsing Technologies. pages 107–114. In G. 1995. Rozenberg and A. 3. [50] Dan Klein and Christopher D. Dowty. Relationship between strong and weak generative power of formal systems. pages 85–137. Vijay-Shanker. France. TUCS Technical Report No. 1997. Lang. Joshi. 1974. Natural language processing: psycholinguistic. pages 92–99.BIBLIOGRAPHY 133 [43] A. [44] A. K. Khabbaz. Chicago University Press. Netter. [45] A. and Thomas Wasow. . Grammars. In Proceedings of the Fifth International Workshop on Tree Adjoining Grammars and Related Formalisms (TAG+5). New York. 2001. u [52] Carlos Mart´ ın-Vide and et al. Salomaa. Tree adjoining grammars: How much context-sensitivity is required to provide reasonable structural description? In D. volume 14 of Lecture Notes in Computer Science. K. Sewing contexts and mildly context-sensitive languages. Vol. Schabes.

[53] R. McNaughton. An insertion into the Chomsky hierarchy? In Jewels are Forever, pages 204–212. Springer-Verlag, 1999.
[54] J. Michaelis and M. Kracht. Semilinearity as a syntactic invariant. In Christian Retoré, editor, LACL'96: First International Conference on Logical Aspects of Computational Linguistics, pages 329–345. Springer-Verlag, Berlin, 1997.
[55] J. Michaelis. Derivational minimalism is mildly context-sensitive. In Proceedings of Logical Aspects of Computational Linguistics, LACL'98, 1998.
[56] P. Miller. Strong Generative Capacity. CSLI Publications, Stanford University, Stanford CA, 1999.
[57] G. Niemann and F. Otto. The Church-Rosser languages are the deterministic variants of the growing context-sensitive languages. In Foundations of Software Science and Computation Structure, 1998.
[58] F. Otto. On the connections between rewriting and formal language theory. In Rewriting Techniques and Applications: 10th International Conference, RTA-99, LNCS 1631, Trento, Italy, 1999. Springer-Verlag.
[59] M. Alonso Pardo. Interpretación Tabular de Autómatas para Lenguajes de Adjunción de Árboles. PhD thesis, Universidade da Coruña, 2000.
[60] Rohit J. Parikh. On context-free languages. Journal of the Association for Computing Machinery, 13:570–581, 1966.
[61] C. Pollard and I. Sag. Head-driven Phrase Structure Grammar. University of Chicago Press, Chicago, IL, USA, 1994.
[62] S. Rajasekaran. Tree-adjoining language parsing in O(n^6) time. SIAM J. of Computation, 25, 1996.
[63] O. Rambow. Formal and Computational Aspects of Natural Language Syntax. PhD thesis, Dept of CIS, University of Pennsylvania, Philadelphia, 1994.

[64] E. Rivas and S. Eddy. A dynamic programming algorithm for RNA structure prediction including pseudoknots. J. Mol Biol, pages 2053–2068, 1999.
[65] E. Rivas and S. Eddy. The language of RNA: A formal grammar that includes pseudoknots. Bioinformatics, pages 334–340, 2000.
[66] S. Marcus, C. Martin-Vide, and Gh. Paun. Contextual grammars as generative models of natural languages. Computational Linguistics, pages 245–274, 1998.
[67] G. Satta. Tree-adjoining grammar parsing and boolean matrix multiplication. Computational Linguistics, 20, 1994.
[68] D. Searls. The computational linguistics of biological sequences. In L. Hunter, editor, Artificial Intelligence and Molecular Biology, pages 47–120. AAAI Press, 1993.
[69] D. Searls. String variable grammar: A logic grammar formalism for the biological language of DNA. Journal of Logic Programming, 24(1 and 2):73–102, 1995.
[70] D. Searls. Formal language theory and biological macromolecules. DIMACS Series in Discrete Mathematics and Theoretical Computer Science, page 117ff, 1999.
[71] H. Seki, T. Matsumura, M. Fujii, and T. Kasami. On multiple context-free grammars. Theoretical Computer Science, pages 191–229, 1991.
[72] S. Shieber. Evidence against the context-freeness of natural language. Linguistics and Philosophy, 8:333–343, 1985.
[73] S. Shieber and Y. Schabes. An alternative conception of tree-adjoining derivation. Computational Linguistics, 20:91–124, 1994.
[74] S. Shieber, Y. Schabes, and F. Pereira. Principles and implementation of deductive parsing. Journal of Logic Programming, 24:3–36, 1995.
[75] K. Sikkel. Parsing schemata. Springer-Verlag, 1997.
[76] K. Sikkel. Parsing schemata and correctness of parsing algorithms. Theoretical Computer Science, 199(1–2):87–103, 1998.

[77] M. Sipser. Introduction to the Theory of Computation. PWS Publishing Company, Boston, Massachusetts, 1997.
[78] E. Stabler. Derivational minimalism. Lecture Notes in Computer Science, 1328, 1997.
[79] M. Steedman. Dependency and coordination in the grammar of Dutch and English. Language, pages 523–568, 1985.
[80] M. Tomita. An efficient augmented-context-free parsing algorithm. Computational Linguistics, 13:31–46, 1987.
[81] E. de la Clergerie and M. A. Pardo. A tabular interpretation of a class of 2-stack automata. In COLING-ACL, pages 1333–1339, 1998.
[82] K. Vijay-Shanker, D. Weir, and A. Joshi. Characterizing structural descriptions produced by various grammatical formalisms. In Proc. of the 25th ACL, pages 104–111, Stanford, CA, 1987.
[83] D. Walters. Deterministic context-sensitive languages: Part II. Information and Control, 17:41–61, 1970.
[84] C. Wartena. Grammars with composite storages. In Proceedings of the Conference on Logical Aspects of Computational Linguistics (LACL '98), pages 11–14, Grenoble, 1998.
[85] Ch. Wartena. On the concatenation of one-turn pushdowns. Grammars, pages 259–269, 1999.
[86] D. Weir. Characterizing mildly context-sensitive grammar formalisms. PhD thesis, University of Pennsylvania, 1988.
[87] D. Weir. A geometric hierarchy beyond context-free languages. Theoretical Computer Science, 104(2):235–261, 1992.
[88] D. Workman. Turn-bounded grammars and their relation to ultralinear languages. Information and Control, 32 No. 2:188–200, 1976.
