
An Object-Oriented Approach in Representing the English Grammar and Parsing

W. Faris and K.H. Cheng Computer Science Department, University of Houston, Houston, Texas, U.S.A
Abstract - Using a natural language to communicate is an important requirement for any artificial intelligence program. This paper defines the purpose of the communication problem and presents its three sub-problems. The first sub-problem involves learning the grammar of a natural language, English. The second sub-problem is to understand a given instance constructed according to the learned grammar. The third sub-problem is how to use the learned natural language to express an internal thought. In addition, we describe how to represent a subset of the English grammar and present our initial effort in parsing an English sentence. These solutions provide an important first step in achieving intelligent communication between a computer and a human. We use the Object-Oriented paradigm to analyze the communication problem and design our solution.

Key Words: communication; natural language processing; artificial intelligence; parsing; grammar

1 Introduction
In an artificial intelligence program, the capability to communicate with the external world allows the program to gain knowledge, interact with its environment, and express what it knows. An important form of communication is with a natural language, which is a language people use to communicate with one another. In a multi-agent program [1], the communication agent should be responsible for learning the grammar of the natural language. It should also use this grammar to understand thoughts expressed to it and to produce responses to its user. Similar to all other knowledge acquired by our learning program, the knowledge of how to communicate is taught rather than pre-programmed. This paper focuses on the problem of representing the grammar of a natural language, English. We describe the knowledge components that are needed to define the grammar of the language in order to use it effectively. The component knowledge of this grammar is stored in the memory agent. We also describe a parsing algorithm, distributed among the different grammar terms, to identify the different parts of a given sentence. The parsing algorithm of the communication agent provides an important step in understanding a sentence.

The rest of this section provides a brief review of previous research on the different formal models of grammar. Section 2 presents the communication problem and its three sub-problems, along with our approach to teaching the system. Section 3 presents the details of how the English grammar is represented in our program. In Section 4, we describe a distributed algorithm to parse a given sentence presented to the program. Section 5 concludes the paper with a look at future research on the communication problem.

Research in natural language processing has attracted a lot of attention, leading to a variety of formal models of grammar. Besides context-free grammars, these include categorial [5], Montague [6], combinatory categorial [7], and type-theoretic [8] grammars. Functional programming languages such as Miranda [9] and Haskell [10] have been proposed, and several parsers for functional programming languages [11-13] have been developed. Many natural language interface systems, such as Lolita [14] and Windsor [15], have been built using these parsers. A comprehensive survey of the different grammars, including the lazy functional programming approach, can be found in [16]. One major problem with the functional programming approach is that additions to accommodate a complex aspect of the natural language may require a major change to many existing structures [17]. Furthermore, although the intention of lazy functional programming is to avoid the duplication of computation, it is quite possible to have unforeseen side effects.

2 The Communication Problem


We have proposed A Learning Program System (ALPS) [2] with the goal of learning knowledge similar to what a human is capable of learning. The focus has been on the development of the memory agent to store knowledge and its relationships. The program provides basic capabilities such as creating a new category and adding objects, attributes, and properties to a category. We have recently developed two major knowledge components of categories: hierarchy [3] and definition [4]. Hierarchy allows both generalization and containment relationships among categories to be specified and stored. Definition specifies the necessary and sufficient conditions of a specific category that may be used to classify objects. Currently, we provide these learning capabilities through special user interface objects that request the appropriate information. Since each kind of knowledge is composed of different sub-knowledge, this approach requires us to continue to develop these special interface objects for each new kind of knowledge. Our goal is to use simple English sentences as the interface to learn most knowledge. To achieve this goal, ALPS will need to learn the English grammar. Our program will use this grammar to understand sentences issued by a human and to access the capabilities of the current program. The user may then use this new interface to declare knowledge or to pose a question. For example, the sentence "John is a human." will add the knowledge object John to the human category. On the other hand, the question "What is John?" will require the program to find and provide its answer.

We use the Object-Oriented paradigm to analyze the communication problem and design its solution, and we implement our program in C++. An advantage of an Object-Oriented solution is the reusability of the implementation. We have reused the hierarchy class to store both the relationships among the different kinds of sentences and the relationships among grammar terms. For example, an interrogative sentence is a kind of sentence. Similarly, a personal pronoun is a pronoun and a pronoun is a noun; thus, a personal pronoun is a noun. Object-Oriented solutions also allow mix-and-match of the different components and provide simpler future extensions of any component. For instance, we have extended the condition class of the definition project by adding three sub-classes: the pattern, fulfill, and sequence conditions.

We assume the system has the capability to perform the task according to the instruction issued by its user. It is also assumed to have the intention to express its own thoughts. The communication problem is about learning the agreeable ways used for the exchange of ideas. We chose a natural language as the way to achieve this exchange. Our goal is to teach the system how to use this language to communicate. After learning the natural language, the program is able to understand what the user intends and to carry out that intention using its existing capability. Moreover, the system may use the learned language to express its thoughts.

The problem of learning a natural language is divided into three sub-problems. The first sub-problem is to learn the agreeable ways of communication, i.e., the grammar of the natural language. The grammar is comprised of the different grammar terms. Each term is defined by its grammatical formats and any restriction that it may have on its use. However, learning the grammar cannot be limited to learning the structures and rules. More importantly, it must also include learning the intention of each grammar term, which we refer to as the role of the grammar term. The second sub-problem is to understand a given instance constructed according to the learned grammar. It involves parsing a sentence in order to understand its intention, which should then be carried out. The intention is an important bridge linking the communication capability to the internal capability of a program. Finally, the third sub-problem is how to express an internal intention using the grammar. Once again, the intention may be used to decide which grammar term to use in producing a sentence.

We use an English sentence to express a complete thought. Understanding and expressing multiple thoughts, such as complex sentences or paragraphs, are not the concern of this paper. Currently, we focus on learning a subset of the grammar rules for present-tense sentences in the active voice. Sentences using other tenses and the passive voice will be considered in the future. As for the problem of subject-verb agreement, we use stemming to change plural words into their singular spellings. Stemming is a common technique to transform all variations of a word to a common root word. Words like computers, buses, mice, and runs are translated to computer, bus, mouse, and run, respectively. The original form of a word is retained and may be referenced in subsequent processing. We depend on a variation of the word banks provided by WordNet (http://wordnet.princeton.edu/) to carry out the stemming process. WordNet is a lexical database for the English language developed at the Cognitive Science Laboratory at Princeton University.
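As a minimal illustration of this stemming step, consider the C++ sketch below, which pairs an irregular-plural lookup table with a simple suffix rule and retains the original spelling of the word. The table entries and names are our own stand-ins; the actual system draws on the WordNet word banks rather than a hand-built table.

#include <map>
#include <string>

// A stand-in for the WordNet-based stemming step: irregular forms come from
// a lookup table, regular plurals drop a trailing "s", and the original
// spelling is kept for subsequent processing. Illustrative only.
struct StemResult {
    std::string root;      // the common root word, e.g., "mouse"
    std::string original;  // the original form, retained for later reference
};

StemResult stem(const std::string& word) {
    static const std::map<std::string, std::string> irregular = {
        {"mice", "mouse"}, {"buses", "bus"}, {"geese", "goose"}};
    auto it = irregular.find(word);
    if (it != irregular.end()) return {it->second, word};
    if (word.size() > 1 && word.back() == 's')  // computers -> computer
        return {word.substr(0, word.size() - 1), word};
    return {word, word};  // already a root word
}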

Our approach to learning a natural language follows closely how a human learns a language; namely, the grammatical structures are taught. Learning a language is a long and intensive process that a student cannot accomplish in one sitting. Therefore, the process is iterative. Simple grammar structures are learned first, and more complicated alternatives are learned later. Although the set of grammar taught to the program in this paper is a limited subset of the English grammar, the framework that we produced will allow our program to handle more intricate sentences after more grammar has been taught. Our effort is a step toward achieving intelligent communication between the computer and its human users. In addition, the subject of reading will teach the program to understand multiple sentences in a paragraph, and the subject of writing will teach the program to present a theme with several related thoughts in an organized manner.

Since a natural language is complex and may not conform to any formal grammar or any improvements on it, we are not concerned with any specific formal grammar. Hence, our solution will not be limited by any grammar. However, the success of our solution depends on the presence of good teachers who will teach the correct usage of the grammar. The correct usage is conveyed by the intention of a grammar term, which is the bridge that links the communication capability to the internal capability of a program. For example, the intention of a declarative sentence is to express a statement regarding the subject knowledge. Our program translates a declarative sentence into a thought, which will carry out an internal action that manipulates that knowledge accordingly. On the other hand, the purpose of an interrogative sentence is to inquire about some knowledge. This is translated into a thought that will carry out a sequence of two actions: finding and then presenting the answer to the question. Finding the answer will create a thought that contains the details of the answer. Presenting the answer will translate this internal thought into a meaningful sentence based on the learned grammar.

For the grammar learning problem, our responsibility is to identify the different knowledge components that will allow proper usage of the grammar. This also includes providing basic features to allow the system to learn these grammar details in a flexible and incremental manner. This is in stark contrast to the functional programming approach, in which new rules may require a major overhaul of previously defined structures.
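The following C++ sketch illustrates this translation of intentions into thoughts. All names here (Thought, Statement, Question, execute) are hypothetical stand-ins rather than the actual class design; the point is only that a statement manipulates knowledge while a question runs a find-then-present sequence.

#include <iostream>
#include <string>
#include <utility>

// Hypothetical sketch of the intention bridge: each kind of sentence is
// translated into a thought, and executing the thought invokes the internal
// capabilities of the program.
class Thought {
public:
    virtual ~Thought() = default;
    virtual void execute() = 0;  // carry out the intention
};

// A declarative sentence becomes a statement that manipulates knowledge.
class Statement : public Thought {
    std::string subject_, predicate_;
public:
    Statement(std::string s, std::string p)
        : subject_(std::move(s)), predicate_(std::move(p)) {}
    void execute() override {
        // e.g., "John is a human." adds the object John to the human category.
        std::cout << "add " << subject_ << " to category " << predicate_ << "\n";
    }
};

// An interrogative sentence becomes a question: find, then present, the answer.
class Question : public Thought {
    std::string subject_;
public:
    explicit Question(std::string s) : subject_(std::move(s)) {}
    void execute() override {
        std::string answer = "human";  // 1) finding the answer (stubbed here)
        std::cout << subject_ << " is a " << answer << ".\n";  // 2) presenting it
    }
};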

3 The Grammar Representation Problem


The sub-problem of learning the English grammar requires us to design the representation of all the identified knowledge components of the grammar. Learning the English grammar involves learning the various grammar terms in English, such as sentence, complete subject, verb, and preposition. There are four major components for each grammar term: structure, role, kind, and rule. Not all components are required for every grammar term. In other words, a specific grammar term may be defined by a combination of some of these components. The structure of a grammar term defines exactly the grammatical format of the term and the kind of knowledge that it contains. It is used in identifying the different parts of a sentence during parsing and in generating a sentence reflecting a specific thought of the learning program. The role of a grammar term defines the intention of the term, allowing the program to understand the exact purpose of the sentence. These roles bridge the communication agent to the rest of the program. A term may have multiple kinds, which are subsets that may share the same structure but must have different roles. Finally, a rule specifies a condition that must be satisfied. Rules may be applied directly to the grammar term or to one of its structures.

The structure of a grammar term may be either a sequence or an alternative. A sequence specifies the order of multiple grammar terms used to construct the entire grammar term. For instance, one structure of a sentence is the sequence of a complete subject followed by a complete predicate and ending with a form of punctuation. In this case, complete subject, complete predicate, and punctuation are grammar terms used to define this structure of the sentence grammar term.
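A rough C++ sketch of the four components and the two structure forms follows, as we have described them in this section; it is a reading of the text, not the actual ALPS class design.

#include <string>
#include <vector>

// The four major components of a grammar term: structure, role, kind, rule.
struct Role { std::string intention; };  // e.g., "thought", "statement"
struct Rule { std::string condition; };  // restrictions and choices (see below)

struct GrammarTerm;  // forward declaration

enum class Occurrence { Compulsory, Optional, ZeroToMany, Implied };

// A structure is either a sequence of terms or a set of alternatives.
struct Structure {
    struct Element { GrammarTerm* term; Occurrence occurrence; };
    std::vector<Element> sequence;           // used by a sequence structure
    std::vector<GrammarTerm*> alternatives;  // used by an alternative structure
};

struct GrammarTerm {
    std::string name;                   // e.g., "sentence"
    std::vector<Structure> structures;  // a term may have several formats
    std::vector<GrammarTerm*> kinds;    // sub-categories; each kind is a term
    Role* role = nullptr;               // generic role; a chosen kind's role wins
    std::vector<Rule> rules;            // attached to the term or its structures
};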

Currently, the occurrence of a term in a sequence may be compulsory, optional, zero-to-many, or implied. For example, in a verb phrase, the main verb is compulsory and the phrase may have zero-to-many helping verbs. The implied option is currently used in a structure for the second-person imperative sentence, where the subject you is not always given but rather implied, e.g., "Go to the store." It should be noted that if a grammar term has a sequence structure, then multiple sequences may be used to define it; however, a specific instance of a sentence must satisfy exactly one such sequence.

An alternative structure indicates the acceptable alternatives that a term can assume. These alternatives may include a set of grammar terms, existing knowledge, or special words for which the current system does not have an associated knowledge, such as you, of, and how. Each of these sets is implemented as a sub-class of alternatives. The first sub-class is for alternative grammar terms. For example, the grammar term main verb is an alternative comprised of an action verb, a linking verb, or a possessive verb. The second sub-class is for alternative types of knowledge. This sub-class creates a connection between the communication capability and the existing knowledge base, which is acquired separately and dynamically by the learning program. We use this sub-class to teach the noun grammar term as an alternative of the knowledge types: category, object, concept, and data type. These alternatives are responsible for finding the knowledge object that the program has already learned, such as human, John, force, and fractions, respectively. Two more sub-classes of alternatives are required to deal with words that have exact spellings. The first such sub-class handles a collection of knowledge of the same category, where each piece of knowledge is identified uniquely by a combination of indices from orthogonal dimensions. These include articles, personal pronouns, and demonstrative pronouns. For instance, the personal pronoun for first-person/subject/plural is we. In order to understand a given sentence, and to facilitate the process of deciding which pronoun to use when the program is trying to formulate a sentence, this sub-class uses a multi-dimensional data structure [18] to organize the knowledge. The second such sub-class handles words that are stored without additional knowledge. These words include prepositions, punctuation marks, forms-of-be verbs, interrogative pronouns, etc.
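The personal-pronoun collection can be pictured as a small three-dimensional table indexed by the orthogonal dimensions of person, case, and number. The fixed array below is only an illustration; the real system uses the general multi-dimensional data structure of [18].

#include <string>

// Personal pronouns organized by orthogonal dimensions (illustrative only).
enum Person { First, Second, Third };
enum Case   { SubjectCase, ObjectCase };
enum Number { Singular, Plural };

const char* personalPronoun(Person p, Case c, Number n) {
    static const char* table[3][2][2] = {
        {{"I", "we"},    {"me", "us"}},    // first person: subject, object
        {{"you", "you"}, {"you", "you"}},  // second person
        {{"he", "they"}, {"him", "them"}}, // third person (masculine shown)
    };
    return table[p][c][n];
}
// personalPronoun(First, SubjectCase, Plural) yields "we", as in the example.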

The role of a grammar term defines the purpose of the term. These roles take a successfully parsed sentence and direct the program toward properly satisfying the user's intentions. A role can either define the intention of a grammar term to guide the program, or label a grammar term to help in its identification. As an example, we have mentioned that the intent of a sentence is to express a thought; thus, the role of a sentence is a thought. However, there are different types of thoughts and therefore different types of roles for a sentence: specifically a statement, a question, a command, or an exclamation. All of these roles can be used to describe the intent of a particular sentence. When a kind of sentence is chosen, its associated role will supersede the generic thought role. When it comes to guiding the program, consider as illustrations the following sentences that use a forms-of-be linking verb: "John is a doctor." and "Humans are mammals." In both cases, the verb tells us that the subject is defined by the predicate. For this reason, the role of the forms-of-be grammar term is to define the subject. Thus, when it encounters such a sentence, the program will establish a relationship between the subject and the predicate. John will be defined as a doctor, and the human class will be classified as mammals. In addition, the role of a grammar term may be determined by its position in a sentence. For instance, a noun phrase is labeled as a subject, a direct object, or an indirect object depending on where it appears in a sentence, and this label helps to explain its role. In this case, the role is associated with the term as it appears in the sentence structure, not with the noun phrase grammar term. Furthermore, a term may be given an option of several roles; however, a role will not be assigned until a rule has been verified. As we will see in a later paragraph, a rule can either choose a role for a grammar term or restrict it from obtaining one. All these roles are part of the grammar that the user needs to teach the system so that it can communicate using the grammar.

A kind of a grammar term is a sub-category of that term. A term with different kinds is similar to one with alternative grammar terms. However, each kind must be taught with an attached role, and its role will overwrite the role of its parent grammar term. Using the sentence grammar term as an example, the role of a sentence is the generic thought role, but each specific sentence should have the role corresponding to its exact kind. A declarative sentence kind is associated with the statement role, an interrogative sentence has the question role, and a decision question, a sub-kind of an interrogative sentence, must have the decision question role. We will next discuss how rules help to decide the kind of a grammar term, and how they help in parsing. Since the knowledge that can be added to a kind is the same as for a grammar term, each kind is simply implemented as another grammar term.
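The override behavior can be summarized in a few lines of hypothetical C++; the types here are simplified stand-ins for the grammar-term classes sketched earlier.

#include <string>

struct Role { std::string intention; };
struct Term { const Role* role = nullptr; };

std::string resolveRole(const Term& term, const Term* chosenKind) {
    if (chosenKind && chosenKind->role)      // e.g., a declarative sentence
        return chosenKind->role->intention;  // yields "statement"
    if (term.role)                           // otherwise fall back to the
        return term.role->intention;         // generic role, e.g., "thought"
    return "";                               // some terms carry no role
}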

A rule specifies the condition that a fulfilled grammar term or a sub-term needs to satisfy. Given an English sentence, a grammar term is fulfilled when the content of its structure is determined. In the discussion of rules, a reference to a grammar term is understood to mean a fulfilled grammar term. We implement rules using the condition class developed in the learning of definitions [4]. We added three sub-classes to the condition base class: pattern, fulfill, and sequence. The pattern condition verifies that a term matches a certain pattern, like checking the ending punctuation of a sentence. The fulfill condition ensures a particular grammar term has been satisfied properly, such as the verb being a linking verb. By satisfying this verb condition, the program can decide that the complement must be a subject complement according to the rules of the English grammar. The responsibility of the sequence condition is to assert a certain ordering within the grammar term. For example, a sentence could have a sequence of helping verbs; however, certain helping verbs must precede others, and it is not correct to have more than one helping verb of the same kind.

There are four types of rules identified by two independent dimensions. Each type of rule serves a specific purpose and is applied at a specific time during parsing. The first dimension determines whether the rule is for a grammar term's structure or its kind. The second dimension defines whether it is a restriction or a choice rule. A restriction rule is a necessary condition that needs to be satisfied in order for the term to be acceptable. A choice rule specifies a sufficient condition to make a choice when it is satisfied. The resulting four combinations of rules are kind-restriction, term-restriction, kind-choice, and term-choice. A kind-restriction rule specifies a necessary condition that the kind must satisfy. For example, the kind-restriction rule for a decision question states that its complete subject must not be fulfilled by an interrogative pronoun. An input instance such as "Is how a human?" follows the structure requirement for a decision question, but it will not be a decision question because this restriction rule is violated. A term-restriction rule defines the necessary condition that a grammar term must satisfy. For instance, when a complex structure of a noun phrase contains an article or a prepositional phrase, the term-restriction rule requires that the noun cannot be fulfilled by a pronoun. A kind-choice rule defines the sufficient condition to decide the kind of the grammar term. If the condition is satisfied, the kind is chosen and its associated role is recorded. For example, the basic structure of a sentence contains several kind-choice rules, one of which states that if the structure ends with a period then it is sufficient to conclude that the given instance is a declarative sentence. Finally, a term-choice rule defines the sufficient condition to fulfill a grammar term by one of its alternatives. It speeds up the parsing process by dictating which alternative grammar term should be applied, instead of trying them one by one. Take for instance the complement of a sentence, which is composed of two alternatives: subject and object complements. The English grammar states that if the main verb is an action verb, then the complement must be an object complement [19]. Otherwise, a subject complement should fulfill the complement. Notice that both rules and roles may be associated with a grammar term or with its structure, so our learning interface has to provide the opportunity to introduce them as needed.
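The rule machinery can be sketched in hypothetical C++ as follows: the three condition sub-classes added to the condition class of [4], and the two dimensions that yield the four rule types. Only the pattern condition is given a working test; the other conditions, and the ParsedTerm type standing in for a fulfilled grammar term, are illustrative stubs.

#include <memory>
#include <string>
#include <vector>

struct ParsedTerm {
    std::string name;                // e.g., "sentence"
    std::vector<std::string> words;  // the consumed part of the sentence
};

struct Condition {
    virtual ~Condition() = default;
    virtual bool satisfied(const ParsedTerm& t) const = 0;
};

// Pattern: the term matches a pattern, such as the ending punctuation.
struct PatternCondition : Condition {
    std::string ending;
    bool satisfied(const ParsedTerm& t) const override {
        return !t.words.empty() && t.words.back() == ending;
    }
};

// Fulfill: a sub-term was satisfied in a particular way, such as the main
// verb being a linking verb. (Stubbed.)
struct FulfillCondition : Condition {
    std::string subTerm, requiredKind;
    bool satisfied(const ParsedTerm&) const override { return true; }
};

// Sequence: sub-terms must occur in a certain order, such as helping verbs.
struct SequenceCondition : Condition {
    std::vector<std::string> requiredOrder;
    bool satisfied(const ParsedTerm&) const override { return true; }
};

// The two independent dimensions yield the four rule types.
enum class AppliesTo { Kind, Term };
enum class Effect { Restriction, Choice };  // necessary vs. sufficient
struct GrammarRule {
    AppliesTo target;
    Effect effect;
    std::unique_ptr<Condition> condition;
};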

4 A Distributed Parsing Algorithm


We have completed the learning of a small subset of the English grammar and have started working on the second sub-problem, which involves understanding a given sentence using the learned grammar. To understand a sentence, the learning program first parses the sentence, then executes accordingly and appropriately based on the parse result and the current state of the system. Parsing determines the exact content of each grammar term and their respective roles, while ensuring all rules are satisfied. We have implemented a simple algorithm for parsing. Our solution uses the grammar structure in a top-down, depth-first order to parse a given English sentence. During parsing, each grammar term consumes the part of the sentence satisfying that term, leaving the rest of the sentence for subsequent terms to parse. A grammar term parses according to its structure, so each type of structure has its own parsing algorithm. As a result, the parsing algorithm is a distributed solution.

For the alternative structure, the parsing algorithm tries the alternatives one by one. The next alternative is tried only when the previous alternative fails. For instance, the structure for a noun includes class, concept, data type, and object, in that order. When it encounters the word John, all alternatives fail until the algorithm finally discovers that John is an object. For the sequence structure, the algorithm parses each successive term in the given order. If a term is satisfied without violating any rule, its result is stored. However, when a term fails, the processing differs depending on its occurrence attribute. For an optional, zero-to-many, or implied term, the fulfillment of the term is complete, although no part of the sentence is consumed. On the other hand, if a compulsory term fails, then either our algorithm backtracks or the parsing of this sequence fails. The parsing fails if the most recently fulfilled term is also compulsory; otherwise, our algorithm backtracks. Notice that backtracking may give back the last part of a term that appears zero-to-many times, but not the whole term. For example, the parsing of "could have been" initially fulfills the three words as three helping verbs, and then the parsing of the main verb fails. Backtracking changes the last verb, been, into a main verb, keeping "could have" as helping verbs. Finally, since a sequence structure is actually an alternative of multiple sequences, if one alternative sequence fails, our algorithm will try the next sequence.
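The sketch below condenses the two structure-specific parsing algorithms into standalone C++ functions. Grammar terms are reduced to plain parsing callables that consume a prefix of the sentence, and the helping-verb style of backtracking is omitted for brevity; this is a simplified reading of the behavior described above, not the actual distributed implementation.

#include <cstddef>
#include <string>
#include <vector>

using Words = std::vector<std::string>;
using Parser = bool (*)(const Words&, std::size_t&);  // advances pos on success

// Alternative structure: try each alternative until one succeeds.
bool parseAlternative(const std::vector<Parser>& alts,
                      const Words& w, std::size_t& pos) {
    for (Parser p : alts) {
        std::size_t save = pos;
        if (p(w, pos)) return true;
        pos = save;  // undo the failed attempt and try the next alternative
    }
    return false;
}

enum class Occurrence { Compulsory, Optional, ZeroToMany, Implied };
struct Element { Parser parse; Occurrence occ; };

// Sequence structure: parse each element in the given order. Optional,
// implied, and zero-to-many elements are fulfilled even when they consume
// nothing; a failed compulsory element fails the whole sequence.
bool parseSequence(const std::vector<Element>& seq,
                   const Words& w, std::size_t& pos) {
    std::size_t start = pos;
    for (const Element& e : seq) {
        if (e.occ == Occurrence::ZeroToMany) {
            std::size_t mark = pos;
            while (e.parse(w, pos)) mark = pos;  // consume as many as possible
            pos = mark;                          // drop the last, failed attempt
        } else {
            std::size_t save = pos;
            if (!e.parse(w, pos)) {
                pos = save;  // undo the failed attempt
                if (e.occ == Occurrence::Compulsory) { pos = start; return false; }
            }
        }
    }
    return true;
}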

Although the parsing algorithm is adequate for a majority of input cases, there are certain instances in which it does not succeed. These correspond to occurrences of left recursion. Left recursion occurs when multiple alternatives begin with the same term. For example, a noun phrase can be either a noun or a noun followed by a prepositional phrase. As an illustration, for the phrase "kings of Europe", the noun phrase will be satisfied by just the noun kings because both alternatives begin with a noun. Since the first alternative is fulfilled successfully, the second structure will never be tried, and the remaining words "of Europe" will not be considered as a part of the noun phrase, which is incorrect.

There are two common approaches for top-down parsers: backtracking and predictive parsers [20]. A backtracking LR parser is capable of eliminating the threat of left recursion. It does this by keeping a record of the choices it makes; when a parsing error occurs, it undoes the parsing back to the most recent choice and proceeds with the next possibility. Since the backtracking LR algorithm guarantees trying all possibilities, the worst-case parsing time is exponential. Predictive parsers look ahead a limited number of words in the input sentence, trying to predict what type of grammar terms to expect. Predictive parsers run in linear time, but there is no guarantee that they work for all cases. The Packrat Parser [21] combines the techniques of a top-down backtracking parser with those of a look-ahead predictive parser, while still performing in linear time. Recently, a modified version of the Packrat Parser has been developed [22] to handle left recursion. These approaches will be considered in later versions of our parsing algorithm. Since we assume the grammar is taught from simple to complicated, our parsing algorithm currently overcomes the left recursion problem by trying the more complicated alternative first, as illustrated in the sketch below.

Our parsing algorithm applies each of the four different kinds of rules at a different time during the parsing. If term-choice rules exist for an alternative structure, our algorithm uses them to decide which alternative term to parse. Our algorithm applies any term-restriction rule once a grammar term has been fulfilled successfully. After that, if kind-choice rules exist, the parsing algorithm applies them to determine the exact kind of the grammar term and stores its associated role in the parse result. When a grammar term has no role chosen by the kind-choice rules but does have a generic role, this generic role is assigned to the grammar term. Finally, our algorithm checks any kind-restriction rule to make sure the chosen kind satisfies the necessary condition.
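A toy C++ trace of the complicated-first workaround on the phrase "kings of Europe"; the helper parsers are deliberately simplistic stand-ins for the real grammar terms.

#include <cstddef>
#include <string>
#include <vector>

using Words = std::vector<std::string>;

bool parseNoun(const Words& w, std::size_t& pos) {
    if (pos < w.size() && w[pos] != "of") { ++pos; return true; }  // toy test
    return false;
}

bool parsePrepPhrase(const Words& w, std::size_t& pos) {
    std::size_t save = pos;
    if (pos < w.size() && w[pos] == "of") {
        ++pos;  // consume the preposition
        if (parseNoun(w, pos)) return true;
    }
    pos = save;
    return false;
}

bool parseNounPhrase(const Words& w, std::size_t& pos) {
    std::size_t save = pos;
    // More complicated alternative first: noun + prepositional phrase.
    if (parseNoun(w, pos) && parsePrepPhrase(w, pos)) return true;
    pos = save;
    return parseNoun(w, pos);  // simpler alternative: a bare noun
}
// On {"kings", "of", "Europe"}, parseNounPhrase consumes all three words
// instead of stopping after "kings".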

The following describes the main components of the subset of the English grammar that we have taught the learning program. The teaching includes the basic sentence structures for different kinds of sentences: declarative sentence, exclamatory sentence, interrogative sentence, and imperative sentence (a second-person command). In addition, this subset contains the structures for the decision question and the third-person command. The general components of a sentence include the complete subject, the verb, and both subject and object complements. The complete subject and the complements are all noun phrases: a noun phrase is centered on a noun and may include modifiers, such as a determiner and prepositional phrases. A prepositional phrase may begin with one of over fifty prepositions. The learned grammar also includes main verbs and helping verbs. The main verb can be a linking, action, or possessive verb, and the helping verbs cover forms-of-be, forms-of-do, forms-of-have, and modal verbs. A noun can be an existing knowledge object within ALPS or one of three pronoun types: demonstrative, personal, or interrogative pronouns. We have tested our parsing algorithm on various combinations of this learned subset. For each test sentence, the contents of all the grammar terms and their associated roles have been correctly identified and recorded. In addition, the rules have been applied at the appropriate times during parsing.

5 Future Work and Conclusion


In this paper, we have shown how we implemented an Object-Oriented program to learn and store a subset of the English natural language. With the use of classes such as grammar terms, structures, roles, and rules, the program can correctly parse sentences belonging to a large variety of sentence structures and, at the same time, identify the correct purpose of all their components. New grammar terms, such as adjectives, adverbs, compound predicates, and gerunds, may easily be taught without the need to change the existing program. Much work still needs to be done in order to complete the communication problem. For the second sub-problem of understanding a sentence, the development of the program to execute the identified roles is currently underway. The third sub-problem, constructing a valid sentence reflecting an internal thought, is theoretically the reverse process of the second sub-problem. The solutions to both sub-problems are required for the program to communicate. In addition, solutions are needed to handle other grammar usages such as verb tenses, the passive voice, various agreement problems, clauses, and possessive nouns. Furthermore, an important extension is for a communication agent to tolerate sentences with flawed grammar. Although rules are important in identifying grammatically correct sentences, people do not always speak or write sentences with perfect grammar. We would like to investigate which rules can be relaxed so that, when they are violated, the program can still predict the original intention of the writer. Finally, the use of a paragraph is required to understand multiple thoughts in a clear and organized manner, which is the subject of reading and writing. The successful completion of the communication problem is a first step towards achieving intelligent communication between a computer and a human.

6 References

[1] S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, 2nd Ed., Prentice Hall, 2003.
[2] K. Cheng, An Object-Oriented Approach to Machine Learning, Proc. WSES International Conference on Artificial Intelligence, June 2000, pp. 487-492.
[3] K. Cheng, The Representation and Inferences of Hierarchies, Proc. IASTED International Conference on Advances in Computer Science and Technology, January 2006, pp. 269-273.
[4] K. Cheng, Representing Definitions and Its Associated Knowledge in a Learning Program, Proc. International Conference on Artificial Intelligence, June 2007, pp. 71-77.
[5] J. Lambek, The mathematics of sentence structure, Amer. Math. Monthly 65, pp. 154-170, 1958.
[6] R. Montague, Universal grammar, Theoria 36, pp. 373-398, 1970.
[7] M. Steedman, Type-raising and directionality in combinatory grammar, Proc. 29th Annual Meeting of the Association for Computational Linguistics, Berkeley, CA, 1991, pp. 71-79.
[8] A. Ranta, Type-Theoretical Grammar, Oxford University Press, Oxford, U.K., 1994.

[9] D.A. Turner, A new implementation technique for applicative languages, Softw. Pract. Exper. 9,1, pp. 31-49, 1979.
[10] P. Hudak, S.L. Peyton-Jones, P. Wadler, B. Boutel, J. Fairbairn, J.H. Fasel, M.M. Guzman, K. Hammond, J. Hughes, T. Johnsson, R.B. Kieburtz, R.S. Nikhil, W. Partain, and J. Peterson, Report on the programming language Haskell, a non-strict, purely functional language, SIGPLAN Notices 27,5, R1-R164, 1992.
[11] R. Leermakers, The Functional Treatment of Parsing, International Series in Engineering and Computer Science, Kluwer Academic Publishers, 1993.
[12] P. Ljunglof, Functional pearl: Functional chart parsing of context-free grammars, J. Funct. Program. 14,6, pp. 669-680, 2004.
[13] P.C. Callaghan, Generalized LR parsing, The Happy User Guide, Chap. 3, Simon Marlow, 2005.

[14] R. Garigliano, R. Morgan, and M. Smith, The LOLITA system as a contents scanning tool, Proc. 13th International Conference on Artificial Intelligence, Expert Systems and Natural Language Processing, Avignon, France, 1993.
[15] R.A. Frost, W/AGE the Windsor attribute grammar programming environment, Proc. IEEE Symposia on Human Centric Computing Languages and Environments, 2002, pp. 96-99.
[16] R.A. Frost, Realization of Natural Language Interfaces Using Lazy Functional Programming, ACM Computing Surveys 38,4, Article 11, Dec. 2006.
[17] C. Shan, Monads for natural language semantics, Proc. 13th European Summer School in Logic, Language and Information, Student Session, K. Striegnitz, Ed., Helsinki, Finland, 2001, pp. 285-298.
[18] W. Faris and K. Cheng, A Multi-Dimensional Data Organization That Assists in the Understanding and Production of a Sentence, Technical Report UH-CS-08-04, Computer Science Department, University of Houston; accepted by the 17th International Conference on Software Engineering and Data Engineering, June 30 - July 2, 2008, Los Angeles, CA, USA.
[19] W.A. Sabin, The Gregg Reference Manual: A Manual of Style, Grammar, Usage, and Formatting, 10th Ed., McGraw-Hill/Irwin, New York, 2005.
[20] B. Ford, Packrat parsing: simple, powerful, lazy, linear time, functional pearl, Proc. 7th ACM SIGPLAN International Conference on Functional Programming, Oct. 2002, pp. 36-47.
[21] A. Thurston and J. Cordy, A backtracking LR algorithm for parsing ambiguous context-dependent languages, Proc. 2006 ACM Conference of the Center for Advanced Studies on Collaborative Research, Oct. 2006.
[22] A. Warth, J. Douglass, and T. Millstein, Packrat parsers can support left recursion, Proc. 2008 ACM SIGPLAN Symposium on Partial Evaluation and Semantics-Based Program Manipulation, Jan. 2008, pp. 103-110.
