NANYANG TECHNOLOGICAL UNIVERSITY

FINAL YEAR PROJECT

NLP Search Engine

Author: THAI BINH DUONG

Supervisor: PROF. CHAN CHEE KEONG
Examiner: PROF. CHONG YONG KIM

April 23, 2011

Abstract

Even though many natural language systems have been developed successfully and commercialized, none has yet proved versatile enough for a wide variety of tasks. One possible exception is IBM's Watson, which during the course of this project won against two human champions in a Jeopardy contest and showed for the first time that full-scale interaction and reasoning in natural language are finally within the reach of modern technology. In this project, a user input query in natural language form is analyzed and represented in FOPC (First Order Predicate Calculus), a form suitable as input for higher-layer tasks such as logic. In the process, referents or the implication of a question and answer pair may also be deduced.

Acknowledgment
I owe my deepest gratitude to my supervisor, Professor Chan Chee Keong, who is considerate and cheerful at the same time, and who has given me the opportunity to work on this project, which has been very enjoyable. I also want to express my gratitude to my counsellor Mr. Frank Boon Pink, Ms. Jasmine, Ms. Joanne Quek, and Professor Gwee Bah Hwee for being behind me all the time. My big thanks to Professor Francis Bond, whose teachings made me interested in natural language processing. And last but not least, my no-word-can-describe-this gratitude and love to my family and my friends Long and John for their support. Thanks to all of you for everything. All the mistakes in this project are my own.

Contents
Abstract
Acknowledgment

1 Introduction
  1.1 Literature Review
  1.2 Goals and Objectives
  1.3 Scope
  1.4 Report Organization

2 System Design
  2.1 Requirements
  2.2 Designs

3 Tools and Resources
  3.1 Tools
    3.1.1 Python
    3.1.2 Natural Language Processing Toolkit
  3.2 Resources
    3.2.1 Wordnet
    3.2.2 Brown
    3.2.3 Semcor
    3.2.4 Question Corpus
  3.3 Download and Installation
  3.4 Getting Started

4 Text Clean Up
  4.1 Introduction
  4.2 Tokenizer
  4.3 Spell Checker
    4.3.1 Introduction
    4.3.2 Method
    4.3.3 Results
    4.3.4 Evaluation
    4.3.5 Future Work

5 Part of Speech Tagger
  5.1 Introduction
  5.2 Method
  5.3 Mathematical Background
    5.3.1 Language Model
    5.3.2 Estimating Lambda Value
  5.4 Operation Explanation
  5.5 Results
  5.6 Evaluation and Discussion

6 Meaning Representation
  6.1 Introduction
  6.2 Background Theory
    6.2.1 First Order Predicate Calculus
    6.2.2 Formal Logic
  6.3 Semantic Analysis: A Study Case
  6.4 Operation Explanation
    6.4.1 Implication Derivation
    6.4.2 Presentation
  6.5 Results
    6.5.1 Semantics Analysis
    6.5.2 Meaning Representation
  6.6 Evaluations

7 Conclusion

List of Figures
  2.1 Simplified system DFD
  4.1 Spell checker flow chart
  4.2 Stemmer flow chart
  4.3 Spelling correction module flow chart
  5.1 POS tagger flow chart

List of Tables
  4.1 Transformation steps and their patterns
  4.2 Weighed transformation
  5.1 Accuracies
  5.2 Accuracies

Chapter 1

Introduction

A search engine is vital for fast and accurate information retrieval. Most search engines to date are based on keywords, metadata and ranking algorithms to return results that are most likely to match the input queries. This can be very irrelevant, as in the early search engines back in the 90s, or can be very effective, as in Google's case. Google's power lies in its gigantic knowledge base. To process such a huge knowledge base and half a billion queries per day, Google relies heavily on metadata and ranking; in fact, this suggests that in Google's method the real content of a web page plays a less significant role than it should. Nevertheless the method works remarkably well, but it can still be tricked into giving a page a higher rank than it really deserves, which is also known as search engine optimization (SEO).

Other interesting search engines that might be more useful than Google when it comes to more specific tasks are GazoPa, TinyEyes and Stock photography, all of which are similar image search engines, Bing, which is great for lifestyle, and Wolfram Alpha, the world's most academic search engine, which is also a natural language search engine, each with their distinct features.

A natural language search engine, on the other hand, will in theory be able to respond to questions from users as opposed to keywords only, and be able to analyze the actual contents of a web page to determine the level of relevance. To be fair, a traditional search engine is probably still the most suitable choice for many needs, by letting the user do the final and usually most difficult task: read through the contents and choose whatever suits their needs.

A natural language search engine can be broken down into basic natural language tasks that we perform daily: analysis, language generation, sense disambiguation, and so on. In fact, any natural language task can be grouped into the categories below:

• Phonetics and Phonology: The study of linguistic sounds.
• Morphology: The study of meaningful components of words.
• Syntax: The study of structural relationships between words.
• Semantics: The study of meaning.
• Pragmatics: The study of how language can be used to accomplish goals or in different situations.
• Discourse: The study of linguistic units larger than one single utterance.

Human attempts to build automata that mimic humanlike behavior date back some thousands of years and are still going strong, from ancient machines programmable by pegs and ropes to the mechanical marvel of a robotic lion by Leonardo da Vinci in 1515; and what kind of science fiction would it be without humanlike machines, either in the form of a lovable and talkative android or an intelligent supercomputer with its own evil will and desire? Despite a long history of envisioning, striving and many brilliant minds, it was not until about 80 years ago, in 1936, when the first freely programmable computer, the 'Z1 Computer', was invented, that humankind had the facility to realize this long-standing dream.

As technology evolves, humans also invent more methods to communicate effectively with these systems: from keyboard and mouse, to touch screen, eye and motion tracking, and even brain signals. But the holy grail of communication will be what we have developed through generations and what we are most naturally familiar with: our mother tongue, or more generally natural languages, which I believe is how machines should communicate and learn, just as we used to learn when we were kids. The linguistic tasks that we humans perform almost effortlessly every day turn out to be challenging indeed. Even though many systems have been developed successfully and commercialized [7], none has proved versatile enough for a wide variety of tasks.

In this project, the main goal is to derive the implication given a pair of question and answer. Along the way, minor tasks such as spelling and syntax analysis will also be explored.

1.1 Literature Review

Research on linguistics has been carried out in many other fields long before the computer science era and is known under different names in different fields: computational linguistics in linguistics, speech recognition in electrical engineering, and computational psycholinguistics in psychology. In computer science, it is known as natural language processing.

In 1966, Weizenbaum proposed ELIZA, a computer simulation of a Rogerian psychotherapist [21]. Using almost no information on human thought or feeling, and only clever decomposition and recomposition rules triggered by a system of ranked keywords, it sometimes produced amazingly humanlike conversation. It was so clever that Weizenbaum reported some attendants refused to disbelieve ELIZA even after the situation was explained to them [21]. Many chat bots were developed based on ELIZA, each with their unique discourse, hence their different conversation styles.

In 1997 the famous PC game Fallout was released with a feature that allowed the player to interact with in-game characters using natural language input. It performed poorly considering its closed context and was discarded in the following sequel. It was still highly exciting to see such a feature, though.

In May 2009, Wolfram Alpha was launched. Users around the world for the first time had access to a search engine which can operate using natural language both as input and output. Information retrieval and display is thorough and well organized, as if it had been carefully prepared by a human. The drawback is that it is pretty useless for non-academic purposes.

In early 2011, an IBM machine once more defeated human champions in an intellectual contest. This time it was IBM's Watson against two champions on the Jeopardy quiz show, which required contestants

to figure out which question should have been asked given a statement. Despite the fact that Watson had the entire data of Wikipedia loaded in its RAM, it still made mistakes or exhibited weird behavior at some points. Nevertheless, chances are that not so far into the future, things such as interacting freely with an intelligent system in natural language form will start to penetrate and change the way we are using machines today.

1.2 Goals and Objectives

The objective is to be able to analyse a natural language input query, which is typically a question or a question and answer pair, and return its meaning representation in a format suitable for logic operations. Additional spelling correction might be performed if required.

1.3 Scope

Currently, the program is able to perform word stemming, spelling correction, part-of-speech tagging, chunking and partial meaning representation from a natural language input.

1.4 Report Organization

The main body of this report is organized as follows. Chapter 2 covers the overall design of the system, as well as the requirements and specifications. Chapter 3 covers tools and resources; a guide for installation is also provided. Chapter 4 covers spelling correction. Part-of-speech (POS) tagging methodology, results and performance are discussed in chapter 5. Finally, chapter 6 discusses how meaning can be extracted and represented using syntax analysis information and the FOPC scheme.

Chapter 2

System Design

2.1 Requirements

At the end of the project, the program should be able to analyse a natural language query, which is typically a question or a pair of question and answer, and return its meaning representation in a format suitable for logic operations. Some necessary intermediate processes are:

• Text cleaning up: including tokenizing and spelling correction if required.
• Part-of-speech (POS) tagging using a second order hidden Markov model (HMM): accuracy should approach 90%.
• Meaning deduction and representation using first order predicate calculus (FOPC).

2.2 Designs

At the core of the program are three separate modules and a central database. The three modules are:

• spelling module: for spelling correction.
• tagging module: for POS tagging.
• sense module: for semantics analysis.

A simplified data flow diagram is shown in figure 2.1.

Figure 2.1: Simplified system DFD
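To make the data flow concrete, below is a minimal sketch of how the three modules could be chained on a query. The names tokenizer, correct, lihmmtagger and deduct follow the modules described in later chapters, but the glue function itself and the assumption that a trained lihmmtagger instance is available are illustrative, not code from the project.

def analyse(question, answer):
    # Hypothetical glue code tying the three modules together.
    q_tokens = tokenizer(question)              # text clean up (chapter 4); the spelling
    a_tokens = tokenizer(answer)                # module corrects tokens missing from the lexicon
    q_tagged = lihmmtagger.tagthem(q_tokens)    # tagging module (chapter 5)
    a_tagged = lihmmtagger.tagthem(a_tokens)
    q_str = ' '.join('%s/%s' % (w, t) for (w, t) in q_tagged)
    a_str = ' '.join('%s/%s' % (w, t) for (w, t) in a_tagged)
    return deduct(q_str, a_str).presentation    # sense module (chapter 6)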

Chapter 3

Tools and Resources

3.1 Tools

3.1.1 Python

Python, named after Monty Python, a British band of comedians, is an easy-to-use, flexible, mature, popular, object-oriented, and open source programming language designed to optimize development speed. Python emphasizes concepts such as quality, productivity, portability, and integration. In a modern software context, these terms mean (but are not limited to) readability, fast development speed, text processing power and suitability for web work. As a general-purpose language, Python can be used for almost anything computers are capable of. A few organizations currently using Python are:

• Web development: Google, Yahoo, Zope, ...
• Games: Civilization 4, Battlefield 2, ...
• Graphics: Pixar, Disney, Blender, Paint Shop Pro, ...
• Numerical applications: NASA, National Weather Service, ...
• And many more.

To be fair, it is unlikely that Python will ever be as fast as C. However, since Python programs use highly optimized structures and libraries, they tend to run near, and somehow sometimes even quicker than, the speed of C. Both C and Python have their distinct strengths and roles. In short, Python's speed of development is just as important as C's speed of execution.

3.1.2 Natural Language Processing Toolkit

The Natural Language Processing Toolkit (NLTK) is an open source collection of natural language processing libraries, software and data for Python. In this project, NLTK was used mainly for its corpus and probability modules.

3.2 Resources

Below is the collection of corpora used in the project. Some of them are shipped with NLTK.

3.2.1 Wordnet

Wordnet is a lexical database for the English language. Nouns, verbs, adjectives and adverbs are grouped into sets of synonyms (synsets), each expressing a distinct concept. Synsets are interlinked by means of semantic and lexical relations. The result is a meaningful hierarchical network of related words, which is useful for text analysis and artificial intelligence. NLTK is shipped with Wordnet 3.0.

3.2.2 Brown

The Brown University Standard Corpus of Present-Day American English (or just Brown corpus) consists of 1,014,312 words of running text of edited English prose printed in the United States during the calendar year 1961. The samples were divided into categories and subdivisions. A tagged version of the corpus, in which every word is tagged with its part of speech, is also available in NLTK.

3.2.3 Semcor

A subset of the Brown corpus in which words are also tagged with their sense along with their part of speech. Available at Rada Mihalcea's page.

3.2.4 Question Corpus

Collections of 'which' and 'what' questions tagged with part of speech and the intention of the question. Available at Rada Mihalcea's page.

3.3 Download and Installation

The download size is about 17 MB. A full installation will require approximately 800 MB of free disk space. The required packages are:

• Python 2.7
• PyYAML
• NLTK

After installing all packages, run the Python IDLE (see Getting Started) and type the commands:

>>> import nltk
>>> nltk.download()

A new window should open, showing the NLTK Downloader. Click on the File menu and select Change Download Directory. For central installation, set this to C:\nltk_data. Next, select the packages or collections you want to download. If you did not install the data to one of the above central locations, you will need to set the NLTK_DATA environment variable to specify the location of the data: right click on My Computer, then select Properties > Advanced > Environment Variables > User Variables > New.

Test that the data has been installed as follows (this assumes you downloaded the Brown Corpus):

>>> from nltk.corpus import brown
>>> brown.words()[:50]
['The', 'Fulton', 'County', 'Grand', 'Jury', 'said', ...]

3.4 Getting Started

The simplest way to run Python is via IDLE, the Integrated Development Interface. In Windows: Start -> All Programs -> Python 2.6 -> IDLE. It opens up a window and you can enter commands at the >>> prompt. Check that the Python interpreter is listening by typing the following command at the prompt. It should print Monty Python!

>>> print "Monty Python!"

You can also open up an editor with File -> New Window and type in a program. Save your program to a file with a .py extension, then run it using Run -> Run Module.
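Once the data is installed, the other corpora described in section 3.2 can be explored the same way. For instance, a minimal illustrative look at Wordnet through NLTK's corpus reader (attribute-style access as in the NLTK 2.x releases assumed throughout this report):

from nltk.corpus import wordnet as wn

for syn in wn.synsets('bank')[:3]:
    # Each synset groups synonyms that share one sense of the word.
    print syn.name, '-', syn.definition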

Chapter 4

Text Clean Up

4.1 Introduction

Text clean up is usually the very first step of every text processing task. A clever clean up can benefit the project in many ways. Even if it is just a simple procedure which filters out undesired characters, the amount of memory saved can be substantial considering a very large and noisy corpus such as HTML documents. In this project's context, the user input query will be tokenized (including punctuation), filtered of odd characters and spell checked before being used for further processing.

4.2 Tokenizer

Here is the code for the tokenizer:

import re
TOK = r'(?:\b([\w][\w\-\']*[\w]))|([^\s\w])'
def tokenizer(sent, pattern=TOK):
    return [item[0] and item[0] or item[1] \
            for item in re.findall(pattern, sent)]

The function uses a regular expression defined in pattern (default value is TOK) to search for words and punctuation in the input string sent. The expression A and B or C is equivalent to:

if A is True: return B
else: return C

However, this is not the case if B is False. A result example for a noisy input is shown below.

>>> tokenizer("W-H-A-T 'y're' looking/gaping at \"huh\" ??? ")
['W-H-A-T', "'", "y're", "'", 'looking', '/', 'gaping', 'at', '"', 'huh', '"', '?', '?', '?']
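For comparison, the same selection can also be written with Python's conditional expression (available since Python 2.5), which avoids the falsy-value pitfall mentioned above; this is an equivalent formulation, not the project's original code:

def tokenizer(sent, pattern=TOK):
    # item[0] holds a word match, item[1] a punctuation match; exactly one is non-empty.
    return [item[0] if item[0] else item[1] for item in re.findall(pattern, sent)]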

4.3 Spell Checker

4.3.1 Introduction

[17] reports a rate of 25 errors per thousand for handwritten essays by secondary school students, perhaps more if word-division errors are included. [17] also found that real-word errors account for about a quarter to a third of all spelling errors, though there was considerable variation between good spellers and poor spellers. On the other hand, a rate of 0.5% to 23% for bibliographic databases was reported in [2]. The spelling error rate and its significance vary depending on the application field. For the project's context, where the user types in the input query, the rate of errors will be high, and the errors will have a significant impact on the output results. There are many sources of spelling errors:

• Spelling by sound: wuns (ones), sed (said).
• Not hearing the sound: umrella (umbrella).
• Order of letters in words: gril (girl), brid (bird).
• Suffix and prefix: stopt (stopped), realy (really), happend (happened), lirbary (library).
• Finger slip: qhat (what), no3 (now).
• Confusion of homophones: there/their, two/too/to, them (then).
• Real word error: hole (hope).
• Similar shape in optical recognition: rn (m), 1 (number one) or l (letter l).

Depending on the kind of spelling errors, there are different methods for detection and correction [14]. Some methods are:

• N-gram technique
• Rule based technique
• Minimum edit distance technique
• Probability technique
• Neural nets technique
• Acceptance based technique
• Expectation based technique
• ...

As for this project, we will only focus on errors that result in nonexistent words. Section 4.3.2 will explain in detail the method used in this project, while the rest of this chapter will evaluate and discuss its performance.
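Of the techniques listed above, the minimum edit distance is perhaps the easiest to sketch; the project's own method, described in the next section, refines this idea by weighting the individual transformation steps. A minimal, purely illustrative implementation of the unweighted distance:

def edit_distance(a, b):
    # Classic dynamic programme: cost 1 for each insertion, deletion or substitution.
    prev = range(len(b) + 1)
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete ca
                           cur[j - 1] + 1,              # insert cb
                           prev[j - 1] + (ca != cb)))   # substitute ca -> cb
        prev = cur
    return prev[-1]

print edit_distance('umrella', 'umbrella')   # 1: a single missing letter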

4.3.2 Method

A few lexicons were prepared beforehand:

• dict_en.txt: Compiled using the English lexicon in NLTK.
• wn_dic.txt: The corpus of every lemma, its POS tag and associated definition. Compiled using Wordnet.
• small_dict.txt: Compiled using Wordnet and the stop word lexicon in NLTK.
• *.exc: Corpora of morphological exceptions (mice/mouse).

Figure 4.1 shows a simplified flow chart for the spell checker.

Figure 4.1: Spell checker flow chart

Stemmer

A handy and readily available stemmer called the Porter Stemmer can be called using the following code:

import nltk
stemmer = nltk.stem.PorterStemmer()

However, this stemmer does not validate the returned solutions. This leads to many awkward results, as shown below.

>>> stemmer.stem('propose')
'propos'
>>> stemmer.stem('goes')
'goe'
>>> stemmer.stem('grocery')
'groceri'

>>> stemmer.stem('groceries')
'groceri'

For this reason, a more sophisticated stemmer was developed. The stemmer utilises extra information such as exceptions and part of speech tags for validation. It can also return the definition associated with the part of speech of the token during the process if required. Its flow chart is shown in figure 4.2. The source code can be found in module morphy.py. Another version that also returns the meaning associated with the word can be found in module wn_dict_fast.py.

Figure 4.2: Stemmer flow chart

Spelling Correction

The underlying approach was to figure out the similarity between two tokens. A few features were chosen and combined by either linear or cascade combination. These features are:

• fitl(word, can): length difference.
• fitc(word, can): scans for matching strings between word and can, sums up their lengths, then weights them.
• fitm(word, can): in some sense, the opposite of feature fitc. It weights the unique, rather than the common, characters.
• fitorder(word, can): level of character disorder.
• transmuteI(word, can): steps required to transform word into can. The 4 basic (distance 1) transformation steps are: delete, add, change, and swap (adjacent letters only).

The experiment results indicated that the feature transmuteI alone was sufficient, hence a better accuracy. Feature fitc was chosen because of its high speed and because it usually resulted in a candidate set that is neither too broad nor too restricted. The other features can be used for selecting potential candidates to speed up the process.

Detailed Operation Explanation

Refer to figure 4.2 for the stemmer operation. A flow chart for the spelling correction is shown in figure 4.3. The source code can be found in module morphy.py.

Figure 4.3: Spelling correction module flow chart

Let's explore the module at its very top layer.

properties=[
    #(fitorder, 1.0),
    #(fitl, 1.0),
    #(fitm, 1.0),
    #(fitc, 0.5),
    (transmuteI, 1.0),
    (fitc, 0.67)]

def interpolation(word, can, classifiers=properties):
    'Linear interpolation of various classifiers.'
    # last element is used for candidate filtering
    sig = sum(wei for (prop, wei) in classifiers[:-1])
    return 1.0*sum(prop(word, can)*wei for (prop, wei) in classifiers[:-1])/sig

The properties list is a vector of features together with their weights for linear interpolation, as can be seen in the function interpolation. The last element in the feature vector is used for candidate filtering.
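Putting the pieces together, below is a minimal sketch of how the candidate filter and the interpolated score could drive the top-level correction routine. The lexicon argument, the threshold and the cut-off are illustrative assumptions of this sketch, not the project's actual values.

def suggest(word, lexicon, filter_threshold=0.5, top_n=10):
    # First pass: cheap fitc filter to keep the candidate set manageable.
    candidates = [w for w in lexicon if fitc(word, w) >= filter_threshold]
    # Second pass: rank the survivors by the interpolated similarity score.
    ranked = sorted(candidates, key=lambda w: interpolation(word, w), reverse=True)
    return ranked[:top_n]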

As stated above, only transmuteI will be used as a feature and fitc as the candidate filter. fitc is calculated using equation (4.1):

fitc = 1/2 × length(common) × (1/length(word) + 1/length(candidate))    (4.1)

Note that fitc is in the range [0, 1]. The formula implies that the bigger the value, the more similar the words are. Furthermore, when word or can becomes longer, their common strings must approach their lengths for fitc to remain significant. Below is the implementation code for fitc:

def fitc(word, can):
    """The function scans for matching strings between word and candidate,
    sums up their lengths, then weights them."""
    (word, can) = (word.lower(), can.lower())
    common = common_strings_mk2(word, can)
    l = 0
    for string in common:
        l += len(string)
    return 0.5*l*(1.0/len(word)+1.0/len(can))

The code is self explanatory. Calculating transmuteI is not as straightforward as in the case of fitc. The transformation from word to can can be assumed to go through a series of basic transformation steps. These steps are:

• Add one character.
• Delete one character.
• Change one character into a different one.
• Swap two adjacent characters.

If 4 steps are required for the transformation, we say the distance between the two words is 4. Obviously, there is an infinite number of ways to transform one word into another, but there should be a limited number of shortest paths. The function transmuteI will calculate the shortest distance and, if required, will also figure out the intermediate transformation steps. This can be achieved through a four-step process, elaborated below:

• Search for the common strings between the two words.
• Segment and align the two words based on the common strings.
• Assign transformation steps.
• Weight the transformations for calculating transmuteI.

Firstly, search for the common strings between the two words. This is a greedy process, which means it tries to group as many adjacent characters as possible, for instance 'abc' as opposed to 'a', 'b', 'c' or 'ab', 'c'. Sample code and output for the searching function:

def common_strings_mk2(word, can):
    "Return list of substrings that match, in order of appearance from left to right"
    ...

>>> common_strings_mk2('abcdefghklm', '21becdfhlkm')
['b', 'cd', 'f', 'h', 'k', 'm']

Secondly, segment and align the two words based on the common strings. The desired output of this process is shown in the example below.

def align(word, can, verbose=False):
    ...

>>> align('abcdefghklm', '21becdfhlkm')
[['a', 'b', '', 'cd', 'e', 'f', 'g', 'h', '', 'k', 'l', 'm'],
 ['21', 'b', 'e', 'cd', '', 'f', '', 'h', 'l', 'k', '', 'm'],
 [2, 4, 6, 8, 10, 12]]

Thirdly, assign transformation steps. The process is based on the fact that the transformation steps decide how the aligned column vectors look. Table 4.1 shows a summary of the patterns and their corresponding steps.

Table 4.1: Transformation steps and their patterns

word segment | candidate segment                       | transformation step
'c'          | ''                                       | Delete 'c'
''           | 'c'                                      | Add 'c'
'c'          | 'd'                                      | Change 'c' to 'd'
'c' 'd'      | 'd' 'c' (but not 'c' 'c' or 'd' 'd')     | Swap 'c' and 'd'

Finally, weight the transformations for calculating transmuteI. Some transformations are more likely to happen than others, so these transformations should be weighed less. Table 4.2 shows the weighed transformations.

Table 4.2: Weighed transformation

Step      | Pattern                                                | Weight
Swap      | Between vowels                                         | 0.5
Swap      | Between (g,h), (t,h), (p,h), (s,h), (r,h), (c,k)       | 0.5
Change    | a to e, e to a, i to e or y, o to u, u to o, y to i    | 0.5
All steps | Not the above cases                                    | 1.0

After weighing the transformation steps, transmuteI is calculated as follows:

transmuteI = 6 / (6 + sum(weighed transformations))    (4.2)

Equation (4.2) implies that the returned value is equal to 1.0 only if the distance is zero, reduces by one third when the distance is 3, and converges to zero when the distance approaches infinity. Sample code and outputs of a few key functions are shown below:

def tagseq(word, can, verbose=False):
    "Figure out the transmute steps given aligned sequences\n\
    return value: [[weight, type, involved chars, index]]"
    ...

>>> tagseq('abcdefghklm', '21becdfhlkm')
[[1.0, 'Change', 'a to 2', 0], [1.0, 'Add', '1', 1], [1.0, 'Add', 'e', 3],
 [1.0, 'Del', 'e', 5], [1.0, 'Del', 'g', 7], [1.0, 'Swap', 'l and k', 11]]

def transmuteI(word, can):
    "Measure distance (~steps required to transform word to can)"
    #tagseq(word, can) originally returned the steps required to transform word to can.
    #Basic steps are add, delete, change, and swap (adjacent chars).
    #Has been modified to weight the steps instead.
    #For example: 'swap a and e': 0.5, while ordinary steps weigh 1.0.
    return 6.0/(6+sum([val[0] for val in tagseq(word, can)]))

>>> transmuteI('abcdefghklm', '21becdfhlkm')
0.5
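To see the two measures on a small real pair, take the misspelling 'gril' for 'girl'. The values below are worked out by hand from the definitions above, so treat them as an illustration rather than output of the actual module: assuming the matcher returns the in-order common strings 'g', 'r', 'l', the total common length is 3 and fitc = 0.5 × 3 × (1/4 + 1/4) = 0.75; the only transformation needed is one swap of 'r' and 'i', which is neither a vowel-vowel swap nor one of the special pairs and therefore weighs 1.0, giving transmuteI = 6/(6 + 1.0) ≈ 0.86.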

4.3.3 Results

Below are a few demonstrations for the stemmer.

>>> morphy('goes')
[['go', 'v']]
>>> morphy('propose')
[['propose', 'v']]
>>> morphy('grocery')
[['grocery', 'n']]
>>> morphy('groceries')
[['grocery', 'n']]

Some demonstrations for the spelling correction. The error samples were taken from 4.3.1.

>>> correct('umrella')
['umbrella']
>>> correct('libary')
['library']
>>> correct('qhat')
['what', 'that', 'chat', 'hat', 'khat', 'ghat', 'qat', 'quat']
>>> correct('no3')
['nox', 'now', 'nob', 'nod', 'noe', 'nog', 'noi', 'non', 'nor', 'nos', 'not', 'nov', 'no.', 'no']
>>> correct('brid')
['rid', 'bird', 'bid', 'bride', 'braid', 'brad', 'brig', 'brie', 'brim', 'brio', 'bris', 'brit', 'grid', 'arid']
>>> correct('wuns')
['uns', 'buns']
>>> correct('sed')
['sad']
>>> correct('stopt')
['stout', 'stop', 'stops', 'stoat']
>>> correct('happend')
['happen', 'append']
>>> correct('realy')
['reply', 'rely', 'ready', 'really', 'real', 'relay', 'realm', 'realty', 'redly', 'mealy']
>>> correct('gril')
['grit', 'girl', 'grid', 'grim', 'grin', 'grip', 'gris', 'grail', 'grill', 'aril']

4.3.4 Evaluation

The stemmer worked perfectly well for the test samples.

However, the spell checker failed in 4 cases: 'wuns', 'sed', 'stopt', and 'happend'. These four cases could not be helped, since they are errors due to spelling by sound and the spell checker was not designed for this type of error. On the other hand, it is hard to define a baseline or perform an accuracy test for the spell checker: because of the undetermined nature of spelling errors, there might be no unique correct answer. The best solution therefore would be letting the user choose from a list of suggested words, so we would say a spell checker fails only if it leaves out the intended solution. From the above testing experiment, and from further experiments of quickly skimming through a dictionary, catching one glance at a long word and rapidly typing it into the computer, the spell checker rarely failed. One of the few failed cases was 'lurve', which was one of Woody Allen's words for 'love', since he thought 'love' was too weak a word.

>>> correct('lurve')
['lure', 'curve']

So in conclusion, the spell checker has done its job well.

4.3.5 Future Work

A spelling corpus is freely available at ota.ox.ac.uk/headers/0643.xml. The corpus can be used for building a statistical spell checker or for accuracy tests. However, the corpus is not readily usable; a corpus reader to make it available in NLTK would be very useful.

Chapter 5

Part of Speech Tagger

5.1 Introduction

Most of the time, we can understand an utterance while paying little to no regard to its syntax, for example "I am driving a car", or some random jungle talk such as "George good. George no hurt you!". However, when the syntax becomes complicated, or when there is ambiguity, some syntax analysis will be necessary to understand the utterance's sense correctly. In fact, most natural language tasks can be viewed as resolving ambiguity at some point. Let's consider a few examples:

1. "He didn't marry her because she was rich."
2. "I want to eat someplace that's close to NTU."
3. "He won't be in town until 4PM."
4. "When Ulysses S. Grant and Robert E. Lee met in the parlor of a modest house at Appomattox Court House, Virginia, to work out the terms for the surrender of Lee's Army of Northern Virginia, a great chapter in American life came to a close and a greater chapter began."

Considering example 1, the clause "because she was rich" can modify either the state of being married (the whole utterance) or just the cause (the verb only). Considering example 2, given the usual structure of the verb eat, this utterance can either mean that the speaker wants to eat at some nearby location, or that the speaker wants to swallow the place; the latter is much less likely to happen in the real world. The same goes for example 3: depending on which part of the sentence the preposition phrase "until 4PM" modifies, the meaning can be "being out of town before 4PM and arriving in town only then", or "arriving in town at some earlier time but not staying as late as 4PM". Example 4 might look complicated at first, but closer inspection reveals a quite simple structure: a sentence pre-modifier (When Ulysses S. Grant and Robert E. Lee met in the parlor of a modest house at Appomattox Court House, Virginia, to work out the terms for the surrender of Lee's Army of Northern

Virginia) followed by a series of simple sentences ([a great chapter in American life came to a close] [and a greater chapter began]). As those examples illustrate, a good syntax analysis will make the task of meaning extraction easier and more precise than intuition alone. In this chapter, we will explore syntax analysis, or more specifically for our context, part-of-speech (POS) tagging. Section 5.2 provides some information about the method used. The necessary mathematical calculation is covered in section 5.3. The following sections then elaborate on the program operation and its performance.

5.2 Method

The language is assumed to be a second order hidden Markov model (HMM), which means that the choice of a word depends only on its previous two words. This of course is not true, but it is not totally false either. Let's consider an example: "I painted my neighbour's whole new . . . green except for the front door yesterday." Which word is likely to fill the empty space? It should be something that can be painted on, such as face, board, house, pants, or cow. But only "house" is related to "door", which appears quite far after the empty location. Our current HMM model will fail in this case unless the term "whole new house" happens to appear more often than the others in the usual context.

A tagger based on a HMM model will estimate the probability of various sequences and return the most probable sequence or sequences. A HMM model has some major advantages compared to a decision tree model:

• It would require adept knowledge in linguistics to capture all the grammar rules for use in a decision tree, since a strict definition of a grammar rule is not easy to state. Some examples are "adjectives usually precede nouns", or "to must be followed by a noun phrase or a bare infinitive verb". Considering our context, which is part of speech tagging, the HMM model works much more efficiently, since grammar rules restrict which classes can stand next to each other.
• Most words in English are unambiguous, that is, they have only one part of speech. But many of the most common words in English are not; in fact [9] stated that over 40% of word tokens are ambiguous.
• A single HMM tagger can be reused for many languages or applications if provided with proper training data, which in this case is a set of sentences tagged with each word's part of speech. This kind of database is usually simpler to prepare, though time consuming.

However, the HMM model also has drawbacks:

• The true model of the language is not known and can only be approximated.
• It is impossible to have a database of every instance of a language, so the HMM model must account for unseen events (also known as smoothing).
• The application field should be similar to the training data.
• A database of sufficient size for training must be available.
• It is unable to validate the returned solutions.
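Since the training data is a POS-tagged corpus (the Brown corpus in this project), the transition statistics of the HMM reduce to simple counting. A minimal sketch of collecting tag unigram, bigram and trigram counts with NLTK, padding each sentence with BOS markers as described in the next section (variable names are this sketch's own):

from collections import Counter
from nltk.corpus import brown

uni, bi, tri = Counter(), Counter(), Counter()
for sent in brown.tagged_sents():
    tags = ['BOS', 'BOS'] + [tag for (word, tag) in sent]
    for i in range(2, len(tags)):
        uni[tags[i]] += 1
        bi[(tags[i-1], tags[i])] += 1
        tri[(tags[i-2], tags[i-1], tags[i])] += 1

# Relative-frequency estimates used by the trigram classifier, approximately:
# P(t3 | t1, t2) ~ tri[(t1, t2, t3)] / bi[(t1, t2)]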

5.3 Mathematical Background

5.3.1 Language Model

The tagging problem can be defined for the trigram model as follows: given a sequence [t1, t2] (POS 1 and 2), find the probability of the sequence being followed by a word w3 which has POS t3, i.e. P(t3, w3 | t1, t2):

P(t3, w3 | t1, t2) = P(w3 | t3) × P(t3 | t1, t2)    (5.1)

The formula can be derived by making the assumption that w3 only depends on t3. Similarly, the bigram and unigram models can be defined:

P(t3, w3 | t2) = P(w3 | t3) × P(t3 | t2)    (5.2)
P(t3, w3) = P(w3 | t3) × P(t3)    (5.3)

A solution can be found using linear interpolation of 5.1, 5.2 and 5.3:

argmax over tag sequences of  product over 1 ≤ i ≤ n of  P(wi | ti) × (λ1 × P(ti | ti−2, ti−1) + λ2 × P(ti | ti−1) + λ3 × P(ti))    (5.4)

in which λ1, λ2, λ3 are the weights of the classifiers, λ1 + λ2 + λ3 = 1, and t−1 = t−2 = BOS (Beginning of Sentence). Logarithms are used when the numbers get small. It should be pointed out that the returned solutions are the most probable paths, i.e. the highest product of probabilities; this is not the same as the product of the highest probabilities. One method for finding this path is known as the Viterbi algorithm [10].

5.3.2 Estimating Lambda Value

For each trigram (t1, t2, t3) in the training data, compare the following values:

(C(t1, t2, t3) − 1) / (C(t1, t2) − 1),  (C(t2, t3) − 1) / (C(t2) − 1),  (C(t3) − 1) / (N(t) − 1)

Depending on which of them is the maximum, increase the corresponding lambda by a certain amount; the chosen amount was 1. The reason for the minus 1 is that we treat the trigram currently being used as an observed event, so the actual data must be reduced by 1. For this reason we skipped trigrams which have been seen only once.

In this project, a trigram, a bigram and a unigram classifier were linearly interpolated into a tagger called lihmmtagger. In case this tagger fails to tag a word, a prefix tagger, a regular expression tagger and a default tagger will be called, in that order, to prevent error propagation caused by failing to tag a word. The source code can be found in module tagger.py.

5.4 Operation Explanation

The tagger flow chart is shown in figure 5.1.

Figure 5.1: POS tagger flow chart

There are 4 tagger classes:

• A default tagger, which automatically assigns the most common tag, which is 'NN' (147169 counts out of 1071233, that is 14%).
• retagger: a tagger based on regular expressions.
• affixtagger: a tagger based on the prefixes and suffixes of words.
• lihmmtagger: a linear interpolation of trigram, bigram and unigram taggers.

Each tagger is a separate class and has some common methods:

• train: takes a tagged corpus as training data and exports the training information for later use, since training might take a long time.
• setimport: imports previous training information.
• tagit: tags a word.
• tagthem: tags a set of words.
• accuracy: takes a tagged corpus as a test set and returns the percentage of correctly tagged words.
• test: an improved accuracy test. Takes a tagged corpus as a test set, divides it into smaller sets and performs an accuracy test on each set. Returns the average accuracy and the standard deviation.
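For concreteness, below is a minimal sketch of the lambda estimation described in section 5.3.2, reusing the uni, bi and tri counters from the earlier counting sketch; the function name and the final normalization step are assumptions of this sketch rather than code from tagger.py.

def estimate_lambdas(uni, bi, tri):
    n = float(sum(uni.values()))
    lambdas = [0.0, 0.0, 0.0]           # weights for trigram, bigram, unigram
    for (t1, t2, t3), c in tri.items():
        if c <= 1:                       # skip trigrams seen only once
            continue
        cases = [
            (c - 1.0) / (bi[(t1, t2)] - 1.0) if bi[(t1, t2)] > 1 else 0.0,
            (bi[(t2, t3)] - 1.0) / (uni[t2] - 1.0) if uni[t2] > 1 else 0.0,
            (uni[t3] - 1.0) / (n - 1.0),
        ]
        lambdas[cases.index(max(cases))] += 1   # credit the winning estimator by 1
    total = sum(lambdas)
    return [l / total for l in lambdas]         # normalize so they sum to 1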


5.5 Results

Below are the scores returned by the accuracy tests.

Table 5.1: Accuracies

Tagger                             | Mean (percent) | Standard deviation
Regular expression                 | 7.04           | 2.84
Suffix tagger                      | 28.56          | 4.38
Prefix tagger                      | 27.45          | 6.15
HMM tagger (most likely tags only) | 85.42          | 6.14

Table 5.2: Accuracies

Testset size | Mean  | Variance | Best  | Worst | Mean w/o test set segmenting
120          | 63.23 | 132      | 78.45 | 45.72 | 64.54
1010         | 67.68 | 24       | 78.56 | 58.03 | 67.25

5.6 Evaluation and Discussion

The suffix tagger had slightly better accuracy and deviation than the prefix tagger, so the suffix tagger was chosen. It was surprising to achieve such decent accuracy just by knowing the ending or starting characters of a word. As for the HMM tagger, simply going from left to right and choosing the most likely tags yields a decent accuracy and speed. However, the outcomes might not be very useful for other tasks such as information extraction, due to misleading tags. Therefore searching for the most probable path is more desirable; the drawback is that it requires much more computational effort. There are 170 possible tags, and since the tagger searches for the most probable path, the search space can easily reach several hundred thousand paths in just 10 or 15 words, which are the most common sentence lengths in the Brown corpus. A search beam (1000 to 2000) must be applied to limit the search space, but this affects the accuracy as well. In order to improve the speed of processing while limiting the effect on accuracy, complex sentences which consist of more than one clause are broken up and each clause tagged individually, since words across clauses have little syntactic relation.

Table 5.2 shows the accuracy test for the linear interpolation HMM tagger. Despite the large difference in testing scale, the scores were close to each other, hence it can be assumed that the tagger will work 67% of the time. This is a much lower score than the other HMM tagger, which should not be the case. Even though the mean accuracies for the two methods of testing approximate each other, segmenting the test set gave more insight into the accuracy scores.

This implied that the language model might have gone wrong at some point, such as in the smoothing process or in estimating the lambda values. On some occasions during the debugging process, segmenting the sentence into smaller clauses proved to give better accuracy, especially for lengthy sentences; by segmenting the sentences into clauses, the speed was also practically doubled, although the size of the accuracy improvement was unknown.

Analyzing the experiment logs suggested that punctuation can improve, but can also degrade, the performance dramatically. For instance, let's consider a sample from the log:

[('``', '``'), ('What', 'WDT'), ('is', 'BEZ'), ('with', 'IN'), ('this', 'DT'), ('jazz', 'NN'), ('vow', 'NN'), ("''", "''"), ('?', '.')]
NoCls: 2 1 1 3 3 6 12 12 12 72 432 432 432
| `` `` | `` WDT | WDT BEZ | BEZ IN | IN DT | DT NN | NN NN | NN '' | . .

The tagger assigned 'what' the tag '``' instead of 'WDT'. This might be due to punctuation marks often having a very high frequency of appearance; more specifically, the POS tag '``' has a count of 6160 as opposed to a count of 4834 for 'WDT'. On the other hand, since punctuation marks are more likely to be tagged correctly, they also have the effect of limiting error propagation. Whether punctuation should be considered during the training or tagging process, and whether its benefits outweigh its disadvantages, is unresolved at the moment. Even though it is possible to modify the program to carry out experiments to verify the above problem, in the project's context, where the inputs are user search queries, punctuation will not be a big issue. The experiment logs are provided in the database.

So in conclusion, even though the performance score was not as high as expected, when applied to the semantics module (discussed in the next chapter) the returned results were promising, and the program is good enough for practical usage.

Chapter 6

Meaning Representation

6.1 Introduction

In the previous chapter, a few examples illustrated that in many cases syntax analysis is useful but not necessary for meaning comprehension. This makes sense, since most grammar rules are about how different words can be combined to form a sentence, and only in some way represent the sense of the verbs; an exception, for example, is that some verbs must not stand alone by themselves, like "give" (this class of verbs is known as transitive verbs). This implies that a system other than grammar is necessary for meaning representation. Furthermore, consider some everyday language tasks that require some form of semantic processing:

• Answering a question.
• Following a recipe.
• Realizing that you've been insulted.

It is clear that no morphological or syntactical representation so far will get us very far on these tasks. What is needed is a representation that can bridge the gap between linguistic inputs, their meaning, and the kind of real world knowledge that is needed to perform the involved tasks. Over the years, a fair number of representational schemes have been invented to capture the meaning of natural language inputs for use in processing systems. Three notable schemes are First Order Predicate Calculus (FOPC), Semantic Networks and Frames.

In this chapter, some background theory on FOPC and formal logic will be covered in 6.2. Section 6.3 will explore how the theory can be applied to a specific case. The rest of the chapter will then explain how the program achieves the desired results described in 6.3.

6.2 Background Theory

6.2.1 First Order Predicate Calculus

Desiderata for a Representation Scheme

There are basic requirements that a meaning representation must fulfill.

Verifiability The system's ability to compare the state of affairs described by a representation to the state of affairs modeled in a knowledge base.

Unambiguous Representations Ambiguities exist in all aspects of all languages. Some means of determining that certain interpretations are more or less preferable than others is needed.

Canonical Form The phenomenon of distinct inputs that should be assigned the same meaning representation.

Inference The system's ability to derive valid conclusions based on the meaning representations of inputs and its knowledge base. The conclusions might not be explicitly represented in the knowledge base, but are logically derivable from the available propositions.

Variables Allow the system to match an unknown entity to a known object in the knowledge base so that the entire proposition is matched.

Expressiveness Finally, to be useful, a representation scheme must be expressive enough to cover a wide range of subject matter, such as time and tense.

The Semantics of FOPC

Capturing the meaning of a sentence involves identifying the terms and predicates corresponding to the various grammatical elements of the sentence. Some basic building elements of FOPC are:

Constants Refer to specific objects in the world. FOPC constants refer to exactly one object; objects can, however, have more than one constant referring to them.

Functions Functions in FOPC can be expressed as attributes of objects. FOPC functions have the same syntax as a single argument predicate; however, they are in fact terms, since they actually refer to unique objects. Example: LocationOf(NTU)

Variables Give the system the ability to draw inferences or make assertions about objects without having to refer to any particular ones.

Quantifiers Variables can be used to make statements either about a particular unknown object, or about all the objects in some arbitrary class of objects. The use of quantifiers makes these two uses possible. The two basic operators in FOPC are the 'exist' quantifier, which denotes one particular unknown object, and the 'all' quantifier, which refers to all objects in a class.

Lambda Notation Enables a formal mechanism for replacing a specific variable by a specific term. Example:

lambda x P(x)(A) --> P(A)

Formular The equivalent of a sentence in the grammar representation. A FOPC formular is a representation of objects, properties or relations between terms. Note that the arguments of formulars must be terms, i.e. constants, functions, and variables, and not formulars. Formulars can therefore be assigned True or False values depending on whether the information they encode is in accord with the knowledge base or not.

6.2.2 Formal Logic

The word 'formal' means by form, rather than by shape or meaning, since a proof obeys rules, not meaning or facts. Some necessary terms for reading and understanding formal logic are:

• Argument: A line of reasoning which starts with premises and ends with a conclusion.
• Claims: Declarative remarks about states of the world.
• Proof and disproof: A formal proof is a logical argument which convinces by following formal rules.
• Formal rules: Intermediate steps in a logical argument. Rules show how a larger proof can be made out of one or more smaller proofs.
• Contradiction: A simultaneous acceptance and rejection of some remarks. Contradiction is not allowed in formal logic (also known as the consistency principle).
• Monotonicity principle: A proof cannot be invalidated by adding premises.
• Connectives: The logic connectives are used to build larger claims out of smaller claims. There are four connectives, and each connective has two rules associated with it: introduction and elimination. The names suggest that the connectives are introduced or eliminated in the final proof.
  – And: If we accept 'A and B' then we are forced to accept both A and B simultaneously.
  – Or: If we accept 'A or B' then we can accept only A, only B, or both.
  – Not: If we accept 'not A' then accepting A leads to a contradiction.
  – If-then: If we accept 'if A then B' and A, we are forced to accept B. For instance, here is an example of the 'If-then' elimination rule: (If A then B), also A, therefore B.
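Lambda reduction of the kind shown above can be prototyped with a few lines of string manipulation. The sketch below is purely illustrative and much simpler than the atom class described later in this chapter; it assumes formulas are plain strings and variables are single identifiers.

import re

def lambda_reduce(expr, value):
    # 'lambda x <body>' applied to value: strip the binder, substitute the variable.
    match = re.match(r'lambda\s+(\w+)\s+(.*)$', expr)
    if not match:
        return expr
    var, body = match.groups()
    return re.sub(r'\b%s\b' % re.escape(var), value, body)

print lambda_reduce('lambda x P(x)', 'A')                         # P(A)
print lambda_reduce('lambda x witches(x) --> burn(x)', 'girl')    # witches(girl) --> burn(girl)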

. why do witches burn? . Consider the following excerpt from the movie “Monty Python and the Holy Grail” ..A3: ’Cause they’re made of wood? .Q5: What also floats in water? .. she’s made of wood. The names suggest that the connectives are introduced or eliminated in the final proof.So.A1: Burn them! . A witch! .A5: A duck! .A4: No.. – If-then: If we accept ‘if A then B’ and A. Each connective has two rules associated with them: introduction and elimination.A witch! . – Not: If we accept ‘not A’ then accept A lead to a contradiction. If she. – Or: If we accept ‘A or B’ then we can either accept only A.Q4: How do we tell if she’s made of wood? Does wood sink in water? . It floats.Q3: So.More witches! .... we are forced to accept B. therefore.. . Rules show how can larger proof be made out of one or more smaller proofs.3 Semantic Analysis A Study Case Semantic analysis is the process whereby meaning representation of linguistic inputs is created. . • Connectives: the logic connectives are used to build larger claims out of smaller claim. or both. There are four connectives: – And: If we accept ‘A and B’ then we are forced to accept both A and B simultaneously..Q2: What do you burn apart from witches? .• Formal rules: is an intermediate step in an logical arguments.Q1: Tell me: What do you do with witches? . logically.A2: Wood.We shall use my largest scales. Remove the supports! A representation scheme based on FOPC and logic proof for the above excerpt will look something like this: lambda_x (witches(x) --> burn(x)) (1) premise lambda_x (burn(x)-->wood(x)) (2) premise 28 .And. therefore B 6.. only B. weighs the same as a duck. For instance here is an example of ‘If-then’ elimination rule: (If A then B) also A.. .

lambda_x (witches(x) --> wood(x))    (3) implication-introduction (1) (2)
lambda_x (wood(x) --> float(x))      (4) premise
float(duck)                          (5) premise
lambda_x lambda_y ((weigh(x)=weigh(y)) and float(y) --> float(x))    (6) premise

'We shall use my largest scales. Remove the supports!'

weigh(duck) = weigh(girl)            (7) given
lambda_x ((weigh(x)=weigh(duck)) and float(duck) --> float(x))       (8) lambda-reduction (6)
(weigh(girl)=weigh(duck)) and float(duck) --> float(girl)            (9) lambda-reduction (8)
(weigh(girl)=weigh(duck)) and float(duck)                            (10) And-introduction (7) (5)
float(girl)                          (11) Implication-elimination (10) (9)
wood(girl)                           (12) Implication-elimination (11) (4)

Let's assume:

lambda_x (not_witches(x) --> not_wood(x))    (13) given
lambda_x (wood(x) --> witches(x))            (14) backward chaining (13) (3)
wood(girl) --> witches(girl)                 (15) lambda-reduction (14)
witches(girl)                                (16) Implication-elimination (15) (12)

"A witch!... A witch!"

The claim at the end is nonsensical according to our intuition. Using the logic shown above, this is expected, since there is one invalid step at (12). However, it is still quite possible for the claim to be true if we rewrite the proof as below:

lambda_x (wood(x) --> float(x))              (4) premise

lambda_x (human(x) and not_wood(x) --> not_float(x))    (4.1) premise
lambda_x (human(x) and float(x) --> wood(x))            (4.2) backward chaining (4.1)
human(girl) and float(girl) --> wood(girl)              (4.3) lambda-reduction (4.2)
human(girl)                                             (4.4) given
wood(girl)                                              (12) Implication-elimination (11) (4.4) (4.3)

A new premise (4.1), which is reasonable, is introduced. Amazingly enough, the final claim, which is supposed to be humorous and nonsensical, is then logically true.

6.4 Operation Explanation

6.4.1 Implication Derivation

The program derives the implication by figuring out what the symbols, referents and their syntactical roles in the sentences are. Consider an example:

Q: Who is Albert?
A: He is a genius.

In the above example, 'who' and 'he' are symbols, and they refer to 'genius' and 'Albert' respectively. Logically, the implication must satisfy the following two rules:

1. The left hand side of the implication must be either the argument or the predicate of the question, and the right hand side of the implication must be either the argument or the predicate of the answer.
2. The left hand side must be meaningful, or the implication will be pretty much useless; an implication such as who --> he is not very informative.

Following the above two rules, a sound implication would be Albert --> genius or Albert --> he. The two rules and the referent identification process are embedded in the deduct class, while predicate and argument extraction are handled by the Sentence class. It is worth noting that the process relies solely on the syntactical information provided by the Sentence class and pays no regard to the actual meaning of the tokens, hence the name syntax driven semantic analysis.
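A minimal sketch of how the two rules above could be applied once the subjects and predicates of both sentences are known; the attribute names and the stop list of uninformative symbols are illustrative assumptions of this sketch, not the project's actual data structures.

def candidate_implications(question, answer, symbols=('who', 'whom', 'what', 'which', 'he', 'she', 'it')):
    """question/answer: Sentence-like objects exposing .subjects and .predicates lists."""
    implications = []
    for lhs in question.subjects + question.predicates:      # rule 1, left hand side
        if lhs.lower() in symbols:                            # rule 2: lhs must be meaningful
            continue
        for rhs in answer.subjects + answer.predicates:       # rule 1, right hand side
            implications.append('%s --> %s' % (lhs, rhs))
    return implications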

phrase Class

As the name suggests, phrases such as noun phrases, preposition phrases or verb phrases can be modeled using the class phrase. Some key attributes of the class phrase are:

• string - The actual string of the phrase.
• type - Type of phrase. Currently noun phrase, preposition phrase, verb phrase and adjective phrase are supported.
• span - Beginning and ending indexes of the phrase, including postmodifiers and premodifiers.
• root - Beginning and ending indexes of the root of the phrase.
• modifiers - Modifiers of the phrase.

Each phrase has an associated function to search for that particular phrase in a sentence.

Sentence Class

This class is used to represent an instance of a sentence. It provides convenient access to many components of a sentence such as POS tags, subject, predicate and sentence type. Some key attributes of the class Sentence are:

• type - Sentence type. Currently declarative, yes/no questions and some of the Wh-questions are supported.
• wtstr - String of POS tags.
• wttups - Tuples of POS tags.
• jungle - Jungle talk version of the original sentence.
• jdiscourse - Discourse of the sentence.
• negate - Whether the sentence is negative or not.
• referent - Symbols and referents.
• SubPre - Lists of subjects and predicates.
• FOPC - FOPC representation. Unimplemented at the moment.

deduct Class

This class derives implications from a pair of question and answer. Some key attributes of the class deduct are:

• lhs - Instance of the sentence on the left hand side (the question).
• rhs - Instance of the sentence on the right hand side (the answer).
• refpairs - Symbol and referent pairs.
• presentation - List of possible implications.

atom Class

On top of generating basic atomic representations, lambda reduction and the combination of smaller expressions are also supported. Some key attributes of the class atom are:

• quantifier - Formula's quantifier.
• var - Variable symbol.
• term - Formula's term.
• predicate - Formula's predicate.
• connective - Formula's connective.
• amble - FOPC formulas without quantifiers.
• preamble - FOPC formulas with quantifiers.
• Lambda - Triggers lambda reduction.
• presentations - List of possible representations.

6.4.2 Presentation

At the moment, the representation module is not integrated into the semantic analysis modules discussed above, since the extracted information is not detailed enough for the module to generate an accurate representation. More specifically, the module needs information such as what the quantifier should be, and what the term should be (variable, constant, or function). However, if we are able to provide this information, the class atom can be used to generate a proper representation.

6.5 Results

6.5.1 Semantics Analysis

Below are the output implications for sample pairs of questions and answers. Some of them are from the study case in section 6.3, and some statements are modified into questions for compatibility, or manually tagged if they were tagged wrongly.

q1="What do you do with witches?"
a1='Burn them.'
>>> deduct(q1,a1).presentation
['witches/NNS-->burn/VB']

q2='What do you burn apart from witches?'
a2='Wood.'
>>> deduct(q2,a2).presentation
['burn/VB-->wood/NN']

q3='Why do witches burn?'
a3="Because they are made of wood"
>>> deduct(q3,a3).presentation
['witches/NNS-->made/VBN of/IN wood/NN']

q4='does/DOZ wood/NN sink/VB in/IN water/NN'
a4='No, it floats.'
>>> deduct(q4,a4,1).presentation
['wood/NN-->floats/VBZ', 'water/NN-->floats/VBZ']

q5="What also floats in water?"
a5='A duck'
>>> deduct(q5,a5).presentation
['floats/VBZ in/IN water/NN-->duck/NN']

q6='why/WRB does/DOZ she/PPS weight/VB the/AT same/AP as/CS a/AT duck/NN'
a6='Because she is made of wood'
>>> deduct(q6,a6,1).presentation
['weight/VB same/AP as/CS a/AT duck/NN-->wood/NN', 'duck/NN-->made/VBN of/IN wood/NN']

q8='who is Albert'
a8='He is a genius'
>>> deduct(q8,a8).presentation
['albert/NP-->genius/NN', 'albert/NP-->he/PPS']

q9='which country is north of America'
a9='Canada is north of America'
>>> deduct(q9,a9).presentation
['north/NR of/IN america/NP-->canada/NP']

q10='who is the most stupid guy on earth'
a10='Dummies'
>>> deduct(q10,a10).presentation
['the/AT most/QL stupid/JJ guy/NN on/IN earth/NN-->dummies/NNS']
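Each implication is returned as a single POS-tagged string with an arrow between the two sides. The small helper below is not part of the project code and is shown purely for illustration; it splits such a string back into a plain-word pair:

    # Hypothetical helper: turn 'albert/NP-->genius/NN' into ('albert', 'genius').
    def split_implication(presentation):
        lhs, rhs = presentation.split('-->')
        def strip_tags(side):
            return ' '.join(token.split('/')[0] for token in side.split())
        return strip_tags(lhs), strip_tags(rhs)

    print(split_implication('albert/NP-->genius/NN'))
    # ('albert', 'genius')
    print(split_implication('witches/NNS-->made/VBN of/IN wood/NN'))
    # ('witches', 'made of wood')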

6.5.2 Meaning Representation

Below are a few demonstrations including atomic terms, lambda reduction and combining smaller atoms into a bigger representation.

>>> t1=atom('witches',None,'var')
>>> t1.presentation
'lambda_x_witches Isa(x_witches,witches) connective '
>>> t2=atom('burn',None,'var')
>>> t2.presentation
'lambda_x_burn Isa(x_burn,burn) connective '
>>> t3=atom(t2,t1)
>>> t3.presentation
'lambda_x_burn Isa(x_burn,burn) lambda_x_witches Isa(x_witches,witches) connective Role_of_x_witches(x_burn,x_witches)'
>>> t4=atom(t3,Lambda=[[t1,'girl']])
>>> t4.presentation
'Isa(x_burn,burn) Isa(girl,witches) connective Role_of_girl(x_burn,girl)'
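The last step is a lambda reduction: the variable bound by t1 is replaced by the constant girl throughout the combined formula. The following is only a rough sketch of that substitution, under the assumption that it can be modelled as string replacement; the atom internals are not reproduced in this report, and the project's output additionally drops the remaining lambda_x_burn prefix.

    # Illustrative string-level lambda reduction, not the project's atom implementation.
    def lambda_reduce(formula, var, constant):
        reduced = formula.replace('lambda_' + var + ' ', '')   # discharge the binder
        return reduced.replace(var, constant)                  # substitute the constant

    t3_presentation = ('lambda_x_burn Isa(x_burn,burn) '
                       'lambda_x_witches Isa(x_witches,witches) '
                       'connective Role_of_x_witches(x_burn,x_witches)')
    print(lambda_reduce(t3_presentation, 'x_witches', 'girl'))
    # lambda_x_burn Isa(x_burn,burn) Isa(girl,witches) connective Role_of_girl(x_burn,girl)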

6.6 Evaluations

Further experiments on the deduction program showed that even though the program analyses the sentence in a rather simple-minded way, it worked unexpectedly well for simple sentences, and for some a bit more complex sentences.

>>> Q='Why is he kicking that poor dog?'
>>> A='Because it bites him.'
>>> deduct(Q,A).presentation
['dog/NN-->bites/VBZ him/PP', 'dog/NN-->bites/VBZ him/PP']

Q='Why does someone die?'
A='Because he is old'
>>> deduct(Q,A).presentation
['someone/PN-->old/JJ']

Some current limitations are:

• Not all types of questions, as well as sentences with clauses and commas, are supported. Currently only yes/no questions, 'who', 'whom', 'what', 'which', 'why' and simple sentences are available.
• Unable to extract information such as tense, quantity or entity relationships, which are necessary for the FOPC representation scheme.
• The analysis is based only on syntactical roles and does not consider the actual meaning or the relative locations of the tokens.

Despite the limitations, the program does work in simple contexts. Hence it can either be used as a layer in a multi-level process in which each layer solves a particular problem, or be improved to be able to deal with complex sentences and extract more useful information.

As for the representation module, it worked as expected for simple formulas. As the formulas get more complex, connectives proved to be an ambiguity issue. Let's consider an example:

    Every restaurant has a menu

The meaning representation of the sentence might take either one of the below two forms:

    all Restaurant(x) then exist e, y Having(e) and Haver(e, x) and Isa(y, Menu) and Had(e, y)
    exist y Isa(y, Menu) and all x Isa(x, Restaurant) then exist e Having(e) and Haver(e, x) and Had(e, y)

In the worst case, a sentence with n quantifiers will have n! representations.
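The n! figure is simply the number of ways the quantifiers can be ordered, since each ordering corresponds to a different scope reading. A quick illustration follows; the quantifier strings are just labels for this sketch, not the project's representation.

    # Each permutation of the quantifier prefix is one candidate scope reading,
    # so n quantifiers give factorial(n) candidate representations.
    from itertools import permutations
    from math import factorial

    quantifiers = ['all x (restaurant x)', 'exist y (menu y)', 'exist e (having-event e)']

    readings = list(permutations(quantifiers))
    assert len(readings) == factorial(len(quantifiers))   # 3! = 6 readings
    for prefix in readings:
        print(' '.join(prefix))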

Chapter 7

Conclusion

This has been a very enjoyable project for me. I have had fun learning Python and studying some very interesting aspects of the language that I used to take for granted. Though I failed to achieve the initial goals, I was contented with the achievement. Though I felt that I could have done much more had I received more formal training in linguistics, I managed to retrieve other precious things in return: I became less arrogant, I realized how magnificent this world and its people are, and I collected memorable moments such as the excitement of getting the program running for the first time, or my heart sinking to my feet on hearing about IBM Watson's triumph. I really am glad to have had the opportunity to work on this project. My deepest gratitude is owed to my supervisor, professor Chan Chee Keong. Thank you.

