NANYANG T ECHNOLOGICAL U NIVERSITY

F INAL Y EAR P ROJECT

NLP Search Engine

Author: T HAI B INH D UONG

Supervisor: P ROF. C HAN C HEE K EONG Examiner: P ROF. C HONG YONG K IM

April 23, 2011

Abstract Even though many natural language systems have been developed successfully and commercialized, none of them yet proved to be versatile enough for a wide variation of tasks. One exception probably was IBM’s Watson, which during the course of this project has won against 2 human champions in a Jeopardy contest and showed for the first time that full scale interaction and reasoning in natural language were finally within the reach of modern technology. In this project, user input query which is in natural language form will be analyzed and presented in FOPC (First Order Predicate Calculus) which is suitable for using as input for higher layer tasks such as logic. In the process, referents or implication from the question, answer pair might also be deducted.

Acknowledgment
I owed my deepest gratitude to my supervisor professor Chan Chee Keong, who is very considerate and cheerful at the same time, and who has given me the opportunity to work on this project which has been very enjoyable. I also want to express my gratitude to my counsellor Mr. Frank Boon Pink, Ms. Jasmine, Ms. Joanne Quek, and professor Gwee Bah Hwee for behind me all the time. My big thanks to professor Francis Bond for his teachings that I became interested in natural language processing. And last but not least, my no-word-can-describe-this gratitude and love to my family and my friends Long and John for their supports. Thanks all of you for everything. All the mistakes in this project are my own.

Contents
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Acknowledgment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 Introduction 1.1 1.2 1.3 1.4 2 Literature Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Goals and Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Report Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 2 3 3 3 4 4 4 6 6 6 6 7 7 7 7 7 7 8 9 9 9 10 10 11 16

System Design 2.1 2.2 Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Designs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

3

Tools and Resources 3.1 Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.1.1 3.1.2 3.2 Python . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Natural Language Processing Toolkit . . . . . . . . . . . . . . . . . . . . . . .

Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.2.1 3.2.2 3.2.3 3.2.4 Wordnet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Brown . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Semcor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Question Corpus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

3.3 3.4 4

Download and Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Getting Started . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Text Clean Up 4.1 4.2 4.3 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Tokenizer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Spell Checker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.3.1 4.3.2 4.3.3 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . .1 5. . . . . 6. . 6. . . . . . . .5 5. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2 Semantics Analysis . . . . . . .2 Implication Derivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Operation Explanation . . . .2 Introduction . . 6 Meaning Representation 6. . . . . . 6. . . 6. . . . . . . . . . . . . . . . . . . . . . . . . . .2 6. . . . . . . . .3. . . . . . . . . .2. . . . . . . . . . . . Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1 5. . . . . . . 6. . . . .3. . . . . . . . . . . . . . . . . . . . . . . . . . . . .4. . . . . . . . . . Conclusion . . . . . . . . . . . . . . . . . . . .1 6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Evaluation and Discussion . . . . . . . . . . . . . . . . . . . . . .4. . . . . . Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3. . . . . . . . . . Mathematical Background . . . . . . . . . . . . . . . . Future Work . . . . . . . .2 5. . . .2 5. . . . . Operation explanation . . . . . . . . . . . . . . . . .5 Results . . . .4 First Order Predicate Calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3 Introduction . . . .4 5. . . . . . . . . . . . .1 6. . . . . . . . Presentation . . . . . . . . . . . . . . . . . . . . . . . . . .1 6. . . . 17 18 19 19 20 21 21 21 22 23 23 25 25 26 26 27 28 30 30 32 32 32 34 34 36 Part of Speech Tagger 5. . . . . . . .5. . . . .6 Language Model . . . . . . . . . . . . . . . . . . . . . . .5 5 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . .3. . . . . . . . . . . . . . . . . . . .4. . . . . . . . . . . . . . . . . . . . . . . . . . .5. Background Theory . . . . . . . . . . . Meaning Representation . . . . . . . . . . . . .6 7 Evaluations . . . . . . . . . . . . . .2. . . . . . . . . . . . . . . . . . . . . . . . . . . . Semantic Analysis A Study Case . Estimating Lambda Value . . . . . . . . Formal Logic . . .4 4. . . . . . .3 6.1 6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5. . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . .1 Simplified system DFD . . . . Spell checker flow chart . POS tagger flow chart . . . .1 4. . . . . . . . . . . . . . . . 5 11 12 13 22 . .3 5. . . . . . . . . . . . . . . . . . . . . .List of Figures 2. . . . . . . . . . . . . . . . . . . . Spelling correction module flow chart . . . . . . . . . . . . . . . . . . . . . . . . . . .1 4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Stemmer flow chart . . . . . . . . . . . . .2 4. . . . . . . . .

. . .1 5. . . . . .2 Transformation steps and their patterns . . . . . .2 5. . . . . . . . . Accuracies . . . . . . . . . . . . . . . . . . . . . . . . Accuracies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 16 23 23 . . . . . . . . . . . . . . . . . . Weighed transformation . . . . . . . . . . . . . . . . . . . . . . .1 4. . . . . . . . . . . . . . . . . .List of Tables 4. . .

or can be very effective as in Google case. Most search engine up to date based on key words. each with their distinct features. any natural language tasks can be grouped into these below: • Phonetics and Phonology: The study of linguistic sounds. TinyEyes. 1 . • Morphology: The study of meaningful components of words. Natural language search engine can be broken down to basic natural language tasks that we perform daily: analysis. To process such a huge knowledge base and half a billion queries per day. Nevertheless. which is also a natural language search engine. sense disambiguation. • Syntax: The study of structural relationships between words • Semantics: The study of meaning • Pragmatics: The study of how language can be used to accomplish goal or in different situation. In fact. language generation. . the real content of a web page plays less significant role than it should do. • Discourse: The study of linguistic units larger than one single utterance. .Chapter 1 Introduction Seach engine is vital for fast and accurate information retrieval. Natural language search engine on the other hand in theory will be able to response to questions from users as opposed to keywords only. Stock photography all of which are similar image search engine. Google’s power lies in their gigantic knowledge base. the world’s most academic search engine. and Wolfram Alpha. but still it can be tricked to give a higher rank than it really should. this suggests that in Google method. which is also known as search engine optimization (SEO). To be fair. Other interesting search engines that might be more useful than Google when it comes to more specific tasks are GazoPa. traditional search engine probably is the most suitable choice by letting the user do the final and the usually the most difficult task: read through the contents and choose whatever suit their needs. This can be very irrelevant as in early search engine back in the 90s. Bing which is great for lifestyle. meta data and ranking algorithm to return results that are most likely to match the input queries. and be able to analyze the actual contents of the web page to determine the level of relevant.

or more generally natural languages. Wolfram Alpha was launched. In this project. In 1997 the infamous PC game Fallout was released with a feature that allowed player to interact with in-game characters using natural language input. and computational psycholinguistics in psychology. just as how we used to learn when we were kids and plain. Along the way. human also invent more methods to effectively communicate with the systems. it was not until about 80 years ago in 1936. it is known as natural language processing. either in a form of lovable and talkative android or an intelligent super computer with its own evil will and desire. to touch screen. eye and motion tracking. IBM machine one more time defeated human champions in an intellectual contest. It’s so clever that Weizenbaum reported some attendants refused to not believe in ELIZA even after being explained about the situation [21]. the main goal is to derive the implication given a pair of question and answer. which I believe is how the machine should. In 1966. from keyboard. But the holy grail of communication will be what we have been developed through generations and what we are most naturally familiar with: our mother tongue. In computer science. Users around the world for the first time have access to a search engine which can operate using natural language both as input and output. 1. striving and many brilliant minds. from ancient programmable machines by pegs and ropes. Information retrieval and display is throughout and well organized as if have been carefully prepared by a human. In early 2011. Using almost no information on human though or feeling and only clever decomposition and recomposition rules trigger by system of ranked keywords. and can learn. when the first freely programmable computer ‘Z1 Computer’ was invented. it sometime produced amazingly human like conversation. mouse. and what kind of science fiction that is without humanlike machine. As the technology evolves. a computer simulation of a Rogerian psychotherapist [21]. The drawback is that it’s pretty useless for nonacademic purposes.Human attempts to build automation that mimic humanlike behavior dated back some thousands years ago and still going strong. to a mechanical marvel robotic lion by Leonardo da Vinci in1515. speech recognition in electrical engineering. Even though many systems has been successfully developed and commercialized [7]. which required contestants 2 . This time it was IBM’s Watson against 2 champions on Jeopardy quiz show. It’s still highly excited to see such a feature though. that humankind had the facility to realized thist long standing dream. minor tasks such as spelling and syntax analysis will also be explored. Many chat bots were developed based on ELIZA. hence their different conversation styles. It performed poorly considering a closed context and was discarded in the following sequel. and even brain signal. Despite a long history of envisioning. Weizenbaum proposed ELIZA. yet none prove to be versatile enough for a wide variation of tasks. In May 2009. The linguistic tasks that we human perform almost effortlessly daily turn out to be challenging indeed. each with their unique discourse.1 Literature Review Research on linguistics has been carried out in many other fields long before the computer science era and are known under different names in different fields: computational linguistics in linguistics.

Next. tools and resources. Guide for installation is also provided. Despite the fact that Watson had the entire data of Wikipedia loaded in its RAM. 1.2 Goals and Objectives Be able to analyse a natural language input query. The first one. chapter 4 will cover spelling correction. Part of speech (POS) tagging methodology. which is typically a question or question and answer pair and return its meaning representation in the format suitable for logic operation. specifications.to figure out which question should have been asked given a statement. Additional spelling correction might be performed if required. or exhibited weird behavior at some points. things such as interact freely with an intelligent system in natural language form will start to penetrate and change the way we are using the machines today. chapter 6 will discuss how meaning can be extracted and represented using syntax analysis information and FOPC scheme. the program is able to perform word stemming.4 Report Organization The main body of this report consists of 4 chapters. Finally. part-of-speech tagging. chapter 2 will cover the overall design of the system. result and performance is discussed in chapter 5. as well as the requirements. 1. 3 . chunking and partial meaning representation from an natural language input.3 Scope Currently. spelling correction. 1. chance are not so far into the future.

Chapter 2 System Design 2.2 Designs At the core of the program are three separate modules and a central database. which is typically a question or a pair of question and answer and return its meaning representation in the format suitable for logic operation. • Part-of-speech (POS) tagging using second hidden Markov model (HMM): Accuracy should approach 90%. • Meaning deduction and representation using first order predicate calculus (FOPC). 2.1 4 . • sense module: for semantics analysis. Some necessary intermediate processes are: • Text cleaning up: Including tokenizing and spelling correction if required.1 Requirements At the end of the project. • tagging module: for POS tagging. Three modules are: • spelling module: for spelling correction. the program should be able to analyse a natural language query. A simplified data flow diagram is shown in figure 2.

1: Simplified system DFD 5 .Figure 2.

Zope. Python emphasizes concepts such as quality. since Python programs use highly optimized structures and libraries.. 3. mature. Python can be used for almost anything computers are capable of. objectoriented.1 3. . popular. . Python’s speed of development is just as important as C’s speed of execution. portability. it’s unlikely that Python will ever be as fast as C.. NLTK was used mainly for its corpus and probability module. National Weather Service. . .1. • Graphics: Pixar. .2 Natural Language Processing Toolkit Natural Language Processing Toolkit (NLTK) is an open source natural language processing libraries. software and data for Python. To be fair. In short. • Numerical application: NASA. named after Monty Python a British band of comedians. However. • Games: Civilization 4. A few organizations currently using Python are: • Web developments: Google. is an easy-to-use. these terms mean (but not limited to) readability. Disney. flexible. Paint Shop Pro. . As a general-purpose language. fast development speed. productivity. 6 . they tend to run near or even quicker than the speed of C language somehow. and open source programming language designed to optimize development speed. In this project. .. . Both C and Python have their distinct strengths and roles. In modern software context.1. Yahoo. • And many mores.. Blender. . text processing power and web work suitability. Battlefield 2.Chapter 3 Tools and Resources 3.1 Tools Python Python. . and integration.

which is useful for text analysis and artificial intelligence. A tagged version of the corpus. 3. Synsets are interlinked by means of semantic and lexical relations. The result is a meaningful hierarchical network of related words.3.2. 3. Some of them are shipped with NLTK.2.312 wordsl of running text of edited English prose printed in the United States during the calendar year 1961. in which every word was tagged with its part of speech is also available in NLTK. • Python 2. 3. The samples were divided into categories and subdivisions.3 Download and Installation Download size is about 17 MB. verbs. and type the commands: >>> import nltk >>> nltk. run the Python IDLE (see Getting Started).0.2.4 Question Corpus Collections of ‘which’ and ‘what’ questions tagged with part of speech and intention of the question.download() 7 .014. A full installation will require approximately 800 MB of free disk space.2 Resources Below are collection of corpora used in the process. adjectives and adverbs are grouped into sets of synonyms (synsets). NLTK is shipped with Wordnet 3. Available at Rada Mihalcea’s page 3.2 Brown The Brown University Standard Corpus of Present-Day American English (or just Brown corpus) consists of 1. Nouns.2.7 • PyYAML • NLTK After installing all packages.3 Semcor A subset of Brown corpus in which words are also tagged with their sense along with their part of speech. Available at Rada Mihalcea’s page 3. each expressing a distinct concept.1 Wordnet Wordnet is a lexical database for English language.

’said’. ’County’. ’Fulton’. Test that the data has been installed as follows.A new window should open. You can also open up an editor with File ->New Window and type in a program. ’Grand’. Save your program to a file with a . Next. the Integrated Development Interface. It opens up a window and you can enter commands at the >>> prompt. select Properties>Advanced>En Variables>User Variables>New. If you did not install the data to one of the above central locations. ’Jury’. select the packages or collections you want to download..py extension. you will need to set the NLTK DATA environment variable to specify the location of the data. set this to C:\nltk data..words()[:50] [’The’.4 Getting Started The simplest way to run Python is via IDLE.corpus import brown >>> brown.. Click on the File menu and select Change Download Directory. then run it using Run ->Run Module.6 ->IDLE Check that the Python interpreter is listening by typing the following command at the prompt. Right click on My Computer. It should print Monty Python! >>> print "Monty Python!" 8 .] 3.. For central installation. showing the NLTK Downloader. . In Windows: Start ->All Programs ->Python 2. (This assumes you downloaded the Brown Corpus): >>> from nltk.

As in this project context.sent)] The function uses a regular expression define in pattern (default value is TOK) to search for words and punctuations in the input string sent. Even if it’s just a simple procedure which filters out non-desired characters. ’?’] 9 . ’huh’. >>> tokenizer("W-H-A-T ’y’re’ looking/gaping at \"huh\" ??? ") [’W-H-A-T’.findall(pattern. A clever cleaning up can benefit the project in many ways. ’?’.2 Tokenizer Here is the code for the tokenizer: import re TOK=r’(?:\b([\w][\w\-\’]*[\w]))|([ˆ\s\w])’ def tokenizer(sent. 4. ’/’. ’?’.1 Introduction Usually the very first step of every text processing tasks. filtered out odd characters and spell checked before being used for further processing.Chapter 4 Text Clean Up 4. "’". A result example for an noisy input is shown below.pattern=TOK): return [item[0] and item[0] or item[1] \ for item in re. ’at’.. ’"’. ’"’. user input query will be tokenized (including punctuation). this is not the case if B is False. the amount of memory saving can be substantial considering a very large and noisy corpus such as html documents. ’looking’. The expression A and B or C is an equivalent to: if A is True: return B else: return C However. "’". "y’re". ’gaping’.

[17] found that real-word errors account for about a quarter to a third of all spelling errors. we will only focus on errors that result in nonexistent words. a rate of 0. though there was considerable variation between good spellers and poor spellers. Spelling error rate and it’s significance varies depend on the application fields.4. happend (happened).3 4. • Real word error: hole (hope). sed (said). lirbary (library) • Confusion of homophones: there/their. while the rest of this chapter will evaluate and discuss on the performance. 10 .5% to 23% for bibliographic database was reported in [2]. the rate of errors will be high.. 1 (number one) or l (letter l).1 Spell Checker Introduction [17] reports a rate of 25 errors per thousand for handwritten essays by secondary school students. • Suffix and prefix: stopt (stopped). • Order of letters in words: gril (girl). perhaps more if you include word-division errors..3. brid (bird) • Finger slip: what (qhat). there are different methods for detection and correction [14]. • Not hearing the sound: umrella (umbrella). • Similar shape in optical recognition: rn (m). On the other hand. Depend on what kind of spelling errors. two/too/to. For the project’s context where user types in the input query.3. Some methods are: • N-gram technique • Rule based technique • Minimum edit distance technique • Probability technique • Neural nets technique • Acceptance based technique • Expectation based technique • .2 will explain in detail the method use in this project. and they will have significant impact on the output results There are many sources of spelling errors: • Spelling by sound: wuns (ones). them (then). realy (really). As for this project. Section 4. no3 (now).

1 shows a simplified flow chart for the spell checker Figure 4.exc: Corpora of morphological exceptions (mice/mouse). Compiled using Wordnet. Figure 4.4.txt: Compiled using Wordnet and stop word lexicon in NLTK.txt: The corpus of every lemma. • *.stem.2 Method A few lexicons were prepared before hand: • dict en.1: Spell checker flow chart Stemmer A handy and readily availbale stemmer called Porter Stemmer can be called using the following code: import nltk stemmer = nltk.stem(’grocery’) ’groceri’ 11 .txt: Compiled using the English lexicon in NLTK.PorterStemmer() However. • small dict. This leads to many awkward results as shown below.3. the stemmer doesn’t validate the return solutions. • wn dic. its POS tag and associated definition.stem(’goes’) ’goe’ >>> stemmer.stem(’propose’) ’propos’ >>> stemmer. >>> stemmer.

py. can): length difference. change. Feature fitc was chosen because of its high speed and usually resulted in a neither too broad or too restricted candidate set. The experiment results indicated that feature transmuteI alone was sufficient. These features are: • fitl(word. it is the opposite of feature fitc. sums up their lengths then weights. Source code can be found in module morphy. A few features were chosen and combined either by linear or cascade combination. hence a better accuracy.py Spelling Correction The underlined approach was to figured out the similarity between 2 tokens.>>> stemmer. 4 basic (distance 1) transform steps are: delete. • fitc(word. • fitm(word.stem(’groceries’) ’groceri’ For this reason.2. It can also return the definition associating with the part of speech of the token during the process if required. It’s flow chart is shown in figure 4. Another version that also return the meaning associated with the word can be found in module wn dict fast. • transmuteI(word. This weights the unique.can): level of character disorder. and swap (adjacent letters only). The other features can be used for selecting potential candidates to speed up the process.2: Stemmer flow chart The stemmer utilises extra information such as exceptions and part of speech tags for validation. a more sophisticated stemmer was developed.can): steps required to transform word into can. 12 . • fitorder(word. rather than common characters. can): in some sense. Figure 4. can): scans for match strings between word and cand. add.

0. #(transmute. A flow chart for the spelling correction is shown in figure 4. #(fitl. 1.67)] def interpolation(word. 1.can.0).0). #(fitl.classifiers=properties): ’Linear interpolation of various classifiers.0). properties=[ #(fitorder.Detail Operation Explanation Refer to figure 4.0). (fitc.3. last element is used for candidate sig = sum(wei for (prop. #(fitc. Figure 4.3: Spelling correction module flow chart Let’s explore the module at its very top layer. 1.0*sum(prop(word. #(fitm.py.wei) in classifiers[:-1])/sig The C(properties) is a vector of features together with their weights for linear interpolation as can be seen in the function C(interpolation). The last element in the feature vector is used for 13 . The source code can be found in module morphy.can)*wei for (prop. 1. (transmuteI.7)] 0. 1.2 for the stemmer operation.wei) in classifiers[:-1]) return 1.5)] 0.0).0). 1. #(fitm.

• Assign transformation steps. • Swap two adjacent character. Obviously.candidate filtering. when C(word) or C(can) becomes longer.0/len(word)+1. their common must approach their lengths for f itc to be significant. • Segment and align the two words based on the common string. f itc = 1 × length(common) × 2 1 1 + length(word) length(candidate) (4. can. As stated above. The function C(transmuteI) will calculate the shortest distance and will figure out the intermediate transformation steps if required as well. • Weight the transformation for calculating transmuteI. The function implies that the bigger the value. 1].1). we’ll say the distance between the two is 4. Calculating transmuteI is not very straightforward as in the case of f itc.5*l*(1. there are infinite number of ways to transform one word to another. The transformation from C(word) to C(can) can be assumed to go through a series of basic transformation steps. Furthermore. This can be achieved through a four steps process: • Search for the common string between the two words. • Change one character into a different one.lower()) common = common_strings_mk2(word.can) = (word. the more similar the words are. Sum up their lengths then weight""" (word. If 4 steps is required for the transformation.1) Note that f itc is in range [0. C(fitc) is calculated by using equation (4. These steps are: • Add one character.lower().0/len(can)) The code is self explained. can): """ The function scan for match strings btw word and candidate.can) l=0 for string in common: l+=len(string) return 0. 14 . • Delete one character. only C(transmuteI) will be used as feature and C(fitc) as candidate filter. but there should be a limited number of shortest paths. Below are the implementation code for C(fitc): def fitc(word.

This is a greedy process. i.3. >>> align(’abcdefghklm’. ’k’. def align(word.’c’ or ’ab’. The desired output of this process is shown in the example below.’l’. ’cd’. ’k’. Table 4. ’b’. segment and align the two words based on the common string. ’m’]. Sample code for searching function: def common_strings_mk2(word.’c’. assign transformation steps.’l’. ’’. ’f’. 6. [’2’.’1’.’g’. ’cd’. ’f’.e. ’c’ ’’ ’c’ ’d’ ’’ ’d’ ’c’ ’d’ ’’ ’’ but not ’c’ ’c’ Delete ’c’ Change ’c’ to ’d’ ’c’ ’’ ’d’ ’d’ Swap ’c’ and ’d’ ’c’ ’’ Add ’c’ Table 4. 15 . nearer distance..’e’. ’m’] Secondly.’b’. The process bases on the fact that the transformation steps will decide how the column vector look like. ’h’.1: Transformation steps and their patterns Finally. 12]] Thirdly. so these transformations should be weighed less. can): "Return list of substrings that match. Some transformation are more likely to happen than others.’21becdfhlkm’) [[’a’. ’’. which means it tries to group as many adjacent characters as possible.2 shows the weighed transformations. ’cd’. 4.verbose=False): . in order of appearance from left to right" . ’’. search for the common string between the two words. weight the transformation for calculating transmuteI.2 showed the summary of the patterns and their corresponding steps.can.. 8. ’h’. for instance ’abc’ as opposed to ’a’. Table 4.3. ’’. ’h’. ’m’]..Firstly. 10. ’f’. ’’. [ 2. >>> common_strings_mk2(’abcdefghklm’. ’’.. ’b’. ’k’.’21becdfhlkm’) [’b’.’e’.

0 return 6.0 only if the distance is zero.h). ’1’. u to o. [1. (h. ’Change’. 0].type.2: Weighed transformation 0.5 1.can): "Measure distance (˜steps require to tranform word to can)" #tagseq(align(word.5’. (h. [1. 3]. ’Add’. 1].0/(6+sum([val[0] for val in tagseq(word.Swap Swap Change All steps Between vowels Between (g..’21becdfhlkm’) [[1. [1. and reduces by one-third when distance is 3.0.5 4. converges to zero when distance approaches infinity. ’change a to e’: 1. Some sample codes and outputs of a few keys functions were shown below: def tagseq(word. ’l and k’. ’Del’. (r. 11]] def transmuteI(word.index]]" . ’e’. o to u. and (h. [1.2) Function (4.can)) originally was the steps require to tranform word to can #Basic steps are add. delete. 7].0.can)])) >>> transmuteI(’abcdefghklm’.r) a to e. ’g’.3 Results Below are a few demonstrations for the stemmer.can. y to i Not above cases Table 4.h).0.t). [1.0.0.involved chars.0. (c. 5]. >>> tagseq(’abcdefghklm’. ’Swap’.5 0. 16 . transmuteI is calculated as follow: transmuteI = 6 6 + sum(weighed transf ormations) (4. ’e’.3. change.0 After weighing the transformation steps.. ’Add’. and swap (adjacent chars) #Has been modified to weight the steps instead.s). ’a to 2’.2) implies that returned value of the function is equal to 1. ’Del’. (h.verbose=False): "Figure out the transmute steps given aligned sequences\n\ return value: [[weight. i to e or y. e to a.p).5 0.k).’21becdfhlkm’) 0. #For example: ’swap a and e:0.0.

’nos’. ’nod’. ’ready’. ’nor’. ’rely’. ’stoat’] >>> correct(’happend’) [’happen’. ’realm’. ’brad’. ’arid’] >>> correct(’gril’) [’grit’.3. ’qat’. ’really’. ’brim’. ’n’]] >>> morphy(’groceries’) [[’grocery’. ’stop’. ’chat’] >>> correct(’no3’) [’nox’. ’not’. ’noe’. ’now’. ’that’. ’non’. ’n’]] Some demonstration for the spelling correction. ’bid’. ’real’. ’v’]] >>> morphy(’propose’) [[’propose’. ’nov’. ’buns’] >>> correct(’sed’) [’sad’] >>> correct(’stopt’) [’stout’. ’khat’. ’mealy’] 4. ’braid’. ’grail’.4 Evaluation The stemmer worked perfectly well for the test samples. ’no. ’grim’. ’append’] >>> correct(’realy’) [’reply’. ’brig’. ’relay’. ’grin’. ’bride’. ’redly’. ’brie’. ’girl’. ’brio’.1 >>> correct(’umrella’) [’umbrella’] >>> correct(’libary’) [’library’] >>> correct(’qhat’) [’what’. ’no’] >>> correct(’brid’) [’rid’. ’stops’. ’aril’] >>> correct(’wuns’) [’uns’.>>> morphy(’goes’) [[’go’. ’grill’. ’grid’. ’grid’. ’realty’. ’hat’. ’ghat’. ’bird’.3. ’nob’. ’brit’. ’gris’. ’nog’. ’grip’. 17 . ’noi’. ’bris’. ’quat’. The error samples were taken in 4.’. ’v’]] >>> morphy(’grocery’) [[’grocery’. ’noc’.

xml The corpus can be used for a statistical spell checker or accuracy test. and ‘happend’. there might be no unique correct answer.5 Future Work A spelling corpus is freely available at ota. Further experiments by quickly skimming through a dictionary. From the above testing experiment. the spell checked failed in 4 cases: ‘wuns’. ’curve’] So in conclusion. catching one glance at a long word and rapidly typing it into the computer showed that the spell checker rarely failed. One of the few failed cases was ‘lurve’.uk/headers/0643.ac.On the other hand. ‘stopt’. ‘sed’. The best solution therefore would be letting the user choose from a list of suggested words. the corpus is not readily usable. So we would say a spell checker fails only if it lefts out the intended solutions.3. the spell checker has done its job well. 18 .ox. However. which was one of Woody Allen’s words for ‘love’ since he thought ‘love’ was too weak of a word. it’s hard to define a baseline or perform an accuracy test for the spelling checker because due to undetermined nature of spelling errors. these four cases couldn’t be helped since they were errors due to spelling by sound and the spell checker wasn’t designed for this type of errors. 4. >>> correct(’lurve’) [’lure’. A corpus reader to make this corpus readily available in NLTK would be very useful. However.

Grant and Robert E. the meaning can be “being out of town before 4PM and arriving in town only then”. Let’s consider a few examples: 1. “He won’t be in town until 4PM. the clause “because she was rich” can modify either the state of being married (the whole utterance) or just the cause (the verb only). some syntax analysis will be necessary to understand the utterance’s sense correctly. Lee met in the parlor of a modest house at Appomattox Court House. George no hurt you!”. Grant and Robert E. we can understand an utterance while paying little to no regard to its syntax. but closer inspection reveals a quite simple structure: a sentence pre modifier ( When Ulysses S. For example. or some random jungle talk “George good.Chapter 5 Part of Speech Tagger 5.” 3. or that the person wants to swallow the place. Considering example 2. at times when the syntax becomes complicated. this utterance can either mean that the speaker wants to eat at some nearby location. Virginia to work out the terms for the surrender of Lee’s Army of Northern Virginia. Lee met in the parlor of a modest house at Appomattox Court House. depending on which part of the sentence that the preposition phrase “until 4PM” modifies. Example 4 might look complicated at first. The latter is much less likely to happen in real world. Virginia to work out the terms for the surrender of Lee’s Army of Northern 19 .1 Introduction Most of the time. The same goes for example 3. “I want to eat someplace that’s close to NTU. or “arriving in town at some earlier time but not staying as late as 4PM”. Considering example 1. given the usual structure of the verb eat. When Ulysses S. However. In fact most of natural language tasks can be viewed ask resolving ambiguity at some points. or there is ambiguity. “He didn’t marry her because she was rich. “I am driving a car”.” 4.” 2. a great chapter in American life came to a close and a greater chapter began.

Section 5. However. 20 . • It is impossible to have a databse of every intance of a language.) followed by a series of simple sentences ([a great chapter in American life came to a close][ and a greater chapter began. the HMM model should account for these unseen events (also known as smoothing) • The application filed should be similar to the training data. But only “house” are related to “door”. pants. • Most words in English are unambiguous. Some examples are “adjective usually precede noun”.2 Method The language is assumed to be a second order hidden Markov model (HMM).” Which word will be likely to fill in the empty space? It should be something that can be painted on. Our current HMM model will fail in this case unless the term “whole new house” happens to appear more often than the others in usual context. that is they only have one part of speech. but not totally false either. house. • It would required an adept knowledge in linguistics to capture all the grammar rules for using in the decision tree. green except for the front door yesterday.3. which appeared quite far after the empty location. we’ll explore the syntax analysis. . i. considering our context which is part of speech tagging. 5. such as face. the HMM model works much more efficiently since grammar rules restrict which classes can stand next to each other. or “to must be follow by noun phrase or bare infinitive verb”. . which means that the choice of a word is only depend on its previous two words. or cow. But many of the most common words in English are not. since a strict definition of a grammar rule is not easy to define. a good syntax analysis will make the task of meaning extraction easier and more precise than intuition alone.] As those examples above illustrated. The following sections will continue to elaborate on the program operation and its performance.2 will provide some information about method used. A tagger based on HMM model will estimate the probability of various sequences and return the most probable sequence or sequences. This of course is not true.e. Let’s consider an example: “I painted my neighbour’s whole new .Virginia. A HMM model has some major advantages compared to a decision tree model. In fact [9]stated that over 40 • A single HMM tagger can be reused for many languages or applications if provided with proper training data which in this case is a set of sentences tagged with each word’s part of speech. board. POS tagging. HMM model also has drawbacks: • True model of the language is not known and can only be approximated. In this chapter. Necessary mathematical calculation is covered in section 5. This kind of database is usually simpler to prepare but time consumming.

1. C(t1 . The source code can be found in module tagger.• A database of sufficient size for trainning must be availbe. For this reason we skipped trigrams which have been seen only once. the highest multiple of probabilities.e. λ2 . w3 ) = P (w3 |t3 ) × P (t3 ) Solution can be found using iinear interpolation of 5. the bigram and unigram model can be defined: P (t3 . t2 ] (POS 1 and 2).2 Estimating Lambda Value 3 )−1 2 3 )−1 For each trigram in training data. i.e.3. It’s not the same as the multiple of highest probabilities. In this project. One method for finding this path is known as Viterbi algorithm [10] 5.t) ) C(t1 2 The reason for minus 1 is because we treat the in using trigram as observed event. It should be pointed out that. C(t3 )−1 C(t1 2 C(t2 N (t)−1 Depend on which is the maximum of them.t. The 2 3 chosen amount were: 1. so the actual data must be minus by 1.t. find the probability of the sequence being follow by word w3 which has POS t3 i. P (t3 . t2 . bigram and unigram classifiers were linearly interpolated into a tagger call lihmmtagger.3) P (wi |ti )(λ3 × P (ti |ti−2 . Similarly. t3 ) × P (t3 |t1 . C(t1 . w3 |t2 ) = P (w3 |t3 ) × P (t3 |t2 ) P (t3 .3: argmax 1≤i≤n (5. increase the corresponding lambda by a certain amount.t.3 5. an prefix tagger. 21 . t3 ). w3 |t1 . t2 ) = P (w3 |t1 . λ3 are weights of the classifiers.t)−1 . and 5. • Unable to validate the returned solutions. 5. 5.2. ti−1 ) + λ2 × P (ti |ti−1 ) + λ3 × P (ti )) (5. a regular expression tagger and a default tagger will be called in that order to prevent error propagation caused by failing to tag a word.py. compare the following values: C(t1 .4) In which λ1 . C(t2 . t2 ) = P (w3 |t3 ) × P (t3 |t1 .t)−1 . a trigram.3. t2 .2) (5. λ1 + λ2 + λ3 = 1.1) (5. Logarithm is used when the numbers get smaller.t. t2 ) The formula can be derived by making the assumption that w3 only depends on t3 . the returned soultions is the most probable paths.1 Mathematical Background Language Model The tagging problem can be defined for the trigram model as follow: giving sequence of[t1 . In case this tagger fails to tag a words. and t−1 = t−2 = BOS (Beginning of Sentence).

• lihmmtagger: a linear interpolation of trigram. • tagit: tag a word. that is 14%) • retagger: a tagger bases on regular expression. • tagthem: tag a set of words. bigram and unigram tagger. • setimport: import previous training information.5. • affixtagger: a tagger bases on prefix and suffix of words. which automatically assigns the most common tags which is ‘NN’ (147169 counts in 1071233. • accuracy: take a tagged corpus as test set and return the percentage of correctly tagged words. Each tagger is a separate class and has some common methods: • train: takes a tagged corpus as trainning data and export training information for later use since training might take a long time.1 Figure 5.1: POS tagger flow chart There are 4 tagger classes: • A default tagger. 22 .4 Operation Explanation The tagger flow chart is shown in figure 5.

As for the HMM tagger. which are the most common sentence length in Brown corpus. A search beam (1000 to 2000) must be applied to limit the search space. It was surprising to achieve such decent accuracy just by knowing the ending or starting character. Despite the large difference in testing scale.5 Results Below are some return scores in accuracy test.14 28. divide it into smaller sets and perform accuracy test on each set. the outcomes might not be very useful for other tasks such as information extraction due to misleading tags. the search space can easily reach several hundred thousands in just 10 or 15 words.84 4.56 27.54 Mean w/o testset segmenting 64.15 85. However. The drawback is it requires much more computational effort.5 showed accuracy test for the linear interpolation HMM tagger. and since the tagger searches for the most probable path. simply going from left to right and choosing the most likely tags yield a decent accuracy and speed.• test: An improved accuracy test. 5.6 Evaluation and Discussion The suffix tagger had slightly better accuracy and deviation than prefix tagger.03 Worst 45. which shouldn’t be the case. This implied 23 . Therefore searching for the most probable path is more desirable. complex sentences which consist of more than one clauses will be broken and tagged individually since words across clauses have little syntax relation. So suffix tagger was chosen. but this will affect the accuracy as well.23 6. Testset size 120 1010 Mean 63.25 67. the scores were close to each other.04 78. Tagger Regular expression Suffix tagger Prefix tager HMM tagger (most likely tags only) Mean (percent) 7.1: Accuracies Table 5.42 Table 5. segmenting the test set gave more insights to the accuracy scores than the latter. There are 170 possible tags.72 67.68 Standard deviation 2.6 5.45 58.45 Table 5. Hence it can be assumed that the tagger will work 67% of the time.2: Accuracies Variance 132 24 Best 78. Interpolation of them fell somewhere in between. This is a much lower score than the other HMM tagger. Return average accuracy and the standard deviations. Take a tagged coprus as test set.38 6. Even though the mean accuracy for 2 method of testing is approximate to each other. In order to improve speed of processing but limiting the affect on accuracy at the same time.

especially for lengthy sentences. ’NN’). Even though it is possible to modify the program to carry out experiments for the sake of verifying the above problem. considering the project’s context where the inputs are user search queries. segmenting the sentence into smaller clauses proved to have better accuracy. The speed is much slower than comparing to the other HMM tagger.that the language model might go wrong at some points such as smoothing process or estimating lambda values. (’with’. More specifically. even though the performance score was not as high as expected. ’NN’). The performance improvements however was unknown. but can also degrade the performance dramatically. since punctuations are more likely to be tagged correctly. ’. In some occasions during debugging process. "’’"). the POS tag ‘‘‘’ has a count of 6160 as opposed to a count of 4834 for ‘WDT’.’)] NoCls: 2 1 1 3 3 6 12 12 12 72 432 432 432 | ‘‘ ‘‘ | ‘‘ WDT | WDT BEZ | BEZ IN | IN DT | DT NN | NN NN | NN ’’ | . (’?’. let’s consider a sample from the log: (’‘‘’. ("’’". For instance. (’What’. the speed was practically double. This might due to punctuations often have very high frequency of appearance. (’?’. (’is’. when applied to the semantics module (discussed in the next chapter) the returned results were promising. (’this’. By segmenting the sentences into clauses. ’‘‘’). Analyzing experiment logs suggested that punctuations can improve. The experiment logs are provided in the database. On the other hand. (’jazz’. they also have the effect of limiting the error propagation. | . The tagger assigned ‘what’ with a ‘‘‘’ tag instead of ‘WDT’. Furthermore. 24 . ’WDT’). So in conclusion. ’BEZ’). the program are good enough for practical usage. . whether their benefits outgrow their disadvantage is unresolved at the moment. ’. ’DT’).’). ’IN’). Therefore should the punctuations be considered during the training or tagging process. . punctuations will not be a big issue. (’vow’.

25 . It is clear that none of morphological or syntactical representation thus far will get us very far on these tasks. However.Chapter 6 Meaning Representation 6. The exception.3 will explore how these theory can be apply to a specific case. Over the years.1 Introduction In the previous chapter. What is needed is a representation that can bridge the gap between linguistic inputs. a few examples have illustrated that in many cases syntax analysis is useful but not necessary for meaning comprehension. Semantic Networks and Frames. 6. • Following a recipe.2. some background theory on FOPC and formal logic will be cover in 6. This class of verbs is known as transitive verb. It makes sense since most of the grammar rules are about how different words can be combined to form a sentence. consider some of everyday language tasks that require some form of semantic processing: • Answering an question. for example is some verbs must not stand alone on itself. Three notable schemes are First Order Predicate Calculus (FOPC). This implies that a different system other than grammar is necessary for meaning representation. a fair number of representational schemes have been invented to capture the meaning of the natural language inputs for use in processing systems.3. like “give”. and in someway represents the sense of the verbs. • Realizing that you’ve been insulted. The rest of this chapter will then explain how the program achieved the desired result describe in 6. In this chapter. their meaning and the kind of real world knowledge that is needed to perform the involved tasks.

however have more than one contants refer to them. Variables Allow the system to match unknown entity to a known object in knowledge base so that the entire propositions is matched.2. to be useful a representation scheme must be expressive enough to cover a wide range of subject matters such as time and tense. The conclusions might be not explicitly represented in the knowledge base. Some means of determining that certain interpretations are more or less preferable than others is needed. Some basic building elements of FOPC are: Constants Refer to specific object in the world.2 6. Canonical Form The phenomenon of distinct inputs that should be assigned the same meaning representations.1 Background Theory First Order Predicate Calculus Desiderata For Representation Scheme There are basics requirements that a meaning representation must fulfill. Unambiguous Representations Ambiguities exist in all aspects of all languages. Objects can. but are logically derivable from the available propositions. 26 . Like in programming language. The semantics of FOPC Capturing meaning of a sentence involves identifying the terms and predicates corresponding to various grammar elements of the sentence. Expressiveness Finally. Inference The system’s ability to derive valid conclusions based on meaning representations of inputs and its knowledge base.6. FOPC contants refer to exactly one object. Verifiability The system’s ability to compare an affair described by a representation to affairs modeled in a knowledge base.

2 Formal Logic The word ‘formal’ means by form.Functions Functions in FOPC can be expressed as attributes of objects. functions. they are in fact terms since they actually refer to unique objects. Formular An equivalent so sentence in grammar representation. Contradiction isn’t allowed in formal logic (also known as consistency principle) • Monotonicity principle: A proof cannot be invalidated by adding premises. Examples: LocationOf(NTU) Variables Give the system the ability to draw inferences or make assertions about objects without having to refer to any particular ones. constants. FOPC functions have the same syntax as a single argument predicate. • Contradiction: is a simultaneous acceptance and rejection of some remarks. • Claims: Are declarative remarks about states of the world. The two basic operator in FOPC are ‘exist’ quantifier that denotes one particular unknown object. rather than by shape or meaning. i. and variables and not a formular. Usage of quantifiers make these two uses possible. properties or relations between terms. Lambda Notation Enable formal functionality for replacing of a specifics variable by a specifics term. 27 . • Proof and disproof: A formal proof is a logical argument which convinces by following formal rules. not on meaning or facts. starts by a premise and ends by a conclusions. and ‘all’ quantifier that refers to all objects in a class. or all the objects in some arbitrary classes of objects.e. Examples: lambda x P(x)(A) --> P(A) 6. Formulars therefore can be assigned with True or False values depending on whether the information they encoded are in accord with the knowledge base or not. FOPC formular is a representation of objects. however. Note that the arguments of formulars must be terms. since proof obeys rules. Quantifiers Variable can be used to making statements about either a particular unknown object.2. Some necessary terms to read and understand formal logic are: • Argument: Is a line of reasoning.

A witch! . only B.. – If-then: If we accept ‘if A then B’ and A. Consider the following excerpt from the movie “Monty Python and the Holy Grail” . logically.A5: A duck! .Q5: What also floats in water? .And. It floats.3 Semantic Analysis A Study Case Semantic analysis is the process whereby meaning representation of linguistic inputs is created. There are four connectives: – And: If we accept ‘A and B’ then we are forced to accept both A and B simultaneously..More witches! . – Not: If we accept ‘not A’ then accept A lead to a contradiction.. The names suggest that the connectives are introduced or eliminated in the final proof. • Connectives: the logic connectives are used to build larger claims out of smaller claim.. – Or: If we accept ‘A or B’ then we can either accept only A. we are forced to accept B. .• Formal rules: is an intermediate step in an logical arguments.. therefore... .We shall use my largest scales. therefore B 6.Q4: How do we tell if she’s made of wood? Does wood sink in water? .A3: ’Cause they’re made of wood? . .Q3: So. or both. .A4: No. why do witches burn? .. Remove the supports! A representation scheme based on FOPC and logic proof for the above excerpt will look something like this: lambda_x (witches(x) --> burn(x)) (1) premise lambda_x (burn(x)-->wood(x)) (2) premise 28 . A witch! . weighs the same as a duck. If she..Q2: What do you burn apart from witches? .So.A1: Burn them! . Rules show how can larger proof be made out of one or more smaller proofs. For instance here is an example of ‘If-then’ elimination rule: (If A then B) also A.A2: Wood. Each connective has two rules associated with them: introduction and elimination.Q1: Tell me: What do you do with witches? . she’s made of wood..

. Remove the supports!’ weigh(duck)=weigh(girl) (7) given lambda_x ((weigh(x)=weigh(duck)) and float(duck) -->float(x)) (8) lambda-reduction (6) (weigh(girl)=weigh(duck)) and float(duck) -->float(girl) (9) lambda-reduction (8) (weigh(girl)=weigh(duck)) and float(duck) (10) And-introduction (7) (5) float(girl) (11) Implication-elimination (10) (9) wood(girl) (12) Implication-elimination (11) (4) Let’s assume: lamda_x (not_witches(x)-->not_wood(x)) (13) given lamda_x (wood(x)-->witches(x)) (14) backward chaining (13) (3) wood(girl)-->witches(girl) (15) lambda-reduction (14) witches(girl) (16) Implication-elimination (16) (12) "A witch!. However.lambda_x (witches(x)-->wood(x)) (3) implication-introduction (1) (2) lambda_x (wood(x)-->float(x)) (4) premise float(duck) (5) premise lambda_x lambda_y ((weigh(x)=weigh(y)) and float(y) -->float(x)) (6) premise "We shall use my largest scales. this is true since there is one invalid proof at step (12). Using logic as shown above.A witch!" The claim at the end is nonsensical according to our intuition.. it is still quite possible for the claim to be true if we rewrite the proof as below: lambda_x (wood(x)-->float(x)) (4) premise 29 . .

‘who’ and ‘he’ are symbols.2). The above two rules and referent identification process are embedded in deduct class. The left hand side of the implication must be either the argument or predicate of the question. In the above example.1 Design Operation explanation Implication Derivation The program derives the implication by figuring out what are the symbols.2) backward chaining (4. phrase Class As the name suggested.4 6. the final claim which is supposed to be humorous and nonsensical is true logically. Following the above two rules. preposition phrase or verb phrase can be modeled using the class phrase Some key attributes of the class phrase are: 30 . and they refers to ‘genius’ and ‘Albert’ respectively.4) given wood(girl) (12) Implication-elimination (11) (4. referents and their syntactical roles in the sentences.1) premise lambda_x (human(x) and float(x) --> wood(x)) (4. The left hand side must be meaningful. Considering an example: Q: Who is Albert? A: He is a genius.4.1) (4) human(girl) and float(girl) --> wood(girl) (4. 6.lambda_x (human(x) and not_wood(x) --> not_float(x)) (4. while predicate and argument extraction are handled by sentence class. and the right hand side of the implication must be either the argument or predicate of the answer. Amazingly enough.3) A new premise (4. phrases such as noun phrase. and implication such as who --> he is not very informative. Logically. or the implication will be pretty much useless. 2. the implication must satisfy the following two rules: 1. which is reasonable is introduced. a sound implication would be Albert --> genius or Albert --> he.4) (4.3) lambda-reduction human(girl) (4.

deduct Class To derive implications from a pair of question and answer. • rhs .Instance of the sentence on the left hand side (the question).the actual string of the phrase. • jungle .Symbols and referents. Some key attributes of the class deduct are: • lhs . • modifiers .Symbols and referents pairs 31 .Discourse of the sentence. • presentation .• string . preposition phrase. .List of possible implications.Instance of the sentence on the right hand side (the answer).String of POS tags. • jdiscourse . predicate. . Unimplemented at the moment. Each phrase has an associated function to search for that particular phrase in a sentence.Lists of subjects and predicates.Whether the sentence is negative or not • referent . It is worh noting that the process relies solely on syntactical information provided by the Sentence class and pays no regard to the actual meaning of the tokens.Sentence type. Currently declarative. • FOPC . Sentence Class This class is used to represent an instance of sentence. including postmodifiers and premodifiers.FOPC representation. Currently noun phrase.beginning and ending indexes of root of the phrase. hence the name syntax driven semantics analysis.modifiers of the phrase. yes/no question and some of Wh-questions are supported. • SubPre .Type of phrases. It provides convenient access to many components of a sentence such as POS tags.Jungle talk version of the original sentence.Tuples of POS tags. • root . • refpairs . • negate . • type . verb phrase and adjective phrase are supported • span . Some key attributes of the class Sentence are: • type . sentence type. • wtstr .beginning and ending indexes of the phrase. subject. • wttups .

6. the module will need information such as what should the quantifier be. • presentations . Some of them are from the study case in section 6. • var . • quantifier .1 Results Semantics Analysis Below are the output implications for sample pairs of questions and answers.5 6.’ >>> deduct(q1.Formula’s term • predicate . • connective . • term . q1="What do you do with witches?" a1=’Burn them.2 Design Presentation At the moment.a1). • preamble .FOPC formulas without quantifiers. or function). the representation module is not integrated into the semantics analysis modules discussed above since the extracted information is not detail enough for the module to generate an accurate representation.List of possible representations.3. constant. More specifically.presentation [’witches/NNS-->burn/VB’] 32 . • amble .Formula’s quantifier.Variable symbol.Formula’s connective.FOPC formulas with quantifiers. and some statements are modified into questions for compatibility or manually tagged if they were tagged wrongly.Trigger lambda reduction.4. what should the term be (variable. • Lambda . However. the class atom can be used to generate a proper representation.5. lambda reduction and combining smaller expressions are also supported.6.Formula’s predicate. atom Class On top of generating basic atomic representations. if we are able to provide these information.

6.5 Results

6.5.1 Semantics Analysis

Below are the output implications for sample pairs of questions and answers. Some of them are from the study case discussed earlier, and some statements were modified into questions for compatibility, or manually tagged if they had been tagged wrongly.

q1="What do you do with witches?"
a1='Burn them.'
>>> deduct(q1,a1).presentation
['witches/NNS-->burn/VB']

q2='What do you burn apart from witches?'
a2='Wood.'
>>> deduct(q2,a2).presentation
['burn/VB-->wood/NN']

q3='Why do witches burn?'
a3="Because they are made of wood"
>>> deduct(q3,a3).presentation
['witches/NNS-->made/VBN of/IN wood/NN']

q4='does/DOZ wood/NN sink/VB in/IN water/NN'
a4='No, it floats.'
>>> deduct(q4,a4).presentation
['wood/NN-->floats/VBZ', 'water/NN-->floats/VBZ']

q5="What also floats in water?"
a5='A duck'
>>> deduct(q5,a5).presentation
['floats/VBZ in/IN water/NN-->duck/NN']

q6='why/WRB does/DOZ she/PPS weight/VB the/AT same/AP as/CS a/AT duck/NN'
a6='Because she is made of wood'
>>> deduct(q6,a6).presentation
['weight/VB same/AP as/CS a/AT duck/NN-->wood/NN', 'duck/NN-->made/VBN of/IN wood/NN']

q8='who is Albert'
a8='He is a genius'
>>> deduct(q8,a8).presentation
['albert/NP-->genius/NN', 'albert/NP-->he/PPS']

q9='which country is north of America'
a9='Canada is north of America'
>>> deduct(q9,a9).presentation
['north/NR of/IN america/NP-->canada/NP']

q10='who is the most stupid guy on earth'
a10='Dummies'

>>> deduct(q10,a10).presentation
['the/AT most/QL stupid/JJ guy/NN on/IN earth/NN-->dummies/NNS']

6.5.2 Meaning Representation

Below are a few demonstrations including atomic terms, lambda reduction, and the combination of smaller atoms into a bigger representation.

>>> t1=atom('witches',None,'var')
>>> t1.presentation
'lambda_x_witches Isa(x_witches,witches) connective '
>>> t2=atom('burn',None,'var')
>>> t2.presentation
'lambda_x_burn Isa(x_burn,burn) connective '
>>> t3=atom(t2,None,t1)
>>> t3.presentation
'lambda_x_burn Isa(x_burn,burn) Isa(x_witches,witches) connective Role_of_x_witches(x_burn,x_witches)'
>>> t4=atom(t3,None,None,Lambda=[[t1,'girl']])
>>> t4.presentation
'lambda_x_burn Isa(x_burn,burn) Isa(girl,witches) connective Role_of_girl(x_burn,girl)'

6.6 Evaluations

Further experiments on the deduction program showed that even though the program analyses the sentence in a rather simple-minded way, it worked unexpectedly well for simple sentences, and for some slightly more complex sentences.

>>> Q='Why is he kicking that poor dog?'
>>> A='Because it bites him.'
>>> deduct(Q,A).presentation
['dog/NN-->bites/VBZ him/PP', 'dog/NN-->bites/VBZ him/PP']

>>> Q='Why does someone die?'
>>> A='Because he is old'
>>> deduct(Q,A).presentation
['someone/PN-->old/JJ']

Some current limitations are:

• Not all types of questions, nor sentences with clauses and commas, are supported. Currently only yes/no questions, ‘who’, ‘whom’, ‘what’, ‘which’ and ‘why’ questions, and simple sentences are available.
• The program is unable to extract information such as tense, quantity or entity relationships, which are necessary for the FOPC representation scheme.
• The analysis is based only on syntactical roles and does not consider the actual meaning or the relative locations of the tokens.

Despite the limitations, the program does work in simple contexts. Hence it can either be used as a layer in a multi-level process in which each layer solves a particular problem, or be improved so that it can deal with complex sentences and extract more useful information.

As for the representation module, it worked as expected for simple formulas. As the formulas get more complex, connectives proved to be an ambiguity issue. Let us consider an example:

Every restaurant has a menu

The meaning representation of the sentence might take either one of the two forms below:

all x Restaurant(x) then exist e, y Having(e) and Haver(e, x) and Isa(y, Menu) and Had(e, y)

exist y Isa(y, Menu) and all x Isa(x, Restaurant) then exist e Having(e) and Haver(e, x) and Had(e, y)

In the worst case, a sentence with n quantifiers will have n! representations.
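The n! figure can be made concrete by enumerating quantifier orderings. The sketch below is purely illustrative (the connectives are simplified to ‘then’ and the event quantifier is kept fixed); it lists the scopings for the two quantifiers of the restaurant example.

from itertools import permutations

# Quantifier prefixes of "Every restaurant has a menu"; the body is kept fixed.
quantifiers = ['all x Isa(x, Restaurant)', 'exist y Isa(y, Menu)']
body = 'exist e Having(e) and Haver(e, x) and Had(e, y)'

for order in permutations(quantifiers):   # n quantifiers give n! orderings
    print(' then '.join(order) + ' then ' + body)
# two orderings are printed, one per possible quantifier scoping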

Chapter 7

Conclusion

This has been a very enjoyable project for me. I have had fun learning Python and studying some very interesting aspects of the language that I used to take for granted. Though I failed to achieve the initial goals, I managed to retrieve other precious things in return: I became less arrogant, I realized how magnificent this world and its people are, and I collected memorable moments such as the excitement of getting the program running for the first time, or the moment my heart sank to my feet when hearing about IBM Watson’s triumph. Though I felt that I could have done much more had I received more formal training in linguistics, I was contented with the achievement. I really am glad to have had the opportunity to work on this project. My deepest gratitude is owed to my supervisor, Professor Chan Chee Keong. Thank you.

