
Research Areas in NLP

Why NLP is difficult


• Fundamental goal: deep understanding of broad language
– Not just string processing or keyword matching
• Language is ambiguous
– At all levels: lexical, phrase, semantic
• Language is flexible
– New words, new meanings
– Different meanings in different contexts
• Language is subtle
• Language is about human communication
• Problem of scale
– Many (infinite?) possible words, meanings, contexts
• Problem of sparsity
– Very difficult to do statistical analysis; most events (words, concepts) have never been seen before
• Long range correlations
• Representation of meaning
Linguistics Essentials
• Important distinction:
– study of language structure (grammar)
– study of meaning (semantics)
• Grammar
– Phonology (the study of sound systems and abstract sound units).
– Morphology (the formation and composition of words)
– Syntax (the rules that determine how words combine into
sentences)
• Semantics
– The study of the meaning of words (lexical semantics) and fixed
word combinations (phraseology), and how these combine to form
the meanings of sentences
Morphology
• Morphology is the study of the internal structure of words, of the way
words are built up from smaller meaning units.
• Morpheme:
– The smallest meaningful unit in the grammar of a language.
• Two classes of morphemes
– Stems: “main” morpheme of the word, supplying the main meaning (e.g. establish in the example below)
– Affixes: add additional meaning
• Prefixes: anti-, dis- in antidisestablishmentarianism
• Suffixes: -ment, -arian, -ism in antidisestablishmentarianism
• Infixes: hingi (borrow) – humingi (borrower) in Tagalog
• Circumfixes: sagen (say) – gesagt (said) in German
– Examples: unladylike, dogs, technique
Types of morphological processes
• Inflection:
– Systematic modification of a root form by means of prefixes and suffixes to
indicate grammatical distinctions like singular and plural.
– Doesn’t change the word class
– New grammatical role
– Usually produces a predictable, non-idiosyncratic change of meaning.
• run → runs | running | ran
• hope + ing → hoping, hop + ing → hopping

• Derivation:
– Ex: compute → computer → computerization
– Less systematic than inflection
– It can involve a change of meaning
• Compounding:
– Merging of two or more words into a new word
• Downmarket, (to) overtake
Stemming & Lemmatization
• The removal of the inflectional ending from words (strip off any
affixes)
• Laughing, laugh, laughs, laughed → laugh
– Problems
• Can conflate semantically different words
– Gallery and gall may both be stemmed to gall
– Regular Expressions for Stemming
– Porter Stemmer
– nltk.wordnet.morphy
• A further step is to make sure that the resulting form is a known
word in a dictionary, a task known as lemmatization.
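A minimal sketch of these steps in Python with NLTK (it assumes NLTK and its WordNet data have been downloaded; outputs are illustrative):

# Sketch: stemming vs. lemmatization with NLTK.
from nltk.stem import PorterStemmer, WordNetLemmatizer
from nltk.corpus import wordnet

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

for w in ["laughing", "laughs", "laughed", "gallery"]:
    print(w,
          stemmer.stem(w),                   # crude suffix stripping
          lemmatizer.lemmatize(w, pos="v"),  # only returns dictionary words
          wordnet.morphy(w, wordnet.VERB))   # None if w is not a known verb form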
Grammar: words: POS
• Words of a language are grouped into classes to reflect similar
syntactic behaviors
• Syntactic or grammatical categories (aka parts of speech)
– Nouns (people, animal, concepts)
– Verbs (actions, states)
– Adjectives
– Prepositions
– Determiners
• Open or lexical categories (nouns, verbs, adjectives)
– Large number of members, new words are commonly added
• Closed or functional categories (prepositions, determiners)
– Few members, clear grammatical use
Part-of-speech (English)
Terminology

• Tagging
– The process of associating labels with each token in a text
• Tags
– The labels
– Syntactic word classes
• Tag Set
– The collection of tags used
Example
• Typically a tagged text is a sequence of white-space separated
base/tag tokens:
These/DT
findings/NNS
should/MD
be/VB
useful/JJ
for/IN
therapeutic/JJ
strategies/NNS
and/CC
the/DT
development/NN
of/IN
immunosuppressants/NNS
targeting/VBG
the/DT
CD28/NN
costimulatory/NN
pathway/NN
./.
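The same word/tag format can be produced with NLTK's off-the-shelf tagger; a sketch (it assumes the tokenizer and tagger models have been downloaded, and the tags come from the Penn Treebank tag set):

# Sketch: POS tagging a sentence with NLTK's default tagger.
import nltk

sent = "These findings should be useful for therapeutic strategies."
tokens = nltk.word_tokenize(sent)
tagged = nltk.pos_tag(tokens)     # [('These', 'DT'), ('findings', 'NNS'), ...]
print(" ".join(f"{w}/{t}" for w, t in tagged))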
Part-of-speech (English)
Part-of-Speech Ambiguity

Words that are highly ambiguous as to their part of speech tag


Sources of information

• Syntagmatic: tags of the other words


– AT JJ NN is common

– AT JJ VBP impossible (or unlikely)

• Lexical: look at the words


– The → AT

– Flour → more likely to be a noun than a verb

– A tagger that always chooses the most common tag is 90% correct (often
used as baseline)

• Most taggers use both


What does Tagging do?
1. Collapses Distinctions
• Lexical identity may be discarded
• e.g., all personal pronouns tagged with PRP

2. Introduces Distinctions
• Ambiguities may be resolved
• e.g. deal tagged with NN or VB

3. Helps in classification and prediction


Why POS?
• A word’s POS tells us a lot about the word and its neighbors:
– Limits the range of meanings (deal), pronunciation for text-to-speech (OBject vs. obJECT, REcord vs. reCORD), or both (wind)

– Helps in stemming: saw[v] → see, saw[n] → saw

– Limits the range of following words

– Can help select nouns from a document for summarization

– Basis for partial parsing (chunked parsing)


Choosing a tagset

• The choice of tagset greatly affects the difficulty of the problem

• Need to strike a balance between


– Getting better information about context

– Making it possible for classifiers to do their job


Tagging methods
• Hand-coded
• Statistical taggers
– N-Gram Tagging
– HMM
– (Maximum Entropy)
• Brill (transformation-based) tagger
Unigram Tagger
• Unigram taggers are based on a simple statistical algorithm: for
each token, assign the tag that is most likely for that particular
token.
– For example, it will assign the tag JJ to any occurrence of the word frequent, since frequent is used as an adjective (e.g. a frequent word) more often than it is used as a verb (e.g. I frequent this cafe).

P(ti | wi)
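A minimal sketch of a unigram tagger trained with NLTK (the Brown news section is just one possible training corpus):

# Sketch: unigram tagger -- tag each word with its most frequent training tag.
from nltk.corpus import brown
from nltk.tag import UnigramTagger

train_sents = brown.tagged_sents(categories="news")
uni_tagger = UnigramTagger(train_sents)

print(uni_tagger.tag(["The", "race", "is", "frequent"]))
# Words never seen in training get the tag None; a backoff tagger helps (next slides).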

N-Gram Tagging
• An n-gram tagger is a generalization of a unigram tagger whose context is the
current word together with the part-of-speech tags of the n-1 preceding tokens

• A 1-gram tagger is another term for a unigram tagger: i.e., the context used to
tag a token is just the text of the token itself. 2-gram taggers are also called
bigram taggers, and 3-gram taggers are called trigram taggers.

• Trigram tagger: P(ti | wi, ti-1, ti-2)
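One way to build this in NLTK is to chain n-gram taggers with backoff, so a trigram context falls back to bigram, unigram, and finally a default tag; a sketch under the same Brown-corpus assumption as above:

# Sketch: n-gram taggers with backoff.
from nltk.corpus import brown
from nltk.tag import DefaultTagger, UnigramTagger, BigramTagger, TrigramTagger

train_sents = brown.tagged_sents(categories="news")

t0 = DefaultTagger("NN")                     # last resort: guess NN
t1 = UnigramTagger(train_sents, backoff=t0)  # most frequent tag per word
t2 = BigramTagger(train_sents, backoff=t1)   # context: previous tag
t3 = TrigramTagger(train_sents, backoff=t2)  # context: two previous tags

print(t3.tag("They are expected to race tomorrow".split()))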
N-Gram Tagging
• Why not 10-gram taggers?

• As n gets larger, the specificity of the contexts increases, as does the chance
that the data we wish to tag contains contexts that were not present in the
training data.

• This is known as the sparse data problem, and is quite pervasive in NLP. As a
consequence, there is a trade-off between the accuracy and the coverage of our
results (and this is related to the precision/recall trade-off)
Markov Model Tagger

• Bigram tagger

• Assumptions:
– Words are independent of each other

– A word's identity depends only on its tag

– A tag depends only on the previous tag


Markov Model Tagger

[HMM: hidden tag sequence t1 t2 … tn emitting words w1 w2 … wn]

P(t, w) = P(t1, t2, …, tn, w1, w2, …, wn) = Π_i P(ti | ti-1) P(wi | ti)
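NLTK includes a supervised HMM tagger that estimates exactly these two tables, P(ti | ti-1) and P(wi | ti), from a tagged corpus; a sketch (trained on a small slice only, and without smoothing, so unseen words are handled poorly):

# Sketch: supervised HMM (Markov model) tagger.
from nltk.corpus import brown
from nltk.tag import hmm

train_sents = brown.tagged_sents(categories="news")[:2000]
hmm_tagger = hmm.HiddenMarkovModelTrainer().train_supervised(train_sents)

print(hmm_tagger.tag("The race for outer space".split()))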
Rule-Based Tagger
• The Linguistic Complaint
– Where is the linguistic knowledge of a tagger?

– Just massive tables of numbers P(ti | wi, ti-1, ti-2)


– Aren’t there any linguistic insights that could emerge from the data?

– We could instead use handcrafted sets of rules to tag input sentences; for example, if a word follows a determiner, tag it as a noun.

The Brill tagger
(transformation-based tagger)
• An example of Transformation-Based Learning
– Basic idea: do a quick job first (using frequency), then revise it using
contextual rules.
• Very popular (freely available, works fairly well)
– Probably the most widely used tagger (esp. outside NLP)
– …. but not the most accurate: 96.6% / 82.0 %
• A supervised method: requires a tagged corpus
Brill Tagging: In more detail

• Start with simple (less accurate) rules… learn better ones from a tagged corpus
– Tag each word initially with its most likely POS
– Examine a set of transformations to see which improves tagging decisions compared to the tagged corpus
– Re-tag the corpus using the best transformation
– Repeat until, e.g., performance doesn't improve
– Result: a tagging procedure (ordered list of transformations) which can be applied to new, untagged text
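A sketch of this loop with NLTK's Brill trainer (the brill24 rule templates and the unigram initial tagger are just one reasonable configuration):

# Sketch: transformation-based (Brill) tagging.
from nltk.corpus import brown
from nltk.tag import DefaultTagger, UnigramTagger
from nltk.tag.brill import brill24
from nltk.tag.brill_trainer import BrillTaggerTrainer

train_sents = brown.tagged_sents(categories="news")[:2000]
initial = UnigramTagger(train_sents, backoff=DefaultTagger("NN"))

trainer = BrillTaggerTrainer(initial, brill24(), trace=0)
brill_tagger = trainer.train(train_sents, max_rules=20)

print(brill_tagger.tag("They are expected to race tomorrow".split()))
for rule in brill_tagger.rules()[:5]:   # the learned rules are human-readable
    print(rule)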
An example
• Examples:
– They are expected to race tomorrow.
– The race for outer space.
• Tagging algorithm:
1. Tag all uses of “race” as NN (most likely tag in the Brown corpus)
• They are expected to race/NN tomorrow
• the race/NN for outer space
2. Use a transformation rule to replace the tag NN with VB for all uses of
“race” preceded by the tag TO:
• They are expected to race/VB tomorrow
• the race/NN for outer space
What gets learned? [from Brill 95]

[Tables from Brill 95: tag-triggered transformations and morphology-triggered transformations]

Rules are linguistically interpretable
Phrase structure

• Words are organized in phrases

• Phrases: grouping of words that are clumped as a unit

• Syntax: study of the regularities and constraints of word order and phrase structure
Major phrase types

• Sentence (S) (whole grammatical unit). Normally rewrites as a subject noun phrase and a verb phrase

• Noun phrase (NP): phrase whose head is a noun or a pronoun, optionally accompanied by a set of modifiers
– The smart student of physics with long hair
Major phrase types
• Prepositional phrases (PP)
– Headed by a preposition and containing a NP
• She is [on the computer]
• They walked [to their school]
• Verb phrases (VP)
– Phrase whose head is a verb
• [Getting to school on time] was a struggle
• He [was trying to keep his temper]
• That woman [quickly showed me the way to hide]
Phrase structure grammar
• Syntactic analysis of sentences
– (Ultimately) to extract meaning:
• Mary gave Peter a book
• Peter gave Mary a book
Phrase structure parsing

• Parsing: the process of reconstructing the derivation(s) or phrase structure trees that give rise to a particular sequence of words

• A parse is a phrase structure tree

– New art critics write reviews with computers
Phrase structure parsing & ambiguity

• The children ate the cake with a spoon


• PP Attachment Ambiguity
• Why is it important for NLP?
Text Normalization
• Stemming
• Convert to lower case
• Identifying non-standard words including numbers, abbreviations,
and dates, and mapping any such tokens to a special vocabulary.
– For example, every decimal number could be mapped to a single token 0.0, and
every acronym could be mapped to AAA. This keeps the vocabulary small and
improves the accuracy of many language modeling tasks.
• Lemmatization
– Make sure that the resulting form is a known word in a dictionary
– The WordNet lemmatizer only removes affixes if the resulting word is in its dictionary
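A small sketch of such a normalization step (the placeholder tokens 0.0 and AAA follow the example above; the regular expressions are illustrative choices, not a standard):

# Sketch: lowercase, map numbers and acronyms to placeholder tokens, lemmatize.
import re
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()

def normalize(tokens):
    out = []
    for tok in tokens:
        if re.fullmatch(r"\d+(\.\d+)?", tok):
            out.append("0.0")               # any decimal number -> one token
        elif re.fullmatch(r"[A-Z]{2,}", tok):
            out.append("AAA")               # any acronym -> one token
        else:
            out.append(lemmatizer.lemmatize(tok.lower()))
    return out

print(normalize(["The", "NYSE", "fell", "22.50", "points"]))
# ['the', 'AAA', 'fell', '0.0', 'point']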
Segmentation

• Word segmentation
– For languages that do not put spaces between words
• Chinese, Japanese, Korean, Thai, German (for compound nouns)

• Tokenization

• Sentence segmentation
– Divide text into sentences
Tokenization
• Divide text into units called tokens (words, numbers, punctuations)
• What is a word?
– Graphic word: string of contiguous alphanumeric characters surrounded by white space
• $22.50
– Main clue (in English) is the occurrence of whitespace
– Problems
• Periods: usually we remove punctuation, but sometimes it's useful to keep periods (Wash. → wash)
• Single apostrophes, contractions (isn't, didn't, dog's): for meaning extraction it could be useful to have 2 separate forms (is + n't or not)
• Hyphenation:
– Sometimes best as a single word: co-operate
– Sometimes best as 2 separate words: 26-year-old, aluminum-export ban

• (RE for tokenization)
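A sketch of such a regular expression with NLTK's regexp_tokenize; the pattern below is one illustrative choice that keeps $22.50, abbreviations, hyphenated words, and contractions together while splitting off other punctuation:

# Sketch: regular-expression tokenization.
from nltk.tokenize import regexp_tokenize

pattern = r"""(?x)           # verbose regex: whitespace and comments ignored
    (?:[A-Z]\.)+             # abbreviations, e.g. U.S.A.
  | \w+(?:[-']\w+)*          # words, hyphenated forms, contractions
  | \$?\d+(?:\.\d+)?%?       # currency and percentages, e.g. $22.50, 82%
  | [.,;!?"()]               # other punctuation as separate tokens
"""
text = "The 26-year-old didn't pay $22.50 in Wash. today."
print(regexp_tokenize(text, pattern))
# ['The', '26-year-old', "didn't", 'pay', '$22.50', 'in', 'Wash', '.', 'today', '.']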


Sentence Segmentation
• Sentence:
– Something ending with ., ?, ! (and sometimes also :)
– “You reminded me,” she remarked, “of your mother.”
• Nested sentences
• Note the .” (the period sits inside the closing quote)
• Sentence boundary detection algorithms
– Heuristic
– Statistical classification trees (Riley 1989)
• Probability of a word to occur before or after a boundary, case and length of words
– Neural network (Palmer and Hearst 1997)
• Part of speech distribution of preceding and following words
– Maximum Entropy (Mikheev 1998)
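NLTK's pre-trained Punkt model implements this kind of statistical boundary detection out of the box; a sketch (it assumes the punkt data has been downloaded):

# Sketch: sentence segmentation with the pre-trained Punkt tokenizer.
import nltk

text = ('"You reminded me," she remarked, "of your mother." '
        'The U.S. market fell. Prices hit $22.50! Was that expected?')
for s in nltk.sent_tokenize(text):
    print(repr(s))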
Note: MODELS and Features


Segmentation as classification

• Sentence segmentation can be viewed as a classification task for punctuation:
– Whenever we encounter a symbol that could possibly end a sentence, such as a period or a question mark, we have to decide whether it terminates the preceding sentence.
– We'll return to this when we cover classification

• See Section 6.2 of the NLTK book
• For word segmentation see Section 3.8 of the NLTK book
– Also page 180 of Speech and Language Processing (Jurafsky and Martin)
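A sketch of the feature-extraction side of that classification view, in the spirit of the NLTK book's Section 6.2 (the feature names are illustrative):

# Sketch: features for deciding whether the token at position i
# (a '.', '?' or '!') ends a sentence.
def punct_features(tokens, i):
    return {
        "punct": tokens[i],
        "next_word_capitalized": i + 1 < len(tokens) and tokens[i + 1][:1].isupper(),
        "prev_word": tokens[i - 1].lower() if i > 0 else "<s>",
        "prev_word_is_one_char": i > 0 and len(tokens[i - 1]) == 1,
    }

tokens = ["Mr", ".", "Smith", "arrived", ".", "He", "sat", "down", "."]
print(punct_features(tokens, 1))   # the '.' after 'Mr'      (not a boundary)
print(punct_features(tokens, 4))   # the '.' after 'arrived' (a boundary)
# These dictionaries can be fed to a classifier such as nltk.NaiveBayesClassifier.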
Semantics

• Semantics is the study of the meaning of words, constructions, and utterances

1. Study of the meaning of individual words (lexical semantics)
2. Study of how the meanings of individual words are combined into the meaning of sentences (or larger units)
Lexical semantics

• How words are related to each other

• Hyponymy
– scarlet, vermilion, carmine, and crimson are all hyponyms of red

• Antonymy (opposite)
– male, female

• Meronymy (part of)
– Tire is a meronym of car

• Etc..
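These relations can be queried directly in WordNet through NLTK; a sketch (assumes the WordNet data is installed; the synsets returned depend on the WordNet version):

# Sketch: querying lexical relations in WordNet.
from nltk.corpus import wordnet as wn

red = wn.synsets("red", pos=wn.NOUN)[0]
print([s.name() for s in red.hyponyms()])        # shades such as crimson, scarlet, ...

car = wn.synsets("car", pos=wn.NOUN)[0]
print([s.name() for s in car.part_meronyms()])   # whatever parts WordNet lists for 'car'

for syn in wn.synsets("male"):                   # antonymy is defined on lemmas
    for lemma in syn.lemmas():
        for ant in lemma.antonyms():
            print(lemma.name(), "<->", ant.name())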
Word Senses
• Words have multiple distinct meanings, or senses:
– Plant: living plant, manufacturing plant, …
– Title: name of a work, ownership document, form of address, material at the
start of a film, …
• Many levels of sense distinctions
– Homonymy: totally unrelated meanings (river bank, money bank)
– Polysemy: related meanings (star in sky, star on tv, title)
– Systematic polysemy: productive meaning extensions (metonymy such as
organizations to their buildings) or metaphor
– Sense distinctions can be extremely subtle (or not)
• Granularity of senses needed depends a lot on the task
Word Sense Disambiguation

• Determine which of the senses of an ambiguous word is invoked in a particular


use of the word
• Example: living plant vs. manufacturing plant
• How do we tell these senses apart?
– “Context”
• The manufacturing plant which had previously sustained the town’s economy shut down
after an extended labor strike.
– Maybe it’s just text categorization
– Each word sense represents a topic
• Why is it important to model and disambiguate word senses?
– Translation
• Bank → banca or riva
– Parsing
• For PP attachment, for example
– information retrieval
• To return documents with the right sense of bank
Various Approaches to WSD
• Unsupervised learning
– We don’t know/have the labels
– The task is then discrimination rather than disambiguation
• Cluster into groups and discriminate between these groups without giving labels
• Clustering
– Example: EM (expectation-maximization), bootstrapping (seeded with some labeled data)
• Supervised learning
Supervised learning

• Supervised learning
– When we know the truth (true senses) (not always true or easy)

– Classification task

– Most systems do some kind of supervised learning

– Many competing classification technologies perform about the same (it’s all
about the knowledge sources you tap)

– Problem: training data available for only a few words

– Examples: Bayesian classification


• Naïve Bayes (simplest example of Graphical models)
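A sketch of supervised WSD as Naïve Bayes classification over bag-of-context-words features; the tiny training set is invented for illustration (NLTK's senseval corpus is a real source of labelled examples):

# Sketch: Naive Bayes WSD with context-word features (toy data).
import nltk

def features(context):
    return {w.lower(): True for w in context.split()}

train = [
    (features("the manufacturing plant shut down after the strike"), "factory"),
    (features("workers at the plant assembled cars"), "factory"),
    (features("the plant needs water and sunlight to grow"), "living"),
    (features("she put the plant in a pot near the window"), "living"),
]
classifier = nltk.NaiveBayesClassifier.train(train)
print(classifier.classify(features("the plant flowered in spring sunlight")))
classifier.show_most_informative_features(5)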
Semantics: beyond individual words

• Once we have the meaning of the individual words, we need to assemble them to get the meaning of the whole sentence

• Hard because natural language does not obey the principle of compositionality, by which the meaning of the whole can be predicted from the meanings of the parts
Semantics: beyond individual words

• Collocations
– White skin, white wine, white hair

• Idioms: meaning is opaque


– Kick the bucket
Lexical acquisition

• Develop algorithms and statistical techniques for filling the holes in existing dictionaries and lexical resources by looking at the occurrences of patterns of words in large text corpora
– Collocations

– Semantic similarity

– (Logical metonymy)

– Selectional preferences
Collocations
• A collocation is an expression consisting of two or more words that
correspond to some conventional way of saying things
– Noun phrases: weapons of mass destruction, stiff breeze (but why not *stiff
wind?)
– Verbal phrases: to make up
– Not necessarily contiguous: knock…. door
• Limited compositionality
– Compositional if meaning of expression can be predicted by the meaning of the
parts
– Idioms are most extreme examples of non-compositionality
• Kick the bucket
Collocations
• Non Substitutability
– Cannot substitute words in a collocation
• *yellow wine
• Non modifiability
– To get a frog in one’s throat
• *To get an ugly frog in one’s throat
• Useful for
– Language generation
• *Powerful tea, *take a decision
– Machine translation
• Easy way to test if a combination is a collocation is to translate it into another
language
– Make a decision → *faire une décision (prendre), *fare una decisione (prendere)
Finding collocations
• Frequency
– If two words occur together a lot, that may be evidence that they have a
special function
– Filter by POS patterns
– A N (linear function), N N (regression coefficients) etc..
• Mean and variance of the distance between the words
• For non-contiguous collocations
• Mutual information measure
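NLTK packages these ingredients (frequency filters, PMI and other association measures) in its collocations module; a sketch on the Brown corpus:

# Sketch: finding bigram collocations by raw frequency and by PMI.
from nltk.corpus import brown
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

measures = BigramAssocMeasures()
finder = BigramCollocationFinder.from_words(brown.words(categories="news"))
finder.apply_freq_filter(5)                 # drop rare, noisy pairs first
print(finder.nbest(measures.raw_freq, 10))  # top pairs by frequency
print(finder.nbest(measures.pmi, 10))       # top pairs by mutual information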

Lexical acquisition

• Examples:
– “insulin” and “progesterone” are in WordNet 2.1 but “leptin” and
“pregnenolone” are not.

– “HTML” and “SGML”, but not “XML” or “XHTML”.

– “Google” and “Yahoo”, but not “Microsoft” or “IBM”.

• We need some notion of word similarity to know where to locate a new word in a lexical resource
Semantic similarity

• Similar if contextually interchangeable
– The degree to which one word can be substituted for another in a given context
• Suit is similar to litigation (but only in the legal context)

• Measures of similarity
– WordNet-based

– Vector-based

• Detecting hyponymy and other relations with patterns
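A sketch of the WordNet-based measures in NLTK (dog/cat is the stock example; for an ambiguous pair like suit/litigation, choosing the right synsets is itself part of the problem):

# Sketch: WordNet-based similarity between senses.
from nltk.corpus import wordnet as wn

dog = wn.synset("dog.n.01")
cat = wn.synset("cat.n.01")
print(dog.path_similarity(cat))   # shortest-path measure, in (0, 1]
print(dog.wup_similarity(cat))    # Wu-Palmer: based on the depth of the common ancestor

print([s.name() for s in wn.synsets("suit")])   # first step: list the candidate senses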


Lexical acquisition

• Lexical acquisition problems


– Collocations
– Semantic similarity
– (Logical metonymy)
– Selectional preferences
Selectional preferences

• Most verbs prefer arguments of a particular type: selectional preferences or restrictions
– Objects of eat tend to be food, subjects of think tend to be people, etc.
– “Preferences” (not hard restrictions) to allow for metaphors

• Fear eats the soul

• Why is it important for NLP?


Selectional preferences

• Why Important?
– To infer meaning from selectional restrictions
• Suppose we don't know the word durian (it's not in the vocabulary)

• Susan ate a very fresh durian

• Infer that durian is a type of food

– Ranking the possible parses of a sentence


• Give higher scores to parses where the verb has “natural” arguments
Corpus-based statistical approaches to tackle NLP
problem

• Data (corpora, labels, linguistic resources)

• Feature extraction (usually linguistically motivated)

• Statistical models
The NLP Pipeline
• For a given problem to be tackled
1. Choose corpus (or build your own)
– Low level processing done to the text before the ‘real work’ begins
• Important but often neglected
– Low-level formatting issues
• Junk formatting/content (HTML tags, tables)
• Case change (e.g. everything to lower case)
• Tokenization, sentence segmentation
2. Choose annotation to use (or choose the label set and label it
yourself )
1. Check labeling (inconsistencies etc…)
3. Extract features
4. Choose or implement new NLP algorithms
5. Evaluate
6. (eventually) Re-iterate
Corpora
• Text Corpora & Annotated Text Corpora
– NLTK corpora
– Use/create your own
• Lexical resources
– WordNet
– VerbNet
– FrameNet
– Domain specific lexical resources
• Corpus Creation
• Annotation
Annotated Text Corpora

• Many text corpora contain linguistic annotations, representing


genres, POS tags, named entities, syntactic structures, semantic
roles, and so forth.

• Not part of the text in the file; it explains something of the structure
and/or semantics of text
Annotated Text Corpora

• Grammar annotation
– POS, parses, chunks
• Semantic annotation
– Topics, Named Entities, sentiment, Author, Language, Word senses, co-
reference …
• Lower level annotation
– Word tokenization, Sentence Segmentation, Paragraph Segmentation
Processing Search Engine Results
• The web can be thought of as a huge corpus of unannotated text.

• Web search engines provide an efficient means of searching this


text
Lexical Resources

• A lexicon, or lexical resource, is a collection of words and/or


phrases along with associated information such as part of speech
and sense definitions.

• Lexical resources are secondary to texts, and are usually created


and enriched with the help of texts
– A vocabulary (list of words in a text) is the simplest lexical resource
• WordNet
• VerbNet
• FrameNet
• Medline
Annotation: main issues
• Deciding Which Layers of Annotation to Include
– Grammar annotation
– Semantic annotation
– Lower level annotation
• Markup schemes
• How to do the annotation
• Design of a tag set
How to do the annotation?
• By hand
– Can be difficult, time consuming, domain knowledge and/or training may be required
• Unsupervised methods do not use labeled data and try to learn a task from the
“properties” of the data.
• Automatic (i.e. using some other metadata available)
• Bootstrapping
– Bootstrapping is an iterative process where, given (usually) a small amount of labeled
data (seed-data), the labels for the unlabeled data are estimated at each round of the
process, and the (accepted) labels then incorporated as training data.
• Co-training
– Co-training is a semi-supervised learning technique that requires two views of the data. It
assumes that each example is described using two different feature sets that provide
different, complementary information about the instance.
– “the description of each example can be partitioned into two distinct views” and for
which both (a small amount of) labeled data and (much more) unlabeled data are
available.
– co-training is essentially the one-iteration, probabilistic version of bootstrapping
• Non linguistic (i.e. clicks for IR relevance)
Why Probability?

• Statistical NLP aims to do statistical inference for the field of NLP

• Statistical inference consists of taking some data (generated in


accordance with some unknown probability distribution) and then
making some inference about this distribution.
Why Probability?

• Examples of statistical inference are WSD, the task of language


modeling (example, how to predict the next word given the previous
words), topic classification, etc.

• In order to do this, we need a model of the language.

• Probability theory helps us find such a model


Probability Theory

• How likely it is that something will happen

• Sample space Ω is the listing of all possible outcomes of an experiment


– Sample space can be continuous or discrete

– For language applications it's discrete (e.g. words)

• Event A is a subset of Ω

• Probability function (or distribution)

P : Ω → [0, 1]
Prior Probability

• Prior probability: the probability before we consider any additional knowledge

P(A)
Conditional probability

• Sometimes we have partial knowledge about the outcome of an experiment

• Conditional (or posterior) probability

• Suppose we know that event B is true

• The probability that A is true given the knowledge about B is expressed by P(A | B)

P(A | B) = P(A, B) / P(B)
Conditional probability (cont)

P(A, B) = P(A | B) P(B) = P(B | A) P(A)
• Note: P(A,B) = P(A ∩ B)
• Chain Rule
• P(A, B) = P(A|B) P(B) = The probability that A and B both happen is the
probability that B happens times the probability that A happens, given B has
occurred.
• P(A, B) = P(B|A) P(A) = The probability that A and B both happen is the
probability that A happens times the probability that B happens, given A has
occurred.
• Multi-dimensional table with a value in every cell giving the probability of
that specific state occurring

Chain Rule

P(A,B) = P(A|B)P(B)
= P(B|A)P(A)

P(A, B, C, D, …) = P(A) P(B|A) P(C|A,B) P(D|A,B,C) …

Chain Rule → Bayes' rule

P(A, B) = P(A|B) P(B) = P(B|A) P(A)

Bayes' rule:  P(A|B) = P(B|A) P(A) / P(B)

Useful when one quantity is easier to calculate; a trivial consequence of the definitions we saw, but extremely useful

Bayes' rule

P(A | B) = P(B | A) P(A) / P(B)

Bayes' rule translates causal knowledge into diagnostic knowledge.

For example, if A is the event that a patient has a disease, and B is the event
that she displays a symptom, then P(B | A) describes a causal relationship, and
P(A | B) describes a diagnostic one (that is usually hard to assess).

If P(B | A), P(A) and P(B) can be assessed easily, then we get P(A | B) for free.

Example
• S:stiff neck, M: meningitis
• P(S|M) = 0.5, P(M) = 1/50,000, P(S) = 1/20
• I have a stiff neck, should I worry?

P(M | S) = P(S | M) P(M) / P(S)
         = (0.5 × 1/50,000) / (1/20)
         = 0.0002
(Conditional) independence
• Two events A and B are independent of each other if
P(A) = P(A|B)

• Two events A and B are conditionally independent of each other


given C if
P(A|C) = P(A|B,C)
Back to language
• Statistical NLP aims to do statistical inference for the field of NLP
– Topic classification
• P( topic | document )
– Language models
• P (word | previous word(s) )
– WSD
• P( sense | word)
• Two main problems
– Estimation: P is unknown: estimate P
– Inference: We estimated P; now we want to find (infer) the topic of a
document, or the sense of a word
Language Models (Estimation)

• In general, for language events, P is unknown

• We need to estimate P (or a model M of the language)

• We'll do this by looking at evidence about what P must be, based on a sample of data
Inference
• The central problem of computational Probability Theory is the
inference problem:
• Given a set of random variables X1, … , Xk and their joint density
P(X1, … , Xk), compute one or more conditional densities given
observations.
– Compute
• P(X1 | X2 … , Xk)
• P(X3 | X1 )
• P(X1, X2 | X3, X4)
• Etc …
• Many problems can be formulated in these terms.
Bayes decision rule
• w: ambiguous word
• S = {s1, s2, …, sn } senses for w
• C = {c1, c2, …, cn } context of w in a corpus
• V = {v1, v2, …, vj } words used as contextual features for
disambiguation

• Bayes decision rule


– Decide sj if P(sj | c) > P(sk | c) for all sk ≠ sj
• We want to assign w to the sense s’ where

s’ = argmaxsk P(sk | c)
Graphical Models
• Within the Machine Learning framework
• Probability theory plus graph theory
• Widely used
– NLP

– Speech recognition

– Error correcting codes

– Systems diagnosis

– Computer vision

– Filtering (Kalman filters)

– Bioinformatics
(Quick intro to)
Graphical Models

Nodes are random variables

Edges are annotated with conditional probabilities

Absence of an edge between nodes implies conditional independence

[Example network: A → B, A → C, D → C, with tables P(A), P(D), P(B|A), P(C|A,D): a "probabilistic database"]
Graphical Models

• Define a joint probability distribution:
P(X1, …, XN) = Π_i P(Xi | Par(Xi))
P(A,B,C,D) = P(A) P(D) P(B|A) P(C|A,D)
• Learning
– Given data, estimate the parameters P(A), P(D), P(B|A), P(C|A,D)
Graphical Models

• Define a joint probability distribution:
P(X1, …, XN) = Π_i P(Xi | Par(Xi))
P(A,B,C,D) = P(A) P(D) P(B|A) P(C|A,D)
• Learning
– Given data, estimate P(A), P(D), P(B|A), P(C|A,D)
• Inference: compute conditional probabilities, e.g., P(A|B,D) or P(C|D)
• Inference = probabilistic queries
• General inference algorithms (e.g. Junction Tree)
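A tiny sketch of what this factorized "probabilistic database" looks like in code, using the A, B, C, D network above with invented binary tables:

# Sketch: joint probability from the factorization
# P(A,B,C,D) = P(A) P(D) P(B|A) P(C|A,D), with made-up numbers.
P_A = {True: 0.3, False: 0.7}
P_D = {True: 0.6, False: 0.4}
P_B_given_A = {(True, True): 0.9, (True, False): 0.1,    # key: (a, b)
               (False, True): 0.2, (False, False): 0.8}
P_C_true_given_AD = {(True, True): 0.95, (True, False): 0.5,
                     (False, True): 0.4, (False, False): 0.05}

def joint(a, b, c, d):
    p_c = P_C_true_given_AD[(a, d)] if c else 1 - P_C_true_given_AD[(a, d)]
    return P_A[a] * P_D[d] * P_B_given_A[(a, b)] * p_c

# Sanity check: the joint sums to 1 over all 16 assignments (up to rounding).
vals = [True, False]
print(sum(joint(a, b, c, d) for a in vals for b in vals for c in vals for d in vals))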
Naïve Bayes models
• Simple graphical model

[Graph: class variable Y with children x1, x2, x3]

• Xi depend on Y
• Naïve Bayes assumption: all xi are independent given Y
• Currently used for text classification and spam detection
Naïve Bayes models

Naïve Bayes for document classification


[Graph: topic → w1, w2, …, wn]

Inference task: P(topic | w1, w2, …, wn)


Naïve Bayes for WSD

[Graph: sense sk → context words v1, v2, v3]

• Recall the general joint probability distribution:
P(X1, …, XN) = Π_i P(Xi | Par(Xi))

P(sk, v1, …, v3) = P(sk) Π_i P(vi | Par(vi))
                 = P(sk) P(v1|sk) P(v2|sk) P(v3|sk)
Naïve Bayes for WSD

[Graph: sense sk → context words v1, v2, v3]

P(sk, v1, …, v3) = P(sk) Π_i P(vi | Par(vi))
                 = P(sk) P(v1|sk) P(v2|sk) P(v3|sk)

Estimation (Training): Given data, estimate:
P(sk), P(v1|sk), P(v2|sk), P(v3|sk)
Naïve Bayes for WSD

[Graph: sense sk → context words v1, v2, v3]

P(sk, v1, …, v3) = P(sk) Π_i P(vi | Par(vi))
                 = P(sk) P(v1|sk) P(v2|sk) P(v3|sk)

Estimation (Training): Given data, estimate:
P(sk), P(v1|sk), P(v2|sk), P(v3|sk)

Inference (Testing): Compute conditional probabilities of interest: P(sk | v1, v2, v3)
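A sketch of both steps with invented counts, to make the training/testing split concrete (real systems estimate these tables from a sense-annotated corpus; the add-alpha smoothing is one simple way to avoid zero probabilities):

# Sketch: Naive Bayes WSD by hand.
from collections import Counter, defaultdict

labelled = [                     # (sense, context words) -- toy sense-tagged data
    ("factory", ["workers", "shut", "strike"]),
    ("factory", ["workers", "cars", "assembly"]),
    ("living",  ["water", "sunlight", "grow"]),
]

# Estimation (training): relative-frequency estimates of P(sk) and P(v | sk).
sense_counts = Counter(s for s, _ in labelled)
word_counts = defaultdict(Counter)
for sense, words in labelled:
    word_counts[sense].update(words)

def p_sense(s):
    return sense_counts[s] / sum(sense_counts.values())

def p_word_given_sense(v, s, alpha=1.0, vocab_size=20):
    # add-alpha smoothing so unseen context words do not zero out the product
    return (word_counts[s][v] + alpha) / (sum(word_counts[s].values()) + alpha * vocab_size)

# Inference (testing): pick the sense maximizing P(sk) * prod_i P(vi | sk).
def disambiguate(context):
    best_sense, best_score = None, 0.0
    for s in sense_counts:
        score = p_sense(s)
        for v in context:
            score *= p_word_given_sense(v, s)
        if score > best_score:
            best_sense, best_score = s, score
    return best_sense

print(disambiguate(["workers", "strike"]))    # -> 'factory'
print(disambiguate(["sunlight", "water"]))    # -> 'living'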
Language Models
• Model to assign scores to sentences

P(w1, w2, …, wN)
• Probabilities should broadly indicate likelihood of sentences
– P( I saw a van) >> P( eyes awe of an)
• Not grammaticality
– P(artichokes intimidate zippers) ≈ 0
• In principle, “likely” depends on the domain, context, speaker…

Language models
• Related: the task of predicting the next word

P(wn | w1, …, wn-1)
• Can be useful for
– Spelling corrections
• I need to notified the bank
– Machine translations
– Speech recognition
– OCR (optical character recognition)
– Handwriting recognition
– Augmentative communication
• Computer systems to help the disabled in communication
– For example, systems that let choose words with hand movements
Language Models
• Model to assign scores to sentences
P(w1, w2, …, wN)

– Sentence: w1, w2, …, wN

– Break the sentence probability down with the chain rule (no loss of generality):

P(w1, w2, …, wN) = Π_i P(wi | w1, w2, …, wi-1)

– Too many histories!

Markov assumption: n-gram solution

P(wi | w1, w2, …, wi-1)

• Markov assumption: only the prior local context (the last few words) affects the next word
• N-gram models: assume each word depends only on a short linear history
– Use the previous N-1 words to predict the next one

P(wi | wi-n+1, …, wi-1)

P(w1, w2, …, wN) = Π_i P(wi | wi-n+1, …, wi-1)
Markov assumption: n-gram solution
• Unigrams (n = 1):
P(wi | wi-n+1, …, wi-1) ≈ P(wi)
P(w1, w2, …, wN) = Π_i P(wi)

• Bigrams (n = 2):
P(wi | wi-n+1, …, wi-1) ≈ P(wi | wi-1)
P(w1, w2, …, wN) = Π_i P(wi | wi-1)

• Trigrams (n = 3):
P(wi | wi-n+1, …, wi-1) ≈ P(wi | wi-2, wi-1)
P(w1, w2, …, wN) = Π_i P(wi | wi-2, wi-1)
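A sketch of the maximum-likelihood bigram model implied by these formulas, on a toy corpus (real models need far more data plus smoothing for the zeros discussed in the Sparsity slides below):

# Sketch: relative-frequency (MLE) bigram language model.
from collections import Counter

corpus = [
    "<s> I saw a van </s>".split(),
    "<s> I saw a cat </s>".split(),
    "<s> the cat saw me </s>".split(),
]
unigram = Counter(w for sent in corpus for w in sent)
bigram = Counter((sent[i], sent[i + 1]) for sent in corpus for i in range(len(sent) - 1))

def p_bigram(w, prev):
    # MLE estimate c(prev, w) / c(prev); 0 for unseen bigrams
    return bigram[(prev, w)] / unigram[prev] if unigram[prev] else 0.0

def p_sentence(words):
    p = 1.0
    for prev, w in zip(words, words[1:]):
        p *= p_bigram(w, prev)    # a single unseen bigram zeroes the whole product
    return p

print(p_sentence("<s> I saw a cat </s>".split()))   # > 0
print(p_sentence("<s> a cat saw I </s>".split()))   # 0.0: contains unseen bigrams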
Choice of n
• In principle we would like the n of the n-gram to be large
– green
– large green
– the large green
– swallowed the large green
– swallowed should influence the choice of the next word
(mountain is unlikely, pea more likely)
– The crocodile swallowed the large green ..
– Mary swallowed the large green ..
– And so on…
Discrimination vs. reliability

• Looking at longer histories (large n) should allow us to make better predictions (better discrimination)

• But it’s much harder to get reliable statistics since the number of
parameters to estimate becomes too large
– The larger n, the larger the number of parameters to estimate, the larger the
data needed to do statistically reliable estimations
Language Models

• N = size of the vocabulary

• Unigrams: P(wi | wi-n+1, …, wi-1) ≈ P(wi)
– For each wi calculate P(wi): N such numbers → N parameters

• Bigrams: P(w1, w2, …, wN) = Π_i P(wi | wi-1)
– For each pair wi, wj calculate P(wi | wj): N x N parameters

• Trigrams: P(w1, w2, …, wN) = Π_i P(wi | wi-1, wi-2)
– For each triple wi, wj, wk calculate P(wi | wj, wk): N x N x N parameters
N-grams and parameters
• Assume we have a vocabulary of 20,000 words
• Growth in number of parameters for n-grams models:

Model              Parameters
Bigram model       20,000^2 = 400 million
Trigram model      20,000^3 = 8 trillion
Four-gram model    20,000^4 = 1.6 x 10^17
Sparsity
• Zipf’s law: most words are rare

– This makes frequency-based approaches to language hard

• New words appear all the time, new bigrams more often, trigrams or
more, still worse!
P(wi) = c(wi) / N                                        (so P(wi) = 0 if c(wi) = 0)

P(wi | wi-1) = c(wi-1, wi) / c(wi-1)                     (= 0 if c(wi-1, wi) = 0)

P(wi | wi-1, wi-2) = c(wi-2, wi-1, wi) / c(wi-2, wi-1)   (= 0 if c(wi-2, wi-1, wi) = 0)

• These relative frequency estimates are the MLE (maximum likelihood estimates):
choice of parameters that give the highest probability to the training corpus

Sparsity
• The larger the number of parameters, the more likely it is to get 0
probabilities

• Note also the product:

P(w1, w2, …, wN) = Π_i P(wi | wi-n+1, …, wi-1)

• If we have a 0 for an unseen event, the 0 propagates and gives us a probability of 0 for the whole sentence
A ton of thanks!

