Part – III
• Support Vector Machine
• Vector Space Model
• Latent Semantic Analysis
SVM (Concept)
Linear Classifiers
Input x is mapped by f to an estimated label yest:
f(x, w, b) = sign(w · x + b)
(Figure: 2-D data points; those denoted +1 lie where w · x + b > 0 and those denoted -1 lie where w · x + b < 0, on either side of the line w · x + b = 0.)
How would you classify this data?
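As a concrete illustration (not part of the slides), here is a minimal numpy sketch of this decision rule, with an invented weight vector w and bias b:

import numpy as np

def predict(X, w, b):
    """Classify each row x of X as sign(w . x + b): +1 or -1."""
    return np.sign(X @ w + b)

w = np.array([1.0, -2.0])        # hypothetical weight vector
b = 0.5                          # hypothetical bias
X = np.array([[3.0, 1.0],        # w . x + b = 1.5  -> +1
              [0.0, 2.0]])       # w . x + b = -3.5 -> -1
print(predict(X, w, b))          # [ 1. -1.]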
Linear Classifiers
f(x, w, b) = sign(w · x + b)
(Figure: the same data with a candidate separating line; one -1 point is misclassified into the +1 class.)
Classifier Margin
f(x, w, b) = sign(w · x + b)
Define the margin of a linear classifier as the width that the boundary could be increased by before hitting a datapoint.
Maximum Margin
f(x, w, b) = sign(w · x + b)
1. Maximizing the margin is good according to intuition and PAC theory.
2. It implies that only the support vectors are important; the other training examples are ignorable.
3. Empirically it works very, very well.
The maximum margin linear classifier is the linear classifier with the maximum margin. Support vectors are the datapoints that the margin pushes up against.
This is the simplest kind of SVM, called a linear SVM (LSVM).
Linear SVM Mathematically
(Figure: the "Predict Class = +1" zone lies above the plane w · x + b = +1, the "Predict Class = -1" zone lies below the plane w · x + b = -1, with the separating plane w · x + b = 0 between them; M = margin width.)
What we know:
• w · x+ + b = +1
• w · x- + b = -1
• w · (x+ - x-) = 2
Therefore M = (x+ - x-) · w / ||w|| = 2 / ||w||
Linear SVM Mathematically
Goal:
1) Correctly classify all training data:
   w · xi + b ≥ +1 if yi = +1
   w · xi + b ≤ -1 if yi = -1
   i.e.  yi (w · xi + b) ≥ 1 for all i
2) Maximize the margin M = 2 / ||w||; this is the same as minimizing ½ wTw.
We can formulate a quadratic optimization problem and solve for w and b:
   Minimize Φ(w) = ½ wTw
   subject to yi (w · xi + b) ≥ 1 for all i
Solving the Optimization Problem
Find w and b such that
Φ(w) =½ wTw is minimized;
and for all {(xi ,yi)}: yi (wTxi + b) ≥ 1
Need to optimize a quadratic function subject to linear constraints.
Quadratic optimization problems are a well-known class of
mathematical programming problems, and many (rather intricate)
algorithms exist for solving them.
The solution involves constructing a dual problem in which a Lagrange multiplier αi is associated with every constraint of the primal problem.
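A minimal sketch of solving this problem in practice (not from the slides): scikit-learn's SVC with a linear kernel solves the dual quadratic program; a very large C approximates the hard-margin formulation on the toy data invented below.

import numpy as np
from sklearn.svm import SVC

# Toy linearly separable data (invented for illustration).
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -1.0], [-3.0, -2.0]])
y = np.array([1, 1, -1, -1])

clf = SVC(kernel="linear", C=1e6)     # large C ~ hard margin
clf.fit(X, y)

w, b = clf.coef_[0], clf.intercept_[0]
print("w =", w, " b =", b)
print("margin width M = 2/||w|| =", 2 / np.linalg.norm(w))
print("support vectors:\n", clf.support_vectors_)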
OVERFITTING!
Soft Margin Classification
Slack variables ξi can be added to allow misclassification of difficult or noisy examples.
What should our quadratic optimization criterion be?
   Minimize  ½ w · w + C Σ_{k=1..R} ξk
(Figure: the margin picture again, with slack variables ξ2, ξ7, ξ11 measuring how far individual points lie on the wrong side of the lines w · x + b = +1, 0, -1.)
Hard Margin vs. Soft Margin
The old (hard-margin) formulation:
   Find w and b such that Φ(w) = ½ wTw is minimized, and for all {(xi, yi)}: yi (wTxi + b) ≥ 1
The new (soft-margin) formulation adds the slack penalty C Σ ξk to the objective, as on the previous slide, and relaxes each constraint to yi (wTxi + b) ≥ 1 - ξi.
In both cases the resulting decision function has the form f(x) = Σ αi yi xiTx + b, where the αi are the Lagrange multipliers from the dual problem.
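A small sketch of evaluating this dual-form decision function (toy data invented; scikit-learn's dual_coef_ stores the products αi·yi for the support vectors):

import numpy as np
from sklearn.svm import SVC

X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -1.0], [-3.0, -2.0]])
y = np.array([1, 1, -1, -1])
clf = SVC(kernel="linear", C=1.0).fit(X, y)      # soft margin, C = 1

x_new = np.array([1.0, 0.5])
# f(x) = sum_i alpha_i y_i x_i . x + b over the support vectors.
f = clf.dual_coef_[0] @ (clf.support_vectors_ @ x_new) + clf.intercept_[0]
print(f, clf.decision_function([x_new])[0])      # the two values agree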
Non-linear SVMs
Datasets that are linearly separable with some noise work out great. But what if the dataset is not linearly separable at all?
(Figure: 1-D data on the x axis that no single threshold separates; mapping each point x to (x, x²) makes the classes linearly separable.)
Non-linear SVMs: Feature spaces
General idea: the original input space can always be
mapped to some higher-dimensional feature space where
the training set is separable:
Φ: x → φ(x)
The “Kernel Trick”
The linear classifier relies on the dot product between vectors: K(xi, xj) = xiTxj.
If every data point is mapped into high-dimensional space via some
transformation Φ: x → φ(x), the dot product becomes:
K(xi,xj)= φ(xi) Tφ(xj)
A kernel function is some function that corresponds to an inner
product in some expanded feature space.
Example:
2-dimensional vectors x = [x1  x2]; let K(xi, xj) = (1 + xiTxj)².
Need to show that K(xi, xj) = φ(xi)Tφ(xj):
K(xi, xj) = (1 + xiTxj)²
   = 1 + xi1²xj1² + 2 xi1xj1xi2xj2 + xi2²xj2² + 2 xi1xj1 + 2 xi2xj2
   = [1  xi1²  √2 xi1xi2  xi2²  √2 xi1  √2 xi2]T [1  xj1²  √2 xj1xj2  xj2²  √2 xj1  √2 xj2]
   = φ(xi)Tφ(xj),  where φ(x) = [1  x1²  √2 x1x2  x2²  √2 x1  √2 x2]
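A quick numeric check of this worked example (the two test vectors are arbitrary):

import numpy as np

def phi(x):
    """phi(x) = [1, x1^2, sqrt(2) x1 x2, x2^2, sqrt(2) x1, sqrt(2) x2]."""
    x1, x2 = x
    return np.array([1.0, x1**2, np.sqrt(2)*x1*x2, x2**2,
                     np.sqrt(2)*x1, np.sqrt(2)*x2])

xi = np.array([0.7, -1.2])
xj = np.array([2.0, 0.3])

print((1 + xi @ xj) ** 2)        # kernel evaluated directly
print(phi(xi) @ phi(xj))         # same value via the explicit mapping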
What Functions are Kernels?
For some functions K(xi,xj) checking that
K(xi,xj)= φ(xi) Tφ(xj) can be cumbersome.
Mercer’s theorem:
Every positive semi-definite symmetric function is a kernel.
Positive semi-definite symmetric functions correspond to a positive semi-definite symmetric Gram matrix K, with entries Kij = K(xi, xj).
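As a sketch of how this condition is checked empirically (the RBF kernel and the random sample points below are invented for illustration), one can verify that the Gram matrix on a sample has no negative eigenvalues:

import numpy as np

def rbf(a, b, gamma=0.5):
    return np.exp(-gamma * np.sum((a - b) ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))                            # random sample points

G = np.array([[rbf(xi, xj) for xj in X] for xi in X])   # Gram matrix
eigvals = np.linalg.eigvalsh(G)                         # symmetric -> real eigenvalues
print(eigvals.min() >= -1e-10)                          # True: positive semi-definite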
tf-idf Weighting
• Consider a term in the query that is rare in the collection (e.g., arachnocentric): a document containing it is very likely to be relevant, so rare terms deserve high weights.
• For frequent terms such as high, increase, and line we still want positive weights, but lower weights than for rare terms.
• The tf-idf weight of term t in document d combines term frequency and inverse document frequency:
   w_{t,d} = log(1 + tf_{t,d}) × log10(N / df_t)
• It is the best-known weighting scheme in information retrieval.
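A small sketch of this weighting with made-up counts (the collection size N, term frequencies, and document frequencies are all invented): the rare term ends up with a much larger weight than the frequent one.

import math

def tfidf_weight(tf, df, N):
    """w_{t,d} = log(1 + tf_{t,d}) * log10(N / df_t)."""
    return math.log(1 + tf) * math.log10(N / df)

N = 1_000_000                                   # hypothetical collection size
print(tfidf_weight(tf=3, df=50, N=N))           # rare term: large weight
print(tfidf_weight(tf=3, df=100_000, N=N))      # frequent term: small weight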
(Table: term weights for a set of terms across Antony and Cleopatra, Julius Caesar, The Tempest, Hamlet, Othello, and Macbeth.)
Cosine Similarity
   cos(q, d) = (q · d) / (|q| |d|) = Σ_{i=1}^{V} qi di / ( sqrt(Σ_{i=1}^{V} qi²) · sqrt(Σ_{i=1}^{V} di²) )
qi is the tf-idf weight of term i in the query q; di is the tf-idf weight of term i in the document d.
For length-normalized vectors, cosine similarity is simply the dot product (or scalar product):
   cos(q, d) = q · d = Σ_{i=1}^{V} qi di
(Table: term frequencies, e.g. jealous: 10, 7, 11, for the novels Sense and Sensibility (SaS), Pride and Prejudice (PaP), and Wuthering Heights (WH); the weights used below are the length-normalized term vectors for these novels.)
cos(SaS,PaP) ≈
0.789 × 0.832 + 0.515 × 0.555 + 0.335 × 0.0 + 0.0 × 0.0
≈ 0.94
cos(SaS,WH) ≈ 0.79
cos(PaP,WH) ≈ 0.69
Why do we have cos(SaS,PaP) > cos(SaS,WH)?
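Recomputing the example with numpy, using the length-normalized weights quoted above for SaS and PaP:

import numpy as np

sas = np.array([0.789, 0.515, 0.335, 0.0])   # SaS weights from the example
pap = np.array([0.832, 0.555, 0.0,   0.0])   # PaP weights from the example

print(round(float(sas @ pap), 2))            # ~0.94: dot product of unit-length vectors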
Problems – High Dimensionality
• Term-document matrices are very large
• General idea
• Map documents (and terms) to a low-dimensional representation.
• Design a mapping such that the low-dimensional space reflects semantic
associations (latent semantic space).
• Compute document similarity based on the inner product in this latent
semantic space
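A minimal sketch of this pipeline with scikit-learn (the four short documents are invented): map tf-idf document vectors to a 2-dimensional latent space and compare documents by inner products there.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "ship boat ocean voyage",
    "boat ocean sailing trip",
    "car engine wheel tire",
    "truck engine wheel road",
]
X = TfidfVectorizer().fit_transform(docs)                 # documents x terms
Z = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)

print(Z @ Z.T)   # document-document similarities in the latent semantic space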
Latent Semantic Indexing was developed at Bellcore (now Telcordia) in the late 1980s (1988). It was patented in 1989.
http://lsi.argreenhouse.com/lsi/LSI.html
LSA
• But first:
• What is the difference between LSI and LSA?
• LSI refers to using it for indexing or information retrieval.
• LSA refers to everything else.
• Document-document similarities in the reduced space come from
   AkTAk = (UkSkVkT)T(UkSkVkT) = VkSkUkT UkSkVkT = (VkSk)(VkSk)T
• Since Vk = AkTUkSk-1,
  we should transform the query q to qk in the same way:
   qk = qTUkSk-1
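A small numpy sketch of this query transformation (the tiny term-document matrix and the query are invented):

import numpy as np

A = np.array([[1., 1., 0.],     # terms x documents (invented counts)
              [1., 0., 0.],
              [0., 1., 1.],
              [0., 0., 1.]])
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2
Uk, Sk = U[:, :k], np.diag(s[:k])
Vk = Vt[:k, :].T                            # each row: one document in the latent space

q = np.array([1., 1., 0., 0.])              # query as a term vector
qk = q @ Uk @ np.linalg.inv(Sk)             # q_k = q^T U_k S_k^{-1}

sims = (Vk @ qk) / (np.linalg.norm(Vk, axis=1) * np.linalg.norm(qk))
print(sims)                                 # cosine similarity of each document to q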
Singular Value Decomposition
For an M N matrix A of rank r there exists a factorization
(Singular Value Decomposition = SVD) as follows:
   A = U Σ VT
where U is M × M, Σ is M × N, and V is N × N, with
   Σ = diag(σ1, ..., σr)
on the leading diagonal (zeros elsewhere); the σi are the singular values.
Singular Value Decomposition
• Illustration of SVD dimensions and sparseness
SVD example
Let
   A = [ 1  1
         0  1
         1  0 ]
Thus M = 3, N = 2. Its SVD is
   U = [ 2/√6    0     1/√3
         1/√6  -1/√2  -1/√3
         1/√6   1/√2  -1/√3 ]
   Σ = [ √3  0
          0  1
          0  0 ]
   VT = [ 1/√2   1/√2
          1/√2  -1/√2 ]
Typically, the singular values are arranged in decreasing order.
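This decomposition can be checked with numpy (the signs of the singular vectors may differ from the slide, but the singular values and the product U Σ VT are the same):

import numpy as np

A = np.array([[1., 1.],
              [0., 1.],
              [1., 0.]])
U, s, Vt = np.linalg.svd(A)          # full SVD: U is 3x3, Vt is 2x2
print(s)                             # [1.732..., 1.0] = [sqrt(3), 1]

S = np.zeros((3, 2))
S[0, 0], S[1, 1] = s                 # place the singular values on the diagonal
print(np.allclose(U @ S @ Vt, A))    # True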
Low-rank Approximation
• SVD can be used to compute optimal low-rank approximations.
• Approximation problem: Find Ak of rank k such that
   Ak = argmin_{X : rank(X) = k} ||A - X||_F
  where ||·||_F is the Frobenius norm.
• Solution via the SVD: set the smallest r - k singular values to zero:
   Ak = U diag(σ1, ..., σk, 0, ..., 0) VT
• In column notation, Ak is a sum of k rank-1 matrices:
   Ak = Σ_{i=1}^{k} σi ui viT
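A short numpy sketch of this construction (the matrix is invented): zero out the smallest singular values, or equivalently sum the first k rank-1 terms.

import numpy as np

A = np.random.default_rng(1).normal(size=(5, 4))      # invented M x N matrix
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2
Ak = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]            # keep the k largest singular values

# Same thing as an explicit sum of rank-1 matrices sigma_i u_i v_i^T.
Ak_sum = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(k))
print(np.linalg.matrix_rank(Ak), np.allclose(Ak, Ak_sum))   # 2 True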
Reduced SVD
• If we retain only k singular values and set the rest to 0, then we don't need the parts of the matrices shown in red in the illustration: they are only ever multiplied by zeros.
• Then Σ is k×k, U is M×k, VT is k×N, and Ak is still M×N.
• This is referred to as the reduced SVD.
• It is the convenient (space-saving) and usual form for computational applications.
Approximation error
• How good (bad) is this approximation?
• It’s the best possible, measured by the Frobenius norm of the
error:
   min_{X : rank(X) = k} ||A - X||_F = ||A - Ak||_F = sqrt( σ_{k+1}² + ... + σ_r² )
• For small SVD calculations, you can use the BlueBit calculator at
http://www.bluebit.gr/matrix-calculator
• Some of the singular values are “too small” and thus “negligible”; what counts as “too small” is usually determined empirically.
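A numeric check of the error statement above, on an invented matrix:

import numpy as np

A = np.random.default_rng(2).normal(size=(6, 4))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2
Ak = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
err = np.linalg.norm(A - Ak, "fro")
print(np.isclose(err, np.sqrt(np.sum(s[k:] ** 2))))   # True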
(Figures: the terms' representation and the documents' representation in the reduced latent space.)
Empirical evidence
• Precision at or above median TREC precision
• Top scorer on almost 20% of TREC topics
• Slightly better on average than straight vector
spaces
• Effect of dimensionality:
Dimensions Precision
250 0.367
300 0.371
346 0.374
But why is this clustering?
• We’ve talked about docs, queries,
retrieval and precision here.
• What does this have to do with
clustering?
• Intuition: Dimension reduction through
LSI brings together “related” axes in the
vector space.
Intuition from block matrices
(Figure: an M terms × N documents matrix whose non-zero entries fall into diagonal blocks Block 1, Block 2, ..., Block k, with 0's elsewhere; each block gathers the terms and documents of one topic. A second version of the figure labels Block 1 with car-related terms such as wiper, tire, and V6, alongside Topic 2 and Topic 3.)
Some wild extrapolation
• The “dimensionality” of a corpus is the
number of distinct topics represented in
it.
• More mathematical wild extrapolation:
• if A has a rank k approximation of low
Frobenius error, then there are no more
than k distinct topics in the corpus.
LSI has many other applications
• In many settings in pattern recognition and retrieval, we have
a feature-object matrix.
• For text, the terms are features and the docs are objects.
• Could be opinions and users …
• This matrix may be redundant in dimensionality.
• Can work with low-rank approximation.
• If entries are missing (e.g., users’ opinions), can recover if
dimensionality is low.
• Powerful general analytical technique
• Close, principled analog to clustering methods.