SVMs
• Geometric
– Maximizing Margin
• Kernel Methods
– Making nonlinear decision boundaries linear
– Efficiently!
• Capacity
– Structural Risk Minimization
SVM History
• SVM is a classifier derived from statistical learning
theory by Vapnik and Chervonenkis
• SVM was first introduced by Boser, Guyon and
Vapnik in COLT-92
• SVM became famous when, using pixel maps as
input, it gave accuracy comparable to NNs with
hand-designed features in a handwriting
recognition task
• SVM is closely related to:
– Kernel machines (a generalization of SVMs), large
margin classifiers, reproducing kernel Hilbert space,
Gaussian process, Boosting
Linear Classifiers
f(x, w, b) = sign(w · x − b)
(figure: input x → f → y_est; points labelled +1 and −1)
Any of these separating lines would be fine...
...but which is best?
Classifier Margin
f(x, w, b) = sign(w · x − b)
Define the margin of a linear classifier as the width that the boundary could be increased by before hitting a datapoint.
Maximum Margin
f(x, w, b) = sign(w · x − b)
The maximum margin linear classifier is the linear classifier with the maximum margin.
This is the simplest kind of SVM (called an LSVM).
Linear SVM
Maximum Margin
f(x, w, b) = sign(w · x − b)
Support Vectors are those datapoints that the margin pushes up against.
The maximum margin linear classifier is the linear classifier with the, um, maximum margin.
This is the simplest kind of SVM (called an LSVM).
Linear SVM
Why Maximum Margin?
1. Intuitively this feels safest.
2. If we've made a small error in the location of the boundary (it's been jolted in its perpendicular direction), this gives us the least chance of causing a misclassification.
3. There's some theory (using VC dimension) that is related to (but not the same as) the proposition that this is a good thing.
4. Empirically it works very very well.
(figure: the maximum margin classifier and its support vectors, as on the previous slide)
A "Good" Separator
(figure: X and O datapoints with a separating boundary)
Noise in the Observations
(figure: the same datapoints, each surrounded by a small region of possible noise)
Ruling Out Some Separators
(figure: separators that cut through the noise regions are ruled out)
Lots of Noise
(figure: larger noise regions rule out more separators)
Maximizing the Margin
(figure: the separator that maximizes its distance to the nearest datapoints survives)
Specifying a line and margin
(figure: "Predict Class = +1" zone above the Plus-Plane, the Classifier Boundary, and "Predict Class = −1" zone below the Minus-Plane)
• Plus-plane  = { x : w · x + b = +1 }
• Minus-plane = { x : w · x + b = −1 }
Classify as:
  +1 if w · x + b ≥ 1
  −1 if w · x + b ≤ −1
  Universe explodes if −1 < w · x + b < 1
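To make the classification rule concrete, here is a minimal Python sketch (not from the slides; the example w and b are made up):

```python
import numpy as np

# Classify a point with a given w, b using the plus/minus-plane rule above.
def classify(x, w, b):
    score = np.dot(w, x) + b
    if score >= 1:
        return +1      # on or beyond the plus-plane
    if score <= -1:
        return -1      # on or beyond the minus-plane
    return None        # inside the margin: "universe explodes"

w, b = np.array([2.0, 0.0]), -1.0                 # hypothetical separator
print(classify(np.array([1.5, 0.3]), w, b))       # +1
print(classify(np.array([-0.5, 2.0]), w, b))      # -1
print(classify(np.array([0.5, 0.0]), w, b))       # None (inside the margin)
```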
Computing the margin width
(figure: the planes w · x + b = +1, 0, −1, with M = Margin Width between the Plus-Plane and the Minus-Plane)
How do we compute M in terms of w and b?
• Plus-plane  = { x : w · x + b = +1 }
• Minus-plane = { x : w · x + b = −1 }
Claim: The vector w is perpendicular to the Plus-Plane. Why?
Let u and v be two vectors on the Plus-Plane. What is w · (u − v)?
• Plus-plane  = { x : w · x + b = +1 }
• Minus-plane = { x : w · x + b = −1 }
• The vector w is perpendicular to the Plus-Plane
(here x denotes any location in R^m, not necessarily a datapoint)
• Plus-plane = { x : w . x + b = +1 }
• Minus-plane = { x : w . x + b = -1 }
• The vector w is perpendicular to the Plus Plane
• Let x- be any point on the minus plane
• Let x+ be the closest plus-plane-point to x-.
• Claim: x+ = x- + λ w for some value of λ. Why?
Computing the margin width
(figure: x- on the Minus-Plane and x+, the closest point on the Plus-Plane)
The line from x- to x+ is perpendicular to the planes. So to get from x- to x+, travel some distance in direction w.
• Plus-plane  = { x : w · x + b = +1 }
• Minus-plane = { x : w · x + b = −1 }
• The vector w is perpendicular to the Plus-Plane
• Let x- be any point on the minus-plane
• Let x+ be the closest plus-plane point to x-
• Claim: x+ = x- + λw for some value of λ. Why?
Computing the margin width
What we know:
• w · x+ + b = +1
• w · x- + b = −1
• x+ = x- + λw
• |x+ − x-| = M
It's now easy to get M in terms of w and b.
Computing the margin width
From w · x+ + b = +1 and x+ = x- + λw:
  w · (x- + λw) + b = 1
  ⇒ w · x- + b + λ w·w = 1
  ⇒ −1 + λ w·w = 1
  ⇒ λ = 2 / (w·w)
Computing the margin width
M = |x+ − x-| = |λw| = λ √(w·w) = 2 √(w·w) / (w·w) = 2 / √(w·w)
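As a quick numeric sanity check (a sketch with an arbitrary w and b, not taken from the slides):

```python
import numpy as np

# Pick any x- on the minus-plane (w.x + b = -1), step lambda*w, and confirm
# that the distance travelled equals 2/sqrt(w.w).
w = np.array([3.0, 4.0])
b = 0.5
x_minus = np.array([0.0, (-1.0 - b) / w[1]])   # solves w.x + b = -1 with first coordinate 0
lam = 2.0 / np.dot(w, w)
x_plus = x_minus + lam * w

print(np.dot(w, x_plus) + b)                   # ~ +1.0, so x+ lies on the plus-plane
print(np.linalg.norm(x_plus - x_minus))        # margin width M
print(2.0 / np.sqrt(np.dot(w, w)))             # 2/sqrt(w.w): the same value (0.4)
```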
find w that maximizes c · w
subject to
  w · ai ≤ bi, for i = 1, …, m
  wj ≥ 0, for j = 1, …, n
There are fast algorithms for solving linear programs, including the simplex algorithm and Karmarkar's algorithm.
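For illustration only, a linear program of exactly this form can be handed to an off-the-shelf solver. The data below are made up, and the use of SciPy's linprog is an assumption, not something the slides rely on:

```python
import numpy as np
from scipy.optimize import linprog

# linprog minimizes, so negate c to maximize c.w; bounds enforce w_j >= 0.
c = np.array([1.0, 2.0])
A_ub = np.array([[1.0, 1.0],    # w.a_1 <= b_1
                 [3.0, 1.0]])   # w.a_2 <= b_2
b_ub = np.array([4.0, 6.0])

res = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(res.x, -res.fun)          # optimal w and the maximized value of c.w
```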
Learning via Quadratic Programming
e additional linear equality constraints:
  a(n+1)1 u1 + a(n+1)2 u2 + … + a(n+1)m um = b(n+1)
  a(n+2)1 u1 + a(n+2)2 u2 + … + a(n+2)m um = b(n+2)
    :
  a(n+e)1 u1 + a(n+e)2 u2 + … + a(n+e)m um = b(n+e)
Quadratic Programming
Find arg max_u  c + d^T u + (u^T R u) / 2     (quadratic criterion)
subject to e additional linear equality constraints:
  a(n+1)1 u1 + a(n+1)2 u2 + … + a(n+1)m um = b(n+1)
  a(n+2)1 u1 + a(n+2)2 u2 + … + a(n+2)m um = b(n+2)
    :
  a(n+e)1 u1 + a(n+e)2 u2 + … + a(n+e)m um = b(n+e)
(But you probably don't want to write a QP solver yourself.)
Learning the Maximum Margin Classifier
M = 2 / √(w·w)
Given a guess of w, b we can
• Compute whether all data points are in the correct half-planes
• Compute the margin width
Assume R datapoints, each (xk, yk) where yk = ±1.
What should our quadratic optimization criterion be?
  Minimize w·w
How many constraints will we have? R. What should they be?
  w · xk + b ≥ 1   if yk = 1
  w · xk + b ≤ −1  if yk = −1
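As a sketch (not part of the slides), this quadratic program can be solved with a general-purpose constrained optimizer on a tiny separable toy set; the data and the choice of SciPy's SLSQP solver are illustrative, and real SVM packages use specialised QP solvers:

```python
import numpy as np
from scipy.optimize import minimize

X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -1.0], [-3.0, -2.0]])  # toy data
y = np.array([1, 1, -1, -1])

def objective(v):                       # v = (w1, w2, b); the criterion is w.w
    w = v[:2]
    return np.dot(w, w)

# The R constraints above, written as y_k (w.x_k + b) - 1 >= 0.
constraints = [{'type': 'ineq',
                'fun': lambda v, xk=xk, yk=yk: yk * (np.dot(v[:2], xk) + v[2]) - 1.0}
               for xk, yk in zip(X, y)]

res = minimize(objective, x0=np.array([1.0, 1.0, 0.0]),
               constraints=constraints, method='SLSQP')
w, b = res.x[:2], res.x[2]
print(w, b)
print(2.0 / np.sqrt(np.dot(w, w)))      # the resulting margin width M
```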
This is going to be a problem! Uh-oh! What should we do?
(figure: data that is not linearly separable)
Idea 1:
Find minimum w·w, while minimizing the number of training set errors.
Problem: Two things to minimize makes for an ill-defined optimization.
This is going to be a problem! Uh-oh! What should we do?
Idea 1.1:
Minimize  w·w + C (# train errors)
where C is a tradeoff parameter.
(figure: Class 1 and Class 2 separated with margin m)
Finding the Decision Boundary
• Let {x1, ..., xn} be our data set and let yi ∈ {1, −1} be the class label of xi
• The decision boundary should classify all points correctly, i.e.
    yi (w^T xi + b) ≥ 1, for all i
• The Lagrangian is
    L = ½ w^T w − Σi αi [ yi (w^T xi + b) − 1 ],   with αi ≥ 0
• The Karush-Kuhn-Tucker conditions (setting the gradient of L with respect to w and b to zero) give
    w = Σi αi yi xi    and    Σi αi yi = 0
The Dual Problem
• If we substitute w = Σi αi yi xi into the Lagrangian, we have
    L = Σi αi − ½ Σi Σj αi αj yi yj xi^T xj − b Σi αi yi
• Note that Σi αi yi = 0, so the last term vanishes
• This is a function of αi only
The Dual Problem
• The new objective function is in terms of αi only
• It is known as the dual problem: if we know w, we know all αi; if we know all αi, we know w
• The original problem is known as the primal problem
• The objective function of the dual problem needs to be maximized!
• The dual problem is therefore:
    max_α  Σi αi − ½ Σi Σj αi αj yi yj xi^T xj
    subject to αi ≥ 0 and Σi αi yi = 0
• w can be recovered by  w = Σi αi yi xi
Characteristics of the Solution
• Many of the αi are zero
  – w is a linear combination of a small number of data points
  – This "sparse" representation can be viewed as data compression, as in the construction of a kNN classifier
• xi with non-zero αi are called support vectors (SV)
  – The decision boundary is determined only by the SV
  – Let tj (j = 1, ..., s) be the indices of the s support vectors. We can write
      w = Σ_{j=1}^{s} α_tj y_tj x_tj
• For testing with a new data point z
  – Compute  w^T z + b = Σ_{j=1}^{s} α_tj y_tj (x_tj^T z) + b  and classify z as class 1 if the sum is positive, and class 2 otherwise
  – Note: w need not be formed explicitly
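A minimal sketch of that last point, with made-up support vectors, α values and b: the score for a new z is accumulated directly from the support vectors, without ever forming w:

```python
import numpy as np

sv_alpha = np.array([0.8, 0.6, 1.4])                 # non-zero alphas only (illustrative)
sv_y     = np.array([1.0, -1.0, 1.0])
sv_x     = np.array([[1.0, 2.0], [2.0, 0.5], [0.0, 1.0]])
b = -0.5

z = np.array([1.5, 1.0])
score = np.sum(sv_alpha * sv_y * (sv_x @ z)) + b     # sum_j alpha_tj y_tj (x_tj . z) + b
print('class 1' if score > 0 else 'class 2', score)
```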
A Geometrical Interpretation
(figure: Class 1 and Class 2 points labelled with their α values; the support vectors on the margin have α1 = 0.8, α8 = 0.6 and α6 = 1.4, while all other points have α = 0)
Non-linearly Separable Problems
• We allow "error" ξi in classification; it is based on the output of the discriminant function w^T x + b
• ξi approximates the number of misclassified samples
(figure: Class 1 and Class 2 with some points on the wrong side of the boundary)
Learning Maximum Margin with Noise
M = 2 / √(w·w)
(figure: the planes w · x + b = +1, 0, −1, with slack distances ε11, ε2, ε7 for points on the wrong side of their plane)
Given a guess of w, b we can
• Compute sum of distances of points to their correct zones
• Compute the margin width
Assume R datapoints, each (xk, yk) where yk = ±1.
What should our quadratic optimization criterion be?
  Minimize  ½ w·w + C Σ_{k=1}^{R} εk
How many constraints will we have? What should they be?
  w · xk + b ≥ 1 − εk    if yk = 1
  w · xk + b ≤ −1 + εk   if yk = −1
Learning Maximum Margin with Noise
(m = # input dimensions, R = # records)
Our original (noiseless data) QP had m+1 variables: w1, w2, …, wm, and b.
Our new (noisy data) QP has m+1+R variables: w1, w2, …, wm, b, ε1, …, εR.
What should our quadratic optimization criterion be?
  Minimize  ½ w·w + C Σ_{k=1}^{R} εk
How many constraints will we have? What should they be?
  w · xk + b ≥ 1 − εk    if yk = 1
  w · xk + b ≤ −1 + εk   if yk = −1
Learning Maximum Margin with Noise
M = 2 / √(w·w)
Given a guess of w, b we can
• Compute sum of distances of points to their correct zones
• Compute the margin width
Assume R datapoints, each (xk, yk) where yk = ±1.
What should our quadratic optimization criterion be?
  Minimize  ½ w·w + C Σ_{k=1}^{R} εk
How many constraints will we have? 2R. What should they be?
  w · xk + b ≥ 1 − εk    if yk = 1
  w · xk + b ≤ −1 + εk   if yk = −1
  εk ≥ 0, for all k
An Equivalent Dual QP
Primal:
  Minimize  ½ w·w + C Σ_{k=1}^{R} εk
  subject to
    w · xk + b ≥ 1 − εk    if yk = 1
    w · xk + b ≤ −1 + εk   if yk = −1
    εk ≥ 0, for all k
Dual:
  Maximize  Σ_{k=1}^{R} αk − ½ Σ_{k=1}^{R} Σ_{l=1}^{R} αk αl Qkl    where Qkl = yk yl (xk · xl)
  Subject to these constraints:
    0 ≤ αk ≤ C  ∀k      Σ_{k=1}^{R} αk yk = 0
An Equivalent Dual QP
  Maximize  Σ_{k=1}^{R} αk − ½ Σ_{k=1}^{R} Σ_{l=1}^{R} αk αl Qkl    where Qkl = yk yl (xk · xl)
  Subject to these constraints:
    0 ≤ αk ≤ C  ∀k      Σ_{k=1}^{R} αk yk = 0
Then define:
  w = Σ_{k=1}^{R} αk yk xk
  b = yK (1 − εK) − xK · w,    where K = arg max_k αk
Then classify with:
  f(x, w, b) = sign(w · x − b)
Example XOR problem revisited:
  x1 = (-1, -1), d1 = -1
  x2 = (-1,  1), d2 =  1
  x3 = ( 1, -1), d3 =  1
  x4 = ( 1,  1), d4 = -1
Q(a) = Σ ai − ½ Σ Σ ai aj di dj φ(xi)^T φ(xj)
     = a1 + a2 + a3 + a4 − ½ (9a1² − 2a1a2 − 2a1a3 + 2a1a4 + 9a2² + 2a2a3 − 2a2a4 + 9a3² − 2a3a4 + 9a4²)
To maximize Q, we only need to calculate
  ∂Q(a)/∂ai = 0,   i = 1, ..., 4
(due to the optimality conditions), which gives
  1 = 9a1 − a2 − a3 + a4
  1 = −a1 + 9a2 + a3 − a4
  1 = −a1 + a2 + 9a3 − a4
  1 = a1 − a2 − a3 + 9a4
The solution of which gives the optimal values:
  a0,1 = a0,2 = a0,3 = a0,4 = 1/8
w0 = Σ a0,i di φ(xi) = (1/8) [ −φ(x1) + φ(x2) + φ(x3) − φ(x4) ]
   = ( 0, 0, −1/√2, 0, 0, 0 )^T
where φ(x) = ( 1, x1², √2 x1x2, x2², √2 x1, √2 x2 ) is the feature map induced by the kernel used below.
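A quick numeric check (a sketch) that the 4×4 linear system above really does give ai = 1/8:

```python
import numpy as np

A = np.array([[ 9., -1., -1.,  1.],
              [-1.,  9.,  1., -1.],
              [-1.,  1.,  9., -1.],
              [ 1., -1., -1.,  9.]])
a = np.linalg.solve(A, np.ones(4))
print(a)        # [0.125 0.125 0.125 0.125]
```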
(figure: 1-D data around x = 0 mapped to two dimensions via zk = (xk, xk²), making it linearly separable)
For a non-linearly separable problem we first have to map the data into a feature space, xi → φ(xi), so that it becomes linearly separable.
Given the training data sample {(xi, yi), i = 1, …, N}, find the optimum values of the weight vector w and bias b:
  w = Σi a0,i yi φ(xi)
where a0,i are the optimal Lagrange multipliers determined by maximizing the following objective function:
  Q(a) = Σ_{i=1}^{N} ai − ½ Σ_{i=1}^{N} Σ_{j=1}^{N} ai aj di dj φ(xi)^T φ(xj)
Look at the equation of the boundary in the feature space and use the optimality conditions derived from the Lagrangian formulation.
The hyperplane is defined by
  Σ_{j=1}^{m1} wj φj(x) + b = 0
or
  Σ_{j=0}^{m1} wj φj(x) = 0,    where φ0(x) = 1
Thus:
  Σ_{i=1}^{N} ai di φ(xi)^T φ(x) = 0
and so the boundary is:
  Σ_{i=1}^{N} ai di K(x, xi) = 0
and Output = w^T φ(x) = Σ_{i=1}^{N} ai di K(x, xi)
where:
  K(x, xi) = Σ_{j=0}^{m1} φj(x) φj(xi)
In the XOR problem, we chose to use the kernel function:
  K(x, xi) = (x^T xi + 1)²
           = 1 + x1² xi1² + 2 x1 x2 xi1 xi2 + x2² xi2² + 2 x1 xi1 + 2 x2 xi2
so the boundary is  Σ_{i=1}^{N} ai di K(x, xi) = 0.
We therefore only need a suitable choice of kernel function (cf. Mercer's Theorem).
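For illustration, the same polynomial kernel handed to a standard SVM package separates XOR; scikit-learn and its parameter names are an assumption here, not part of the slides:

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
d = np.array([-1, 1, 1, -1])

# kernel='poly' with gamma=1, coef0=1, degree=2 gives K(x, xi) = (x.xi + 1)^2
clf = SVC(kernel='poly', degree=2, gamma=1.0, coef0=1.0, C=1e6)
clf.fit(X, d)
print(clf.predict(X))       # [-1  1  1 -1]: the XOR labels are reproduced
print(clf.dual_coef_)       # a_i d_i for the support vectors (all four points here)
```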
Subject to these constraints:
  0 ≤ αk ≤ C  ∀k      Σ_{k=1}^{R} αk yk = 0
Each dot product Φ(xk) · Φ(xl) requires m²/2 additions and multiplications.
The whole thing costs R² m² / 4. Yeeks!
Quadratic Dot Products
  Φ(a) = ( 1, √2 a1, …, √2 am, a1², …, am², √2 a1a2, √2 a1a3, …, √2 am−1 am )
  Φ(b) = ( 1, √2 b1, …, √2 bm, b1², …, bm², √2 b1b2, √2 b1b3, …, √2 bm−1 bm )
  Φ(a) · Φ(b) = 1 + 2 Σ_{i=1}^{m} ai bi + Σ_{i=1}^{m} ai² bi² + Σ_{i=1}^{m} Σ_{j=i+1}^{m} 2 ai aj bi bj
Quadratic Dot Products
Just out of casual, innocent interest, let's look at another function of a and b:
  (a·b + 1)² = (a·b)² + 2 a·b + 1
             = ( Σ_{i=1}^{m} ai bi )² + 2 Σ_{i=1}^{m} ai bi + 1
             = Σ_{i=1}^{m} Σ_{j=1}^{m} ai bi aj bj + 2 Σ_{i=1}^{m} ai bi + 1
             = Σ_{i=1}^{m} (ai bi)² + 2 Σ_{i=1}^{m} Σ_{j=i+1}^{m} ai bi aj bj + 2 Σ_{i=1}^{m} ai bi + 1
Compare:
  Φ(a) · Φ(b) = 1 + 2 Σ_{i=1}^{m} ai bi + Σ_{i=1}^{m} ai² bi² + Σ_{i=1}^{m} Σ_{j=i+1}^{m} 2 ai aj bi bj
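A quick numeric check (a sketch) that the explicit quadratic feature map and the kernel (a·b + 1)² really do agree:

```python
import numpy as np

def phi(x):
    # (1, sqrt(2)x_i, x_i^2, sqrt(2)x_i x_j for i<j), as on the slide above
    m = len(x)
    cross = [np.sqrt(2) * x[i] * x[j] for i in range(m) for j in range(i + 1, m)]
    return np.concatenate(([1.0], np.sqrt(2) * x, x ** 2, cross))

a = np.array([0.5, -1.0, 2.0])
b = np.array([1.5, 0.25, -0.5])
print(phi(a) @ phi(b))        # explicit feature-space dot product
print((a @ b + 1.0) ** 2)     # kernel trick: the same value, using only m multiplications for a.b
```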
Subject to these constraints:
  0 ≤ αk ≤ C  ∀k      Σ_{k=1}^{R} αk yk = 0
Each dot product now only requires m additions and multiplications.
Subject to these constraints:
  0 ≤ αk ≤ C  ∀k      Σ_{k=1}^{R} αk yk = 0
Then define:
  w = Σ_{k s.t. αk>0} αk yk Φ(xk)
  b = yK (1 − εK) − xK · w,    where K = arg max_k αk
Then classify with:
  f(x, w, b) = sign(w · Φ(x) − b)
QP with Quintic basis functions
We must do R²/2 dot products to get this matrix ready. In 100-d, each dot product now needs 103 operations instead of 75 million.
  Maximize  Σ_{k=1}^{R} αk − ½ Σ_{k=1}^{R} Σ_{l=1}^{R} αk αl Qkl    where Qkl = yk yl (Φ(xk) · Φ(xl))
  Subject to these constraints:
    0 ≤ αk ≤ C  ∀k      Σ_{k=1}^{R} αk yk = 0
Then define:
  w = Σ_{k s.t. αk>0} αk yk Φ(xk)
  b = yK (1 − εK) − xK · w,    where K = arg max_k αk
Then classify with:
  f(x, w, b) = sign(w · Φ(x) − b)
But there are still worrying things lurking away. What are they?
• The fear of overfitting with this enormous number of terms
• The evaluation phase (doing a set of predictions on a test set) will be very expensive (why?)
QP with Quintic basis functions
(the same QP and definitions as above)
But there are still worrying things lurking away. What are they?
• The fear of overfitting with this enormous number of terms
  The use of Maximum Margin magically makes this not a problem.
• The evaluation phase (doing a set of predictions on a test set) will be very expensive (why?)
  Because each w · Φ(x) (see below) needs 75 million operations. What can be done?
QP with Quintic basis functions
(the same QP and definitions as above)
Each w · Φ(x) needs 75 million operations. What can be done? Evaluate it through the kernel instead:
  w · Φ(x) = Σ_{k s.t. αk>0} αk yk Φ(xk) · Φ(x) = Σ_{k s.t. αk>0} αk yk (xk · x + 1)^5
Only S·m operations (S = # support vectors)
Then classify with:
  f(x, w, b) = sign(w · Φ(x) − b)
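A sketch of this kernel-trick evaluation; the support vectors, α values and b below are made up, standing in for the output of a trained SVM:

```python
import numpy as np

def predict(x, alphas, ys, Xs, b):
    # f(x) = sign( sum_k alpha_k y_k (x_k.x + 1)^5  -  b ), using only the S support vectors
    score = sum(a_k * y_k * (x_k @ x + 1.0) ** 5
                for a_k, y_k, x_k in zip(alphas, ys, Xs))
    return np.sign(score - b)

Xs = np.array([[1.0, 0.0], [0.0, 1.0]])                   # two "support vectors" (illustrative)
alphas, ys, b = np.array([0.3, 0.3]), np.array([1.0, -1.0]), 0.0
print(predict(np.array([2.0, 0.0]), alphas, ys, Xs, b))   # 1.0
```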
QP with Quintic basis functions
  Maximize  Σ_{k=1}^{R} αk − ½ Σ_{k=1}^{R} Σ_{l=1}^{R} αk αl Qkl    where Qkl = yk yl (Φ(xk) · Φ(xl))
  Subject to these constraints:
    0 ≤ αk ≤ C  ∀k      Σ_{k=1}^{R} αk yk = 0
Why SVMs don't overfit as much as you'd think:
No matter what the basis function, there are really only up to R parameters (α1, α2, …, αR), and usually most are set to zero by the Maximum Margin.