
Support Vector Machines

By Ashis Kumar Dhara
Linear Classifiers

[Figure: the classifier f takes an input x and produces an estimated label y_est; training points labelled +1 and -1 are plotted in the plane.]

f(x, w, b) = sign(w · x + b)

How would you classify this data? Any separating line splits the plane into a region where w · x + b > 0 (predicted +1) and a region where w · x + b < 0 (predicted -1).

Many different lines separate the two classes, and any of these would be fine...

...but which is best? A badly placed line can even leave points misclassified, e.g. a -1 point falling on the +1 side.
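As a concrete illustration (a minimal sketch, not part of the slides; the toy points, labels and the particular choice of w and b are made up), this is all a linear classifier does:

```python
import numpy as np

def predict(X, w, b):
    """Return sign(w . x + b), i.e. +1 or -1, for each row x of X."""
    return np.sign(X @ w + b)

X = np.array([[2.0, 2.0], [3.0, 1.5], [-1.0, -2.0], [-2.0, -0.5]])
y = np.array([+1, +1, -1, -1])
w, b = np.array([1.0, 1.0]), -0.5         # one of many lines that separate this data

print(predict(X, w, b))                   # matches y for this choice of (w, b)
```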
Classifier Margin

f(x, w, b) = sign(w · x + b)

Define the margin of a linear classifier as the width that the boundary could be increased by before hitting a datapoint.
Maximum Margin

f(x, w, b) = sign(w · x + b)

The maximum margin linear classifier is the linear classifier with the, um, maximum margin. This is the simplest kind of SVM (called a Linear SVM, or LSVM). The support vectors are those datapoints that the margin pushes up against.

1. Maximizing the margin is good according to intuition and PAC theory.
2. It implies that only the support vectors are important; the other training examples are ignorable.
3. Empirically it works very, very well.
Linear SVM Mathematically

What we know, writing x⁺ and x⁻ for the closest points on either side of the boundary:

$$\mathbf{w} \cdot x^{+} + b = +1, \qquad \mathbf{w} \cdot x^{-} + b = -1, \qquad \mathbf{w} \cdot (x^{+} - x^{-}) = 2$$

M = margin width:

$$M = (x^{+} - x^{-}) \cdot \frac{\mathbf{w}}{\|\mathbf{w}\|} = \frac{2}{\|\mathbf{w}\|}$$
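A quick numeric check of this identity (a sketch with made-up numbers, not from the slides): for a canonical hyperplane with w · x⁺ + b = +1 and w · x⁻ + b = -1, the distance between the two planes is 2 / ||w||.

```python
import numpy as np

w, b = np.array([3.0, 4.0]), 1.0          # ||w|| = 5
x_minus = np.array([0.0, -0.5])           # w . x- + b = -1
x_plus  = x_minus + 2 * w / np.dot(w, w)  # step across the margin along w

print(np.dot(w, x_plus) + b)              # +1.0, so x+ lies on the plus-plane
print(np.dot(w, x_plus - x_minus))        # 2.0
print(2 / np.linalg.norm(w))              # margin M = 0.4
print(np.linalg.norm(x_plus - x_minus))   # same value: the distance between the planes
```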
Linear SVM Mathematically

Goal:

1) Correctly classify all training data:
$$\mathbf{w}^{T}x_i + b \ge +1 \ \text{if}\ y_i = +1, \qquad \mathbf{w}^{T}x_i + b \le -1 \ \text{if}\ y_i = -1,$$
i.e. $y_i(\mathbf{w}^{T}x_i + b) \ge 1$ for all $i$.

2) Maximize the margin $M = 2/\|\mathbf{w}\|$, which is the same as minimizing $\tfrac{1}{2}\mathbf{w}^{T}\mathbf{w}$.

We can formulate a quadratic optimization problem and solve for w and b:

$$\text{Minimize } \Phi(\mathbf{w}) = \tfrac{1}{2}\mathbf{w}^{T}\mathbf{w} \quad \text{subject to} \quad y_i(\mathbf{w}^{T}x_i + b) \ge 1 \ \text{for all } i.$$
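As an illustration of what this quadratic program looks like in practice, here is a sketch (not from the slides; the toy data and the use of SciPy's general-purpose SLSQP solver are our own choices, and real SVM packages use specialised solvers): minimize ½ wᵀw under the constraints y_i(w · x_i + b) ≥ 1.

```python
import numpy as np
from scipy.optimize import minimize

X = np.array([[2.0, 2.0], [3.0, 1.5], [-1.0, -2.0], [-2.0, -0.5]])
y = np.array([+1.0, +1.0, -1.0, -1.0])

def objective(v):                         # v = [w1, w2, b]
    w = v[:2]
    return 0.5 * np.dot(w, w)             # (1/2) w^T w

# One inequality constraint per training point: y_i (w . x_i + b) - 1 >= 0
constraints = [{"type": "ineq",
                "fun": lambda v, xi=xi, yi=yi: yi * (np.dot(v[:2], xi) + v[2]) - 1.0}
               for xi, yi in zip(X, y)]

res = minimize(objective, x0=np.array([1.0, 1.0, 0.0]),
               method="SLSQP", constraints=constraints)
w, b = res.x[:2], res.x[2]
print("w =", w, " b =", b, " margin =", 2 / np.linalg.norm(w))
```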
Solving the Optimization Problem

We need to optimize a quadratic function subject to linear constraints. Quadratic optimization problems are a well-known class of mathematical programming problems, and many (rather intricate) algorithms exist for solving them.

Primal problem:

$$\text{Find } \mathbf{w}, b \text{ such that } \Phi(\mathbf{w}) = \tfrac{1}{2}\mathbf{w}^{T}\mathbf{w} \text{ is minimized, and for all } \{(x_i, y_i)\}:\ y_i(\mathbf{w}^{T}x_i + b) \ge 1.$$

The solution involves constructing a dual problem in which a Lagrange multiplier $\alpha_i$ is associated with every constraint of the primal problem:

$$\text{Find } \alpha_1, \ldots, \alpha_N \text{ such that } Q(\alpha) = \sum_i \alpha_i - \tfrac{1}{2}\sum_i\sum_j \alpha_i\alpha_j y_i y_j\, x_i^{T}x_j \text{ is maximized, subject to}$$
$$\text{(1)}\ \sum_i \alpha_i y_i = 0 \qquad \text{(2)}\ \alpha_i \ge 0 \ \text{for all } \alpha_i.$$
The Optimization Problem Solution

The solution has the form:

$$\mathbf{w} = \sum_i \alpha_i y_i x_i, \qquad b = y_k - \mathbf{w}^{T}x_k \ \text{for any } x_k \text{ with } \alpha_k \ne 0.$$

Each non-zero $\alpha_i$ indicates that the corresponding $x_i$ is a support vector. The classifying function then has the form:

$$f(x) = \sum_i \alpha_i y_i\, x_i^{T}x + b$$

Notice that it relies on an inner product between the test point x and the support vectors x_i; we will return to this later. Also keep in mind that solving the optimization problem involved computing the inner products x_iᵀx_j between all pairs of training points.
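Putting the dual problem and this recovery step together, here is a sketch (the toy data and the use of SciPy's SLSQP solver are our own choices, not from the slides): maximize Q(α), then read off w, b and the classifying function.

```python
import numpy as np
from scipy.optimize import minimize

X = np.array([[2.0, 2.0], [3.0, 1.5], [-1.0, -2.0], [-2.0, -0.5]])
y = np.array([+1.0, +1.0, -1.0, -1.0])
N = len(y)
Q = (y[:, None] * y[None, :]) * (X @ X.T)       # Q_ij = y_i y_j x_i^T x_j

def neg_dual(a):                                # maximize Q(a)  <=>  minimize -Q(a)
    return -(a.sum() - 0.5 * a @ Q @ a)

res = minimize(neg_dual, x0=np.full(N, 0.1), method="SLSQP",
               bounds=[(0.0, None)] * N,                              # alpha_i >= 0
               constraints=[{"type": "eq", "fun": lambda a: a @ y}])  # sum alpha_i y_i = 0
alpha = res.x

w = (alpha * y) @ X                             # w = sum_i alpha_i y_i x_i
k = int(np.argmax(alpha))                       # a point with alpha_k > 0, i.e. a support vector
b = y[k] - np.dot(w, X[k])                      # b = y_k - w^T x_k

def f(x):                                       # f(x) = sign(sum_i alpha_i y_i x_i^T x + b)
    return np.sign(np.dot(alpha * y, X @ x) + b)

print("alpha =", np.round(alpha, 3), " w =", w, " b =", b, " f([2, 2]) =", f(np.array([2.0, 2.0])))
```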
Dataset with noise

Hard margin: so far we require that all data points be classified correctly
- no training error.
What if the training set is noisy?
- Solution 1: use very powerful kernels... which leads to OVERFITTING!
Soft Margin Classification

Slack variables ξ_i can be added to allow misclassification of difficult or noisy examples (in the figure, the slacks ξ₂, ξ₇ and ξ₁₁ mark the violating points).

What should our quadratic optimization criterion be? Minimize

$$\frac{1}{2}\,\mathbf{w} \cdot \mathbf{w} + C\sum_{k=1}^{R}\xi_k$$
Hard Margin vs. Soft Margin

The old formulation:

$$\text{Find } \mathbf{w}, b \text{ such that } \Phi(\mathbf{w}) = \tfrac{1}{2}\mathbf{w}^{T}\mathbf{w} \text{ is minimized, and for all } \{(x_i, y_i)\}:\ y_i(\mathbf{w}^{T}x_i + b) \ge 1.$$

The new formulation, incorporating slack variables:

$$\text{Find } \mathbf{w}, b \text{ such that } \Phi(\mathbf{w}) = \tfrac{1}{2}\mathbf{w}^{T}\mathbf{w} + C\sum_i \xi_i \text{ is minimized, and for all } \{(x_i, y_i)\}:\ y_i(\mathbf{w}^{T}x_i + b) \ge 1 - \xi_i,\ \ \xi_i \ge 0.$$

The parameter C can be viewed as a way to control overfitting.
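For intuition about C, here is a scikit-learn sketch (the library, the toy data and the C values are our own choices; scikit-learn's SVC with a linear kernel solves essentially this soft-margin problem):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(+2.0, 1.0, (20, 2)), rng.normal(-2.0, 1.0, (20, 2))])
y = np.array([+1] * 20 + [-1] * 20)

for C in (0.01, 1.0, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    w = clf.coef_[0]
    print(f"C={C:>6}: margin = {2 / np.linalg.norm(w):.3f}, "
          f"support vectors = {len(clf.support_)}")

# Small C tolerates more slack (wider margin, more support vectors);
# large C penalises violations heavily (narrower margin).
```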
Linear SVMs: Overview

The classifier is a separating hyperplane. The most important training points are the support vectors; they define the hyperplane. Quadratic optimization algorithms can identify which training points x_i are support vectors, i.e. which have non-zero Lagrangian multipliers α_i.

Both in the dual formulation of the problem and in the solution, the training points appear only inside dot products:

$$\text{Find } \alpha_1, \ldots, \alpha_N \text{ such that } Q(\alpha) = \sum_i \alpha_i - \tfrac{1}{2}\sum_i\sum_j \alpha_i\alpha_j y_i y_j\, x_i^{T}x_j \text{ is maximized, subject to}$$
$$\text{(1)}\ \sum_i \alpha_i y_i = 0 \qquad \text{(2)}\ 0 \le \alpha_i \le C \ \text{for all } \alpha_i$$

$$f(x) = \sum_i \alpha_i y_i\, x_i^{T}x + b$$
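These quantities are exactly what an SVM library hands back after training; a small sketch (scikit-learn assumed, and the attribute names are that library's, not the slides'):

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[2.0, 2.0], [3.0, 1.5], [-1.0, -2.0], [-2.0, -0.5]])
y = np.array([+1, +1, -1, -1])
clf = SVC(kernel="linear", C=1.0).fit(X, y)

print(clf.support_vectors_)   # the x_i with non-zero alpha_i
print(clf.dual_coef_)         # y_i * alpha_i for those points (bounded by C in magnitude)
print(clf.intercept_)         # b
```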
Non-linear SVMs

Datasets that are linearly separable with some noise work out great. But what are we going to do if the dataset is just too hard? How about mapping the data to a higher-dimensional space?

[Figure: three 1-D datasets drawn on the x axis. The first can be split by a single threshold on x; the second cannot; in the third, each point x is also given the coordinate x², and the classes become separable in the (x, x²) plane.]
Non-linear SVMs: Feature spaces

General idea: the original input space can always be mapped to some higher-dimensional feature space where the training set is separable:

$$\Phi:\ x \rightarrow \varphi(x)$$
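A minimal sketch of the 1-D picture above (the data points are made up): mapping x to φ(x) = (x, x²) turns a pattern with the -1 points in the middle, which no single threshold on x can separate, into a linearly separable one.

```python
import numpy as np

x = np.array([-3.0, -2.5, -0.5, 0.0, 0.7, 2.2, 3.1])
y = np.array([+1, +1, -1, -1, -1, +1, +1])

phi = np.column_stack([x, x ** 2])      # map each x to (x, x^2)

# In feature space the classes are split by the horizontal line x^2 = 2,
# i.e. w = (0, 1), b = -2:
w, b = np.array([0.0, 1.0]), -2.0
print(np.sign(phi @ w + b))             # matches y
```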
The Kernel Trick

The linear classifier relies on a dot product between vectors, K(x_i, x_j) = x_iᵀx_j. If every data point is mapped into a high-dimensional space via some transformation Φ: x → φ(x), the dot product becomes

$$K(x_i, x_j) = \varphi(x_i)^{T}\varphi(x_j).$$

A kernel function is a function that corresponds to an inner product in some expanded feature space.

Example: 2-dimensional vectors x = [x₁ x₂]; let K(x_i, x_j) = (1 + x_iᵀx_j)². We need to show that K(x_i, x_j) = φ(x_i)ᵀφ(x_j):

$$\begin{aligned}
K(x_i, x_j) &= (1 + x_i^{T}x_j)^2 \\
&= 1 + x_{i1}^2 x_{j1}^2 + 2\,x_{i1}x_{j1}x_{i2}x_{j2} + x_{i2}^2 x_{j2}^2 + 2\,x_{i1}x_{j1} + 2\,x_{i2}x_{j2} \\
&= [1,\ x_{i1}^2,\ \sqrt{2}\,x_{i1}x_{i2},\ x_{i2}^2,\ \sqrt{2}\,x_{i1},\ \sqrt{2}\,x_{i2}]^{T}\,[1,\ x_{j1}^2,\ \sqrt{2}\,x_{j1}x_{j2},\ x_{j2}^2,\ \sqrt{2}\,x_{j1},\ \sqrt{2}\,x_{j2}] \\
&= \varphi(x_i)^{T}\varphi(x_j), \quad \text{where } \varphi(x) = [1,\ x_1^2,\ \sqrt{2}\,x_1 x_2,\ x_2^2,\ \sqrt{2}\,x_1,\ \sqrt{2}\,x_2].
\end{aligned}$$

What Functions are Kernels?

For some functions K(x_i, x_j), checking that K(x_i, x_j) = φ(x_i)ᵀφ(x_j) can be cumbersome.

Mercer's theorem: every positive semi-definite symmetric function is a kernel. Positive semi-definite symmetric functions correspond to a positive semi-definite symmetric Gram matrix:

$$K = \begin{pmatrix}
K(x_1, x_1) & K(x_1, x_2) & K(x_1, x_3) & \cdots & K(x_1, x_N) \\
K(x_2, x_1) & K(x_2, x_2) & K(x_2, x_3) & \cdots & K(x_2, x_N) \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
K(x_N, x_1) & K(x_N, x_2) & K(x_N, x_3) & \cdots & K(x_N, x_N)
\end{pmatrix}$$
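In practice this condition is easy to probe numerically; a sketch (the random data and the choice of an RBF kernel are our own): build the Gram matrix and check that its eigenvalues are non-negative up to round-off.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 3))

def rbf(a, b, sigma=1.0):
    return np.exp(-np.linalg.norm(a - b) ** 2 / (2 * sigma ** 2))

K = np.array([[rbf(xi, xj) for xj in X] for xi in X])   # Gram matrix
eigvals = np.linalg.eigvalsh(K)                         # K is symmetric
print(np.all(eigvals >= -1e-10))                        # True: positive semi-definite
```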
Examples of Kernel Functions

Linear: $K(x_i, x_j) = x_i^{T}x_j$

Polynomial of power p: $K(x_i, x_j) = (1 + x_i^{T}x_j)^{p}$

Gaussian (radial-basis function network):

$$K(x_i, x_j) = \exp\left(-\frac{\|x_i - x_j\|^2}{2\sigma^2}\right)$$

Sigmoid: $K(x_i, x_j) = \tanh(\beta_0\, x_i^{T}x_j + \beta_1)$
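The same four kernels written out directly (a sketch; the parameter names and default values are ours, not the slides'):

```python
import numpy as np

def linear_kernel(xi, xj):
    return np.dot(xi, xj)

def polynomial_kernel(xi, xj, p=2):
    return (1.0 + np.dot(xi, xj)) ** p

def gaussian_kernel(xi, xj, sigma=1.0):
    return np.exp(-np.linalg.norm(xi - xj) ** 2 / (2.0 * sigma ** 2))

def sigmoid_kernel(xi, xj, beta0=1.0, beta1=-1.0):
    # Note: the sigmoid "kernel" satisfies Mercer's condition only for some
    # choices of beta0 and beta1.
    return np.tanh(beta0 * np.dot(xi, xj) + beta1)
```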
Non-linear SVMs Mathematically

Dual problem formulation:

$$\text{Find } \alpha_1, \ldots, \alpha_N \text{ such that } Q(\alpha) = \sum_i \alpha_i - \tfrac{1}{2}\sum_i\sum_j \alpha_i\alpha_j y_i y_j\, K(x_i, x_j) \text{ is maximized, subject to}$$
$$\text{(1)}\ \sum_i \alpha_i y_i = 0 \qquad \text{(2)}\ \alpha_i \ge 0 \ \text{for all } \alpha_i$$

The solution is:

$$f(x) = \sum_i \alpha_i y_i\, K(x_i, x) + b$$

The optimization techniques for finding the $\alpha_i$ remain the same!
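To see this formula at work, a sketch using scikit-learn (the library, data and parameters are our own choices): fit an RBF-kernel SVM and rebuild its decision function by hand from the support vectors and their coefficients α_i y_i.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 2))
y = np.where(np.linalg.norm(X, axis=1) < 1.0, -1, +1)      # ring-shaped classes

gamma = 0.5
clf = SVC(kernel="rbf", gamma=gamma, C=10.0).fit(X, y)

def rbf(a, b):
    return np.exp(-gamma * np.linalg.norm(a - b) ** 2)     # the kernel SVC uses

x_test = np.array([0.2, -0.1])
manual = sum(coef * rbf(sv, x_test)
             for coef, sv in zip(clf.dual_coef_[0], clf.support_vectors_)) + clf.intercept_[0]
print(manual, clf.decision_function(x_test.reshape(1, -1))[0])   # the two values agree
```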
Nonlinear SVM - Overview

An SVM locates a separating hyperplane in the feature space and classifies points in that space. It does not need to represent the feature space explicitly; it only needs a kernel function. The kernel function plays the role of the dot product in the feature space.
Properties of SVM

- Flexibility in choosing a similarity function
- Sparseness of the solution when dealing with large data sets: only the support vectors are used to specify the separating hyperplane
- Ability to handle large feature spaces: the complexity does not depend on the dimensionality of the feature space
- Overfitting can be controlled by the soft margin approach
- Nice math property: a simple convex optimization problem which is guaranteed to converge to a single global solution
- Feature selection
Weakness of SVM

It is sensitive to noise
- A relatively small number of mislabeled examples can dramatically decrease the performance.

It only considers two classes
- How do we do multi-class classification with an SVM?
- Answer:
  1) With output arity m, learn m SVMs:
     SVM 1 learns "Output == 1" vs "Output != 1"
     SVM 2 learns "Output == 2" vs "Output != 2"
     ...
     SVM m learns "Output == m" vs "Output != m"
  2) To predict the output for a new input, predict with each SVM and find out which one puts the prediction the furthest into the positive region (a sketch follows below).
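A sketch of that one-vs-rest recipe (scikit-learn and the three-cluster toy data are our own choices, not from the slides):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
centers = np.array([[0.0, 4.0], [4.0, 0.0], [-4.0, -4.0]])
X = np.vstack([rng.normal(c, 1.0, (30, 2)) for c in centers])
y = np.repeat([0, 1, 2], 30)

# SVM k learns "class k" vs "not class k"
classifiers = [SVC(kernel="linear", C=1.0).fit(X, (y == k).astype(int)) for k in range(3)]

def predict(x):
    scores = [clf.decision_function(x.reshape(1, -1))[0] for clf in classifiers]
    return int(np.argmax(scores))            # the SVM furthest into its positive region wins

print(predict(np.array([0.5, 3.5])), predict(np.array([-3.0, -4.5])))   # most likely 0 and 2
```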

Some Issues

Choice of kernel
- A Gaussian or polynomial kernel is the default
- If these prove ineffective, more elaborate kernels are needed
- Domain experts can give assistance in formulating appropriate similarity measures

Choice of kernel parameters
- e.g. σ in the Gaussian kernel
- σ is roughly the distance between the closest points with different classifications
- In the absence of reliable criteria, applications rely on the use of a validation set or cross-validation to set such parameters (a sketch follows below)

Optimization criterion, hard margin vs. soft margin
- a lengthy series of experiments in which various parameters are tested
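One common way to do that selection, sketched with scikit-learn (the library, the grid values and the toy data are ours): a cross-validated grid search over C and the kernel width.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 2))
y = np.where(X[:, 0] * X[:, 1] > 0, +1, -1)      # XOR-like labels, need a non-linear kernel

param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1, 10]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5).fit(X, y)
print(search.best_params_, search.best_score_)
```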
SVM Kernel Functions

K(a, b) = (a · b + 1)^d is an example of an SVM kernel function. Beyond polynomials, there are other very high dimensional basis functions that can be made practical by finding the right kernel function.

Radial-basis-style kernel function:

$$K(a, b) = \exp\left(-\frac{(a - b)^2}{2\sigma^2}\right)$$

Neural-net-style kernel function:

$$K(a, b) = \tanh(\kappa\, a \cdot b - \delta)$$
Nonlinear Kernel (I)
Nonlinear Kernel (II)
Diffusion Kernel

A kernel function describes the correlation or similarity between two data points.

Suppose we have a function s(x, y) that describes the similarity between two data points, and assume it is non-negative and symmetric. How can we generate a kernel function based on this similarity function? A graph theory approach:
Diffusion Kernel

Create a graph for the data points:
- Each vertex corresponds to a data point
- The weight of each edge is the similarity s(x, y)

Graph Laplacian:

$$L_{i,j} = \begin{cases} 1 & x_i \text{ and } x_j \text{ are connected} \\ -\sum_{x_k \in N(x_i)} 1 & i = j \\ 0 & \text{otherwise} \end{cases}$$

A diffusion kernel can be defined as

$$K = e^{\beta L}$$
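A sketch of this construction (the data, the 0/1 connectivity threshold and the value of β are our own choices): build the Laplacian from an adjacency matrix and form K = e^{βL} with SciPy's matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(5)
X = rng.normal(size=(6, 2))

# Similarity s(x, y), then a 0/1 adjacency: "connected" if similar enough
S = np.exp(-np.array([[np.linalg.norm(a - b) ** 2 for b in X] for a in X]))
A = (S > 0.3).astype(float)
np.fill_diagonal(A, 0.0)

L = A - np.diag(A.sum(axis=1))     # L_ij = 1 if connected, L_ii = -degree(i), 0 otherwise
beta = 0.5
K = expm(beta * L)                 # diffusion kernel K = e^{beta L}

print(np.allclose(K, K.T), np.all(np.linalg.eigvalsh(K) > 0))   # symmetric, positive definite
```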

Computing Diffusion Kernel

Singular value decomposition of the Laplacian L (since L is symmetric, this is its eigendecomposition):

$$L = U\Lambda U^{T} = (\mathbf{u}_1, \mathbf{u}_2, \ldots, \mathbf{u}_m)\,\mathrm{diag}(\lambda_1, \ldots, \lambda_m)\,(\mathbf{u}_1, \mathbf{u}_2, \ldots, \mathbf{u}_m)^{T} = \sum_{i=1}^{m}\lambda_i\, \mathbf{u}_i \mathbf{u}_i^{T}, \qquad \mathbf{u}_i^{T}\mathbf{u}_j = \delta_{i,j}$$

What is L²?

$$L^{2} = \left(\sum_{i=1}^{m}\lambda_i\, \mathbf{u}_i \mathbf{u}_i^{T}\right)^{2} = \sum_{i,j=1}^{m}\lambda_i \lambda_j\, \mathbf{u}_i \mathbf{u}_i^{T}\mathbf{u}_j \mathbf{u}_j^{T} = \sum_{i=1}^{m}\lambda_i^{2}\, \mathbf{u}_i \mathbf{u}_i^{T}$$
Computing Diffusion Kernel

What about Lⁿ?

$$L^{n} = \sum_{i=1}^{m}\lambda_i^{n}\, \mathbf{u}_i \mathbf{u}_i^{T}$$

Compute the diffusion kernel:

$$e^{\beta L} = \sum_{n=0}^{\infty}\frac{\beta^{n}}{n!}L^{n} = \sum_{n=0}^{\infty}\frac{\beta^{n}}{n!}\sum_{i=1}^{m}\lambda_i^{n}\, \mathbf{u}_i \mathbf{u}_i^{T} = \sum_{i=1}^{m}\left(\sum_{n=0}^{\infty}\frac{(\beta\lambda_i)^{n}}{n!}\right)\mathbf{u}_i \mathbf{u}_i^{T} = \sum_{i=1}^{m} e^{\beta\lambda_i}\, \mathbf{u}_i \mathbf{u}_i^{T}$$
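A numeric check of this identity (a sketch; the random graph and β are made up): computing e^{βL} through the eigendecomposition gives the same kernel as a library matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(6)
A = (rng.random((6, 6)) > 0.5).astype(float)
A = np.triu(A, 1)
A = A + A.T                                    # symmetric 0/1 adjacency
L = A - np.diag(A.sum(axis=1))                 # graph Laplacian, as above

beta = 0.5
lam, U = np.linalg.eigh(L)                     # L = U diag(lam) U^T
K_eig = U @ np.diag(np.exp(beta * lam)) @ U.T  # sum_i e^{beta lambda_i} u_i u_i^T
K_lib = expm(beta * L)

print(np.allclose(K_eig, K_lib))               # True
```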
References

An excellent tutorial on VC-dimension and Support Vector Machines:
C. J. C. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2(2):121-167, 1998. http://citeseer.nj.nec.com/burges98tutorial.html

The VC/SRM/SVM Bible (not for beginners, including myself):
Statistical Learning Theory by Vladimir Vapnik, Wiley-Interscience, 1998.

Software: SVM-light, http://svmlight.joachims.org/, free download.

Copyright 2001, 2003, Andrew W. Moore
