Piyush Rai
September 4, 2018
Intro to Machine Learning (CS771A): SVM (Contd), Multiclass and One-Class SVM
Recap: Hyperplane-based Classification

The "direct" approach defines a model with parameters w (and optionally b) and learns them by minimizing a suitable loss function (and doesn't model x, i.e., it is purely discriminative)

The decision boundary need not be linear (e.g., it can be made nonlinear using kernel methods - next class)
Recap: Hyperplanes and Margin

[Figure: a hyperplane separating class +1 from class -1, with the margin region around it]
Recap: Maximum-Margin Hyperplane

[Figure: the total margin 2/||w|| around the separating hyperplane]

Maximizing the total margin is equivalent to solving the hard-margin SVM problem ("hard" = want all points to satisfy the margin constraint)
Recap: Maximum-Margin Hyperplane with Slacks

Still want a max-margin hyperplane, but want to relax the hard constraint y_n(w^T x_n + b) ≥ 1

Let's allow every point x_n to "slack" the constraint by a distance ξ_n ≥ 0

[Figure: class +1 and class -1 points, with slacks ξ_n marking the margin violations]

Points with ξ_n > 0 will be either in the margin region or totally on the wrong side

New Objective: Maximize the margin while keeping the sum of slacks Σ_{n=1}^N ξ_n small

Note: Can also think of the sum of slacks as the total training error
Recap: Maximum-Margin Hyperplane with Slacks

$$\min_{w,b,\xi} \; \frac{||w||^2}{2} + C\sum_{n=1}^{N}\xi_n \quad \text{subject to } y_n(w^\top x_n + b) \ge 1 - \xi_n, \;\; \xi_n \ge 0, \;\; n = 1,\dots,N$$

The first term maximizes the margin, the second term minimizes the sum of slacks (don't have too many violations), the hyperparameter C balances the two, and the constraints are the slack-relaxed margin constraints
Summary: Hard-Margin SVM vs Soft-Margin SVM

Hard-margin: minimize ||w||^2/2 subject to y_n(w^T x_n + b) ≥ 1 for all n. Soft-margin: minimize ||w||^2/2 + C Σ_n ξ_n subject to y_n(w^T x_n + b) ≥ 1 - ξ_n, ξ_n ≥ 0 for all n.
Solving Hard-Margin SVM

The hard-margin SVM optimization problem is

$$\arg\min_{w,b} \; \frac{||w||^2}{2} \quad \text{subject to } 1 - y_n(w^\top x_n + b) \le 0, \quad n = 1,\dots,N$$

Introducing Lagrange multipliers α_n ≥ 0, one per constraint, gives the Lagrangian problem

$$\min_{w,b}\max_{\alpha \ge 0} \; L(w,b,\alpha) = \frac{||w||^2}{2} + \sum_{n=1}^{N}\alpha_n\{1 - y_n(w^\top x_n + b)\}$$
Solving Hard-Margin SVM

The dual problem (with the order of min and max switched) is

$$\max_{\alpha \ge 0}\min_{w,b} \; L(w,b,\alpha) = \frac{w^\top w}{2} + \sum_{n=1}^{N}\alpha_n\{1 - y_n(w^\top x_n + b)\}$$

Setting the derivatives w.r.t. the primal variables to zero gives

$$\frac{\partial L}{\partial w} = 0 \;\Rightarrow\; w = \sum_{n=1}^{N}\alpha_n y_n x_n \qquad\qquad \frac{\partial L}{\partial b} = 0 \;\Rightarrow\; \sum_{n=1}^{N}\alpha_n y_n = 0$$

Important: Note the form of the solution w - it is simply a weighted sum of all the training inputs x_1, ..., x_N (and α_n is like the "importance" of x_n)

Substituting w = Σ_{n=1}^N α_n y_n x_n in the Lagrangian, we get the dual problem as (verify)

$$\max_{\alpha \ge 0} \; L_D(\alpha) = \sum_{n=1}^{N}\alpha_n - \frac{1}{2}\sum_{m,n=1}^{N}\alpha_m\alpha_n y_m y_n (x_m^\top x_n)$$
Solving Hard-Margin SVM

Can write the objective more compactly in vector/matrix form as

$$\max_{\alpha \ge 0} \; L_D(\alpha) = \alpha^\top \mathbf{1} - \frac{1}{2}\alpha^\top G \alpha$$

where G is the N × N matrix with G_mn = y_m y_n x_m^T x_n, and 1 is a vector of 1s

Good news: This is maximizing a concave function (or minimizing a convex function - verify that the Hessian is G, which is p.s.d.). Note that our original SVM objective was also convex

Important: Inputs x's only appear as inner products (helps to "kernelize"; more on this later)

Can solve† the above objective function for α using various methods, e.g.,

Treating the objective as a Quadratic Program (QP) and running some off-the-shelf QP solver, such as quadprog (MATLAB), CVXOPT, CPLEX, etc. A sketch of this route is shown below.

Using (projected) gradient methods (projection needed because the α's are constrained). Gradient methods will usually be much faster than QP methods.

Using coordinate ascent methods (optimize for one α_n at a time); often very fast

† If interested in more details of the solvers, see: "Support Vector Machine Solvers" by Bottou and Lin
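As a concrete illustration, here is a minimal sketch of the QP route using CVXOPT (one of the off-the-shelf solvers named above); the function name and data layout are illustrative choices, not from the lecture. CVXOPT's standard form is min (1/2)α^T P α + q^T α subject to G_ineq α ≤ h and A α = b, so maximizing L_D maps to P = G and q = -1, with α ≥ 0 and the condition Σ_n α_n y_n = 0 (from ∂L/∂b = 0) as the constraints:

```python
import numpy as np
from cvxopt import matrix, solvers

def solve_hard_margin_dual(X, y):
    """X: (N, D) inputs; y: (N,) labels in {-1, +1}. Returns alpha of shape (N,)."""
    N = X.shape[0]
    Yx = y[:, None] * X                 # row n is y_n * x_n
    G_mat = Yx @ Yx.T                   # G_mn = y_m y_n x_m^T x_n
    P = matrix(G_mat)                   # quadratic term: (1/2) alpha^T G alpha
    q = matrix(-np.ones(N))             # linear term: -alpha^T 1 (we minimize -L_D)
    G_ineq = matrix(-np.eye(N))         # -alpha_n <= 0, i.e., alpha >= 0
    h = matrix(np.zeros(N))
    A = matrix(y.reshape(1, -1).astype(float))  # equality: sum_n alpha_n y_n = 0
    b = matrix(0.0)
    sol = solvers.qp(P, q, G_ineq, h, A, b)
    return np.asarray(sol["x"]).ravel()
```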
Hard-Margin SVM: The Solution

Given the dual solution α, the primal solution is

$$w = \sum_{n=1}^{N}\alpha_n y_n x_n \qquad b = -\frac{1}{2}\left[\min_{n: y_n = +1} w^\top x_n + \max_{n: y_n = -1} w^\top x_n\right] \;\text{(exercise)}$$

α is sparse: α_n is nonzero only if x_n lies on the margin boundary - these inputs are the support vectors
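A short follow-up sketch that turns the dual solution α into the primal solution using the formulas above (the 1e-6 threshold for treating an α_n as nonzero is an illustrative choice):

```python
import numpy as np

def recover_primal(X, y, alpha, tol=1e-6):
    """Recover (w, b) and the support vector indices from the dual solution."""
    w = (alpha * y) @ X                 # w = sum_n alpha_n y_n x_n
    scores = X @ w
    # b = -1/2 [ min_{n: y_n = +1} w^T x_n + max_{n: y_n = -1} w^T x_n ]
    b = -0.5 * (scores[y == +1].min() + scores[y == -1].max())
    support = np.flatnonzero(alpha > tol)   # alpha is sparse: the support vectors
    return w, b, support
```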
Solving Soft-Margin SVM

Recall the soft-margin SVM optimization problem:

$$\min_{w,b,\xi} \; f(w,b,\xi) = \frac{||w||^2}{2} + C\sum_{n=1}^{N}\xi_n \quad \text{subject to } 1 \le y_n(w^\top x_n + b) + \xi_n, \;\; -\xi_n \le 0, \;\; n = 1,\dots,N$$

Introducing dual variables for both sets of constraints gives the Lagrangian problem

$$\min_{w,b,\xi}\max_{\alpha \ge 0, \beta \ge 0} \; L(w,b,\xi,\alpha,\beta) = \frac{||w||^2}{2} + C\sum_{n=1}^{N}\xi_n + \sum_{n=1}^{N}\alpha_n\{1 - y_n(w^\top x_n + b) - \xi_n\} - \sum_{n=1}^{N}\beta_n\xi_n$$

Note: The slack-related terms above (highlighted in red on the slides) were not present in the hard-margin SVM

Two sets of dual variables, α = [α_1, ..., α_N] and β = [β_1, ..., β_N]. We'll eliminate the primal variables w, b, ξ to get a dual problem containing only the dual variables (just like in the hard-margin case)
Solving Soft-Margin SVM

The Lagrangian problem to solve:

$$\min_{w,b,\xi}\max_{\alpha \ge 0, \beta \ge 0} \; L(w,b,\xi,\alpha,\beta) = \frac{w^\top w}{2} + C\sum_{n=1}^{N}\xi_n + \sum_{n=1}^{N}\alpha_n\{1 - y_n(w^\top x_n + b) - \xi_n\} - \sum_{n=1}^{N}\beta_n\xi_n$$

Setting the derivatives w.r.t. w and b to zero gives the same conditions as in the hard-margin case; setting the derivative w.r.t. ξ_n to zero gives C - α_n - β_n = 0

Note: The solution for w again has the same form as in the hard-margin case (a weighted sum of all the inputs, with α_n being the importance of input x_n)

Note: Using C - α_n - β_n = 0 and β_n ≥ 0 ⇒ α_n ≤ C (recall that, for the hard-margin case, we only had α_n ≥ 0)

Substituting these in the Lagrangian L gives the dual problem

$$\max_{0 \le \alpha \le C, \; \beta \ge 0} \; L_D(\alpha,\beta) = \sum_{n=1}^{N}\alpha_n - \frac{1}{2}\sum_{m,n=1}^{N}\alpha_m\alpha_n y_m y_n (x_m^\top x_n)$$
Solving Soft-Margin SVM

Like the hard-margin case, solving the dual requires concave maximization (or convex minimization)

It can be solved† the same way as the hard-margin SVM (except that now 0 ≤ α ≤ C)

Can solve for α using QP solvers or (projected) gradient methods; a projected-gradient sketch is shown below

Given α, the solution for w, b has the same form as in the hard-margin case

Note: α is again sparse. Nonzero α_n's correspond to the support vectors

† If interested in more details of the solvers, see: "Support Vector Machine Solvers" by Bottou and Lin
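A minimal projected-gradient-ascent sketch for the soft-margin dual. The step size and iteration count are illustrative, and for simplicity the sketch drops the equality constraint Σ_n α_n y_n = 0 (i.e., it corresponds to an SVM without the bias b), so the projection is just a clip onto the box [0, C]:

```python
import numpy as np

def soft_margin_dual_pg(X, y, C=1.0, lr=1e-3, iters=2000):
    """Projected gradient ascent on L_D(alpha) = alpha^T 1 - (1/2) alpha^T G alpha,
    subject to the box constraint 0 <= alpha_n <= C for all n."""
    N = X.shape[0]
    Yx = y[:, None] * X
    G = Yx @ Yx.T                        # G_mn = y_m y_n x_m^T x_n
    alpha = np.zeros(N)
    for _ in range(iters):
        grad = 1.0 - G @ alpha           # gradient of the (concave) dual objective
        alpha = np.clip(alpha + lr * grad, 0.0, C)   # gradient step + projection
    return alpha
```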
Support Vectors in Soft-Margin SVM

The hard-margin SVM solution had only one type of support vector: points that lie exactly on the margin boundaries w^T x + b = -1 and w^T x + b = +1

In the soft-margin SVM, the support vectors (the inputs with nonzero α_n) can additionally lie inside the margin region or on the wrong side of the hyperplane (the points with ξ_n > 0)
SVMs via Dual Formulation: Some Comments

Recall the final dual objectives for the hard-margin and soft-margin SVM:

$$\text{Hard-Margin SVM:}\quad \max_{\alpha \ge 0} \; L_D(\alpha) = \alpha^\top \mathbf{1} - \frac{1}{2}\alpha^\top G \alpha$$

$$\text{Soft-Margin SVM:}\quad \max_{0 \le \alpha \le C} \; L_D(\alpha) = \alpha^\top \mathbf{1} - \frac{1}{2}\alpha^\top G \alpha$$

However, the dual formulation can be expensive if N is large: we have to solve for N variables α = [α_1, ..., α_N], and we also need to store an N × N matrix G

There is a lot of work† on speeding up SVMs in these settings (e.g., using co-ordinate descent for α)
SVM: The Regularized Loss Function View

Maximizing the margin subject to constraints led to the soft-margin formulation of SVM:

$$\arg\min_{w,b,\xi} \; \frac{||w||^2}{2} + C\sum_{n=1}^{N}\xi_n \quad \text{subject to } y_n(w^\top x_n + b) \ge 1 - \xi_n, \;\; \xi_n \ge 0, \;\; n = 1,\dots,N$$

Note that, at the optimum, the slack ξ_n is the same as max{0, 1 - y_n(w^T x_n + b)}, i.e., the hinge loss for (x_n, y_n)

Another View: Thus the above is equivalent to minimizing the ℓ2-regularized hinge loss

$$L(w,b) = \sum_{n=1}^{N}\max\{0, 1 - y_n(w^\top x_n + b)\} + \frac{\lambda}{2}w^\top w$$

Comparing the two: the sum of slacks is like the sum of hinge losses, and C and λ play similar roles in balancing loss against margin (roughly, λ behaves like 1/C)

Can learn (w, b) directly by minimizing L(w, b) using (stochastic) (sub)gradient descent, as sketched below

The hinge-loss version is preferred for linear SVMs, or when using other regularizers on w (e.g., ℓ1)
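A minimal stochastic-subgradient sketch for this regularized-hinge-loss view; the learning rate, epoch count, and function name are illustrative choices (labels are assumed to be in {-1, +1}):

```python
import numpy as np

def svm_sgd(X, y, lam=0.01, lr=0.1, epochs=20, seed=0):
    """Minimize sum_n max{0, 1 - y_n(w^T x_n + b)} + (lam/2) w^T w by SGD."""
    rng = np.random.default_rng(seed)
    N, D = X.shape
    w, b = np.zeros(D), 0.0
    for _ in range(epochs):
        for n in rng.permutation(N):
            margin = y[n] * (w @ X[n] + b)
            # Subgradient of the hinge term is -y_n x_n when the margin is violated
            if margin < 1:
                w -= lr * (lam * w - y[n] * X[n])
                b += lr * y[n]
            else:
                w -= lr * lam * w
    return w, b
```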
Multiclass SVM

Multiclass SVMs use K weight vectors W = [w_1, w_2, ..., w_K] (similar to softmax regression)

The loss is 0 if the score on the correct class is at least 1 more than the score on the next-best-scoring class

Can optimize this similarly to how we did it for the binary SVM
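A small sketch of this multiclass hinge loss for a single example (the array layout is an illustrative assumption):

```python
import numpy as np

def multiclass_hinge_loss(W, x, y):
    """W: (K, D) matrix whose rows are w_1, ..., w_K; x: (D,) input;
    y: integer label in {0, ..., K-1}. Loss is 0 iff the correct class's
    score beats the best other class's score by at least 1."""
    scores = W @ x                        # score_k = w_k^T x
    others = np.delete(scores, y)         # scores of all incorrect classes
    return max(0.0, 1.0 + others.max() - scores[y])
```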
Multiclass SVM using Binary SVM?

One-vs-All: Learn K binary classifiers, each separating one class from all the others

[Figure: one-vs-all boundaries and the effective pair-wise boundaries they induce]

All-Pairs: Learn K-choose-2 binary classifiers, one for each pair of classes (j, k)

$$y_* = \arg\max_{k}\sum_{j \ne k} w_{j,k}^\top x_* \quad \text{(predict the class } k \text{ that wins over all the others the most)}$$

The All-Pairs approach can be expensive at training and test time (but there are ways to speed it up)
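A sketch of the All-Pairs prediction rule above; the dict-of-pairs layout and the sign convention (w_{j,k} scores class k as positive against class j) are illustrative assumptions:

```python
import numpy as np

def all_pairs_predict(W_pairs, x, K):
    """W_pairs: dict mapping each pair (j, k) with j < k to the weight vector
    of the binary SVM trained with class k as +1 and class j as -1.
    Implements y* = argmax_k sum_{j != k} w_{j,k}^T x."""
    votes = np.zeros(K)
    for (j, k), w in W_pairs.items():
        s = w @ x            # positive score favors class k, negative favors j
        votes[k] += s
        votes[j] -= s
    return int(np.argmax(votes))
```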
One-Class Classification

Can we learn from examples of just one class, say positive examples?

May be desirable if there are many types of negative examples

[Figure: a cluster of "normal" positive examples, with several types of "novel" negative examples scattered around it, and the origin marked]
One-Class Classification via SVM-like methods

Approach 1: Assume the positives lie within a ball with the smallest possible radius (and allow slacks). Known as "Support Vector Data Description" (SVDD), proposed by [Tax and Duin, 2004]

Approach 2: Find a max-margin hyperplane separating the positives from the origin (which represents the negatives). Known as "One-Class SVM" (OC-SVM), proposed by [Schölkopf et al., 2001]

The optimization problems for both cases can be solved similarly to the binary SVM (e.g., via the Lagrangian)
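For practical use, here is a minimal usage sketch with scikit-learn's OneClassSVM (an implementation of the OC-SVM of Schölkopf et al.); the synthetic data and hyperparameter values are illustrative:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_pos = rng.normal(loc=2.0, scale=0.5, size=(200, 2))  # positive examples only

# nu roughly controls the fraction of training points treated as outliers
oc_svm = OneClassSVM(kernel="rbf", nu=0.05)
oc_svm.fit(X_pos)

X_new = np.array([[2.1, 2.0],     # near the training cloud: expected "normal"
                  [-3.0, 5.0]])   # far away: expected "novel"
print(oc_svm.predict(X_new))      # +1 = normal, -1 = novel
```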
Nonlinear SVM?

A nice property of the SVM (and many other models) is that the inputs only appear as inner products

For example, recall that the dual problem for the soft-margin SVM had the form

$$\arg\max_{0 \le \alpha \le C} \; L_D(\alpha) = \alpha^\top \mathbf{1} - \frac{1}{2}\alpha^\top G \alpha$$

where G is an N × N matrix with G_mn = y_m y_n x_m^T x_n, and 1 is a vector of 1s

We can replace each inner product by a more general form of inner product, e.g.,

$$k(x_n, x_m) = \phi(x_n)^\top \phi(x_m)$$

where φ is some transformation (e.g., a higher-dimensional mapping) of the data

Note: Often the mapping φ doesn't need to be explicitly computed ("kernel" magic - next class)!

We can then still learn a linear model in the new space while being nonlinear in the original space (wonderful!)
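A tiny numeric sketch of this idea (the specific quadratic feature map is an illustrative choice): with φ(x) = [x_1^2, x_2^2, √2·x_1·x_2], the inner product in the mapped space equals (x^T z)^2, so k(x, z) can be evaluated without ever forming φ explicitly:

```python
import numpy as np

def phi_quadratic(x):
    """Explicit degree-2 feature map for 2-D inputs."""
    x1, x2 = x
    return np.array([x1**2, x2**2, np.sqrt(2.0) * x1 * x2])

x, z = np.array([1.0, 2.0]), np.array([3.0, -1.0])

lhs = phi_quadratic(x) @ phi_quadratic(z)   # inner product in the mapped space
rhs = (x @ z) ** 2                          # same value, via the original inputs only
print(lhs, rhs)                             # both print 1.0
```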
SVM: Some Notes