In this tutorial we give an overview of the basic ideas underlying Support Vector (SV) machines
for function estimation. Furthermore, we include a summary of currently used algorithms for
training SV machines, covering both the quadratic (or convex) programming part and advanced
methods for dealing with large datasets. Finally, we mention some modifications and extensions
that have been applied to the standard SV algorithm, and discuss the aspect of regularization from an
SV perspective.
\[
\begin{aligned}
\text{minimize}\quad & \tfrac{1}{2}\|w\|^2 + C \sum_{i=1}^{\ell} (\xi_i + \xi_i^*) \\
\text{subject to}\quad & y_i - \langle w, x_i\rangle - b \le \varepsilon + \xi_i \\
& \langle w, x_i\rangle + b - y_i \le \varepsilon + \xi_i^* \\
& \xi_i,\ \xi_i^* \ge 0
\end{aligned}
\tag{3}
\]

The constant C > 0 determines the trade-off between the flatness of f and the amount up to which deviations larger than ε are tolerated. This corresponds to dealing with a so-called ε-insensitive loss function |ξ|_ε described by

\[
|\xi|_\varepsilon :=
\begin{cases}
0 & \text{if } |\xi| \le \varepsilon \\
|\xi| - \varepsilon & \text{otherwise.}
\end{cases}
\tag{4}
\]

It follows from the saddle point condition that the partial derivatives of the Lagrangian L with respect to the primal variables (w, b, ξ_i, ξ_i^*) have to vanish for optimality:

\[
\partial_b L = \sum_{i=1}^{\ell} (\alpha_i^* - \alpha_i) = 0 \tag{7}
\]
\[
\partial_w L = w - \sum_{i=1}^{\ell} (\alpha_i - \alpha_i^*)\, x_i = 0 \tag{8}
\]
\[
\partial_{\xi_i^{(*)}} L = C - \alpha_i^{(*)} - \eta_i^{(*)} = 0 \tag{9}
\]
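To make the quantities above concrete, the following short Python sketch (not part of the tutorial; it assumes NumPy and scikit-learn are available, and the toy data and parameter values are purely illustrative) evaluates the ε-insensitive loss of Eq. (4) on the residuals of a fitted linear-kernel SVR and checks the stationarity condition (8), w = Σ_i (α_i − α_i^*) x_i, using the dual coefficients the solver returns.

```python
# Minimal sketch under the assumptions stated above (NumPy + scikit-learn,
# illustrative toy data); it is not the procedure used in the tutorial itself.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Toy 1-D regression problem
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)

C, epsilon = 1.0, 0.1
model = SVR(kernel="linear", C=C, epsilon=epsilon).fit(X, y)

# Eq. (4): the eps-insensitive loss ignores residuals smaller than epsilon
residual = y - model.predict(X)
eps_insensitive_loss = np.maximum(0.0, np.abs(residual) - epsilon)
print("mean eps-insensitive loss:", eps_insensitive_loss.mean())

# Eq. (8): w = sum_i (alpha_i - alpha_i^*) x_i.
# scikit-learn stores (alpha_i - alpha_i^*) for the support vectors in dual_coef_.
w_from_dual = model.dual_coef_ @ model.support_vectors_
print("w from dual coefficients:", w_from_dual.ravel())
print("w reported by the solver:", model.coef_.ravel())

# Eq. (9) with eta_i^(*) >= 0 implies 0 <= alpha_i, alpha_i^* <= C,
# so the stored differences are bounded by C in absolute value.
assert np.all(np.abs(model.dual_coef_) <= C + 1e-8)
```

Under these assumptions the two vectors printed for w agree up to the solver's numerical tolerance, which is exactly the content of (8); the bound checked in the final assertion is the box constraint implied by (9).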