
The SVM was originally developed by Vapnik in the 1990s and has since been applied to many practical problems. Its advantages come from two characteristics: the maximized margin and the kernel trick. Together, these provide high classification accuracy on test data and help avoid the curse of dimensionality. The SVM solves the optimization problem in the equations below:
$$\underset{\beta_0,\ldots,\beta_n}{\text{minimize}}\;\; \|w\|^2 + C\sum_{i=1}^{n}\epsilon_i,$$

$$w=\sum_{i=1}^{n}\alpha_i\,\phi(x_i),\qquad K(x_i,x_{i'})=\langle\phi(x_i),\phi(x_{i'})\rangle,$$

$$\text{s.t.}\;\; y_i\Big(\beta_0+\sum_{i'=1}^{n}\alpha_{i'}K(x_i,x_{i'})\Big)\ge 1-\epsilon_i,\qquad \epsilon_i\ge 0,$$
K is the kernel function used in the SVM to handle non-linear classification. Popular kernel functions include the linear kernel, the polynomial kernel, and the Gaussian radial basis function (RBF). Here we adopt the …
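As an illustration (not from the paper), the three kernels listed can be computed directly as kernel matrices; the polynomial degree and offset below are assumed parameters, not values from the text.

```python
# Sketch of the three kernel functions mentioned in the text, using NumPy.
import numpy as np

def linear_kernel(X):
    # K(x_i, x_i') = <x_i, x_i'>
    return X @ X.T

def polynomial_kernel(X, degree=3, c=1.0):
    # K(x_i, x_i') = (<x_i, x_i'> + c)^degree   (degree, c assumed here)
    return (X @ X.T + c) ** degree

def rbf_kernel(X, gamma=1.0):
    # K(x_i, x_i') = exp(-gamma * ||x_i - x_i'||^2)
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * (X @ X.T)
    return np.exp(-gamma * d2)

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(rbf_kernel(X, gamma=1.0))
```

Each function returns the n-by-n Gram matrix whose (i, i') entry is $K(x_i, x_{i'})$, the quantity appearing in the constraint above.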
K-means is a classical clustering algorithm in unsupervised machine learning. It partitions a data set into a pre-specified number of distinct, non-overlapping clusters. Here, we solve the problem:
$$\underset{C_1,\ldots,C_K}{\text{minimize}}\;\sum_{k=1}^{K}W(C_k),$$

$$W(C_k)=\frac{1}{|C_k|}\sum_{i,i'\in C_k}\sum_{j=1}^{p}\big(x_{ij}-x_{i'j}\big)^2,$$
where $|C_k|$ denotes the number of observations in the kth cluster.
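A minimal sketch of the within-cluster scatter $W(C_k)$ defined above, on an assumed toy cluster. It also checks the standard identity that the pairwise form equals twice the scatter around the cluster centroid, which is why K-means can minimize the objective by repeatedly assigning points to the nearest mean.

```python
# Compute W(C_k) = (1/|C_k|) * sum over pairs i, i' in C_k of ||x_i - x_i'||^2
# on a tiny assumed cluster, two equivalent ways.
import numpy as np

def W(Xk):
    n = len(Xk)
    diffs = Xk[:, None, :] - Xk[None, :, :]   # all pairwise differences
    return np.sum(diffs ** 2) / n

Xk = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 0.0]])
pairwise = W(Xk)
# Identity: pairwise form == 2 * scatter around the centroid
centroid = 2 * np.sum((Xk - Xk.mean(axis=0)) ** 2)
print(pairwise, centroid)  # → 32.0 32.0
```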

Compression rate:

$$\text{C.R.}=\frac{\text{No. of observations}}{\text{No. of clusters in K-means}}$$
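The compression rate is a simple ratio; the observation and cluster counts below are assumed purely for illustration.

```python
# Hypothetical numbers (not from the text): 10,000 observations summarized
# by K = 50 cluster centroids.
n_observations = 10_000
n_clusters = 50

# C.R. = No. of observations / No. of clusters in K-means
compression_rate = n_observations / n_clusters
print(compression_rate)  # → 200.0
```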
