CENTER FOR MACHINE PERCEPTION
CZECH TECHNICAL UNIVERSITY
RESEARCH REPORT
ISSN 1213-2365
Statistical Pattern Recognition
Toolbox for Matlab
User’s guide
Vojtěch Franc and Václav Hlaváč
{xfrancv,hlavac}@cmp.felk.cvut.cz
CTU–CMP–2004–08
June 24, 2004
Available at
ftp://cmp.felk.cvut.cz/pub/cmp/articles/Franc-TR-2004-08.pdf
The authors were supported by the European Union under project
IST-2001-35454 ECVision, the project IST-2001-32184 ActIPret and
by the Czech Grant Agency under project GACR 102/03/0440 and
by the Austrian Ministry of Education under project CONEX GZ
45.535.
Research Reports of CMP, Czech Technical University in Prague, No. 8, 2004
Published by
Center for Machine Perception, Department of Cybernetics
Faculty of Electrical Engineering, Czech Technical University
Technická 2, 166 27 Prague 6, Czech Republic
fax +420 2 2435 7385, phone +420 2 2435 7637, www: http://cmp.felk.cvut.cz
Statistical Pattern Recognition Toolbox for Matlab
Vojtěch Franc and Václav Hlaváč
June 24, 2004
Contents
1 Introduction 3
1.1 What is the Statistical Pattern Recognition Toolbox and how has it been
developed? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 The STPRtool purpose and its potential users . . . . . . . . . . . . . . 3
1.3 A digest of the implemented methods . . . . . . . . . . . . . . . . . . . 4
1.4 Relevant Matlab toolboxes of others . . . . . . . . . . . . . . . . . . . . 5
1.5 STPRtool document organization . . . . . . . . . . . . . . . . . . . . . 5
1.6 How to contribute to STPRtool? . . . . . . . . . . . . . . . . . . . . . 5
1.7 Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2 Linear Discriminant Function 8
2.1 Linear classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2 Linear separation of finite sets of vectors . . . . . . . . . . . . . . . . . 10
2.2.1 Perceptron algorithm . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2.2 Kozinec’s algorithm . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2.3 Multi-class linear classifier . . . . . . . . . . . . . . . . . . . . . 14
2.3 Fisher Linear Discriminant . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.4 Generalized Anderson’s task . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4.1 Original Anderson’s algorithm . . . . . . . . . . . . . . . . . . . 19
2.4.2 General algorithm framework . . . . . . . . . . . . . . . . . . . 21
2.4.3 ε-solution using the Kozinec’s algorithm . . . . . . . . . . . . . 21
2.4.4 Generalized gradient optimization . . . . . . . . . . . . . . . . . 22
3 Feature extraction 25
3.1 Principal Component Analysis . . . . . . . . . . . . . . . . . . . . . . . 26
3.2 Linear Discriminant Analysis . . . . . . . . . . . . . . . . . . . . . . . 29
3.3 Kernel Principal Component Analysis . . . . . . . . . . . . . . . . . . . 31
3.4 Greedy kernel PCA algorithm . . . . . . . . . . . . . . . . . . . . . . . 32
3.5 Generalized Discriminant Analysis . . . . . . . . . . . . . . . . . . . . . 33
3.6 Combining feature extraction and linear classifier . . . . . . . . . . . . 34
4 Density estimation and clustering 41
4.1 Gaussian distribution and Gaussian mixture model . . . . . . . . . . . 41
4.2 Maximum-Likelihood estimation of GMM . . . . . . . . . . . . . . . . 43
4.2.1 Complete data . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.2.2 Incomplete data . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.3 Minimax estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.4 Probabilistic output of classifier . . . . . . . . . . . . . . . . . . . . . . 48
4.5 K-means clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
5 Support vector and other kernel machines 57
5.1 Support vector classifiers . . . . . . . . . . . . . . . . . . . . . . . . . . 57
5.2 Binary Support Vector Machines . . . . . . . . . . . . . . . . . . . . . 58
5.3 One-Against-All decomposition for SVM . . . . . . . . . . . . . . . . . 63
5.4 One-Against-One decomposition for SVM . . . . . . . . . . . . . . . . . 64
5.5 Multi-class BSVM formulation . . . . . . . . . . . . . . . . . . . . . . . 66
5.6 Kernel Fisher Discriminant . . . . . . . . . . . . . . . . . . . . . . . . . 68
5.7 Pre-image problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
5.8 Reduced set method . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
6 Miscellaneous 76
6.1 Data sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
6.2 Visualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
6.3 Bayesian classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
6.4 Quadratic classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
6.5 K-nearest neighbors classifier . . . . . . . . . . . . . . . . . . . . . . . 82
7 Examples of applications 85
7.1 Optical Character Recognition system . . . . . . . . . . . . . . . . . . 85
7.2 Image denoising . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
8 Conclusions and future plans 94
Chapter 1
Introduction
1.1 What is the Statistical Pattern Recognition Toolbox and how has it been developed?
The Statistical Pattern Recognition Toolbox (abbreviated STPRtool) is a collection of
pattern recognition (PR) methods implemented in Matlab.
The core of the STPRtool is comprised of statistical PR algorithms described in
the monograph Schlesinger, M.I., Hlaváč, V.: Ten Lectures on Statistical and Structural
Pattern Recognition, Kluwer Academic Publishers, 2002 [26]. The STPRtool has been
developed since 1999. The first version was a result of a diploma project of Vojtěch
Franc [8]. Its author became a PhD student in CMP and has been extending the
STPRtool since. Currently, the STPRtool contains a much wider collection of algorithms.
The STPRtool is available at
http://cmp.felk.cvut.cz/cmp/cmp software.html.
We tried to create concise and complete documentation of the STPRtool. This
document and the associated help files in .html should give all the information and
parameters the user needs to call an appropriate method (typically a Matlab m-file).
The intention was not to write another textbook. If the user does not know the method
then he/she will likely have to read the description of the method in a referenced
original paper or in a textbook.
1.2 The STPRtool purpose and its potential users
• The principal purpose of STPRtool is to provide basic statistical pattern
recognition methods to (a) researchers, (b) teachers, and (c) students. The STPRtool
is likely to help both in research and teaching. It can also be useful to
practitioners who design PR applications.
• The STPRtool offers a collection of the basic and the state-of-the-art statistical
PR methods.
• The STPRtool contains demonstration programs, working examples and visual-
ization tools which help to understand the PR methods. These teaching aids are
intended for students of the PR courses and their teachers who can easily create
examples and incorporate them into their lectures.
• The STPRtool is also intended to serve a designer of a new PR method by
providing tools for evaluation and comparison of PR methods. Many standard
and state-of-the-art methods have been implemented in STPRtool.
• The STPRtool is open to contributions by others. If you develop a method
which you consider useful for the community then contact us. We are willing to
incorporate the method into the toolbox. We intend to moderate this process to
keep the toolbox consistent.
1.3 A digest of the implemented methods
This section should give the reader a quick overview of the methods implemented in
STPRtool.
• Analysis of linear discriminant function: Perceptron algorithm and multi-
class modification. Kozinec’s algorithm. Fisher Linear Discriminant. A collection
of known algorithms solving the Generalized Anderson’s Task.
• Feature extraction: Linear Discriminant Analysis. Principal Component Anal-
ysis (PCA). Kernel PCA. Greedy Kernel PCA. Generalized Discriminant Analy-
sis.
• Probability distribution estimation and clustering: Gaussian Mixture
Models. Expectation-Maximization algorithm. Minimax probability estimation.
K-means clustering.
• Support Vector and other Kernel Machines: Sequential Minimal Optimizer
(SMO). Matlab Optimization Toolbox based algorithms. Interface to the
SVMlight software. Decomposition approaches to train the Multi-class SVM
classifiers. Multi-class BSVM formulation trained by Kozinec's algorithm, the
Mitchell-Demyanov-Molozenov algorithm and the Nearest Point Algorithm. Kernel
Fisher Discriminant.
1.4 Relevant Matlab toolboxes of others
Some researchers in statistical PR provide code of their methods. The Internet offers
many software resources for Matlab. However, there are not many compact and well
documented packages comprised of PR methods. We found two useful packages which
are briefly introduced below.
NETLAB focuses on Neural Networks applied to PR problems. The classical
PR methods are included as well. This Matlab toolbox is closely related to the
popular PR textbook [3]. The documentation published in book [21] contains de-
tailed description of the implemented methods and many working examples. The
toolbox includes: Probabilistic Principal Component Analysis, Generative Topo-
graphic Mapping, methods based on the Bayesian inference, Gaussian processes,
Radial Basis Functions (RBF) networks and many others.
The toolbox can be downloaded from
http://www.ncrg.aston.ac.uk/netlab/.
PRTools is a Matlab Toolbox for Pattern Recognition [7]. Its implementation is
based on the object oriented programming principles supported by the Matlab
language. The toolbox contains a wide range of PR methods including: analysis
of linear and non-linear classifiers, feature extraction, feature selection, methods
for combining classifiers and methods for clustering.
The toolbox can be downloaded from
http://www.ph.tn.tudelft.nl/prtools/.
1.5 STPRtool document organization
The document is divided into chapters describing the implemented methods. The last
chapter is devoted to more complex examples. Most chapters begin with definition of
the models to be analyzed (e.g., linear classifier, probability densities, etc.) and the list
of the implemented methods. Individual sections have the following fixed structure: (i)
definition of the problem to be solved, (ii) brief description and list of the implemented
methods, (iii) reference to the literature where the problem is treated in detail and
(iv) an example demonstrating how to solve the problem using the STPRtool.
1.6 How to contribute to STPRtool?
If you have an implementation which is consistent with the STPRtool spirit and you
want to put it into public domain through the STPRtool channel then send an email
to xfrancv@cmp.felk.cvut.cz.
Please prepare your code according to the pattern you find in STPRtool implementations.
Provide a piece of documentation too, which will allow us to incorporate it into the
STPRtool easily.
Of course, your method will be bound to your name in STPRtool.
1.7 Acknowledgements
First, we would like to thank Prof. M.I. Schlesinger from the Ukrainian Academy of
Sciences, Kiev, Ukraine, who showed us the beauty of pattern recognition and gave us
much valuable advice.
Second, we would like to thank our colleagues in the Center for Machine Perception,
the Czech Technical University in Prague, our visitors, other STPRtool users and the
students who used the toolbox in their laboratory exercises for many discussions.
Third, thanks go to several Czech, EU and industrial grants which indirectly
contributed to STPRtool development.
Fourth, the STPRtool was rewritten and its documentation radically improved in
May and June 2004 with the financial help of the ECVision European Research Network
on Cognitive Computer Vision Systems (IST-2001-35454) led by D. Vernon.
Notation
Upper-case bold letters denote matrices, for instance A. Vectors are implicitly column
vectors and are denoted by lower-case bold italic letters, for instance x. The
concatenation of column vectors w ∈ R^n, z ∈ R^m is denoted as v = [w; z], where the
column vector v is (n + m)-dimensional.

⟨x·x′⟩         Dot product between vectors x and x′.
Σ              Covariance matrix.
µ              Mean vector (mathematical expectation).
S              Scatter matrix.
X ⊆ R^n        n-dimensional input space (space of observations).
Y              Set of c hidden states (class labels), Y = {1, . . . , c}.
F              Feature space.
φ: X → F       Mapping from the input space X to the feature space F.
k: X × X → R   Kernel function (Mercer kernel).
T_XY           Labeled training set (more precisely multi-set) T_XY = {(x_1, y_1), . . . , (x_l, y_l)}.
T_X            Unlabeled training set T_X = {x_1, . . . , x_l}.
q: X → Y       Classification rule.
f_y: X → R     Discriminant function associated with the class y ∈ Y.
f: X → R       Discriminant function of the binary classifier.
| · |          Cardinality of a set.
‖ · ‖          Euclidean norm.
det(·)         Matrix determinant.
δ(i, j)        Kronecker delta, δ(i, j) = 1 for i = j and δ(i, j) = 0 otherwise.
∨              Logical or.
Chapter 2
Linear Discriminant Function
The linear classification rule q: X ⊆ R^n → Y = {1, 2, . . . , c} is composed of a set of
discriminant functions

   f_y(x) = ⟨w_y·x⟩ + b_y ,   ∀y ∈ Y ,

which are linear with respect to both the input vector x ∈ R^n and their parameter
vectors w_y ∈ R^n. The scalars b_y, ∀y ∈ Y, introduce bias to the discriminant functions.
The input vector x ∈ R^n is assigned to the class y ∈ Y whose corresponding
discriminant function f_y attains the maximal value

   y = argmax_{y'∈Y} f_{y'}(x) = argmax_{y'∈Y} ( ⟨w_{y'}·x⟩ + b_{y'} ) .    (2.1)

In the particular binary case Y = {1, 2}, the linear classifier is represented by a
single discriminant function

   f(x) = ⟨w·x⟩ + b ,

given by the parameter vector w ∈ R^n and bias b ∈ R. The input vector x ∈ R^n is
assigned to class y ∈ {1, 2} as follows

   q(x) = 1  if f(x) = ⟨w·x⟩ + b ≥ 0 ,
          2  if f(x) = ⟨w·x⟩ + b < 0 .     (2.2)
The data-type used to describe the linear classifier and the implementation of the
classifier itself are described in Section 2.1. The following sections describe supervised
learning methods which determine the parameters of the linear classifier based on
available knowledge. The implemented learning methods can be distinguished according
to the type of the training data:

I. The problem is described by a finite set T = {(x_1, y_1), . . . , (x_l, y_l)} containing pairs
   of observations x_i ∈ R^n and corresponding class labels y_i ∈ Y. The methods
   using this type of training data are described in Section 2.2 and Section 2.3.
II. The parameters of class conditional distributions are (completely or partially)
known. The Anderson’s task and its generalization are representatives of such
methods (see Section 2.4).
The summary of implemented methods is given in Table 2.1.
References: The linear discriminant function is treated throughout for instance in
books [26, 6].
Table 2.1: Implemented methods: Linear discriminant function
linclass        Linear classifier (linear machine).
andrerr         Classification error of the Generalized Anderson's task.
androrig        Original method to solve the Anderson-Bahadur's task.
eanders         Epsilon-solution of the Generalized Anderson's task.
ganders         Solves the Generalized Anderson's task.
ggradander      Gradient method to solve the Generalized Anderson's task.
ekozinec        Kozinec's algorithm for the eps-optimal separating hyperplane.
mperceptron     Perceptron algorithm to train the multi-class linear machine.
perceptron      Perceptron algorithm to train the binary linear classifier.
fld             Fisher Linear Discriminant.
fldqp           Fisher Linear Discriminant using Quadratic Programming.
demo_linclass   Demo on the algorithms learning linear classifiers.
demo_anderson   Demo on the Generalized Anderson's task.
2.1 Linear classifier
The STPRtool uses a specific structure array to describe the binary (see Table 2.2)
and the multi-class (see Table 2.3) linear classifier. The linear classifier itself is
implemented by the function linclass.
Example: Linear classifier
The example shows training of the Fisher Linear Discriminant which is the classical
example of the linear classifier. The Riply's data set riply_trn.mat is used for training.
The resulting linear classifier is then evaluated on the testing data riply_tst.mat.
Table 2.2: Data-type used to describe the binary linear classifier.
Binary linear classifier (structure array):
.W [n × 1]           The normal vector w ∈ R^n of the separating hyperplane f(x) = ⟨w·x⟩ + b.
.b [1 × 1]           Bias b ∈ R of the hyperplane.
.fun = 'linclass'    Identifies the function associated with this data type.

Table 2.3: Data-type used to describe the multi-class linear classifier.
Multi-class linear classifier (structure array):
.W [n × c]           Matrix of parameter vectors w_y ∈ R^n, y = 1, . . . , c, of the linear discriminant functions f_y(x) = ⟨w_y·x⟩ + b_y.
.b [c × 1]           Parameters b_y ∈ R, y = 1, . . . , c.
.fun = 'linclass'    Identifies the function associated with this data type.
trn = load('riply_trn');            % load training data
tst = load('riply_tst');            % load testing data
model = fld( trn );                 % train FLD classifier
ypred = linclass( tst.X, model );   % classify testing data
cerror( ypred, tst.y )              % evaluate testing error
ans =
0.1080
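For intuition, the decision implemented by linclass corresponds to the rule (2.1)
applied to the fields of Table 2.3 (and to the rule (2.2) in the binary case). The
following lines are a didactic sketch of the multi-class decision only, not the code of
linclass itself:

% Sketch of the decision rule (2.1) for a classifier stored as in Table 2.3.
% model.W is [n x c], model.b is [c x 1], X is [n x l] of input vectors.
[dummy, ypred] = max( model.W'*X + repmat(model.b, 1, size(X,2)) );
% ypred(i) is the class whose discriminant function is maximal for X(:,i)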
2.2 Linear separation of finite sets of vectors
The input training data T = {(x_1, y_1), . . . , (x_l, y_l)} consists of pairs of observations
x_i ∈ R^n and corresponding class labels y_i ∈ Y = {1, . . . , c}. The implemented methods
solve the task of (i) training the separating hyperplane for the binary case c = 2, (ii)
training the ε-optimal hyperplane and (iii) training the multi-class linear classifier. The
definitions of these tasks are given below.
The problem of training the binary (c = 2) linear classifier (2.2) with zero training
error is equivalent to finding the hyperplane H = {x: ⟨w·x⟩ + b = 0} which separates
the training vectors of the first (y = 1) and the second (y = 2) class. The problem is
formally defined as solving the set of linear inequalities

   ⟨w·x_i⟩ + b ≥ 0 ,  y_i = 1 ,
   ⟨w·x_i⟩ + b < 0 ,  y_i = 2 ,      (2.3)

with respect to the vector w ∈ R^n and the scalar b ∈ R. If the inequalities (2.3) have a
solution then the training data T is linearly separable. The Perceptron (Section 2.2.1)
and Kozinec's (Section 2.2.2) algorithms are useful to train the binary linear classifiers
from the linearly separable data.
If the training data is linearly separable then the optimal separating hyperplane
can be defined as the solution of the following task

   (w*, b*) = argmax_{w,b} m(w, b)
            = argmax_{w,b} min( min_{i∈I_1} (⟨w·x_i⟩ + b)/‖w‖ , min_{i∈I_2} −(⟨w·x_i⟩ + b)/‖w‖ ) ,   (2.4)

where I_1 = {i: y_i = 1} and I_2 = {i: y_i = 2} are sets of indices. The optimal separating
hyperplane H* = {x ∈ R^n: ⟨w*·x⟩ + b* = 0} separates the training data T with the
maximal margin m(w*, b*). This task is equivalent to training the linear Support Vector
Machines (see Section 5). The optimal separating hyperplane cannot be found exactly
except in special cases. Therefore numerical algorithms seeking an approximate solution
are applied instead. The ε-optimal solution is defined as the vector w and scalar b such
that the inequality

   m(w*, b*) − m(w, b) ≤ ε ,    (2.5)

holds. The parameter ε ≥ 0 defines the closeness to the optimal solution in terms of the
margin. The ε-optimal solution can be found by the Kozinec's algorithm (Section 2.2.2).
The problem of training the multi-class linear classifier (c > 2) with zero training
error is formally stated as the problem of solving the set of linear inequalities

   ⟨w_{y_i}·x_i⟩ + b_{y_i} > ⟨w_y·x_i⟩ + b_y ,   i = 1, . . . , l ,  y ≠ y_i ,    (2.6)

with respect to the vectors w_y ∈ R^n, y ∈ Y, and scalars b_y ∈ R, y ∈ Y. The task (2.6)
for linearly separable training data T can be solved by the modified Perceptron
(Section 2.2.3) or Kozinec's algorithm. An interactive demo on the algorithms separating
the finite sets of vectors by a hyperplane is implemented in demo_linclass.
2.2.1 Perceptron algorithm
The input is a set T = {(x_1, y_1), . . . , (x_l, y_l)} of binary labeled (y_i ∈ {1, 2}) training
vectors x_i ∈ R^n. The problem of training the separating hyperplane (2.3) can be
formally rewritten to a simpler form

   ⟨v·z_i⟩ > 0 ,   i = 1, . . . , l ,     (2.7)

where the vector v ∈ R^{n+1} is constructed as

   v = [w; b] ,     (2.8)

and the transformed training data Z = {z_1, . . . , z_l} are defined as

   z_i =  [x_i; 1]   if y_i = 1 ,
         −[x_i; 1]   if y_i = 2 .     (2.9)

The problem of solving (2.7) with respect to the unknown vector v ∈ R^{n+1} is equivalent
to the original task (2.3). The parameters (w, b) of the linear classifier are obtained
from the found vector v by inverting the transformation (2.8).
The Perceptron algorithm is an iterative procedure which builds a series of vectors
v^(0), v^(1), . . . , v^(t) until the set of inequalities (2.7) is satisfied. The initial vector v^(0)
can be set arbitrarily (usually v = 0). The Novikoff theorem ensures that the Perceptron
algorithm stops after a finite number of iterations t if the training data are linearly
separable. The Perceptron algorithm is implemented in the function perceptron.
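For illustration, the classical Perceptron update applied to the transformed data (2.9)
can be sketched in a few lines of Matlab. This is only a didactic sketch with our own
variable names (Z, v), not the code of the toolbox function perceptron:

% Z is an (n+1) x l matrix whose columns are the vectors z_i from (2.9),
% v is the optimized vector (2.8).
v = zeros(size(Z,1),1);               % initial vector v^(0) = 0
solved = false;
while ~solved
  solved = true;
  for i = 1:size(Z,2)
    if v'*Z(:,i) <= 0                 % inequality (2.7) violated
      v = v + Z(:,i);                 % Perceptron update
      solved = false;
    end
  end
end
w = v(1:end-1); b = v(end);           % invert the transformation (2.8)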
References: Analysis of the Perceptron algorithm including the Novikoff’s proof of
convergence can be found in Chapter 5 of the book [26]. The Perceptron from the
neural network perspective is described in the book [3].
Example: Training binary linear classifier with Perceptron
The example shows the application of the Perceptron algorithm to find the binary
linear classifier for synthetically generated 2-dimensional training data. The generated
training data contain 50 labeled vectors which are linearly separable with margin 1.
The found classifier is visualized as a separating hyperplane (a line in this case)
H = {x ∈ R^2: ⟨w·x⟩ + b = 0}. See Figure 2.1.

data = genlsdata( 2, 50, 1);   % generate training data
model = perceptron( data );    % call perceptron
figure;
ppatterns( data );             % plot training data
pline( model );                % plot separating hyperplane

Figure 2.1: Linear classifier trained by the Perceptron algorithm.
2.2.2 Kozinec’s algorithm
The input is a set T = {(x_1, y_1), . . . , (x_l, y_l)} of binary labeled (y_i ∈ {1, 2}) training
vectors x_i ∈ R^n. The Kozinec's algorithm builds a series of vectors w_1^(0), w_1^(1), . . . , w_1^(t)
and w_2^(0), w_2^(1), . . . , w_2^(t) which converge to the vectors w_1* and w_2*, respectively.
The vectors w_1* and w_2* are the solution of the following task

   (w_1*, w_2*) = argmin_{w_1 ∈ X_1, w_2 ∈ X_2} ‖w_1 − w_2‖ ,

where X_1 stands for the convex hull of the training vectors of the first class, X_1 =
{x_i: y_i = 1}, and X_2 for the convex hull of the second class likewise. The vector w* =
w_1* − w_2* and the bias b* = (1/2)(‖w_2*‖² − ‖w_1*‖²) determine the optimal hyperplane (2.4)
separating the training data with the maximal margin.
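As a small worked example, once the nearest points w_1 and w_2 of the two convex hulls
are available (e.g., as found by the Kozinec's algorithm), the maximal margin hyperplane
follows from the two formulas above. The lines below are a didactic sketch with our own
variable names:

% w1, w2 ... nearest points of the convex hulls of class 1 and class 2
w = w1 - w2;                          % normal vector of the hyperplane
b = 0.5*( norm(w2)^2 - norm(w1)^2 );  % bias, b* = (||w2*||^2 - ||w1*||^2)/2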
The Kozinec's algorithm is proven to converge to the vectors w_1* and w_2* in an infinite
number of iterations (t = ∞). If the ε-optimality stopping condition is used then the
Kozinec's algorithm converges in a finite number of iterations. The Kozinec's algorithm
can also be used to solve the simpler problem of finding the separating hyperplane (2.3).
Therefore the following two stopping conditions are implemented:
I. The separating hyperplane (2.3) is sought (ε < 0). The Kozinec's algorithm is
   proven to converge in a finite number of iterations if the separating hyperplane
   exists.
II. The ε-optimal hyperplane (2.5) is sought (ε ≥ 0). Notice that setting ε = 0
    forces the algorithm to seek the optimal hyperplane, which in general is only
    ensured to be found in an infinite number of iterations (t = ∞).
The Kozinec's algorithm which trains the binary linear classifier is implemented in the
function ekozinec.
References: Analysis of the Kozinec’s algorithm including the proof of convergence
can be found in Chapter 5 of the book [26].
Example: Training ε-optimal hyperplane by the Kozinec’s algorithm
The example shows the application of the Kozinec's algorithm to find the (ε = 0.01)-optimal
hyperplane for synthetically generated 2-dimensional training data. The generated
training data contain 50 labeled vectors which are linearly separable with margin 1.
The found classifier is visualized (Figure 2.2) as a separating hyperplane
H = {x ∈ R^2: ⟨w·x⟩ + b = 0} (a straight line in this case). It can be verified that the
found hyperplane has margin model.margin which satisfies the ε-optimality condition.

data = genlsdata(2,50,1);      % generate data
% run ekozinec
model = ekozinec(data, struct('eps',0.01));
figure;
ppatterns(data);               % plot data
pline(model);                  % plot found hyperplane
Figure 2.2: ε-optimal hyperplane trained with the Kozinec's algorithm.
2.2.3 Multi-class linear classifier
The input training data T = {(x_1, y_1), . . . , (x_l, y_l)} consists of pairs of observations
x_i ∈ R^n and corresponding class labels y_i ∈ Y = {1, . . . , c}. The task is to design
the linear classifier (2.1) which classifies the training data T without error. The problem
of training (2.1) is equivalent to solving the set of linear inequalities (2.6). The
inequalities (2.6) can be formally transformed to a simpler set

   ⟨v·z_i^y⟩ > 0 ,   i = 1, . . . , l ,   y ∈ Y \ {y_i} ,     (2.10)

where the vector v ∈ R^{(n+1)c} contains the originally optimized parameters

   v = [w_1; b_1; w_2; b_2; . . . ; w_c; b_c] .

The set of vectors z_i^y ∈ R^{(n+1)c}, i = 1, . . . , l, y ∈ Y \ {y_i}, is created from the training
set T such that

   z_i^y(j) =  [x_i; 1]   for j = y_i ,
              −[x_i; 1]   for j = y ,
               0          otherwise,     (2.11)

where z_i^y(j) stands for the j-th slot between coordinates (n+1)(j−1)+1 and (n+1)j.
The described transformation is known as the Kesler's construction.
The transformed task (2.10) can be solved by the Perceptron algorithm. The simple
form of the Perceptron updating rule allows the transformation to be implemented
implicitly, without mapping the training data T into the (n + 1)c-dimensional space. The
described method is implemented in the function mperceptron.
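For illustration only, the explicit Kesler construction (2.11) can be written down in a
few lines of Matlab. The variable names below are ours and mperceptron does not build
these vectors explicitly; the sketch merely shows what the transformation produces:

% X is [n x l], y is [1 x l] with labels 1..c.
[n,l] = size(X); c = max(y);
Z = [];                                   % columns will be the vectors z_i^y
for i = 1:l
  for yy = setdiff(1:c, y(i))             % all competing classes y ~= y_i
    z = zeros((n+1)*c, 1);
    z((y(i)-1)*(n+1)+1 : y(i)*(n+1)) =  [X(:,i); 1];   % slot j = y_i
    z((yy  -1)*(n+1)+1 : yy  *(n+1)) = -[X(:,i); 1];   % slot j = y
    Z = [Z z];
  end
end
% Solving <v,z> > 0 for all columns of Z, e.g., by the Perceptron rule
% sketched in Section 2.2.1, yields v = [w_1; b_1; ... ; w_c; b_c].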
References: The Kesler’s construction is described in books [26, 6].
Example: Training multi-class linear classifier by the Perceptron
The example shows the application of the Perceptron rule to train the multi-class linear
classifier for the synthetically generated 2-dimensional training data pentagon.mat. A
user's own data can be generated interactively using the function createdata('finite',10)
(the number 10 stands for the number of classes). The found classifier is visualized by
its separating surface which is known to be piecewise linear; the class regions should
be convex (see Figure 2.3).
data = load('pentagon');    % load training data
model = mperceptron(data);  % run training algorithm
figure;
ppatterns(data);            % plot training data
pboundary(model);           % plot decision boundary

Figure 2.3: Multi-class linear classifier trained by the Perceptron algorithm.
2.3 Fisher Linear Discriminant
The input is a set T = {(x_1, y_1), . . . , (x_l, y_l)} of binary labeled (y_i ∈ {1, 2}) training
vectors x_i ∈ R^n. Let I_y = {i: y_i = y}, y ∈ {1, 2}, be the sets of indices of training
vectors belonging to the first (y = 1) and the second (y = 2) class, respectively. The
class separability in a direction w ∈ R^n is defined as

   F(w) = ⟨w·S_B w⟩ / ⟨w·S_W w⟩ ,     (2.12)

where S_B is the between-class scatter matrix

   S_B = (µ_1 − µ_2)(µ_1 − µ_2)^T ,    µ_y = (1/|I_y|) Σ_{i∈I_y} x_i ,   y ∈ {1, 2} ,

and S_W is the within-class scatter matrix defined as

   S_W = S_1 + S_2 ,    S_y = Σ_{i∈I_y} (x_i − µ_y)(x_i − µ_y)^T ,   y ∈ {1, 2} .
In the case of the Fisher Linear Discriminant (FLD), the parameter vector w of the
linear discriminant function f(x) = ⟨w·x⟩ + b is determined to maximize the class
separability criterion (2.12),

   w = argmax_{w'} F(w') = argmax_{w'} ⟨w'·S_B w'⟩ / ⟨w'·S_W w'⟩ .     (2.13)

The bias b of the linear rule must be determined based on another principle. The
STPRtool implementation, for instance, computes such b that the equality

   ⟨w·µ_1⟩ + b = −(⟨w·µ_2⟩ + b)

holds. The STPRtool contains two implementations of the FLD:
I. The classical solution of the problem (2.13) using the matrix inversion

      w = S_W^{-1}(µ_1 − µ_2) .

   This case is implemented in the function fld.
II. The problem (2.13) can be reformulated as the quadratic programming (QP) task

      w = argmin_{w'} ⟨w'·S_W w'⟩ ,    subject to    ⟨w'·(µ_1 − µ_2)⟩ = 2 .

    The QP solver quadprog of the Matlab Optimization Toolbox is used in the
    toolbox implementation. This method can be useful when the matrix inversion
    is hard to compute. The method is implemented in the function fldqp.
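As a didactic illustration of variant I (not the code of fld itself), the classical FLD
solution and the bias convention above can be computed directly; the variable names
are ours:

% X is [n x l], y is [1 x l] with labels 1 and 2.
m1 = mean(X(:, y==1), 2);  m2 = mean(X(:, y==2), 2);   % class mean vectors
X1 = X(:, y==1) - repmat(m1, 1, sum(y==1));            % centered class 1 data
X2 = X(:, y==2) - repmat(m2, 1, sum(y==2));            % centered class 2 data
Sw = X1*X1' + X2*X2';                                  % within-class scatter S_W
w  = Sw \ (m1 - m2);                                   % w = S_W^{-1}(mu_1 - mu_2)
b  = -0.5*w'*(m1 + m2);                                % from <w,mu_1>+b = -(<w,mu_2>+b)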
References: The Fisher Linear Discriminant is described in Chapter 3 of the book [6]
or in the book [3].
Example: Fisher Linear Discriminant
The binary linear classifier based on the Fisher Linear Discriminant is trained on
the Riply's data set riply_trn.mat. The QP formulation fldqp of the FLD is used.
However, the function fldqp can be replaced by the standard approach implemented
in the function fld which would yield the same solution. The found linear classifier is
visualized (see Figure 2.4) and evaluated on the testing data riply_tst.mat.
trn = load('riply_trn');          % load training data
model = fldqp(trn);               % compute FLD
figure;
ppatterns(trn); pline(model);     % plot data and solution
tst = load('riply_tst');          % load testing data
ypred = linclass(tst.X,model);    % classify testing data
cerror(ypred,tst.y)               % compute testing error
ans =
    0.1080

Figure 2.4: Linear classifier based on the Fisher Linear Discriminant.
2.4 Generalized Anderson’s task
The classified object is described by the vector of observations x ∈ X ⊆ R^n and a
binary hidden state y ∈ {1, 2}. The class conditional distributions p_{X|Y}(x|y), y ∈ {1, 2},
are known to be multi-variate Gaussian distributions. The parameters (µ_1, Σ_1) and
(µ_2, Σ_2) of these class distributions are unknown. However, it is known that the
parameters (µ_1, Σ_1) belong to a certain finite set of parameters {(µ_i, Σ_i): i ∈ I_1}.
Similarly, (µ_2, Σ_2) belong to a finite set {(µ_i, Σ_i): i ∈ I_2}. Let q: X ⊆ R^n → {1, 2}
be a binary linear classifier (2.2) with discriminant function f(x) = ⟨w·x⟩ + b. The
probability of misclassification is defined as

   Err(w, b) = max_{i∈I_1∪I_2} ε(w, b, µ_i, Σ_i) ,

where ε(w, b, µ_i, Σ_i) is the probability that the Gaussian random vector x with mean
vector µ_i and covariance matrix Σ_i satisfies q(x) = 1 for i ∈ I_2 or q(x) = 2 for
i ∈ I_1. In other words, it is the probability that the vector x will be misclassified by
the linear rule q.
The Generalized Anderson's task (GAT) is to find the parameters (w*, b*) of the
linear classifier

   q(x) = 1 ,  for f(x) = ⟨w*·x⟩ + b* ≥ 0 ,
          2 ,  for f(x) = ⟨w*·x⟩ + b* < 0 ,

such that the error Err(w*, b*) is minimal

   (w*, b*) = argmin_{w,b} Err(w, b) = argmin_{w,b} max_{i∈I_1∪I_2} ε(w, b, µ_i, Σ_i) .     (2.14)

The original Anderson's task is a special case of (2.14) when |I_1| = 1 and |I_2| = 1.
The probability ε(w, b, µ_i, Σ_i) is proportional to the reciprocal of the Mahalanobis
distance r_i between the (µ_i, Σ_i) and the nearest vector of the separating hyperplane
H = {x ∈ R^n: ⟨w·x⟩ + b = 0}, i.e.,

   r_i = min_{x∈H} ⟨(µ_i − x)·(Σ_i)^{-1}(µ_i − x)⟩ = (⟨w·µ_i⟩ + b) / √(⟨w·Σ_i w⟩) .
The exact relation between the probability ε(w, b, µ_i, Σ_i) and the corresponding
Mahalanobis distance r_i is given by the integral

   ε(w, b, µ_i, Σ_i) = ∫_{r_i}^{∞} (1/√(2π)) e^{−t²/2} dt .     (2.15)
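The integral (2.15) is simply the tail probability of the standard normal distribution,
so it can be evaluated numerically via the complementary error function. The following
one-line check is our own illustration, not a toolbox function:

% Misclassification probability (2.15) for a given Mahalanobis distance r_i
r_i   = 1.37;                      % example distance
eps_i = 0.5*erfc(r_i/sqrt(2));     % tail of N(0,1), approximately 0.0853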
The optimization problem (2.14) can be equivalently rewritten as

   (w*, b*) = argmax_{w,b} F(w, b) = argmax_{w,b} min_{i∈I_1∪I_2} (⟨w·µ_i⟩ + b) / √(⟨w·Σ_i w⟩) ,

which is more suitable for optimization. The objective function F(w, b) is proven to
be convex in the region where the probability of misclassification Err(w, b) is less than
0.5. However, the objective function F(w, b) is not differentiable.
The STPRtool contains an implementation of the algorithm solving the original
Anderson's task as well as implementations of three different approaches to solve the
Generalized Anderson's task, which are described below. An interactive demo on the
algorithms solving the Generalized Anderson's task is implemented in demo_anderson.
References: The original Anderson’s task was published in [1]. A detailed description
of the Generalized Anderson’s task and all the methods implemented in the STPRtool
is given in book [26].
2.4.1 Original Anderson’s algorithm
The algorithm is implemented in the function androrig. This algorithm solves the
original Anderson's task defined for two Gaussians only, |I_1| = |I_2| = 1. In this case,
the optimal solution vector w ∈ R^n is obtained by solving the following problem

   w = ((1 − λ)Σ_1 + λΣ_2)^{-1}(µ_1 − µ_2) ,
   (1 − λ)/λ = √(⟨w·Σ_2 w⟩) / √(⟨w·Σ_1 w⟩) ,

where the scalar 0 < λ < 1 is unknown. The problem is solved by an iterative algorithm
which stops when the condition

   | γ − (1 − λ)/λ | ≤ ε ,    γ = √(⟨w·Σ_2 w⟩) / √(⟨w·Σ_1 w⟩) ,

is satisfied. The parameter ε defines the closeness to the optimal solution.
Example: Solving the original Anderson’s task
The original Anderson's task is solved for the Riply's data set riply_trn.mat. The
parameters (µ_1, Σ_1) and (µ_2, Σ_2) of the Gaussian distributions of the first and the
second class are obtained by the Maximum-Likelihood estimation. The found linear
classifier and the Gaussian distributions are visualized (see Figure 2.5). The classifier
is validated on the testing set riply_tst.mat.
trn = load('riply_trn');            % load training data
distrib = mlcgmm(trn);              % estimate Gaussians
model = androrig(distrib);          % solve Anderson's task
figure;
pandr( model, distrib );            % plot solution
tst = load('riply_tst');            % load testing data
ypred = linclass( tst.X, model );   % classify testing data
cerror( ypred, tst.y )              % evaluate error
ans =
0.1150
Figure 2.5: Solution of the original Anderson's task.
2.4.2 General algorithm framework
The algorithm is implemented in the function ganders. This algorithm repeats the
following steps:
I. An improving direction (∆w, ∆b) in the parameter space is computed such that
   moving along this direction increases the objective function F. The quadprog
   function of the Matlab Optimization Toolbox is used to solve this subtask.
II. The optimal movement k along the direction (∆w, ∆b) is determined by a 1D-search
    based on the interval cutting algorithm according to the Fibonacci series. The
    number of interval cuttings is an optional parameter of the algorithm.
III. The new parameters are set

      w^(t+1) := w^(t) + k∆w ,    b^(t+1) := b^(t) + k∆b .

The algorithm iterates until the minimal change in the objective function is less than
the prescribed ε, i.e., until the condition

   F(w^(t+1), b^(t+1)) − F(w^(t), b^(t)) > ε

is violated.
Example: Solving the Generalized Anderson’s task
The Generalized Anderson's task is solved for the data mars.mat. Both the first
and the second class are described by three 2-dimensional Gaussian distributions. The
found linear classifier as well as the input Gaussian distributions are visualized (see
Figure 2.6).

distrib = load('mars');        % load input Gaussians
model = ganders(distrib);      % solve the GAT
figure;
pandr(model,distrib);          % plot the solution

Figure 2.6: Solution of the Generalized Anderson's task.
2.4.3 ε-solution using the Kozinec’s algorithm
This algorithm is implemented in the function eanders. The ε-solution is defined as a
linear classifier with parameters (w, b) such that

   F(w, b) < ε

is satisfied. The prescribed ε ∈ (0, 0.5) is the desired upper bound on the probability
of misclassification. The problem is transformed to the task of separation of two sets
of ellipsoids, which is solved by the Kozinec's algorithm. The algorithm converges in a
finite number of iterations if the given ε-solution exists. Otherwise it can get stuck in
an endless loop.
Example: ε-solution of the Generalized Anderson’s task
The ε-solution of the Generalized Anderson's task is computed for the data mars.mat.
The desired maximal possible probability of misclassification is set to 0.06 (i.e., 6%).
Both the first and the second class are described by three 2-dimensional Gaussian
distributions. The found linear classifier and the input Gaussian distributions are
visualized (see Figure 2.7).

distrib = load('mars');              % load Gaussians
options = struct('err',0.06);        % maximal desired error
model = eanders(distrib,options);    % training
figure; pandr(model,distrib);        % visualization

Figure 2.7: ε-solution of the Generalized Anderson's task with maximal 0.06 probability
of misclassification.
2.4.4 Generalized gradient optimization
This algorithm is implemented in the function ggradandr. The objective function
F(w, b) is convex but not differentiable, thus the standard gradient optimization cannot
be used. However, the generalized gradient optimization method can be applied. The
algorithm repeats the following two steps:
I. The generalized gradient (∆w, ∆b) is computed.
II. The optimized parameters are updated

      w^(t+1) := w^(t) + ∆w ,    b^(t+1) := b^(t) + ∆b .

The algorithm iterates until the minimal change in the objective function is less than
the prescribed ε, i.e., until the condition

   F(w^(t+1), b^(t+1)) − F(w^(t), b^(t)) > ε

is violated. The maximal number of iterations can be used as the stopping condition
as well.
Example: Solving the Generalized Anderson's task using the generalized gradient
optimization
The Generalized Anderson's task is solved for the data mars.mat. Both the first
and the second class are described by three 2-dimensional Gaussian distributions. The
maximal number of iterations is limited to tmax=10000. The found linear classifier as
well as the input Gaussian distributions are visualized (see Figure 2.8).

distrib = load('mars');               % load Gaussians
options = struct('tmax',10000);       % max # of iterations
model = ggradandr(distrib,options);   % training
figure;
pandr( model, distrib );              % visualization
Figure 2.8: Solution of the Generalized Anderson's task after 10000 iterations of the
generalized gradient optimization algorithm.
Chapter 3
Feature extraction
The feature extraction step consists of mapping the input vector of observations x ∈ R^n
onto a new feature description z ∈ R^m which is more suitable for the given task. The
linear projection

   z = W^T x + b ,     (3.1)

is commonly used. The projection is given by the matrix W [n × m] and the bias vector
b ∈ R^m. The Principal Component Analysis (PCA) (Section 3.1) is a representative of
the unsupervised learning methods which yield the linear projection (3.1). The Linear
Discriminant Analysis (LDA) (Section 3.2) is a representative of the supervised learning
methods yielding the linear projection. The linear data projection is implemented in
the function linproj (see examples in Section 3.1 and Section 3.2). The data type used
to describe the linear data projection is defined in Table 3.2.
The quadratic data mapping projects the input vector x ∈ R^n onto the feature
space R^m where m = n(n + 3)/2. The quadratic mapping φ: R^n → R^m is defined as

   φ(x) = [ x_1, x_2, . . . , x_n,
            x_1 x_1, x_1 x_2, . . . , x_1 x_n,
            x_2 x_2, . . . , x_2 x_n,
            . . .
            x_n x_n ]^T ,      (3.2)

where x_i, i = 1, . . . , n, are the entries of the vector x ∈ R^n. The quadratic data mapping
is implemented in the function qmap.
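For a quick check of the definition (3.2) and of the output dimension m = n(n + 3)/2,
the mapping of a single vector can be written down by hand. The lines below are a
didactic sketch only; the toolbox function qmap works on whole data sets:

% Quadratic mapping of a single vector x (n = 2, hence m = 2*(2+3)/2 = 5)
x = [2; 3];
phi = x;                           % linear terms x_1, ..., x_n
for i = 1:length(x)
  for j = i:length(x)
    phi = [phi; x(i)*x(j)];        % quadratic terms x_i*x_j, j >= i
  end
end
% phi = [2; 3; 4; 6; 9]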
The kernel functions allow for non-linear extensions of the linear feature extraction
methods. In this case, the input vectors x ∈ R^n are mapped into a new higher
dimensional feature space F in which the linear methods are applied. This yields a
non-linear (kernel) projection of data which is generally defined as

   z = A^T k(x) + b ,      (3.3)

where A [l × m] is a parameter matrix and b ∈ R^m a parameter vector. The vector
k(x) = [k(x, x_1), . . . , k(x, x_l)]^T is a vector of kernel functions centered in the training
data or their subset. The x ∈ R^n stands for the input vector and z ∈ R^m is the
vector of extracted features. The extracted features are projections of φ(x) onto m
vectors (or functions) from the feature space F. The Kernel PCA (Section 3.3) and
the Generalized Discriminant Analysis (GDA) (Section 3.5) are representatives of the
feature extraction methods yielding the kernel data projection (3.3). The kernel data
projection is implemented in the function kernelproj (see examples in Section 3.3
and Section 3.5). The data type used to describe the kernel data projection is defined
in Table 3.3. The summary of implemented methods for feature extraction is given in
Table 3.1.
Table 3.1: Implemented methods for feature extraction
linproj Linear data projection.
kerproj Kernel data projection.
qmap Quadratic data mapping.
lin2quad Merges linear rule and quadratic mapping.
lin2svm Merges linear rule and kernel projection.
lda Linear Discriminant Analysis.
pca Principal Component Analysis.
pcarec Computes reconstructed vector after PCA projection.
gda Generalized Discriminant Analysis.
greedykpca Greedy Kernel Principal Component Analysis.
kpca Kernel Principal Component Analysis.
kpcarec Reconstructs image after kernel PCA.
demo_pcacomp Demo on image compression using PCA.
Table 3.2: Data-type used to describe linear data projection.
Linear data projection (structure array):
.W [n × m]          Projection matrix W = [w_1, . . . , w_m].
.b [m × 1]          Bias vector b.
.fun = 'linproj'    Identifies function associated with this data type.

Table 3.3: Data-type used to describe kernel data projection.
Kernel data projection (structure array):
.Alpha [l × m]         Parameter matrix A.
.b [m × 1]             Bias vector b.
.sv.X [n × l]          Training vectors {x_1, . . . , x_l}.
.options.ker [string]  Kernel identifier.
.options.arg [1 × p]   Kernel argument(s).
.fun = 'kernelproj'    Identifies function associated with this data type.
3.1 Principal Component Analysis
Let T_X = {x_1, . . . , x_l} be a set of training vectors from the n-dimensional input space
R^n. The set of vectors T_Z = {z_1, . . . , z_l} is a lower dimensional representation of the
input training vectors T_X in the m-dimensional space R^m. The vectors T_Z are obtained
by the linear orthonormal projection

   z = W^T x + b ,      (3.4)

where the matrix W [n × m] and the vector b [m × 1] are parameters of the projection.
The reconstructed vectors T_X̃ = {x̃_1, . . . , x̃_l} are computed by the linear back projection

   x̃ = W(z − b) ,      (3.5)

obtained by inverting (3.4). The mean square reconstruction error

   ε_MS(W, b) = (1/l) Σ_{i=1}^{l} ‖x_i − x̃_i‖² ,      (3.6)

is a function of the parameters of the linear projections (3.4) and (3.5). The Principal
Component Analysis (PCA) is the linear orthonormal projection (3.4) which allows
for the minimal mean square reconstruction error (3.6) of the training data T_X. The
parameters (W, b) of the linear projection are the solution of the optimization task

   (W, b) = argmin_{W',b'} ε_MS(W', b') ,      (3.7)

subject to

   ⟨w_i·w_j⟩ = δ(i, j) ,   ∀i, j ,

where w_i, i = 1, . . . , m, are column vectors of the matrix W = [w_1, . . . , w_m] and
δ(i, j) is the Kronecker delta function. The solution of the task (3.7) is the matrix
W = [w_1, . . . , w_m] containing the m eigenvectors of the sample covariance matrix
which have the largest eigenvalues. The vector b equals W^T µ, where µ is the sample
mean of the training data. The PCA is implemented in the function pca. A demo
showing the use of PCA for image compression is implemented in the script
demo_pcacomp.
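For illustration of how the solution of (3.7) is obtained, the projection matrix W can
be computed from the eigenvectors of the sample covariance matrix. This is a didactic
sketch with our own variable names, not the implementation of the function pca:

% X is [n x l] training data, new_dim is the target dimension m
mu = mean(X,2);                          % sample mean
Xc = X - repmat(mu,1,size(X,2));         % centered data
C  = Xc*Xc'/size(X,2);                   % sample covariance matrix
[V,D] = eig(C);                          % eigenvectors and eigenvalues
[dummy,idx] = sort(-diag(D));            % order by decreasing eigenvalue
W = V(:,idx(1:new_dim));                 % m leading eigenvectors
Z = W'*Xc;                               % lower dimensional representation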
References: The PCA is treated throughout for instance in books [13, 3, 6].
Example: Principal Component Analysis
The input data are synthetically generated from the 2-dimensional Gaussian dis-
tribution. The PCA is applied to train the 1-dimensional subspace to approximate
the input data with the minimal reconstruction error. The training data, the recon-
structed data and the 1-dimensional subspace given by the first principal component
are visualized (see Figure 3.1).
X = gsamp([5;5],[1 0.8;0.8 1],100); % generate data
model = pca(X,1); % train PCA
Z = linproj(X,model); % lower dim. proj.
XR = pcarec(X,model); % reconstr. data
figure; hold on; axis equal; % visualization
h1 = ppatterns(X,'kx');
h2 = ppatterns(XR,'bo');
[dummy,mn] = min(Z);
[dummy,mx] = max(Z);
h3 = plot([XR(1,mn) XR(1,mx)],[XR(2,mn) XR(2,mx)],'r');
legend([h1 h2 h3], ...
   'Input vectors','Reconstructed', 'PCA subspace');
Figure 3.1: Example on the Principal Component Analysis.
3.2 Linear Discriminant Analysis
Let T_XY = {(x_1, y_1), . . . , (x_l, y_l)}, x_i ∈ R^n, y_i ∈ Y = {1, 2, . . . , c}, be a labeled set
of training vectors. The within-class S_W and between-class S_B scatter matrices are
defined as

   S_W = Σ_{y∈Y} S_y ,    S_y = Σ_{i∈I_y} (x_i − µ_y)(x_i − µ_y)^T ,
   S_B = Σ_{y∈Y} |I_y| (µ_y − µ)(µ_y − µ)^T ,      (3.8)

where the total mean vector µ and the class mean vectors µ_y, y ∈ Y, are defined as

   µ = (1/l) Σ_{i=1}^{l} x_i ,    µ_y = (1/|I_y|) Σ_{i∈I_y} x_i ,   y ∈ Y .

The goal of the Linear Discriminant Analysis (LDA) is to train the linear data projection

   z = W^T x ,

such that the class separability criterion

   F(W) = det(S̃_B) / det(S̃_W) = det(W^T S_B W) / det(W^T S_W W)

is maximized, where S̃_B is the between-class and S̃_W the within-class scatter matrix
of the projected data, defined similarly to (3.8).
References: The LDA is treated for instance in books [3, 6].
Example 1: Linear Discriminant Analysis
The LDA is applied to extract 2 features from the Iris data set iris.mat which
consists of the labeled 4-dimensional data. The data after the feature extraction step
are visualized in Figure 3.2.
orig_data = load('iris');                % load input data
model = lda(orig_data,2);                % train LDA
ext_data = linproj(orig_data,model);     % feature extraction
figure; ppatterns(ext_data);             % plot ext. data

Figure 3.2: Example shows 2-dimensional data extracted from the originally 4-dimensional
Iris data set using the Linear Discriminant Analysis.
Example 2: PCA versus LDA
This example shows why the LDA is superior to the PCA when features good for
classification are to be extracted. The synthetic data are generated from a Gaussian
mixture model. The LDA and PCA are trained on the generated data. The LDA and
PCA directions are displayed as the connection of the vectors projected onto the trained
LDA and PCA subspaces and re-projected back to the input space (see Figure 3.3a).
The data extracted using the LDA and PCA models are displayed with the Gaussians
fitted by the Maximum-Likelihood estimation (see Figure 3.3b).
% Generate data
distrib.Mean = [[5;4] [4;5]];            % mean vectors
distrib.Cov(:,:,1) = [1 0.9; 0.9 1];     % 1st covariance
distrib.Cov(:,:,2) = [1 0.9; 0.9 1];     % 2nd covariance
distrib.Prior = [0.5 0.5];               % Gaussian weights
data = gmmsamp(distrib,250);             % sample data

lda_model = lda(data,1);                 % train LDA
lda_rec = pcarec(data.X,lda_model);
lda_data = linproj(data,lda_model);

pca_model = pca(data.X,1);               % train PCA
pca_rec = pcarec(data.X,pca_model);
pca_data = linproj( data,pca_model);

figure; hold on; axis equal;             % visualization
ppatterns(data);
h1 = plot(lda_rec(1,:),lda_rec(2,:),'r');
h2 = plot(pca_rec(1,:),pca_rec(2,:),'b');
legend([h1 h2],'LDA direction','PCA direction');

figure; hold on;
subplot(2,1,1); title('LDA'); ppatterns(lda_data);
pgauss(mlcgmm(lda_data));
subplot(2,1,2); title('PCA'); ppatterns(pca_data);
pgauss(mlcgmm(pca_data));
3.3 Kernel Principal Component Analysis
The Kernel Principal Component Analysis (Kernel PCA) is the non-linear extension of
the ordinary linear PCA. The input training vectors T_X = {x_1, . . . , x_l}, x_i ∈ X ⊆ R^n,
are mapped by φ: X → F to a high dimensional feature space F. The linear PCA
is applied on the mapped data T_Φ = {φ(x_1), . . . , φ(x_l)}. The computation of the
principal components and the projection on these components can be expressed in
terms of dot products, thus the kernel functions k: X × X → R can be employed. The
kernel PCA trains the kernel data projection

   z = A^T k(x) + b ,      (3.9)

such that the reconstruction error

   ε_KMS(A, b) = (1/l) Σ_{i=1}^{l} ‖φ(x_i) − φ̃(x_i)‖² ,      (3.10)

is minimized. The reconstructed vector φ̃(x) is given as a linear combination of the
mapped data T_Φ,

   φ̃(x) = Σ_{i=1}^{l} β_i φ(x_i) ,    β = A(z − b) .      (3.11)

In contrast to the linear PCA, an explicit projection from the feature space F to the
input space X usually does not exist. The problem is to find the vector x ∈ X whose
image φ(x) ∈ F well approximates the reconstructed vector φ̃(x) ∈ F. This procedure
is implemented in the function kpcarec for the case of the Radial Basis Function (RBF)
kernel. The procedure consists of the following steps:
I. Project the input vector x_in ∈ R^n onto its lower dimensional representation z ∈ R^m
   using (3.9).
II. Compute the vector x_out ∈ R^n which is a good pre-image of the reconstructed
    vector φ̃(x_in), such that

      x_out = argmin_x ‖φ(x) − φ̃(x_in)‖² .
References: The kernel PCA is described at length in books [29, 30].
Example: Kernel Principal Component Analysis
The example shows using the kernel PCA for data denoising. The input data are
synthetically generated 2-dimensional vectors which lie on a circle and are corrupted by
the Gaussian noise. The kernel PCA model with RBF kernel is trained. The function
kpcarec is used to find pre-images of the reconstructed data (3.11). The result is
visualized in Figure 3.4.
X = gencircledata([1;1],5,250,1);        % generate circle data
options.ker = 'rbf';                     % use RBF kernel
options.arg = 4;                         % kernel argument
options.new_dim = 2;                     % output dimension
model = kpca(X,options);                 % compute kernel PCA
XR = kpcarec(X,model);                   % compute reconstructed data
figure;                                  % Visualization
h1 = ppatterns(X);
h2 = ppatterns(XR, '+r');
legend([h1 h2],'Input vectors','Reconstructed');
3.4 Greedy kernel PCA algorithm
The Greedy Kernel Principal Component Analysis (Greedy kernel PCA) is an efficient
algorithm to compute the ordinary kernel PCA. Let T_X = {x_1, . . . , x_l}, x_i ∈ R^n, be the
set of input training vectors. The goal is to train the kernel data projection

   z = A^T k_S(x) + b ,      (3.12)

where A [d × m] is the parameter matrix, b [m × 1] is the parameter vector and
k_S(x) = [k(x, s_1), . . . , k(x, s_d)]^T are the kernel functions centered in the vectors T_S =
{s_1, . . . , s_d}. The vector set T_S is a subset of the training data T_X. In contrast to the
ordinary kernel PCA, the subset T_S does not contain all the training vectors T_X, thus
the complexity of the projection (3.12) is reduced compared to (3.9). The objective of
the Greedy kernel PCA is to minimize the reconstruction error (3.10) while the size d
of the subset T_S is kept small. The Greedy kernel PCA algorithm is implemented in
the function greedykpca.
References: The implemented Greedy kernel PCA algorithm is described in [11].
Example: Greedy Kernel Principal Component Analysis
The example shows the use of kernel PCA for data denoising. The input data are
synthetically generated 2-dimensional vectors which lie on a circle and are corrupted
with Gaussian noise. The Greedy kernel PCA model with the RBF kernel is trained.
The number of vectors defining the kernel projection (3.12) is limited to 25. In contrast,
the ordinary kernel PCA (see the experiment in Section 3.3) requires all 250 training
vectors. The function kpcarec is used to find pre-images of the reconstructed data
obtained by the kernel PCA projection. The training and reconstructed vectors are
visualized in Figure 3.5. The vectors of the selected subset T_S are visualized as well.
X = gencircledata([1;1],5,250,1);        % generate training data
options.ker = 'rbf';                     % use RBF kernel
options.arg = 4;                         % kernel argument
options.new_dim = 2;                     % output dimension
options.m = 25;                          % size of sel. subset
model = greedykpca(X,options);           % run greedy algorithm
XR = kpcarec(X,model);                   % reconstructed vectors
figure;                                  % visualization
h1 = ppatterns(X);
h2 = ppatterns(XR,'+r');
h3 = ppatterns(model.sv.X,'ob',12);
legend([h1 h2 h3], ...
   'Training set','Reconstructed set','Selected subset');
3.5 Generalized Discriminant Analysis
The Generalized Discriminant Analysis (GDA) is the non-linear extension of the
ordinary Linear Discriminant Analysis (LDA). The input training data T_XY =
{(x_1, y_1), . . . , (x_l, y_l)}, x_i ∈ X ⊆ R^n, y_i ∈ Y = {1, . . . , c}, are mapped by
φ: X → F to a high dimensional feature space F. The ordinary LDA is applied on the
mapped data T_ΦY = {(φ(x_1), y_1), . . . , (φ(x_l), y_l)}. The computation of the projection
vectors and the projection on these vectors can be expressed in terms of dot products,
thus the kernel functions k: X × X → R can be employed. The resulting kernel data
projection is defined as

   z = A^T k(x) + b ,

where A [l × m] is the matrix and b [m × 1] the vector trained by the GDA. The
parameters (A, b) are trained to increase the between-class scatter and decrease the
within-class scatter of the extracted data T_ZY = {(z_1, y_1), . . . , (z_l, y_l)}, z_i ∈ R^m. The
GDA is implemented in the function gda.
References: The GDA was published in the paper [2].
Example: Generalized Discriminant Analysis
The GDA is trained on the Iris data set iris.mat which contains 3 classes of
4-dimensional data. The RBF kernel with kernel width σ = 1 is used. Notice that
setting σ too low leads to perfect separability of the training data T_XY but the kernel
projection is heavily over-fitted. The extracted data T_ZY are visualized in Figure 3.6.
orig_data = load('iris');                % load data
options.ker = 'rbf';                     % use RBF kernel
options.arg = 1; % kernel arg.
options.new_dim = 2; % output dimension
model = gda(orig_data,options); % train GDA
ext_data = kernelproj(orig_data,model); % feature ext.
figure; ppatterns(ext_data); % visualization
3.6 Combining feature extraction and linear classifier
Let T_XY = {(x_1, y_1), . . . , (x_l, y_l)} be a training set of observations x_i ∈ X ⊆ R^n and
corresponding hidden states y_i ∈ Y = {1, . . . , c}. Let φ: X → R^m be a mapping

   φ(x) = [φ_1(x), . . . , φ_m(x)]^T

defining a non-linear feature extraction step (e.g., quadratic data mapping or kernel
projection). The feature extraction step applied to the input data T_XY yields the
extracted data T_ΦY = {(φ(x_1), y_1), . . . , (φ(x_l), y_l)}. Let f(x) = ⟨w·φ(x)⟩ + b be a linear
discriminant function found by an arbitrary training method on the extracted data
T_ΦY. The function f can be seen as a non-linear function of the input vectors x ∈ X,
as

   f(x) = ⟨w·φ(x)⟩ + b = Σ_{i=1}^{m} w_i φ_i(x) + b .

In the case of the quadratic data mapping (3.2) used as the feature extraction step,
the discriminant function can be written in the following form

   f(x) = ⟨x·Ax⟩ + ⟨x·b⟩ + c .      (3.13)

The discriminant function (3.13) defines the quadratic classifier described in Section 6.4.
Therefore the quadratic classifier can be trained by (i) applying the quadratic data
mapping qmap, (ii) training a linear rule on the mapped data (e.g., perceptron, fld)
and (iii) combining the quadratic mapping and the linear rule. The combining is
implemented in the function lin2quad.
In the case of the kernel projection (3.3) applied as the feature extraction step, the
resulting discriminant function has the following form

   f(x) = ⟨α·k(x)⟩ + b .      (3.14)

The discriminant function (3.14) defines the kernel (SVM type) classifier described
in Section 5. This classifier can be trained by (i) applying the kernel projection
kernelproj, (ii) training a linear rule on the extracted data and (iii) combining the
kernel projection and the linear rule. The combining is implemented in the function
lin2svm.
Example: Quadratic classifier trained by the Perceptron
This example shows how to train the quadratic classifier using the Perceptron al-
gorithm. The Perceptron algorithm produces the linear rule only. The input training
data are linearly non-separable but become separable after the quadratic data map-
ping is applied. Function lin2quad is used to compute the parameters of the quadratic
classifier explicitly based on the found linear rule in the feature space and the mapping.
Figure 3.7 shows the decision boundary of the quadratic classifier.
% load training data living in the input space
input_data = load('vltava');
% map data to the feature space
map_data = qmap(input_data);
% train linear rule in the feature space
lin_model = perceptron(map_data);
% compute parameters of the quadratic classifier
quad_model = lin2quad(lin_model);
% visualize the quadratic classifier
figure; ppatterns(input_data);
pboundary(quad_model);
Example: Kernel classifier from the linear rule
This example shows how to train the kernel classifier using an arbitrary training
algorithm for linear rules. The Fisher Linear Discriminant was used in this example. The
greedy kernel PCA algorithm was applied as the feature extraction step. The Riply's
data set was used for training and evaluation. Figure 3.8 shows the found kernel
classifier, the training data and the vectors used in the kernel expansion of the
discriminant function.
% load input data
trn = load(’riply_trn’);
% train kernel PCA by greedy algorithm
options = struct(’ker’,’rbf’,’arg’,1,’new_dim’,10);
kpca_model = greedykpca(trn.X,options);
% project data
map_trn = kernelproj(trn,kpca_model);
% train linear classifier
lin_model = fld(map_trn);
% combine linear rule and kernel PCA
kfd_model = lin2svm(kpca_model,lin_model);
% visualization
figure;
ppatterns(trn); pboundary(kfd_model);
ppatterns(kfd_model.sv.X,’ok’,13);
% evaluation on testing data
tst = load(’riply_tst’);
ypred = svmclass(tst.X,kfd_model);
cerror(ypred,tst.y)
ans =
0.0970
Figure 3.3: Comparison between PCA and LDA. Figure (a) shows the input data and
the found LDA (red) and PCA (blue) directions onto which the data are projected.
Figure (b) shows the class-conditional Gaussians estimated from the data projected
onto the LDA and PCA directions.
Figure 3.4: Example of the Kernel Principal Component Analysis. The plot shows the
input vectors and the reconstructed data.
Figure 3.5: Example of the Greedy Kernel Principal Component Analysis. The plot
shows the training set, the reconstructed set and the selected subset.
Figure 3.6: Example shows 2-dimensional data extracted from the originally 4-
dimensional Iris data set using the Generalized Discriminant Analysis. In contrast
to LDA (see Figure 3.2 for comparison), the class clusters are more compact but at the
expense of a higher risk of over-fitting.
Figure 3.7: Example shows the quadratic classifier found by the Perceptron algorithm
on the data mapped to the feature space by the quadratic mapping.
Figure 3.8: Example shows the kernel classifier found by the Fisher Linear Discriminant
algorithm on the data mapped to the feature space by the kernel PCA. The greedy kernel
PCA was used, therefore only a subset of the training data determines the kernel
projection.
Chapter 4
Density estimation and clustering
The STPRtool represents a probability distribution function P_X(x), x ∈ R^n, as a
structure array which must contain the field .fun (see Table 4.2). The field .fun specifies
the function which is used to evaluate P_X(x) for a given set of realizations {x_1, ..., x_l} of a
random vector. Let the structure model represent a given distribution function. The
Matlab command

y = feval(model.fun, X)

is used to evaluate y_i = P_X(x_i). The matrix X [n × l] contains the column vectors
{x_1, ..., x_l}. The vector y [1 × l] contains the values of the distribution function. In
particular, the Gaussian distribution and the Gaussian mixture models are described
in Section 4.1. The summary of the implemented methods for dealing with probability
density estimation and clustering is given in Table 4.1.
4.1 Gaussian distribution and Gaussian mixture model
The value of the multivariate Gaussian probability distribution function at the vector
x ∈ R^n is given by

P_X(x) = 1/((2π)^{n/2} √det(Σ)) exp(−(1/2)(x − µ)^T Σ^{−1} (x − µ)) , (4.1)

where µ [n × 1] is the mean vector and Σ [n × n] is the covariance matrix. The
STPRtool uses a specific data-type to describe a set of Gaussians which can be labeled,
i.e., each pair (µ_i, Σ_i) is assigned to a class y_i ∈ Y. The covariance matrix Σ is
always represented as the square matrix [n × n]; however, the shape of the matrix can
be indicated in the optional field .cov_type (see Table 4.3). The data-type is described
in Table 4.4. The evaluation of the Gaussian distribution is implemented in the function
pdfgauss.
Table 4.1: Implemented methods: Density estimation and clustering
gmmsamp Generates sample from Gaussian mixture model.
gsamp Generates sample from Gaussian distribution.
kmeans K-means clustering algorithm.
pdfgauss Evaluates the multivariate Gaussian distribution.
pdfgmm Evaluates the Gaussian mixture model.
sigmoid Evaluates sigmoid function.
emgmm Expectation-Maximization Algorithm for Gaussian
mixture model.
melgmm Maximizes Expectation of Log-Likelihood for Gaus-
sian mixture.
mlcgmm Maximal Likelihood estimation of Gaussian mixture
model.
mlsigmoid Fitting a sigmoid function using ML estimation.
mmgauss Minimax estimation of Gaussian distribution.
rsde Reduced Set Density Estimator.
demo emgmm Demo on Expectation-Maximization (EM) algorithm.
demo mmgauss Demo on minimax estimation for Gaussian.
demo svmpout Fitting a posteriori probability to SVM output.
Table 4.2: Data-type used to represent probability distribution function.
Probability Distribution Function (structure array):
.fun [string] The name of a function which evaluates the probability
distribution, y = feval( model.fun, X ).
The random sampling from the Gaussian distribution is implemented in the function gsamp.
The Gaussian Mixture Model (GMM) is a weighted sum of Gaussian distributions

P_X(x) = Σ_{y∈Y} P_Y(y) P_{X|Y}(x|y) , (4.2)

where P_Y(y), y ∈ Y = {1, ..., c}, are the weights and P_{X|Y}(x|y) is the Gaussian
distribution (4.1) given by the parameters (µ_y, Σ_y). The data-type used to represent the GMM
is described in Table 4.5. The evaluation of the GMM is implemented in the function
pdfgmm; a plain-Matlab sketch of this evaluation is given below. The random sampling
from the GMM is implemented in the function gmmsamp.
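The following minimal sketch, written in plain Matlab rather than with the toolbox data-types, evaluates the GMM (4.2) at a single point as a weighted sum of the Gaussians (4.1); the parameter values are arbitrary.

Mean  = [-2 2];                         % two univariate components
Cov   = cat(3, 1, 0.5);                 % covariance matrices along the third dimension
Prior = [0.4 0.6];                      % weights P_Y(y)
x = 0.5; n = size(Mean,1); p = 0;
for y = 1:length(Prior)
  S = Cov(:,:,y); d = x - Mean(:,y);
  p = p + Prior(y) * exp(-0.5*d'*(S\d)) / sqrt((2*pi)^n * det(S));
end
p                                       % value of P_X(x)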
Example: Sampling from the Gaussian distribution
The example shows how to represent the Gaussian distribution and how to sample
data from it. The sampling from univariate and bivariate Gaussians is shown. The
distribution function of the univariate Gaussian is visualized and compared to the
histogram of the sample data (see Figure 4.1a). The isoline of the distribution function
of the bivariate Gaussian is visualized together with the sample data (see Figure 4.1b).
Table 4.3: Shape of covariance matrix.
Parameter cov type:
’full’ Full covariance matrix.
’diag’ Diagonal covariance matrix.
’spherical’ Spherical covariance matrix.
Table 4.4: Data-type used to represent the labeled set of Gaussians.
Labeled set of Gaussians (structure array):
.Mean [n × k] Mean vectors {µ_1, ..., µ_k}.
.Cov [n × n × k] Covariance matrices {Σ_1, ..., Σ_k}.
.y [1 × k] Labels {y_1, ..., y_k} assigned to the Gaussians.
.cov_type [string] Optional field which indicates the shape of the covariance
matrix (see Table 4.3).
.fun = 'pdfgauss' Identifies the function associated with this data type.
% univariate case
model = struct(’Mean’,1,’Cov’,2); % Gaussian parameters
figure; hold on;
% plot pdf of the Gaussian
plot([-4:0.1:5],pdfgauss([-4:0.1:5],model),’r’);
[Y,X] = hist(gsamp(model,500),10); % compute histogram
bar(X,Y/500); % plot histogram
% bi-variate case
model.Mean = [1;1]; % Mean vector
model.Cov = [1 0.6; 0.6 1]; % Covariance matrix
figure; hold on;
ppatterns(gsamp(model,250)); % plot sampled data
pgauss(model); % plot shape
4.2 Maximum-Likelihood estimation of GMM
Let the probability distribution

P_{XY|Θ}(x, y|θ) = P_{X|YΘ}(x|y, θ) P_{Y|Θ}(y|θ)

be known up to the parameters θ ∈ Θ.
Table 4.5: Data-type used to represent the Gaussian Mixture Model.
Gaussian Mixture Model (structure array):
.Mean [n × c] Mean vectors {µ_1, ..., µ_c}.
.Cov [n × n × c] Covariance matrices {Σ_1, ..., Σ_c}.
.Prior [c × 1] Weights of the Gaussians {P_Y(1), ..., P_Y(c)}.
.fun = 'pdfgmm' Identifies the function associated with this data type.
Figure 4.1: Sampling from the univariate (a) and bivariate (b) Gaussian distribution.
The vector of observations x ∈ R^n and the hidden
state y ∈ Y = {1, ..., c} are assumed to be realizations of random variables which
are independent and identically distributed according to P_{XY|Θ}(x, y|θ). The
conditional probability distribution P_{X|YΘ}(x|y, θ) is assumed to be a Gaussian distribution.
The parameter θ involves the parameters (µ_y, Σ_y), y ∈ Y, of the Gaussian components
and the values of the discrete distribution P_{Y|Θ}(y|θ), y ∈ Y. The marginal probability
P_{X|Θ}(x|θ) is the Gaussian mixture model (4.2). The Maximum-Likelihood estimation
of the parameters θ of the GMM for the case of complete and incomplete data is
described in Section 4.2.1 and Section 4.2.2, respectively.
4.2.1 Complete data
The input is a set T_XY = {(x_1, y_1), ..., (x_l, y_l)} which contains both the vectors of
observations x_i ∈ R^n and the corresponding hidden states y_i ∈ Y. The pairs (x_i, y_i) are
assumed to be samples from random variables which are independent and identically
distributed (i.i.d.) according to P_{XY|Θ}(x, y|θ). The logarithm of the probability
P(T_XY|θ) is referred to as the log-likelihood L(θ|T_XY) of θ with respect to T_XY. Thanks
to the i.i.d. assumption the log-likelihood can be factorized as

L(θ|T_XY) = log P(T_XY|θ) = Σ_{i=1}^l log P_{XY|Θ}(x_i, y_i|θ) .

The problem of Maximum-Likelihood (ML) estimation from the complete data T_XY is
defined as

θ* = argmax_{θ∈Θ} L(θ|T_XY) = argmax_{θ∈Θ} Σ_{i=1}^l log P_{XY|Θ}(x_i, y_i|θ) . (4.3)

The ML estimate of the parameters θ of the GMM from the complete data set T_XY
has an analytical solution. The computation of the ML estimate is implemented in the
function mlcgmm; a sketch of the closed-form solution is given below.
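The closed-form solution of (4.3) amounts to class-wise sample statistics, which can be sketched as follows; this is a plain-Matlab illustration, not the actual mlcgmm code.

data = load('riply_trn');               % complete data: data.X [n x l], data.y [1 x l]
c = max(data.y); [n,l] = size(data.X);
Mean = zeros(n,c); Cov = zeros(n,n,c); Prior = zeros(1,c);
for y = 1:c
  Xy = data.X(:, data.y == y);          % observations with hidden state y
  ly = size(Xy,2);
  Mean(:,y)  = mean(Xy,2);
  d = Xy - repmat(Mean(:,y),1,ly);
  Cov(:,:,y) = d*d'/ly;                 % ML (biased) covariance estimate
  Prior(y)   = ly/l;                    % relative occurrence of class y
end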
Example: Maximum-Likelihood estimate from complete data
The parameters of the Gaussian mixture model are estimated from the complete
Riply's data set riply_trn. The data are binary labeled, thus the GMM has two Gaussian
components. The estimated GMM is visualized as: (i) the shape of the components of the
GMM (see Figure 4.2a) and (ii) the contours of the distribution function (see Figure 4.2b).
data = load(’riply_trn’); % load labeled (complete) data
model = mlcgmm(data); % ML estimate of GMM
% visualization
figure; hold on;
ppatterns(data); pgauss(model);
figure; hold on;
ppatterns(data);
pgmm(model,struct(’visual’,’contour’));
4.2.2 Incomplete data
The input is a set T_X = {x_1, ..., x_l} which contains only the vectors of observations
x_i ∈ R^n without knowledge of the corresponding hidden states y_i ∈ Y. The
logarithm of the probability P(T_X|θ) is referred to as the log-likelihood L(θ|T_X) of θ with
respect to T_X. Thanks to the i.i.d. assumption the log-likelihood can be factorized as

L(θ|T_X) = log P(T_X|θ) = Σ_{i=1}^l log Σ_{y∈Y} P_{X|YΘ}(x_i|y, θ) P_{Y|Θ}(y|θ) .

The problem of Maximum-Likelihood (ML) estimation from the incomplete data T_X is
defined as

θ* = argmax_{θ∈Θ} L(θ|T_X) = argmax_{θ∈Θ} Σ_{i=1}^l log Σ_{y∈Y} P_{X|YΘ}(x_i|y, θ) P_{Y|Θ}(y|θ) . (4.4)

The ML estimate of the parameters θ of the GMM from the incomplete data set
T_X does not have an analytical solution. A numerical optimization has to be used
instead.
The Expectation-Maximization (EM) algorithm is an iterative procedure for ML
estimation from incomplete data. The EM algorithm builds a sequence of parameter
estimates

θ^(0), θ^(1), ..., θ^(t) ,

such that the log-likelihood L(θ^(t)|T_X) monotonically increases, i.e.,

L(θ^(0)|T_X) < L(θ^(1)|T_X) < ... < L(θ^(t)|T_X) ,

until a stationary point L(θ^(t−1)|T_X) = L(θ^(t)|T_X) is achieved. The EM algorithm is a
local optimization technique, thus the estimated parameters depend on the initial guess
θ^(0). A random guess or the K-means algorithm (see Section 4.5) is commonly used to
select the initial θ^(0). The EM algorithm for the GMM is implemented in the function
emgmm. An interactive demo on the EM algorithm for the GMM is implemented in
demo_emgmm.
References: The EM algorithm (including convergence proofs) is described in book [26].
A nice description can be found in book [3]. The monograph [18] is entirely devoted
to the EM algorithm. The first papers describing the EM are [25, 5].
Example: Maximum-Likelihood estimate from incomplete data
The estimation of the parameters of the GMM is demonstrated on synthetic
data. The ground truth model is a GMM with two univariate Gaussian components.
The ground truth model is sampled to obtain the training (incomplete) data. The EM
algorithm is applied to estimate the GMM parameters. The random initialization is
used by default. The ground truth model, the sampled data and the estimated model are
visualized in Figure 4.3a. The plot of the log-likelihood function L(θ^(t)|T_X) with respect
to the number of iterations t is shown in Figure 4.3b.
% ground truth Gaussian mixture model
true_model.Mean = [-2 2];
true_model.Cov = [1 0.5];
true_model.Prior = [0.4 0.6];
sample = gmmsamp(true_model, 250);
% ML estimation by EM
options.ncomp = 2; % number of Gaussian components
options.verb = 1; % display progress info
estimated_model = emgmm(sample.X,options);
% visualization
figure;
ppatterns(sample.X);
h1 = pgmm(true_model,struct(’color’,’r’));
h2 = pgmm(estimated_model,struct(’color’,’b’));
legend([h1(1) h2(1)],’Ground truth’, ’ML estimation’);
figure; hold on;
xlabel(’iterations’); ylabel(’log-likelihood’);
plot( estimated_model.logL );
4.3 Minimax estimation
The input is a set T_X = {x_1, ..., x_l} containing vectors x_i ∈ R^n which are realizations of
a random variable distributed according to P_{X|Θ}(x|θ). The vectors T_X are assumed to be
realizations with a high value of the distribution function P_{X|Θ}(x|θ). The distribution
P_{X|Θ}(x|θ) is known up to the parameter θ ∈ Θ. The minimax estimation is defined as

θ* = argmax_{θ∈Θ} min_{x∈T_X} log P_{X|Θ}(x|θ) . (4.5)
If a procedure for the Maximum-Likelihood estimate (global solution) of the
distribution P_{X|Θ}(x|θ) exists, then a numerical iterative algorithm which converges to the
optimal estimate θ* can be constructed. The algorithm builds a series of estimates
θ^(0), θ^(1), ..., θ^(t) which converges to the optimal estimate θ*. The algorithm converges
in a finite number of iterations to an estimate θ such that the condition

min_{x∈T_X} log P_{X|Θ}(x|θ*) − min_{x∈T_X} log P_{X|Θ}(x|θ) < ε (4.6)

holds for ε > 0 which defines the closeness to the optimal solution θ*. The inequality (4.6)
can be evaluated efficiently, thus it is a natural choice for the stopping condition of
the algorithm. The STPRtool contains an implementation of the minimax algorithm
for the Gaussian distribution (4.1). The algorithm is implemented in the function
mmgauss. An interactive demo on the minimax estimation for the Gaussian distribution
is implemented in the function demo_mmgauss.
References: The minimax algorithm for probability estimation is described and
analyzed in Chapter 8 of the book [26].
Example: Minimax estimation of the Gaussian distribution
The bivariate Gaussian distribution is described by three samples which are known
to have a high value of the distribution function. The minimax algorithm is applied
to estimate the parameters of the Gaussian. In this particular case, the minimax
problem is equivalent to searching for the minimal enclosing ellipsoid. The found
solution is visualized as the isoline of the distribution function at the level attained at
the training points (see Figure 4.4).
X = [[0;0] [1;0] [0;1]]; % points with high p.d.f.
model = mmgauss(X); % run minimax algorithm
% visualization
figure; ppatterns(X,’xr’,13);
pgauss(model, struct(’p’,exp(model.lower_bound)));
4.4 Probabilistic output of classifier
In general, the discriminant function of an arbitrary classifier does not have the meaning
of a probability (e.g., SVM, Perceptron). However, a probabilistic output of the classifier
can help in post-processing, for instance, in combining several classifiers together. Fitting
a sigmoid function to the classifier output is a way to solve this problem.
Let T_XY = {(x_1, y_1), ..., (x_l, y_l)} be the training set composed of the vectors x_i ∈
X ⊆ R^n and the corresponding binary hidden states y_i ∈ Y = {1, 2}. The training
set T_XY is assumed to be identically and independently distributed according to the
underlying distribution. Let f: X ⊆ R^n → R be a discriminant function trained from the
data T_XY by an arbitrary learning method. The aim is to estimate the parameters of the
a posteriori distribution P_{Y|FΘ}(y|f(x), θ) of the hidden state y given the value of the
discriminant function f(x). The distribution is modeled by the sigmoid function

P_{Y|FΘ}(1|f(x), θ) = 1 / (1 + exp(a_1 f(x) + a_2)) ,

which is determined by the parameters θ = [a_1, a_2]^T. The log-likelihood function is
defined as

L(θ|T_XY) = Σ_{i=1}^l log P_{Y|FΘ}(y_i|f(x_i), θ) .
The parameters θ = [a_1, a_2]^T can be estimated by the maximum-likelihood method

θ = argmax_{θ'} L(θ'|T_XY) = argmax_{θ'} Σ_{i=1}^l log P_{Y|FΘ}(y_i|f(x_i), θ') . (4.7)
The STPRtool provides an implementation mlsigmoid which uses the Matlab
Optimization toolbox function fminunc to solve the task (4.7). The function produces the
parameters θ = [a_1, a_2]^T which are stored in the data-type describing the sigmoid (see
Table 4.6). The evaluation of the sigmoid is implemented in the function sigmoid. A
sketch of the ML fitting task (4.7) is given after Table 4.6.
Table 4.6: Data-type used to describe the sigmoid function.
Sigmoid function (structure array):
.A [2 × 1] Parameters θ = [a_1, a_2]^T of the sigmoid function.
.fun = 'sigmoid' Identifies the function associated with this data type.
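As an illustration of the task (4.7), the following minimal sketch minimizes the negative log-likelihood of the sigmoid model by fminunc on synthetic classifier outputs; the actual mlsigmoid implementation may differ in details.

f = randn(1,100);                         % simulated classifier outputs f(x_i)
y = 2 - (f + 0.3*randn(1,100) > 0);       % labels: mostly y=1 for positive outputs
t = (y == 1);                             % indicator of the hidden state y=1
sig  = @(a) 1./(1 + exp(a(1)*f + a(2)));  % P(y=1|f(x)) for parameters a=[a1;a2]
negL = @(a) -sum( t.*log(sig(a)) + (1-t).*log(1-sig(a)) );
theta = fminunc(negL, [-1; 0]);           % ML estimate of [a1; a2]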
References: The method of fitting the sigmoid function is described in [22].
Example: Probabilistic output for SVM
This example, implemented in the script demo_svmpout, shows how to train the
probabilistic output for the SVM classifier. The Riply's data set riply_trn is used for
training the SVM and for the ML estimation of the sigmoid. The ML estimate of a
Gaussian mixture model (GMM) of the SVM output is used for comparison. The
GMM allows computing the a posteriori probability which is compared to the sigmoid
estimate. Figure 4.5a shows the found probabilistic models of the a posteriori probability
of the SVM output. Figure 4.5b shows the decision boundary of the SVM classifier for
illustration.
% load training data
data = load(’riply_trn’);
% train SVM
options = struct(’ker’,’rbf’,’arg’,1,’C’,10);
svm_model = smo(data,options);
% compute SVM output
[dummy,svm_output.X] = svmclass(data.X,svm_model);
svm_output.y = data.y;
% fit sigmoid function to svm output
sigmoid_model = mlsigmoid(svm_output);
% ML estimation of GMM model of SVM output
gmm_model = mlcgmm(svm_output);
% visualization
figure; hold on;
xlabel(’svm output f(x)’); ylabel(’p(y=1|f(x))’);
% sigmoid model
fx = linspace(min(svm_output.X), max(svm_output.X), 200);
sigmoid_apost = sigmoid(fx,sigmoid_model);
hsigmoid = plot(fx,sigmoid_apost,’k’);
ppatterns(svm_output);
% GMM model
pcond = pdfgauss( fx, gmm_model);
gmm_apost = (pcond(1,:)*gmm_model.Prior(1))./...
(pcond(1,:)*gmm_model.Prior(1)+(pcond(2,:)*gmm_model.Prior(2)));
hgmm = plot(fx,gmm_apost,’g’);
hcomp = pgauss(gmm_model);
legend([hsigmoid,hgmm,hcomp],’P(y=1|f(x)) ML-Sigmoid’,...
’P(y=1|f(x)) ML-GMM’,’P(f(x)|y=1) ML-GMM’,’P(f(x)|y=2) ML-GMM’);
% SVM decision boundary
figure; ppatterns(data); psvm(svm_model);
4.5 K-means clustering
The input is a set T_X = {x_1, ..., x_l} which contains vectors x_i ∈ R^n. Let θ =
{µ_1, ..., µ_c}, µ_i ∈ R^n, be a set of unknown cluster centers. Let the objective function
F(θ) be defined as

F(θ) = (1/l) Σ_{i=1}^l min_{y=1,...,c} ∥µ_y − x_i∥^2 .

A small value of F(θ) indicates that the centers θ describe the input set T_X well.
The task is to find the centers θ which describe the set T_X best, i.e., the task is to
solve

θ* = argmin_θ F(θ) = argmin_θ (1/l) Σ_{i=1}^l min_{y=1,...,c} ∥µ_y − x_i∥^2 .
The K-means¹ algorithm is an iterative procedure which builds a series θ^(0), θ^(1), ..., θ^(t).
The K-means algorithm converges in a finite number of steps to a local optimum of
the objective function F(θ). The found solution depends on the initial guess θ^(0), which
is usually selected randomly. The K-means algorithm is implemented in the function
kmeans. A sketch of evaluating the objective F(θ) is given below.

¹The K in the name K-means stands for the number of centers, which is here denoted by c.
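A minimal sketch of evaluating the objective F(θ) for given centers (plain Matlab, not the toolbox kmeans code):

X  = randn(2,200);                 % input vectors [n x l]
Mu = randn(2,4);                   % c = 4 cluster centers [n x c]
l = size(X,2); c = size(Mu,2); D = zeros(c,l);
for y = 1:c
  D(y,:) = sum((X - repmat(Mu(:,y),1,l)).^2, 1);  % squared distances to center y
end
F = mean(min(D,[],1))              % objective F(theta)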
References: The K-means algorithm is described in the books [26, 6, 3].
Example: K-means clustering
The K-means algorithm is used to find 4 clusters in the Riply's training set. The
solution is visualized by the cluster centers. The vectors assigned to the corresponding
centers are distinguished by color (see Figure 4.6a). The plot of the objective function
F(θ^(t)) with respect to the number of iterations is shown in Figure 4.6b.
data = load(’riply_trn’); % load data
[model,data.y] = kmeans(data.X, 4 ); % run k-means
% visualization
figure; ppatterns(data);
ppatterns(model.X,’sk’,14);
pboundary( model );
figure; hold on;
xlabel(’t’); ylabel(’F’);
plot(model.MsErr);
Figure 4.2: Example of the Maximum-Likelihood estimate of the Gaussian Mixture
Model from the complete data. Figure (a) shows the components of the GMM and
Figure (b) shows the contour plot of the distribution function.
Figure 4.3: Example of using the EM algorithm for the ML estimation of the Gaussian
mixture model. Figure (a) shows the ground truth and the estimated GMM and
Figure (b) shows the log-likelihood with respect to the number of iterations of the EM.
Figure 4.4: Example of the Minimax estimation of the Gaussian distribution.
Figure 4.5: Example of fitting the a posteriori probability to the SVM output. Figure (a)
shows the sigmoid model and the GMM model, both fitted by ML estimation. Figure (b)
shows the corresponding SVM classifier.
Figure 4.6: Example of using the K-means algorithm to cluster the Riply's data set.
Figure (a) shows the found clusters and Figure (b) contains the plot of the objective
function F(θ^(t)) with respect to the number of iterations.
Chapter 5
Support vector and other kernel
machines
5.1 Support vector classifiers
The binary support vector classifier uses the discriminant function f: X ⊆ R^n → R of
the following form

f(x) = ⟨α · k_S(x)⟩ + b . (5.1)

Here k_S(x) = [k(x, s_1), ..., k(x, s_d)]^T is the vector of evaluations of the kernel function
centered at the support vectors S = {s_1, ..., s_d}, s_i ∈ R^n, which are usually a subset of
the training data, α ∈ R^d is a weight vector and b ∈ R is a bias. The binary
classification rule q: X → Y = {1, 2} is defined as

q(x) = 1 for f(x) ≥ 0 ,
       2 for f(x) < 0 .   (5.2)

The data-type used by the STPRtool to represent the binary SVM classifier is described
in Table 5.2.
The multi-class generalization involves a set of discriminant functions f_y: X ⊆ R^n →
R, y ∈ Y = {1, 2, ..., c}, defined as

f_y(x) = ⟨α_y · k_S(x)⟩ + b_y , y ∈ Y .

Let the matrix A = [α_1, ..., α_c] be composed of all the weight vectors and b = [b_1, ..., b_c]^T
be a vector of all the biases. The multi-class classification rule q: X → Y = {1, 2, ..., c} is
defined as

q(x) = argmax_{y∈Y} f_y(x) . (5.3)

The data-type used by the STPRtool to represent the multi-class SVM classifier is
described in Table 5.3. Both the binary and the multi-class SVM classifiers are
implemented in the function svmclass.
The majority voting strategy is another commonly used method to implement the
multi-class SVM classifier. Let q_j: X ⊆ R^n → {y_j^1, y_j^2}, j = 1, ..., g, be a set of g binary
SVM rules (5.2). The j-th rule q_j(x) classifies the input x into the class y_j^1 ∈ Y or y_j^2 ∈ Y.
Let v(x) be a vector [c × 1] of votes for the classes Y when the input x is to be classified.
The vector v(x) = [v_1(x), ..., v_c(x)]^T is computed as:

Set v_y(x) = 0, ∀y ∈ Y.
For j = 1, ..., g do
  v_y(x) = v_y(x) + 1 where y = q_j(x).
end.

The majority-votes-based multi-class classifier assigns the input x to the class y ∈ Y
having the majority of votes,

y = argmax_{y'∈Y} v_{y'}(x) . (5.4)

The data-type used to represent the majority voting strategy for the multi-class SVM
classifier is described in Table 5.4. The strategy (5.4) is implemented in the function
mvsvmclass; a minimal sketch of the voting computation is given below. The implemented
methods dealing with the SVM and related kernel classifiers are listed in Table 5.1.
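A minimal sketch of the voting computation behind (5.4), assuming the decisions of the g binary rules for a single input are already available (the variable names are hypothetical):

c = 3; g = c*(c-1)/2;
bin_y   = [1 1 2; 2 3 3];          % class pairs [y_j^1; y_j^2] of the g = 3 binary rules
ypred_j = [1 1 2];                 % decisions of the binary rules (1 or 2), see (5.2)
v = zeros(c,1);                    % votes for each class
for j = 1:g
  y = bin_y(ypred_j(j), j);        % class voted for by the j-th rule
  v(y) = v(y) + 1;
end
[dummy, ypred] = max(v)            % class with the majority of votes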
References: Books describing Support Vector Machines are, for example, [31, 4, 30].
5.2 Binary Support Vector Machines
The input is a set T_XY = {(x_1, y_1), ..., (x_l, y_l)} of training vectors x_i ∈ R^n and
corresponding hidden states y_i ∈ Y = {1, 2}. The linear SVM aims to train the linear
discriminant function f(x) = ⟨w · x⟩ + b of the binary classifier

q(x) = 1 for f(x) ≥ 0 ,
       2 for f(x) < 0 .

The training of the optimal parameters (w*, b*) is transformed to the following quadratic
programming task

(w*, b*) = argmin_{w,b} (1/2)∥w∥^2 + C Σ_{i=1}^l ξ_i^p , (5.5)

subject to

⟨w · x_i⟩ + b ≥ +1 − ξ_i , i ∈ I_1 ,
⟨w · x_i⟩ + b ≤ −1 + ξ_i , i ∈ I_2 ,
ξ_i ≥ 0 , i ∈ I_1 ∪ I_2 .
Table 5.1: Implemented methods: Support vector and other kernel machines.
svmclass Support Vector Machines Classifier.
mvsvmclass Majority voting multi-class SVM classifier.
evalsvm Trains and evaluates Support Vector Machines classi-
fier.
bsvm2 Multi-class BSVM with L2-soft margin.
oaasvm Multi-class SVM using One-Against-All decomposi-
tion.
oaosvm Multi-class SVM using One-Against-One decomposi-
tion.
smo Sequential Minimal Optimization for binary SVM
with L1-soft margin.
svmlight Interface to the SVM^light software.
svmquadprog SVM trained by Matlab Optimization Toolbox.
diagker Returns diagonal of kernel matrix of given data.
kdist Computes distance between vectors in kernel space.
kernel Evaluates kernel function.
kfd Kernel Fisher Discriminant.
kperceptr Kernel Perceptron.
minball Minimal enclosing ball in kernel feature space.
rsrbf Reduced Set Method for RBF kernel expansion.
rbfpreimg RBF pre-image by Schoelkopf’s fixed-point algorithm.
rbfpreimg2 RBF pre-image problem by Gradient optimization.
rbfpreimg3 RBF pre-image problem by Kwok-Tsang’s algorithm.
demo svm Demo on Support Vector Machines.
Here I_1 = {i: y_i = 1} and I_2 = {i: y_i = 2} are sets of indices, C > 0 stands for the
regularization constant, and ξ_i ≥ 0, i ∈ I_1 ∪ I_2, are slack variables used to relax the
inequalities for the case of non-separable data. A linear (p = 1) or quadratic (p = 2)
term for the slack variables ξ_i can be used. In the case of the parameter p = 1, the L1-soft
margin SVM is trained and for p = 2 it is denoted as the L2-soft margin SVM.
The training of the non-linear (kernel) SVM with L1-soft margin corresponds to
solving the following QP task

β* = argmax_β ⟨β · 1⟩ − (1/2)⟨β · Hβ⟩ , (5.6)

subject to

⟨β · γ⟩ = 1 ,
β ≥ 0 ,
β ≤ 1C .
Table 5.2: Data-type used to describe the binary SVM classifier.
Binary SVM classifier (structure array):
.Alpha [d × 1] Weight vector α.
.b [1 × 1] Bias b.
.sv.X [n × d] Support vectors S = {s_1, ..., s_d}.
.options.ker [string] Kernel identifier.
.options.arg [1 × p] Kernel argument(s).
.fun = 'svmclass' Identifies the function associated with this data type.

Table 5.3: Data-type used to describe the multi-class SVM classifier.
Multi-class SVM classifier (structure array):
.Alpha [d × c] Weight vectors A = [α_1, ..., α_c].
.b [c × 1] Vector of biases b = [b_1, ..., b_c]^T.
.sv.X [n × d] Support vectors S = {s_1, ..., s_d}.
.options.ker [string] Kernel identifier.
.options.arg [1 × p] Kernel argument(s).
.fun = 'svmclass' Identifies the function associated with this data type.
Here 0 [l × 1] is a vector of zeros and 1 [l × 1] is a vector of ones. The vector γ [l × 1]
and the matrix H [l × l] are constructed as follows

γ_i = +1 if y_i = 1 ,
      −1 if y_i = 2 ,

H_{i,j} = k(x_i, x_j)  if y_i = y_j ,
          −k(x_i, x_j) if y_i ≠ y_j ,

where k: X × X → R is the selected kernel function; a sketch of this construction is
given below. The discriminant function (5.1) is determined by the weight vector α, the
bias b and the set of support vectors S. The set of support vectors is S = {x_i: i ∈ I_SV},
where the set of indices is I_SV = {i: β_i > 0}. The weights are computed as

α_i = +β_i if y_i = 1 ,
      −β_i if y_i = 2 ,
for i ∈ I_SV . (5.7)
The bias b can be computed from the Karush-Kuhn-Tucker (KKT) conditions, as the
constraints

f(x_i) = ⟨α · k_S(x_i)⟩ + b = +1 for i ∈ {j: y_j = 1, 0 < β_j < C} ,
f(x_i) = ⟨α · k_S(x_i)⟩ + b = −1 for i ∈ {j: y_j = 2, 0 < β_j < C} ,

must hold for the optimal solution. The bias b is computed as an average over all these
constraints,

b = (1/|I_b|) Σ_{i∈I_b} (γ_i − ⟨α · k_S(x_i)⟩) .
Table 5.4: Data-type used to describe the majority voting strategy for the multi-class
SVM classifier.
Majority voting SVM classifier (structure array):
.Alpha [d × g] Weight vectors A = [α_1, ..., α_g].
.b [g × 1] Vector of biases b = [b_1, ..., b_g]^T.
.bin_y [2 × g] Targets for the binary rules. The vector bin_y(1,:)
contains [y_1^1, y_2^1, ..., y_g^1] and bin_y(2,:) contains
[y_1^2, y_2^2, ..., y_g^2].
.sv.X [n × d] Support vectors S = {s_1, ..., s_d}.
.options.ker [string] Kernel identifier.
.options.arg [1 × p] Kernel argument(s).
.fun = 'mvsvmclass' Identifies the function associated with this data type.
where I_b = {i: 0 < β_i < C} is the set of indices of the vectors on the boundary.
The training of the non-linear (kernel) SVM with L2-soft margin corresponds to
solving the following QP task

β* = argmax_β ⟨β · 1⟩ − (1/2)⟨β · (H + (1/2C)E)β⟩ , (5.8)

subject to

⟨β · γ⟩ = 1 ,
β ≥ 0 ,

where E is the identity matrix. The set of support vectors is S = {x_i: i ∈ I_SV}
where the set of indices is I_SV = {i: β_i > 0}. The weight vector α is given by (5.7).
The bias b can be computed from the KKT conditions, as the constraints

f(x_i) = ⟨α · k_S(x_i)⟩ + b = +1 − α_i/(2C) for i ∈ {j: y_j = 1, β_j > 0} ,
f(x_i) = ⟨α · k_S(x_i)⟩ + b = −1 + α_i/(2C) for i ∈ {j: y_j = 2, β_j > 0} ,

must hold at the optimal solution. The bias b is computed as an average over all these
constraints, thus

b = (1/|I_SV|) Σ_{i∈I_SV} (γ_i (1 − α_i/(2C)) − ⟨α · k_S(x_i)⟩) .
The binary SVM solvers implemented in the STPRtool are listed in Table 5.5.
An interactive demo on the binary SVM is implemented in demo_svm.
References: The Sequential Minimal Optimizer (SMO) is described in the books [23,
27, 30]. The SVM^light is described, for instance, in the book [27].
Example: Training binary SVM.
Table 5.5: Implemented binary SVM solvers.
Function description
smo Implementation of the Sequential Minimal Optimizer (SMO)
used to train the binary SVM with L1-soft margin. It can
handle moderate size problems.
svmlight Matlab interface to the SVM^light software (Version 5.00) which
trains the binary SVM with L1-soft margin. The executable
file svm_learn for the Linux OS must be installed. It can be
downloaded from http://svmlight.joachims.org/. The
SVM^light is a very powerful optimizer which can handle large
problems.
svmquadprog SVM solver based on the QP solver quadprog of the
Matlab Optimization toolbox. It trains both the L1-soft and L2-soft
margin binary SVM. The function quadprog is a part of the
Matlab Optimization toolbox. It is useful for small problems
only.
The binary SVM is trained on the Riply's training set riply_trn. The SMO solver
is used in the example but the other solvers (svmlight, svmquadprog) can be used as well.
The trained SVM classifier is evaluated on the testing data riply_tst. The decision
boundary is visualized in Figure 5.1.
trn = load(’riply_trn’); % load training data
options.ker = ’rbf’; % use RBF kernel
options.arg = 1; % kernel argument
options.C = 10; % regularization constant
% train SVM classifier
model = smo(trn,options);
% model = svmlight(trn,options);
% model = svmquadprog(trn,options);
% visualization
figure;
ppatterns(trn); psvm(model);
tst = load(’riply_tst’); % load testing data
ypred = svmclass(tst.X,model); % classify data
cerror(ypred,tst.y) % compute error
ans =
0.0920
Figure 5.1: Example showing the binary SVM classifier trained on the Riply's data.
5.3 One-Against-All decomposition for SVM
The input is a set T_XY = {(x_1, y_1), ..., (x_l, y_l)} of training vectors x_i ∈ X ⊆ R^n
and corresponding hidden states y_i ∈ Y = {1, 2, ..., c}. The goal is to train the
multi-class rule q: X ⊆ R^n → Y defined by (5.3). The problem can be solved by
the One-Against-All (OAA) decomposition. The OAA decomposition transforms the
multi-class problem into a series of c binary subtasks that can be trained by the binary
SVM. Let the training set T_XY^y = {(x_1, y'_1), ..., (x_l, y'_l)} contain the modified hidden
states defined as

y'_i = 1 for y = y_i ,
       2 for y ≠ y_i .

The discriminant functions

f_y(x) = ⟨α_y · k_S(x)⟩ + b_y , y ∈ Y ,

are trained by the binary SVM solver from the sets T_XY^y, y ∈ Y; a minimal sketch of
this re-labeling is given below. The OAA decomposition is implemented in the function
oaasvm. The function allows specifying any binary SVM solver (see Section 5.2) used to
solve the binary problems. The result is the multi-class SVM classifier (5.3) represented
by the data-type described in Table 5.3.
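A minimal sketch of the re-labeling step of the OAA decomposition, training one binary SVM per class with the smo solver (this mirrors, but is not, the oaasvm implementation):

data = load('pentagon');                 % data.X [n x l], data.y in {1,...,c}
options = struct('ker','rbf','arg',1,'C',10);
c = max(data.y); binary_models = cell(1,c);
for y = 1:c
  bin_data.X = data.X;
  bin_data.y = 2 - (data.y == y);        % class y -> 1, all other classes -> 2
  binary_models{y} = smo(bin_data,options);
end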
References: The OAA decomposition to train the multi-class SVM and comparison
to other methods is described for instance in the paper [12].
Example: Multi-class SVM using the OAA decomposition
The example shows the training of the multi-class SVM using the OAA decomposition.
The SMO binary solver is used to train the binary SVM subtasks. The training
data and the decision boundary are visualized in Figure 5.2.
data = load(’pentagon’); % load data
options.solver = ’smo’; % use SMO solver
options.ker = ’rbf’; % use RBF kernel
options.arg = 1; % kernel argument
options.C = 10; % regularization constant
model = oaasvm(data,options ); % training
% visualization
figure;
ppatterns(data);
ppatterns(model.sv.X,’ko’,12);
pboundary(model);
5.4 One-Against-One decomposition for SVM
The input is a set T_XY = {(x_1, y_1), ..., (x_l, y_l)} of training vectors x_i ∈ X ⊆ R^n and
corresponding hidden states y_i ∈ Y = {1, 2, ..., c}. The goal is to train the multi-class
rule q: X ⊆ R^n → Y based on the majority voting strategy defined by (5.4). The
One-Against-One (OAO) decomposition strategy can be used to train such a classifier. The
OAO decomposition transforms the multi-class problem into a series of g = c(c−1)/2
binary subtasks that can be trained by the binary SVM. Let the training set T_XY^j =
{(x'_1, y'_1), ..., (x'_{l_j}, y'_{l_j})} contain the training vectors x_i, i ∈ I_j = {i: y_i = y_j^1 ∨ y_i = y_j^2},
and the modified hidden states defined as

y'_i = 1 for y_j^1 = y_i ,
       2 for y_j^2 = y_i ,
i ∈ I_j .

The training sets T_XY^j, j = 1, ..., g, are constructed for all g = c(c−1)/2 combinations of
classes y_j^1 ∈ Y and y_j^2 ∈ Y \ {y_j^1}. The binary SVM rules q_j, j = 1, ..., g, are trained on
the data T_XY^j. The OAO decomposition for the multi-class SVM is implemented in the
function oaosvm.
Figure 5.2: Example showing the multi-class SVM classifier built by the one-against-all
decomposition.
The function oaosvm allows specifying an arbitrary binary SVM solver to train
the subtasks; a minimal sketch of building the pairwise training sets is given below.
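A minimal sketch of constructing the g = c(c−1)/2 pairwise training sets (a simplified illustration, not the oaosvm code):

data = load('pentagon');
options = struct('ker','rbf','arg',1,'C',10);
c = max(data.y); g = c*(c-1)/2;
models = cell(1,g); bin_y = zeros(2,g); j = 0;
for y1 = 1:c
  for y2 = y1+1:c
    j = j + 1; bin_y(:,j) = [y1; y2];
    inx = find(data.y == y1 | data.y == y2);   % indices I_j
    sub.X = data.X(:,inx);
    sub.y = 2 - (data.y(inx) == y1);           % y1 -> 1, y2 -> 2
    models{j} = smo(sub,options);
  end
end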
References: The OAO decomposition to train the multi-class SVM and comparison
to other methods is described for instance in the paper [12].
Example: Multi-class SVM using the OAO decomposition
The example shows the training of the multi-class SVM using the OAO decomposition.
The SMO binary solver is used to train the binary SVM subtasks. The training
data and the decision boundary are visualized in Figure 5.3.
data = load(’pentagon’); % load data
options.solver = ’smo’; % use SMO solver
options.ker = ’rbf’; % use RBF kernel
options.arg = 1; % kernel argument
options.C = 10; % regularization constant
model = oaosvm(data,options); % training
% visualization
figure;
ppatterns(data);
ppatterns(model.sv.X,’ko’,12);
pboundary(model);
Figure 5.3: Example showing the multi-class SVM classifier built by the one-against-one
decomposition.
5.5 Multi-class BSVM formulation
The input is a set T_XY = {(x_1, y_1), ..., (x_l, y_l)} of training vectors x_i ∈ X ⊆ R^n and
corresponding hidden states y_i ∈ Y = {1, 2, ..., c}. The goal is to train the multi-class
rule q: X ⊆ R^n → Y defined by (5.3). In the linear case, the parameters of the
multi-class rule can be found by solving the multi-class BSVM formulation, which is defined
as

(W*, b*, ξ*) = argmin_{W,b} F(W, b, ξ)
             = argmin_{W,b} (1/2) Σ_{y∈Y} (∥w_y∥^2 + b_y^2) + C Σ_{i∈I} Σ_{y∈Y\{y_i}} (ξ_i^y)^p , (5.9)
subject to

⟨w_{y_i} · x_i⟩ + b_{y_i} − (⟨w_y · x_i⟩ + b_y) ≥ 1 − ξ_i^y , i ∈ I , y ∈ Y \ {y_i} ,
ξ_i^y ≥ 0 , i ∈ I , y ∈ Y \ {y_i} ,

where W = [w_1, ..., w_c] is a matrix of normal vectors, b = [b_1, ..., b_c]^T is a vector of
biases, ξ is a vector of slack variables and I = {1, ..., l} is a set of indices. The L1-soft
margin is used for p = 1 and the L2-soft margin for p = 2. The letter B in BSVM stands
for the bias term added to the objective (5.9).
In the case of the L2-soft margin (p = 2), the optimization problem (5.9) can be
expressed in its dual form
A* = argmax_A Σ_{i∈I} Σ_{y∈Y\{y_i}} α_i^y
     − (1/2) Σ_{i∈I} Σ_{y∈Y\{y_i}} Σ_{j∈I} Σ_{u∈Y\{y_j}} α_i^y α_j^u h(y, u, i, j) , (5.10)

subject to

α_i^y ≥ 0 , i ∈ I , y ∈ Y \ {y_i} ,

where A = [α_1, ..., α_c] are the optimized weight vectors. The function h: Y × Y × I × I → R
is defined as

h(y, u, i, j) = (⟨x_i · x_j⟩ + 1)(δ(y_i, y_j) + δ(y, u) − δ(y_i, u) − δ(y_j, y)) + δ(y, u)δ(i, j)/(2C) . (5.11)
The criterion (5.10) corresponds to the single-class SVM criterion, which is simple to
optimize. The multi-class rule (5.3) is composed of the set of discriminant functions
f_y: X → R which are computed as

f_y(x) = Σ_{i∈I} ⟨x_i · x⟩ Σ_{u∈Y\{y_i}} α_i^u (δ(u, y_i) − δ(u, y)) + b_y , y ∈ Y , (5.12)

where the bias b_y, y ∈ Y, is given by

b_y = Σ_{i∈I} Σ_{u∈Y\{y_i}} α_i^u (δ(y, y_i) − δ(y, u)) , y ∈ Y .

The non-linear (kernel) classifier is obtained by substituting the selected kernel k: X ×
X → R for the dot products ⟨x · x'⟩ in (5.11) and (5.12). The training of the multi-class
BSVM classifier with L2-soft margin is implemented in the function bsvm2. The
optimization task (5.10) can be solved by most of the implemented optimizers. The
optimizers available in the function bsvm2 are listed below:
Mitchell-Demyanov-Malozemov (solver = ’mdm’).
Kozinec’s algorithm (solver = ’kozinec’).
Nearest Point Algorithm (solver = ’npa’).
The above mentioned algorithms are of an iterative nature and converge to the optimal
solution of the criterion (5.9). An upper bound F_UB and a lower bound F_LB on the
optimal value F(W*, b*, ξ*) of the criterion (5.9) can be computed. The bounds are
used in the algorithms to define the following two stopping conditions:

Relative tolerance: (F_UB − F_LB)/(F_LB + 1) ≤ ε_rel .

Absolute tolerance: F_UB ≤ ε_abs .
References: The multi-class BSVM formulation is described in paper [12]. The trans-
formation of the multi-class BSVM criterion to the single-class criterion (5.10) imple-
mented in the STPRtool is published in paper [9]. The Mitchell-Demyanov-Malozemov
and Nearest Point Algorithm are described in paper [14]. The Kozinec’s algorithm
based SVM solver is described in paper [10].
Example: Multi-class BSVM with L2-soft margin.
The example shows the training of the multi-class BSVM using the NPA solver. The
other solvers can be used instead by setting the variable options.solver (the other options
are 'kozinec' and 'mdm'). The training data and the decision boundary are visualized in
Figure 5.4.
data = load(’pentagon’); % load data
options.solver = ’npa’; % use NPA solver
options.ker = ’rbf’; % use RBF kernel
options.arg = 1; % kernel argument
options.C = 10; % regularization constant
model = bsvm2(data,options); % training
% visualization
figure;
ppatterns(data);
ppatterns(model.sv.X,’ko’,12);
pboundary(model);
5.6 Kernel Fisher Discriminant
The input is a set T_XY = {(x_1, y_1), ..., (x_l, y_l)} of training vectors x_i ∈ X ⊆ R^n and
corresponding values of the binary state y_i ∈ Y = {1, 2}. The Kernel Fisher Discriminant
(KFD) is the non-linear extension of the linear FLD (see Section 2.3). The input training
vectors are assumed to be mapped into a new feature space F by a mapping function
φ: X → F. The ordinary FLD is applied on the mapped training data T_ΦY =
{(φ(x_1), y_1), ..., (φ(x_l), y_l)}. The mapping is performed efficiently by means of kernel
functions k: X × X → R being the dot products of the mapped vectors, k(x, x') =
⟨φ(x) · φ(x')⟩. The aim is to find a direction ψ = Σ_{i=1}^l α_i φ(x_i) in the feature space
F given by the weights α = [α_1, ..., α_l]^T such that the criterion

F(α) = ⟨α · Mα⟩ / ⟨α · (N + µE)α⟩ = ⟨α · m⟩^2 / ⟨α · (N + µE)α⟩ (5.13)

is maximized.
Figure 5.4: Example showing the multi-class BSVM classifier with L2-soft margin.
The criterion (5.13) is the feature space counterpart of the
criterion (2.13) defined in the input space X. The vector m [l × 1] and the matrices N [l × l]
and M [l × l] are constructed as

m_y = (1/|I_y|) K 1_y , y ∈ Y , m = m_1 − m_2 ,
N = KK^T − Σ_{y∈Y} |I_y| m_y m_y^T , M = (m_1 − m_2)(m_1 − m_2)^T ,
where the vector 1_y has the entries i ∈ I_y equal to one and the remaining entries zero.
The kernel matrix K [l × l] contains the kernel evaluations K_{i,j} = k(x_i, x_j). The diagonal
matrix µE in the denominator of the criterion (5.13) serves as a regularization term. The
discriminant function f(x) of the binary classifier (5.2) is given by the found vector α
as

f(x) = ⟨ψ · φ(x)⟩ + b = ⟨α · k(x)⟩ + b , (5.14)

where k(x) = [k(x_1, x), ..., k(x_l, x)]^T is the vector of kernel evaluations. The
discriminant function (5.14) is known up to the bias b, which is determined by the linear
SVM with L1-soft margin applied to the projected data T_ZY = {(z_1, y_1), ..., (z_l, y_l)}.
The projections {z_1, ..., z_l} are computed as

z_i = ⟨α · k(x_i)⟩ .

The KFD training algorithm is implemented in the function kfd.
References: The KFD formulation is described in paper [19]. The detailed analysis
can be found in book [30].
Example: Kernel Fisher Discriminant
The binary classifier is trained by the KFD on the Riply's training set riply_trn.mat.
The found classifier is evaluated on the testing data riply_tst.mat. The training data
and the found kernel classifier are visualized in Figure 5.5. The classifier is visualized in
the same way as the SVM classifier, i.e., the isolines with f(x) = +1 and f(x) = −1 are
shown and the support vectors (the whole training set) are marked as well.
trn = load(’riply_trn’); % load training data
options.ker = ’rbf’; % use RBF kernel
options.arg = 1; % kernel argument
options.C = 10; % regularization of linear SVM
options.mu = 0.001; % regularization of KFD criterion
model = kfd(trn, options) % KFD training
% visualization
figure;
ppatterns(trn);
psvm(model);
% evaluation on testing data
tst = load(’riply_tst’);
ypred = svmclass( tst.X, model );
cerror( ypred, tst.y )
ans =
0.0960
5.7 Pre-image problem
Let T_X = {x_1, ..., x_l} be a set of vectors from the input space X ⊆ R^n. The kernel
methods work with the data non-linearly mapped φ: X → F into a new feature space
F. The mapping is performed implicitly by using kernel functions k: X × X → R
which are dot products of the input data mapped into the feature space F, such that
k(x_a, x_b) = ⟨φ(x_a) · φ(x_b)⟩. Let ψ ∈ F be given by the following kernel expansion

ψ = Σ_{i=1}^l α_i φ(x_i) , (5.15)
Figure 5.5: Example shows the binary classifier trained by the Kernel Fisher
Discriminant.
where α = [α_1, ..., α_l]^T is a weight vector of real coefficients. The pre-image problem
aims to find an input vector x ∈ X whose feature space image φ(x) ∈ F well
approximates the expansion point ψ ∈ F given by (5.15). In other words, the pre-image
problem solves the mapping from the feature space F back to the input space X.
The pre-image problem can be stated as the following optimization task

x = argmin_{x'} ∥φ(x') − ψ∥^2
  = argmin_{x'} ∥φ(x') − Σ_{i=1}^l α_i φ(x_i)∥^2
  = argmin_{x'} ( k(x', x') − 2 Σ_{i=1}^l α_i k(x', x_i) + Σ_{i=1}^l Σ_{j=1}^l α_i α_j k(x_i, x_j) ) . (5.16)
Let the Radial Basis Function (RBF) kernel k(x_a, x_b) = exp(−0.5∥x_a − x_b∥^2/σ^2) be
assumed. The pre-image problem (5.16) for the particular case of the RBF kernel leads
to the following task

x = argmax_{x'} Σ_{i=1}^l α_i exp(−0.5∥x' − x_i∥^2/σ^2) . (5.17)

The STPRtool provides three optimization algorithms to solve the RBF pre-image
problem (5.17), which are listed in Table 5.6. However, all the methods are guaranteed
to find only a local optimum of the task (5.17); a sketch of the underlying fixed-point
iteration is given after Table 5.6.
Table 5.6: Methods to solve the RBF pre-image problem implemented in the STPRtool.
rbfpreimg An iterative fixed point algorithm proposed in [28].
rbfpreimg2 A standard gradient optimization method using the
Matlab optimization toolbox for 1D search along the
gradient.
rbfpreimg3 A method exploiting the correspondence between dis-
tances in the input space and the feature space pro-
posed in [16].
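The fixed-point algorithm behind rbfpreimg can be sketched as follows (the actual implementation may differ in the initialization and the stopping rule):

X = randn(2,50); alpha = ones(50,1)/50; sigma = 1;   % kernel expansion (5.15)
x = X(:,1);                                          % initial guess
for t = 1:100
  d = sum((X - repmat(x,1,size(X,2))).^2, 1)';       % squared distances [l x 1]
  w = alpha .* exp(-0.5*d/sigma^2);                  % kernel-weighted coefficients
  x = X*w / sum(w);                                  % fixed-point update of the pre-image
end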
References: The pre-image problem and the implemented algorithms for its solution
are described in [28, 16].
Example: Pre-image problem
The example shows how to visualize the pre-image of the data mean in the feature
space induced by the RBF kernel. The result is visualized in Figure 5.6.
% load input data
data = load(’riply_trn’);
% define the mean in the feature space
num_data = size(data.X,2);
expansion.Alpha = ones(num_data,1)/num_data;
expansion.sv.X = data.X;
expansion.options.ker = ’rbf’;
expansion.options.arg = 1;
% compute pre-image problem
x = rbfpreimg(expansion);
% visualization
figure;
ppatterns(data.X);
ppatterns(x,’or’,13);
Figure 5.6: Example shows the training data and the pre-image of their mean in the
feature space induced by the RBF kernel.
5.8 Reduced set method
The reduced set method is useful for simplifying the kernel classifier trained by the SVM
or other kernel machines. Let the function f: X ⊆ R^n → R be given as

f(x) = Σ_{i=1}^l α_i k(x_i, x) + b = ⟨ψ · φ(x)⟩ + b , (5.18)

where T_X = {x_1, ..., x_l} are vectors from R^n, k: X × X → R is a kernel function,
α = [α_1, ..., α_l]^T is a weight vector of real coefficients and b ∈ R is a scalar. Let
φ: X → F be the mapping from the input space X to the feature space F associated
with the kernel function, such that k(x_a, x_b) = ⟨φ(x_a) · φ(x_b)⟩. The function f(x) can be
expressed in the feature space F as the linear function f(x) = ⟨ψ · φ(x)⟩ + b. The
reduced set method aims to find a new function

f̃(x) = Σ_{i=1}^m β_i k(s_i, x) + b = ⟨ψ̃ · φ(x)⟩ + b , ψ̃ = Σ_{i=1}^m β_i φ(s_i) , (5.19)

such that the expansion (5.19) is shorter, i.e., m < l, and well approximates the original
one (5.18). The weight vector β = [β_1, ..., β_m]^T and the vectors T_S = {s_1, ..., s_m},
s_i ∈ R^n, determine the reduced kernel expansion (5.19).
The problem of finding the reduced kernel expansion (5.19) can be stated as the
optimization task

(β, T_S) = argmin_{β', T'_S} ∥ψ̃ − ψ∥^2 = argmin_{β', T'_S} ∥ Σ_{i=1}^m β'_i φ(s'_i) − Σ_{i=1}^l α_i φ(x_i) ∥^2 . (5.20)

Let the Radial Basis Function (RBF) kernel k(x_a, x_b) = exp(−0.5∥x_a − x_b∥^2/σ^2) be
assumed. In this case, the optimization task (5.20) can be solved by an iterative greedy
algorithm. The algorithm converts the task (5.20) into a series of m pre-image tasks (see
Section 5.7). The reduced set method for the RBF kernel is implemented in the function
rsrbf.
References: The implemented reduced set method is described in [28].
Example: Reduced set method for SVM classifier
The example shows how to use the reduced set method to simplify the SVM classifier.
The SVM classifier is trained from the Riply's training set riply_trn.mat. The
trained kernel expansion (discriminant function) is composed of 94 support vectors.
A reduced expansion with 10 support vectors is found. The decision boundaries of the
original and the reduced SVM are visualized in Figure 5.7. The original and the reduced
SVM are evaluated on the testing data riply_tst.mat.
% load training data
trn = load(’riply_trn’);
% train SVM classifier
model = smo(trn,struct(’ker’,’rbf’,’arg’,1,’C’,10));
% compute the reduced rule with 10 support vectors
red_model = rsrbf(model,struct(’nsv’,10));
% visualize decision boundaries
figure; ppatterns(trn);
h1 = pboundary(model,struct(’line_style’,’r’));
h2 = pboundary(red_model,struct(’line_style’,’b’));
legend([h1(1) h2(1)],’Original SVM’,’Reduced SVM’);
% evaluate SVM classifiers
tst = load(’riply_tst’);
ypred = svmclass(tst.X,model);
err = cerror(ypred,tst.y);
red_ypred = svmclass(tst.X,red_model);
red_err = cerror(red_ypred,tst.y);
fprintf([’Original SVM: nsv = %d, err = %.4f\n’ ...
’Reduced SVM: nsv = %d, err = %.4f\n’], ...
model.nsv, err, red_model.nsv, red_err);
Original SVM: nsv = 94, err = 0.0960
Reduced SVM: nsv = 10, err = 0.0960
Figure 5.7: Example shows the decision boundaries of the SVM classifier trained by
SMO and the approximated SVM classifier found by the reduced set method. The
original decision rule involves 94 support vectors while the reduced one only 10 support
vectors.
Chapter 6
Miscellaneous
6.1 Data sets
The STPRtool provides several datasets and functions to generate synthetic data. The
datasets stored in Matlab data files are listed in Table 6.1. The functions which
generate synthetic data and the scripts for converting external datasets to the Matlab format
are given in Table 6.2.
Table 6.1: Datasets stored in /data/ directory of the STPRtool.
/riply trn.mat Training part of the Riply’s data set [24].
/riply tst.mat Testing part of the Riply’s data set [24].
/iris.mat Well known Fisher’s data set.
/anderson task/*.mat Input data for the demo on the Generalized Ander-
son’s task demo anderson.
/binary separable/*.mat Input data for the demo on training linear classifiers
from finite vector sets demo linclass.
/gmm samles/*.mat Input data for the demo on the Expectation-
Maximization algorithm for the Gaussian mixture
model demo emgmm.
/multi separable/*.mat Linearly separable multi-class data in 2D.
/ocr data/*.mat Examples of hand-written numerals. A description is
given in Section 7.1.
/svm samples/*.mat Input data for the demo on the Support Vector Ma-
chines demo svm.
Table 6.2: Data generation.
createdata GUI which allows creating interactively labeled vector
sets and labeled sets of Gaussian distributions. It
operates in 2-dimensional space only.
genlsdata Generates linearly separable binary data with pre-
scribed margin.
gmmsamp Generates data from the Gaussian mixture model.
gsamp Generates data from the multivariate Gaussian distri-
bution.
gencircledata Generates data on circle corrupted by the Gaussian
noise.
usps2mat Converts U.S. Postal Service (USPS) database of
hand-written numerals to the Matlab file.
6.2 Visualization
The STPRtool provides tools for the visualization of common models and data. The list
of functions for visualization is given in Table 6.3. Each function contains an example
of its usage.
Table 6.3: Functions for visualization.
pandr Visualizes solution of the Generalized Anderson’s
task.
pboundary Plots decision boundary of given classifier in 2D.
pgauss Visualizes set of bivariate Gaussians.
pgmm Visualizes the bivariate Gaussian mixture model.
pkernelproj Plots isolines of kernel projection.
plane3 Plots plane in 3D.
pline Plots line in 2D.
ppatterns Visualizes patterns as points in the feature space. It
works for 1D, 2D and 3D.
psvm Plots decision boundary of binary SVM classifier in
2D.
showim Displays images entered as column vectors.
6.3 Bayesian classifier
The object under study is assumed to be described by a vector of observations x ∈ X
and a hidden state y ∈ Y. The x and y are realizations of random variables with the joint
probability distribution P_XY(x, y). A decision rule q: X → D takes a decision d ∈ D
based on the observation x ∈ X. Let W: D × Y → R be a loss function which penalizes
the decision q(x) ∈ D when the true hidden state is y ∈ Y. Let X ⊆ R^n and let the sets
Y and D be finite. The Bayesian risk R(q) is the expectation of the value of the loss
function W when the decision rule q is applied, i.e.,

R(q) = ∫_X Σ_{y∈Y} P_XY(x, y) W(q(x), y) dx . (6.1)

The optimal rule q* which minimizes the Bayesian risk (6.1) is referred to as the
Bayesian rule

q*(x) = argmin_{d∈D} Σ_{y∈Y} P_XY(x, y) W(d, y) , ∀x ∈ X .
The STPRtool implements the Bayesian rule for two particular cases:
Minimization of misclassification: The set of decisions D coincides with the set of
hidden states Y = {1, ..., c}. The 0/1-loss function

W_{0/1}(q(x), y) = 0 for q(x) = y ,
                   1 for q(x) ≠ y ,  (6.2)

is used. The Bayesian risk (6.1) with the 0/1-loss function corresponds to the
expectation of misclassification. The rule q: X → Y which minimizes the expectation
of misclassification is defined as

q(x) = argmax_{y∈Y} P_{Y|X}(y|x) = argmax_{y∈Y} P_{X|Y}(x|y) P_Y(y) . (6.3)
Classification with reject-option: The set of decisions D is assumed to be D =
Y ∪ {dont know}. The loss function is defined as

W_ε(q(x), y) = 0 for q(x) = y ,
               1 for q(x) ≠ y ,
               ε for q(x) = dont know ,  (6.4)

where ε is the penalty for the decision dont know. The rule q: X → D which
minimizes the Bayesian risk with the loss function (6.4) is defined as

q(x) = argmax_{y∈Y} P_{X|Y}(x|y) P_Y(y)  if 1 − max_{y∈Y} P_{Y|X}(y|x) < ε ,
       dont know                         if 1 − max_{y∈Y} P_{Y|X}(y|x) ≥ ε .  (6.5)
To apply the optimal classification rules one has to know the class-conditional
distributions P_{X|Y} and the a priori distribution P_Y (or their estimates). The data-type
used by the STPRtool is described in Table 6.4. The Bayesian classifiers (6.3) and (6.5)
are implemented in the function bayescls; a minimal sketch of the decision rules is given
below.
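A minimal sketch of evaluating the rules (6.3) and (6.5) from given class posteriors for a single observation (plain Matlab, not the bayescls code):

post = [0.55 0.45];                % P_{Y|X}(y|x) for y = 1,2
eps_rej = 0.1;                     % penalty for the dont know decision
[pmax, ybayes] = max(post);        % rule (6.3): minimum misclassification
if 1 - pmax < eps_rej
  decision = ybayes;               % rule (6.5): classify
else
  decision = 0;                    % rule (6.5): dont know (coded here as 0)
end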
Table 6.4: Data-type used to describe the Bayesian classifier.
Bayesian classifier (structure array):
.Pclass [cell 1 × c] Class-conditional distributions P_{X|Y}. The distribution
P_{X|Y}(x|y) is given by Pclass{y}, which uses the data-type
described in Table 4.2.
.Prior [1 × c] Discrete distribution P_Y(y), y ∈ Y.
.fun = 'bayescls' Identifies the function associated with this data type.
References: The Bayesian decision making with applications to PR is treated in
detail in the book [26].
Example: Bayesian classifier
The example shows how to design the plug-in Bayesian classifier for the Riply's
training set riply_trn.mat. The class-conditional distributions P_{X|Y} are modeled by
Gaussian mixture models (GMM) with two components. The GMMs are estimated
by the EM algorithm emgmm. The a priori probabilities P_Y(y), y ∈ {1, 2}, are estimated
by the relative occurrences (this corresponds to the ML estimate). The designed Bayesian
classifier is evaluated on the testing data riply_tst.mat.
% load input training data
trn = load(’riply_trn’);
inx1 = find(trn.y==1);
inx2 = find(trn.y==2);
% Estimation of class-conditional distributions by EM
bayes_model.Pclass{1} = emgmm(trn.X(:,inx1),struct(’ncomp’,2));
bayes_model.Pclass{2} = emgmm(trn.X(:,inx2),struct(’ncomp’,2));
% Estimation of priors
n1 = length(inx1); n2 = length(inx2);
bayes_model.Prior = [n1 n2]/(n1+n2);
% Evaluation on testing data
tst = load(’riply_tst’);
ypred = bayescls(tst.X,bayes_model);
cerror(ypred,tst.y)
ans =
0.0900
The following example shows how to visualize the decision boundary of the found
Bayesian classifier minimizing the misclassification (solid black line). The Bayesian
classifier splits the feature space into two regions corresponding to the first class and
the second class. The decision boundary of the reject-option rule (dashed line) is
displayed for comparison. The reject-option rule splits the feature space into three
regions corresponding to the first class, the second class and the region in between
corresponding to the dont know decision. The visualization is given in Figure 6.1.
% Visualization
figure; hold on; ppatterns(trn);
bayes_model.fun = 'bayescls';
pboundary(bayes_model);
% Penalization for the don't know decision
reject_model = bayes_model;
reject_model.eps = 0.1;
% Visualization of the reject-option rule
pboundary(reject_model,struct('line_style','k--'));
6.4 Quadratic classifier
The quadratic classification rule q: X ⊆ R^n → Y = {1, 2, . . . , c} is composed of a set
of discriminant functions

    f_y(x) = ⟨x·A_y x⟩ + ⟨b_y·x⟩ + c_y ,   ∀y ∈ Y ,

which are quadratic with respect to the input vector x ∈ R^n. The quadratic discriminant
function f_y is determined by a matrix A_y [n × n], a vector b_y [n × 1] and a scalar
c_y [1 × 1]. The input vector x ∈ R^n is assigned to the class y ∈ Y whose corresponding
discriminant function f_y attains the maximal value

    y = argmax_{y′∈Y} f_{y′}(x) = argmax_{y′∈Y} ( ⟨x·A_{y′} x⟩ + ⟨b_{y′}·x⟩ + c_{y′} ) .        (6.6)
Figure 6.1: Figure shows the decision boundary of the Bayesian classifier (solid line)
and the decision boundary of the reject-option rule with ε = 0.1 (dashed line). The
class-conditional probabilities are modeled by the Gaussian mixture models with two
components estimated by the EM algorithm.

In the particular binary case Y = {1, 2}, the quadratic classifier is represented by
a single discriminant function

    f(x) = ⟨x·Ax⟩ + ⟨b·x⟩ + c ,

given by a matrix A [n × n], a vector b [n × 1] and a scalar c [1 × 1]. The input vector
x ∈ R^n is assigned to class y ∈ {1, 2} as follows

    q(x) = 1  if f(x) = ⟨x·Ax⟩ + ⟨b·x⟩ + c ≥ 0 ,
           2  if f(x) = ⟨x·Ax⟩ + ⟨b·x⟩ + c < 0 .        (6.7)
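As an illustration of the rule (6.7), the discriminant function can be evaluated directly from the fields listed in Table 6.5; the values below are arbitrary toy numbers, not a trained classifier.

% Toy evaluation of the binary quadratic rule (6.7); the fields A, b, c
% follow Table 6.5 but the values are arbitrary, not a trained model.
x = [0.3; -0.1];               % input vector x
model.A = [1 0.2; 0.2 -0.5];   % matrix A [n x n]
model.b = [0.5; -1];           % vector b [n x 1]
model.c = 0.1;                 % scalar c [1 x 1]
f = x'*model.A*x + model.b'*x + model.c;
if f >= 0, y = 1; else y = 2; end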
The STPRtool uses a specific structure array to describe the binary (see Table 6.5)
and the multi-class (see Table 6.6) quadratic classifier. The quadratic classifier itself
is implemented in the function quadclass.
Table 6.5: Data-type used to describe the binary quadratic classifier.
Binary quadratic classifier (structure array):
.A [n × n]             Parameter matrix A.
.b [n × 1]             Parameter vector b.
.c [1 × 1]             Parameter scalar c.
.fun = 'quadclass'     Identifies function associated with this data type.
Example: Quadratic classifier
The example shows how to train the quadratic classifier using the Fisher Linear Discriminant
and the quadratic data mapping.
Table 6.6: Data-type used to describe the multi-class quadratic classifier.
Multi-class quadratic classifier (structure array):
.A [n × n × c]         Parameter matrices A_y, y ∈ Y.
.b [n × c]             Parameter vectors b_y, y ∈ Y.
.c [1 × c]             Parameter scalars c_y, y ∈ Y.
.fun = 'quadclass'     Identifies function associated with this data type.
The classifier is trained from the Riply's training set riply trn.mat. The found
quadratic classifier is evaluated on the testing data riply tst.mat. The boundary of
the quadratic classifier is visualized (see Figure 6.2).
trn = load('riply_trn');      % load input data
quad_data = qmap(trn);        % quadratic mapping
lin_model = fld(quad_data);   % train FLD
% make quadratic classifier
quad_model = lin2quad(lin_model);
% visualization
figure;
ppatterns(trn); pboundary(quad_model);
% evaluation
tst = load('riply_tst');
ypred = quadclass(tst.X,quad_model);
cerror(ypred,tst.y)
ans =
0.0940
6.5 K-nearest neighbors classifier
Let T_XY = {(x_1, y_1), . . . , (x_l, y_l)} be a set of prototype vectors x_i ∈ X ⊆ R^n and
corresponding hidden states y_i ∈ Y = {1, . . . , c}. Let R_n(x) = {x′: ‖x − x′‖ ≤ r} be
a ball centered in the vector x in which lie K prototype vectors x_i, i ∈ {1, . . . , l}, i.e.,
|{x_i : x_i ∈ R_n(x)}| = K. The K-nearest neighbor (KNN) classification rule q: X → Y
is defined as

    q(x) = argmax_{y∈Y} v(x, y) ,        (6.8)
where v(x, y) is the number of prototype vectors x_i with hidden state y_i = y which lie in
the ball x_i ∈ R_n(x). The classification rule (6.8) is computationally demanding if the
set of prototype vectors is large. The data-type used by the STPRtool to represent the
K-nearest neighbor classifier is described in Table 6.7.

Figure 6.2: Figure shows the decision boundary of the quadratic classifier.
Table 6.7: Data-type used to describe the K-nearest neighbor classifier.
K-NN classifier (structure array):
.X [n × l]             Prototype vectors {x_1, . . . , x_l}.
.y [1 × l]             Labels {y_1, . . . , y_l}.
.fun = 'knnclass'      Identifies function associated with this data type.
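The rule (6.8) can also be spelled out by brute force. The following sketch is only an illustration, not the toolbox function knnclass; its inputs follow the fields of Table 6.7.

% Brute-force sketch of the K-NN rule (6.8) for a single query vector x;
% X [n x l] and y [1 x l] follow Table 6.7, K is the number of neighbors.
function ypred = knn_sketch(x, X, y, K)
  d = sum((X - repmat(x, 1, size(X,2))).^2, 1);  % squared distances to prototypes
  [sorted, idx] = sort(d);                       % order prototypes by distance
  votes = y(idx(1:K));                           % hidden states of the K nearest
  cnt = zeros(1, max(y));
  for i = 1:K, cnt(votes(i)) = cnt(votes(i)) + 1; end
  [maxcnt, ypred] = max(cnt);                    % most frequent hidden state
end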
References: The K-nearest neighbors classifier is described in most books on PR, for
instance, in book [6].
Example: K-nearest neighbor classifier
The example shows how to create the (K = 8)-NN classification rule from the
Riply’s training data riply trn.mat. The classifier is evaluated on the testing data
riply tst.mat. The decision boundary is visualized in Figure 6.3.
% load training data and setup 8-NN rule
trn = load('riply_trn');
model = knnrule(trn,8);
% visualize decision boundary and training data
figure; ppatterns(trn); pboundary(model);
% evaluate classifier
tst = load('riply_tst');
ypred = knnclass(tst.X,model);
cerror(ypred,tst.y)
ans =
0.1160
Figure 6.3: Figure shows the decision boundary of the (K = 8)-nearest neighbor
classifier.
Chapter 7
Examples of applications
We intend to demonstrate how the methods implemented in the STPRtool can be used to
solve more complex applications. The selected applications are an optical character
recognition (OCR) system based on the multi-class SVM (Section 7.1) and an image
denoising system using the kernel PCA (Section 7.2).
7.1 Optical Character Recognition system
The use of the STPRtool is demonstrated on a simple Optical Character Recognition
(OCR) system for handwritten numerals from 0 to 9. The STPRtool provides a GUI
which allows a standard computer mouse to be used as an input device for drawing
images of numerals. The GUI is intended just for demonstration purposes; a scanner or
another input device would be used in practice instead. The multi-class SVM is used as
the base classifier of the numerals. A numeral to be classified is represented as a vector
x containing the pixel values taken from the numeral image. The training stage of the
OCR system involves (i) collecting training examples of numerals and (ii) training
the multi-class SVM. To see the trained OCR system run the demo script demo ocr.
The GUI for drawing numerals is implemented in the function mpaper. The function
mpaper creates a form composed of 50 cells into which the numerals can be drawn (see
Figure 7.1a). The drawn numerals are normalized to the size 16 × 16 pixels (a different
size can be specified). The function mpaper is designed to invoke a specified function
whenever the middle mouse button is pressed. In particular, the functions save chars
and ocr fun are called from mpaper to process the drawn numerals. The function
save chars saves the drawn numerals in the stage of collecting training examples and the
function ocr fun is used to recognize the numerals when the trained OCR is applied.
The set of training examples has to be collected before the OCR is trained. The
function collect chars invokes the form for drawing numerals and allows the user to save
the numerals to a specified file. The STPRtool contains examples of numerals collected
from 6 different persons. Each person drew 50 examples of each of the numerals from
'0' to '9'. The numerals are stored in the directory /data/ocr numerals/. The file
name format name xx.mat (the character can be omitted from the name) is used.
The string name is the name of the person who drew the particular numeral and xx stands
for the label assigned to the numeral. The labels from y = 1 to y = 9 correspond to the
numerals from '1' to '9' and the label y = 10 is used for the numeral '0'. For example,
to collect examples of the numeral '1' from the person honza cech
collect_chars('honza_cech1')
was called. Figure 7.1a shows the form filled with the examples of the numeral '1'.
Figure 7.1b shows the numerals after normalization, which are saved to the specified
file honza cech.mat. The images of the normalized numerals are stored as vectors
in a matrix X of size [256 × 50]. The vector dimension n = 256 corresponds to
the 16 × 16 pixels of each numeral image and l = 50 is the maximal number of numerals
drawn into a single form. If the file name format name xx.mat is met then the function
mergesets(Directory,DataFile) can be used to merge all the files into a single one.
The resulting file contains a matrix X of all examples and a vector y of labels
assigned to the examples according to the number xx. The string Directory specifies the
directory where the files name xx.mat are stored and the string DataFile is the file name
to which the resulting training set is stored.
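For instance, all collected files could be merged by a call like the following; the directory and the output file name are only illustrative.

% Hypothetical usage: merge all name_xx.mat files found in the numerals
% directory into a single training set file (both paths are illustrative).
mergesets('/data/ocr_numerals/', 'ocr_trainset.mat');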
Having created the labeled training set T_XY = {(x_1, y_1), . . . , (x_l, y_l)}, the multi-class
SVM classifier can be trained. The methods to train the multi-class SVM are described
in Chapter 5. In particular, the One-Against-One (OAO) decomposition was used as
it requires the shortest training time. However, if the speed of classification is the major
objective, the multi-class BSVM formulation is often a better option as it produces a
smaller number of support vectors. The SVM has two free parameters which have to be
tuned: the regularization constant C and the kernel function k(x_a, x_b) with its argument.
The RBF kernel k(x_a, x_b) = exp(−0.5 ‖x_a − x_b‖²/σ²), determined by the parameter
σ, was used in the experiment. Therefore, the free SVM parameters are C and σ. A
common method to tune the free parameters is to use cross-validation to select the
best parameters from a pre-selected set Θ = {(C_1, σ_1), . . . , (C_k, σ_k)}. The STPRtool
contains a function evalsvm which evaluates the set of SVM parameters Θ based on the
cross-validation estimate of the classification error. The function evalsvm returns the
SVM model whose parameters (C*, σ*) turned out to have the lowest cross-validation
error over the given set Θ. The parameter tuning is implemented in the script tune ocr.
The parameters C = ∞ (the data are separable) and σ = 5 were found to be the best.
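The tuning implemented in tune ocr is essentially a grid search over Θ. The following generic sketch is not the code of evalsvm; it assumes hypothetical helpers split_data (splits the merged training set into training and validation folds), train_fun (e.g. a wrapper around the OAO SVM training) and val_error (e.g. based on cerror).

% Generic sketch of tuning (C, sigma) by 5-fold cross-validation; split_data,
% train_fun and val_error are hypothetical helpers, data is the training set.
Theta = {struct('C',10,'sigma',1), struct('C',Inf,'sigma',5)};  % candidate pairs
best_err = inf;
for i = 1:length(Theta)
  err = 0;
  for fold = 1:5
    [trn_fold, val_fold] = split_data(data, fold, 5);  % training/validation split
    model = train_fun(trn_fold, Theta{i});             % e.g. OAO SVM with (C, sigma)
    err = err + val_error(model, val_fold)/5;          % averaged validation error
  end
  if err < best_err, best_err = err; best_theta = Theta{i}; end
end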
After the parameters (C*, σ*) are tuned, the SVM classifier is trained using all the
training data available. The training procedure for the OCR classifier is implemented in
the function train ocr. The directory /demo/ocr/models/ contains the SVM model
trained by the OAO decomposition strategy (function oaosvm) and the model trained
based on the multi-class BSVM formulation (function bsvm2).
The particular model used by the OCR is specified in the function ocr fun which
implicitly uses the file ocrmodel.mat stored in the directory /demo/ocr/. To run the
resulting OCR type demo ocr or call
mpaper(struct('fun','ocr_fun'))
If one intends to modify the classifier or the way the classification results are
displayed, then modify the function ocr fun. A snapshot of the running OCR is shown in
Figure 7.2.
7.2 Image denoising
The aim is to train a model of a given class of images in order to restore images
corrupted by noise. The U.S. Postal Service (USPS) database of images of hand-
written numerals [17] was used as the class of images to model. The additive Gaussian
noise was assumed as the source of image degradation. The kernel PCA was used to
model the image class. This method was proposed in [20] and further developed in [15].
The STPRtool contains a script which converts the USPS database to a Matlab
data file. It is implemented in the function usps2mat. The corruption of the USPS
images by the Gaussian noise is implemented in the script make noisy data. The script
make noisy data produces the Matlab file usps noisy.mat whose contents are summarized
in Table 7.1.
The aim is to train a model of images of all numerals at once. Thus the unlabeled
training data set T_X = {x_1, . . . , x_l} is used. The model should describe the non-linear
subspace X_img ⊆ R^n where the images live. It is assumed that the non-linear subspace
X_img forms a linear subspace F_img of a new feature space F. A link between the
input space X and the feature space F is given by a mapping φ: X → F which is
implicitly determined by a selected kernel function k: X × X → R. The kernel function
corresponds to the dot product in the feature space, i.e., k(x_a, x_b) = ⟨φ(x_a)·φ(x_b)⟩.
The kernel PCA can be used to find the basis vectors of the subspace F_img from the
input training data T_X. Having the kernel PCA model of the subspace F_img, it can be
used to map the images x ∈ X into the subspace X_img. A noisy image x̂ is assumed to
lie outside the subspace X_img. The denoising procedure is based on mapping the noisy
image x̂ into the subspace X_img. The idea of using the kernel PCA for image denoising
is demonstrated in Figure 7.3. The example which produced the figure is implemented
in demo kpcadenois.
Table 7.1: The content of the file usps noisy.mat produced by the script make noisy data.
Structure array trn containing the training data:
gnd X [256 × 7291]   Training images of numerals represented as column vectors. The input are 7291 gray-scale images of size 16 × 16.
X [256 × 7291]       Training images corrupted by the Gaussian noise with signal-to-noise ratio SNR = 1 (it can be changed).
y [1 × 7291]         Labels y ∈ Y = {1, . . . , 10} of the training images.
Structure array tst containing the testing data:
gnd X [256 × 2007]   Testing images of numerals represented as column vectors. The input are 2007 gray-scale images of size 16 × 16.
X [256 × 2007]       Testing images corrupted by the Gaussian noise with signal-to-noise ratio SNR = 1 (it can be changed).
y [1 × 2007]         Labels y ∈ Y = {1, . . . , 10} of the testing images.

The subspace F_img is modeled by the kernel PCA trained from the set T_X mapped
into the feature space F by the function φ: X → F. Let T_Φ = {φ(x_1), . . . , φ(x_l)} denote
the feature space representation of the training images T_X. The kernel PCA computes
the lower dimensional representation T_Z = {z_1, . . . , z_l}, z_i ∈ R^m, of the mapped images
T_Φ to minimize the mean square reconstruction error

    ε_KMS = Σ_{i=1}^{l} ‖φ(x_i) − φ̃(x_i)‖² ,
where

    φ̃(x) = Σ_{i=1}^{l} β_i φ(x_i) ,   β = A(z − b) ,   z = A^T k(x) + b .        (7.1)

The vector k(x) = [k(x_1, x), . . . , k(x_l, x)]^T contains the kernel functions evaluated at the
training data T_X. The matrix A [l × m] and the vector b [m × 1] are trained by the
kernel PCA algorithm. Given a projection φ̃(x) onto the kernel PCA subspace, an
image x′ ∈ R^n of the input space can be recovered by solving the pre-image problem

    x′ = argmin_x ‖φ(x) − φ̃(x)‖² .        (7.2)

The Radial Basis Function (RBF) kernel k(x_a, x_b) = exp(−0.5 ‖x_a − x_b‖²/σ²) was
used here as the pre-image problem (7.2) can be well solved for this case. The whole
procedure of reconstructing the corrupted image x̂ ∈ R^n involves (i) using (7.1) to
compute z which determines φ̃(x) and (ii) solving the pre-image problem (7.2) which
yields the resulting x′. This procedure is implemented in the function kpcarec.
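The projection step of (7.1) can be written out directly. The sketch below assumes that the matrix A, the vector b, the training images Xtrn [n × l] and the kernel width sigma come from an already trained kernel PCA model; it only illustrates the formula, not the function kpcarec.

% Illustration of (7.1): A [l x m], b [m x 1], Xtrn [n x l] and sigma are
% assumed to be given by a trained kernel PCA model; x is the noisy image.
l = size(Xtrn, 2);
k_x = zeros(l, 1);
for i = 1:l                                % kernel vector k(x)
  d = Xtrn(:,i) - x;
  k_x(i) = exp(-0.5*(d'*d)/sigma^2);
end
z    = A'*k_x + b;                         % lower dimensional representation z
beta = A*(z - b);                          % coefficients beta of the projection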
In contrast to the linear PCA, the kernel PCA makes it possible to model the non-linearity
which is very likely present in the case of images. However, the non-linearity, hidden in
the parameter σ of the used RBF kernel, has to be tuned properly to prevent over-fitting.
Another parameter is the dimension m of the kernel PCA subspace
to be trained. The best parameters (σ*, m*) were selected out of a prescribed set
Θ = {(σ_1, m_1), . . . , (σ_k, m_k)} such that the cross-validation estimate of the mean square
error

    ε_MS(σ, m) = ∫_X P_X(x) ‖x − x′‖² dx ,

was minimal. Here x stands for the ground truth image and x′ denotes the image
restored from the noisy one x̂ using (7.1) and (7.2). The images are assumed to be
generated from the distribution P_X(x). Figure 7.4 shows the idea of tuning the parameters
of the kernel PCA. The linear PCA was used to solve the same task for comparison.
The same idea was used to tune the output dimension of the linear PCA. Having the
parameters tuned, the resulting kernel PCA and linear PCA are trained on the whole
training set. The whole training procedure of the kernel PCA is implemented in the
script train kpca denois. The training for the linear PCA is implemented in the script
train lin denois. The greedy kernel PCA Algorithm 3.4 is used for training the
model. It produces a sparse model and can deal with large data sets. The results of
tuning the parameters are displayed in Figure 7.5. The figure was produced by the script
show denois tuning.
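In practice the integral defining ε_MS is replaced by an empirical average over a validation set. A minimal sketch, assuming the ground truth images gnd_X and the restored images rec_X are stored as columns of two equally sized matrices:

% Empirical estimate of the mean square error between ground truth images
% gnd_X [n x l] and restored images rec_X [n x l] (one image per column).
mse = mean(sum((gnd_X - rec_X).^2, 1));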
The function kpcarec is used to denoise images using the kernel PCA model. In
the case of the linear PCA, the function pcarec was used. An example of denoising the
USPS testing data is implemented in the script show denoising. Figure 7.6 shows
examples of the ground truth images, the noisy images, and the images restored by the
linear PCA and the kernel PCA.
Figure 7.1: Figure (a) shows hand-written training examples of the numeral '1' and Figure
(b) shows the corresponding normalized images of size 16 × 16 pixels.
Figure 7.2: A snapshot of the running OCR system. Figure (a) shows hand-written
numerals and Figure (b) shows the recognized numerals.
Figure 7.3: Demonstration of the idea of image denoising by the kernel PCA.
Figure 7.4: Schema of tuning the parameters of the kernel PCA model used for image
restoration.
Figure 7.5: Figure (a) shows the plot of ε_MS with respect to the output dimension of the
linear PCA. Figure (b) shows the plot of ε_MS with respect to the output dimension and
the kernel argument of the kernel PCA.
Figure 7.6: The results of denoising the handwritten images of the USPS database. The
rows show, from top to bottom: the ground truth, the numerals with added Gaussian
noise, the linear PCA reconstruction and the kernel PCA reconstruction.
Chapter 8
Conclusions and future plans
The STPRtool as well as its documentation is being continuously updated. Therefore we
are grateful for any comments, suggestions or other feedback which would help in the
development of the STPRtool.
There are many relevant methods and proposals which have not been implemented
yet. We welcome anyone who would like to contribute to the STPRtool in any way. Please
send us an email with your potential contribution.
The future plans involve extending the toolbox with the following methods:
• Feature selection methods.
• AdaBoost and related learning methods.
• Efficient optimizers for large-scale Support Vector Machines learning problems.
• Model selection methods for SVM.
• Probabilistic Principal Component Analysis (PPCA) and mixture of PPCA esti-
mated by the Expectation-Maximization (EM) algorithm.
• Discrete probability models estimated by the EM algorithm.
Bibliography
[1] T.W. Anderson and R.R. Bahadur. Classification into two multivariate normal
distributions with different covariance matrices. Annals of Mathematical Statistics,
33:420–431, June 1962.
[2] G. Baudat and F. Anouar. Generalized discriminant analysis using a kernel ap-
proach. Neural Computation, 12(10):2385–2404, 2000.
[3] C.M. Bishop. Neural Networks for Pattern Recognition. Clarendon Press, Oxford,
Great Britain, 3rd edition, 1997.
[4] N. Cristianini and J. Shawe-Taylor. Support Vector Machines. Cambridge Uni-
versity Press, 2000.
[5] A.P. Dempster, N.M. Laird, and D.B. Rubin. Maximum likelihood from incom-
plete data via the EM Algorithm. Journal of the Royal Statistical Society, 39:185–
197, 1977.
[6] R.O. Duda, P.E. Hart, and D.G. Stork. Pattern Classification. John Wiley &
Sons, 2nd. edition, 2001.
[7] R.P.W. Duin. Prtools: A matlab toolbox for pattern recognition, 2000.
[8] V. Franc. Programové nástroje pro rozpoznávání (Pattern Recognition Programming
Tools, in Czech). Master's thesis, České vysoké učení technické, Fakulta
elektrotechnická, Katedra kybernetiky, February 2000.
[9] V. Franc and V. Hlaváč. Multi-class support vector machine. In R. Kasturi,
D. Laurendeau, and C. Suen, editors, 16th International Conference on Pattern
Recognition, volume 2, pages 236–239. IEEE Computer Society, 2002.
[10] V. Franc and V. Hlaváč. An iterative algorithm learning the maximal margin
classifier. Pattern Recognition, 36(9):1985–1996, 2003.
[11] V. Franc and V. Hlaváč. Greedy algorithm for a training set reduction in the
kernel methods. In N. Petkov and M.A. Westenberg, editors, Computer Analysis
of Images and Patterns, pages 426–433, Berlin, Germany, 2003. Springer.
[12] C.W. Hsu and C.J. Lin. A comparison of methods for multiclass support vector
machines. IEEE Transactions on Neural Networks, 13(2), March 2002.
[13] I.T. Jolliffe. Principal Component Analysis. Springer-Verlag, New York, 1986.
[14] S.S. Keerthi, S.K. Shevade, C. Bhattacharya, and K.R.K. Murthy. A fast itera-
tive nearest point algorithm for support vector machine classifier design. IEEE
Transactions on Neural Networks, 11(1):124–136, January 2000.
[15] K.I. Kim, M.O. Franz, and B. Schölkopf. Kernel Hebbian algorithm for single-frame
super-resolution. In A. Leonardis and H. Bischof, editors, Statistical Learning in
Computer Vision, ECCV Workshop. Springer, May 2004.
[16] J.T. Kwok and I.W. Tsang. The pre-image problem in kernel methods. In Pro-
ceedings of the Twentieth International Conference on Machine Learning (ICML-
2003), pages 408–415, Washington, D.C., USA, August 2003.
[17] Y. LeCun, B. Boser, J.S. Denker, D. Henderson, R.E. Howard, W. Hubbard, and
L.J Jackel. Backpropagation applied to handwritten zip code recognition. Neural
Computation, 1:541–551, 1989.
[18] G. McLachlan and T. Krishnan. The EM Algorithm and Extensions. John Wiley
& Sons, New York, 1997.
[19] S. Mika, G. Rätsch, J. Weston, B. Schölkopf, and K. Müller. Fisher discriminant
analysis with kernels. In Y.H. Hu, J. Larsen, E. Wilson, and S. Douglas, editors,
Neural Networks for Signal Processing, pages 41–48. IEEE, 1999.
[20] S. Mika, B. Schölkopf, A. Smola, K.R. Müller, M. Scholz, and G. Rätsch. Kernel
PCA and de-noising in feature spaces. In M.S. Kearns, S.A. Solla, and D.A. Cohn,
editors, Advances in Neural Information Processing Systems 11, pages 536–542,
Cambridge, MA, 1999. MIT Press.
[21] I.T. Nabney. NETLAB : algorithms for pattern recognition. Advances in pattern
recognition. Springer, London, 2002.
[22] J. Platt. Probabilities for SV machines. In A.J. Smola, P.J. Bartlett, B. Schölkopf,
and D. Schuurmans, editors, Advances in Large Margin Classifiers (Neural Information
Processing Series). MIT Press, 2000.
[23] J.C. Platt. Sequential minimal optimization: A fast algorithm for training support
vector machines. Technical Report MSR-TR-98-14, Microsoft Research, Redmond,
1998.
[24] B.D. Ripley. Neural networks and related methods for classification (with discussion).
J. Royal Statistical Soc. Series B, 56:409–456, 1994.
[25] M.I. Schlesinger. A connection between learning and self-learning in the pattern
recognition (in Russian). Kibernetika, 2:81–88, 1968.
[26] M.I. Schlesinger and V. Hlaváč. Ten Lectures on Statistical and Structural Pattern
Recognition. Kluwer Academic Publishers, 2002.
[27] B. Schölkopf, C. Burges, and A.J. Smola. Advances in Kernel Methods – Support
Vector Learning. MIT Press, Cambridge, 1999.
[28] B. Schölkopf, P. Knirsch, A. Smola, and C. Burges. Fast approximation of support
vector kernel expansions, and an interpretation of clustering as approximation in
feature spaces. In P. Levi, M. Schanz, R.J. Ahler, and F. May, editors, Mustererkennung
1998 – 20. DAGM, pages 124–132, Berlin, Germany, 1998. Springer-Verlag.
[29] B. Schölkopf, A. Smola, and K.R. Müller. Nonlinear component analysis as a
kernel eigenvalue problem. Neural Computation, 10:1299–1319, 1998.
[30] B. Schölkopf and A.J. Smola. Learning with Kernels. The MIT Press, MA, 2002.
[31] V. Vapnik. The nature of statistical learning theory. Springer Verlag, 1995.
97

Statistical Pattern Recognition Toolbox for Matlab
Vojtˇch Franc and V´clav Hlav´ˇ e a ac June 24, 2004

Contents
1 Introduction 1.1 What is the Statistical Pattern Recognition Toolbox and how it has been developed? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.2 The STPRtool purpose and its potential users . . . . . . . . . . . . . . 1.3 A digest of the implemented methods . . . . . . . . . . . . . . . . . . . 1.4 Relevant Matlab toolboxes of others . . . . . . . . . . . . . . . . . . . . 1.5 STPRtool document organization . . . . . . . . . . . . . . . . . . . . . 1.6 How to contribute to STPRtool? . . . . . . . . . . . . . . . . . . . . . 1.7 Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 Linear Discriminant Function 2.1 Linear classifier . . . . . . . . . . . . . . . . . . 2.2 Linear separation of finite sets of vectors . . . . 2.2.1 Perceptron algorithm . . . . . . . . . . . 2.2.2 Kozinec’s algorithm . . . . . . . . . . . . 2.2.3 Multi-class linear classifier . . . . . . . . 2.3 Fisher Linear Discriminant . . . . . . . . . . . . 2.4 Generalized Anderson’s task . . . . . . . . . . . 2.4.1 Original Anderson’s algorithm . . . . . . 2.4.2 General algorithm framework . . . . . . 2.4.3 ε-solution using the Kozinec’s algorithm 2.4.4 Generalized gradient optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 3 3 4 5 5 5 6 8 9 10 11 12 14 15 17 19 21 21 22 25 26 29 31 32 33 34

3 Feature extraction 3.1 Principal Component Analysis . . . . . . . . . . . 3.2 Linear Discriminant Analysis . . . . . . . . . . . 3.3 Kernel Principal Component Analysis . . . . . . . 3.4 Greedy kernel PCA algorithm . . . . . . . . . . . 3.5 Generalized Discriminant Analysis . . . . . . . . . 3.6 Combining feature extraction and linear classifier

1

. .4 Quadratic classifier . . . . . . . . . .1 Gaussian distribution and Gaussian mixture model 4.2 Image denoising . . . .4 One-Against-One decomposition for SVM . . 4. . . . . . . . . . . 4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 Support vector and other kernel machines 5. . . . . . 8 Conclusions and future planes 2 . . . . . . . . . . . . . . . 4. . . . 6. . . . . . . . . . . . . . . . . . . . . . . .2. . . . . . . . . . .1 Data sets . . . 6 Miscellaneous 6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1 Optical Character Recognition system . . . . . . . . . . . . . . . . . . . 5. . . . . . . . . . . . . . . . . . . . . 5. 6. . . . . . . . . . . . . . . . . . . . . . . . . .7 Pre-image problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6. . . . . . . . . . . . .3 One-Against-All decomposition for SVM . . .2 Maximum-Likelihood estimation of GMM . . . . . . 5. . . . . . . . . 4. . . . . . . . . . . . . . . . 41 41 43 44 45 47 48 50 57 57 58 63 64 66 68 70 73 76 76 77 78 80 82 85 85 87 94 7 Examples of applications 7.8 Reduced set method . . . . .3 Bayesian classifier . . . . . . . . .5 K-nearest neighbors classifier . .4 Density estimation and clustering 4. . . . . . . . . . . . . . . . . . . . . . . . . . . . .2 Incomplete data . . . . . . . . . . . 6. . . . . . . . . . . .6 Kernel Fisher Discriminant . .5 K-means clustering . . . . . . . . . . . . . . . .5 Multi-class BSVM formulation . . . . . 4. .3 Minimax estimation . . . . . .1 Support vector classifiers .2 Binary Support Vector Machines . . . . . . . . . . . . . 5. . . . .4 Probabilistic output of classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5. . . . . . . . . . . . . . . . . . . . . . . . 5. . . . . . . . . . . . 5.1 Complete data . . . . . . . .2. . . . . . . . .2 Visualization . 7. . . . . . . . . . . . . .

Chapter 1 Introduction
1.1 What is the Statistical Pattern Recognition Toolbox and how it has been developed?

The Statistical Pattern Recognition Toolbox (abbreviated STPRtool) is a collection of pattern recognition (PR) methods implemented in Matlab. The core of the STPRtool is comprised of statistical PR algorithms described in the monograph Schlesinger, M.I., Hlav´ˇ, V: Ten Lectures on Statistical and Structural ac Pattern Recognition, Kluwer Academic Publishers, 2002 [26]. The STPRtool has been e developed since 1999. The first version was a result of a diploma project of Vojtˇch Franc [8]. Its author became a PhD student in CMP and has been extending the STPRtool since. Currently, the STPRtool contains much wider collection of algorithms. The STPRtool is available at http://cmp.felk.cvut.cz/cmp/cmp software.html. We tried to create concise and complete documentation of the STPRtool. This document and associated help files in .html should give all information and parameters the user needs when she/he likes to call appropriate method (typically a Matlab mfile). The intention was not to write another textbook. If the user does not know the method then she/he will likely have to read the description of the method in a referenced original paper or in a textbook.

1.2

The STPRtool purpose and its potential users

• The principal purpose of STPRtool is to provide basic statistical pattern recognition methods to (a) researchers, (b) teachers, and (c) students. The STPRtool is likely to help both in research and teaching. It can be found useful by practitioners who design PR applications too.

3

• The STPRtool offers a collection of the basic and the state-of-the art statistical PR methods. • The STPRtool contains demonstration programs, working examples and visualization tools which help to understand the PR methods. These teaching aids are intended for students of the PR courses and their teachers who can easily create examples and incorporate them into their lectures. • The STPRtool is also intended to serve a designer of a new PR method by providing tools for evaluation and comparison of PR methods. Many standard and state-of-the art methods have been implemented in STPRtool. • The STPRtool is open to contributions by others. If you develop a method which you consider useful for the community then contact us. We are willing to incorporate the method into the toolbox. We intend to moderate this process to keep the toolbox consistent.

1.3

A digest of the implemented methods

This section should give the reader a quick overview of the methods implemented in STPRtool. • Analysis of linear discriminant function: Perceptron algorithm and multiclass modification. Kozinec’s algorithm. Fisher Linear Discriminant. A collection of known algorithms solving the Generalized Anderson’s Task. • Feature extraction: Linear Discriminant Analysis. Principal Component Analysis (PCA). Kernel PCA. Greedy Kernel PCA. Generalized Discriminant Analysis. • Probability distribution estimation and clustering: Gaussian Mixture Models. Expectation-Maximization algorithm. Minimax probability estimation. K-means clustering. • Support Vector and other Kernel Machines: Sequential Minimal Optimizer (SMO). Matlab Optimization toolbox based algorithms. Interface to the SV M light software. Decomposition approaches to train the Multi-class SVM classifiers. Multi-class BSVM formulation trained by Kozinec’s algorithm, MitchellDemyanov-Molozenov algorithm and Nearest Point Algorithm. Kernel Fisher Discriminant.

4

1.4

Relevant Matlab toolboxes of others

Some researchers in statistical PR provide code of their methods. The Internet offers many software resources for Matlab. However, there are not many compact and well documented packages comprised of PR methods. We found two useful packages which are briefly introduced below. NETLAB is focused to the Neural Networks applied to PR problems. The classical PR methods are included as well. This Matlab toolbox is closely related to the popular PR textbook [3]. The documentation published in book [21] contains detailed description of the implemented methods and many working examples. The toolbox includes: Probabilistic Principal Component Analysis, Generative Topographic Mapping, methods based on the Bayesian inference, Gaussian processes, Radial Basis Functions (RBF) networks and many others. The toolbox can be downloaded from http://www.ncrg.aston.ac.uk/netlab/. PRTools is a Matlab Toolbox for Pattern Recognition [7]. Its implementation is based on the object oriented programming principles supported by the Matlab language. The toolbox contains a wide range of PR methods including: analysis of linear and non-linear classifiers, feature extraction, feature selection, methods for combining classifiers and methods for clustering. The toolbox can be downloaded from http://www.ph.tn.tudelft.nl/prtools/.

1.5

STPRtool document organization

The document is divided into chapters describing the implemented methods. The last chapter is devoted to more complex examples. Most chapters begin with definition of the models to be analyzed (e.g., linear classifier, probability densities, etc.) and the list of the implemented methods. Individual sections have the following fixed structure: (i) definition of the problem to be solved, (ii) brief description and list of the implemented methods, (iii) reference to the literature where the problem is treaded in details and (iv) an example demonstrating how to solve the problem using the STPRtool.

1.6

How to contribute to STPRtool?

If you have an implementation which is consistent with the STPRtool spirit and you want to put it into public domain through the STPRtool channel then send an email to xfrancv@cmp.felk.cvut.cz. 5

Of course. 1. M. Vernon. thanks go to several Czech. Kiev. other STPRtool users and students who used the toolbox in their laboratory exercises. Ukraine who showed us the beauty of pattern recognition and gave us many valuable advices. Schlesinger from the Ukrainian Academy of Sciences. we like to thank to Prof. Third.Please prepare your code according to the pattern you find in STPRtool implementations. the Czech Technical University in Prague. we like to thank for discussions to our colleagues in the Center for Machine Perception. Forth. Second. Provide piece of documentation too which will allow us to incorporate it to STPRtool easily. 6 . our visitors. EU and industrial grants which indirectly contributed to STPRtool development.I.7 Acknowledgements First. your method will be bound to your name in STPRtool. the STPRtool was rewritten and its documentation radically improved in May and June 2004 with the financial help of ECVision European Research Network on Cognitive Computer Vision Systems (IST-2001-35454) lead by D.

x·x Σ µ S X ⊆ Rn Y F φ: X → F k: X × X → R TXY TX q: X → Y fy : X → R f: X → R |·| · det(()·) δ(i. Matrix determinant. Labeled training set (more precisely multi-set) TXY = {(x1 . Logical or. for instance x. . Kernel function (Mercer kernel). z] where the column vector v is (n + m)-dimensional. (xl . Vectors are implicitly column vectors. Classification rule. . Discriminant function of the binary classifier. y1 ). Kronecker delta δ(i. Mapping from input space X to the feature space F . . xl }. n-dimensional input space (space of observations). yl )}. . j) Dot product between vectors x and x . Feature space. j) = 1 for i = j and δ(i. Vectors are denoted by lower-case bold italic letters. . . . . . . Euclidean norm. j) = 0 otherwise. The concatenation of column vectors w ∈ Rn . Cardinality of a set. Discriminant function associated with the class y ∈ Y. Scatter matrix. for instance A. Unlabeled training set TX = {x1 . Covariance matrix. Set of c hidden states – class labels Y = {1. . Mean vector (mathematical expectation). c}. 7 . .Notation Upper-case bold letters denote matrices. z ∈ Rm is denoted as v = [w.

The input vector x ∈ Rn is assigned to class y ∈ {1.1.2 and Section 2. y ∈Y y ∈Y (2. . 2 if f (x) = w · x + b < 0 . yl )} containing pairs of observations xi ∈ Rn and corresponding class labels yi ∈ Y. The methods using this type of training data is described in Section 2. ∀y ∈ Y introduce bias to the discriminant functions. The implemented learning methods can be distinguished according to the type of the training data: I. .3. y1 ). 2} as follows q(x) = 1 if f (x) = w · x + b ≥ 0 . . . the linear classifier is represented by a single discriminant function f (x) = w · x + b .1) In the particular binary case Y = {1. The input vector x ∈ Rn is assigned to the class y ∈ Y its corresponding discriminant function fy attains maximal value y = argmax fy (x) = argmax ( wy · x + by ) .2) The data-type used to describe the linear classifier and the implementation of the classifier is described in Section 2. . 2}. ∀y ∈ Y . The problem is described by a finite set T = {(x1 . (2. c} is composed of a set of discriminant functions fy (x) = w y · x + by .Chapter 2 Linear Discriminant Function The linear classification rule q: X ⊆ Rn → Y = {1. (xl . 8 . Following sections describe supervised learning methods which determine the parameters of the linear classifier based on available knowledge. . 2. given by the parameter vector w ∈ Rn and bias b ∈ R. which are linear with respect to both the input vector x ∈ Rn and their parameter vectors w ∈ Rn . . . The scalars by .

Example: Linear classifier The example shows training of the Fisher Linear Discriminant which is the classical example of the linear classifier. References: The linear discriminant function is treated throughout for instance in books [26. 6]. ggradander Gradient method to solve the Generalized Anderson’s task. 2.1 Linear classifier The STPRtool uses a specific structure array to describe the binary (see Table 2. demo linclass Demo on the algorithms learning linear classifiers.4).1: Implemented methods: Linear discriminant function linclass Linear classifier (linear machine).II. The Riply’s data set riply trn. eanders Epsilon-solution of the Generalized Anderson’s task. ekozinec Kozinec’s algorithm for eps-optimal separating hyperplane.mat. The resulting linear classifier is then evaluated on the testing data riply tst.1. fld Fisher Linear Discriminant. 9 . The Anderson’s task and its generalization are representatives of such methods (see Section 2. perceptron Perceptron algorithm to train binary linear classifier. fldqp Fisher Linear Discriminant using Quadratic Programming. The implementation of the linear classifier itself provides function linclass.mat is used for training. ganders Solves the Generalized Anderson’s task.2) and the multi-class (see Table 2. Table 2. The summary of implemented methods is given in Table 2. The parameters of class conditional distributions are (completely or partially) known. mperceptr Perceptron algorithm to train multi-class linear machine. demo anderson Demo on Generalized Anderson’s task. andrerr Classification error of the Generalized Anderson’s task.3) linear classifier. androrig Original method to solve the Anderson-Bahadur’s task.

Multi-class linear classifier (structure array): . (ii) training ε-optimal hyperplane and (iii) training the multi-class linear classifier. .b [c × 1] Parameters by ∈ R. .fun = ’linclass’ Identifies function associated with this data type. .X. .2. yi = 1 . tst = load(’riply_tst’). The Perceptron (Section 2. tst. c of the linear discriminant functions fy (x) = w y · x + b. . The implemented methods solve the task of (i) training separating hyperplane for binary case c = 2. c}. (xl . trn = load(’riply_trn’). . (2. The problem is formally defined as solving the set of linear inequalities w · xi + b ≥ 0 . . model ). .1) 10 . cerror( ypred.2) with zero training error is equivalent to finding the hyperplane H = {x: w · x + b = 0} which separates the training vectors of the first y = 1 and the second y = 2 class. yi = 2 . .W [n × 1] The normal vector w ∈ Rn of the separating hyperplane f (x) = w · x + b. . . .1080 % % % % % load training data load testing data train FLD classifier classify testing data evaluate testing error 2. . ypred = linclass( tst. If the inequalities (2.W [n × c] Matrix of parameter vectors w y ∈ Rn . y = 1.3) have a solution then the training data T is linearly separable. c. y1 ). w · xi + b < 0 . Binary linear classifier (structure array): . .fun = ’linclass’ Identifies function associated with this data type. . .3) with respect to the vector w ∈ Rn and the scalar b ∈ R.b [1 × 1] Bias b ∈ R of the hyperplane. The definitions of these tasks are given below. model = fld( trn ). y = 1. . The problem of training the binary (c = 2) linear classifier (2. yl )} consists of pairs of observations xi ∈ Rn and corresponding class labels yi ∈ Y = {1.2: Data-type used to describe binary linear classifier. .y ) ans = 0.3: Data-type used to describe multi-class linear classifier. . Table 2. .Table 2.2 Linear separation of finite sets of vectors The input training data T = {(x1 .

If the training data is linearly separable then the optimal separating hyperplane can be defined the solution of the following task (w∗ .3) can be formally rewritten to a simpler form v · zi > 0 . The parameter ε ≥ 0 defines closeness to the optimal solution in terms of the margin. (2. . min − w w i∈I2 .2). 2} training vectors xi ∈ Rn . .6) with respect to the vectors wy ∈ Rn . . The ε-optimal solution is defined as the vector w and scalar b such that the inequality m(w ∗ . where the vector v ∈ Rn+1 is constructed as v = [w. .b = argmax min min i∈I1 w · xi + b w · xi + b .4) where I1 = {i: yi = 1} and I2 = {i: yi = 2} are sets of indices. l . . This task is equivalent to training the linear Support Vector Machines (see Section 5). .2. l . .1 Perceptron algorithm The input is a set T = {(x1 . b∗ ).5) holds. 2.3) or Kozinec’s algorithm. The problem of training the multi-class linear classifier c > 2 with zero training error is formally stated as the problem of solving the set of linear inequalities w yi · xi + byi > wy · xi + by . (2. The optimal separating hyperplane cannot be found exactly except special cases. The optimal separating hyperplane H∗ = {x ∈ Rn : w∗ · x + b∗ = 0} separates training data T with maximal margin m(w ∗ . b∗ ) = argmax m(w. The problem of training the separating hyperplane (2.2. y1).and Kozinec’s (Section 2. . 11 (2. The ε-optimal solution can be found by Kozinec’s algorithm (Section 2. .b w.8) i = 1. .2. i = 1. (2. b] . yl )} of binary labeled yi ∈ {1. (2. An interactive demo on the algorithms separating the finite sets of vectors by a hyperplane is implemented in demo linclass.2. yi = y . The task (2.2) algorithm are useful to train the binary linear classifiers from the linearly separable data.6) for linearly separable training data T can be solved by the modified Perceptron (Section 2. (xl . b) ≤ ε . b∗ ) − m(w.7) . y ∈ Y. Therefore the numerical algorithms seeking the approximate solution are applied instead. . b) w. y ∈ Y and scalars by ∈ R. .

b) of the linear classifier are obtained from the found vector v by inverting the transformation (2. w2 which converge to the vector w ∗ and w ∗ respectively. The generated training data contain 50 labeled vectors which are linearly separable with margin 1. Example: Training binary linear classifier with Perceptron The example shows the application of the Perceptron algorithm to find the binary linear classifier for synthetically generated 2-dimensional training data.7) with respect to the unknown vector v ∈ Rn+1 is equivalent to the original task (2. Perceptron algorithm is implemented in function perceptron.2. (xl . . −[xi . v (t) until the set of inequalities (2. The vector w∗ = 12 . figure. .w2 ∈X2 w1 − w2 .8). The found classifier is visualized as a separating hyperplane (line in this case) H = {x ∈ R2 : w · x + b = 0}. . . ppatterns( data ).1. . . data = genlsdata( 2. . pline( model ). 50. The initial vector v (0) can be set arbitrarily (usually v = 0). The Perceptron algorithm is an iterative procedure which builds a series of vectors (0) v .9) The problem of solving (2. . References: Analysis of the Perceptron algorithm including the Novikoff’s proof of convergence can be found in Chapter 5 of the book [26]. . The Kozinec’s algorithm builds a series of vectors w 1 . yl )} of binary labeled yi ∈ {1. % generate training data % call perceptron % plot training data % plot separating hyperplane 2. w 2 . . . 1] if yi = 1 . 1). where X1 stands for the convex hull of the training vectors of the first class X1 = {xi : yi = 1} and X2 for the convex hull of the second class likewise. (2.7) is satisfied. . . See Figure 2. . The 1 2 vectors w∗ and w ∗ are the solution of the following task 1 2 w∗. 2} training (0) (1) (t) vectors xi ∈ Rn . . The parameters (w. The Novikoff theorem ensures that the Perceptron algorithm stops after finite number of iterations t if the training data are linearly separable. w 1 (0) (1) (t) and w 2 . v (1) .2 Kozinec’s algorithm The input is a set T = {(x1 . model = perceptron( data ). . w∗ = 1 2 argmin w1 ∈X1 . . 1] if yi = 2 . z l } are defined as zi = [xi . . . y1).and transformed training data Z = {z 1 . The Perceptron from the neural network perspective is described in the book [3]. w1 . .3).

3) is sought for (ε < 0).5) is sought for (ε ≥ 0).4) 1 2 2 1 2 separating the training data with the maximal margin. If the ε-optimal optimality stopping condition is used then the Kozinec’s algorithm converges in finite number of iterations. The Kozinec’s algorithm is proven to converge in a finite number of iterations if the separating hyperplane exists. The generated training data contains 50 labeled vectors which are linearly separable with 13 .1: Linear classifier trained by the Perceptron algorithm w∗ − w ∗ and the bias b∗ = 1 ( w ∗ 2 − w∗ 2 ) determine the optimal hyperplane (2. The Kozinec’s algorithm can be also used to solve a simpler problem of finding the separating hyperplane (2. The ε-optimal hyperplane (2.−30 −35 −40 −45 −50 −55 −55 −50 −45 −40 −35 −30 Figure 2. Notice that setting ε = 0 force the algorithm to seek the optimal hyperplane which is generally ensured to be found in an infinite number of iterations t = ∞. The Kozinec’s algorithm which trains the binary linear classifier is implemented in the function ekozinec. Example: Training ε-optimal hyperplane by the Kozinec’s algorithm The example shows application of the Kozinec’s algorithm to find the (ε = 0. The Kozinec’s algorithm is proven to converge to the vectors w∗ and w ∗ in infinite 1 2 number of iterations t = ∞.01)optimal hyperplane for synthetically generated 2-dimensional training data. II. References: Analysis of the Kozinec’s algorithm including the proof of convergence can be found in Chapter 5 of the book [26]. The separating hyperplane (2.3). Therefore the following two stopping conditions are implemented: I.

. (2. . y = Y \ {yi } . . struct(’eps’.10) where the vector v ∈ R(n+1)c contains the originally optimized parameters v = [w1 . . c}.50. . i i = 1. (xl . % plot found hyperplane 8 6 4 2 0 −2 −4 −6 −8 −10 −10 −8 −6 −4 −2 0 2 4 6 8 10 Figure 2. .2: ε-optimal hyperplane trained with the Kozinec’s algorithm.1) which classifies the training data T without error. .2) as a separating hyperplane H = {x ∈ R2 : w · x + b = 0} (straight line in this case). b2 .1). b1 . The task is to design the linear classifier (2.margin 1. The problem of training (2. data = genlsdata(2. . . . 2.2. l . % generate data % run ekozinec model = ekozinec(data.0.1) is equivalent to solving the set of linear inequalities (2. yl )} consists of pairs of observations xi ∈ Rn and corresponding class labels yi ∈ Y = {1. . The found classifier is visualized (Figure 2.margin which satisfies the ε-optimality condition. y1 ). . figure.6). . ppatterns(data). . % plot data pline(model). . It could be verified that the found hyperplane has margin model.01)). w 2 . bc ] .6) can be formally transformed to a simpler set v · zy > 0 . w c . The inequalities (2. 14 . .3 Multi-class linear classifier The input training data T = {(x1 .

The simple form of the Perceptron updating rule allows to implement the transformation implicitly without mapping the training data T into (n + 1)c-dimensional space. l. . The described method is implemented in function mperceptron. Example: Training multi-class linear classifier by the Perceptron The example shows application of the Perceptron rule to train the multi-class linear classifier for the synthetically generated 2-dimensional training data pentagon. The class separability in a direction w ∈ Rn is defined as F (w) = w · SB w . y ∈ {1. 2} be sets of indices of training vectors belonging to the first y = 1 and the second y = 2 class. (xl . 6]. otherwise.12) where SB is the between-class scatter matrix SB = (µ1 − µ2 )(µ1 − µ2 )T . µy = 15 . 2} training vectors xi ∈ Rn . . y1). i The described transformation is known as the Kesler’s construction. % load training data % run training algorithm % plot training data % plot decision boundary 2. . model = mperceptron(data). y ∈ {1. ∈ Y \ {yi } is created from the training for j = yi . (2. . w · SW w 1 xi .3). for j = y .10) can be solved by the Perceptron algorithm. The transformed task (2. |Iy | i∈I y (2. yl )} of binary labeled yi ∈ {1. data = load(’pentagon’). 1] . Let Iy = {i: yi = y}. z i (j) = ⎩ 0. pboundary(model).11) where z y (j) stands for the j-th slot between coordinates (n + 1)(j − 1) + 1 and (n + 1)j. y −[xi . i = 1. figure. respectively. 1] . . 2} . . A user’s own data can be generated interactively using function createdata(’finite’.3 Fisher Linear Discriminant The input is a set T = {(x1 .10) (number 10 stands for number of classes). y i set T such that ⎧ ⎨ [xi . .The set of vectors z y ∈ R(n+1)c . References: The Kesler’s construction is described in books [26. The found classifier is visualized as its separating surface which is known to be piecewise linear and the class regions should be convex (see Figure 2. ppatterns(data). .mat.

Sy = i∈Yy (xi − µy )(xi − µy )T .4 −0. In the case of the Fisher Linear Discriminant (FLD). 16 . the parameter vector w of the linear discriminant function f (x) = w · x + b is determined to maximize the class separability criterion (2. This case is implemented in the function fld.2 −0.2 0.6 −0. The STPRtool contains two implementations of FLD: I.8 1 Figure 2.6 −0. holds.13) using the matrix inversion w = SW −1 (µ1 − µ2 ) . y ∈ {1. computes such b that the equality w · µ1 + b = −( w · µ2 + b) .12) w = argmax F (w ) = argmax w w w · SB w . 2} .3: Multi-class linear classifier trained by the Perceptron algorithm.8 −1 −1 −0. w · SW w (2. The classical solution of the problem (2.4 −0.13) The bias b of the linear rule must be determined based on another principle. The STPRtool implementation. for instance.6 0.4 0.2 0 0. and SW is the within class scatter matrix defined as SW = S1 + S2 .2 0 −0.8 −0.8 0.4 0.6 0.1 0.

The class conditional distributions pX|Y (x|y). This method can be useful when the matrix inversion is hard to compute. ypred = linclass(tst. trn = load(’riply_trn’). The parameters (µ1 . Σi ): i ∈ I2 }. Σi ): i ∈ I1 }.II.tst. model = fldqp(trn).X.13) can be reformulated as the quadratic programming (QP) task w = argmin w · SW w .mat.4) and evaluated on the testing data riply tst. The QP solver quadprog of the Matlab Optimization Toolbox is used in the toolbox implementation. the function fldqp can be replaced by the standard approach implemented in the function fld which would yield the same solution. Let q: X ⊆ Rn → {1. 2} 17 .y) ans = 0.model). References: The Fisher Linear Discriminant is described in Chapter 3 of the book [6] or in the book [3]. The found linear classifier is visualized (see Figure 2.4 Generalized Anderson’s task The classified object is described by the vector of observations x ∈ X ⊆ Rn and a binary hidden state y ∈ {1. Σ2 ) of these class distributions are unknown. tst = load(’riply_tst’). w subject to w · (µ1 − µ2 ) = 2 . Example: Fisher Linear Discriminant The binary linear classifier based on the Fisher Linear Discriminant is trained on the Riply’s data set riply trn. Similarly (µ2 . Σ1 ) belong to a certain finite set of parameters {(µi . it is known that the parameters (µ1 . Σ2 ) belong to a finite set {(µi . cerror(ypred. ppatterns(trn). However. y ∈ {1. The problem (2. pline(model). However.1080 % load training data % compute FLD % % % % plot data and solution load testing data classify testing data compute testing error 2.mat. The method is implemented in the function fldqp. 2}. figure. The QP formulation fldqp of FLD is used. Σ1 ) and (µ2 . 2} are known to be multi-variate Gaussian distributions.

In other words. µi .2 −1. µi .8 0. b. The probability ε(w. q(x) = 2 . b∗ ) = argmin Err(w. b. Σi ) . r i = min (µi − x) · (Σi )−1 (µi − x) = x∈H w · µi + b w · Σi w . for f (x) = w∗ · x + b∗ ≥ 0 . for f (x) = w∗ · x + b∗ < 0 . b) = max ε(w.14) when |I1 | = 1 and |I2 | = 1. The Generalized Anderson’s task (GAT) is to find the parameters (w ∗ . µi . 18 . Σi ) and the nearest vector of the separating hyperplane H = {x ∈ Rn : w · x + b = 0}. The probability of misclassification is defined as Err(w.14) The original Anderson’s task is a special case of (2.6 0. µi . b. be a binary linear classifier (2. such that the error Err(w∗ .5 −1 −0.2) with discriminant function f (x) = w · x + b. Σi ) . Σi ) is probability that the Gaussian random vector x with mean vector µi and the covariance matrix Σi satisfies q(x) = 1 for i ∈ I2 or q(x) = 2 for i ∈ I1 . i∈I1 ∪I2 where ε(w. w.2 1 0. b) = argmin max ε(w.e. b∗ ) of the linear classifier 1 ..4: Linear classifier based on the Fisher Linear Discriminant.4 0.b w. it is the probability that the vector x will be misclassified by the linear rule q.2 0 −0. b∗ ) is minimal (w ∗ .5 1 Figure 2. Σi ) is proportional to the reciprocal of the Mahalanobis distance r i between the (µi . i. b.1.b i∈I1 ∪I2 (2.5 0 0.

The problem is solved by an iterative algorithm which stops while the condition γ− 1−λ ≤ε. b. Σi ) and the corresponding Mahalanobis distance r i is given by the integral ε(w. w · Σ1 w is satisfied.1 Original Anderson’s algorithm The algorithm is implemented in the function androrig. µi . λ γ= w · Σ2 w . b) = argmax min w. b∗ ) = argmax F (w.15) The optimization problem (2. w · Σ1 w where the scalar 0 < λ < 1 is unknown. b) is not differentiable. The STPRtool contains implementations of the algorithm solving the original Anderson’s task as well as implementations of three different approaches to solve the Generalized Anderson’s task which are described bellow. b. This algorithm solves the original Anderson’s task defined for two Gaussians only |I1 | = |I2 | = 1. However.14) can be equivalently rewritten as (w ∗ . An interactive demo on the algorithms solving the Generalized Anderson’s task is implemented in demo anderson. 1−λ = λ w · Σ2 w .The exact relation between the probability ε(w. References: The original Anderson’s task was published in [1].b i∈I1 ∪I2 w · µi + b .b w. A detailed description of the Generalized Anderson’s task and all the methods implemented in the STPRtool is given in book [26]. b) is less than 0.4. Σi ) = ∞ ri 1 2 1 √ e− 2 t dt . The objective function F (w. 2. the optimal solution vector w ∈ Rn is obtained by solving the following problem w = ((1 − λ)Σ1 + λΣ2 )−1 (µ1 − µ2 ) . The parameter ε defines the closeness to the optimal solution. In this case. µi . the objective function F (w.5. b) is proven to be convex in the region where the probability of misclassification Err(w. Example: Solving the original Anderson’s task 19 . 2π (2. w · Σi w which is more suitable for optimization.

y ) % ans = 0. % ypred = linclass( tst.6 0. The parameters (µ1 .1 −0. model ).4 0. The found linear classifier is visualized and the Gaussian distributions (see Figure 2.37 0. The classifier is validated on the testing set riply tst.5 0. distrib ). % model = androrig(distrib).1150 load training data estimate Gaussians solve Anderson’s task plot solution load testing data classify testing data evaluate error 0.6 Figure 2.8 −0.2 0.4 −0.The original Anderson’s task is solved for the Riply’s data set riply trn.9 1.5).4 gauss 1 0.5: Solution of the original Anderson’s task. % tst = load(’riply_tst’). Σ1 ) and (µ2 .2 0 0.2 0. tst. pandr( model.929 0. % cerror( ypred. trn = load(’riply_trn’).7 gauss 2 0.X. % figure.mat. 20 .3 0.8 0.6 −0. Σ2 ) of the Gaussian distribution of the first and the second class are obtained by the Maximum-Likelihood estimation. % distrib = mlcgmm(data).mat.

2.4.2 General algorithm framework

The algorithm is implemented in the function ganders. This algorithm repeats the following steps:

I. An improving direction (∆w, ∆b) in the parameter space is computed such that moving along this direction increases the objective function F. The quadprog function of the Matlab Optimization Toolbox is used to solve this subtask.

II. The optimal movement k along the direction (∆w, ∆b) is determined by a 1D-search based on the interval cutting algorithm according to the Fibonacci series. The number of interval cuttings is an optional parameter of the algorithm.

III. The new parameters are set to

w(t+1) := w(t) + k∆w ,   b(t+1) := b(t) + k∆b .

The algorithm iterates until the minimal change in the objective function is less than the prescribed ε, i.e., until the condition

F(w(t+1), b(t+1)) − F(w(t), b(t)) > ε

is violated.

Example: Solving the Generalized Anderson's task

The Generalized Anderson's task is solved for the data mars.mat. Both the first and the second class are described by three 2-dimensional Gaussian distributions. The found linear classifier as well as the input Gaussian distributions are visualized (see Figure 2.6).

distrib = load('mars');          % load input Gaussians
model = ganders(distrib);        % solve the GAT
figure; pandr(model,distrib);    % plot the solution

Figure 2.6: Solution of the Generalized Anderson's task.

2.4.3 ε-solution using the Kozinec's algorithm

This algorithm is implemented in the function eanders. The ε-solution is defined as a linear classifier with parameters (w, b) such that the condition

Err(w, b) < ε

is satisfied. The prescribed ε ∈ (0, 0.5) is the desired upper bound on the probability of misclassification. The problem is transformed to the task of separation of two sets of ellipsoids, which is solved by the Kozinec's algorithm. The algorithm converges in a finite number of iterations if the given ε-solution exists. Otherwise it can get stuck in an endless loop.

Example: ε-solution of the Generalized Anderson's task

The ε-solution of the Generalized Anderson's task is solved for the data mars.mat. Both the first and the second class are described by three 2-dimensional Gaussian distributions. The desired maximal possible probability of misclassification is set to 0.06 (i.e., 6%). The found linear classifier and the input Gaussian distributions are visualized (see Figure 2.7).

distrib = load('mars');                % load Gaussians
options = struct('err',0.06);          % maximal desired error
model = eanders(distrib,options);      % training
figure; pandr(model,distrib);          % visualization

Figure 2.7: ε-solution of the Generalized Anderson's task with maximal 0.06 probability of misclassification.

2.4.4 Generalized gradient optimization

This algorithm is implemented in the function ggradandr. The objective function F(w, b) is convex but not differentiable, thus the standard gradient optimization cannot be used. However, the generalized gradient optimization method can be applied.

The algorithm repeats the following two steps:

I. The generalized gradient (∆w, ∆b) is computed.

II. The optimized parameters are updated:

w(t+1) := w(t) + ∆w ,   b(t+1) := b(t) + ∆b .

The algorithm iterates until the minimal change in the objective function is less than the prescribed ε, i.e., until the condition

F(w(t+1), b(t+1)) − F(w(t), b(t)) > ε

is violated. The maximal number of iterations can be used as the stopping condition as well.

Example: Solving the Generalized Anderson's task using the generalized gradient optimization

The Generalized Anderson's task is solved for the data mars.mat. Both the first and the second class are described by three 2-dimensional Gaussian distributions. The maximal number of iterations is limited to tmax=10000. The found linear classifier as well as the input Gaussian distributions are visualized (see Figure 2.8).

distrib = load('mars');                 % load Gaussians
options = struct('tmax',10000);         % max # of iterations
model = ggradandr(distrib,options);     % training
figure; pandr( model, distrib );        % visualization

Figure 2.8: Solution of the Generalized Anderson's task after 10000 iterations of the generalized gradient optimization algorithm.
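As an illustration of step I, the generalized gradient of F(w, b) = min_i f_i(w, b), with f_i(w, b) = (w·µ_i + b)/sqrt(w·Σ_i·w), can be taken as the ordinary gradient of the currently active (minimal) term. The sketch below computes one ascent step for the active index imin; the variable names Mean, Cov, imin and step are illustrative and do not refer to toolbox internals.

% gradient of f_i at (w,b) for the active Gaussian i = imin
mu = Mean{imin};  S = Cov{imin};
q  = w'*S*w;                                   % value of w'*Sigma*w
dw = mu/sqrt(q) - (w'*mu + b)*(S*w)/q^1.5;     % d f_i / d w
db = 1/sqrt(q);                                % d f_i / d b
w  = w + step*dw;                              % one generalized gradient ascent step
b  = b + step*db;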

Chapter 3

Feature extraction

The feature extraction step consists of mapping the input vector of observations x ∈ R^n onto a new feature description z ∈ R^m which is more suitable for the given task. The linear projection

z = W^T x + b ,   (3.1)

is commonly used. The projection is given by the matrix W [n × m] and the bias vector b ∈ R^m. The Principal Component Analysis (PCA) (Section 3.1) is a representative of the unsupervised learning methods which yield the linear projection (3.1). The Linear Discriminant Analysis (LDA) (Section 3.2) is a representative of the supervised learning methods yielding the linear projection. The linear data projection is implemented in the function linproj (see examples in Section 3.1 and Section 3.2). The data type used to describe the linear data projection is defined in Table 3.2.

The quadratic data mapping projects the input vector x ∈ R^n onto the feature space R^m where m = n(n + 3)/2. The quadratic mapping φ: R^n → R^m is defined as

φ(x) = [x_1, x_2, ..., x_n, x_1x_1, x_1x_2, ..., x_1x_n, x_2x_2, x_2x_3, ..., x_2x_n, ..., x_nx_n]^T ,   (3.2)

where x_i, i = 1, ..., n are the entries of the vector x ∈ R^n. The quadratic data mapping is implemented in the function qmap.

The kernel functions allow for non-linear extensions of the linear feature extraction methods. In this case, the input vectors x ∈ R^n are mapped into a new higher dimensional feature space F in which the linear methods are applied. This yields a non-linear (kernel) projection of data which is generally defined as

z = A^T k(x) + b ,   (3.3)

where A [l × m] is a parameter matrix and b ∈ R^m a parameter vector. The vector k(x) = [k(x, x_1), ..., k(x, x_l)]^T is a vector of kernel functions centered in the training data or their subset. The x ∈ R^n stands for the input vector and z ∈ R^m is the vector of extracted features. The extracted features are projections of φ(x) onto m vectors (or functions) from the feature space F. The Kernel PCA (Section 3.3) and the Generalized Discriminant Analysis (GDA) (Section 3.5) are representatives of the feature extraction methods yielding the kernel data projection (3.3). The kernel data projection is implemented in the function kernelproj (see examples in Section 3.3 and Section 3.5). The data type used to describe the kernel data projection is defined in Table 3.3. The summary of the implemented methods for feature extraction is given in Table 3.1.

Table 3.1: Implemented methods for feature extraction
  linproj        Linear data projection.
  kerproj        Kernel data projection.
  qmap           Quadratic data mapping.
  lin2quad       Merges linear rule and quadratic mapping.
  lin2svm        Merges linear rule and kernel projection.
  pca            Principal Component Analysis.
  pcarec         Computes reconstructed vector after PCA projection.
  kpca           Kernel Principal Component Analysis.
  kpcarec        Reconstructs image after kernel PCA.
  greedykpca     Greedy Kernel Principal Component Analysis.
  lda            Linear Discriminant Analysis.
  gda            Generalized Discriminant Analysis.
  demo_pcacomp   Demo on image compression using PCA.

Table 3.2: Data-type used to describe linear data projection.
  Linear data projection (structure array):
  .W [n × m]         Projection matrix W = [w_1, ..., w_m].
  .b [m × 1]         Bias vector b.
  .fun = 'linproj'   Identifies function associated with this data type.
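A minimal sketch of the quadratic mapping (3.2) is given below; it illustrates the formula only, and the toolbox function qmap may order the quadratic terms differently.

function z = qmap_sketch(x)
% maps a single column vector x [n x 1] onto z [n(n+3)/2 x 1] according to (3.2)
n = length(x);
z = zeros(n*(n+3)/2, 1);
z(1:n) = x(:);                  % linear terms x_1, ..., x_n
cnt = n;
for i = 1:n
   for j = i:n
      cnt = cnt + 1;
      z(cnt) = x(i)*x(j);       % quadratic terms x_i*x_j, j >= i
   end
end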

3.1 Principal Component Analysis

Let TX = {x_1, ..., x_l} be a set of training vectors from the n-dimensional input space R^n. The set of vectors TZ = {z_1, ..., z_l} is a lower dimensional representation of the input training vectors TX in the m-dimensional space R^m. The vectors TZ are obtained by the linear orthonormal projection

z = W^T x + b ,   (3.4)

where the matrix W [n × m] and the vector b [m × 1] are parameters of the projection. The reconstructed vectors TX̃ = {x̃_1, ..., x̃_l} are computed by the linear back projection

x̃ = W(z − b) ,   (3.5)

obtained by inverting (3.4). The mean square reconstruction error

ε_MS(W, b) = (1/l) Σ_{i=1}^{l} ||x_i − x̃_i||² ,   (3.6)

is a function of the parameters of the linear projections (3.4) and (3.5). The Principal Component Analysis (PCA) is the linear orthonormal projection (3.4) which allows for the minimal mean square reconstruction error (3.6) of the training data TX. The parameters (W, b) of the linear projection are the solution of the optimization task

(W, b) = argmin_{W', b'} ε_MS(W', b') ,   (3.7)

subject to w_i · w_j = δ(i, j), ∀i, j, where w_i, i = 1, ..., m are the column vectors of the matrix W = [w_1, ..., w_m] and δ(i, j) is the Kronecker delta function. The solution of the task (3.7) is the matrix W = [w_1, ..., w_m] containing the m eigenvectors of the sample covariance matrix which have the largest eigenvalues. The vector b equals W^T µ, where µ is the sample mean of the training data. The PCA is implemented in the function pca. A demo showing the use of PCA for image compression is implemented in the script demo_pcacomp.

Table 3.3: Data-type used to describe kernel data projection.
  Kernel data projection (structure array):
  .Alpha [l × m]          Parameter matrix A.
  .b [m × 1]              Bias vector b.
  .sv.X [n × l]           Training vectors {x_1, ..., x_l}.
  .options.ker [string]   Kernel identifier.
  .options.arg [1 × p]    Kernel argument(s).
  .fun = 'kernelproj'     Identifies function associated with this data type.

References: The PCA is treated thoroughly, for instance, in the book [13].

Example: Principal Component Analysis

The input data are synthetically generated from a 2-dimensional Gaussian distribution. The PCA is applied to train the 1-dimensional subspace which approximates the input data with the minimal reconstruction error. The training data, the reconstructed data and the 1-dimensional subspace given by the first principal component are visualized (see Figure 3.1).

X = gsamp([5;6],[1 0.8; 0.8 1],100);   % generate data
model = pca(X,1);                      % train PCA
Z = linproj(X,model);                  % lower dim. proj.
XR = pcarec(X,model);                  % reconstr. data

figure; hold on; axis equal;           % visualization
h1 = ppatterns(X,'kx');
h2 = ppatterns(XR,'bo');
[dummy,mn] = min(Z);
[dummy,mx] = max(Z);
h3 = plot([XR(1,mn) XR(1,mx)],[XR(2,mn) XR(2,mx)],'r');
legend([h1 h2 h3],'Input vectors','Reconstructed','PCA subspace');

Figure 3.1: Example on the Principal Component Analysis.
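The eigenvector-based solution of (3.7) can be sketched in a few lines of plain Matlab using the data matrix X generated in the example above; this is only an illustration of the principle, and the toolbox function pca may differ in the sign convention of the bias b and in numerical details.

% X [n x l] training vectors
[n,l] = size(X);
m = 1;                                  % dimension of the subspace (as in the example)
mu = mean(X,2);
Xc = X - repmat(mu,1,l);                % centered data
S  = Xc*Xc'/l;                          % sample covariance matrix
[V,D] = eig(S);
[d,idx] = sort(diag(D),'descend');      % sort eigenvalues
W = V(:,idx(1:m));                      % m leading eigenvectors
b = -W'*mu;                             % assumed sign convention: projected data are zero-mean
Z = W'*X + repmat(b,1,l);               % projection (3.4)
XR = W*(Z - repmat(b,1,l));             % back projection (3.5)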

3.2 Linear Discriminant Analysis

Let TXY = {(x_1, y_1), ..., (x_l, y_l)}, x_i ∈ R^n, y ∈ Y = {1, ..., c}, be a labeled set of training vectors. The goal of the Linear Discriminant Analysis (LDA) is to train the linear data projection

z = W^T x ,

such that the class separability criterion

F(W) = det(S̃_B) / det(S̃_W)

is maximized, where S̃_B is the between-class and S̃_W is the within-class scatter matrix of the projected data, defined similarly to (3.8). The within-class scatter matrix S_W and the between-class scatter matrix S_B of the input data are defined as

S_W = Σ_{y∈Y} S_y ,   S_y = Σ_{i∈I_y} (x_i − µ_y)(x_i − µ_y)^T ,
S_B = Σ_{y∈Y} |I_y| (µ_y − µ)(µ_y − µ)^T ,   (3.8)

where the total mean vector µ and the class mean vectors µ_y, y ∈ Y, are defined as

µ = (1/l) Σ_{i=1}^{l} x_i ,   µ_y = (1/|I_y|) Σ_{i∈I_y} x_i ,   y ∈ Y .

The LDA is implemented in the function lda.

References: The LDA is treated, for instance, in the book [3].

Example 1: Linear Discriminant Analysis

The LDA is applied to extract 2 features from the Iris data set iris.mat which consists of labeled 4-dimensional data. The data after the feature extraction step are visualized in Figure 3.2.

orig_data = load('iris');               % load input data
model = lda(orig_data,2);               % train LDA
ext_data = linproj(orig_data,model);    % feature extraction
figure; ppatterns(ext_data);            % plot ext. data
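The scatter matrices (3.8) and the projection maximizing F(W) can be obtained from a generalized eigenvalue problem. The sketch below illustrates this for a data matrix X [n × l] with labels y (e.g., X = orig_data.X and y = orig_data.y from Example 1); it is a schematic computation only, and the toolbox function lda may scale or orthonormalize the result differently.

% sketch: LDA directions from the scatter matrices (3.8)
[n,l] = size(X); c = max(y); m = 2;      % m = number of extracted features
mu = mean(X,2);
Sw = zeros(n); Sb = zeros(n);
for k = 1:c
   Xk  = X(:, y == k);
   nk  = size(Xk,2);
   muk = mean(Xk,2);
   Xc  = Xk - repmat(muk,1,nk);
   Sw  = Sw + Xc*Xc';                    % within-class scatter
   Sb  = Sb + nk*(muk - mu)*(muk - mu)'; % between-class scatter
end
[V,D] = eig(Sb,Sw);                      % generalized eigenvalue problem
[d,idx] = sort(diag(D),'descend');
W = V(:,idx(1:m));                       % LDA projection matrix
Z = W'*X;                                % extracted features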

Figure 3.2: Example shows 2-dimensional data extracted from the originally 4-dimensional Iris data set using the Linear Discriminant Analysis.

Example 2: PCA versus LDA

This example shows why the LDA is superior to the PCA when features suitable for classification are to be extracted. The synthetic data are generated from a Gaussian mixture model. The LDA and PCA are trained on the generated data. The LDA and PCA directions are displayed as a connection of the vectors projected onto the trained LDA and PCA subspaces and re-projected back to the input space (see Figure 3.3a). The data extracted by the LDA and PCA models are displayed together with the Gaussians fitted by the Maximum-Likelihood estimation (see Figure 3.3b).

% Generate data
distrib.Mean = [[5;4] [4;5]];                % mean vectors
distrib.Cov(:,:,1) = [1 0.9; 0.9 1];         % 1st covariance
distrib.Cov(:,:,2) = [1 0.9; 0.9 1];         % 2nd covariance
distrib.Prior = [0.5 0.5];                   % Gaussian weights
data = gmmsamp(distrib,250);                 % sample data

lda_model = lda(data,1);                     % train LDA
lda_rec = pcarec(data.X,lda_model);
lda_data = linproj(data.X,lda_model);

pca_model = pca(data.X,1);                   % train PCA
pca_rec = pcarec(data.X,pca_model);
pca_data = linproj(data.X,pca_model);

figure; hold on; axis equal;                 % visualization
ppatterns(data);

h1 = plot(lda_rec(1,:),lda_rec(2,:),'r');
h2 = plot(pca_rec(1,:),pca_rec(2,:),'b');
legend([h1 h2],'LDA direction','PCA direction');

figure;
subplot(2,1,1); ppatterns(lda_data); hold on;
pgauss(mlcgmm(lda_data)); title('LDA');
subplot(2,1,2); ppatterns(pca_data); hold on;
pgauss(mlcgmm(pca_data)); title('PCA');

3.3 Kernel Principal Component Analysis

The Kernel Principal Component Analysis (Kernel PCA) is the non-linear extension of the ordinary linear PCA. The input training vectors TX = {x_1, ..., x_l}, x_i ∈ X ⊆ R^n, are mapped by φ: X → F to a high dimensional feature space F. The linear PCA is applied on the mapped data TΦ = {φ(x_1), ..., φ(x_l)}. The computation of the principal components and of the projection on these components can be expressed in terms of dot products, thus the kernel functions k: X × X → R can be employed. The kernel PCA trains the kernel data projection

z = A^T k(x) + b ,   (3.9)

such that the reconstruction error

ε_KMS(A, b) = (1/l) Σ_{i=1}^{l} ||φ̃(x_i) − φ(x_i)||² ,   (3.10)

is minimized. The reconstructed vector φ̃(x) is given as a linear combination of the mapped data TΦ,

φ̃(x) = Σ_{i=1}^{l} β_i φ(x_i) ,   β = A(z − b) .   (3.11)

In contrast to the linear PCA, the explicit projection from the feature space F to the input space X usually does not exist. The problem is to find a vector x ∈ X whose image φ(x) ∈ F well approximates the reconstructed vector φ̃(x) ∈ F. This pre-image procedure is implemented in the function kpcarec for the case of the Radial Basis Function (RBF) kernel. The procedure consists of the following steps:

I. Project the input vector x_in ∈ R^n onto its lower dimensional representation z ∈ R^m using (3.9).

II. Compute the vector x_out ∈ R^n which is a good pre-image of the reconstructed vector φ̃(x_in), such that

x_out = argmin_x ||φ(x) − φ̃(x_in)||² .

The kernel PCA is implemented in the function kpca.

References: The kernel PCA is described at length in the books [29, 30].

Example: Kernel Principal Component Analysis

The example shows the use of the kernel PCA for data denoising. The input data are synthetically generated 2-dimensional vectors which lie on a circle and are corrupted by the Gaussian noise. The kernel PCA model with the RBF kernel is trained. The function kpcarec is used to find pre-images of the reconstructed data (3.11). The result is visualized in Figure 3.4.

X = gencircledata([1;1],5,250,1);      % generate circle data
options.ker = 'rbf';                   % use RBF kernel
options.arg = 4;                       % kernel argument
options.new_dim = 2;                   % output dimension
model = kpca(X,options);               % compute kernel PCA
XR = kpcarec(X,model);                 % compute reconstructed data

figure;                                % visualization
h1 = ppatterns(X);
h2 = ppatterns(XR,'+r');
legend([h1 h2],'Input vectors','Reconstructed');

3.4 Greedy kernel PCA algorithm

The Greedy Kernel Principal Component Analysis (Greedy kernel PCA) is an efficient algorithm to compute the ordinary kernel PCA. Let TX = {x_1, ..., x_l}, x_i ∈ R^n, be the set of input training vectors. The goal is to train the kernel data projection

z = A^T k_S(x) + b ,   (3.12)

where A [d × m] is the parameter matrix, b [m × 1] is the parameter vector and k_S(x) = [k(x, s_1), ..., k(x, s_d)]^T are the kernel functions centered in the vectors TS = {s_1, ..., s_d}. The vector set TS is a subset of the training data TX. In contrast to the ordinary kernel PCA, the subset TS does not contain all the training vectors TX, thus the complexity of the projection (3.12) is reduced compared to (3.9). The objective of the Greedy kernel PCA is to minimize the reconstruction error (3.10) while the size d of the subset TS is kept small. The Greedy kernel PCA algorithm is implemented in the function greedykpca.

References: The implemented Greedy kernel PCA algorithm is described in [11].

Example: Greedy Kernel Principal Component Analysis

The example shows the use of the kernel PCA for data denoising. The input data are synthetically generated 2-dimensional vectors which lie on a circle and are corrupted with the Gaussian noise. The Greedy kernel PCA model with the RBF kernel is trained. The number of vectors defining the kernel projection (3.12) is limited to 25. In contrast, the ordinary kernel PCA (see the experiment in Section 3.3) requires all the 250 training vectors. The function kpcarec is used to find pre-images of the reconstructed data obtained by the kernel PCA projection. The training and reconstructed vectors are visualized in Figure 3.5. The vectors of the selected subset TS are visualized as well.

X = gencircledata([1;1],5,250,1);      % generate training data
options.ker = 'rbf';                   % use RBF kernel
options.arg = 4;                       % kernel argument
options.new_dim = 2;                   % output dimension
options.m = 25;                        % size of sel. subset

model = greedykpca(X,options);         % run greedy algorithm
XR = kpcarec(X,model);                 % reconstructed vectors

figure;                                % visualization
h1 = ppatterns(X);
h2 = ppatterns(XR,'+r');
h3 = ppatterns(model.sv.X,'ob');
legend([h1 h2 h3],'Training set','Reconstructed set','Selected subset');
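The core computation behind (3.9) can be illustrated directly: build the kernel matrix on the training data (e.g., the circle data X from the examples above), center it in the feature space and take its leading eigenvectors. The sketch below assumes an RBF kernel of the form exp(−||x − x'||²/(2σ²)); the toolbox may parameterize its kernels differently, and kpca additionally handles the bias b and further normalization.

% X [n x l] training data, m = output dimension
[n,l] = size(X); sigma = 4; m = 2;
D = sum(X.^2,1);
K = exp(-(repmat(D',1,l) + repmat(D,l,1) - 2*(X'*X))/(2*sigma^2));  % kernel matrix
J = ones(l)/l;
Kc = K - J*K - K*J + J*K*J;              % kernel matrix centered in the feature space
Kc = (Kc + Kc')/2;                       % enforce symmetry numerically
[V,E] = eig(Kc);
[e,idx] = sort(diag(E),'descend');
A = V(:,idx(1:m)) ./ repmat(sqrt(e(1:m))',l,1);  % scale so feature-space eigenvectors have unit norm
Z = A'*Kc;                               % projections of the training data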

3.5 Generalized Discriminant Analysis

The Generalized Discriminant Analysis (GDA) is the non-linear extension of the ordinary Linear Discriminant Analysis (LDA). The input training data TXY = {(x_1, y_1), ..., (x_l, y_l)}, x_i ∈ X ⊆ R^n, y ∈ Y = {1, ..., c}, are mapped by φ: X → F to a high dimensional feature space F. The ordinary LDA is applied on the mapped data TΦY = {(φ(x_1), y_1), ..., (φ(x_l), y_l)}. The computation of the projection vectors and of the projection on these vectors can be expressed in terms of dot products, thus the kernel functions k: X × X → R can be employed. The resulting kernel data projection is defined as

z = A^T k(x) + b ,

where A [l × m] is the matrix and b [m × 1] the vector trained by the GDA. The parameters (A, b) are trained to increase the between-class scatter and decrease the within-class scatter of the extracted data TZY = {(z_1, y_1), ..., (z_l, y_l)}, z_i ∈ R^m. The GDA is implemented in the function gda.

References: The GDA was published in the paper [2].

Example: Generalized Discriminant Analysis

The GDA is trained on the Iris data set iris.mat which contains 3 classes of 4-dimensional data. The RBF kernel with the kernel width σ = 1 is used. Notice that setting σ too low leads to the perfect separability of the training data TXY, but the kernel projection is heavily over-fitted. The extracted data TZY are visualized in Figure 3.6.

orig_data = load('iris');                % load data
options.ker = 'rbf';                     % use RBF kernel
options.arg = 1;                         % kernel arg.
options.new_dim = 2;                     % output dimension
model = gda(orig_data,options);          % train GDA
ext_data = kernelproj(orig_data,model);  % feature ext.
figure; ppatterns(ext_data);             % visualization

3.6 Combining feature extraction and linear classifier

Let TXY = {(x_1, y_1), ..., (x_l, y_l)} be a training set of observations x_i ∈ X ⊆ R^n and corresponding hidden states y_i ∈ Y = {1, ..., c}. Let φ: X → R^m be a mapping

φ(x) = [φ_1(x), ..., φ_m(x)]^T

defining a non-linear feature extraction step (e.g., the quadratic data mapping or the kernel projection). The feature extraction step applied to the input data TXY yields the extracted data TΦY = {(φ(x_1), y_1), ..., (φ(x_l), y_l)}. Let f(x) = w·φ(x) + b be a linear discriminant function found by an arbitrary training method on the extracted data TΦY. The function f can be seen as a non-linear function of the input vectors x ∈ X,

f(x) = w·φ(x) + b = Σ_{i=1}^{m} w_i φ_i(x) + b .

In the case of the quadratic data mapping (3.2) used as the feature extraction step, the discriminant function can be written in the following form

f(x) = x·Ax + x·b + c .   (3.13)

The discriminant function (3.13) defines the quadratic classifier described in Chapter 6. Therefore the quadratic classifier can be trained by (i) applying the quadratic data mapping qmap, (ii) training a linear rule on the mapped data (e.g., by perceptron or fld) and (iii) combining the quadratic mapping and the linear rule. The combining is implemented in the function lin2quad.

In the case of the kernel projection (3.3) applied as the feature extraction step, the resulting discriminant function has the following form

f(x) = α·k(x) + b .   (3.14)

The discriminant function (3.14) defines the kernel (SVM type) classifier described in Section 5.1. This classifier can be trained by (i) applying the kernel projection kernelproj, (ii) training a linear rule on the extracted data and (iii) combining the kernel projection and the linear rule. The combining is implemented in the function lin2svm.

Example: Quadratic classifier trained by the Perceptron

This example shows how to train the quadratic classifier using the Perceptron algorithm. The Perceptron algorithm produces a linear rule only. The input training data are linearly non-separable but become separable after the quadratic data mapping is applied. The function lin2quad is used to compute the parameters of the quadratic classifier explicitly based on the linear rule found in the feature space and the mapping. Figure 3.7 shows the decision boundary of the quadratic classifier.

% load training data living in the input space
input_data = load('vltava');

% map data to the feature space
map_data = qmap(input_data);

% train linear rule in the feature space
lin_model = perceptron(map_data);

% compute parameters of the quadratic classifier
quad_model = lin2quad(lin_model);

% visualize the quadratic classifier
figure;
ppatterns(input_data);
pboundary(quad_model);
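Conceptually, step (iii) of the kernel variant is a simple re-parameterization: if the kernel projection is z = A^T k(x) + b_proj and the linear rule in the projected space is f(z) = w·z + b_lin, then f(x) = (Aw)·k(x) + (w·b_proj + b_lin), which is exactly the form (3.14). A minimal sketch with illustrative variable names (A, b_proj, w and b_lin are not toolbox field names):

% combine a kernel projection (A, b_proj) with a linear rule (w, b_lin)
alpha = A * w;                  % weights of the kernel expansion in (3.14)
b     = w' * b_proj + b_lin;    % bias of the combined classifier
% the combined discriminant is then f(x) = alpha'*k(x) + b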

Example: Kernel classifier from the linear rule

This example shows how to train the kernel classifier using an arbitrary training algorithm for the linear rule. The Fisher Linear Discriminant was used in this example. The greedy kernel PCA algorithm was applied as the feature extraction step. The Riply's data set was used for training and evaluation. Figure 3.8 shows the found kernel classifier, the training data and the vectors used in the kernel expansion of the discriminant function.

% load input data
trn = load('riply_trn');

% train kernel PCA by greedy algorithm
options = struct('ker','rbf','arg',1,'new_dim',10);
kpca_model = greedykpca(trn.X,options);

% project data
map_trn = kernelproj(trn,kpca_model);

% train linear classifier
lin_model = fld(map_trn);

% combine linear rule and kernel PCA
kfd_model = lin2svm(kpca_model,lin_model);

% visualization
figure;
ppatterns(trn);
pboundary(kfd_model);
ppatterns(kfd_model.sv.X,'ok');

% evaluation on testing data
tst = load('riply_tst');
ypred = svmclass(tst.X,kfd_model);
cerror(ypred,tst.y)

ans =
    0.0970

Figure 3.3: Comparison between PCA and LDA. Figure (a) shows the input data and the found LDA (red) and PCA (blue) directions onto which the data are projected. Figure (b) shows the class conditional Gaussians estimated from the data projected onto the LDA and PCA directions.

Figure 3.4: Example on the Kernel Principal Component Analysis.

Figure 3.5: Example on the Greedy Kernel Principal Component Analysis.

Figure 3.6: Example shows 2-dimensional data extracted from the originally 4-dimensional Iris data set using the Generalized Discriminant Analysis. In contrast to the LDA (see Figure 3.2 for comparison), the class clusters are more compact but at the expense of a higher risk of over-fitting.

Figure 3.7: Example shows the quadratic classifier found by the Perceptron algorithm on the data mapped to the feature space by the quadratic mapping.

Figure 3.8: Example shows the kernel classifier found by the Fisher Linear Discriminant algorithm on the data mapped to the feature space by the kernel PCA. The greedy kernel PCA was used, therefore only a subset of the training data was needed to determine the kernel projection.
The greedy kernel PCA was used therefore only a subset of training data was used to determine the kernel projection.2 0 −0. 40 .5 −1 −0.2 1 0.6 0.5 1 Figure 3.8: Example shows the kernel classifier found by the Fisher Linear Discriminant algorithm on the data mapped to the feature by the kernel PCA.8 0.1.4 0.2 −1.5 0 0.

3).fun (see Table 4.model. The vector y [1 × l] contains values of the distribution function. .1 Gaussian distribution and Gaussian mixture model The value of the multivariate Gaussian probability distribution function in the vector x ∈ Rn is given by PX (x) = 1 (2π) n 2 1 exp(− (x − µ)T Σ−1 (x − µ)) . the shape of the matrix can be indicated in the optimal field . xl } of a random vector.1 4. The summary of implemented methods for dealing with probability density estimation and clustering is given in Table 4.cov type (see Table 4. Let the structure model represent a given distribution function. Σi) is assigned to the class yi ∈ Y. The data-type is described in Table 4. The matrix X [n × l] contains column vectors {x1 . The field . The covariance matrix Σ is always represented as the square matrix [n × n]. xl }. .Chapter 4 Density estimation and clustering The STPRtool represents a probability distribution function PX (x). . . x ∈ Rn as a structure array which must contain field .e. however. . In particular.2). The STPRtool uses specific data-type to describe a set of Gaussians which can be labeled.. i. . each pair (µi . The Matlab command y = feval(X. The evaluation of the Gaussian distribution is implemented in the function 41 .fun) is used to evaluate yi = PX (xi ).1) where µ [n × 1] is the mean vector and Σ [n × n] is the covariance matrix. . .1.4. the Gaussian distribution and the Gaussian mixture models are described in Section 4.fun specifies function which is used to evaluate PX (x) for given set of realization {x1 . 2 det(Σ) (4.

demo emgmm Demo on Expectation-Maximization (EM) algorithm.Table 4. The distribution function of the univariate Gaussian is visualized and compared to the histogram 42 .fun ). rsde Reduced Set Density Estimator.5. melgmm Maximizes Expectation of Log-Likelihood for Gaussian mixture. kmeans K-means clustering algorithm. Example: Sampling from the Gaussian distribution The example shows how to represent the Gaussian distribution and how to sample data it. The random sampling from the GMM is implemented in the function gmmsamp. pdfgauss Evaluates Gaussian mixture model. . .1: Implemented methods: Density estimation and clustering gmmsamp Generates sample from Gaussian mixture model. model. mmgauss Minimax estimation of Gaussian distribution. pdfgauss Evaluates for multivariate Gaussian distribution. Probability Distribution Function (structure array): .1) given by parameters (µy . emgmm Expectation-Maximization Algorithm for Gaussian mixture model. y ∈ Y = {1. The evaluation of the GMM is implemented in the function pdfgmm. The data-type used to represent the GMM is described in Table 4. Σy ). mlsigmoid Fitting a sigmoid function using ML estimation. c} are weights and PX|Y (x|y) is the Gaussian distribution (4. mlcgmm Maximal Likelihood estimation of Gaussian mixture model. sigmoid Evaluates sigmoid function. demo svmpout Fitting a posteriori probability to SVM output. The Gaussian Mixture Model (GMM) is weighted sum of the Gaussian distributions PX (x) = y∈Y PY (y)PX|Y (x|y) . pdfgauss. gsamp Generates sample from Gaussian distribution.2) where PY (y). (4.2: Data-type used to represent probability distribution function. Table 4. demo mmgauss Demo on minimax estimation for Gaussian. .fun [string] The name of a function which evaluates probability distribution y = feval( X. The random sampling from the Gaussian distribution is implemented in the function gsamp. The sampling from univariate and bivariate Gaussians is shown. .

. θ)PY |Θ (y|θ) . Σk }.1:5]. .Y/500). Labeled set of Gaussians (structure array): .fun = ’pdfgauss’ Identifies function associated with this data type.250)). ppatterns(gsamp(model. . Parameter cov type: ’full’ Full covariance matrix.6 1]. .6.1a).Cov = [1 0. of the sample data (see Figure 4. . 0. hold on.Mean = [1. [Y.2 Maximum-Likelihood estimation of GMM PXY |Θ (x. yk } assigned to Gaussians.1:5]. % univariate case model = struct(’Mean’.3: Shape of covariance matrix.500).Cov [n × n × k] Covariance matrices {Σ1 . . Table 4. µk }.Mean [n × k] Mean vectors {µ1 .y [1 × k] Labels {y1 . of the Gaussisan plot([-4:0. . . 43 Let the probability distribution . ’spherical’ Spherical covariance matrix. % plot pdf.cov type [string] Optional field which indicates shape of the covariance matrix (see Table 4. ’diag’ Diagonal covariance matrix.model).4: Data-type used to represent the labeled set of Gaussians.’Cov’. . The isoline of the distribution function of the bivariate Gaussian is visualized together with the sample data (see Figure 4. hold on. % compute histogram bar(X.’r’). model.pdfgauss([-4:0.10). % plot histogram % bi-variate case model.Table 4. y|θ) = PX|Y Θ (x|y. . . . . . % Gaussian parameters figure.1]. % Mean vector % Covariance matrix % plot sampled data % plot shape 4. pgauss(model).2). figure.1b).X] = hist(gsamp(model.3).1. . .

The Maximum-Likelihood estimation of the parameters θ of the GMM for the case of complete and incomplete data is described in Section 4. .1 Complete data The input is a set TXY = {(x1 . . . y|θ). .1 and Section 4.i. The logarithm of probability P (TXY |θ) is referred to as the log-likelihood L(θ|TXY ) of θ with respect to TXY . . Gaussian Mixture Model (structure array): .1: Sampling from the univariate (a) and bivariate (b) Gaussian distribution. . .1 −1 0. The vector of observations x ∈ Rn and the hidden state y ∈ Y = {1. y ∈ Y of Gaussian components and the values of the discrete distribution PY |Θ (y|θ). Σc }. . .35 5 4 0.d.Table 4. (xl .2. 4.) according to PXY |Θ (x.05 −2 0 −4 −3 −2 −1 0 1 2 3 4 5 6 −3 −2 −1 0 1 2 3 4 5 (a) (b) Figure 4.fun = ’pdfgmm’ Identifies function associated with this data type. . . . .2. .15 gauss 1 0.5: Data-type used to represent the Gaussian Mixture Model. be known up to parameters θ ∈ Θ.2). 0. . . y ∈ Y.3 3 0. yi) are assumed to samples from random variables which are independent and identically distributed (i. The marginal probability PX|Θ (x|θ) is the Gaussian mixture model (4. y|θ).Prior [c × 1] Weights of Gaussians {PK (1). The conditional probability distribution PX|Y Θ (x|y. . Σy ).2. y1). The pairs (xi .Mean [n × c] Mean vectors {µ1 .2 1 0. . PK (c)}.Cov [n × n × c] Covariance matrices {Σ1 . yl )} which contains both vectors of observations xi ∈ Rn and corresponding hidden states yi ∈ Y. . The θ involves parameters (µy . . . θ) is assumed to be Gaussian distribution. . c} are assumed to be realizations of random variables which are independent and identically distributed according to the PXY |Θ (x. .25 2 0.127 0 0. µc }.2. Thanks to the 44 . respectively.

assumption the log-likelihood can be factorized as l L(θ|TXY ) = log P (TXY |θ) = i=1 log PXY |Θ (xi .i. The computation of the ML estimate is implemented in the function mlcgmm. ppatterns(data).i. yi|θ) . hold on. data = load(’riply_trn’). . % load labeled (complete) data model = mlcgmm(data). (4. % ML estimate of GMM % visualization figure. 4.2a) and (ii) contours of the distribution function (see Figure 4. figure. .d. assumption the log-likelihood can be factorized as l L(θ|TX ) = log P (TX |θ) = i=1 y∈Y PX|Y Θ (xi |yi. Example: Maximum-Likelihood estimate from complete data The parameters of the Gaussian mixture model is estimated from the complete Riply’s data set riply trn.2. pgmm(model. yi|θ) . θ)PY |Θ (yi |θ) . Thanks to the i. 45 . The problem of Maximum-Likelihood (ML) estimation from the complete data TXY is defined as l θ = argmax L(θ|TXY ) = argmax θ∈Θ θ∈Θ i=1 ∗ log PXY |Θ (xi .struct(’visual’. xl } which contains only vectors of observations xi ∈ Rn without knowledge about the corresponding hidden states yi ∈ Y.3) The ML estimate of the parameters θ of the GMM from the complete data set TXY has an analytical solution.’contour’)).d. hold on. The data are binary labeled thus the GMM has two Gaussians components. The estimated GMM is visualized as: (i) shape of components of the GMM (see Figure 4. ppatterns(data).2b) . .2 Incomplete data The input is a set TX = {x1 . . pgauss(model). The logarithm of probability P (TX |θ) is referred to as the log-likelihood L(θ|TX ) of θ with respect to TX .i.

such that the log-likelihood L(θ (t) |TX ) monotonically increases. . The ground truth model is sampled to obtain a training (incomplete) data. . The plot of the log-likelihood function L(θ (t) |TX ) with respect to the number of iterations t is visualized in Figure 4.Cov = [1 0. The first papers describing the EM are [25. The EM algorithm is a local optimization technique thus the estimated parameters depend on the initial guess θ (0) . An interactive demo on the EM algorithm for the GMM is implemented in demo emgmm. true_model. % ground truth Gaussian mixture model true_model. The EM algorithm is applied to estimate the GMM parameters. The EM algorithm builds a sequence of parameter estimates θ (0) .4) The ML estimate of the parameters θ of the GMM from the incomplete data set TXY does not have an analytical solution. . . The random guess or K-means algorithm (see Section 4.. 5].Prior = [0.3b. (4. The EM algorithm for the GMM is implemented in the function emgmm. true_model. sampled data and the estimate model are visualized in Figure 4. L(θ (0) |TX ) < L(θ (1) |TX ) < . θ)PY |Θ (yi |θ) . 250).Mean = [-2 2]. θ (t) .5) are commonly used to select the initial θ (0) .5]. The ground truth model is the GMM with two univariate Gaussian components. .6]. A nice description can be found in book [3].The problem of Maximum-Likelihood (ML) estimation from the incomplete data TX is defined as l θ = argmax L(θ|TX ) = argmax θ∈Θ θ∈Θ i=1 y∈Y ∗ PX|Y Θ (xi |yi. References: The EM algorithm (including convergence proofs) is described in book [26]. .e. θ (1) . The Expectation-Maximization (EM) algorithm is an iterative procedure for ML estimation from the incomplete data. < L(θ (t) |TX ) until a stationary point L(θ (t−1) |TX ) = L(θ (t) |TX ) is achieved. 46 . Example: Maximum-Likelihood estimate from incomplete data The estimation of the parameters of the GMM is demonstrated on a synthetic data. The monograph [18] is entirely devoted to the EM algorithm.3a. sample = gmmsamp(true_model. The random initialization is used by default. The numerical optimization has to be used instead. i. The ground truth model.4 0.

The algorithm is implemented in the function mmgauss. .logL ).ncomp = 2. % display progress info estimated_model = emgmm(sample. θ (1) . . ppatterns(sample.’Ground truth’.6) holds for ε > 0 which defines closeness to the optimal solution θ ∗ .6) can be evaluated efficiently thus it is a natural choice for the stopping condition of the algorithm. . % visualization figure. % number of Gaussian components options.5) If a procedure for the Maximum-Likelihood estimate (global solution) of the distribution PX|Θ (x|θ) exists then a numerical iterative algorithm which converges to the optimal estimate θ ∗ can be constructed. x∈TX (4. 47 . plot( estimated_model. The minimax estimation is defined as θ ∗ = argmax min log PX|Θ (x|θ) .’b’)). ’ML estimation’). The PX|Θ (x|θ) is known up to the parameter θ ∈ Θ. The STPRtool contains an implementation of the minimax algorithm for the Gaussian distribution (4. An interactive demo on the minimax estimation for the Gaussian distribution is implemented in the function demo mmgauss. .X).1).options).verb = 1. figure. h2 = pgmm(estimated_model. ylabel(’log-likelihood’). xl } contains vectors xi ∈ Rn which are realizations of random variable distributed according to PX|Θ (x|θ). .struct(’color’. .3 Minimax estimation The input is a set TX = {x1 . The inequality (4. The vectors TX are assumed to be a realizations with high value of the distribution function PX|Θ (x|θ). . xlabel(’iterations’).struct(’color’. θ (t) which converges to the optimal estimate θ ∗ . The algorithm converges in a finite number of iterations to such an estimate θ that the condition x∈TX min log PX|Θ (x|θ ∗ ) − min log PX|Θ (x|θ) < ε .% ML estimation by EM options. h1 = pgmm(true_model. hold on. θ∈Θ x∈TX (4. The algorithm builds a series of estimates θ (0) .X. . legend([h1(1) h2(1)].’r’)). References: The minimax algorithm for probability estimation is described and analyzed in Chapter 8 of the book [26]. 4.

(xl .0] [1. the minimax problem is equivalent to searching for the minimal enclosing ellipsoid.7) 48 .4 Probabilistic output of classifier In general. Let f : X ⊆ Rn → R be a discriminant function trained from the data TXY by an arbitrary learning method. Perceptron). θ ) .d.13). in combining more classifiers together. y1 ). θ) = 1 . The distribution is modeled by a sigmoid function PY |F Θ (1|f (x). . θ) .4).’xr’.1]]. The minimax algorithm is applied to estimate the parameters of the Gaussian.f. The training set TXY is assumed to be identically and independently distributed from underlying distribution. a2 ]T can be estimated by the maximum-likelihood method l θ = argmax L(θ |TXY ) = argmax θ θ i=1 log PY |F Θ (yi |f (xi ). In this particular case. pgauss(model. . The log-likelihood function is defined as l L(θ|TXY ) = i=1 log PY |F Θ (yi|f (xi ). 2}. the probabilistic output of the classifier can help in post-processing. % points with high p. (4. SVM..g. Fitting a sigmoid function to the classifier output is a way how to solve this problem. X = [[0. The aim is to estimate parameters of a posteriori distribution PY |F Θ (y|f (x). model = mmgauss(X). a2 ]T . struct(’p’. 1 + exp(a1 f (x) + a2 ) which is determined by parameters θ = [a1 . . θ) of the hidden state y given the value of the discriminant function f (x). for instance. the discriminant function of an arbitrary classifier does not have meaning of probability (e. . ppatterns(X.exp(model. The parameters θ = [a1 .lower_bound))). Let TXY = {(x1 . The found solution is visualized as the isoline of the distribution function at the point PX|Θ (x|θ) (see Figure 4. yl )} is the training set composed of the vectors xi ∈ X ⊆ Rn and the corresponding binary hidden states yi ∈ Y = {1. However. % run minimax algorithm % visualization figure.0] [0. 4.Example: Minimax estimation of the Gaussian distribution The bivariate Gaussian distribution is described by three samples which are known to have high value of the distribution function.

fun = ’sigmoid’ Identifies function associated with this data type. a2 ]T which are stored in the data-type describing the sigmoid (see Table 4. The function produces the parameters θ = [a1 . The GMM allows to compute the a posteriori probability for comparison to the sigmoid estimate. hold on.5a shows found probabilistic models of a posteriori probability of the SVM output.’arg’. svm_output. The Riply’s data set riply trn is used for training the SVM and the ML estimation of the sigmoid.options). % visulization figure. % train SVM options = struct(’ker’.’C’. svm_model = smo(data.X] = svmclass(data.10). The example shows how to train the probabilistic output for the SVM classifier. References: The method of fitting the sigmoid function is described in [22]. 49 . % ML estimation of GMM model of SVM output gmm_model = mlcgmm(svm_output).y. Example: Probabilistic output for SVM This example is implemented in the script demo svmpout.7).svm_output.svm_model). The evaluation of the sigmoid is implemented in function sigmoid.5b shows the decision boundary of SVM classifier for illustration. % load training data data = load(’riply_trn’). .X.The STPRtool provides an implementation mlsigmoid which uses the Matlab Optimization toolbox function fminunc to solve the task (4.6: Data-type used to describe sigmoid function.y = data. % fit sigmoid function to svm output sigmoid_model = mlsigmoid(svm_output). The ML estimated of Gaussian mixture model (GMM) of the SVM output is used for comparison. Sigmoid function (structure array): . Figure 4.’rbf’.1. % compute SVM output [dummy. The example is implemented in the script demo svmpout.6).A [2 × 1] Parameters θ = [a1 . a2 ]T of the sigmoid function. Table 4. Figure 4.

.xlabel(’svm output f(x)’)..sigmoid_model). sigmoid_apost = sigmoid(fx.:)*gmm_model. hcomp = pgauss(gmm_model).. l i=1 y=1. 200).. ylabel(’p(y=1|f(x))’). (pcond(1. i. The K-means algorithm converges in a finite number of steps to the local optimum of the objective function F (θ).:)*gmm_model.’g’).’P(y=1|f(x)) ML-Sigmoid’. The found solution depends on the initial guess θ (0) which 1 The K in the name K-means stands for the number of centers which is here denoted by c. gmm_apost = (pcond(1.gmm_apost.. legend([hsigmoid.Prior(1)). % SVM decision boundary figure. 4. % GMM model pcond = pdfgauss( fx.. Let θ = {µ1 . . gmm_model).’P(f(x)|y=1) ML-GMM’.X).e. ’P(y=1|f(x)) ML-GMM’..’k’). The task is to find such centers θ which describe the set TX best.c The small value of F (θ) indicates that the centers θ well describe the input set TX . . xl } which contains vectors xi ∈ Rn . . psvm(svm_model). µi ∈ Rn be a set of unknown cluster centers. .:)*gmm_model. hsigmoid = plot(fx. the task is to solve l 1 ∗ min µy − xi 2 . .5 K-means clustering The input is a set TX = {x1 . hgmm = plot(fx. . . % sigmoid model fx = linspace(min(svm_output. θ (t) .Prior(2)))./. .Prior(1)+(pcond(2. ppatterns(data).sigmoid_apost.’P(f(x)|y=2) ML-GMM’).. 50 . µc }.. Let the objective function F (θ) be defined as l 1 F (θ) = min µy − xi 2 . .X). .hgmm... . θ (1) . max(svm_output. θ = argmin F (θ) = argmin y=1... ppatterns(svm_output).hcomp]..c l i=1 θ θ The K-means1 is an iterative procedure which iteratively builds a series θ (0) . .

ylabel(’F’).’sk’. 4 ). 3]. hold on. plot(model. xlabel(’t’). The plot of the objective function F (θ (t) ) with respect to the number of iterations is shown in Figure 4.6b. The K-means algorithm is implemented in the function kmeans. figure. data = load(’riply_trn’). The solution is visualized as the cluster centers.MsErr).X. % run k-means % visualization figure. pboundary( model ).is usually selected randomly. 6.X. ppatterns(data).data. % load data [model. ppatterns(model.14).y] = kmeans(data. The vectors classified to the corresponding centers are distinguished by color (see Figure 4. References: The K-means algorithm is described in the books [26. Example: K-means clustering The K-means algorithm is used to find 4 clusters in the Riply’s training set. 51 .6a).

2 −1.2: Example of the Maximum-Likelihood of the Gaussian Mixture Model from the incomplete data.2 1 0.5 0 0. Figure (a) shows components of the GMM and Figure (b) shows contour plot of the distribution function.8 0. 52 .2 1 0.2 −1.45 0.5 0 0.5 −1 −0.2 0 −0.981 0.5 1 (a) 1.4 gauss 1 0.1.6 1.5 −1 −0.4 0.6 0.5 1 (b) Figure 4.8 gauss 2 0.2 0 −0.

15 0. 53 . Figure (a) shows ground truth and the estimated GMM and Figure(b) shows the log-likelihood with respect to the number of iterations of the EM.25 0.2 0.1 0.35 0.3 0.3: Example of using the EM algorithm for ML estimation of the Gaussian mixture model.0.05 0 −5 −4 −3 −2 −1 0 1 2 3 4 5 (a) −442 −444 −446 −448 log−likelihood −450 −452 −454 −456 −458 −460 0 5 10 iterations 15 20 25 (b) Figure 4.4 Ground truth ML estimation 0.

8 1 Figure 4.1 0.2 0.8 0.2 −0.6 0.4 gauss 1 0.304 0.4 0.4 −0.6 0.4: Example of the Minimax estimation of the Gaussian distribution.2 0 0.2 0 −0. 54 .4 −0.

5 0 0.2 1 0.4 0.8 0.2 0.1 P(y=1|f(x)) ML−Sigmoid P(y=1|f(x)) ML−GMM P(f(x)|y=1) ML−GMM P(f(x)|y=2) ML−GMM 0.2 0 −0.9 0.1 0 −4 −3 −2 −1 0 1 svm output f(x) 2 3 4 5 (a) 1.5: Example of fitting a posteriori probability to the SVM output. Figure (a) shows the sigmoid model and GMM model both fitted by ML estimated.2 −1.6 0.5 0.8 0. Figure (b) shows the corresponding SVM classifier.6 p(y=1|f(x)) 0.4 0.3 0. 55 .7 0.5 −1 −0.5 1 (b) Figure 4.

1 F 0.1.16 0.2 −1. Figure (a) shows found clusters and Figure (b) contains the plot of the objective function F (θ (t) ) with respect to the number of iterations.5 −1 −0.04 0 2 4 6 t 8 10 12 (b) Figure 4.14 0.4 0.6: Example of using the K-means algorithm to cluster the Riply’s data set.5 1 (a) 0.06 0.8 0.12 0.2 1 0.6 0. 56 .2 0 −0.5 0 0.08 0.

y∈Y The data-type used by the STPRtool to represent the multi-class SVM classifier is described in Table 5. bc ]T be a vector of all biases. 2. The binary classification rule q: X → Y = {1. . αc ] be composed of all weight vectors and b = [b1 . . . . . The multi-class classification rule q: X → Y = {1. (5. . c} is defined as (5. . 2 for f (x) < 0 . 57 . c} defined as fy (x) = αy · kS (x) + by . . sd )]T is the vector of evaluations of kernel functions centered at the support vectors S = {s1 . . .3. (5. .1 Support vector classifiers The binary support vector classifier uses the discriminant function f : X ⊆ Rn → R of the following form f (x) = α · kS (x) + b . . . The multi-class generalization involves a set of discriminant functions fy : X ⊆ Rn → R. The α ∈ Rl is a weight vector and b ∈ R is a bias. .2) The data-type used by the STPRtool to represent the binary SVM classifier is described in Table 5.3) q(x) = argmax fy (x) . . . Both the binary and the multi-class SVM classifiers are implemented in the function svmclass. 2. . y∈Y. . . y ∈ Y = {1. sd }. Let the matrix A = [α1 . 2} is defined as q(x) = 1 for f (x) ≥ 0 . . s1 ). .1) The kS (x) = [k(x. k(x.Chapter 5 Support vector and other kernel machines 5. .2. si ∈ Rn which are usually subset of the training data. . .

yj }. y1 ).. ξi ≥ 0 . Let qj : X ⊆ Rn → {yj .4. The implemented methods to deal with the SVM and related kernel classifiers are enlisted in Table 5. b ) = argmin w + C ξip . 2}.The majority voting strategy is other commonly used method to implement the 1 2 multi-class SVM classifier. . The linear SVM aims to train the linear discriminant function f (x) = w · x + b of the binary classifier q(x) = 1 for f (x) ≥ 0 .4) y =1. . The vector v(x) = [v1 (x). . .g The data-type used to represent the majority voting strategy for the multi-class SVM classifier is described in Table 5. (5. w · xi + b ≥ +1 − ξi . g be a set of g binary 1 2 SVM rules (5.. 4. end. ∀y ∈ Y. . The strategy (5. 30] 5. . The training of the optimal parameters (w ∗ . i ∈ I1 ∪ I2 . .b i=1 i ∈ I1 . 2 for f (x) < 0 . vc (x)]T is computed as: Set vy = 0. (5. j = 1. The j-th rule qj (x) classifies the inputs x into class yj ∈ Y or yj ∈ Y.1 References: Books describing Support Vector Machines are for example [31.2).. yl )} of training vectors xi ∈ Rn and corresponding hidden states yi ∈ Y = {1. For j = 1. w · xi + b ≤ −1 + ξi . .4) is implemented in the function mvsvmclass. Let v(x) be a vector [c × 1] of votes for classes Y when the input x is to be classified. (xl .. The majority votes based multi-class classifier assigns the input x into such class y ∈ Y having the majority of votes y = argmax vy (x) . . . 58 .2 Binary Support Vector Machines The input is a set TXY = {(x1 . . . . . . g do vy (x) = vy (x) + 1 where y = qj (x).5) 2 w. i ∈ I2 . b∗ ) is transformed to the following quadratic programming task l 1 ∗ ∗ 2 (w . .

6) subject to β·γ = 1. rbfpreimg3 RBF pre-image problem by Kwok-Tsang’s algorithm. demo svm Demo on Support Vector Machines. minball Minimal enclosing ball in kernel feature space. smo Sequential Minimal Optimization for binary SVM with L1-soft margin.Table 5. i ∈ I1 ∪ I2 are slack variables used to relax the inequalities for the case of non-separable data. oaasvm Multi-class SVM using One-Against-All decomposition. rsrbf Reduced Set Method for RBF kernel expansion. In the case of the parameter p = 1. The C > 0 stands for the regularization constant. β ≥ 0. 59 .1: Implemented methods: Support vector and other kernel machines. diagker Returns diagonal of kernel matrix of given data. 2 (5. The I1 = {i: yi = 1} and I2 = {i: yi = 2} are sets of indices. the L1-soft margin SVM is trained and for p = 2 it is denoted as the L2-soft margin SVM. rbfpreimg RBF pre-image by Schoelkopf’s fixed-point algorithm. The linear p = 1 and quadratic p = 2 term for the slack variables ξi is used. rbfpreimg2 RBF pre-image problem by Gradient optimization. kperceptr Kernel Perceptron. kdist Computes distance between vectors in kernel space. The training of the non-linear (kernel) SVM with L1-soft margin corresponds to solving the following QP task β ∗ = argmax β · 1 − β 1 β · Hβ . svmquadprog SVM trained by Matlab Optimization Toolbox. The ξi ≥ 0. svmclass Support Vector Machines Classifier. svmlight Interface to SV M light software. bsvm2 Multi-class BSVM with L2-soft margin. oaosvm Multi-class SVM using One-Against-One decomposition. kfd Kernel Fisher Discriminant. evalsvm Trains and evaluates Support Vector Machines classifier. β ≤ 1C . kernel Evaluates kernel function. mvsvmclass Majority voting multi-class SVM classifier.

The weight vectors are compute as αi = βi if yi = 1 .arg [1 × p] Kernel argument(s). Multi-class SVM classifier (structure array): . for i ∈ ISV . .Alpha [d × c] Weight vectors A = [α1 . . (5. . . .1) is determined by the weight vector α.options.7) The bias b can be computed from the Karush-Kuhn-Tucker (KKT) conditions as the following constrains f (xi ) = α · kS (xi ) + b = +1 f (xi ) = α · kS (xi ) + b = −1 for i ∈ {j: yj = 1. sd }. . .options.options. . . . . where the set of indices is ISV = {i: βi > 0}.fun = ’svmclass’ Identifies function associated with this data type. xj ) if yi = yj . 0 < βj < C} .ker [string] Kernel identifier.X [n × d] Support vectors S = {s1 . xj ) if yi = yj . . bc ]T .sv. . αc ].Table 5. . 0 < βj < C} . bias b and the set of support vectors S. −1 if yi = 2 .b [c × 1] Vector of biased b = [b1 . . . sd }. The discriminant function (5. The vector γ [l × 1] and the matrix H [l × l] are constructed as follows γi = 1 if yi = 1 .X [n × d] Support vectors S = {s1 . Table 5. . −k(xi . for i ∈ {j: yj = 2. The bias b is computed as an average over all the constrains such that 1 γi − α · kS (xi ) . .2: Data-type used to describe binary SVM classifier. .Alpha [d × 1] Weight vector α.options. .3: Data-type used to describe multi-class SVM classifier. . b= |Ib | i∈I b 60 . . The 0 [l × 1] is vector of zeros and the 1 [l × 1] is vector of ones.ker [string] Kernel identifier.fun = ’svmclass’ Identifies function associated with this data type. Hi. . −βi if yi = 2 . The set of support vectors is denoted as S = {xi : i ∈ ISV }. where k: X × X → R is selected kernel function. . Binary SVM classifier (structure array): . .j = k(xi . must hold for the optimal solution. .arg [1 × p] Kernel argument(s).b [1 × 1] Bias b.sv.

The vector bin y(1. . . . Majority voting SVM classifier (structure array): . .8) subject to β·γ = 1. . 27. yg ].4: Data-type used to describe the majority voting strategy for multi-class SVM classifier. β ≥ 0.Table 5. where E is the identity matrix. The weight vector α is given by (5. The training of the non-linear (kernel) SVM with L2-soft margin corresponds to solving the following QP task β ∗ = argmax β · 1 − β 1 1 β · (H + E)β . . 61 . . . bg ]T . The bias b can be computed from the KKT conditions as the following constrains f (xi ) = α · kS (xi ) + b = +1 − αi 2C αi f (xi ) = α · kS (xi ) + b = −1 + 2C for i ∈ {i: yj = 1. .X [n × d] Support vectors S = {s1 .ker [string] Kernel identifier.:) contains 2 2 2 [y1 .sv. . An interactive demo on the binary SVM is implemented in demo svm.arg [1 × p] Kernel argument(s). γi (1 − |ISV | i∈I 2C SV The binary SVM solvers implemented in the STPRtool are enlisted in Table 5. .7). . . must hold at the optimal solution.:) 1 1 1 contains [y1 . sd }. yg ] and the bin y(2. . 30]. The SV M light is described for instance in the book [27]. Example: Training binary SVM. . References: The Sequential Minimal Optimizer (SMO) is described in the books [23. .options. The bias b is computed as an average over all the constrains thus 1 αi b= ) − α · kS (xi ) . where Ib = {i: 0 < βi < C} are the indices of the vectors on the boundary.fun = ’mvsvmclass’ Identifies function associated with this data type. . . . βj > 0} .options.b [g × 1] Vector of biased b = [b1 . . . for i ∈ {i: yj = 2. αg ]. .Alpha [d × g] Weight vectors A = [α1 . y2 . . . . The set of support vectors is set to S = {xi : i ∈ ISV } where the set of indices is ISV = {i: βi > 0}.5. y2 . .bin y [2 × g] Targets for the binary rules. 2 2C (5. βj > 0} .

It can handle moderate size problems. options. It useful for small problems only.Function smo svmlight svmquadprog Table 5. The decision boundary is visualized in Figure 5. SVM solver based on the QP solver quadprog of the Matlab optimization toolbox. options.ker = ’rbf’. Matlab interface to SV M light software (Version: 5. It can be downloaded from http://svmlight. The SV M light is very powerful optimizer which can handle large problems. options. % train model = % model = % model = % % % % load training data use RBF kernel kernel argument regularization constant SVM classifier smo(trn.5: Implemented binary SVM solvers. psvm(model).joachims.options). The binary SVM is trained on the Riply’s training set riply trn.model). tst = load(’riply_tst’). cerror(ypred. It trains both L1-soft and L2-soft margin binary SVM. description Implementation of the Sequential Minimal Optimizer (SMO) used to train the binary SVM with L1-soft margin.00) which trains the binary SVM with L1-soft margin.tst. svmquadprog) can by used as well. % visualization figure. trn = load(’riply_trn’).org/. The SMO solver is used in the example but other solvers (svmlight.X. ypred = svmclass(tst.options). ppatterns(trn).1. The function quadprog is a part of the Matlab Optimization toolbox. svmquadprog(trn. The trained SVM classifier is evaluated on the testing data riply tst. The executable file svm learn for the Linux OS must be installed.options). svmlight(trn.C = 10.y) % load testing data % classify data % compute error 62 .arg = 1.

The discriminant functions fy (x) = αy · kS (x) + by . .8 0. yl )} contain the modified hidden states defined as 1 for y = yi . . y1 ).2 1 0. .6 0. . Let the training set TXY = {(x1 .3). yl )} of training vectors xi ∈ X ⊆ Rn and corresponding hidden states yi ∈ Y = {1. 5. y1 ). . (xl . . . The problem can be solved by the One-Against-All (OAA) decomposition.2 0 −0. y∈Y.3 One-Against-All decomposition for SVM The input is a set TXY = {(x1 . The goal is to train the multi-class rule q: X ⊆ Rn → Y defined by (5. .5 −1 −0. y are trained by the binary SVM solver from the set TXY . yi = 2 for y = yi . (xl . . . . The OAA decomposition is implemented in the function oaasvm.4 0. The OAA decomposition transforms the multi-class problem into a series of c binary subtasks that can be trained by the binary y SVM. c}. 2. The function allows to specify any binary SVM 63 .0920 1.1: Example showing the binary SVM classifier trained on the Riply’s data.ans = 0.5 0 0.2 −1. y ∈ Y. .5 1 Figure 5.

j The training set TXY . g are trained on j the data TXY . j = 1. . . j = 1.’ko’.4 One-Against-One decomposition for SVM The input is a set TXY = {(x1 . . . The one-against-one (OAO) decomposition strategy can be used train such a classifier.4).arg = 1. The OAO decomposition transforms the multi-class problem into a series of g = c(c − 1)/2 j binary subtasks that can be trained by the binary SVM. (xl . % % % % % % load data use SMO solver use RBF kernel kernel argument regularization constant training 5.ker = ’rbf’. y1). . ppatterns(data). Let the training set TXY = {(x1 . c}. % visualization figure.12). 2. The result is the multi-class SVM classifier (5. . pboundary(model). . . . y1 ). The goal is to train the multi-class rule q: X ⊆ Rn → Y based on the majority voting strategy defined by (5. Example: Multi-class SVM using the OAA decomposition The example shows the training of the multi-class SVM using the OAA decomposition. The SMO binary solver is used to train the binary SVM subtasks.X. options.2) used to solve the binary problems.3. The binary SVM rules qj . i ∈ Ij . . The OAO decomposition for multi-class SVM is implemented in function 64 . .solver = ’smo’. options.C = 10. The training data and the decision boundary is visualized in Figure 5. ylj )} contain the training vectors xi ∈ I j = {i: yi = y 1 yi = y 2} and the modified the hidden states defined as yi = 1 1 for yj = yi . . .options ). (xlj . 2 2 for yj = yi . . model = oaasvm(data.3) represented by the data-type described in Table 5. References: The OAA decomposition to train the multi-class SVM and comparison to other methods is described for instance in the paper [12]. g is constructed for all g = c(c − 1)/2 combinations of 1 2 1 classes yj ∈ Y and yj ∈ Y \ {yj }. options. . data = load(’pentagon’).sv. . . ppatterns(model. . . options.solver (see Section 5.2. yl )} of training vectors xi ∈ X ⊆ Rn and corresponding hidden states yi ∈ Y = {1. .

% visualization figure. The training data and the decision boundary is visualized in Figure 5.8 −0. The SMO binary solver is used to train the binary SVM subtasks. % % % % % % load data use SMO solver use RBF kernel kernel argument regularization constant training 65 . pboundary(model).2: Example showing the multi-class SVM classifier build by one-against-all decomposition.6 0.sv.6 0. options.4 0.X.3.arg = 1.8 1 Figure 5. model = oaosvm(data.2 0 −0.solver = ’smo’.8 −1 −1 −0. options.12). The function oaosvm allows to specify an arbitrary binary SVM solver to train the subtasks. data = load(’pentagon’).ker = ’rbf’. Example: Multi-class SVM using the OAO decomposition The example shows the training of the multi-class SVM using the OAO decomposition. ppatterns(model.4 −0.2 0.4 0.’ko’.6 −0. options.4 −0.8 0. oaosvm.1 0.2 0 0. ppatterns(data). References: The OAO decomposition to train the multi-class SVM and comparison to other methods is described for instance in the paper [12]. options.2 −0.6 −0.C = 10.options).

2 0 0. ξ is a vector of slack variables and I = [1. ξiy ≥ 0 . bc ]T is vector of biases. .1 0. .6 −0.4 −0.3).8 −1 −1 −0.8 1 Figure 5. y1 ). l] is set of indices. . The letter B in BSVM stands for the bias term added to the objective (5.4 −0. . The goal is to train the multi-class rule q: X ⊆ Rn → Y defined by (5. In the linear case. the parameters of the multiclass rule can be found by solving the multi-class BSVM formulation which is defined as 1 (W∗ . . the optimization problem (5. yl )} of training vectors xi ∈ X ⊆ Rn and corresponding hidden states yi ∈ Y = {1. . .6 0. . (xl .2 −0. 5. wc ] is a matrix of normal vectors. b = [b1 . . In the case of the L2-soft margin p = 2.9) y 2 y∈Y W. . . 2.2 0 −0. . b∗ .8 0.9). . . The L1-soft margin is used for p = 1 and L2-soft margin for p = 2.3: Example showing the multi-class SVM classifier build by one-against-one decomposition. .b. i ∈ I . c}. (5.ξ) subject to w yi · xi + byi − ( w y · xi + by ) ≥ 1 − ξiy . . . i ∈ I . where W = [w 1 . y ∈ Y \ {yi } .6 −0.9) can be 66 . .5 Multi-class BSVM formulation The input is a set TXY = {(x1 .4 0. .b i∈I y∈Y\{y} F (W. ξ∗ ) = argmin ( wy 2 + b2 ) + C (ξi )p .2 0.8 −0.6 0. y ∈ Y \ {yi } .4 0. .

y ∈ Y \ {yi} .10) corresponds to the single-class SVM criterion which is simple to optimize. y)) + by . y))+ δ(y.11) 2C The criterion (5. j) . u. (5. u)) . The bounds are used in the algorithms to define the following two stopping conditions: Relative tolerance: FU B − FLB ≤ εrel . (5. ξ ∗ ) of the criterion (5. u)−δ(yi . The multi-class rule (5. The upper bound FU B and the lower bound FLB on the optimal value F (W∗. i∈I y∈Y\{yi } j∈I u∈Y\{yj } (5. FLB + 1 67 . yj )+δ(y. . The non-linear (kernel) classifier is obtained by substituting the selected kernel k: X × X → R for the dot products x·x to (5. i. yi) − δ(y.11) and (5. y∈Y. y∈Y. The training of the multi-class BSVM classifier with L2-soft margin is implemented in the function bsvm2. u)−δ(yj . u. The function h: Y×Y×I×I → R is defined as h(y. yi) − δ(u. y ∈ Y is given by by = i∈I y∈Y\{yi } u αi (δ(y. The optimizers implemented in the function bsvm2 are enlisted bellow: Mitchell-Demyanov-Malozemov (solver = ’mdm’). b∗ . . where A = [α1 . The above mentioned algorithms are of iterative nature and converge to the optimal solution of the criterion (5.expressed in its dual form A∗ = argmax A i∈I y∈Y\{yi } y αi − 1 2 y u αi αj h(y.9).12) where the bias by .10) subject to y αi ≥ 0 . .3) is composed of the set of discriminant functions fy : X → R which are computed as fy (x) = i∈I xi · x u∈Y\{yi } u αi (δ(u. . j) .12). αc ] are optimized weight vectors. The optimization task (5. u)δ(i. Kozinec’s algorithm (solver = ’kozinec’). i ∈ I .9) can be computed.10) can be solved by most the optimizers. j) = ( xi ·xj +1)(δ(yi . Nearest Point Algorithm (solver = ’npa’). i.

The mapping is performed efficiently by means of kernel functions k: X × X → R being the dot products of the mapped vectors k(x. % visualization figure. options. ppatterns(data). (xl .Absolute tolerance: FU B ≤ εabs . The example shows the training of the multi-class BSVM using the NPA solver. .10) implemented in the STPRtool is published in paper [9].ker = ’rbf’.solver = ’npa’. ’mdm’). The ordinary FLD is applied on the mapped data training data TΦY = {(φ(x1 ).solver (other options are ’kozinec’. . . References: The multi-class BSVM formulation is described in paper [12]. . The Kernel Fisher Discriminant is the non-linear extension of the linear FLD (see Section 2.arg = 1. . . . options. The Mitchell-Demyanov-Malozemov and Nearest Point Algorithm are described in paper [14]. model = bsvm2(data. .6 Kernel Fisher Discriminant The input is a set TXY = {(x1 . (φ(xl ). y1 ). The other solvers can be used instead by setting the variable options. The input training vectors are assumed to be mapped into a new feature space F by a mapping function φ: X → F . data = load(’pentagon’).4.options). y1). options. . The training data and the decision boundary is visualized in Figure 5. options. ppatterns(model. αl ]T such that the criterion F (α) = α·m 2 α · Mα = . Example: Multi-class BSVM with L2-soft margin. . α · (N + µE)α α · (N + µE)α 68 (5. The aim is to find a direction ψ = l αi φ(xi ) in the feature space i=1 F given by weights α = [α1 . . The Kozinec’s algorithm based SVM solver is described in paper [10].X.12). yl )} of training vectors xi ∈ X ⊆ Rn and corresponding values of binary state yi ∈ Y = {1. .’ko’. x ) = φ(x) · φ(x ) . pboundary(model). yl )}.13) .3).C = 10. % % % % % % load data use NPA solver use RBF kernel kernel argument regularization constant training 5.sv. The transformation of the multi-class BSVM criterion to the single-class criterion (5. 2}.

.4 0.13) serves as the regularization term.13) defined in the input space X . The discriminant function f (x) of the binary classifier (5.4 −0. The vector m [l × 1]. xj ).4 0.6 −0. is maximized.8 1 Figure 5. |Iy |my mT . The KFD training algorithm is implemented in the function kfd. (5.6 0. The kernel matrix K [l × l] contains kernel evaluations Ki. x)]T is vector of evaluations of the kernel.8 −1 −1 −0. The discriminant function (5. The diagonal matrix µE in the denominator of the criterion (5. M = (m1 − m2 )(m1 − m2 )T .1 0. . . .14) where the k(x) = [k(x1 .14) is known up to the bias b which is determined by the linear SVM with L1-soft margin is applied on the projected data TZY = {(z1 . . . k(xl . m = m1 − m2 .2) is given by the found vector α as f (x) = ψ · φ(x) + b = α · k(x) + b . y1).6 −0. . yl )}.13) is the feature space counterpart of the criterion (2.j = k(xi . zl } are computed as zi = α · k(xi ) . (zl .2 0. y where the vector 1y has entries i ∈ Iy equal to and remaining entries zeros.6 0.4: Example showing the multi-class BSVM classifier with L2-soft margin.2 0 −0. . The criterion (5.2 0 0. . .4 −0. The projections {z1 .8 0.8 −0.2 −0. 69 . matrices N [l × l] M [l × l] are constructed as my = 1 K1y . x). . |Iy | N = KKT − y∈Y y∈Y. .

X. cerror( ypred. 70 (5.7 Pre-image problem Let TX = {x1 . % evaluation on testing data tst = load(’riply_tst’). options. The detailed analysis can be found in book [30]. ppatterns(trn). the isolines with f (x) = +1 and f (x) = −1 are visualized and the support vectors (whole training set) are marked as well. options.. options) % visualization figure.References: The KFD formulation is described in paper [19]. The training data and the found kernel classifier is visualized in Figure 5. xl } be a set of vector from the input space X ⊆ Rn . . trn = load(’riply_trn’). options.0960 % % % % % % load training data use RBF kernel kernel argument regularization of linear SVM regularization of KFD criterion KFD training 5. The kernel methods works with the data non-linearly mapped φ: X → F into a new feature space F .mat.001.e. model ). i.5. Example: Kernel Fisher Discriminant The binary classifier is trained by the KFD on the Riply’s training set riply trn.mat.mu = 0.C = 10. . psvm(model). model = kfd(trn. . The mapping is performed implicitly by using the kernel functions k: X × X → R which are dot products of the input data mapped into the feature space F such that k(xa . The found classifier is evaluated on the testing data riply tst.15) . options. The classifier is visualized as the SVM classifier.y ) ans = 0. .arg = 1.ker = ’rbf’. tst. xb ) = φ(xa ) · φ(xb ) . ypred = svmclass( tst. Let ψ ∈ F be given by the following kernel expansion l ψ= i=1 αi φ(xi ) .

1. its feature space image φ(x) ∈ F would well approximate the expansion point Ψ ∈ F given by (5.5 x − xi 2 /σ 2 ) .2 1 0. . (5. However.15).4 0.2 0 −0.2 −1.17).16) l l = argmin k(x . αl ]T is a weight vector of real coefficients.6. the preimage problem solves mapping from the feature space F back to the input space X . x ) − 2 x i=1 αi k(x . xi ) + i=1 j=1 αi αj k(xi .6 0. In other words.5 −1 −0.8 0. The pre-image problem aims to find an input vector x ∈ X .17) which are enlisted in Table 5. xj ) .5 1 Figure 5. 71 . xb ) = exp(−0. The pre-image problem (5. all the methods are guaranteed to find only a local optimum of the task (5.17) The STPRtool provides three optimization algorithms to solve the RBF pre-image problem (5. where α = [α1 . The pre-image problem can be stated as the following optimization task x = argmin φ(x) − ψ x l 2 = argmin φ(x ) − x i=1 αi φ(xi ) l 2 (5.16) for the particular case of the RBF kernel leads to the following task l x = argmax x i=1 αi exp(−0.5: Example shows the binary classifier trained based on the Kernel Fisher Discriminant.5 0 0. . . .5 xa −xb 2 /σ 2 ) be assumed. Let the Radial Basis Function (RBF) k(xa .

ker = ’rbf’.’or’.13). Example: Pre-image problem The example shows how to visualize a pre-image of the data mean in the feature induced by the RBF kernel.2).X).1)/num_data. % load input data data = load(’riply_trn’).Table 5. ppatterns(x.Alpha = ones(num_data. References: The pre-image problem and the implemented algorithms for its solution are described in [28. ppatterns(data. % visualization figure.X = data. 72 . % define the mean in the feature space num_data = size(data.options.6. expansion. rbfpreimg3 A method exploiting the correspondence between distances in the input space and the feature space proposed in [16]. rbfpreimg2 A standard gradient optimization method using the Matlab optimization toolbox for 1D search along the gradient. rbfpreimg An iterative fixed point algorithm proposed in [28].options.arg = 1. The result is visualized in Figure 5. expansion.6: Methods to solve the RBF pre-image problem implemented in the STPRtool. % compute pre-image problem x = rbfpreimg(expansion). expansion.X.sv. 16]. expansion.

and well approximates the original one (5. m < l.. .19). αl ]T is a weight vector of real coefficients and b ∈ R is a scalar. .1. k: X × X → R is a kernel function. si ∈ Rn determine the reduced kernel expansion (5.5 1 Figure 5. . . The problem of finding the 73 . . The weight vector β = [β1 . ˜ ψ (5. (5. The reduced set method aims to find a new function m m ˜ f (x) = i=1 βi k(si .6 0. . . 5. .19) is shorter. x) + b = φ(x) · i=1 βi φ(si ) + b . Let φ: X → F be a mapping from the input space X to the feature space F associated with kernel function such that k(xa .8 Reduced set method The reduced set method is useful to simplify the kernel classifier trained by the SVM or other kernel machines. .2 0 −0.5 0 0. . i. .5 −1 −0. xb ) = φ(xa ) · φ(xb ) .2 −1. Let the function f : X ⊆ Rn → R be given as l f (x) = i=1 αi k(xi .18) where TX = {x1 . .18). xl } are vectors from Rn . x) + b = ψ · φ(x) + b .e.19) such that the expansion (5. βm ]T and the vectors TS = {s1 . .6: Example shows training data and the pre-image of their mean in the feature space induced by the RBF kernel. α = [α1 .8 0. The function f (x) can be expressed in the feature space F as the linear function f (x) = ψ · φ(x) + b. sm }. . .2 1 0.4 0. .

TS ) = argmin ψ − ψ β . The reduced set method for the RBF kernel is implemented in function rsrbf.model). The reduced expansion with 10 support vectors is found. h2 = pboundary(red_model.’C’.model).X.’Reduced SVM’). err = cerror(ypred.y).10)). ppatterns(trn).struct(’line_style’. (5.struct(’nsv’.’r’)). % load training data trn = load(’riply_trn’).7).20) can be solved by an iterative greedy algorithm.20) Let the Radial Basis Function (RBF) k(xa . The algorithm converts the task (5.1. % train SVM classifier model = smo(trn.mat.struct(’ker’. the optimization task (5. Example: Reduced set method for SVM classifier The example shows how to use the reduced set method to simplify the SVM classifier.7. References: The implemented reduced set method is described in [28].struct(’line_style’.20) into a series of m pre-image tasks (see Section 5.5 xa −xb 2 /σ 2 ) be assumed. % evaluate SVM classifiers tst = load(’riply_tst’).’rbf’. Decision boundaries of the original and the reduced SVM are visualized in Figure 5.’arg’. h1 = pboundary(model.’Original SVM’. The original and reduced SVM are evaluated on the testing data riply tst.10)).reduced kernel expansion (5.X.’b’)). red_ypred = svmclass(tst. 74 . xb ) = exp(−0.tst. % visualize decision boundaries figure. ypred = svmclass(tst. In this case. The trained kernel expansion (discriminant function) is composed of 94 support vectors. legend([h1(1) h2(1)]. % compute the reduced rule with 10 support vectors red_model = rsrbf(model.TS 2 = argmin β .TS i=1 βi φ(si ) − i=1 αi φ(xi ) .19) can be stated as the optimization task m l 2 ˜ (β. The SVM classifier is trained from the Riply’s training set riply trn/mat.

red_model. err = %..0960 1.y).6 0. . 75 .4f\n’]. err = 0. ’Reduced SVM: nsv = %d.5 −1 −0. fprintf([’Original SVM: nsv = %d.5 0 0.nsv.tst.2 −1.7: Example shows the decision boundaries of the SVM classifier trained by SMO and the approximated SVM classifier found by the reduced set method.0960 Reduced SVM: nsv = 10.red_err = cerror(red_ypred.4f\n’ .. red_err). Original SVM: nsv = 94..nsv. err = 0.2 0 −0. err.8 0.2 Original SVM Reduced SVM 1 0. The original decision rule involves 94 support vectors while the reduced one only 10 support vectors. err = %.5 1 Figure 5..4 0. model.

Input data for the demo on the Generalized Anderson’s task demo anderson. A description is given in Section 7. Table 6. The functions which generate synthetic data and scripts for converting external datasets to the Matlab are given in Table 6. The datasets stored in the Matlab data file are enlisted in Table 6.mat /ocr data/*.mat /anderson task/*.mat stored in /data/ directory of the STPRtool. Input data for the demo on the Support Vector Machines demo svm.mat /svm samples/*. Well known Fisher’s data set.mat /gmm samles/*. /multi separable/*.mat /riply tst.1.mat /binary separable/*. Training part of the Riply’s data set [24].1: Datasets /riply trn. Linearly separable multi-class data in 2D.mat /iris. Input data for the demo on training linear classifiers from finite vector sets demo linclass.2. Input data for the demo on the ExpectationMaximization algorithm for the Gaussian mixture model demo emgmm. Testing part of the Riply’s data set [24].1. Examples of hand-written numerals.1 Data sets The STPRtool provides several datasets and functions to generate synthetic data.Chapter 6 Miscellaneous 6.mat 76 .

2: Data generation. pline Plots line in 2D. pandr 77 . Generates data on circle corrupted by the Gaussian noise.2 Visualization The STPRtool provides tools for visualization of common models and data. Each function contains an example of using. Generates linearly separable binary data with prescribed margin. Table 6. It works for 1D. plane3 Plots plane in 3D. showim Displays images entered as column vectors. ppatterns Visualizes patterns as points in the feature space.3. Visualizes solution of the Generalized Anderson’s task. Postal Service (USPS) database of hand-written numerals to the Matlab file. pkernelproj Plots isolines of kernel projection.S. It operates in 2-dimensional space only. The list of functions for visualization is given in Table 6. 6.createdata genlsdata gmmsamp gsamp gencircledata usps2mat Table 6. pgauss Visualizes set of bivariate Gaussians. 2D and 3D. GUI which allows to create interactively labeled vectors sets and labeled sets of Gaussian distributions. Generates data from the multivariate Gaussian distribution.3: Functions for visualization. pboundary Plots decision boundary of given classifier in 2D. Converts U. Generates data from the Gaussian mixture model. pgmm Visualizes the bivariate Gaussian mixture model. psvm Plots decision boundary of binary SVM classifier in 2D.

The 0/1-loss function W0/1 (q(x). i. 1 for q(x) = y . The loss function is defined as ⎧ ⎨ 0 for q(x) = y . ∀x ∈ X .. . y) dx . c}. R(q) = X y∈Y PXY (x. The Bayesian risk R(q) is an expectation of the value of the loss function W when the decision rule q is applied. . y∈Y 78 . Let W : D × Y → R be a loss function which penalizes the decision q(x) ∈ D when the true hidden state is y ∈ Y. .3) = argmax PX|Y (x|y)PY (y) .4) Wε (q(x). y) .6. y∈Y Classification with reject-option: The set of decisions D is assumed to be D = Y ∪ {dont know}. The STPRtool implements the Bayesian rule for two particular cases: Minimization of misclassification: The set of decisions D coincides to the set of hidden states Y = {1. (6. where ε is penalty for the decision dont know.1) is referred to as the Bayesian rule q ∗ (x) = argmin y y∈Y PXY (x. The rule q: X → Y which minimizes the Bayesian risk with the loss function (6. y). The x and y are realizations of random variables with joint probability distribution PXY (x.1) with the 0/1-loss function corresponds to the expectation of misclassification. Let X ⊆ Rn and the sets Y and D be finite.4) is defined as ⎧ ⎨ argmax PX|Y (x|y)PY (y) if 1 − max PY |X (y|x) < ε . y∈Y (6. y)W (q(x). y) = 0 for q(x) = y . . y)W (q(x).1) The optimal rule q ∗ which minimizes the Bayesian risk (6. 1 for q(x) = y . A decision rule q: X → D takes a decision d ∈ D based on the observation x ∈ X . (6.e. y∈Y y∈Y (6. The Bayesian risk (6. The rule q: X → Y which minimizes the expectation of misclassification is defined as q(x) = argmax PY |X (y|x) .2) is used. (6.5) q(x) = ⎩ dont know if 1 − max PY |X (y|x) ≥ ε . y) = ⎩ ε for q(x) = dont know .3 Bayesian classifier The object under study is assumed to be described by a vector of observations x ∈ X and a hidden state y ∈ Y.

Pclass{2} = emgmm(trn. inx2 = find(trn.2)). bayes_model. .inx2).Pclass [cell 1 × c] Class conditional distributions PX|Y . The a priori probabilities PY (y). % Estimation of class-conditional distributions by EM bayes_model.fun = ’bayescls’ Identifies function associated with this data type.inx1).bayes_model). The distribution PX|Y (x|y) is given by Pclassy which uses the datatype described in Table 4. The Bayesian classifiers (6. y ∈ {1. The GMM are estimated by the EM algorithm emgmm. The designed Bayesian classifier is evaluated on the testing data riply tst. Table 6. y ∈ Y.Pclass{1} = emgmm(trn.X(:.mat. .X(:.2. Example: Bayesian classifier The example shows how to design the plug-in Bayesian classifier for the Riply’s training set riply trn. The class-conditional distributions PX|Y are modeled by the Gaussian mixture models (GMM) with two components.5) are implemented in function bayescls.y) 79 . bayes_model.X. cerror(ypred. % Estimation of priors n1 = length(inx1). n2 = length(inx2).3) and (6. % load input training data trn = load(’riply_trn’).struct(’ncomp’.Prior = [n1 n2]/(n1+n2).Prior [1 × c] Discrete distribution PY (y).4. % Evaluation on testing data tst = load(’riply_tst’).y==1). References: The Bayesian decision making with applications to PR is treated in details in book [26].2)).y==2).mat.To apply the optimal classification rules one has to know the class-conditional distributions PX|Y and a priory distribution PY (or their estimates).4: Data-type used to describe the Bayesian classifier.tst.struct(’ncomp’. The data-type used by the STPRtool is described in Table 6. 2} are estimated by the relative occurrences (it corresponds to the ML estimate). ypred = bayescls(tst. inx1 = find(trn. Bayesian classifier (structure array): .

struct(’line_style’. y ∈Y y ∈Y (6.6) In the particular binary case Y = {1. the second class and the region in between corresponding to the dont know decision. 2. 2}. ppatterns(trn). reject_model.eps = 0. which are quadratic with respect to the input vector x ∈ Rn . . .0900 The following example shows how to visualize the decision boundary of the found Bayesian classifier minimizing the misclassification (solid black line). hold on. . % Visualization figure.1. The reject-option rule splits the feature space into three regions corresponding to the first class. % Vislualization of rejet-option rule pboundary(reject_model. The input vector x ∈ Rn is assigned to the class y ∈ Y its corresponding discriminant function fy attains maximal value y = argmax fy (x) = argmax ( x · Ay x + by · x + cy ) .ans = 0. % Penalization for don’t know decision reject_model = bayes_model. pboundary(bayes_model).4 Quadratic classifier The quadratic classification rule q: X ⊆ Rn → Y = {1. a vector by [n × 1] and a scalar cy [1 × 1]. . the quadratic classifier is represented by a single discriminant function f (x) = x · Ax + b · x + c .1. c} is composed of a set of discriminant functions fy (x) = x · Ay x + by · x + cy .fun = ’bayescls’. The quadratic discriminant function fy is determined by a matrix Ay [n × n]. 80 . 6. The decision boundary of the reject-options rule (dashed line) is displayed for comparison. bayes_model.’k--’)). The visualization is given in Figure 6. The Bayesian classifier splits the feature space into two regions corresponding to the first class and the second class. ∀y ∈ Y .

2 −1. a vector b [n × 1] and a scalar c [1 × 1].5: Data-type used to describe binary quadratic classifier. 2 if f (x) = x · Ax + b · x + c < 0 . given by a matrix A [n × n].1: Figure shows the decision boundary of the Bayesian classifier (solid line) and the decision boundary of the reject-option rule with ε = 0.1 (dashed line).6) quadratic classifier.2 0 −0. Table 6.A [n × n] Parameter matrix A.5 1 Figure 6. 2} as follows q(x) = 1 if f (x) = x · Ax + b · x + c ≥ 0 .fun = ’quadclass’ Identifies function associated with this data type.b [n × 1] Parameter vector b. Binary linear quadratic (structure array): .2 1 0. .5 0 0. (6.c [1 × 1] Parameter scalar c. The input vector x ∈ Rn is assigned to class y ∈ {1.5 −1 −0.6 0. Example: Quadratic classifier The example shows how to train the quadratic classifier using the Fisher Linear Discriminant and the quadratic data mapping.8 0.5) and the multi-class (see Table 6. . The classifier is trained from the Riply’s 81 .4 0.7) The STPRtool uses a specific structure array to describe the binary (see Table 6. The class-conditional probabilities are modeled by the Gaussian mixture models with two components estimated by the EM algorithm.1. The implementation of the quadratic classifier itself provides function quadclass. .

e. . y1 ).quad_model). % visualization figure. cerror(ypred. yl )} be a set of prototype vectors xi ∈ X ⊆ Rn and corresponding hidden states yi ∈ Y = {1. |{xi : xi ∈ Rn (x)}| = K. i.2). pboundary(quad_model). lin_model = fld(quad_data). . . The K-nearest neighbor (KNN) classification rule q: X → Y is defined as q(x) = argmax v(x.X. (6. % make quadratic classifier quad_model = lin2quad(lin_model). .b [n × c] Parameter vectors by . .A [n × n × c] Parameter matrices Ay . The boundary of the quadratic classifier is visualized (see Figure 6.6: Data-type used to describe multi-class quadratic classifier. ypred = quadclass(tst. . ppatterns(trn). y) . . quad_data = qmap(trn). The found quadratic classifier is evaluated on the testing data riply trn.5 K-nearest neighbors classifier Let TXY = {(x1 . . Let Rn (x) = {x : x − x ≤ r 2 } be a ball centered in the vector x in which lie K prototype vectors xi . y ∈ Y.0940 % load input data % quadratic mapping % train FLD 6. . y ∈ Y.mat. (xl .c [1 × c] Parameter scalars cy . l}.mat. % evaluation tst = load(’riply_tst’). y ∈ Y. .8) y∈Y 82 . trn = load(’riply_trn’). . .y) ans = 0.Table 6.tst. Multi-class quadratic classifier (structure array): . .. . c}. . training set riply trn.fun = ’quadclass’ Identifies function associated with this data type. i ∈ {1.

fun = ’knnclass’ Identifies function associated with this data type.6 0. % load training data and setup 8-NN rule trn = load(’riply_trn’). . . model = knnrule(trn. where v(x. The classification rule (6. in book [6].8 0.2 1 0. .7: Data-type used to describe the K-nearest neighbor classifier. Table 6.8) is computationally demanding if the set of prototype vectors is large.X [n × l] Prototype vectors {x1 .2 −1.2 0 −0. K-NN classifier (structure array): .3.mat. The data-type used by the STPRtool to represent the K-nearest neighbor classifier is described in Table 6.5 −1 −0. yl }. 83 . xl }. .8).mat. .4 0. The decision boundary is visualized in Figure 6. . Example: K-nearest neighbor classifier The example shows how to create the (K = 8)-NN classification rule from the Riply’s training data riply trn.y [1 × l] Labels {y1 . . . The classifier is evaluated on the testing data riply tst.1. .5 1 Figure 6. y) is number of prototype vectors xi with hidden state yi = y which lie in the ball xi ∈ Rn (x).7. for instance.5 0 0.2: Figure shows the decision boundary of the quadratic classifier. . References: The K-nearest neighbors classifier is described in most books on PR.

pboundary(model).tst.2 1 0.2 0 −0.y) ans = 0.5 1 Figure 6.2 −1.1160 1.8 0.5 0 0.3: Figure shows the decision boundary of the (K = 8)-nearest neighbor classifier. cerror(ypred.6 0.X.model).4 0.5 −1 −0. ypred = knnclass(tst. 84 . % evaluate classifier tst = load(’riply_tst’). ppatterns(trn).% visualize decision boundary and training data figure.

the functions save chars and ocr fun are called from the mpaper to process drawn numerals. The training stage of the OCR system involves (i) collection of training examples of numerals and (ii) training the multi-class SVM. The function collect chars invokes the form for drawing numerals and allows to save the numerals to a specified file. The GUI is intended just for demonstration purposes and the scanner or other device would be used in practice instead.1 Optical Character Recognition system The use of the STPRtool is demonstrated on a simple Optical Character Recognition (OCR) system for handwritten numerals from 0 to 9. The function save chars saves drawn numerals in the stage of collecting training examples and the function ocr fun is used to recognize the numerals if the trained OCR is applied. To see the trained OCR system run the demo script demo ocr. In particular.Chapter 7 Examples of applications We intend to demonstrate how methods implemented in the STPRtool can be used to solve a more complex applications. The function mpaper creates a form composed of 50 cells to which the numerals can be drawn (see Figure 7.1) and image denoising system using the kernel PCA (Section 7. The set of training examples has to be collected before the OCR is trained.2).1a). The multi-class SVM are used as the base classifier of the numerals. The function mpaper is designed to invoke a specified function whenever the middle mouse button is pressed. The applications selected are the optical character recognition (OCR) system based on the multi-class SVM (Section 7. The GUI for drawing numerals is implemented in a function mpaper. A numeral to be classified is represented as a vector x containing pixel values taken from the numeral image. The drawn numerals are normalized to size 16 × 16 pixels (different size can be specified). 7. The STPRtool contains examples of numerals collected 85 . The STPRtool provides GUI which allows to use standard computer mouse as an input device to draw images of numerals.

. (Ck . Therefore. The directory /demo/ocr/models/ contains the SVM model trained by OAO decomposition strategy (function oaosvm) and model trained based on the multi-class BSVM formulation (function bsvm2).5 xa − xb 2 /σ 2 ) determined by the parameter σ was used in the experiment. In particular.from 6 different persons.mat are stored and the string FileName is a file name to which the resulting training set is stored. However. After the parameters (C ∗ . Figure 7. If the file name format name xx. . σ1 ). . the One-Against-One (OAO) decomposition was used as it requires the shortest training time. xb ) with its argument. The string Directory specifies the directory where the files name xx.DataFile) can be used to merge all the file to a single one.mat.1b shows numerals after normalization which are saved to the specified file honza cech. The SVM has two free parameters which have to be tuned: the regularization constant C and the kernel function k(xa . The RBF kernel k(xa . The training procedure for OCR classifier is implemented in the function train ocr. The images of normalized numerals are stored as vectors to a matrix X of size [256 × 50]. . the free SVM parameters are C and σ. The STPRtool contains a function evalsvm which evaluates the set of SVM parameters Θ based on the cross-validation estimation of the classification error. . For example. to collect examples of numeral ’1’ from person honza cech collect_chars(’honza_cech1’) was called. The string name is name of the person who drew the particular numeral and xx stands for a label assigned to the numeral. The labels from y = 1 to y = 9 corresponds to the numerals from ’1’ to ’9’ and the label y = 10 is used for numeral ’0’. yl )} created the multi-class SVM classifier can be trained. σk )}. 86 . (xl . The resulting file contains a matrix X of all examples and a vector y containing labels are assigned to examples according the number xx.mat is met then the function mergesets(Directory. The file name format name xx. σ ∗ ) are tuned the SVM classifier is trained using all training data available. if a speed of classification is the major objective the multi-class BSVM formulation is often a better option as it produce less number of support vectors. Each person drew 50 examples of each of numerals from ’0’ to ’9’. . . xb ) = exp(−0. The parameters C = ∞ (data are separable) and σ = 5 were found to be the best. The function evalsvm returns the SVM model its parameters (C ∗ . The Figure 7.1a shows the form filled with the examples of the numeral ’1’. . y1 ). The numerals are stored in the directory /data/ocr numerals/. Having the labeled training set TXY = {(x1 . σ ∗ ) turned out to be have the lowest cross-validation error over the given set Θ. The parameter tuning is implemented in the script tune ocr. The methods to train the multi-class SVM are described in Chapter 5.mat (the character can be omitted from the name) is used. A common method to tune the free parameters is to use the cross-validation to select the best parameters from a pre-selected set Θ = {(C1 . The vector dimension n = 256 corresponds to 16 × 16 pixels of each image numeral and l = 50 is the maximal number of numerals drawn to a single form.

7. k(xa ..2 Image denoising The aim is to train a model of a given class of images in order to restore images corrupted by noise.The particular model used by OCR is specified in the function ocr fun which implicitly uses the file ocrmodel. This method was proposed in [20] and further developed in [15]. . The subspace Fimg is modeled by the kernel PCA trained from the set TX mapped into the feature space F by a function φ: X → F .mat its contains is summarized in Table 7. The kernel PCA computes the lower dimensional representation TZ = {z 1 . To run the resulting OCR type demo ocr or call mpaper(struct(’fun’.3. It is assumed that the non-linear subspace Ximg forms a linear subspace Fimg of a new feature space F . Having the kernel PCA model of the subspace Fimg it can be ˆ used to map the images x ∈ X into the subspace Ximg .1. Thus the unlabeled training data set TX = {x1 . . The example which produced the figure is implemented in demo kpcadenois. xb ) = φ(xa ) · φ(xb ) . The model should describe the non-linear subspace Ximg ⊆ Rn where the images live. . A snapshot of running OCR is shown in Figure 7. The additive Gaussian noise was assumed as the source of image degradation. i. The denoising procedure is based on mapping the noisy ˆ image x into the subspace Ximg . A noisy image x is assumed to lie outside the subspace Ximg . . The aim is to train a model of images of all numerals at ones. The U. Postal Service (USPS) database of images of handwritten numerals [17] was used as the class of images to model. A link between the input space X and the feature space F is given by a mapping φ: X → F which is implicitly determined by a selected kernel function k: X × X → R. . . . xl } is used. . .S. The STPRtool contains a script which converts the USPS database to the Matlab data file. . The kernel PCA can be used to find the basis vectors of the subspace Fimg from the input training data TX . Let TΦ = {φ(x1 ). z l }.mat stored in the directory /demo/ocr/.e. The script make noisy data produces the Matlab file usps noisy. . The kernel function corresponds to the dot product in the feature space. The kernel PCA was used to model the image class. It is implemented in the function usps2mat. z i ∈ Rm of the mapped images 87 . The corruption of the USPS images by the Gaussian noise is implemented in script make noisy data. . The idea of using the kernel PCA for image denoising is demonstrated in Figure 7.’ocr_fun’)) If one intends to modify the classifier or the way how the classification results are displayed then modify the function ocr fun. φ(xl )} denote the feature space representation of the training images TX .2.

. The input are 7291 gray-scale images of size 16 × 16. . k(xl . . (7. . . TΦ to minimize the mean square reconstruction error l εKM S = i=1 ˜ φ(xi ) − φ(xi ) 2 .2) can be well solved for this case.mat produced by the script make noisy data. . (7. the kernel PCA allows to model the non-linearity which is very likely in the case of images.2) which yields the resulting x . The matrix A [l × m] and the vector b [m × 1] are trained by the ˜ kernel PCA algorithm.1: The content of the file usps noisy. where ˜ φ(x) = l βi φ(x) . X [256 × 2007] Training images corrupted by the Gaussian noise with signal-to-noise ration SNR = 1 (it can be changed). the non-linearity hidden in 88 .2) The Radial Basis Function (RBF) kernel k(xa . The whole ˆ procedure of reconstructing the corrupted image x ∈ Rn involves (i) using (7. z = AT k(x) + b . . The input are 2007 gray-scale images of size 16 × 16. Structure array tst containing the testing data: gnd X [256 × 2007] Testing images of numerals represented as column vectors. .Table 7. . xb ) = exp(−0.1) to ˜ compute z which determines φ(x) and (ii) solving the pre-image problem (7. y [1 × 7291] Labels y ∈ Y = {1.5 xa − xb 2 /σ 2 ) was used here as the pre-image problem (7. . In contrast to the linear PCA. X [256 × 7291] Training images corrupted by the Gaussian noise with signal-to-noise ration SNR = 1 (it can be changed). 10} of training images. Structure array trn containing the training data: gnd X [256 × 7291] Training images of numerals represented as column vectors. x)]T contains kernel functions evaluated at the training data TX . 10} of training images. This procedure is implemented in function kpcarec. x). y [1 × 2007] Labels y ∈ Y = {1. . Given a projection φ(x) onto the kernel PCA subspace an n image x ∈ R of the input space can be recovered by solving the pre-image problem ˜ x = argmin φ(x) − φ(x) x 2 . . i=1 β = A(z − b) .1) The vector k(x) = [k(x1 . However.

(σk . Figure 7. The linear PCA was used to solve the same task for comparison. The figure was produced by a script show denois tuning. The greedy kernel PCA Algorithm 3.4 is used for training the model. The x stands for the ground truth image and x denotes the image ˆ restored from noisy one x using (7.5. Figure 7. . In the case of the linear PCA. mk )} such that the cross-validation estimate of the mean square error PX (x) x − x 2 dx . The same idea was used to tune the output dimension of the linear PCA. It produces sparse model and can deal with large data sets.6 shows examples of ground truth images. εM S (σ.2). The training for linear PCA is implemented in the script train lin denois. An example of denoising of the testing data of USPS is implemented in script show denoising. the function pcarec was used. . 89 . Another parameter is the dimension m of the subspace kernel PCA subspace to be trained. . images restored by the linear PCA and kernel PCA. The images are assumed to be generated from distribution PX (x). The function kpcarec is used to denoise images using the kernel PCA model. The results of tuning the parameters are displayed in Figure 7.1) and (7.4 shows the idea of tuning the parameters of the kernel PCA. noisy images. . The whole training procedure of the kernel PCA is implemented in the script train kpca denois. The best parameters (σ ∗ . Having the parameters tuned the resulting kernel PCA and linear PCA are trained on the whole training set. m∗ ) was selected out of a prescribed set Θ = {(σ1 .the parameter σ of the used RBF kernel has to be tuned properly to prevent overfitting. m1 ). m) = X was minimal.

3 0.8 0.6 0. 0.3 0.2 0.4 0.4 0.1 0.45 0.05 0 0 0. 90 .5 0.15 0. middle_button − classify.2 0. right_button − erase.25 0.1: Figure (a) shows hand-written training examples of numeral ’1’ and Figure (b) shows corresponding normalized images of size 16 × 16 pixels.9 1 (a) (b) Figure 7.Control: left_button − draw.7 0.1 0.5 0.35 0.

05 0 0 1 2 3 4 5 6 7 8 9 0 4 5 6 0.5 0.8 0.4 0.1 0.3 0.35 0.4 0 3 2 0. Figure (a) shows hand-written numerals and Figure (b) shows recognized numerals. 0.25 0.1 0.6 0.Control: left_button − draw.25 0.2 0.15 0.15 0.05 0 0 0.7 0. right_button − erase.4 0.6 0.3 0.9 1 (a) 0.2 0.5 0.1 0.3 0.2 0.2 0.7 0.2: A snapshot of running OCR system.5 0.45 0.1 0.9 1 (b) Figure 7.35 0.4 0.8 1 9 8 0.45 0. middle_button − classify.5 0. 91 .3 0.

4: Schema of tuning parameters of kernel PCA model used for image restoration. Distortion model KPCA model θ∈Θ Figure 7.7 6 5 4 3 2 1 0 −1 −2 −3 −4 −4 Ground truth Noisy examples Corrupted Reconstructed −3 −2 −1 0 1 2 3 4 5 6 7 Figure 7.3: Demonstration of idea of image denoising by kernel PCA. Pre-image problem error θ Data generator Error assessment 92 .

5 5 5.5 13 12 11 10 0 10 20 30 40 50 dim 60 70 80 90 100 4 4.18 20 dim = 50 dim = 100 dim = 200 dim = 300 17 18 16 16 15 14 εMS 14 εMS 12 10 8 6 3. Ground truth Numerals with added Gaussian noise Linear PCA reconstruction Kernel PCA reconstruction Figure 7.5 7 7. Figure(b) shows plot of εM S with respect to output dimension and kernel argument of the kernel PCA.5 σ 6 6.5: Figure (a) shows plot of εM S with respect to output dimension of linear PCA.6: The results of denosing the handwritten images of USPS database.5 8 (a) (b) Figure 7. 93 .

• AdaBoost and related learning methods. Therefore we are grateful for any comment. Please send us an email and your potential contribution. • Efficient optimizers to for large scale Support Vector Machines learning problems. • Discrete probability models estimated by the EM algorithm. 94 . There are many relevant methods and proposals which have not been implemented yet. suggestion or other feedback which would help in the development of the STPRtool. • Model selection methods for SVM. • Probabilistic Principal Component Analysis (PPCA) and mixture of PPCA estimated by the Expectation-Maximization (EM) algorithm.Chapter 8 Conclusions and future planes The STPRtool as well as its documentation is being updating. The future planes involve the extensions of the toolbox by the following methods: • Feature selection methods. We welcome anyone who likes to contribute to the STPRtool in any way.

Generalized discriminant analysis using a kernel approach. Cesk´ vysok´ uˇen´ technick´. Kasturi. Bahadur. Anouar. Master’s thesis. Duda. Maximum likelihood from incomplete data via the EM Algorithm. Prtools: A matlab toolbox for pattern recognition. ac D. Programov´ n´stroje pro rozpozn´v´n´ (Pattern Recognition Programe a a a ı ˇ e ming Tools. Neural Networks for Pattern Recognition. 2003. Journal of the Royal Statistical Society. In R. Hart. [4] N. Dempster. edition.P. [7] R. Hlav´ˇ.B. Shawe-Taylor. 12(10):2385–2404. Franc and V. volume 2. 2001. Franc and V. 95 . Rubin. Clarendon Press. Neural Computation.. 33:420–431. and D.M. IEEE Computer Society. June 1962. John Wiley & Sons. Laurendeau. Multi-class support vector machine. Baudat and F.O. 36(9):1985–1996. ac [10] V. 2000. Great Britain. Hlav´ˇ. [3] C. Cambridge University Press. Fakulta e c ı e elektrotechnick´. Stork. In Czech). 2002. 16th International Conference on Pattern Recognition. Pattern Classification. Duin. [5] A. 1977. 3th edition. Anderson and R. 2000. [8] V. P. Bishop.P.M.E.W. and Suen C. pages 236–239. 2000.Bibliography [1] T. 1997. 2nd. Laird. Oxford. Support Vector Machines. Pattern Recognition. 39:185– 197.W. Anals of Mathematical Statistics. [2] G. Classification into two multivariate normal distributions with differrentia covariance matrices. N. a [9] V. [6] R. Katedra kybernetiky. An iterative algorithm learning the maximal margin classifier. Franc.G. February 2000. Cristianini and J.R. editors. and D.

Denker. Franz Matthias.[11] V. Greedy algorithm for a training set reduction in the ac kernel methods.T.R.K. January 2000.W. Mika. Franc and V. Mika. R. B. Jollife.W.S. Kernel o u a pca and de-noising in feature spaces. Petkov and M. pages 426–433. pages 41–48. Washington. and K. 1999. IEEE Transactions on Neural Networks. 1989. Kernel hebbian o algorithm for single-fram super-resolution. pages 536 – 542. R¨tsch. USA. and G. Germany.A. Hlav´ˇ. Wilson. Platt. [16] J. s editors. B. editors. John Wiley & Sons. Shevade. S. New York. and S. The EM Algorithm and Extensions. Neural Computation. MIT Press. S. Advances in Neural Information Processing Systems 11. M¨ ller. Cambridge. Bhattacharya. Neural Networks for Signal Processing. [20] S. A.A. R¨tsch. Krishnan. P. March 2002. J. Bartlett. NETLAB : algorithms for pattern recognition. London. MIT Press. W. 96 . B. J.S. Cohn. Springer. Henderson. and K. [15] Kim Kwang. August 2003.S. [18] G. 1986.E.J. Backpropagation applied to handwritten zip code recognition. [13] I. Berlin. editors. Hubbard. Probabilities for sv machines. Boser. 13(2). Scholz.. Kwok and I. Larsen. Advances in pattern recognition. In M. and L. Sch¨lkopf. B. In. Hu.T. In N. a o u [19] S. Smola. Lin.C. 11(1):124–136. 1997. [21] I.T. IEEE Transactions on Neural Networks. 1:541–551. MA. Douglas. pages 408–415. McLachlan and T. May 2004. Springer. A comparison of methods for multiclass support vector machins. Keerthi. J. Murthy. M¨ ller. 1999. editors. [17] Y. Fisher discriminant analysis with kernel. 2000. C. The pre-image problem in kernel methods. Howard. In A. M. In Leonardis Aleˇ and Bischof Horst. Schuurmans. LeCun. Springer. K. Springer-Verlag. E. Kearns.A. 2002. [22] J. Advances in Large Margin Classifiers (Neural Information Processing Series). Computer Analysis of Images and Patterns. and D. ECCV Workshop. and D. Statisical Learning in Computer Vision. Hsu and C. 2003. Nabney. New York. Smola. [14] S. D. editors. [12] C.J.J Jackel. In Proceedings of the Twentieth International Conference on Machine Learning (ICML2003). Principal Component Analysis.K.R. IEEE. Scholkopf. Westenberg.J. G. Tsang. and Sch¨lkopf Bernhard.. Weston. D. Sch¨lkpf. A fast iterative nearest point algorithm for support vector machine classifier design.H. In Y. Solla. O.

Ten lectures on statistical and structural pattern ac recognition. C. Series B. o [31] V. J. Smola. [27] B. [30] B. [29] B. Sch¨lkopf. The MIT Press. P. Smola. Scholkopf. Germany. Nonlinear component analysis as a kernel eigenvalue problem. 97 . [26] M. Burges. R. Schanz. 2:81–88. Schlesinger. DAGM. A. Burges. Fast approximation of support o vector kernel expansions. editors. 1994.J. Sch¨lkopf. 56:409–456.D. [28] B.I. pages 124–132.I. Smola.J. Redmond. and C. Neural Computation. 1998. Platt. Hlav´ˇ. Cambridge. Riply. Technical Report MSR-TR-98-14. 2002. Levi.R. Muller. Kibernetika. M. A. Learning with Kernels. Royal Statistical Soc. Smola. Berlin. 1998. and an interpretation of clustering as approximation in feature spaces. SpringerVerlag. Vapnik. Kluwer Academic Publishers. Ahler.J.C. Schlesinger and V. A connection between learning and self-learning in the pattern recognition (in Russian). MA. [24] B. 10:1299–1319.. Knirsch. and F. 2002. [25] M. and A. Springer Verlag. and K. Microsoft Research. Sch¨lkopf and A. Advances in Kernel Methods – Support o Vector Learning. 1968. The nature of statistical learning theory. 1995. Mustererkennung 1998-20. In P. 1999.[23] J. MIT Press. Sequential minimal optimizer: A fast algorithm for training support vector machines. Neural networks and related methods for classification (with discusion). May. 1998.

Sign up to vote on this title
UsefulNot useful