
OPTIMISATION IN SUPPORT VECTOR MACHINES HYBRIDS

We evaluate the performance of 14 Support Vector Machine models, in plain and hybrid form, to identify the optimal classifier: 2 plain SVMs and 12 SVM hybrids of alternative topologies, seeking the most efficient classifier for portfolio selection.
THE SUPPORT VECTOR MACHINES AND THEIR HYBRIDS
The Support Vector Machines (SVM) regress and classify functions from a set of labeled training data, Cortes and Vapnik (1995), producing a binary output, whilst the input is categorical.

Training of the SVMs is fast under a sequential minimal optimization technique. Courtis (1978) noted that for instances x_i, i = 1,…,l, with labels y_i ∈ {1, −1}, SVMs are trained by optimising:
min_α f(α) = (1/2) αᵀQα − eᵀα    (1)

subject to: 0 ≤ α_i ≤ C, i = 1,…,l, yᵀα = 0,

where e is the vector of all ones, and Q an l × l symmetric matrix with

Q_{i,j} = y_i y_j K(x_i, x_j),    (2)
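The dual problem of Eqs. (1)–(2) can be illustrated numerically. The following is a minimal sketch, assuming a hypothetical toy two-class data set, an RBF kernel, and SciPy's general-purpose SLSQP solver standing in for sequential minimal optimization:

```python
import numpy as np
from scipy.optimize import minimize

# Toy, linearly separable data; labels in {+1, -1}.
X = np.array([[1.0, 1.0], [2.0, 2.0], [-1.0, -1.0], [-2.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
l, C = len(y), 10.0

def rbf(a, b, gamma=0.5):
    return np.exp(-gamma * np.sum((a - b) ** 2))

# Q_ij = y_i y_j K(x_i, x_j), as in Eq. (2).
K = np.array([[rbf(X[i], X[j]) for j in range(l)] for i in range(l)])
Q = np.outer(y, y) * K

# Dual objective f(alpha) = 1/2 a^T Q a - e^T a, as in Eq. (1),
# minimized under the box and equality constraints.
f = lambda a: 0.5 * a @ Q @ a - a.sum()
res = minimize(f, np.zeros(l), jac=lambda a: Q @ a - 1.0,
               bounds=[(0.0, C)] * l,
               constraints={"type": "eq", "fun": lambda a: y @ a})
alpha = res.x
print(np.round(alpha, 3))  # support vector coefficients
```

Nonzero components of alpha identify the support vectors; a dedicated SMO implementation would exploit the problem structure instead of a generic solver.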
Following Min, Lee, and Han (2006), where optimization emphasizes the feature subset and the SVM parameters, we proceed to examine multiple hybrid SVM models.

Specifically, the GAs were elaborated in different hybrids, optimizing:

i) the SVM inputs only,
ii) the SVM outputs only, and
iii) both the inputs and outputs.

Batch learning was preferred, updating the weights after the presentation of the whole training set.

All models were tested at 500 and 1,000 epochs respectively, to determine the optimal number of iterations at convergence.
Figure 3. The Support Vector Machines [network diagram: input layer x_i(1)…x_i(D), hidden layer of kernels K(x, x_1)…K(x, x_n) with bias, summation Σ, output f(x)]
Figure 4. The Hybrid Genetic Support Vector Machines optimized in GAs on all the layers with or without Cross Validation, Loukeris et al. (2013) [network diagram as in Figure 3, with GA blocks applied to the input, hidden, and output layers]
The wrapper approach selects the SVM's optimal feature subset through the GA; training terminates when the MSE on the Cross Validation set increases, avoiding overtraining, whilst the best training weights are loaded onto the test set.
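The wrapper idea above can be sketched as a small GA over binary feature masks, each mask scored by the held-out performance of an SVM trained on the selected columns. This is a minimal illustration, assuming a hypothetical synthetic data set, scikit-learn's SVC as the inner classifier, and simplified GA operators (truncation selection, uniform crossover, bit-flip mutation):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical data: 3 informative features plus 5 pure-noise features.
n = 200
X_inf = rng.normal(size=(n, 3))
y = (X_inf.sum(axis=1) > 0).astype(int)
X = np.hstack([X_inf, rng.normal(size=(n, 5))])
X_tr, y_tr = X[:120], y[:120]
X_cv, y_cv = X[120:], y[120:]   # cross-validation set used by the wrapper

def fitness(mask):
    """Held-out accuracy of an SVM trained on the selected features."""
    if not mask.any():
        return 0.0
    clf = SVC(kernel="rbf", C=10.0).fit(X_tr[:, mask], y_tr)
    return clf.score(X_cv[:, mask], y_cv)

# Minimal GA over binary feature masks.
pop = rng.random((20, X.shape[1])) < 0.5
for gen in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]                    # keep best half
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10, size=2)]
        child = np.where(rng.random(X.shape[1]) < 0.5, a, b)   # uniform crossover
        flip = rng.random(X.shape[1]) < 0.05                   # bit-flip mutation
        children.append(child ^ flip)
    pop = np.array(children)

best = max(pop, key=fitness)   # best surviving feature subset
```

In the full wrapper the inner loop would also apply the early-stopping rule, halting SVM training as soon as the cross-validation MSE starts to rise.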

The GA solved for the optimal values of:

a) the number of neurons,
b) the Step size, and
c) the Momentum Rate,

requiring multiple trainings of the network to reach the lowest-error mode.
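The GA search over training parameters can be sketched on a toy problem. Below is a minimal illustration, assuming a hypothetical quadratic loss in place of the network's error surface, and restricted to the Step size and Momentum Rate (the neuron count, being a discrete architectural choice, is omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy quadratic loss surface: f(w) = 0.5 * w^T A w, minimized at w = 0.
A = np.diag([1.0, 10.0])

def final_loss(step, momentum, iters=50):
    """Loss after running momentum gradient descent with the given rates."""
    w = np.array([5.0, 5.0])
    v = np.zeros(2)
    for _ in range(iters):
        v = momentum * v - step * (A @ w)
        w = w + v
        if not np.isfinite(w).all():
            return np.inf    # training diverged for this parameter pair
    return 0.5 * w @ A @ w

# Minimal GA over (step size, momentum): mutate the elite half each generation.
pop = np.column_stack([rng.uniform(0.001, 0.3, 12), rng.uniform(0.0, 0.99, 12)])
for gen in range(20):
    losses = np.array([final_loss(s, m) for s, m in pop])
    elite = pop[np.argsort(losses)[:6]]
    noise = rng.normal(scale=[0.02, 0.05], size=(6, 2))
    pop = np.vstack([elite, np.clip(elite + noise, [1e-4, 0.0], [0.5, 0.99])])

best_step, best_momentum = min(pop, key=lambda p: final_loss(*p))
```

Each fitness evaluation here is one full training run, which is why the GA requires multiple trainings of the network to locate the lowest-error parameter setting.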

For the models with the GA on the output layer, it optimized the values of the Step size and the Momentum.
CONCLUDING REMARKS

The SVM of 500 epochs, whilst at a marginally lower rank, is quite efficient, although it underperforms due to overfitting and partiality.
