Article
Elite Opposition-Based Social Spider Optimization
Algorithm for Global Function Optimization
Ruxin Zhao 1 , Qifang Luo 1,2 and Yongquan Zhou 1,2, *
1 School of Information Science and Engineering, Guangxi University for Nationalities, Nanning 530006,
China; nnzhaoms@126.com (R.Z.); luoqifang@mail.gxun.edu.cn (Q.L.)
2 Key Laboratories of Guangxi High Schools Complex System and Computational Intelligence,
Nanning 530006, China
* Correspondence: yongquanzhou@126.com; Tel.: +86-771-326-0264
Academic Editors: Yun-Chia Liang, Mehmet Fatih Tasgetiren and Quan-Ke Pan
Received: 27 November 2016; Accepted: 4 January 2017; Published: 8 January 2017
Abstract: The Social Spider Optimization algorithm (SSO) is a novel metaheuristic optimization algorithm. To enhance its convergence speed and computational accuracy, this paper proposes an elite opposition-based Social Spider Optimization algorithm (EOSSO), which applies an elite opposition-based learning strategy to the SSO algorithm. The algorithm is tested on 23 benchmark functions, and the results show that the proposed EOSSO obtains accurate solutions with a fast convergence speed and a high degree of stability.
Keywords: social spider optimization; elite opposition-based learning; elite opposition-based social
spider optimization; function optimization
1. Introduction
Swarm intelligence optimization algorithms originate from the simulation of various types of biological behavior in nature and are characterized by simple operation, good optimization performance and strong robustness. Inspired by this idea, many bio-inspired swarm intelligence optimization algorithms have been proposed, such as ant colony optimization (ACO) [1], differential evolution (DE) [2], particle swarm optimization (PSO) [3], the firefly algorithm (FA) [4], glowworm swarm optimization (GSO) [5], monkey search (MS) [6], harmony search (HS) [7], cuckoo search (CS) [8,9], the bat algorithm (BA) [10] and the krill herd algorithm (KH) [11–13]. Swarm intelligence optimization algorithms can solve problems that traditional methods cannot handle effectively; they have shown excellent performance in many respects, and their application scope has been greatly expanded.
The Social Spider Optimization algorithm (SSO), proposed by Erik Cuevas in 2013 [14], is a novel metaheuristic optimization algorithm that simulates social-spider behavior. Although SSO performs well on many optimization problems, it still falls easily into local optima. To enhance its performance, this paper presents a variant of SSO, called EOSSO, that combines opposition-based learning (OBL) with an elite selection mechanism. OBL is a relatively new concept in computational intelligence; many algorithms have adopted it [15,16], and it has been proven to be an effective strategy for improving the performance of various optimization algorithms. The main idea behind OBL is to transform solutions in the current search space into a new search space. By simultaneously considering the solutions in the current search space and the transformed search space, OBL provides a higher chance of finding solutions that are closer to the global optimum. However, OBL may not be suitable for all kinds of optimization problems; for instance, the transformed candidate may jump away from the global optimum when solving multimodal problems. To avoid this, a new population-based elite selection mechanism is applied after the transformation. The proposed elite opposition-based Social Spider Optimization algorithm is validated on 23 benchmark functions. The results show that the proposed algorithm obtains accurate solutions with a fast convergence speed and a high degree of stability.
The rest of the paper is organized as follows. Section 2 introduces the original Social Spider Optimization algorithm (SSO). Section 3 proposes the new elite opposition-based Social Spider Optimization algorithm (EOSSO). Section 4 presents a series of comparison experiments on various benchmarks. The results are analyzed in Section 5. Finally, conclusions and future work are discussed in Section 6.
In the SSO, the number of female spiders $N_f$ is randomly selected within a range of 65–90% of the entire population $N$:

$$N_f = \mathrm{floor}\big[(0.9 - \mathrm{rand} \cdot 0.25) \cdot N\big] \quad (1)$$

where rand is a random number in the range $[0, 1]$, and $\mathrm{floor}(\cdot)$ maps a real number to an integer number. The number of male spiders $N_m$ is the complement of $N_f$ with respect to $N$:

$$N_m = N - N_f \quad (2)$$

The population $S$ is composed of $N$ elements and is divided into the sub-groups $F$ and $M$. The group $F$ contains the female individuals, $F = \{f_1, f_2, \ldots, f_{N_f}\}$, and the group $M$ contains the male individuals, $M = \{m_1, m_2, \ldots, m_{N_m}\}$, where $S = F \cup M$ with $S = \{s_1, s_2, \ldots, s_N\}$, such that $S = \{s_1 = f_1, s_2 = f_2, \ldots, s_{N_f} = f_{N_f}, s_{N_f+1} = m_1, s_{N_f+2} = m_2, \ldots, s_N = m_{N_m}\}$.
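As a concrete illustration, the female/male split above can be sketched in a few lines of Python (a minimal sketch following the 65–90% female ratio described above; the function name is our own):

```python
import numpy as np

def split_population(N, rng=None):
    """Split a colony of N spiders into female and male counts.

    Females make up a random 65-90% of the colony (Eq. 1);
    males are the complement (Eq. 2).
    """
    rng = rng or np.random.default_rng()
    N_f = int(np.floor((0.9 - rng.random() * 0.25) * N))
    N_m = N - N_f
    return N_f, N_m

N_f, N_m = split_population(50)
assert N_f + N_m == 50          # Eq. (2): males complement females
assert 32 <= N_f <= 45          # 65-90% of 50, after flooring
```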
Every spider receives a weight according to the fitness of its current position:

$$w_i = \frac{J(s_i) - worst_s}{best_s - worst_s} \quad (3)$$

where $J(s_i)$ is the fitness value obtained by evaluating the spider position $s_i$ with regard to the objective function $J(\cdot)$. The values of $worst_s$ and $best_s$ are defined as follows (considering a maximization problem):

$$best_s = \max_{k \in \{1,2,\ldots,N\}} J(s_k), \qquad worst_s = \min_{k \in \{1,2,\ldots,N\}} J(s_k) \quad (4)$$
The vibration perceived by individual $i$ from individual $j$ is modeled as:

$$Vib_{i,j} = w_j \cdot e^{-d_{i,j}^2} \quad (5)$$

where $d_{i,j}$ is the Euclidean distance between the spiders $i$ and $j$, such that $d_{i,j} = \|s_i - s_j\|$.
Although it is possible to compute perceived vibrations by considering any pair of individuals, only three special relationships are considered within the SSO approach:
(1) The vibration $Vibc_i$ is perceived by individual $i$ as a result of the information transmitted by member $c$, the nearest member to individual $i$ that possesses a higher weight ($w_c > w_i$):

$$Vibc_i = w_c \cdot e^{-d_{i,c}^2} \quad (6)$$

(2) The vibration $Vibb_i$ is perceived by individual $i$ as a result of the information transmitted by member $b$, the individual holding the best weight (best fitness value) of the entire population $S$, such that $w_b = \max_{k \in \{1,2,\ldots,N\}} w_k$:

$$Vibb_i = w_b \cdot e^{-d_{i,b}^2} \quad (7)$$

(3) The vibration $Vibf_i$ is perceived by individual $i$ as a result of the information transmitted by member $f$, the nearest female individual to individual $i$:

$$Vibf_i = w_f \cdot e^{-d_{i,f}^2} \quad (8)$$
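The weight and vibration computations of Equations (3) and (5)–(7) can be sketched as follows (a simplified Python sketch, not the authors' implementation; function and variable names are our own):

```python
import numpy as np

def weights(fitness):
    """Eq. (3): normalize fitness values into [0, 1] weights (maximization)."""
    best, worst = fitness.max(), fitness.min()
    return (fitness - worst) / (best - worst + 1e-30)

def vibration(w_j, s_i, s_j):
    """Eq. (5): vibration perceived by spider i from spider j."""
    d = np.linalg.norm(s_i - s_j)        # Euclidean distance d_ij
    return w_j * np.exp(-d ** 2)

def vib_c(i, S, w):
    """Eq. (6): vibration from the nearest member with a higher weight."""
    better = [j for j in range(len(S)) if w[j] > w[i]]
    if not better:                        # spider i is already the best
        return 0.0
    c = min(better, key=lambda j: np.linalg.norm(S[i] - S[j]))
    return vibration(w[c], S[i], S[c])

# Toy colony: 6 spiders in 3 dimensions, fitness = coordinate sum.
S = np.random.default_rng(0).random((6, 3))
w = weights(S.sum(axis=1))
b = int(np.argmax(w))                     # globally best spider
Vibb_0 = vibration(w[b], S[0], S[b])      # Eq. (7) for spider 0
```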
The population is initialized randomly within the search bounds:

$$f_{k,j}^{0} = p_j^{low} + \mathrm{rand}(0,1) \cdot (p_j^{high} - p_j^{low}), \quad k = 1, 2, \ldots, N_f; \; j = 1, 2, \ldots, n$$

$$m_{k,j}^{0} = p_j^{low} + \mathrm{rand}(0,1) \cdot (p_j^{high} - p_j^{low}), \quad k = 1, 2, \ldots, N_m; \; j = 1, 2, \ldots, n \quad (9)$$

where $k$ and $j$ are the individual and dimension indexes, the superscript zero signals the initial population, and the function $\mathrm{rand}(0,1)$ generates a random number between 0 and 1.
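The initialization of Equation (9) is a standard uniform sampling inside the box constraints; a minimal Python sketch (names are our own):

```python
import numpy as np

def init_population(count, low, high, rng):
    """Eq. (9): uniform random initialization inside [low, high]^n."""
    low, high = np.asarray(low, float), np.asarray(high, float)
    return low + rng.random((count, low.size)) * (high - low)

rng = np.random.default_rng(1)
F0 = init_population(8, [-5, -5], [5, 5], rng)   # initial female spiders
M0 = init_population(4, [-5, -5], [5, 5], rng)   # initial male spiders
assert F0.shape == (8, 2) and M0.shape == (4, 2)
assert (F0 >= -5).all() and (F0 <= 5).all()
```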
For this operation, a uniform random number $r_m$ is generated within the range $[0, 1]$. If $r_m$ is smaller than a threshold $PF$, an attraction movement is generated; otherwise, a repulsion movement is produced. This operator can be modeled as follows:

$$f_i^{k+1} = \begin{cases} f_i^k + \alpha \cdot Vibc_i \cdot (s_c - f_i^k) + \beta \cdot Vibb_i \cdot (s_b - f_i^k) + \delta \cdot (\mathrm{rand} - \tfrac{1}{2}), & r_m < PF \\ f_i^k - \alpha \cdot Vibc_i \cdot (s_c - f_i^k) - \beta \cdot Vibb_i \cdot (s_b - f_i^k) + \delta \cdot (\mathrm{rand} - \tfrac{1}{2}), & r_m \geq PF \end{cases} \quad (10)$$

where $\alpha$, $\beta$, $\delta$ and rand are random numbers between 0 and 1, and $k$ represents the iteration number. The individual $s_c$ is the nearest member to individual $i$ that holds a higher weight, and $s_b$ is the social spider with the highest weight in the entire population $S$.
Male spiders whose weight is above the median male weight (dominant males) are attracted to the nearest female, while the remaining males move toward the weighted mean of the male population:

$$m_i^{k+1} = \begin{cases} m_i^k + \alpha \cdot Vibf_i \cdot (s_f - m_i^k) + \delta \cdot (\mathrm{rand} - \tfrac{1}{2}), & w_{N_f+i} > w_{N_f+m} \\ m_i^k + \alpha \cdot \left( \dfrac{\sum_{h=1}^{N_m} m_h^k \, w_{N_f+h}}{\sum_{h=1}^{N_m} w_{N_f+h}} - m_i^k \right), & w_{N_f+i} \leq w_{N_f+m} \end{cases} \quad (11)$$

where the individual $s_f$ represents the nearest female individual to the male member $i$, $w_{N_f+m}$ is the median male weight, and $\sum_{h=1}^{N_m} m_h^k \, w_{N_f+h} \big/ \sum_{h=1}^{N_m} w_{N_f+h}$ corresponds to the weighted mean of the male population $M$.
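The female movement operator of Equation (10) can be sketched as follows (a simplified Python sketch; the vibration values are assumed to be computed beforehand, and the function name is our own):

```python
import numpy as np

def move_female(f_i, s_c, s_b, vibc_i, vibb_i, PF, rng):
    """Eq. (10): attraction toward s_c and s_b when r_m < PF, repulsion otherwise.

    alpha, beta, delta and rand are fresh U(0, 1) draws at each call.
    """
    alpha, beta, delta, r = rng.random(4)
    sign = 1.0 if rng.random() < PF else -1.0   # attraction vs. repulsion
    return (f_i
            + sign * alpha * vibc_i * (s_c - f_i)
            + sign * beta * vibb_i * (s_b - f_i)
            + delta * (r - 0.5))

rng = np.random.default_rng(3)
f_new = move_female(np.zeros(2), np.ones(2), np.full(2, 2.0),
                    vibc_i=0.5, vibb_i=0.8, PF=0.7, rng=rng)
assert f_new.shape == (2,)
```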
where $i \in T^g$, with $T^g$ denoting the set of spiders involved in the mating operation.
Algorithms 2017, 10, 9 5 of 21
Once the new spider is formed, it is compared with the worst spider $s_{wo}$ of the colony according to their weight values, where $w_{wo} = \min_{l \in \{1,2,\ldots,N\}} w_l$. If the new spider is better than the worst spider, the worst spider is replaced by the new one; otherwise, the new spider is discarded and the population does not change.
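The replacement step above can be sketched as (a minimal Python sketch, assuming a maximization problem as in Equation (3); names are our own):

```python
import numpy as np

def replace_worst(S, fitness, s_new, f_new):
    """Keep the offspring only if it beats the worst spider of the colony."""
    wo = int(np.argmin(fitness))        # index of the worst spider
    if f_new > fitness[wo]:             # offspring is better: replace
        S[wo], fitness[wo] = s_new, f_new
    return S, fitness                   # otherwise the offspring is discarded

S = np.array([[0.0, 0.0], [1.0, 1.0]])
fit = np.array([0.2, 0.9])
S, fit = replace_worst(S, fit, np.array([2.0, 2.0]), 0.5)
print(fit)   # [0.5 0.9] -- worst spider was replaced
```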
A. Opposite number: Let $p \in [x, y]$ be a real number. Its opposite number $p^*$ is defined by

$$p^* = x + y - p \quad (14)$$

B. Opposite point: Let $p = (p_1, p_2, \ldots, p_n)$ be a point in an $n$-dimensional search space, where $p_r \in [x_r, y_r]$ and $r = 1, 2, \ldots, n$. The opposite point $p^* = (p_1^*, p_2^*, \ldots, p_n^*)$ is defined by

$$p_r^* = x_r + y_r - p_r \quad (15)$$
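The opposite point of Equation (15) is a one-line computation; a minimal Python sketch:

```python
import numpy as np

def opposite_point(p, x, y):
    """Eqs. (14)-(15): component-wise opposite of p within bounds [x, y]."""
    return np.asarray(x) + np.asarray(y) - np.asarray(p)

p = np.array([1.0, -2.0, 0.5])
x = np.full(3, -5.0)                     # lower bounds
y = np.full(3, 5.0)                      # upper bounds
q = opposite_point(p, x, y)              # mirror of p about the box center: (-1, 2, -0.5)
assert np.allclose(q, [-1.0, 2.0, -0.5])
```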
Opposition-based population initialization generates the opposite population $OP$ from the initial population $P$:

$$op_{a,b} = x_b + y_b - p_{a,b} \quad (16)$$

where $a = 1, 2, \ldots, N$, $b = 1, 2, \ldots, n$, and $p_{a,b}$ and $op_{a,b}$ denote the $b$th variable of the $a$th vector of the population and the opposite population, respectively.
After comparison, the fittest $N$ individuals are selected. In each generation, the search space is reduced to calculate the opposite points, i.e.,

$$op_{a,b} = Min_b^{gn} + Max_b^{gn} - p_{a,b} \quad (17)$$

where $a = 1, 2, \ldots, N$ and $b = 1, 2, \ldots, n$. $[Min_b^{gn}, Max_b^{gn}]$ is the current interval of the $b$th variable in the population, which becomes increasingly smaller than the corresponding initial range $[x_b, y_b]$.
The elite opposition-based solution is generated within a dynamic bound:

$$x_{i,j}^* = k \cdot (da_j + db_j) - x_{i,j}, \quad i = 1, 2, \ldots, N; \; j = 1, 2, \ldots, n \quad (18)$$

where $N$ is the population size, $n$ is the dimension of $X$, $k \in U(0,1)$, and $(da_j, db_j)$ is the dynamic bound of the $j$th decision variable. $da_j$ and $db_j$ can be obtained from the current population:

$$da_j = \min_i \, x_{i,j}, \qquad db_j = \max_i \, x_{i,j} \quad (19)$$
As is known, the shrinking of the search space may cause the algorithm to become stuck in a local minimum. Thus, in the proposed algorithm, we update $da_j$ and $db_j$ every 50 generations. The dynamic bound is good at preserving search experience; however, it can make $x_{i,j}^*$ jump out of $(da_j, db_j)$. If that happens, the following equation is used to reset $x_{i,j}^*$:

$$x_{i,j}^* = \mathrm{rand}(da_j, db_j), \quad \text{if } x_{i,j}^* < da_j \text{ or } x_{i,j}^* > db_j \quad (20)$$
In the global search process, this strategy expands the search space of the algorithm and strengthens the diversity of the population; thus, the global search ability of the proposed algorithm is enhanced by this optimization strategy.
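The elite opposition strategy of Equations (18) and (20) can be sketched as follows (a simplified Python sketch: we draw $k$ per individual and apply the transformation to the whole population, both of which are our implementation choices; the dynamic bounds are taken as the per-dimension min/max of the current population, as described above):

```python
import numpy as np

def elite_opposition(X, rng):
    """Eq. (18) with the out-of-bound reset of Eq. (20).

    da_j / db_j are the per-dimension min/max of the current population;
    opposite points that leave (da_j, db_j) are re-drawn uniformly inside it.
    """
    da = X.min(axis=0)                      # dynamic lower bound
    db = X.max(axis=0)                      # dynamic upper bound
    k = rng.random((X.shape[0], 1))         # k ~ U(0, 1) per individual
    OX = k * (da + db) - X                  # Eq. (18)
    out = (OX < da) | (OX > db)             # Eq. (20): reset escapees
    OX[out] = rng.uniform(np.broadcast_to(da, X.shape)[out],
                          np.broadcast_to(db, X.shape)[out])
    return OX

rng = np.random.default_rng(2)
X = rng.uniform(-10, 10, size=(20, 5))
OX = elite_opposition(X, rng)
assert OX.shape == X.shape
assert (OX >= X.min(axis=0)).all() and (OX <= X.max(axis=0)).all()
```

In a full EOSSO loop, the fittest $N$ individuals of $X \cup OX$ would then be selected as the next population.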
4. Simulation Experiments
In this section, 23 benchmark test functions [37] are applied to evaluate the performance of EOSSO. The space dimension, scope and optimal value of the 23 functions are shown in Tables 1–3. The rest of this section is organized as follows: the experimental setup is given in Section 4.1, and the performance comparison of the algorithms is shown in Section 4.2.
(Fragments of Tables 2 and 3.) For the penalized functions, $y_i = 1 + \frac{x_i + 1}{4}$ and

$$u(x_i, a, k, m) = \begin{cases} k (x_i - a)^m, & x_i > a \\ 0, & -a \leq x_i \leq a \\ k (-x_i - a)^m, & x_i < -a \end{cases}$$

with Dim = 30, range $[-50, 50]$ and $f_{\min} = 0$.

$$f_{13}(x) = 0.1 \Big\{ \sin^2(3\pi x_1) + \sum_{i=1}^{n} (x_i - 1)^2 \big[1 + 10 \sin^2(3\pi x_i + 1)\big] + (x_n - 1)^2 \big[1 + \sin^2(2\pi x_n)\big] \Big\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$$

Dim = 30, range $[-50, 50]$, $f_{\min} = 0$.

$$f_{15}(x) = \sum_{i=1}^{11} \Big[ a_i - \frac{x_1 (b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4} \Big]^2$$

Dim = 4, range $[-5, 5]$, $f_{\min} = 0.00030$.
The benchmark functions selected can be divided into three categories (i.e., unimodal benchmark
functions, multimodal benchmark functions and fixed-dimension multimodal benchmark functions).
They are f 1 – f 7 for unimodal benchmark functions, f 8 – f 13 for multimodal benchmark functions and
f 14 – f 23 for fixed-dimension multimodal benchmark functions.
[Figures 1–20: average fitness evolution curves and variance (ANOVA) diagrams comparing ABC, BA, DA, GGSA, SSO and EOSSO; only the captions are reproduced here.]

Figure 1. Dim = 30, evolution curves of fitness value for $f_1$.
Figure 2. Dim = 30, evolution curves of fitness value for $f_4$.
Figure 3. Dim = 30, evolution curves of fitness value for $f_7$.
Figure 4. Dim = 30, evolution curves of fitness value for $f_9$.
Figure 5. Dim = 30, evolution curves of fitness value for $f_{10}$.
Figure 6. Dim = 30, evolution curves of fitness value for $f_{11}$.
Figure 7. Dim = 2, evolution curves of fitness value for $f_{16}$.
Figure 8. Dim = 2, evolution curves of fitness value for $f_{18}$.
Figure 9. Dim = 3, evolution curves of fitness value for $f_{19}$.
Figure 10. Dim = 4, evolution curves of fitness value for $f_{23}$.
Figure 11. Dim = 30, variance diagram of fitness value for $f_1$.
Figure 12. Dim = 30, variance diagram of fitness value for $f_4$.
Figure 13. Dim = 30, variance diagram of fitness value for $f_7$.
Figure 14. Dim = 30, variance diagram of fitness value for $f_9$.
Figure 15. Variance diagram of fitness value for $f_{10}$.
Figure 16. Dim = 30, variance diagram of fitness value for $f_{11}$.
Figure 17. Variance diagram of fitness value for $f_{16}$.
Figure 18. Variance diagram of fitness value for $f_{18}$.
Figure 19. Dim = 3, variance diagram of fitness value for $f_{19}$.
Figure 20. Dim = 4, variance diagram of fitness value for $f_{23}$.
5. Result Analysis

As seen from Table 4, for the unimodal benchmark functions, EOSSO obtains a better optimal solution for $f_1$, $f_2$, $f_3$, $f_4$, $f_5$ and $f_7$ and shows very strong robustness. For $f_6$, the precision of the optimal fitness value and mean fitness value of GGSA is higher than that of the other algorithms. For all seven unimodal benchmark functions, the standard deviation of EOSSO is smaller than that of the other algorithms, which means that EOSSO has better stability in the optimization of unimodal functions.

Similarly, as seen from Table 5, EOSSO can find the optimal solution for $f_9$ and $f_{11}$, and the corresponding standard deviations are zero. For $f_9$, $f_{10}$ and $f_{11}$, the precision of the mean fitness value, best fitness value, worst fitness value and standard deviation of EOSSO is better than that of the other algorithms. These results mean that EOSSO has a strong searching ability and great stability for solving multimodal function optimization.

For $f_{14}$–$f_{23}$, we can see from Table 6 that the best fitness value, worst fitness value, mean fitness value and standard deviation produced by EOSSO are better than those of the other algorithms. In addition, for one of these functions, EOSSO obtains better best, mean and worst fitness values, but its standard deviation is worse than that of SSO. After analyzing Tables 4–6, it can easily be concluded from the experimental results that EOSSO has a great ability for solving function optimization problems.
Figures 1–10 show the evolution curves of fitness value for $f_1$, $f_4$, $f_7$, $f_9$, $f_{10}$, $f_{11}$, $f_{16}$, $f_{18}$, $f_{19}$ and $f_{23}$. From these figures, we can easily see that EOSSO converges faster than the other population-based algorithms mentioned above, and the values obtained by EOSSO are closer to the optimal values of the benchmark functions. This shows that EOSSO has a faster convergence speed and better precision than SSO and the other population-based algorithms. Figures 11–20 show the ANOVA tests of the global minimum for $f_1$, $f_4$, $f_7$, $f_9$, $f_{10}$, $f_{11}$, $f_{16}$, $f_{18}$, $f_{19}$ and $f_{23}$. From these figures, we can see that the standard deviation of EOSSO is much smaller and that it produces fewer outliers than the other algorithms. This implies that EOSSO performs well with a high degree of stability. In summary, the proposed EOSSO is an algorithm with a fast convergence speed, a high level of precision and great stability.
In Section 4.2, 23 standard benchmark functions were selected to evaluate the performance of the elite opposition-based Social Spider Optimization algorithm (EOSSO): $f_1$–$f_7$ are unimodal functions, $f_8$–$f_{13}$ are multimodal functions, and $f_{14}$–$f_{23}$ are fixed-dimension multimodal benchmark functions. The experimental results are listed in Tables 4–6. Figures 1–20 show the evolution curves of fitness values and ANOVA tests of the global minimum for $f_1$, $f_4$, $f_7$, $f_9$, $f_{10}$, $f_{11}$, $f_{16}$, $f_{18}$, $f_{19}$ and $f_{23}$. The results in the tables show that more precise solutions can be found by EOSSO, and the figures reflect the fact that EOSSO has a faster convergence speed and higher stability.
Acknowledgments: This work is supported by National Science Foundation of China under Grants No. 61463007;
61563008, and Project of Guangxi Natural Science Foundation under Grant No. 2016GXNSFAA380264.
Author Contributions: Ruxin Zhao designed the algorithm; Qifang Luo performed the experiments and results analysis; Yongquan Zhou wrote the paper.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Socha, K.; Dorigo, M. Ant colony optimization for continuous domains. Eur. J. Oper. Res. 2008, 185, 1155–1173.
[CrossRef]
2. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over
continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [CrossRef]
3. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the IEEE International Conference
on Neural Networks, Perth, Australia, 1995; Volume IV, pp. 1942–1948.
4. Yang, X.S. Multi-objective firefly algorithm for continuous optimization. Eng. Comput. 2013, 29, 175–184.
[CrossRef]
5. Liu, J.; Zhou, Y.; Zhao, G. Leader glowworm swarm optimization algorithm for solving nonlinear equations
systems. Electr. Rev. 2012, 88, 101–106.
6. Mucherino, A.; Seref, O. Monkey search: A novel metaheuristic search for global optimization. In Proceedings
of the American Institute of Physics Conference, Gainesville, FL, USA, 28–30 March 2007; pp. 162–173.
7. Alatas, B. Chaotic harmony search algorithms. Appl. Math. Comput. 2010, 216, 2687–2699. [CrossRef]
8. Yang, X.S.; Deb, S. Cuckoo search via Levy flights. In Proceedings of the World Congress on Nature and
Biologically Inspired Computing (NaBIC 2009), Coimbatore, India, 9–11 December 2009; pp. 210–214.
9. Wang, G.-G.; Gandomi, A.H.; Zhao, X.; Chu, H.E. Hybridizing harmony search algorithm with cuckoo search
for global numerical optimization. Soft Comput. 2016, 20, 273–285. [CrossRef]
10. Yang, X.S. A new metaheuristic bat-inspired algorithm. In Nature Inspired Cooperative Strategies for
Optimization; Gonzalez, J.R., Pelta, D.A., Cruz, C., Eds.; Springer: Berlin, Germany, 2010; pp. 65–74.
11. Wang, G.-G.; Guo, L.; Gandomi, A.H.; Hao, G.-S.; Wang, H. Chaotic krill herd algorithm. Inf. Sci. 2014, 274,
17–34. [CrossRef]
12. Wang, G.-G.; Gandomi, A.H.; Alavi, A.H. Stud krill herd algorithm. Neurocomputing 2014, 128, 363–370.
[CrossRef]
13. Wang, G.; Guo, L.; Wang, H.; Duan, H.; Liu, L.; Li, J. Incorporating mutation scheme into krill herd algorithm
for global numerical optimization. Neural Comput. Appl. 2014, 24, 853–871. [CrossRef]
14. Cuevas, E.; Cienfuegos, M.; Zaldivar, D.; Perez-Cisneros, M. A swarm optimization algorithm inspired in
the behavior of the social-spider. Expert Syst. Appl. 2013, 40, 6374–6384. [CrossRef]
15. Wang, H.; Rahnamay, S.; Wu, Z. Parallel differential evolution with self-adapting control parameters and
generalized opposition-based learning for solving high-dimensional optimization problems. J. Parallel
Distrib. Comput. 2013, 73, 62–73. [CrossRef]
16. Zhou, Y.; Wang, R.; Luo, Q. Elite opposition-based flower pollination algorithm. Neurocomputing 2016, 188,
294–310. [CrossRef]
17. Tizhoosh, H.R. Opposition-based learning: A new scheme for machine intelligence. In Proceedings of the
International Conference on Computation Intelligence on Modeling Control Automation and International
Conference on Intelligent Agents, Web Technologies Internet Commerce, Vienna, Austria, 28–30 November
2005; pp. 695–701.
18. Rahnamayan, S.; Tizhoosh, H.R.; Salama, M.M.A. Opposition versus randomness in soft computing
techniques. Appl. Soft Comput. 2008, 8, 906–918. [CrossRef]
19. Wang, H.; Li, H.; Liu, Y.; Li, C.; Zeng, S. Opposition based particle swarm algorithm with Cauchy mutation.
In Proceedings of the IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007;
pp. 4750–4756.
20. Rahnamayan, S.; Tizhoosh, H.R.; Salama, M.M.A. Opposition-based differential evolution for optimization
of noisy problems. In Proceedings of the IEEE Congress on Evolutionary Computation, Vancouver, BC,
Canada, 16–21 July 2006; pp. 1865–1872.
21. Rahnamayan, S.; Tizhoosh, H.R.; Salama, M.M.A. Opposition-based differential evolution algorithms.
In Proceedings of the IEEE Congress on Evolutionary Computation, Vancouver, BC, Canada, 16–21 July 2006;
pp. 2010–2017.
22. Rahnamayan, S.; Tizhoosh, H.R.; Salama, M.M.A. Opposition-based differential evolution. IEEE Trans.
Evol. Comput. 2008, 2, 64–79. [CrossRef]
23. Tizhoosh, H.R. Opposition-based reinforcement learning. J. Adv. Comput. Intell. Inf. 2006, 10, 578–585.
24. Khalvati, F.; Tizhoosh, H.R.; Aagaard, M.D. Opposition-based window memorization for morphological
algorithms. In Proceedings of the IEEE Symposium on Computational Intelligence in Image and Signal
Processing, Honolulu, HI, USA, 1–5 April 2007; pp. 425–430.
25. Al-Qunaieer, F.S.; Tizhoosh, H.R.; Rahnamayan, S. Oppositional fuzzy image thresholding. In Proceedings
of the IEEE International Conference on Fuzzy Systems, Barcelona, Spain, 18–23 July 2010; pp. 1–7.
26. Tizhoosh, H.R. Opposite fuzzy sets with applications in image processing. In Proceedings of the Joint 2009
International Fuzzy Systems Association World Congress and 2009 European Society of Fuzzy Logic and
Technology Conference, Lisbon, Portugal, 20–24 July 2009; pp. 36–41.
27. Malisia, A.R.; Tizhoosh, H.R. Applying opposition-based ideas to the ant colony system. In Proceedings of
the IEEE Swarm Intelligence Symposium, Honolulu, HI, USA, 1–5 April 2007; pp. 182–191.
28. Haiping, M.; Xieyong, R.; Baogen, J. Oppositional ant colony optimization algorithm and its application to
fault monitoring. In Proceedings of the 29th Chinese Control Conference (CCC), Beijing, China, 29–31 July
2010; pp. 3895–3903.
29. Lin, Z.Y.; Wang, L.L. A new opposition-based compact genetic algorithm with fluctuation. J. Comput. Inf. Syst.
2010, 6, 897–904.
30. Ventresca, M.; Tizhoosh, H.R. Improving the convergence of back propagation by opposite transfer functions.
In Proceedings of the International Joint Conference on Neural Networks (IJCNN’06), Vancouver, BC,
Canada, 16–21 July 2006; pp. 4777–4784.
31. Ventresca, M.; Tizhoosh, H.R. Opposite transfer functions and back propagation through time. In Proceedings
of the IEEE Symposium on Foundations of Computational Intelligence, Honolulu, HI, USA, 1–5 April 2007;
pp. 570–577.
32. Han, L.; He, X.S. A novel opposition-based particle swarm optimization for noisy problems. In Proceedings
of the Third International Conference on Natural Computation (ICNC 2007), Haikou, China, 24–27 August
2007; pp. 624–629.
33. Shaw, B.; Mukherjee, V.; Ghoshal, S.P. A novel opposition-based gravitational search algorithm for combined
economic and emission dispatch problems of power systems. Int. J. Electr. Power Energy Syst. 2012, 35, 21–33.
[CrossRef]
34. Chatterjee, A.; Ghoshal, S.P.; Mukherjee, V. Solution of combined economic and emission dispatch problems
of power systems by an opposition based harmony search algorithm. Int. J. Electr. Power Energy Syst. 2012,
39, 9–20. [CrossRef]
35. Bhattacharya, A.; Chattopadhyay, P.K. Solution of economic power dispatch problems using oppositional
biogeography-based optimization. Electr. Power Compon. Syst. 2010, 38, 1139–1160. [CrossRef]
36. Ergezer, M.; Simon, D.; Du, D.W. Oppositional biogeography-based optimization. In Proceedings of the
IEEE International Conference on Systems, Man and Cybernetics (SMC 2009), San Antonio, TX, USA, 11–14
October 2009; pp. 1009–1014.
37. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [CrossRef]
© 2017 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC-BY) license (http://creativecommons.org/licenses/by/4.0/).