ORIGINAL ARTICLE

An Effective Hybrid Cuckoo Search Algorithm for Constrained Global Optimization

Wen Long (Guizhou University of Finance and Economics) · Yixiong Chen, and two further co-authors
Received: 17 August 2013 / Accepted: 15 March 2014 / Published online: 5 April 2014
© Springer-Verlag London 2014
912 Neural Comput & Applic (2014) 25:911–926
region of global minimum, it is slow at exploiting the solutions [12].

It is necessary to note that EAs are unconstrained optimization methods that need additional mechanisms to deal with constraints when solving constrained global optimization problems [13]. As a result, a variety of constraint-handling techniques targeted at EAs have been developed [4]. A common technique in the global optimization literature transforms the constrained problem into an unconstrained one by adding to the objective function a penalty term that penalizes constraint violation and depends on a penalty parameter [14]. This technique is not easy to implement, since selecting appropriate penalty parameters for the problem at hand is not straightforward. The augmented Lagrangian is an interesting penalty function that avoids the side effects associated with the ill-conditioning of simple penalty and barrier functions [6]. Therefore, augmented Lagrangian methods have received considerable attention in the recent past for solving constrained global optimization problems [15-19].

Recently, hybridizations of population-based algorithms with either deterministic or random local search have appeared in the literature: for instance, the hybrid electromagnetism-like mechanism algorithm with the Solis and Wets local search method by Alikhani et al. [20], the hybridization of the differential evolution algorithm with a crossover-based adaptive local search operation proposed in [21], the hybrid Nelder-Mead simplex search and particle swarm optimization by Zahara and Kao [22], the hybridization of the harmony search algorithm with sequential quadratic programming by Fesanghary et al. [23], and the hybridization of a genetic algorithm with a pattern search method presented in [6]. Thus, we propose an effective hybrid cuckoo search algorithm based on the Solis and Wets local search [24] that relies on an augmented Lagrangian function for constraint handling. Experiments using 13 benchmark functions and four engineering design problems are presented and compared with the best known solutions reported in the literature. The comparison with other evolutionary optimization methods demonstrates that cuckoo search with the embedded local search strategy is extremely effective and efficient at locating optimal solutions.

This paper is organized as follows: Sect. 2 describes the hybrid algorithm; it introduces the augmented Lagrangian method for constraint handling and provides details concerning the cuckoo search algorithm and the Solis and Wets local search method. The experimental results and comparisons on 13 constrained global optimization test functions and four engineering design optimization problems are presented in Sect. 3. Finally, conclusions are drawn in Sect. 4.

2 The proposed hybrid algorithm

2.1 Augmented Lagrangian method

An augmented Lagrangian method solves a sequence of very simple subproblems in which the objective function penalizes all or some of the constraint violation. This objective function is an augmented Lagrangian that depends on a penalty parameter and the multiplier vectors [6]. In this work, using the ideas in [25], the implemented augmented Lagrangian function is

P(x, \lambda, \sigma) = f(x) - \sum_{j=1}^{p} [\lambda_j g_j(x) - (1/2) \sigma_j (g_j(x))^2] - \sum_{j=p+1}^{m} \tilde{P}_j(x, \lambda, \sigma)    (2)

where \lambda is the Lagrangian multiplier vector, \sigma is the penalty parameter vector, and \tilde{P}_j(x, \lambda, \sigma) is defined as follows:

\tilde{P}_j(x, \lambda, \sigma) = { \lambda_j g_j(x) - (1/2) \sigma_j (g_j(x))^2,   if \lambda_j - \sigma_j g_j(x) > 0
                                  { \lambda_j^2 / (2 \sigma_j),                      otherwise    (3)

The function P(x, \lambda, \sigma) aims to penalize only those solutions that violate the equality and inequality constraints. Note that we do not include the simple bounds l_i <= x_i <= u_i in the augmented Lagrangian function; the method used for solving the subproblems will ensure that the bound constraints are always satisfied. Hence, at the k-th step, assuming that the Lagrange multiplier vector \lambda^k and penalty parameter vector \sigma^k are given, the corresponding subproblem is formulated as:

min  P(x, \lambda^k, \sigma^k)
s.t. l_i <= x_i <= u_i    (4)

The solution x of subproblem (4) can be obtained by searching the search space if \lambda is known and \sigma is large enough.

In our algorithm, the superscript k indicates the k-th optimization problem, for which the Lagrangian multipliers \lambda_j^k are kept fixed. However, these multipliers are updated at the end of each optimization task. It is well known that if the original problem has a feasible solution, the multiplier penalty function method has finite convergence. Our algorithm allows two choices for initializing the Lagrange multiplier vector: the first is setting the vector of multipliers to any positive vector, and the second is letting the user provide initial guesses of the multipliers for each constraint explicitly (e.g., as stored from a previous run of a similar
problem). Assuming that the Lagrange multiplier vector \lambda^k and penalty parameter vector \sigma^k are given and \hat{x}^k is the global minimum of subproblem (4), from the first-order optimality conditions of the original problem (1) and of the subproblem (4), the Lagrange multipliers \lambda_j^k are updated after the k-th optimization and kept for the (k+1)-th iteration as follows:

\lambda_j^{k+1} = { \lambda_j^k - \sigma_j^k g_j(\hat{x}^k),            j = 1, 2, ..., p
                  { max{\lambda_j^k - \sigma_j^k g_j(\hat{x}^k), 0},    j = p+1, ..., m    (5)

The initial values of the penalty parameters are set to arbitrary positive values, typically \sigma^0 = \sigma_0 (1, 1, ..., 1)^T with \sigma_0 = 10 or \sigma_0 = 100. The updating scheme is:

\sigma^{k+1} = \gamma \sigma^k    (6)

where \gamma > 1, typically \gamma = 10 or \gamma = 100. Instead of increasing the values of the components of the penalty parameter vector in every iteration, they may be increased only if no sufficient progress is made toward feasibility of the original problem (1) from the previous iteration to the current one. The schemes available to the user are:

Scheme 1. \gamma = 1 if ||g(x^{k+1})||_2 <= \zeta ||g(x^k)||_2, otherwise \gamma > 1, where

||g(x)||_2 = sqrt( \sum_{j=1}^{p} (g_j(x))^2 + \sum_{j=p+1}^{m} (min{g_j(x), 0})^2 )    (7)

is the feasibility norm and 0 < \zeta < 1, typically \zeta = 0.25;

Scheme 2.

\sigma_j^{k+1} = { \sigma_j^k,   if |g_j(x^{k+1})| <= \zeta |g_j(x^k)| for j in {1, ..., p},
                 {               or |min{g_j(x^{k+1}), 0}| <= \zeta |min{g_j(x^k), 0}| for j in {p+1, ..., m}
                 { max{\gamma \sigma_j^k, k^2},   otherwise    (8)

It is also noted that, in the above schemes, when the current feasibility norm is less than the user-required tolerance \epsilon, it is useful to restrain the increase of the penalty parameters for insignificant values of the norm, which will not be the ones determining the dissatisfaction of the convergence criteria in this case. In our algorithm, the user can also impose a strict upper bound on the values of the penalty parameters, which is a fail-safe mechanism to avoid driving them to an unrealistically large value.

The traditional augmented Lagrangian methods are locally convergent if the subproblems (4) are approximately solved. However, our algorithm aims at converging to a global solution of problem (1). Thus, global optimization methods ought to be used when solving the subproblems (4). The main differences between augmented Lagrangian algorithms lie in the framework used to find an approximation to a global minimizer of the augmented Lagrangian function subject to the bound constraints. This study presents a hybrid global optimization method based on the modified augmented Lagrangian function, where the cuckoo search algorithm with the Solis and Wets local search method is used to find approximate global solutions to the subproblems (4).

2.2 Cuckoo search algorithm

The cuckoo search (CS) algorithm is a population-based stochastic global search metaheuristic. In the CS algorithm, potential solutions correspond to cuckoo eggs. The CS algorithm is built on three idealized rules: (1) each cuckoo lays one egg at a time and dumps it in a randomly chosen nest; (2) the best nests with high-quality eggs (solutions) will carry over to the next generation; (3) the number of available host nests is fixed, and a host can discover an alien egg with a probability Pa in [0, 1]. If the cuckoo egg is discovered by the host, it may be thrown away, or the host may abandon its own nest, ceding it to the cuckoo intruder.

To make things even simpler, the last assumption can be approximated by a fraction Pa of the n nests being replaced by new nests with new random solutions. For a maximization problem, the fitness of a solution can simply be proportional to the value of its objective function. Other forms of fitness can be defined in a similar way to the fitness function in other evolutionary algorithms. A simple representation is used here, where one egg in a nest represents a solution and a cuckoo egg represents a new solution. The aim is to use the new and potentially better solutions (cuckoos) to replace worse solutions that are in the nests. When generating a new solution x^{(t+1)} for, say, cuckoo i, a Levy flight is performed:

x_i^{(t+1)} = x_i^{(t)} + \alpha \oplus Levy(\lambda)    (9)

where \alpha (\alpha > 0) represents a step-scaling size. This parameter should be related to the scales of the problem the algorithm is trying to solve. In most cases, \alpha can be set to the value 1 or some other constant. The product \oplus means entry-wise multiplication. The random step length is drawn from a Levy distribution, which has an infinite variance with an infinite mean:

Levy ~ u = t^{-\lambda}    (10)

where 0 < \lambda <= 3. Here the consecutive jumps/steps of a cuckoo essentially form a random-walk process which obeys a power-law step-length distribution with a heavy tail.
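In practice, the heavy-tailed step of Eqs. (9)-(10) is commonly generated with Mantegna's algorithm. The paper does not spell out its sampling scheme, so the sketch below (pure Python, with an assumed exponent beta = 1.5) is one common realization rather than the authors' exact implementation.

```python
import math
import random

def levy_step(dim, beta=1.5, rng=random.Random(42)):
    """Mantegna's algorithm for Levy-stable step lengths (an assumed
    implementation; the paper only states the power-law tail of Eq. (10))."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    steps = []
    for _ in range(dim):
        u = rng.gauss(0.0, sigma_u)   # numerator sample
        v = rng.gauss(0.0, 1.0)       # denominator sample
        steps.append(u / abs(v) ** (1 / beta))
    return steps

def cuckoo_move(x, alpha=1.0, rng=random.Random(42)):
    """Eq. (9): x^(t+1) = x^t + alpha (entry-wise) Levy(lambda)."""
    return [xi + alpha * si for xi, si in zip(x, levy_step(len(x), rng=rng))]
```

With a fixed seed the walk is reproducible; occasionally the Levy step is very large, which is exactly the long-jump behavior that helps CS escape local basins.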
approximations to the minimum (exploitation) [12]. The Solis and Wets local search method is good at improving approximations to the minimum [24]. Thus, a promising idea is to combine global and local optimization techniques.

In this study, an effective hybrid cuckoo search algorithm based on the Solis and Wets local search method and the augmented Lagrangian method (HCS-LSAL) is presented. The Lagrangian multiplier vector and penalty parameter vector are initialized first. Then the bound-constrained subproblems (4) are solved by the hybrid cuckoo search algorithm with the Solis and Wets local search method, and an approximate global solution to problem (1) is obtained. If the feasibility norm of the global solution is not less than the user-specified tolerance, the Lagrangian multiplier vector and penalty parameter vector are updated according to the information of the global solution. After that, the hybrid cuckoo search algorithm is run again and a new approximate global solution to problem (1) is obtained. This process is repeated until the feasibility norm of the global solution is less than the user-specified tolerance, or the maximum iteration number has elapsed. The pseudocode of the hybrid cuckoo local search and augmented Lagrangian method is given in Fig. 3.

3 Experimental results and comparisons

3.1 Constrained benchmark test functions

In this work, to evaluate the performance of the proposed HCS-LSAL algorithm, 13 constrained benchmark test functions were considered. The detailed formulation of these functions is shown in [2]. The main characteristics of the 13 test functions are summarized in Table 1, which lists the number of decision variables (n), the type of objective function, the number of linear inequality constraints (LI), the number of nonlinear equality constraints (NE), the number of nonlinear inequality constraints (NI), and the known optimal value (f_global).
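The outer augmented-Lagrangian loop just described (evaluate Eqs. (2)-(3), solve subproblem (4), test the feasibility norm of Eq. (7), update multipliers via Eq. (5) and penalties via Eq. (6)) can be sketched as follows. This is a simplified reading, not the authors' code: the penalty update is applied unconditionally rather than via Schemes 1-2, and `solve_subproblem` is a placeholder standing in for the hybrid cuckoo / Solis-Wets global search.

```python
import math

def aug_lagrangian(f, eq, ineq, x, lam, sig):
    """Eqs. (2)-(3), with equality constraints eq_j(x) = 0 and inequality
    constraints ineq_j(x) >= 0 (the sign convention implied by Eq. (5))."""
    val, k = f(x), 0
    for g in eq:                      # equality terms of Eq. (2)
        gx = g(x)
        val -= lam[k] * gx - 0.5 * sig[k] * gx * gx
        k += 1
    for g in ineq:                    # inequality terms, Eq. (3)
        gx = g(x)
        if lam[k] - sig[k] * gx > 0:
            val -= lam[k] * gx - 0.5 * sig[k] * gx * gx
        else:
            val -= lam[k] ** 2 / (2 * sig[k])
        k += 1
    return val

def feas_norm(eq, ineq, x):
    """Feasibility norm of Eq. (7)."""
    s = sum(g(x) ** 2 for g in eq)
    s += sum(min(g(x), 0.0) ** 2 for g in ineq)
    return math.sqrt(s)

def outer_loop(f, eq, ineq, solve_subproblem, x0,
               sigma0=10.0, gamma=10.0, eps=1e-8, max_outer=50):
    """Outer iteration: multiplier update of Eq. (5), penalty update of
    Eq. (6) applied every iteration (the paper's Schemes 1-2 are stricter)."""
    m = len(eq) + len(ineq)
    lam = [1.0] * m                   # any positive initial multipliers
    sig = [sigma0] * m
    x, p = x0, len(eq)
    for _ in range(max_outer):
        P = lambda y: aug_lagrangian(f, eq, ineq, y, lam, sig)
        x = solve_subproblem(P, x)    # stand-in for CS + Solis-Wets search
        if feas_norm(eq, ineq, x) < eps:
            break
        gv = [g(x) for g in eq] + [g(x) for g in ineq]
        lam = [lam[j] - sig[j] * gv[j] if j < p
               else max(lam[j] - sig[j] * gv[j], 0.0) for j in range(m)]
        sig = [gamma * s for s in sig]
    return x
```

Any bound-constrained global optimizer can be plugged in as `solve_subproblem`; in HCS-LSAL that role is played by cuckoo search refined with the Solis and Wets method.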
Table 1 Details of the 13 constrained benchmark test functions

Function  n   Type of f(x)  LI  NE  NI  f_global
g01       13  Quadratic     9   0   0   -15.00000
g02       20  Nonlinear     1   0   1   -0.803619
g03       10  Nonlinear     0   1   0   -1.000500
g04       5   Quadratic     0   0   6   -30665.539
g05       4   Nonlinear     2   3   0   5126.4967
g06       2   Nonlinear     0   0   2   -6961.814
g07       10  Quadratic     3   0   5   24.3062
g08       2   Nonlinear     0   0   2   -0.095825
g09       7   Nonlinear     0   0   4   680.6301
g10       8   Linear        3   0   3   7049.248
g11       2   Quadratic     0   1   0   0.749900
g12       3   Quadratic     0   0   93  -1.000000
g13       5   Nonlinear     0   3   0   0.0539415

The standard deviations of the results on most functions are relatively small except for g10, which reflects that HCS-LSAL is capable of performing a robust and stable search. In particular, the standard deviations for test functions g08 and g12 are equal to 0. So, HCS-LSAL is very effective, efficient and robust for solving these constrained optimization problems.

Our comparison results against the conventional CS algorithm with the augmented Lagrangian method (denoted as CS-AL) are presented in Table 3. As shown in Table 3, compared with CS-AL, the proposed HCS-LSAL obtained similar results for six functions (g01, g03, g06, g08, g11 and g12) and better results for four test functions (g02, g05, g09 and g10). With respect to CS-AL, for test functions g04, g07 and g13, HCS-LSAL found similar best values and better mean, worst and standard deviation results. Figure 4 depicts the box plots of the experimental results with HCS-LSAL and CS-AL on functions (a) g02, (b) g04, (c) g05, (d) g07, (e) g10, and (f) g13 over 30 independent runs. Figure 5 illustrates the convergence graphs of objective values over the number of iterations for the functions g01, g02, g06 and g11.

3.2 Algorithm parameters
Table 2 Experimental results obtained by HCS-LSAL with 30 independent runs on 13 test functions

Function  Optimal     Best        Median      Mean        Worst       Std.
g01       -15.0000    -15.0000    -15.0000    -15.0000    -15.0000    1.21E-08
g02       -0.803619   -0.803619   -0.786258   -0.762088   -0.640249   4.10E-02
g03       -1.000500   -1.000000   -1.000000   -1.000000   -1.000000   1.00E-10
g04       -30665.539  -30665.539  -30665.539  -30665.539  -30665.539  7.20E-06
g05       5126.4967   5126.4981   5126.4981   5126.4981   5126.4981   6.36E-06
g06       -6961.814   -6961.814   -6961.814   -6961.814   -6961.814   8.36E-06
g07       24.3062     24.3062     24.3062     24.3062     24.3062     4.93E-08
g08       -0.095825   -0.095825   -0.095825   -0.095825   -0.095825   0.00E+00
g09       680.6301    680.6301    680.6301    680.6301    680.6301    1.43E-05
g10       7049.248    7049.237    7049.237    7099.668    7250.957    8.65E+01
g11       0.749900    0.749999    0.750000    0.750000    0.750000    3.70E-09
g12       -1.00000    -1.00000    -1.00000    -1.00000    -1.00000    0.00E+00
g13       0.0539415   0.0539498   0.0539498   0.0539498   0.0539498   1.00E-10
Table 3 Comparison results of test functions between HCS-LSAL and CS-AL

Function  Optimal     Algorithm  Best        Mean        Worst       Std.
g01       -15.0000    CS-AL      -15.0000    -15.0000    -14.999     3.01E-06
                      HCS-LSAL   -15.0000    -15.0000    -15.0000    1.21E-08
g02       -0.803619   CS-AL      -0.794897   -0.738112   -0.619743   3.93E-02
                      HCS-LSAL   -0.803619   -0.762088   -0.640249   4.10E-02
g03       -1.000500   CS-AL      -1.000000   -0.999996   -0.999993   3.02E-06
                      HCS-LSAL   -1.000000   -1.000000   -1.000000   1.00E-10
g04       -30665.539  CS-AL      -30665.539  -30634.501  -30490.517  5.41E+01
                      HCS-LSAL   -30665.539  -30665.539  -30665.539  7.20E-06
g05       5126.4967   CS-AL      5126.4983   5186.1672   5311.5278   6.21E+01
                      HCS-LSAL   5126.4981   5126.4981   5126.4981   6.36E-06
g06       -6961.814   CS-AL      -6961.814   -6961.814   -6961.814   1.09E-04
                      HCS-LSAL   -6961.814   -6961.814   -6961.814   8.36E-06
g07       24.3062     CS-AL      24.3062     24.7987     27.6021     9.95E-01
                      HCS-LSAL   24.3062     24.3062     24.3062     1.27E-08
g08       -0.095825   CS-AL      -0.095825   -0.095825   -0.095825   3.55E-08
                      HCS-LSAL   -0.095825   -0.095825   -0.095825   0.00E+00
g09       680.6301    CS-AL      680.6308    680.6325    680.6342    6.12E-03
                      HCS-LSAL   680.6301    680.6301    680.6301    1.43E-05
g10       7049.248    CS-AL      7073.155    8256.458    9739.084    6.95E+02
                      HCS-LSAL   7049.237    7099.668    7250.957    8.65E+01
g11       0.749900    CS-AL      0.750000    0.750000    0.750000    1.36E-05
                      HCS-LSAL   0.749999    0.750000    0.750000    3.70E-09
g12       -1.00000    CS-AL      -1.00000    -1.00000    -1.00000    6.28E-08
                      HCS-LSAL   -1.00000    -1.00000    -1.00000    0.00E+00
g13       0.0539415   CS-AL      0.0539498   0.1052700   0.4388512   1.31E-01
                      HCS-LSAL   0.0539498   0.0539498   0.0539498   1.00E-10
g05, g07, g09, g10, and g13), HCS-LSAL found better "best" results. Regarding the mean value, HCS-LSAL obtained better "mean" and "worst" solutions in test functions g05, g07, g09, g10, and g13 and similar "mean" and "worst" values in the remaining functions (g01, g03, g04, g06, g08, g11, and g12). ABC, in contrast, had better "mean" and "worst" results in function g02. Finally, a better "std." value was found by HCS-LSAL in functions g03, g05, g07, g08, g09, g10, g11, and g13, while ABC had a better "std." value in functions g01, g02, g04, and g06.
Fig. 4 Box plots of best function values to compare HCS-LSAL and CS-AL: (a) function g02, (b) function g04, (c) function g05, (d) function g07, (e) function g10, (f) function g13
With respect to the EM algorithm, HCS-LSAL provided better "best" results in functions g02, g03, g04, g05, g07, g09, and g10 and similar "best" values in functions g01, g06, g08, g11, and g12. However, a better "best" result was obtained by EM in function g13. Regarding the "mean" and "worst" results, HCS-LSAL found better "mean" and "worst" values in functions g02, g03, g04, g05, g07, g09, g10, and g13 and similar "mean" and "worst" results in the remaining functions.

Compared with GA, HCS-LSAL was able to provide better "best" results in functions g02, g03, g07, g09, and g10 and similar "best" results in the remaining functions. Regarding the "mean" and "worst" results, HCS-LSAL found better "mean" and "worst" solutions in functions g03, g07, and g09 and similar "mean" and "worst" values in functions g01, g04, g05, g06, g08, and g11. In contrast, GA reached better "mean" and "worst" results in functions g02 and g10. In functions g03, g04, g07, and g09, HCS-LSAL obtained better standard deviation values. In the rest of the test functions (g01, g02, g05, g06, g08, g10, and g11), better "std." results were found by GA.

In comparison with the BBO algorithm, HCS-LSAL provided better "best" results in three functions (g02, g07, and g10) and similar "best" results in the other nine cases (g01, g03, g04, g05, g06, g08, g09, g11, and g12). Also, HCS-LSAL reached better "mean" and "worst" results in
Fig. 5 Convergence graphs of objective values versus the number of iterations for functions g01, g02, g06, and g11
function g07. Similar "mean" and "worst" solutions were obtained in nine functions (g01, g03, g04, g05, g06, g08, g09, g11, and g12). The BBO algorithm found better "mean" and "worst" results in functions g02 and g10. HCS-LSAL obtained better standard deviation values in three functions (g05, g07, and g08). In the remaining functions, better "std." values were obtained by the BBO algorithm.

Compared with the PSO algorithm, it can be observed from Table 4 that the proposed HCS-LSAL found better "best" solutions in five functions (g02, g03, g07, g09, and g10) and similar "best" values in another six functions (g01, g04, g06, g08, g11, and g12). Slightly better "best" results were obtained by PSO in two functions (g05 and g13). HCS-LSAL provided better "mean" and "worst" results in eight functions (g01, g02, g03, g05, g07, g09, g10, and g13). It also found similar "mean" and "worst" solutions in five functions (g04, g06, g08, g11, and g12).

As a general remark on the comparison above, HCS-LSAL shows a very competitive performance with respect to five state-of-the-art stochastic methods in terms of the efficiency, the quality, and the robustness of search.

3.5 Influence of the parameter Pa

In this section, a series of experiments using the two test functions g02 and g10 is carried out to demonstrate the rationality of the parameter Pa setting in HCS-LSAL. The impact of the step size Pa on the behavior of our algorithm has been studied empirically. We have used five different values of Pa (i.e., 0.05, 0.15, 0.25, 0.35, and 0.45) to optimize the two complicated test functions g02 and g10 over 30 independent runs, with the aim of determining the most robust value for this parameter.

The results are presented in Fig. 6. It is clear that the global convergence reliability of HCS-LSAL with Pa = 0.25 is higher than with the other settings. Although it is hard to provide a conclusive observation about the optimal step size Pa for HCS-LSAL, a visual inspection of Fig. 6 allows one to conclude that the setting Pa = 0.25 is an appropriate choice for HCS-LSAL with regard to all the Pa values analyzed.

3.6 Experiments on engineering design optimization problems

In order to study its performance on real-world engineering design optimization problems, the proposed HCS-LSAL algorithm is applied to four well-known engineering design problems.

3.6.1 Tension/compression spring design problem

The tension/compression spring design problem (shown in Fig. 7) is described in Belegundu [29]. The design optimization problem involves three continuous variables and four nonlinear inequality constraints.
Table 4 Comparison of HCS-LSAL with respect to ABC, EM, GA, BBO and PSO on 13 benchmark functions
Func Feature ABC [3] EM [5] GA [26] BBO [27] PSO [28] HCS-LSAL
Fig. 6 Box plots of the experimental results with different step sizes on problems (a) g02 and (b) g10 over 30 independent runs
The mathematical model of the spring design problem is expressed as:

min f(x) = (x3 + 2) x2 x1^2

s.t. g1(x) = 1 - x2^3 x3 / (71785 x1^4) <= 0
     g2(x) = (4 x2^2 - x1 x2) / (12566 (x2 x1^3 - x1^4)) + 1 / (5108 x1^2) - 1 <= 0
     g3(x) = 1 - 140.45 x1 / (x2^2 x3) <= 0
     g4(x) = (x1 + x2) / 1.5 - 1 <= 0

where 0.05 <= x1 <= 2.0, 0.25 <= x2 <= 1.3, and 2 <= x3 <= 15.

This problem has already been solved by several researchers, including Belegundu [29], who used mathematical programming methods (MPM); Coello and Mezura-Montes [30], who used a genetic algorithm (GA); Krohling and Coelho [31], who employed coevolutionary particle swarm optimization (CEPSO); Huang et al. [32], who proposed an effective coevolutionary differential evolution (CEDE); and Eskandar et al. [33], who applied a water cycle algorithm (WCA). The best solutions obtained by HCS-LSAL in this work for the spring design problem were compared with the five best solutions reported in the literature and are presented in Table 5. The statistical results are listed in Table 6.

As shown in Tables 5 and 6, the searching quality of the proposed HCS-LSAL is superior to that of the other five methods. Moreover, the standard deviation of the results by HCS-LSAL is much smaller than that of the other five stochastic approaches.

3.6.2 Pressure vessel design problem

The pressure vessel design problem (see Fig. 8) is described in Sandgren [34], who first proposed this problem. The design problem involves four continuous variables and four inequality constraints. The problem can be stated as follows:

min f(x) = 0.6224 x1 x3 x4 + 1.7781 x2 x3^2 + 3.1661 x1^2 x4 + 19.84 x1^2 x3

s.t. g1(x) = -x1 + 0.0193 x3 <= 0
     g2(x) = -x2 + 0.00954 x3 <= 0
     g3(x) = -pi x3^2 x4 - (4/3) pi x3^3 + 1296000 <= 0
     g4(x) = x4 - 240 <= 0

where 0 <= x1 <= 100, 0 <= x2 <= 100, 10 <= x3 <= 200, and 10 <= x4 <= 200.
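As a sanity check on the pressure vessel formulation above, the objective and constraints can be evaluated at the best HCS-LSAL design reported in Table 7, x = (0.8125, 0.4375, 42.09844, 176.6366); the values below should match the tabulated ones up to the rounding of the design variables.

```python
import math

def vessel_objective(x1, x2, x3, x4):
    """f(x) = 0.6224 x1 x3 x4 + 1.7781 x2 x3^2 + 3.1661 x1^2 x4 + 19.84 x1^2 x3."""
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)

def vessel_constraints(x1, x2, x3, x4):
    """g_i(x) <= 0 for a feasible design (formulation of Sect. 3.6.2)."""
    g1 = -x1 + 0.0193 * x3
    g2 = -x2 + 0.00954 * x3
    g3 = -math.pi * x3 ** 2 * x4 - (4.0 / 3.0) * math.pi * x3 ** 3 + 1296000.0
    g4 = x4 - 240.0
    return (g1, g2, g3, g4)

# Best HCS-LSAL design from Table 7
x = (0.8125, 0.4375, 42.09844, 176.6366)
print(vessel_objective(*x))     # ~6059.71
print(vessel_constraints(*x))   # g1, g2, g4 clearly <= 0; g3 ~ 0
```

Note that g3 is a difference of quantities of order 10^6, so it is very sensitive to the rounding of x3 and x4; only near-zero agreement can be expected from the tabulated digits.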
Fig. 8 Pressure vessel design problem

Coello and Mezura-Montes [30] used a GA-based method to solve the pressure vessel design problem, Renato and Leandro [31] applied coevolutionary PSO to solve it, and the CEDE algorithm was proposed by Huang et al. [32] for this problem. In addition, Eskandar et al. [33] solved this problem by using the WCA algorithm. Table 7 lists the optimal solutions that have been determined by GA [30], CEPSO [31], CEDE [32], WCA [33], nonlinear integer and discrete programming (NIDP) [34], as well as the proposed HCS-LSAL in this study, and their statistical results are given in Table 8.

From Tables 7 and 8, with respect to GA, CEPSO, CEDE, and NIDP, HCS-LSAL provided better results for the pressure vessel design problem. Although the best objective value derived by the WCA algorithm is better than that of HCS-LSAL, the reported value is not feasible. This is because the fourth variable (x4(L) = 200) is significantly violated in the results of WCA.

3.6.3 Three-bar truss design problem

Consider the three-bar truss design shown in Fig. 9, taken from Nowcki [35]. This problem involves two variables and three inequality constraints. The design optimization problem can be formulated as follows:

min f(x) = (2 sqrt(2) x1 + x2) * l

s.t. g1(x) = ((sqrt(2) x1 + x2) / (sqrt(2) x1^2 + 2 x1 x2)) P - sigma <= 0
     g2(x) = (x2 / (sqrt(2) x1^2 + 2 x1 x2)) P - sigma <= 0
     g3(x) = (1 / (x1 + sqrt(2) x2)) P - sigma <= 0

where 0 <= x1 <= 1, 0 <= x2 <= 1, l = 100 cm, P = 2 kN/cm^2, and sigma = 2 kN/cm^2.

The proposed HCS-LSAL method was applied to the three-bar truss design problem, and the optimal solutions were compared to earlier results reported for CS [11], WCA [33], PSO-DE [36], DE-DSS [37], and HEA-ACT [38], as shown in Table 9; the statistical results are listed in Table 10.

As can be seen in Tables 9 and 10, the best function value 263.895843 obtained by the proposed HCS-LSAL is
Table 5 Comparison of the best solution for the tension/compression spring design problem found by different algorithms

        MPM [29]   GA [30]    CEPSO [31]  CEDE [32]   WCA [33]    HCS-LSAL
x1(d)   0.05000    0.051989   0.051728    0.051609    0.051689    0.051689
x2(D)   0.31590    0.363965   0.357644    0.354714    0.356718    0.356718
x3(P)   14.2500    10.890522  11.244543   11.410831   11.288957   11.28896
g1(x)   0.000014   -0.000013  -8.25E-04   -3.90E-05   -1.65E-13   -6.41E-06
g2(x)   -0.003782  -0.000021  -2.52E-05   -1.83E-04   -7.90E-14   -3.90E-06
g3(x)   -3.938302  -1.061338  -4.051306   -4.048627   -4.053399   -4.053775
g4(x)   -0.756067  -0.722698  -0.727085   -0.729118   -0.727864   -0.727729
f(x)    0.0128334  0.0126810  0.012674    0.0126702   0.012665    0.0126652

Table 6 Statistical results of different approaches for the tension/compression spring design problem

       MPM [29]   GA [30]    CEPSO [31]  CEDE [32]   WCA [33]   HCS-LSAL
Best   0.0128334  0.0126810  0.012674    0.0126702   0.012665   0.0126652
Mean   NA         0.0127420  0.012730    0.0126703   0.012746   0.0126683
Worst  NA         0.0129730  0.012924    0.0126790   0.012952   0.0126764
Std.   NA         5.9E-05    1.58E-05    2.70E-05    8.06E-06   5.37E-07
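The spring model of Sect. 3.6.1 is easy to verify numerically: evaluating it at the HCS-LSAL column of Table 5 reproduces the tabulated objective and constraint values up to the rounding of the design variables.

```python
def spring_objective(x1, x2, x3):
    """Tension/compression spring weight, f(x) = (x3 + 2) x2 x1^2."""
    return (x3 + 2.0) * x2 * x1 ** 2

def spring_constraints(x1, x2, x3):
    """g_i(x) <= 0 for a feasible design (formulation of Sect. 3.6.1)."""
    g1 = 1.0 - x2 ** 3 * x3 / (71785.0 * x1 ** 4)
    g2 = ((4.0 * x2 ** 2 - x1 * x2) / (12566.0 * (x2 * x1 ** 3 - x1 ** 4))
          + 1.0 / (5108.0 * x1 ** 2) - 1.0)
    g3 = 1.0 - 140.45 * x1 / (x2 ** 2 * x3)
    g4 = (x1 + x2) / 1.5 - 1.0
    return (g1, g2, g3, g4)

# Best HCS-LSAL design from Table 5
x = (0.051689, 0.356718, 11.28896)
print(spring_objective(*x))     # ~0.012665
print(spring_constraints(*x))   # all ~<= 0 (within rounding of the design)
```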
Table 7 Comparison of the best solution for the pressure vessel design problem found by different algorithms

        GAFR [30]   CEPSO [31]  CEDE [32]   WCA [33]   NIDP [34]   HCS-LSAL
x1(Ts)  0.8125      0.8125      0.8125      0.7781     1.1250      0.8125
x2(Th)  0.4375      0.4375      0.4375      0.3846     0.6250      0.4375
x3(R)   42.097398   42.0913     42.0984     40.3196    48.3807     42.09844
x4(L)   176.65405   176.7465    176.6376    200.000    11.7449     176.6366
g1(x)   -0.000020   -1.37E-06   -6.67E-07   -2.95E-11  -0.1913     -2.01E-09
g2(x)   -0.035891   -3.59E-04   -3.58E-02   -7.15E-11  -0.1634     -0.035880
g3(x)   -27.886075  -118.7687   -3.705123   -1.35E-06  -75.875     -0.002495
g4(x)   -63.345953  -63.2535    -63.3623    -40.0000   -128.2551   -63.36340
f(x)    6059.9463   6061.0777   6059.7340   5885.3327  8048.6190   6059.7143

Table 8 Statistical results of different approaches for the pressure vessel design problem

       GAFR [30]  CEPSO [31]  CEDE [32]  WCA [33]   NIDP [34]  HCS-LSAL
Best   6059.9463  6061.0777   6059.7340  5885.3327  8048.6190  6059.7143
Mean   6177.2533  6147.1332   6085.2303  6198.6172  NA         6087.3225
Worst  6469.3220  6363.8041   6371.0455  6590.2129  NA         6137.4069
Std.   130.9297   86.4500     43.0130    213.0490   NA         2.21E+01
Fig. 9 Three-bar truss design problem

the best, and the same as the results by WCA, PSO-DE, DE-DSS, and HEA-ACT. Compared with the CS algorithm, HCS-LSAL found better results for the three-bar truss design problem.

3.6.4 Speed reducer design problem

The speed reducer design problem (see Fig. 10) is described in Mezura-Montes [39], who proposed this problem. The problem involves seven continuous variables and eleven inequality constraints. The optimization design problem can be formulated as follows:

g3(x) = 1.93 x4^3 / (x2 x3 x6^4) - 1 <= 0
g4(x) = 1.93 x5^3 / (x2 x3 x7^4) - 1 <= 0
g5(x) = sqrt((745 x4 / (x2 x3))^2 + 16.9 × 10^6) / (110 x6^3) - 1 <= 0
g6(x) = sqrt((745 x5 / (x2 x3))^2 + 157.5 × 10^6) / (85 x7^3) - 1 <= 0
g7(x) = x2 x3 / 40 - 1 <= 0
g8(x) = 5 x2 / x1 - 1 <= 0
g9(x) = x1 / (12 x2) - 1 <= 0
g10(x) = (1.5 x6 + 1.9) / x4 - 1 <= 0
g11(x) = (1.1 x7 + 1.9) / x5 - 1 <= 0

where 2.6 <= x1 <= 3.6, 0.7 <= x2 <= 0.8, 17 <= x3 <= 28, 7.3 <= x4 <= 8.3, 7.3 <= x5 <= 8.3, 2.9 <= x6 <= 3.9, and 5.0 <= x7 <= 5.5.
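The constraints listed above can be checked directly at the best HCS-LSAL design of Table 11; the objective function and constraints g1-g2 of this benchmark do not appear in this section, so only g3-g11 are coded below.

```python
import math

def reducer_constraints(x):
    """Constraints g3(x)-g11(x) of the speed reducer problem, each <= 0."""
    x1, x2, x3, x4, x5, x6, x7 = x
    g3 = 1.93 * x4 ** 3 / (x2 * x3 * x6 ** 4) - 1.0
    g4 = 1.93 * x5 ** 3 / (x2 * x3 * x7 ** 4) - 1.0
    g5 = math.sqrt((745.0 * x4 / (x2 * x3)) ** 2 + 16.9e6) / (110.0 * x6 ** 3) - 1.0
    g6 = math.sqrt((745.0 * x5 / (x2 * x3)) ** 2 + 157.5e6) / (85.0 * x7 ** 3) - 1.0
    g7 = x2 * x3 / 40.0 - 1.0
    g8 = 5.0 * x2 / x1 - 1.0
    g9 = x1 / (12.0 * x2) - 1.0
    g10 = (1.5 * x6 + 1.9) / x4 - 1.0
    g11 = (1.1 * x7 + 1.9) / x5 - 1.0
    return (g3, g4, g5, g6, g7, g8, g9, g10, g11)

# Best HCS-LSAL design from Table 11
x = (3.500000, 0.70000, 17.0, 7.30000, 7.7153199, 3.3502147, 5.2866545)
print(reducer_constraints(x))   # all <= 0; g7 ~ -0.7025, g9 ~ -0.5833
```

At this design, g5, g8, and g11 are active (essentially zero), which matches the near-zero entries in the HCS-LSAL column of Table 11.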
Table 9 Comparison of the best solution for the three-bar truss design problem found by different algorithms

       CS [11]   WCA [33]    PSO-DE [36]  DE-DSS [37]  HEA-ACT [38]  HCS-LSAL
x1     0.78867   0.788651    0.788675     0.788675     0.78868       0.788675
x2     0.40902   0.408316    0.408248     0.408248     0.40823       0.408248
g1(x)  -0.00029  0.000000    -5.29E-11    -1.77E-08    0.000000      -7.55E-15
g2(x)  -0.26853  -1.464024   -1.463747    -1.464101    -1.464118     -1.464102
g3(x)  -0.73176  -0.535975   -0.536252    -0.535898    -0.535898     -0.535898
f(x)   263.9716  263.895843  263.895843   263.895843   263.895843    263.895843

Table 10 Statistical results of different approaches for the three-bar truss design problem

       CS [11]    WCA [33]    PSO-DE [36]  DE-DSS [37]  HEA-ACT [38]  HCS-LSAL
Best   263.97156  263.895843  263.895843   263.895843   263.895843    263.895843
Mean   264.0669   263.895903  263.895843   263.895843   263.895865    263.895843
Worst  NA         263.896201  263.895843   263.895849   263.896099    263.895843
Std.   9.00E-05   8.71E-05    4.5E-10      9.7E-07      4.9E-05       6.58E-12
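Likewise, the three-bar truss model of Sect. 3.6.3, evaluated at the shared best design (x1 = 0.788675, x2 = 0.408248), reproduces the common best objective value 263.895843 up to the rounding of the design variables.

```python
import math

def truss_objective(x1, x2, l=100.0):
    """f(x) = (2 sqrt(2) x1 + x2) * l  (weight of the three-bar truss)."""
    return (2.0 * math.sqrt(2.0) * x1 + x2) * l

def truss_constraints(x1, x2, P=2.0, sigma=2.0):
    """Stress constraints g_i(x) <= 0 from Sect. 3.6.3."""
    r2 = math.sqrt(2.0)
    g1 = (r2 * x1 + x2) / (r2 * x1 ** 2 + 2.0 * x1 * x2) * P - sigma
    g2 = x2 / (r2 * x1 ** 2 + 2.0 * x1 * x2) * P - sigma
    g3 = 1.0 / (x1 + r2 * x2) * P - sigma
    return (g1, g2, g3)

x = (0.788675, 0.408248)
print(truss_objective(*x))     # ~263.8958
print(truss_constraints(*x))   # g1 ~ 0 (active), g2 ~ -1.4641, g3 ~ -0.5359
```

Because g1 is active at the optimum, its sign at the 6-digit rounded design can land marginally on either side of zero; only |g1| ~ 0 should be expected.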
Table 11 Comparison of the best solution for the speed reducer design problem found by different algorithms

        CS [11]    EA [39]    SIIS [40]  SBSM [41]  SEA [42]   HCS-LSAL
x1(b)   3.5015     3.49999    3.514185   3.506122   3.506163   3.500000
x2(m)   0.70000    0.69999    0.700005   0.700006   0.700831   0.70000
x3(z)   17.0000    17         17         17         17         17.00000
x4(l1)  7.6050     7.3000     7.497343   7.549126   7.460181   7.30000
x5(l2)  7.8181     7.8000     7.8346     7.85933    7.962143   7.7153199
x6(d1)  3.3520     3.350215   2.9018     3.365576   3.3629     3.3502147
x7(d2)  5.2875     5.286683   5.0022     5.289773   5.3090     5.2866545
g1(x)   -0.0743    -0.0739    -0.0777    -0.0755    -0.0777    -0.073915
g2(x)   -0.1983    -0.1980    -0.2012    -0.1994    -0.2013    -0.197998
g3(x)   -0.4349    -0.49917   -0.0360    -0.4562    -0.4741    -0.499172
g4(x)   -0.9008    -0.9015    -0.8754    -0.8994    -0.8971    -0.904644
g5(x)   -0.0011    0.0000     0.5395     -0.0132    -0.0110    -3.0359E-08
g6(x)   -0.0004    0.0000     0.1805     -0.0017    -0.0125    -1.9875E-08
g7(x)   -0.7025    -0.7025    -0.7025    -0.7025    -0.7022    -0.702500
g8(x)   -0.0004    0.00000    -0.0040    -0.0017    -0.0006    0.000000
g9(x)   -0.5832    -0.5833    -0.5816    -0.5826    -0.5831    -0.583333
g10(x)  -0.0890    -0.0513    -0.1660    -0.0796    -0.0691    -0.051326
g11(x)  -0.0130    -0.0109    -0.0552    -0.0179    -0.0279    -6.4806E-09
f(x)    3000.9810  2996.3481  2732.9006  3008.08    3025.005   2994.471066
Table 12 Statistical results of different approaches for speed reducer design problem

        CS [11]     EA [39]     SIIS [40]    SBSM [41]   SEA [42]    HCS-LSAL
Best    3000.9810   2996.3481   2732.9006    3008.08     3025.005    2994.471066
Mean    3007.1977   2996.3481   2758.8878    3012.1200   3088.7778   2994.471066
Worst   3009.0000   2996.3481   2758.3071    3028.2800   3078.5918   2994.471066
Std.    4.9634      0.00E+00    NA           NA          NA          3.88E-10
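A reported design only counts as a solution when it is feasible: every inequality constraint value g_j(x) must be non-positive, up to a small numerical tolerance. A minimal sketch of this check; the function name and tolerance are illustrative assumptions, not from the paper:

```python
def is_feasible(g_values, tol=1e-6):
    """A candidate design satisfies the inequality constraints when
    every g_j(x) is non-positive, up to a small numerical tolerance."""
    return all(g <= tol for g in g_values)

# Illustrative constraint values: one design with all g_j(x) <= 0,
# and one with two positive (violated) constraints, as in the SIIS
# case discussed in the text.
print(is_feasible([-0.07, -0.20, -0.50, -0.90]))    # True
print(is_feasible([-0.08, -0.20, 0.5395, 0.1805]))  # False
```

This is why a smaller objective value alone, such as the SIIS result above, does not make a solution better: it must first pass the feasibility check.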
better than those of HCS-LSAL, the reported results are not feasible. This is because the fifth and sixth constraints (g5(x) and g6(x)) are significantly violated in the results of the SIIS algorithm.

Based on the simulation results and comparisons above, it can be concluded that the proposed HCS-LSAL remains competitive in solving constrained numerical optimization and engineering design optimization problems.

4 Conclusions

In this paper, we developed an effective hybrid approach for constrained global optimization that combines the augmented Lagrangian method, for handling the equality and inequality constraints, with a cuckoo search algorithm as global optimizer and a Solis and Wets search as local optimizer. Experimental results for a set of benchmark test functions and engineering design problems suggest that the hybridization provides a more effective trade-off between exploitation and exploration of the search space. The proposed HCS-LSAL algorithm has proved competitive and efficient compared with the other approaches. However, further research is required to examine the efficiency of the proposed algorithm on other real-world and large-scale optimization problems.

Acknowledgments This research is supported in part by the Science and Technology Foundation of Guizhou Province ([2013]2082), in part by the Excellent Science Technology Innovation Talents in University of Guizhou Province ([2013]140), and in part by the Beijing Natural Science Foundation (4122022).

References

1. Gandomi AH, Yang XS, Alavi AH, Talatahari S (2013) Bat algorithm for constrained optimization tasks. Neural Comput Appl 22(6):1239–1255
2. Runarsson TP, Yao X (2000) Stochastic ranking for constrained evolutionary optimization. IEEE Trans Evol Comput 4(3):284–294
3. Mezura-Montes E, Cetina-Dominguez O (2012) Empirical analysis of a modified artificial bee colony for constrained numerical optimization. Appl Math Comput 218(22):10943–10973
4. Gandomi AH, Yang XS, Alavi AH (2011) Mixed variable structural optimization using firefly algorithm. Comput Struct 89(23–24):2325–2336
5. Ali MM, Golalikhani M (2010) An electromagnetism-like method for nonlinearly constrained global optimization. Comput Math Appl 60(8):2279–2285
6. Costa L, Santo IACPE, Fernandes EMGP (2012) A hybrid genetic pattern search augmented Lagrangian method for constrained global optimization. Appl Math Comput 218(18):9415–9426
7. Long W, Liang XM, Huang YF, Chen YX (2013) A hybrid differential evolution augmented Lagrangian method for constrained numerical and engineering optimization. Comput Aided Des 45(12):1562–1574
8. Gandomi AH, Alavi AH (2012) Krill herd: a new bio-inspired optimization algorithm. Commun Nonlinear Sci Numer Simul 17(12):4831–4845
9. Yang XS, Deb S (2009) Cuckoo search via Lévy flights. In: Proc World Congress on Nature and Biologically Inspired Computing. IEEE Press, USA, pp 210–214
10. Burnwal S, Deb S (2013) Scheduling optimization of flexible manufacturing system using cuckoo search-based approach. Int J Adv Manuf Tech 64(5–8):951–959
11. Gandomi AH, Yang XS, Alavi AH (2013) Cuckoo search algorithm: a metaheuristic approach to solve structural optimization problems. Eng Comput 29(1):17–35
12. Li XT, Yin MH (2012) Parameter estimation for chaotic systems using the cuckoo search algorithm with an orthogonal learning method. Chin Phys B 21(5):050507
13. Mezura-Montes E, Coello CAC (2005) A simple multimembered evolution strategy to solve constrained optimization problems. IEEE Trans Evol Comput 9(1):1–17
14. Coello CAC (2002) Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: a survey of the state of the art. Comput Methods Appl Mech Eng 191(11):1245–1287
15. Birgin EG, Martinez JM (2012) Augmented Lagrangian method with nonmonotone penalty parameters for constrained optimization. Comput Optim Appl 51(3):941–965
16. Jansen PW, Perez RE (2011) Constrained structural design optimization via a parallel augmented Lagrangian particle swarm optimization approach. Comput Struct 89(13–14):1352–1366
17. Tahk MJ, Sun BC (2000) Coevolutionary augmented Lagrangian methods for constrained optimization. IEEE Trans Evol Comput 4(2):114–124
18. Zhou YY, Yang XQ (2010) Augmented Lagrangian functions for constrained optimization problems. J Glob Optim 52(1):95–108
19. Rocha AMAC, Martins TFMC, Fernandes EMGP (2011) An augmented Lagrangian fish swarm based method for global optimization. J Comput Appl Math 235(16):4611–4620
20. Alikhani MG, Javadian N, Tavakkoli-Moghaddam R (2009) A novel hybrid approach combining electromagnetism-like method with Solis and Wets local search for continuous optimization problems. J Glob Optim 44(2):227–234
21. Noman N, Iba H (2008) Accelerating differential evolution using an adaptive local search. IEEE Trans Evol Comput 12(1):107–125
22. Zahara E, Kao YT (2009) Hybrid Nelder–Mead simplex search and particle swarm optimization for constrained engineering design problems. Expert Syst Appl 36(2):3880–3886
23. Fesanghary M, Mahdavi M, Minary-Jolandan M, Alizadeh Y (2008) Hybridizing harmony search algorithm with sequential quadratic programming for engineering optimization problems. Comput Methods Appl Mech Eng 197(33–40):3080–3091
24. Solis FJ, Wets RJB (1981) Minimization by random search techniques. Math Oper Res 6(1):19–30
25. Liang XM, Hu JB, Zhong WT, Qian JX (2001) A modified augmented Lagrange multiplier method for large-scale optimization. Dev Chem Eng Miner Proc 9(1–2):115–124
26. Chootinan P, Chen A (2006) Constraint handling in genetic algorithms using a gradient-based repair method. Comput Oper Res 33(8):2263–2281
27. Boussaïd I, Chatterjee A, Siarry P, Ahmed-Nacer M (2012) Biogeography-based optimization for constrained optimization problems. Comput Oper Res 39(12):3293–3304
28. Lu HY, Chen WQ (2008) Self-adaptive velocity particle swarm optimization for solving constrained optimization problems. J Glob Optim 41(3):427–445
29. Belegundu AD (1982) A study of mathematical programming methods for structural optimization. PhD thesis, University of Iowa, Iowa
30. Coello CAC, Mezura-Montes E (2002) Constraint-handling in genetic algorithms through the use of dominance-based tournament selection. Adv Eng Inform 16(3):193–203
31. Krohling RA, Coelho LDS (2006) Coevolutionary particle swarm optimization using Gaussian distribution for solving constrained optimization problems. IEEE Trans Syst Man Cybern 36(6):1407–1416
32. Huang FZ, Wang L, He Q (2007) An effective coevolutionary differential evolution for constrained optimization. Appl Math Comput 186(1):340–356
33. Eskandar H, Sadollah A, Bahreininejad A, Hamdi M (2012) Water cycle algorithm: a novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput Struct 110–111:151–166
34. Sandgren E (1988) Nonlinear integer and discrete programming in mechanical design. In: Proc ASME Design Technology Conference, Kissimmee, USA, pp 95–105
35. Nowacki H (1974) Optimization in precontract ship design. In: Fujita Y, Lind K, Williams TJ (eds) Computer applications in the automation of shipyard operation and ship design, vol 2. North-Holland, Elsevier, New York, pp 327–338
36. Liu H, Cai ZX, Wang Y (2010) Hybridizing particle swarm optimization with differential evolution for constrained numerical and engineering optimization. Appl Soft Comput 10(2):629–640
37. Zhang M, Luo W, Wang XF (2008) Differential evolution with dynamic stochastic selection for constrained optimization. Inf Sci 178(15):3043–3074
38. Wang Y, Cai ZX, Zhou YR, Fan Z (2009) Constrained optimization based on hybrid evolutionary algorithm and adaptive constraint-handling technique. Struct Multidiscip Optim 37(4):395–413
39. Mezura-Montes E, Coello CAC (2005) Useful infeasible solutions in engineering optimization with evolutionary algorithms. MICAI 2005, Lect Notes Artif Int 3789:652–662
40. Ray T, Saini P (2001) Engineering design optimization using a swarm with an intelligent information sharing among individuals. Eng Optim 33(6):735–748
41. Akhtar S, Tai K, Ray T (2002) A socio-behavioural simulation model for engineering design optimization. Eng Optim 34(4):341–354
42. Mezura-Montes E, Coello CAC, Landa-Becerra R (2003) Engineering optimization using a simple evolutionary algorithm. In: Proc 15th International Conference on Tools with Artificial Intelligence, CA, USA, pp 149–156