Many practical problems involve the search for the global extremum in the space of the system parameters. The functions to be optimized are often highly multiextremal, black-box functions with unknown analytical representations, and hard to evaluate even when only one parameter is to be adjusted in the presence of non-linear constraints. This explains the interest of both the stochastic (in particular, metaheuristic) and the mathematical programming (in particular, deterministic) communities in the comparison of the metaheuristic and deterministic classes of methods. While stochastic methods can often provide good solutions to difficult problems in practice, they offer no guarantee regarding the optimality of the solution in finite time; in contrast, deterministic methods can provide such guarantees.

Metaheuristic methods form an important part of the state-of-the-art global optimization algorithms. These algorithms are often nature-inspired with multiple interacting agents (see, e.g., [5], [8]). A significant subset of metaheuristics consists of the so-called swarm
intelligence algorithms, developed by mimicking the behavioral
characteristics of biological agents such as fish, birds, bees, and so
on. For example, particle swarm optimization is based on the
swarming intelligence of birds and fish (see, e.g., [35]), the firefly
algorithm reflects the flashing pattern of tropical fireflies (see,
e.g., [8]). A great number of metaheuristic algorithms such as
differential evolution, particle swarm optimization, simulated
annealing, artificial bee colony, and firefly algorithms have
appeared and shown their potential in solving important
engineering decision-making problems (see, e.g., the references
given in [5], [6], [8]). Basic versions of these methods (with
candidate solutions encoded as real-valued vectors) are briefly
described in this Section and their parameters are specified as those
often recommended in the literature to solve black-box global
optimization problems. All of these methods were then used to
conduct numerical experiments (see Section 5).
PSO. We consider the Particle Swarm Optimization algorithm in its classical version, e.g., from [35]. It solves problem (1) starting from a
population (swarm) of candidate solutions (particles) and
moving these particles in the search domain according to the
particle’s position and velocity. At each iteration, a particle of
the swarm updates its position by following the best local
particle’s position and the best solution of the whole swarm,
thus guiding the swarm toward the best solutions.
Cognitive φl and social φg parameters of PSO control the
weighting of the personal (local) and swarm (global)
experience, respectively; they were set to their default value
2.0 in our experiments. The inertia weight ω determines the
influence of the previous velocity of a particle on its further
velocity; we set it equal to 0.6, as in the PSO literature. In
numerical experiments (see Section 5), the maximal velocity
value was set equal to 15% of the search interval length.
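The PSO update described above can be sketched as follows for the one-dimensional case. This is a minimal illustration, not the exact implementation from [35]: the objective f and the function/variable names are ours, while the parameter values (φl = φg = 2.0, ω = 0.6, maximal velocity equal to 15% of the search interval length) follow the text.

```python
import random

def pso_minimize(f, a, b, n_particles=20, n_iters=100, seed=0):
    """Minimal 1-D PSO sketch: minimize f over [a, b]."""
    rng = random.Random(seed)
    omega, phi_l, phi_g = 0.6, 2.0, 2.0      # inertia, cognitive, social
    v_max = 0.15 * (b - a)                   # 15% of the interval length
    x = [rng.uniform(a, b) for _ in range(n_particles)]
    v = [0.0] * n_particles
    p_best = list(x)                         # personal (local) best positions
    p_val = [f(xi) for xi in x]
    g_best = min(p_best, key=f)              # best solution of the whole swarm
    for _ in range(n_iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            # velocity: inertia + pull toward personal and global bests
            v[i] = (omega * v[i]
                    + phi_l * r1 * (p_best[i] - x[i])
                    + phi_g * r2 * (g_best - x[i]))
            v[i] = max(-v_max, min(v_max, v[i]))   # velocity clamping
            x[i] = max(a, min(b, x[i] + v[i]))     # stay in the search domain
            fx = f(x[i])
            if fx < p_val[i]:
                p_best[i], p_val[i] = x[i], fx
                if fx < f(g_best):
                    g_best = x[i]
    return g_best
```

The clamping of both velocity and position keeps the particles inside the search domain, which corresponds to the maximal-velocity setting used in the experiments.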
FF. The FireFly algorithm is used as described in [8], [37] (see,
e.g., http://github.com/firefly-cpp/Firefly-algorithm--FFA- for its
implementation). FF belongs to the swarm intelligence algorithms
and is inspired by the flashing behavior of fireflies. Each firefly
(candidate solution) flashes its lights with some brightness
(associated with the objective function) attracting other fireflies
within its neighborhood. This attractiveness depends on the
(Euclidean) distance r between the two fireflies and is determined
by β(r) = β0 e^{−γr²}, where β0 is the attractiveness at r = 0. Hence, the
search domain is explored by moving the fireflies towards more
attractive neighbors (with some randomized moves allowed).
In our numerical experiments, the attractiveness parameter β0 was
set equal to 1.0, the absorption coefficient γ was set equal
to 0.01/l, and the randomization parameter α was set equal to 0.005·l,
as recommended, e.g., in [37], where l is the average scaling factor
of problem (1) (l = Σ_{i=1}^{N} (b_i − a_i)/N for the hyperinterval D in (1),
with N = 1 in our case).
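The attraction mechanism above can be sketched as follows for the one-dimensional case (N = 1, so l = b − a). This is an illustrative sketch rather than the reference implementation from [37]; the function and variable names are ours, while the parameter values (β0 = 1.0, γ = 0.01/l, α = 0.005·l) follow the text.

```python
import math
import random

def firefly_minimize(f, a, b, n_fireflies=15, n_iters=100, seed=0):
    """Minimal 1-D firefly algorithm sketch: minimize f over [a, b]."""
    rng = random.Random(seed)
    l = b - a                                # scaling factor (N = 1)
    beta0, gamma, alpha = 1.0, 0.01 / l, 0.005 * l
    x = [rng.uniform(a, b) for _ in range(n_fireflies)]
    for _ in range(n_iters):
        # brightness is associated with the objective (higher = better here)
        bright = [-f(xi) for xi in x]
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if bright[j] > bright[i]:    # move firefly i toward brighter j
                    r = abs(x[i] - x[j])     # Euclidean distance in 1-D
                    beta = beta0 * math.exp(-gamma * r * r)
                    # attraction step plus a small randomized move
                    x[i] += beta * (x[j] - x[i]) + alpha * (rng.random() - 0.5)
                    x[i] = max(a, min(b, x[i]))
                    bright[i] = -f(x[i])
    return min(x, key=f)
```

The randomized term α·(rand − 0.5) implements the "randomized moves allowed" mentioned above and prevents the swarm from collapsing prematurely onto a single point.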

Optimization algorithms can be broadly classified into deterministic and stochastic algorithms. Deterministic algorithms use specific rules for moving from one solution to another, and their output is fully determined by the parameter values and initial conditions. In stochastic algorithms, random variables appear in the algorithm itself, and thus, for the same set of parameters and initial conditions, the algorithm may produce different results.
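This distinction can be illustrated with two toy minimizers (both hypothetical, introduced only for this example): a uniform grid search, whose output is fully determined by its parameters, and a pure random search, whose output changes with the random seed even when all other inputs are fixed.

```python
import random

def grid_search(f, a, b, n):
    """Deterministic: evaluate f on a uniform grid of n points in [a, b]."""
    return min((a + (b - a) * i / (n - 1) for i in range(n)), key=f)

def random_search(f, a, b, n, seed):
    """Stochastic: evaluate f at n uniformly random points in [a, b]."""
    rng = random.Random(seed)
    return min((rng.uniform(a, b) for _ in range(n)), key=f)
```

Running `grid_search` twice with the same arguments always returns the same point, while `random_search` returns a different point for different seeds; fixing the seed is what makes stochastic experiments reproducible.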
