
Journal of Heuristics (2019) 25:809–836

https://doi.org/10.1007/s10732-018-9397-6

Swarm hyperheuristic framework

Surafel Luleseged Tilahun1,2 · Mohamed A. Tawhid3,4

Received: 15 August 2017 / Revised: 10 October 2018 / Accepted: 15 October 2018 / Published online: 26 October 2018
© Springer Science+Business Media, LLC, part of Springer Nature 2018

Abstract
Swarm intelligence is one of the central focus areas in the study of metaheuristic algorithms. The effectiveness of these algorithms in solving difficult problems has attracted researchers and practitioners. As a result, numerous algorithms of this type have been proposed. However, there is heavy criticism that some of these algorithms lack novelty. In fact, some of these algorithms are the same in terms of their updating operators but with different mimicking scenarios and names. The performance of a metaheuristic algorithm depends on how it balances the degree of the two basic search mechanisms, namely intensification and diversification. Hence, introducing novel algorithms which contribute a new search mechanism is welcome, but not the mere repetition of the same algorithm with the same or perturbed operators under a different metaphor. In this regard, it is ideal to have a framework where different custom-made operators are used along with existing or new operators. Hence, this paper presents a swarm hyperheuristic framework, where updating operators are taken as low level heuristics and guided by a high level hyperheuristic. Different learning approaches are also proposed to guide the intensification and diversification search behaviour of the algorithm. Hence, a swarm hyperheuristic without learning (SHH1), with offline learning (SHH2) and with online learning (SHH3) are proposed and discussed. A simulation-based comparison and discussion is also presented using a set of nine updating operators with selected metaheuristic algorithms on twenty benchmark problems. The problems are selected from both unconstrained and constrained optimization problems with their dimension ranging from two to fifty. The simulation results show that the proposed approach with learning has a better performance in general.

Keywords Swarm intelligence · Hyperheuristic · Swarm hyperheuristic · Intensification versus diversification · Swarm framework

✉ Surafel Luleseged Tilahun
surafelaau@yahoo.com

Extended author information available on the last page of the article


1 Introduction

Optimization problems are common in our daily activities. Their applications and uses go beyond daily activities into complex scientific disciplines including engineering, transportation, management, agriculture, economics, medicine and even politics (Deb 2012; Taylor 2007; Manzini and Bindi 2009; Tilahun and Asfaw 2012; Kamien and Schwartz 2012; Ghate 2011; Mehrotra et al. 1998). Once these problems are formulated mathematically, different approaches can be used to find their solutions. Basically, solution methods for optimization problems can be categorized as deterministic and non-deterministic methods. The deterministic methods, also called classical methods, use a mathematical argument to arrive at the optimal solution. However, these methods are case dependent. For example, in order to apply the simplex method the problem needs to be linear. Metaheuristic algorithms are a class of non-deterministic solution methods, which try to find a good approximate solution within a reasonable time. Unlike classical methods, these algorithms are not case dependent and can be adjusted for different classes of problems. The two major categories of metaheuristic algorithms are evolutionary computing and swarm intelligence.
Swarm based metaheuristic algorithms have been the central focus of many researchers. The research community focuses on different research issues including the analysis of metaheuristic algorithms, hybridizing these algorithms, the application of these algorithms and the introduction of new algorithms. Currently, there are hundreds of metaheuristic algorithms. However, there is heavy criticism that some of the algorithms are not as novel or new as they are claimed to be (Piotrowski et al. 2014; Sörensen 2015; Weyland 2012). Some of the algorithms are in fact repetitions of existing algorithms that merely mimic a different scenario. It should be noted that the performance of such an algorithm is only as good as the performance of its predecessor. One of the main components of metaheuristic algorithms in general, and swarm intelligence in particular, is the updating step of solutions. Different updating operators, ranging from a local search to a diversification jump, can be used. Some of the 'new' algorithms use a perturbed version of existing operators. Since the performance of a metaheuristic algorithm highly depends on its operators, the novelty of an algorithm can be measured based on the novelty of these operators. Perhaps a hyperheuristic approach where the operators serve as low level heuristics can prevent this explosion of 'new' algorithms introduced through perturbed operators.
The other key point is that the performance of metaheuristic algorithms depends on the balance between the intensification and diversification search behaviour of the operators, also called exploitation and exploration. The terms exploration and exploitation are used in this paper in the sense attached to diversification and intensification, not in the sense of the evolutionary computing literature as given in Glover and Laguna (1997). Diversification is a search behaviour of searching beyond the already explored area, whereas intensification is a search behaviour that looks deep into the already explored area or neighbourhood. In Črepinšek et al. (2013), the two terms are defined as follows:

Exploration is the process of visiting entirely new regions of a search space, whilst exploitation is the process of visiting those regions of a search space within the neighborhood of previously visited points.


The diversification behaviour is very useful for escaping from local solutions. However, an unbalanced, high degree of diversification may result in a bad approximate solution. On the other hand, intensification focuses on improving the quality of a solution by doing a local search. It is the search behaviour which helps to produce the best approximate solution, but again a high and unbalanced degree of intensification may result in the solutions being trapped in local (not global) solutions and may also cause slow convergence. Hence, balancing the degree of intensification and diversification has been one of the central issues of research. It should be noted that a good intensification–diversification setup for a given problem may not be good for another, i.e., a good degree of intensification and diversification is problem dependent (Tilahun 2017). Furthermore, it also depends on the search state; for example, a high degree of diversification at the beginning and a high degree of intensification near the end seems a reasonable search behaviour. Hence, it would be ideal to couple the search with a proper learning mechanism which can adjust the degree of intensification and diversification based on the search state and the problem. The learning can be done by identifying the operators which are responsible for intensification and diversification and favouring the operators according to the needed search behaviour.
In this paper a general framework for swarm based metaheuristic algorithms is proposed. The approach is a framework where any new, existing or perturbed operators can be used. The hyperheuristic approach is used to choose operators which give a proper degree of search behaviour based on the performance of the algorithm for the problem given. Hence, the aim of this paper is basically twofold: (1) to introduce a generalized framework which may reduce the introduction of 'new' and 'novel' algorithms obtained by perturbing an operator; in the framework, any new operators can be used and guided by a higher level hyperheuristic; and (2) to use a hyperheuristic algorithm to balance the degree of intensification and diversification, possibly coupled with online or offline learning.
The paper is organized as follows. In the next section a literature review of swarm intelligence is given, followed by a discussion of hyperheuristic algorithms in Sect. 3. Section 4 describes the proposed framework. An experiment-based comparison and discussion is presented in Sect. 5, followed by conclusions and future work in Sect. 6.

2 Swarm intelligence

Metaheuristic algorithms are an essential tool for solving global optimization problems, with applications in engineering, science, operations research, economics, and other fields. These algorithms are mostly based on nature-inspired approaches. Many of them belong to a class of algorithms known as swarm intelligence algorithms (SIAs).
Roughly speaking, metaheuristic algorithms can be categorized into two classes: single-solution based algorithms and population based algorithms (Blum and Roli 2003). The single-solution based algorithms, also called trajectory methods (Consoli and Darby-Dowman 2006), include, for instance, Tabu Search (TS), Simulated Annealing (SA) and various local search methods, where only one candidate solution exists during the whole search process. In contrast, in the population based algorithms the search process begins with a population of candidate solutions, and the whole population then progresses. See the pros and cons of both the single-solution based algorithms and the population based algorithms in Glover and Kochenberger (2003), Jones et al. (2002). Two essential instances of the population based algorithms are Evolutionary Algorithms (EAs) and Swarm Intelligence Algorithms (SIAs). The most classical instance of EAs is the Genetic Algorithm (GA), which was proposed by Holland (1975) and simulates the Darwinian concept of evolution.
Swarm intelligence is a growing area in the field of optimization, and researchers have developed various algorithms by modeling the behaviours of different swarms of animals and insects such as ants, termites, bees, birds, and fishes; see some examples in Table 1. For instance, the artificial neural network is considered a simple model of the human brain; the Genetic Algorithm (GA) (Holland 1975) is based on human evolution; Evolution Strategies (ESs) (Rechenberg 1973; Schwefel 1975) are based on the concepts of biological evolution such as reproduction, mutation, recombination and selection; particle swarm optimization (PSO) is inspired by the swarming behaviour of birds and fish (Kennedy and Eberhart 1995); ant colony optimization (ACO) was inspired by the behaviour of ants (Dorigo et al. 1996); artificial bee colony (ABC) is based on bee colony optimization (Bonabeau et al. 1999; Karaboga and Basturk 2007; Karaboga and Akay 2009); the firefly algorithm was inspired by the flashing pattern of tropical fireflies (Yang 2008); the cuckoo search algorithm was based on the brood parasitism of some cuckoo species (Yang and Deb 2009); and the prey predator algorithm is based on the interaction of a predator and its prey (Tilahun and Ong 2015).
SIAs employ non-deterministic and approximate procedures to efficiently and effectively exploit and explore the search space in order to obtain near-optimal solutions (Blum and Li 2008; Blum and Merkle 2008).
In the last few decades, a huge number of new SIAs have been developed and proposed (see the references in Table 1) to solve various applications in areas such as engineering, business, economics, and finance (Vasant 2012; Toklu 2013), wireless sensor networks (Saleem et al. 2011), routing in telecommunications networks (Ducatelle et al. 2010), data mining and clustering (Martens et al. 2011; Abraham et al. 2008), big data (Cheng et al. 2013), electric power systems (Bai and Zhao 2006), discrete optimization (Krause et al. 2013; Blum and Li 2008), intrusion detection systems (Wu and Banzhaf 2010), scheduling (Pacini et al. 2014), traffic signal control (Zhao et al. 2012), bioinformatics (Kelemen et al. 2008), logistics (Zhang et al. 2015), image segmentation (Ye et al. 2012), and further applications (Yang et al. 2013; Panigrahi et al. 2011).
In this paper, we also consider one of the evolutionary algorithms, namely Evolution Strategies (ESs).

2.1 Evolution strategies

In the 1960s, Rechenberg (1973) introduced Evolution Strategies (ESs), and Schwefel (1975) further developed them. Hartmann (1974) implemented the first numerical simulations, and Schwefel (1975) made the first attempt at solving discrete optimization problems. ESs are in several ways similar to GAs. As their name suggests, ESs imitate natural evolution.


Table 1 Examples of swarm intelligence

Swarming behaviour  Entities     SIAs
Echolocation        Bats         Bat algorithm (Yang 2010)
Aggregating         Particles    Particle swarm optimization (Kennedy and Eberhart 1995)
                    Fishes       Artificial fish school algorithm (Li et al. 2004)
Foraging            Bees         Bee algorithm (Pham et al. 2006), artificial bee colony algorithm (Karaboga 2005), marriage in honey bees optimization algorithm (Abbass 2001), bee colony algorithm (Lucic and Teodorovic 2002), wasp swarm algorithm (Pinto et al. 2007), bee collecting pollen algorithm (Lu and Zhou 2008)
                    Ants         Ant colony optimization (Colorni et al. 1991), termite algorithm (Martina and Stephen 2006), ant colony system (Dorigo and Gambardella 1997), MAX–MIN ant system (Stützle and Hoos 2000), ant system (Dorigo et al. 1996)
                    Flies        Fruit fly optimization algorithm (Pan 2012)
                    Cockroaches  Roach infestation optimization (Havens et al. 2008)
Growth              Bacteria     Bacteria foraging optimization (Passino 2010)
Mating              Birds        Bird mating optimizer (Askarzadeh and Rezazadeh 2013)
Clustering          Dolphins     Dolphin partner optimization (Shiqin et al. 2009)
Brooding            Cuckoos      Cuckoo search algorithm (Yang and Deb 2009)
Climbing            Monkeys      Monkey search (Mucherino and Seref 2007)
Gathering           Fireflies    Firefly algorithm (Yang 2008), glowworm swarm optimization (Krishnan and Ghose 2005)
                    Masses       Binary gravitational search algorithm (Rashedi et al. 2010), gravitational search algorithm (Rashedi et al. 2009)
Herding             Krill        Krill herd algorithm (Gandomi and Alavi 2012)
Jumping             Frogs        Jumping frogs optimization (Garcia and Perez 2008)
Preying             Wolves       Gray wolf optimizer (Mucherino and Seref 2007)

Both GAs and ESs were proposed for various applications; for example, GAs were developed to solve discrete or integer optimization problems, while ESs were first applied to continuous parameter optimization problems associated with laboratory tests. Both GAs and ESs share the concept of imitating the evolution of individual structures via procedures of crossover, mutation, and selection. More specifically, ESs maintain a population of structures based on selection, crossover and mutation operators. Similar to GAs, ESs differ from classical optimization algorithms in that they are derivative-free methods (they utilize only the objective function), are probabilistic, and search from one population of solutions to another.
We also consider in our numerical computations a few SIAs, namely FFA, PSO, and PPA. We briefly describe these algorithms for the sake of completeness:


2.2 Firefly algorithm (FFA)

Yang (2008) developed FFA as a population-based, stochastic, nature-inspired swarm intelligence metaheuristic to solve difficult optimization problems. FFA is based on the flashing lights of fireflies in nature and relies on randomization in searching for a set of solutions (its stochastic nature). In FFA, the search process is influenced by a certain trade-off between randomization and local search; the heuristic part (lower level) generates new solutions inside the search space and picks the best solution for survival. The stochastic nature of FFA helps the search process avoid being trapped in local optima, while local search refines a candidate solution as long as improvements are found. Many researchers have applied the firefly algorithm to various applications such as digital image compression, feature selection, multimodal design problems, antenna design optimisation, NP-hard scheduling problems, multiobjective load dispatch problems, scheduling and the travelling salesman problem, classification and clustering, training neural networks, and other applications; see Yang and He (2013), Tilahun and Ngnotchouye (2017), Tilahun et al. (2017), Tilahun and Ong (2012), Tilahun and Ong (2013a), Khan et al. (2016), Ong et al. (2015) and the references therein.

2.3 Particle swarm optimization (PSO)

Eberhart and Kennedy (1995) proposed particle swarm optimization (PSO) as a population-based, stochastic optimization algorithm inspired by the intelligent collective behaviour of animals such as schools of fish or flocks of birds. The PSO algorithm imitates the social behaviour of swarms of animals, insects, herds, birds and fishes. These swarms follow a collaborative way to find food, and each individual in the swarm keeps adjusting its search pattern based on the learning experiences of itself and other individuals. PSO is simple and easy to formulate, implement, apply, extend and hybridize, so it is considered one of the cornerstones of SIAs. Since 1995, many researchers have developed several variants of PSO in order to solve various complex problems arising in engineering and optimization; see, e.g., Olsson (2011), Patnaik et al. (2017), Shi (2001), Wang et al. (2017).

2.4 Prey predator algorithm (PPA)

Tilahun and Ong (2015) and Tilahun (2013) proposed the prey predator algorithm (PPA), a swarm-based metaheuristic algorithm which simulates a predator running after its prey. In PPA, one can balance the intensification and diversification degrees through the number of predators, the best prey, a prey with better performance, or by modifying the algorithm parameters. A number of modifications have been proposed and applied; see Tilahun and Ngnotchouye (2016), Tilahun et al. (2016), Hamadneh et al. (2013), Tilahun and Ong (2013b), Tilahun et al. (2017), Ong et al. (2017), Hamadneh et al. (2018), Tilahun and Matadi (2018) and the references therein.


Fig. 1 Hyperheuristic versus metaheuristic (Tilahun 2017)

3 Hyperheuristic

A heuristic search method that seeks to automate the process of selecting, combining, generating or adapting several simpler heuristics (or components of such heuristics) to efficiently solve computational search problems is called a hyperheuristic approach (Tilahun 2017). Early studies, since the 1960s, suggested that combining different methods produces a superior method where the weakness of one method is compensated by the strength of another (Fisher and Thompson 1963; Crowston et al. 1963). Motivated mainly by this concept, hyperheuristic algorithms were introduced and used, where a new algorithm for solving a class of problems is devised by combining known simpler heuristics or generating new ones from them.
The concept of a hyperheuristic was introduced as an approach that operates at a higher level of abstraction than current metaheuristic approaches (Cowling et al. 2000). One of the main differences between a hyperheuristic and a metaheuristic is that the former works on the heuristic space whereas the latter works on the solution space of a particular problem, as presented in Fig. 1.
Hyperheuristic algorithms can be classified in different ways. One classification distinguishes heuristic generation from heuristic selection hyperheuristics. In the case of heuristic generation, new heuristics are generated and used, whereas in the case of heuristic selection, existing heuristics are used one after the other without introducing a new low level heuristic. Some hyperheuristics work by perturbing a complete solution whereas others construct a complete solution starting from a partial solution; hence, there are constructive and perturbative hyperheuristics. In addition, a hyperheuristic may involve some learning, which gives three categories: without learning, with online learning and with offline learning. A hyperheuristic is said to be without learning if it does not take feedback from the search process; otherwise it is with learning. The difference between online and offline learning is that the former uses feedback while the algorithm is running, whereas the latter uses rule based learning set up before running the algorithm. The idea is summarized in Fig. 2.
The selection approach of a lower level heuristic is an important issue in the applica-
tion of hyperheuristic algorithms. In Ozcan et al. (2010), the authors discuss different


Fig. 2 Classifications of hyperheuristic approaches (Burke et al. 2013)

selection approaches including random, random descent, greedy, random permutation descent, peckish, choice function and reinforcement learning.
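Two of these selection strategies can be sketched in a few lines of Python. This is only an illustrative sketch of the general idea, not code from the paper; the function names and the toy one-dimensional heuristics are our own:

```python
import random

def select_random(heuristics):
    """'Random' selection: pick any low level heuristic uniformly at random."""
    return random.choice(heuristics)

def select_greedy(heuristics, solution, objective):
    """'Greedy' selection: apply every low level heuristic to the current
    solution and keep the one whose result has the best (lowest) objective."""
    return min(heuristics, key=lambda h: objective(h(solution)))

# Toy one-dimensional heuristics: small moves applied to a scalar solution.
heuristics = [lambda x: x + 0.1, lambda x: x - 0.1, lambda x: x * 0.5]

# Greedy selection for minimising f(x) = x^2 starting from x = 2.0.
best = select_greedy(heuristics, 2.0, objective=lambda x: x * x)
```

Greedy selection is more expensive, since every heuristic must be evaluated, which is exactly why cheaper strategies such as random descent or choice functions are also studied.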
It has been pointed out that combining multiple rules or methods may produce a superior result compared to each individual method. Automating a heuristic sequence for a scheduling problem in the early 1960s can be considered one of the stepping stones for the study and application of these algorithms. Hyperheuristic algorithms have been used on difficult and intractable problems including exam timetabling, the travelling salesman problem and packing problems. A detailed review of the classifications as well as the applications can be found in Burke et al. (2013). In Tilahun (2017), a hyperheuristic has been used to adjust the degree of intensification and diversification of a metaheuristic algorithm by adjusting the number of solutions in the best and worst categories. Similarly, in this paper the updating operators of swarm based metaheuristic algorithms are considered as low level heuristics, and by combining and using them together a hyperheuristic approach is proposed which has a good search behaviour, i.e., diversification as well as intensification behaviours, and can also be used as a general framework where any additional operators can be used based on the problem domain.

4 Swarm hyperheuristic framework

4.1 Swarm intelligence operators

Swarm based metaheuristic algorithms are population based algorithms. Hence, multiple solutions interact and update themselves using different updating operators. The performance of a given algorithm mainly depends on the performance of these operators. Commonly and recently used operators include the following (even though the equations of the operators can be designed in different ways, the commonly used forms are given):

• A random move in the neighbourhood It is an operator which moves a solution x_i in its current neighbourhood.

x_i := x_i + (λ_min)(rand)u (1)

where λ_min is an intensification step length, rand is a random number between zero and one from a uniform distribution and u is a random unit direction.
• Following better solutions It is another operator, where a solution follows all other solutions with better performance.

x_i := x_i + (λ)(rand)(x_j − x_i) (2)

for all x_j where f(x_j) < f(x_i) (we are considering a minimization problem). This operator can be used for any solution except the one with the current best performance.
• Following the best solution It is similar to the previous operator, but here the solution follows the best solution only, not all better performing solutions.

x_i := x_i + (λ)(rand)(x_b − x_i) (3)

where f(x_b) ≤ f(x) for all x in that iteration.


• Following own best This operator lets the solution keep its historic best position and move towards it.

x_i := x_i + (λ)(rand)(x_i^best − x_i) (4)

where x_i^best is the best position of the solution x_i in the previous iterations.
• Random long jump It is similar to the first operator but with a bigger step length.

x_i := x_i + λ_max u (5)

where λ_max is a diversification step length.


• Mutation This is an operator where a solution in another neighbourhood is generated. This includes regenerating the solution.
• Run away from the worst It can be done based on a random direction or using the direction opposite to the one that would take the solution towards the worst solution. In the random direction case, a random direction v is generated, and whichever of v and −v takes the solution away from the worst solution x_w is taken as the updating direction.

x_i := x_i + (λ)(rand)u (6)

where u = argmax{norm(x_w − (x_i + λu)) : u ∈ {v, −v}}. Alternatively, running away in the opposite direction from the worst solution can be done as follows:

x_i := x_i + (λ)(rand)(x_i − x_w) (7)

• Run away from all worse solutions It is the same as the previous operator, but here the solution runs away not only from the worst solution but from all solutions worse than itself, i.e., all x_j with f(x_j) > f(x_i).


• Improving local search It is an operator which performs a local random move and accepts the result only if it is improving. Multiple random moves can be tried.

x_i := x_i + (λ_min)(rand)u (8)

where u = argmin{f(x_i + λ_min(rand)u) : u ∈ {u_1, u_2, . . . , u_m, 0}}, the u_i are random directions and 0 is the zero vector.
Many additional operators can be discussed. However, the proposed approach is a general framework and it does not depend on a particular set of operators, i.e., it is a general setting in which any selected operators (new as well as existing) can be used.
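A few of the operators above can be written out directly. The following is a minimal sketch under our own naming (the function names and parameter values are not from the paper); lam_min, lam and lam_max correspond to the step lengths λ_min, λ and λ_max in Eqs. (1), (3) and (5):

```python
import math
import random

random.seed(0)

def random_unit_direction(n):
    """A random unit direction u: a normalised Gaussian vector."""
    v = [random.gauss(0.0, 1.0) for _ in range(n)]
    norm = math.sqrt(sum(c * c for c in v))
    return [c / norm for c in v]

def local_move(x, lam_min):
    """Eq. (1): x_i := x_i + lam_min * rand * u (intensification)."""
    r, u = random.random(), random_unit_direction(len(x))
    return [xi + lam_min * r * ui for xi, ui in zip(x, u)]

def follow_best(x, x_best, lam):
    """Eq. (3): x_i := x_i + lam * rand * (x_b - x_i)."""
    r = random.random()
    return [xi + lam * r * (bi - xi) for xi, bi in zip(x, x_best)]

def long_jump(x, lam_max):
    """Eq. (5): x_i := x_i + lam_max * u (diversification)."""
    u = random_unit_direction(len(x))
    return [xi + lam_max * ui for xi, ui in zip(x, u)]

# Following the best solution moves x part of the way towards x_best.
y = follow_best([1.0, 2.0], [0.0, 0.0], lam=0.5)
```

Note how the same code skeleton serves both search behaviours: only the step length (λ_min versus λ_max) decides whether the move is an intensification or a diversification step.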
Even though operators play the major role in the performance of an algorithm, there are also additional issues, like the acceptance rule, the selection of solutions for updating and so on. Different acceptance criteria for the updated solution can be used. In most cases, the updated solution will replace the old solution, whereas in some cases the old solution will be kept with a given probability. However, in this paper, the updated solution is accepted with probability 1. In addition, all the solutions undergo updating in each iteration. It is also possible to consider these choices as additional low level heuristics.

4.2 The proposed approach

Updating operators play the major role in swarm intelligence in particular and metaheuristic algorithms in general. These operators control the diversification as well as the intensification behaviour of an algorithm. A generalized framework where the user can use any set of updating operators, and also properly adjust the degree of intensification and diversification, can be realised by using the concept of hyperheuristics.
The operators take a solution and produce another solution in each iteration. Suppose o(·) is an operator which takes a solution x^(t) in iteration t and produces another solution x^(t+1):

o(x^(t)) = x^(t+1) (9)

Some of these operators focus on increasing the degree of diversification of the algorithm and others on intensification. Note that diversification refers to a search strategy moving away from the explored region (from the current neighborhood) whereas intensification is moving within the currently explored and promising area. Whenever a swarm of solutions exchanges information about the landscape of the solution space, the solutions tend to move towards the promising region, which increases the intensification degree of the algorithm, and possibly run away from the non-promising region. A local search operator, which moves the solution in its neighbourhood, corresponds to the exploitation of the current neighbourhood, which is the promising region. However, a random move with a big jump will move the solution out of the current neighbourhood and increases the diversification behaviour of the algorithm. For both of these cases an operator of the following form can be used.


o(x) = x + λu (10)

where u is a random direction and the search behaviour of this operator depends on the parameter λ. If λ is small, then the operator favours intensification, but if it is large then the operator serves diversification. Hence, in some cases, the search behaviour of a given operator depends on the algorithm parameter values. The swarm hyperheuristic framework works on a set of operators, say O = {o_1, o_2, . . . , o_k}. The framework starts with initial solutions. These solutions can be generated randomly, pseudo-randomly or using other approaches. Updating operators from the operator pool are then selected and used in each iteration for each of the solutions. The operator selection can be done by using a hyperheuristic either with or without learning. The learning can in turn be either online or offline.
The swarm hyperheuristic framework without learning (SHH1) is the case where each of the operators has an equal probability of being selected in each iteration. That is, each of the operators will have a probability of 1/k of being used in each iteration for each of the N solutions. Hence, the probability of selection vector P will have the following form:

P = (1/k  1/k  · · ·  1/k) (11)
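In code, SHH1 amounts to nothing more than sampling an operator uniformly. The sketch below is illustrative (the names are ours; the operator pool itself is whatever set O the user supplies):

```python
import random

k = 9                      # number of updating operators, as used in this paper
P = [1.0 / k] * k          # Eq. (11): uniform selection probability vector

def shh1_select(operators):
    """SHH1: each operator is selected with equal probability 1/k."""
    return random.choice(operators)

# Select an operator index for one solution in one iteration.
op_index = shh1_select(range(k))
```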

The second framework is the swarm hyperheuristic framework with offline learning (SHH2). Offline learning is a learning approach where the selection probability of the operators changes with the iteration in a way determined before the run. With the increase of the iteration count, the intensification of the algorithm needs to increase. Hence, starting from the initial probability of selection, the probabilities need to move towards a predetermined final probability of selection. The probability of selection is therefore a function of the iteration, P(t). Equation (12) shows a linear offline learning function for the selection probability with final value P_f by the last iteration t_max:

P(t) = ((k P_f − 1)t + t_max − k P_f) / (k(t_max − 1)) (12)

The final probability of selection will be determined before running the algorithm and based on the needed search behaviour. One possible choice is that, with the increase of the iteration count, diversification should decrease and more intensification, towards refining the solution, should be favoured. Hence, the final selection probability of the diversification operators will be set smaller than that of the intensification operators.
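The endpoints of the linear schedule in Eq. (12) can be checked directly in code. This sketch (function name and the chosen P_f value are ours, for illustration) confirms that the probability starts at 1/k when t = 1 and reaches P_f at t = t_max:

```python
def offline_probability(t, t_max, k, p_final):
    """Eq. (12): linear schedule for one operator's selection probability,
    moving from 1/k at t = 1 to p_final at t = t_max."""
    return ((k * p_final - 1) * t + t_max - k * p_final) / (k * (t_max - 1))

k, t_max = 9, 100
p_final = 0.05   # hypothetical final probability for a diversification operator
start = offline_probability(1, t_max, k, p_final)       # equals 1/k
end = offline_probability(t_max, t_max, k, p_final)     # equals p_final
```

Choosing p_final below 1/k makes the operator less likely over time, which is how a diversification operator would be scheduled; an intensification operator would get p_final above 1/k.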
The online learning can also be done based on the results after each iteration, resulting in the swarm hyperheuristic framework with online learning (SHH3). At the beginning and end of each iteration the performance of the algorithm is measured. This performance has two components: intensification performance and diversification performance. The intensification performance is checked based on the best objective function values before and after the iteration. That is, f(x_b^(t−1)) is the objective function value of the best solution x_b at the beginning of the t-th iteration and f(x_b^t) at


Table 2 Learning description (the upward arrow (↑) refers to increasing that particular search behaviour)

                              d^(t−1) > d^t                           d^(t−1) ≤ d^t
f(x_b^(t−1)) > f(x_b^t)       ↑ Diversification                       –
f(x_b^(t−1)) ≤ f(x_b^t)       ↑ Diversification, ↑ Intensification    ↑ Intensification

the end of the iteration. If f xbt−1 > f xbt , for a minimization problem, then the
algorithm did a good intensification, otherwise not. The diversification performance
can be done based on how disperse the solutions are from the central point, where
the central point is their average location, as discussed in Tilahun (2017). That is the
dispersion distance, d, can be computed using:


$$d = \sum_{i=1}^{N} \mathrm{distance}(x_i, x_c) \qquad (13)$$

where the central point $x_c = \frac{1}{N}\sum_{i=1}^{N} x_i$, and any distance function, $\mathrm{distance}$, can be used. If $d^{t-1} > d^t$, then the diversification property has decreased. Hence, after running the algorithm for an iteration there will be four possibilities, as shown in Table 2.
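A minimal sketch of the dispersion measure of Eq. (13), assuming Euclidean distance (the paper allows any distance function):

```python
import math

def dispersion(swarm):
    """Eq. (13): sum of distances of the solutions from their
    average location (the central point x_c)."""
    n, dim = len(swarm), len(swarm[0])
    centre = [sum(x[j] for x in swarm) / n for j in range(dim)]
    return sum(math.dist(x, centre) for x in swarm)

# Comparing d at consecutive iterations (d^{t-1} vs d^t) tells whether
# the swarm has contracted, i.e. whether diversification has decreased.
d = dispersion([[0.0, 0.0], [2.0, 0.0], [1.0, 1.0], [1.0, -1.0]])  # centre (1, 0); d = 4.0
```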


Increasing a particular search behaviour, either intensification or diversification, refers to increasing the selection probability of the corresponding operators. Suppose we want to increase the diversification search behaviour; then $P^t(o_i) = \gamma P^{t-1}(o_i)$ for all $i$, where $o_i$ is a diversification operator and $\gamma > 1$. The new probability vector will then be normalized, $P \leftarrow P / \sum_i P(o_i)$, so that it sums to one. The same concept will be used to increase the intensification. The value of $\gamma$ plays a vital role in the convergence of the learning vector. If it is assigned to be too big, then the vector may quickly converge to certain values, giving a biased search by eliminating one of the search behaviours. Hence, a proper value needs to be used. In this paper $\gamma$ is set to 1.1.
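The reward step can be sketched as follows (a hypothetical four-operator example; only the multiply-and-renormalize rule is from the text):

```python
def reinforce(prob, indices, gamma=1.1):
    """Multiply the selection probability of the operators in `indices`
    by gamma > 1, then renormalize the vector so it sums to one."""
    p = list(prob)
    for i in indices:
        p[i] *= gamma
    total = sum(p)
    return [v / total for v in p]

# Reward the (hypothetical) diversification operators 0 and 1, as the
# first column of Table 2 prescribes when the dispersion has decreased.
p = reinforce([0.25, 0.25, 0.25, 0.25], indices=[0, 1])
```

Because the vector is renormalized each time, rewarding one behaviour implicitly lowers the probability of the other, which is what drives the bias discussed above when $\gamma$ is too large.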
Note that not all the operators are suitable for all the solutions. For example, the best solution cannot follow better solutions because there are none. Hence, when it comes to the best solution, some of the operators may not work properly and their probability of selection will be kept zero throughout the iterations (Fig. 3).
The low-level heuristics used here are the solution updating operators. However, we would like to point out that an additional set of heuristics, such as acceptance strategies, can also be coupled with them, i.e., accepting the updated solution based on different acceptance criteria.
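Putting the pieces together, the framework of Fig. 3 can be sketched as the loop below (all names are illustrative; `update_prob` is `None` for SHH1, the schedule of Eq. (12) for SHH2, and the Table 2 rule for SHH3; the greedy acceptance shown is just one of the possible acceptance strategies mentioned above):

```python
import random

def swarm_hyperheuristic(f, swarm, operators, prob, t_max, update_prob=None):
    """Minimization: each solution is updated by a low-level operator
    drawn according to the current selection probabilities."""
    for t in range(1, t_max + 1):
        for i, x in enumerate(swarm):
            op = random.choices(operators, weights=prob, k=1)[0]
            candidate = op(x, swarm, f)
            if f(candidate) <= f(x):          # simple greedy acceptance
                swarm[i] = candidate
        if update_prob is not None:           # SHH2 / SHH3 learning hook
            prob = update_prob(prob, t, swarm, f)
    return min(swarm, key=f)
```

The hyperheuristic layer lives entirely in `prob` and `update_prob`; the low-level operators themselves never need to know which variant of the framework is running.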

5 Experimental study

5.1 Experimental setup

The simulation experiment is conducted on a laptop with an Intel(R) Core(TM) i5-6200U CPU at 2.30 GHz and a 64-bit operating system. In doing a simulation-based comparison and experiment, the settings should be the same to validate the reported results

Fig. 3 Swarm hyperheuristic framework (if P = P(t) is constant then it is without learning; if it is updated using Eq. (12) then it is with offline learning; and if it is updated based on the algorithm's performance as discussed in Table 2 then it is with online learning)

(Crepinek et al. 2014). Hence, the same settings and the same machine are used for all the experiments.

5.1.1 Algorithms

The experiment uses four algorithms and the three proposed frameworks on selected benchmark problems. The three versions of the proposed algorithm are used: swarm hyperheuristic without learning (SHH1), with offline learning (SHH2) and with online learning (SHH3). Evolutionary strategy (ES) (Rechenberg 1973) has different versions; for our simulation the (1 + m)-evolutionary strategy is used. That is, a parent gives birth to m solutions and the best replaces the parent. The other algorithm used is particle swarm optimization (PSO) (Kennedy and Eberhart 1995), which is considered one of the foundations of swarm intelligence algorithms. Firefly algorithm (FFA) is another algorithm which became popular due to


its easy implementation and effectiveness (Yang 2008). Hence, firefly algorithm is the other algorithm selected for simulation. The last algorithm is the prey predator algorithm (PPA) (Tilahun and Ong 2015). It is a swarm-based algorithm which has become useful in different applications.
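As a rough sketch, the (1 + m)-evolutionary strategy described above can be written as follows (the Gaussian mutation with a fixed step size `sigma` is our simplifying assumption; classical ES variants adapt the step size):

```python
import random

def one_plus_m_es(f, parent, m, sigma, t_max):
    """(1 + m)-ES for minimization: the parent produces m mutated
    children per iteration, and the best of parent and children survives."""
    for _ in range(t_max):
        children = [[xi + random.gauss(0.0, sigma) for xi in parent]
                    for _ in range(m)]
        parent = min(children + [parent], key=f)
    return parent
```

Because the selection is elitist, the objective value of the surviving parent never increases across iterations.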

5.1.2 Benchmark problems

Two sets of benchmark problems are used for the simulation. The first set is unconstrained in the sense that the variables are bounded without additional constraints, and the second is constrained, where different sets of constraints are applied on the feasible region. Ten benchmark problems are selected from each of these classes, with the problem dimension varying from two to fifty.
Unconstrained (bounded) benchmark problems The ten set of problems from the first
class of benchmark problems, unconstrained but bounded problems, are listed below:
1. f 1 The first problem is the Rastrigin function (Molga and Smutnicki 2005).
2. f 2 Shifted and rotated Ackley’s function from CEC 2005 is taken as one of the
benchmark problems (Suganthan et al. 2005).
3. f 3 The fourth problem, F4 , of CEC 2005, which is the Shifted Schwefel function, is selected as the third benchmark problem (Suganthan et al. 2005).
4. f 4 From CEC 2005, Shifted and rotated Weierstrass function is selected as the
fourth benchmark problem (Suganthan et al. 2005).
5. f 5 Rotated Rosenbrock function, the fourth problem from CEC 2014, is selected as one of the benchmarks (Liang et al. 2013).
6. f 6 The twelfth problem from CEC 2014, F12 , which is Shifted and rotated Katsuura
function, is the other benchmark problem selected (Liang et al. 2013).
7. f 7 Shifted and rotated Happycat function from CEC 2014, F13 , is selected (Liang et al. 2013).
8. f 8 The other benchmark is selected from CEC 2014, F16 (Liang et al. 2013). It is called the Shifted, rotated and expanded Scaffer's function.
9. f 9 Tilahun function is a discontinuous function which is used as the other bench-
mark problem selected (Tilahun 2017).
10. f 10 XinSheYang04 is a non-separable, continuous and multimodal problem (Gavana 2017).
Constrained benchmark problems Similarly, another ten problems from the second class of benchmark problems, constrained problems, are selected as follows:
1. g1 The seventh benchmark problem (g07) from the set of constrained benchmark problems in CEC 2006 (Liang et al. 2006) is taken as one of the constrained benchmark problems for our simulation.
2. g2 The eleventh constrained benchmark problem (g11) from CEC 2006 is the other benchmark problem selected (Liang et al. 2006).
3. g3 The eighth problem given in CEC 2006 (g08) is the other problem selected (Liang et al. 2006).
4. g4 The twenty-fourth problem (g24) is taken as the other benchmark problem from CEC 2006 (Liang et al. 2006).


5. g5 The fifth constrained problem is the thirty-seventh benchmark problem (p37) listed in Hock and Schittkowski (1980).
6. g6 The sixth function from CEC 2006 (g06) is the other constrained problem selected (Liang et al. 2006).
7. g7 The other problem used for our simulation is the fourteenth constrained benchmark problem (g14) from Liang et al. (2006).
8. g8 From CEC 2006, the fifteenth problem (g15) is selected as the eighth constrained benchmark problem (Liang et al. 2006).
9. g9 The second problem from CEC 2006 (g02) is the other constrained benchmark problem used (Liang et al. 2006).
10. g10 Problem 105 from Hock and Schittkowski (1980) is selected as the last constrained benchmark problem.

5.1.3 Parameter tuning

The algorithm parameters are set based on the problem dimension and the size of the feasible region. The initial number of solutions N is one of the parameters which needs to be larger for a wider feasible region and also for a higher dimension.

$$N \propto D \ \wedge\ N \propto W \qquad (14)$$

where $W$ is the width of the feasible region, which is used to determine how wide the feasible region is, and $D$ is the dimension of the problem. $W$ can be set by calculating the maximum possible distance in the feasible region. Here, $W = \sqrt{\sum_{i=1}^{D}(x_{\max,i} - x_{\min,i})^2}$, for $x_{\max,i}$ and $x_{\min,i}$ being the maximum and the minimum possible values of $x_i$ in the feasible region. If $x_{\min} \le x_i \le x_{\max}$ for all $i$, then $W = (x_{\max} - x_{\min})\sqrt{D}$. In this paper $N$ is approximated using $N \approx WDb$, where $b$ is the number of solutions per unit width of the feasible region. For our simulation, $b$ is set to one.
The diversification as well as the intensification step length, $\lambda_{\max}$ and $\lambda_{\min}$, is set up to increase with the size of the feasible region as well as the dimension of the problem. Furthermore, the diversification step length should be larger than the intensification step length. In this paper, the diversification step length is approximated to be about twice the intensification step length, $\lambda_{\max} \approx 2\lambda_{\min}$. The intensification step length $\lambda_{\min}$ is approximated as the norm of a step vector each of whose components is about one fifth of the difference between the maximum and minimum values of the feasible region, i.e. $\lambda_{\min} \approx \sqrt{\sum_{i=1}^{D}\left(\frac{x_{\max,i}-x_{\min,i}}{5}\right)^2}$. The number of directions for local search, $m$, which is also the number of children for the evolutionary strategy, is set to increase with the dimension. For our simulation $m \approx 2D$. The parameter values used are summarized and given in Table 3.
The termination criterion used is a maximum number of iterations and it is set to
be 100.
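The parameter rules above can be collected into one helper (a sketch under the reconstructed formulas; for instance, for the 2-dimensional problem f 8 with bounds [−100, 100] it gives λmin ≈ 56.6 and N ≈ 566, close to the rounded values 55 and 565 in Table 3):

```python
import math

def tuning_parameters(lower, upper, b=1.0):
    """Parameter rules of Sect. 5.1.3 for box bounds lower/upper:
    width W, population size N ~ W*D*b, step lengths, and m ~ 2D."""
    D = len(lower)
    W = math.sqrt(sum((u - l) ** 2 for l, u in zip(lower, upper)))
    N = round(W * D * b)
    lam_min = math.sqrt(sum(((u - l) / 5) ** 2 for l, u in zip(lower, upper)))
    return {"W": W, "N": N, "lam_min": lam_min,
            "lam_max": 2 * lam_min, "m": 2 * D}
```

Note that with these formulas $\lambda_{\min}$ is simply $W/5$, so a single width computation fixes both step lengths.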


Table 3 Parameter setup (λ = [0.1 0.1 19 19 15 4 4 4])

D λmin λmax m N D λmin λmax m N

f1 30 11 22 60 1700 g1 10 0.6 1.2 20 35


f2 30 70 140 60 10,500 g2 2 0.5 1 5 10
f3 20 175 350 40 17,900 g3 2 2.8 5.6 5 30
f4 2 0.25 0.5 5 5 g4 2 0.5 1 5 15
f5 10 125 250 20 6325 g5 3 10.2 20.1 8 150
f6 10 125 250 20 6325 g6 2 26.5 53 5 270
f7 20 175 350 40 17,900 g7 10 6.3 12.6 20 320
f8 2 55 110 5 565 g8 3 3.5 7 8 55
f9 50 15 30 100 4600 g9 20 9 18 40 900
f 10 40 25 50 80 5000 g10 8 λ 2λ 46 1100

5.2 Results and discussion

5.2.1 Friedman test

To compare the performance of the algorithms based on the selected benchmark problems, a non-parametric statistical test called the Friedman test is used (Villegas 2011). The test is used to compare $p$ algorithms based on $q$ benchmark problems (Derrac et al. 2011), for $q > 1$ and $p > 1$. Based on their performance, the algorithms will be ranked on each problem; if there is a tie, the tied algorithms will share the rank, i.e., if $n$ algorithms are equally ranked at $r$, then each will have a rank of $r + \frac{n-1}{2}$. Then the test statistic, $T$, can be computed using Eq. (15).
$$T = \frac{(q-1)\left(B - \frac{qp(p+1)^2}{4}\right)}{A - B} \qquad (15)$$

where $A = \sum_{i=1}^{q}\sum_{j=1}^{p} r_{ij}^2$ and $B = \frac{1}{q}\sum_{j=1}^{p}\left(\sum_{i=1}^{q} r_{ij}\right)^2$, for $r_{ij}$ being the rank of algorithm $j$ on problem $i$.
The null hypothesis, which is “there is no significant difference between the per-
formance of the algorithms”, will be rejected with a level of significance α, if T is
greater than the 1 − α quantile of the F-distribution with p − 1 and ( p − 1)(q − 1)
degrees of freedom.
If there is a significant difference between the algorithms, then a pairwise comparison will be done to see which algorithm performs better. The performance of two algorithms, say $i$ and $j$, is significantly different if Eq. (16) holds.

$$|r_i - r_j| > t_{1-\frac{\alpha}{2}} \sqrt{\frac{2q(A-B)}{(p-1)(q-1)}} \qquad (16)$$

where $r_i$ is the sum of the ranks of algorithm $i$ over all the problems, and $t_{1-\frac{\alpha}{2}}$ is the $1-\frac{\alpha}{2}$ quantile of the t-distribution with $(p-1)(q-1)$ degrees of freedom.


Fig. 4 Simulation results for some of the benchmark problems. a Performance of the algorithms on f 1 , b
performance of the algorithms on f 2 , c performance of the algorithms on f 5 , d performance of the algorithms
on f 7 , e performance of the algorithms on g2 , f performance of the algorithms on g3 , g performance of the
algorithms on g4 , h performance of the algorithms on g8

5.2.2 Experimental results

Each simulation is run thirty times and the average objective value and its standard deviation are computed. Furthermore, the average CPU time is also recorded. The results of running the simulation for the two classes of problems are given in Table 4 and Fig. 4.

Simulation results for the unconstrained benchmark problems Based on the simulation
results given in Table 4, a Friedman test is used to compare the performance for the


first class of problems. To do so, their ranks have been computed and presented as
given in Table 5.
By applying Eq. (15), the test statistic is T = 19.9866. In addition, for α = 0.01, $F_{1-\alpha,\, p-1,\,(p-1)(q-1)} = 3.24$. Hence, there are algorithms with a significant performance difference.
The right-hand side of the expression in Eq. (16) is 15.0557. Hence, to do the comparison, consider the sum rank differences given in Table 6.
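Both reported numbers can be reproduced from the ranks in Table 5 (a sketch; the t-quantile 2.670 for α = 0.01 with 54 degrees of freedom is hard-coded here rather than computed):

```python
ranks = {  # Table 5: rank of each algorithm on problems f1..f10
    "SHH1": [4, 2, 3, 4, 3, 6,   4,   3, 4, 2],
    "SHH2": [3, 4, 1, 2, 4, 3.5, 4,   2, 3, 4],
    "SHH3": [1, 1, 2, 1, 1, 1.5, 1,   1, 1, 3],
    "ES":   [6, 5, 6, 3, 5, 3.5, 2,   6, 5, 6],
    "FFA":  [5, 7, 7, 6, 7, 7,   6.5, 5, 7, 7],
    "PPA":  [2, 3, 4, 7, 2, 1.5, 4,   4, 2, 1],
    "PSO":  [7, 6, 5, 5, 6, 5,   6.5, 7, 6, 5],
}
p, q = len(ranks), 10                                   # 7 algorithms, 10 problems
A = sum(r * r for row in ranks.values() for r in row)
B = sum(sum(row) ** 2 for row in ranks.values()) / q
T = (q - 1) * (B - q * p * (p + 1) ** 2 / 4) / (A - B)  # Eq. (15): about 19.9866
rhs = 2.670 * (2 * q * (A - B) / ((p - 1) * (q - 1))) ** 0.5  # Eq. (16): about 15.056
```

As a sanity check, each column of ranks sums to $1 + 2 + \dots + 7 = 28$, which is what the tie-sharing rule guarantees.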
The statistical analysis of the simulation results shows that the swarm hyperheuristic framework has a superior performance in general, with the online learning swarm hyperheuristic framework outperforming the others, followed by the swarm hyperheuristic framework with offline learning. The prey predator algorithm is as good as the swarm hyperheuristic without learning.
By performing a similar analysis, the swarm hyperheuristic frameworks are slightly more computationally expensive than all the algorithms except evolutionary strategy. Evolutionary strategy is found to be expensive, whereas particle swarm optimization has the minimum running time of all the algorithms, followed by firefly algorithm and the prey predator algorithm.

Simulation results for the constrained benchmark problems Similar to the unconstrained problems, the simulations for the constrained benchmark problems were done with the same setup. Feasibility repairing along with a penalty method is used to deal with the constraints (Michalewicz and Schoenauer 1996). For the fifth benchmark problem the optimal value reported in the literature had an objective function value of − 3300 (Hock and Schittkowski 1980); however, in our simulation a better result, x = [20.000000000000000, 13.000193415166255, 12.999806584833745], is found with an objective function value of − 3379.999999251812. Similarly, for the tenth problem the final result is found to be x = [0.3498381939584, 0.4563325981942, 129.4332632697069, 160.8866636540198, 216.5181103181580, 11.7238874957298, 17.5273686033379, 21.3642333473341] with 1136.881528914754 being the objective function value, which is better than the 1138.416240 reported in Hock and Schittkowski (1980).
The Friedman test is used to compare the algorithms with the proposed framework based on the simulation results. By using an analysis similar to that given for the unconstrained bounded problems, the test statistic becomes 8.22, which is greater than the corresponding F-distribution value of 3.16. Hence, there is a significant performance difference among the methods used. For the pairwise comparison of the methods, the table of the differences of the sum ranks of the algorithms is given in Table 7.
Furthermore, the right-hand side of Eq. (16) becomes 17.9330. Hence, any entry in the table greater than this value in magnitude shows a significant difference between the corresponding algorithms. As a result, in general, the swarm hyperheuristic approach performs significantly better than the metaheuristic algorithms. In particular, the versions with learning (both online and offline) outperform all the algorithms except the prey predator algorithm. The performance of firefly algorithm is the worst, followed by particle swarm optimization.
Furthermore, performing a similar analysis using the Friedman test shows that the running time difference between the algorithms is not significant.

Table 4 Simulation results; CPU represents the average CPU time per iteration (i.e., CPU time/100)

f1 f2 f3 f4 f5 f6 f7 f8 f9 f 10

SHH1
μ 0.1066 − 139.7319 − 448.5914 90.1987 417.8681 1200.8 1300.5 1600.2 − 107.8263 − 0.9086
SD 0.0533 0.0713 0.0094 0.1470 23.2050 0.1729 0.1359 0.0005 11.2274 0.2774
CPU 1.5799 7.2858 1.5531 0.0825 2.7787 16.9003 22.0682 1.5233 0.3416 0.2922
SHH2
μ 0.0564 − 139.5945 − 449.9792 90.1335 420.0397 1200.6 1300.5 1600.1 − 109.5028 − 0.8130
SD 0.0442 0.0256 0.1023 0.0671 10.9467 0.1991 0.2127 0.0001 12.3498 0.0129
CPU 1.2845 7.2039 1.8660 0.0828 3.1836 19.6633 21.5873 1.5678 0.3181 0.2787
SHH3
μ 0.0014 − 139.7519 − 449.0512 90.1205 410.6372 1200.2 1300.1 1600 − 111.6130 − 0.9066
SD 0.0049 0.0747 0.0523 0.1135 4.7322 0.1633 0.0432 0.0078 5.6338 0.0017
CPU 2.1286 7.1044 2.1705 0.0891 3.2848 19.9974 28.2704 1.6327 0.3901 0.3690
ES
μ 0.1705 − 139.5846 − 447.0759 90.1804 421.8853 1200.6 1300.3 1600.5 − 105.1408 − 0.3118
SD 0.3054 0.0012 0.2033 0.0359 0.5550 0.1331 0.0491 0.0029 3.1606 0.1186
CPU 9.7987 9.1048 6.3139 0.1406 10.6614 64.5730 88.8156 2.7507 0.9310 1.0775
FFA
μ 0.1529 − 139.1088 − 446.8995 90.2098 426.8486 1202.9 1300.6 1600.4 − 103.1478 − 0.1106
SD 0.2440 0.1661 0.4591 0.1550 1.3654 0.5220 0.0657 0.0017 13.3923 0.1133
CPU 0.3112 6.2453 1.3353 0.0412 1.1704 10.7898 15.4032 0.8322 0.9839 0.8247

Table 4 continued

f1 f2 f3 f4 f5 f6 f7 f8 f9 f 10

PPA
μ 0.0172 − 139.7188 − 448.0370 90.2101 412.3537 1200.2 1300.5 1600.3 − 110.8199 − 0.9786
SD 0.0004 0.0005 0.2987 0.0363 18.7433 0.0311 0.0275 0.0045 14.6388 0.0042
CPU 0.7199 6.7594 0.78558 0.0454 1.0019 6.4761 7.0202 0.7157 0.2488 0.1575
PSO
μ 0.1931 − 139.5165 − 4479.5058 90.2011 423.6881 1200.7 1300.6 1600.7 − 104.9013 − 0.5214
SD 1.8589 0.0013 0.1295 0.1363 26.6320 0.2618 0.0147 0.0001 10.1374 0.6099
CPU 0.3103 6.57813 0.6061 0.0402 0.9397 5.5772 5.3225 0.6995 0.0783 0.0284

g1 g2 g3 g4 g5 g6 g7 g8 g9 g10

SHH1
μ − 0.9709 0.75 − 0.0944 − 5.4071 − 3379.99 − 6960.185 − 47.4561 961.7152 − 0.4466 1141.2
SD 0.0183 0.0091 0.0010 0.0649 0.0162 0.0280 0.0023 0.0001 0.0128 1.7478
CPU 0.0063 0.0038 0.0055 0.0055 0.0175 0.0038 0.0130 0.0015 0.0588 0.4578
SHH2
μ − 0.9896 0.75 − 0.0942 − 5.4867 − 3379.99 − 6950.185 − 47.7556 961.7152 − 0.5231 1139.0
SD 0.0049 0.0235 0.0015 0.0270 0.0169 0.0282 0.0016 0.0001 0.0129 1.2350
CPU 0.0074 0.0044 0.0059 0.0021 0.0177 0.0040 0.0134 0.0019 0.0601 0.4862
SHH3
μ − 0.9990 0.75 − 0.0958 − 5.5068 − 3379.99 − 6960.185 − 47.7540 961.7152 − 0.6512 1139.0
SD 0.0005 0.0260 0.0021 0.0047 0.0269 0.0288 0.0013 0.0001 0.0154 1.2261
CPU 0.0052 0.0037 0.0053 0.0019 0.0232 0.0044 0.0122 0.0015 0.0793 0.5340
Table 4 continued

g1 g2 g3 g4 g5 g6 g7 g8 g9 g10

ES
μ − 0.9388 0.75 − 0.0932 − 5.5038 − 3378.71 − 6950.185 − 47.7472 961.7152 − 0.4450 1437.4
SD 0.0181 0.0370 0.0008 0.0013 0.0251 0.0280 0.0053 0.0001 0.0268 1.2032
CPU 0.0038 0.0009 0.0016 0.0018 0.0057 0.0037 0.0180 0.0023 1.5826 1.8207
FFA
μ − 0.9057 0.7503 − 0.0889 − 5.4024 − 3352.92 − 6886.5378 − 47.7261 961.7164 − 0.4923 1142.0
SD 0.0867 0.0580 0.1638 0.0947 0.0848 0.0103 0.0017 0.0001 0.0256 1.0080
CPU 0.0011 0.0010 0.0021 0.0009 0.0014 0.0027 0.0102 0.0004 0.0171 0.1663
PPA
μ − 0.9978 0.75 − 0.0958 − 5.5072 − 3379.61 − 6950.185 − 47.7445 961.7152 − 0.4450 1137.0
SD 0.0011 0.0315 0.0031 0.0005 0.2785 0.0384 0.0051 0.0001 0.0142 0.3412
CPU 0.0020 0.0003 0.0007 0.0006 0.0008 0.0011 0.0052 0.0004 0.0543 0.1931
PSO
μ − 0.9595 0.75 − 0.0956 − 5.5046 − 3367.55 − 6950.185 − 47.6900 961.7152 − 0.4450 1139.3
SD 0.0276 0.0346 0.0001 0.0032 0.1362 0.0024 0.0206 0.0001 0.0200 1.4499
CPU 0.0010 0.0003 0.0006 0.0005 0.0005 0.0008 0.004 0.0003 0.0638 0.1612


Table 5 Rank of the algorithms against each problem

       f1   f2   f3   f4   f5   f6    f7    f8   f9   f10
SHH1 4 2 3 4 3 6 4 3 4 2
SHH2 3 4 1 2 4 3.5 4 2 3 4
SHH3 1 1 2 1 1 1.5 1 1 1 3
ES 6 5 6 3 5 3.5 2 6 5 6
FFA 5 7 7 6 7 7 6.5 5 7 7
PPA 2 3 4 7 2 1.5 4 4 2 1
PSO 7 6 5 5 6 5 6.5 7 6 5

Table 6 Sum rank differences of the algorithms for pairwise comparison for the unconstrained problems

SHH1 SHH2 SHH3 ES FFA PPA PSO

SHH1 0 4.5 21.5 − 12.5 − 29.5 4.5 − 23.5


SHH2 − 4.5 0 17 − 17 − 34 0 − 28
SHH3 − 21.5 − 17 0 − 34 − 51 − 17 − 45
ES 12.5 17 34 0 − 17 17 − 11
FFA 29.5 34 51 17 0 34 6
PPA − 4.5 0 17 − 17 − 34 0 − 28
PSO 23.5 28 45 11 −6 28 0

Table 7 Sum rank differences of the algorithms for pairwise comparison for the constrained problems

       SHH1    SHH2    SHH3    ES      FFA     PPA     PSO

SHH1   0       8       19.5    − 8     − 22.5  9.5     − 3
SHH2   − 8     0       11.5    − 16    − 30.5  1.5     − 11
SHH3   − 19.5  − 11.5  0       − 27.5  − 42    − 10    − 22.5
ES     8       16      27.5    0       − 14.5  17.5    5
FFA    22.5    30.5    42      14.5    0       32      19.5
PPA    − 9.5   − 1.5   10      − 17.5  − 32    0       − 12.5
PSO    3       11      22.5    − 5     − 19.5  12.5    0

5.2.3 Discussion and future works

The proposed swarm hyperheuristic framework is a framework where any set of operators can be used to adjust the proper search behaviour, intensification and diversification, possibly guided by a learning approach. Since it is a framework, any new, hybridized, modified as well as existing operators can be used as low-level heuristics. A simulation-based comparison on unconstrained as well as constrained benchmark problems shows that the proposed framework can work better when compared to algorithms which use some of the operators that are part of the low-level heuristics of the framework. Since the proposed approach focuses on swarm-based algorithms, the study of a different set of operators as well as neighborhood structures is a possible future work and can be studied further. In SHH with learning, the search behaviours are


adjusted and the learning improves with the iterations. Hence, one possible future work is to study the learning performance and efficiency as a function of the number of iterations. In our simulation the number of iterations is fixed at 100. However, intuitively we may say the learning should become finer as the number of iterations gets larger.
Another possible future work is to expand the low-level heuristic types and to study whether the performance of the framework depends on the size of the set of low-level heuristics (updating operators). Even though updating operators play a crucial role, the low-level heuristics can also be expanded with additional low-level heuristics like acceptance rules and selection for updating. Hence, (1) by adding more operators, the effect on the performance of the framework can be studied, and (2) by adding an additional class of low-level heuristics, in addition to the updating operators, the performance of the framework can be studied further.
The learning effect of the operators depends on the value of γ. As mentioned, a bigger value of γ makes the learning converge quickly, and the smallest value, γ = 1, eliminates the learning aspect of the framework, reducing it to the version without learning. Hence, the effect of this parameter can be studied further, supported by simulation runs. In addition, a dynamic scheme, where the value of this parameter depends on either the number of iterations or the performance of the algorithm, can be studied further. Another possible future work is a mixed learning approach, including the no-learning approach. A guiding approach on when to use learning, which type of learning to use, and when not to use learning can be studied further.
Expanding the set of low-level heuristics in such a way that it includes evolutionary algorithm operators can also be another future work, along with testing the approach on high dimensional and large constrained optimization problems (Ali and Tawhid 2016a). Applying and testing the framework in different applications is also planned, including applications in potential energy for molecules (Ali and Tawhid 2017; Tawhid and Ali 2017) and engineering problems (Ali and Tawhid 2016a), and integer programming and minimax problems (Ali and Tawhid 2016b, c; Tawhid and Ali 2017; Tawhid et al. 2016), by employing the various constraint handling strategies in Wang et al. (2016).

6 Conclusions

In this paper a swarm hyperheuristic framework is proposed. The framework uses a set of updating operators as low-level heuristics. It is intended that the framework can be a good way of designing a custom-made algorithm for the problem domain at hand. That in turn helps to reduce the 'explosion' of 'new' algorithms, where some of these algorithms are in fact a repetition of existing algorithms or a hybrid of existing algorithms. The other major contribution of the paper is the use of the hyperheuristic to guide the search behaviour, focusing either on intensification or diversification based on the performance of the algorithm. That can be done by coupling the approach with a learning approach, which can be either online or offline. Simulation results on the selected twenty benchmark problems, ten unconstrained and ten constrained,


of dimensions ranging from two to fifty, show that the proposed framework is indeed promising. Possible future works have also been discussed.

Acknowledgements The first author would like to acknowledge a support from the IMU—Simons African
Fellowship Program 2017 while visiting the Department of Mathematics and Statistics, Thompson Rivers
University, BC, Canada. The research of the 2nd author is supported in part by the Natural Sciences and
Engineering Research Council of Canada (NSERC).

References
Abbass, H.A.: MBO: Marriage in honey bees optimization: a haplometrosis polygynous swarming approach.
In: Proceedings of the 2001 IEEE Congress on Evolutionary Computation, pp. 207–214 (2001)
Abraham, A., Das, S., Roy, S.: Swarm intelligence algorithms for data clustering. In: Soft Computing for
Knowledge Discovery and Data Mining, pp. 279–313 (2008)
Ali, A.F., Tawhid, M.A.: A hybrid PSO and DE algorithm for solving engineering optimization problems.
Appl. Math. Inf. Sci. 10(2), 431–449 (2016)
Ali, A.F., Tawhid, M.A.: Hybrid simulated annealing and pattern search method for solving minimax and
integer programming problems. Pac. J. Optim. 12(1), 151–184 (2016)
Ali, A.F., Tawhid, M.A.: A hybrid cuckoo search algorithm with Nelder Mead method for solving global
optimization problems. SpringerPlus 5(1), 473 (2016)
Ali, A.F., Tawhid, M.A.: Hybrid particle swarm optimization and genetic algorithm for minimizing potential
energy function. Ain Shams Eng. J. 8(2), 191–206 (2017)
Askarzadeh, A., Rezazadeh, A.: A new heuristic optimization algorithm for modelling of proton exchange
membrane fuel cell: bird mating optimizer. Int. J. Energy Res. 37, 1196–1204 (2013)
Bai, H., Zhao, B.: A survey on application of swarm intelligence computation to electric power system. In:
Intelligent Control and Automation, 2006. WCICA 2006. The Sixth World Congress on, Vol. 2, pp.
7587–7591. IEEE (2006)
Blum, C., Li, X.: Swarm intelligence in optimization. In: Blum, C., Merkle, D. (eds.) Swarm Intelligence,
pp. 43–85. Springer, Berlin (2008)
Blum, C., Merkle, D.: Swarm Intelligence: Introduction and Applications. Springer, Berlin (2008)
Blum, C., Roli, A.: Metaheuristics in combinatorial optimization: overview and conceptual comparison.
ACM Comput. Surv. (CSUR) 35, 268–308 (2003)
Bonabeau, E., Dorigo, M., Theraulaz, G.: Swarm Intelligence: From Natural to Artificial Systems. Oxford University Press, New York (1999)
Burke, E.K., Gendreau, M., Hyde, M., Kendall, G., Ochoa, G., Ozcan, E., Qu, R.: Hyper-heuristics: a survey
of the state of the art. J. Oper. Res. Soc. 64, 1695–1724 (2013)
Cheng, S., Shi, Y., Qin, Q., Bai, R.: Swarm intelligence in big data analytics. In: International Conference
on Intelligent Data Engineering and Automated Learning, pp. 417–426. Springer, Berlin (2013)
Colorni, A., Dorigo, M., Maniezzo, V.: Distributed optimization by ant colonies. In: Proceedings of the
First European Conference on Artificial Life, pp. 134–142. Paris (1991)
Consoli, S., Darby-Dowman, K.: Combinatorial optimization and metaheuristics. Brunel University (2006)
Cowling, P., Kendall, G., Soubeiga, E.: A hyperheuristic approach to scheduling a sales summit. In: Inter-
national Conference on the Practice and Theory of Automated Timetabling, pp. 176–190. Springer,
Berlin (2000)
Crepinek, M., Liu, S.H., Mernik, M.: Exploration and exploitation in evolutionary algorithms: a survey.
ACM Comput. Surv. (CSUR) 45(3), 35 (2013)
Crepinek, M., Liu, S.H., Mernik, M.: Replication and comparison of computational experiments in applied
evolutionary computing: common pitfalls and guidelines to avoid them. Appl. Soft Comput. 19, 161–
170 (2014)
Crowston, W.B., Glover, F., Thompson, G.L., Trawick, J.D.: Probabilistic and parametric learning combi-
nations of local job shop scheduling rules. In: ONR Research Memorandum, GSIA. Carnegie Mellon
University, Pittsburgh (1963)
Deb, K.: Optimization for Engineering Design: Algorithms and Examples. PHI Learning Pvt. Ltd., Delhi
(2012)


Derrac, J., Garca, S., Molina, D., Herrera, F.: A practical tutorial on the use of nonparametric statistical
tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol.
Comput. 1(1), 3–18 (2011)
Dorigo, M., Maniezzo, V., Colorni, A.: Ant system: optimization by a colony of cooperating agents. IEEE
Trans. Syst. Man Cybern. Part B: Cybern. 26, 29–41 (1996)
Dorigo, M., Gambardella, L.M.: Ant colonies for the traveling salesman problem. BioSystems 43, 73–81
(1997)
Ducatelle, F., Di Caro, G.A., Gambardella, L.M.: Principles and applications of swarm intelligence for
adaptive routing in telecommunications networks. Swarm Intell. 4(3), 173–198 (2010)
Eberhart, R., Kennedy, J.: A new optimizer using particle swarm theory. In: Proceedings of the Sixth
International Symposium on Micro Machine and Human Science, 1995. MHS’95, pp. 39–43. IEEE.
(1995). https://doi.org/10.1109/MHS.1995.494215
Fisher, H., Thompson, G.L.: Probabilistic learning combinations of local job-shop scheduling rules. In:
Muth, J., Thompson, G. (eds.) Industrial Scheduling, pp. 225–251. Prentice Hall, Upper Saddle River
(1963)
Gandomi, A.H., Alavi, A.H.: Krill herd: a new bio-inspired optimization algorithm. Commun. Nonlinear
Sci. Numer. Simul. 17, 4831–4845 (2012)
Garcia, F.J.M., Perez, J.A.M.: Jumping frogs optimization: a new swarm method for discrete optimization.
Documentos de Trabajo del DEIOC (2008)
Gavana, A.: Global optimization benchmarks and AMPGO. http://infinity77.net/global_optimization/test_
functions_nd_X.html. Accessed 08 July 2017
Ghate, A.: Dynamic optimization in radiotherapy. In: Transforming Research into Action, pp. 60–74.
INFORMS (2011)
Glover, F., Kochenberger, G.A.: Handbook of Metaheuristics. Springer, New York (2003)
Glover, F., Laguna, M.: Tabu search foundations: longer term memory. In: Tabu Search, pp. 93–124. Springer,
Boston (1997)
Hamadneh, N.N., Tilahun, S.L., Sathasivam, S., Choon, O.H.: Prey-predator algorithm as a new optimization
technique using in radial basis function neural networks. Res. J. Appl. Sci. 8(7), 383–387 (2013)
Hamadneh, N.N., Khan, W., Tilahun, S.L.: Optimization of microchannel heat sinks using prey-predator
algorithm and artificial neural networks. Machines 6(2), 26 (2018)
Hartmann, D.: Optimierung Balkenartiger Zylinderschalen aus Stahlbeton mit Elastischem und Plastischem Werkstoffverhalten. Ph.D. Thesis, University of Dortmund (1974)
Havens, T.C., Spain, C.J., Salmon, N.G., Keller, J.M.: Roach infestation optimization. In: Proceedings of
the IEEE Swarm Intelligence Symposium (SIS 2008), pp. 1–7 (2008)
Hock, W., Schittkowski, K.: Test examples for nonlinear programming codes. J. Optim. Theory Appl. 30(1),
127–129 (1980)
Holland, J.H.: Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor
(1975)
Jones, D.F., Mirrazavi, S.K., Tamiz, M.: Multi-objective meta-heuristics: an overview of the current state-
of-the-art. Eur. J. Oper. Res. 137, 1–9 (2002)
Kamien, M.I., Schwartz, N.L.: Dynamic Optimization: the Calculus of Variations and Optimal Control in
Economics and Management. Courier Corporation, North Chelmsford (2012)
Karaboga, D.: An Idea Based on Honey Bee Swarm for Numerical Optimization, Vol. 200. Technical
Report-tr06, Erciyes University, Engineering Faculty, Computer Engineering Department (2005)
Karaboga, D., Basturk, B.: A powerful and efficient algorithm for numerical function optimization: artificial
bee colony (ABC) algorithm. J. Glob. Optim. 39, 459–471 (2007)
Karaboga, D., Akay, B.: A comparative study of artificial bee colony algorithm. Appl. Math. Comput. 214,
108–132 (2009)
Kelemen, A., Abraham, A., Chen, Y. (eds.): Computational Intelligence in Bioinformatics, vol. 94. Springer,
Berlin (2008)
Kennedy, J., Eberhart, R.: Particle swarm optimization. In: Proceedings of IEEE International Conference
on Neural Networks, pp. 1942–1948. Perth (1995)
Khan, W.A, Hamadneh, N.N., Tilahun S.L., Ngnotchouye, J.M.T.: A review and comparative study of firefly
algorithm and its modified versions. In: Ozgur Baskan (ed.) Chapter 13 of Optimization Algorithms
Methods and Applications. InTech (2016). https://doi.org/10.5772/62472


Krishnan, K., Ghose, D.: Detection of multiple source locations using a glow-worm metaphor with appli-
cations to collective robotics. In: Proceedings of the IEEE Swarm Intelligence Symposium, pp. 84–91
(2005)
Krause, J., Cordeiro, J., Parpinelli, R.S., Lopes, H.S.: A survey of swarm algorithms applied to discrete opti-
mization problems. In: Swarm Intelligence and Bio-inspired Computation: Theory and Applications,
pp. 169–191. Elsevier Science and Technology Books (2013)
Li, X.-L., Lu, F., Tian, G.-H., Qian, J.-X.: Applications of artificial fish school algorithm in combinatorial
optimization problems. J. Shandong Univ. Eng. Sci. 34, 64–67 (2004)
Liang, J.J., Runarsson, T.P., Mezura-Montes, E., Clerc, M., Suganthan, P.N., Coello, C.C., Deb, K.: Problem definitions and evaluation criteria for the CEC 2006 special session on constrained real-parameter optimization. Technical Report, Nanyang Technological University, Singapore (2006)
Liang, J.J., Qu, B.Y., Suganthan, P.N.: Problem definitions and evaluation criteria for the CEC 2014 special session and competition on single objective real-parameter numerical optimization. Technical Report, Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou, China, and Nanyang Technological University, Singapore (2013)
Lu, X., Zhou, Y.: A novel global convergence algorithm: bee collecting pollen algorithm. In: Advanced
Intelligent Computing Theories and Applications with Aspects of Artificial Intelligence, pp. 518–525.
Springer, Berlin (2008)
Lucic, P., Teodorovic, D.: Transportation modeling: an artificial life approach. In: Proceedings of the 14th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2002), pp. 216–223 (2002)
Manzini, R., Bindi, F.: Strategic design and operational management optimization of a multi stage physical
distribution system. Transp. Res. Part E: Logist. Transp. Rev. 45(6), 915–936 (2009)
Martens, D., Baesens, B., Fawcett, T.: Editorial survey: swarm intelligence for data mining. Mach. Learn.
82(1), 1–42 (2011)
Roth, M., Wicker, S.: Termite: a swarm intelligent routing algorithm for mobile wireless ad-hoc networks. In: Stigmergic Optimization. Studies in Computational Intelligence, vol. 31. Springer, Berlin, Heidelberg (2006)
Mehrotra, A., Johnson, E.L., Nemhauser, G.L.: An optimization based heuristic for political districting.
Manag. Sci. 44(8), 1100–1114 (1998)
Michalewicz, Z., Schoenauer, M.: Evolutionary algorithms for constrained parameter optimization prob-
lems. Evol. Comput. 4(1), 1–32 (1996)
Molga, M., Smutnicki, C.: Test functions for optimization needs (2005)
Mucherino, A., Seref, O.: Monkey Search: A Novel Metaheuristic Search for Global Optimization, Data
Mining, Systems Analysis and Optimization in Biomedicine, pp. 162–173. American Institute of
Physics, New York (2007)
Olsson, A.E.: Particle Swarm Optimization: Theory, Techniques and Applications. Nova Science Publishers,
Inc. (2010)
Ong, H.C., Tilahun, S.L., Tang, S.S.: A comparative study on standard, modified and chaotic firefly algo-
rithms. Pertanika J. Sci. Technol. 23(2), 251–269 (2015)
Ong, H.C., Tilahun, S.L., Lee, W.S., Ngnotchouye, J.M.T.: Comparative study of prey predator algorithm and firefly algorithm. Intell. Autom. Soft Comput., pp. 1–8 (2017). https://doi.org/10.1080/10798587.2017.1294811
Ozcan, E., Misir, M., Ochoa, G., Burke, E.: A reinforcement learning: great-deluge hyper-heuristic for
examination timetabling. Int. J. Appl. Metaheur. Comput. 1(1), 40–60 (2010)
Pacini, E., Mateos, C., Garino, C.G.: Distributed job scheduling based on swarm intelligence: a survey.
Comput. Electr. Eng. 40(1), 252–269 (2014)
Pan, W.-T.: A new fruit fly optimization algorithm: taking the financial distress model as an example. Knowl
Based Syst. 26, 69–74 (2012)
Panigrahi, B.K., Shi, Y., Lim, M.H. (eds.): Handbook of Swarm Intelligence: Concepts, Principles and
Applications, vol. 8. Springer, Berlin (2011)
Passino, K.M.: Bacterial foraging optimization. Int. J. Swarm Intell. Res. (IJSIR) 1, 1–16 (2010)
Patnaik, S., Yang, X.S., Nakamatsu, K. (eds.): Nature-Inspired Computing and Optimization: Theory and
Applications, vol. 10. Springer, Berlin (2017)
Pham, D., Ghanbarzadeh, A., Koc, E., Otri, S., Rahim, S., Zaidi, M.: The bees algorithm: a novel tool
for complex optimisation problems. In: Proceedings of the 2nd Virtual International Conference on
Intelligent Production Machines and Systems (IPROMS 2006), pp. 454–459 (2006)


Pinto, P.C., Runkler, T.A., Sousa, J.M.: Wasp swarm algorithm for dynamic MAX-SAT problems. In:
Adaptive and Natural Computing Algorithms, pp. 350–357. Springer, Berlin (2007)
Piotrowski, A.P., Napiorkowski, J.J., Rowinski, P.M.: How novel is the novel black hole optimization
approach? Inf. Sci. 267, 191–200 (2014)
Rashedi, E., Nezamabadi-pour, H., Saryazdi, S.: GSA: a gravitational search algorithm. Inf. Sci. 179, 2232–
2248 (2009)
Rashedi, E., Nezamabadi-pour, H., Saryazdi, S.: BGSA: binary gravitational search algorithm. Nat. Comput.
9, 727–745 (2010)
Rechenberg, I.: Evolutionsstrategie: Optimierung Technischer Systeme nach Prinzipien der Biologischen
Evolution. Frommann-Holzboog, Stuttgart (1973)
Saleem, M., Di Caro, G.A., Farooq, M.: Swarm intelligence based routing protocol for wireless sensor
networks: survey and future directions. Inf. Sci. 181(20), 4597–4624 (2011)
Schwefel, H.-P.: Evolutionsstrategie und Numerische Optimierung, Dissertation, Technical University of
Berlin (1975)
Schwefel, H.-P.: Binäre Optimierung durch Somatische Mutation. Technical Report, Technical University of Berlin and Medical University of Hannover (1975)
Shi, Y.: Particle swarm optimization: developments, applications and resources. In: Proceedings of the 2001
Congress on Evolutionary Computation, Vol. 1, pp. 81–86. IEEE (2001)
Shiqin, Y., Jianjun, J., Guangxing, Y.: A dolphin partner optimization. In: IEEE WRI Global Congress on
Intelligent Systems (GCIS’09), pp. 124–128 (2009)
Sörensen, K.: Metaheuristics–the metaphor exposed. Int. Trans. Oper. Res. 22(1), 3–18 (2015)
Stützle, T., Hoos, H.H.: MAX–MIN ant system. Future Gener. Comput. Syst. 16, 889–914 (2000)
Suganthan, P.N., Hansen, N., Liang, J.J., Deb, K., Chen, Y.P., Auger, A., Tiwari, S.: Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. KanGAL Report 2005005 (2005)
Taylor, C.P.: Integrated Transportation System Design Optimization. Dissertation, Massachusetts Institute of Technology (2007)
Tawhid, M.A., Ali, A.F.: A Hybrid grey wolf optimizer and genetic algorithm for minimizing potential
energy function. Memet. Comput. 9, 347–359 (2017)
Tilahun, S.L., Asfaw, A.: Modeling the expansion of Prosopis juliflora and determining its optimum uti-
lization rate to control the invasion in Afar Regional State of Ethiopia. Int. J. Appl. Math. Res. 1(4),
726–743 (2012)
Tilahun, S.L., Ong, H.C.: Modified firefly algorithm. J. Appl. Math. 2012, Article ID 467631 (2012)
Tilahun, S.L., Ong, H.C.: Comparison between genetic algorithm and prey-predator algorithm. Mal. J.
Fund. Appl. Sci. 9(4), 167–170 (2013a)
Tilahun, S.L., Ong, H.C.: Vector optimisation using fuzzy preference in evolutionary strategy based firefly
algorithm. Int. J. Oper. Res. 16(1), 81–95 (2013b)
Tilahun, S.L.: Prey Predator Algorithm: A New Metaheuristic Optimization Approach. Ph.D. Thesis, School of Mathematical Sciences, Universiti Sains Malaysia (2013)
Tilahun, S.L., Ong, H.C.: Prey predator algorithm: a new metaheuristic optimization algorithm. Int. J. Inf.
Technol. Decis. Mak. 14, 1331–1352 (2015)
Tilahun, S.L., Ong, H.C., Ngnotchouye, J.M.T.: Extended prey-predator algorithm with a group hunting
scenario. Adv. Oper. Res. 2016, 1–14 (2016)
Tilahun, S.L., Ngnotchouye, J.M.T.: Prey predator algorithm with adaptive step length. Int. J. Bio-Inspir.
Comput. 8(4), 195–204 (2016)
Tilahun, S.L., Ngnotchouye, J.M.T.: Firefly algorithm for discrete optimization problems: a survey. KSCE
J. Civ. Eng. 21(2), 535–545 (2017)
Tilahun, S.L., Ngnotchouye, J.M.T., Hamadneh, N.N.: Continuous versions of firefly algorithm: a review.
Artif. Intell. Rev. pp. 1–48 (2017). https://doi.org/10.1007/s10462-017-9568-0
Tilahun, S.L.: Prey predator hyperheuristic. Appl. Soft Comput. 59, 104–114 (2017)
Tilahun, S.L., Goshu, N.N., Ngnotchouye, J.M.T.: Prey predator algorithm for travelling salesman prob-
lem: application on the Ethiopian tourism sites. In: Handbook of Research on Holistic Optimization
Techniques in the Hospitality, Tourism, and Travel Industry, pp. 400–422. IGI Global (2017)
Tilahun, S.L., Matadi, M.B.: Weight minimization of a speed reducer using prey predator algorithm. Int. J.
Manuf. Mater. Mech. Eng. (IJMMME) 8(2), 19–32 (2018)


Toklu, Y.C.: Metaheuristics and engineering. In: AIP Conference Proceedings, Vol. 1558, No. 1, pp. 421–
424. AIP (2013)
Villegas, J.G.: Using nonparametric test to compare the performance of metaheuristics. https://juangvillegas.files.wordpress.com/2011/08/friedman-test24062011.pdf. Retrieved June 2017 (2011)
Yang, X.-S.: Firefly algorithm. In: Nature-inspired Metaheuristic Algorithms. Luniver Press, Bristol (2008)
Yang, X.-S.: A new metaheuristic bat-inspired algorithm. In: Nature Inspired Cooperative Strategies for Optimization (NICSO 2010), pp. 65–74. Springer, Berlin (2010)
Yang, X.S., Cui, Z., Xiao, R., Gandomi, A.H., Karamanoglu, M. (eds.): Swarm Intelligence and Bio-inspired
Computation: Theory and Applications. Newnes, Oxford (2013)
Yang, X.-S., Deb, S.: Cuckoo search via Lévy flights. In: IEEE World Congress on Nature and Biologically Inspired Computing (NaBIC 2009), pp. 210–214 (2009)
Yang, X.-S., He, X.: Firefly algorithm: recent advances and applications. Int. J. Swarm Intell. 1(1), 36–50 (2013). https://doi.org/10.1504/IJSI.2013.055801
Ye, Z., Hu, Z., Lai, X., Chen, H.: Image segmentation using thresholding and swarm intelligence. J. Softw.
7(5), 1074–1082 (2012)
Zhao, D., Dai, Y., Zhang, Z.: Computational intelligence in urban traffic signal control: a survey. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 42(4), 485–494 (2012)
Zhang, S., Lee, C.K., Chan, H.K., Choy, K.L., Wu, Z.: Swarm intelligence applied in green logistics: a
literature review. Eng. Appl. Artif. Intell. 37, 154–169 (2015)
Vasant, P.M. (ed.): Meta-Heuristics Optimization Algorithms in Engineering, Business, Economics, and
Finance. IGI Global, Hershey (2012)
Wang, D., Tan, D., Liu, L.: Particle swarm optimization algorithm: an overview. Soft Comput. 22, 387–408
(2017)
Wang, Y., Wang, B.C., Li, H.X., Yen, G.G.: Incorporating objective function information into the feasibility rule for constrained evolutionary optimization. IEEE Trans. Cybern. 46, 2938–2952 (2016). https://doi.org/10.1109/TCYB.2015.2493239
Weyland, D.: A rigorous analysis of the harmony search algorithm: how the research community can be misled by a novel methodology. In: Modeling, Analysis, and Applications in Metaheuristic Computing: Advancements and Trends, pp. 72–83. IGI Global (2012)
Wu, S.X., Banzhaf, W.: The use of computational intelligence in intrusion detection systems: a review.
Appl. Soft Comput. 10(1), 1–35 (2010)

Affiliations

Surafel Luleseged Tilahun 1,2 · Mohamed A. Tawhid 3,4

Mohamed A. Tawhid
mtawhid@tru.ca
1 Department of Mathematical Sciences, University of Zululand, Private Bag X1001,
KwaDlangezwa 3886, South Africa
2 Computational Science Program, College of Natural Sciences, Addis Ababa University, 1176
Addis Ababa, Ethiopia
3 Department of Mathematics and Statistics, Faculty of Science, Thompson Rivers University, 900
McGill Road, Kamloops, BC V2C 0C8, Canada
4 Department of Mathematics and Computer Science, Faculty of Science, Alexandria University,
Moharam Bey, Alexandria 21511, Egypt
