
journal homepage: www.elsevier.com/locate/eswa

A selection method for evolutionary algorithms based on the Golden Section

Erik Cuevas a,b,∗, Luis Enríquez a, Daniel Zaldívar a,b, Marco Pérez-Cisneros a

a Departamento de Electrónica, Universidad de Guadalajara, CUCEI, Av. Revolución 1500, Guadalajara, Mexico
b Centro Tapatío Educativo A.C., Av. Juárez 340, Colonia Centro, Guadalajara, Mexico

Article info

Article history:
Received 5 March 2017
Revised 5 March 2018
Accepted 31 March 2018
Available online 9 April 2018

Keywords:
Evolutionary algorithms
Golden Section
Selection methods
Genetic algorithms (GA)
Evolutionary strategies (ES)
Genetic Programming (GP)
Evolutionary computation

Abstract

During millions of years, nature has developed patterns and processes with interesting characteristics. They have been used as inspiration for a significant number of innovative models that can be extended to solve complex engineering and mathematical problems. One of the most famous patterns present in nature is the Golden Section (GS). It defines a special proportion that allows the adequate formation, selection, partition, and replication in several natural phenomena. On the other hand, evolutionary algorithms (EAs) are stochastic optimization methods based on the model of natural evolution. One important process in these schemes is the operation of selection, which exerts a strong influence on the performance of their search strategy. Different selection methods have been reported in the literature. However, all of them present an unsatisfactory performance as a consequence of the deficient relation between the elitism and diversity of their selection procedures. In this paper, a new selection method for evolutionary computation algorithms is introduced. In the proposed approach, the population is segmented into several groups. Each group involves a certain number of individuals and a probability of being selected, both determined according to the GS proportion. Therefore, the individuals are divided into categories where each group contains individuals of similar quality regarding their fitness values. Since the possibility of choosing an element inside a group is the same, the probability of selecting an individual depends exclusively on the group to which it belongs. Under these conditions, the proposed approach defines a better balance between the elitism and diversity of the selection strategy. Numerical simulations show that the proposed method achieves better performance than other selection algorithms in terms of solution quality and convergence speed.

© 2018 Elsevier Ltd. All rights reserved.

1. Introduction

Expert systems (Jackson, 1998) are approaches commonly adopted to support decision-making processes and problem-solving applications. Some of their main characteristics include their ability to solve complex problems and their capacity to produce consistent decisions. One of the most critical operations in the decision-making process is the evaluation or ranking of all possible alternatives of action in order to find the best solution for a particular problem. Therefore, optimization algorithms work cooperatively with expert system schemes in the efficient search for potential solutions in the decision-making process.

Evolutionary algorithms (EAs) are stochastic optimization methods based on the model of natural evolution. In general, Genetic Algorithms (GA) (Goldberg, 1989) are the most popular representatives of such techniques. EAs operate on a population P^k ({p_1^k, p_2^k, ..., p_N^k}) of N candidate solutions (individuals) that evolve from the initial point (k = 0) to a total gen number of iterations (k = gen). A candidate solution p_i^k (i ∈ [1, ..., N]) represents a d-dimensional vector {p_{i,1}^k, p_{i,2}^k, ..., p_{i,d}^k} where each dimension corresponds to a decision variable of the optimization problem. In each iteration, a set of stochastic operations is applied over the population P^k to build the new population P^{k+1}. Such operations are mutation, recombination, and selection. The quality of each candidate solution p_i^k is evaluated by using an objective function f(p_i^k) whose final result represents the fitness value of p_i^k.

∗ Corresponding author at: Departamento de Electrónica, Universidad de Guadalajara, CUCEI, Av. Revolución 1500, Guadalajara, Mexico.
E-mail addresses: erik.cuevas@cucei.udg.mx, cuevas@inf.fu-berlin.de (E. Cuevas), daniel.zaldivar@cucei.udg.mx (D. Zaldívar), marco.perez@cucei.udg.mx (M. Pérez-Cisneros).
https://doi.org/10.1016/j.eswa.2018.03.064
0957-4174/© 2018 Elsevier Ltd. All rights reserved.

184 E. Cuevas et al. / Expert Systems With Applications 106 (2018) 183–196

Table 1
Comparative overview of classical selection methods.

Method | Description | Main drawback
Proportional method (Holland, 1975) | Assigns a selection probability to each individual in terms of its relative fitness value. | Problems in the case of negative objective function values or minimization tasks.
Tournament selection (Blickle et al., 1995) | Selects the best solution from a set of q different individuals. | Low selective pressure that promotes exploration extensively but adversely punishes exploitation.
Linear ranking (Darrell, 1989) | Uses a linear function to map the ranking index of each solution to a selection probability. | Solutions with the same fitness value can obtain very different selection probabilities, affecting the search strategy.

The effect of the evolutionary operators on the search strategy has been extensively demonstrated and documented (Hancock et al., 1994). Mutation incorporates modifications in the population as a mechanism to escape from local optima. Recombination interchanges essential characteristics of the search space among individuals. Selection (Bäck, 1992) conducts the search strategy towards promising regions of the search space by the use of information currently available in the population. Both mutation and recombination (Bäck, 1992) allow the exploration of the search space, while selection exploits the information already present in the population with the objective of improving it.

The balance between exploration and exploitation (Nandar & Ponnuthurai, 2015) in a search strategy can be interpreted as the conflicting action of increasing the solution diversity while simultaneously refining the already known solutions, which mainly maintain the best fitness values. In EAs, this balance is critical for achieving a good performance of the search strategy. Under such circumstances, the selection operator provides an important mechanism to modify the exploration-exploitation relation (Bäck & Hoffmeister, 1991). Increasing the intensity of selecting individuals with high fitness values augments the exploitation of the search strategy (Bäck, 1994). On the other hand, decreasing the emphasis on selecting such individuals permits the selection of low-quality solutions. Under these conditions, the exploration of the optimization strategy is promoted (Baker, 1987).

Different selection techniques have been proposed in the literature with different performance levels. The most popular methods are the roulette method (Holland, 1975), tournament selection (Blickle, Thiele, & Eshelman, 1995) and linear ranking (Darrell, 1989). The proportional method assigns a selection probability to each individual according to its relative fitness value. This mechanism presents several flaws in the case of negative objective function values or minimization tasks (Noraini & Geraghty, 2011). In the tournament selection technique, the best solution of a set of q different individuals, randomly obtained from the whole population, is selected. The standard size of the tournament set is q = 2. Under tournament, it has been demonstrated that the selective pressure is very low, favoring the exploration extensively and punishing the exploitation of the search strategy (Brad & Goldberg, 1995). Finally, the linear ranking method uses a linear function to map the ranking index of each solution to a selection probability. Although linear ranking maintains a good selective pressure, the method presents a critical difficulty. Since linear ranking assigns a selection probability to each solution depending on its respective ranking index, two solutions with the same fitness value can obtain very different selection probabilities. This inconsistency adversely affects the performance of the search strategy during the optimization (Blickle & Thiele, 1996). In general, most of the selection methods present an unsatisfactory performance as a consequence of the deficient relation between the elitism and diversity of their selection procedures. To provide an overview of all methods, we summarize their comparative features in Table 1.

A selection operator is entirely independent of the Evolutionary Algorithm (EA). In particular, any selection method can be modified or replaced, regardless of the global structure or the rest of the operators used for a specific EA (De la Maza & Tidor, 1993). In spite of the importance of the selection operation in the performance of EAs, current research on the design or modification of selection operators has been practically overlooked.

The concept of selective pressure (Li et al., 2015; Pascal et al., 2011) has been extensively used in the literature to characterize the performance of a selection approach. The index of takeover time (Goldberg & Deb, 1991) allows evaluating the selective pressure of a given selection method adequately. It quantifies the number of generations required by a selection method to fill the complete population with copies of the best initial solution. Therefore, initially, the population presents only a single best individual g (g_1, g_2, ..., g_d) and N−1 worse elements. Then, the selection method is operated until the whole population contains N copies of g.

On the other hand, nature is an exciting and inexhaustible source of solutions to several biological problems, which were solved as a result of natural selection during millions of years of evolution (Julian, Olga, Nikolaj, Bowyer, & Pahl, 2006; Sonya & William, 2010). As functional entities, nature has suggested important patterns with interesting characteristics. They have been used as inspiration for a significant number of innovative models that can be extended to solve complex engineering and mathematical problems (Clark, Kok, & Lacroix, 1999). One of the most famous patterns present in nature is the Golden Section (GS) (Benavoli & Chisci, 2009; Newell & Pennybacker, 2013). It defines a special proportion that allows the adequate formation, selection, partition, and replication of several natural phenomena (Walser, 2001). It appears in a variety of schemes, including the geometry of crystals, the spacing of stems in plants, the proportion of body parts in animals, and the proportion of feature sizes in the human face (Dunlap, 1997). The GS, sometimes known as the golden ratio or golden number, has been studied widely and has attracted the interest of many scientific communities. As a result, its use has been extended to several disciplines such as architecture (Krishnendra, 2015), arts (Loai, 2012), engineering, industrial design (Lu, 2003), biology (Ciucurel, Georgescu, & Iconaru, 2018) and quantum mechanics (Coldea et al., 2010).

As an alternative to traditional selection methods, in this paper a new, simple selection method for EAs is introduced. In the proposed method, the population is segmented into several groups. The number of individuals and the selection probability of each group are determined so that the proportion of two consecutive groups maintains the GS. The proportions are assigned considering that the group with the highest proportion of individuals corresponds to the smallest probability of being selected. This group assembles the elements with the worst fitness quality of the population. In contrast, the group with the lowest proportion of elements is associated with the highest probability of being chosen. Such a group contains the best individuals in the population. Since the possibility of choosing an element inside a group is the same, the probability of selecting an individual depends exclusively on the group to which it belongs. Numerical simulations show that the proposed method achieves better performance than other selection algorithms with regard to solution quality and convergence speed. The main contributions of this research are:


(I) The proposal of a new selection method that, based on the GS, defines a better balance between elitism and diversity in comparison to other selection methods currently in use.
(II) Its effective implementation in order to maintain a simple structure and fast performance.
(III) The integration of an experimental scheme for the evaluation of selection methods that includes benchmark functions and consistent performance indexes.
(IV) The application of the proposed selection method to improve the search strategy of Genetic Algorithms (GA).

The rest of this paper is organized as follows. In Section 2, the main selection methods are described. Section 3 introduces the concept of the Golden Section (GS). Section 4 explains the proposed selection algorithm. Section 5 shows the experimental results. Finally, some conclusions are discussed in Section 6.

2. Selection methods

EAs are stochastic methods that use a number of N candidate solutions (known as individuals p_i^k ∈ P^k, where P^k represents the entire population) of the optimization problem to produce new potential solutions. During the process, the population P^k is modified to build a new population P^{k+1}. In each execution, as a search strategy, EAs use the iterative application of three evolutionary operators: mutation, recombination, and selection. Such operations are executed until a determined termination criterion has been reached. Each iteration is called a generation and is denoted by k.

The selection operation aims to improve the solution quality of the population P^k by conferring on an individual of high quality p_i^k a high probability of being chosen (Bäck, 1992). Therefore, selection allows the proliferation of good-quality solutions and avoids the election of bad-quality solutions. After the operation of a selection method, a final solution is chosen, which is denoted by z (z ∈ P^k). Several selection methods have been proposed in the literature with different performance levels. The most popular methods are proportional selection (Holland, 1975), tournament selection (Blickle et al., 1995) and linear ranking (Darrell, 1989).

2.1. Roulette method

The roulette method was introduced to genetic algorithms by Holland (Holland, 1975). It assigns to each element p_i^k a selection probability s_i according to its relative fitness value. s_i can be computed as follows:

s_i = f(p_i^k) / Σ_{l=1}^{N} f(p_l^k)    (1)

Once the selection probability s_i of each individual of the population P^k has been calculated, the final selected element z is obtained by using the Proportional Selection (PS) method, also known as the roulette-wheel method (RWM) (Holland, 1975). Under this approach, the cumulative probability A_i of each solution p_i^k is calculated (A_i = Σ_{j=1}^{i} s_j). Therefore, to choose the final result z, a random number between zero and one is produced. Thus, the solution whose cumulative probability range contains the chosen random number is selected. Although the proportional selection method maintains good selective pressure properties, its selection mechanism presents several flaws in the case of negative objective functions or minimization tasks (Bäck, 1992). The pseudo code of the proportional selection technique is given by Algorithm 1.

2.2. Tournament selection

In tournament selection (Blickle et al., 1995), first, q different individuals from P^k are randomly chosen to produce a group of elements B. Then, the best element of B (B = {b_1, ..., b_q}) is considered as the selection result z. Frequently, the selection is attained between only two individuals (q = 2), but sometimes it is verified by using more elements. Tournament can be efficiently implemented as no sorting procedure for the population is required. The pseudo code of tournament is given by Algorithm 2.

Under tournament, it has been demonstrated that the selective pressure is very low, favoring the exploration extensively and punishing the exploitation of the search strategy (Bäck, 1992).

2.3. Linear ranking

The ranking selection was first proposed by Baker (Grefenstette & Baker, 1989) to eliminate the flaws and disadvantages of the proportional selection method. In the linear ranking algorithm, the individuals of P^k are sorted according to their fitness values. Therefore, the rank N is assigned to the best element of P^k whereas the rank 1 is given to the worst individual. Then, a linear function is used to map the rank i of each individual to a selection probability s_i. Under such conditions, the selection probability of each solution is computed as follows:

s_i = (1/N) (η− + (η+ − η−) (i − 1)/(N − 1))    (2)

In Eq. (2), (η−/N) represents the selection probability of the worst individual of P^k whereas (η+/N) represents the selection probability of the best element. The restriction Σ_{i=1}^{N} s_i = 1 requires that 1 ≤ η+ ≤ 2 and η− = 2 − η+ are fulfilled. Under this approach, it is noted that all individuals receive a different selection probability, even if they possess the same fitness value. This inconsistency adversely affects the performance of the search strategy. Once the selection probability s_i of each individual of the population P^k has been calculated, the final selected element z is obtained by using the PS method, as explained in Section 2.1. The pseudo code of linear ranking is schematized in Algorithm 3.

3. Golden Section (GS)

3.1. Golden Section

The Golden Section (GS), characterized by the Greek letter Phi (φ), is also known as the Golden Ratio, Golden Number, Golden Mean, Divine Proportion and Divine Section. The GS is defined as an irrational number with the value 1.61803398... In general, the GS is characterized by a relative proportion between two elements. Therefore, two quantities A and B present a GS relation if their proportion (A/B) is the same as the ratio of their sum to the larger of the two quantities ((A + B)/A). Fig. 1 illustrates this geometric relationship.

Fig. 1. Graphical representation of the GS proportion.

According to Fig. 1, a line segment is divided into two parts A and B so that the ratio between the longer part A and the shorter one B is equal to the ratio between the whole line segment A + B and the longer part A. Such relations, expressed algebraically, can be modeled as follows:

A/B = (A + B)/A    (3)

Operating on this relationship, it is found that A² = (A + B) · B. Then, assuming that B = 1, the following expression is obtained:


A² − A − 1 = 0    (4)

The positive root of Eq. (4) represents the solution of the geometric problem defined by Eq. (3). This solution is called the Golden Section (φ):

A = φ = (1 + √5)/2 ≈ 1.618    (5)

If the ratio between A (1.618) and B (1) is converted into percentages, such proportions can be established as A = 61.8% and B = 38.2%.

3.2. Mathematical framework of the Golden Section

3.2.1. General properties of the GS

The GS number (φ) presents interesting mathematical properties which have been exploited in several sciences such as physics, computer science, medicine and architecture (Iosa, Morone, & Paolucci, 2018). Most of these properties express the capacity of the GS number to replicate the same structure at different scales. Eq. (4) can be formulated in terms of the GS number (φ) as follows:

φ = 1 + 1/φ    (6)

φ − 1 = 1/φ    (7)

Under such circumstances, when 1 is subtracted from φ = 1.618, the inverse of the GS is obtained (1/φ = 0.618). It has been confirmed that the GS is the only positive number holding this characteristic. Substituting the expression of Eq. (6) for φ multiple times, the GS can be expressed as a continued fraction containing only ones:

φ = 1 + 1/(1 + 1/(1 + 1/(1 + ···)))    (8)

Considering that Eq. (4) can be formulated in terms of the GS as φ² = φ + 1, the GS can also be defined as a continued square root of ones:

φ = √(1 + √(1 + √(1 + √(1 + ···))))    (9)

Both characteristics, specified by Eq. (8) and Eq. (9), exhibit the capacity of the GS to be self-similar. Under such conditions, the GS has the property of replicating the same pattern model at different scale levels.

Since the GS can be expressed as φ² = φ + 1, it can be generalized so that any power n of φ is equal to the sum of the two immediately preceding powers n−1 and n−2, such as:

φ^n = φ^(n−1) + φ^(n−2)    (10)

This representation manifests the property of the GS to repeat a specific spatial pattern at distinct magnitude levels, which is normally exploited in fractal geometry (Sigalotti & Mejias, 2006).

3.2.2. GS and Fibonacci numbers

Fibonacci numbers are a sequence of integers related to different phenomena in nature (Benavoli et al., 2009). Fibonacci numbers are a series of elements F_n of the type 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, ..., where each term (except the first two) is the sum of the two immediately preceding elements, according to the following recurrence formulation:

F_n = F_{n−1} + F_{n−2},  n ≥ 2    (11)

It has been demonstrated (Esmaeili, Gulliver, & Kakhbod, 2009) that the proportion between two consecutive terms tends to the GS number, as follows:

φ = lim_{n→∞} F_n / F_{n−1}    (12)

3.2.3. Interconnection of the GS with fractals

A fractal is a complex mathematical macro-structure built from an identical pattern that is self-similar across arbitrarily small scales (Mandelbrot, 1982). Fractals are produced by repeating a simple construction process over and over in an ongoing feedback loop. In the construction process, two elements are necessary: a self-similar structure and a drawing rule (Kenneth, 2003). The GS appears in fractal geometry due to its self-similarity and repetition properties. This self-similar nature can be seen in a certain geometrical figure which one can divide into two parts according to the GS proportion. From this operation, two regions are obtained: A with 61.8% and B with 38.2% of the geometrical figure. If this procedure is infinitely repeated, always dividing the smaller part (B), then a fractal shape is generated. Several interesting fractals are constructed from the GS; some examples include the Fibonacci fractals (Ramírez, Rubiano, & De Castro, 2014) and the Golden trees (Bliss & Brown, 2009).

3.3. Division process with GS

The Golden Section (GS) is one of the most famous patterns present in nature. It has been used in a great variety of contexts. One of them is the division of a space into several sections or groups. With this division, the main objective is to separate the space in such a way that allows better functionality, spacing, and distribution (Walser, 2001). Fig. 2 exhibits examples of the GS presence in several natural structures. This approach has already been applied in different disciplines such as architecture (Krishnendra, 2015), industrial design and aeronautics (Lu, 2003). The idea behind the division process is to segment a space E into m different sections. Each section S_i (i ∈ 1, ..., m), which maintains a determined proportion c_i of the space E, conserves the GS proportion with the rest of the sections.

The process of dividing E into m groups begins with the initial separation into two sections A_1 (S_1) and B_1 (S_2). According to the GS proportion, such sections represent c_1 = 61.8% and c_2 = 38.2% of the total space, respectively. Then, the smaller section B_1 is divided into two different parts A_2 and B_2, maintaining the GS proportion between them. Therefore, the new sections A_2 and B_2 occupy 61.8% and 38.2% of the space corresponding to B_1 (whose area corresponds to 38.2% of the total area of E), respectively. After this procedure, the total space E is classified into three sections A_1 (S_1), A_2 (S_2) and B_2 (S_3), representing c_1 = 61.8%, c_2 = 23.6% and c_3 = 14.6% of E, respectively. In case of incorporating a fourth section or group, the process is repeated over B_2, generating the groups A_1 (S_1), A_2 (S_2), A_3 (S_3) and B_3 (S_4). With this configuration, the total space E is distributed as follows: c_1 = 61.8%, c_2 = 23.6%, c_3 = 9% and c_4 = 5.6%. Thereby, the division operation can be repeatedly executed until the number of necessary sections m has been reached. Table 2 shows the produced segmentation considering m = 2, m = 3, m = 4, and m = 5. Fig. 3 exhibits the division process of an initial space E into six sections (m = 6). In Fig. 3, the percentage of the space that each section covers is also illustrated.

4. The proposed GS selection method

As an alternative to traditional selection methods, in this paper a new, simple selection method for EAs is introduced.


Table 2
Produced segmentations considering m = 2, m = 3, m = 4 and m = 5.

m | Sections | Proportions
2 | S_1 - S_2 | c_1 = 61.8%, c_2 = 38.2%
3 | S_1 - S_3 | c_1 = 61.8%, c_2 = 23.6%, c_3 = 14.6%
4 | S_1 - S_4 | c_1 = 61.8%, c_2 = 23.6%, c_3 = 9%, c_4 = 5.6%
5 | S_1 - S_5 | c_1 = 61.8%, c_2 = 23.6%, c_3 = 9%, c_4 = 3.4%, c_5 = 2.2%

Fig. 3. Division process of a space E in six sections considering the GS. (a) Initial space (E), (b) division with m = 2, (c) m = 3, (d) m = 4, (e) m = 5, (f) m = 6 and (g) final configuration.

In the proposed method, the population is segmented into several groups according to the GS. The groups are distributed so that each group contains individuals of a homogeneous quality in terms of the fitness function. The number of individuals in each group depends on the quality of the individuals that it contains. Therefore, the group that has the best individuals contains a reduced amount of them, while the groups with worse individuals incorporate an extensive quantity of them.

The main idea of a selection method is to allow the proliferation of good-quality solutions and avoid the accumulation of bad-quality solutions in the new population P^{k+1}. Under such conditions, groups that store better individuals are selected with a higher probability than those that contain low-quality solutions. In the proposed method, the existence of m different groups in the population is assumed. Therefore, the entire population P^k is segmented according to the GS proportion into m different groups or sections. With the division, each group g_i (i ∈ 1, ..., m) maintains a number u_i of individuals (Section 4.1) and a probability w_i of being selected (Section 4.2).

4.1. Number of individuals of each group

In the proposed method, the population P^k is divided into m different groups considering the proportions of the GS. The groups are arranged so that the first group g_1 corresponds to the smallest proportion (c_m) of P^k whereas the last group g_m corresponds to the highest proportion (c_1) of P^k. Considering that P^k has N individuals, the number of elements u_i is determined by round(N · (c_{m+1−i}/100)), where round(·) symbolizes a function which delivers the closest integer to N · (c_{m+1−i}/100). In case of a segmentation of P^k into three sections according to the GS, the groups g_1, g_2 and g_3 maintain the following distribution: c_3 = 14.6%, c_2 = 23.6% and c_1 = 61.8%. Therefore, assuming that P^k has 100 elements (N = 100), the individuals are distributed as follows: u_1 = 14, u_2 = 24 and u_3 = 62.

To select the individuals that correspond to each group, the elements of P^k are re-ordered regarding their fitness values, so that the best individual is in the first position p_1^k whereas the worst element is the last one p_N^k. Under such conditions, the first u_1 elements (p_1^k - p_{u_1}^k) of the sorted population correspond to the group g_1. Then, the next u_2 individuals (p_{u_1+1}^k - p_{u_1+u_2}^k) are included as part of group g_2, and so on. Considering as an example that the population P^k is distributed so that u_1 = 14, u_2 = 24 and u_3 = 62, the groups include the following intervals of individuals: g_1 (p_1^k - p_14^k), g_2 (p_15^k - p_38^k) and g_3 (p_39^k - p_100^k). Under this approach, the group g_1 contains the best 14 individuals found in the population, so that, as the group number increases, the quality of the involved individuals decreases. Therefore, the elements of the last group represent the worst individuals contained in P^k. Fig. 4 graphically illustrates the segmentation of the population space considering the proportions of the GS.

4.2. Selection probability of each group

The main idea of a selection method is to allow the proliferation of good-quality solutions and avoid the accumulation of low-quality individuals in the new population P^{k+1}.
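The group-size rule of Section 4.1 can be sketched in Python. This is an illustrative reconstruction with our own function names, not the authors' code; note that round(100 · 14.6/100) gives 15, so the reported u_1 = 14 presumably comes from adjusting the sizes to sum exactly to N:

```python
def gs_proportions(m):
    """Percentages c_1 > ... > c_m of Table 2: repeatedly split the
    smaller part by the Golden Section (61.8% / 38.2%)."""
    props, rest = [], 100.0
    for _ in range(m - 1):
        props.append(rest * 0.618)
        rest *= 0.382
    props.append(rest)
    return props

def group_sizes(n, m):
    """u_i = round(N * c_{m+1-i} / 100): group g_1 (best individuals)
    receives the smallest proportion c_m."""
    return [round(n * c / 100.0) for c in reversed(gs_proportions(m))]

print([round(c, 1) for c in gs_proportions(4)])   # [61.8, 23.6, 9.0, 5.6]
print(group_sizes(100, 3))                        # [15, 24, 62]
```

By construction the proportions always sum to 100%, since each step only re-partitions the remaining smaller section.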


Fig. 4. Segmentation of the population space considering three groups.

Fig. 5. Segmentation of the probability space considering three groups.

In the proposed method, the probability space is divided into m different proportions according to the GS. With the division, each group g_i maintains a selection probability w_i which corresponds to a proportion of the probability space suggested by the GS. The proportions are assigned so that the groups with the best individuals maintain bigger probability proportions compared to groups with low-quality elements. Under such conditions, the biggest probability proportion (c_1) corresponds to the first group g_1 whereas the smallest probability (c_m) corresponds to the last group g_m. Each selection probability w_i can be calculated from the original proportions shown in Table 2 as:

w_i = c_i / 100    (13)

In case of a segmentation of P^k into three sections, the groups g_1, g_2 and g_3 maintain the following probability distribution: w_1 = 0.618 (61.8%), w_2 = 0.236 (23.6%) and w_3 = 0.146 (14.6%). Fig. 5 graphically illustrates the segmentation of the probability space considering the proportions of the GS.

Finally, the selection of an individual is conducted in two steps. In the first step, one of the m possible groups is selected by using the probability selection method (PS). Once a group has been chosen, a uniformly distributed random individual is selected within ...

In the proposed algorithm, the number of groups remains fixed. As a consequence, several operations can be calculated only once. Then, they can be used as fixed constants throughout the optimization process.

... several groups. The number of groups defines a particular relation between elitism and diversity of the selection strategy. Each group involves a number of individuals and a probability of being selected, which are determined according to the GS. Considering the basic division of the population P^k into two sections, two groups are produced, g_1^2 and g_2^2. In each group g_α^β, the super-index β represents the number of considered sections and the sub-index α the group number. The first group maintains an individual proportion B_1^I of 38.2% and a probability proportion A_1^P of 61.8% (where the super-index I or P symbolizes whether the proportion is in the space of the individuals or of the probability, respectively). Therefore, g_1^2 contains the best individuals of the population and possesses a higher probability of being selected. On the other hand, the last group g_2^2 has an individual proportion A_1^I of 61.8% and a probability proportion B_1^P of 38.2%. With these proportions, g_2^2 stores the worst elements of the population and involves an inferior probability of being chosen. With only two groups, their proportions allow the accumulation of individuals with different qualities in the same group. Under such conditions, the diversity increases and the elitism decreases as a consequence of the selection of such groups.

If the population P^k is divided into more sections, for example three, the spaces of individuals and probability are modified. Under three sections, the individuals of P^k are segmented into g_1^3, g_2^3 and g_3^3. Similar to the last group g_2^2 in the case of two sections, the last group g_3^3 maintains the worst individuals of the population. However, the groups g_1^3 and g_2^3 are obtained by the sub-division of the group g_1^2 (g_1^2 = g_1^3 + g_2^3). Therefore, the individuals of g_1^2 that represent the best elements of P^k are re-classified into two sections according to their fitness values. The first group g_1^3 stores the best elements of g_1^2 whereas g_2^3 maintains the rest. With this division, the groups g_1^3, g_2^3 and g_3^3 assume the following individual proportions: 14.6%, 23.6% and 61.8%, respectively. In the case of the probability space, in comparison with the case of two sections, the first group g_1^3 has the highest probability of being selected. However, with the inclusion of another section, the selection probability of the last group (low-quality elements) is reduced to assign higher probabilities to groups with individuals of better qualities. Therefore, if

it. This element is considered the ﬁnal selection result z. several groups are deﬁned, the elitism of the selection method is

emphasized whereas the diversity is simultaneously diminished.
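The two-step mechanism described above can be sketched in code. The following Python sketch is our illustrative reconstruction, not the authors' implementation: gs_proportions generates the GS proportions by repeatedly splitting the remaining portion 61.8%/38.2% (an assumption consistent with the reported values, since it reproduces w_1 = 0.618, w_2 = 0.236 and w_3 = 0.146 for m = 3), and gs_select performs the two probabilistic decisions of the method.

```python
import random

PHI_INV = (5 ** 0.5 - 1) / 2  # 1/phi ~ 0.618, the Golden Section proportion

def gs_proportions(m):
    """Split the unit interval by successive Golden Sections:
    m = 2 -> [0.618, 0.382]; m = 3 -> [0.618, 0.236, 0.146]."""
    props, rest = [], 1.0
    for _ in range(m - 1):
        props.append(rest * PHI_INV)
        rest -= props[-1]
    props.append(rest)
    return props

def gs_select(ranked, m=3, rng=random):
    """Select one individual from a population sorted best-first.

    Group i receives the probability proportion w[i] (largest share for
    the best group) and the individual proportion sizes[i] (smallest
    share for the best group), mirroring the 61.8%/38.2% assignment."""
    w = gs_proportions(m)          # probability space, best group first
    sizes = list(reversed(w))      # individual space, best group smallest
    # group boundaries over the ranked population
    bounds, start = [], 0
    for j, s in enumerate(sizes):
        end = len(ranked) if j == m - 1 else start + max(1, round(s * len(ranked)))
        bounds.append((start, end))
        start = end
    # Step 1: roulette wheel over the fixed group probabilities w_i.
    r, acc = rng.random(), 0.0
    for (lo, hi), wi in zip(bounds, w):
        acc += wi
        if r <= acc:
            # Step 2: uniform random individual inside the chosen group.
            return ranked[rng.randrange(lo, hi)]
    lo, hi = bounds[-1]
    return ranked[rng.randrange(lo, hi)]
```

Since the group sizes and probabilities depend only on m and N, they can indeed be computed once and reused as fixed constants during the whole run.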

4.3. Computational procedure

The proposed algorithm is a selection method that allows choosing an individual from a population Pk considering the proportions established by the GS. As a first step, the number m of groups is defined. (I) The elements of Pk are re-ordered regarding their fitness values, so that the best individual occupies the first position p_1^k whereas the worst element is the last one, p_N^k. Afterwards, (II) the individuals of the ordered population Pk are classified in terms of their fitness values into the m proportions. Similarly, (III) the probability portions are assigned to all the groups. Then, (IV) by using the RWM a group g_i is chosen. Finally, (V) a uniformly distributed random individual is selected within g_i as the final selection z. The pseudo code of the GS method is schematized in Algorithm 4.

According to Algorithm 4, the proposed approach incorporates, like other selection methods, a sorting process of the population. This operation represents the most expensive part of the method. The rest of the operations do not present a significant computational cost, since several operations can be calculated only once; they can then be used as fixed constants throughout the optimization process.

E. Cuevas et al. / Expert Systems With Applications 106 (2018) 183–196 189

To illustrate the division process of the individual space in Pk, a graphical example is shown in Fig. 6. The experiment presents a determined distribution of the population Pk considering two (Fig. 6(b)), three (Fig. 6(c)) and four (Fig. 6(d)) sections with regard to the objective function presented in Fig. 6(a). Fig. 6(b) exhibits the distribution of the population assuming two sections. Red dots represent the best elements of Pk with a proportion of 38.2%, whereas the yellow squares symbolize the worst individuals of Pk with a proportion of 61.8%. In Fig. 6(c), the spatial localization of Pk under three sections is shown. Analyzing Fig. 6(c), it is clear that the best individuals from Fig. 6(b) are sub-divided into two new groups: red dots and blue triangles. It is also remarkable that the worst elements, represented by the yellow squares, remain without change. Fig. 6(d) illustrates the distribution of Pk assuming four sections. From this figure, it can be seen that the best elements from Fig. 6(c) are further granulated into an extra group. Under such conditions, the original best individuals (red dots) from Fig. 6(b) are finally segmented into three groups: red dots, blue triangles, and green inverted triangles. The worst elements, represented by the yellow squares, remain without change. With the distributions presented in Fig. 6(b)-(d), it is clear that the more groups are set, the more elitist the selection method becomes.

Fig. 6. Division process of the individual space of Pk considering (b) two, (c) three and (d) four sections in terms of the objective function presented in (a).

On the other hand, Fig. 7 presents the probability distribution of each group configuration. In Fig. 7(a), the selection probabilities of groups g_1^2 and g_2^2 are shown. From the figure, it is clear that the best individuals of g_1^2 maintain a high probability of being selected in comparison with g_2^2. Fig. 7(b) considers the probability distribution in the case of three sections. Under this configuration, the best individuals, stored in g_1^3, also have the maximal probability of being selected. However, the probabilities of groups g_2^3 and g_3^3 are obtained by the sub-division of the probability assigned to g_2^2 (w(g_2^2) = w(g_2^3) + w(g_3^3)). Therefore, the probability of g_2^2, which includes the worst elements of Pk, is sub-divided into two new probability sections. In case of segmenting Pk into four sections, the probability proportions are distributed according to Fig. 7(c). Similar to the cases of two and three groups, the first group g_1^4 maintains the best selection frequency. On the other hand, the selection probability of the last group g_4^4 is further reduced in order to assign higher probabilities to groups with individuals of better quality.

Different advantages and disadvantages of the proposed approach can be identified through the comparison of similarities and differences with other selection strategies. As a summary, we can enumerate the strengths (S) and weaknesses (W) of the proposed method as follows:

1. (S) The proposed method presents a better balance between elitism and diversity of the selection strategy.
2. (S) The structure of the algorithm is simple and fast. Once the number of groups is defined, the number of individuals and their selection probabilities are fixed, without requiring any other operation during the optimization process.
3. (W) The selection of an individual involves the use of two different probabilistic decisions: one for selecting the group and another for extracting the individual inside the group. This fact can be understood as a disadvantage considering that all other selection methods use only one probabilistic determination.
4. (W) The proposed approach incorporates a configuration parameter m which specifies the number of groups. Its definition can be considered a disadvantage, since all the other selection methods operate automatically without specifying an extra parameter. (S) On the other hand, its use allows modifying the association between elitism and diversity of the selection strategy. Under such conditions, it can be employed to fulfill the exploration and exploitation requirements of specific optimization problems.

5. Experimental results

In this paper, the proposed method based on the GS is used to select individuals from a population. The method is also evalu-


Fig. 7. Division process of the probability space considering (a) two, (b) three and (c) four sections.

ated in comparison with other selection approaches popularly employed in evolutionary algorithms. In the experiments, we have applied the GS method to different selection schemes, whereas its results are also compared to those produced by the roulette method (Holland, 1975) and the tournament selection approach (Blickle et al., 1995; Clark et al., 1999). In the case of the roulette method, a positive constant is added to each fitness value in order to avoid negative probabilities. On the other hand, the tournament selection approach is set with q = 2. This configuration is considered the most popular for selection purposes.

The experimental results are divided into four sub-sections. In the first Section (5.1), the performance of the selection methods is compared regarding solution quality and convergence time. In the second Section (5.2), the takeover time index is used to characterize the selective pressure of each method. Then, in the third Section (5.3), the methods are analyzed using the concepts of differential selection and response to selection. Finally, in the fourth Section (5.4), an analysis of the experimental results is presented.

5.1. Solution quality and convergence

In this section, the performance of the selection methods regarding solution quality and convergence time is compared. In the comparisons, a generic Genetic Algorithm (GA) is employed to solve several optimization problems represented by a set of benchmark functions. In the experiments, each selection method is used as the selection operator in a genetic algorithm whereas its operations of crossover and mutation remain without change.

The employed GA is set according to the following parameters: the population size (N) is 100, the crossover probability is 0.60, and the mutation probability is 0.10. According to (Hamzaçebi, 2008), such values present an acceptable performance. As termination criterion, the maximum number of iterations is considered, which has been set to 500 (gen = 500). This stop criterion has been selected to maintain compatibility with similar works reported in the literature (Shilane, Martikainen, Dudoit, & Ovaska, 2008).

In the comparison, a representative set of 7 functions, collected from Refs. (Ali et al., 1995; Chelouah & Siarry, 2000), has been considered to test the performance of the proposed selection method. Table 3 presents the test functions employed in the simulations. In Table 3, d represents the dimension of the function, f(x*) characterizes the minimum value of the function at location x*, and S is the search space. In the test, all functions are operated in 30 dimensions (d = 30).

In the simulations, two versions of the proposed GS method are considered: with 3 sections (GS3) and with 4 sections (GS4). Such techniques are compared along with the roulette method (Holland, 1975) and the tournament selection approach (Blickle et al., 1995), considering the functions from Table 3. To minimize the stochastic effect of the results, each benchmark function is executed independently 30 times. The results, presented in Table 4, report the averaged best fitness values f̄ and the standard deviations (σf) obtained through the 30 executions.

The best results in Table 4 are highlighted. According to Table 4, the approach GS3 provides better performance than GS4, tournament, and roulette for all functions. These performance differences are directly related to a better trade-off between elitism and diversity produced by the formulated selection method. Fig. 8 graphically presents the evolution of the average results of each method for functions (a) f1, (b) f3, (c) f4 and (d) f7. With the visualization of these graphs, the objective is to evaluate the velocity with which a compared method reaches the optimum. An analysis of Fig. 8 shows that the proposed methods with 3 or 4 sections are faster in reaching their optimal fitness values than the tournament and roulette techniques.

The comparison of the final fitness values cannot completely describe the performance of a selection algorithm. Therefore, in this part, a convergence test on the four compared algorithms has been conducted. In the experiments, the convergence time t is assessed as follows:

t = min(t_b, gen),    (6)
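As a sketch, the measure in Eq. (6) can be obtained from the best-fitness history of a single run. The history list, the minimization assumption and the tolerance below are ours, introduced only for illustration:

```python
def convergence_time(best_history, gen=500, tol=1e-12):
    """Eq. (6): t = min(tb, gen), where tb is the first generation at
    which the best fitness value of the run (minimization) was reached."""
    final_best = min(best_history)
    tb = next(k for k, f in enumerate(best_history, start=1)
              if abs(f - final_best) <= tol)
    return min(tb, gen)
```

For instance, a run whose best value stops improving at generation 3 yields t = 3.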


Table 3
Benchmark functions used in the experimental study.

f1(x) = -20 exp(-0.2 √((1/d) Σ_{i=1}^{d} x_i²)) - exp((1/d) Σ_{i=1}^{d} cos(2π x_i)) + 20 + e
  S = [-32, 32]^d; d = 30; x* = (0, …, 0); f(x*) = 0

f2(x) = (1/4000) Σ_{i=1}^{d} x_i² - Π_{i=1}^{d} cos(x_i/√i) + 1
  S = [-600, 600]^d; d = 30; x* = (0, …, 0); f(x*) = 0

f3(x) = 0.1 { sin²(3π x_1) + Σ_{i=1}^{d-1} (x_i - 1)² [1 + sin²(3π x_{i+1})] + (x_d - 1)² [1 + sin²(2π x_d)] } + Σ_{i=1}^{d} u(x_i, 5, 100, 4),
  where u(x_i, a, k, m) = k(x_i - a)^m if x_i > a; 0 if -a ≤ x_i ≤ a; k(-x_i - a)^m if x_i < -a
  S = [-10, 10]^d; d = 30; x* = (1, …, 1); f(x*) = 0

f4(x) = -Σ_{i=1}^{d} sin(x_i) sin^{20}(i x_i²/π)
  S = [0, π]^d; d = 30

f5(x) = Σ_{i=1}^{d} ( Σ_{j=1}^{d} (j^i + β) ((x_j/j)^i - 1) )²
  S = [-d, d]^d; d = 30; x* = (1, 2, …, d); f(x*) = 0

f6(x) = Σ_{i=1}^{d-1} [ 100 (x_{i+1} - x_i²)² + (x_i - 1)² ]
  S = [-30, 30]^d; d = 30; x* = (1, …, 1); f(x*) = 0

f7(x) = Σ_{i=1}^{d} Σ_{j=1}^{i} x_j²
  S = [-65.536, 65.536]^d; d = 30; x* = (0, …, 0); f(x*) = 0
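Two of the benchmarks in Table 3 can be coded directly from their definitions. The Python versions below are our sketch of the valley-shaped function f6 and the nested-sum function f7:

```python
def f6(x):
    # f6(x) = sum_{i=1}^{d-1} [100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2],
    # minimum f(x*) = 0 at x* = (1, ..., 1)
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1.0) ** 2
               for i in range(len(x) - 1))

def f7(x):
    # f7(x) = sum_{i=1}^{d} sum_{j=1}^{i} x_j^2,
    # minimum f(x*) = 0 at the origin
    return sum(sum(xj ** 2 for xj in x[:i + 1]) for i in range(len(x)))
```

Both functions accept any sequence of d real coordinates, so the 30-dimensional setting of the experiments corresponds to lists of length 30.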

Table 4
Comparison of solution quality among the selection methods, reported as f̄(σf) over 30 runs.

f2   0.0159(0.0522)    0.0011(0.0017)    0.0174(0.0141)    0.0274(0.0047)
f3   0.6202(0.1505)    0.2874(0.1336)    0.6284(0.2410)    0.7414(0.0892)
f4   -23.5395(1.2346)  -28.3944(0.6200)  -22.8714(1.1017)  -20.7483(1.8747)
f5   0.0213(0.0452)    0.0057(0.0002)    0.0317(0.0107)    0.04981(0.0271)
f6   0.3261(0.0341)    0.08512(0.0075)   0.3514(0.0711)    0.4101(0.0751)
f7   0.0431(0.0032)    0.0006(0.0001)    0.0482(0.0085)    0.0541(0.0620)

where gen represents the maximum number of allowed generations and t_b is the generation number at which the best fitness value has been found.

In the simulations, the algorithms are executed 30 times over the seven functions. Under such conditions, 210 different executions are conducted for each selection method. To evaluate the convergence of the selection methods, a study of the produced convergence times is conducted. In the study, the frequency of the resulting 210 convergence times is analyzed by using histograms. Fig. 9 presents the resulting histograms for (a) GS4, (b) GS3, (c) Tournament and (d) Roulette. In each histogram, the complete number of generations gen is divided into intervals of approximately 50 iterations. For the sake of clarity, the intervals that do not contain more than ten events have been deleted from the histogram. Fig. 9(a) exhibits the convergence time distribution for the case of the GS with four sections. According to this figure, the frequency peak is reached in the interval from 225 to 275 generations whereas the average convergence time is approximately located at 290 iterations. Fig. 9(b) presents the convergence time distribution for the case of the GS with three sections. An analysis of this histogram shows that the maximal frequency is attained in the interval from 170 to 210 generations whereas the average convergence time is localized at 160 iterations. On the other hand, Fig. 9(c) and (d) show the convergence time distributions for the case of the Tournament and Roulette methods. From the figures, it is clear that both methods present average convergence times comparatively higher than those produced by the GS approaches.

5.2. Takeover time

The concept of takeover time, introduced by Goldberg and Deb (1991), has been used to evaluate the selective pressure and convergence properties of a selection method in the absence of any other evolutionary operation. Assuming a population Pk ({p_1^k, p_2^k, …, p_N^k}) of N candidate solutions with a single best element p_best, the takeover time is defined as the number of generations G_T spent by a selection method until the rest of the N-1 individuals consist of copies of p_best.

Therefore, considering that E(k) represents the number of copies of p_best in Pk at the k-th iteration:

E(k) = |{ p_Ne^k | p_Ne^k = p_best, Ne = 1, …, N }|    (7)

The takeover time G_T is calculated as follows:

G_T = min { k | E(k) = N }    (8)

Small G_T values define a high selective pressure while large values characterize a low selective pressure of the search strategy.

To evaluate the takeover time of each selection method, an experiment has been conducted. In the test, a population Pk of 400 (N = 400) individuals is randomly initialized, containing a single best individual p_best. Then, the selection method is applied to the population whereas the empirical takeover times G_T are registered. It is important to remark that the process does not consider the effect of any other evolutionary operation such as crossover and mutation.

In the simulations, two versions of the proposed GS method are considered: with 3 sections (GS3) and with 4 sections (GS4). Such techniques are compared along with the roulette method (Holland, 1975) and the tournament selection approach (Blickle et al., 1995). In Table 5, the takeover times produced by each selection method are presented. The table also reports the execution time at which G_T takes place. To minimize the stochastic effect, the results consider the average values obtained from 30 executions of each method. An analysis of


Fig. 8. Fitness average results obtained during the evolution process for functions (a) f1, (b) f3, (c) f4 and (d) f7.

Fig. 9. Resulting histograms for (a) GS4, (b) GS3, (c) Tournament and (d) Roulette.


Table 5
Takeover times produced by each selection method.

Method       G_T     Execution time
GS3          5.2     2.12
Tournament   14.8    8.62
Roulette     33.4    12.52

Table 5 suggests that the GS3 method maintains the best performance in comparison to the other selection methods. From the results, it is clear that the Tournament and Roulette methods present the worst indexes regarding their takeover and execution times.

Fig. 10 shows the comparative dynamics of E(k) of each method through the takeover time test. The curves represent the averaged values obtained from a set of 30 independent executions. Since the takeover times of all approaches are located within the first 50 generations, Fig. 10 shows only 50 generations from the 500 used in the evolution process. From Fig. 10, it is clear that the GS3 method is the fastest in filling Pk with p_best (around 5 generations). On the other hand, the Roulette and Tournament approaches obtain the worst performance.
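The takeover experiment of this subsection can be simulated with a short script. The sketch below is ours: the fitness encoding (0 marks the single best individual, 1 the rest), the default population size and the illustrative binary-tournament operator are assumptions, and any selection operator with the same signature can be plugged in.

```python
import random

def takeover_time(select, n=400, max_gen=1000, rng=random):
    """Apply only the selection operator generation after generation and
    return G_T, the first generation at which E(k) = N (Eqs. (7)-(8))."""
    pop = [0] + [1] * (n - 1)          # value 0 marks the unique best element
    for k in range(1, max_gen + 1):
        pop = [select(pop, rng) for _ in range(n)]
        if all(v == 0 for v in pop):   # every individual is a copy of p_best
            return k
    return max_gen                     # takeover not reached (e.g. best lost)

def tournament2(pop, rng):
    # binary tournament (q = 2): the better (lower value) of two random picks
    return min(rng.choice(pop), rng.choice(pop))
```

Calling takeover_time(tournament2) gives one empirical G_T for binary tournament; averaging over repeated runs yields figures of the kind reported in Table 5.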


Fig. 10. Comparative dynamics of E(k) of each method through the takeover time test. For the sake of visualization, only 50 generations from 500 are shown.

5.3. Response to selection

Selection methods invariably use the fitness value of each individual to choose new elements from the population. Therefore, the state of a population Pk at the iteration k can be approximately described by the mean fitness value M(k) of its individuals. Response to selection is an additional way to characterize the performance of a selection method, suggested by Mühlenbein and Schlierkamp-Voosen (1993). The response to selection R(k) is defined as the population mean fitness difference between two contiguous generations (R(k) = M(k + 1) - M(k)). In general, R(k) provides an alternative way to measure the progress of the population during the optimization process. Under such conditions, high positive values of R(k) express strong improvements in the population whereas values near zero indicate marginal improvements.

To produce R(k), each selection method is used as the selection operator in a genetic algorithm whereas its operations of crossover and mutation remain without change. This GA is set according to the following parameters: the population size (N) is 100, the crossover probability is 0.60, and the mutation probability is 0.10. According to (Hamzaçebi, 2008), such values present an acceptable performance. As termination criterion, the maximum number of iterations is also considered, which has been set to 500 (gen = 500).

Fig. 11. Response to selection R(k) values during the optimization of function f3.

Fig. 11 exhibits the response to selection R(k) values during the optimization of function f3. Such results consider the average values from 20 different executions. An analysis of Fig. 11 indicates that the methods GS3 and GS4 maintain high values of R(k) as a consequence of a better population improvement in comparison to the tournament and roulette schemes. Such improvements take place in the first generations as a result of their better convergence properties. On the other hand, the tournament and roulette methods present several oscillations as a result of an inappropriate balance between elitism and diversity during the optimization process.

5.4. Analysis of the experimental results

Considering the behavior of each method in the simulations, we can express the following conclusions:

1. The approach GS3 provides better performance than the GS4, tournament and roulette approaches for all optimization problems. Such performance differences are directly related to a better trade-off between elitism and diversity produced by the division of the population into three segments.

2. The results of the convergence time for the case of the GS approach with four sections show that its frequency peak is reached in the interval from 225 to 275 generations whereas the average convergence time is approximately located at 290 iterations. For the case of the GS with three sections, the maximal frequency is attained in the interval from 170 to 210 generations whereas the average convergence time is localized at 160 iterations. On the other hand, for the case of the Tournament and Roulette methods, it is clear that both present worse convergence times than those produced by the GS. These results exhibit that the proposed selection method improves the performance of the search strategy. This enhancement is attributed to the selection of candidate solutions which allows increasing the diversity of the population during the optimization process.

3. With regard to the takeover time, the results suggest that the GS3 method maintains the best performance in comparison to the other selection methods. From the results, it is clear that the Tournament and Roulette methods present the worst indexes regarding their takeover and execution times. The takeover time is an index extensively used to evaluate the performance and convergence properties of a selection method. Under this perspective, the proposed approach demonstrates that its selection strategy can identify the best element of the population and assign it an appropriate probability for proliferating.

4. The results of the response index R(k) indicate that the proposed methods obtain the highest values as a consequence of a better population improvement in comparison to the other selection strategies. Such improvements are more evident in the first generations as a result of their better convergence properties.

6. Conclusions

In this paper, a new selection method for evolutionary computation algorithms is introduced. In the proposed approach, the population is segmented into several groups. Each group involves a certain number of individuals and a probability of being selected, which are determined according to the GS proportion. Therefore, the individuals are divided into categories where each group contains individuals of similar quality regarding their fitness values. Since the possibility of choosing an element inside a group is the same for all its members, the probability of selecting an individual depends exclusively on the group to which it belongs. Under these conditions, the proposed approach defines a better balance between elitism and diversity of the selection strategy.


To show the performance of the proposed approach, several experiments have been conducted. In the study, the method has been compared to other selection methods such as the tournament and roulette schemes. The results demonstrate that the proposed GS method outperforms the other algorithms in most of the experiments in terms of solution quality and convergence properties.

In general, the main contributions of this paper can be summarized as follows: the design of a new selection technique based on the GS that defines a better balance between elitism and diversity in comparison to other selection methods currently in use; its effective implementation, which maintains a simple structure and fast performance; the integration of an experimental scheme for the evaluation of selection methods that includes benchmark functions and consistent performance indexes; and the application of the proposed selection method to improve the search strategy of Genetic Algorithms (GA).

This paper represents an attempt to enhance the performance of evolutionary algorithms through the modification of their selection strategy, and much work remains as future research to improve the proposed selection method. The main directions that deserve further research include:

• To characterize the performance of the proposed method in other evolutionary approaches. In this paper, extensive work has been conducted to evaluate the GS selection strategy in Genetic Algorithms (GA). Further work is necessary to prove the capacities of the proposed technique in other evolutionary schemes such as Evolution Strategies and Genetic Programming. With the inclusion of the GS selection method in such algorithms, they could substantially improve their performance as search strategies.

• To examine the abilities of the GS selection technique in dynamic optimization problems. It is well known that dynamic optimization problems require an appropriate diversity of solutions due to the constant changes in the objective function. Under such conditions, the proposed method could provide good results through the definition of an adequate number of groups m.

• To design a mechanism for the adaptation of the proposed GS method during the evolution. In this paper, the selection strategy maintains its operation without changes during the optimization process. However, a further step is for the selection strategy to experience changes in the association between elitism and diversity as required by the optimization strategy.

• To define new performance indexes that allow correctly evaluating the capacities of a selection method. The takeover time and the response index R(k) do not directly reflect the effect of the selection method on the search strategy.

References

Ali, M., Khompatraporn, C., & Zabinsky, Z. (1995). A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems. Journal of Global Optimization, 31(4), 635–672.
Bäck, T. (1992). The interaction of mutation rate, selection, and self-adaptation within a genetic algorithm. PPSN, 87–96.
Bäck, T. (1994). Selective pressure in evolutionary algorithms: A characterization of selection mechanisms. In International conference on evolutionary computation (pp. 57–62).
Bäck, T., & Hoffmeister, F. (1991). Extended selection mechanisms in genetic algorithms. ICGA, 92–99.
Baker, J. (1987). Reducing bias and inefficiency in the selection algorithm. In J. J. Grefenstette (Ed.), Proceedings of the second international conference on genetic algorithms (pp. 14–21). Hillsdale, NJ: L. Erlbaum Associates Inc.
Benavoli, A., & Chisci, L. F. (2009). Fibonacci sequence, Golden Section, Kalman filter and optimal control. Signal Processing, 89, 1483–1488.
Blickle, T., & Thiele, L. (1995). A mathematical analysis of tournament selection. In L. J. Eshelman (Ed.), Proceedings of the 6th international conference on genetic algorithms (pp. 9–16). San Francisco, CA: Morgan Kaufmann Publishers Inc.
Blickle, T., & Thiele, L. (1996). A comparison of selection schemes used in evolutionary algorithms. Evolutionary Computation, 4(4), 361–394.
Bliss, P. M., & Brown, D. A. (2009). Geometric properties of three-dimensional fractal trees. Chaos, Solitons and Fractals, 42, 119–124.
Miller, B. L., & Goldberg, D. (1995). Genetic algorithms, tournament selection and the effects of noise. Complex Systems, 9, 193–212.
Chelouah, R., & Siarry, P. (2000). A continuous genetic algorithm designed for the global optimization of multimodal functions. Journal of Heuristics, 6(2), 191–213.
Ciucurel, C., Georgescu, L., & Iconaru, E. (2018). ECG response to submaximal exercise from the perspective of Golden Ratio harmonic rhythm. Biomedical Signal Processing and Control, 40, 156–162.
Clark, G., Kok, R., & Lacroix, R. (1999). Mind and autonomy in engineered biosystems. Engineering Applications of Artificial Intelligence, 12, 389–399.
Coldea, R., Tennant, D., Wheeler, E., Wawrzynska, E., Prabhakaran, D., Telling, D., et al. (2010). Quantum criticality in an Ising chain: Experimental evidence for emergent E8 symmetry. Science, 327(5962), 177–180.
Darrell, L. W. (1989). The GENITOR algorithm and selection pressure: Why rank-based allocation of reproductive trials is best. In J. D. Schaffer (Ed.), Proceedings of the 3rd international conference on genetic algorithms (pp. 116–123). San Francisco, CA: Morgan Kaufmann Publishers Inc.
De Jong, K. (2006). Evolutionary computation: A unified approach. Cambridge, MA: MIT Press.
De la Maza, M., & Tidor, B. (1993). An analysis of selection procedures with particular attention paid to proportional and Boltzmann selection. In Proceedings of the 5th international conference on genetic algorithms (pp. 124–131).
Dunlap, R. A. (1997). The golden ratio and Fibonacci numbers. Singapore: World Scientific Publishing Co. Pte. Ltd.
Esmaeili, M., Gulliver, T. A., & Kakhbod, A. (2009). The Golden mean, Fibonacci matrices and partial weakly super-increasing sources. Chaos, Solitons and Fractals, 42, 435–440.
Goldberg, D. E. (1989). Genetic algorithms in search, optimization and machine learning. Addison-Wesley.
Goldberg, D. E., & Deb, D. (1991). A comparative analysis of selection schemes used in genetic algorithms. In G. J. E. Rawlins (Ed.), Foundations of genetic algorithms (pp. 69–93).
Grefenstette, J., & Baker, J. (1989). How genetic algorithms work: A critical look at implicit parallelism. In D. Schaffer (Ed.), Proceedings of the third international conference on genetic algorithms (pp. 20–27). Morgan Kaufmann Publishers.
Hamzaçebi, C. (2008). Improving genetic algorithms' performance by local search for continuous function optimization. Applied Mathematics and Computation, 196(1), 309–317.
Hancock, P. (1994). An empirical comparison of selection methods in evolutionary algorithms. In T. C. Fogarty (Ed.), Evolutionary computing: AISB workshop, Leeds, U.K., April 11–13, 1994, selected papers (pp. 80–94). Berlin Heidelberg: Springer.
Holland, J. (1975). Adaptation in natural and artificial systems. Ann Arbor, MI: The University of Michigan Press.
Iosa, M., Morone, G., & Paolucci, S. (2018). Phi in physiology, psychology and biomechanics: The golden ratio between myth and science. BioSystems, 165, 31–39.
Jackson, P. (1998). Introduction to expert systems (3rd ed.). Addison Wesley. ISBN 978-0-201-87686-4.
Julian, F. V. V., Olga, A. B., Nikolaj, R. B., Bowyer, A., & Pahl, A. K. (2006). Biomimetics: Its practice and theory. Journal of The Royal Society Interface, 3, 471–482.
Kenneth, F. (2003). Fractal geometry: Mathematical foundations and applications. John Wiley & Sons. ISBN 0-470-84862-6.
Shekhawat, K. (2015). Why golden rectangle is used so often by architects: A mathematical approach. Alexandria Engineering Journal, 54, 213–222.
Li, F. Z., Zhou, C. X., He, R., Xu, Y., & Yan, M. L. (2015). A novel fitness allocation algorithm for maintaining a constant selective pressure during GA procedure. Neurocomputing, 148, 3–16.
Loai, M. D. (2012). Geometric proportions: The underlying structure of design process for Islamic geometric patterns. Frontiers of Architectural Research, 1, 380–391.
Lu, Y. (2003). A Golden Section approach to optimization of automotive friction materials. Journal of Materials Science, 38(5), 1081–1085.
Mandelbrot, B. B. (1982). The fractal geometry of nature. New York: W. H. Freeman and Company.
Mühlenbein, H., & Schlierkamp-Voosen, D. (1993). Predictive models for the breeder genetic algorithm: Continuous parameter optimization. Evolutionary Computation, 1(1), 25–49.
Nandar, L., & Ponnuthurai, N. S. (2015). Heterogeneous comprehensive learning particle swarm optimization with enhanced exploration and exploitation. Swarm and Evolutionary Computation, 24, 11–24.
Newell, A. C., & Pennybacker, M. (2013). Fibonacci patterns: Common or rare? Procedia IUTAM, 9, 86–109.
Noraini, M. R., & Geraghty, J. (2011). Genetic algorithm performance with different selection strategies in solving TSP. In Proceedings of the World Congress on Engineering 2011, Vol. II, WCE 2011, July 6–8.
Pascal, C., Vassiliev, S., Houghten, S., & Bruce, D. (2011). Genetic algorithm with alternating selection pressure for protein side-chain packing and pK(a) prediction. Biosystems, 105(3), 263–270.
Ramírez, J. L., Rubiano, G. N., & De Castro, R. (2014). A generalization of the Fibonacci word fractal and the Fibonacci snowflake. Theoretical Computer Science, 528, 40–56.
Shilane, D., Martikainen, J., Dudoit, S., & Ovaska, S. (2008). A general framework

196 E. Cuevas et al. / Expert Systems With Applications 106 (2018) 183–196

for statistical performance comparison of evolutionary computation algorithms. Sonya, Q., & William, G. (2010). Bionics—An inspiration for intelligent manufacturing

Information Sciences, 178, 2870–2879. and engineering. Robotics and Computer-Integrated Manufacturing, 26, 616–621.

Sigalotti, L., & Mejias, A. (2006). The golden ratio in special relativity, Chaos,. Solitons Walser, H. (2001). The Golden Section. The Mathematical Association of America.

and Fractals, 30, 521–524.
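The takeover time mentioned in the last point can be measured empirically: apply the selection operator by itself, with no crossover or mutation, to a random population and count the generations until copies of a single individual fill the whole population. The following Python sketch is a generic illustration using tournament selection; the function name and parameters are ours, not the paper's, and the paper's exact index definitions may differ.

```python
import random

def takeover_time(pop_size=100, tournament_k=2, seed=0, max_gens=10_000):
    """Run k-ary tournament selection alone (no variation operators)
    and count the generations until the population becomes homogeneous,
    i.e., until one individual has taken over."""
    rng = random.Random(seed)
    # Distinct random fitness values; selection should spread the best one.
    pop = [rng.random() for _ in range(pop_size)]
    for gen in range(1, max_gens + 1):
        # Each offspring is the winner of a k-ary tournament.
        pop = [max(rng.sample(pop, tournament_k)) for _ in range(pop_size)]
        if len(set(pop)) == 1:   # population fully taken over
            return gen
    return None                  # no takeover within max_gens

# Larger tournaments exert stronger selective pressure,
# which should tend to shorten the takeover time.
print(takeover_time(tournament_k=2), takeover_time(tournament_k=4))
```

Comparing the returned generation counts for different tournament sizes gives a direct, if crude, measure of selective pressure; the paper's point is that such classical indexes do not fully reflect how the selection method shapes the search strategy.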
