
Nontraditional Optimization
Algorithms

This chapter describes two nontraditional search and optimization
methods that have become popular in engineering optimization
problems in recent years. These algorithms are included in
this book not because they are new but because they have proved
to be potential search and optimization algorithms for complex
engineering optimization problems. Genetic algorithms (GAs) mimic
the principles of natural genetics and natural selection to constitute
search and optimization procedures. Simulated annealing mimics
the cooling phenomenon of molten metals to constitute a search
procedure. Since both these algorithms are abstractions of
natural phenomena, they are very different search methods from
those described in the previous chapters. We describe genetic
algorithms first, followed by the simulated annealing procedure.

6.1 Genetic Algorithms

Genetic algorithms are computerized search and optimization


algorithms based on the mechanics of natural genetics and natural
selection. Professor John Holland of the University of Michigan,
Ann Arbor envisaged the concept of these algorithms in the mid-
sixties and published his seminal work (Holland, 1975). Thereafter,
a number of his students and other researchers have contributed
to developing this field. To date, most of the GA studies are
available through a few books (Davis, 1991; Goldberg, 1989; Holland,
1975; Michalewicz, 1992) and through a number of international
conference proceedings (Belew and Booker, 1991; Forrest, 1993;
Grefenstette, 1985, 1987; Rawlins, 1991; Schaffer, 1989; Whitley,
1993). An extensive list of GA-related papers is referenced elsewhere
(Goldberg, et al., 1992). GAs are fundamentally different from the
classical optimization algorithms we have discussed in Chapters 2
through 5. We begin the discussion of GAs by first outlining the
working principles of GAs and then highlighting the differences GAs
have from the traditional search methods. Thereafter, we show a
computer simulation to illustrate the working of GAs.

6.1.1 Working principles

To illustrate the working principles of GAs, we first consider an
unconstrained optimization problem. Later, we shall discuss how
GAs can be used to solve a constrained optimization problem. Let
us consider the following maximization problem:

    Maximize f(x),    x_i^(L) ≤ x_i ≤ x_i^(U),    i = 1, 2, ..., N.

Although a maximization problem is considered here, a minimization
problem can also be handled using GAs. The working of GAs
involves the following tasks:

Coding

In order to use GAs to solve the above problem, the variables x_i are
first coded in some string structures. It is important to mention
here that the coding of the variables is not absolutely necessary.
There exist some studies where GAs are used directly on the variables
themselves, but here we shall ignore the exceptions and discuss the
working principle of a simple genetic algorithm. Binary-coded strings
having 1's and 0's are mostly used. The length of the string is usually
determined according to the desired solution accuracy. For example,
if four bits are used to code each variable in a two-variable function
optimization problem, the strings (0000 0000) and (1111 1111)
would represent the points

    (x_1^(L), x_2^(L))^T    and    (x_1^(U), x_2^(U))^T,

respectively, because the substrings (0000) and (1111) have the
minimum and the maximum decoded values. Any other eight-bit
string can be found to represent a point in the search space according
to a fixed mapping rule. Usually, the following linear mapping rule
is used:

    x_i = x_i^(L) + [(x_i^(U) - x_i^(L)) / (2^{l_i} - 1)] × (decoded value of s_i).    (6.1)

In the above equation, the variable x_i is coded in a substring s_i of
length l_i. The decoded value of a binary substring s_i is calculated
as Σ_{j=0}^{l_i - 1} 2^j s_j, where s_j ∈ (0, 1) and the string s is represented as
(s_{l-1} s_{l-2} ... s_2 s_1 s_0). For example, a four-bit string (0111) has a
decoded value equal to ((1)2^0 + (1)2^1 + (1)2^2 + (0)2^3) or 7. It is
worthwhile to mention here that with four bits to code each variable,
there are only 2^4 or 16 distinct substrings possible, because each
bit-position can take a value of either 0 or 1. The accuracy that can
be obtained with a four-bit coding is only approximately 1/16th of
the search space. But as the string length is increased by one, the
obtainable accuracy increases exponentially to 1/32nd of the search
space. It is not necessary to code all variables in equal substring
lengths. The length of a substring representing a variable depends
on the desired accuracy in that variable. Generalizing this concept,
we may say that with an l_i-bit coding for a variable, the obtainable
accuracy in that variable is approximately (x_i^(U) - x_i^(L))/2^{l_i}. Once
the coding of the variables has been done, the corresponding point
x = (x_1, x_2, ..., x_N)^T can be found using Equation (6.1). Thereafter,
the function value at the point x can also be calculated by
substituting x in the given objective function f(x).

Fitness function

As pointed out earlier, GAs mimic the survival-of-the-fittest principle
of nature to make a search process. Therefore, GAs are naturally
suitable for solving maximization problems. Minimization problems
are usually transformed into maximization problems by some
suitable transformation. In general, a fitness function F(x) is
first derived from the objective function and used in successive
genetic operations. Certain genetic operators require that the fitness
function be nonnegative, although certain operators do not have this
requirement. For maximization problems, the fitness function can be
considered to be the same as the objective function, or F(x) = f(x).
For minimization problems, the fitness function is an equivalent
maximization problem chosen such that the optimum point remains
unchanged. A number of such transformations are possible. The
following fitness function is often used:

    F(x) = 1/(1 + f(x)).    (6.2)

This transformation does not alter the location of the minimum,
but converts a minimization problem to an equivalent maximization
problem. The fitness function value of a string is known as the
string's fitness.

The operation of GAs begins with a population of random strings
representing the design or decision variables. Thereafter, each string
is evaluated to find its fitness value. The population is then operated
by three main operators (reproduction, crossover, and mutation) to
create a new population of points. The new population is further
evaluated and tested for termination. If the termination criterion is
not met, the population is iteratively operated by the above three
operators and evaluated. This procedure is continued until the
termination criterion is met. One cycle of these operations and the
subsequent evaluation procedure is known as a generation in GA
terminology. The operators are described next.

GA operators

Reproduction is usually the first operator applied on a population.
Reproduction selects good strings in a population and forms a mating
pool. That is why the reproduction operator is sometimes known
as the selection operator. There exist a number of reproduction
operators in the GA literature, but the essential idea in all of them is
that the above-average strings are picked from the current population
and their multiple copies are inserted in the mating pool in a
probabilistic manner. The commonly used reproduction operator is
the proportionate reproduction operator, where a string is selected
for the mating pool with a probability proportional to its fitness.
Thus, the i-th string in the population is selected with a probability
proportional to F_i. Since the population size is usually kept fixed in
a simple GA, the sum of the probabilities of each string being selected
for the mating pool must be one. Therefore, the probability for
selecting the i-th string is

    p_i = F_i / (Σ_{i=1}^{n} F_i),

where n is the population size. One way to implement this selection
scheme is to imagine a roulette-wheel with its circumference marked
for each string proportionate to the string's fitness. The roulette-
wheel is spun n times, each time selecting an instance of the string
chosen by the roulette-wheel pointer. Since the circumference of the
wheel is marked according to a string's fitness, this roulette-wheel
mechanism is expected to make F_i/F̄ copies of the i-th string in the
mating pool.
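As a concrete sketch (my own illustration, not from the text), the decoding rule of Equation (6.1) and the minimization-to-maximization transformation of Equation (6.2) can be written in a few lines of Python; the function names are hypothetical:

```python
def decode(substring, x_min, x_max):
    """Map a binary substring (most significant bit first) to a real
    value in [x_min, x_max] using the linear rule of Equation (6.1)."""
    value = int(substring, 2)                 # decoded integer value
    return x_min + (x_max - x_min) * value / (2**len(substring) - 1)

def fitness(f_value):
    """Equation (6.2): fitness of a string for a minimization problem."""
    return 1.0 / (1.0 + f_value)

# A four-bit example: (0000) decodes to the lower bound, (1111) to the upper.
print(decode("0000", 0.0, 6.0))   # lower bound, 0.0
print(decode("1111", 0.0, 6.0))   # upper bound, 6.0
print(decode("0111", 0.0, 6.0))   # 7/15 of the range, 2.8
```

With four bits there are only 16 distinct values, matching the 1/16th-of-the-search-space accuracy noted above.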
The average fitness of the population is calculated as

    F̄ = (Σ_{i=1}^{n} F_i) / n.

Figure 6.1 shows a roulette-wheel for five individuals having different
fitness values:

    Point    Fitness
      1        25.0
      2         5.0
      3        40.0
      4        10.0
      5        20.0

Figure 6.1 A roulette-wheel marked for five individuals according
to their fitness values. The third individual has a higher probability
of selection than any other.

Since the third individual has a higher fitness value than any other,
it is expected that the roulette-wheel selection will choose the third
individual more often than any other individual. This roulette-wheel
selection scheme can be simulated easily. Using the fitness value F_i
of all strings, the probability of selecting a string p_i can be
calculated. Thereafter, the cumulative probability (P_i) of each
string being copied can be calculated by adding the individual
probabilities from the top of the list. Thus, the bottom-most string
in the population should have a cumulative probability (P_n) equal to
1. The roulette-wheel concept can be simulated by realizing that the
i-th string in the population represents the cumulative probability
values from P_{i-1} to P_i. The first string represents the cumulative
values from zero to P_1. Thus, the cumulative probability of any string
lies between 0 and 1. In order to choose n strings, n random numbers
between zero and one are created at random. A string that
represents the chosen random number in the cumulative probability
range (calculated from the fitness values) is copied
to the mating pool. This way, a string with a higher fitness
value will represent a larger range in the cumulative probability
values and therefore has a higher probability of being copied into the
mating pool. On the other hand, a string with a smaller fitness
value represents a smaller range in cumulative probability values and
has a smaller probability of being copied into the mating pool. We
illustrate the working of this roulette-wheel simulation later through
a computer simulation of GAs.

In reproduction, good strings in a population are probabilistically
assigned a larger number of copies and a mating pool is formed.
It is important to note that no new strings are formed in the
reproduction phase. In the crossover operator, new strings are
created by exchanging information among strings of the mating
pool. Many crossover operators exist in the GA literature. In most
crossover operators, two strings are picked from the mating pool at
random and some portions of the strings are exchanged between the
strings. A single-point crossover operator is performed by randomly
choosing a crossing site along the string and by exchanging all bits
on the right side of the crossing site as shown:

    0 0 0 | 0 0 0          0 0 0 | 1 1 1
                     ⇒
    1 1 1 | 1 1 1          1 1 1 | 0 0 0

The two strings participating in the crossover operation are known
as parent strings and the resulting strings are known as children
strings. It is intuitive from this construction that good substrings
from parent strings can be combined to form a better child string, if
an appropriate site is chosen. Since the knowledge of an appropriate
site is usually not known beforehand, a random site is often chosen.
With a random site, the children strings produced may or may not
have a combination of good substrings from parent strings, depending
on whether or not the crossing site falls in the appropriate place. But
we do not worry about this too much, because if good strings are
created by crossover, there will be more copies of them in the next
mating pool generated by the reproduction operator. But if good
strings are not created by crossover, they will not survive too long,
because reproduction will select against those strings in subsequent
generations.

It is clear from this discussion that the effect of crossover may be
detrimental or beneficial. Thus, in order to preserve some of the good
strings that are already present in the mating pool, not all strings in
the mating pool are used in crossover. When a crossover probability
of p_c is used, only 100p_c per cent of the strings in the population are
used in the crossover operation and 100(1 - p_c) per cent of the
population remains as it is in the current population¹.
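The cumulative-probability simulation of the roulette-wheel described above can be sketched as follows (an illustrative sketch; the function name and the use of Python's random module are my own choices, not the book's):

```python
import random

def roulette_wheel_select(population, fitnesses, rng=random):
    """Spin the roulette-wheel once: pick the string whose cumulative
    probability range [P_{i-1}, P_i) contains a uniform random number."""
    total = sum(fitnesses)
    r = rng.random()                  # a random number between 0 and 1
    cumulative = 0.0
    for string, fit in zip(population, fitnesses):
        cumulative += fit / total     # P_i, the cumulative probability
        if r < cumulative:
            return string
    return population[-1]             # guard against floating-point round-off

# The five individuals of Figure 6.1; the third (fitness 40.0) should be
# chosen roughly 40% of the time.
random.seed(1)
pop = [1, 2, 3, 4, 5]
fits = [25.0, 5.0, 40.0, 10.0, 20.0]
mating_pool = [roulette_wheel_select(pop, fits) for _ in range(10000)]
print(mating_pool.count(3) / 10000)   # close to 0.40
```

Spinning the wheel n times in this way builds the mating pool for one generation.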
A crossover operator is mainly responsible for the search of new
strings, even though a mutation operator is also used for this purpose
sparingly. The mutation operator changes a 1 to a 0 and vice versa
with a small mutation probability p_m. The bit-wise mutation is
performed bit by bit by flipping a coin² with a probability p_m. If
at any bit the outcome is true, the bit is altered; otherwise the bit
is kept unchanged. The need for mutation is to create a
point in the neighbourhood of the current point, thereby achieving a
local search around the current solution. The mutation is also used
to maintain diversity in the population. For example, consider the
following population having four eight-bit strings:

    0110 1011
    0011 1101
    0001 0110
    0111 1100

Notice that all four strings have a 0 in the left-most bit position. If
the true optimum solution requires a 1 in that position, then neither
the reproduction nor the crossover operator described above will be able to
create a 1 in that position. The inclusion of mutation introduces some
probability (Np_m) of turning that 0 into a 1.
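The single-point crossover and bit-wise mutation operators described above can be sketched as follows (an illustrative Python rendering; the function names are my own):

```python
import random

def single_point_crossover(parent1, parent2, rng=random):
    """Exchange all bits to the right of a randomly chosen crossing site."""
    site = rng.randint(1, len(parent1) - 1)   # crossing site
    child1 = parent1[:site] + parent2[site:]
    child2 = parent2[:site] + parent1[site:]
    return child1, child2

def bitwise_mutation(string, p_m, rng=random):
    """Flip each bit independently with a small probability p_m."""
    return "".join(
        ("1" if bit == "0" else "0") if rng.random() < p_m else bit
        for bit in string
    )

# Crossing (000000) and (111111): each child keeps one parent's prefix
# and inherits the other parent's suffix.
rng = random.Random(0)
c1, c2 = single_point_crossover("000000", "111111", rng)
print(c1, c2)
print(bitwise_mutation("0110", 0.05, rng))   # occasionally flips a bit
```

Note that a mutation probability of zero leaves every string unchanged, which is why mutation alone controls how the all-zero bit position in the example above can ever become a 1.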
These three operators are simple and straightforward. The
reproduction operator selects good strings and the crossover operator
recombines good substrings from good strings together to hopefully
create a better substring. The mutation operator alters a string
locally to hopefully create a better string. Even though none of
these claims are guaranteed or tested while creating a string, it
is expected that if bad strings are created they will be eliminated by
the reproduction operator in the next generation, and if good strings
are created, they will be increasingly emphasized. Interested readers
may refer to Goldberg (1989) and other GA literature given in the
references for further insight and some mathematical foundations of
genetic algorithms.

Here, we outline some differences and similarities of GAs with
traditional optimization methods.

¹ Even though the best (1 - p_c)100% of the current population can be copied
deterministically to the new population, this is usually performed at random.
² Flipping of a coin with a probability p is simulated as follows. A number
between 0 and 1 is chosen at random. If the random number is smaller than p,
the outcome of the coin-flipping is true; otherwise the outcome is false.

6.1.2 Differences between GAs and traditional methods

As seen from the above description of the working principles of GAs,
they are radically different from most of the traditional optimization
methods described in Chapters 2 to 4. The fundamental
differences are described in the following paragraphs.

GAs work with a string-coding of variables instead of the
variables themselves. The advantage of working with a coding of variables is
that the coding discretizes the search space, even though the function
may be continuous. On the other hand, since GAs require only
function values at various discrete points, a discrete or discontinuous
function can be handled with no extra cost. This allows GAs to be
applied to a wide variety of problems. Another advantage is that the
GA operators exploit the similarities in string-structures to make
an effective search. Let us discuss this important aspect of GAs
in somewhat more detail. A schema (pl. schemata) represents a
number of strings with similarities at certain string positions. For
example, in a five-bit problem, the schema (101**) (a '*' denotes
either a 0 or a 1) represents four strings (10100), (10101), (10110),
and (10111). In the decoded parameter space, a schema represents
a continuous or discontinuous region in the search space. Figure 6.2
shows that the above schema represents one-eighth of the search
space.

    | 000** | 001** | 010** | 011** | 100** | 101** | 110** | 111** |
    x_min                                                       x_max

Figure 6.2 A schema with three fixed positions divides the search
space into eight regions. The schema (101**) is highlighted.

Since in an ℓ-bit schema every position can take either 0, 1,
or *, there are a total of 3^ℓ schemata possible. A finite population of
size n contains only n strings, but contains many schemata. Goldberg
(1989) has shown that due to the action of the GA operators, the number
of strings m(H, t) representing a schema H at any generation t grows
to a number m(H, t + 1) in the next generation as follows:

    m(H, t+1) ≥ m(H, t) (F(H)/F̄) [1 - p_c δ(H)/(ℓ-1) - p_m o(H)],    (6.3)

where the factor multiplying m(H, t) on the right side is the growth
factor φ. Here F(H) is the fitness of the schema H calculated by
averaging the fitness of all strings representing the schema, δ(H) is the
defining length of the schema H calculated as the difference in the
outermost defined positions, and o(H) is the order of the schema H
calculated as the number of fixed positions in the schema.
For example, the schema H = 101** has a defining length equal
to δ(H) = 3 - 1 = 2 and has an order o(H) = 3. The growth factor φ
defined in the above equation can be greater than, less than, or equal
to 1 depending on the schema H and the chosen GA parameters. If
for a schema the growth factor φ ≥ 1, the number of strings
representing that schema grows with generation; otherwise the
representative strings of the schema reduce with generation. The
above inequality suggests that for a schema having a small defining
length (small δ(H)), a few fixed positions (small o(H)), and above-
average fitness (F(H) > F̄), the growth factor φ is likely to be greater
than 1. Schemata for which the growth factor is greater than 1 grow
exponentially with generation. These schemata usually represent a
large, good region (a region with many high-fitness points) in the
search space. These schemata are known as building blocks in GA
parlance. The building blocks representing different good regions in
the search space get exponentially more copies and get combined with
each other by the action of GA operators and finally form the optimum
or a near-optimum solution. Even though this is the basic understanding
of how GAs work, there exists some mathematical rigour to this
hypothesis (Davis and Principe, 1991; Vose and Liepins, 1991).
Holland (1975) has shown that even though n population members
are modified in a generation, about n³ schemata get processed in
a generation. This leverage comes without any extra book-keeping
(Goldberg, 1989) and provides an implicit parallelism in the working
of genetic algorithms. Even though there are a number of advantages
of using a coding of variables, there are also some disadvantages. One
of the drawbacks of using a coding representation is that a meaningful
and appropriate coding of the problem needs to be used, otherwise
GAs may not converge to the right solution (Goldberg, et al., 1989;
Kargupta, et al., 1992). However, a general guideline would be to
use a coding that does not make the problem harder than the original
problem.

The most striking difference between GAs and many traditional
optimization methods is that GAs work with a population of
points instead of a single point. Because more than one string
is being processed simultaneously, it is very likely that the
expected GA solution may be a global solution. Even though some
traditional algorithms are population-based, like Box's evolutionary
optimization and complex search methods, those methods do not use
previously obtained information efficiently. In GAs, previously found
good information is emphasized using the reproduction operator and
propagated adaptively through the crossover and mutation operators.
Another advantage of a population-based search algorithm is
that multiple optimal solutions can be captured in the population
easily, thereby reducing the effort of using the same algorithm many
times. Some extensions of GAs along these directions, namely multimodal
function optimization (Deb, 1989; Goldberg and Richardson, 1987)
and multiobjective function optimization (Horn and Nafpliotis, 1993;
Schaffer, 1984; Srinivas and Deb, 1995), have been researched and
are outlined in Section 6.1.7.

In discussing GA operators and their working principles in the
previous section, nothing has been mentioned about the gradient or
any other auxiliary problem information. In fact, GAs do not require
any auxiliary information except the objective function values.
Although the direct search methods used in traditional optimization
methods do not explicitly require the gradient information, some of
those methods use search directions that are similar in concept to the
gradient of the function. Moreover, some direct search methods work
under the assumption that the function to be optimized is unimodal
and continuous. In GAs, no such assumption is necessary.

One other difference in the operation of GAs is the use of
probabilities in their operators. None of the genetic operators work
deterministically. In the reproduction operator, even though a string
is expected to have F_i/F̄ copies in the mating pool, a simulation
of the roulette-wheel selection scheme is used to assign the true
number of copies. In the crossover operator, even though good strings
(obtained from the mating pool) are crossed, the strings to be crossed
are chosen at random and the cross-sites are created at random. In the
mutation operator, a random bit is suddenly altered. The action
of these operators may appear to be naive, but careful studies may
provide some interesting insights about this type of search. The basic
problem with most of the traditional methods is that they use fixed
transition rules to move from one point to another. For instance, in
the steepest descent method, the search direction is always calculated
as the negative of the gradient at any point, because in that direction
the reduction in the function value is maximum. In trying to solve a
multimodal problem with many local optimum points (interestingly,
many real-world engineering optimization problems are likely to be
multimodal), search procedures may easily get trapped in one of
the local optimum points. Consider the bimodal function shown
in Figure 6.3. The objective function has one local minimum and
one global minimum. If the initial point is chosen to be a point
in the local basin (point x^(t) in the figure), the steepest descent
algorithm will eventually find the local optimum point.
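This trapping behaviour is easy to reproduce. The sketch below (my own illustration, not from the text) runs plain steepest descent on a simple bimodal function whose local minimum lies near x = +1 and whose global minimum lies near x = -1; started in the local basin, the iterates settle on the worse of the two minima:

```python
def f(x):
    # A bimodal function: local minimum near x = +1, global minimum near x = -1.
    return (x**2 - 1.0)**2 + 0.2 * x

def df(x):
    return 4.0 * x * (x**2 - 1.0) + 0.2

def steepest_descent(x, step=0.01, iters=2000):
    """Fixed transition rule: always move against the gradient."""
    for _ in range(iters):
        x -= step * df(x)
    return x

x_local = steepest_descent(0.8)    # start in the basin of the local minimum
print(round(x_local, 3))           # converges near +1, the local minimum
print(f(x_local) > f(steepest_descent(-0.8)))  # True: worse than the global one
```

No matter how many iterations are allowed, the fixed descent rule cannot cross the barrier between the basins, which is exactly the rigidity discussed above.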
Figure 6.3 An objective function with one local optimum and one
global optimum. The point x^(t) is in the local basin.

Since the transition rules are rigid, there is no escape from these local
optima. The only way to solve the above problem to global optimality
is to have a starting point in the global basin. Since this information
is usually not known in any problem, the steepest-descent method
(and, for that matter, most traditional methods) fails to locate the
global optimum. We show simulation results demonstrating the inability
of the steepest descent method to find the global optimum point on
a multimodal problem in Section 6.3. However, these traditional
methods can be best applied to a special class of problems suitable
for those methods. For example, the gradient search methods will
outperform almost any algorithm in solving continuous, unimodal
problems, but they are not suitable for multimodal problems. Thus, in
general, traditional methods are not robust. A robust algorithm can
be designed in such a way that it uses the steepest descent direction
most of the time, but also uses the steepest ascent direction (or any
other direction) with some probability. Such a mixed strategy may
require more function evaluations to solve continuous,
unimodal problems, because of the extra computations involved in
trying non-descent directions. But this strategy may be able
to solve complex, multimodal problems to global optimality. In the
multimodal problem shown in the above figure, the mixed strategy
may take the point x^(t) into the global basin (when tried with
non-descent directions) and finally find the global optimum point.
GAs use similar search strategies by using probability in all their
operators. Since an initial random population is used to start with,
the search can proceed in any direction and no major decisions are
made in the beginning. Later on, when the population begins to
converge in some bit positions, the search direction narrows and
a near-optimal solution is achieved. This nature of narrowing the
search space as the search progresses is adaptive and is a unique
characteristic of genetic algorithms.

6.1.3 Similarities between GAs and traditional methods

Even though GAs are different from most traditional search
algorithms, there are some similarities. In traditional search
methods, where a search direction is used to find a new point, at
least two points are either implicitly or explicitly used to define
the search direction. In the Hooke-Jeeves pattern search method, a
pattern move is created using two points. In gradient-based methods,
the search direction requires derivative information which is usually
calculated using function values at two neighbouring points. In the
crossover operator (which is mainly responsible for the GA search),
two points are also used to create two new points. Thus, the crossover
operation is similar to a directional search method except that the
search direction is not fixed for all points in the population and that
no effort is made to find the optimal point in any particular direction.
Consider the two-variable optimization problem shown in Figure 6.4,
where two parent points p1 and p2 participate in the crossover.

Figure 6.4 The action of a single-point crossover operator on a two-
variable search space. The points p1 and p2 are parent points and c1
and c2 are children points.

Under the single-point crossover operator, one of the two substrings
is crossed. It can be shown that the two children points can only lie
along the directions (c1 and c2) shown in the figure (either along solid
arrows or along dashed arrows).
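This directional effect can be seen by decoding the children of a single-point crossover in a two-variable coding (an illustrative sketch with made-up parent strings, not the exact points of Figure 6.4):

```python
def decode(bits, x_min=0.0, x_max=6.0):
    """Linear mapping of Equation (6.1) for one substring."""
    return x_min + (x_max - x_min) * int(bits, 2) / (2**len(bits) - 1)

def decode_point(string):
    # Two variables, five bits each.
    return (decode(string[:5]), decode(string[5:]))

p1, p2 = "0010011000", "1100000111"   # two hypothetical parent strings
for site in (5, 7):                   # two different crossing sites
    c1 = p1[:site] + p2[site:]
    c2 = p2[:site] + p1[site:]
    print(site, decode_point(c1), decode_point(c2))
```

When the site falls exactly at the substring boundary (site 5 here), each child keeps one parent's x1 and takes the other parent's x2, so the children move parallel to the coordinate axes; other sites give the oblique directions sketched in the figure.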
The exact locations of the children
points along these directions depend on the relative distance between
the parents (Deb and Agrawal, 1994). The points y1 and y2 are
two typical children points obtained after crossing the parent points
p1 and p2. Thus, it may be envisaged that the point p1 has moved in the
direction d1 up to the point y1 and similarly the point p2 has
moved to the point y2.

Since the two points used in the crossover operator are chosen
at random, many such search directions are possible. Among them,
some directions may lead to the global basin and some directions
may not. The reproduction operator has an indirect effect of filtering
the good search directions and helps guide the search. The purpose
of the mutation operator is to create a point in the vicinity of the
current point. The search in the mutation operator is similar to
a local search method such as the exploratory search used in the
Hooke-Jeeves method. With the discussion of the differences and
similarities of GAs with traditional methods, we are now ready to
present the algorithm in a step-by-step format.

Algorithm

Step 1 Choose a coding to represent the problem parameters, a
selection operator, a crossover operator, and a mutation operator.
Choose the population size n, the crossover probability p_c, and the
mutation probability p_m. Initialize a random population of strings of
size ℓ. Choose a maximum allowable generation number t_max. Set t = 0.

Step 2 Evaluate each string in the population.

Step 3 If t > t_max or another termination criterion is satisfied,
Terminate.

Step 4 Perform reproduction on the population.

Step 5 Perform crossover on random pairs of strings.

Step 6 Perform mutation on every string.

Step 7 Evaluate the strings in the new population. Set t = t + 1
and go to Step 3.

The algorithm is straightforward, with repeated application of the
three operators (Steps 4 to 7) to a population of points. We show the
working of this algorithm on the unconstrained Himmelblau function
used in Chapter 3.

EXERCISE 6.1.1

The objective is to minimize the function

    f(x1, x2) = (x1² + x2 - 11)² + (x1 + x2² - 7)²

in the interval 0 ≤ x1, x2 ≤ 6. Recall that the true solution to this
problem is (3, 2)^T, having a function value equal to zero.

Step 1 In order to solve this problem using genetic algorithms,
we choose binary coding to represent the variables x1 and x2. In
the calculations here, 10 bits are chosen for each variable, thereby
making the total string length equal to 20. With 10 bits, we can get a
solution accuracy of (6 - 0)/(2^10 - 1) or 0.006 in the interval (0, 6). We
choose roulette-wheel selection, a single-point crossover, and a bit-
wise mutation operator. The crossover and mutation probabilities
are assigned to be 0.8 and 0.05, respectively. We decide to have
20 points in the population. The random population created using
Knuth's (1981) random number generator³ with a random seed equal
to 0.760 is shown in Table 6.1. We set t_max = 30 and initialize the
generation counter t = 0.

Step 2 The next step is to evaluate each string in the
population. We calculate the fitness of the first string. The first
substring (1100100000) decodes to a value equal to (2^9 + 2^8 + 2^5) or
800. Thus, the corresponding parameter value is equal to 0 + (6 -
0) × 800/1023 or 4.692. The second substring (1110010000) decodes
to a value equal to (2^9 + 2^8 + 2^7 + 2^4) or 912. Thus, the corresponding
parameter value is equal to 0 + (6 - 0) × 912/1023 or 5.349. Thus,
the first string corresponds to the point x^(1) = (4.692, 5.349)^T. These
values can now be substituted in the objective function expression
to obtain the function value. It is found that the function value
at this point is equal to f(x^(1)) = 959.680. We now calculate the
fitness function value at this point using the transformation rule:
F(x^(1)) = 1.0/(1.0 + 959.680) = 0.001. This value is used in the
reproduction operation. Similarly, the other strings in the population
are evaluated and their fitness values are calculated. Table 6.1 shows the
objective function value and the fitness value for all 20 strings in the
initial population.

Step 3 Since t = 0 < t_max = 30, we proceed to Step 4.

³ A FORTRAN code implementing the random number generator appears in
the GA code presented at the end of this chapter.


Step 4 At this step, we select good strings in the population to
form the mating pool. In order to use the roulette-wheel selection
procedure, we first calculate the average fitness of the population.
By adding the fitness values of all strings and dividing the sum
by the population size, we obtain F̄ = 0.008. The next step is to
compute the expected count of each string as F(x)/F̄. These values
are calculated and shown in column A of Table 6.1. In other words,
we can compute the probability of each string being copied into the
mating pool by dividing these numbers by the population size
(column B). Once these probabilities are calculated, the cumulative
probability can also be computed. These distributions are also shown
in column C of Table 6.1. In order to form the mating pool, we
create random numbers between zero and one (given in column D)
and identify the particular string which is specified by each of these
random numbers. For example, if the random number 0.472 is
created, the tenth string gets a copy in the mating pool, because
that string occupies the interval (0.401, 0.549), as shown in column C.
Column E refers to the selected string. Similarly, other strings are
selected according to the random numbers shown in column D. After
this selection procedure is repeated n times (n is the population
size), the number of selected copies for each string is counted. This
number is shown in column F. The complete mating pool is also
shown in the table. Columns A and F reveal that the theoretical
expected count and the true count of each string more or less agree
with each other. Figure 6.5 shows the initial random population
and the mating pool after reproduction. The points marked with an
enclosed box are the points in the mating pool. The action of the
reproduction operator is clear from this plot. The inferior points have
been probabilistically eliminated from further consideration. Notice
that not all selected points are better than all rejected points. For
example, the 14th individual (with a fitness value 0.002) is selected
but the 16th individual (with a fitness value 0.005) is not selected.

Although the above roulette-wheel selection is easier to
implement, it is noisy. A more stable version of this selection operator
is sometimes used. After the expected count for each individual
string is calculated, the strings are first assigned copies exactly equal
to the integer part of the expected count. Thereafter, the regular
roulette-wheel selection is implemented using the fractional part of the
expected count as the probability of selection. This selection method
is less noisy and is known as the stochastic remainder selection.
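The two selection schemes just described can be sketched as follows (a simplified illustration; the function names and the small fitness lists in the tests are our own, not from Table 6.1):

```python
import random

def roulette_wheel_select(fitnesses, rng=random.random):
    """Return one index, chosen with probability proportional to fitness
    (the cumulative-probability intervals of column C in Table 6.1)."""
    total = sum(fitnesses)
    if total <= 0:                       # degenerate case: choose uniformly
        return random.randrange(len(fitnesses))
    r = rng()
    cumulative = 0.0
    for i, f in enumerate(fitnesses):
        cumulative += f / total
        if r <= cumulative:
            return i
    return len(fitnesses) - 1            # guard against floating-point round-off

def stochastic_remainder_select(fitnesses):
    """Assign copies equal to the integer part of the expected count,
    then fill the rest of the pool by roulette selection on the
    fractional parts."""
    n = len(fitnesses)
    average = sum(fitnesses) / n
    expected = [f / average for f in fitnesses]   # expected counts (column A)
    pool = []
    for i, e in enumerate(expected):
        pool.extend([i] * int(e))                 # deterministic copies
    fractions = [e - int(e) for e in expected]
    while len(pool) < n:
        pool.append(roulette_wheel_select(fractions))
    return pool
```

The deterministic copies are what make the remainder scheme less noisy: a string with expected count 2.3 is guaranteed at least two copies, whereas plain roulette selection could, by chance, give it none.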
Step 5 At this step, the strings in the mating pool are used in
the crossover operation. In a single-point crossover, two strings are
selected at random and crossed at a random site. Since the mating
306 Optimization for Engineering Design: Algorithms and Examples

Figure 6.5 The initial population (marked with empty circles) and
the mating pool (marked with boxes) on a contour plot of the objective
function. The best point in the population has a function value 39.849
and the average function value of the initial population is 360.540.

Figure 6.6 The population after the crossover operation. Two
points are crossed over to form two new points. Of ten pairs of strings,
seven pairs are crossed.
pool contains strings at random, we pick pairs of strings from the top
of the list. Thus, strings 3 and 10 participate in the first crossover
operation. When two strings are chosen for crossover, first a coin
is flipped with a probability pc = 0.8 to check whether a crossover
is desired or not. If the outcome of the coin-flipping is true, the
crossing over is performed; otherwise the strings are directly placed
in an intermediate population for subsequent genetic operations. It
turns out that the outcome of the first coin-flipping is true, meaning
that a crossover is required to be performed. The next step is to
find a cross-site at random. We choose a site by creating a random
number between (0, ℓ − 1) or (0, 19). It turns out that the obtained
random number is 11. Thus, we cross the strings at the site 11
and create two new strings. After crossover, the children strings
are placed in the intermediate population. Then, strings 14 and 2
(selected at random) are used in the crossover operation. This time
the coin-flipping comes true again and we perform the crossover at
the site 8, found at random. The new children strings are put into
the intermediate population. Figure 6.6 shows how points cross over
and form new points. The points marked with a small box are the
points in the mating pool and the points marked with a small circle
are children points created after the crossover operation. Notice that not
all 10 pairs of points in the mating pool cross with each other: with
the flipping of a coin with a probability pc = 0.8, it turns out that the
fourth, seventh, and tenth crossovers come out to be false. Thus,
in these cases, the strings are copied directly into the intermediate
population. The complete population at the end of the crossover
operation is shown in Table 6.2. It is interesting to note that with
pc = 0.8, the expected number of crossovers in a population of size
20 is 0.8 × 20/2 or 8. In this exercise problem, we performed seven
crossovers and in three cases we simply copied the strings to the
intermediate population. Figure 6.6 shows that some good points
and some not-so-good points are created after crossover. In some
cases, points far away from the parent points are created and in
some cases points close to the parent points are created.

Step 6 The next step is to perform mutation on strings in the
intermediate population. For bit-wise mutation, we flip a coin with
a probability pm = 0.05 for every bit. If the outcome is true, we alter
the bit to 1 or 0 depending on the bit value. With a probability of
0.05, a population size 20, and a string length 20, we can expect to
alter a total of about 0.05 × 20 × 20 or 20 bits in the population.
Table 6.2 shows the mutated bits in bold characters. As counted
from this table, we have actually altered 10 bits. Figure 6.7
shows the effect of mutation on the intermediate population. In
some cases, the mutation operator changes a point locally and in
some other cases it can bring a large change. The points marked with a
small circle are points in the intermediate population. The points
marked with a small box constitute the new population (obtained
after reproduction, crossover, and mutation). It is interesting to
note that if only one bit is mutated in a string, the point is moved
along a particular variable only. Like the crossover operator, the
mutation operator has created some points better and some points
worse than the original points. This flexibility enables GA operators
to explore the search space properly before converging to a region
prematurely. Although this requires some extra computation, this
flexibility is essential to solve global optimization problems.

Figure 6.7 The population after the mutation operation. Some points
do not get mutated and remain unaltered. The best point in the
population has a function value 18.886 and the average function value
of the population is 140.210, an improvement of over 60 per cent.
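The single-point crossover of Step 5 and the bit-wise mutation of Step 6 can be sketched as follows (an illustrative version using the example's defaults pc = 0.8 and pm = 0.05; function names are our own):

```python
import random

def single_point_crossover(parent1, parent2, pc=0.8):
    """With probability pc, cross two equal-length binary strings at a
    random site; otherwise copy the parents into the intermediate
    population unchanged."""
    if random.random() < pc:
        site = random.randint(1, len(parent1) - 1)  # cross-site in (0, l-1)
        child1 = parent1[:site] + parent2[site:]
        child2 = parent2[:site] + parent1[site:]
        return child1, child2
    return parent1, parent2

def bitwise_mutation(string, pm=0.05):
    """Flip each bit independently with probability pm."""
    return "".join(
        ("1" if bit == "0" else "0") if random.random() < pm else bit
        for bit in string
    )
```

With pc = 0.8 and ten pairs, about eight crossovers are expected per generation, and with pm = 0.05 on twenty 20-bit strings, about twenty bit-flips are expected, matching the arithmetic in the example.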
Step 7 The resulting population becomes the new population.
We now evaluate each string as before by first identifying the
substrings for each variable and mapping the decoded values of the
substrings in the chosen intervals. This completes one iteration of
the genetic algorithm. We increment the generation counter to t = 1
and proceed to Step 3 for the next iteration. The new population