
Nontraditional Optimization
Algorithms

This chapter describes two nontraditional search and optimization
methods which have become popular in engineering optimization
problems in the recent past. These algorithms are included in
this book not because they are new but because they are found
to be potential search and optimization algorithms for complex
engineering optimization problems. Genetic algorithms (GAs) mimic
the principles of natural genetics and natural selection to constitute
search and optimization procedures. Simulated annealing mimics
the cooling phenomenon of molten metals to constitute a search
procedure. Since both these algorithms are abstractions from a
natural phenomenon, they are very different search methods from
those described in the previous chapters. We describe genetic
algorithms first, followed by the simulated annealing procedure.

6.1 Genetic Algorithms


Genetic algorithms are computerized search and optimization
algorithms based on the mechanics of natural genetics and natural
selection. Professor John Holland of the University of Michigan,
Ann Arbor envisaged the concept of these algorithms in the
mid-sixties and published his seminal work (Holland, 1975).
Thereafter, a number of his students and other researchers have
contributed to developing this field. To date, most of the GA studies
are available through a few books (Davis, 1991; Goldberg, 1989;
Holland, 1975; Michalewicz, 1992) and through a number of
international conference proceedings (Belew and Booker, 1991;
Forrest, 1993; Grefenstette, 1985, 1987; Rawlins, 1991; Schaffer,
1989; Whitley, 1993). An extensive list of GA-related papers is
referenced elsewhere (Goldberg et al., 1992). GAs are fundamentally
different from the classical optimization algorithms we have
discussed in Chapters 2 through 5. We begin the discussion of GAs by
first outlining the working principles of GAs and then highlighting
the differences GAs have with the traditional search methods.
Thereafter, we show a computer simulation to illustrate the working
of GAs.


6.1.1 Working principles

To illustrate the working principles of GAs, we first consider an
unconstrained optimization problem. Later, we shall discuss how
GAs can be used to solve a constrained optimization problem. Let
us consider the following maximization problem:

    Maximize f(x),
    subject to x_i^(L) <= x_i <= x_i^(U),  i = 1, 2, ..., N.

Although a maximization problem is considered here, a minimization
problem can also be handled using GAs. The working of GAs is
completed by performing the following tasks:

Coding

In order to use GAs to solve the above problem, the variables x_i
are first coded in some string structures. It is important to mention
here that the coding of the variables is not absolutely necessary.
There exist some studies where GAs are directly used on the variables
themselves, but here we shall ignore the exceptions and discuss the
working principle of a simple genetic algorithm. Binary-coded strings
having 1's and 0's are mostly used. The length of the string is usually
determined according to the desired solution accuracy. For example,
if four bits are used to code each variable in a two-variable function
optimization problem, the strings (0000 0000) and (1111 1111)
would represent the points

    (x_1^(L), x_2^(L))^T    and    (x_1^(U), x_2^(U))^T,

respectively, because the substrings (0000) and (1111) have the
minimum and the maximum decoded values. Any other eight-bit
string can be found to represent a point in the search space according

Optimization for Engineering Design: Algorithms and Examples

to a fixed mapping rule. Usually, the following linear mapping rule
is used:

    x_i = x_i^(L) + [(x_i^(U) - x_i^(L))/(2^{l_i} - 1)] × (decoded value of s_i).    (6.1)
In the above equation, the variable x_i is coded in a substring s_i of
length l_i. The decoded value of a binary substring s_i is calculated
as Σ_{j=0}^{l-1} 2^j s_j, where s_j ∈ (0, 1) and the string s is represented as
(s_{l-1} s_{l-2} ... s_2 s_1 s_0). For example, a four-bit string (0111) has a
decoded value equal to ((1)2^0 + (1)2^1 + (1)2^2 + (0)2^3) or 7. It is
worthwhile to mention here that with four bits to code each variable,
there are only 2^4 or 16 distinct substrings possible, because each
bit-position can take a value either 0 or 1. The accuracy that can
be obtained with a four-bit coding is only approximately 1/16th of
the search space. But as the string length is increased by one, the
obtainable accuracy increases exponentially to 1/32nd of the search
space. It is not necessary to code all variables in equal substring
length. The length of a substring representing a variable depends
on the desired accuracy in that variable. Generalizing this concept,
we may say that with an l_i-bit coding for a variable, the obtainable
accuracy in that variable is approximately (x_i^(U) - x_i^(L))/2^{l_i}. Once
the coding of the variables has been done, the corresponding point
x = (x_1, x_2, ..., x_N)^T can be found using Equation (6.1). Thereafter,
the function value at the point x can also be calculated by
substituting x in the given objective function f(x).
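The decoding and the linear mapping of Equation (6.1) can be sketched in a few lines of code. This is only a sketch: the function names are our own, and the substring is assumed to be given as a Python string of '0' and '1' characters.

```python
def decode(substring):
    # Decoded value of a binary substring: sum of 2^j * s_j,
    # where s_0 is the right-most bit of the substring.
    return sum(2**j * int(bit) for j, bit in enumerate(reversed(substring)))

def map_to_interval(substring, x_min, x_max):
    # Linear mapping rule of Equation (6.1): decoded value scaled
    # into the interval [x_min, x_max].
    li = len(substring)
    return x_min + (x_max - x_min) / (2**li - 1) * decode(substring)

# The four-bit string (0111) from the text decodes to 7.
print(decode("0111"))  # 7
```

As the text notes, the all-zeros and all-ones substrings map to the lower and upper variable bounds, respectively.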
Fitness function

As pointed out earlier, GAs mimic the survival-of-the-fittest principle
of nature to make a search process. Therefore, GAs are naturally
suitable for solving maximization problems. Minimization problems
are usually transformed into maximization problems by some
suitable transformation. In general, a fitness function F(x) is
first derived from the objective function and used in successive
genetic operations. Certain genetic operators require that the fitness
function be nonnegative, although certain other operators do not have
this requirement. For maximization problems, the fitness function can
be considered to be the same as the objective function or F(x) = f(x).
For minimization problems, the fitness function is an equivalent
maximization problem chosen such that the optimum point remains
unchanged. A number of such transformations are possible. The
following fitness function is often used:

    F(x) = 1/(1 + f(x)).    (6.2)
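A minimal sketch of this transformation, assuming a nonnegative objective value for a minimization problem:

```python
def fitness(f_value):
    # Equation (6.2): maps an objective value of a minimization
    # problem into a fitness value to be maximized.
    return 1.0 / (1.0 + f_value)
```

A smaller objective value yields a larger fitness, so the minimum of f(x) and the maximum of F(x) occur at the same point.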


This transformation does not alter the location of the minimum,
but converts a minimization problem to an equivalent maximization
problem. The fitness function value of a string is known as the
string's fitness.

The operation of GAs begins with a population of random strings
representing design or decision variables. Thereafter, each string is
evaluated to find the fitness value. The population is then operated
by three main operators (reproduction, crossover, and mutation) to
create a new population of points. The new population is further
evaluated and tested for termination. If the termination criterion is
not met, the population is iteratively operated by the above three
operators and evaluated. This procedure is continued until the
termination criterion is met. One cycle of these operations and the
subsequent evaluation procedure is known as a generation in GA's
terminology. The operators are described next.

GA operators

Reproduction is usually the first operator applied on a population.
Reproduction selects good strings in a population and forms a mating
pool. That is why the reproduction operator is sometimes known
as the selection operator. There exist a number of reproduction
operators in the GA literature, but the essential idea in all of them is
that the above-average strings are picked from the current population
and their multiple copies are inserted in the mating pool in a
probabilistic manner. The commonly-used reproduction operator is
the proportionate reproduction operator where a string is selected
for the mating pool with a probability proportional to its fitness.
Thus, the i-th string in the population is selected with a probability
proportional to F_i. Since the population size is usually kept fixed in
a simple GA, the sum of the probability of each string being selected
for the mating pool must be one. Therefore, the probability for
selecting the i-th string is

    p_i = F_i / (Σ_{j=1}^{n} F_j),

where n is the population size. One way to implement this selection
scheme is to imagine a roulette-wheel with its circumference marked
for each string proportionate to the string's fitness. The roulette
wheel is spun n times, each time selecting an instance of the string
chosen by the roulette-wheel pointer. Since the circumference of the
wheel is marked according to a string's fitness, this roulette-wheel

mechanism is expected to make F_i/F̄ copies of the i-th string in the
mating pool. The average fitness of the population is calculated as

    F̄ = (Σ_{i=1}^{n} F_i)/n.

Figure 6.1 shows a roulette-wheel for five individuals having different
fitness values:

    Point    Fitness
      1        25.0
      2         5.0
      3        40.0
      4        10.0
      5        20.0

Figure 6.1 A roulette-wheel marked for five individuals according
to their fitness values. The third individual has a higher probability
of selection than any other.

Since the third individual has a higher fitness value than any other,
it is expected that the roulette-wheel selection will choose the third
individual more often than any other individual. This roulette-wheel
selection scheme can be simulated easily. Using the fitness value F_i
of all strings, the probability of selecting a string p_i can be
calculated. Thereafter, the cumulative probability P_i of each string
being copied can be calculated by adding the individual probabilities
from the top of the list. Thus, the bottom-most string in the
population has a cumulative probability P_n equal to 1. The
roulette-wheel concept can be simulated by realizing that the i-th
string in the population represents the cumulative probability values
from P_{i-1} to P_i. The first string represents the cumulative values
from zero to P_1. Thus, the cumulative probability of any string lies
between 0 and 1. In order to choose n strings, n random numbers
between zero and one are created at random. A string whose
cumulative probability range (calculated from the fitness values)
contains a chosen random number is copied to the mating pool. This
way, a string with a higher fitness value represents a larger range in
the cumulative probability values and therefore has a higher
probability of being copied into the mating pool. On the other hand,
a string with a smaller fitness value represents a smaller range in
cumulative probability values and has a smaller probability of being
copied into the mating pool. We illustrate the working of this
roulette-wheel simulation later through a computer simulation of GAs.

In reproduction, good strings in a population are probabilistically
assigned a larger number of copies and a mating pool is formed.
It is important to note that no new strings are formed in the
reproduction phase. In the crossover operator, new strings are
created by exchanging information among strings of the mating
pool. Many crossover operators exist in the GA literature. In most
crossover operators, two strings are picked from the mating pool at
random and some portions of the strings are exchanged between the
strings. A single-point crossover operator is performed by randomly
choosing a crossing site along the string and by exchanging all bits
on the right side of the crossing site as shown:

    0 0 0|0 0 0         0 0 0|1 1 1
                   ⇒
    1 1 1|1 1 1         1 1 1|0 0 0

The two strings participating in the crossover operation are known
as parent strings and the resulting strings are known as children
strings. It is intuitive from this construction that good substrings
from parent strings can be combined to form a better child string, if
an appropriate site is chosen. Since the knowledge of an appropriate
site is usually not known beforehand, a random site is often chosen.
With a random site, the children strings produced may or may not
have a combination of good substrings from parent strings, depending
on whether or not the crossing site falls in the appropriate place. But
we do not worry about this too much, because if good strings are
created by crossover, there will be more copies of them in the next
mating pool generated by the reproduction operator. But if good
strings are not created by crossover, they will not survive too long,
because reproduction will select against those strings in subsequent
generations.

It is clear from this discussion that the effect of crossover may be
detrimental or beneficial. Thus, in order to preserve some of the good
strings that are already present in the mating pool, not all strings in
the mating pool are used in crossover. When a crossover probability
of p_c is used, only 100p_c per cent strings in the population are used
in the crossover operation and 100(1 - p_c) per cent of the population
remains as they are in the current population.¹
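The roulette-wheel selection and the single-point crossover described above can be sketched as follows. This is only a sketch: the function names and the representation of strings as Python bit-strings are our own choices, not from the text.

```python
import random

def roulette_wheel(fitnesses, rng):
    # Cumulative probabilities P_i are built from p_i = F_i / sum(F);
    # a random number in [0, 1) falls in exactly one string's range,
    # and that string's index is returned.
    total = sum(fitnesses)
    r, running = rng.random(), 0.0
    for i, f in enumerate(fitnesses):
        running += f / total
        if r <= running:
            return i
    return len(fitnesses) - 1  # guard against round-off

def single_point_crossover(parent1, parent2, rng):
    # Choose a crossing site at random and exchange all bits on
    # the right side of the site.
    site = rng.randint(1, len(parent1) - 1)
    child1 = parent1[:site] + parent2[site:]
    child2 = parent2[:site] + parent1[site:]
    return child1, child2
```

With the fitness values of Figure 6.1 (25, 5, 40, 10, 20), the third string is returned most often; crossing (000000) and (111111) at site 3 reproduces the children (000111) and (111000) shown above.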

A crossover operator is mainly responsible for the search of new
strings, even though a mutation operator is also used for this purpose
sparingly. The mutation operator changes a 1 to a 0 and vice versa
with a small mutation probability, p_m. The bit-wise mutation is
performed bit by bit by flipping a coin² with a probability p_m. If
at any bit the outcome is true, the bit is altered; otherwise the bit
is kept unchanged. The need for mutation is to create a point in the
neighbourhood of the current point, thereby achieving a local search
around the current solution. The mutation is also used to maintain
diversity in the population. For example, consider the following
population having four eight-bit strings:

    01101011
    00111101
    00010110
    01111100

Notice that all four strings have a 0 in the left-most bit position. If
the true optimum solution requires a 1 in that position, then neither
the reproduction nor the crossover operator described above will be
able to create a 1 in that position. The inclusion of mutation
introduces some probability (np_m) of turning that 0 into a 1.

These three operators are simple and straightforward. The
reproduction operator selects good strings and the crossover operator
recombines good substrings from good strings together to hopefully
create a better substring. The mutation operator alters a string
locally to hopefully create a better string. Even though none of
these claims are guaranteed and/or tested while creating a new
population, it is expected that if bad strings are created they will be
eliminated by the reproduction operator in the next generation and
if good strings are created, they will be increasingly emphasized.
Interested readers may refer to Goldberg (1989) and other GA
literature given in the references for further insight and some
mathematical foundations of genetic algorithms.

6.1.2 Differences between GAs and traditional methods

As seen from the above description of the working principles of GAs,
they are radically different from most of the traditional optimization
methods described in Chapters 2 to 4. The fundamental differences
are described in the following paragraphs.
GAs work with a string-coding of variables instead of the
variables themselves. The advantage of working with a coding of
variables is that the coding discretizes the search space, even though
the function may be continuous. On the other hand, since GAs
require only function values at various discrete points, a discrete or
discontinuous function can be handled with no extra cost. This
allows GAs to be applied to a wide variety of problems. Another
advantage is that the GA operators exploit the similarities in
string-structures to make an effective search. Let us discuss this
important aspect of GAs in somewhat more detail. A schema (pl.
schemata) represents a number of strings with similarities at certain
string positions. For example, in a five-bit problem, the schema
(101**) (a '*' denotes either a 0 or a 1) represents four strings
(10100), (10101), (10110), and (10111). In the decoded parameter
space, a schema represents a continuous or discontinuous region in
the search space. Figure 6.2 shows that the above schema represents
one-eighth of the search space.

Figure 6.2 A schema with three fixed positions divides the search
space into eight regions. The schema (101**) is highlighted.

Since in an l-bit schema, every position can take either 0, 1,
or *, there are a total of 3^l schemata possible. A finite population of
size n contains only n strings, but represents many schemata. Goldberg
(1989) has shown that due to the action of GA operators, the number
of strings m(H, t) representing a schema H at any generation t grows
to a number m(H, t + 1) in the next generation as follows:


¹ Even though the best (1 - p_c)100% of the current population can be copied
deterministically to the new population, this is usually performed at random.
² Flipping of a coin with a probability p is simulated as follows. A number
between 0 and 1 is chosen at random. If the random number is smaller than p, the
outcome of coin-flipping is true, otherwise the outcome is false.


    m(H, t+1) ≥ m(H, t) (F(H)/F̄) [1 - p_c δ(H)/(l-1) - p_m o(H)],    (6.3)

where the factor multiplying m(H, t) on the right side is the growth
factor φ, F(H) is the fitness of the schema H calculated by averaging
the fitness of all strings representing the schema, δ(H) is the defining
length of the schema H calculated as the difference in the outermost
defined positions, and o(H) is the order of the schema H calculated
as the number of fixed positions in the schema. For example, the
schema H = (101**) has a defining length equal to δ(H) = 3 - 1 = 2
and has an order o(H) = 3. The growth factor φ defined in the above
equation can be greater than, less than, or equal to 1 depending
on the schema H and the chosen GA parameters. If for a schema
the growth factor φ ≥ 1, the number of strings representing that
schema grows with generation; otherwise the representative strings
of the schema reduce with generation. The above inequality suggests
that for a schema having a small defining length (small δ(H)), a few
fixed positions (small o(H)), and above-average fitness (F(H) > F̄),
the growth factor φ is likely to be greater than 1. Schemata for
which the growth factor is greater than 1 grow exponentially with
generation. These schemata usually represent a large, good region
(a region with many high-fitness points) in the search space. These
schemata are known as building blocks in GA parlance. These
building blocks representing different good regions in the search space
get exponentially more copies and get combined with each other
by the action of GA operators and finally form the optimum or a
near-optimum solution. Even though this is the basic understanding
of how GAs work, there exists some mathematical rigour to this
hypothesis (Davis and Principe, 1991; Vose and Liepins, 1991).
Holland (1975) has shown that even though n population members
are modified in a generation, about n³ schemata get processed in
a generation. This leverage comes without any extra book-keeping
(Goldberg, 1989) and provides an implicit parallelism in the working
of genetic algorithms. Even though there are a number of advantages
of using a coding of variables, there are also some disadvantages. One
of the drawbacks of using a coding representation is that a meaningful
and an appropriate coding of the problem needs to be used, otherwise
GAs may not converge to the right solution (Goldberg et al., 1989;
Kargupta et al., 1992). However, a general guideline would be to
use a coding that does not make the problem harder than the original
problem.
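The growth factor of Equation (6.3) is easy to compute for a given schema. A small sketch, assuming the schema is given as a string over '0', '1', and '*', and the fitness ratio F(H)/F̄ is supplied by the caller; the function name is our own.

```python
def growth_factor(schema, fitness_ratio, p_c, p_m):
    # fitness_ratio is F(H)/F_bar: schema fitness over the
    # population-average fitness.
    l = len(schema)
    fixed = [i for i, c in enumerate(schema) if c != '*']
    o = len(fixed)                    # order o(H): number of fixed positions
    delta = max(fixed) - min(fixed)   # defining length delta(H)
    return fitness_ratio * (1.0 - p_c * delta / (l - 1) - p_m * o)
```

For the schema (101**) of the text, δ(H) = 2 and o(H) = 3, so with an above-average fitness ratio and small p_c and p_m the factor can exceed 1, and the schema's representation grows.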

The most striking difference between GAs and many traditional
optimization methods is that GAs work with a population of
points instead of a single point. Because more than one string is
being processed simultaneously, it is very likely that the expected
GA solution may be a global solution. Even though some traditional
algorithms are population-based, like Box's evolutionary
optimization and complex search methods, those methods do not use
previously obtained information efficiently. In GAs, previously found
good information is emphasized using the reproduction operator and
propagated adaptively through the crossover and mutation operators.
Another advantage of a population-based search algorithm is
that multiple optimal solutions can be captured in the population
easily, thereby reducing the effort to use the same algorithm many
times. Some extensions of GAs along these directions, multimodal
function optimization (Deb, 1989; Goldberg and Richardson, 1987)
and multiobjective function optimization (Horn and Nafpliotis, 1993;
Schaffer, 1984; Srinivas and Deb, 1995), have been researched and
are outlined in Section 6.1.7.
In discussing GA operators or their working principles in the
previous section, nothing has been mentioned about the gradient or
any other auxiliary problem information. In fact, GAs do not require
any auxiliary information except the objective function values.
Although the direct search methods used in traditional optimization
methods do not explicitly require the gradient information, some of
those methods use search directions that are similar in concept to the
gradient of the function. Moreover, some direct search methods work
under the assumption that the function to be optimized is unimodal
and continuous. In GAs, no such assumption is necessary.

One other difference in the operation of GAs is the use of
probabilities in their operators. None of the genetic operators work
deterministically. In the reproduction operator, even though a string
is expected to have F_i/F̄ copies in the mating pool, a simulation
of the roulette-wheel selection scheme is used to assign the true
number of copies. In the crossover operator, even though good strings
(obtained from the mating pool) are crossed, the pairs of strings to be
crossed are chosen at random and cross-sites are created at random.
In the mutation operator, a random bit is suddenly altered. The action
of these operators may appear to be naive, but careful studies may
provide some interesting insights about this type of search. The basic
problem with most of the traditional methods is that they use fixed
transition rules to move from one point to another. For instance, in
the steepest descent method, the search direction is always calculated
as the negative of the gradient at any point, because in that direction
the reduction in the function value is maximum. In trying to solve a
multimodal problem with many local optimum points (interestingly,
many real-world engineering optimization problems are likely to be
multimodal), search procedures may easily get trapped in one of
the local optimum points. Consider the bimodal function shown
in Figure 6.3. The objective function has one local minimum and
one global minimum. If the initial point is chosen to be a point
in the local basin (point x(t) in the figure), the steepest descent
algorithm will eventually find the local optimum point.

Figure 6.3 An objective function with one local optimum and one
global optimum. The point x(t) is in the local basin.

Since the transition rules are rigid, there is no escape from these
local optima. The only way to solve the above problem to global
optimality is to have a starting point in the global basin. Since this
information is usually not known in any problem, the steepest-descent
method (and for that matter most traditional methods) fails to locate
the global optimum. We show simulation results demonstrating the
inability of the steepest descent method to find the global optimum
point on a multimodal problem in Section 6.3. However, these
traditional methods can be best applied to a special class of problems
suitable for those methods. For example, the gradient search methods
will outperform almost any algorithm in solving continuous, unimodal
problems, but they are not suitable for multimodal problems. Thus, in
general, traditional methods are not robust. A robust algorithm can
be designed in such a way that it uses the steepest descent direction
most of the time, but also uses the steepest ascent direction (or any
other direction) with some probability. Such a mixed strategy may
require more function evaluations to solve continuous, unimodal
problems, because of the extra computations involved in trying
non-descent directions. But this strategy may be able to solve
complex, multimodal problems to global optimality. In the
multimodal problem shown in the above figure, the mixed strategy
may take the point x(t) into the global basin (when tried with
non-descent directions) and finally find the global optimum point.
GAs use similar search strategies by using probability in all their
operators. Since an initial random population is used to start with,
the search can proceed in any direction and no major decisions are
made in the beginning. Later on, when the population begins to
converge in some bit positions, the search direction narrows and
a near-optimal solution is achieved. This nature of narrowing the
search space as the search progresses is adaptive and is a unique
characteristic of genetic algorithms.

6.1.3 Similarities between GAs and traditional methods

Even though GAs are different from most traditional search
algorithms, there are some similarities.
In traditional search
methods, where a search direction is used to find a new point, at
least two points are either implicitly or explicitly used to define
the search direction. In the Hooke-Jeeves pattern search method, a
pattern move is created using two points. In gradient-based methods,
the search direction requires derivative information which is usually
calculated using function values at two neighbouring points. In the
crossover operator (which is mainly responsible for the GA search),
two points are also used to create two new points. Thus, the crossover
operation is similar to a directional search method except that the
search direction is not fixed for all points in the population and that
no effort is made to find the optimal point in any particular direction.
Consider the two-variable optimization problem shown in Figure 6.4,
where two parent points p1 and p2 participate in the crossover.

Figure 6.4 The action of a single-point crossover operator on a
two-variable search space. The points p1 and p2 are parent points
and c1 and c2 are children points.
'

Under the single-point crossover operator, one of the two substrings .


.is crossed. It can be shown that the two children points can only lie

30

lml'ion

along directions ( Ct and c2) shown in the figure (either along soHd
arrows or along dashed arrows). The exact locations of the children
points along these directions depend on the relative distance between
the parents (Deb and Agrawal, 1994). The points Yt and Y2 are the
two typical children points obtained after crossing the parent points
PI and p 2 Thus, it may be envisaged that point PI has moved in the
direction from di up to the point YI and similarly the point P2 has
moved to the point Y2.
Since the two points used in the crossover operator are chosen
at random, many such search directions are possible. Among them
some directions may lead to the global basin and some directions
may not . The reproduction operator has an indirect effect of filtering
the good search directions and help guide the search. The purpose
of the mutation operator is .to create a point in the vicinity of the
current point. The search in the mutation operator is similar to
a local search method such as the exploratory search used in the
Hooke-Jeeves method. With the discussion of the differences and
similarities of GAs with traditional methods, we are now ready to
present the algorithm in a step-by-step format.
Algorithm

Step 1  Choose a coding to represent problem parameters, a
selection operator, a crossover operator, and a mutation operator.
Choose population size, n, crossover probability, p_c, and mutation
probability, p_m. Initialize a random population of strings of size l.
Choose a maximum allowable generation number t_max. Set t = 0.

Step 2  Evaluate each string in the population.

Step 3  If t > t_max or other termination criteria is satisfied,
Terminate.

Step 4  Perform reproduction on the population.

Step 5  Perform crossover on random pairs of strings.

Step 6  Perform mutation on every string.

Step 7  Evaluate strings in the new population. Set t = t + 1
and go to Step 3.
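The seven steps above can be assembled into a compact sketch of a binary-coded GA. This is illustrative only: the helper names, the one-variable test problem at the bottom, and the default parameter values are our own choices under the scheme the text describes, not the book's code.

```python
import random

def decode(bits, x_min, x_max):
    # Equation (6.1): binary substring (list of 0/1) -> real value.
    value = int("".join(map(str, bits)), 2)
    return x_min + (x_max - x_min) * value / (2**len(bits) - 1)

def roulette(F, total, rng):
    # Proportionate reproduction: spin the wheel once.
    r, running = rng.random() * total, 0.0
    for i, f in enumerate(F):
        running += f
        if running >= r:
            return i
    return len(F) - 1

def run_ga(fitness, l=10, n=20, p_c=0.8, p_m=0.05, t_max=30, seed=0):
    rng = random.Random(seed)
    # Step 1: random initial population of n strings of length l.
    pop = [[rng.randint(0, 1) for _ in range(l)] for _ in range(n)]
    for t in range(t_max):                        # Step 3: termination
        F = [fitness(s) for s in pop]             # Steps 2 and 7
        total = sum(F)
        # Step 4: reproduction forms the mating pool.
        pool = [pop[roulette(F, total, rng)][:] for _ in range(n)]
        # Step 5: single-point crossover on random pairs.
        for i in range(0, n - 1, 2):
            if rng.random() < p_c:
                site = rng.randint(1, l - 1)
                pool[i][site:], pool[i+1][site:] = pool[i+1][site:], pool[i][site:]
        # Step 6: bit-wise mutation on every string.
        for s in pool:
            for j in range(l):
                if rng.random() < p_m:
                    s[j] = 1 - s[j]
        pop = pool
    return max(pop, key=fitness)

# Illustrative problem: maximize f(x) = x on [0, 6] coded in 10 bits.
best = run_ga(lambda s: decode(s, 0.0, 6.0))
```

The nonnegative-fitness requirement of proportionate reproduction is assumed to hold for the supplied fitness function.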

The algorithm is straightforward, with repeated application of the
three operators (Steps 4 to 7) to a population of points. We show the
working of this algorithm on the unconstrained Himmelblau function
used in Chapter 3.

EXERCISE 6.1.1

The objective is to minimize the function

    f(x1, x2) = (x1^2 + x2 - 11)^2 + (x1 + x2^2 - 7)^2

in the interval 0 ≤ x1, x2 ≤ 6. Recall that the true solution to this
problem is (3, 2)^T having a function value equal to zero.

Step 1  In order to solve this problem using genetic algorithms,
we choose binary coding to represent the variables x1 and x2. In
the calculation here, 10 bits are chosen for each variable, thereby
making the total string length equal to 20. With 10 bits, we can get a
solution accuracy of (6 - 0)/(2^10 - 1) or about 0.006 in the interval
(0, 6). We choose roulette-wheel selection, a single-point crossover,
and a bit-wise mutation operator. The crossover and mutation
probabilities are assigned to be 0.8 and 0.05, respectively. We decide
to have 20 points in the population. The random population created
using Knuth's (1981) random number generator³ with a random seed
equal to 0.760 is shown in Table 6.1. We set t_max = 30 and initialize
the generation counter t = 0.

Step 2  The next step is to evaluate each string in the
population. We calculate the fitness of the first string. The first
substring (1100100000) decodes to a value equal to (2^9 + 2^8 + 2^5) or
800. Thus, the corresponding parameter value is equal to 0 + (6 -
0) × 800/1023 or 4.692. The second substring (1110010000) decodes
to a value equal to (2^9 + 2^8 + 2^7 + 2^4) or 912. Thus, the corresponding
parameter value is equal to 0 + (6 - 0) × 912/1023 or 5.349. Thus,
the first string corresponds to the point x^(1) = (4.692, 5.349)^T. These
values can now be substituted in the objective function expression
to obtain the function value. It is found that the function value
at this point is equal to f(x^(1)) = 959.680. We now calculate the
fitness function value at this point using the transformation rule:
F(x^(1)) = 1.0/(1.0 + 959.680) = 0.001. This value is used in the
reproduction operation. Similarly, other strings in the population
are evaluated and fitness values are calculated. Table 6.1 shows the
objective function value and the fitness value for all 20 strings in the
initial population.

Step 3  Since t = 0 < t_max = 30, we proceed to Step 4.

³ A FORTRAN code implementing the random number generator appears in
the GA code presented at the end of this chapter.
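The hand calculation of Step 2 can be checked with a short script. This is a sketch; the substrings and constants come from the text above, and the helper names are our own.

```python
def decode(substring):
    # Decoded value: the substring read as an unsigned binary number.
    return int(substring, 2)

def to_x(substring, x_min=0.0, x_max=6.0):
    # Equation (6.1) with a 10-bit substring and bounds (0, 6).
    return x_min + (x_max - x_min) * decode(substring) / (2**len(substring) - 1)

def himmelblau(x1, x2):
    # Objective function of Exercise 6.1.1.
    return (x1**2 + x2 - 11)**2 + (x1 + x2**2 - 7)**2

x1 = to_x("1100100000")   # decodes to 800, maps to about 4.692
x2 = to_x("1110010000")   # decodes to 912, maps to about 5.349
f = himmelblau(x1, x2)    # about 959.680
F = 1.0 / (1.0 + f)       # Equation (6.2): about 0.001
```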

[Table 6.1, listing the 20 strings of the initial population with their
decoded values, parameter values, objective function values, fitness
values, and the reproduction computations in columns A to F, is not
reproduced here.]
Step 4 At this step, we select good strings in the population to
form the mating pool. In order to use the roulette-wheel selection
procedure, we first calculate the average fitness of the population.
By adding the fitness values of all strings and dividing the sum
by the population size, we obtain F̄ = 0.008. The next step is to
compute the expected count of each string as F(x)/F̄. The values
are calculated and shown in column A of Table 6.1. In other words,
we can compute the probability of each string being copied into the
mating pool by dividing these numbers by the population size
(column B). Once these probabilities are calculated, the cumulative
probability can also be computed. These distributions are also shown
in column C of Table 6.1. In order to form the mating pool, we
create random numbers between zero and one (given in column D)
and identify the particular string which is specified by each of these
random numbers. For example, if the random number 0.472 is
created, the tenth string gets a copy in the mating pool, because
that string occupies the interval (0.401, 0.549), as shown in column C.
Column E refers to the selected string. Similarly, other strings are
selected according to the random numbers shown in column D. After
this selection procedure is repeated n times (n is the population
size), the number of selected copies for each string is counted. This
number is shown in column F. The complete mating pool is also
shown in the table. Columns A and F reveal that the theoretical
expected count and the true count of each string more or less agree
with each other. Figure 6.5 shows the initial random population
and the mating pool after reproduction. The points marked with an
enclosed box are the points in the mating pool. The action of the
reproduction operator is clear from this plot. The inferior points have
been probabilistically eliminated from further consideration. Notice
that not all selected points are better than all rejected points. For
example, the 14th individual (with a fitness value 0.002) is selected
but the 16th individual (with a fitness value 0.005) is not selected.
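The interval lookup of column C is exactly what a programmed roulette wheel does. A minimal sketch (illustrative Python, not the book's FORTRAN code; the fitness list here is a made-up placeholder):

```python
import random

def roulette_wheel_select(fitnesses, rng=random):
    """Return the index of one string, chosen with probability
    proportional to its fitness (the cumulative intervals of column C)."""
    total = sum(fitnesses)
    r = rng.random()                  # random number between 0 and 1
    cumulative = 0.0
    for i, f in enumerate(fitnesses):
        cumulative += f / total
        if r < cumulative:            # r falls in string i's interval
            return i
    return len(fitnesses) - 1         # guard against floating round-off

# Repeating the selection n times builds the mating pool:
fitnesses = [0.001, 0.005, 0.002, 0.010]   # hypothetical values
pool = [roulette_wheel_select(fitnesses) for _ in range(len(fitnesses))]
```

Fitter strings occupy wider intervals and so tend to receive more copies, but, as the 14th and 16th individuals of the example show, the outcome is probabilistic.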

Although the above roulette-wheel selection is easier to
implement, it is noisy. A more stable version of this selection operator
is sometimes used. After the expected count for each individual
string is calculated, the strings are first assigned copies exactly equal
to the integer part of the expected count. Thereafter, the regular
roulette-wheel selection is implemented using the decimal part of the
expected count as the probability of selection. This selection method
is less noisy and is known as the stochastic remainder selection.
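A sketch of the stochastic remainder selection under the same assumptions (illustrative Python; the deterministic integer-part assignment is what removes most of the noise):

```python
import random

def stochastic_remainder_select(fitnesses, rng=random):
    """Assign each string copies equal to the integer part of its
    expected count, then fill the remaining slots of the mating pool
    probabilistically using the fractional parts."""
    n = len(fitnesses)
    average = sum(fitnesses) / n
    expected = [f / average for f in fitnesses]       # expected counts
    pool = [i for i, e in enumerate(expected) for _ in range(int(e))]
    fractions = [e - int(e) for e in expected]
    while len(pool) < n:                              # lottery on remainders
        i = rng.randrange(n)
        if rng.random() < fractions[i]:
            pool.append(i)
    return pool
```

Since the expected counts sum to the population size, the integer parts alone can never overfill the pool, and the lottery only has to resolve the fractional remainders.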

Figure 6.5 The initial population (marked with empty circles) and
the mating pool (marked with boxes) on a contour plot of the objective
function. The best point in the population has a function value 39.849
and the average function value of the initial population is 360.540.

Step 5 At this step, the strings in the mating pool are used in
the crossover operation. In a single-point crossover, two strings are
selected at random and crossed at a random site. Since the mating
pool contains strings at random, we pick pairs of strings from the top
of the list. Thus, strings 3 and 10 participate in the first crossover
operation. When two strings are chosen for crossover, first a coin
is flipped with a probability Pc = 0.8 to check whether a crossover
is desired or not. If the outcome of the coin-flipping is true, the
crossing over is performed, otherwise the strings are directly placed
in an intermediate population for subsequent genetic operation. It
turns out that the outcome of the first coin-flipping is true, meaning
that a crossover is required to be performed. The next step is to
find a cross-site at random. We choose a site by creating a random
number between (0, ℓ - 1) or (0, 19). It turns out that the obtained
random number is 11. Thus, we cross the strings at the site 11
and create two new strings. After crossover, the children strings
are placed in the intermediate population. Then, strings 14 and 2
(selected at random) are used in the crossover operation. This time
the coin-flipping comes true again and we perform the crossover at
the site 8 found at random. The new children strings are put into
the intermediate population. Figure 6.6 shows how points cross over
and form new points. The points marked with a small box are the
points in the mating pool and the points marked with a small circle
are children points created after the crossover operation. Notice that not

Figure 6.6 The population after the crossover operation. Two
points are crossed over to form two new points. Of ten pairs of strings,
seven pairs are crossed.

all 10 pairs of points in the mating pool cross with each other. With
the flipping of a coin with a probability Pc = 0.8, it turns out that the
fourth, seventh, and tenth crossovers come out to be false. Thus,
in these cases, the strings are copied directly into the intermediate
population. The complete population at the end of the crossover
operation is shown in Table 6.2. It is interesting to note that with
Pc = 0.8, the expected number of crossovers in a population of size
20 is 0.8 x 20/2 or 8. In this exercise problem, we performed seven
crossovers and in three cases we simply copied the strings to the
intermediate population. Figure 6.6 shows that some good points
and some not-so-good points are created after crossover. In some
cases, points far away from the parent points are created and in
some cases points close to the parent points are created.
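The coin flip and the cross-site choice described above can be sketched as follows (illustrative Python; Pc = 0.8 and the 20-bit string length follow the example):

```python
import random

def single_point_crossover(parent1, parent2, pc=0.8, rng=random):
    """With probability pc, cross two strings at a random site;
    otherwise copy the parents unchanged."""
    if rng.random() >= pc:                    # coin flip comes out false
        return parent1, parent2
    site = rng.randint(1, len(parent1) - 1)   # random cross-site (11 in the example)
    child1 = parent1[:site] + parent2[site:]
    child2 = parent2[:site] + parent1[site:]
    return child1, child2
```

Exchanging the tails of the two strings is what moves the children points around in the search space: the later the site, the closer the children stay to their parents.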

Step 6 The next step is to perform mutation on strings in the
intermediate population. For bit-wise mutation, we flip a coin with
a probability Pm = 0.05 for every bit. If the outcome is true, we alter
the bit to 1 or 0 depending on the bit value. With a probability of
0.05, a population size 20, and a string length 20, we can expect to
alter a total of about 0.05 x 20 x 20 or 20 bits in the population.
Table 6.2 shows the mutated bits in bold characters. As counted
from the table, we have actually altered 10 bits. Figure 6.7
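Bit-wise mutation as described can be sketched as follows (illustrative Python; Pm = 0.05 as in the example):

```python
import random

def mutate(string, pm=0.05, rng=random):
    """Flip each bit independently with probability pm."""
    flipped = {'0': '1', '1': '0'}
    return ''.join(flipped[b] if rng.random() < pm else b for b in string)
```

With Pm = 0.05 and 20 strings of length 20, about 0.05 x 20 x 20 = 20 flips are expected per generation, which matches the rough count given in the text (10 bits were actually altered in this run).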

shows the effect of mutation on the intermediate population.

[Table 6.2, showing the intermediate population after crossover and
the new population after mutation, with the mutated bits marked in
bold, is not reproduced here.]
Figure 6.7 The population after the mutation operation. Some points
do not get mutated and remain unaltered. The best point in the
population has a function value 18.886 and the average function value
of the population is 140.210, an improvement of over 60 per cent.

In some cases, the mutation operator changes a point locally and in
some other cases it can bring a large change. The points marked with a
small circle are points in the intermediate population. The points
marked with a small box constitute the new population (obtained
after reproduction, crossover, and mutation). It is interesting to
note that if only one bit is mutated in a string, the point is moved
along a particular variable only. Like the crossover operator, the
mutation operator has created some points better and some points
worse than the original points. This flexibility enables GA operators
to explore the search space properly before converging to a region
prematurely. Although this requires some extra computation, this
flexibility is essential to solve global optimization problems.

Step 7 The resulting population becomes the new population.
We now evaluate each string as before by first identifying the
substrings for each variable and mapping the decoded values of the
substrings in the chosen intervals. This completes one iteration of
the genetic algorithm. We increment the generation counter to t = 1
and proceed to Step 3 for the next iteration. The new population
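Putting the steps together, one full run of the procedure can be sketched as follows (an illustrative, self-contained Python condensation of Steps 1 to 7, not the FORTRAN code given at the end of the chapter; parameter values follow the example):

```python
import random

def decode(bits, lo=0.0, hi=6.0):
    return lo + (hi - lo) * int(bits, 2) / (2 ** len(bits) - 1)

def fitness(s):
    x1, x2 = decode(s[:10]), decode(s[10:])
    f = (x1**2 + x2 - 11)**2 + (x1 + x2**2 - 7)**2   # Himmelblau function
    return 1.0 / (1.0 + f)

def run_ga(pop_size=20, length=20, tmax=30, pc=0.8, pm=0.05, seed=0):
    rng = random.Random(seed)
    # Step 1: random initial population of binary strings
    pop = [''.join(rng.choice('01') for _ in range(length))
           for _ in range(pop_size)]
    for t in range(tmax):
        fits = [fitness(s) for s in pop]              # Step 2: evaluation
        total = sum(fits)
        def select():                                 # Step 4: roulette wheel
            r, c = rng.random() * total, 0.0
            for s, f in zip(pop, fits):
                c += f
                if r < c:
                    return s
            return pop[-1]
        pool = [select() for _ in range(pop_size)]
        new_pop = []
        for i in range(0, pop_size, 2):               # Steps 5-6: crossover, mutation
            p1, p2 = pool[i], pool[i + 1]
            if rng.random() < pc:
                site = rng.randint(1, length - 1)
                p1, p2 = p1[:site] + p2[site:], p2[:site] + p1[site:]
            for s in (p1, p2):
                new_pop.append(''.join(
                    ('1' if b == '0' else '0') if rng.random() < pm else b
                    for b in s))
        pop = new_pop                                 # Step 7: new population
    return max(pop, key=fitness)

best = run_ga()
```

The `seed`, `pop_size`, and loop structure here are illustrative choices; the hand simulation in the text corresponds to the body of this loop executed once.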
