Genetic Algorithms

• A search technique
• No derivatives needed
• Introduced by John Holland (1975)
• Uses relative grading (fitness) of candidate solutions
• Does not work toward one solution at a time
• We get successively better generations (each generation improves in terms of fitness).

Differences between GAs and
Traditional optimization methods
• GAs work with the coding of the parameter
set, not the parameters themselves.
• GAs search for the population of points, not a
single point.
• GAs use the objective function information
and not the derivative or second derivative.
• GAs use stochastic transition rules, not deterministic ones.
Basic Genetic Algorithm
• A population of 2n to 4n solutions is used (n = number of variables).
• Each solution is usually represented by a string of binary variables.
• The string length can be made large enough to achieve any desired accuracy of approximation.
• The numerical value of the objective function corresponds to the concept of fitness in natural selection.

• After the trial solutions are evaluated, a new generation is produced by selecting, using stochastic principles, the fittest parents from among the trial solutions to produce children.
• For each child, crossover (exchange of portions of the strings of the two parents) generates new solutions.
• Random alteration of binary digits in a string reproduces the effect of mutation.
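The steps above (select fit parents, cross them over, mutate the children) can be sketched as a minimal GA. The 5-bit encoding, the toy objective x², and all parameter values here are illustrative assumptions, not taken from the slides.

```python
import random

# Toy problem (an assumption for illustration): maximize f(x) = x^2
# for x in 0..31, with each solution encoded as a 5-bit binary string.
BITS = 5

def fitness(bitstring):
    """Decode the bit string to an integer and score it."""
    x = int(bitstring, 2)
    return x * x

def select(population):
    """Fitness-proportionate (roulette-wheel) selection of one parent.
    The +1 keeps all weights positive even if every fitness is zero."""
    weights = [fitness(s) + 1 for s in population]
    return random.choices(population, weights=weights, k=1)[0]

def crossover(p1, p2):
    """Single-point crossover: cut both parents at the same point
    and paste the head of one onto the tail of the other."""
    cut = random.randint(1, BITS - 1)
    return p1[:cut] + p2[cut:]

def mutate(s, rate=0.05):
    """Flip each bit with a small probability (mutation)."""
    return "".join(b if random.random() > rate else "10"[int(b)] for b in s)

def evolve(generations=30, pop_size=8):
    """Run the basic GA loop and return the fittest surviving string."""
    population = ["".join(random.choice("01") for _ in range(BITS))
                  for _ in range(pop_size)]
    for _ in range(generations):
        population = [mutate(crossover(select(population), select(population)))
                      for _ in range(pop_size)]
    return max(population, key=fitness)
```

Because selection, crossover, and mutation are all stochastic, each run can return a different string; over the generations the population tends toward high-fitness strings such as 11111.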

How do we represent a set of variables
as a string?
• As binary strings, e.g. X = (01111, 00100, 10101, 00111).
• Each candidate point, e.g. (x1, x2) = A, (x1, x2) = B, (x1, x2) = C, is encoded this way, to find Y(A) and ……
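Decoding such a concatenated string back into variable values is a matter of slicing off each variable's bits. This sketch assumes each variable is a 5-bit unsigned integer, as in the slide's example; a real problem would further map each integer onto the variable's actual range.

```python
def decode(chromosome, bits=5):
    """Split a concatenated bit string into per-variable integers."""
    return [int(chromosome[i:i + bits], 2)
            for i in range(0, len(chromosome), bits)]

# The slide's example X = (01111, 00100, 10101, 00111), concatenated:
x = decode("01111001001010100111")
# x == [15, 4, 21, 7]
```

To cover a continuous interval [lo, hi], each integer v would then be scaled as lo + v * (hi - lo) / (2**bits - 1).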

Crossover:
Cutting and pasting portions of two parent strings.
Crossover is repeated; it sometimes combines good parents to produce better children.
GAs tend to find global optimum solutions, but the process is slow.

Particle swarm optimization

• Like genetic algorithms, a population-based stochastic search.
• Each bird (particle) follows the leader.
• After a while (e.g. 15 min in a flock), the leader changes, as another bird takes the best position.
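The follow-the-leader idea can be sketched as a minimal one-dimensional PSO. The objective f(x) = x², the bounds, and the parameter values (w, c1, c2) are typical illustrative choices, not from the slides.

```python
import random

def f(x):
    """Toy objective to minimize (an assumption for illustration)."""
    return x * x

def pso(n_particles=10, iters=50, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization in one dimension."""
    pos = [random.uniform(-10, 10) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    best = pos[:]                      # each particle's personal best
    leader = min(pos, key=f)           # the swarm's leader (global best)
    for _ in range(iters):
        for i in range(n_particles):
            # Velocity update: inertia + pull toward personal best
            # + pull toward the leader.
            vel[i] = (w * vel[i]
                      + c1 * random.random() * (best[i] - pos[i])
                      + c2 * random.random() * (leader - pos[i]))
            pos[i] += vel[i]
            if f(pos[i]) < f(best[i]):
                best[i] = pos[i]
        # The leader changes whenever another particle finds a better point.
        leader = min(best + [leader], key=f)
    return leader
```

Note how the "leader" is re-elected every iteration, mirroring the flock analogy: whichever particle currently holds the best-known position leads the swarm.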
Hybrid Optimization
GAs + gradient method
GAs + conjugate-gradient method, or other nontraditional methods