
GENETIC ALGORITHMS

UNIT-3
MODULE—III

Genetic Algorithms
• Fundamentals of genetic algorithms
• Encoding
• Fitness functions, Reproduction
• Genetic Modeling: Crossover, Inversion and
deletion, Mutation operator, Bit-wise operators and
their uses in GA
• Convergence of Genetic algorithm, Applications
• Real life Problems
Fundamentals of genetic algorithms

• GAs were initiated and developed by John Holland (1970).
• GAs are computerized search and optimization
algorithms based on the mechanics of natural
genetics and natural selection.
• They perform direct random searches through a
given set of alternatives with the aim of finding
the best alternative with respect to the given
criteria of goodness.
• Genetic Algorithms (GAs) are adaptive heuristic
search algorithms based on the evolutionary ideas of
natural selection and genetics.
Fundamentals of genetic algorithms
• Genetic algorithms (GAs) are a part of Evolutionary
computing, a rapidly growing area of artificial intelligence.
GAs are inspired by Darwin's theory about evolution -
"survival of the fittest".
 Solving a problem means looking for the
solution which is best among others.
• Finding the solution to a problem is often thought of:
 In computer science and AI, as a process of search through
the space of possible solutions. The set of possible
solutions defines the search space (also called state space)
for a given problem. Solutions or partial solutions are
viewed as points in the search space.
Fundamentals of genetic algorithms

• In engineering and mathematics, as a process
of optimization. The problem is first
formulated as a mathematical model
expressed in terms of functions; then, to find
a solution, we discover the parameters that
optimize the model or the function
components that provide optimal system
performance.
Why Genetic Algorithms ?

- It is better than conventional AI; it is more robust.
- Unlike older AI systems, GAs do not break easily
even if the inputs change slightly, or in the presence
of reasonable noise.
- While performing search in a large state space,
a multi-modal state space, or an n-dimensional
surface, genetic algorithms offer significant
benefits over many other typical search
optimization techniques such as linear programming,
heuristic search, breadth-first and depth-first search.
Why Genetic Algorithms ?
• "Genetic Algorithms are good at taking large,
potentially huge search spaces and navigating
them, looking for optimal combinations of
things, the solutions one might not otherwise
find in a lifetime.”
• (Salvatore Mangano, Computer Design, May 1995)
Basic Concepts
1. Optimization
• Optimization is a process that finds a best, or optimal,
solution for a problem.
• The Optimization problems are centered around three
factors :
1) An objective function which is to be minimized or maximized.

 Examples:
 In manufacturing, we want to maximize the profit or minimize the cost.
 In designing an automobile panel, we want to maximize the strength.
Basic Concepts
2) A set of unknowns or variables that affect
the objective function.
Examples:
• In manufacturing, the variables are amount of
resources used or the time spent.
• In panel design problem, the variables are
shape and dimensions of the panel.
Basic Concepts
3) A set of constraints that allow the unknowns
to take on certain values but exclude others.
• Examples:
• In manufacturing, one constraint is that all
"time" variables must be non-negative.
• In the panel design, we want to limit the
weight and put constraints on its shape.
Search Optimization Algorithms
 Our main concern is to understand
the evolutionary algorithms :
 how to describe the process of
search,
 how to implement and carry out
search,
 what are the elements required
to carry out search, and
 the different search strategies
Biological Background – Basic Genetics

• Every organism has a set of rules describing
how that organism is built. All living
organisms consist of cells.
• In each cell there is the same set of
chromosomes.
• Chromosomes are strings of DNA and serve
as a model for the whole organism.
Biological Background – Basic
Genetics
• A chromosome consists of genes, blocks of DNA.
[Figure: a cell containing chromosomes and other cell bodies]

• Each gene encodes a particular protein that represents a trait
(feature), e.g., color of eyes.
• Possible settings for a trait (e.g., blue, brown) are called alleles.
• Each gene has its own position in the chromosome, called its locus.
• The complete set of genetic material (all chromosomes) is
called a genome.
Biological Background – Basic
Genetics
• A particular set of genes in a genome is called the
genotype.
• The physical expression of the genotype (the
organism itself after birth) is called the phenotype:
its physical and mental characteristics, such as eye
color, intelligence, etc.
• When two organisms mate they share their genes;
the resultant offspring may end up having half the
genes from one parent and half from the other.
This process is called recombination (crossover).
Biological Background – Basic Genetics
• The newly created offspring can then be
mutated. Mutation means that elements of
the DNA are changed a little. These changes
are mainly caused by errors in copying genes
from the parents.
• The fitness of an organism is measured by
the success of the organism in its life (survival).
Encoding

• Before a genetic algorithm can be put to
work on any problem, a method is needed
to encode potential solutions to that problem
in a form that a computer can process.
• One common approach is to encode solutions as
binary strings: sequences of 1's and 0's, where
the digit at each position represents the value of
some aspect of the solution.
• Example:
• A gene represents some data (eye color, hair
color, sight, etc.).
• A chromosome is an array of genes; in binary
form:
Encoding
• a Gene looks like : (11100010)
• a Chromosome looks like:
Gene1 Gene2 Gene3 Gene4
(11000010) (00001110) (001111010) (10100011)

• A chromosome should in some way contain information
about the solution it represents; it thus requires encoding.
The most popular way of encoding is a binary string, like:
• Chromosome 1 : 1101100100110110
• Chromosome 2 : 1101111000011110
Each bit in the string represents some characteristic of the solution.
 There are many other ways of encoding, e.g., encoding values
as integers or real numbers, or as permutations, and so on.
 The virtue of these encoding methods depends on the problem
to be worked on.
Binary Encoding
• Binary encoding is the most common way to
represent the information contained in a
chromosome. In genetic algorithms it was used
first, because of its relative simplicity.
• In binary encoding, every chromosome is a
string of bits, 0 or 1, like:
• Chromosome 1: 101100101100101011100101
• Chromosome 2: 111111100000110000011111
Binary encoding gives many possible chromosomes
even with a small number of alleles, i.e., possible
settings for a trait (feature).
Binary Encoding
• This encoding is often not natural for
many problems and sometimes
corrections must be made after
crossover and/or mutation.
• Example 1:
• A one-variable function taking the numeric
values 0 to 15, with each value represented
by a 4-bit binary string.
Binary Encoding
Numeric value  4-bit string    Numeric value  4-bit string    Numeric value  4-bit string
0              0000            6              0110            12             1100
1              0001            7              0111            13             1101
2              0010            8              1000            14             1110
3              0011            9              1001            15             1111
4              0100            10             1010
5              0101            11             1011
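The table maps integers 0-15 to fixed-width bit strings. A minimal Python sketch of such an encoder/decoder is shown below (function names are illustrative, not from the slides):

```python
def encode(value: int, n_bits: int = 4) -> str:
    """Encode an integer as a fixed-width binary string (the chromosome)."""
    return format(value, f"0{n_bits}b")

def decode(chromosome: str) -> int:
    """Decode a binary chromosome back into its integer value."""
    return int(chromosome, 2)

if __name__ == "__main__":
    assert encode(13) == "1101" and decode("1101") == 13
    print([encode(v) for v in range(16)])   # the 16 strings in the table above
```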
Value Encoding
• The value encoding can be used in
problems where values such as real
numbers are used. Use of binary
encoding for this type of problems would
be difficult.
1. In value encoding, every chromosome is
a sequence of some values.
2. The values can be anything connected to
the problem, such as : real numbers,
characters or objects.
Value Encoding

• Examples :
Chromosome A 1.2324 5.3243 0.4556 2.3293 2.4545

Chromosome B ABDJEIFJDHDIERJFDLDFLFEGT

Chromosome C (back), (back), (right), (forward), (left)

• For value encoding it is often necessary to develop
new crossover and mutation operators specific to
the problem.
Permutation Encoding

•Permutation encoding can be used in


ordering problems, such as traveling
salesman problem or task ordering
problem.
• 1. In permutation encoding, every
chromosome is a string of numbers that
represent a position in a sequence.
Permutation Encoding
• 2. Permutation encoding is useful for
ordering problems. For some
problems, crossover and mutation
corrections must be made to leave
the chromosome consistent.
Permutation Encoding
•Examples :

1. The Traveling Salesman problem:
There are cities with given distances between them.
The traveling salesman has to visit all of them, but he does not want
to travel more than necessary. Find a sequence of cities with a
minimal traveled distance. Here, encoded chromosomes describe
the order of cities the salesman visits.
2. The Eight Queens problem :
• There are eight queens. Find a way to place them on a chess
board so that no two queens attack each other. Here, encoding
describes the position of a queen on each row.
Tree Encoding
•Tree encoding is used mainly for evolving programs
or expressions. For genetic programming:
 In tree encoding, every chromosome is a tree of
some objects, such as functions or commands in a
programming language.
 Tree encoding is useful for evolving programs or any
other structures that can be encoded as trees.
 Crossover and mutation can be done in a relatively
easy way.
Tree Encoding
Note : Tree encoding is good for evolving programs. The
programming language LISP is often used. Programs in
LISP can be easily parsed as a tree, so the crossover
and mutation is relatively easy

Chromosome A: (+ x (/ 5 y))
Chromosome B: (do until step wall)


Operators of Genetic Algorithm
• Genetic operators used in genetic
algorithms maintain genetic diversity.
• Genetic diversity or variation is a necessity
for the process of evolution.
• Genetic operators are analogous to those
which occur in the natural world:
- Reproduction (or Selection);
- Crossover (or Recombination);
- Mutation.
Operators of Genetic Algorithm

• In addition to these operators, there are
some parameters of a GA. One important
parameter is the population size.
• The population size says how many
chromosomes are in the population (in one
generation).
• If there are only a few chromosomes, the
GA has few possibilities to perform crossover
and only a small part of the search space is
explored.
Operators of Genetic Algorithm

 If there are many chromosomes, the GA
slows down.
 Research shows that after some limit, it is
not useful to increase the population size,
because it does not help in solving the
problem faster. The population size depends
on the type of encoding and the problem.
Fitness functions
• GAs are usually suitable for solving
maximization problems.
• Minimization problems are transformed into
maximization problems by some suitable
transformation.
• Fitness function F(X) is first derived from the
objective function and used in successive
genetic operations.
Fitness functions
• Consider the following transformations:
• F(X) = f(X) for a maximization problem
• F(X) = 1/f(X) for a minimization problem, if f(X) ≠ 0
• F(X) = 1/(1 + f(X)), if f(X) = 0
A number of such transformations are
possible. The fitness function value of a
string is known as the string's fitness.
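As a rough illustration of these transformations (a minimal sketch assuming the objective value f(X) is non-negative; names are illustrative):

```python
def fitness(f_value: float, maximize: bool = True) -> float:
    """Fitness F(X) derived from the objective f(X), per the transformations above."""
    if maximize:
        return f_value                 # F(X) = f(X) for maximization
    if f_value != 0:
        return 1.0 / f_value           # F(X) = 1/f(X) for minimization, f(X) != 0
    return 1.0 / (1.0 + f_value)       # F(X) = 1/(1 + f(X)) when f(X) = 0

if __name__ == "__main__":
    print(fitness(4.0, maximize=True))    # 4.0
    print(fitness(4.0, maximize=False))   # 0.25
    print(fitness(0.0, maximize=False))   # 1.0
```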
Reproduction, or Selection
• Reproduction is usually the first
operator applied on population. From
the population, the chromosomes
are selected to be parents to
crossover and produce offspring.

• The problem is how to select these


chromosomes ?
Reproduction, or Selection
• According to Darwin's evolution theory "survival of the
fittest" – the best ones should survive and create new
offspring.
 The Reproduction operators are also called Selection
operators.
 Selection means extracting a subset of genes from an
existing population, according to some definition of
quality. Every gene has a meaning, so one can derive
from the gene a kind of quality measurement called
a fitness function. Following this quality (fitness
value), selection can be performed.
Reproduction, or Selection

 The fitness function quantifies the optimality of a solution
(chromosome) so that a particular solution may be
ranked against all the other solutions.
• The function depicts the closeness of a given 'solution' to
the desired result.
• Many reproduction operators exist and they all essentially
do the same thing: they pick from the current population the
strings of above-average fitness and insert multiple copies
of them into the mating pool in a probabilistic manner.
Reproduction, or Selection

• The most commonly used methods
of selecting chromosomes as parents
for crossover are:
Roulette wheel selection,
Boltzmann selection,
Tournament selection,
Rank selection,
Steady state selection.
Roulette wheel selection
• Example: maximize the function f(x) = x² with x in
the integer interval [0, 31], i.e., x = 0, 1, . . ., 30, 31.
1. The first step is encoding of chromosomes; use a binary representation
for integers; 5 bits are used to represent integers up to 31.
2. Assume that the population size is 4.
3. Generate the initial population at random. These are the chromosomes or
genotypes, e.g., 01101, 11000, 01000, 10011.
4. Calculate the fitness value for each individual.
(a) Decode each individual into an integer (called the phenotype):
• 01101 → 13; 11000 → 24; 01000 → 8; 10011 → 19.
(b) Evaluate the fitness according to f(x) = x²:
• 13 → 169; 24 → 576; 8 → 64; 19 → 361.
Roulette wheel selection
• Select parents (two individuals) for crossover based on their
fitness pi. Out of the many methods for selecting the best
chromosomes, if roulette-wheel selection is used, then the
probability of selecting the i-th string in the population is

  pi = Fi / ( Σ j=1..n Fj )

• where Fi is the fitness of string i in the population, expressed as f(x),
• pi is the probability of string i being selected,
• n is the number of individuals in the population (the population size), here n = 4,
• n × pi is the expected count.
Roulette wheel selection
String No   Initial Population   X value   Fitness Fi = f(x) = x²   Probability pi   Expected count N × pi
1           01101                13        169                      0.14             0.58
2           11000                24        576                      0.49             1.97
3           01000                8         64                       0.06             0.22
4           10011                19        361                      0.31             1.23
Sum                                        1170                     1.00             4.00
Average                                    293                      0.25             1.00
Max                                        576                      0.49             1.97

String no. 2 has the maximum chance of selection.
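A minimal sketch of roulette-wheel selection for this worked example (f(x) = x² over 5-bit strings; function names are illustrative):

```python
import random

def fitness(chromosome: str) -> int:
    x = int(chromosome, 2)     # decode the genotype into the phenotype x
    return x * x               # f(x) = x^2

def roulette_select(population, k):
    """Pick k parents with probability proportional to fitness: pi = Fi / sum(Fj)."""
    fits = [fitness(c) for c in population]
    total = sum(fits)
    weights = [f / total for f in fits]
    return random.choices(population, weights=weights, k=k)

if __name__ == "__main__":
    population = ["01101", "11000", "01000", "10011"]
    print(roulette_select(population, k=4))   # string "11000" is favoured (pi = 0.49)
```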


Roulette wheel selection (Fitness-Proportionate Selection)

• Roulette-wheel selection, also known as fitness-proportionate
selection, is a genetic operator used for selecting potentially
useful solutions for recombination.
• In fitness-proportionate selection:
 the chance of an individual being selected is
proportional to its fitness, whether greater or less than its
competitors' fitness;
 conceptually, this can be thought of as a game of
roulette.
Roulette wheel selection (Fitness-Proportionate Selection)

[Figure: a roulette wheel whose circumference is divided among 8 individuals in proportion to their fitness values.]
 The 5th individual has a higher fitness than the others, so the wheel would choose the 5th individual more often than the other individuals.
 The wheel is spun n = 8 times, each time selecting one instance of a string, chosen by the wheel pointer.
Roulette wheel selection (Fitness-Proportionate Selection)
• The probability of selecting the i-th string is

  pi = Fi / ( Σ j=1..n Fj )

where
 n = number of individuals, called the population size;
 pi = probability of the i-th string being selected;
 Fi = fitness of string i in the population.
• Because the circumference of the wheel is marked according to each
string's fitness, the roulette-wheel mechanism is expected to make
Fi / F̄ copies of the i-th string in the mating pool, where the average
fitness is F̄ = ( Σ j=1..n Fj ) / n and the expected count of string i is
(n = 8) × pi.
Boltzmann Selection
• Simulated annealing is a method used to minimize or
maximize a function.
 This method simulates the process of slow cooling of
molten metal to achieve the minimum function value in a
minimization problem.
 The cooling phenomenon is simulated by controlling a
temperature-like parameter introduced with the concept of
the Boltzmann probability distribution.
 A system in thermal equilibrium at a temperature T has
its energy distributed according to the probability
• P(E) = exp( -E / kT ), where k is the Boltzmann constant.
Boltzmann Selection
 This expression suggests that a system at a higher
temperature has an almost uniform probability of being
at any energy state, but at a lower temperature it has a
small probability of being at a higher energy state.
 Thus, by controlling the temperature T and
assuming that the search process follows the
Boltzmann probability distribution, the convergence
of the algorithm is controlled.
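The slide states the Boltzmann distribution for energy, P(E) = exp(-E/kT). For a maximization problem a common variant weights each individual by exp(Fi/T), so that at a high temperature the choice is nearly uniform and at a low temperature fitter individuals dominate. A minimal sketch under that assumption (names are illustrative):

```python
import math
import random

def boltzmann_select(population, fitnesses, T, k=1):
    """Select k individuals with weights exp(Fi / T); T controls selection pressure."""
    weights = [math.exp(f / T) for f in fitnesses]
    return random.choices(population, weights=weights, k=k)

if __name__ == "__main__":
    pop = ["01101", "11000", "01000", "10011"]
    fit = [169, 576, 64, 361]
    print(boltzmann_select(pop, fit, T=1000.0, k=4))  # high T: nearly uniform choice
    print(boltzmann_select(pop, fit, T=50.0, k=4))    # low T: "11000" dominates
```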
Roulette-Wheel selection
Following are the points to be noted:
1 The bottom-most individual in the population has a cumulative
probability PN = 1.
2 The cumulative probability of any individual lies between 0 and 1.
3 The i-th individual in the population represents the
cumulative probability from Pi−1 to Pi.
4 The top-most individual represents the cumulative
probability values between 0 and p1.
5 It may be checked that the selection is consistent with
the expected count Ei = N × pi for the i-th individual.
Is the selection sensitive to ordering, say in
ascending order of the fitness values?



Suppose there are only four binary strings in a population,
whose fitness values are f1, f2, f3 and f4.
Their values are 80%, 10%, 6% and 4%, respectively.
What is the expected count of selecting f3, f4, f2 or f1?


Problem with Roulette-Wheel selection
scheme
The limitation of the Roulette-Wheel selection scheme can be
illustrated with the following figure.

[Figure: a roulette wheel with sectors of 80%, 10%, 6% and 4%.]

The observation is that the individual with the higher fitness value will
crowd out the others from being selected for mating. This leads to less
diversity, and hence less scope for exploring alternative solutions, and
also to premature (early) convergence to a locally optimal solution.
Rank-based selection
To overcome the problem with Roulette-Wheel selection, a
rank-based selection scheme has been proposed.
The process of ranking selection consists of two steps.
1 Individuals are arranged in ascending order of their fitness
values. The individual which has the lowest value of fitness
is assigned rank 1, and the other individuals are ranked
accordingly.
2 The proportionate-based selection scheme is then followed,
based on the assigned rank.
Note:
The % area to be occupied by a particular individual i is given by

  ( ri / Σ j=1..N rj ) × 100

where ri indicates the rank of the i-th individual.
Two or more individuals with the same fitness values should have
the same rank.
Rank-based selection: Example
Continuing with the population of 4 individuals with fitness values:
f1 = 0.40, f2 = 0.05, f3 = 0.03 and f4 = 0.02.
Their proportionate areas on the wheel are 80%, 10%, 6% and 4%.
Their ranks and the corresponding areas are shown in the following figure.

[Figure: two wheels, one with proportionate areas (80%, 10%, 6%, 4%) and one with rank-based areas (40%, 30%, 20%, 10%).]

It is evident that the expected counts have improved compared to
Roulette-Wheel selection.
Rank-based selection: Implementation
Input: A population of size N with their fitness values
Output: A mating pool of size Np .
Steps:
1 Arrange all individuals in ascending order of their fitness value.
2 Rank the individuals according to their position in the order,
that is, the worst will have rank 1, the next rank 2 and best will
have rank N.
3 Apply the Roulette-Wheel selection, but based on the
assigned ranks. For example, the probability pi of the i-th
individual would be

  pi = ri / ( Σ j=1..N rj )

4 Stop
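A minimal sketch of this rank-based procedure (worst gets rank 1, best rank N, selection proportionate to rank; names are illustrative):

```python
import random

def rank_select(population, fitnesses, pool_size):
    """Proportionate selection on ranks: pi = ri / sum(rj)."""
    order = sorted(range(len(population)), key=lambda i: fitnesses[i])
    ranks = [0] * len(population)
    for rank, idx in enumerate(order, start=1):   # worst -> rank 1, ..., best -> rank N
        ranks[idx] = rank
    return random.choices(population, weights=ranks, k=pool_size)

if __name__ == "__main__":
    pop = ["f1", "f2", "f3", "f4"]
    fit = [0.40, 0.05, 0.03, 0.02]   # ranks 4, 3, 2, 1 -> areas 40%, 30%, 20%, 10%
    print(rank_select(pop, fit, pool_size=4))
```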
Comparing Rank-based selection
with Roulette-Wheel selection
Individual   % Area (proportionate)   fi      Rank (ri)   % Area (rank-based)
1            80 %                     0.40    4           40 %
2            10 %                     0.05    3           30 %
3            6 %                      0.03    2           20 %
4            4 %                      0.02    1           10 %

[Figure: Roulette-Wheel based on proportionate-based selection vs. Roulette-Wheel based on ordinal (rank)-based selection.]

A rank-based selection is expected to perform better than
the Roulette-Wheel selection, in general.
Basic concept of tournament selection
Who will win the match in this tournament?

[Figure: a knock-out tournament bracket among eight teams (India, New Zealand, England, Sri Lanka, S. Africa, Australia, Pakistan, Zimbabwe) leading to a single winner.]


Tournament selection
1 In this scheme, we select the tournament size n (say 2 or 3)
at random.
2 We pick n individuals from the population at random and
determine the best one in terms of their fitness values.
3 The best individual is copied into the mating pool.
4 Thus, in this scheme only one individual is selected per
tournament, and Np tournaments are to be played to make the size
of the mating pool equal to Np.
Note:
Here, there is a chance for a good individual to be copied into
the mating pool more than once.
This technique is found to be computationally faster
than both the Roulette-Wheel and Rank-based selection
schemes.
Tournament selection : Implementation

The tournament selection scheme can be stated as follows.


Input : A Population of size N with their fitness values
Output : A mating pool of size Np (Np ≤ N)
Steps:
1 Select NU individuals at random (NU ≤ N).
2 Out of the NU individuals, choose the individual with the highest fitness
value as the winner.
3 Add the winner to the mating pool, which is initially empty.
4 Repeat Steps 1-3 until the mating pool contains Np individuals.
5 Stop


Tournament selection : Example
N = 8, NU = 2, Np = 8
Input:
Individual   1    2    3    4    5    6    7    8
Fitness      1.0  2.1  3.1  4.0  4.6  1.9  1.8  4.5

Output:
Trial   Individuals   Selected
1       2, 4          4
2       3, 8          8
3       1, 3          3
4       4, 5          5
5       1, 6          6
6       1, 2          2
7       4, 2          4
8       8, 3          8

If the fitness values of two individuals are the same, then there is a tie
in the match! So, what to do?
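A minimal sketch of this tournament scheme (N = 8, NU = 2, Np = 8; names are illustrative, and ties are broken arbitrarily here):

```python
import random

def tournament_select(fitnesses, nu, np_pool):
    """Play np_pool tournaments of size nu; the fittest contestant wins each one."""
    pool = []
    for _ in range(np_pool):
        contestants = random.sample(range(len(fitnesses)), nu)   # pick NU at random
        winner = max(contestants, key=lambda i: fitnesses[i])    # highest fitness wins
        pool.append(winner + 1)                                  # 1-based individual ids
    return pool

if __name__ == "__main__":
    fitness = [1.0, 2.1, 3.1, 4.0, 4.6, 1.9, 1.8, 4.5]
    print(tournament_select(fitness, nu=2, np_pool=8))
```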
Tournament selection

Note:
Different twists can be made to the basic tournament
selection scheme:
1 Frequency of NU: a small value (2, 3), a moderate value (≈ 50% of N),
or a large value (NU ≈ N).
2 Once an individual is selected for the mating pool, it can be
discarded from the current population, thus disallowing the
selection of an individual more than once.
3 Replace the worst individuals in the mating pool with those that are not
winners in any trial.
Steady-State selection algorithm

Steps:
1 NU individuals with the highest fitness values are selected.
2 NU individuals with the worst fitness values are removed, and the NU
individuals selected in Step 1 are added to the mating pool.
This completes the selection procedure for one iteration. Repeat
the iteration until a mating pool of the desired size is obtained.
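A minimal sketch of one steady-state iteration as described above (names are illustrative):

```python
def steady_state_step(population, fitnesses, nu):
    """Copy the NU fittest individuals into the mating pool and drop the NU worst."""
    order = sorted(range(len(population)), key=lambda i: fitnesses[i])
    best = [population[i] for i in order[-nu:]]        # NU highest-fitness individuals
    survivors = [population[i] for i in order[nu:]]    # remove the NU worst
    return best, survivors

if __name__ == "__main__":
    pop = ["01101", "11000", "01000", "10011"]
    fit = [169, 576, 64, 361]
    pool, pop = steady_state_step(pop, fit, nu=2)
    print(pool)   # the two fittest strings
```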


Genetic Modeling:
• Reproduction in Genetic Algorithms:

• Crossover

• Mutation
Mating Pool: Prior to crossover operation

 Mating pairs (each pair consists of two strings) are selected at
random. Thus, if the size of the mating pool is N, then N/2 mating
pairs are formed [random mating].
 The pairs are checked as to whether they will participate in
reproduction or not by tossing a coin whose probability of head is
pc. If the toss is a head, then the parents participate in reproduction;
otherwise, they remain intact in the population.

Note:
Generally, pc = 1.0, so that almost all the parents can participate in
reproduction.
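A small sketch of forming N/2 random mating pairs and deciding participation with probability pc (names are illustrative):

```python
import random

def form_mating_pairs(mating_pool, pc=1.0):
    """Pair the pool at random; each pair joins crossover with probability pc."""
    pool = mating_pool[:]
    random.shuffle(pool)                        # random mating
    pairs = zip(pool[0::2], pool[1::2])         # N/2 pairs
    return [(p1, p2, random.random() < pc) for p1, p2 in pairs]

if __name__ == "__main__":
    for p1, p2, crossed in form_mating_pairs(["01101", "11000", "01000", "10011"], pc=0.9):
        print(p1, p2, "crossover" if crossed else "copied unchanged")
```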


Crossover operation

 Once a pool of mating pairs is selected, they undergo
crossover operations.
1 In crossover, there is an exchange of properties between two
parents, and as a result two offspring solutions are
produced.
2 The crossover point(s) (also called k-point(s)) is (are)
decided using a random number generator generating
integer(s) between 1 and L, where L is the length of the
chromosome.
3 Then we perform an exchange of gene values with respect to
the k-point(s).
 There are many exchange mechanisms and hence many
crossover strategies.


Crossover operations in Binary-coded
GAs
There exist a large number of crossover schemes; a few
important ones are listed below.
1 Single-point crossover
2 Two-point crossover
3 Multi-point crossover (also called n-point crossover)
4 Uniform crossover (UX)
5 Half-uniform crossover (HUX)
6 Shuffle crossover


Single point crossover

1 Here, we select the k-point lying between 1 and L. Let it be k.
2 A single crossover point at k on both parents' strings is selected.
3 All data beyond that point in either string is swapped between
the two parents.
4 The resulting strings are the chromosomes of the offspring
produced.


Single point crossover: Illustration

Before crossover:
Parent 1:  0 1 1 0 | 0 0 1 0     (two diploids from a mating pair)
Parent 2:  1 0 1 0 | 1 1 0 0
           crossover point k (selected at random)

After crossover:
Offspring 1:  0 1 1 0 | 1 1 0 0   (two diploids for the two new offspring produced)
Offspring 2:  1 0 1 0 | 0 0 1 0
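A minimal sketch of single-point crossover on binary strings; with k = 4 it reproduces the illustration above (names are illustrative):

```python
import random

def single_point_crossover(parent1, parent2, k=None):
    """Swap all genes beyond crossover point k between the two parents."""
    assert len(parent1) == len(parent2)
    if k is None:
        k = random.randint(1, len(parent1) - 1)   # crossover point between 1 and L-1
    return parent1[:k] + parent2[k:], parent2[:k] + parent1[k:]

if __name__ == "__main__":
    print(single_point_crossover("01100010", "10101100", k=4))
    # ('01101100', '10100010')
```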
Two-point crossover

1 In this scheme, we select two different crossover points k1 and k2,
lying between 1 and L, at random such that k1 ≠ k2.
2 The middle parts are swapped between the two strings.
3 Alternatively, the left and right parts can also be swapped.


Two-point crossover: Illustration

Before crossover:
Parent 1:  0 1 1 0 0 0 1 0
Parent 2:  1 0 1 0 1 1 0 0
           (two crossover points k1 and k2 are selected at random)

After crossover (middle parts swapped):
Offspring 1:  0 1 1 0 1 0 1 0
Offspring 2:  1 0 1 0 0 1 0 0
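A minimal sketch of two-point crossover: two distinct points are chosen at random and the middle segments are swapped (names are illustrative):

```python
import random

def two_point_crossover(parent1, parent2):
    """Choose k1 < k2 at random and swap the middle segments of the parents."""
    k1, k2 = sorted(random.sample(range(1, len(parent1)), 2))   # k1 != k2
    child1 = parent1[:k1] + parent2[k1:k2] + parent1[k2:]
    child2 = parent2[:k1] + parent1[k1:k2] + parent2[k2:]
    return child1, child2

if __name__ == "__main__":
    print(two_point_crossover("01100010", "10101100"))
```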


Multi-point crossover

1 In multi-point crossover, a number of crossover points
are selected along the length of the string, at random.
2 The bits lying between alternate pairs of sites are then swapped.

[Figure: Parent 1 and Parent 2 with crossover points k1, k2, k3; the alternate segments (Swap 1, Swap 2) are exchanged to form Offspring 1 and Offspring 2.]


Uniform Crossover (UX)

 Uniform crossover is a more general version of the multi-point
crossover.
 In this scheme, at each bit position of the parent strings, we toss
a coin (with a certain probability ps) to determine whether the bits
at that position will be swapped or not.
 The two bits are then swapped or remain unaltered, accordingly.


Uniform crossover (UX): Illustration

Before crossover:
Parent 1:     1 1 0 0 0 1 0 1 1 0 0 1
Parent 2:     0 1 1 0 0 1 1 1 0 1 0 1
Coin tossing: 1 1 0 0 1 1 0 1 1 0 0 1

After crossover:
Offspring 1:  1 1 1 0 0 1 1 1 1 1 0 1
Offspring 2:  0 1 0 0 0 1 0 1 0 0 0 1

Rule: If the toss is 0, then swap the bits between P1 and P2.
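A minimal sketch of uniform crossover: at each position a coin decides whether the two bits are swapped, with swap probability ps (names are illustrative):

```python
import random

def uniform_crossover(parent1, parent2, ps=0.5):
    """Swap the bits at each position independently with probability ps."""
    child1, child2 = [], []
    for b1, b2 in zip(parent1, parent2):
        if random.random() < ps:      # corresponds to a "swap" toss in the rule above
            b1, b2 = b2, b1
        child1.append(b1)
        child2.append(b2)
    return "".join(child1), "".join(child2)

if __name__ == "__main__":
    print(uniform_crossover("110001011001", "011001110101"))
```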


Uniform crossover with crossover mask

 Here, each gene in the offspring is created by copying the
corresponding gene from one or the other parent, chosen
according to a randomly generated binary crossover mask of
the same length as the chromosome.
 Where there is a 1 in the mask, the gene is copied from the
first parent.
 Where there is a 0 in the mask, the gene is copied from
the second parent.
 The reverse is followed to create the other offspring.


Uniform crossover with crossover mask:
Illustration

Before crossover:
Parent 1:  1 1 0 0 0 1 0 1
Parent 2:  0 1 1 0 0 1 1 1
Mask:      1 0 0 1 1 1 0 1

After crossover:
Offspring 1:  1 1 1 0 0 1 1 1   (where there is a 1 in the mask, the gene is copied from Parent 1, else from Parent 2)
Offspring 2:  0 1 0 0 0 1 0 1   (where there is a 1 in the mask, the gene is copied from Parent 2, else from Parent 1)


Half-uniform crossover (HUX)

 In the half-uniform crossover scheme, exactly half of
the non-matching bits are swapped.
1 Calculate the Hamming distance (the number of differing bits)
between the given parents.
2 This number is then divided by two.
3 The resulting number is how many of the bits that do not match
between the two parents will be swapped, chosen probabilistically.
4 Choose the locations of this half of the bits (with some
strategy, say coin tossing) and swap them.


Half-uniform crossover: Illustration

Before crossover:
Parent 1:  1 1 0 0 0 0 1 0
Parent 2:  1 0 0 1 1 0 1 1      (here, the Hamming distance is 4)

Tossing (on the 4 differing bits): 1 0 1 1
(If the toss is 1, then swap the bits; else they remain as they are.)

After crossover:
Offspring 1:  1 0 0 0 1 0 1 1
Offspring 2:  1 1 0 1 0 0 1 0
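A minimal sketch of HUX in which exactly half of the differing bit positions are chosen at random and swapped, one common variant of the probabilistic choice above (names are illustrative):

```python
import random

def hux_crossover(parent1, parent2):
    """Swap half of the positions where the parents differ (Hamming distance / 2)."""
    c1, c2 = list(parent1), list(parent2)
    differing = [i for i, (a, b) in enumerate(zip(c1, c2)) if a != b]
    for i in random.sample(differing, len(differing) // 2):
        c1[i], c2[i] = c2[i], c1[i]
    return "".join(c1), "".join(c2)

if __name__ == "__main__":
    print(hux_crossover("11000010", "10011011"))   # Hamming distance 4, 2 bits swapped
```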


Shuffle crossover

 A single crossover point is selected. It divides a chromosome into
two parts, called schemas.
 In both parents, the genes are shuffled within each schema
(following some strategy for shuffling the bits).
 The schemas are then exchanged to create offspring (as in single-point
crossover).


Shuffle crossover: Illustration
Before crossover:
P1:  1 1 0 0 0 1 1 0
P2:  1 0 0 1 1 0 1 1
     (a single k-point is selected)

After shuffling the bits within each schema:
P1': 0 0 1 0 1 1 0 1
P2': 0 1 1 1 0 1 0 1

After single-point crossover:
Offspring 1:  0 0 1 0 1 1 0 1
Offspring 2:  0 1 1 1 0 1 0 1
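A minimal sketch of shuffle crossover: split at a random k-point, shuffle the bits inside each schema of each parent, then exchange schemas as in single-point crossover (names are illustrative):

```python
import random

def shuffle_crossover(parent1, parent2):
    k = random.randint(1, len(parent1) - 1)   # single k-point

    def shuffled(schema):
        bits = list(schema)
        random.shuffle(bits)                  # shuffle bits within the schema
        return "".join(bits)

    p1 = shuffled(parent1[:k]) + shuffled(parent1[k:])
    p2 = shuffled(parent2[:k]) + shuffled(parent2[k:])
    # exchange the right-hand schemas, as in single-point crossover
    return p1[:k] + p2[k:], p2[:k] + p1[k:]

if __name__ == "__main__":
    print(shuffle_crossover("11000110", "10011011"))
```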


Mutation

 After crossover is performed, mutation takes place.
 Mutation is a genetic operator used to
maintain genetic diversity from one generation of a
population of chromosomes to the next.
 Mutation occurs during evolution according
to a user-definable mutation probability, usually set
to a fairly low value; 0.01 is a good first choice.
Mutation

 Mutation alters one or more gene values
in a chromosome from its initial state. This can result
in entirely new gene values being added to the gene
pool. With these new gene values, the genetic algorithm
may be able to arrive at a better solution than was
previously possible.
 Mutation is an important part of the
genetic search; it helps to prevent the population
from stagnating at any local optimum. Mutation is
intended to prevent the search from falling into a local
optimum of the state space.
Mutation
• The mutation operators are of many types.
One simple one is Flip Bit;
others are Boundary, Non-Uniform,
Uniform, and Gaussian.
• The operator is selected based on the way the
chromosomes are encoded.
• The flip-bit mutation operator simply inverts the value of
the chosen gene, i.e., 0 goes to 1 and 1 goes to 0.
• Consider the two original offspring selected for
mutation.
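A minimal sketch of flip-bit mutation, inverting each gene independently with a small mutation probability pm (names are illustrative):

```python
import random

def flip_bit_mutation(chromosome, pm=0.01):
    """Invert each bit (0 -> 1, 1 -> 0) with probability pm."""
    return "".join(
        ("1" if bit == "0" else "0") if random.random() < pm else bit
        for bit in chromosome
    )

if __name__ == "__main__":
    print(flip_bit_mutation("01101100", pm=0.1))
```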
