
Intelligent Systems Theory

Lecture Eight
Dan Humpert
Associate Professor
University of Cincinnati
Mechanical Engineering

Agenda
Introduction
The History of Evolutionary Computation
Evolutionary Strategies, Evolutionary Programming, Genetic Algorithms, Genetic Programming, Differential Evolution, Particle Swarm Optimization, and Swarm Intelligence
Genetic Algorithms
Particle Swarm Optimization
Swarm Intelligence

Evolutionary Computation

Evolutionary Computation refers to a class of algorithms that utilize simulated evolution to some degree as a means to solve a variety of problems. Simulated evolution means having the ability to evolve a population of potential solutions such that weaker solutions are removed and replaced with incrementally better or stronger solutions. The algorithms follow the principle of natural selection. Each algorithm is biologically inspired and is based on the simulation of natural systems.

Chapter 7: Evolutionary Computation

Chapter Outline

History of Evolutionary Computation

Evolutionary Strategies

Developed by Rechenberg in the 1960s.
Introduced evolution strategies as a means to optimize vectors of real values for physical systems such as airfoils.
The population size was two members: the parent and the child. The child was modified via a form of mutation, and whichever member was more fit became the parent of the next generation.
Mutation, Recombination, Selection.

History of Evolutionary Computation

Evolutionary Strategies (continued)


In the example, the child member is more fit than the parent in the first generation, which results in it becoming the parent of the next generation.

History of Evolutionary Computation


Evolutionary Programming

Introduced by Fogel in the 1960s.
Evolved populations of finite state machines (FSMs).
An FSM is a sequential logic device that moves from one state to another; an example would be an elevator.
An FSM can be modeled as a graph with state transitions.
Example: the goal is to evolve a parsing finite state machine that can recognize patterns such as aabb, aabbaabb, aabbaabbaabb, etc.

History of Evolutionary Computation


Genetic Algorithms

Introduced by Holland in the 1960s.
Expanded the set of operators beyond mutation to include crossover, inversion, exchange, and others.
Used 8-bit and 16-bit string representations.

History of Evolutionary Computation


Genetic Algorithms (Continued)

Example of Mutation and Inversion Operators

History of Evolutionary Computation


Genetic Programming

Introduced by Koza in the 1990s.
Instead of bit strings (Genetic Algorithms) or real values (Evolutionary Strategies), genetic programming evolves a population of computer programs. It uses S-expressions (tree structures).
Fell out of favor in the late 1990s but has recently become very popular in computer science research.
Example: using the crossover operator to create a new S-expression.

Differential Evolution (DE)


Introduced by Storn and Price in 1996.
Uses vector differences as a means to perturb the vectors and probe the search space.
DE includes two tunable parameters: F, the weighting factor, and CR, the crossover probability (the probability that multi-point crossover occurs).

Differential Evolution

A newer stochastic, population-based method.
The difference of two randomly chosen vectors (here, vectors seven and nine), multiplied by the weighting factor, is added to another random vector (vector zero) to create a new trial vector.
The trial vector and the original vector then undergo multi-point crossover, as determined by the crossover rate.
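The generation step described above can be sketched in Python. This is an illustrative sketch, not a definitive implementation: the greedy keep-the-fitter selection and the per-component (binomial) form of multi-point crossover shown here are common DE choices assumed for the example, and the function names are hypothetical.

```python
import random

def de_step(pop, fitness, F=0.8, CR=0.9):
    """One generation of differential evolution (illustrative sketch).

    pop     : list of real-valued vectors (lists of floats)
    fitness : cost function to minimize
    F       : weighting factor applied to the vector difference
    CR      : crossover probability for each component
    """
    new_pop = []
    for i, target in enumerate(pop):
        # Pick three distinct vectors other than the target.
        a, b, c = random.sample([v for j, v in enumerate(pop) if j != i], 3)
        # Trial base: a random vector plus the weighted difference of two others.
        mutant = [a[k] + F * (b[k] - c[k]) for k in range(len(target))]
        # Multi-point crossover between the target and the mutant.
        trial = [m if random.random() < CR else t
                 for t, m in zip(target, mutant)]
        # Greedy selection: keep whichever vector is fitter.
        new_pop.append(trial if fitness(trial) <= fitness(target) else target)
    return new_pop
```

Because each member is only replaced by a fitter trial vector, the best cost in the population never worsens from one generation to the next.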

History of Evolutionary Computation


Particle Swarm Optimization (PSO)

Introduced by Eberhart and Kennedy in 1995.
Inspired by birds flocking and fish schooling.
A population of particles (n-dimensional vectors) with position and velocity moves in random directions, searching the n-dimensional space.
The algorithm keeps track of the global best particle, which serves as the center of the swarm. As other particles become the best particle, they become the center of the swarm.
Very effective for a variety of optimization problems.

History of Evolutionary Computation


Swarm Intelligence

Improved by Eberhart and Kennedy in 2001.
Based on Multi-Agent Systems, where multiple agents within an internal system environment, such as a smart house, use cooperative control to work together and communicate much like a PSO.
A multi-agent system that can evolve, update, and learn from its swarm.
The intelligent system evolves and grows more intelligent as the swarm operates.

Genetic Algorithms

The most popular and most flexible evolutionary computation algorithm is the genetic algorithm.
Indeed, over the last thirty years, this entire class of evolutionary computation algorithms has come to be called Genetic Algorithms by most engineers, rather than its correct designation.
The only recent change is the phenomenal growth of Particle Swarm Optimization (PSO) and Swarm Intelligence (SI).
So we will focus on genetic algorithms and PSO/SI for the rest of the lecture.

Genetic Algorithms: Optimization

In the next lecture, we will show a robot navigation algorithm using genetic algorithms.
For this application, genetic algorithms provide an excellent adaptive search method but do not always provide an optimal solution.
Genetic algorithms do not fit the strict definition of an optimal search method.
They provide excellent optimized solutions; however, there may be other solutions that are better.
In particular, a genetic algorithm can end up in a local optimum rather than the global optimum for a particular search.

Representation

The knowledge representation for most of Holland's work was bit strings. However, since Holland, many different representation schemes have been used.
For genetic algorithms, Holland's bit strings have proven to be the most popular and widely used chromosomal representation.
Many industrial engineering applications use other representations, such as ordered lists (for bin packing), embedded lists (for factory scheduling problems), and variable-element lists (for semiconductor layout).

Basic Flow of a Genetic Algorithm

1.) A pool of random potential solutions is created that serves as the first generation. This pool should be diverse.
2.) The fitness of each member is computed. Fitness is a measure of how well each individual solves the problem. The higher the fitness (or, in terms of cost, the lower), the better the solution relative to other members.
3.) Some selection process chooses members, typically either roulette wheel or elitist. In elitist selection, the more fit members are selected, forcing the less fit to die off.
4.) We now have a number of members that have the right to propagate their genetic material to the next generation. The next step is to recombine these members to form the next generation. Recombination uses genetic operators.
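The four steps above can be sketched as a minimal elitist genetic algorithm. This is a toy sketch under assumed settings (a one-max fitness that counts 1 bits, single-point crossover, and a 10% mutation chance), not a prescribed implementation.

```python
import random

def run_ga(pop_size=20, length=16, generations=60, seed=1):
    """Minimal elitist GA over bit strings, following the flow above:
    random pool -> fitness -> elitist selection -> recombination."""
    rng = random.Random(seed)
    # 1.) Diverse pool of random bit-string members.
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    fitness = lambda m: sum(m)  # toy fitness: count of 1 bits
    for _ in range(generations):
        # 2.) Rank members by fitness; 3.) elitist selection keeps the top half.
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        # 4.) Recombine parents with single-point crossover plus rare mutation.
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randrange(1, length)
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.1:          # occasional mutation for diversity
                i = rng.randrange(length)
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

Because the best member always survives elitist selection, the best fitness in the pool never decreases across generations.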

Genetic Operators

The fourth component of genetic algorithms is genetic operators.
We have seen the crossover operator. Using crossover, the parents are combined by picking a crossover point and then swapping the tails. Double-point crossover swaps the genetic material from one point to the next.
For bit strings, the inversion operator inverts the bits in a range of the chromosome; every bit in the range is flipped.
For bit strings, the mutation operator simply mutates, or flips, a chosen bit of the chromosome.
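These bit-string operators are small enough to write out directly. The sketch below follows the descriptions above (tail-swapping crossover, range-flip inversion, single-bit mutation); the function names and index conventions are illustrative choices.

```python
def crossover(p1, p2, point):
    """Single-point crossover: swap the tails after `point`."""
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def double_point_crossover(p1, p2, a, b):
    """Double-point crossover: swap the genetic material between a and b."""
    return p1[:a] + p2[a:b] + p1[b:], p2[:a] + p1[a:b] + p2[b:]

def inversion(bits, a, b):
    """Inversion as described above: flip every bit in the range [a, b)."""
    return bits[:a] + [1 - x for x in bits[a:b]] + bits[b:]

def mutate(bits, i):
    """Point mutation: flip the single bit at index i."""
    out = list(bits)
    out[i] = 1 - out[i]
    return out
```

For example, crossing [0, 0, 0, 0] with [1, 1, 1, 1] at point 2 yields [0, 0, 1, 1] and [1, 1, 0, 0].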

Basic Flow: Termination

The most obvious termination strategy is that the algorithm arrives at the solution and terminates the process.
If the population lacks diversity and members of the population become similar, there is a loss in the ability to search. We terminate if the average fitness is greater than 99% of the maximum fitness, indicating no diversity. This means the algorithm is stuck in a local maximum and needs to be restarted.

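The diversity-based stop test above reduces to a one-line comparison. A minimal sketch, with the 99% threshold exposed as a parameter:

```python
def should_terminate(fitnesses, threshold=0.99):
    """Diversity-based stop test from the slide: terminate (and restart)
    when the average fitness exceeds 99% of the maximum fitness."""
    avg = sum(fitnesses) / len(fitnesses)
    return avg > threshold * max(fitnesses)
```

A nearly converged population such as [10.0, 10.0, 9.95] triggers the test, while a diverse one such as [10.0, 5.0, 1.0] does not.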

Genetic Algorithms: Diversity

The mutation and inversion operators ensure that diversity is maintained in the population.
With elitist selection, using only the crossover operation and picking only the best parents each time, diversity can be lost. So the algorithm periodically uses some form of mutation or inversion to maintain diversity.
This frequency is called the mutation rate. An example mutation rate would be one mutation after every five crossover operations.
If 24 operations (four sets of five crossovers plus one mutation) do not restore diversity, then an all-gene mutation can be performed.

Genetic Algorithms: Example

The Traveling Salesman Problem (TSP)
We are a traveling salesman, and we are given a list of cities (companies within cities) that we need to travel to. The goal is to find the shortest possible travel route, thus minimizing the costs associated with travel.
Genetic Algorithms have been found to be very successful at very quickly determining an excellent, if not optimal, route.
For twenty cities, a genetic algorithm can do this in fewer than 100 iterations. Our example will use seven cities, A-G.

The cost to minimize is the overall Euclidean distance of the route.
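The Euclidean cost function can be sketched directly. The coordinate data here is hypothetical (the slides do not give city positions); the tour is treated as closed, returning to the starting city.

```python
import math

def tour_length(route, coords):
    """Total Euclidean length of a closed tour.

    route  : ordering of city names, e.g. ['A', 'B', ...]
    coords : dict mapping city name -> (x, y) position (hypothetical data)
    """
    total = 0.0
    for i in range(len(route)):
        x1, y1 = coords[route[i]]
        x2, y2 = coords[route[(i + 1) % len(route)]]  # wrap back to the start
        total += math.hypot(x2 - x1, y2 - y1)
    return total
```

For four cities on a unit square visited in order, the closed tour length is 4.0.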

TSP: Sample Crossover, Mutation


We need to create two new operators: Swap Mutation and Ordered Crossover.
Swap Mutation
Assume two parents:
[B, G, C, A, F, E, D] and [F, A, E, G, D, C, B]
Suppose our random selection process says to swap F with B in the first parent and G with C in the second parent. The two children would be:
[F, G, C, A, B, E, D] and [F, A, E, C, D, G, B]
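Swap mutation is a direct position exchange. A minimal sketch (the function name is illustrative; in practice the two cities would be chosen at random):

```python
def swap_mutation(route, city1, city2):
    """Swap the positions of two chosen cities in the route."""
    out = list(route)
    i, j = out.index(city1), out.index(city2)
    out[i], out[j] = out[j], out[i]
    return out
```

Applied to the parents above, swapping F with B and G with C reproduces the two children shown.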

TSP: Ordered Crossover


For the Ordered Crossover Operator, using the same parents as before:
[B, G, C, A, F, E, D] and [F, A, E, G, D, C, B]
Ordered crossover says to swap the last three cities. This gives two partial children:
[#, #, #, #, D, C, B] and [#, #, #, #, F, E, D]
Starting from the cut point of the first parent and filling in the cities not already used gives [F, E, G, A, D, C, B]; the second child, filled in starting from the cut point of the second parent, is [C, B, A, G, F, E, D].
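The fill-in rule above can be sketched as follows. This is one way to code the procedure as described (copy the other parent's tail, then scan the first parent from the cut point, skipping cities already used); the function name and fixed cut point are illustrative.

```python
def ordered_crossover(p1, p2, cut):
    """One child of ordered crossover, as described above.

    The child keeps p2's tail after `cut`, then fills its head by
    scanning p1 starting at the cut point, skipping used cities.
    """
    n = len(p1)
    child = [None] * n
    child[cut:] = p2[cut:]                 # tail copied from the other parent
    used = set(child[cut:])
    # Scan p1 from the cut point, wrapping around, keeping unused cities.
    fill = [c for c in p1[cut:] + p1[:cut] if c not in used]
    child[:cut] = fill
    return child
```

Calling it with the parents swapped produces the second child; both results match the worked example above.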

Genetic Algorithms: TSP


Once the knowledge representation is defined and the crossover and mutation operators are defined, it is relatively simple to plug them into the earlier genetic algorithm and solve the problem very quickly and very accurately.
We will also apply TSP to Particle Swarm Optimization next.

Particle Swarm Optimization


Population-based technique that uses particle swarms to wander the search space.
Particles swarm around the most fit particle.

Particle Swarm Optimization: PSO


PSO simulates a collection of particles that swarm with one another in N-dimensional space, where N is the size of the solution vector.
A set of equations is used to implement a flocking behavior, which gives the particles some amount of freedom to search the N-dimensional space.
A particle within the swarm exists as an object that contains the N-dimensional position vector, a velocity vector, the fitness of the current position, and a vector representing the personal best position found so far.
Each iteration updates the best positions and velocities for the swarm and its particles.

Flow of PSO

1.) A population of random vectors and velocities is created. The fitness of each particle is evaluated, each personal best vector is set, and the global best particle is determined.
2.) If the program does not terminate, the velocity and position of each particle are updated. The fitness of each particle is re-evaluated, and the personal best vectors and global best particle are updated. This process continues until the program terminates.
A very quick, efficient, and accurate method, similar in spirit to Genetic Algorithms.
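The two steps above can be sketched as a minimal PSO for minimization. The slides do not fix the update equations or coefficients, so this sketch assumes the widely used inertia-weight form, with velocity terms pulling each particle toward its personal best and the global best; w, c1, and c2 are assumed, commonly cited values.

```python
import random

def pso(fitness, dim=2, n_particles=15, iters=100, seed=2,
        w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO for minimization, following the flow above.
    w (inertia), c1, c2 (attraction coefficients) are assumed values."""
    rng = random.Random(seed)
    # 1.) Random positions and velocities; personal and global bests.
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=fitness)
    # 2.) Update velocities and positions each iteration until termination.
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])  # toward personal best
                             + c2 * r2 * (gbest[d] - pos[i][d]))    # toward global best
                pos[i][d] += vel[i][d]
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=fitness)
    return gbest
```

Since the global best is taken over the personal bests, its fitness never worsens as the swarm operates.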

PSO: TSP

The Travelling Salesman Problem has not been easy to solve using PSO.
Many tried in the late 1990s and early 2000s, achieving either good results with slow times compared to Genetic Algorithms, or fast times with inaccurate results caught in a local minimum.
Many recent papers have overcome these problems. They all use an algorithm similar to Jones's in Artificial Intelligence, with variations that improve the communication and movement of the individuals of the swarm.

Swarm Intelligence

Based on Multi-Agent Systems, where multiple agents within an internal system environment, such as a smart house, use cooperative control to work together and communicate much like a PSO.
A multi-agent system that can evolve, update, and learn from its swarm.
The intelligent system evolves and grows more intelligent as the swarm operates.
The combination of Cooperative Control and Swarm Intelligence makes this a very exciting research field for Intelligent Systems.