
Introduction

 The idea behind soft computing is to model the cognitive behavior of the human mind.

 Soft computing, introduced by Zadeh, is an innovative approach to constructing computationally intelligent hybrid systems.

 Soft computing is an association of computing methodologies which collectively provide a foundation for the conception and deployment of intelligent systems [1].

 Soft Computing is an approach for constructing systems which:
• are computationally intelligent and possess human-like expertise in a particular domain;
• adapt to the changing environment and learn to do better.
Some Domains of Intelligence in Biological Systems (Computational Perspective)
• Competition
• Evolution
• Reproduction
• Learning
• Swarming
• Communication
Soft Computing Models
 Components of SC include:
• Fuzzy Logic (FL)
• Artificial Neural Network (ANN)
• Evolutionary Computation (EC) - based on the origin of the species.
• Evolutionary Algorithm (EA): copying ideas from nature.
• Swarm algorithm or Swarm Intelligence (SI): a group of species or animals exhibiting collective behavior.
• Hybrid Systems

 Soft computing models employ different techniques like ANN, FL, EAs and SI in a complementary rather than a competitive way.

 Integrated architectures like Neuro-Fuzzy, ANN-EA combinations and ANN-SI techniques are some of the hybrid approaches used for performance improvement.
Biological Basis: Neural Networks & EC
 The complex adaptive biological structure of the human brain facilitates the performance of complex tasks.

 The processing element in an ANN is generally considered to be very roughly analogous to a biological neuron.

 Neural networks have ties with genetics, the "branch of biology that deals with the heredity and variation of organisms" [3].

 Chromosomes are structures in cell bodies that transmit genetic information; individual patterns in an EA correspond to chromosomes in biological systems.

 Just as the genotype completely specifies an organism, in EC a structure specifies a system.

 Swarm Intelligence, the newest member of EC, is a family of nature-inspired algorithms which mimic insects' problem-solving abilities [4].
Artificial Neural Network
 The basic concept of an artificial neural network (ANN) is derived from an analogy with the biological nervous system of the human brain.

 An ANN is a massively parallel system composed of many neurons, where synapses are variable weights specifying the connections between individual neurons [2].

 The neurons continuously evaluate their output by looking at their inputs, calculating the weighted sum and comparing it to a threshold to decide if they should fire.

 The learning algorithm, given the inputs, adjusts the weights to produce the required output.

 ANNs are algorithms for optimization and learning based loosely on concepts inspired by the nature of the brain.
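A minimal sketch of the weighted-sum-and-threshold behavior described above; the inputs, weights and threshold values are illustrative assumptions, not taken from the slides:

# Minimal sketch of a single artificial neuron: the weighted sum of the inputs
# is compared against a threshold to decide whether the neuron "fires".
# The weights, inputs and threshold are illustrative values only.

def neuron_output(inputs, weights, threshold):
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0

if __name__ == "__main__":
    inputs = [1.0, 0.5]        # example stimuli
    weights = [0.4, 0.9]       # example synaptic weights
    print(neuron_output(inputs, weights, threshold=0.8))  # -> 1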
Two-layer feed forward network
 Just as individuals learn differently, neural networks have different learning rules. Learning may be Supervised or Unsupervised.

 Supervised learning requires that when the input stimuli are applied, the desired output is known a priori.

 The most popular algorithm for adjusting weights during the training phase is called back propagation of error.
• A feed-forward neural network is the simplest form of an ANN.

Figure: A 2-3-1 feed-forward network with input layer (X1, X2), hidden layer (H1, H2, H3) and output layer (O), connected by weights Wi,j.
Error correction learning
 Error correction learning, used with supervised learning, is the technique of comparing the system output to the desired output value and using that error to direct the training.

 It is formulated as the minimization of an error function, such as the total mean square error between the actual output and the desired output, summed over all available data.

 The most popular learning algorithm for use with error correction learning is the back propagation algorithm (BP).

 The delta rule is often utilized by the most common class of ANNs, called back propagation neural networks.
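A minimal sketch of the delta rule mentioned above, applied to a single linear output unit; the learning rate, training data and initial weights are illustrative assumptions:

# Delta-rule sketch for one linear output unit:
# w <- w + eta * (desired - actual) * input, applied per training example.
# Learning rate, training data and initial weights are illustrative assumptions.

def delta_rule_epoch(weights, samples, eta=0.1):
    for inputs, desired in samples:
        actual = sum(x * w for x, w in zip(inputs, weights))
        error = desired - actual
        weights = [w + eta * error * x for w, x in zip(weights, inputs)]
    return weights

samples = [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.0)]  # (input, desired output)
w = [0.0, 0.0]
for _ in range(20):               # repeat until the error is small
    w = delta_rule_epoch(w, samples)
print(w)                          # weights approach [1.0, 0.0]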
Gradient Descent Optimization
 A gradient descent based optimization algorithm such as BP can be used to adjust connection weights in the ANN iteratively in order to minimize the error.

 BP is a variation on gradient search; the key to BP is a method for calculating the gradient of the error with respect to the weights for a given input by propagating the error backwards through the network.

 When a neural network is initially presented with a pattern, it makes a random guess as to what it might be.

 It then sees how far its answer was from the actual one and makes an appropriate adjustment to its connection weights [2].

Figure: Supervised training loop with the input and the desired output.
Limitations
 Neural networks are used for solving a variety of problems, but they still have some limitations. One of the most common is associated with neural network training.

 The BP learning algorithm cannot guarantee an optimal solution.

 In real-world applications, the BP algorithm might converge to a set of suboptimal weights from which it cannot escape, so the neural network is often unable to find a desirable solution to the problem at hand.

 Another difficulty is related to selecting an optimal topology for the neural network.

 Evolutionary computation techniques are effective optimization techniques that can guide both weight optimization and topology selection [6].
Optimization
• The main goal of optimization is to find values of the variables that minimize or maximize the objective function.

• The main components of an optimization problem are the objective function, which we want to minimize or maximize, and the design variables.

• Modelling is the process of identifying the objective function and the variables.

• Formulating a good mathematical model of the optimization problem requires algorithms with robustness, efficiency and accuracy.

• Optimization algorithms are classified into:
– Local Optimization
– Global Optimization
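To make the local/global distinction concrete, here is a small sketch of a purely local method (gradient descent) on a function with both a local and a global minimum; the function, step size and starting points are illustrative assumptions:

# Illustrative sketch: gradient descent on f(x) = x**4 - 3*x**2 + x,
# a function with one local and one global minimum. Depending on the
# starting point, a purely local method settles in different minima.
# The function, step size and starting points are assumptions for illustration.

def grad(x):                       # derivative of x**4 - 3*x**2 + x
    return 4 * x**3 - 6 * x + 1

def local_minimize(x, step=0.01, iters=2000):
    for _ in range(iters):
        x -= step * grad(x)
    return x

print(local_minimize(1.0))         # converges to the local minimum near x ≈ 1.13
print(local_minimize(-1.0))        # converges to the global minimum near x ≈ -1.30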
Issues in evolving Neural Networks

 EC methodologies have mainly been applied to four attributes of neural networks:
 network connection weights and network architecture;
 network learning algorithms and the evolution of inputs.

 The architecture of the network often determines the success or failure of the application; usually the network architecture is decided by trial and error.

 There is a great need for a method of automatically designing the architecture for a particular application.

 Both the network architecture and the connection weights need to be adapted simultaneously or sequentially.

 Thus EC methodologies have been applied to evolve the network weights and the network topology.
Evolutionary Computation -EC
 Evolutionary Computation (EC) refers to computer-based problem-solving systems that use computational models of the evolutionary process.

 EC is the study of computational systems which use ideas and inspiration from natural evolution and other biological systems.

 EC is based on biological metaphors [4].

 The two biological metaphors, which are the two important classes of population-based optimization algorithms, are:
• Evolutionary algorithms
• Swarm algorithms

 EC techniques are used in optimization, machine learning and automatic design.
Evolutionary Algorithms
 The GA is a specific class of EC that performs a stochastic search by using the basic principles of natural selection.

 GAs are algorithms for optimization and learning based loosely on several features of biological evolution [3].

 GAs are inspired by Darwin's theory of natural evolution.

 Formally introduced in the US in 1975 by John Holland, and referred to as the simple genetic algorithm (SGA).

 Used to optimize a given objective function, where the parameters are encoded in something analogous to a gene.

 A GA applies biological principles in a computational algorithm to obtain the optimum solutions; it is a robust method for searching for the optimum solution to a complex problem.
Evolutionary Algorithms
 EAs are optimization methods based on an evolutionary metaphor that have proved effective in solving difficult problems.

 The four main processes in evolutionary algorithms are:
• Initialization
• Fitness evaluation
• Selection
• Generation of a new population

 After initialization, the population is evaluated and the stopping criteria are checked.

 If none of the stopping criteria is met, a new population is generated again and again and the process is repeated.
Framework of Genetic algorithm
1. t := 0;
2. Generate initial population P(t) at random;
3. Evaluate the fitness of each individual in P(t);
4. while (not termination condition) do
5.   Select parents Pa(t) from P(t) based on their fitness in P(t);
6.   Apply crossover to create offspring from parents: Pa(t) -> O(t);
7.   Apply mutation to the offspring: O(t) -> O(t);
8.   Evaluate the fitness of each individual in O(t);
9.   Select population P(t+1) from current offspring O(t) and parents P(t);
10.  t := t+1;
11. end-do
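A runnable sketch of this framework in Python, assuming a bit-string representation, roulette selection and an illustrative fitness function (counting ones); these details are assumptions, not part of the slides:

import random

# Sketch of the GA framework above for bit-string individuals.
# Fitness function, population size and rates are illustrative assumptions.

def fitness(ind):                      # example: count of ones
    return sum(ind)

def select(pop):                       # fitness-proportional (roulette) selection
    return random.choices(pop, weights=[fitness(i) + 1e-9 for i in pop], k=2)

def crossover(a, b):                   # single-point crossover
    p = random.randint(1, len(a) - 1)
    return a[:p] + b[p:], b[:p] + a[p:]

def mutate(ind, pm=0.01):              # bit-flip mutation
    return [1 - g if random.random() < pm else g for g in ind]

def ga(n_bits=20, pop_size=30, generations=50):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        offspring = []
        while len(offspring) < pop_size:
            a, b = select(pop)
            c1, c2 = crossover(a, b)
            offspring += [mutate(c1), mutate(c2)]
        # next population chosen from current offspring and parents
        pop = sorted(pop + offspring, key=fitness, reverse=True)[:pop_size]
    return max(pop, key=fitness)

print(ga())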
Genetic Algorithms (II)
Figure: The GA cycle. A population of individuals (alternative feasible solutions) is evaluated on fitness; differential reproduction selects individuals based on fitness into a mating pool of "fitter" individuals for subsequent mating; heredity selects individuals and exchanges characteristics to create new individuals; genetic variation arbitrarily changes some characteristics; the result forms the next generation of individuals.
Genetic Algorithms (IV)
Basic Tasks

 Generation of the initial population
 Evaluation
 Selection (Reproduction operation)
 Exchange characteristics to develop new individuals (Crossover operation)
 Arbitrarily modify characteristics in new individuals (Mutation operation)
Genetic Algorithms (V)
Reproduction / Selection Operator

The purpose is to bias the mating pool (those who can pass on their traits to the next generation) with fitter individuals.

Fitness-proportional (roulette wheel) scheme:
• Assign p as the probability of choosing an individual for the mating pool; p is proportional to the fitness.
• Choose an individual with probability p and place it in the mating pool.
• Continue till the mating pool size is the same as the initial population's.

Tournament-style alternative:
• Choose n individuals randomly and pick the one with the highest fitness.
• Place n copies of this individual in the mating pool.
• Choose n different individuals and repeat the process till all in the original population have been chosen.
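A small sketch of the fitness-proportional (roulette wheel) scheme described above; the example fitness values are the x² values used in the worked example later in these slides:

import random

# Roulette wheel (fitness-proportional) selection sketch.
# Each individual is chosen with probability fitness / total fitness.

def roulette_select(population, fitnesses):
    total = sum(fitnesses)
    r = random.uniform(0, total)
    cumulative = 0.0
    for individual, f in zip(population, fitnesses):
        cumulative += f
        if r <= cumulative:
            return individual
    return population[-1]           # guard against floating-point round-off

pop = ["s1", "s2", "s3", "s4"]
fit = [144, 625, 25, 361]           # the x**2 fitness values of the later example
mating_pool = [roulette_select(pop, fit) for _ in range(len(pop))]
print(mating_pool)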
Genetic Algorithms (VI)
Crossover operator

Parents:    1001101   1100111
Offspring:  1001111   1100101   (single-point crossover after the fifth bit)
Genetic Algorithms (VII)
Mutation

Before mutation: 1001101
After mutation:  1000101   (the fourth bit is flipped)
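A sketch of the two operators illustrated above, operating on bit strings; the crossover point and the mutated positions are chosen at random here:

import random

# Single-point crossover and bit-flip mutation on bit strings,
# as illustrated by the 7-bit examples above.

def single_point_crossover(p1, p2):
    point = random.randint(1, len(p1) - 1)
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def bit_flip_mutation(chrom, pm=0.1):
    return "".join(b if random.random() > pm else str(1 - int(b)) for b in chrom)

c1, c2 = single_point_crossover("1001101", "1100111")
print(c1, c2)                       # e.g. 1001111 1100101 when the point is 5
print(bit_flip_mutation("1001101"))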
Components of a GA
A problem to solve, plus:
• Encoding technique: representation of individuals (gene, chromosome)
• Initialization procedure (creation)
• Evaluation function (environment)
• Selection of parents (reproduction)
• Genetic operators (mutation, recombination)
• Parameter settings (practice and art)


GA working principle
Start
→ Create initial random population (Population)
→ Evaluate fitness for each member of the population; store the best individuals (Fitness evaluation)
→ Create the mating pool (Selection)
→ Create the next generation using crossover (Crossover)
→ Perform mutation (Mutation)
→ Optimal or good solution found? If NO, return to the fitness evaluation step; if YES, Stop.
Basic Genetic Algorithm
 [Start] The algorithm begins with a set of initial solutions (represented by a set of chromosomes) called the population.

 [Fitness] Evaluate the fitness f(x) of each chromosome x in the population.

 Repeat until the terminating condition is satisfied:
• [Selection] Select two parent chromosomes from the population according to their fitness (the better the fitness, the bigger the chance of being selected).
• [Crossover] Cross over the parents to form new offspring (children). If no crossover is performed, the offspring are exact copies of the parents.
• [Mutation] Mutate the new offspring at selected position(s) in the chromosome.
• [Accepting] Generate the new population by placing the new offspring in it.

 Return the best solution in the current population.


An example after Goldberg
 Simple problem: maximize x² over {0, 1, …, 31}

 GA approach:
• Representation: binary code, e.g. 01101 ↔ 13
• Population size: 4
• One-point crossover, bitwise mutation
• Roulette wheel selection
• Random initialization
Initial population
 Encoding:
• Code the decision variable 'x' into a finite-length string. Using a five-bit unsigned integer, numbers between 0 (00000) and 31 (11111) can be represented.
• The objective function here is f(x) = x², which is to be maximized.

 Initial Population:
• An initial population of size 4 is randomly chosen: {12, 25, 5, 19}
• Then we obtain the decoded x values and binary codes for the initial population generated:

String #   x value   Binary code
1          12        01100
2          25        11001
3          5         00101
4          19        10011
Fitness evaluation
 Objective function:
• Calculate the fitness or objective function value for each individual.
• This is obtained by simply squaring the x value, since the given function is f(x) = x².

 Probability of selection:
Compute the probability of selection as follows:
• For string 1, fitness f(x1) = 144, and Σ f(xi) = 1155.
• The probability that string 1 is selected is 144/1155 = 0.1247 = 12.47%.
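The fitness values and selection probabilities of this example can be reproduced with a few lines of Python (a sketch of the arithmetic above):

# Reproducing the fitness and selection-probability arithmetic of the example.
population = [12, 25, 5, 19]
fitness = [x * x for x in population]            # [144, 625, 25, 361]
total = sum(fitness)                             # 1155
average = total / len(population)                # 288.75

for i, (x, f) in enumerate(zip(population, fitness), start=1):
    prob = f / total                             # selection probability
    expected = f / average                       # expected count under roulette selection
    print(f"string {i}: x={x:2d} f={f:3d} p={prob:.4f} expected={expected:.4f}")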


Roulette Selection
 Roulette selection: expected count
• The expected-count and actual-count method is used for roulette selection.
• The next step is to calculate the expected count.
• For string 1, expected count = fitness / average fitness = 144 / 288.75 = 0.4987.

 The expected count gives an idea of which strings can be selected for further processing in the mating pool.

 Roulette selection: actual count
• The actual count is obtained to select the individuals which will participate in the mating pool.
Roulette wheel Selection

 The Roulette wheel is formed with slices proportional to each string's selection probability:
• String 1 (x = 12): 12.47%
• String 2 (x = 25): 54.11%
• String 3 (x = 5):  2.16%
• String 4 (x = 19): 31.26%

For the maximum-fitness string (string 2, f = 625): share = 54.11%, expected count = 2.1645, actual count = 2.


Mating pool
 String 1 occupies 12.47% of the wheel, so there is a chance for it to occur at least once; hence its actual count may be taken as 1.

 With string 2 occupying 54.11% of the Roulette wheel, it has a fair chance of being selected more than once; thus its actual count can be taken as 2.

 On the other hand, string 3 has the least probability, 2.16%, so its chance of occurring in the next cycle is very poor. As a result, its actual count is 0.

 String 4, with 31.26%, has at least one chance of occurring when the Roulette wheel is spun; thus its actual count is 1.

 Based on the actual counts, the mating pool is formed as follows:

String #   x value   Mating pool
1          12        01100
2          25        11001
2          25        11001
4          19        10011
Crossover
 The crossover operation is performed to produce new offspring (children).

 The crossover probability is assumed to be 1.0.

 The crossover point is chosen at random, and based on that point single-point crossover is performed and new offspring codes are produced:

String #   x value   Mating pool   Crossover point   Offspring code   Offspring x value
1          12        0110|0        4                 0110|1           13
2          25        1100|1        4                 1100|0           24
2          25        11|001        2                 11|011           27
4          19        10|011        2                 10|001           17
Mutation
 The mutation operation is performed to produce new offspring after the crossover operation.

 We select the bit-flipping mutation operation, and new offspring are produced.

 The mutation probability is assumed to be 0.001.

String #   x value before   Offspring code     Mutation      Offspring code   x value after
           mutation         before mutation    chromosome    after mutation   mutation
1          13               01101              10000         11101            29
2          24               11000              00000         11000            24
3          27               11011              00000         11011            27
4          17               10001              00100         10101            21


Evaluation

 Once selection, crossover and mutation are performed, the new population is ready to be tested. The population and the corresponding fitness values are then ready for another round, producing another generation. More generations are produced until some stopping criterion is met.

 It can be noted how the maximal and average performance has improved in the new population, which now serves as the starting point for the next generation.
Example of Genetic Algorithm
Example
Testing GA
• It cannot be said with certainty that the genetic algorithm has found the global minimum value. Only by testing the algorithm with analytical benchmark functions can you confirm that the algorithm is correct.
• In other cases, you should compare the results with laboratory data, or find another way to make sure the answers are correct; for example, find a way to predict the order of magnitude of the optimal points.
• Looking at the fitness of the best-found solution so far can be a good sign, but if you have no idea of the global optimum, let the optimisation progress until the rate of improvement is negligible.
Advantages/ disadvantages of GA

• Advantages:
– Parallelism; the solution space explored is wide.
• Disadvantages:
– The problem of finding a suitable fitness function.
– The definition of the representation of the problem.
– Premature convergence can occur.
– Parameter sensitivity.
– An effective GA representation and a meaningful fitness evaluation are the keys to success in GA applications.
ANN-GA Hybrid
Some randomly generated chromosomes, each made of 8 genes representing 8 weights for a BPN.
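A minimal sketch of this chromosome representation (8 real-valued genes, one per BPN weight); the weight range and population size are illustrative assumptions:

import random

# Sketch: a chromosome is simply a vector of 8 real-valued genes,
# each gene standing for one BPN connection weight.
# The weight range [-1, 1] and population size are illustrative assumptions.

def random_chromosome(n_genes=8, low=-1.0, high=1.0):
    return [random.uniform(low, high) for _ in range(n_genes)]

population = [random_chromosome() for _ in range(10)]
print(population[0])   # e.g. [0.42, -0.87, ...] -> the 8 weights of one candidate BPN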
ANN-GA hybrid
Steps
Swarm Intelligence

• Swarm Intelligence has two fundamental concepts:

• Self-organization:
– Positive feedback: amplification
– Negative feedback: balancing
– Fluctuations
– Multiple interactions

• Division of labor:
– Simultaneous task performance by cooperating specialized individuals.
– Enables the swarm to respond to changed conditions in the search space.
Swarm algorithm

 Swarm algorithms mimic, on computer systems, emergent behaviors observed in social animals [4]:
• Bacteria
• Immune system
• Ants (ACO), honey bees (ABC)
• Birds (PSO) and other social animals

 Particle Swarm Optimization (PSO) and the Artificial Bee Colony (ABC) are widely used Swarm Intelligence based methods.

 PSO is inspired by the simulation of social behavior related to bird flocking, fish schooling and swarming theory.

 ABC is inspired by the simulation of the foraging behavior of real honey bees and swarming theory.
Particle Swarm Optimization
 Inspired by the simulation of social behavior related to bird flocking, fish schooling and swarming theory:
– steer toward the center
– match neighbors' velocity
– avoid collisions

 Suppose a group of birds is randomly searching for food in an area.
• There is only one piece of food in the area being searched.
• None of the birds knows where the food is, but they know how far away the food is in each iteration.
• So what is the best strategy to find the food? The effective one is to follow the bird which is nearest to the food.
Overview of basic PSO
 Particle swarm optimization (PSO) is a population-based stochastic optimization algorithm that searches for a solution to an optimization problem in a search space.

 It was developed by Eberhart and Kennedy in 1995, inspired by the social behavior of bird flocking and fish schooling.

 How can birds or fish exhibit such coordinated collective behavior?


PSO
 PSO is a robust stochastic optimization technique based on the movement and intelligence of swarms; it applies this concept of social interaction to problem solving.

 It uses a number of agents (particles) that constitute a swarm moving around in the search space looking for the best solution.

 Each particle is treated as a point in an N-dimensional space which adjusts its "flying" according to its own flying experience as well as the flying experience of other particles.

 Each particle keeps track of the coordinates in the solution space associated with the best solution (fitness) it has achieved so far, called pbest, and the best value obtained so far by any particle in the neighborhood of that particle, called gbest.
PSO
 In PSO, each single solution is a "bird" in the search space, called a "particle".
• All particles have fitness values which are evaluated by the fitness function to be optimized.

 All particles have velocities which direct their flight. The particles fly through the problem space by following the current optimum particles.

 The swarm is initialized with randomly generated particles and updated through generations in the search for optima.
• Each particle has a velocity and a position.
• The update for each particle uses two "best" values:
• pbest: the best solution (fitness) it has achieved so far (the fitness value is also stored).
• gbest: the best value obtained so far by any particle in the population.
PSO
 Each particle tries to modify its position using the following information:
• the current positions and the current velocities,
• the distance between the current position and pbest,
• the distance between the current position and gbest.

 The modification of the particle's position can be mathematically modeled by the following equation:

   v_i^(k+1) = w v_i^k + c1 rand1 × (pbest_i − s_i^k) + c2 rand2 × (gbest − s_i^k)

where
   v_i^k   : velocity of agent i at iteration k,
   w       : weighting function,
   c_j     : weighting factor or learning factor,
   rand    : uniformly distributed random number between 0 and 1,
   s_i^k   : current position of agent i at iteration k,
   pbest_i : pbest of agent i,
   gbest   : gbest of the group.
PSO algorithm
 Let the particle swarm move towards the best position in the search space, remembering each particle's best known position and the global (swarm's) best known position.

 Let
   xi – a specific particle
   vi – the particle's velocity
   pi – the particle's (personal) best known position
   g  – the swarm's (global) best known position

 Velocity update (inertia + cognitive + social terms):
   vi ← ω vi + φp rp (pi − xi) + φg rg (g − xi)

 Position update:
   xi ← xi + vi
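A compact sketch of these update rules, minimizing an illustrative objective (the sphere function); the swarm size and coefficient values are common defaults assumed here, not taken from the slides:

import random

# PSO sketch implementing the velocity/position updates above.
# Objective (sphere function), swarm size and coefficients are assumptions.

def objective(x):
    return sum(v * v for v in x)

def pso(dim=2, swarm=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]
    P = [x[:] for x in X]                       # personal best positions
    g = min(P, key=objective)[:]                # global best position
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                rp, rg = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * rp * (P[i][d] - X[i][d])
                           + c2 * rg * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            if objective(X[i]) < objective(P[i]):
                P[i] = X[i][:]
                if objective(P[i]) < objective(g):
                    g = P[i][:]
    return g

print(pso())   # approaches [0, 0]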
Example problem : PSO
ANN weight optimization process using PSO

• The PSO technique is used for the weight optimization of a feed-forward neural network structure. The network was pre-trained using PSO to arrive at the initial network weights.

• The searching process of the PSO-BP algorithm starts by initializing the starting positions and velocities of the particles. In this case, a particle is a group of the weights of the feed-forward neural network structure.

• There are 13 weights for the 2-3-1 feed-forward neural network node topology, and thus the particle consists of 13 real numbers, as shown in Fig. 1.
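A minimal sketch of how such a particle can be unpacked into the network parameters; the split into 6 input-hidden weights, 3 hidden-output weights and 4 biases is one plausible way of arriving at 13 numbers for a 2-3-1 topology and is assumed here for illustration:

# Sketch: unpack a 13-element particle into the parameters of a 2-3-1 network.
# Assumed layout: 2*3 input-to-hidden weights + 3*1 hidden-to-output weights
# + 3 hidden biases + 1 output bias = 13 values (one possible convention).

def unpack_particle(p):
    assert len(p) == 13
    w_ih = [p[0:3], p[3:6]]        # 2 inputs x 3 hidden units
    w_ho = p[6:9]                  # 3 hidden units -> 1 output
    b_h = p[9:12]                  # hidden biases
    b_o = p[12]                    # output bias
    return w_ih, w_ho, b_h, b_o

particle = [0.1] * 13              # illustrative particle
print(unpack_particle(particle))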
ANN weight optimization process using PSO
Methodology
• The present work integrates PSO with the back propagation algorithm to form a hybrid learning algorithm for training feed-forward neural networks.

• In the proposed work, PSO and ANN are integrated to increase the efficiency of finding the global optimum. The forecasting models were developed using historical groundwater level and rainfall data recorded from three observation wells located in Udupi district, India.

• The water level and rainfall data of the observation wells located in Brahmavar, Kundapur and Hebri taluks were used for the years 2000-2013.

• The groundwater in these regions mainly occurs in water table conditions. PSO is used to evolve the neural network weights.

• The particles are evaluated and updated until a new generation set of particles is generated. The Root Mean Square Error (RMSE) is used as the fitness function.

• This searching procedure is repeated to find the global best position in the search space. If the current fitness is better than the particle's best so far, the particle's best position is set to the current particle position; the global best is then the particle best with the minimum fitness value.

• Based on pBest, gBest and the current values, the updated velocity is computed.

• The particle position is updated based on the updated velocity. The process is repeated for several iterations until a minimum error is obtained.
• PSO can be applied to train the ANN, and this process is iterated until we get a minimum error. Thus PSO is integrated with the ANN in order to search for the optimal weights of the network.

• Finally, the network is trained using the updated weights, and the trained network is used to forecast the groundwater level of the testing set.

• The analysis is performed for forecasting the groundwater levels for the different input combinations identified for all three well locations. Initially, nine years (2000-2008) of data are used as the training set and the groundwater level is forecast for 2009.

• A comparison was made between the values predicted using the BP and the hybrid ANN-PSO algorithms. The forecast groundwater levels using the ANN and ANN-PSO models during testing for the wells of the study area are shown graphically in Fig. 3 to Fig. 8.
Artificial Bee Colony
• The ABC algorithm is one of the most recently introduced swarm-based optimization algorithms, proposed by Karaboga (2005).

• ABC simulates the intelligent foraging behavior of a honeybee swarm.

• It is based on observing the behavior of honey bees in finding nectar and sharing the information about food sources with the bees in the hive.

• Observations and studies of honey bee behavior resulted in a new generation of optimization algorithms called the "Artificial Bee Colony".

• Karaboga described the Artificial Bee Colony (ABC) algorithm, based on the foraging behavior of honey bees, for numerical optimization problems.
Behavior of Honey Bee Swarm

Three essential components of forage selection:

• Food Sources: The value of a food source depends on many factors, such as its proximity to the nest, the richness or concentration of its energy, and the ease of extracting this energy.

• Employed Foragers: Associated with a particular food source which they are currently exploiting, or at which they are "employed". They carry with them information about this particular source, its distance and direction from the nest and the profitability of the source, and share this information with a certain probability.

• Unemployed Foragers: Continually on the lookout for a food source to exploit. There are two types of unemployed foragers: scouts, who search the environment surrounding the nest for new food sources, and onlookers, who wait in the nest and establish a food source through the information shared by employed foragers.
Exchange of Information among bees

• The model defines two leading modes of behavior:
– recruitment to a nectar source
– abandonment of a source.

• The exchange of information among bees is the most important occurrence in the formation of collective knowledge.

• The most important part of the hive with respect to exchanging information is the dancing area.

• Communication among bees related to the quality of food sources takes place in the dancing area, and this dance is called a waggle dance.
ABC
• Employed foragers share their information with a probability proportional to the profitability of the food source, and the sharing of this information through waggle dancing is longer in duration for more profitable sources.

• An onlooker on the dance floor can watch numerous dances and decides to employ herself at the most profitable source.

• The bees evaluate the different patches according to the quality of the food and the amount of energy usage.

• Bees communicate through the waggle dance, which contains information about:
– the direction of the flower patch (angle between the Sun and the patch)
– the distance from the hive (duration of the dance)
– the quality rating (frequency of the dance)

• Thus ABC is developed based on observing the behaviors of real bees in finding nectar and sharing the information about food sources with the bees in the hive.
Bees in Nature
• The colony contains 3 groups of bees:
• The employed bees (50%):
– An employed bee stays on a food source and keeps the neighborhood of the source in its memory.
• The onlooker bees (50%):
– An onlooker gets information about food sources from the employed bees in the hive and selects one of the food sources to gather nectar from.
• The scouts (5-10%):
– A scout is responsible for finding new food (nectar) sources.
• The employed bee whose food source has been exhausted becomes a scout. Scouts are the colony's explorers.
• Number of employed bees = number of food sources.
• Food source position = possible solution to the problem.
• The amount of nectar of a food source = quality of the solution.
• There is a greater probability of onlookers choosing more profitable sources, since more information is circulated about them.
Artificial Bee Colony Algorithm
• Simulates the behavior of real bees for solving multidimensional and multimodal optimization problems.

• The first half of the colony consists of the employed artificial bees and the second half comprises the onlookers.

• The number of employed bees is equal to the number of food sources around the hive.

• The employed bee whose food source has been exhausted by the bees becomes a scout.
Components of Honey bee swarm
ABC Fitness Evaluation
Employed Bee Phase
Onlooker bee phase
Employed Bee Phase Implementation
Evaluation & soln generation
Pseudocode for Employed bee phase
Generation and selection
Condition for Onlooker bee phase
Onlooker bee phase
Onlooker Bee phase
Pseudocode for OLB phase
Limit
Scout phase
Pseudocode SBP
Selection
Complete pseudocode ABC
Framework of ABC Algorithm
ABC algorithm
• Each cycle of the search consists of three steps:
– moving the employed and onlooker bees onto the food sources,
– calculating their nectar amounts,
– determining the scout bees and directing them onto possible food sources.

• A food source position represents a possible solution to the problem to be optimized.

• The amount of nectar of a food source corresponds to the quality of the solution.

• Onlookers are placed on the food sources by using a probability-based selection process.

• As the nectar amount of a food source increases, the probability with which the food source is preferred by onlookers increases too.

• The scouts are characterized by low search costs and a low average food source quality. One bee is selected as the scout bee.

• The selection is controlled by a control parameter called the "limit".

• If a solution representing a food source is not improved within a predetermined number of trials, then that food source is abandoned and its employed bee is converted to a scout.

• Control parameters of the ABC algorithm are:
– swarm size
– limit
– number of onlookers: 50% of the swarm
– number of employed bees: 50% of the swarm
– number of scouts: 1
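A compact sketch of one ABC cycle (employed, onlooker and scout phases) on an illustrative objective; the neighbor-generation rule and parameter values follow the usual textbook description and are assumptions here:

import random

# ABC sketch: employed, onlooker and scout phases over a set of food sources.
# Objective (sphere), bounds, colony size and 'limit' are illustrative assumptions.

def objective(x):
    return sum(v * v for v in x)

def fitness(x):
    return 1.0 / (1.0 + objective(x))            # higher is better

def neighbor(x, others, lo=-5.0, hi=5.0):
    k = random.choice(others)                    # a different food source
    d = random.randrange(len(x))                 # one dimension to perturb
    v = x[:]
    v[d] = min(hi, max(lo, x[d] + random.uniform(-1, 1) * (x[d] - k[d])))
    return v

def abc(dim=2, n_sources=10, limit=20, cycles=200):
    foods = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_sources)]
    trials = [0] * n_sources
    best = min(foods, key=objective)
    for _ in range(cycles):
        # employed bee phase: one neighbor per food source, greedy selection
        for i in range(n_sources):
            cand = neighbor(foods[i], foods[:i] + foods[i + 1:])
            if fitness(cand) > fitness(foods[i]):
                foods[i], trials[i] = cand, 0
            else:
                trials[i] += 1
        # onlooker bee phase: sources chosen with probability proportional to fitness
        probs = [fitness(f) for f in foods]
        for _ in range(n_sources):
            i = random.choices(range(n_sources), weights=probs)[0]
            cand = neighbor(foods[i], foods[:i] + foods[i + 1:])
            if fitness(cand) > fitness(foods[i]):
                foods[i], trials[i] = cand, 0
            else:
                trials[i] += 1
        # scout phase: abandon sources whose trial counter exceeded the limit
        for i in range(n_sources):
            if trials[i] > limit:
                foods[i] = [random.uniform(-5, 5) for _ in range(dim)]
                trials[i] = 0
        best = min(foods + [best], key=objective)
    return best

print(abc())    # approaches [0, 0]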
Flow chart of ABC
1. Initialise a population of n scout bees.
2. Evaluate the fitness of the population.
3. Select m sites for neighbourhood search.
4. Neighbourhood search: determine the size of the neighbourhood (patch size ngh).
5. Recruit bees for the selected sites (more bees for the best e sites).
6. Select the fittest bee from each site.
7. Assign the (n - m) remaining bees to random search.
8. Form the new population of scout bees and repeat from step 2.


Movement of the Onlookers
Example: ABC
Hybrid approaches

• ABC is good at exploration but poor at exploitation.

• There are some studies touching on the hybridization of the PSO and ABC algorithms; here the hybridization of the techniques is realized based on the needs of the PSO algorithm.

• Particle Swarm Optimization has a handicap: the absence of regeneration of ineffective particles that cannot improve their Pbest values. On the other hand, the ABC algorithm contains a scout bee phase which eliminates this handicap. For this reason, the scout bee phase is added to Standard PSO to upgrade its performance.

• The Standard PSO algorithm does not contain a control parameter to regenerate insufficient particles, i.e. the particles that cannot improve their Pbest value.

• Particles are updated without any diversity control in Standard PSO, and their adequacy is not controlled. It is obvious that PSO needs a control parameter to improve its convergence capability, but this parameter must not increase its convergence time significantly.

• Consequently, it is a reasonable idea to insert the scout bee phase into the Standard PSO algorithm. By adding the scout bee phase to PSO, ScPSO is obtained. In ScPSO, all processes (except the limit) are the same as in the PSO algorithm.
Pseudocode for ScPSO
Pseudocode of ScPSO
- Initialize all particles within the user-defined boundaries
  (the first best position (Pbest) values are equal to the positions of the particles)

- Define a limit value within the range [1, (maximum iteration number - 1)]

While (iteration number < maximum iteration number)

  - Calculate the fitness according to the cost function for all particles

  - Update the best position values according to the fitness values for all particles

  - Choose the best Pbest vector as Gbest (the vector achieving the minimum cost)

  - Calculate new positions according to the following equations for all particles:

      Vi(t + 1) = ω Vi(t) + c1 r1 (Xpbest(i)(t) − Xi(t)) + c2 r2 (Xgbest(t) − Xi(t))
      Xi(t + 1) = Xi(t) + Vi(t + 1)

  - If a variable inertia weight is used, change it in accordance with the utilized rule

  - Check all particles which exceed the parameter 'limit', then regenerate the useless ones

End
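A sketch of the scout-regeneration step that ScPSO adds on top of the standard PSO loop; the stall counters, bounds and regeneration rule follow the pseudocode above, with illustrative values assumed:

import random

# Sketch of the extra ScPSO step: particles whose Pbest has not improved
# for more than 'limit' iterations are regenerated at random positions,
# mimicking the ABC scout bee phase. Bounds and 'limit' are assumptions.

def scout_phase(positions, velocities, stall_counters, limit, lo=-5.0, hi=5.0):
    for i, stalled in enumerate(stall_counters):
        if stalled > limit:
            positions[i] = [random.uniform(lo, hi) for _ in positions[i]]
            velocities[i] = [0.0] * len(velocities[i])
            stall_counters[i] = 0
    return positions, velocities, stall_counters

# Inside the PSO loop one would increment stall_counters[i] whenever particle i
# fails to improve its Pbest, and call scout_phase(...) once per iteration.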
Hybrid systems
• The combination of knowledge-based systems, neural networks and evolutionary computation forms the core of an emerging approach to building hybrid intelligent systems.

• Hybridizing genetic algorithms with other methods, such as gradient descent, helps to achieve a balance between robustness and efficiency.

• Start with the GA, a search heuristic which mimics evolution by taking a population of strings that encode possible solutions and combining them based on a fitness function to produce fitter individuals, and switch later to a gradient descent based method.

• There has been great interest in combining learning and evolution with ANNs in recent years.

• A GA-based ANN (ANN-GA) model, a hybrid integration of the ANN and GA algorithms, may give better performance by taking advantage of the characteristics of both.
Evolutionary neural networks
• The architecture of the network often determines the success or failure of the application.

• There is a great need for a method of automatically designing the architecture for a particular application.

• Evolutionary computation techniques are effective optimization techniques that can guide both weight optimization and topology selection.

• Genetic algorithms are well suited for this task.

• The basic idea behind evolving a suitable network architecture is to conduct a genetic search in a population of possible architectures.

• The GA performs a global search capable of effectively exploring a large search space, and has been used for optimally designing ANN parameters, including connection weights, ANN architectures and input selection.
Integrated Back Propagation based Genetic Algorithm
BP/GA algorithm
Start: Generate a random population of 'p' chromosomes (suitable solutions for the problem).
Extraction: Extract the weights for the input-hidden-output layers from each chromosome x.
Fitness: Evaluate the fitness f(x) of each chromosome x in the population by taking the reciprocal of the cumulative error obtained for each input set.
New population: Create a new population by repeating the following steps until the new population is complete:
– Selection: select two parent chromosomes from the population according to their fitness.
– Crossover: cross over the parents to form new offspring.
– Mutation: with a mutation probability, mutate the new offspring at each position in the chromosome.
– Acceptance: place the new offspring in the new population.
Repeat steps 3 to 5 until the stopping condition is met.
Test: return the best solution in the current population, using the test-set inputs and the weights.
Hybrid approach
 The original population is a set of N chromosomes which is generated randomly.

• The fitness of each chromosome is computed by a minimum optimization method.

• The training set of examples is presented to the network and the sum of squared errors is calculated.

• Fitness is given by the fitness value formula, which is based on error minimization: a simple function defined by the sum of squared errors. The smaller the sum, the fitter the chromosome.

• The GA attempts to find a set of weights that minimizes the sum of squared errors.
Hybrid approach

• The new population is given as input to the BPN to compute the fitness of each chromosome, followed by selection, crossover and mutation to generate the next population.

• This process is repeated till more or less all the chromosomes converge to the same fitness value.

• The weights represented by the chromosomes in the final converged population are the optimized connection weights of the BPN.
ANN weight optimization using GA

1. Encoding a set of weights in a chromosome:

 We must first choose a method of encoding the network's weights into a chromosome.

 The second step is to define a fitness function for evaluating the chromosome's performance. This function must estimate the performance of a given neural network. We can apply here a simple function defined by the sum of squared errors.

 The training set of examples is presented to the network, and the sum of squared errors is calculated. The smaller the sum, the fitter the chromosome. The genetic algorithm attempts to find a set of weights that minimises the sum of squared errors.

 The third step is to choose the genetic operators: crossover and mutation. A crossover operator takes two parent chromosomes and creates a single child with genetic material from both parents. Each gene in the child's chromosome is taken from the corresponding gene of a randomly selected parent.

 A mutation operator selects a gene in a chromosome and adds a small random value between -1 and 1 to each weight in this gene.
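A small sketch of the crossover and mutation operators just described, acting on real-valued weight chromosomes; the example parent vectors are illustrative:

import random

# Crossover and mutation for real-valued weight chromosomes, as described above:
# each child gene is copied from a randomly chosen parent, and mutation adds
# a small random value in [-1, 1] to a selected gene. Example values are assumptions.

def crossover(parent1, parent2):
    return [random.choice(pair) for pair in zip(parent1, parent2)]

def mutate(chromosome):
    child = chromosome[:]
    g = random.randrange(len(child))
    child[g] += random.uniform(-1.0, 1.0)
    return child

p1 = [0.9, -0.3, 0.4, 0.8, -0.7, 0.2, 0.6, -0.2]   # two parent weight vectors
p2 = [-0.8, 0.6, 0.3, -0.2, 0.9, -0.1, 0.2, 0.3]
child = mutate(crossover(p1, p2))
print(child)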
Crossover in weight optimisation

Mutation in weight optimisation
Encoding the network architecture

 The network architecture is usually decided by trial and error; there is a great need for a method of automatically designing the architecture for a particular application.

 Genetic algorithms may well be suited for this task.

 The basic idea behind evolving a suitable network architecture is to conduct a genetic search in a population of possible architectures.

 The connection topology of a neural network can be represented by a square connectivity matrix.

 We must choose a method of encoding a network's architecture into a chromosome.
Encoding of the network topology
 Each entry in the matrix defines the type of connection from one neuron (column) to another (row), where 0 means no connection and 1 denotes a connection whose weight can be changed through learning.

 To transform the connectivity matrix into a chromosome, we need only string the rows of the matrix together.
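A minimal sketch of this row-concatenation encoding; the example connectivity matrix is illustrative:

# Encode a connectivity matrix into a chromosome by stringing its rows together,
# and decode it back. The 3x3 example matrix is illustrative only.

def matrix_to_chromosome(matrix):
    return [bit for row in matrix for bit in row]

def chromosome_to_matrix(chrom, n):
    return [chrom[i * n:(i + 1) * n] for i in range(n)]

connectivity = [[0, 0, 0],
                [1, 0, 0],
                [1, 1, 0]]            # e.g. neuron 3 receives input from neurons 1 and 2
chromosome = matrix_to_chromosome(connectivity)
print(chromosome)                      # [0, 0, 0, 1, 0, 0, 1, 1, 0]
assert chromosome_to_matrix(chromosome, 3) == connectivity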
The cycle of evolving a neural network topology
Conclusion
 Soft computing is an association of computing methodologies for constructing computationally intelligent hybrid systems.

 The core techniques of SC are ANN, EC, Fuzzy Logic and probabilistic reasoning, and hybridization is one of the central aspects of this field.

 The BP algorithm cannot guarantee an optimal solution. EC techniques are effective optimization techniques that can guide both weight optimization and topology selection.

 Integrated architectures like Neuro-Fuzzy, ANN-GA combinations and ANN-SI techniques are some of the hybrid approaches used for performance improvement.

 A direct EC technique could fail to obtain an optimal solution. This clearly shows the need for hybridizing EC techniques with other optimization algorithms, machine learning techniques and heuristics.
Quiz link GA


https://nptel.ac.in/content/storage2/courses/downloads_new/103103164/noc20_ch19_assigment_5.pdf
