CHAPTER-3
3.1 OPTIMIZATION
There are millions of very simple processing elements, or neurons, in the brain,
linked together in a massively parallel manner. This is believed to be responsible for
human intelligence and discriminating power [35]. Neural Networks are developed to
try to achieve biological-system-like performance using a dense interconnection of
simple processing elements analogous to biological neurons. Neural Networks are
information driven rather than data driven [47]. Typically, there are at least two
layers, an input layer and an output layer. One of the most common networks is the
Back Propagation Network (BPN), which consists of an input layer and an output
layer with one or more intermediate hidden layers [41].
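The layered structure described above can be sketched as a minimal forward pass in Python. This is illustrative only: the layer sizes, the random weights, and the sigmoid activation are arbitrary choices, not the BPN training procedure itself.

```python
import math
import random

random.seed(0)

def forward(inputs, w_hidden, w_output):
    """One forward pass through a network with a single hidden layer.
    Sigmoid activations; weights are plain lists of lists."""
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in w_hidden]
    output = [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in w_output]
    return output

# 2 inputs -> 3 hidden neurons -> 1 output, with random placeholder weights
w_hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
w_output = [[random.uniform(-1, 1) for _ in range(3)]]
result = forward([0.5, -0.2], w_hidden, w_output)
```

Training a BPN would additionally propagate the output error backward to adjust these weights, which is omitted here.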
SOFT COMPUTING TECHNIQUES
[Figure: Classification of optimization methods. Optimization divides into continuous problems (linear, quadratic, non-linear) and combinatorial problems; combinatorial methods divide into exact methods (local and global) and approximate methods (heuristic and metaheuristic); metaheuristics divide into neighborhood-based algorithms (Tabu Search, Simulated Annealing) and population-based algorithms (Swarm Intelligence, including Ant Colony Optimization and Particle Swarm Optimization, and Evolutionary Computation).]
The performance of this network is very slow and it is often trapped in local
optima; there are other training algorithms, such as Delta-bar-delta (DBD), which
have faster convergence speed and try to avoid becoming stuck in local optima.
[Figure: Supervised learning in a neural network. The input is fed to the NN, including the connections (weights) between neurons; the output is compared with the target, and the weights are adjusted accordingly.]
Limitations
The major issues of concern today are the scalability problem, testing,
verification, and integration of neural network systems into the modern environment.
Neural network programs sometimes become unstable when applied to larger
problems. The defense, nuclear, and space industries are concerned about the issue of
testing and verification. The mathematical theories used to guarantee the performance
of an applied neural network need development. The solution for the time being may
be to train and test these intelligent systems much as we do for humans. There are
also more practical problems, such as the operational problem encountered when
attempting to simulate the parallelism of neural networks: since the majority of
neural networks are simulated on sequential machines, processing time requirements
rise very rapidly as the size of the problem expands. Networks also function as
"black boxes" whose rules of operation are completely unknown.
Genetic Algorithms (GAs) are a soft computing approach. GAs are general-purpose
search algorithms which use principles inspired by natural genetics to evolve
solutions to problems [48]. As one can guess, genetic algorithms are inspired by
Darwin's theory of evolution. The basic concepts were developed by Holland [8],
while the practicality of using the GA to solve complex problems was demonstrated
in [49, 51]. GAs have been successfully applied to a large number of scientific and
engineering problems, such as optimization, machine learning, automatic
programming, transportation problems, adaptive control, etc.
Mechanism of GA
[Figure: GA flowchart. Initialize population; selection (select individuals for mating); crossover (mate individuals); mutation (mutate offspring); insert offspring into population; if the stopping criteria are not satisfied, repeat from selection, otherwise return the answer.]
i) Representation of Chromosomes
ii) Fitness
iii) Selection
Parents are selected in pairs. There are various types of selection methods used
to select the chromosomes:
• Roulette Wheel Selection
This is a stochastic method in which individuals with higher fitness values have
a greater chance of being selected. It is the most common selection method used in
GAs. Roulette wheel selection is shown in Figure 3.6.
[Figure 3.6: Roulette wheel selection. Wheel segments for Chromosomes 1-5, sized in proportion to their fitness.]
iv) Crossover
After reproduction, simple crossover may proceed in two steps. First, members
of the newly reproduced strings in the mating pool are mated at random. Second, each
pair of chromosomes undergoes crossing over. There are several different types of
crossover, explained as follows:
• Single Point Crossover
A crossover position is randomly selected between one and (L-1), where L is the
length of the chromosome, and the two parents are crossed at that point. In this
crossover, the first child is identical to the first parent up to the crossing point and
identical to the second parent after the crossover point [62]. Figure 3.7 shows single
point crossover.
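Single point crossover as described can be sketched in Python; the all-ones and all-zeros example chromosomes are arbitrary illustrative values:

```python
import random

def single_point_crossover(parent1, parent2):
    """Cross two equal-length chromosomes at a random point in [1, L-1]."""
    assert len(parent1) == len(parent2)
    point = random.randint(1, len(parent1) - 1)
    # Child 1 copies parent 1 up to the point, parent 2 after it (and vice versa)
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2

c1, c2 = single_point_crossover([1, 1, 1, 1, 1], [0, 0, 0, 0, 0])
```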
• Uniform Crossover
[Figure: Uniform crossover. The two parents combine to produce a child, with each gene taken from one of the two parents.]
v) Mutation
The mutation probability Pm controls the rate at which new gene values are
introduced into the population. If it is too small, many gene values that would have
been useful are never tried out. If it is too high, too much random perturbation will
occur and the offspring will lose their resemblance to the parents [62].
The mutation operator plays a secondary role in a simple GA. Mutation rates are
small in natural populations, leading us to conclude that mutation is appropriately
considered a secondary mechanism of genetic algorithm adaptation.
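The selection, crossover, and mutation operators described above can be combined into a minimal GA sketch. The OneMax fitness function (counting 1-bits), the chromosome length, the population size, and the mutation rate Pm below are arbitrary illustrative choices:

```python
import random

random.seed(1)

def fitness(chrom):
    return sum(chrom)  # OneMax: number of 1-bits in the chromosome

def roulette_select(pop):
    """Fitness-proportionate (roulette wheel) selection."""
    total = sum(fitness(c) for c in pop)
    pick = random.uniform(0, total)
    running = 0.0
    for c in pop:
        running += fitness(c)
        if running >= pick:
            return c
    return pop[-1]

def mutate(chrom, pm=0.05):
    # Flip each gene independently with probability pm
    return [1 - g if random.random() < pm else g for g in chrom]

L, N = 20, 30
pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(N)]
for gen in range(60):
    new_pop = []
    while len(new_pop) < N:
        p1, p2 = roulette_select(pop), roulette_select(pop)
        point = random.randint(1, L - 1)          # single point crossover
        new_pop.append(mutate(p1[:point] + p2[point:]))
    pop = new_pop
best = max(pop, key=fitness)
```

With a small Pm as recommended above, offspring stay close to their parents while selection steadily raises the population's average fitness.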
Although GAs provide good solutions, they do not keep information about the
best solution found by the whole community. Particle Swarm Optimization (PSO)
extends the search by introducing memory: along with each particle's local best
solution, a global best solution is also stored in memory, so that particles are not
trapped in local optima but move toward the global optimum.
The PSO algorithm consists of just a few steps, which are repeated until some
stopping condition is met. The steps are as follows:
The first two steps are fairly trivial. Fitness evaluation is conducted by
supplying the candidate solution to the objective function. Individual and global best
fitnesses and positions are updated by comparing the newly evaluated fitnesses
against the previous individual and global best fitnesses, and replacing the best
fitnesses and positions as necessary. The velocity and position update step is
responsible for the optimization ability of the PSO algorithm.
[Figure: PSO flowchart. Begin; initialize the particles' positions and velocities; calculate the fitness of every particle; update the particles' positions and velocities; if the criteria are not satisfied, repeat from the fitness calculation; otherwise generate the report and stop.]
In the basic PSO technique, suppose that the search space is d-dimensional [21]:
1. Each member is called a particle, and each particle (the i-th particle) is represented
by a d-dimensional vector described as x_i = [x_i1, x_i2, ..., x_id].
2. The set of n particles in the swarm is called the population and is described as
pop = [x_1, x_2, ..., x_n].
3. The best previous position of each particle (the position giving the best fitness
value) is called the particle best and is described as pb_i = [pb_i1, pb_i2, ..., pb_id].
4. The best position among all of the particle best positions achieved so far is called
the global best and is described as gb = [gb_1, gb_2, ..., gb_d].
5. The rate of position change for each particle is called the particle velocity and is
described as V_i = [V_i1, V_i2, ..., V_id].
Here i = 1, 2, ..., n, where n is the size of the population, w is the inertia weight,
c1 and c2 are the acceleration constants, and r1 and r2 are two random values in the
range [0, 1].
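The velocity and position update rules referred to below as equations (1) and (2) did not survive extraction; in their standard form, consistent with the term-by-term description that follows, they are:

```latex
V_{id}^{k+1} = w V_{id}^{k} + c_1 r_1 \,(pb_{id}^{k} - x_{id}^{k}) + c_2 r_2 \,(gb_{d}^{k} - x_{id}^{k}) \tag{1}

x_{id}^{k+1} = x_{id}^{k} + V_{id}^{k+1} \tag{2}
```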
Each of the three terms of the velocity update equation (1) has a different role
in the PSO algorithm. The first term, wV^k_id, is the inertia component, responsible
for keeping the particle moving in the same direction it was originally heading. The
value of the inertia coefficient w is typically between 0.8 and 1.2; it can either
dampen the particle's inertia or accelerate the particle in its original direction.
Generally, lower values of the inertia coefficient speed up the convergence of the
swarm to optima, while higher values encourage exploration of the entire search
space.
The second term, c1r1(pb^k_id - x^k_id), called the cognitive component, acts
as the particle's memory, causing it to tend to return to the regions of the search
space in which it has experienced high individual fitness. The cognitive coefficient
c1 is usually close to 2 and affects the size of the step the particle takes toward its
individual best candidate solution.
The third term, c2r2(gb^k_d - x^k_id), called the social component, causes the
particle to move to the best region the swarm has found so far. The social coefficient
c2 is typically close to 2 and represents the size of the step the particle takes toward
the global best candidate solution gb^k the swarm has found up to that point.
The PSO algorithm used to automatically generate test cases for the given
program is defined as follows:
Step 1 (Initialization):
Set the iteration number k = 0. Randomly generate n particles x_i, i = 1, 2, ..., n,
where x_i = [x_i1, x_i2, ..., x_id], and their initial velocities V_i = [V_i1, V_i2, ...,
V_id]. Evaluate the evaluation function eval(x_i) for each particle using the fitness
function. If the constraints are satisfied, set the particle best pb_i = x_i and set the
particle best giving the best objective function value among all the particle bests as
the global best gb. Otherwise, repeat the initialization.
Step 2: Update iteration counter k=k+1.
Step 3: Update velocity using Eq. (1).
Step 4: Update position using Eq. (2).
Step 5: Update particle best:
If eval_i(x^k_i) > eval_i(pb^(k-1)_i) then
pb^k_i = x^k_i
else pb^k_i = pb^(k-1)_i
Step 6: Update global best:
Let pb^k_best be the particle best with the maximum fitness, i.e.
eval(pb^k_best) = max_i eval_i(pb^k_i).
If eval(pb^k_best) > eval(gb^(k-1)) then
gb^k = pb^k_best
else gb^k = gb^(k-1)
Step 7 (Stopping criterion): If the number of iterations exceeds the maximum number
of iterations or the accumulated coverage is 100%, then stop; otherwise go to Step 2.
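The steps above can be sketched as a minimal PSO in Python. The sphere function stands in for the fitness function (here it is minimized, so the comparisons in Steps 5 and 6 are inverted), and the dimensionality, swarm size, and coefficient values are illustrative choices consistent with the ranges mentioned earlier:

```python
import random

random.seed(2)

def sphere(x):
    """Toy objective to minimize; a stand-in for the real fitness function."""
    return sum(v * v for v in x)

d, n, w, c1, c2 = 3, 15, 0.8, 2.0, 2.0
X = [[random.uniform(-5, 5) for _ in range(d)] for _ in range(n)]   # Step 1
V = [[0.0] * d for _ in range(n)]
pb = [x[:] for x in X]                       # particle bests
gb = min(pb, key=sphere)[:]                  # global best

for k in range(100):                         # Steps 2-7
    for i in range(n):
        for j in range(d):
            r1, r2 = random.random(), random.random()
            V[i][j] = (w * V[i][j]
                       + c1 * r1 * (pb[i][j] - X[i][j])
                       + c2 * r2 * (gb[j] - X[i][j]))   # Eq. (1)
            X[i][j] += V[i][j]                           # Eq. (2)
        if sphere(X[i]) < sphere(pb[i]):                 # Step 5
            pb[i] = X[i][:]
    gb = min(pb, key=sphere)[:]                          # Step 6
```

A test-generation variant would replace `sphere` with a coverage-based fitness and stop once accumulated coverage reaches 100%, as in Step 7.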
The idea of ant colony optimization is, as its name suggests, inspired by ant
colonies. Ant Colony Optimization (ACO) is a population-based, general search
technique for the solution of difficult combinatorial problems, inspired by the
pheromone-trail-laying behavior of real ant colonies [73]. Each ant moves along
some unknown path in search of food and, as it goes, leaves behind a trail of what is
known as pheromone. The special feature of this pheromone is that it evaporates
with time, so that as time proceeds the concentration of the pheromone decreases on
any given path. It is therefore evident that the path with the maximum pheromone is
the one that has been traversed most recently, or in fact by the greatest number of
ants, and hence the most desirable for the ants that follow [74]. The first ACO
technique, known as Ant System [75], was applied to the traveling salesman problem.
This work was carried forward by Dorigo, Di Caro, Blum and others [75-79]. Initial
attempts at an ACO algorithm were not very satisfying until the algorithm was
coupled with a local optimizer. One problem is premature convergence to a less than
optimal solution, which occurs because too much virtual pheromone is laid too
quickly. To avoid this stagnation, pheromone evaporation is implemented; in other
words, the pheromone associated with a solution disappears after a period of time
[40].
In ACO, a set of software agents called artificial ants search for good solutions
to a given optimization problem. The choice of a heuristic technique is quite justified,
as any classic greedy approach shows very poor results [80]. Ant colony optimization
is best suited to graph-based problems [74].
ACO is a natural fit for the traveling salesperson problem. It begins with a
number of ants that follow a path around the different cities, each ant depositing
pheromone along its path. The algorithm begins by assigning each ant to a randomly
selected city. The next city is selected with a weighted probability that is a function
of the strength of the pheromone laid on the path and the distance to the city. The
probability that ant k will travel from city m to city n is given by
P^k_mn = (τ_mn / d_mn) / Σ_q (τ_mq / d_mq)

where
τ = pheromone strength
d = distance between cities
Short paths with high pheromone have the highest probability of selection [40].
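The next-city selection rule above can be sketched in Python. The 4-city distance matrix and the uniform initial pheromone are toy values, and pheromone deposition and evaporation are omitted for brevity:

```python
import random

random.seed(3)

def next_city(current, unvisited, tau, dist):
    """Pick the next city with probability proportional to tau/d,
    mirroring the selection rule above (roulette-wheel style draw)."""
    weights = [tau[current][c] / dist[current][c] for c in unvisited]
    total = sum(weights)
    pick = random.uniform(0, total)
    running = 0.0
    for c, wgt in zip(unvisited, weights):
        running += wgt
        if running >= pick:
            return c
    return unvisited[-1]

# Toy 4-city instance: symmetric distances, uniform initial pheromone
dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 8], [10, 4, 8, 0]]
tau = [[1.0] * 4 for _ in range(4)]

tour = [0]                               # one ant, starting at city 0
while len(tour) < 4:
    remaining = [c for c in range(4) if c not in tour]
    tour.append(next_city(tour[-1], remaining, tau, dist))
```

In a full Ant System, each ant's deposited pheromone would reinforce short tours while evaporation gradually erases the rest, biasing later draws toward good paths.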