
1 Genetic Algorithm.

The Genetic Algorithm (GA) is a computational optimization technique used to solve both constrained and unconstrained optimization problems [1]. The GA is based on the theory of evolution [2] [3] and can be viewed as a computational model of adaptive systems. The GA is popular for its ability to cover a large search space of candidate solutions and thereby find the most suitable one [1] [4]. It is thus capable of finding not only the best solution but also a list of available optimal solutions, and it can be used to optimize problems that contain multiple tasks [26]. During computation, however, the GA faces hurdles in fitness measurement and tends to be slower than other algorithms. Just as the Clonal Selection Algorithm (CSA) is inspired by the clonal selection theory, the GA is inspired by the theory of evolution [5]. The working principle of the GA is to generate a population of individuals (chromosomes) at random [6]. The population then undergoes a process of natural selection, driven by a fitness function, with the help of crossover and mutation. The reproduction of individuals is directly proportional to their fitness [7]: the higher the fitness, the greater the chance of reproduction [12] [13]. The GA therefore has many advantages: it offers a choice among the available optimal solutions, can handle optimization problems with multiple tasks, covers a large search space, and can solve both constrained and unconstrained optimization problems. Its one major drawback is a slow response in real-time applications. There is a valid reason for this slow response: the indiscriminate and very large number of data manipulations the GA performs. Depending on the nature of the application, the parameters used in the GA may carry different weights.
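The fitness-proportional reproduction mentioned above can be illustrated with a small sketch. The following Python snippet is only an illustration (the individuals and fitness values are invented for the example, not taken from the cited works); it implements roulette-wheel selection, in which an individual's chance of being picked as a parent is proportional to its fitness.

```python
import random

def roulette_wheel_select(population, fitness_values):
    """Select one individual with probability proportional to its fitness."""
    total = sum(fitness_values)
    pick = random.uniform(0.0, total)          # random point on the "wheel"
    running = 0.0
    for individual, fit in zip(population, fitness_values):
        running += fit
        if running >= pick:
            return individual
    return population[-1]                      # guard against floating-point rounding

# Hypothetical individuals m1..m4 with fitness values U(m); all numbers are invented.
population = ["m1", "m2", "m3", "m4"]
fitness_values = [1.0, 2.0, 4.0, 8.0]
counts = {m: 0 for m in population}
for _ in range(10_000):
    counts[roulette_wheel_select(population, fitness_values)] += 1
print(counts)   # m4, the fittest individual, is selected roughly half of the time
```

Running the snippet shows that the fittest individual is selected far more often than the others, which is exactly the "higher fitness results in more chance for reproduction" behavior described above.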

The GA works as follows:

1 Initialization: The GA starts by generating an initial population at random, say "M(0)".
2 Measure fitness: For the current population M(t), calculate and store the fitness "U(m)" of each individual "m". For each individual "m" in M(t), specify a probability "Pb(m)" that is proportional to "U(m)".
3 Generation of a new population: The new population M(t+1) is generated by selecting individuals from M(t) and applying genetic operators to produce offspring. Three main operators are used to arrive at an optimal solution, listed as follows (a minimal code sketch is given after this list).
A. Selection: The individuals with the best fitness levels in the current population are selected.
B. Crossover: Crossover is performed to produce new individuals. Using the defined probability "Pb(m)", the selected individuals produce new individuals.
C. Mutation: The new individuals then undergo a process called mutation.
4 Stopping criteria: The above steps are repeated until an optimum (best) solution is obtained or the maximum number of generated populations is reached.
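To make these steps concrete, the following Python sketch shows one possible implementation of the loop above for a toy maximization problem. It is a minimal sketch under assumed choices (bit-string chromosomes, an invented "one-max" fitness function, and invented parameter values); it is not the implementation used in the cited works.

```python
import random

CHROMOSOME_LEN  = 16     # bits per chromosome (assumed)
POP_SIZE        = 30     # size of M(t) (assumed)
P_CROSSOVER     = 0.8    # crossover probability (assumed)
P_MUTATION      = 0.02   # per-bit mutation probability (assumed)
MAX_GENERATIONS = 100    # stopping criterion (assumed)

def fitness(chromosome):
    """Toy fitness U(m): the number of 1-bits (the 'one-max' problem)."""
    return sum(chromosome)

def select(population):
    """Fitness-proportional (roulette-wheel) selection of one parent."""
    weights = [fitness(c) + 1e-9 for c in population]   # avoid an all-zero wheel
    return random.choices(population, weights=weights, k=1)[0]

def crossover(parent_a, parent_b):
    """Single-point crossover producing one offspring."""
    if random.random() < P_CROSSOVER:
        point = random.randrange(1, CHROMOSOME_LEN)
        return parent_a[:point] + parent_b[point:]
    return parent_a[:]

def mutate(chromosome):
    """Flip each bit with a small probability."""
    return [bit ^ 1 if random.random() < P_MUTATION else bit for bit in chromosome]

# 1. Initialization: random population M(0)
population = [[random.randint(0, 1) for _ in range(CHROMOSOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(MAX_GENERATIONS):
    # 2. Measure fitness of every individual in M(t); 4. stop if the optimum is found.
    if max(fitness(c) for c in population) == CHROMOSOME_LEN:
        break
    # 3. Generate M(t+1) via selection, crossover and mutation.
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print("Best individual found:", best, "with fitness", fitness(best))
```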

Since we are challenging the GA, we have to understand how it works, and figure 1 helps in this context: it shows the working of the GA as a flow chart.

Fig.1: Functioning of GA.


2 Particle Swarm Optimization.

The Particle Swarm Optimization (PSO) algorithm was first introduced by Kennedy, Eberhart and Shi [8] [9] to simulate the social behavior of bird flocks and fish schools and to model the stylized movement of such organisms. PSO is thus a population-based stochastic optimization technique inspired by bird flocks and fish schools [10]. PSO shares some similarities with the GA, such as its search method and evolutionary computation. Poli [8] [9] carried out comprehensive research on PSO applications and concluded that it is a metaheuristic technique that needs few or no assumptions about the problem to be optimized. PSO is therefore a straightforward, fast and efficient computational technique that optimizes a problem iteratively [10]. PSO uses few parameters and can optimize problems that are partially irregular, change over time or are noisy, but it does not guarantee an optimal solution every time. The outline of PSO is given as follows (a minimal code sketch follows the list).

1 Initial population: A population of "n" particles is generated randomly.
2 Population size: The total number of particles in the swarm. It depends on the nature of the problem and is a trade-off between precision and computation time.
3 Swarm: The particles or population, e.g. the birds or fish.
4 Search space: The region (array) the algorithm searches for a solution.
5 Number of iterations: The maximum number of steps used to calculate the fitness value.
6 Inertia weight: A quantity that controls the convergence of the algorithm; its value depends on the problem.
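The following Python sketch ties these elements together using the standard PSO velocity and position update (inertia term plus pulls toward the personal best and the global best). The objective function, swarm size and coefficient values are assumptions chosen for the illustration, not values from the cited works.

```python
import random

DIMENSIONS  = 2          # size of the search space (assumed)
SWARM_SIZE  = 20         # population size, i.e. number of particles (assumed)
ITERATIONS  = 100        # maximum number of iterations (assumed)
W           = 0.7        # inertia weight (assumed)
C1, C2      = 1.5, 1.5   # cognitive and social coefficients (assumed)

def objective(x):
    """Toy objective to minimize: the sphere function."""
    return sum(v * v for v in x)

# Initial population: random positions and zero velocities inside the search space.
positions  = [[random.uniform(-5, 5) for _ in range(DIMENSIONS)] for _ in range(SWARM_SIZE)]
velocities = [[0.0] * DIMENSIONS for _ in range(SWARM_SIZE)]
personal_best = [p[:] for p in positions]
global_best   = min(personal_best, key=objective)

for _ in range(ITERATIONS):
    for i in range(SWARM_SIZE):
        for d in range(DIMENSIONS):
            r1, r2 = random.random(), random.random()
            # Velocity update: inertia + pull toward personal best + pull toward global best.
            velocities[i][d] = (W * velocities[i][d]
                                + C1 * r1 * (personal_best[i][d] - positions[i][d])
                                + C2 * r2 * (global_best[d] - positions[i][d]))
            positions[i][d] += velocities[i][d]
        # Update the personal and global bests using the fitness (objective) value.
        if objective(positions[i]) < objective(personal_best[i]):
            personal_best[i] = positions[i][:]
            if objective(personal_best[i]) < objective(global_best):
                global_best = personal_best[i][:]

print("Best position found:", global_best, "objective:", objective(global_best))
```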
3 Ant Colony Optimization.

Ants searching for food seem to roam freely over an area or terrain, but in reality this roaming is part of a well-organized and pre-planned search strategy that yields efficient results. ACO is a bio-simulation model inspired by the quite simple individual interactions of ants and by their complex group behavior [11]. The Ant Colony Optimization (ACO) algorithm was first introduced by Marco Dorigo in 1991 in his PhD thesis, "Optimization, Learning and Natural Algorithms". In this work he shows how ants solve the path-optimization problem of searching for food by using pheromones, and how we can adapt and copy their technique to solve optimization problems in our world. While searching for food, ants use several paths to reach the food and then return to their nest, leaving a chemical known as pheromone along the paths they follow. To understand ACO we consider the double-bridge experiment, which illustrates how ACO works and reveals how the optimization is actually done. The ants' goal is to select the best path among several available paths; in the experiment we take an example in which two paths are considered and used by the ants to search for food, as shown in figure 3.4. Initially, at t = 0 s, both ants are standing at point "A" and there is an equal probability of either path being chosen; say one ant chooses path "B1" and the other chooses "B2" to approach the food, as shown in figure 3.4.

Fig.3.4: ACO procedure at t = 0.

At t = 4 s, the ant that chose path "B1" has nearly reached the food and arrived at point "C", while the ant that chose "B2" is only halfway, as shown in figure 3.5.
Fig.3.5: ACO procedure at t = 4s.

The pheromone left by the two ants on the paths is represented by different colors. At t = 6 s, one ant has collected the food, and when it returns to point "C" it faces a small challenge: which path should it select to go back to the nest, "B1" or "B2"? The solution is simple. At point "C" it can smell the pheromone present on path "B1", which is still absent on path "B2", so there is a 70% chance that it chooses path "B1" and a 30% chance that it prefers path "B2" to return to the nest, as shown in figure 3.6.
Fig.3.6: ACO procedure at t = 6s.

At t = 10 s, the second ant collects the food, reaches point "C", and faces the same path-selection challenge as the earlier ant: choose path "B1" or path "B2". In this case it can smell pheromone on both paths, "B1" and "B2", but the pheromone level on path "B1" is higher and stronger than on path "B2", so the solution is again simple: the second ant selects the path with the higher pheromone level, "B1", to reach the nest, as shown in figure 3.7. Note that there is now a 60% chance of selecting path "B1" and a 40% chance of selecting path "B2". The 10% drop in the selection chance of path "B1" compared with the earlier case has a valid reason: at t = 10 s, the smell of pheromone is also present on path "B2". With the passage of time, however, the pheromone level on path "B2" decreases and the probability of selecting path "B1" increases.

Fig.3.7: ACO procedure at t = 10s.
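These percentages can be formalized with the random-proportional path-choice rule of Dorigo's Ant System. The formula below is an added formalization (it is not stated explicitly in the text above), with \tau_{B_1} and \tau_{B_2} denoting the pheromone levels on the two paths and \alpha the pheromone-influence exponent:

```latex
P(B_1) = \frac{\tau_{B_1}^{\alpha}}{\tau_{B_1}^{\alpha} + \tau_{B_2}^{\alpha}},
\qquad P(B_2) = 1 - P(B_1).
```

For instance, with \alpha = 1, pheromone levels in the ratio 60 : 40 reproduce the 60% / 40% selection chances described at t = 10 s.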

When the second ant selects path "B1" and reaches the nest, the pheromone level on path "B1" becomes even higher than before, as shown in figure 3.8.
Fig.3.8: ACO procedure after t = 10s.

Since pheromone is a chemical substance, it evaporates with the passage of time; eventually it evaporates completely from path "B2", and "B1" is selected, as shown in figure 3.9.

Fig.3.9: ACO procedure and selection of path "B1".

Hence this is the basic theory behind ACO, and the technique can be used to solve optimization problems in the engineering world. ACO also has a drawback, however: if no ant discovers path "B1", then all the ants will follow path "B2". This is called a "missed opportunity", so there is a chance that ACO will not always provide the best solution, as shown in figure 3.10 (a minimal code sketch of the pheromone mechanism is given after figure 3.10).

Fig.3.10: ACO procedure showing Missed Opportunity.
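As an illustration of the mechanism described above, the following Python sketch simulates the two-path (double-bridge) choice: each ant picks a path with probability proportional to its pheromone level, deposits pheromone inversely proportional to the path length on the path it used, and the pheromone evaporates at a fixed rate. The path lengths, deposit amount and evaporation rate are assumed values chosen for the illustration, not parameters from the cited works.

```python
import random

PATHS       = {"B1": 1.0, "B2": 2.0}   # assumed path lengths: B1 is the shorter path
pheromone   = {"B1": 0.1, "B2": 0.1}   # small, equal initial pheromone on both paths
EVAPORATION = 0.1                       # fraction of pheromone lost per time step (assumed)
DEPOSIT     = 1.0                       # pheromone laid per completed trip (assumed)
N_ANTS      = 100
N_STEPS     = 50

def choose_path():
    """Pick a path with probability proportional to its pheromone level."""
    names = list(pheromone)
    return random.choices(names, weights=[pheromone[p] for p in names], k=1)[0]

for step in range(N_STEPS):
    # Each ant chooses a path and deposits pheromone; the deposit is inversely
    # proportional to the path length, so the shorter path B1 is reinforced faster.
    for _ in range(N_ANTS):
        path = choose_path()
        pheromone[path] += DEPOSIT / PATHS[path]
    # Evaporation: pheromone on every path decays over time.
    for path in pheromone:
        pheromone[path] *= (1.0 - EVAPORATION)

print("Final pheromone levels:", pheromone)
print("Probability of choosing B1:", pheromone["B1"] / sum(pheromone.values()))
```

Running the sketch shows the pheromone on "B1" dominating after a few steps, which is the convergence described above; if the initial pheromone on "B1" were zero, the ants would never sample it, illustrating the "missed opportunity" drawback.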


[1] Payal Mishra and Neelam Dewangan, "Survey on Optimization Methods for Spectrum Sensing in Cognitive Radio Network", International Journal of New Technology and Research (IJNTR), ISSN: 2454-4116, Vol. 1, Issue 6, pp. 23-28, October 2015.
[2] Faisal Riaz, Saeed Ahmed and Imran Shafi, "White Space Optimization Using Memory Enabled Genetic Algorithm in Vehicular Cognitive Radio", in Proc. 11th IEEE International Conference on Cybernetic Intelligent Systems, Limerick, Ireland, pp. 133-138, August 23-24, 2012.
[3] Faisal Riaz, Shafaq Murtaza and Saeed Ahmed, "Multi-Computational Intelligence Techniques based on White Space Optimization in Cognitive Radio Enabled Vehicular Network".
[4] Yasmina El Morabit, Fatiha Mrabti and El Houssain Abarkan, "Spectrum Allocation Using Genetic Algorithm in Cognitive Radio Networks", IEEE, 2015.
[5] Maninder Jeet Kaur, Moin Uddin and Harsh K. Verma, "Performance Evaluation of QoS Parameters in Cognitive Radio Using Genetic Algorithm", World Academy of Science, Engineering and Technology, Vol. 70, pp. 957-961, 2010.
[6] P. J. Angeline, "Evolutionary Optimization versus Particle Swarm Optimization: Philosophy and the Performance Differences", Lecture Notes in Computer Science, Vol. 1447, Proc. 7th Int. Conf. on Evolutionary Programming (Evolutionary Programming VII), pp. 600-610, Mar. 1998.
[7] Wei Zhang, Ranjan K. Mallik and Khaled Ben Letaief, "Optimization of Cooperative Spectrum Sensing with Energy Detection in Cognitive Radio Networks", IEEE Transactions on Wireless Communications, Vol. 8, No. 12, Dec. 2009.
[8] M. E. Sahin, I. Guvenc and H. Arslan, "Optimization of Energy Detector Receivers for UWB Systems", in Proc. IEEE Vehicular Technology Conference, Vol. 2, Stockholm, Sweden, pp. 1386-1390, May 2005.
[9] Praveen Kaligineedi, Majid Khabbazian and Vijay K. Bhargava, "Malicious User Detection in a Cognitive Radio Cooperative Sensing System", IEEE Transactions on Wireless Communications, Vol. 9, No. 8, pp. 2488-2497, 2010.
[10] Nidhi Rai and Archana Singh, "Improved Clonal Selection Algorithm (ICLONALG)", CSE Dept., Sam Higginbottom Institute of Agriculture, Technology and Sciences, Allahabad, U.P., India, Vol. 5, No. 4, Aug. 2015.
[11] Saad Ghaleb Yaseen and Nada M. A. Al-Slamy, "Ant Colony Optimization", Department of Management Information Systems, College of Economics & Business, Al-Zaytoonah University of Jordan, IJCSNS International Journal of Computer Science and Network Security, Vol. 8, No. 6, June 2008.
[12] Pawan Yadav and Rita Mahajan, "Energy Detection for Spectrum Sensing in Cognitive Radio Using Simulink", International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering, Vol. 3, Issue 5, May 2014.
[13] Nisha Yadav and Suman Rathi, "A Comprehensive Study of Spectrum Sensing Techniques in Cognitive Radio", International Journal of Advances in Engineering & Technology, ISSN: 2231-1963, July 2011.
