(IJCSIS) International Journal of Computer Science and Information Security, Vol. 10, No. 10, October 2012
Intelligent Algorithm for Optimum Solutions Based on the Principles of Bat Sonar
Dr. Mohammed Ali Tawfeeq
Computer and Software Engineering Department
College of Engineering, Al-Mustansiriya University
Baghdad, Iraq
e-mail: drmatawfiq@yahoo.com
Abstract— This paper presents a new intelligent algorithm that can solve the problems of finding the optimum solution in the state space among which the desired solution resides. The algorithm mimics the principles of bat sonar in finding its targets. The algorithm introduces three search approaches. The first search approach considers a single sonar unit (SSU) with a fixed beam length and a single starting point. In this approach, although the results converge toward the optimum fitness, it is not guaranteed to find the global optimum solution, especially for complex problems; it is satisfied with finding "acceptably good" solutions to these problems. The second approach considers multisonar units (MSU) working in parallel in the same state space. Each unit has its own starting point and tries to find the optimum solution. In this approach the probability that the algorithm converges toward the optimum solution is significantly increased. It is found that this approach is suitable for complex functions and for problems with a wide state space. In the third approach, a single sonar unit with momentum (SSM) is used in order to handle the problem of convergence toward a local optimum rather than the global optimum. The momentum term is added to the length of the transmitted beams, which gives the chance to find the best fitness in a wider range within the state space. The algorithm is also tested for the case in which there is more than one target value within the interval range, such as trigonometric or periodic functions; the algorithm shows high performance in solving such problems. In this paper a comparison between the proposed algorithm and the genetic algorithm (GA) is made. It shows that both algorithms can approximately reach the optimum solutions for all of the testbed functions except for the function that has a local minimum, for which the proposed algorithm's result is much better than that of the GA. On the other hand, the comparison shows that the execution time required to obtain the optimum solution using the proposed algorithm is much less than that of the GA.
Keywords— Bat sonar; genetic algorithm; particle swarm optimization
I. INTRODUCTION
The basic concept of any optimization problem is to identify the alternative means of achieving a given objective and then to select the alternative that accomplishes the objective in the most efficient manner, subject to constraints on the means. The problem can be represented mathematically as

Optimize  y = f(x_1, x_2, …, x_n)                          (1)

Subject to  g_j(x_1, x_2, …, x_n) ≤ b_j,  j = 1, 2, …, m    (2)

Equation (1) is the objective function and (2) constitutes the set of constraints imposed on the solution. The x_i (i = 1, 2, …, n) represent the set of decision variables, and y = f(x_1, x_2, …, x_n) is the objective function expressed in terms of these decision variables. Depending on the nature of the problem, the term optimize means either maximize or minimize the value of a real function by systematically choosing input values from within an allowed set and computing the value of the function. In general, optimization can be defined as the process of finding the best solution for the problem under consideration.

Today, optimization comprises a wide variety of techniques, which can be found throughout the literature. Evolutionary computing may be the most prominent in this field. In the 1950s and 1960s, several computer scientists independently studied evolutionary systems with the idea that evolution could be used as an optimization tool for engineering problems. The idea in all these systems was to evolve a population of candidate solutions to a given problem, using operators inspired by natural genetic variation and natural selection [1]. In 1975, Holland described how to apply the principles of natural evolution to optimization problems and built the first genetic algorithms (GA) [2]. In the last several years there has been widespread interaction among researchers studying various evolutionary computation methods, and the boundaries between GAs, evolution strategies, evolutionary programming, and other evolutionary approaches have broken down to some extent. These techniques are being applied to an increasingly wide variety of problems, ranging from practical applications in industry and commerce to leading-edge scientific research [3].

Particle swarm optimization (PSO) is another technique that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. PSO is a form of swarm intelligence and is inspired by bird flocks, fish schooling and swarms of insects [2]. It is used as a
heuristic search method for the exploration of solution spaces of complex optimization problems. Development of the PSO technique over the last decade has been made by different researchers. The heuristic in PSO suffers from relatively long execution times, as the update step needs to be repeated many thousands of iterations to converge the swarm on the global optimum. Soudan, B. and Saad, M. [4] explored two dynamic population size improvements for classical PSO with the aim of reducing execution time. The most attractive features of PSO are its algorithmic simplicity and fast convergence. However, PSO tends to suffer from premature convergence when applied to strongly multimodal optimization problems. Lu, H., et al. [5] proposed a method of incorporating a real-valued mutation (RVM) operator into the PSO algorithms, aimed at enhancing global search capability. PSO contains many control parameters, and these parameters can significantly alter the performance of its searching ability. In order to analyze the dynamics of such a PSO system rigorously, Tsujimoto, T., et al. [6] proposed a canonical deterministic PSO (CD-PSO) system which does not contain any stochastic factors and whose phase-space coordinates are normalized. The found global best information influences the dynamics; they regarded this situation as the full-connection state. The authors try to clarify the effective parameters on the CD-PSO performance. Feng Chen, et al. [7] proposed an improved PSO that incorporates the sigmoid function into the velocity update equation of PSO to tackle some of its drawbacks, in order to obtain better global optimization results and faster convergence speed. PSO shares many common points with GA. Both algorithms start with a randomly generated population; both have fitness values to evaluate the population; both update the population and search for the optimum with random techniques. However, unlike GA, PSO has no evolution operators such as crossover and mutation.
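As an aside, the general formulation in (1) and (2) can be made concrete with a short sketch. The objective f and the constraint set below are illustrative assumptions only, not functions taken from the paper:

```python
# Hedged sketch of the formulation in (1)-(2): optimize y = f(x_1, ..., x_n)
# subject to g_j(x_1, ..., x_n) <= b_j. The objective and constraints here
# are illustrative assumptions, not the paper's testbed functions.

def f(x):
    """Example objective (1): f(x1, x2) = x1^2 + x2^2, to be minimized."""
    return x[0] ** 2 + x[1] ** 2

# Constraint set (2): each pair (g_j, b_j) encodes g_j(x) <= b_j.
constraints = [
    (lambda x: x[0] + x[1], 10.0),  # x1 + x2 <= 10
    (lambda x: -x[0], 0.0),         # x1 >= 0
]

def is_feasible(x):
    """x is a candidate solution only if every constraint in (2) holds."""
    return all(g(x) <= b for g, b in constraints)

def evaluate(x):
    """Return the objective value for a feasible x, or None otherwise."""
    return f(x) if is_feasible(x) else None
```

Any search technique discussed below can then be seen as choosing inputs from the feasible set and comparing the resulting objective values.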
On the other hand, it is important to mention in this introduction that GAs and PSO do not guarantee success [2], and sometimes are not guaranteed to find the global optimum solution to a problem; they are satisfied with finding acceptably good solutions.

This paper introduces a new intelligent algorithm. The proposed algorithm is a problem-solving technique that uses the principles of bat sonar as its model in searching for the approximate optimum solution to a problem. The algorithm introduces three search approaches: a single search unit, multiple search units, and a single search unit with momentum. Each of these approaches can approximately find the optimum solution with reasonable efficiency, depending on the complexity of the problem and the number of optimum points that exist in it.

This paper is organized as follows: the next section describes the main proposed algorithm; Section 3 introduces more efficient search approaches; Section 4 contains the experimental results; and Section 5 presents the conclusion of this work.

II. MAIN ALGORITHM
The sonar of a bat is an active echolocation system. In addition to providing information about how far away a target is, bat sonar conveys information about the relative velocity of the target, the size of various features of the target, and the azimuth and elevation of the target [8]. In order to find its prey, the bat may sit on a perch or fly around using its sonar signals. Some types of bats are considered 'high duty cycle' bats, since they produce signals 80% of the time they spend echolocating [9]. When a bat begins to echolocate, it usually produces short, millisecond-long pulses of sonar and listens to the returning echoes. If prey is detected, the bat will generally fly toward the source of the echo. The bat appears to be an amazing signal-processing machine with an accuracy of 99%. The way in which the bat measures the distance and the size of its prey is shown in Fig. 1 [10].
Fig. 1. Sonar signal of a bat
The proposed algorithm's search for optimum solutions depends mainly on these principles. In this algorithm, each and every point in the search space represents one possible solution. The sonar in this algorithm transmits several signals in different directions, starting from a proposed starting point. Each transmitted signal contains a batch of N beams of fixed length. The returned values (the value of the fitness function at the end point of each beam) are checked against each other and compared with the starting point to determine the optimum one. If an optimum point is detected, the sonar unit flies toward this point, exchanging its starting point for the new one, and then starts to transmit signals again from this point in different directions, searching for a better optimum solution. Otherwise, the sonar unit stays at its original starting point and retransmits signals in other directions. This process is repeated until the algorithm finds the best optimum solution. Fig. 2 illustrates a sample of how the proposed algorithm searches for the optimum point. In this figure the sonar unit transmits beams of signals starting from point P1. The returned signals find a better solution at P2, which causes the sonar to fly toward P2. This process continues with P3, then with P4. In this example, it is assumed that none of the signals returned to P4 are fitter than P4; thus the algorithm considers P4 to be the optimum point.
Fig. 2. Search process for optimum solution
The fitness function considered in the proposed algorithm is the evaluation function used to determine the solution. This function can be n-dimensional. The optimal solution is the one with the best fitness function value. The main proposed algorithm in this paper considers a single sonar unit (SSU) flying in the state space searching for the optimum solution. This scenario represents the first search approach introduced in this work. The details of the algorithm are as follows:
Step 1. Initialize the following main parameters:
- Solution range: min and max values of the search space variables.
- Beam length L: a random value not exceeding half the solution range:
  L ≤ Rand * Solution_range / 2
- Number of beams N: a small integer random value representing the number of beams in each single transmitted signal.
- Starting point pos_s: any point in the search space, selected randomly.
- Angle between beams θ: one of two techniques is assumed to be used in this algorithm. The first is to randomly select a small fixed value θ between any two successive beams, while the other is to randomly select a different angle θ_i between any two successive beams, where (i = 1, …, N-1). We call these two techniques "Fixed θ" and "Rand θ", respectively.
The above-mentioned parameters are shown in Fig. 3.
Step 2. Evaluate the fitness function f_s at the starting point pos_s.
Step 3. While the stopping condition is false, do Steps 4-7.
Step 4. Select a random value representing the main beam direction θ_m starting from pos_s.
Step 5. Transmit N beams starting from pos_s with main beam direction θ_m and angle θ between any two successive beams.
Step 6. Determine the coordinates of the remote end point pos_i for each transmitted beam (i = 1, …, N), then evaluate the fitness function f_i at these ends. As an example, in a three-dimensional state space:

  x_i = x_s + L cos(θ_m + (i-1)θ)    (3)
  y_i = y_s + L sin(θ_m + (i-1)θ)    (4)
  pos_i = [x_i, y_i],  f_i = f(x_i, y_i)

Step 7. Compare the fitness values:
If f_s is the optimum value (i.e., f_s ≥ f_i for maximizing, and f_s ≤ f_i for minimizing), then go to Step 3.
Otherwise, replace the coordinates of pos_s with the coordinates of the optimum point among the pos_i, and replace f_s with the optimum f_i:
  pos_s = pos_i of the optimum f_i
  f_s = optimum f_i
then go to Step 3.
Step 8. Test for the stopping condition. The algorithm can be terminated according to the following stopping criteria:
- A fixed number of iterations have occurred.
- All solutions converge to the same value and no improvement in the fitness value is found.
Fig. 3. Single batch of beams contained in a single transmitted signal
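Steps 1-8 above can be sketched as a short program. This is a minimal two-variable reading of the SSU procedure using the "Fixed θ" technique; the search bounds, iteration-count stopping criterion, and parameter ranges below are illustrative assumptions, not the paper's exact settings:

```python
import math
import random

def ssu_search(fitness, lo, hi, n_beams=5, max_iters=200, maximize=True, seed=0):
    """Minimal 2-D sketch of the single-sonar-unit (SSU) search.

    A batch of n_beams fixed-length beams is transmitted from the current
    position in a random main direction theta_m (Steps 4-5); beam end points
    follow eqs. (3)-(4); if the fitness at any end point improves on the
    current point, the unit flies there (Step 7).
    """
    rng = random.Random(seed)
    span = hi - lo
    L = rng.uniform(0.0, span / 2)          # beam length, Step 1
    theta = rng.uniform(0.05, 0.2)          # fixed angle between beams
    pos = [rng.uniform(lo, hi), rng.uniform(lo, hi)]  # random starting point
    f_s = fitness(*pos)                     # Step 2
    better = (lambda a, b: a > b) if maximize else (lambda a, b: a < b)

    for _ in range(max_iters):              # Step 3 / Step 8 (fixed budget)
        theta_m = rng.uniform(0.0, 2 * math.pi)      # Step 4
        for i in range(1, n_beams + 1):              # Steps 5-6
            x = pos[0] + L * math.cos(theta_m + (i - 1) * theta)  # eq. (3)
            y = pos[1] + L * math.sin(theta_m + (i - 1) * theta)  # eq. (4)
            if not (lo <= x <= hi and lo <= y <= hi):
                continue                    # keep beam ends inside the range
            f_i = fitness(x, y)
            if better(f_i, f_s):            # Step 7: fly toward the new point
                pos, f_s = [x, y], f_i
    return pos, f_s
```

For example, maximizing f(x, y) = -(x² + y²) over [-5, 5]² drives the unit toward the origin; with a fixed beam length the final fitness can only improve monotonically, which matches the SSU caveat that a global optimum is not guaranteed.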
The algorithm is a kind of parallel search; this comes from the fact that the technique used here checks several solutions at once. Over iterations, selection for the best fitness leaves out bad solutions and keeps the best at each step. Thus, the proposed algorithm tries to converge to optimal solutions. In SSU, although the results converge toward the minimum or maximum fitness, it is not guaranteed to obtain the global optimum solution, especially in complex problems with a wide state space. This leads to the development of more efficient search approaches.

III. MORE EFFICIENT SEARCH

This paper introduces two other, more efficient search approaches: the first uses multiple sonar search units, while the other adds a momentum term to the beam length. The backbone of these two algorithms is the main algorithm of the SSU approach mentioned previously.
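As a rough sketch of the MSU idea (assumptions: two decision variables, minimization, a fixed iteration budget, and sequential rather than truly parallel units), several independent units can each run the SSU procedure from their own random starting point, with the best result across all units taken as the answer:

```python
import math
import random

def msu_search(fitness, lo, hi, n_units=4, n_beams=5, iters=100, seed=0):
    """Sketch of the multi-sonar-unit (MSU) idea: several independent units,
    each with its own random starting point and beam length, search the same
    state space (sequentially here, for clarity); the best result wins.
    Minimization is assumed; the inner loop mirrors the SSU procedure."""
    rng = random.Random(seed)
    best_pos, best_f = None, float("inf")
    for _ in range(n_units):
        L = rng.uniform(0.1, (hi - lo) / 2)    # per-unit beam length
        theta = rng.uniform(0.05, 0.2)         # fixed inter-beam angle
        pos = [rng.uniform(lo, hi), rng.uniform(lo, hi)]
        f_s = fitness(*pos)
        for _ in range(iters):
            theta_m = rng.uniform(0.0, 2 * math.pi)
            for i in range(n_beams):           # beam ends per eqs. (3)-(4)
                x = pos[0] + L * math.cos(theta_m + i * theta)
                y = pos[1] + L * math.sin(theta_m + i * theta)
                if lo <= x <= hi and lo <= y <= hi:
                    f_i = fitness(x, y)
                    if f_i < f_s:              # fly toward the better point
                        pos, f_s = [x, y], f_i
        if f_s < best_f:                       # keep the best unit's result
            best_pos, best_f = pos, f_s
    return best_pos, best_f
```

Because each unit starts from a different random point, the chance that at least one unit lands in the basin of the global optimum increases with the number of units, which is the intuition behind the MSU approach.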
