
17, 815 - 825

**A New Hybrid of Evolutionary and Conventional Optimization Algorithms**

A. Khosravi, Faculty of Electrical and Computer Engineering, Noushirvani University of Technology, Babol, 47135-484, Iran, akhosravi@nit.ac.ir

A. Lari, Faculty of Electrical and Computer Engineering, Noushirvani University of Technology, Babol, Iran, a.lari@stu.nit.ac.ir

J. Addeh, Faculty of Electrical and Computer Engineering, Noushirvani University of Technology, Babol, Iran, jalil-addeh@stu.nit.ac.ir

Abstract. Combining evolutionary and conventional algorithms has opened a new horizon for optimization. In previous hybrid algorithms, an evolutionary algorithm is applied first and the obtained result is then used as an initial guess for a conventional algorithm. With this approach the advantages of the two algorithms cannot be exploited while they run at the same time. In addition, it is not determined at which iteration the evolutionary algorithm should stop and the conventional one should start; an improper choice of this iteration leads to an inefficient hybrid algorithm. To overcome these shortcomings, this paper proposes a novel hybrid algorithm that uses the abilities of evolutionary and conventional algorithms simultaneously: in each iteration, both the evolutionary and the conventional algorithm are applied. Simulation results on several benchmark problems show that the proposed hybrid algorithm has faster convergence and a more reliable solution than the conventional hybrid algorithm.

Keywords: Evolutionary Algorithm, Conventional Algorithm, Particle Swarm Optimization, Steepest Descent

1 Introduction

There are two types of optimization algorithms: evolutionary and conventional. Evolutionary optimization algorithms have drawn considerable attention in the last decade. Many types of evolutionary algorithms, such as particle swarm optimization (PSO), the Bees algorithm and Ant Colony Optimization, have been introduced. They have progressed rapidly in recent years and have been applied successfully to many real-world optimization problems [7, 9]. On the other hand, conventional optimization algorithms such as steepest descent (SD), Newton's method and successive quadratic programming (SQP) have been applied to a variety of optimization problems. However, in some cases they face problems such as sensitivity to an improper initial guess, and they cannot be applied to discrete systems. Evolutionary algorithms can be employed on problems with many local extrema, but they cannot search the neighborhood of the global extremum precisely. Conventional algorithms show the opposite properties: they can move rapidly toward the global optimum when a proper initial guess is selected, but on problems with several local extrema they usually get trapped in a local extremum. These two types of algorithms therefore complement each other well, and a proper combination of them can lead to a better result. In many situations where this combination has been used, an evolutionary algorithm is applied first and the obtained swarm is then employed as the initial guess for a conventional algorithm [3]. In this manner the abilities of the evolutionary and classical algorithms are used separately. Moreover, it is not determined at which iteration the evolutionary algorithm should stop and the conventional one should start; an improper selection of this iteration certainly leads to an inefficient hybrid algorithm.

If, during the random search of an evolutionary algorithm, a classical algorithm is also applied in each iteration, the abilities of precise search near an extremum and random exploration can be realized at the same time. The proposed new hybrid algorithm is based on this idea. For this algorithm, particle swarm optimization was selected from among the evolutionary algorithms and steepest descent from among the conventional algorithms. PSO has been used to solve many optimization problems; it is a powerful evolutionary algorithm and, because of its ease of implementation and fast convergence, it has been widely applied

in many areas [5, 8]. The SD algorithm is a highly efficient direct optimization approach; it is the simplest of the gradient methods and uses only first-order derivatives. These two algorithms share similar properties: they are simple representatives of the evolutionary and conventional families, respectively, with relatively high convergence speed. The hybrid algorithm obtained from them is simple, fast and easy to implement. The proposed algorithm has been tested on four benchmark problems, and simulation results show that it has faster convergence and a more precise solution than the conventional hybrid algorithm. It should also be mentioned that, to obtain better results, a conventional optimization algorithm can be applied once the proposed new hybrid algorithm has converged to a fixed value. The rest of this paper is organized as follows: Section 2 discusses the PSO evolutionary algorithm. In Section 3 the steepest descent algorithm is described, and in Section 4 the proposed new hybrid algorithm is stated. Simulation results are presented in Section 5, and finally Section 6 is dedicated to the conclusion.

2 Particle Swarm Optimization

Particle swarm optimization (PSO) is originally attributed to Kennedy and Eberhart [2] and is based on the social behavior of groups of animals, such as bird flocking and fish schooling. In the PSO algorithm each individual of the swarm, called a particle, remembers the best solution found by itself and by the whole swarm along the search trajectory. The particles move through the search space and exchange information with the other particles according to the following equations:

V_id = w V_id + c1 r1 (P_id - X_id) + c2 r2 (P_gd - X_id)   (1)

X_id = X_id + V_id,   for d = 1, 2, ..., N; i = 1, 2, ..., S   (2)

where X_id represents the current position of the particle, P_id is the best remembered individual particle position and P_gd denotes the best remembered swarm position. c1 and c2 are the cognitive and social parameters, r1 and r2 are random numbers between 0 and 1, and w is the inertia weight, which is used to balance the global and local search abilities. A large inertia weight facilitates global search, while a small inertia weight facilitates local search. In an empirical study on PSO [4], Shi and Eberhart claimed that a linearly decreasing inertia weight could improve local search toward the end of a run, rather than using

a constant value throughout. A decreasing function for the dynamic inertia weight can be devised as follows:

w = (iter_max - iter_cur) (w_initial - w_final) / iter_max + w_final   (3)

where w_initial and w_final represent the inertia weights at the start and end of a given run respectively, iter_max is the maximum number of iterations in a given run, and iter_cur is the current iteration number at the present time step. However, global search ability at the end of the run may be inadequate because of the linearly decreasing inertia weight, and the PSO may fail to find the required optimum when the problem is too complicated. To some extent this can be overcome by employing a self-adapting strategy for adjusting the acceleration coefficients. Suganthan [6] applied a method that makes the two factors decrease linearly as the iteration number increases, but the results were not as good as the fixed value of 2 for c1 and c2. Ratnaweera [1] improved the convergence of the particles to the global optimum by making c1 decrease and c2 increase linearly with the iteration number. The c1 and c2 are given by the following equations:

c1 = c1s + iter_cur (c1e - c1s) / iter_max
c2 = c2s + iter_cur (c2e - c2s) / iter_max   (4)
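Equations (1)-(4) can be sketched directly in code. The function names and default values below are ours, chosen for illustration only; this is a minimal sketch, not the authors' implementation.

```python
import random

def schedules(it, it_max, w_i=1.2, w_f=0.4, c1s=2.0, c1e=2.0, c2s=2.0, c2e=2.0):
    """Eqs. (3)-(4): linearly varying inertia weight and acceleration coefficients."""
    w = (it_max - it) * (w_i - w_f) / it_max + w_f
    c1 = c1s + it * (c1e - c1s) / it_max
    c2 = c2s + it * (c2e - c2s) / it_max
    return w, c1, c2

def pso_step(X, V, P, g, w, c1, c2):
    """Eqs. (1)-(2): update each particle's velocity, then its position, in place.

    X, V -- positions and velocities (lists of coordinate lists)
    P    -- best position remembered by each particle (P_id)
    g    -- best position remembered by the swarm (P_gd)
    """
    for i in range(len(X)):
        for d in range(len(X[i])):
            r1, r2 = random.random(), random.random()
            V[i][d] = (w * V[i][d]
                       + c1 * r1 * (P[i][d] - X[i][d])
                       + c2 * r2 * (g[d] - X[i][d]))
            X[i][d] += V[i][d]
```

With the Table 3 settings for Rosenbrock and Rastrigin (c1s = 1.5, c1e = 0.5, c2s = 0.5, c2e = 1.5), `schedules` reproduces the decreasing-c1 / increasing-c2 behavior attributed to Ratnaweera [1].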

3 Steepest Descent Algorithm

The method of steepest descent (SD) is the simplest of the gradient methods. Consider a function F(x) that is defined and differentiable within a given boundary; the direction in which it decreases fastest is the negative gradient of F(x). To find a local minimum of F(x), the SD method follows a zig-zag-like path from an arbitrary point and gradually slides down the gradient until it converges to the minimum. Although this method is less time-consuming than population-based search algorithms, it is highly dependent on the initial estimate of the solution. The method can be described by the following formulas:

u_k = -g(x_k) / ||g(x_k)||   (5)

x_{k+1} = x_k + alpha_k u_k   (6)

where x_{k+1} represents the new solution obtained from x_k, and g(x_k) is the gradient of F(x) at x_k. alpha_k is a scalar step size that should decrease in each iteration: near the optimum point, a smaller alpha_k makes smaller movements in x_k and therefore a more precise search.
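A minimal sketch of one SD move per Eqs. (5)-(6) follows; the quadratic example and the step-size schedule alpha_k = 1/(k+1) are our own illustrative assumptions, not prescriptions from the paper.

```python
import math

def sd_step(x, grad, alpha):
    """Eqs. (5)-(6): move along the normalized negative gradient with step alpha."""
    g = grad(x)
    norm = math.sqrt(sum(gi * gi for gi in g))
    if norm == 0.0:                # already at a stationary point
        return x[:]
    u = [-gi / norm for gi in g]                      # Eq. (5)
    return [xi + alpha * ui for xi, ui in zip(x, u)]  # Eq. (6)

# Example: minimizing F(x) = x1^2 + x2^2 (gradient 2x) with a decreasing step.
x = [3.0, 4.0]
for k in range(100):
    x = sd_step(x, lambda p: [2 * p[0], 2 * p[1]], alpha=1.0 / (k + 1))
```

Because the direction is normalized, the decreasing alpha_k alone controls how far each move goes, which is why small steps near the optimum give the more precise search described above.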


4 The New Hybrid Evolutionary-Conventional Algorithm

In this section the proposed method is described. It is based on a new combination of conventional and evolutionary algorithms for sweeping the search space. In earlier works the evolutionary algorithm was used first to scan the search space, and a conventional algorithm was then employed with the obtained swarm as its initial guess. This causes the abilities of the two algorithms to be used separately, so a better result cannot be guaranteed, because the proper iteration at which to switch between the algorithms cannot be determined; moreover, when the two algorithms are used separately, their abilities cannot be exploited at the same time. In the proposed method, in each iteration the swarm is passed to the conventional algorithm for a fixed number of moves after the evolutionary algorithm has been applied. In each iteration, therefore, both random search and a more precise search near the optimum point are achieved. When the number of iterations increases and the proposed algorithm approaches the global extremum, the SD algorithm can be used on its own (New hybrid-SD) for a more precise search near the global extremum, because near the global extremum there is no need for random search. The procedure can be summarized as follows:

Step 1. Specify the number of times the steepest descent algorithm should be applied in each iteration of the PSO algorithm. This number can be a non-integer, in which case the PSO algorithm is instead applied a specified number of times per iteration of the steepest descent algorithm.

Step 2. Determine the PSO parameters: number of particles, number of iterations, cognitive and social parameters, and so on.

Step 3. Initialize the positions and velocities of the particles randomly in the search space.

Step 4. Evaluate each particle's fitness value and calculate P_id and P_gd according to their definitions.

Step 5. Based on the number specified in Step 1, move the particles through the search space with the PSO and SD algorithms in each iteration. The positions and velocities of all particles are updated according to equations (1), (2), (5) and (6), generating a group of new particles.

Step 6. If the maximum number of generations has been reached, go to Step 8; otherwise go to Step 7.

Step 7. Calculate the new values of w, c1 and c2 according to equations (3) and (4), and go to Step 4.

Step 8. Use the obtained swarm as the initial guess for the SD algorithm and move the particles with SD for the specified number of times.
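The steps above can be sketched as a single loop. This is a minimal illustration under our own assumptions (fixed c1 = c2 = 2, an integer number of SD moves per iteration, and an illustrative decreasing SD step size), not the authors' exact implementation.

```python
import math
import random

def hybrid_minimize(fitness, grad, dim, n_particles=20, n_iter=100,
                    sd_moves=1, bounds=(-100.0, 100.0)):
    """Steps 1-8: every iteration applies one PSO move (Eqs. 1-2) and then
    `sd_moves` steepest-descent moves (Eqs. 5-6) to each particle."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                    # personal bests (P_id)
    g = min(P, key=fitness)[:]               # swarm best (P_gd)
    for it in range(n_iter):
        w = (n_iter - it) * (1.2 - 0.4) / n_iter + 0.4     # Eq. (3)
        alpha = 1.0 / (1 + it)               # decreasing SD step size
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d] + 2.0 * r1 * (P[i][d] - X[i][d])
                           + 2.0 * r2 * (g[d] - X[i][d]))  # Eq. (1)
                X[i][d] += V[i][d]                          # Eq. (2)
            for _ in range(sd_moves):        # SD inside the same iteration
                gv = grad(X[i])
                norm = math.sqrt(sum(c * c for c in gv))
                if norm > 0.0:
                    X[i] = [x - alpha * c / norm
                            for x, c in zip(X[i], gv)]      # Eqs. (5)-(6)
            if fitness(X[i]) < fitness(P[i]):  # update the remembered bests
                P[i] = X[i][:]
                if fitness(P[i]) < fitness(g):
                    g = P[i][:]
    return g
```

For example, on the sphere function one would call `hybrid_minimize(lambda x: sum(c * c for c in x), lambda x: [2 * c for c in x], dim=2)`. The key difference from the conventional hybrid is that the SD moves sit inside the PSO loop rather than after it.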


5 Simulation Results

To show the abilities of the new hybrid algorithm, four benchmark problems are considered. The performance of the improved PSO, the conventional hybrid algorithm, the new hybrid algorithm and the new hybrid-SD algorithm is compared on each benchmark problem. All results are obtained with 50 particles and 1200 iterations; w, c1 and c2 are linear functions of the iteration number as described in Section 2. It should be mentioned that in the New hybrid-SD and conventional hybrid algorithms the last 200 iterations are dedicated to the steepest descent algorithm. The results for the four benchmark problems were obtained with a certain number of improved-PSO particle movements and SD particle movements in each iteration of the proposed hybrid algorithm; these numbers are shown in Table 1. The limits on the search space for each benchmark problem are given in Table 2.

| Benchmark Problem | Improved PSO particle movements | SD particle movements |
|---|---|---|
| Sphere | 1 | 1 |
| De Jong | 1 | 1 |
| Rosenbrock | 1 | 4 |
| Rastrigin | 2 | 1 |

Table 1: Number of improved PSO and SD particle movements in each iteration of the proposed hybrid algorithm

Table 2 describes the four optimization problems in more detail. The PSO parameters for all benchmark problems are given in Table 3.

| Benchmark Problem | Dimension | Constraint on variables |
|---|---|---|
| Sphere | 10 | [-100, 100] |
| De Jong | 10 | [-100, 100] |
| Rosenbrock | 3 | [-100, 100] |
| Rastrigin | 5 | [-5.1, 5.1] |

Table 2: Search space of benchmark problems

5.1 Sphere Function

The sphere function is given as


| Benchmark Problem | w_initial | w_final | c1e | c1s | c2e | c2s |
|---|---|---|---|---|---|---|
| Sphere and De Jong | 1.2 | 0.4 | 2 | 2 | 2 | 2 |
| Rosenbrock and Rastrigin | 1.2 | 0.4 | 0.5 | 1.5 | 1.5 | 0.5 |

Table 3: Values of the parameters of the PSO algorithm

Figure 1: Comparison between the Algorithms for Sphere function.

f1(x) = sum_{i=1}^{n} x_i^2   (7)

The global minimum is located at the origin and its function value is zero. Figure 1 compares the algorithms on the sphere function. As can be seen, the New hybrid-SD algorithm gives a much more precise solution than the others, while the New hybrid algorithm shows relatively good results. The convergence of the improved PSO and conventional hybrid algorithms is not satisfactory.


5.2 De Jong's Function

De Jong's function is given as

f2(x) = sum_{i=1}^{n} i x_i^4   (8)

The global minimum is located at the origin and its function value is zero. Based on the simulation results in Figure 2, the New hybrid-SD algorithm has the lowest cost function among the algorithms. In addition, Figure 2 makes it clear that the New hybrid algorithm gives more exact results than the conventional hybrid and improved PSO algorithms.

Figure 2: Comparison between the Algorithms for De Jong function.

5.3 Rosenbrock's Function

Rosenbrock's function is given as

f3(x) = sum_{i=1}^{n-1} [100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2]   (9)

The global minimum is located at x_i = 1 for all i and its function value is zero. Figure 3 compares the algorithms on Rosenbrock's function. The New hybrid-SD and New hybrid algorithms give the same results in this figure, which means that the steepest descent algorithm in the last 200 iterations was not able to improve the cost function. In this figure the new hybrid algorithm


gives much more precise results than the conventional hybrid and improved PSO algorithms.

Figure 3: Comparison between the Algorithms for Rosenbrock function.

5.4 Rastrigin's Function

Rastrigin's function is given as

f4(x) = sum_{i=1}^{n} (x_i^2 - 10 cos(2 pi x_i) + 10)   (10)

The global minimum is located at the origin and its function value is zero. Figure 4 compares the algorithms on Rastrigin's function. As can be seen, the New hybrid-SD algorithm has nearly the same cost as the conventional hybrid algorithm, while the New hybrid algorithm shows better results than the improved PSO algorithm. At a glance, it can be stated that the new hybrid algorithm together with a conventional algorithm like SD gives the best results for all of the functions. The reason can be stated as follows: better local search occurs in the new hybrid algorithm, so a better initial guess is provided for the SD algorithm, which means that SD can converge to a lower cost.
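For reference, the four benchmark functions (7)-(10) can be implemented directly; this sketch assumes 1-based weights for De Jong's function, matching the index i in Eq. (8).

```python
import math

def sphere(x):                     # Eq. (7)
    return sum(xi ** 2 for xi in x)

def de_jong(x):                    # Eq. (8), weight i runs from 1 to n
    return sum((i + 1) * xi ** 4 for i, xi in enumerate(x))

def rosenbrock(x):                 # Eq. (9), minimum at (1, ..., 1)
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))

def rastrigin(x):                  # Eq. (10), minimum at the origin
    return sum(xi ** 2 - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)
```

All four reach their minimum value of zero at the stated optima, which is why the cost curves in Figures 1-4 are compared against zero.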


Figure 4: Comparison between the Algorithms for Rastrigin function.

6 Conclusion

This paper proposed a new hybrid of evolutionary and conventional algorithms. Unlike similar past works, this method uses the abilities of evolutionary and conventional algorithms at the same time. In the former hybrid technique, the iteration at which the evolutionary algorithm should stop and the conventional algorithm (with the initial guess obtained by the evolutionary algorithm) should start was not determined; the proposed hybrid method solves this problem. An improved PSO algorithm and the steepest descent algorithm were selected as the evolutionary and conventional algorithms, respectively. Simulation results of the new hybrid algorithm on four benchmark problems show that this method not only has a higher convergence speed than the conventional method but also obtains a more reliable solution.

References

[1] A. Ratnaweera, S. K. Halgamuge and H. C. Watson, Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients, IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240-255, 2004.

[2] B. Liu, L. Wang and Y. H. Jin, An effective PSO-based memetic algorithm for flow shop scheduling, IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 37, no. 1, pp. 18-27, Feb. 2007.

[3] H. Modares and M. Naghibi-Sistani, Solving nonlinear optimal control problems using a hybrid IPSO-SQP algorithm, Engineering Applications of Artificial Intelligence, 2010.

[4] J. Kennedy and R. Eberhart, Particle swarm optimization, in Proc. IEEE Int. Conf. Neural Networks, vol. IV, Perth, Australia, 1995, pp. 1942-1948.

[5] N. Franken and A. P. Engelbrecht, Particle swarm optimization approaches to coevolve strategies for the iterated prisoner's dilemma, IEEE Trans. Evol. Comput., vol. 9, no. 6, pp. 562-579, Dec. 2005.

[6] P. N. Suganthan, Particle swarm optimizer with neighborhood operator, in Proc. IEEE Congress on Evolutionary Computation, Washington, D.C., 1999, pp. 1958-1962.

[7] R. C. Eberhart and Y. H. Shi, Particle swarm optimization: developments, applications and resources, in Proc. IEEE Congr. Evol. Comput., Seoul, Korea, 2001, pp. 81-86.

[8] S. Y. Ho, H. S. Lin, W. H. Liauh and S. J. Ho, OPSO: Orthogonal particle swarm optimization and its application to task assignment problems, IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 38, no. 2, pp. 288-298, Mar. 2008.

[9] X. D. Li and A. P. Engelbrecht, Particle swarm optimization: an introduction and its recent developments, in Proc. Genetic Evol. Comput. Conf.

Received: June, 2011
