Ant Colony Optimization for Continuous Domains

Ping Guo

School of Computer Science Chongqing University Chongqing, 400044, China

Abstract—The ant colony algorithm has been used successfully to solve discrete problems. However, its discrete nature restricts its application to continuous domains. In this paper, we introduce two ACO methods for continuous domains. The first follows the idea of ACO in discrete space: the continuous space is divided into several regions, pheromone is assigned to each region discretely, and the ants rely on the pheromone to construct a path and finally find the solution. The second method is essentially different from the first: the distribution of pheromone over the definition domain is simulated with a normal distribution. To improve the solving ability of the two algorithms, the pattern search method is applied. Experimental results on a set of test functions show that both algorithms can find solutions in continuous domains well.

Keywords—swarm intelligence, ACO, continuous domains, normal distribution

Lin Zhu

School of Computer Science Chongqing University Chongqing, 400044, China

I. INTRODUCTION

Inspired by the foraging behavior of ants, the first ant colony algorithm was proposed in 1992. The main idea of an ant system is that ants deposit pheromone on the foraging path. In early research, the algorithm was applied successfully to combinatorial optimization problems, including the TSP (Traveling Salesman Problem), QAP (Quadratic Assignment Problem), JSP (Job Shop scheduling Problem), and so on [2]. The classic ant colony algorithms solve combinatorial optimization problems by sharing pheromone and co-evolving, forming a positive feedback loop to find the solution [3]. However, they are not suitable for continuous optimization problems, so solving such problems requires extending the original ant colony algorithm. Generally, the extension can be accomplished either by discretizing the continuous domain into several regions [6] or by shifting from a discrete probability distribution to a continuous one such as a Gaussian probability density function [2]. Most researchers take the first approach, in which the pheromone is assigned discretely, but a problem arises as the scale of the problem grows: after discretization, the solution space of the problem increases sharply. For instance, if the problem is n-dimensional and each dimension is discretized into x regions, the problem has x^n feasible paths, which is exponential growth. So for large-scale problems, the applicability of this method still needs to be verified [3].

The other method is mainly based on a Gaussian probability density function: pheromone is assigned over the definition domain like a normal distribution function, so at the optimal point the pheromone is stronger than at other points. Ants search in a continuous space just like natural ants looking for a food source. In this paper, we take up these two ideas, propose two ant colony algorithms for continuous domains, and analyze the performance and application scope of the two methods experimentally. The paper is organized as follows. In Section II, the basic models of the two algorithms are given. In Section III, we use the pattern search method to improve the two algorithms. In Section IV, we analyze the performance of the two algorithms and their improved versions through experiments. Finally, the conclusions are summarized in Section V.

II. THE RELEVANT RESEARCH

In this section, we introduce the basic models of the two algorithms in detail: we describe the pheromone distribution, then give the state transition rules, and finally introduce the pheromone updating rules.

A. The definition of the continuous domains problem

Before introducing the algorithms, we first give a general form of the continuous function optimization problem. A model p = (S, Ω, f) of a continuous domains problem (CDP) consists of [4]: (1) a search space S defined over a finite set of decision variables and a set Ω of constraints among the variables; (2) an objective function f: S → R+ = {r | r ≥ 0} to be minimized. The search space S is defined as follows. Given is a set of variables xi (i = 1, ..., n), with xi ∈ [xi_min, xi_max]. A variable instantiation is the assignment of a value to a variable xi. A solution s ∈ S, i.e., a complete assignment in which each decision variable has a value that satisfies all the constraints in Ω, is a feasible solution of the given CDP. If the set Ω is empty, p is called an unconstrained problem model; otherwise it is called a constrained one. A solution s* ∈ S is called a global optimum if and only if f(s*) ≤ f(s) for all s ∈ S. The set of all globally optimal solutions is denoted by S* ⊆ S. Solving a CDP requires finding at least one s* ∈ S*.
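As a concrete illustration of this definition (our own example, not from the paper), the unconstrained CDP for the 2-dimensional sphere function can be written down directly; the names `f`, `BOUNDS`, and `is_feasible` are illustrative:

```python
# A minimal, illustrative CDP instance p = (S, Omega, f): the 2-D sphere
# function with S = [-5.12, 5.12]^2 and an empty constraint set Omega.

def f(s):
    """Objective to be minimized; f: S -> R+."""
    return sum(xi * xi for xi in s)

BOUNDS = [(-5.12, 5.12), (-5.12, 5.12)]  # [xi_min, xi_max] for each dimension

def is_feasible(s):
    """With Omega empty, feasibility is just the box constraint defining S."""
    return all(lo <= xi <= hi for xi, (lo, hi) in zip(s, BOUNDS))

# The global optimum s* = (0, 0) satisfies f(s*) <= f(s) for all s in S.
```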


To measure whether a solution is good or bad, we introduce a fitness function of the form: fitness(s) = −f(s).

B. The discrete ant colony optimization (DACO)

Inspired by ACO for solving the TSP, DACO also constructs a feasible path as the solution; we look for the path that maximizes the fitness function.

1) The pheromone distribution of DACO

The pheromone of DACO is distributed discretely over the solution space S. First, we divide the continuous domain into several regions. Suppose each dimension xi is divided into N discrete regions, as Figure 1 shows; the continuous optimization problem then becomes an n-dimensional decision problem, and each path an ant constructs is a solution.
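The discrete pheromone layout and the construction and update rules of DACO (formulas (1)–(5) in this subsection) can be sketched roughly as follows; the names and the per-dimension simplification are our own illustration, not the paper's implementation:

```python
import random

def make_regions(lo, hi, n_regions):
    """Split [lo, hi] into N equal regions; one pheromone value per region."""
    w = (hi - lo) / n_regions
    return [(lo + k * w, lo + (k + 1) * w) for k in range(n_regions)]

def choose_region(tau, q0=0.8, rng=random):
    """Pseudo-random proportional rule in the spirit of formulas (1)-(2):
    with probability q0 exploit the region with the most pheromone,
    otherwise sample a region proportionally to its pheromone."""
    if rng.random() <= q0:
        return max(range(len(tau)), key=lambda i: tau[i])
    r = rng.uniform(0.0, sum(tau))
    acc = 0.0
    for i, t in enumerate(tau):
        acc += t
        if r <= acc:
            return i
    return len(tau) - 1

def local_update(tau, i, rho=0.7, tau0=0.01):
    """Local rule, formula (3): weaken pheromone on a region just visited."""
    tau[i] = (1.0 - rho) * tau[i] + rho * tau0

def global_update(tau, i, fitness_max, C=1000.0):
    """Global rule, formulas (4)-(5): reinforce the regions passed by the
    iteration-optimal ant; C keeps the denominator positive."""
    tau[i] += 1.0 / (fitness_max + C)
```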

2) The state transition rules of DACO

When an ant starts to search, it chooses the next region and moves to it, and this continues until all ants have reached the end node. Ant k chooses the next region i of dimension xn according to the transition probability p^k_{xni}(t). To reflect the ants' exploration and exploitation properties, the transition probability in DACO is:

p^k_{xni}(t) = argmax_i { τ_{xni}(t) },  if q ≤ q0  (i, j ∈ [1, m])
p^k_{xni}(t) = J,  otherwise        (1)

J = τ_{xnj}(t) / Σ_{s=1}^{m} τ_{xns}(t),  (j, s ∈ [1, m]);  otherwise J = 0        (2)

where q ∈ [0, 1] is a random number used to determine the probability of random selection, and q0 ∈ [0, 1] is a constant, usually set to 0.8.

3) The pheromone updating rules of DACO

In the DACO algorithm, the pheromone update is divided into two steps. The first is the local updating rule: while the ants build their tours, the residual pheromone on the paths they pass is weakened constantly by formula (3), so that the probability of the next ant choosing the same path is reduced, unless that path has already been confirmed as the best by many cycles of the loop:

τ_{xni}(t + 1) = (1 − ρ)·τ_{xni}(t) + ρ·τ0        (3)

where τ_{xni}(t) is the residual pheromone on region i of dimension xn at moment t; at the beginning (t = 0), τ_{xni}(0) = C. The larger τ_{xni}(t) is, the more ants have chosen region i of dimension n by moment t. τ0 is the initial pheromone deposited when an ant chooses the region, and ρ ∈ [0, 1] is the evaporation rate.

The second is the global updating rule. When an ant reaches the end node, its path has been built as Figure 1 shows. For this path we can assign a value to each variable of X = (x1, x2, ..., xn), used to compute the fitness value, by creating a random value in each region of the path. After all ants have finished tour construction, we choose the ant whose fitness value is maximum, called the iteration-optimal ant. If its fitness value is larger than that of the global-optimal ant, the previous global-optimal ant is replaced. When all ants have finished path construction, the pheromone of the regions passed by the iteration-optimal ant is updated:

τ_{xni}(t + n) = τ_{xni}(t) + Δτ        (4)

Δτ = 1 / (fitness_max + C)        (5)

This updating rule reinforces the optimal path in order to increase the convergence speed of the algorithm. Here fitness_max is the maximum fitness value of the function in this iteration, and C is a constant; C is added to prevent fitness_max from being negative or close to 0, so C is generally a large positive number.

C. Ant colony optimization based on Gaussian distribution (GACO)

Based on the features of continuous problems, we change "ants finding the shortest circuit" into "ants finding the optimal food source" over the feasible domain S. Every point X = (x1, x2, ..., xn) is a feasible solution.

1) The pheromone distribution of GACO

The pheromone is distributed discretely in DACO, while in GACO, following the way ants find a food source, the pheromone distribution should suffuse the whole space, with every ant's pheromone overlaying the others'. While the ants search for the food source, the better a food source is, the more ants come to it and the more pheromone accumulates. To simulate this situation, we use a Gaussian distribution function to model the distribution of pheromone: at a point X = (x1, x2, ..., xn), the pheromone on each dimension xi in fact follows the same form of Gaussian distribution function:

φ(xi) = (1 / (√(2π)·σi)) · exp( −(xi_opt − xi)² / (2·σi²) )        (6)

where xi_opt is the distribution center, standing for the location of the optimal ant, and σi is the width of the distribution function, standing for the degree of aggregation of the ants.

2) The state transition rules of GACO

In each round of iteration, we use formula (6) as the probability density function for sampling and generate m new feasible solutions X_new = {X1_new, X2_new, ..., Xm_new}; then we use formula (7) to determine every point's probability and choose a point Xs_new as the new location of the ant:

P(Xj_new) = φ(Xj_new) / Σ_{j=1}^{m} φ(Xj_new)        (7)

This operation also reflects the ants' exploration and exploitation properties, because the sampling is random: on the one hand, points near the optimal ant have a relatively large probability of being chosen; on the other hand, points all over the solution space still have a chance to be chosen.

3) The pheromone updating rules of GACO

In GACO, when all ants have completed a state transition, we need to update the pheromone: the distribution center becomes the location of the optimal ant of this iteration, and the width σi of the distribution function also needs to be updated, by the formula:

σi² = [ Σ_{j=1}^{k} (xi_opt − xi_j)² / (fitness(X_opt) − fitness(Xj)) ] / [ Σ_{j=1}^{k} 1 / (fitness(X_opt) − fitness(Xj)) ]        (8)

where k is the number of ants and fitness(X) is the fitness value of point X. At the beginning of the search, the ants do not know where the best food source is, so the width σi should be large enough that the pheromone is distributed almost uniformly over the solution space S. As the number of iterations increases, the ants gradually move to the vicinity of the best food source and σi becomes smaller and smaller, which also reflects the degree of aggregation of the pheromone.

III. THE IMPROVEMENT STRATEGY OF DACO AND GACO

In the last section we introduced DACO and GACO. To improve their global optimization performance, the pattern search method, which is well suited to local search, is used in this section. This method, composed of exploratory search and pattern moves, was proposed by Hooke and Jeeves. The detailed steps are as follows [5]:

(1) Preparation phase. Choose a point A = (x1, x2, ..., xn)^T as the start point, compute the fitness value fitness(A), and set the tentative step length d.

(2) Tentative search. Starting from the first dimension, compute the fitness values of the two points (x1 ± d, x2, ..., xn)^T, compare them with fitness(A), and take the best point as the new start point; then handle the second dimension, and so on until the last dimension. Suppose that at the end the best point is B; compute fitness(B).

(3) Pattern move. Draw a straight line through points A and B and extend it to a point C whose coordinates satisfy C − A = λ(B − A), where λ is a coefficient adjusting the range of the pattern move; generally λ = 0.0–2.5. Compute fitness(C).

At step (2), if the fitness value at (x1, x2, ..., xi ± d, ..., xn)^T is smaller than at (x1, x2, ..., xn)^T, we let d = 0.5d. At step (3), if fitness(C) < fitness(B), we reduce the coefficient λ.

We apply this strategy to the two algorithms above. For DACO, when all ants have finished constructing their paths, instead of generating a solution randomly in the selected regions we use pattern search to find the best solution there; we name this algorithm DACO2. For GACO, after an ant obtains a new point A_new through sampling, if fitness(A_new) < fitness(A_old), we start from A_old and use pattern search to find a better point; otherwise A_new is the point the ant chooses. We name this algorithm GACO2. After the improvement, the solving steps of DACO2 and GACO2 are as follows.

DACO2:
(1) Initialization. Set the initial value of each parameter of DACO, discretize the continuous domain into several regions, place every ant at the start node, and set τ_{xni}(0) = C.
(2) Ant state transition. Depending on the probability p^k_{xni}(t) of formula (1), each ant chooses a region on xn.
(3) Update the local pheromone according to formula (3).
(4) Use pattern search to obtain a solution for each ant, and compute its fitness value.
(5) Update the global pheromone according to formula (4); the ants return to the start node.
(6) Iterative loop. Repeat steps (2) to (5) until the number of iterations reaches the maximum n0. Save the global optimal path, cut off the other regions, and repeat steps (1) to (5) until the width of the regions is smaller than ε.

GACO2:
(1) Initialization. Set the initial value of each parameter of GACO, place the k ants at random locations, determine the location of the optimal ant X_opt, and set σi = 3·max_j{xj_max − xj_min}.
(2) Ant state transition. Use formula (6) as the probability density function for sampling and generate m feasible solutions X_new = {X1_new, X2_new, ..., Xm_new}, then move ant i as follows:
  A. Use formula (7) to determine every point's probability and choose a point Xs_new.
  B. If fitness(Xs_new) ≤ fitness(Xi), take the point Xi as the start point and use pattern search to find a better point X_better, to which ant i moves; otherwise ant i moves to Xs_new.
(3) Pheromone update. According to the ants' new locations, compute each ant's fitness value, choose the best point X_opt, and update σi and φ(xi) according to equations (6) and (8).
(4) Iterative loop. Repeat steps (2) and (3) until the number of iterations reaches the maximum.

IV. EXPERIMENTS AND RESULTS
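The Hooke-Jeeves pattern search used to improve both algorithms can be sketched as below. This is a minimal illustration of the three phases described in Section III; the maximization convention (the paper maximizes fitness), the fixed λ, and the stopping rule are our own assumptions, not the paper's implementation:

```python
def pattern_search(fitness, start, d=0.5, lam=1.0, tol=1e-6, max_iter=200):
    """Hooke-Jeeves sketch: exploratory moves of step d along each axis,
    then a pattern move C = B + lam*(B - A); d is halved when no axis
    move improves. Maximizes `fitness` and returns (best_point, best_value)."""
    a = list(start)
    fa = fitness(a)
    for _ in range(max_iter):
        # Exploratory search around the current base point A.
        b, fb = list(a), fa
        for i in range(len(b)):
            for step in (d, -d):
                trial = list(b)
                trial[i] += step
                ft = fitness(trial)
                if ft > fb:
                    b, fb = trial, ft
                    break
        if fb > fa:
            # Pattern move through A and B.
            c = [bi + lam * (bi - ai) for ai, bi in zip(a, b)]
            fc = fitness(c)
            if fc > fb:
                a, fa = c, fc
            else:
                a, fa = b, fb
        else:
            d *= 0.5  # shrink the tentative step length
            if d < tol:
                break
    return a, fa
```

For example, maximizing fitness(x, y) = −((x − 1)² + (y + 2)²) from the origin converges to the optimum (1, −2).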

A. Test Function

To test the performance of DACO, DACO2, GACO, and GACO2, some representative functions [2-4] are chosen, as listed in Table I. F1 is a simple test function with only one optimal point in the definition space. F2 also has only one optimal point, but near that point the function is pathological: the function values change very little. F3 likewise has a single global optimum, but it has on the order of 10^n extreme points, so it is easy to fall into a local optimum. F4 is a widely used multi-modal test function with only one global optimal point. For F5, the definition space is very large, and it is very hard to obtain the optimal point.

TABLE I. THE LIST OF TEST FUNCTIONS

Function         Formula                                                                   Domain               Best Solution   Min F
F1 (Sphere)      F1(x) = Σ_{i=1}^{n} xi²                                                   xi ∈ [-5.12, 5.12]   xi = 0          0
F2 (Rosenbrock)  F2(x) = Σ_{i=1}^{n-1} [100(x_{i+1} − xi²)² + (xi − 1)²]                   xi ∈ [-5.12, 5.12]   xi = 1          0
F3 (Rastrigin)   F3(x) = Σ_{i=1}^{n} [xi² − 10·cos(2πxi) + 10]                             xi ∈ [-5.12, 5.12]   xi = 0          0
F4 (Ackley)      F4(x) = −20·exp(−0.2·√((1/n)·Σ xi²)) − exp((1/n)·Σ cos(2πxi)) + 20 + e    xi ∈ [-32, 32]       xi = 0          0
F5 (Griewank)    F5(x) = (1/4000)·Σ xi² − Π_{i=1}^{n} cos(xi/√i) + 1                       xi ∈ [-100, 100]     xi = 0          0
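The five benchmarks of Table I can be written out directly from their standard definitions (these are our transcriptions, using the common forms of Ackley and Griewank):

```python
import math

def sphere(x):        # F1
    return sum(xi * xi for xi in x)

def rosenbrock(x):    # F2
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1.0) ** 2
               for i in range(len(x) - 1))

def rastrigin(x):     # F3
    return sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0
               for xi in x)

def ackley(x):        # F4
    n = len(x)
    return (-20.0 * math.exp(-0.2 * math.sqrt(sum(xi * xi for xi in x) / n))
            - math.exp(sum(math.cos(2.0 * math.pi * xi) for xi in x) / n)
            + 20.0 + math.e)

def griewank(x):      # F5
    s = sum(xi * xi for xi in x) / 4000.0
    p = math.prod(math.cos(xi / math.sqrt(i + 1)) for i, xi in enumerate(x))
    return s - p + 1.0
```

Each function attains its minimum of 0 at the best solution listed in Table I.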

B. Parameter settings

For DACO and DACO2, set ρ = 0.7, τ0 = 0.01, q0 = 0.8, ε = 0.0001, and n = 3; d is half the region width, the number of ants is m = 32, and the number of iterations is n0 = 600. For GACO and GACO2, set the number of ants to 40, the maximum number of iterations to 200, and the number of samples for each ant to m = 10.

C. Performance testing and results analysis

To reduce the effect of chance, we run every algorithm 30 times. When the optimization result lies within a small neighborhood of the optimal solution, we consider that the algorithm has found the optimal solution. The results are shown in Table II, which includes the optimal solution found and the number of runs (out of 30) in which the optimal solution was reached. From the experimental results in Table II, we can draw the following conclusions:

A. For simple or low-dimensional functions such as F1 (n=2), F2 (n=2), F3 (n=2), F4 (n=2, 5), and F5 (n=2), all four algorithms find the global optimum successfully.

B. Neither GACO nor DACO is strong at solving high-dimensional, complex problems; both easily fall into local optima.

C. With the pattern search strategy applied to DACO and GACO respectively, the ability to solve high-dimensional, complex problems is improved; obviously GACO improves more. Comparing GACO with GACO2, we can see a great improvement in both quality and efficiency.

D. Comparing DACO2 with GACO2 on high-dimensional, complex problems, in particular functions with multiple extreme points in the definition space, DACO2 easily falls into a local optimum.

On the whole, GACO2 does better than DACO2. However, in the experiments we found that when solving F4, DACO2 always finds the optimal solution quickly: for F4 (n=5), DACO2 needs only 0.64 seconds on average, while GACO2 needs 3.09 seconds on average. Similarly, when solving F1, DACO2 is also faster than GACO2. So for simple problems (with one and only one extreme point), DACO2 may be the better choice.

V. CONCLUSION

In this paper, we introduced two algorithms for solving problems in continuous domains. The two algorithms are fundamentally different: the first algorithm's idea comes from ACO for the TSP, while the second simulates the process of ants finding the optimal food source and uses a Gaussian distribution function to model the distribution of the pheromone. On this basis, we improved both algorithms so that they can solve more high-dimensional and complex problems. The experimental results show that, for simple problems, DACO2 can obtain the optimal solution faster than GACO2, but when solving high-dimensional and complex problems, GACO2 performs better.

TABLE II. THE EXPERIMENTAL RESULTS: the optimal values found, the number of runs (out of 30) in which each algorithm reached the optimum, and the average iteration counts, for DACO, GACO, DACO2, and GACO2 on F1-F5 with n = 2, 5, 10.

ACKNOWLEDGMENT

This work was supported by the National Natural Science Foundation of China Youth Fund (Grant No. 1010200220090070).

REFERENCES

[1] M. Dorigo, L. M. Gambardella. Ant Colony System: A Cooperative Learning Approach to the Traveling Salesman Problem. IEEE Transactions on Evolutionary Computation, 1997, Vol. 1, No. 1, p. 53-66.
[2] Jing Xiao, LiangPing Li. A hybrid ant colony optimization for continuous domains. Expert Systems with Applications, 2011, Vol. 38, No. 9, p. 11072-11077.
[3] Chen ZhiGang, Chen DeZhao, Wu XiaoHua. Continuous ant colony optimization system based on normal distribution model of pheromone. Systems Engineering and Electronics, 2006, Vol. 28, No. 3, p. 458-462. (In Chinese).
[4] Krzysztof Socha, Marco Dorigo. Ant colony optimization for continuous domains. European Journal of Operational Research, 2008, Vol. 185, No. 3, p. 1155-1173.
[5] Hu ShangXu, Chen DeZhao. Observation Data Analysis and Processing. HangZhou: Zhejiang University Press, 1996, p. 167-190. (In Chinese).
[6] Wang Lei, Wu Qidi. Ant system algorithm for optimization in continuous space. Proc. of the IEEE International Conference on Control Applications, 2001, p. 401-406.
[7] Chen Ye. Ant Colony System for Continuous Function Optimization. Journal of Sichuan University (Engineering Science Edition), 2004, Vol. 36, No. 6, p. 117-120. (In Chinese).
[8] Walid Tfaili, Patrick Siarry. A new charged ant colony algorithm for continuous dynamic optimization. Applied Mathematics and Computation, 2008, Vol. 197, No. 2, p. 604-613.
[9] Zhou JianXin, Yang WeiDong, Li Qing. Improved Ant Colony Algorithm and Simulation for Continuous Function Optimization. Journal of System Simulation, 2009, Vol. 21, No. 6, p. 1685-1688. (In Chinese).

