The Artificial Bee Colony (ABC) optimisation

The Artificial Bee Colony (ABC) algorithm is a swarm-based meta-heuristic algorithm that was introduced by Karaboga in 2005 for optimizing numerical problems. It was inspired by the clever foraging behaviour of honey bees. To understand the ABC algorithm, one first needs to understand the bee colony.

About Bee Colony

The behaviour of real honey bees can be summarised in terms of food sources, employed bees, unemployed bees, foraging behaviour, and dances.

1. Food sources
When hunting for food, a bee chooses a certain flower (a food source). From this food source the bee gathers information such as the amount of nectar it holds, the effort needed to collect that nectar, and the distance and direction of the source from the nest. For ease and simplicity, the bee maintains this information as a single quantity, which can be referred to as the overall profitability of that particular food source.

2. Employed bees
A certain group of bees exploits the available food sources. These bees are known as employed bees, and each of them maintains the profitability of its associated food source, i.e., its richness, distance, and direction from the hive.

3. Unemployed bees
With a certain likelihood, employed bees share their information with another set of bees known as unemployed bees. Unemployed bees are in charge of aggregating the information from the employed bees and choosing a food source to exploit. These unemployed bees are further separated into two groups.
> Onlooker bees gather information from the colony's employed bees and, after analysing the data, choose a food source to exploit.
> Scout bees are in charge of locating new food sources around the hive.

4. Foraging behaviour
Foraging is the most significant behaviour of a honey bee swarm. During the foraging phase, the bee exits the hive and begins looking for food. When a bee finds a food source, it extracts and stores the nectar. The honey-making process then begins with the production of enzymes, and after reaching the hive, the nectar is unloaded into empty cells. Finally, the bee uses various sorts of dance to convey the knowledge to the other bees in the hive.

5. Dance
To communicate the information collected about food sources to the other bees in the hive, the employed bee performs a dance on various portions of the hive area. Employed bees execute one of the following dance types depending on the profitability of the food source.
+ The round dance is done when the food source is close to the hive.
+ The waggle dance notifies other bees about the direction of the food source in relation to the sun. It also indicates that the food source is far away from the hive; the distance between the food source and the hive is reflected in the speed of the dance.
+ If a bee takes a long time to discharge nectar and then begins to tremble, it indicates that the present profitability of the food source is unknown.

About Artificial Bee Colony (ABC) Algorithm

The foraging behaviour of honey bees inspired the ABC algorithm. The honey bee swarm is an example of a natural swarm that searches for food in a coordinated and intelligent manner. The honey bee swarm has several characteristics, including the ability to convey information, memorise the surroundings, retain and distribute knowledge, and make decisions based on that information.
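This single "profitability" number is what the algorithm treats as a fitness value. As a hedged illustration (the mapping below is the one commonly used in Karaboga-style ABC implementations, not something stated in this article), a minimisation objective f is often converted into a profitability score like this:

def abc_fitness(objective_value: float) -> float:
    """Map a minimisation objective to a 'profitability' (fitness) score.

    Commonly used ABC mapping (an assumption here, not from the article):
    richer sources, i.e. lower objective values, get higher fitness.
    """
    if objective_value >= 0:
        return 1.0 / (1.0 + objective_value)
    return 1.0 + abs(objective_value)

Onlooker bees later select food sources with probability proportional to this score.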
The swarm adapts to changes in the environment by dynamically assigning jobs and progressing through social learning and teaching. The ABC algorithm is a population-based optimisation method that evaluates fitness, so the population of candidate solutions is expected to gravitate toward the better-fitness areas of the search space. Through natural motivation, population-based optimisation algorithms find near-optimal solutions to challenging optimisation problems.

Working of ABC algorithm

Swarm-based optimization algorithms use collaborative trial-and-error approaches to identify solutions. The ABC optimisation algorithm is driven by the peer-to-peer learning behaviour of social colonies. ABC maintains a population of potential solutions and finds the optimal solution through an iterative process. The two essential factors that determine the development of an ABC population are variation and selection. The variation process explores diverse sections of the search space, while the selection procedure guarantees that past experience is exploited.

The ABC algorithm is divided into four phases: the initialization phase, the employed bees phase, the onlooker bees phase, and the scout bees phase.

In the initialisation phase, ABC generates a uniformly distributed population of solutions, where each solution is a dimensional vector. The number of dimensions depends on the number of variables in the optimization problem for a particular food source in the population.

The employed bees modify the current solution based on information from individual experience and the fitness value of the new solution. If the fitness value of the new food source is higher than that of the old food source, the bee updates its position with the new one and discards the old one. The position is updated using the dimensional vectors defined in the initialisation phase together with a step size. The step size is a random number between -1 and 1.

[Figure: position updating in the employed bee phase over a two-dimensional search space; the highlighted box is the randomly chosen dimension, Xi is the current bee and Xk the randomly chosen bee, and V is the new food position.]

The accompanying diagram depicts the position-updating procedure in the employed bee phase. A two-dimensional search space is used. The highlighted box depicts the randomly chosen dimension, while Xi shows the current position of a bee and Xk is a bee chosen at random. In this phase, the value of the random bee in one dimension is subtracted from the value of the current bee in the same dimension. This difference is multiplied by the step size, which is a random number in [-1, 1]. Finally, the corresponding dimension of the new food position V is calculated by adding this quantity to the dimensional vector of Xi. The new vector V is formed in the vicinity of Xi and shares all its other dimensions with Xi.

The employed bees share the fitness information (nectar) of the updated solutions (food sources), as well as their location information, with the onlooker bees in the hive. Onlooker bees evaluate the given data and choose a solution with a probability based on its fitness. Like the employed bee, the onlooker bee modifies the solution in its memory and assesses the fitness of the candidate source. If the fitness is higher than that of the previous one, the bee memorises the new position and forgets the old one.

The position of a food source is deemed abandoned if it is not updated for a predefined number of cycles. The bee of the abandoned food source becomes a scout bee, and the food source is replaced with a randomly generated food source inside the search space. The fixed number of cycles, known as the limit for abandonment, is a critical control parameter of ABC.
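The phases above can be summarised in a short sketch. This is a minimal, hedged illustration in Python rather than a reference implementation: the neighbour update v_j = x_ij + phi * (x_ij - x_kj), the roulette-wheel onlooker selection, and the limit-based scout replacement follow the description above, while names such as n_food and limit are illustrative choices.

import numpy as np

def abc_minimize(f, bounds, n_food=20, limit=50, max_cycles=500, seed=0):
    """Minimal Artificial Bee Colony sketch for minimisation (illustrative)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.shape[0]
    X = rng.uniform(lo, hi, size=(n_food, dim))   # food sources
    fx = np.apply_along_axis(f, 1, X)             # objective values
    trials = np.zeros(n_food, dtype=int)          # cycles without improvement

    def fitness(v):                               # common ABC fitness mapping
        return 1.0 / (1.0 + v) if v >= 0 else 1.0 + abs(v)

    def try_neighbour(i):
        k = rng.choice([j for j in range(n_food) if j != i])  # random partner
        j = rng.integers(dim)                     # random dimension
        v = X[i].copy()
        phi = rng.uniform(-1.0, 1.0)              # step size in [-1, 1]
        v[j] = np.clip(X[i, j] + phi * (X[i, j] - X[k, j]), lo[j], hi[j])
        fv = f(v)
        if fv < fx[i]:                            # greedy selection
            X[i], fx[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for _ in range(max_cycles):
        for i in range(n_food):                   # employed bee phase
            try_neighbour(i)
        fit = np.array([fitness(v) for v in fx])
        p = fit / fit.sum()                       # onlooker selection probabilities
        for _ in range(n_food):                   # onlooker bee phase
            try_neighbour(rng.choice(n_food, p=p))
        worst = np.argmax(trials)                 # scout phase: abandon stale source
        if trials[worst] > limit:
            X[worst] = rng.uniform(lo, hi)
            fx[worst] = f(X[worst])
            trials[worst] = 0
    best = np.argmin(fx)
    return X[best], fx[best]

# Example: minimise the sphere function in 2D
xs, val = abc_minimize(lambda x: float(np.sum(x * x)),
                       (np.array([-5.0, -5.0]), np.array([5.0, 5.0])))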
[Flowchart: the ABC loop — initialise food sources and evaluate their quality; produce new food sources for the employed bees and evaluate solution quality; compare the quality of the new and stored solutions; produce new sources for the onlooker bees and evaluate them; calculate new positions for abandoned food sources; store the position of the best food source; repeat until the stopping criterion is met.]

How is the ABC algorithm customised to optimize problems?

In order to customise the ABC algorithm for constrained problems, Deb's constraint-handling approach is utilised instead of the ABC algorithm's greedy selection procedure, since Deb's method needs only three heuristic criteria. Deb's technique employs a tournament selection operator, which compares two solutions at a time and always enforces the following requirements.
+ Any feasible solution is preferred over an infeasible solution.
+ Between two feasible solutions, the one with the better objective function value is preferred.
+ Between two infeasible solutions, the one with the lower constraint violation is preferred.

The ABC method does not require the initial population to be feasible, since initialization with feasible solutions is a time-consuming procedure and, in certain circumstances, it is impossible to construct a feasible solution randomly. Because Deb's rules are used instead of greedy selection, the structure of the algorithm already steers the solutions toward the feasible region during the run. The algorithm's scout production process provides a diversity mechanism that permits new, and possibly infeasible, individuals to enter the population.
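A compact way to see Deb's three rules is as a pairwise comparison. The sketch below is an illustrative Python rendering of the tournament comparison described above; the tuple representation of a solution is an assumption made here for clarity.

def deb_better(a, b):
    """Deb's tournament rules: return True if solution a beats solution b.

    Each solution is assumed to be an (objective_value, constraint_violation)
    pair for a minimisation problem; violation == 0 means feasible.
    """
    fa, va = a
    fb, vb = b
    if va == 0 and vb > 0:      # rule 1: feasible beats infeasible
        return True
    if va > 0 and vb == 0:
        return False
    if va == 0 and vb == 0:     # rule 2: both feasible, better objective wins
        return fa < fb
    return va < vb              # rule 3: both infeasible, smaller violation wins

In a constrained ABC, this comparison simply replaces the greedy "keep the fitter food source" step in the employed and onlooker bee phases.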
Real time industry use cases of ABC

The ABC algorithm has become prominent because of its robustness and ease of application. Here are some of its applications.

> Recommendation system used in Facebook
The suggested method's goal was to efficiently find appropriate learning resources. ABC was used to create a customised auxiliary material recommendation system on Facebook, which suggests relevant auxiliary items for a student based on their learning style, interests, and difficulty level.

> Optimizing network configuration
The algorithm was used to solve the network reconfiguration problem in a radial distribution system in order to reduce real power loss, enhance the voltage profile, and balance feeder load, all while adhering to the radial network structure's requirement that all loads remain energised. In terms of solution quality and computing efficiency, the ABC algorithm produced superior results to the other approaches tested.

> Optimizing camera calibration
The ABC algorithm was used to solve direct linear transformation (DLT), a camera calibration approach that establishes a linear relationship between 3D object coordinates and the 2D picture plane. The ABC algorithm's output was compared to the DE algorithm's output.

> Image template matching
The goal was to locate a pattern or reference picture (template) of an item in a target landscape scene in which the item has been moved, scaled, rotated, and/or partially obscured. As a result, the specified reference picture can be located in the target landscape image. Experiments with grayscale and colour pictures revealed that ABC is quicker and more accurate than other evolutionary algorithms.

> Segmentation of medical images
Segmentation of Magnetic Resonance (MR) images of the brain is complex and is considered a huge challenge in image processing. The ABC algorithm combined with the fuzzy C-means (FCM) algorithm was used to segment brain images. The aim of using the ABC algorithm was to reduce the time taken as well as to reach higher quality than that obtained by the previous algorithm.

Conclusion

The ABC algorithm is largely unaffected by rising problem complexity and by the initial parameter values. ABC has the flaw of premature convergence, which results in the loss of its exploration and exploitation capabilities. With this article, we have understood the operation of the Artificial Bee Colony algorithm, its resemblance to a real bee colony, and its utilization in optimizing problems.

Particle Swarm Optimization (PSO): Introduction

Particle swarm optimization (PSO) has been successfully applied in many research and application areas. For my part, I really enjoyed the application of this algorithm in the article by G. Sermpinis [1] on foreign exchange rate forecasting.

It has been demonstrated that PSO can obtain better results in a faster, cheaper way compared with other methods. It can also be parallelized. Moreover, it does not use the gradient of the problem being optimized. In other words, unlike traditional optimization methods, PSO does not require the problem to be differentiable. Last but not least, it has very few hyperparameters. These parameters are very simple to understand and do not require advanced notions. With the same hyperparameters, PSO will work on a very wide variety of tasks, which makes it a very powerful and flexible algorithm.

Particle swarm optimization is one of the bio-inspired algorithms, and it is a simple one for searching for an optimal solution in the solution space. It differs from other optimization algorithms in that only the objective function is needed; it does not depend on the gradient or any differential form of the objective. It also has very few hyperparameters.

Particle Swarms

Particle Swarm Optimization was proposed by Kennedy and Eberhart in 1995. As mentioned in the original paper, sociobiologists believe a school of fish or a flock of birds that moves in a group "can profit from the experience of all other members". In other words, while a bird is flying and searching randomly for food, all the birds in the flock can share their discovery and help the entire flock get the best hunt.

While we can simulate the movement of a flock of birds, we can also imagine each bird helping us find the optimal solution in a high-dimensional solution space, with the best solution found by the flock being the best solution in the space. This is a heuristic: we can never prove that the real global optimum can be found, and usually it is not. However, we often find that the solution found by PSO is quite close to the global optimum.

PSO is best used to find the maximum or minimum of a function defined on a multidimensional vector space. Similar to the flock of birds looking for food, we start with a number of random points in the space (call them particles) and let them look for the minimum point in random directions. At each step, every particle should search around the minimum point it has ever found as well as around the minimum point found by the entire swarm of particles. After a certain number of iterations, we take as the minimum of the function the minimum point ever explored by this swarm of particles.

• Where is the PSO algorithm used?
PSO is best used to find the maximum or minimum of a function defined on a multidimensional vector space.
• What type of algorithm is PSO?
Particle Swarm Optimization (PSO) is a powerful meta-heuristic optimization algorithm inspired by swarm behaviour observed in nature, such as fish and bird schooling. PSO is a simulation of a simplified social system.

• What are PSO applications?
PSO can be applied to various optimization problems, for example, energy-storage optimization. PSO can also simulate the movement of a particle swarm and can be applied in visual effects like the special effects in Hollywood films.

• Who proposed the PSO algorithm?
PSO is one of the most well-known metaheuristics; it was proposed by Kennedy and Eberhart (1995). The algorithm is inspired by swarm behaviour such as bird flocking and schooling in nature.

Inspiration of the algorithm

Particle Swarm Optimization (PSO) is a powerful meta-heuristic optimization algorithm inspired by swarm behaviour observed in nature, such as fish and bird schooling; PSO is a simulation of a simplified social system. The original intent of the PSO algorithm was to graphically simulate the graceful but unpredictable choreography of a bird flock. In nature, any bird's observable vicinity is limited to some range. However, having more than one bird allows all the birds in a swarm to be aware of a larger surface of the fitness function.

What kind of optimization problems can be solved by PSO?

PSO is well suited to solving non-linear, non-convex, continuous, discrete, and integer-variable-type problems, which is why it is an ideal solver for such optimization problems.

[Flowchart: the PSO loop — initialize a group of particles; evaluate pBest for each particle; update pBest; assign the best pBest to gBest; compute velocities; update particle positions; repeat until the target is reached.]

8.7 PARTICLE SWARM INTELLIGENT SYSTEMS

The particle swarm intelligent system (PSIS) was developed by Dr. Eberhart and Dr. Kennedy [15] as an optimisation technique known as particle swarm optimisation (PSO), inspired by the flocking of birds. Let us try to understand the concept by means of an example. A villager used to offer food to crows in front of her house. It was really surprising to see that there used to be a loud cry of crows from different places. The cry calls seem to be the communication means for the crows, inviting the others to share the food. Immediately, many crows around respond with a cry call, perhaps affirming their arrival at the food site as well. We can try to decipher two responses from these crows positioned at different places. The first response may be, 'I have already found food nearby and I don't want to fly or travel much distance to your place to have it'. The second response may be, 'Another crow has called me to share the food and, since the distance to fly at a particular speed is less, I can reach there quickly; therefore, I won't be coming to have the food at your place'. The way the information is shared among them to find the food is amazing.

As stated before, PSO simulates the behaviour of bird flocking. Suppose the following scenario is observed: a group of birds is randomly searching for food in an area. There is only one piece of food in the area being searched. None of the birds knows where the food is. However, they know how far the food is in each iteration. So what is the best strategy to find the food? The effective one is to follow the bird which is nearest to the food.
The lesson learned from this scenario can be used by PSO to solve optimisation problems. In PSO, each single solution is a 'bird' in the search space. We call it a 'particle'. All of the particles have fitness values, which are evaluated by the fitness function to be optimized, and have velocities, which direct the flying of the particles. The particles fly through the problem space by following the current optimum particles.

The concept is simple, has few parameters, is easy to implement, and has found applications in many areas. This intelligent technique has been researched extensively, and scientists are exploring its potential as an optimizer applicable to many fields of engineering.

The PSIS originated as a simulation of a simplified social system. The main idea was to simulate the unpredictable choreography of a bird flock. These simulations were analysed to incorporate nearest-neighbour velocity matching, eliminate ancillary variables, and incorporate multi-dimensional search and acceleration by distance. Based on the observation of the evolution of the algorithm, it was realized that the conceptual model is in fact an optimizer.

Particle swarm optimisation can be categorized into five parts: (i) algorithms, (ii) topology, (iii) parameters, (iv) merging or combining with other evolutionary techniques, and (v) applications. Initially, PSO was developed for real-valued problems; however, it can be extended to cover binary and discrete problems.

Its most exciting industrial application has been ingredient mix optimisation by a major American corporation. In this work, 'ingredient mix' refers to the mixture of ingredients that are used to grow production strains of microorganisms. The PSO provided an optimized ingredient mix that had over twice the fitness value found using traditional methods, at a very different location in ingredient space. An occurrence of an ingredient becoming contaminated hampered the search for a few iterations but in the end did not give poor results; PSO is thus considered robust. Particle swarm optimisation by nature searches a much larger portion of the problem space than the traditional method.

It was used for reactive power and voltage control by a Japanese electric utility; it was employed to find a control strategy with continuous and discrete control variables, resulting in a sort of hybrid binary and real-valued version of the algorithm. Voltage stability in the system was achieved using a continuous power flow technique.

Particle swarm optimisation has also proved its potential for evolving neural networks (i.e., training neural networks using PSO). Like most other evolutionary computation (EC) algorithms, it can be applied to solve most optimisation problems, as well as problems that can be converted into optimisation problems. It has been successfully applied for tracking dynamic systems and tackling multi-objective optimisation and constraint optimisation problems. The potential application areas also include classification, pattern recognition, biological system modelling, scheduling (planning), signal processing, games, robotic applications, decision-making, and simulation and identification. Examples include fuzzy controller design, job shop scheduling, real-time robot path planning, image segmentation, EEG signal simulation, speaker verification, time-frequency analysis, modelling the spread of antibiotic resistance, burn diagnosing, gesture recognition, automatic target detection, etc.
This natural phenomenon has thus proved successful and has paved the way for future research.

8.7.1 Basic PSO Method

Particle swarm optimisation is initialized with a population of random solutions, and each potential solution is assigned a randomized velocity. The potential solutions, called particles, are then 'flown' through the problem space. Each particle keeps track of its coordinates in the problem space, which are associated with the best solution or fitness it has achieved so far. The fitness value is also stored. This value is called 'pbest'. Another 'best' value that is tracked by the global version of PSO is the overall best value, and its location, obtained so far by any particle in the population. This value is termed 'gbest'. Thus, at each time step, the particle changes its velocity (accelerates) and moves towards its pbest and gbest; this is the global version of PSO. When, in addition to pbest, each particle keeps track of the best solution, called nbest (neighbourhood best) or lbest (local best), attained within a local topological neighbourhood of particles, the process is known as the local version of PSO. In addition, for different applications, discrete or binary versions of PSO have come into existence. This is due to applications such as scheduling or routing problems, for which some changes have to be made in order to adapt to discrete spaces [16], [17].

8.7.2 Characteristic Features of PSO

The PSO method appears to adhere to the following five basic principles of swarm intelligence [17]:
(a) Proximity—the swarm must be able to perform simple space and time computations.
(b) Quality—the swarm should be able to respond to quality factors in the environment.
(c) Diverse response—the swarm should not commit its activities along excessively narrow channels.
(d) Stability—the swarm should not change its behaviour every time the environment alters.
(e) Adaptability—the swarm must be able to change its behaviour, when the computational cost is not prohibitive.

Indeed, the swarm in PSO performs space calculations for several time steps. It responds to the quality factors implied by each particle's best position and the best particle in the swarm, allocating the responses in a way that ensures diversity. Moreover, the swarm alters its behaviour (state) only when the best particle in the swarm (or in the neighbourhood, in the local variant of PSO) changes. Thus, it is both adaptive and stable [18].

8.7.3 Procedure of the Global Version

The procedure of the global version [19] is as follows:
(a) Initialize an array (population) of particles with random positions and velocities on d dimensions in the problem space.
(b) Evaluate the fitness function in d variables for each particle.
(c) Compare each particle's fitness evaluation with the particle's 'pbest'. If the current value is better than 'pbest', then the current value is saved as the 'pbest', and the 'pbest' location corresponds to the current location in d-dimensional space.
(d) Compare the fitness evaluation with the population's overall previous best. If the current value is better than 'gbest', then the current value is saved as 'gbest' at the current particle's array index.
(e) Modify the velocity and position of the particle according to the following equations:

v_{id}^{t+1} = v_{id}^{t} + c rand() (p_{id} - x_{id}^{t}) + c Rand() (p_{gd} - x_{id}^{t})   (8.27)
x_{id}^{t+1} = x_{id}^{t} + v_{id}^{t+1}   (8.28)

(f) If the desired criterion is not met, go to step (b); otherwise, stop the process. Usually the desired criterion may be a good fitness value or a maximum number of iterations.
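As a hedged illustration of steps (a)-(f), here is a minimal Python sketch of the basic global version using Eqs (8.27) and (8.28); the function name pso_global and the choice of test objective are assumptions made for the example, not part of the textbook procedure.

import numpy as np

def pso_global(f, bounds, n_particles=30, c=2.0, max_iter=200, seed=0):
    """Minimal sketch of the basic global-version PSO, Eqs (8.27)-(8.28)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    d = lo.shape[0]
    x = rng.uniform(lo, hi, size=(n_particles, d))               # (a) positions
    v = rng.uniform(-(hi - lo), hi - lo, size=(n_particles, d))  # (a) velocities
    pbest = x.copy()                                             # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)                     # (b) fitness
    g = np.argmin(pbest_val)                                     # index of gbest
    for _ in range(max_iter):
        r1 = rng.random((n_particles, d))                        # rand()
        r2 = rng.random((n_particles, d))                        # Rand()
        # (e) Eq (8.27): accelerate toward pbest and gbest
        v = v + c * r1 * (pbest - x) + c * r2 * (pbest[g] - x)
        x = x + v                                                # Eq (8.28)
        vals = np.apply_along_axis(f, 1, x)                      # (b) re-evaluate
        improved = vals < pbest_val                              # (c) update pbest
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        g = np.argmin(pbest_val)                                 # (d) update gbest
    return pbest[g], pbest_val[g]

# Example: minimise a 2-D sphere function
best_x, best_val = pso_global(lambda z: float(np.sum(z * z)),
                              (np.array([-5.0, -5.0]), np.array([5.0, 5.0])))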
Suppose that the search space is D-dimensional; then the i-th particle of the swarm can be represented by a D-dimensional vector X_i = (x_{i1}, x_{i2}, ..., x_{iD})^T. The velocity (position change) of this particle can be represented by another D-dimensional vector V_i = (v_{i1}, v_{i2}, ..., v_{iD})^T. The best previously visited position of the i-th particle is denoted P_i = (p_{i1}, p_{i2}, ..., p_{iD})^T. Defining g as the index of the best particle in the swarm (i.e., the g-th particle is the best), and letting the superscripts denote the iteration number, the swarm is manipulated according to the two equations (Eqs 8.27 and 8.28), where d = 1, 2, ..., D; i = 1, 2, ..., N, and N is the size of the swarm; c is a positive constant, called the acceleration constant; rand() and Rand() are random numbers uniformly distributed in [0, 1]; and t denotes the iteration number.

The version with Eqs (8.27) and (8.28) is the basic version of PSO. However, the basic version has no mechanism to control the velocity of the particle, which compelled imposing a maximum value V_max in the positive direction and -V_max in the negative direction. This is expressed in Eqs (8.29) and (8.30):

if v_{id} > V_max, then v_{id} = V_max   (8.29)
if v_{id} < -V_max, then v_{id} = -V_max   (8.30)

This parameter proved to be crucial, because large values could result in particles moving past good solutions, while a small value results in inefficient exploration of the search space. The lack of a control mechanism for the velocity resulted in the lower performance of PSO when compared to other EC techniques. PSO may be able to locate the optimum area faster than the EC techniques, but it fails in adjusting its velocity step size to continue the search at a finer grain. To overcome this limitation, the problem is addressed by incorporating a weight parameter for the previous velocity of the particle. Hence, the modified version of the equations is given as follows:

v_{id}^{t+1} = w v_{id}^{t} + c_1 rand() (p_{id} - x_{id}^{t}) + c_2 Rand() (p_{gd} - x_{id}^{t})   (8.31)
x_{id}^{t+1} = x_{id}^{t} + v_{id}^{t+1}   (8.32)

where w is called the inertia weight; c_1 and c_2 are two positive constants, called the cognitive and social parameters respectively; rand() and Rand() are two random numbers independently generated; and φ is a constriction factor, which is used as an alternative to w to limit velocity.
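Relative to the basic sketch shown earlier, the modified version changes only the update step. A small, self-contained hedged fragment (the helper name pso_step is an illustrative assumption):

import numpy as np

def pso_step(v, x, pbest, gbest, w=0.9, c1=2.0, c2=2.0, v_max=1.0, rng=None):
    """One modified PSO update: Eq (8.31) velocity with inertia weight,
    Eqs (8.29)-(8.30) clamping, and Eq (8.32) position update."""
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq (8.31)
    v_new = np.clip(v_new, -v_max, v_max)                          # Eqs (8.29)-(8.30)
    return v_new, x + v_new                                        # Eq (8.32)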
8.7.4 Parameters of PSO

It becomes necessary to choose optimum parameter settings for the best performance of PSO in different types of applications. Hence, the selection of the important parameters, namely (1) pbest (p_id), (2) nbest (p_nd) and gbest (p_gd), (3) the learning factors (c_1, c_2), (4) the inertia weight (w), and (5) the constriction factor (φ), has to be taken care of; these are discussed in this section.

8.7.4.1 pbest (p_id)

'pbest' is the best position the particle has attained so far, and it constitutes the particle's memory; one memory slot is allotted to each particle. The best location does not necessarily always depend only on the value of the fitness function. To adapt to different problems, many constraints can be applied to the definition of the best location. In certain non-linear constrained optimisation problems, the particles remember only those positions in the feasible space and disregard unfeasible solutions. In some techniques, a memory reset mechanism is adopted (i.e., in dynamic environments, a particle's 'pbest' will be reset to the current value if the environment changes).

8.7.4.2 'nbest' (p_nd) and 'gbest' (p_gd)

The best position that the neighbours of a particle have achieved so far is the 'nbest'. The 'gbest' is the extreme case of 'nbest' that takes the whole population as the neighbourhood of each particle. The neighbourhood of a particle is the social environment the particle encounters. The selection of the 'nbest' consists of two phases: in the first phase the neighbourhood is determined, and in the second phase the 'nbest' is selected. Usually, certain predetermined adjacent particles are considered as neighbours. Neighbours are defined as topological neighbours, and neighbourhoods do not change during a run. The number of neighbours, i.e., the size of the neighbourhood, will affect the convergence speed of the algorithm. The larger the size of the neighbourhood, the higher the convergence rate of the particles that is observed; premature convergence of the particles is prevented when the neighbourhood size is small.

The selection of 'nbest' is usually determined by comparing fitness values among neighbours; that is, if the neighbourhood size is defined as two, for instance, particle (i) compares its fitness value with particle (i - 1) and particle (i + 1). The population size is problem-dependent, and population sizes of 20-30 are probably most common. So far, smaller populations have been optimal for PSO in terms of minimizing the total number of evaluations (i.e., population size times the number of generations) needed to obtain a good solution. It becomes difficult in a multi-objective optimisation environment, where multiple fitness values for each particle have to be taken care of. In certain cases, the ratio of the fitness and the distance of other particles is used to determine the 'nbest'.

8.7.4.3 Learning Factors

The constants c_1 and c_2 are the learning factors, which represent the weighting of the stochastic acceleration terms that pull each particle towards the 'pbest' and 'nbest' positions. Thus, adjustments of these constants change the amount of 'tension' in the system. Low values allow particles to roam far from target regions before being tugged back, whereas high values result in abrupt movement towards, or past, target regions. Generally, c_1 and c_2 are set to 2.0, which makes the search cover all surrounding regions centred at the 'pbest' and 'nbest'.

8.7.4.4 Inertia Weight

The inertia weight w has become very important for the convergence behaviour of PSO. As already mentioned, the maximum velocity V_max has been a constraint to control the global exploration ability of a particle swarm. It has been understood that a larger V_max facilitates global exploration, whereas a smaller V_max encourages local exploitation. The concept of the inertia weight was developed to allow better-controlled exploration and exploitation. The inertia weight was initially set to a constant, but later experimental results suggested using a larger value initially, in order to promote global exploration of the search space, and gradually decreasing it to obtain more refined solutions. Thus, an initial value of around 1.2 and a gradual decline towards 0 can be considered a good choice for w. In addition, randomized inertia weights have been used in many reports (i.e., the inertia weight can be set to [0.5 + rand()/2.0]).

8.7.4.5 Constriction Factor (φ)

The constriction factor φ controls the magnitude of the velocities, in a way similar to the parameter V_max, resulting in a variant of PSO different from the one with the inertia weight [20]. Use of the constriction factor may be necessary to ensure convergence of PSO.
The constriction factor is considered a function of the acceleration constants c_1 and c_2, as in Eq. (8.33):

φ = 2 / |2 - ψ - sqrt(ψ^2 - 4ψ)|,   ψ = c_1 + c_2,  ψ > 4   (8.33)

Here, the factor ψ is set to 4.1, and thus the constant multiplier φ becomes 0.729 (indeed, 2 / |2 - 4.1 - sqrt(4.1^2 - 4 × 4.1)| ≈ 0.7298). Even though initially it was thought that V_max was not necessary when the constriction factor was used, subsequent experimental results found in various research papers show that V_max can still be limited; it is often set at about 10-20 per cent of the dynamic range of the variables on each dimension, together with an appropriate inertia weight.

8.7.5 Comparison with Other EC Techniques

Particle swarm optimisation is an EC technique because it has the common evolutionary attributes detailed in the following [21], [22].
(a) During initialization, there is a population made up of a certain number of individuals, and each individual in the population is given a random solution initially.
(b) It has a mechanism for searching for a better solution in the problem space and producing a better new generation.
(c) The production of the new generation is based on the previous generation.

Particle swarm optimisation can easily be implemented and is computationally inexpensive, since its memory and CPU speed requirements are low. Moreover, it does not require gradient information of the objective function under consideration, but only its values, and it uses only primitive mathematical operators. Particle swarm optimisation has proved to be an efficient method for many global optimisation problems, and in some cases it does not suffer the difficulties encountered by other EC techniques.

In EC techniques, three main operators are involved: (a) recombination, (b) mutation, and (c) selection. Particle swarm optimisation does not have a direct recombination operator. However, the stochastic acceleration of a particle towards its previous best position, as well as towards the best particle of the swarm (or towards the best particle in its neighbourhood in the local version), resembles the recombination procedure of EC. In PSO, information exchange takes place only between the particle's own experience and the experience of the best particle in the swarm, instead of being carried forward from 'parents' selected based on their fitness to descendants as in GAs. Moreover, PSO's directional position-updating operation resembles the mutation of GAs, with a kind of built-in memory. This mutation-like procedure is multi-directional in PSO, as in GA, and includes control of the mutation's severity using factors such as V_max and φ. Particle swarm optimisation is actually the only evolutionary algorithm that does not use the 'survival of the fittest' concept. It does not utilize a direct selection function. Thus, particles with lower fitness can survive during the optimisation and can potentially visit any point of the search space.

8.7.6 Engineering Applications of PSIS and Future Research

Particle swarm optimisation is attractive due to its easy implementation and the very few parameters to adjust and, therefore, has been used in a wide variety of applications. Many PSO applications have already been pointed out in the previous sections. As mentioned before, it can be used in place of the other EC techniques. Nowadays, it is also being used for training artificial neural networks, evolving not only the network weights but also the network structure.
As an example of evolving neural networks, PSO has been applied to the analysis of human tremor. Particle swarm optimisation has become very popular, and researchers are trying to apply it to various fields of engineering. However, it is necessary to know its scope in the near future. The still-unexplored areas of PSO are as follows [23]:
(a) Convergence analysis: it is still not clear how PSO converges, so thorough work has to be done in the theoretical research of swarm intelligence and chaos systems.
(b) The combination of various PSO techniques, as well as of other hybridized techniques with PSO, for dealing with complex problems has to be understood.
(c) Discrete/binary PSO: the available literature has shown the potential of EC techniques in dealing with discrete or binary variables. However, in the case of PSO, some difficulties have been encountered which have yet to be solved.
(d) Particle swarm optimisation can be treated as an agent-based distributed computational technique; many of its computing characteristics still remain to be uncovered.

8.7.7 Working of PSO

The flow chart for the conventional PSO is given in Fig. 8.18. Let us understand the working of the PSO algorithm with the following example.

[Fig. 8.18 Basic flow chart of PSO: initialize the swarm; evaluate particle fitness; while the stopping criterion is not satisfied, determine gbest, pbest, and the change in fitness, then update particle velocities and positions.]

Harmony search

Harmony search is a music-based metaheuristic optimization algorithm. When listening to a beautiful piece of classical music, who has ever wondered whether there is any connection between music and finding an optimal solution to a tough design problem, such as the design of water distribution networks or other design problems in engineering? Scientists have found such an interesting connection by developing a new algorithm, called Harmony Search. Harmony Search (HS) was first developed by Zong Woo Geem et al. in 2001. Though it is a relatively new metaheuristic algorithm, its effectiveness and advantages have been demonstrated in various applications. Since its first appearance in 2001, it has been applied to solve many optimization problems, including function optimization, engineering optimization, water distribution networks, groundwater modelling, energy-saving dispatch, truss design, vehicle routing, and others.

Harmony search is inspired by the observation that the aim of music is to search for a perfect state of harmony. This harmony in music is analogous to the optimality in an optimization process. The search process in optimization can be compared to a jazz musician's improvisation process.

On the one hand, the perfectly pleasing harmony is determined by the audio aesthetic standard. A musician always intends to produce a piece of music with perfect harmony. On the other hand, an optimal solution to an optimization problem should be the best solution available to the problem under the given objectives and limited by constraints. Both processes intend to produce the best or optimum. Such similarities between the two processes can be used to develop new algorithms by learning from each other. Harmony Search is just such a successful example: it transforms the qualitative improvisation process into quantitative rules by idealization, thus turning the beauty and harmony of music into an optimization procedure through the search for a perfect harmony, namely, the Harmony Search (HS) algorithm.
Harmony search (HS) is a meta-heuristic search algorithm which tries to mimic the improvisation process of musicians in finding a pleasing harmony. In recent years, owing to several advantages, HS has received significant attention.

Is Harmony search population based?
Harmony search (HS) was introduced in 2001 as a heuristic population-based optimisation algorithm. Since then HS has become a popular alternative to other heuristic algorithms like simulated annealing and particle swarm optimisation.

[Flowchart: the HS loop — initialize the optimization problem and algorithm parameters; improvise a new harmony; if the new harmony is better than a stored harmony in the harmony memory, update the memory; repeat until the termination criterion is satisfied.]

2 Harmony Search as a Metaheuristic Method

Before we introduce the fundamentals of the HS algorithm, let us first briefly describe the way to characterise the aesthetic quality of music. Then, we will discuss the pseudo code of the HS algorithm and two simple examples to demonstrate how it works.

2.1 Aesthetic Quality of Music

The aesthetic quality of a musical instrument is essentially determined by its pitch (or frequency), timbre (or sound quality), and amplitude (or loudness). Timbre is largely determined by the harmonic content, which is in turn determined by the waveforms or modulation of the sound signal. However, the harmonics that an instrument can generate will largely depend on the pitch or frequency range of that particular instrument. Different notes have different frequencies. For example, the note A above middle C (standard concert A4) has a fundamental frequency of f = 440 Hz. As the speed of sound in dry air is about v = 331.3 + 0.6T m/s, where T is the temperature in degrees Celsius, at room temperature T = 20°C the A4 note has a wavelength λ = v/f ≈ 0.78 m. When we adjust the pitch, we are in fact trying to change the frequency. In music theory, pitch p in MIDI is often represented as a numerical scale (a linear pitch space) using the following formula:

p = 69 + 12 log2(f / 440 Hz),

which means that the A4 note has a pitch number of 69. On this scale, an octave corresponds to size 12 while a semitone corresponds to size 1, which implies that the ratio of the frequencies of two notes an octave apart is 2:1. Thus, the frequency of a note is doubled (halved) when it is raised (lowered) an octave. For example, A2 has a frequency of 110 Hz while A5 has a frequency of 880 Hz.

The measurement of harmony, where different pitches occur simultaneously, is, like any aesthetic quality, subjective to some extent. However, it is possible to use some standard estimation for harmony. The frequency ratio, pioneered by the ancient Greek mathematician Pythagoras, is a good way of making such an estimation. For example, the octave with a ratio of 1:2 sounds pleasant when played together, as do notes with a ratio of 2:3. However, random notes played by a monkey are unlikely to produce a pleasant harmony.
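A quick check of the MIDI pitch formula, written here in Python purely for illustration:

import math

def midi_pitch(freq_hz: float) -> float:
    """MIDI pitch number from frequency: p = 69 + 12*log2(f/440)."""
    return 69 + 12 * math.log2(freq_hz / 440.0)

print(midi_pitch(440.0))  # 69.0  (A4)
print(midi_pitch(880.0))  # 81.0  (A5: one octave = 12 semitones higher)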
2.2 Harmony Search

In order to explain Harmony Search in more detail, let us first idealize the improvisation process of a skilled musician. When a musician is improvising, he or she has three possible choices: (1) play a famous piece of music (a series of pitches in harmony) exactly from memory; (2) play something similar to a known piece (thus adjusting the pitch slightly); or (3) compose new or random notes. Zong Woo Geem et al. formalized these three options into a quantitative optimization process in 2001, and the three corresponding components became: usage of harmony memory, pitch adjusting, and randomization [1].

The usage of harmony memory is important, as it is similar to the choice of the best-fit individuals in genetic algorithms (GA). This ensures that the best harmonies will be carried over to the new harmony memory. In order to use this memory more effectively, it is typically controlled by a parameter r_accept ∈ (0, 1), called the harmony memory accepting (or considering) rate. If this rate is too low, only a few of the best harmonies are selected, and the algorithm may converge too slowly. If this rate is extremely high (near 1), almost all the harmonies in the harmony memory are used, and other harmonies are not explored well, leading to potentially wrong solutions. Therefore, typically, we use r_accept = 0.7 ~ 0.95.

The second component is the pitch adjustment, determined by a pitch bandwidth b_range and a pitch-adjusting rate r_pa. Though in music pitch adjustment means changing the frequency, in the Harmony Search algorithm it corresponds to generating a slightly different solution [1]. In theory, the pitch can be adjusted linearly or nonlinearly, but in practice linear adjustment is used, so we have

x_new = x_old + b_range · ε,

where x_old is the existing pitch or solution from the harmony memory, and x_new is the new pitch after the pitch-adjusting action. This essentially produces a new solution around the existing quality solution by varying the pitch slightly by a small random amount [1, 2]. Here ε is a random number in the range [-1, 1]. Pitch adjustment is similar to the mutation operator in genetic algorithms. We can assign a pitch-adjusting rate (r_pa) to control the degree of the adjustment. A low pitch-adjusting rate with a narrow bandwidth can slow down the convergence of HS because of the limitation of the exploration to only a small subspace of the whole search space. On the other hand, a very high pitch-adjusting rate with a wide bandwidth may cause the solutions to scatter around some potential optima, as in a random search. Thus, we usually use r_pa = 0.1 ~ 0.5 in most applications.

Harmony Search
begin
  Objective function f(x), x = (x1, x2, ..., xd)^T
  Generate initial harmonies (real number arrays)
  Define pitch adjusting rate (r_pa), pitch limits and bandwidth
  Define harmony memory accepting rate (r_accept)
  while (t < max number of iterations)
    if (rand < r_accept), choose an existing harmonic randomly
    else if (rand < r_pa), adjust the pitch randomly within limits
    else generate new harmonics via randomization
    end if
    Accept the new harmonics (solutions) if better
  end while
  Find the current best solutions
end

Figure: Pseudo code of the Harmony Search algorithm.

The third component is the randomization, which is used to increase the diversity of the solutions.
Although pitch adjusting has a similar role, it is limited to a certain local pitch adjustment and thus corresponds to a local search. The use of randomization can push the search further, toward more diverse regions of the solution space.
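Putting the three components together, here is a minimal, hedged Python sketch of the HS loop described by the pseudo code above; the parameter names (r_accept, r_pa, b_range) follow the text, while hms (harmony memory size) and the test objective are illustrative assumptions.

import numpy as np

def harmony_search(f, lo, hi, hms=20, r_accept=0.9, r_pa=0.3,
                   b_range=0.1, max_iter=2000, seed=0):
    """Minimal Harmony Search sketch for minimisation (illustrative)."""
    rng = np.random.default_rng(seed)
    dim = lo.shape[0]
    hm = rng.uniform(lo, hi, size=(hms, dim))        # harmony memory
    scores = np.apply_along_axis(f, 1, hm)
    for _ in range(max_iter):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < r_accept:              # 1) use harmony memory
                new[j] = hm[rng.integers(hms), j]
                if rng.random() < r_pa:              # 2) pitch adjustment
                    new[j] += b_range * rng.uniform(-1.0, 1.0)
            else:                                    # 3) randomization
                new[j] = rng.uniform(lo[j], hi[j])
        new = np.clip(new, lo, hi)
        fv = f(new)
        worst = np.argmax(scores)                    # replace worst if better
        if fv < scores[worst]:
            hm[worst], scores[worst] = new, fv
    best = np.argmin(scores)
    return hm[best], scores[best]

# Example: minimise a 2-D sphere function
x_best, f_best = harmony_search(lambda x: float(np.sum(x * x)),
                                np.array([-5.0, -5.0]), np.array([5.0, 5.0]))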
