Optimization Methods

Published by prasad243243

https://www.scribd.com/doc/50636495/Optimization-Methods

03/13/2011

Optimization is the act of obtaining the best result under the given circumstances. In design, construction and maintenance of any engineering system, many technological and managerial decisions have to be taken at several stages. The ultimate goal of all such decisions is either to minimize the effort required or to maximize the desired benefit. Hence optimization can be defined as the process of finding the conditions that give the minimum or maximum value of a function, where the function represents the effort required or the desired benefit.

This module starts with a glance through the historical development of optimization methods. Engineering applications of optimization are scanned through, from which one would get a broad picture of the multitude of applications the optimization methods have. The art of modeling is briefly explained, with the various phases involved in modeling. In the second lecture the various components of the optimization problem are discussed and summarized with the steps involved in formulating a mathematical programming problem. In the third lecture the optimization problems are classified under various criteria, to enable choosing an appropriate model applicable to different types of optimization problems. In the final lecture a brief introduction to the classical and advanced optimization techniques in use is given.

At the end of the module the reader will be able to:
1. Understand the need and origin of the optimization methods.
2. Get a broader picture of the various applications of optimization methods used in engineering.
3. Define an optimization problem and its various components.
4. Formulate optimization problems as mathematical programming problems.
5. Classify optimization problems to suitably choose the method needed to solve the particular type of problem.
6. Briefly learn about classical and advanced techniques in optimization.

Historical Development and Model Building

Introduction
In this lecture, the historical development of optimization methods is glanced through. Engineering applications of optimization with different modeling approaches are scanned through, from which one would get a broad picture of the multitude of applications of optimization techniques.

Historical Development
The existence of optimization methods can be traced to the days of Newton, Lagrange, and Cauchy. The development of differential calculus methods for optimization was possible because of the contributions of Newton and Leibnitz to calculus. The foundations of calculus of variations, which deals with the minimization of functions, were laid by Bernoulli, Euler, Lagrange, and Weierstrass. The method of optimization for constrained problems, which involves the addition of unknown multipliers, became known by the name of its inventor, Lagrange. Cauchy made the first application of the steepest descent method to solve unconstrained optimization problems.

By the middle of the twentieth century, high-speed digital computers made implementation of the complex optimization procedures possible and stimulated further research on newer methods. Spectacular advances followed, producing a massive literature on optimization techniques. This advancement also resulted in the emergence of several well defined new areas in optimization theory. Some of the major developments are outlined here with a few milestones:
1. Development of the simplex method by Dantzig in 1947 for linear programming problems.
2. The enunciation of the principle of optimality in 1957 by Bellman for dynamic programming problems.
3. Work by Kuhn and Tucker in 1951 on the necessary and sufficient conditions for the optimal solution of programming problems, which laid the foundation for later research in nonlinear programming.
4. The contributions of Zoutendijk and Rosen to nonlinear programming during the early 1960s, which have been very significant.
5. Work of Carroll and of Fiacco and McCormick, which facilitated the solution of many difficult problems by using the well-known techniques of unconstrained optimization.
6. Geometric programming, developed in the 1960s by Duffin, Zener, and Peterson.
7. Pioneering work by Gomory in integer programming, one of the most exciting and rapidly developing areas of optimization. The reason for this is that most real world applications fall under this category of problems.
8. Stochastic programming techniques, developed by Dantzig and by Charnes and Cooper, who solved problems by assuming design parameters to be independent and normally distributed.

The necessity to optimize more than one objective or goal while satisfying the physical limitations led to the development of multi-objective programming methods; goal programming is a well-known technique for solving specific types of multi-objective optimization problems. Apart from the major developments, some recently developed novel approaches, such as goal programming for multi-objective optimization, simulated annealing, genetic algorithms, and neural network methods, are briefly mentioned below, tracing their origin.

The goal programming was originally proposed for linear problems by Charnes and Cooper in 1961. The foundation of game theory was laid by von Neumann in 1928, and since then the technique has been applied to solve several mathematical, economic and military problems; only during the last few years has game theory been applied to solve engineering problems.

Simulated annealing, genetic algorithms, and neural network methods represent a new class of mathematical programming techniques that have come into prominence during the last decade. Simulated annealing is analogous to the physical process of annealing of metals and glass. The genetic algorithms are search techniques based on the mechanics of natural selection and natural genetics. Neural network methods are based on solving the problem using the computing power of a network of interconnected 'neuron' processors.

Engineering applications of optimization
To indicate the widespread scope of the subject, some typical applications in different engineering disciplines are given below:
1. Design of civil engineering structures such as frames, foundations, bridges, towers, chimneys and dams for minimum cost.
2. Design of minimum weight structures for earthquake, wind and other types of random loading.
3. Optimal plastic design of frame structures (e.g., to determine the ultimate moment capacity for minimum weight of the frame).
4. Design of water resources systems for obtaining maximum benefit.
5. Design of optimum pipeline networks for the process industry.
6. Design of aircraft and aerospace structures for minimum weight.
7. Finding the optimal trajectories of space vehicles.
8. Optimum design of linkages, cams, gears, machine tools, and other mechanical components.
9. Selection of machining conditions in metal-cutting processes for minimizing the product cost.
10. Design of material handling equipment such as conveyors, trucks and cranes for minimizing cost.
11. Design of pumps, turbines and heat transfer equipment for maximum efficiency.
12. Optimum design of electrical machinery such as motors, generators and transformers.
13. Optimum design of electrical networks.
14. Optimum design of control systems.
15. Optimum design of chemical processing equipment and plants.
16. Selection of a site for an industry.
17. Planning of maintenance and replacement of equipment to reduce operating costs.
18. Inventory control.
19. Allocation of resources or services among several activities to maximize the benefit.
20. Controlling the waiting and idle times in production lines to reduce the cost of production.
21. Planning the best strategy to obtain maximum profit in the presence of a competitor.
22. Designing the shortest route to be taken by a salesperson to visit various cities in a single tour.
23. Optimal production planning, controlling and scheduling.
24. Analysis of statistical data and building empirical models to obtain the most accurate representation of the statistical phenomenon.

Art of Modeling: Model Building
Development of an optimization model can be divided into five major phases:
1. Data collection
2. Problem definition and formulation
3. Model development
4. Model validation and evaluation of performance
5. Model application and interpretation

Data collection may be time consuming but is the fundamental basis of the model-building process. The availability and accuracy of data can have considerable effect on the accuracy of the model and on the ability to evaluate the model.

The problem definition and formulation phase includes the identification of the decision variables, the formulation of the model objective(s), and the formulation of the model constraints. In performing these steps the following are to be considered:
1. Identify the important elements that the problem consists of.
2. Determine the number of independent variables, the number of equations required to describe the system, and the number of unknown parameters.
3. Evaluate the structure and complexity of the model.
4. Select the degree of accuracy required of the model.

Model development includes the mathematical description, parameter estimation, input development, and software development. The model development phase is an iterative process that may require returning to the model definition and formulation phase.

The model validation and evaluation phase checks the performance of the model as a whole. Model validation consists of validation of the assumptions and parameters of the model. The performance of the model is to be evaluated using standard performance measures such as the root mean squared error and the R² value. A sensitivity analysis should be performed to test the model inputs and parameters. This phase is also an iterative process and may require returning to the model definition and formulation phase. One important aspect of this process is that in most cases the data used in the formulation process should be different from the data used in validation. Another point to keep in mind is that no single validation process is appropriate for all models.

Model application and implementation include the use of the model in the particular area of the solution and the translation of the results into operating instructions, issued in understandable form to the individuals who will administer the recommended system.

Different modeling techniques are developed to meet the requirements of different types of optimization problems. Major categories of modeling approaches are: classical optimization techniques, linear programming, nonlinear programming, geometric programming, dynamic programming, integer programming, stochastic programming, and evolutionary algorithms. These modeling approaches will be discussed in subsequent modules of this course.
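The validation measures named above are simple to compute. As a minimal sketch (the sample data are invented for illustration), root mean squared error and the R² value can be implemented as:

```python
import math

def rmse(observed, predicted):
    """Root mean squared error between observed and model-predicted values."""
    n = len(observed)
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)

def r_squared(observed, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

observed = [2.0, 4.1, 6.0, 8.2]    # hypothetical measurements
predicted = [2.1, 4.0, 6.1, 8.0]   # hypothetical model output
print(rmse(observed, predicted), r_squared(observed, predicted))
```

A low RMSE and an R² close to 1 on data held out from the formulation step indicate a model that generalizes, in line with the separation of formulation and validation data noted above.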

Optimization Problem and Model Formulation

Introduction
In the previous lecture we studied the evolution of optimization methods and their engineering applications. A brief introduction was also given to the art of modeling. In this lecture we will study the optimization problem, its various components, and its formulation as a mathematical programming problem.

Basic components of an optimization problem
An objective function expresses the main aim of the model, which is either to be minimized or maximized. For example, in a manufacturing process, the aim may be to maximize the profit or minimize the cost. In comparing the data prescribed by a user-defined model with the observed data, the aim is minimizing the total deviation of the predictions based on the model from the observed data. In designing a bridge pier, the goal is to maximize the strength and minimize the size.

A set of unknowns or variables controls the value of the objective function. In the manufacturing problem, the variables may include the amounts of different resources used or the time spent on each activity. In the fitting-the-data problem, the unknowns are the parameters of the model. In the pier design problem, the variables are the shape and dimensions of the pier.

A set of constraints allows the unknowns to take on certain values but excludes others. In the manufacturing problem, one cannot spend a negative amount of time on any activity, so one constraint is that the "time" variables are to be non-negative. In the pier design problem, one would probably want to limit the breadth of the base and to constrain its size.

The optimization problem is then to find values of the variables that minimize or maximize the objective function while satisfying the constraints.

Objective Function
As already stated, the objective function is the mathematical function one wants to maximize or minimize, subject to certain constraints. Many optimization problems have a single objective function. (When they don't, they can often be reformulated so that they do.) The two exceptions are:
1. No objective function. In some cases (for example, design of integrated circuit layouts), the goal is to find a set of variables that satisfies the constraints of the model. The user does not particularly want to optimize anything, and so there is no reason to define an objective function. This type of problem is usually called a feasibility problem.
2. Multiple objective functions. In some cases, the user may like to optimize a number of different objectives concurrently. For instance, in the optimal design of a panel of a door or window, it would be good to minimize weight and maximize strength simultaneously. Usually, the different objectives are not compatible; the variables that optimize one objective may be far from optimal for the others. In practice, problems with multiple objectives are reformulated as single-objective problems by either forming a weighted combination of the different objectives or by treating some of the objectives as constraints.
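The weighted-combination reformulation can be sketched in a few lines. The "weight" and "strength" models of the panel below are invented toy functions, chosen only to make the trade-off concrete:

```python
def weight(t):        # toy model: panel weight grows with thickness t
    return 5.0 * t

def strength(t):      # toy model: strength grows with thickness
    return 10.0 * t - t ** 2

def combined(t, w=0.5):
    """Single objective: minimize a weighted mix of weight and -strength."""
    return w * weight(t) - (1.0 - w) * strength(t)

# crude grid search over a hypothetical allowed thickness range 0..5
best_t = min((i / 100 for i in range(0, 501)), key=combined)
print(best_t)  # → 2.5
```

Changing the weight w moves the solution along the trade-off curve; treating strength as a constraint instead is the other reformulation mentioned above.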

Statement of an optimization problem
An optimization or a mathematical programming problem can be stated as follows:

Find X = (x1, x2, …, xn)^T which minimizes f(X)   (1.1)

subject to the constraints
gi(X) <= 0, i = 1, 2, …, m
lj(X) = 0, j = 1, 2, …, p

where X is an n-dimensional vector called the design vector, f(X) is called the objective function, and gi(X) and lj(X) are known as inequality and equality constraints, respectively. The number of variables n and the number of constraints m and/or p need not be related in any way. This type of problem is called a constrained optimization problem. In many practical problems one cannot choose the design variables arbitrarily; they have to satisfy certain specified functional and other requirements.

Optimization problems can also be defined without any constraints:

Find X = (x1, x2, …, xn)^T which minimizes f(X)   (1.2)

Such problems are called unconstrained optimization problems. The field of unconstrained optimization is quite a large and prominent one, for which a lot of algorithms and software are available.

Variables
Variables are essential. If there are no variables, we cannot define the objective function and the problem constraints.

If the locus of all points satisfying f(X) = c, a constant, is considered, it forms a family of surfaces in the design space called the objective function surfaces. When drawn with the constraint surfaces as shown in Fig 1, we can identify the optimum point (maxima). This is possible graphically only when the number of design variables is two; when we have three or more design variables, because of the complexity of the objective function surface, we have to solve the problem as a mathematical problem, and this visualization is not possible.

Fig 1: Objective function surfaces f = C1, C2, C3, C4, C5, …, Cn (with C1 > C2 > C3 > C4 … > Cn) drawn together with the constraint surfaces, showing the optimum point.
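To make the standard form concrete, here is a tiny instance of problem (1.1) with an invented quadratic objective and one linear inequality constraint, solved by brute-force enumeration over a grid (a sketch for illustration, not an efficient method):

```python
def f(x1, x2):               # objective f(X)
    return (x1 - 1.0) ** 2 + (x2 - 2.0) ** 2

def g(x1, x2):               # inequality constraint g(X) <= 0
    return x1 + x2 - 2.0

best = None
for i in range(301):
    for j in range(301):
        x1, x2 = i / 100, j / 100
        if g(x1, x2) <= 0:                       # feasible points only
            if best is None or f(x1, x2) < f(*best):
                best = (x1, x2)
print(best, f(*best))  # → (0.5, 1.5) 0.5
```

The unconstrained minimizer (1, 2) violates g, so the optimum lies on the constraint surface g(X) = 0, an example of an active constraint.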

Constraints
Constraints are not essential. In practice, answers that make good sense in terms of the underlying physical or economic criteria can often be obtained without putting constraints on the variables. It has been argued, though, that almost all problems really do have constraints: for example, any variable denoting the "number of objects" in a system can only be useful if it is less than the number of elementary particles in the known universe!

Design constraints are restrictions that must be satisfied to produce an acceptable design. Constraints can be broadly classified as:
1) Behavioral or functional constraints: these represent limitations on the behavior or performance of the system. For example, for the retaining wall design shown in Fig 2, the base width W cannot be taken smaller than a certain value due to stability requirements, and the depth D below the ground level depends on the soil pressure coefficients Ka and Kp. Since these constraints depend on the performance of the retaining wall, they are called behavioral constraints.
2) Geometric or side constraints: these represent physical limitations on design variables, such as availability, fabricability, and transportability. For example, the number of anchors provided along a cross section Ni cannot be any real number but has to be a whole number; hence this is a side constraint. Similarly, the thickness of reinforcement used is controlled by supplies from the manufacturer.

Constraint Surfaces
Consider the optimization problem presented in eq. 1.1 with only the inequality constraints gi(X) <= 0. The set of values of X that satisfy the equation gi(X) = 0 forms a boundary surface in the design space called a constraint surface. This will be an (n-1)-dimensional subspace, where n is the number of design variables. The constraint surface divides the design space into two regions: one with gi(X) <= 0 (the feasible region) and the other in which gi(X) > 0 (the infeasible region); the points lying on the hypersurface satisfy gi(X) = 0. The collection of all the constraint surfaces gi(X) = 0, i = 1, 2, …, m, which separates the acceptable region, is called the composite constraint surface.

Fig 3 shows a hypothetical two-dimensional design space where the feasible region is denoted by hatched lines. Here the design space is bounded by straight lines, which is the case when the constraints are linear. However, constraints may be nonlinear as well, and the design space will then be bounded by curves. A design point that lies on more than one constraint surface is called a bound point, and the associated constraint is called an active constraint. Free points are those that do not lie on any constraint surface. The design points that lie in the acceptable or unacceptable regions can thus be classified as:
1. Free and acceptable point
2. Free and unacceptable point
3. Bound and acceptable point
4. Bound and unacceptable point

Formulation of design problems as mathematical programming problems

In mathematics, the term optimization, or mathematical programming, refers to the study of problems in which one seeks to minimize or maximize a real function by systematically choosing the values of real or integer variables from within an allowed set. This problem can be represented in the following way:

Given: a function f : A -> R from some set A to the real numbers
Sought: an element x0 in A such that f(x0) <= f(x) for all x in A ("minimization") or such that f(x0) >= f(x) for all x in A ("maximization").

Such a formulation is called an optimization problem or a mathematical programming problem (a term not directly related to computer programming, but still in use, for example, in linear programming – see module 3). Many real-world and theoretical problems may be modeled in this general framework.

Typically, A is some subset of the Euclidean space R^n, often specified by a set of constraints, equalities or inequalities that the members of A have to satisfy. The elements of A are called candidate solutions or feasible solutions. The function f is called an objective function, or cost function. A feasible solution that minimizes (or maximizes, if that is the goal) the objective function is called an optimal solution.

Generally, when the feasible region or the objective function of the problem does not present convexity (refer module 2), there may be several local minima and maxima, where a local minimum x* is defined as a point for which there exists some δ > 0 such that for all x with ||x - x*|| <= δ, f(x*) <= f(x); that is to say, on some region around x* all the function values are greater than or equal to the value at that point. Local maxima are defined similarly. A large number of algorithms proposed for solving non-convex problems – including the majority of commercially available solvers – are not capable of making a distinction between local optimal solutions and rigorous optimal solutions, and will treat the former as the actual solutions to the original problem. The branch of applied mathematics and numerical analysis concerned with the development of deterministic algorithms that are capable of guaranteeing convergence in finite time to the actual optimal solution of a non-convex problem is called global optimization.

Problem formulation
Problem formulation is normally the most difficult part of the process. It is the selection of design variables, constraints, objective function(s), and models of the discipline/design.

Selection of design variables
A design variable, which takes a numeric or binary value, is controllable from the point of view of the designer. For instance, the thickness of a structural member can be considered a design variable. Design variables can be continuous (such as the length of a cantilever beam), discrete (such as the number of reinforcement bars used in a beam), or Boolean. Design problems with continuous variables are normally solved more easily.
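Returning to the point above about local and global solutions: the start-point dependence of local methods is easy to demonstrate. In this sketch, a plain gradient descent (hand-coded derivative, arbitrary step size) is run on a non-convex function with two separate minima; which minimum is reported depends entirely on where the search starts:

```python
def f(x):
    return (x * x - 1.0) ** 2          # minima at x = -1 and x = 1

def df(x):
    return 4.0 * x * (x * x - 1.0)     # derivative of f

def gradient_descent(x, lr=0.01, steps=2000):
    """Follow the negative gradient from a given start point."""
    for _ in range(steps):
        x -= lr * df(x)
    return x

print(gradient_descent(0.5))    # converges toward x =  1
print(gradient_descent(-0.5))   # converges toward x = -1
```

A global optimization method would have to explore both basins; a local method simply stays in the basin it started in.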

Design variables are often bounded, that is, they have maximum and minimum values. Depending on the adopted method, these bounds can be treated as constraints or separately.

Selection of constraints
A constraint is a condition that must be satisfied for the design to be feasible. An example of a constraint in beam design is that the resistance offered by the beam at points of loading must be equal to or greater than the weight of the structural member and the load supported. In addition to physical laws, constraints can reflect resource limitations, user requirements, or bounds on the validity of the analysis models. Constraints can be used explicitly by the solution algorithm or can be incorporated into the objective by using Lagrange multipliers.

Objectives
An objective is a numerical value that is to be maximized or minimized. For example, a designer may wish to maximize profit or minimize weight. Many solution methods work only with single objectives; when using these methods, the designer normally weights the various objectives and sums them to form a single objective. Other methods allow multi-objective optimization (module 8), such as the calculation of a Pareto front.

Models
The designer also has to choose models to relate the constraints and the objectives to the design variables. These models are dependent on the discipline involved. They may be empirical models, such as a regression analysis of aircraft prices; theoretical models, such as from computational fluid dynamics; or reduced-order models of either of these. In choosing the models, the designer must trade off fidelity against the time required for analysis.

The multidisciplinary nature of most design problems complicates model choice and implementation. Often several iterations are necessary between the disciplines' analyses in order to find the values of the objectives and constraints. As an example, the aerodynamic loads on a bridge affect the structural deformation of the supporting structure, and the structural deformation in turn changes the shape of the bridge and hence the aerodynamic loads. Therefore, in analyzing a bridge, the aerodynamic and structural analyses must be run a number of times in turn until the loads and deformation converge; it can be considered a cyclic mechanism.

Representation in standard form
Once the design variables, constraints, objectives, and the relationships between them have been chosen, the problem can be expressed as shown in equation 1.1. Maximization problems can be converted to minimization problems by multiplying the objective by -1, and constraints can be reversed in a similar manner. Equality constraints can be replaced by two inequality constraints.

Problem solution
The problem is normally solved by choosing the appropriate techniques from those available in the field of optimization. These include gradient-based algorithms, population-based algorithms, and others. Very simple problems can sometimes be expressed linearly; in that case the techniques of linear programming are applicable.
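The standard-form conversions above (maximization to minimization, an equality to a pair of inequalities) can be made concrete before turning to the solution techniques. The profit curve and candidate grid in this sketch are invented:

```python
def maximize(f, candidates):
    """Maximization reduced to minimization of the negated objective."""
    return min(candidates, key=lambda x: -f(x))

profit = lambda x: 10.0 * x - x ** 2        # toy profit curve, peak at x = 5
xs = [i / 10 for i in range(101)]           # candidates 0.0 .. 10.0
print(maximize(profit, xs))                 # → 5.0

# an equality constraint h(x) = 0 replaced by a pair of inequalities
h = lambda x: x - 3.0
satisfied = lambda x: h(x) <= 0 and -h(x) <= 0
print(satisfied(3.0), satisfied(2.0))       # → True False
```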

Gradient-based methods:
• Newton's method
• Steepest descent
• Conjugate gradient
• Sequential quadratic programming

Population-based methods:
• Genetic algorithms
• Particle swarm optimization

Other methods:
• Random search
• Grid search
• Simulated annealing

Most of these techniques require a large number of evaluations of the objectives and the constraints. The disciplinary models are often very complex and can take a significant amount of time for a single evaluation; the solution can therefore be extremely time-consuming. Many of the optimization techniques are adaptable to parallel computing, and much of the current research is focused on methods of decreasing the computation time.

The following steps summarize the general procedure used to formulate and solve optimization problems. Some problems may not require that the engineer follow the steps in the exact order, but each of the steps should be considered in the process.
1) Analyze the process itself to identify the process variables and specific characteristics of interest, i.e., make a list of all the variables.
2) Determine the criterion for optimization and specify the objective function in terms of the above variables together with coefficients.
3) Develop, via mathematical expressions, a valid process model that relates the input-output variables of the process and associated coefficients. Include both equality and inequality constraints. Use well known physical principles such as mass balances, energy balances, empirical relations, implicit concepts and external restrictions. Identify the independent and dependent variables to get the number of degrees of freedom.
4) If the problem formulation is too large in scope, break it up into manageable parts or simplify the objective function and the model.
5) Apply a suitable optimization technique to the mathematical statement of the problem.

Classical and Advanced Techniques for Optimization In the previous lecture having understood the various classifications of optimization problems. viz. 4• Combinatorial optimization: is concerned with problems where the set of feasible solutions is discrete or can be reduced to a discrete one. . Classical Optimization Techniques The classical optimization techniques are useful in finding the optimum solution or unconstrained maxima or minima of continuous and differentiable functions. For problems with equality constraints the Lagrange multiplier method can be used. These methods assume that the function is differentiable twice with respect to the design variables and that the derivatives are continuous. These methods lead to a set of nonlinear simultaneous equations that may be difficult to solve. particularly in automated reasoning). 3• Quadratic programming: allows the objective function to have quadratic terms. 6• Constraint satisfaction: studies the case in which the objective function f is constant (this is used in artificial intelligence. 3• Dynamic programming: studies the case in which the optimization strategy is based on splitting the problem into smaller sub-problems. Yet. These classical methods of optimization are further discussed in Module 2. If the problem has inequality constraints.66) Examine the sensitivity of the result. such as a space of functions. the study of these classical techniques of optimization form a basis for developing most of the numerical techniques that have evolved into advanced techniques more suitable to today’s practical problems. These are analytical methods and make use of differential calculus in locating the optimum solution. single variable functions. 1• Nonlinear programming: studies the general case in which the objective function or the constraints or both contain nonlinear parts. 2• Stochastic programming: studies the case in which some of the constraints depend on random variables. 
Most of these techniques will be discussed in subsequent modules. the Kuhn-Tucker conditions can be used to identify the optimum solution. Three main types of problems can be handled by the classical optimization techniques. while the set A must be specified with linear equalities and inequalities. let us move on to understand the classical and advanced optimization techniques. multivariable functions with no constraints and multivariable functions with both equality and inequality constraints.. 5• Infinite-dimensional optimization: studies the case when the set of feasible solutions is a subset of an infinite-dimensional space. to changes in the values of the parameters in the problem and the assumptions. The other methods of optimization include 1• Linear programming: studies the case in which the objective function f is linear and the set A is specified using only linear equalities and inequalities. The classical methods have limited scope in practical applications as some of them involve objective functions which are not continuous and/or differentiable. (A is the design variable space) 2• Integer programming: studies linear programs in which some or all variables are constrained to take on integer values.
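As a worked sketch of the Lagrange multiplier method mentioned above (the problem is a textbook toy, not one from this module): minimize f(x, y) = x² + y² subject to h(x, y) = x + y - 1 = 0. Setting the partial derivatives of the Lagrangian L = x² + y² - λ(x + y - 1) to zero gives 2x = λ, 2y = λ, and the constraint, hence x = y = 1/2 with λ = 1. The stationarity conditions can be checked numerically:

```python
x, y, lam = 0.5, 0.5, 1.0           # candidate from the Lagrange conditions

assert abs(2.0 * x - lam) < 1e-12   # dL/dx = 2x - lam = 0
assert abs(2.0 * y - lam) < 1e-12   # dL/dy = 2y - lam = 0
assert abs(x + y - 1.0) < 1e-12     # constraint h(x, y) = 0

# every other feasible point y = 1 - x has a larger objective value
f = lambda a, b: a * a + b * b
assert all(f(t / 10, 1.0 - t / 10) >= f(x, y) for t in range(-20, 31))
print(f(x, y))  # → 0.5
```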

Advanced Optimization Techniques

Hill climbing
Hill climbing is a graph search algorithm in which the current path is extended with a successor node that is closer to the solution than the end of the current path. In simple hill climbing, the first closer node is chosen, whereas in steepest ascent hill climbing all successors are compared and the one closest to the solution is chosen. Both forms fail if there is no closer node, which may happen if there are local maxima in the search space that are not solutions. Steepest ascent hill climbing is similar to best first search, but the latter tries all possible extensions of the current path in order, whereas steepest ascent only tries one. The choice of next node and of starting node can be varied to give a number of related algorithms. Hill climbing is used widely in artificial intelligence fields for reaching a goal state from a starting node. Specifically, it falls into the category of local search techniques and is therefore generally an incomplete search.

Simulated annealing
The name and inspiration come from the annealing process in metallurgy, a technique involving heating and controlled cooling of a material to increase the size of its crystals and reduce their defects. The heat causes the atoms to become unstuck from their initial positions (a local minimum of the internal energy) and wander randomly through states of higher energy; the slow cooling gives them more chances of finding configurations with lower internal energy than the initial one. In the simulated annealing method, each point of the search space is compared to a state of some physical system, and the function to be minimized is interpreted as the internal energy of the system in that state. Therefore the goal is to bring the system, from an arbitrary initial state, to a state with the minimum possible energy.

Genetic algorithms
A genetic algorithm (GA) is a search technique used in computer science to find approximate solutions to optimization and search problems. Genetic algorithms are a particular class of evolutionary algorithms that use techniques inspired by evolutionary biology such as inheritance, mutation, selection, and crossover (also called recombination). Genetic algorithms are typically implemented as a computer simulation in which a population of abstract representations (called chromosomes) of candidate solutions (called individuals) to an optimization problem evolves toward better solutions. Traditionally, solutions are represented in binary as strings of 0s and 1s, but different encodings are also possible. The evolution starts from a population of completely random individuals and occurs in generations. In each generation, the fitness of the whole population is evaluated, and multiple individuals are stochastically selected from the current population (based on their fitness) and modified (mutated or recombined) to form a new population. The new population is then used in the next iteration of the algorithm.

Ant colony optimization
In the real world, ants (initially) wander randomly and, upon finding food, return to their colony while laying down pheromone trails. If other ants find such a path, they are likely not to keep traveling at random, but instead to follow the trail laid by earlier ants, returning and reinforcing it if they eventually find any food.

Over time, however, the pheromone trail starts to evaporate, thus reducing its attractive strength. The more time it takes for an ant to travel down the path and back again, the more time the pheromones have to evaporate. A short path, by comparison, gets marched over faster, and thus the pheromone density remains high, as it is laid on the path as fast as it can evaporate. Pheromone evaporation also has the advantage of avoiding convergence to a locally optimal solution: if there were no evaporation at all, the paths chosen by the first ants would tend to be excessively attractive to the following ones, and the exploration of the solution space would be constrained. Thus, when one ant finds a good (short) path from the colony to a food source, other ants are more likely to follow that path, and such positive feedback eventually leaves all the ants following a single path. The idea of the ant colony algorithm is to mimic this behavior with "simulated ants" walking around the search space representing the problem to be solved.

Ant colony optimization algorithms have been used to produce near-optimal solutions to the traveling salesman problem. They have an advantage over simulated annealing and genetic algorithm approaches when the graph may change dynamically, since the ant colony algorithm can be run continuously and adapt to changes in real time. This is of interest in network routing and urban transportation systems.
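The reinforcement-and-evaporation dynamics just described can be sketched in a toy two-path model. Everything here (path lengths, deposit rule, evaporation rate) is invented for illustration; real ant colony optimization works on graph edges and adds heuristic visibility terms, but the same positive feedback appears:

```python
import random

def simulate(path_lengths, ants=100, rounds=50, evaporation=0.1, seed=1):
    """Each ant picks a path with probability proportional to its pheromone;
    a traversal deposits pheromone in inverse proportion to path length,
    and every trail evaporates a little each round."""
    random.seed(seed)
    pheromone = [1.0] * len(path_lengths)
    for _ in range(rounds):
        deposits = [0.0] * len(path_lengths)
        total = sum(pheromone)
        for _ in range(ants):
            r = random.uniform(0.0, total)
            chosen = 0
            while chosen < len(pheromone) - 1 and r > pheromone[chosen]:
                r -= pheromone[chosen]
                chosen += 1
            deposits[chosen] += 1.0 / path_lengths[chosen]   # shorter => more
        pheromone = [(1.0 - evaporation) * p + d
                     for p, d in zip(pheromone, deposits)]
    return pheromone

levels = simulate([1.0, 2.0])   # a short path and one twice as long
print(levels)                   # pheromone concentrates on the short path
```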
