
ANT COLONY OPTIMIZATION ALGORITHM AND APPLICATIONS.
DEPARTMENT OF ECE.

CHAPTER 1
INTRODUCTION
1.1 BACKGROUND AND RELATED WORK.
Genetic Algorithms (GA) have been used to evolve computer programs for specific
tasks and to design other computational structures. The recent resurgence of interest in automatic
programming (AP) with GA has been spurred by the work on Genetic Programming (GP). The GP
paradigm provides a way to do program induction by searching the space of possible computer
programs for an individual computer program that is highly fit in solving or approximately solving
the problem at hand. The genetic programming paradigm permits the evolution of computer programs which can perform
alternative computations conditioned on the outcome of intermediate calculations, which can
perform computations on variables of many different types, which can perform iterations and
recursions to achieve the desired result, which can define and subsequently use computed values
and subprograms, and whose size, shape, and complexity are not specified in advance. GP uses
relatively low-level primitives, defined separately rather than combined a priori into high-level
primitives, since such a mechanism generates hierarchical structures that facilitate the creation of
new high-level primitives from built-in low-level ones. Unfortunately, because most real-life
problems are dynamic and their behaviour is correspondingly complex, GP suffers from serious
weaknesses on such problems. The study of randomness and chaos is important, in part, because
it helps us cope with unstable systems by improving our ability to describe, understand, and
perhaps even forecast them.
Ant Colony Optimization (ACO) is the result of research on computational intelligence
approaches to combinatorial optimization originally conducted by Dr. Marco Dorigo, in
collaboration with Alberto Colorni and Vittorio Maniezzo. The fundamental approach
underlying ACO is an iterative process in which a population of simple agents repeatedly
construct candidate solutions; this construction process is probabilistically guided by heuristic
information on the given problem instance as well as by a shared memory containing experience
gathered by the ants in previous iterations. ACO has been applied to a broad range of hard
combinatorial problems. Problems are defined in terms of components and states, which are
sequences of components. Ant Colony Optimization incrementally generates solution paths in
the space of such components, adding new components to a state. Memory is kept of all the
observed transitions between pairs of solution components, and a degree of desirability is
associated with each transition depending on the quality of the solutions in which it has occurred
so far. When a new solution is generated, a component y is included in a state with a probability
proportional to the desirability of the transition between the last component included in the state
and y itself.
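The construction step just described amounts to proportional (roulette-wheel) selection over transition desirabilities. The sketch below illustrates one such step; the function name and data layout are illustrative assumptions, not a specific published implementation.

```python
import random

def choose_next_component(current, candidates, desirability):
    """Pick the next solution component with probability proportional to the
    desirability of the transition (current -> candidate).
    `desirability` maps (i, j) pairs to non-negative weights."""
    weights = [desirability[(current, c)] for c in candidates]
    total = sum(weights)
    r = random.uniform(0, total)
    cumulative = 0.0
    for c, w in zip(candidates, weights):
        cumulative += w
        if r <= cumulative:
            return c
    return candidates[-1]  # numerical safety fallback
```

A full ACO algorithm repeats this step until a complete state (solution) is built, then uses the solution's quality to update the desirabilities.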

The main idea is to use the self-organizing principles to coordinate populations of artificial
agents that collaborate to solve computational problems. Self-organization is a set of dynamical
mechanisms whereby structures appear at the global level of a system from interactions among
its lower-level components. The rules specifying the interactions among the system's constituent
units are executed on the basis of purely local information, without reference to the global
pattern, which is an emergent property of the system rather than a property imposed upon the
system by an external ordering influence. For example, the emerging structures in the case of
foraging in ants include spatiotemporally organized networks of pheromone trails.
1.1.1 GENETIC PROGRAMMING.
Some specific advantages of genetic programming are that no analytical knowledge is
needed and accurate results can still be obtained, that the GP approach scales with the problem
size, and that GP does not impose restrictions on how the structure of solutions should be
formulated. There are several variants of GP, among them: Linear Genetic Programming (LGP),
Gene Expression Programming (GEP), Multi Expression Programming (MEP), Cartesian Genetic
Programming (CGP), Traceless Genetic Programming (TGP) and Genetic Algorithm for Deriving
Software (GADS). Cartesian Genetic Programming was originally developed by Miller and
Thomson for the purpose of evolving digital circuits and represents a program as a directed
graph. One of the
benefits of this type of representation is the implicit re-use of nodes in the directed graph.
Originally CGP used a program topology defined by a rectangular grid of nodes with a user
defined number of rows and columns. In CGP, the genotype is a fixed-length representation and
consists of a list of integers which encode the function and connections of each node in the
directed graph. The genotype is then mapped to an indexed graph that can be executed as a
program. In CGP there are very large numbers of genotypes that map to identical genotypes due
to the presence of a large amount of redundancy. Firstly there is node redundancy that is caused
by genes associated with nodes that are not part of the connected graph representing the program.
Another form of redundancy in CGP, also present in all other forms of GP is, functional
redundancy. Simon Harding, and Ltd introduce computational development using a form of
Cartesian Genetic Programming that includes self-modification operations..The interesting
characteristic of CGP are :
1. More powerful program encoding using graphs than conventional GP tree-like representations:
the population of strings is of fixed length, whereas the corresponding graphs are of variable
length depending on the number of genes in use.
2. Efficient evaluation derived from the intrinsic subgraph reuse exhibited by graphs.
3. Less complicated graph recombination via the crossover and mutation genetic operators.
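To make the genotype-to-graph mapping concrete, here is a minimal, illustrative CGP evaluator. The flat-triple layout, the function set, and the names are assumptions for the sketch, not Miller and Thomson's exact encoding; it does show the fixed-length integer genotype and the implicit node re-use mentioned above, since a node's output may feed several later nodes.

```python
# Each node is encoded as a triple (function_index, input_a, input_b);
# inputs refer either to program inputs or to earlier nodes.
FUNCTIONS = [lambda a, b: a + b,
             lambda a, b: a - b,
             lambda a, b: a * b]

def evaluate_cgp(genotype, inputs, output_index):
    """Decode a fixed-length integer genotype into a feed-forward graph
    and evaluate it on `inputs`."""
    values = list(inputs)                      # indices 0..len(inputs)-1
    for i in range(0, len(genotype), 3):
        f, a, b = genotype[i], genotype[i + 1], genotype[i + 2]
        values.append(FUNCTIONS[f](values[a], values[b]))
    return values[output_index]
```

For example, the genotype [2, 0, 1, 0, 2, 2] with inputs (3, 4) builds node 2 = 3 * 4 and node 3 = node2 + node2; genes belonging to nodes the output never reaches are the node redundancy discussed above.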



1.1.2 SWARM INTELLIGENCE
Swarm intelligence (SI) describes the collective behavior of decentralized, self-
organized systems, natural or artificial. The concept is employed in work on artificial
intelligence. The expression was introduced by Gerardo Beni and Jing Wang in 1989, in the
context of cellular robotic systems.
Swarm intelligence is the discipline that deals with natural and artificial systems composed of
many individuals that coordinate using decentralized control and self-organization. In particular,
the discipline focuses on the collective behaviours that result from the local interactions of the
individuals with each other and with their environment. Examples of systems studied by swarm
intelligence are colonies of ants and termites, schools of fish, flocks of birds, herds of land
animals. Some human artefacts also fall into the domain of swarm intelligence, notably some
multi-robot systems, and also certain computer programs that are written to
tackle optimization and data analysis problems.









Fig 1.1

Swarm intelligence is a fast-growing field that encompasses the efforts of researchers in
multiple disciplines, ranging from ethology and social science to operations research and
computer engineering.
Swarm Intelligence will report on advances in the understanding and utilization of swarm
intelligence systems, that is, systems that are based on the principles of swarm intelligence. The
following subjects are of particular interest to the journal: modeling and analysis of collective
biological systems such as social insect colonies, flocking vertebrates, and human crowds as well
as any other swarm intelligence systems; application of biological swarm intelligence models to
real-world problems such as distributed computing, data clustering, graph partitioning,
optimization and decision making; theoretical and empirical research in ant colony optimization,
particle swarm optimization, swarm robotics, and other swarm intelligence algorithms.
Over the course of the last 20 years, researchers have discovered a variety of interesting
insect and animal behaviours in nature. A flock of birds sweeps across the sky. A group of ants
forages for food. A school of fish swims, turns, and flees together. We call this kind of aggregate
motion swarm behaviour. Recently, biologists and computer scientists have studied how to model
biological swarms to understand how such social animals interact, achieve goals, and evolve.
Furthermore, engineers are increasingly interested in this kind of swarm behaviour, since the
resulting swarm intelligence can be applied in optimization (e.g. in telecommunication systems),
robotics, tracking patterns in transportation systems, and military applications. A high-level view
of a swarm suggests that the N agents in the swarm cooperate to achieve some purposeful
behaviour and achieve some goal. This apparent collective intelligence seems to emerge from
what are often large groups of relatively simple agents. The agents use simple local rules to
govern their actions, and via the interactions of the entire group the swarm achieves its
objectives. A type of self-organization emerges from the collection of actions of the group.
Swarm intelligence is the emergent collective intelligence of groups of simple autonomous
agents. Here, an autonomous agent is a subsystem that interacts with its environment, which
probably consists of other agents, but acts relatively independently from all other agents. The
autonomous agent does not follow commands from a leader, or some global plan. For example,
for a bird to participate in a flock, it only adjusts its movements to coordinate with the
movements of its flock mates, typically its neighbors that are close to it in the flock. A bird in a
flock simply tries to stay close to its neighbours, but avoid collisions with them. Each bird does
not take commands from any leader bird since there is no lead bird. Any bird can fly in the front,
center or back of the swarm. Swarm behavior helps birds take advantage of several things
including protection from predators (especially for birds in the middle of the flock), and
searching for food (as each bird is essentially exploiting the eyes of every other bird).

1.2 ANT COLONY
The complex social behaviours of ants have been much studied by science, and
computer scientists are now finding that these behaviour patterns can provide models for
solving difficult combinatorial optimization problems. The attempt to develop algorithms
inspired by one aspect of ant behaviour, the ability to find what computer scientists would
call shortest paths, has become the field of ant colony optimization (ACO), the most
successful and widely recognized algorithmic technique based on ant behaviour. The field has
grown rapidly from its theoretical inception to practical applications: observed ant behaviour has
been translated into working optimization algorithms, the ant colony metaheuristic has been
formalized and placed in the general context of combinatorial optimization, and a growing body
of theoretical results now supports the major ACO algorithms. ACO applications are in use for
routing, assignment, scheduling, subset, machine learning, and bioinformatics problems; AntNet,
an ACO algorithm designed for the network routing problem, is a notable example.

Fig 1.2

1.2.1 BEHAVIOUR OF ANTS IN THE PRESENCE OF AN OBSTACLE
When ants encounter an obstacle, they are forced to decide whether to go left or right. The
initial choice is random, but pheromone accumulates faster on the shorter path.


Fig 1.3
In this picture we can see the movement of natural ants: they move in a line, following each
other. When an obstacle is placed in the path, some of the ants overcome the obstacle and create
a new path of their own; while moving they deposit pheromone, and all the other ants detect the
pheromone and follow the same path.


1.2.2 REAL BEHAVIOUR OF ANTS
The natural behaviour of ants has inspired scientists to mimic insect operational methods to
solve real-life complex problems such as the travelling salesman problem, the quadratic
assignment problem, network models, and vehicle routing. By observing ant behaviour, scientists
have begun to understand their means of communication.
Ants communicate with each other through touching antennae and through smell. Together
with bees, they are considered among the most socialized animals: they have a highly developed
social organization, and each type of individual specializes in a specific activity within the
colony. Many regard the colony as having a collective intelligence, with each ant acting as an
individual cell of a bigger organism. Ants wander randomly and, on finding food, return to their
colony while laying pheromone trails.

















Fig 1.4
If other ants find such paths, they do not travel randomly but follow the pheromone trail. Ants
secrete pheromone while travelling from the nest to food and vice versa in order to communicate
with each other and find the shortest path.
Pheromone is a highly volatile substance that starts to evaporate as soon as it is deposited: the
more time an ant takes to travel back and forth, the more time the pheromone has to evaporate. A
shorter path gets marched over faster, so its pheromone density remains high. Once one of the
ants finds the shortest path from colony to food source, other ants are more likely to follow the
same path.
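This interplay of evaporation and reinforcement can be illustrated with a deterministic mean-field sketch of the classic double-bridge setup. All names and numbers below (decay rate, path lengths, deposit rule) are illustrative assumptions: ants pick a branch in proportion to its pheromone, pheromone evaporates each step, and the shorter branch receives larger deposits because round trips complete faster.

```python
def simulate(steps=200, rho=0.1, short_len=1.0, long_len=2.0):
    """Mean-field two-path pheromone dynamics (expected values, no randomness)."""
    tau = {"short": 1.0, "long": 1.0}      # both branches start equal
    for _ in range(steps):
        p_short = tau["short"] / (tau["short"] + tau["long"])
        for p in tau:                      # evaporation on both branches
            tau[p] *= (1.0 - rho)
        # expected deposit = choice probability * amount (inverse of length)
        tau["short"] += p_short * (1.0 / short_len)
        tau["long"] += (1.0 - p_short) * (1.0 / long_len)
    return tau
```

Starting from equal trails, the shorter branch gains pheromone faster, which raises its selection probability, which concentrates further deposits on it: the positive feedback described above.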

1.3 ANTS
One of the first researchers to investigate the social behaviour of insects was the French
entomologist Pierre-Paul Grassé. In the 1940s and 1950s, he was observing the behaviour of
termites, in particular the Bellicositermes natalensis and Cubitermes species. He discovered that
these insects are capable of reacting to what he called significant stimuli, signals that activate a
genetically encoded reaction. He observed that the effects of these reactions can act as new
significant stimuli for both the insect that produced them and for the other insects in the colony.
Grassé used the term stigmergy to describe this particular type of indirect communication, in
which the workers are stimulated by the performance they have achieved. The two main
characteristics of stigmergy that differentiate it from other means of communication are:
- The physical, non-symbolic nature of the information released by the communicating
insects, which corresponds to a modification of physical environmental states visited by
the insects and
- The local nature of the released information, which can only be accessed by those insects
that visit the place where it was released (or its immediate neighbourhood).

Examples of stigmergy can be observed in colonies of ants. In many ant species, ants walking to
and from a food source deposit on the ground a substance called pheromone. Other ants are able
to smell this pheromone, and its presence influences the choice of their path, i.e., they tend to
follow strong pheromone concentrations.


Fig 1.5



CHAPTER 2
ANT COLONY OPTIMIZATION.
2.1 ANT SYSTEM.

The importance of the original Ant System (AS) resides mainly in its being the prototype of a
number of ant algorithms which collectively implement the ACO paradigm. The move
probability distribution defines the probabilities pιψk to be equal to 0 for all infeasible moves
(i.e., moves in the tabu list of ant k, a list containing all moves which are infeasible for ant k
starting from state ι).
After each iteration t of the algorithm, i.e., when all ants have completed a solution, trails are
updated by means of the formula:
τιψ(t) = ρ τιψ(t − 1) + Δτιψ
where ρ, 0 ≤ ρ ≤ 1, is a user-defined parameter called the evaporation coefficient, and Δτιψ
represents the sum of the contributions of all ants that used move (ι, ψ) to construct their
solution. The ants' contributions are proportional to the quality of the solutions achieved: the
better a solution is, the higher the trail contribution added to the moves it used.
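The update above can be sketched in a few lines of Python. The data layout (trails and contributions stored as dictionaries keyed by moves) and the function names are assumptions for the sketch, not part of the original AS description.

```python
def update_trails(tau, ant_solutions, rho, quality):
    """AS-style global trail update.
    tau: dict mapping move (i, j) -> trail level.
    ant_solutions: list of solutions, each a list of moves (i, j).
    quality(solution) -> positive number, higher for better solutions."""
    delta = {move: 0.0 for move in tau}
    for solution in ant_solutions:
        q = quality(solution)
        for move in solution:
            delta[move] += q          # contribution proportional to quality
    for move in tau:
        tau[move] = rho * tau[move] + delta[move]   # evaporate, then deposit
    return tau
```

Moves used by no ant simply decay by the factor ρ, while moves in good solutions accumulate trail.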
2.1.1 ANT COLONY SYSTEM (ACS).
Ant System was the first algorithm inspired by real ant behaviour. AS was initially applied to
the solution of the travelling salesman problem but was not able to compete against the state-of-the-art
algorithms in the field. On the other hand, it has the merit of having introduced ACO algorithms
and of showing the potential of using artificial pheromone and artificial ants to drive the search
for ever better solutions to complex optimization problems. Subsequent research was motivated
by two goals: the first was to improve the performance of the algorithm, and the second was to
investigate and better explain its behaviour. Gambardella and Dorigo proposed in 1995 the Ant-Q
algorithm, an extension of AS which integrates some ideas from Q-learning, and in 1996 Ant
Colony System (ACS), a simplified version of Ant-Q which maintained approximately the same
level of performance, measured by algorithm complexity and by computational results. Since
ACS is the basis of many algorithms defined in the following years, we focus our attention on
ACS rather than Ant-Q. ACS differs from the previous AS in three main aspects:
2.1.2 PHEROMONE.
In AS, once all ants have computed their tour (i.e. at the end of each iteration), the pheromone
trail is updated using all the solutions produced by the ant colony.


Each edge belonging to one of the computed solutions is modified by an amount of pheromone
proportional to its solution value. At the end of this phase the pheromone of the entire system
evaporates and the process of construction and update is iterated. On the contrary, in ACS only the
best solution computed since the beginning of the computation is used to globally update the
pheromone.
As was the case in AS, global updating is intended to increase the attractiveness of promising
routes, but the ACS mechanism is more effective since it avoids long convergence times by
directly concentrating the search in a neighbourhood of the best tour found up to the current
iteration of the algorithm.
In ACS, the final evaporation phase is substituted by a local updating of the pheromone
applied during the construction phase. Each time an ant moves from the current city to the next,
the pheromone associated to the edge is modified in the following way:
τij(t) = ρ · τij(t − 1) + (1 − ρ) · τ0
where 0 ≤ ρ ≤ 1 is a parameter (usually set to 0.9) and τ0 is the initial pheromone value. τ0 is
defined as τ0 = (n · Lnn)^−1, where Lnn is the tour length produced by the execution of one ACS
iteration without the pheromone component (this is equivalent to a probabilistic nearest-neighbour
heuristic). The effect of local updating is to make the desirability of edges change dynamically:
every time an ant uses an edge it becomes slightly less desirable, and only for the edges which
never belonged to a global best tour does the pheromone remain τ0. An interesting property of
these local and global updating mechanisms is that the pheromone τij(t) of each edge is bounded
below by τ0. A similar approach was proposed with Max-Min-AS (MMAS), which explicitly
introduces lower and upper bounds on the value of the pheromone trails.
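The two formulas above, the initial value τ0 and the local update, translate directly into code. Variable names and the dictionary layout are assumptions of this sketch.

```python
def initial_pheromone(n_cities, nearest_neighbour_tour_length):
    """tau0 = (n * Lnn)^-1, with Lnn the length of a nearest-neighbour-style tour."""
    return 1.0 / (n_cities * nearest_neighbour_tour_length)

def local_update(tau, edge, rho=0.9, tau0=0.01):
    """ACS local update: decay the traversed edge's pheromone toward tau0,
    making it slightly less desirable for the ants that follow."""
    tau[edge] = rho * tau[edge] + (1.0 - rho) * tau0
    return tau[edge]
```

Because the update is a convex combination of the current value and τ0, repeated traversals pull an edge's pheromone toward τ0 but never below it, which is the lower-bound property noted above.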

2.1.3 STATE TRANSITION RULE.
During the construction of a new solution, the state transition rule is the phase where each ant
decides which state to move to next. In ACS a new state transition rule called pseudo-random-proportional
is introduced. The pseudo-random-proportional rule is a compromise between the
pseudo-random state choice rule typically used in Q-learning and the random-proportional action
choice rule typically used in Ant System. With the pseudo-random rule the chosen state is the
best with probability q0 (exploitation), while a random state is chosen with probability 1 − q0
(exploration). Using the AS random-proportional rule, the next state is chosen randomly with a
probability distribution depending on ηij and τij. The ACS pseudo-random-proportional state
transition rule provides a direct way to balance between exploration of new states and
exploitation of a priori and accumulated knowledge. The best state is chosen with probability q0
(a parameter 0 ≤ q0 ≤ 1, usually fixed to 0.9), and with probability (1 − q0) the next state is
chosen randomly with a probability distribution based on ηij and τij weighted by α (usually equal
to 1) and β (usually equal to 2).
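The rule can be sketched as follows; the data layout (τ and η stored as dictionaries keyed by (current, next) pairs) is an assumption, while q0, α and β carry the meanings and typical values given above.

```python
import random

def choose_next(current, candidates, tau, eta, q0=0.9, alpha=1.0, beta=2.0,
                rng=random):
    """ACS pseudo-random-proportional state transition rule (sketch)."""
    scores = {j: (tau[(current, j)] ** alpha) * (eta[(current, j)] ** beta)
              for j in candidates}
    if rng.random() < q0:
        # exploitation: deterministically take the best-scoring state
        return max(candidates, key=lambda j: scores[j])
    # exploration: random-proportional choice, as in Ant System
    total = sum(scores.values())
    r = rng.uniform(0, total)
    acc = 0.0
    for j in candidates:
        acc += scores[j]
        if r <= acc:
            return j
    return candidates[-1]  # numerical safety fallback
```

With q0 near 1 the ant mostly exploits the accumulated pheromone; lowering q0 shifts the balance toward exploration.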




2.1.4 HYBRIDIZATION AND PERFORMANCE IMPROVEMENT.
ACS was applied to the solution of large symmetric and asymmetric travelling salesman
problems (TSP/ATSP). For this purpose ACS incorporates an advanced data structure known as a
candidate list. A candidate list is a static data structure of length cl which contains, for a given
city i, the cl preferred cities to be visited. An ant in ACS first uses the candidate list together with
the state transition rule to choose the city to move to. If none of the cities in the candidate list can
be visited, the ant chooses the nearest available city using only the heuristic value ηij. ACS for
TSP/ATSP has been improved by incorporating local optimization heuristics (hybridization):
The idea is that each time a solution is generated by an ant, it is taken to its local minimum by
the application of a local optimization heuristic based on an edge exchange strategy, such as
2-opt, 3-opt or Lin-Kernighan. The new optimized solutions are considered the final solutions
produced in the current iteration by the ants and are used to globally update the pheromone trails.
This ACS implementation, combining a new pheromone management policy, a new state
transition strategy and local search procedures, was finally competitive with state-of-the-art
algorithms for the solution of TSP/ATSP problems. This opened a new frontier for ACO-based
algorithms. Following the same approach, which combines a constructive phase driven by the
pheromone and a local search phase that optimizes the computed solution, ACO algorithms were
able to break several optimization records, including those for routing and scheduling problems.
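The candidate-list mechanism described above can be sketched as follows. The function name and data layout are assumptions, and for brevity the sketch picks the best candidate by heuristic value, where real ACS would apply the state transition rule among the candidates.

```python
def next_city(current, candidate_list, visited, all_cities, eta):
    """Candidate-list city selection (sketch): try the cl preferred cities
    first; if all are visited, fall back to the nearest unvisited city by
    heuristic value eta (typically 1/distance)."""
    unvisited_preferred = [c for c in candidate_list[current]
                           if c not in visited]
    if unvisited_preferred:
        # real ACS applies the state transition rule here; we simplify
        return max(unvisited_preferred, key=lambda c: eta[(current, c)])
    remaining = [c for c in all_cities if c not in visited and c != current]
    return max(remaining, key=lambda c: eta[(current, c)])
```

Restricting the expensive transition rule to a short static list of promising cities is what lets ACS scale to large TSP/ATSP instances.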

2.2 ANTS
ANTS is an extension of AS which specifies some underdefined elements of the general
algorithm, such as the attractiveness function to use or the initialization of the trail distribution.
It turns out to be a variation of the general ACO framework that makes the resulting algorithm
similar in structure to tree search algorithms. In fact, the essential trait which distinguishes
ANTS from a tree search algorithm is the lack of a complete backtracking mechanism, which is
substituted by a probabilistic (non-deterministic) choice of the state to move into and by an
incomplete (approximate) exploration of the search tree: this is the rationale behind the name
ANTS, an acronym of Approximated Non-deterministic Tree Search. In the following, we
outline two distinctive elements of the ANTS algorithm within the ACO framework, namely the
attractiveness function and the trail updating mechanism.

2.2.1 ATTRACTIVENESS
The attractiveness of a move can be effectively estimated by means of lower bounds (upper
bounds in the case of maximization problems) on the cost of the completion of a partial solution.


In fact, if a state ι corresponds to a partial problem solution, it is possible to compute a lower
bound on the cost of a complete solution containing ι. Therefore, for each feasible move (ι, ψ), it
is possible to compute the lower bound on the cost of a complete solution containing ψ: the lower
the bound, the better the move. Since a large part of research in combinatorial optimization is
devoted to the identification of tight lower bounds for the different problems of interest, good
lower bounds are usually available. When the bound value becomes greater than the current
upper bound, it is obvious that the considered move leads to a partial solution which cannot be
completed into a solution better than the current best one. The move can therefore be discarded
from further analysis. A further advantage of lower bounds is that in many cases the values of the
decision variables, as they appear in the bound solution, can be used as an indication of whether
each variable will appear in good solutions. This provides an effective way of initializing the trail
values.
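The bound-based pruning described above is simple to express in code; the function names below are assumptions. A move whose optimistic completion cost already exceeds the incumbent (the best complete solution found so far) cannot lead to an improvement and is discarded.

```python
def feasible_moves(moves, lower_bound, upper_bound):
    """Keep only moves whose lower bound on the completion cost can still
    beat the incumbent solution (upper_bound)."""
    return [m for m in moves if lower_bound(m) < upper_bound]
```

The surviving moves are then ranked by their bounds, the lower the bound the more attractive the move.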

2.3 COMBINATORIAL OPTIMIZATION

Combinatorial optimization problems involve finding values for discrete variables such that
the optimal solution with respect to a given objective function is found. Many optimization
problems of practical and theoretical importance are of a combinatorial nature. Examples are
shortest path problems, as well as many other important real-world problems such as finding a
minimum-cost plan to deliver goods to customers, an optimal assignment of employees to tasks
to be performed, or the best routing scheme for data packets in the Internet.
When attacking a combinatorial optimization problem it is useful to know how difficult it is to
find an optimal solution. One way of measuring this difficulty is the notion of worst-case
complexity: a combinatorial optimization problem Π is said to have worst-case complexity
O(g(n)) if the best algorithm known for solving Π finds an optimal solution to any instance of Π
of size n in a computation time bounded from above by const · g(n).
In particular, we say that Π is solvable in polynomial time if the maximum amount of
computing time necessary to solve any instance of size n of Π is bounded from above by a
polynomial in n. If k is the largest exponent of such a polynomial, then the problem is said to be
solvable in O(n^k) time.
Although some important combinatorial optimization problems have been shown to be
solvable in polynomial time, for the great majority of combinatorial problems no polynomial
bound on the worst-case solution time has been found so far. For these problems the run time of
the best known algorithms increases exponentially with the instance size, and consequently so
does the time required to find an optimal solution. A notorious example of such a problem is the
TSP.
An important theory that characterizes the difficulty of combinatorial problems is that of
NP-completeness. The theory classifies combinatorial problems into two main classes: those that
are known to be solvable in polynomial time and those that are not. The former are said to be
tractable, the latter intractable.
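Exhaustive enumeration makes the exponential growth concrete for the TSP: with the start city fixed there are (n − 1)! tours to check, so a brute-force solver is practical only for very small n. The sketch below (names and matrix layout are illustrative choices) is exact but factorial-time.

```python
from itertools import permutations

def brute_force_tsp(dist):
    """Exact TSP by enumerating all (n-1)! tours; dist is an n x n matrix
    (list of lists) of intercity distances."""
    n = len(dist)
    best_len, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):     # fix city 0 as the start
        tour = (0,) + perm + (0,)
        length = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour
```

At n = 10 this already checks 362 880 tours; at n = 20 the count exceeds 10^17, which is why heuristics such as ACO are used on intractable instances.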

2.4 CONCEPT OF ANT COLONY OPTIMIZATION
Ant Colony Optimization (ACO) is a recently proposed metaheuristic approach for solving
hard combinatorial optimization problems. The inspiring source of ACO is the pheromone trail
laying and following behavior of real ants which use pheromones as a communication medium.

In analogy to the biological example, ACO is based on the indirect communication of a colony
of simple agents, called (artificial) ants, mediated by (artificial) pheromone trails. The
pheromone trails in ACO serve as distributed, numerical information which the ants use to
probabilistically construct solutions to the problem being solved and which the ants adapt during
the algorithm's execution to reflect their search experience.
The first example of such an algorithm is Ant System (AS), which was proposed using the
well-known Travelling Salesman Problem (TSP) as its example application. Despite encouraging
initial results, AS could not compete with state-of-the-art algorithms for the TSP.
Nevertheless, it had the important role of stimulating further research on algorithmic variants
which obtain much better computational performance, as well as on applications to a large
variety of different problems. In fact, there now exists a considerable number of applications
obtaining world-class performance on problems like quadratic assignment, vehicle routing,
sequential ordering, scheduling, routing in Internet-like networks, and so on. Motivated by this
success, the ACO metaheuristic has been proposed as a common framework for the existing
applications and algorithmic variants. Algorithms which follow the ACO metaheuristic will be
called in the following ACO algorithms. Current applications of ACO algorithms fall into the
two important problem classes of static and dynamic combinatorial optimization problems. Static
problems are those whose topology and cost do not change while the problems are being solved.
This is the case, for example, for the classic TSP, in which city locations and intercity distances
do not change during the algorithm's run-time. Differently, in dynamic problems the topology
and the costs can change while solutions are built.
An example of such a problem is routing in telecommunications networks, in which traffic
patterns change all the time. The ACO algorithms for solving these two classes of problems are
very similar from a high-level perspective, but they differ significantly in implementation details.
The ACO metaheuristic captures these differences and is general enough to comprise the
ideas common to both application types. The (artificial) ants in ACO implement a randomized
construction heuristic which makes probabilistic decisions as a function of artificial pheromone
trails and possibly available heuristic information based on the input data of the problem to be
solved. As such, ACO can be interpreted as an extension of traditional construction heuristics
which are readily available for many combinatorial optimization problems. Yet, an important
difference with construction heuristics is the adaptation of the pheromone trails during algorithm
execution to take into account the cumulated search experience.
Ant Colony Optimization (ACO) is a paradigm for designing metaheuristic algorithms for
combinatorial optimization problems. The first algorithm which can be classified within this

framework was presented in 1991 and, since then, many diverse variants of the basic principle
have been reported in the literature. The essential trait of ACO algorithms is the combination of a
priori information about the structure of a promising solution with a posteriori information
about the structure of previously obtained good solutions.
Metaheuristic algorithms are algorithms which, in order to escape from local optima, drive
some basic heuristic: either a constructive heuristic starting from a null solution and adding
elements to build a good complete one, or a local search heuristic starting from a complete
solution and iteratively modifying some of its elements in order to achieve a better one.
The metaheuristic part permits the low-level heuristic to obtain solutions better than those it
could have achieved alone, even if iterated. Usually, the controlling mechanism is achieved either
by constraining or by randomizing the set of local neighbour solutions considered in local search
(as is the case of simulated annealing or tabu search), or by combining elements taken from different
solutions (as is the case of evolution strategies and genetic or bionomic algorithms).
The characteristic of ACO algorithms is their explicit use of elements of previous solutions.
In fact, they drive a constructive low-level heuristic, as GRASP does, but include it in a
population framework and randomize the construction in a Monte Carlo way. A Monte Carlo
combination of different solution elements is suggested also by Genetic Algorithms, but in the
case of ACO the probability distribution is explicitly defined by previously obtained solution
components. The particular way of defining components and associated probabilities is
problem-specific, and can be designed in different ways, facing a trade-off between the
specificity of the information used for the conditioning and the number of solutions which need
to be constructed before effectively biasing the probability distribution to favour the emergence
of good solutions. Different applications have favoured either the use of conditioning at the level
of decision variables, thus requiring a huge number of iterations before getting a precise
distribution, or computational efficiency, thus using very coarse conditioning information.


2.4.1 FINDING SHORTEST PATH

Fig 2.1
The original idea comes from observing the exploitation of food resources among ants, in
which ants' individually limited cognitive abilities have collectively been able to find the
shortest path between a food source and the nest.
1. The first ant finds the food source (F) via any path (a), then returns to the nest (N), leaving
behind a pheromone trail (b).
2. Ants indiscriminately follow the four possible paths, but the reinforcement of the trail makes
the shortest route more attractive.
3. Ants take the shortest route; long portions of the other paths lose their pheromone trails.
In a series of experiments on a colony of ants with a choice between two unequal length paths
leading to a source of food, biologists have observed that ants tended to use the shortest route. A
model explaining this behaviour is as follows:
a) An ant (called "blitz") runs more or less at random around the colony;
b) If it discovers a food source, it returns more or less directly to the nest, leaving in its path
a trail of pheromone;
c) These pheromones being attractive, nearby ants will be inclined to follow, more or less
directly, the track;
d) Returning to the colony, these ants will strengthen the route;
e) If there are two routes to reach the same food source then, in a given amount of time, the
shorter one will be traveled by more ants than the long route;
f) The short route will be increasingly enhanced, and therefore become more attractive;
g) The long route will eventually disappear because pheromones are volatile;
h) Eventually, all the ants have determined and therefore "chosen" the shortest route.
Ants use the environment as a medium of communication. They exchange information
indirectly by depositing pheromones, all detailing the status of their "work". The information
exchanged has a local scope, only an ant located where the pheromones were left has a notion of
them. This system is called "Stigmergy" and occurs in many social animal societies (it has been
studied in the case of the construction of pillars in the nests of termites). The mechanism to solve
a problem too complex to be addressed by single ants is a good example of a self-organized
system. This system is based on positive feedback (the deposit of pheromone attracts other ants
that will strengthen it themselves) and negative feedback (dissipation of the route by evaporation
prevents the system from thrashing). Theoretically, if the quantity of pheromone remained the same over
time on all edges, no route would be chosen. However, because of feedback, a slight variation on
an edge will be amplified and thus allow the choice of an edge. The algorithm will move from an
unstable state in which no edge is stronger than another, to a stable state where the route is
composed of the strongest edges.
The basic philosophy of the algorithm involves the movement of a colony of ants through
the different states of the problem influenced by two local decision policies, viz., trails and
attractiveness. Thereby, each such ant incrementally constructs a solution to the problem. When
an ant completes a solution, or during the construction phase, the ant evaluates the solution and
modifies the trail value on the components used in its solution. This pheromone information will
direct the search of the future ants. Furthermore, the algorithm also includes two more
mechanisms, viz., trail evaporation and daemon actions. Trail evaporation reduces all trail values
over time thereby avoiding any possibilities of getting stuck in local optima. The daemon actions
are used to bias the search process from a non-local perspective.
2.5 ACO METAHEURISTIC
Artificial ants used in ACO are stochastic solution construction procedures that
probabilistically build a solution by iteratively adding solution components to partial solutions by
taking into account (i) heuristic information on the problem instance being solved, if available
and (ii) (artificial) pheromone trails which change dynamically at run-time to reflect the agents'
acquired search experience.
The interpretation of ACO as an extension of construction heuristics is appealing for
several reasons. A stochastic component in ACO allows the ants to build a wide variety of
different solutions and hence explore a much larger number of solutions than greedy heuristics.


At the same time, the use of heuristic information, which is readily available for many problems,
can guide the ants towards the most promising solutions. More importantly, the ants' search
experience can be used to influence, in a way reminiscent of reinforcement learning, the solution
construction in future iterations of the algorithm. Additionally, the use of a colony of ants can
give the algorithm increased robustness and in many ACO applications the collective interaction
of a population of agents is needed to efficiently solve a problem.
The domain of application of ACO algorithms is vast. In principle, ACO can be applied
to any discrete optimization problem for which some solution construction mechanism can be
conceived. In the remainder of this section, we first define a generic problem representation
which the ants in ACO exploit to construct solutions, then we detail the ants' behavior while
constructing solutions, and finally we define the ACO metaheuristic.

2.5.1 PROBLEM REPRESENTATION.

Let us consider the minimization problem (S, f, Ω), where S is the set of candidate solutions, f is
the objective function which assigns to each candidate solution s ∈ S an objective function (cost)
value f(s, t), and Ω is a set of constraints. The goal is to find a globally optimal solution sopt ∈ S,
that is, a minimum cost solution that satisfies the constraints Ω. The problem representation of a
combinatorial optimization problem (S, f, Ω) which is exploited by the ants can be characterized
as follows:
• A finite set C = {c1, c2, . . ., cNC} of components is given.
• The states of the problem are defined in terms of sequences x = <ci, cj, . . ., ck, . . .> over the
elements of C. The set of all possible sequences is denoted by X. The length of a sequence x, that
is, the number of components in the sequence, is expressed by |x|.
• The finite set of constraints Ω defines the set of feasible states X˜, with X˜ ⊆X.
• A set S∗ of feasible solutions is given, with S∗ ⊆ X˜ and S∗ ⊆ S.
• A cost f(s, t) is associated to each candidate solution s ∈ S.
• In some cases a cost, or the estimate of a cost, J(xi, t) can be associated to states other than
solutions. If xj can be obtained by adding solution components to a state xi then J(xi, t) ≤ J(xj, t).
Note that J(s, t) ≡ f(s, t).
Given this representation, artificial ants build solutions by moving on the construction graph
G = (C,L), where the vertices are the components C and the set L fully connects the components
C (elements of L are called connections). The problem constraints Ω are implemented in the
policy followed by the artificial ants. The choice of implementing the constraints in the
construction policy of the artificial ants allows a certain degree of flexibility. In fact, depending
on the combinatorial optimization problem considered, it may be more reasonable to implement
constraints in a hard way, allowing ants to build only feasible solutions, or in a soft way, in which
case ants can build infeasible solutions (that is, candidate solutions in S\S∗) that will be
penalized depending on their degree of infeasibility.
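As an illustration, the representation above can be instantiated for a tiny symmetric TSP. The city names and distances below are invented for the sketch; the constraints Ω forbid visiting a city twice, and the cost f (time-independent here) is the length of the closed tour:

```python
# A toy instance of the (S, f, Omega) representation for a 4-city
# symmetric TSP; city names and distances are invented for illustration.
C = ["A", "B", "C", "D"]                      # the component set C
dist = {("A", "B"): 2, ("A", "C"): 9, ("A", "D"): 3,
        ("B", "C"): 4, ("B", "D"): 7, ("C", "D"): 5}

def d(i, j):
    """Length of connection l_ij (symmetric)."""
    return dist[(i, j)] if (i, j) in dist else dist[(j, i)]

def feasible(x):
    """Omega: a state (a sequence over C) is feasible if no city repeats."""
    return len(x) == len(set(x))

def f(s):
    """Cost of a candidate solution: the length of the closed tour."""
    return sum(d(s[k], s[(k + 1) % len(s)]) for k in range(len(s)))
```

In this encoding the feasible solutions S∗ are exactly the feasible states of length |C|, i.e. the Hamiltonian tours.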

2.5.2 ANT'S BEHAVIOUR.
Ants can be characterized as stochastic construction procedures which build solutions moving on
the construction graph G = (C,L). Ants do not move arbitrarily on G, but rather follow a
construction policy which is a function of the problem constraints Ω. In general, ants try to build
feasible solutions, but, if necessary, they can generate infeasible solutions. Components ci ∈ C
and connections lij ∈ L can have associated a pheromone trail τ (τi if associated to components,
τij if associated to connections) encoding a long-term memory about the whole ant search
process that is updated by the ants themselves, and a heuristic value η (ηi and ηij, respectively)
representing a priori information about the problem instance definition or run-time information
provided by a source different from the ants. In many cases η is the cost, or an estimate of the
cost, of extending the current state. These values are used by the ant's heuristic rule to make
probabilistic decisions on how to move on the graph.
More precisely, each ant k of the colony has the following properties:
 It exploits the graph G = (C,L) to search for feasible solutions s of minimum cost, that is,
solutions s∗ such that f(s∗, t) = min_s f(s, t).
 It has a memory Mk that it uses to store information about the path it followed so far.
Memory can be used (i) to build feasible solutions (i.e., to implement constraints Ω), (ii)
to evaluate the solution found, and (iii) to retrace the path backward to deposit
pheromone.
 It can be assigned a start state x^k and one or more termination conditions e^k. Usually,
the start state is expressed either as a unit length sequence (that is, a single component
sequence), or an empty sequence.
 When in state x_r = <x_{r−1}, i>, if no termination condition is satisfied, it moves to a
node j in its neighbourhood N^k, that is, to a state <x_r, j> ∈ X. Often, moves towards
feasible states are favoured, either via appropriately defined heuristic values η, or through
the use of the ants' memory.
 It selects the move by applying a probabilistic decision rule. Its probabilistic decision rule
is a function of (i) locally available pheromone trails and heuristic values, (ii) the ant's
private memory storing its past history, and (iii) the problem constraints.
 The construction procedure of ant k stops when at least one of the termination conditions
e^k is satisfied.
 When adding a component cj to the current solution it can update the pheromone trail
associated to it or to the corresponding connection. This is called online step-by-step
pheromone update.
 Once it has built a solution, it can retrace the same path backward and update the pheromone
trails of the used components or connections. This is called online delayed pheromone
update.
It is important to note that ants move concurrently and independently and that each ant is
complex enough to find a (probably poor) solution to the problem under consideration.


Typically, good quality solutions emerge as the result of the collective interaction among the ants
which is obtained via indirect communication mediated by the information ants read/write in the
variables storing pheromone trail values. In a way, this is a distributed learning process in which
the single agents, the ants, are not adaptive themselves but, on the contrary, they adaptively
modify the way the problem is represented and perceived by other ants.
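The properties listed above can be sketched for a TSP-like instance. Here `tau` and `eta` are dictionaries of trail and heuristic values per connection, and the exponents `alpha` and `beta` (weights introduced formally only later, with Ant System) are illustrative assumptions:

```python
import random

def construct_tour(cities, tau, eta, alpha=1.0, beta=2.0, rng=None):
    """Sketch of one ant's walk on the construction graph. The memory M_k
    is the list of visited cities, the neighbourhood N^k is every city not
    yet in memory, and the probabilistic decision rule weighs each feasible
    move by tau**alpha * eta**beta. alpha and beta are assumed values."""
    rng = rng or random.Random()
    memory = [rng.choice(cities)]            # start state x^k: one component
    while len(memory) < len(cities):         # termination condition e^k
        i = memory[-1]
        neigh = [j for j in cities if j not in memory]   # feasible moves only
        weights = [tau[(i, j)] ** alpha * eta[(i, j)] ** beta for j in neigh]
        memory.append(rng.choices(neigh, weights=weights)[0])
    return memory                            # a feasible solution s
```

With uniform trails and heuristic values the ant performs a random feasible walk; as pheromone accumulates on good connections, the same rule starts to favour them.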

2.5.3 THE METAHEURISTIC.

Informally, the behaviour of ants in an ACO algorithm can be summarized as follows. A
colony of ants concurrently and asynchronously moves through adjacent states of the problem by
building paths on G. They move by applying a stochastic local decision policy that makes use of
pheromone trails and heuristic information. By moving, ants incrementally build solutions to the
optimization problem. Once an ant has built a solution, or while the solution is being built, the
ant evaluates the (partial) solution and deposits pheromone trails on the components or
connections it used. This pheromone information will direct the search of the future ants.
Besides the ants' activity, an ACO algorithm includes two more procedures: pheromone trail
evaporation and daemon actions (this last component being optional). Pheromone evaporation is
the process by means of which the pheromone trail intensity on the components decreases over
time. From a practical point of view, pheromone evaporation is needed to avoid a too rapid
convergence of the algorithm towards a sub-optimal region. It implements a useful form of
forgetting, favouring the exploration of new areas of the search space.

Procedure ACO metaheuristic
  Schedule Activities
    ManageAntsActivity()
    EvaporatePheromone()
    DaemonActions() {Optional}
  end Schedule Activities
end ACO metaheuristic

Fig 2.2 : The ACO metaheuristic in pseudo-code. Comments are enclosed in braces. The
procedure DaemonActions() is optional and refers to centralized actions executed by a daemon
possessing global knowledge.

Daemon actions can be used to implement centralized actions which cannot be performed by
single ants.




Examples are the activation of a local optimization procedure, or the collection of global
information that can be used to decide whether it is useful or not to deposit additional pheromone
to bias the search process from a non-local perspective.
As a practical example, the daemon can observe the path found by each ant in the colony
and choose to deposit extra pheromone on the components used by the ant that built the best
solution. Pheromone updates performed by the daemon are called off-line pheromone updates.
In Figure 2.2 the ACO metaheuristic behaviour is described in pseudo-code. The main
procedure of the ACO metaheuristic manages, via the Schedule Activities construct, the
scheduling of the three above discussed components of ACO algorithms: (i) management of
ants' activity, (ii) pheromone evaporation, and (iii) daemon actions. The Schedule Activities
construct does not specify how these three activities are scheduled and synchronized. In other
words, it does not say whether they should be executed in a completely parallel and independent
way, or if some kind of synchronization among them is necessary. The designer is therefore free
to specify the way these three procedures should interact.
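One concrete scheduling choice, a plain sequential loop, can be sketched as follows; the three procedure arguments are placeholders that a specific ACO algorithm would supply:

```python
def aco_metaheuristic(manage_ants_activity, evaporate_pheromone,
                      daemon_actions=None, iterations=100):
    """Sequential rendering of the Schedule Activities construct of
    Fig 2.2. The metaheuristic leaves the scheduling open; running the
    three activities one after the other is only one possible choice."""
    best = None
    for _ in range(iterations):
        best = manage_ants_activity(best)   # ants build and evaluate solutions
        evaporate_pheromone()               # forgetting / negative feedback
        if daemon_actions is not None:      # optional centralized actions
            daemon_actions(best)
    return best
```

A parallel implementation would instead run the ants concurrently and synchronize only around the trail updates.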























CHAPTER 3
ANT COLONY ALGORITHM.

3.1 HISTORY
The first ACO algorithm proposed was Ant System (AS). AS was applied to some rather
small instances of the traveling salesman problem (TSP) with up to 75 cities. It was able to reach
the performance of other general-purpose heuristics like evolutionary computation. Despite these
initial encouraging results, AS could not prove to be competitive with state-of-the-art algorithms
specifically designed for the TSP when attacking large instances. Therefore, a substantial amount
of recent research has focused on ACO algorithms which show better performance than AS when
applied, for example, to the TSP. In the remainder of this section we first briefly introduce the
biological metaphor by which AS and ACO are inspired, and then we present a brief history of
the developments that led from the original AS to the most recent ACO algorithms. In fact,
these more recent algorithms are direct extensions of AS which add advanced features to
improve the algorithm performance.
3.2 BIOLOGICAL ANALOGY.
In many ant species, individual ants may deposit a pheromone (a particular chemical that
ants can smell) on the ground while walking [50]. By depositing pheromone they create a trail
that is used, for example, to mark the path from the nest to food sources and back. In fact, by
sensing pheromone trails foragers can follow the path to food discovered by other ants. Also,
they are capable of exploiting pheromone trails to choose the shortest among the available paths
leading to the food.
Deneubourg and colleagues used a double bridge connecting a nest of ants and a food
source to study pheromone trail laying and following behavior in controlled experimental
conditions. They ran a number of experiments in which they varied the ratio between the length
of the two branches of the bridge. The most interesting, for our purposes, of these experiments is
the one in which one branch was longer than the other. In this experiment, at the start the ants
were left free to move between the nest and the food source and the percentage of ants that chose
one or the other of the two branches was observed over time. The outcome was that, although in
the initial phase random oscillations could occur, in most experiments all the ants ended up using
the shorter branch. This result can be explained as follows:
When a trial starts there is no pheromone on the two branches. Hence, the ants do not have a
preference and they select with the same probability any of the two branches. Therefore, it can be
expected that, on average, half of the ants choose the short branch and the other half the long
branch, although stochastic oscillations may occasionally favor one branch over the other.



However, because one branch is shorter than the other, the ants choosing the short branch are the
first to reach the food and to start their travel back to the nest. But then, when they must make a
decision between the short and the long branch, the higher level of pheromone on the short
branch biases their decision in its favor. Therefore, pheromone starts to accumulate faster on the
short branch which will eventually be used by the great majority of the ants.
3.2.1 DOUBLE BRIDGE EXPERIMENT.
Ant colonies can collectively perform tasks and make decisions that appear to require a high
degree of co-ordination among the workers: building a nest, feeding the brood, foraging for food,
and so on. In the example presented here, the ants collectively discover the shortest path to a
food source. In experiments with the ant Linepithema humile, a food source is separated from
the nest by a bridge with two branches. The longer branch is r times longer than the shorter
branch (Fig. 3.1a).
The shorter branch is selected by the colony in most experiments if r is sufficiently large (r
= 2 in Fig. 3.1b). This is because the ants lay and follow pheromone trails: individual ants lay a
chemical substance, a pheromone, which attracts other ants. The first ants returning to the nest
from the food source are those who take the shorter path twice (from the nest to the source and
back). At choice points 1 and 2, nest mates are recruited toward the shorter branch, which is the
first to be marked with pheromone. If, however, the shorter branch is presented to the colony
after the longer branch, the shorter path will not be selected because the longer branch is already
marked with pheromone.
This problem can be overcome in an artificial system, by introducing pheromone decay:
when the pheromone evaporates sufficiently quickly, it is more difficult to maintain a stable
pheromone trail on a longer path. The shorter branch can then be selected even if presented after
the longer branch. This property is particularly desirable in optimization, where one seeks to
avoid convergence toward mediocre solutions. In Linepithema humile, although pheromone
concentrations do decay, the lifetime of the trail pheromone is too long to allow such flexibility.
Fig 3.1a          Fig 3.1b
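The role of evaporation described above can be illustrated with a small mean-field sketch of the double-bridge setting; all numeric parameter values here are illustrative assumptions, not measured quantities:

```python
def double_bridge(r=2.0, rho=0.05, steps=1000):
    """Mean-field sketch of the double bridge (branch lengths 1 and r).
    Each step, a unit of ant traffic splits between the branches in
    proportion to their pheromone; a branch then receives pheromone
    inversely proportional to its length (shorter round trips complete
    more often), while all trails evaporate at rate rho."""
    tau_short, tau_long = 1.0, 1.0            # equal trails at the start
    for _ in range(steps):
        p_short = tau_short / (tau_short + tau_long)
        tau_short = (1 - rho) * tau_short + p_short          # deposit 1/1
        tau_long = (1 - rho) * tau_long + (1 - p_short) / r  # deposit 1/r
    return tau_short, tau_long
```

Starting from equal trails, the short branch is reinforced faster and evaporation erases the losing trail; had the short branch been presented only after the long one was marked, rho would need to be large enough for the old trail to fade.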


The double bridge was substituted by a graph, and pheromone trails by artificial pheromone
trails. Also, because we wanted artificial ants to solve problems more complicated than those
solved by real ants, we gave artificial ants some extra capacities, like a memory (used to
implement constraints and to allow the ants to retrace their path back to the nest without errors)
and the capacity of depositing a quantity of pheromone proportional to the quality of the solution
produced (a similar behavior is observed also in some real ant species, in which the quantity of
pheromone deposited while returning to the nest from a food source is proportional to the quality
of the food source found). In the next section we will see how, starting from AS, new algorithms
have been proposed that, although retaining some of the original biological inspiration, are less
and less biologically inspired and more and more motivated by the need of making ACO
algorithms competitive with, or better than, state-of-the-art algorithms.
Nevertheless, many aspects of the original Ant System remain: the need for a colony, the
role of autocatalysis, the cooperative behavior mediated by artificial pheromone trails, the
probabilistic construction of solutions biased by artificial pheromone trails and local heuristic
information, the pheromone updating guided by solution quality, and the evaporation of
pheromone trail, are present in all ACO algorithms. It is interesting to note that there is one well
known algorithm that, although making use in some way of the ant foraging metaphor, cannot be
considered an instance of the Ant Colony Optimization metaheuristic. This is HAS-QAP,
proposed in, where pheromone trails are not used to guide the solution construction phase; on the
contrary, they are used to guide modifications of complete solutions in a local search style. This
algorithm belongs nevertheless to ant algorithms, a new class of algorithms inspired by a number
of different behaviors of social insects. Ant algorithms are receiving increasing attention in the
scientific community as a promising novel approach to distributed control and optimization.
3.3 CONSTRUCTION ALGORITHMS.
Construction algorithms build solutions to a problem under consideration in an
incremental way starting with an empty initial solution and iteratively adding opportunely
defined solution components without backtracking until a complete solution is obtained. In the
simplest case, solution components are added in random order. Often better results are obtained
if a heuristic estimate of the myopic benefit of adding solution components is taken into account.
Greedy construction heuristics add at each step a solution component which achieves the
maximal myopic benefit as measured by some heuristic information. The function Greedy
Component returns the solution component e with the best heuristic estimate. Solutions returned
by greedy algorithms are typically of better quality than randomly generated solutions. Yet, a
disadvantage of greedy construction heuristics is that only a very limited number of solutions can
be generated. Additionally, greedy decisions in early stages of the construction process strongly
constrain the available possibilities at later stages, often determining very poor moves in the final
phases of the solution construction. As an example consider a greedy construction heuristic for
the traveling salesman problem (TSP).

In the TSP we are given a complete weighted graph G = (N,A) with N being the set of nodes,
representing the cities, and A the set of arcs fully connecting the nodes N. Each arc is assigned a
value dij, which is the length of arc (i, j) ∈ A. The TSP is the problem of finding a minimal
length Hamiltonian circuit of the graph, where a Hamiltonian circuit is a closed tour visiting
exactly once each of the n = |N| nodes of G. For symmetric TSPs, the distances between the
cities are independent of the direction of traversing the arcs, that is, dij = dji for every pair of
nodes. In the more general asymmetric TSP (ATSP), dij ≠ dji for at least one pair of nodes i, j.
A simple rule of thumb to build a tour is to start from some initial city and to always choose
to go to the closest still unvisited city before returning to the start city. This algorithm is known
as the nearest neighbor tour construction heuristic. Noteworthy in this example is that there are
a few very long links in the tour, leading to strongly suboptimal solutions. In fact,
construction algorithms are typically the fastest approximate methods, but the solutions they
generate often are not of a very high quality and they are not guaranteed to be optimal with
respect to small changes; the results produced by constructive heuristics can often be improved
by local search algorithms.
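The nearest neighbor rule of thumb above admits a direct sketch; `d(i, j)` is assumed to return the arc length dij:

```python
def nearest_neighbor_tour(cities, d, start):
    """Nearest neighbour tour construction for the TSP: from the current
    city, always move to the closest still unvisited city; the tour
    implicitly closes by returning to the start city."""
    tour = [start]
    unvisited = set(cities) - {start}
    while unvisited:
        here = tour[-1]
        nearest = min(unvisited, key=lambda j: d(here, j))  # myopic choice
        tour.append(nearest)
        unvisited.remove(nearest)
    return tour
```

The greedy choices made early can force a very long closing edge at the end, which is exactly the weakness discussed above.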
3.3.1 LOCAL SEARCH.
Local search algorithms start from a complete initial solution and try to find a better
solution in an appropriately defined neighborhood of the current solution. In its most basic
version, known as iterative improvement, the algorithm searches the neighborhood for an
improving solution. If such a solution is found, it replaces the current solution and the local
search continues. These steps are repeated until no improving neighbor solution is found
anymore in the neighborhood of the current solution and the algorithm ends in a local optimum.
The procedure Improve returns a better neighbor solution if one exists, otherwise it returns the
current solution, in which case the algorithm stops.
procedure Iterative Improvement (s ∈ S)
  s′ = Improve(s)
  while s′ ≠ s do
    s = s′
    s′ = Improve(s)
  end
  return s
end Iterative Improvement
Fig : 3.2 Algorithmic skeleton of iterative improvement

The choice of an appropriate neighborhood structure is crucial for the performance of the local
search algorithm and has to be done in a problem specific way. The neighborhood structure
defines the set of solutions that can be reached from s in one single step of a local search
algorithm. An example neighborhood for the TSP is the k-opt neighborhood in which neighbor
solutions differ by at most k arcs. The 2-opt algorithm systematically tests whether the current
tour can be improved by replacing two edges. To fully specify a local search algorithm, a
neighborhood examination scheme is needed that defines how the neighborhood is searched and
which neighbor solution replaces the current one. In the case of iterative improvement
algorithms, this rule is called the pivoting rule; examples are the best-improvement rule, which
chooses the neighbor solution giving the largest improvement of the objective function, and the
first-improvement rule, which uses the first improved solution found in the neighborhood to replace
the current one. A common problem with local search algorithms is that they easily get trapped
in local minima and that the result strongly depends on the initial solution.
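A minimal sketch of 2-opt iterative improvement for the TSP, accepting improving edge exchanges as they are found (a first-improvement style scan, chosen here for brevity) until a local optimum is reached:

```python
def tour_length(tour, d):
    """Length of the closed tour under distance function d."""
    return sum(d(tour[k], tour[(k + 1) % len(tour)]) for k in range(len(tour)))

def two_opt(tour, d):
    """2-opt iterative improvement: reversing a segment of the tour
    replaces two edges; apply any reversal that shortens the tour, and
    stop at a local optimum where no 2-exchange improves it."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(candidate, d) < tour_length(tour, d):
                    tour, improved = candidate, True
    return tour
```

On a unit square, the crossing tour 0-2-1-3 is repaired into the perimeter tour, illustrating how 2-opt removes crossing edges.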
3.4 ANT ALGORITHM.
We use the following notation. A combinatorial optimization problem is a problem defined
over a set C = {c1, ..., cn} of basic components. A subset S of components represents a solution of
the problem; F ⊆ 2^C is the subset of feasible solutions, so a solution S is feasible if and only if
S ∈ F. A cost function z is defined over the solution domain, z : 2^C → R, the objective being to
find a minimum cost feasible solution S*, i.e., to find S* such that S* ∈ F and z(S*) ≤ z(S), ∀S ∈ F. A set
of computational concurrent and asynchronous agents (a colony of ants) moves through states of
the problem corresponding to partial solutions of the problem to solve. They move by applying a
stochastic local decision policy based on two parameters, called trails and attractiveness. By
moving, each ant incrementally constructs a solution to the problem. When an ant completes a
solution, or during the construction phase, the ant evaluates the solution and modifies the trail
value on the components used in its solution. This pheromone information will direct the search
of the future ants.
Furthermore, an ACO algorithm includes two more mechanisms: trail evaporation and,
optionally, daemon actions. Trail evaporation decreases all trail values over time, in order to
avoid unlimited accumulation of trails over some component. Daemon actions can be used to
implement centralized actions which cannot be performed by single ants, such as the invocation
of a local optimization procedure, or the update of global information to be used to decide
whether to bias the search process from a non-local perspective. More specifically, an ant is a
simple computational agent, which iteratively constructs a solution for the instance to solve.
Partial problem solutions are seen as states. At the core of the ACO algorithm lies a loop, where
at each iteration, each ant moves (performs a step) from a state ι to another one ψ, corresponding
to a more complete partial solution. That is, at each step σ, each ant k computes a set A^k_σ(ι) of
feasible expansions to its current state, and moves to one of these probabilistically.


The probability distribution is specified as follows. For ant k, the probability pιψk of moving
from state ι to state ψ depends on the combination of two values:
 The attractiveness ηιψ of the move, as computed by some heuristic indicating the a priori
desirability of that move.
 The trail level τιψ of the move, indicating how proficient it has been in the past to make
that particular move; it represents therefore an a posteriori indication of the desirability of
that move.
Trails are usually updated when all ants have completed their solution, increasing or decreasing
the level of trails corresponding to moves that were part of "good" or "bad" solutions,
respectively.
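In the random-proportional rule used by Ant System, the two values are combined as follows, where α and β weigh trail against attractiveness (matching the parameters listed below) and the sum runs over the moves feasible for ant k:

```latex
p^{k}_{\iota\psi} =
  \frac{[\tau_{\iota\psi}]^{\alpha}\,[\eta_{\iota\psi}]^{\beta}}
       {\sum_{(\iota,\zeta)\ \text{feasible for } k}
        [\tau_{\iota\zeta}]^{\alpha}\,[\eta_{\iota\zeta}]^{\beta}}
```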
The ant system simply iterates a main loop where m ants construct their solutions in parallel,
thereafter updating the trail levels. The performance of the algorithm depends on the correct
tuning of several parameters, namely: α and β, the relative importance of trail and attractiveness;
ρ, the trail persistence; τij(0), the initial trail level; m, the number of ants; and Q, a constant that
scales the amount of trail laid, so that high quality (low cost) solutions deposit more pheromone.
The algorithm is the following.
1. {Initialization}
   Initialize τιψ and ηιψ, ∀(ι,ψ).
2. {Construction}
   For each ant k (currently in state ι) do
      repeat
         choose the state to move into probabilistically
         append the chosen move to the k-th ant's tabu list tabu_k
      until ant k has completed its solution
   end for
3. {Trail update}
   For each ant move (ι,ψ) do
      compute Δτιψ
      update the trail matrix
   end for
4. {Terminating condition}
   If not(end test) go to step 2.
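The four steps above can be sketched in Python. This is a minimal illustration, not the report's implementation: the tour encoding, the parameter defaults, and the convention that a solution's cost is the sum of inverse attractivenesses (so that η = 1/d recovers tour length) are all assumptions.

```python
import random

def ant_system(n_states, attractiveness, n_ants=10, n_iter=50,
               alpha=1.0, beta=2.0, rho=0.5, q=1.0, seed=0):
    """Skeleton of the Ant System main loop (steps 1-4 above).

    A solution is a permutation of the states 0..n_states-1; the cost of a
    tour is the sum of inverse attractivenesses along its edges."""
    rng = random.Random(seed)
    # 1. Initialization: uniform trail levels tau; eta (attractiveness) is given.
    tau = [[1.0] * n_states for _ in range(n_states)]
    best_tour, best_cost = None, float("inf")

    def cost(tour):
        return sum(1.0 / attractiveness[tour[i]][tour[(i + 1) % n_states]]
                   for i in range(n_states))

    for _ in range(n_iter):
        tours = []
        # 2. Construction: each ant builds a complete tour, keeping a tabu list.
        for _ in range(n_ants):
            current = rng.randrange(n_states)
            tabu = [current]
            while len(tabu) < n_states:
                feasible = [s for s in range(n_states) if s not in tabu]
                weights = [(tau[current][s] ** alpha) *
                           (attractiveness[current][s] ** beta)
                           for s in feasible]
                current = rng.choices(feasible, weights=weights)[0]
                tabu.append(current)
            tours.append(tabu)
        # 3. Trail update: evaporation (rho = trail persistence), then deposits.
        for i in range(n_states):
            for j in range(n_states):
                tau[i][j] *= rho
        for tour in tours:
            c = cost(tour)
            if c < best_cost:
                best_tour, best_cost = tour, c
            for i in range(n_states):
                a, b = tour[i], tour[(i + 1) % n_states]
                tau[a][b] += q / c  # Q ties high-quality solutions to low cost
        # 4. Terminating condition: here, a fixed iteration budget.
    return best_tour, best_cost
```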






CHAPTER 4
APPLICATION OF ACO.

4.1 TRAVELLING SALESMAN PROBLEM (TSP).
The TSP is a very important problem in the context of Ant Colony Optimization because it
is the problem to which the original AS was first applied, and it has since often been used as a
benchmark to test new ideas and algorithmic variants.
OBJECTIVE:
Given a set of n cities, the Traveling Salesman Problem requires a salesman to find the
shortest route between the given cities and return to the starting city, while keeping in mind that
each city can be visited only once.

4.1.1 ANT COLONY OPTIMIZATION IN TSP.
The metaheuristic Ant Colony Optimization (ACO) is an optimization approach that has
been used successfully to solve many NP-hard optimization problems. ACO algorithms are a
very interesting approach for finding minimum cost paths in graphs, especially when the
connection costs in the graphs can change over time, i.e. when the problems are dynamic.
Artificial ants have been successfully used to solve the (conventional) Traveling Salesman
Problem (TSP), as well as other NP-hard optimization problems, including applications in
quadratic assignment and vehicle routing.
The algorithm is based on the fact that ants are always able to find the shortest path between
the nest and the food sources, using information from the pheromones previously laid on the
ground by other ants in the colony. When an ant is searching for the nearest food source and
arrives at several possible trails, it tends to choose the trail with the largest concentration of
pheromone, with a certain probability p. After choosing a trail, it deposits pheromone of its own,
increasing the concentration on that trail. The ants return to the nest along the same path,
depositing another portion of pheromone on the way back. Imagine, then, two ants at the same
location choosing two different trails at the same time. The pheromone concentration on the
shorter path will increase faster than on the other: the ant that chooses this path deposits more
pheromone in a smaller period of time, because it returns earlier. If a whole colony of thousands
of ants follows this behaviour, soon the concentration of pheromone on the shortest path will be
much higher than on the other paths. The probability of choosing any other path then becomes
very small, and only very few ants in the colony will fail to follow the shortest path.
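The positive-feedback mechanism described above can be illustrated with a tiny deterministic mean-field model: at each time step the ant flow splits between two trails in proportion to their pheromone, and each trail receives pheromone at a rate inversely proportional to its length (the shorter round trip deposits more often per unit of time). The constants, the evaporation factor rho, and the function name are illustrative assumptions, not values from the report.

```python
def two_path_pheromone(steps=50, len_short=1.0, len_long=2.0, rho=0.95):
    """Mean-field sketch of the double-bridge mechanism described above."""
    pher_short, pher_long = 1.0, 1.0  # both trails start equally marked
    for _ in range(steps):
        total = pher_short + pher_long
        # fraction of ants on each trail = pheromone share; deposit rate = 1/length
        pher_short = rho * pher_short + (pher_short / total) / len_short
        pher_long = rho * pher_long + (pher_long / total) / len_long
    return pher_short, pher_long
```

After a few steps the shorter trail's pheromone pulls ahead, and the gap widens at every iteration, which is exactly the lock-in effect described in the text.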


There is another phenomenon related to the pheromone concentration: since it is a chemical
substance, it tends to evaporate, so the concentration of pheromone vanishes over time.
In this way, the concentration on the less used paths becomes much lower than on the most
used ones, not only because the concentration increases on the other paths, but also because its
own concentration decreases.

Fig 4.1

The traveling salesman problem plays a central role in ant colony optimization because it
was the first problem to be attacked by these methods. Among the reasons the TSP was chosen
we find the following ones: (i) it is relatively easy to adapt the ant colony metaphor to it, (ii) it is
a very difficult problem (NP-hard), (iii) it is one of the most studied problems in combinatorial
optimization, and (iv) it is very easy to state and explain, so that the algorithm behavior is not
obscured by too many technicalities.
Ant System (AS) can be informally described as follows. A number m of ants is positioned
in parallel on m cities. The ants' start state, that is, the start city, can be chosen randomly, and the
memory of each ant k is initialized by adding the current start city to the set of already visited
cities (initially empty). Ants then enter a cycle which lasts NC iterations, that is, until each ant
has completed a tour. During each step an ant located on node i considers the feasible
neighborhood, reads the entries a_ij of the ant-routing table A_i of node i, computes the
transition probabilities, applies its decision rule to choose the city to move to, moves to the new
city, and updates its memory.
Once ants have completed a tour (which happens synchronously, given that during each
iteration of the while loop each ant adds a new city to the tour under construction), they use their
memory to evaluate the built solution, retrace the same tour backward, and increase the intensity
of the pheromone trails τij of visited connections lij. This has the effect of making the visited
connections more desirable for future ants. Then the ants die, freeing all the allocated resources.
In AS all the ants deposit pheromone and no problem-specific daemon actions are performed.
The triggering of pheromone evaporation happens after all ants have completed their tours. Of
course, it would be easy to add a local optimization daemon action, like a 3-opt procedure; this
has been done in most of the ACO algorithms for the TSP that have been developed as
extensions of AS.



4.1.2 FLOW CHART FOR TSP USING ACO.











Fig: 4.2





4.1.3 ANT COLONY OPTIMIZATION ALGORITHM

procedure Ant colony algorithm
   Set for every pair (i, j): τij = τmax
   Place the g ants
   For i = 1 to N
      Build a complete tour
      For j = 1 to m
         For k = 1 to g
            Choose the next node using p_ij^k in (2)
            Update the tabu list T
         End
      End
      Analyze solutions
      For k = 1 to g
         Compute performance index f_k
         Update globally τij(t + m × g) using (3)
      End
   End

The Euclidean distance dij between two locations is used as heuristic. However, within a city,
the traveling time tij between two machines is more relevant than distance, due to traffic
conditions. Therefore, the heuristic function is given by ηij = (tij − tmin)/(tmax − tmin), where
tij is the estimated traveling time between location i and location j, and tmin = min tij and
tmax = max tij are the minimum and maximum travelling times considered. In this way, the
entries of the heuristic matrix η are always restricted to the interval [0, 1]. The objective function
to minimize, fk(t), is simply the sum of the travelling times tij over all the visited locations.
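The min-max normalization above is mechanical; here is a minimal sketch. The function name and the guard for an all-equal time matrix are my own additions, and only off-diagonal entries are normalized since the diagonal is not a real move.

```python
def normalized_heuristic(times):
    """Min-max normalization of a travelling-time matrix, as in the text:
    eta_ij = (t_ij - t_min) / (t_max - t_min), restricted to [0, 1]."""
    # Collect off-diagonal travelling times only.
    flat = [t for i, row in enumerate(times) for j, t in enumerate(row) if i != j]
    t_min, t_max = min(flat), max(flat)
    span = (t_max - t_min) or 1.0  # guard: all times equal -> all zeros
    return [[(t - t_min) / span if i != j else 0.0
             for j, t in enumerate(row)] for i, row in enumerate(times)]
```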

4.1.4 RULES FOR TRANSITION PROBABILITY

1. Whether or not a city has been visited: each ant keeps a memory (tabu list) of the cities
already visited, which defines the set of cities still to be visited.

2. Visibility ηij = 1/dij: the heuristic desirability of choosing city j when in city i.



3. Pheromone trail τij: this is a global type of information.

The transition probability for ant k to go from city i to city j while building its route is

   p_ij^k(t) = [τij(t)]^α [ηij]^β / Σ_{l ∈ allowed_k} [τil(t)]^α [ηil]^β   if j ∈ allowed_k,
   and 0 otherwise,

where:
 α = 0: the closest cities are selected
 the trail visibility is ηij = 1/dij
 α weighs the trail intensity in the probabilistic transition
 β weighs the visibility of the trail segment
 ρ is the trail persistence (1 − ρ is the evaporation rate)
 τij is the trail intensity
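The random-proportional rule above can be computed directly for one decision step. This is a sketch under assumed data structures (list-of-lists matrices, `allowed` as the list of unvisited cities); the function name is mine.

```python
def transition_probabilities(tau, eta, current, allowed, alpha=1.0, beta=2.0):
    """Random-proportional rule: p_ij = tau_ij^alpha * eta_ij^beta,
    normalized over the allowed (not yet visited) cities; 0 elsewhere."""
    weights = {j: (tau[current][j] ** alpha) * (eta[current][j] ** beta)
               for j in allowed}
    total = sum(weights.values())
    return {j: w / total for j, w in weights.items()}
```

An ant would then sample its next city from the returned distribution; cities outside `allowed` implicitly have probability 0, matching the tabu-list rule.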


4.1.5 TSP APPLICATION
• Lots of practical applications
• Routing, such as in trucking, delivery, and UAVs; manufacturing routing, such as movement of
parts along the manufacturing floor or the amount of solder on a circuit board
• Network design such as determining the amount of cabling required
• Two main types
– Symmetric
– Asymmetric

4.1.6 TSP HEURISTICS
• A variety of heuristics has been used to solve the TSP
• The TSP is not only theoretically difficult; it is also difficult in practical application, since the
tour-breaking constraints get quite numerous
• As a result, a variety of methods have been proposed for the TSP
• Nearest Neighbour is a typical greedy approach



4.1.7 ADVANTAGES
• Positive feedback accounts for rapid discovery of good solutions
• Distributed computation avoids premature convergence
• The greedy heuristic helps find acceptable solutions in the early stages of the search process
• The collective interaction of a population of agents

4.2 QUADRATIC ASSIGNMENT PROBLEM

Fig :4.3
The quadratic assignment problem (QAP) is one of the fundamental combinatorial
optimization problems in the branch of optimization or operations research in mathematics, from
the category of facilities location problems.
There are a set of n facilities and a set of n locations. For each pair of locations, a distance is
specified, and for each pair of facilities a weight or flow is specified (e.g., the amount of supplies
transported between the two facilities). The problem is to assign all facilities to different
locations with the goal of minimizing the sum of the distances multiplied by the corresponding
flows.
The formal definition of the quadratic assignment problem is:
Given two sets, P ("facilities") and L ("locations"), of equal size, together with a weight
function w : P × P → R and a distance function d : L × L → R, find
the bijection f : P → L ("assignment") such that the cost function

   Σ_{a,b ∈ P} w(a, b) · d(f(a), f(b))

is minimized.
Usually the weight and distance functions are viewed as square real-valued matrices, so that
the cost function is written as

   C(π) = Σ_{i,j=1}^{n} f_{i,j} · d_{π(i),π(j)}

The problem is:
• Assign n activities to n locations (campus and mall layout).
• D = [d_{i,j}]_{n×n}, the distance from location i to location j
• F = [f_{h,k}]_{n×n}, the flow from activity h to activity k
• An assignment is a permutation π
• Minimize C(π)
• It's NP-hard

QAP Example Fig:4.4
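The matrix form of the objective can be evaluated directly for a candidate assignment. In this sketch `perm[i]` encodes the location assigned to activity i, an assumed convention; the function name is mine.

```python
def qap_cost(dist, flow, perm):
    """QAP objective: C(pi) = sum_{i,j} flow[i][j] * dist[perm[i]][perm[j]],
    where perm[i] is the location assigned to activity i."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))
```

Minimizing over all n! permutations is what makes the problem NP-hard; ACO searches this permutation space heuristically instead.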



4.2.1 ALGORITHMS

1) Ant System for the QAP: Ant System (AS) uses ants in order to construct a solution from
scratch. The algorithm uses heuristic information on the potential quality of a local assignment,
determined as follows: two vectors d and f are computed, whose components are the sums of the
distances (resp. flows) from location (resp. facility) i to all other locations (resp. facilities). This
leads to a coupling matrix E = f · d^T, where e_ij = f_i · d_j. Thus, η_ij = 1/e_ij denotes the
heuristic desirability of assigning facility i to location j. A solution is constructed using both this
heuristic information and the information provided by previous ants through pheromones. At
each step, ant k assigns the next still unassigned facility i to a location j belonging to the feasible
neighbourhood of node i, i.e. the locations that are still free, with a probability p_ij^k given by
the random-proportional rule.

After their run, the ants update the pheromone information τij.

This algorithm uses an evaporation rate of (1 − ρ) in order to forget previous bad choices, at the
cost of losing useful information too. Δτ_ij^k is the amount of pheromone ant k deposits on the
edge (i, j).

The algorithm parameters are the number of ants n, the weights α and β given to heuristic and
pheromone information, and the maximal amount of laid pheromone Q.




2) The MAX−MIN Ant System: MAX−MIN Ant System (MMAS) is an improvement over
AS that allows only one ant to add pheromone. The pheromone trails are initialized to the upper
trail limit, which causes higher exploration at the start of the algorithm.
Finally, methods to increase the diversification of the search can be used, for example by
reinitializing the pheromone trails to τmax if the algorithm makes no progress. In MMAS, ant k
assigns facility i to location j with a probability p_ij^k. MMAS does not use any heuristic
information but is coupled with a local search for every ant.

4.2.2 SIMPLIFIED CRAFT (QAP)
Simplification: assume all departments have equal size.
Notation:
 d_ij — distance between locations i and j
 f_kh — travel frequency between departments k and h
 x_ki = 1 if department k is assigned to location i, 0 otherwise
Example:


Fig:4.5






Fig: 4.6a


Fig: 4.6b
4.2.3 ANT SYSTEM (AS-QAP)
Constructive method:
step 1: choose a facility j
step 2: assign it to a location i
Characteristics:
– each ant leaves a trace (pheromone) on the chosen couplings (i, j)
– the assignment depends on a probability (a function of the pheromone trail and a heuristic
information)



Example matrix (rows and columns indexed 1–4):

    1  2  3  4
 1  -  1  1  2
 2  1  -  2  1
 3  1  2  -  1
 4  2  1  1  -



– already coupled locations and facilities are inhibited (tabu list)

Heuristic information: the flow potentials f_i (row sums of the flow matrix F) and the distance
potentials d_j (row sums of the distance matrix D). In the example:

 F = | 0  50 20 10 |   f = (80, 130, 110, 120)
     | 50 0  30 50 |
     | 20 30 0  60 |
     | 10 50 60 0  |

 D = | 0 6 5 3 |   d = (14, 12, 10, 6)
     | 6 0 4 2 |
     | 5 4 0 1 |
     | 3 2 1 0 |

The coupling matrix E = f · d^T:

 E = | 1120 960  800  480 |
     | 1820 1560 1300 780 |
     | 1540 1320 1100 660 |
     | 1680 1440 1200 720 |

Ants choose the location according to the heuristic desirability ("potential goodness")
s_ij = 1/e_ij.
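The potential vectors and the coupling matrix E = f · d^T can be computed mechanically; the helper below is a sketch with an assumed list-of-lists matrix representation.

```python
def coupling_matrix(flow, dist):
    """Compute flow potentials f_i (row sums of F), distance potentials d_j
    (row sums of D), and the outer product E = f * d^T with e_ij = f_i * d_j."""
    f = [sum(row) for row in flow]   # flow potential of each facility
    d = [sum(row) for row in dist]   # distance potential of each location
    return [[fi * dj for dj in d] for fi in f]
```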






4.2.4 AS-QAP CONSTRUCTING THE SOLUTION.
Facilities are ranked in decreasing order of their flow potentials. Ant k assigns facility i to
location j with the probability given by

   p_ij^k(t) = [τij(t)]^α [ηij]^β / Σ_{l ∈ N_i^k} [τil(t)]^α [ηil]^β   if j ∈ N_i^k,

where N_i^k is the feasible neighborhood of node i (the locations still free for ant k).
 When ant k chooses to assign facility i to location j, it leaves a substance, called trace
("pheromone"), on the coupling (i, j).
 This is repeated until the entire assignment is found.



4.2.5 AS-QAP PHEROMONE UPDATE.
The pheromone trail update is applied to all couplings:

   τij(t+1) = ρ · τij(t) + Σ_{k=1}^{m} Δτ_ij^k

where Δτ_ij^k is the amount of pheromone ant k puts on the coupling (i, j):

   Δτ_ij^k = Q / J_ψ^k   if facility i is assigned to location j in the solution of ant k,
   and 0 otherwise,

where Q is the amount of pheromone deposited by ant k and J_ψ^k is the objective function
value of ant k's solution.
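The update above can be sketched directly. The encoding of a solution as a permutation (`perm[facility] = location`) and the parameter defaults are assumptions for illustration.

```python
def update_pheromone(tau, solutions, costs, rho=0.9, Q=100.0):
    """AS-QAP trail update: tau_ij(t+1) = rho * tau_ij(t) + sum_k delta_k,
    where ant k deposits Q / J_k on each coupling (facility i, location j)
    appearing in its solution."""
    n = len(tau)
    # Evaporation: keep a fraction rho of every trail.
    new = [[rho * tau[i][j] for j in range(n)] for i in range(n)]
    # Deposits: better (cheaper) solutions deposit more pheromone.
    for perm, J in zip(solutions, costs):
        for facility, location in enumerate(perm):
            new[facility][location] += Q / J
    return new
```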
4.2.6 NETWORK MODEL
Routing is the process of selecting paths in a network along which to send network traffic.
Routing is performed for many kinds of networks, including the telephone network, electronic
data networks (such as the Internet), and transportation networks. This section is concerned
primarily with routing in electronic data networks using packet switching technology.
In packet switching networks, routing directs packet forwarding, the transit of logically
addressed packets from their source toward their ultimate destination through
intermediate nodes; typically hardware devices called routers, bridges, gateways, firewalls,
or switches. General-purpose computers with multiple network cards can also forward packets
and perform routing, though they are not specialized hardware and may suffer from limited
performance. The routing process usually directs forwarding on the basis of routing tables which
maintain a record of the routes to various network destinations. Thus, constructing routing tables,
which are held in the routers' memory, is very important for efficient routing. Most routing
algorithms use only one network path at a time, but multipath routing techniques enable the use
of multiple alternative paths.
Routing task is performed by Routers.
Routers use “Routing Tables” to direct the data.






Fig: 4.7
4.2.7 PROBLEM STATEMENT.
- Dynamic routing: at any moment the pathway of a message must be as short as possible
(traffic conditions and the structure of the network are constantly changing).
- Load balancing: distribute the changing load over the system and minimize lost calls.
4.2.8 ALGORITHM
Increase the probability of the visited link by:

   p = (p_old + Δp) / (1 + Δp)

Decrease the probability of the others by:

   p = p_old / (1 + Δp)

where Δp = f(1/age), i.e. the reinforcement is a function of the ant's age (the time it took
to traverse the path).
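These two update rules keep each routing-table row a valid probability distribution, since the increase on the visited link and the decrease on the others share the same normalizer. A minimal sketch, with a dictionary standing in for one row of a routing table (the representation is an assumption):

```python
def reinforce(probs, visited, delta):
    """AntNet-style update: the visited link's probability becomes
    (p + delta)/(1 + delta); every other link's becomes p/(1 + delta).
    The row still sums to 1 because sum(p) = 1 before the update."""
    return {link: ((p + delta) if link == visited else p) / (1 + delta)
            for link, p in probs.items()}
```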






Fig: 4.8a




Fig: 4.8b

4.3 VEHICLE ROUTING PROBLEM WITH TIME WINDOWS (VRPTW).
The vehicle routing problem (VRP) is a combinatorial and integer programming problem seeking
to service a number of customers with a fleet of vehicles. Proposed by Dantzig and Ramser in
1959, the VRP is an important problem in the fields of transportation, distribution and
logistics. Often the context is that of delivering goods located at a central depot to customers
who have placed orders for such goods. Implicit is the goal of minimizing the cost of distributing
the goods. Many methods have been developed for searching for good solutions to the problem,
but for all but the smallest instances, finding the global minimum of the cost function
is computationally complex.
Objective Functions to Minimize
• Total travel distance
• Total travel time
• Number of vehicles
Subject to:
• Vehicles (number, capacity, time on road, trip length)
• Depots (number)
• Customers (demands, time windows)









Vehicle Routing Problem with Time Windows (VRPTW):




Fig: 4.9

4.3.1 SIMPLE ALGORITHM
 Place ants on depots (number of depots = number of vehicles).
 Probabilistic choice based on:
   ~ visibility (1/distance), the customer demand d_i, and the vehicle capacity Q
   ~ the amount of pheromone
 If all unvisited customers lead to an infeasible solution: select the depot as the next customer.
 Improve by local search.
 Only the best ants update the pheromone trail.
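One construction step of the simple rule above can be sketched as follows. This is an illustration under simplifying assumptions stated in the comments: capacity is the only feasibility constraint (time windows are omitted), and the function name and signature are mine.

```python
import random

def next_stop(current, unvisited, dist, demand, load_left, tau,
              depot=0, beta=1.0, rng=None):
    """Choose the next customer with weight tau_ij * (1/d_ij)^beta among
    capacity-feasible customers; if no remaining customer fits the residual
    capacity, return to the depot (time windows omitted for brevity)."""
    rng = rng or random.Random(0)
    feasible = [c for c in unvisited if demand[c] <= load_left]
    if not feasible:
        return depot  # infeasible to continue: the vehicle returns to the depot
    weights = [tau[current][c] * (1.0 / dist[current][c]) ** beta
               for c in feasible]
    return rng.choices(feasible, weights=weights)[0]
```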






4.3.2 MULTIPLE ACS FOR VRPTW.


Fig: 4.10

4.3.3 SINGLE MACHINE TOTAL WEIGHTED TARDINESS SCHEDULING
PROBLEM (SMTWTP).
In the SMTWTP, n jobs have to be processed sequentially without interruption on
a single machine. Each job has an associated processing time pj, a weight wj, and a due date dj,
and all jobs are available for processing at time zero. The tardiness of job j is defined as Tj =
max{0, Cj − dj}, where Cj is its completion time in the current job sequence. The goal in the
SMTWTP is to find a job sequence which minimizes the sum of the weighted tardiness,
Σ_{j=1}^{n} wj · Tj. For the ACO application to the SMTWTP, the set of components C is the
set of all jobs. As in the TSP case, the states of the problem are all possible partial sequences. In
the SMTWTP case we do not have explicit costs associated with the connections, because the
objective function contribution of each job depends on the partial solution constructed so far.
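The objective defined above is straightforward to evaluate for a given sequence; a minimal sketch (the function name and list-based job encoding are assumptions):

```python
def total_weighted_tardiness(sequence, p, w, d):
    """Sum of w_j * T_j with T_j = max(0, C_j - d_j), where C_j is the
    completion time of job j in the given sequence."""
    t, total = 0, 0
    for j in sequence:
        t += p[j]                        # completion time C_j of job j
        total += w[j] * max(0, t - d[j])  # weighted tardiness contribution
    return total
```

Note how the contribution of each job depends on the jobs scheduled before it (through t), which is exactly why no fixed connection costs exist for this problem.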



The SMTWTP was attacked using ACS (ACS-SMTWTP). In ACS-SMTWTP, each ant
starts with an empty sequence and then iteratively appends an unscheduled job to the partial
sequence constructed so far.

Each ant chooses the next job using the pseudo-random-proportional action choice rule, where
at each step the feasible neighborhood N_i^k of ant k is formed by the still unscheduled jobs.
Pheromone trails are defined as follows: τij refers to the desirability of scheduling job j at
position i. This definition of the pheromone trails is, in fact, used in most ACO applications to
scheduling problems.
Concerning the heuristic information, the use of three priority rules allowed defining
three different types of heuristic information for the SMTWTP. The investigated priority rules
were: (i) the earliest due date rule (EDD), which puts the jobs in non-decreasing order of the due
dates dj; (ii) the modified due date rule (MDD), which puts the jobs in non-decreasing order of
the modified due dates given by mddj = max{C + pj, dj} [2], where C is the sum of the
processing times of the already sequenced jobs; and (iii) the apparent urgency rule (AU), which
puts the jobs in non-decreasing order of the apparent urgency, given by auj = (wj/pj) ·
exp(−(max{dj − Cj, 0})/kp), where k is a parameter of the priority rule. In each case, the
heuristic information was defined as ηij = 1/hj, where hj is either dj, mddj, or auj, depending on
the priority rule used. The global and the local pheromone update are done as in standard ACS,
where in the global pheromone update Tgb is the total weighted tardiness of the global best
solution.
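The MDD and AU rules above are simple formulas; here is a sketch. In the AU rule the report writes the denominator as "kp"; interpreting the p there as an average processing time p_bar is an assumption on my part, as are the function names.

```python
import math

def mdd(C, p_j, d_j):
    """Modified due date: mdd_j = max(C + p_j, d_j), with C the total
    processing time of the already sequenced jobs."""
    return max(C + p_j, d_j)

def apparent_urgency(C_j, p_j, w_j, d_j, k, p_bar):
    """Apparent urgency: au_j = (w_j/p_j) * exp(-max(d_j - C_j, 0)/(k*p_bar)).
    p_bar as the average processing time is an assumed reading of 'kp'."""
    return (w_j / p_j) * math.exp(-max(d_j - C_j, 0) / (k * p_bar))

def heuristic(h_j):
    """Heuristic information as defined in the text: eta_ij = 1/h_j."""
    return 1.0 / h_j
```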
ACS-SMTWTP was combined with a powerful local search algorithm. The final ACS
algorithm was tested on a benchmark set available from ORLIB. Within the computation time
limits, ACS reached very good performance and could find, in every single run, the optimal or
best-known solutions on all instances of the benchmark set.










CHAPTER 5
CONCLUSION.

5.1 ADVANTAGES.
- For the Traveling Salesman Problem (TSP), relatively efficient:
 for a small number of nodes, TSPs can be solved by exhaustive search
 for a large number of nodes, TSPs are very computationally difficult to solve
(NP-hard): exponential time to convergence
 performs well against other global optimization techniques for the TSP (neural
nets, genetic algorithms, simulated annealing)
- Compared to GAs (Genetic Algorithms):
 Retains memory of entire colony instead of previous generation only
 Less affected by poor initial solutions (due to combination of random path
selection and colony memory).
- Can be used in dynamic applications (adapts to changes such as new distances, etc.)
 Has been applied to a wide variety of applications
 As with GAs, good choice for constrained discrete problems (not a gradient-
based algorithm)
5.2 DISADVANTAGES.
- Theoretical analysis is difficult:
 Due to sequences of random decisions (not independent)
 Probability distribution changes by iteration
 Research is experimental rather than theoretical
- Convergence is guaranteed, but time to convergence uncertain
- Tradeoffs in evaluating convergence:
 In NP-hard problems, need high-quality solutions quickly –
focus is on quality of solutions
 In dynamic network routing problems, need solutions for
changing conditions – focus is on effective evaluation of
alternative paths
- Coding is somewhat complicated, not straightforward


 Pheromone “trail” additions/deletions, global updates and local
updates
 Large number of different ACO algorithms to exploit different
problem characteristics
5.3 SCOPE OF SEMINAR.
In this paper we have given a formal description of the Ant Colony Optimization
metaheuristic, as well as of the class of problems to which it can be applied. We have then
described two paradigmatic applications of ACO algorithms, to the traveling salesman problem
and to routing in packet-switched networks, and concluded with a short overview of the
currently available applications. Ongoing work follows three main directions: the study of the
formal properties of a simplified version of ant system, the development of AntNet for Quality of
Service applications, and the development of further applications to combinatorial optimization
problems. Since the ant colony algorithm may produce redundant states in the graph, it is better
to minimize such graphs to enhance the behavior of the induced system. A colony of ants moves
through system states X by applying genetic operations. By moving, each ant incrementally
constructs a solution to the problem. When an ant completes a solution, or during the
construction phase, the ant evaluates the solution and modifies the trail value on the components
used in its solution. Ants deposit a certain amount of pheromone on the components, that is,
either on the vertices or on the edges that they traverse. The amount of pheromone deposited
may depend on the quality of the solution found. Subsequent ants use the pheromone information
as a guide toward promising regions of the search space. Ants adaptively modify the way the
problem is represented and perceived by other ants, but they are not adaptive themselves. The
genetic programming paradigm permits the evolution of computer programs which can perform
alternative computations conditioned on the outcome of intermediate calculations, which can
perform computations on variables of many different types, which can perform iterations and
recursions to achieve the desired result, which can define and subsequently use computed values
and sub-programs, and whose size, shape, and complexity are not specified in advance.
ACO is a recently proposed metaheuristic approach for solving hard combinatorial optimization
problems. Artificial ants implement a randomized construction heuristic which makes
probabilistic decisions. The accumulated search experience is taken into account by the
adaptation of the pheromone trail. ACO shows great performance on “ill-structured” problems
like network routing, and in ACO local search is extremely important for obtaining good results.
The field of ACO algorithms is very lively, as testified for example by the successful
biannual workshop (ANTS – From Ant Colonies to Artificial Ants: A Series of International
Workshops on Ant Algorithms) where researchers meet to discuss the properties of ACO and
other ant algorithms, both theoretically and experimentally.


From the theory side, researchers are trying either to extend the scope of existing theoretical
results, or to find principled ways to set parameter values. From the experimental side, most
current research is in the direction of increasing the number of problems that are successfully
solved by ACO algorithms, including real-world, industrial applications.
Currently, the great majority of problems attacked by ACO are static and well-defined
combinatorial optimization problems, that is, problems for which all the necessary information is
available and does not change during problem solution. For this kind of problem, ACO
algorithms must compete with very well established algorithms, often specialized for the given
problem. Also, very often the role played by local search is extremely important for obtaining
good results.
Although rather successful on these problems, we believe that ACO algorithms will really
demonstrate their strength when they are systematically applied to “ill-structured” problems for
which it is not clear how to apply local search, or to highly dynamic domains with only local
information available. A first step in this direction has already been taken with the application to
routing in telecommunications networks, but more research is necessary.















BIBLIOGRAPHY

 Dorigo M. and G. Di Caro (1999). The Ant Colony Optimization Meta-Heuristic. In D.
Corne, M. Dorigo and F. Glover, editors, New Ideas in Optimization, McGraw-Hill, 11–32.
 M. Dorigo and L. M. Gambardella. Ant colonies for the travelling salesman problem.
BioSystems, 43:73–81, 1997.
 M. Dorigo and L. M. Gambardella. Ant Colony System: A cooperative learning approach
to the travelling salesman problem. IEEE Transactions on Evolutionary Computation,
1(1):53–66, 1997.
 G. Di Caro and M. Dorigo. Mobile agents for adaptive routing. In H. El-Rewini, editor,
Proceedings of the 31st International Conference on System Sciences (HICSS-31), pages
74–83. IEEE Computer Society Press, Los Alamitos, CA, 1998.
 M. Dorigo, V. Maniezzo, and A. Colorni. The Ant System: An autocatalytic optimizing
process. Technical Report 91-016, Revised, Dipartimento di Elettronica, Politecnico di
Milano, Italy, 1991.
 L. M. Gambardella, É. D. Taillard, and G. Agazzi. MACS-VRPTW: A multiple ant
colony system for vehicle routing problems with time windows. In D. Corne, M. Dorigo,
and F. Glover, editors, New Ideas in Optimization, pages 63–76. McGraw-Hill, London,
UK, 1999.
 L. M. Gambardella, É. D. Taillard, and M. Dorigo. Ant colonies for the quadratic
assignment problem. Journal of the Operational Research Society, 50(2):167–176, 1999.
 V. Maniezzo and A. Colorni. The Ant System applied to the quadratic assignment
problem. IEEE Transactions on Data and Knowledge Engineering, 11(5):769–778, 1999.





