The ant colony algorithm is inspired by the behavior of real ants. It provides an original approach to solving combinatorial optimization problems. We present an industrial application of this method, in the context of constraint programming, focused on solving vehicle routing problems.
10.1 Introduction
One of the main concerns of companies is to improve the effectiveness of their logistic chain, to be able to organize a better service at a lower cost and to maintain the flow of their goods. A fundamental component of any logistic system is therefore the planning of distribution networks served by fleets of vehicles. In recent times, significant research efforts have been devoted to the modeling of such problems and to the implementation of suitable algorithms to solve them. A fruitful collaboration between specialists in mathematical programming and combinatorial optimization on one side and transport managers on the other has resulted in a great number of successful implementations of software for vehicle routing optimization. The interest in quantitative methods for the optimization of transport activities is obvious, as distribution costs are paramount. Practical applications of this type of problem include public transportation, newspaper distribution, garbage collection, fuel delivery, distribution of products to department stores, mail delivery and preventive supervision of road inspections. However, vehicle routing problems can be ex-
308 10 Constraint Programming and Ant Colonies
Capacity constraints
These express the fact that a vehicle cannot transport more than the limit of its capacity (in weight, volume, . . . ), i.e. the quantity of goods carried at any point of the tour must not exceed a certain value.
The total cost of the tours is calculated as the sum of the costs of each vehicle.
Generally, the cost of a vehicle is the linear combination of the total distance
traveled on the tour and the duration of the tour. A fixed cost can be added
when the objective is also to minimize the number of vehicles.
In this chapter, we will focus our attention on the “Pickup and Delivery Problem” (PDP), which is a vehicle routing problem with loading and unloading visits. It deals with the problem of picking up goods at a customer site (or at a depot) and delivering them to another customer using the same vehicle. The pickup must naturally be performed before the delivery.
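The precedence requirement can be stated very simply on a route. The following sketch checks it on a hypothetical representation (a route as an ordered list of visit identifiers and a list of pickup/delivery pairs); this is an illustration, not ILOG Dispatcher's API:

```python
# Sketch: checking the pickup-before-delivery rule on a single vehicle's route.
# The representation (visit lists, pairs) is hypothetical.

def pdp_route_valid(route, pairs):
    """route: ordered list of visit ids served by one vehicle.
    pairs: list of (pickup, delivery) visit-id couples."""
    position = {visit: i for i, visit in enumerate(route)}
    for pickup, delivery in pairs:
        # Both visits must appear on the same vehicle's route ...
        if pickup not in position or delivery not in position:
            return False
        # ... and the pickup must precede the delivery.
        if position[pickup] >= position[delivery]:
            return False
    return True
```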
The method implemented to solve this problem uses ILOG Dispatcher
[ILOG, 2002a], a C++ library dedicated to solving routing problems, based
on the constraint programming engine and the search algorithms provided
by ILOG Solver [ILOG, 2002b]. The next paragraph details the concepts of
constraint programming as well as the modeling of the problem in this context.
Tree search
but this worst case is seldom reached. Generally, significant parts of the search
tree are pruned thanks to constraint propagation.
Let us take an example to show how the depth-first search works. Let us
consider a problem with three variables a, b and c which have the following
respective domains {0, 2}, {0, 1} and {0, 1}. Let us also consider the three
constraints a ≠ b, a ≥ c and b ≤ c. The solutions to this problem are:
• a = 2, b = 0, c = 0;
• a = 2, b = 0, c = 1;
• a = 2, b = 1, c = 1.
Five other possible combinations of values for a, b and c violate at least one
constraint and are thus not acceptable. The tree search presented here con-
siders the variables in the lexicographical order: initially a, then b, and finally
c. The values of the variables are considered in increasing order. The descrip-
tion of the order in which the variables and the values are chosen, and more
generally the description of the implementation of the search tree, is called
a goal in constraint programming. The search tree for this goal is presented
in figure 10.1. The nodes of the tree represent the states of the assignment of
the variables. When a node is marked by ⊗, it indicates that at least one con-
straint is violated at this stage. The arcs of the tree are transitions between
the states and are labeled with the assignments which take place during the
transition. Finally, the leaves of the tree not marked by ⊗ are solution states,
where all the variables are assigned and no constraint is violated.
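This search can be reproduced in a few lines. The sketch below enumerates the tree depth-first with backward checking, using the variable and value orders described above; it is a toy illustration, not the solver used in the chapter:

```python
# Depth-first search with backward checking for the three-variable example.
# Variables are tried in lexicographical order, values in increasing order,
# and a constraint is tested as soon as all its variables have fixed values.

DOMAINS = {"a": [0, 2], "b": [0, 1], "c": [0, 1]}
ORDER = ["a", "b", "c"]
CONSTRAINTS = [
    (("a", "b"), lambda a, b: a != b),
    (("a", "c"), lambda a, c: a >= c),
    (("b", "c"), lambda b, c: b <= c),
]

def consistent(assignment):
    for variables, check in CONSTRAINTS:
        if all(v in assignment for v in variables):
            if not check(*(assignment[v] for v in variables)):
                return False  # a failure state
    return True

def search(assignment, depth, solutions):
    if not consistent(assignment):
        return  # backtrack
    if depth == len(ORDER):
        solutions.append(dict(assignment))
        return
    var = ORDER[depth]
    for value in DOMAINS[var]:
        assignment[var] = value
        search(assignment, depth + 1, solutions)
        del assignment[var]  # undo the most recent assignment (backtracking)

solutions = []
search({}, 0, solutions)
```

Running the search yields the three solutions in the order in which the text finds them.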
Fig. 10.1. The search tree of the goal on the example problem.
The search begins with the assignment a = 0, followed by b = 0. Although we do not have a complete
solution yet, we can check the validity of the constraint a ≠ b since all the
variables under consideration in this constraint have a fixed value. In other
words, no other assignment will be able to change the state of the constraint
— satisfied or violated. In this case, it can be seen that the constraint a ≠ b
is violated. This state is called failure (marked by ⊗). One of the assignments
a = 0 or b = 0 must be changed. In “Depth-first search”, the assignments are
backtracked by undoing the most recent assignment. The search then returns
to the state immediately before the assignment b = 0 — the movement is
known as backtracking — and b is assigned to the next value which is 1. This
time, the constraint a ≠ b is satisfied, and one can thus continue the search
by assigning a value to c. In fact, no value for c can satisfy the constraints
as 0 violates the constraint b ≤ c and 1 violates the constraint a ≥ c. Hence
it is proved that there is no solution for a = 0 since one implicitly explored
all the combinations for b and c with a = 0. One can now return to the top
of the tree by undoing all the previous assignments and assign a = 2. In this
case if we assign b = 0 and c = 0, the constraints are satisfied. Hence, there is
a solution. If we were interested in only one solution, we could have stopped
there. However, one can also continue to find other solutions by backtracking. Thus, by changing the assignment of c to c = 1, another solution is found. After backtracking to b and taking the assignment b = 1, the assignment c = 0 leads to a failure (b ≤ c is violated), but the assignment c = 1, after backtracking, produces the last solution.
Constraint propagation
The technique for testing the constraints described above is known as back-
ward checking since the constraints are tested after all the variables involved
in the constraint have been assigned values. It is an improvement over the
generate-and-test algorithm, which postpones the tests of the constraints until
all variables have been instantiated. This improvement is however in general
too weak to solve anything but very simple problems. Normally, constraint
programming prunes branches of the search tree using a much more effec-
tive method, known as constraint propagation. Constraint propagation is a
much more active technique than backward checking. Instead of merely checking
the validity of a constraint, the domains are filtered using a dedicated algorithm. This
procedure very often results in a reduction of the number of failures — i.e.,
dead ends in the search are noticed higher up in the tree, which avoids explicit
sub-tree exploration.
In the case of backward checking, each variable already has an assigned
value, or no value at all. On the contrary, constraint propagation main-
tains the current domains of each variable. The current domain is initially
equal to the initial domain, and is filtered as the search progresses in the
tree. If it is found that a value is impractical for a variable (because it
would violate at least one constraint), it is removed from the current do-
main. These removed values are not considered any more and the search
tree is thus more effectively pruned than with backward checking alone. In
general, filtering algorithms are usually kept polynomial. General methods
[Bessiere and Regin, 1997, Mackworth, 1977] and specific algorithms for cer-
tain types of constraints exist [Beldiceanu and Contejean, 1994, Regin, 1994,
Regin, 1996].
Domain filtering is carried out independently for each constraint. Communication between constraints is based only on the changes in the current domains of the variables. In general, the constraint propagation algorithm works by having each constraint filter the domains of the variables related to it.
This process is carried out either until a variable has an empty domain (no
more possible value), or until no more domain reduction occurs. In the first
case, a failure and a backtrack occur. In the second case, a fixed point is
reached. If all the variables have only one possible value, a solution is found.
If not, it is necessary to branch on the values of the remaining variables to
either find a solution or to prove that there is none. The fixed point found
by constraint propagation should not depend on the order in which the con-
straints filter the values of the domains.
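A minimal version of this propagation loop, applied to the example problem, can be sketched as follows. It is a hand-rolled, AC-3-like fixed-point loop, not the algorithm of any particular solver:

```python
# Each binary constraint filters the current domains; filtering is repeated
# until a fixed point is reached or a domain becomes empty (failure).

def revise(domains, x, y, check):
    """Remove from x's domain every value with no support in y's domain."""
    removed = False
    for vx in list(domains[x]):
        if not any(check(vx, vy) for vy in domains[y]):
            domains[x].remove(vx)
            removed = True
    return removed

def propagate(domains):
    # Each constraint of the example is propagated in both directions.
    arcs = [
        ("a", "b", lambda a, b: a != b), ("b", "a", lambda b, a: b != a),
        ("a", "c", lambda a, c: a >= c), ("c", "a", lambda c, a: a >= c),
        ("b", "c", lambda b, c: b <= c), ("c", "b", lambda c, b: b <= c),
    ]
    changed = True
    while changed:  # repeat until no more domain reduction occurs
        changed = False
        for x, y, check in arcs:
            if revise(domains, x, y, check):
                if not domains[x]:
                    return False  # empty domain: failure
                changed = True
    return True  # fixed point reached
```

With the initial domains, no value can be filtered; after branching on a = 0, propagation empties a domain and detects the failure.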
Figure 10.2 shows how constraint propagation interacts with depth-first search on our small problem. As can be seen, this method is more effective than backward checking in terms of the number of failures: there is only one failure, all the other branches leading to a solution. At the top of the tree, before branching, a propagation is carried out. In this case, however, no filtering can be done by considering the constraints individually. A branch is thus created, and a is assigned to 0. The constraint a ≠ b deduces that b ≠ 0 and thus b = 1. The constraint a ≥ c deduces that c ≠ 1 and thus c = 0. The constraint b ≤ c then deduces that c ≠ 0, which causes a failure as c = 0, and that b ≠ 1 (which also causes a failure as b = 1). It can be noticed that only one failure will occur whichever deduction is carried out first, with finally either b or c having an empty domain. When the failure occurs, the constraint
solver backtracks to the root node and domains are restored. The right-hand
side branch is then taken, which assigns a to 2. As at the root node, no constraint can reduce the domains, and branching is still needed: b = 0. No filtering is done, and a final branching c = 0 leads to the first solution. After backtracking and assigning c to 1, the second solution is obtained. Then we backtrack higher up in the tree, to just before the assignment of b, and the
branch to the right-hand side, b = 1, is taken. In this case, the constraint b ≤ c filters, which removes the value 0 from the domain of c. It is the only filtering which can be done at this node, and it results in a solution. It is not necessary to branch on c, as the variable has only one possible value left. This is the last solution to the problem.
Note on optimization
Until now, we have explained how to solve a decision problem, i.e. how to find an assignment of the variables which satisfies all the constraints. Sometimes, however, the problem at hand is an optimization problem.

Fig. 10.2. The search tree with constraint propagation: the solution states are a = 2, b = 0, c = 0; a = 2, b = 0, c = 1 and a = 2, b = 1, c = 1.

Such problems are solved as a succession of decision problems. When a
solution of cost v is found, a constraint is added to the problem specifying that
any solution of cost v or more is not valid. This process is repeated until no better solution can be found; in this case, the last solution found is the optimal solution. Each time a better solution is found, it is not required to restart the search from the beginning: one can simply continue the search with the new upper bound on v, which becomes more constraining as the search progresses.
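The successive-decision-problems scheme can be sketched as follows. The decision solver here is a naive enumerator, and the objective (minimizing a + b + c on the earlier toy problem) is purely illustrative:

```python
# Optimization as a succession of decision problems: each time a solution of
# cost v is found, the constraint "cost < v" is added and the search resumes.

from itertools import product

def solve_decision(upper_bound):
    """Return a solution of cost < upper_bound satisfying all constraints."""
    for a, b, c in product([0, 2], [0, 1], [0, 1]):
        if a != b and a >= c and b <= c and a + b + c < upper_bound:
            return (a, b, c)
    return None

best, bound = None, float("inf")
while True:
    solution = solve_decision(bound)
    if solution is None:
        break  # no better solution: the last one found is optimal
    best = solution
    bound = sum(solution)  # tighten the upper bound on the cost
```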
A powerful modelling and a powerful search
In the preceding discussion, for pedagogical reasons, we only considered simple
constraints with simple propagation rules. When complex industrial problems
are solved, the models are more complex and constraint programming pro-
vides more advanced constraints to take this complexity into account. These
constraints are often based on powerful propagation algorithms which perform
strong filtering on domains, for example [Regin, 1994, Regin, 1996]. Here are
examples of more complex constraints usually available:
• y = a[x]: y is constrained to be equal to the xth element of a, where a is
an array of constants or variables.
• all-diff(a): all the variables of the array a must take different values.
• c = card(a, v): the variable c is forced to be equal to the number of occur-
rences of the value v in the array of variables a.
• min-dist(a, d): all the pairs of variables in the array a must take values
which differ by at least d.
Some open constraint programming frameworks let the users write their own
constraints, by writing the corresponding propagation rules. This is often very
useful as a framework cannot provide all possible constraints.
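As an illustration of a user-written constraint, here is a deliberately weak propagation rule for min-dist(a, d): whenever a variable becomes bound, the values too close to it are removed from the other domains. The event-based machinery of a real framework is replaced here by a simple loop:

```python
# Hand-rolled propagation rule for min-dist(a, d): all pairs of variables
# must take values differing by at least d. Only the "one variable bound"
# case is propagated, which is weaker than what a real framework could do.

def min_dist_propagate(domains, d):
    """domains: list of sets of ints. Returns False on an empty domain."""
    changed = True
    while changed:  # repeat until a fixed point is reached
        changed = False
        for i, di in enumerate(domains):
            if len(di) == 1:  # variable i is bound to v
                (v,) = di
                for j, dj in enumerate(domains):
                    if j == i:
                        continue
                    too_close = {w for w in dj if abs(w - v) < d}
                    if too_close:
                        dj -= too_close  # filter the current domain of j
                        changed = True
                        if not dj:
                            return False  # empty domain: failure
    return True
```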
with V = N ∪ E ∪ S.
Path constraints
This concept allows us to take into account dimension constraints, such as
capacity and time constraints. Path constraints propagate quantities accumu-
lated along a tour. Formally these constraints can be described as follows:
    ∀ (i, j) ∈ (S ∪ N) × (N ∪ E),    nexti = j ⇒ σi + f(i, j) = σj
where σi is a variable representing the accumulation of the quantity. f (i, j),
called transit, can be the distance from i to j or simply the quantity of goods
to be delivered at i.
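Taking f(i, j) to be the quantity of goods loaded at visit i, the propagation of σ along a fixed chain of next variables can be sketched as follows (the visit names, data, and the capacity check are illustrative):

```python
# Path-constraint sketch: propagating the accumulated quantity sigma along a
# tour through next_i = j  =>  sigma_i + f(i, j) = sigma_j, with a capacity
# bound checked on the fly.

def propagate_path(start, next_visit, transit, capacity, sigma0=0):
    """Return {visit: sigma} along the chain, or None if capacity is exceeded."""
    sigma = {start: sigma0}
    i = start
    while i in next_visit:                 # follow next_i up to the end visit
        j = next_visit[i]
        sigma[j] = sigma[i] + transit[i]   # sigma_j = sigma_i + f(i, j)
        if sigma[j] > capacity:
            return None                    # the capacity constraint fails
        i = j
    return sigma
```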
Example:
The goal of the problem is to minimize the total cost of the tours:

    Σ_{k ∈ E} ck

ck being the cost of the tour of the vehicle k; ck is generally defined by:

    ck = Σ_{(i,j) : nexti = j and vehi = k} f(i, j)
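Once the next and veh variables are fixed, this objective can be evaluated directly. A sketch with illustrative data:

```python
# Computing the objective: the total cost is the sum over vehicles k of c_k,
# where c_k sums f(i, j) over the arcs (next_i = j) travelled by vehicle k.

def total_cost(next_visit, vehicle, f):
    cost_per_vehicle = {}
    for i, j in next_visit.items():           # every arc next_i = j
        k = vehicle[i]                         # vehicle performing visit i
        cost_per_vehicle[k] = cost_per_vehicle.get(k, 0) + f[(i, j)]
    return sum(cost_per_vehicle.values()), cost_per_vehicle
```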
Description
Remarks:
The search procedures used by the two ant colonies are identical, except for
the following differences:
• Each colony manages its own trail of pheromone.
• The ant colony which minimizes the number of vehicles seeks a solution
with at most v − 1 vehicles whereas the other colony seeks a solution with
v vehicles.
• The two colonies do not use the same local search procedure.
The algorithm used by the two colonies is given in algorithm 10.2.
At this stage, the quantity of pheromone is updated according to the formula: τij = (1 − ρ) · τij + ρ · 1/Lbest, if (i, j) belongs to the best solution found by the ants in this iteration. Here Lbest is the cost of the best solution found by the ants in the current iteration and ρ an evaporation coefficient.
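This update is a direct transcription of the formula; the arc set and costs below are illustrative:

```python
# Global pheromone update: evaporation plus a deposit proportional to the
# quality 1/L_best of the iteration-best solution, applied on its arcs only.

def update_pheromone(tau, best_arcs, best_cost, rho):
    for arc in best_arcs:
        # tau_ij = (1 - rho) * tau_ij + rho * (1 / L_best)
        tau[arc] = (1 - rho) * tau.get(arc, 0.0) + rho / best_cost
```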
Algorithm 10.2: A broad outline of the algorithm used by the two ant colonies.
The choice of the next visit j is carried out according to the following rule:

    j = argmax_{h ∈ Ω} { [τih]^α [ηih]^β }    if q ≤ q0    (intensification)

otherwise j is drawn according to the following probability rule (diversification):

    pij = [τij]^α [ηij]^β / Σ_{h ∈ Ω} [τih]^α [ηih]^β    if j ∈ Ω,    and pij = 0 otherwise
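This pseudo-random proportional rule can be sketched as follows; the τ and η values are illustrative, and Ω is passed as a candidate list:

```python
# Pseudo-random proportional choice: with probability q0 the best arc is
# taken (intensification); otherwise the next visit is drawn from the
# distribution p_ij (diversification).

import random

def choose_next(i, candidates, tau, eta, alpha, beta, q0, rng=random):
    weight = {j: (tau[(i, j)] ** alpha) * (eta[(i, j)] ** beta)
              for j in candidates}
    if rng.random() <= q0:
        return max(candidates, key=weight.get)   # intensification
    total = sum(weight.values())                  # diversification
    r, acc = rng.random() * total, 0.0
    for j in candidates:
        acc += weight[j]
        if r <= acc:
            return j
    return candidates[-1]
```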
where:
Each time a next visit is selected, the quantity of pheromone is updated for
the arc created according to the formula:
where:
Each ant is thus influenced in its search by the solutions found by the preceding
ants and also by the current best solution.
The ant colony which minimizes the cost uses the move operators described previously, as well as TwoOpt, OrOpt, Cross, Relocate and Exchange, described in figure 10.7.
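As an example, the TwoOpt operator removes two arcs and reconnects the tour by reversing the intermediate segment. A sketch with hypothetical coordinates and a Euclidean cost:

```python
# TwoOpt sketch: reversing tour[p:q+1] replaces the arcs entering position p
# and leaving position q, which can remove a crossing in the tour.

from math import hypot

def tour_length(tour, xy):
    """Length of the closed tour through the points xy[visit] = (x, y)."""
    return sum(hypot(xy[a][0] - xy[b][0], xy[a][1] - xy[b][1])
               for a, b in zip(tour, tour[1:] + tour[:1]))

def two_opt_move(tour, p, q):
    """Reverse the segment between positions p and q (inclusive)."""
    return tour[:p] + tour[p:q + 1][::-1] + tour[q + 1:]
```

On a tour whose arcs cross, the move strictly shortens the tour.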
Fig. 10.7. The move operators TwoOpt, OrOpt, Cross, Relocate and Exchange.

The parameter values used are:
• 10 ants;
• α = β = 1;
• q0 = 0.9;
• ρ = 0.75;
• n1 = 5;
• n2 = 3.
The main comparison criterion is the number of vehicles used in the solution. The computing time for each instance, for each of the two algorithms, is two hours. To trace the evolution of the best solution found, its value is recorded every 10 seconds during the first 300 seconds, and then every 50 seconds up to 7200 seconds. Figures 10.8, 10.9 and 10.10 show the evolution of the number of vehicles used in the solution.
Figs. 10.8 to 10.10. Gap to the best number of vehicles as a function of time (in seconds), for ACS and GLS; the instance classes include C2 and R2.
10.5 Conclusion
The “expertise” of the user can help in designing effective custom heuristics.
However, this may not always be possible, in particular when the users of such
techniques are not experts.
The ant colony algorithms can significantly help in such cases. In this
chapter, we described how one can make use of ant colonies to obtain heuristics
which can be simultaneously robust and generic in nature. Robustness characterizes the ability to find a good solution, whatever the nature of the data. This objective is primarily fulfilled by memorizing the movements of the ants (in fact, the states of the variables) and the corresponding costs. Thus it is possible to reach good quality solutions by alternating intensification and diversification phases.
Since the only knowledge ant colony algorithms have of the problem is
the costs connecting variables to values (in the routing problems, that corre-
sponds to the costs of the arcs between visits), it can be assumed that this
approach is generic, maintaining a certain independence between the descrip-
tion of the problem and the way it is solved. Initial experimental results on
vehicle routing problems with pickup and delivery visits confirm this asser-
tion, especially when minimizing the number of vehicles. It would be very
interesting to study the impact of replacing depth-first search by other more
sophisticated procedures, such as “Slice-Based Search” (SBS) proposed in
ILOG Solver [ILOG, 2002b].
Acknowledgement
We would like to thank Mr. Bruno De Backer and Mr. Paul Shaw for their
assistance in the development of this chapter.