
Lagrange Relaxation and Constraint Programming

Collaborative schemes for Traveling Tournament Problems


Thierry Benoist, François Laburthe, Benoît Rottembourg

Bouygues eLab,
1 avenue Eugène Freyssinet,
78061 Saint Quentin-en-Yvelines Cedex, France

Abstract:
This paper presents a study of hybrid algorithms combining Lagrange relaxation and
constraint programming on a problem that mixes round-robin assignment and travel
optimization. Traveling Tournament Problems (TTP) were identified a year ago as
very hard problems requiring combinations of techniques from Operations Research
(OR) and Constraint Programming (CP). Indeed, constrained round-robin tournament
problems are now best solved using CP with global constraints, while traveling
salesman problems are best solved using integer programming (IP) techniques. Our
problem of interest, the TTP, is a good candidate for hybrid CP-OR approaches, since
neither pure approach generalizes to the full problem: pure IP methods fail to take the
general round-robin structure into account, while pure CP is penalized by weak global
lower bounds for travel optimization.
This paper targets another kind of hybrid, combining Lagrange relaxation techniques
with CP. We show how the relaxation can not only provide a rich global bound that
limits the search, but can also yield feasible fragments of solutions that can be used
as seeds for building tournaments or as guides for efficient branching.

1. Introduction

Sport leagues (soccer (Schreuder, 1992), basketball (Ball and Webster, 1977; De Werra, Jacot-
Descombes, and Masson, 1990; Nemhauser and Trick, 1998; Henz, 2000), baseball (Cain, 1977;
Russel and Leung, 1994), cricket (Willis and Terrill, 1994)) often deal with scheduling problems for
tournaments. The league must organize a set of matches such that all teams can be ranked at the end
of the season and a champion can be named. We only consider tournaments where all matches are
planned beforehand and do not depend on the results of matches from the previous period (e.g.,
scheduling matches among the winners of the previous games). These problems may or may not be
temporally constrained (fixed or open number of periods) and may include side constraints.
Many studies in the literature have dealt with these problems using a variety of approaches,
chiefly integer linear programming (Ferland and Fleurent, 1993; Nemhauser and Trick, 1998), local
search (simulated annealing (Terrill and Willis, 1994), tabu search (Wright, 1994)) and constraint
programming (Henz, 2001; Schaerf, 1999). The case of temporally unconstrained problems (allowing
a team to play several times a week) is known to be easier to tackle, and in practice various successful
techniques have been implemented. For instance, Régin's (1999, 2000) CP approaches captured the
global structure of such problems and considerably reduced the search tree.
This work targets a variant of sport league tournament scheduling, called the Traveling Tournament
Problem (TTP), proposed by M. Trick and G. Nemhauser at the Constraint Programming and Integer
Programming Dagstuhl Seminar (Jan. 2000). This variant is concerned with:
• Temporally constrained tournaments: the number of periods on which games can be played is
exactly the number of matches to be played, inducing a global assignment structure.
• Double round robin competitions: each team plays each other team twice, once at home (H)
and once away (A), each game being played at the home field of one of the two competitors.
N teams (N will be supposed even, for the sake of simplicity) then compete against each other
through N(N-1) matches in all. In other words, each team must be granted a sequence of
2(N-1) consecutive matches, half of them at home and the remainder away.
• Enriched with side constraints:
• Match A at B (A@B) must not immediately follow match B at A (B@A) (no repeater);
• No more than three consecutive home or three consecutive road games are allowed;
• Teams are assumed to start at their home city and to return to it after the tournament.
• And with travel optimization: a distance matrix between pairs of team locations is provided,
and the sum of the lengths of the tours traveled by the teams is to be minimized.

As simple as this problem appears, instances with N = 8 are still open today, with a relative
duality gap greater than 5%. The aim of Trick et al. was to propose a challenging problem for both
the Constraint Programming and Mathematical Programming communities, forcing the techniques to
cooperate.
This paper is the first presentation of an ongoing study of hybrid algorithms on the TTP, focused
on models and collaboration schemes between a CP solver and a Lagrange relaxation method. A report
on our computational study is in preparation. This study has two goals: first, to improve the state of
the art for traveling tournament problems; second, to start a systematic analysis of the possible
ways of combining Lagrange relaxation and CP, as is being done for hybrids of CP and IP
(Focacci, Lodi, Milano, 1999).

In sections 2 and 3 we propose two Lagrange relaxations of the problem that decompose the
global problem into one local sub-problem (a constrained TSP) per team. Lagrange relaxation of the
coupling constraints forces the team tours to be synchronized in order to make a feasible tournament.
Section 4 introduces the layered collaborative architecture, a cooperation between a main CP model,
a Lagrange decomposition global constraint, and sub-problem solvers. Section 5 emphasizes the tight
collaboration between the layers of the architecture. In order to speed up the search and support
effective variable fixing, we propose a generic cluster decomposition strategy used as a Lagrange
heuristic to quickly build solutions from solutions of local sub-problems. Finally, section 6 presents
experimental results comparing the aforementioned strategies.

2. Compact Lagrange Relaxation

The idea of this first naïve relaxation is to decompose the global problem into one sub-problem per
team. Thus, all inter-team constraints are forgotten (relaxed) and each sub-problem consists in
building the tour of minimum distance (forgetting the “no repeater” constraint). In the remainder of the
paper, d(A,B) denotes the distance between locations A and B in the distance matrix and the set of all
team locations is denoted TL. For each team A, the problem becomes the following:

Find the sequence of 2(N-1) locations, X(A,1) … X(A,2N-2), where:

• label A appears exactly N-1 times in the sequence;
• each label different from A appears exactly once;
• label A appears no more than three times in a row;
• labels different from A appear no more than three times in a row;

minimizing the cost function

    d(A) = Σ_{i=0}^{2N-2} d(X(A,i), X(A,i+1)),

taking X(A,0) = X(A,2N-1) = A.

Obviously, getting rid of every other constraint of the problem and summing d(A) over all
teams A provides a lower bound for the TTP. Computational experiments show that this bound is not
that far from the optimal value (when known) or from the best solution available¹. Every such
sequence (distance-minimal or not) will be called a feasible compact sequence in the following.
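For tiny instances, the per-team sub-problem and the resulting bound can be illustrated by plain enumeration (a naive sketch of ours, not the authors' branch-and-bound solver):

```python
from itertools import permutations

def best_compact_sequence(team, locations, d):
    """Minimum-distance feasible compact sequence for `team`: the home
    label `team` appears N-1 times, every other location once, and neither
    home nor away labels repeat more than three times in a row."""
    n = len(locations)
    others = [x for x in locations if x != team]
    labels = [team] * (n - 1) + others
    best_cost, best_seq = float("inf"), None
    for seq in set(permutations(labels)):        # only viable for tiny N
        if any(all(s == team for s in seq[i:i + 4]) or
               all(s != team for s in seq[i:i + 4])
               for i in range(len(seq) - 3)):
            continue                             # >3 consecutive H or A games
        tour = (team,) + seq + (team,)           # X(A,0) = X(A,2N-1) = A
        cost = sum(d[a][b] for a, b in zip(tour, tour[1:]))
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_cost, best_seq

def compact_lower_bound(locations, d):
    """Sum of the independent per-team optima: a lower bound for the TTP."""
    return sum(best_compact_sequence(t, locations, d)[0] for t in locations)
```

For realistic N the enumeration must of course be replaced by branch and bound or dynamic programming, as described below.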

After having introduced a model for the team sub-problems, we now describe the model of the overall
problem. Let Y(A,i,B) be a binary variable set to 1 if the ith item of the sequence of A is B (i.e.
X(A,i) = B) and to 0 otherwise (i.e. X(A,i) ≠ B).

The Lagrange relaxation is based on this representation of the TTP:

    min Σ_{A∈TL} d(A)
    ∀A ∈ TL:              { X(A,i) is a feasible compact sequence }
    ∀(A,B,i) s.t. A ≠ B:  Y(A,i,B) + Y(B,i+1,A) ≤ 1      (1)
    ∀(A,B,i) s.t. A ≠ B:  Y(A,i,B) ≤ Y(B,i,B)            (2)
    ∀(A,i), A ∈ TL:       Σ_{B≠A} Y(B,i,A) ≤ 1           (3)

Constraints (1), the no-repeater constraints, state that match B@A (where B plays away at A's home)
does not immediately follow match A@B (where A plays away at B's home); constraints (2) force
team B to be at home when A visits it; and constraints (3) forbid more than one visitor staying at A
simultaneously. One can check that if the X(A,i) variables form feasible compact sequences and (1),
(2) and (3) hold, then the set of sequences builds a feasible tournament for the TTP:
• every match A@B is scheduled on a unique time period i, which can be deduced from the
visitor sequence of A;
• there cannot be more than one match at the same time and place.
Hence the global assignment constraint for the N(N-1) matches is satisfied.

Once (1), (2) and (3) are relaxed in a Lagrangean fashion, they give birth to O(N³) Lagrange
multipliers, denoted λ_{AiB}, µ_{AiB} and ν_{Ai} respectively. If L(X,Y,λ,µ,ν) denotes the
corresponding Lagrange function, the classic Lagrange dual function for a set of Lagrange multipliers
(λ,µ,ν) is:

    w(λ,µ,ν) = - Σ_{A,i,B} λ_{AiB} - Σ_{A,i} ν_{Ai}
               + Σ_{A∈TL} min { Σ_i d(X(A,i), X(A,i+1)) + Σ_{i,B} d̃·Y(A,i,B) :
                                X(A,i) is a feasible compact sequence for A,
                                Y(A,i,B) = 1 iff X(A,i) = B }

where d̃ is a coefficient depending on (λ,µ,ν), called the perturbed cost. Computing w(λ,µ,ν) is
no harder than computing the best feasible compact sequence for every team A:

• either through a basic branch-and-bound algorithm using, for example, as branching decision
at depth i, the location of team A during period i (X(A,i)); an efficient lower bound for the
remaining sequence can be obtained by computing the minimum spanning tree of the
complete graph made of the non-visited locations;
• or using a dynamic programming approach, which offers additional reduced cost information
(see section 5.2);
• or using a finite domain solver enriched with a global "cycle" constraint and an additional
linear constraint taking into account the perturbed costs d̃ associated with visiting a location B
at step i.

Lagrange weak duality states that for any (λ,µ,ν), w(λ,µ,ν) is a lower bound for the TTP. Hence,
solving the "easy" concave non-smooth dual problem provides the greatest lower bound zc for the
TTP using this relaxation:

    zc = max_{(λ,µ,ν) ≥ 0} w(λ,µ,ν)

¹ The gap is less than 3%, 6% and 6% for N = 4, 6 and 8 respectively.

The main advantage of this relaxation is its tractability. The compactness of the sub-problems allows a
fast computation of d(A) (or of the perturbed d(A)), hence of zc. Unfortunately, this compactness
induces so many ties in the sequences produced for a given set of Lagrange multipliers that the dual
bound is of poor quality².
The next section introduces a second model for Lagrange relaxation that provides stronger lower
bounds, though at the price of additional computational costs.

3. Rich Lagrange Relaxation

This second approach models the TTP by associating to each team and each time slot a precise match
instead of a location. Therefore, the sub-problems amount to computing, for each team, the sequence
of played matches instead of the sequence of visited locations. For each team A, the problem is now
the following: find the sequence of 2(N-1) matches A@B or B@A, where:
• label A appears exactly N-1 times on the left of the sequence (through matches A@?);
• label A appears exactly N-1 times on the right of the sequence (through matches ?@A);
• each label different from A appears exactly once on the left and once on the right;
• label A appears no more than three times in a row on the left;
• label A appears no more than three times in a row on the right;
• A@B cannot be immediately followed or preceded by B@A.
Note that if the match played by A at period i is u@v, then v is described by the X(A,i) variable of the
previous model. Therefore, if A plays away, X(A,i) ≠ A is the opponent of A. Conversely, if A plays
at home (X(A,i) = A), one only needs to consider the variable Z(A,i) describing the opponent of team
A at period i. Thus, the sequence (tour) of a team can be described by 2(N-1) pairs of variables, as
Z(A,1)@X(A,1) … Z(A,2N-2)@X(A,2N-2).
With X(A,0) = X(A,2N-1) = A, the cost function to be minimized is again

    d(A) = Σ_{i=0}^{2N-2} d(X(A,i), X(A,i+1)).
We call such a sequence a feasible rich sequence for A. For any match C@D with either A = C or
A = D, let us introduce the binary variable T(A,i,C@D), set to 1 if match C@D is at the ith position
in the sequence of A and to 0 otherwise. The overall problem can be described as:

    min Σ_{A∈TL} d(A)
    ∀A ∈ TL:              { Z(A,i)@X(A,i) is a feasible rich sequence }
    ∀(A,B,i) s.t. A ≠ B:  T(A,i,A@B) = T(B,i,A@B)     (4)
    ∀(A,B,i) s.t. A ≠ B:  T(A,i,B@A) = T(B,i,B@A)     (4')

Constraints (4) and (4') state that both teams play the same match at the same time, which is enough
to enforce the global consistency of the feasible rich sequences with regard to the TTP.
Introducing two Lagrange multipliers λ_{A,i,A@B} and λ_{A,i,B@A} for every pair of teams (A,B)
and every time step i, we can propose a new relaxation of the TTP.

² On the instances that we relaxed, the bound never exceeded w(0,0,0) at the root of the tree; this was
no longer the case once branching decisions had been made.
    w(λ) = Σ_{A∈TL} min { Σ_i d(X(A,i), X(A,i+1)) + Σ_{i,B} d̃·T(A,i,A@B) + Σ_{i,B} d̃'·T(A,i,B@A) :
                          Z(A,i)@X(A,i) is a feasible rich sequence for A,
                          T(A,i,A@B) = 1 iff X(A,i) = B,
                          T(A,i,B@A) = 1 iff (X(A,i) = A and Z(A,i) = B) }

The perturbed sub-problem to solve for each team is more complex than the basic location
problem of the compact relaxation. The basic branch-and-bound strategy proposed in section 2 is not
sufficient to find optimal values of d(A) with perturbed costs, and we systematically used a finite
domain solver for N greater than 4.
For any multiplier λ, w(λ) is another lower bound for the TTP, yielding the new dual bound zr:

    zr = max_{λ∈ℜ} w(λ)

Claim 1: zc ≤ zr.
Proof: Every sequence can be interpreted as a discrete point (T(A,i,C@D) = 1 or 0) of ℝ^(2N(N-1)²).
Let us denote by Hc and Hr the convex hulls in ℝ^(2N(N-1)²) of the feasible compact and the feasible
rich sequences of matches, respectively. As the rich model respects all constraints of the compact
model (and constraint (1)), it is obvious that Hr ⊂ Hc.
A property of a generic Lagrange dual is that if
• H is the convex hull of the non-relaxed sub-problem solutions,
• gi(x) ≤ 0 is the set of relaxed constraints,
• f(x) is the cost function of the primal (initial) problem,
then z = max_{λ≥0} w(λ), the optimal value of the Lagrange dual, is also the optimum of the problem
below:

    min { f(x) : gi(x) ≤ 0, x ∈ H }

If gi(x) ≤ 0 represents constraints (i) for i in {1,2,3,4,4'}, we know that zr = min { f(x) : g4(x) ≤ 0 &
g4'(x) ≤ 0, x ∈ Hr }. As Hr is the convex hull of discrete points satisfying constraint g1(x) ≤ 0, and as
Hr ⊂ Hc:

    Hr ⊂ (Hc ∩ { x | g1(x) ≤ 0 })

and min { f(x) : g4(x) ≤ 0 & g4'(x) ≤ 0, x ∈ Hr } ≥ min { f(x) : g4(x) ≤ 0 & g4'(x) ≤ 0 & g1(x) ≤ 0,
x ∈ Hc }. Moreover, if x in Hr is such that g4(x) ≤ 0 & g4'(x) ≤ 0, then necessarily g2(x) ≤ 0 &
g3(x) ≤ 0. Thus:

    min { f(x) : g4(x) ≤ 0 & g4'(x) ≤ 0, x ∈ Hr } ≥ min { f(x) : g1(x) ≤ 0 & g2(x) ≤ 0 & g3(x) ≤ 0, x ∈ Hc },

and zc ≤ zr. ∎

4. Collaborative Architecture

The algorithms that we experimented with are all structured as CP programs enriched with bounding
techniques encapsulated within global constraints. The overall system can be decomposed as follows:

1. the main CP model, based on the X, T and Z variables;
2. one global constraint implementing the computation of the lower bound by Lagrange
relaxation;
3. one controller per team, handling the perturbed TSPs at each iteration of the Lagrange
relaxation;
4. one sub-problem solver per team, solving an enriched TSP at each iteration.
[Figure: the four-layer architecture. The main CP model (1: propagation & tree search) exchanges the
P(A@B) domains and a lower bound with the Lagrange global constraint (2); the global constraint
sends perturbed costs and the P(A@B) domains to, and retrieves sub-problem solutions and lower
bounds from, one sub-problem controller per team (3); each controller updates the perturbed costs of,
and hot-restarts, its TSP solver (4), which returns sub-problem solutions.]
A few remarks can be made on the various components of the architecture.

The first component implements the basic CP model. It fully captures the problem, so that if
the three other components were unplugged, it would still be able to solve the optimization problem
(though not efficiently enough). It features 2N(N-1) X(A,i) variables, 2N(N-1)² T(A,A@B,i)
variables, N(N-1) P(A@B) variables (position of match A@B) and 2N(N-1) Z(A,i) variables. It
propagates the following constraints:
• a sequence constraint per team: for each team Aj, all variables denoting the schedules of
the matches in which Aj is involved (i.e. P(A1@Aj), …, P(Aj-1@Aj), P(Aj+1@Aj), …, P(AN@Aj)
and P(Aj@A1), …, P(Aj@Aj-1), P(Aj@Aj+1), …, P(Aj@AN)) must take different values;
• constraints linking the variables P, T, X and Z: for all j, k with j ≠ k,

    P(Aj@Ak)=i ⇔ Z(Aj,i)=Aj ∧ Z(Ak,i)=Aj ∧ X(Aj,i)=Ak ∧ X(Ak,i)=Ak ⇔ T(Aj,Aj@Ak,i)=1

• no repeater constraints: for all j, k with j ≠ k, P(Aj@Ak) ≠ P(Ak@Aj) + 1;
• for all teams Aj and all periods i, ¬(X(Aj,i)≠Aj ∧ X(Aj,i+1)≠Aj ∧ X(Aj,i+2)≠Aj ∧ X(Aj,i+3)≠Aj);
• for all teams Aj and all periods i, ¬(X(Aj,i)=Aj ∧ X(Aj,i+1)=Aj ∧ X(Aj,i+2)=Aj ∧ X(Aj,i+3)=Aj);
• a global constraint encapsulating the second component
This master component uses constraint propagation within a branch and bound process. Branching can
be done on any of the X, P or Z variables. In all cases, the assignment graph (domains for P) can be
reduced from a decision on X or Z.
A few words can be said on the propagation of the sequence constraint: sequence constraint
propagation algorithms have been described in Pesant et al., 1998 and Caseau, Laburthe, 1997 for
solving TSPs and Bleuzen Guernalec, Colmerauer, 1997 for solving scheduling and sorting problems;
such constraints are included in commercial CP tools (IlcPath in Ilog Solver and cycle in CHIP) in
order to address transportation problems. The actual models and algorithms for sequence constraints
may vary. Routing problems usually rely on an ordering model, with variables associating to each
item (match) its immediate successor/predecessor in the sequence. Scheduling problems usually rely
on a precedence graph. Last, assignment/sorting problems express a one-to-one correspondence
between items and slots: such "ranking" models associate to each item its index in the sequence.
The TTP is a rare example that benefits from the simultaneous use of all these models: the main
variables (P(Ai@Aj)) model the rank of the items (matches) in the sequence. Each of these variables
appears in two sequence constraints. The next/prev variables (which are not shared across several
sequence constraints) are useful for computing lower bounds on the distance traveled by a team.
Finally, both the precedence graph and the inverse ranking model are useful for branching. The global
constraint handling the sequence of matches for one team is therefore a rich object, using redundant
models and constraints linking them, as well as specific propagation algorithms: a flow algorithm for
the matching between next and prev, as well as between both ranking models; minimum spanning
tree computation for bounding the total travel of the sequence; strong connectivity checks; and a
subtour elimination filter proposed for the TSP.
The second component is a Lagrange controller implementing either sub-gradient algorithms
(see Minoux, 1983, and for convergence results Shor, 1985) or modified gradient techniques
(Camerini, Fratta and Maffioli, 1975). It dialogs with the sub-controllers (components 3) solving the
perturbed sub-problems: it feeds the sub-controllers with perturbed weights, retrieves a solution for
each sub-problem, computes a sub-gradient, and updates the perturbed costs d̃ once the Lagrange
multipliers have been updated. In the case of sub-gradient algorithms, the multipliers are updated with
steps of decreasing magnitude in the direction of the sub-gradient vector. This process ends after a
few dozen iterations at each node of the search tree, except at the root node, where hundreds of
iterations are necessary to find near-optimal values of the Lagrange multipliers. For a description of
richer Lagrange dual maximization methods, we refer the reader to Hiriart-Urruty and Lemaréchal,
1993.
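Schematically, the sub-gradient loop of the controller looks as follows (a generic sketch under our own naming, not the authors' code; `dual(lam)` stands for one round of sub-problem solving, returning the dual value w(lam) and a sub-gradient):

```python
def subgradient_maximize(dual, lam0, iters=100, step0=2.0):
    """Maximize a concave, non-smooth Lagrange dual w(lam) by moving
    along a sub-gradient with steps of decreasing magnitude."""
    lam, best = list(lam0), float("-inf")
    for k in range(iters):
        w, g = dual(lam)                 # dual value and one sub-gradient
        best = max(best, w)              # keep the best lower bound seen
        step = step0 / (k + 1.0)         # decreasing step sizes
        # project onto lam >= 0 (multipliers of "<=" constraints)
        lam = [max(0.0, l + step * gi) for l, gi in zip(lam, g)]
    return best, lam
```

The `max(0, ·)` projection applies to multipliers of inequality constraints such as (1)-(3); multipliers of the equality constraints (4) and (4') would be left unprojected.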

The third-level components are sub-problem controllers that dialog with the Lagrange
controller. They are sent an assignment graph and perturbed distances, and must return the
optimal solution to this enriched TSP. This is done by embedding a call to a dedicated constraint-based
TSP solver (a subset of the overall CP model of the master component, restricted to the
sequence and constraints of one single team, and parametrized with the perturbed distance and cost
coefficients). Note that this TSP solver must be able to function in an incremental manner: from one
iteration to the next, only the perturbed weights change, so one should avoid destroying and
re-creating the sequence model and rather change the values of the coefficients in the cost functions d
and d̃. Moreover, one may reuse the solution from the previous iteration as an upper bound when
exploring a branch-and-bound search tree to find the best solution.

Thus, the overall architecture can be described as a collaborative one, where a CP model is the
master algorithm driving a tree search, a Lagrange relaxation is packaged into a global constraint
(and is thus propagated one single time or a few times per node), and the Lagrange relaxation pilots
sub-solvers that themselves use constraint technology.

5. Improvements for tighter collaboration

This section describes two improvements over the scheme presented above. The first concerns an
improved propagation in the overall model (component 1) that can be achieved by look-ahead
evaluation of the lower bound from the Lagrange relaxation (component 2). The second is a general
scheme for obtaining good upper bounds (useful in the branch and bound procedure, either as such or
as a branching heuristic for the master component).

5.1. Stronger consistency

Suppose now that z* is the optimum value of the TTP and that, through a heuristic like the one
described in section 5.3, we know z+, an upper bound for the TTP. Lagrange weak duality can help
forbid subsets of solutions for the TTP and consequently restrict the search.
For any team A, any match C@D and any time step i, we denote by BestRichTour(A,λ,{C@D,i}) the
value of the best feasible rich sequence for A with the perturbed costs induced by λ, such that match
C@D is scheduled at time i. By convention, if both C ≠ A and D ≠ A, this represents the best feasible
rich sequence with perturbed costs without any further constraint.
Then we have:

Claim 2: If 1 ≤ i ≤ 2(N-1) and C@D is a match of the TTP such that

    ∃λ s.t. Σ_X BestRichTour(X, λ, {C@D, i}) > z+

then match C@D cannot be performed at time i in any optimal solution of the TTP.
Proof: As w(λ) is a lower bound for z*, a lower bound for every solution with C@D scheduled at
time i is strictly greater than an upper bound on the optimum of the TTP; hence scheduling C@D at
time i is an absurd decision. ∎

This propagation process is applied at the root of the search tree. For each match C@D and
each period i, the tentative assignment P(C@D) = i is tried, and those values that yield a lower bound
above the best feasible upper bound are discarded from the domain. This kind of propagation, which
looks ahead at the consequences of an assignment and searches for contradictions, is similar to the use
of "shaving" in constraint-based scheduling (see Martin & Shmoys, 1995).
A similar reasoning applies to compact sequences instead of rich sequences if

    ∃λ s.t. Σ_X BestCompactTour(X, λ, {C@D, i}) > z+,

where the constraint for the sequence of team C states that C visits D at i, and for the sequence of
team D, that D is at home at i.
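The root-node shaving loop can be sketched generically (our naming, not the authors' code; `lower_bound_with(m, i)` stands for the best value of Σ_X BestRichTour(X, λ, {m, i}) over the multipliers tried):

```python
def shave_positions(matches, periods, lower_bound_with, z_plus):
    """For every match m and period i, tentatively impose P(m) = i; if the
    resulting lower bound exceeds the known upper bound z+, remove i from
    the domain of P(m) (root-node shaving)."""
    domains = {m: set(periods) for m in matches}
    for m in matches:
        for i in list(domains[m]):
            if lower_bound_with(m, i) > z_plus:
                domains[m].discard(i)    # i is infeasible in any optimum
    return domains
```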

5.2. Reduced Costs and Additive Bounds

The "shaving" approach described in section 5.1 can be applied in a more dynamic way at every node
of the enumeration tree if reduced costs are made available by the local feasible-sequence solver.
A straightforward dynamic programming technique can solve the feasible compact sequence problem
to optimality in a reasonable amount of time and memory (for up to 12 teams). The model, for a team
A, is the following:

• a 2(N-1)-layered graph is created, where layer i represents the location of team A at time step i;
• for a given time step i, a state for team A is represented by:
  o the number of home games already played,
  o the number of away matches played within the last three time steps,
  o the number of home matches played within the last three time steps,
  o the set of already visited places (or a memory-limited version for large instances),
  o the places still to be visited;
• transitions are built between compatible states of consecutive layers;
• location costs are issued from the perturbed costs provided by the relaxation (visiting B at
time step i);
• transition costs are the distances between the corresponding locations;
• so that finding the best possible compact sequence is a shortest path problem in this
(perturbed) layered graph.
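A memoized sketch of this dynamic program (ours, simplified: raw distances instead of perturbed costs, and the full visited set rather than a memory-limited version):

```python
from functools import lru_cache

def compact_sequence_dp(team, locations, d):
    """Shortest path in the layered state graph: a state is (layer i, current
    location, visited away set, homes played, consecutive home/away counts).
    Returns the length of the optimal feasible compact sequence for `team`."""
    others = tuple(x for x in locations if x != team)
    length = 2 * (len(locations) - 1)

    @lru_cache(maxsize=None)
    def best(i, cur, visited, homes, run_h, run_a):
        if i == length:
            return d[cur][team]                   # come back home at the end
        res = float("inf")
        if homes < len(locations) - 1 and run_h < 3:       # play at home
            res = min(res, d[cur][team] + best(i + 1, team, visited,
                                               homes + 1, run_h + 1, 0))
        if run_a < 3:                                       # play away at b
            for b in others:
                if b not in visited:
                    res = min(res, d[cur][b] + best(i + 1, b, visited | {b},
                                                    homes, 0, run_a + 1))
        return res

    return best(0, team, frozenset(), 0, 0, 0)
```

Each cache entry corresponds to a node of the layered graph; replacing `d[cur][b]` by the perturbed cost of visiting b at layer i yields the perturbed variant used in the relaxation.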

A two-phase algorithm can provide both the optimum value of the path and all reduced costs
R(λ,A,i,B), representing the increase in cost of the compact sequence of A under the additional
constraint that A must visit B at time step i. Thus, R(λ,A,i,B) + R(λ,B,i,B) is the reduced cost of
performing match A@B at time step i, denoted R(λ,A@B,i).
As, necessarily, exactly N/2 matches must be played simultaneously on any time step i:

Claim 3: For any Lagrange multiplier λ,

    Σ_X BestCompactTour(X, λ) + max_i Pairing(i, λ)

is a lower bound for the TTP,

where Pairing(i,λ) is the minimum reduced cost induced by planning N/2 matches at i. For a
feasible subset of N/2 matches, the reduced cost implied by playing these matches at time step i is
the sum of the reduced costs R(λ,A@B,i) of the matches. The reduced costs are additive because
they involve distinct paths. Finding the set with minimal reduced cost is potentially difficult in the
general case, but tractable for up to 8 teams with a basic branch-and-bound strategy.
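For the sizes considered here, Pairing(i, λ) can be computed by exhaustive matching (a sketch of ours, not the authors' branch and bound; `R` maps an ordered match (visitor, host) to its reduced cost at the fixed time step, and we simplify by taking the cheaper of the two orientations of each pair):

```python
def min_pairing_cost(teams, R):
    """Minimum total reduced cost of partitioning `teams` into N/2 matches;
    each pair (a, b) may be played as a@b or b@a, whichever is cheaper.
    Exhaustive recursion: fine for N <= 8, as in the paper."""
    if not teams:
        return 0.0
    a, rest = teams[0], teams[1:]
    best = float("inf")
    for j, b in enumerate(rest):
        pair_cost = min(R[(a, b)], R[(b, a)])   # cheaper orientation of {a, b}
        best = min(best, pair_cost + min_pairing_cost(rest[:j] + rest[j + 1:], R))
    return best
```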
5.3. Lagrange Heuristics using Cluster Decomposition

In order to obtain good upper bounds (z+) and solutions for the TTP, we apply a generic resolution
scheme for large-scale combinatorial optimization based on cluster recomposition. We have already
experimented with this scheme, named Crunch, on large timetabling problems (Benoist, Gaudin and
Rottembourg, 2001) and on radio-link frequency allocation problems.
The generic method supposes that the problem variables can be decomposed into (usually disjoint)
subsets of variables, called clusters. Typically, in the hypergraph whose nodes are variables and
whose hyperedges are constraints, a cluster would be a dense sub-graph. In the TTP case, a cluster is
associated with each team A and gathers the match position variables for A (the T(A,C@D,i)).
The resolution method consists in instantiating the variables of the problem cluster by cluster.
For each cluster, the partial instantiations aim at minimizing a local objective function. When the
overall objective can be written as a sum of local costs over the clusters, this local function may
correspond to the lower bound on the global objective function.
For the special case of the TTP, the feasible perturbed sequences computed by BestCompactTour(A, λ …)
provide the required fragment of a solution for the cluster associated with A.

In principle, the heuristic can explore the set of all cluster orderings, which obviously
represents only a partial set of solutions of the global problem.
At each choice point we have to decide which cluster to assign next. Taking already assigned clusters
into account, the decision is made using a first-fail or best-fit strategy with the help of look-ahead
moves. In addition, the orderings can be partially explored using truncated search strategies like
Limited Discrepancy Search (Harvey and Ginsberg, 1995).

For the TTP, the Lagrange relaxation provides us with one locally feasible sequence per cluster.
Taken as a whole, these sequences might be globally infeasible with respect to the relaxed constraints
of the problem. In the case of compact sequences, the time steps where a team plays away induce
reduced costs on their counterparts in the sequences of the visited teams: if A is at B at time i, then
necessarily B must play at home at i. As the sequences are not coordinated, due to the decomposition,
the reduced cost of playing at home for B at i might be positive. Consequently, imposing a sequence
for a cluster induces a global reduced cost. Our branching strategy then consists in choosing the
cluster with the minimum reduced cost and imposing its locally optimal sequence.
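The resulting cluster-by-cluster loop can be sketched as follows (our naming, not the Crunch implementation; `reduced_cost(c, fixed)` stands for the global reduced cost of imposing the locally optimal sequence of cluster c given the already fixed clusters):

```python
def crunch_order(clusters, reduced_cost):
    """Greedy cluster-by-cluster instantiation: repeatedly fix the cluster
    whose locally optimal sequence induces the smallest global reduced
    cost, re-evaluating the costs as clusters get fixed."""
    remaining, fixed = list(clusters), []
    while remaining:
        c = min(remaining, key=lambda x: reduced_cost(x, fixed))
        fixed.append(c)          # impose c's locally optimal sequence
        remaining.remove(c)
    return fixed
```

In the full scheme, the greedy choice would sit at a choice point of a truncated search such as Limited Discrepancy Search rather than being committed irrevocably.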

6. Computational results

We have tested our strategies on the instances of the Traveling Tournament challenge web page at
http://mat.gsia.cmu.edu/TOURN/.

The first table compares four strategies on the 6-team and 8-team instances:
• CRUDE, where only the CP solver is used (with neither local solvers nor relaxation);
• NORED NOLAG, using the compact sequence local solver, but no reduced cost propagation
and no Lagrange relaxation;
• RED NOLAG, where reduced costs are used both to improve the bound and to propagate
variable fixing within the local search;
• RED LAG, the complete strategy, where a Lagrange relaxation based on the compact
sequence model and reinforced with reduced cost information is performed.
The lower bound is either w(λ*) or w(λ*) + max_i Pairing(i, λ*), according to the strategy. The
upper bound is the best solution found within 10 minutes of computation on a standard PC (900 MHz).
Gap is the relative gap between the upper bound and the best known solution for the instance. Note
that the global CP solver is not given any a priori upper bound before launching the search.
Instance  Reduced costs  Lagrange relaxation  Lower bound  Upper bound  Gap
NL6       CRUDE          CRUDE                -            26344        12.3 %
NL6       NORED          NOLAG                22552        25001        4.5 %
NL6       RED            NOLAG                22552        24674        3.2 %
NL6       RED            LAG                  22747        24540        2.6 %
NL8       CRUDE          CRUDE                -            49833        21.2 %
NL8       NORED          NOLAG                38670        43838        6.6 %
NL8       RED            NOLAG                38670        43349        5.4 %
NL8       RED            LAG                  38670        42713        3.8 %

Our approach is today able to reach and prove optimality for NL4 and NL6, yet cannot reach
the best-known solution for NL8. The table below summarizes our results from NL4 to NL16,
according to the computation time, using the RED LAG strategy. The rows labeled "crude CP"
correspond to the strategy where no local solver is used.

Instance          Time       Lower bound  Upper bound  Gap¹
NL4               0.1 s      8044         8276         0 %
NL4               1.15 s     8276         8276         0 %
NL6               1.6 s      22552        -            -
NL6               3.2 s      22552        27043        13 %
NL6               15 s       22747        26271        9.8 %
NL6               60 s       22747        24992        4.5 %
NL6               600 s      22747        24540        2.6 %
NL6               3600 s     22747        23916        0 %
NL6               86400 s    23916        23916        0 %
NL8               60 s       38670        -            -
NL8               300 s      38670        45452        10.5 %
NL8               600 s      38670        42713        3.8 %
NL8               3600 s     38670        42713        3.8 %
NL8               14400 s    38670        42517        3.4 %
NL10 (crude CP)   1 s        -            84974        -
NL10 (crude CP)   30 s       -            82092        -
NL10              120 s      56506        -            -
NL10              3600 s     56506        71317        20.7 %
NL10              7200 s     56506        70216        19.5 %
NL10              86400 s    56506        68691        17.7 %
NL12 (crude CP)   10 s       -            164068       -
NL12 (crude CP)   300 s      -            161627       -
NL12              10800 s    -            148126       -
NL12              86400 s    -            147857       -
NL12              14400 s    143655       -            -
NL14 (crude CP)   10 s       -            305775       -
NL14 (crude CP)   300 s      -            301113       -
NL16 (crude CP)   10 s       -            441892       -
NL16 (crude CP)   60 s       -            437273       -

¹ For NLX with X ≥ 10, the gap is the duality gap between our lower bound and our upper bound,
since no solution was given on the challenge site for these instances.

7. Conclusion

Our preliminary results indicate that the three families of solving techniques (CP, Lagrange
relaxation, and dynamic programming) cooperate tightly. The bound produced by the relaxation is far
closer to the optimum than the mere sum of the individual sequence costs. The variable fixing
performed with the reduced costs computed by the dynamic programming layer strongly limits the
search, particularly as the gap between the current lower bound and the value of the best solution
found shrinks. The bounds and the reduced costs thus significantly compensate for the blindness of
the CP model with regard to cost.
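The variable fixing mentioned above can be sketched as follows. This is an illustrative
reconstruction in the spirit of the cost-based domain filtering of Focacci, Lodi and Milano (1999),
not the authors' implementation; all names and the `reduced_cost` callback are hypothetical.

```python
# Schematic reduced-cost based variable fixing: a value v is removed from the
# domain of variable x whenever committing to x = v would push the Lagrangian
# lower bound above the value of the best solution found so far.

def reduced_cost_filtering(domains, lower_bound, best_upper_bound, reduced_cost):
    """Prune domain values whose reduced cost proves them suboptimal.

    domains        -- dict mapping each variable to its set of remaining values
    lower_bound    -- current global lower bound from the Lagrange relaxation
    best_upper_bound -- cost of the incumbent solution
    reduced_cost   -- callback estimating the bound increase caused by x = v
                      (here assumed to come from the dynamic programming layer)
    """
    pruned = 0
    for x, dom in domains.items():
        for v in list(dom):  # copy: we mutate dom while iterating
            if lower_bound + reduced_cost(x, v) > best_upper_bound:
                dom.discard(v)
                pruned += 1
    return pruned
```

As the paper observes, this filtering bites harder as the gap between the lower bound and the
incumbent shrinks, since more values then fail the test.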
Although our results are today far from competing with the best solutions produced by the
enriched LP-based approach of Trick, Nemhauser and Easton, we expect a significant margin of
improvement from the following directions:
• Incorporating into the local solver a TSP solver based on constraint programming, enriched
with ad hoc constraints (no repeater, no more than 3 consecutive games away or at home), so
that the rich sequence relaxation can be implemented for sizes larger than 6.
• Studying the typical structure of the problem in order to develop “redundant” constraints able
to propagate earlier in the search; a few such constraints have already been introduced with
success in the CP model.
• Finally, refining our search strategy in order to focus on increasing the global lower bounds
of the open instances.
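The two ad hoc tournament constraints named above (no repeater, no more than 3 consecutive games
away or at home) can be sketched as a feasibility check on a single team's schedule. The encoding
is an assumption made for illustration, not the authors' model: a schedule is a list of opponent
indices, positive for a home game and negative for an away game.

```python
# Illustrative check of the TTP side constraints on one team's schedule.
# Encoding (an assumption for this sketch): schedule[r] = +j for a home game
# against team j in round r, -j for an away game against team j.

def is_valid_schedule(schedule, max_run=3):
    # "no more than 3 times away or at home": forbid home or away runs
    # longer than max_run consecutive rounds
    run = 1
    for prev, cur in zip(schedule, schedule[1:]):
        if (prev > 0) == (cur > 0):  # same venue type as previous round
            run += 1
            if run > max_run:
                return False
        else:
            run = 1
    # "no repeater": never play the same opponent in two consecutive rounds
    for prev, cur in zip(schedule, schedule[1:]):
        if abs(prev) == abs(cur):
            return False
    return True
```

In a CP-based TSP solver, these checks would be posted as propagating constraints rather than
applied a posteriori, but the sketch shows the feasibility conditions they enforce.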

9. References

Ball, B.C., Webster, D.B.: Optimal Scheduling for even-numbered Team Athletic Conferences. AIIE Transactions 9 (1977)
161-169.

Benoist, T., Gaudin, E., Rottembourg, B.: Relaxation lagrangienne et programmation par contraintes pour la résolution de
problèmes d’emplois du temps. In Francoro III, Quebec, CA, May 2001.

Bleuzen Guernalec, N., Colmerauer, A.: Narrowing a 2n block of sortings in O(n log n), Proc. of the 14th International
Conference on Logic Programming, L. Naish ed., The MIT Press, 1997.

Camerini, P.M., Fratta, L., Maffioli, F.: On Improving Relaxation Methods by Modified Gradient Techniques. Mathematical
Programming Study 3 (1975) 26-34.

Caseau, Y., Laburthe, F.: Solving Small TSPs with Constraints, Proc. of the 14th International Conference on Logic
Programming, L. Naish ed., The MIT Press, 1997.

Caseau, Y., Laburthe, F.: Solving Various Matching Problems with Constraints, Proc. of the 3rd International Conference on
Principles and Practice of Constraint Programming, CP'97, G. Smolka ed., Lecture Notes in Computer Science 1330,
Springer, p. 17-31, 1997.

Ferland, J.A., Fleurent, C.: Genetic and Hybrid Algorithms for Graph Coloring. Annals of Operations Research:
Metaheuristics in Combinatorial Optimization 63 / 1. Hammer, P.L. et al (Eds) (1996) 437-461.

Focacci, F., Lodi, A., Milano, M.: Cost-Based Domain Filtering. In Joxan Jaffar, editor, Principles and Practice of Constraint
Programming, volume 1713 of Lecture Notes in Computer Science. Springer, October (1999).

Harvey, W., Ginsberg, M.: Limited Discrepancy Search, Proceedings of the 14th IJCAI, p. 607-615, Morgan Kaufmann,
(1995).

Henz, M.: Scheduling a Major College Basketball Conference – Revisited. To appear in: Operations Research 49 / 1 (2001).

Hiriart-Urruty, J.B., Lemaréchal, C. Convex Analysis and Minimization Algorithms, Springer-Verlag, (1993)

Martin, P., Shmoys, D.B.: A time-based approach to the Jobshop problem, Proc. of IPCO'5, Vancouver 1996, M. Queyranne ed.,
Lecture Notes in Computer Science 1084, p. 389-403, Springer, (1996).

Minoux, M. Programmation mathématique, Dunod, (1983).

Nemhauser, G.L., Trick, M.A.: Scheduling a Major College Basketball Conference. Operations Research 46 / 1 (1998) 1-8.

Pesant, G., Gendreau, M., Potvin, J.-Y., Rousseau, J.M.: An Exact Constraint Logic Programming Algorithm for the
Travelling Salesman with Time Windows, Transportation Science 32(1), 1998.

Régin, J.-C.: Minimization of the Number of Breaks in Sports Scheduling Problems Using Constraint Programming.
DIMACS Workshop on Constraint Programming and Large Scale Discrete Optimization. (1999)

Régin, J.-C.: Modeling with Constraint Programming. Dagstuhl Seminar on Constraint and Integer Programming. (2000).

Russell, R.A., Leung, J.M.: Devising a Cost Effective Schedule for a Baseball League. Operations Research 42 (1994) 614-
625.

Schaerf, A.: Scheduling Sport Tournaments Using Constraint Logic Programming. Constraints 4 / 1 (1999) 43-65.

Schreuder, J.A.M.: Constructing Timetables for Sport Competitions. Mathematical Programming Study 13 (1980) 58-67.

Shor, N.Z. Minimization Methods for Non-Differentiable Functions. Springer, (1985)

Terrill, J.B., Willis, R.J.: Scheduling the Australian State Cricket Season using Simulated Annealing. Journal of the
Operational Research Society 45 / 3 (1994) 276-280.

De Werra, D., Jacot-Descombes, L., Masson, P.: A Constrained Sports Scheduling Problem. Discrete Applied Mathematics
26 (1990) 41-49.

Wright, M.: Timetabling County Cricket Fixtures Using a Form of Tabu Search. Journal of the Operational Research Society
45 / 7 (1994) 758-770.
