
Discrete (and Continuous) Optimization

WI4 131
Kees Roos
Technische Universiteit Delft
Faculteit Electrotechniek, Wiskunde en Informatica
Afdeling Informatie, Systemen en Algoritmiek
e-mail: C.Roos@ewi.tudelft.nl
URL: http://www.isa.ewi.tudelft.nl/roos
November/December, A.D. 2004
Course Schedule
1. Formulations (18 pages)
2. Optimality, Relaxation, and Bounds (10 pages)
3. Well-solved Problems (13 pages)
4. Matching and Assignments (10 pages)
5. Dynamic Programming (11 pages)
6. Complexity and Problem Reduction (8 pages)
7. Branch and Bound (17 pages)
8. Cutting Plane Algorithms (21 pages)
9. Strong Valid Inequalities (22 pages)
10. Lagrangian Duality (14 pages)
11. Column Generation Algorithms (16 pages)
12. Heuristic Algorithms (15 pages)
13. From Theory to Solutions (20 pages)
Chapter 12
Heuristic Algorithms
Introduction
Many practical problems are NP-hard. In such cases one may choose (or even be forced) to use a heuristic or approximation algorithm: a smart method that hopefully finds a good feasible solution quickly.

In designing a heuristic, various questions arise:
• Should one accept any feasible solution, or should one ask a posteriori how far it is from optimal?
• Can one guarantee a priori that the heuristic produces a solution within $\varepsilon$ (or $\varepsilon\%$) of optimal?
• Can one guarantee a priori that for the class of problems considered the heuristic on average produces a solution within $\varepsilon$ (or $\varepsilon\%$) of optimal?
Greedy and Local Search Revisited
We suppose that the problem can be written as a COP in the form:

$\min_{S \subseteq N} \{ c(S) : v(S) \ge k \}.$

Consider for example the 0-1 knapsack problem:

$z = \max \left\{ \sum_{j=1}^{n} c_j x_j : \sum_{j=1}^{n} a_j x_j \le b, \ x \in \{0,1\}^n \right\}.$

Here $x$ is the indicator vector of the set $S$. Thus, defining

$c(S) = \sum_{j \in S} c_j, \quad v(S) = \sum_{j \in S} a_j, \quad k = b,$

the 0-1 knapsack problem gets the above form. Also the uncapacitated facility location problem

$\min \left\{ \sum_{i \in M} \sum_{j \in N} c_{ij} x_{ij} + \sum_{j \in N} f_j y_j : \sum_{j=1}^{n} x_{ij} = 1, \ \sum_{i \in M} x_{ij} \le |M|\, y_j, \ x_{ij} \in [0,1], \ y_j \in \{0,1\} \right\}$

fits in this model if we take

$c(S) = \sum_{i \in M} \min_{j \in S} c_{ij} + \sum_{j \in S} f_j, \quad v(S) = |S|, \quad k = 1.$
Greedy Heuristic
$\min_{S \subseteq N} \{ c(S) : v(S) \ge k \}.$

Step 1: Set $S^0 = \emptyset$ and $t = 1$.

Step 2: Set $j_t = \operatorname{argmin}_{j \notin S^{t-1}} \dfrac{c(S^{t-1} \cup \{j\}) - c(S^{t-1})}{v(S^{t-1} \cup \{j\}) - v(S^{t-1})}.$

Step 3: If $S^{t-1}$ is feasible, and the cost does not decrease when passing to $S^t = S^{t-1} \cup \{j_t\}$, stop with $S^G = S^{t-1}$.

Step 4: Otherwise, set $S^t = S^{t-1} \cup \{j_t\}$. If $S^t$ is feasible and the cost function is nondecreasing, or $t = n$: stop with $S^G = S^t$.

Step 5: If $t = n$, no feasible solution has been found. Stop.

Step 6: Set $t \leftarrow t + 1$, and return to Step 2.
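In code the scheme is compact. Below is a minimal Python sketch, assuming the problem is given by two set-function oracles c and v over a ground set N (all names here are illustrative, not from the notes); the usage lines reproduce the UFL example that follows.

def greedy(N, c, v, k):
    """Greedy heuristic for min { c(S) : v(S) >= k } over subsets S of N."""
    S = set()
    for t in range(len(N)):
        feasible = v(S) >= k
        # Only candidates that strictly increase v, so the Step 2 ratio is defined.
        cand = [j for j in N - S if v(S | {j}) > v(S)]
        if not cand:
            return S if feasible else None
        j = min(cand, key=lambda j: (c(S | {j}) - c(S)) / (v(S | {j}) - v(S)))
        if feasible and c(S | {j}) >= c(S):
            return S                              # Step 3: stop with S^G = S
        S = S | {j}                               # Step 4
    return S if v(S) >= k else None               # Step 5

# Usage on the UFL instance of the next example, with the convention c(emptyset) = 0:
C = [[6, 2, 3, 4], [1, 9, 4, 11], [15, 2, 6, 3],
     [9, 11, 4, 8], [7, 23, 2, 9], [4, 3, 1, 5]]
f = [21, 16, 11, 24]
def cost(S):
    return sum(min(r[j] for j in S) for r in C) + sum(f[j] for j in S) if S else 0
print(greedy(set(range(4)), cost, len, 1))        # {2}: depot 3, with cost 31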
Example: Uncapacitated Facility Location
Consider the UFL with m = 6 clients, n = 4 depots, and costs as shown below:

$(c_{ij}) = \begin{pmatrix} 6 & 2 & 3 & 4 \\ 1 & 9 & 4 & 11 \\ 15 & 2 & 6 & 3 \\ 9 & 11 & 4 & 8 \\ 7 & 23 & 2 & 9 \\ 4 & 3 & 1 & 5 \end{pmatrix} \quad \text{and} \quad (f_j) = (21, 16, 11, 24).$

Greedy solution:

Step 1: Set $S^0 = \emptyset$ and $t = 1$. $S^0$ is infeasible.

Step 2: We compute $j_1 = \operatorname{argmin}_j \, c(\{j\}) / v(\{j\})$. One has $c(\{1\}) = (6+1+15+9+7+4) + 21 = 63$, $c(\{2\}) = 66$, $c(\{3\}) = 31$, $c(\{4\}) = 64$. So $j_1 = 3$, $S^1 = \{3\}$ and $c(S^1) = 31$. $S^1$ is feasible.

Step 3: Since $S^1$ is feasible, we check if the cost decreases when passing to $S^2 = S^1 \cup \{j\}$ for some $j \notin S^1$. One has $c(\{3,1\}) - c(\{3\}) = 18$, $c(\{3,2\}) - c(\{3\}) = 11$, and $c(\{3,4\}) - c(\{3\}) = 21$. So the cost is nondecreasing; hence we stop with $S^G = S^1 = \{3\}$.

In many cases the heuristic must be adapted to the problem structure. We give an example for the STSP.
Example: Symmetric TSP
$\begin{pmatrix} - & 9 & 2 & 8 & 12 & 11 \\ 9 & - & 7 & 19 & 10 & 32 \\ 2 & 7 & - & 29 & 18 & 6 \\ 8 & 19 & 29 & - & 24 & 3 \\ 12 & 10 & 18 & 24 & - & 19 \\ 11 & 32 & 6 & 3 & 19 & - \end{pmatrix}$

Distance matrix.
Greedy Solution: The pure greedy heuristic has been applied to this instance in Chapter 2. Here we use a nearest-neighbor insertion heuristic: starting from an arbitrary node, we subsequently build paths $P^t$ containing that node by inserting the node nearest to the current path into that path, until we obtain a tour (this happens after $n - 1$ insertions).
Let us start at node 1. The nearest neighbor of node 1 is 3, yielding $P^1 = 1-3$. The nearest neighbor of the node set $\{1,3\}$ is 6, with distance 6 to 3, yielding the three possible paths shown in the table. We use the cheapest insertion: $P^2 = 1-3-6$. The nearest neighbor of the node set $\{1,3,6\}$ is 4, with distance 3 to node 6. Possible insertions are shown in the table. Now node 2 is nearest to $P^3$, and the possible insertions are as indicated. We use $P^4 = 2-1-3-6-4$, which has length 20. Finally, the last node (node 5) must be connected to this path so as to obtain a tour. This can be done in 5 ways:
[Figure: the path 2-1-3-6-4 with the five possible ways of attaching node 5.]
node  insertion    length  new path
1     1            0       P^0
3     1-3          2       P^1
6     6-1-3        13
6     1-6-3        17
6     1-3-6        8       P^2
4     4-1-3-6      16
4     1-4-3-6      43
4     1-3-4-6      34
4     1-3-6-4      11      P^3
2     2-1-3-6-4    20      P^4
2     1-2-3-6-4    25
2     1-3-2-6-4    44
2     1-3-6-2-4    59
2     1-3-6-4-2    30
possible insertions of node 5    increment
1-3-6-4-2-5-1                    19 + 10 + 12 - 9 = 32
1-3-6-4-5-2-1                    10 + 24 = 34
1-3-6-5-4-2-1                    19 + 19 + 24 - 3 = 59
1-3-5-6-4-2-1                    19 + 18 + 19 - 6 = 50
1-5-3-6-4-2-1                    19 + 12 + 18 - 2 = 47
The shortest tour results when inserting 5 between the nodes 2 and 1 in $P^4$ (and adding the edge $\{4,2\}$): 2-5-1-3-6-4-2, and the length of this tour is 20 + 32 = 52.
Other Variants of Greedy Insertion for Symmetric TSP
The choice of the node to insert into the current path can be made in many different ways:
• nearest node (as done in the example);
• random node;
• farthest node.

When solving the same STSP instance with cheapest insertion of the farthest node we obtain (starting at node 1 again):

$P^1 = 1-5$ (length 12);
$P^2 = 4-1-5$ (length 20);
$P^3 = 4-1-3-5$ (length 28);
$P^4 = 4-1-3-2-5$ (length 27);
tour 6-4-1-3-2-5-6 (length 49).

The tour length is shorter than when inserting the nearest node! Surprising, but this often happens. Can you understand why?
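A Python sketch of both insertion variants. Two modelling assumptions are made here: the distance from a node to the current path is taken as its smallest distance to a path node in the nearest variant and as its largest such distance in the farthest variant (the latter matches the numbers in the example above), and the last node is placed so as to minimize the closed tour.

def path_len(d, p):
    return sum(d[p[i]][p[i + 1]] for i in range(len(p) - 1))

def tour_len(d, p):
    return path_len(d, p) + d[p[-1]][p[0]]

def insertion_tour(d, start=0, farthest=False):
    n = len(d)
    path = [start]
    while len(path) < n:
        rest = [v for v in range(n) if v not in path]
        if farthest:
            v = max(rest, key=lambda v: max(d[v][u] for u in path))
        else:
            v = min(rest, key=lambda v: min(d[v][u] for u in path))
        # Cheapest insertion; the final node closes the tour.
        length = tour_len if len(path) == n - 1 else path_len
        i = min(range(len(path) + 1),
                key=lambda i: length(d, path[:i] + [v] + path[i:]))
        path.insert(i, v)
    return path

D = [[0, 9, 2, 8, 12, 11], [9, 0, 7, 19, 10, 32], [2, 7, 0, 29, 18, 6],
     [8, 19, 29, 0, 24, 3], [12, 10, 18, 24, 0, 19], [11, 32, 6, 3, 19, 0]]
print(tour_len(D, insertion_tour(D)))                 # nearest: 52
print(tour_len(D, insertion_tour(D, farthest=True)))  # farthest: 49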
Local Search Heuristic
Local search can be more conveniently discussed when the problem has the form

$\min_{S \subseteq N} \{ c(S) : g(S) = 0 \},$

where $g(S) \ge 0$ represents a measure for the infeasibility of $S$. E.g., the constraint $v(S) \ge k$ can be represented by using $g(S) = (k - v(S))^{+}$.

For a local search heuristic we need:
• a solution $S \subseteq N$;
• a local neighborhood $Q(S)$ for each solution $S \subseteq N$;
• a goal function $f(S)$, which can be either $c(S)$ when $S$ is feasible and infinite otherwise, or a composite function of the form $c(S) + \alpha g(S)$ ($\alpha \ge 0$).

Local Search Heuristic: Choose an initial solution $S$. Search for a solution $S' \in Q(S)$ that minimizes $f(S')$. If $f(S') = f(S)$, stop: $S^H = S$ is locally optimal. Otherwise, set $S = S'$ and repeat.
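A minimal Python sketch of this scheme, assuming the goal function f and the neighborhood map Q are supplied by the user:

def local_search(S, f, Q):
    """Descend from S until no neighbor has a strictly smaller goal value."""
    while True:
        best = min(Q(S), key=f)
        if f(best) >= f(S):
            return S              # locally optimal: this is S^H
        S = best

# Example neighborhood: add or remove one element (O(n) neighbors);
# use as local_search(S0, f, lambda S: Q_switch(S, N)).
def Q_switch(S, N):
    return [S | {j} for j in N - S] + [S - {j} for j in S]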
Local Search Heuristic (cont.)
[Figure: a search space of solutions with the neighborhoods $Q(S_1)$, $Q(S_2), \ldots, Q(S_k)$ of successive incumbents.]
The appropriate choice of the neighborhoods depends on the problem structure. A simple choice is just to add or remove one element to or from $S$. This neighborhood has $O(n)$ elements. Finding the best solution in the neighborhood can thus be done in $O(n)$ time.

If all feasible sets have the same size, a useful neighborhood is obtained by replacing one element of $S$ by an element not in $S$. This requires $O(n^2)$ time. In the case of the STSP this leads to the well-known 2-exchange heuristic.
2-Exchange Heuristic for STSP
$\begin{pmatrix} - & 9 & 2 & 8 & 12 & 11 \\ 9 & - & 7 & 19 & 10 & 32 \\ 2 & 7 & - & 29 & 18 & 6 \\ 8 & 19 & 29 & - & 24 & 3 \\ 12 & 10 & 18 & 24 & - & 19 \\ 11 & 32 & 6 & 3 & 19 & - \end{pmatrix}$
With the greedy insertion heuristic we found the tour 6-4-1-3-2-5-6 of length 49. A 2-exchange removes two (nonadjacent, why?) edges. The two resulting pieces are reconnected by two other edges (this can be done in only one way, why?!). If this yields a better tour, we accept it and repeat, until we find a so-called 2-optimal tour.

We have a 6-city problem, so a tour has 6 edges. There are $\frac{6 \cdot 3}{2} = 9$ possible 2-exchanges. For each 2-exchange we give the two new edges and the increase of the tour length.
[Figure: current tour 6-4-1-3-2-5-6, length 49.]
no.  2 deleted edges   2 new edges      increment
1    {1,3}, {2,5}      {1,2}, {3,5}     27 - 12 = 15
2    {1,3}, {5,6}      {1,5}, {3,6}     18 - 21 = -3
3    {1,3}, {6,4}      {1,6}, {3,4}     40 - 5 = 35
4    {3,2}, {5,6}      {3,5}, {2,6}     50 - 26 = 24
5    {3,2}, {6,4}      {3,6}, {2,4}     25 - 10 = 15
6    {3,2}, {4,1}      {3,4}, {2,1}     38 - 15 = 23
7    {2,5}, {6,4}      {2,6}, {5,4}     56 - 13 = 43
8    {2,5}, {4,1}      {2,4}, {5,1}     31 - 18 = 13
9    {5,6}, {4,1}      {5,4}, {6,1}     35 - 27 = 8
[Figure: new tour 1-5-2-3-6-4-1, length 46.]
The second exchange gives an improvement: it decreases the length of the tour to $z_L = 46$. The new tour is 1-5-2-3-6-4-1. One may verify that this tour is 2-optimal.
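A Python sketch of the 2-exchange heuristic (reusing D and tour_len from the earlier sketch); reconnecting the two pieces amounts to reversing the tour segment between the two removed edges. For n = 6 the inner loops visit exactly the 9 edge pairs of the table, and on the tour above the procedure returns the 2-optimal tour of length 46.

def two_exchange(d, tour):
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            # j enumerates edges non-adjacent to edge i; j = n-1 is {tour[n-1], tour[0]}.
            for j in range(i + 2, n if i > 0 else n - 1):
                a, b = tour[i], tour[i + 1]
                c, e = tour[j], tour[(j + 1) % n]
                delta = d[a][c] + d[b][e] - d[a][b] - d[c][e]
                if delta < 0:                     # the exchange shortens the tour
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

tour = [5, 3, 0, 2, 1, 4]                         # 6-4-1-3-2-5 in 0-based labels
print(tour_len(D, two_exchange(D, tour)))         # 46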
Improved Local Search Heuristics
How do we escape from a local minimum, and thus potentially do better than a local search heuristic? This is the question addressed by two heuristics that we first discuss briefly:
• Tabu search
• Simulated annealing

After this we deal with
• Genetic algorithms

Rather than working with individual solutions, genetic algorithms work with a finite population (set of solutions) $S_1, \ldots, S_k$, and the population evolves (changes somewhat randomly) from one generation (iteration) to the next.

Finally we discuss two special-purpose heuristics for the STSP and some heuristics for MIPs.
Tabu Search
In order to escape from local minima one has to accept every now and then a solution with a worse value than the incumbent. Since the (old and better) incumbent may belong to the neighborhood of the new incumbent, it may happen that cycling occurs, i.e., that the algorithm returns to the same solution every two or three steps: $S^0 \to S^1 \to S^0 \to S^1 \to \ldots$ To avoid cycling, certain solutions or moves are forbidden, or tabu. Comparing the new solution with all previous incumbents would require much memory space and may be very time consuming. Instead, a tabu list of recent solutions, or solution modifications, is kept. A basic version of the algorithm is:

Step 1: Initialize a tabu list.
Step 2: Get an initial solution $S$.
Step 3: While the stopping criterion is not satisfied:
  3.1: Choose a subset $Q'(S) \subseteq Q(S)$ of non-tabu solutions.
  3.2: Let $S' = \operatorname{argmin} \{ f(T) : T \in Q'(S) \}$.
  3.3: Replace $S$ by $S'$.
Step 4: On termination, the best solution found is the heuristic solution.
Tabu Search (cont.)
The parameters specific to tabu search are:

(i) The choice of $Q'(S)$. If $Q(S)$ is small, one takes the whole neighborhood. Otherwise, $Q'(S)$ can be a fixed number of neighbors of $S$, chosen randomly or by some heuristic rule.

(ii) The tabu list consists of a small number $t$ of most recent solutions (or modifications). If $t = 1$ or $t = 2$, it is not surprising that cycling is still common. The magic value $t = 7$ is often a good choice.

(iii) The stopping rule is often just a fixed number of iterations, or a certain number of iterations without any improvement of the goal value of the best solution found.
Tabu Search (cont.)
If the neighborhood consists of single-element switches:

$Q(S) = \{ T \subseteq N : |T \bigtriangleup S| = 1 \}, \quad S \subseteq N,$

then the tabu list might be a list of the last $t$ elements $\{i_1, \ldots, i_t\}$ added to the incumbent and a list of the last $t$ elements $\{j_1, \ldots, j_t\}$ removed from the incumbent. A neighbor $T$ is then tabu if $T = S \setminus \{i_q\}$ or if $T = S \cup \{j_q\}$ for some $q = 1, \ldots, t$. So, in forming the new incumbent one is not allowed to modify the current incumbent $S$ by removing some recently added $i_q$ or by adding back some recently removed $j_q$.

When implementing tabu search, the performance can be improved by using common sense. E.g., there is no justification for making a solution tabu if it is the best solution found so far. In other words, tabu search can be viewed as a search strategy that tries to take advantage of the history of the search and the problem structure intelligently.
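A Python sketch of tabu search with this switch neighborhood. As a simplification (an assumption, not the notes' exact bookkeeping) a single tabu list of the t most recently switched elements is kept, and any move touching them is forbidden:

from collections import deque

def tabu_search(N, f, S, t=7, max_iter=100):
    tabu = deque(maxlen=t)              # the t most recently switched elements
    best, best_val = S, f(S)
    for _ in range(max_iter):           # stopping rule: fixed number of iterations
        moves = [j for j in N if j not in tabu]
        if not moves:
            break
        j = min(moves, key=lambda j: f(S ^ {j}))
        S = S ^ {j}                     # symmetric difference: add or remove j
        tabu.append(j)
        if f(S) < best_val:
            best, best_val = S, f(S)
    return best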
Simulated Annealing
Simulated annealing is less direct, and less intelligent. The basic idea is to choose a neighbor randomly. The neighbor replaces the incumbent with probability 1 if it is better, and with some (changing) probability in $(0,1)$ if it has a worse value.

The probability of accepting a worse solution is taken to depend on the difference in goal values. So, if the number of iterations is large enough, one can escape from every local minimum. On the other hand, to guarantee convergence of the algorithm, the probability of accepting worse solutions decreases over time. A more formal description is as follows:

Step 1: Get an initial solution $S$.
Step 2: Get an initial temperature $T$ and a cooling ratio $r$ ($0 < r < 1$).
Step 3: While not yet frozen, do the following:
  3.1: Perform the following loop $L$ times:
    3.1.1: Pick a random neighbor $S'$ of $S$.
    3.1.2: Let $\Delta = f(S') - f(S)$.
    3.1.3: If $\Delta \le 0$, replace $S$ by $S'$.
    3.1.4: If $\Delta > 0$, replace $S$ by $S'$ with probability $e^{-\Delta/T}$.
  3.2: Set $T \leftarrow rT$. (Reduce the temperature.)
Step 4: Return the best solution found; this is the heuristic solution.
Simulated Annealing (cont.)
Note that the probability of accepting worse solutions decreases in time, because the temperature $T$ decreases in time. Also, the probability decreases as $\Delta$ increases, i.e., as the quality of the solution becomes worse.

Just as for other local search heuristics, one has to define an initial solution, a neighborhood for each solution, and the value $f(S)$ of a solution. The parameters specific to SA are:

(i) The initial temperature $T$.
(ii) The cooling ratio $r$.
(iii) The loop length $L$.
(iv) The definition of frozen, i.e., a stopping rule.
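A Python sketch of the scheme, assuming f and a user-supplied random_neighbor function; "frozen" is taken here to mean that T has dropped below a threshold T_min, one of several possible stopping rules:

import math, random

def simulated_annealing(S, f, random_neighbor, T=10.0, r=0.9, L=50, T_min=0.01):
    best = S
    while T > T_min:                          # not yet frozen
        for _ in range(L):                    # Step 3.1
            S2 = random_neighbor(S)
            delta = f(S2) - f(S)
            # Accept improvements always, deteriorations with probability e^(-delta/T).
            if delta <= 0 or random.random() < math.exp(-delta / T):
                S = S2
                if f(S) < f(best):
                    best = S
        T *= r                                # Step 3.2: reduce the temperature
    return best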
Genetic Algorithms
Rather than working with individual solutions, genetic algorithms work with a finite population (set of solutions) $S_1, \ldots, S_k$, and the population evolves (changes somewhat randomly) from one generation (iteration) to the next. An iteration consists of the following steps:

(i) Evaluation. The fitness of the individuals is evaluated.
(ii) Parent Selection. Certain pairs of solutions (parents) are selected based on their fitness.
(iii) Crossover. Each pair of parents combines to produce one or two new solutions (offspring).
(iv) Mutation. Some of the offspring solutions are randomly modified.
(v) Population Selection. Based on their fitness, a new population is selected, replacing some or all of the original population by an identical number of offspring solutions.

Each of the five steps is discussed in more detail in the book.
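As a concrete illustration, here is a Python sketch of one generation on 0-1 strings; the operators shown (fitness-proportional parent selection, one-point crossover, bit-flip mutation, truncation selection) are common textbook choices assumed for the sketch, not prescribed by the slides, and fitness values are assumed positive.

import random

def next_generation(pop, fitness, p_mut=0.01):
    k = len(pop)
    weights = [fitness(s) for s in pop]                      # (i) evaluation
    offspring = []
    for _ in range(k):
        p1, p2 = random.choices(pop, weights=weights, k=2)   # (ii) parent selection
        cut = random.randrange(1, len(p1))                   # (iii) one-point crossover
        child = p1[:cut] + p2[cut:]
        child = [b ^ (random.random() < p_mut) for b in child]   # (iv) mutation
        offspring.append(child)
    # (v) population selection: keep the k fittest of parents and offspring.
    return sorted(pop + offspring, key=fitness, reverse=True)[:k]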
Worst-case Analysis of (some) Heuristics
Integer Knapsack Heuristic
We consider the integer knapsack problem

$z = \max \left\{ \sum_{j=1}^{n} c_j x_j : \sum_{j=1}^{n} a_j x_j \le b, \ x \in \mathbb{Z}^n_+ \right\},$

where $a_j \in \mathbb{Z}_+$ for all $j$ and $b \in \mathbb{Z}_+$. We suppose that the variables are ordered so that $c_j / a_j$ is non-increasing and, moreover, that $a_j \le b$ for all $j$. We consider the following simple heuristic: take $x^H_1 = \lfloor b / a_1 \rfloor$ and $x^H_j = 0$ for $j \ge 2$, yielding the value $z^H = c_1 \lfloor b / a_1 \rfloor$.

Theorem 1. $z^H \ge \frac{1}{2} z$.
Proof: The solution of the linear relaxation is $x_1 = b / a_1$ and $x_j = 0$ for $j \ge 2$. This provides the upper bound $z^{LP} = c_1 b / a_1 \ge z$. As $a_1 \le b$, we have $\lfloor b / a_1 \rfloor \ge 1$. Setting

$\frac{b}{a_1} = \left\lfloor \frac{b}{a_1} \right\rfloor + \left( \frac{b}{a_1} \right)_f,$

we have $0 \le (b / a_1)_f < 1$. Hence

$z^H = c_1 \left\lfloor \frac{b}{a_1} \right\rfloor = \frac{\lfloor b / a_1 \rfloor}{b / a_1} z^{LP} = \frac{\lfloor b / a_1 \rfloor}{\lfloor b / a_1 \rfloor + (b / a_1)_f} z^{LP} \ge \frac{\lfloor b / a_1 \rfloor}{\lfloor b / a_1 \rfloor + 1} z^{LP} \ge \frac{1}{2} z^{LP} \ge \frac{1}{2} z. \ \square$
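A small numeric check of the bound in Python, assuming the items are already sorted by non-increasing c_j / a_j; the exact value is obtained by brute force on a toy instance:

def z_heuristic(c, a, b):
    return c[0] * (b // a[0])          # z^H = c_1 * floor(b / a_1)

def z_exact(c, a, b):                  # brute-force enumeration; tiny instances only
    if not c:
        return 0
    return max(x * c[0] + z_exact(c[1:], a[1:], b - x * a[0])
               for x in range(b // a[0] + 1))

c, a, b = [10, 6, 4], [7, 5, 4], 13    # c/a = 10/7, 6/5, 1 is non-increasing
print(z_heuristic(c, a, b), z_exact(c, a, b))   # 10 and 16: indeed 10 >= 16/2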
Euclidean STSP

We say that an STSP on a graph $G = (V, E)$ is Euclidean if for any three edges $e$, $f$ and $g$ forming a triangle, one has the so-called triangle inequality: $c_e \le c_f + c_g$.

Proposition 1. Let $G$ be a complete graph whose edge lengths satisfy the triangle inequality, and let $G$ contain a subgraph $H = (V, \hat{E})$ which is Eulerian. Then $G$ contains a Hamilton cycle of length at most $c(H) = \sum_{e \in \hat{E}} c_e$.
Proof: Note that any Eulerian circuit in $H$ passes exactly once through every edge in $\hat{E}$, and hence has length $\sum_{e \in \hat{E}} c_e$. Any such circuit $C$ passes through all the nodes in $V$, possibly more than once through some or all of them. Using $C$ we construct a Hamilton circuit as follows. We choose a node $v_1 \in V$, and starting at that node we walk along the edges of $C$ (in one of the two possible directions). The first new node that we encounter after leaving $v_1$ is called $v_2$. After leaving $v_2$, the first node different from $v_1$ and $v_2$ is called $v_3$, and so on. Proceeding in this way we get a sequence $v_1, v_2, \ldots, v_n$ containing all the nodes in $V$. We claim that the tour $T: v_1 - v_2 - \ldots - v_n - v_1$ has length at most $\sum_{e \in \hat{E}} c_e$.

For this, two observations are necessary. First, $G$ contains the edges $\{v_i, v_{i+1}\}$ for $i = 1$ to $i = n - 1$ and also the edge $\{v_n, v_1\}$, since $G$ is a complete graph. So $T$ is indeed a tour. Second, the nodes on $T$ partition the Euler circuit $C$ into $n$ successive edge-disjoint pieces. Each edge on the tour $T$ is a shortcut for the corresponding piece, due to the triangle inequality. Thus the length of the tour is at most equal to the length of $C$. $\square$
Geometric proof
We consider the piece of the Eulerian tour between the nodes $v_i$ and $v_{i+1}$ of the Hamilton tour. It will be convenient to call the nodes on the corresponding piece of the Eulerian tour $1, 2, \ldots, k$. So $1 = v_i$ and $k = v_{i+1}$. The figure depicts the situation for $k = 6$; the red edges are part of the Eulerian tour.

[Figure: the nodes 1, 2, ..., 6 of a piece of the Eulerian tour (red edges) and the shortcut edge from 1 to 6.]

Using the triangle inequality we show that the shortcut 1-6 is shorter than the path 1-2-3-4-5-6:

$c_{16} \le c_{15} + c_{56} \le c_{14} + c_{45} + c_{56} \le c_{13} + c_{34} + c_{45} + c_{56} \le c_{12} + c_{23} + c_{34} + c_{45} + c_{56}.$
The Tree Heuristic for Euclidean STSP

Step 1: Find a minimum-length spanning tree $T$, with edge set $E_T$ and length $z_T = \sum_{e \in E_T} c_e$.
Step 2: Double each edge in $T$ to form a connected Eulerian graph.
Step 3: Using the previous proposition, taking any Eulerian tour, construct a Hamilton circuit of length $z_H$.

Proposition 2. $z_H \le 2z$.
Proof: Since every tour consists of a spanning tree plus an edge, we have the lower bound $z_T \le z$. The length of the Eulerian tour is $2 z_T$, by construction. The construction of the Hamilton circuit guarantees that $z_H \le 2 z_T$. Thus we have $z_H \le 2 z_T \le 2z$. $\square$
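A sketch of the tree heuristic, assuming the networkx library; the shortcutting keeps the first occurrence of each node on the Eulerian circuit, exactly as in the proof of Proposition 1.

import networkx as nx

def tree_heuristic(G):
    T = nx.minimum_spanning_tree(G)               # Step 1
    D = nx.MultiGraph(T)
    D.add_edges_from(T.edges(data=True))          # Step 2: double every tree edge
    seen, tour = set(), []
    for u, _ in nx.eulerian_circuit(D):           # Step 3: shortcut repeated nodes
        if u not in seen:
            seen.add(u)
            tour.append(u)
    return tour + [tour[0]]

# Usage on the 11-city instance below, with dist the distance table:
# G = nx.Graph()
# G.add_weighted_edges_from((i, j, dist[i][j]) for i in range(11) for j in range(i))
# tour = tree_heuristic(G)    # length at most 2 * 674 = 1348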
Example of the Tree Heuristic for Euclidean STSP
Below the distances are given between the capitals of 11 of the 12 provinces in the Netherlands.
1 2 3 4 5 6 7 8 9 10 11
1. Amsterdam 0 92 162 57 184 87 132 207 175 40 103
2. Arnhem 92 0 132 116 157 63 154 151 200 59 66
3. Assen 162 132 0 214 25 195 68 283 315 159 69
4. Den Haag 57 116 214 0 236 104 182 162 124 61 151
5. Groningen 184 157 25 236 0 220 58 308 340 184 94
6. Den Bosch 87 63 195 104 220 0 215 123 141 53 129
7. Leeuwarden 132 154 68 182 58 215 0 305 306 162 91
8. Maastricht 207 151 283 162 308 123 305 0 242 176 217
9. Middelburg 175 200 315 124 340 141 306 242 0 156 246
10. Utrecht 40 59 159 61 184 53 162 176 156 0 90
11. Zwolle 103 66 69 151 94 129 91 217 246 90 0
We want to find a minimum-length Hamilton circuit through these cities.
We first find a minimum-weight spanning tree. The minimum-weight tree is shown in the graph; it has weight 25 + 40 + 53 + 57 + 58 + 59 + 66 + 69 + 123 + 124 = 674.

[Figure: the minimum-weight spanning tree on the 11 cities.]

By doubling each edge in the tree the graph becomes Eulerian. An Eulerian tour is (e.g.) 1-4-9-4-1-10-6-8-6-10-2-11-3-5-7-5-3-11-2-10-1, which has length 1348. From this we obtain the Hamilton circuit 1-4-9-10-6-8-2-11-3-5-7-1, with length 57 + 124 + 156 + 53 + 123 + 151 + 66 + 69 + 25 + 58 + 132 = 1014.

N.B. When applying 2-exchanges to the above tour the length reduces to 1008. The farthest-node insertion heuristic, when started at node 10, yields a tour of length 1032; when applying 2-exchanges this length reduces to 990. This is optimal: 1-10-4-9-6-8-2-11-3-5-7-1. For the nearest-neighbor insertion heuristic, starting at node 8, the length is 1045, which can be reduced to 1032 by 2-exchanges.
The Tree/Matching Heuristic for Euclidean STSP

Step 1: Find a minimum-length spanning tree $T$, with edge set $E_T$ and length $z_T = \sum_{e \in E_T} c_e$.
Step 2: Let $U$ be the set of odd-degree nodes in $(V, E_T)$. Find a perfect matching $M$ of minimum length $z_M$ in the subgraph $G' = (U, E')$ of $G$ induced by the subset $U$, so $E'$ contains all edges in $G$ with both end points in $U$. By construction, $(V, E_T \cup M)$ is an Eulerian graph.
Step 3: From any Eulerian tour in $(V, E_T \cup M)$, construct a Hamilton circuit of length $z_C$.

Proposition 3 (Christofides, 1976). $z_C \le \frac{3}{2} z$.
Proof: As above, $z_T \le z$. Let the nodes be ordered such that $1 - 2 - \ldots - n - 1$ is an optimal tour (of length $z$). Let $j_1 < \ldots < j_{2k}$ be the nodes of $U$, and let $e_i = \{j_i, j_{i+1}\}$, with $j_{2k+1} = j_1$. Due to the triangle inequality, one has $\sum_{i=1}^{2k} c_{e_i} \le z$. The sets $M_1 = \{e_i : i \text{ odd}\}$ and $M_2 = \{e_i : i \text{ even}\}$ are matchings in $G' = (U, E')$. Hence $z_M \le z_{M_1}$ and $z_M \le z_{M_2}$. Consequently, since $z_{M_1} + z_{M_2} = \sum_{i=1}^{2k} c_{e_i} \le z$, we have $2 z_M \le z$. Hence either $z_{M_1} \le \frac{1}{2} z$ or $z_{M_2} \le \frac{1}{2} z$. Suppose $z_{M_1} \le \frac{1}{2} z$. Then $(V, E_T \cup M_1)$ is Eulerian, and has weight at most $\frac{3}{2} z$. As we have seen before, we can then construct a Hamilton circuit of length $z_C \le \frac{3}{2} z$. $\square$
Geometric proof
We are given an optimal Hamilton circuit $C: 1 - 2 - \ldots - n - 1$ of length $z$ in the graph $G = (V, E)$ and the set $U$ of nodes having odd degree in a minimum-weight spanning tree $T$ of $G$. The nodes in $U$ partition $C$ into pieces; we connect these nodes by the edges that are shortcuts of these pieces, as indicated in the figure below. These edges form a circuit with an even number of edges, because the number of nodes in $U$ is even. We alternately assign the edges to the sets $M_1$ (red) and $M_2$ (green). Then $M_1$ and $M_2$ are matchings in the subgraph of $G$ induced by $U$.

[Figure: the tour $C$ with the odd-degree nodes $u_1, \ldots, u_6$ and their shortcut edges, alternately colored red and green.]

Obviously, due to the triangle inequality, $z_{M_1} + z_{M_2} \le z$. Hence either $z_{M_1} \le \frac{1}{2} z$ or $z_{M_2} \le \frac{1}{2} z$. By adding the shortest matching to $T$ we get an Eulerian graph whose Eulerian circuits have length at most $z_T + \frac{1}{2} z \le \frac{3}{2} z$.
Example of the Tree/Matching Heuristic for Euclidean STSP

We use the minimum-weight spanning tree that we found earlier.

[Figure: the spanning tree, in which the nodes 7, 8, 9 and 10 have odd degree.]

The odd-degree nodes in the tree form the set {7, 8, 9, 10}. The induced subgraph on these nodes is complete, with edge lengths $c_{7,8} = 305$, $c_{7,9} = 306$, $c_{7,10} = 162$, $c_{8,9} = 242$, $c_{8,10} = 176$ and $c_{9,10} = 156$.

The minimum-weight matching consists of the edges {8, 9} and {7, 10}. Adding these edges to the tree, the graph becomes Eulerian. An Eulerian tour is (e.g.) 1-4-9-8-6-10-2-11-3-5-7-10-1. From this we obtain the Hamilton circuit 1-4-9-8-6-10-2-11-3-5-7-1, with length 57 + 124 + 242 + 123 + 53 + 59 + 66 + 69 + 25 + 58 + 132 = 1008. This route is 2-optimal. Note that we now know that a minimum-length tour cannot be shorter than $\frac{2}{3} \cdot 1008 = 672$.
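In code the tree/matching heuristic differs from the tree heuristic only in how the graph is made Eulerian: the edge doubling is replaced by a minimum-weight matching on the odd-degree nodes. A sketch, again assuming networkx (recent versions also ship a ready-made christofides routine in its approximation module); on the 11-city instance it yields the length-1008 tour, up to the choice of the Eulerian circuit.

import networkx as nx

def tree_matching_heuristic(G):
    T = nx.minimum_spanning_tree(G)                      # Step 1
    odd = [v for v in T if T.degree(v) % 2 == 1]
    M = nx.min_weight_matching(G.subgraph(odd))          # Step 2: matching on U
    D = nx.MultiGraph(T)
    D.add_edges_from((u, v, {"weight": G[u][v]["weight"]}) for u, v in M)
    seen, tour = set(), []
    for u, _ in nx.eulerian_circuit(D):                  # Step 3: shortcut
        if u not in seen:
            seen.add(u)
            tour.append(u)
    return tour + [tour[0]]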
MIP-based Heuristics
Dive-and-Fix
Aim: quickly find a feasible solution, which gives a tight lower bound in the search tree.

Suppose we have a mixed 0-1 problem in variables $x_i \in \mathbb{R}$ and $y_j \in \{0,1\}$. Given a solution $(x^*, y^*)$ of the linear relaxation, let $F = \{ j : y^*_j \notin \{0,1\} \}$.

Initialization: Take the solution $(x^*, y^*)$ of the linear relaxation at some node in the search tree.

Basic Iteration: As long as $F \ne \emptyset$, do the following:
  Let $i = \operatorname{argmin}_{j \in F} \{ \min \{ y^*_j, 1 - y^*_j \} \}$ (find the fractional variable closest to an integer).
  If $y^*_i < 0.5$, fix $y_i = 0$ (if close to 0, fix to 0).
  If $y^*_i \ge 0.5$, fix $y_i = 1$ (if close to 1, fix to 1).
  Solve the resulting LO problem.
  If the LO problem is infeasible, stop (the heuristic has failed).
  Otherwise, let $(x^*, y^*)$ be the new linear solution.

Termination: If $F = \emptyset$, $(x^*, y^*)$ is a feasible mixed integer solution.
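A Python sketch of dive-and-fix, assuming scipy's linprog and a mixed 0-1 problem min c^T z subject to A_eq z = b_eq, with the indices of the binary variables listed in binaries; fixing a variable is done by tightening its bounds:

from scipy.optimize import linprog

def dive_and_fix(c, A_eq, b_eq, binaries, tol=1e-6):
    bounds = [(0, 1) if j in binaries else (0, None) for j in range(len(c))]
    while True:
        res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        if not res.success:
            return None                     # LO problem infeasible: heuristic fails
        frac = [j for j in binaries if tol < res.x[j] < 1 - tol]
        if not frac:
            return res.x                    # F is empty: feasible mixed integer solution
        i = min(frac, key=lambda j: min(res.x[j], 1 - res.x[j]))
        v = 0 if res.x[i] < 0.5 else 1      # round the variable closest to integer
        bounds[i] = (v, v)                  # and fix it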
Two other MIP-based Heuristics
We discuss two other heuristics for mixed integer problems that are hard to solve. For simplicity we describe the heuristics for an IP of the following form:

$z = \max \{ c^{1T} x^1 + c^{2T} x^2 : A^1 x^1 + A^2 x^2 = b, \ x^1 \in \mathbb{Z}^{n_1}_+, \ x^2 \in \mathbb{Z}^{n_2}_+ \}.$

It is supposed that the variables $x^1_j$ for $j \in N^1$ are more important than the variables $x^2_j$ for $j \in N^2$, where $|N^i| = n_i$ for $i = 1, 2$.

The idea is to solve two (or more) easier LO problems or MIPs. The first one allows us to fix or limit the range of the more important $x^1$ variables, whereas the second allows us to choose good values for the $x^2$ variables.
Relax-and-Fix
$z = \max \{ c^{1T} x^1 + c^{2T} x^2 : A^1 x^1 + A^2 x^2 = b, \ x^1 \in \mathbb{Z}^{n_1}_+, \ x^2 \in \mathbb{Z}^{n_2}_+ \}.$

Relax: Solve the relaxation

$\bar{z} = \max \{ c^{1T} x^1 + c^{2T} x^2 : A^1 x^1 + A^2 x^2 = b, \ x^1 \in \mathbb{Z}^{n_1}_+, \ x^2 \in \mathbb{R}^{n_2}_+ \}.$

Let $(\bar{x}^1, \bar{x}^2)$ be a solution of this problem.

Fix: Fix the (important) $x^1$ variables to their values in $\bar{x}^1$ and solve the restriction

$\underline{z} = \max \{ c^{1T} x^1 + c^{2T} x^2 : A^2 x^2 = b - A^1 \bar{x}^1, \ x^1 = \bar{x}^1, \ x^2 \in \mathbb{Z}^{n_2}_+ \}.$

If this problem is infeasible, the heuristic fails. Otherwise, let $(\bar{x}^1, \tilde{x}^2)$ be a solution of this problem.

Heuristic: Use as heuristic solution $x^H = (\bar{x}^1, \tilde{x}^2)$, whose value satisfies $\underline{z} = c^T x^H \le z \le \bar{z}$.
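A sketch of relax-and-fix, assuming the PuLP modelling library; the dense lists c1, c2, A1, A2, b stand in for the matrices of the formulation above.

import pulp

def relax_and_fix(c1, c2, A1, A2, b):
    def solve(cat2, fixed=None):
        prob = pulp.LpProblem("relax_and_fix", pulp.LpMaximize)
        x1 = [pulp.LpVariable(f"x1_{j}", lowBound=0, cat="Integer")
              for j in range(len(c1))]
        x2 = [pulp.LpVariable(f"x2_{j}", lowBound=0, cat=cat2)
              for j in range(len(c2))]
        prob += pulp.lpSum(cj * xj for cj, xj in zip(c1 + c2, x1 + x2))
        for Ai1, Ai2, bi in zip(A1, A2, b):
            prob += pulp.lpSum(aj * xj for aj, xj in zip(Ai1 + Ai2, x1 + x2)) == bi
        if fixed is not None:
            for xj, vj in zip(x1, fixed):     # fix the important x1 variables
                prob += xj == vj
        prob.solve()
        return prob.status == pulp.LpStatusOptimal, x1, x2

    ok, x1, _ = solve("Continuous")           # Relax: x1 integer, x2 continuous
    if not ok:
        return None
    xbar = [round(xj.value()) for xj in x1]
    ok, _, x2 = solve("Integer", fixed=xbar)  # Fix: x1 = xbar, x2 integer
    if not ok:
        return None                           # the heuristic fails
    return xbar, [xj.value() for xj in x2]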
Cut-and-Fix
$z = \max \{ c^{1T} x^1 + c^{2T} x^2 : A^1 x^1 + A^2 x^2 = b, \ x^1 \in \mathbb{Z}^{n_1}_+, \ x^2 \in \mathbb{Z}^{n_2}_+ \}.$

We assume that a strong cutting plane algorithm is available, which generates strong cuts $C^1 x^1 + C^2 x^2 \le d$, so that after adding these cuts at least some of the integer variables take values close to integer or close to their optimal values.

Cut: Using a strong cutting plane algorithm, find a solution of the tightened linear relaxation

$\bar{z} = \max \{ c^{1T} x^1 + c^{2T} x^2 : A^1 x^1 + A^2 x^2 = b, \ C^1 x^1 + C^2 x^2 \le d, \ x^1 \in \mathbb{R}^{n_1}_+, \ x^2 \in \mathbb{R}^{n_2}_+ \}.$

Let $(\bar{x}^1, \bar{x}^2)$ be a solution of this problem. N.B. The cuts are generated in the course of the algorithm solving the linear relaxation of the given problem.

Fix or Bound: Choose $\varepsilon$. For $j \in N^1$, set $\ell_j = \lfloor \bar{x}^1_j + \varepsilon \rfloor$ and $u_j = \lceil \bar{x}^1_j - \varepsilon \rceil$. Solve the restriction

$\underline{z} = \max \{ c^{1T} x^1 + c^{2T} x^2 : A^1 x^1 + A^2 x^2 = b, \ C^1 x^1 + C^2 x^2 \le d, \ \ell \le x^1 \le u, \ x^1 \in \mathbb{Z}^{n_1}_+, \ x^2 \in \mathbb{Z}^{n_2}_+ \}.$

If this problem is infeasible, the heuristic fails: one may try again with $\varepsilon$ decreased. Otherwise, let $(\tilde{x}^1, \tilde{x}^2)$ be a solution of this problem.

Heuristic: Use as heuristic solution $x^H = (\tilde{x}^1, \tilde{x}^2)$, whose value satisfies $\underline{z} = c^T x^H \le z \le \bar{z}$.

Observe that if $\varepsilon$ is small and positive, $x^1$ variables taking values within $\varepsilon$ of an integer in the linear relaxation are fixed in the restricted problem, while the others are forced to take either the value $\lfloor \bar{x}^1_j \rfloor$ or the value $\lceil \bar{x}^1_j \rceil$. On the other hand, if $\varepsilon$ is negative, all the $x^1$ variables can still take two values in the restricted problem.
More Courses on Optimization
Code Name Docent
WI3 031 Niet-Lineaire Optimalisering C. Roos
WI4 051TU Introduction to OR H. van Maaren
WI4 060 Optimization and Engineering C. Roos
WI4 062TU Transportation, Routing and Scheduling Problems C. Roos
WI4 063TU Network Models and Algorithms J.B.M. Melissen
WI4 064 Discrete Optimization C. Roos
WI4 087TU Optimization, Models and Algorithms H. van Maaren
WI4 131 Discrete and Continuous Optimization G.J. Olsder/C. Roos
IN4 082 Local (Heuristic) Search Methods H. van Maaren
IN4 077 Computational Logic and Satisfiability H. van Maaren/C. Witteveen
IN4 081 Randomized algorithms H. van Maaren