Routing Optimization
Dominik R. Rabiej
Summary
This study creates and analyzes Greedy Random, the first successful problem-independent
algorithm for optimizing vehicle routing, the scheduling of multiple deliveries to various clients.
Existing vehicle routing optimization techniques are problem-specific. After programming
ten algorithms, an initial experiment revealed Greedy Random as the best-performing algorithm.
Further experiments analyzed Greedy Random's success. Greedy Random surpasses
current techniques in ease of applicability and in scope of use in other optimization problems
such as planning and layout.
Abstract
This study creates and analyzes a novel technique for optimizing Capacitated Vehicle Routing
with Time Windows (CVRTW). In this new approach, ten algorithms each independently
drove a generic CVRTW engine. After programming the ten different algorithms, an initial
experiment compared them against each other on a set of standardized benchmarks. The
best algorithm, Greedy Random (GR), performed significantly better than the other nine at
p = 0.025. Four more experiments elucidated the reasons for GR's success. Each experiment
tested a hypothesis by comparing programmed variants of GR on a set of standardized benchmarks.
1 Introduction
The goal of vehicle routing is to schedule multiple deliveries to various clients. Vehicle
routing has existed since the advent of the Industrial Age, when large-scale production and
supply became possible. As the complexity and scale of the manufacturing world increased,
the task of optimizing vehicle routing grew.
This study examined Capacitated Vehicle Routing with Time Windows (CVRTW), which
routes vehicles that each carry a specific capacity of product to different customers with
varying availabilities (time windows) and varying demanded amounts of product. By taking
into account capacity and time windows, CVRTW generates solutions that have real-life
applications [4].
CVRTW is an NP-hard problem, a member of "a complexity class of problems that are
intrinsically harder than those that can be solved by a non-deterministic Turing machine in
polynomial time" [2]. CVRTW is especially difficult since the number of possible solutions
grows exponentially with the size of the problem.
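The blow-up is easy to illustrate: even before sequencing stops within routes, merely assigning each customer to one of the vehicles multiplies the candidate count. The following back-of-the-envelope sketch is illustrative only and is not taken from the paper:

```python
# Illustrative lower bound on the search space: each of n customers is
# assigned to one of V vehicles, ignoring the ordering of stops entirely.
def assignment_count(n_customers: int, n_vehicles: int) -> int:
    return n_vehicles ** n_customers

small = assignment_count(10, 5)   # 5**10 = 9,765,625 assignments
large = assignment_count(20, 5)   # doubling n squares the count
print(small, large)
```

Doubling the number of customers squares the count rather than doubling it, which is why exhaustive search is hopeless at benchmark scale.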
Solomon's 1987 paper initiated research into CVRTW by establishing a standard set
of optimization benchmarks [11]. Using those benchmarks, numerous techniques have optimized
CVRTW, including genetic algorithms [12], tabu search [6], probabilistic searches [10],
constraint programming [5], exact algorithms [3], metaheuristics [7], multiple improvement
heuristics [9] and Human-Guided Simple Search [1].
In Human-Guided Simple Search (HuGSS), the most recent of these techniques, a human
user and a computer work together on optimizing a solution. HuGSS allows the human user
to have a broad overview of the solutions calculated by the computer (the CVRTW Engine).
From this vantage point, the human user can effectively drive the CVRTW Engine to optimize
the CVRTW solution [1].
This study creates a new perspective on the optimization process by separating the
CVRTW Engine and the optimization algorithm. It began by programming ten unique
algorithms in place of a human user. Each algorithm used a different technique to drive
the CVRTW Engine. After these ten algorithms ran against each other in the Algorithm
Comparison Run (Section 3.1), one algorithm emerged with the best performance. Next,
four experiments investigated its success by comparing variants of the algorithm on a set of
standardized benchmarks.
2 Experimental Components
2.1 CVRTW Solution
Figure 1 presents a visualization of a CVRTW solution [1]. The large central circle represents
the depot from where the vehicles depart. The smaller circles represent the various customers
and the connecting line segments represent the vehicle routes. The pie charts within the
smaller circles represent the availabilities (time windows) of the customers.

[Figure 1 key: central depot, customer, vehicle route, open and closed time-window segments.]
Each customer c has a time window [t_c^open, t_c^close] and a demanded amount of product,
p_c. V vehicles service these n customers. Each vehicle v carries amount k of product and
travels a total distance d_v: from the depot, to all of its q customers (the set {c_v1, ..., c_vq})
and back to the depot. Each solution must be feasible (all customers receive their shipment
within their time windows and no vehicle runs out of product). If these conditions are not
met, a solution is infeasible.
The cost of a solution is the number of vehicles, V. If V is equivalent for two solutions,
then the aggregate distance that the vehicles travel, Σ_v d_v, is used as a tie-breaker.

An algorithm set a customer to high priority to allow the CVRTW Engine to move that
customer off its current vehicle route onto a different vehicle route. Similarly, to prevent the
CVRTW Engine from moving a customer off its current route, an algorithm set the customer
to medium or low priority. Customer priorities also affected whether a route accepted new
customers. If an algorithm set any customers on a route to low priority, then the CVRTW
Engine did not move any customers onto that route. The CVRTW Engine only moved
customers onto routes consisting entirely of high- and medium-priority customers.
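The cost rule above is lexicographic: vehicle count decides first, and total distance breaks ties only between solutions with the same vehicle count. A minimal sketch (the function name is illustrative, not from the paper's code):

```python
# Lexicographic CVRTW cost: Python compares tuples element by element,
# so fewer vehicles always wins, and distance matters only on ties.
def cost(num_vehicles: int, total_distance: float) -> tuple:
    return (num_vehicles, total_distance)

# A 12-vehicle solution beats a 13-vehicle one regardless of distance...
assert cost(12, 1500.0) < cost(13, 1300.0)
# ...and distance decides only when the vehicle counts match.
assert cost(13, 1350.0) < cost(13, 1400.0)
```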
Priorities helped reduce the complexity of the search. In one case, focusing the search on
20 of 100 customers decreased the number of 1-ply moves by a factor of 30, 2-ply moves by
a factor of 222, and 3-ply moves by a factor of 18,432 [1].
Figure 2 summarizes GR's logic sequence. The figure does not illustrate the central depot
or customers' time windows. At first, GR sets all of the customers to high priority, so that
it will consider all possible cases. It then randomly selects one customer and moves that
customer from one vehicle route to another (Step 2). This is GR's initial random move.
Usually that customer relocation will make the solution infeasible (85.9% of the time), as in
Step 3. Sometimes the solution will remain feasible. Regardless, GR then sets the moved
customer to medium priority, so that the CVRTW Engine cannot move it again. Then,
GR invokes the CVRTW Engine to reoptimize the solution for a cycle (Step 4). If the new
solution is better than the original solution (Step 1), then the solution is used. If not, it is
discarded. If the CVRTW Engine cannot find a feasible solution within the cycle time limit,
the solution is discarded.
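The logic sequence above can be sketched as a loop around a generic engine. Everything below (the engine interface, the solution methods, the attribute names) is an assumed placeholder for illustration, not the paper's actual implementation:

```python
import copy
import random

def greedy_random_cycle(engine, solution):
    """One GR cycle: make a random relocation, pin the moved customer
    at medium priority so the engine cannot undo it, let the engine
    reoptimize, and keep the result only if it is better and feasible.
    The engine/solution interface here is an assumed placeholder."""
    candidate = copy.deepcopy(solution)
    for c in candidate.customers:
        c.priority = "high"                     # Step 1: consider every customer
    moved = random.choice(candidate.customers)
    candidate.relocate_to_random_route(moved)   # Step 2: the initial random move
    moved.priority = "medium"                   # Step 3: pin the moved customer
    improved = engine.optimize(candidate)       # Step 4: reoptimize for one cycle
    if improved is not None and improved.cost() < solution.cost():
        return improved                         # better solution: use it
    return solution                             # infeasible or worse: discard
```

The `None` return from the engine stands in for the "no feasible solution found within the cycle time limit" case.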
2.5 Algorithms
Aside from GR, this study used nine other programmed algorithms. Like GR, these
algorithms were all improvement algorithms working to optimize a solution from a starting
point.
The logic sequence of each algorithm may be summarized as follows:
High Priority (HI)
1. Set all customers to high priority.
2. Optimize using a greedy search.
Steepest Climbing (SC)
1. Set all customers to high priority.
2. Optimize using a steepest search for a cycle.
3. If the solution is better, use it. Otherwise, discard it.
4. Repeat.
Random Priorities (RP)
1. Randomly set all customers to either high or medium priority.
2. Optimize using a steepest search for a cycle.
3. If the solution is better, use it. Otherwise, discard it.
4. Repeat.
Random Circle Priorities (RCP)
1. Set all customers to medium priority.
2. Select a random customer and set it to high priority.
3. Set all customers within a given radius of that customer to high priority.
4. Optimize using a steepest search for a cycle.
5. If the solution is better, use it. Otherwise, discard it.
6. Repeat.
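RCP's geometric focusing step (items 1-3 above) can be sketched directly. The customer representation, coordinates, and radius value are illustrative assumptions, not the paper's data structures:

```python
import math
import random

def rcp_priorities(customers, radius):
    """Random Circle Priorities, steps 1-3: everyone starts at medium,
    then a random seed customer and all customers within `radius` of it
    become high priority. Customers are assumed to be dicts with an id
    and planar x/y coordinates."""
    priorities = {c["id"]: "medium" for c in customers}
    seed = random.choice(customers)
    for c in customers:
        # The seed itself is at distance 0, so it is always set to high.
        if math.hypot(c["x"] - seed["x"], c["y"] - seed["y"]) <= radius:
            priorities[c["id"]] = "high"
    return priorities
```

Only the high-priority circle is then searched, which is how RCP keeps each steepest-search cycle tractable.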
Random Routes (RR)
1. Set all customers to medium priority.
2. Select two different routes and set all of their customers to high priority.
3. Optimize using a steepest search for a cycle.
4. If the solution is better, use it. Otherwise, discard it.
5. Repeat.
Random Adjacent Routes (RAR)
1. Set all customers to medium priority.
2. Select a random customer and select another random customer within a given radius
that is on a different route than the first customer. If there are no customers on
different routes within the radius, select a new first customer.
3. Set the routes of the two selected customers to high priority.
4. Optimize using a steepest search for a cycle.
5. If the solution is better, use it. Otherwise, discard it.
6. Repeat.
Random Priorities Greedy Random (RPGR)
1. Randomly set all customers to high or medium priority.
2. Randomly reassign one customer from one vehicle route to another, different, vehicle
route.
3. Re-optimize the routes.
4. Set the customer moved to medium priority (so that it cannot be moved back by the
CVRTW Engine).
5. Optimize using a greedy search for a cycle.
6. If the solution is better, use it. Otherwise, discard it. If the solution is infeasible,
discard it.
7. Repeat.
Repetitive Steepest Search (RSS)
1. Set all customers to high priority.
2. Optimize using a steepest search for 1/6 of a cycle.
3. Set the priority of the moved customers to medium (so that they cannot be moved
again by the CVRTW Engine).
The CVRTW Engine initialized itself using a pre-computed solution with parameters. These
parameters specified information such as the algorithm, the cycle time limit and the number
of cycles to run.
The selected algorithm drove the CVRTW Engine for the specified number of cycles. In
each cycle, the algorithm performed its logic sequence (Sections 2.4 and 2.5) and invoked the
CVRTW Engine to optimize the solution. The CVRTW Engine optimized until it reached
the cycle time limit. Then, it returned the possibly optimized solution to the algorithm. The
algorithm evaluated whether to use the solution or discard it. This process repeated until
the CVRTW Engine reached the specified number of cycles.
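The cycle protocol just described amounts to a simple driver loop. The interfaces below are placeholders assumed for illustration, not the paper's actual code:

```python
def drive(engine, algorithm, solution, num_cycles):
    """Generic driver: each cycle the algorithm applies its logic
    sequence (Sections 2.4 and 2.5) to the incumbent, the engine
    optimizes until its cycle time limit, and the result is kept only
    if it improves on the incumbent. All interfaces are assumed
    placeholders."""
    best = solution
    for _ in range(num_cycles):
        candidate = algorithm.prepare(best)     # set priorities, make moves
        candidate = engine.optimize(candidate)  # runs until the cycle time limit
        if candidate is not None and candidate.cost() < best.cost():
            best = candidate                    # use the improved solution
        # otherwise discard it and start the next cycle from the incumbent
    return best
```

Separating `algorithm` from `engine` in this way is exactly what lets the ten algorithms of Section 2.5 share one CVRTW Engine.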
3 Experiments
3.1 Algorithm Comparison Run
The first experiment, the Algorithm Comparison Run (ACR), compared the ten algorithms
on the eight Solomon benchmarks (RC101-RC108). Each benchmark had three different
starting points: Rank 0, Rank 10, and Rank 20. The goal of the ACR was to determine
which algorithm optimized best.

In the ACR, each algorithm ran twice for 30 minutes on the three different ranks of
RC101-RC108. Each algorithm ran for 30 cycles of 60 seconds, except for HI, which ran for
one cycle of 1800 seconds (because it was a continual greedy search) and RSS, which ran for
15 cycles of 120 seconds each (because it did two separate searches in one cycle).
Rank Algorithm Vehicles Distance
1 GR 12.81 1380
2 ANY 13.13 1398
3 RSS 13.33 1393
4 SC 13.38 1398
5 RPGR 13.44 1447
6 RR 13.50 1456
7 RP 13.54 1405
8 RAR 13.65 1415
9 RCP 13.99 1457
10 HI 14.65 1572
Table 1: The overall averaged algorithm rankings and results of the Algorithm Comparison
Run.
Table 1 illustrates the results of the ACR, averaged across benchmarks and ranks. GR
produced a lower average number of vehicles than the other algorithms did. Statistical
analysis of the ACR shows that GR performed significantly better than the other algorithms
at p = 0.025.
The four experiments after the ACR focused on understanding and analyzing GR. Each
experiment tested a hypothesis by comparing programmed variants of GR on a set of
standardized benchmarks.
In Table 2 and for all subsequent tables, the notation signifies vehicles:distance. For
example, 15:1652 means the optimized solution consisted of 15 vehicles traveling an aggregate
distance of 1652 units. Bold type denotes the lowest-cost solution, not necessarily the
statistically significantly best solution. Statistical analysis of the data in Table 2 showed
that there was no significant difference between Infeasible GR and Feasible GR using a 95%
confidence interval. This suggested that feasibility was not the primary reason for GR's
success.
Statistical analysis of the results in Table 3 showed that GR's optimization performance
increased with the priority of the moved customer. Because High Priority GR performed
significantly better than Medium Priority GR in distance at p = 0.025, the 90-cycle VPR
ran to test whether High Priority GR's dominance existed merely because it considered more
possibilities and thus had a higher chance of improvement. If this were so, then its dominance
would dissipate with a longer total search time because Medium Priority GR would then be
able to consider more possibilities as well.
The 90-cycle VPR ran twice on rank 10 of Solomon's RC101-RC108 benchmarks.
Statistical analysis of the results of the 90-cycle VPR (Table 4) showed that there was no
significant difference in either vehicles or distance between High Priority GR and Medium
Priority GR, but that both were significantly better in distance than Low Priority GR at
p = 0.025. This confirmed the earlier hypothesis that GR derived success from not moving
the customer back immediately. High Priority GR performed better in the 30-cycle VPR
because it did not waste a cycle searching when the initial random move was fruitless. Unlike
Medium Priority GR, it undid the move and then reoptimized. When the initial random
move enabled the solution to be improved, High Priority GR optimized like Medium Priority
GR. In the 90-cycle VPR, Medium Priority GR had more time and thus fruitless searches did
not impact its effectiveness as much. Low Priority GR performed worse than both because it
restricted its search space. The pair of VPRs established that the modification of the priority
of the moved customer only served to control the number of possible solutions considered.
Otherwise, it did not form a core factor of GR's performance.
The Multiple Initial Random Moves Run (MIRMR) tested whether multiple initial random
moves would perform better than only one initial random move. The MIRMR ran because
neither the priority of the moved customer nor the feasibility or infeasibility of the initial
random move had been found to be factors.
The MIRMR tested 7 variants of GR that made 1, 2, 5, 7, 10, 25 and 50 initial random
moves. The 10, 25 and 50 move variants all had exactly the same performance as the 7 move
variant.
Benchmark 1 Move 2 Moves 5 Moves 7 Moves
RC101 15:1655.93 15:1663.59 15:1718.99 15:1718.99
RC102 13.5:1502.3 14:1516.82 14:1552.24 14:1554.07
RC103 11:1364.89 11:1379.72 11:1411.98 11:1411.98
RC104 10:1200.53 10:1196.61 10:1200.53 10:1200.53
RC105 14:1563.97 14:1570.9 14:1647.17 14:1647.17
RC106 12:1437.89 12:1437.89 12:1437.89 12:1437.89
RC107 11:1274.74 11:1296.28 11:1306.35 11:1306.35
RC108 11:1165.93 11:1187.78 11:1217.63 11:1217.63
Scores 12.1875:1395.77 12.25:1406.2 12.25:1436.6 12.25:1436.83
Table 5: The averaged results of the Multiple Initial Random Moves Run.
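The Scores row of Table 5 is simply the per-column mean over the eight benchmarks. As a check, the 1-move column can be reproduced from the rows above:

```python
# Reproduce the "Scores" entry for the 1-move column of Table 5 by
# averaging the vehicle counts and distances over the eight benchmarks.
one_move = {
    "RC101": (15, 1655.93), "RC102": (13.5, 1502.30),
    "RC103": (11, 1364.89), "RC104": (10, 1200.53),
    "RC105": (14, 1563.97), "RC106": (12, 1437.89),
    "RC107": (11, 1274.74), "RC108": (11, 1165.93),
}
avg_vehicles = sum(v for v, _ in one_move.values()) / len(one_move)
avg_distance = sum(d for _, d in one_move.values()) / len(one_move)
# Matches the reported score of 12.1875:1395.77.
assert avg_vehicles == 12.1875
assert round(avg_distance, 2) == 1395.77
```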
Statistical analysis of the results of the MIRMR (Table 5) indicated that there was no
significant difference between the 1 and 2 move variants, but that both were significantly
better than the other move variants at p = 0.025. This indicated a strong correlation: the
fewer initial random moves, the better the performance. However, this did not mean that
no initial random moves needed to be made. The HI algorithm in the ACR tested that
possibility; it performed significantly worse than the other nine algorithms at p = 0.025.
In the Steepest GR Run (SGRR), a variant of GR using a steepest search ran with a
constant mini-cycle time of two seconds. In between cycles, the solution was evaluated as
it had been in normal GR. The SGRR compared three different variants of Steepest GR:
one with 30 cycles of 60 seconds each, one with 60 cycles of 30 seconds each and one with
90 cycles of 20 seconds each.
Benchmark 30 Cycles 60 Cycles 90 Cycles
RC101 15:1653.84 15:1663.25 15:1667.31
RC102 14:1501.32 13.5:1524.9 13:1566.93
RC103 11:1349.15 11:1375.31 11:1363.63
RC104 10:1200.53 10:1194.98 10:1193.01
RC105 14:1550.01 14:1555.7 14:1566.72
RC106 12:1437.03 12:1429.18 12:1433.81
RC107 11:1291.86 11:1278.44 11:1263.91
RC108 11:1184.62 11:1166.06 11:1178.18
Scores 12.25:1396.04 12.1875:1398.48 12.125:1404.19
Table 6: The averaged results of the Steepest GR Run.
Statistical analysis of the results in Table 6 showed that there was no significant difference
between the three variants using a 95% confidence interval. The number of vehicles
diminished as more cycles were run. Also, Steepest GR did not perform significantly
differently from GR even though it used a steepest search. This confirmed the hypothesis
that GR derived its success from its initial random move.
than HuGSS using a 95% confidence interval. GR performed well on RC101 and RC107
because the customers in those solutions had narrow time windows. A smaller time window
allowed the CVRTW Engine to conduct a deeper search, finding more improvements.
Table 7: A comparison of GR's best results, HuGSS's best results and the best results ever
found. [1]
4 Discussion
4.1 The Role of Infeasible Space
Initial experimental results suggested that GR derived its success from its use of infeasible
space. GR's initial random move made the solution infeasible 85.9% of the time because the
moved customer was distant from all other customers on its new route. The vehicle servicing
that route could not travel to all of its customers within their time windows, resulting in
infeasibility. Still, when GR started from an infeasible initial random move, it found a new
feasible solution 88.1% of the time. Only 17.4% of those were improvements over the solution
prior to the initial random move. GR found 73.1% of its improvements by passing through
infeasible space. No other algorithm used infeasible space as extensively as GR.
GR also reoptimized the solution after its catalytic initial random move. It did this using
the CVRTW Engine's greedy moves, or in the case of Steepest GR, the mini-cycle moves.
In the Algorithm Comparison Run (Section 3.1), GR had an average of 36.3 greedy moves
per cycle. In cases where GR made an improvement, the average was 47.3 greedy moves per
cycle. Compared to when it did not nd an improvement (33.8 greedy moves per cycle), GR
made 39.8% more moves when it made an improvement. GR made significantly more moves
at p = 0.025.
GR made one catalytic initial random move that completely shifted its search space,
enabling it to find improvements by either skirting along the edges of infeasible space or by
moving unexpectedly within feasible space.
It was precisely the non-drastic element of GR's catalytic initial random move that en-
abled it to optimize well. This catalytic initial random move was neither too drastic nor
too passive. Because it was appropriately moderate, GR could escape a non-optimal local
minimum and approach the optimal solution.
5 Conclusion
This study has created and analyzed Greedy Random (GR), a novel algorithm for vehicle
routing optimization. It provides evidence that GR derives its success from a single catalytic
initial random move that allows it to escape from a non-optimal local minimum and approach
the optimal solution. GR provides a high level of portability because it is a successful
algorithm separate from an optimization engine. Thus, industry can easily apply GR to
other areas of optimization such as manufacturing planning and chip layout.
6 Acknowledgements
My thanks to my mentor, Dr. Neal Lesh at the Mitsubishi Electric Research Laboratory
(MERL) for his insight and inspiration. I also thank everyone at MERL, in particular Dr.
Brian Mirtich and Mr. Erik Piip. I deeply appreciate the assistance of the Research Science
Institute alumni, especially Doug Heimburger, Justin Bernold and Boris Zbarsky. I also
am grateful to Dr. Daniel Milhako of Western Michigan University for his assistance with
statistical analysis. Finally, I am immensely grateful to my parents for their encouragement
and steadfast support.
References
[1] D. Anderson, E. Anderson, N. Lesh, J. Marks, B. Mirtich and D. Ratajczak, "Human-Guided Simple Search." In 17th Nat. Conf. on Artificial Intelligence: July 2000, pp. 209-21, 2000.
[2] M. Atallah, Ed., Algorithms and the Theory of Computation Handbook. CRC Press, Boca Raton, FL, pp. 19-26, 1999.
[3] E. Baker, "An Exact Algorithm for the Time-Constrained Traveling Salesman Problem." Operations Research vol. 31, no. 5, Sept-Oct., pp. 938-945, 1983.
[4] J. Braklow, W. Graham, S. Hassler, K. Peck and W. Powell, "Interactive Optimization Improves Service and Performance for Yellow Freight System." INTERFACES vol. 22, no. 1, Jan-Feb., pp. 147-172, 1992.
[5] B. De Backer, V. Furnon, P. Kilby, P. Prosser and P. Shaw, "Solving Vehicle Routing Problems using Constraint Programming and Metaheuristics." Journal of Heuristics Special Issue on Constraint Programming, July 1997.
[6] B. Garcia, J. Potvin and J. M. Rousseau, "A parallel implementation of the tabu search heuristic for vehicle routing problems with time window constraints." Computers & Operations Research vol. 21, no. 9, pp. 1025-1033, 1994.
[7] J. Homberger and H. Gehring, "Two Evolutionary Metaheuristics for the Vehicle Routing Problem with Time Windows." INFOR vol. 37, no. 3, Aug., pp. 297-317, 1999.
[8] D. Montgomery, Design and Analysis of Experiments. New York, John Wiley & Sons, 1984.
[9] P. Prosser and P. Shaw, "Study of Greedy Search with Multiple Improvement Heuristics for Vehicle Routing Problems." University of Strathclyde Department of Computer Science, Glasgow, Scotland. Research Report 96/201, Dec. 1996.
[10] Y. Rochat and E. Taillard, "Probabilistic Diversification and Intensification in Local Search for Vehicle Routing." Journal of Heuristics vol. 1, pp. 147-167, 1995.
[11] M. Solomon, "Algorithms for the Vehicle Routing and Scheduling Problems with Time Window Constraints." Operations Research vol. 35, no. 2, March-April, pp. 254-264, 1987.
[12] S. Thangiah, "Vehicle routing with time windows using genetic algorithms." Artificial Intelligence and Robotics Laboratory, Computer Science Department, Slippery Rock University, Slippery Rock, PA. Technical Report, 1993.