
Yun-Chian Cheng, David J. Houck, Jun-Min Liu, Marc S. Meketon, Lev Slutsman, Robert J. Vanderbei, and Pyng Wang

This paper describes the AT&T KORBX system, which implements certain variants of the Karmarkar algorithm on a computer that has multiple processors, each capable of performing vector arithmetic. We provide an overview of the KORBX system that covers its optimization algorithms, the hardware architecture, and software implementation techniques. Performance is characterized for a variety of applications found in industry, government, and academia.

Yun-Chian Cheng, David J. Houck, Jun-Min Liu, Marc S. Meketon, Lev Slutsman, and Pyng Wang are in the Advanced Decision Support Systems Department at AT&T Bell Laboratories in Holmdel, New Jersey. Robert J. Vanderbei is a member of technical staff in the Mathematics of Networks and Systems Department at AT&T Bell Laboratories in Murray Hill, New Jersey. Mr. Cheng joined the company in 1985 and has a Ph.D. in computer science from the University of Wisconsin at Madison. Mr. Houck joined the company in 1979 and has both a B.A. in mathematics and a Ph.D. in operations research from The Johns Hopkins University. Mr. Meketon joined the company in 1982 and has a B.S. in mathematics from Villanova University and both an M.S. and Ph.D. in operations research from Cornell University.

Introduction

This paper describes the AT&T KORBX system, which comprises several variants of the Karmarkar linear programming algorithm implemented on a parallel/vector machine called the KORBX system processor. Within AT&T, the KORBX system is being used to solve difficult planning problems, such as the Pacific Basin network-facilities-planning problem. The KORBX system is also being used by commercial and government customers for many types of applications. For example, it is being used by the U.S. Air Force Military Airlift Command (MAC) to solve critical logistics problems and by commercial airlines to solve scheduling problems, such as crew planning. (Panel 1 defines terms and acronyms used in this paper.)

Before getting into details, we should address the question: What is a linear program? It is an optimization problem in which one wants to minimize or maximize a linear function (of a large number of variables) subject to a large number of linear equality and inequality constraints. For example, linear programming may be used in manufacturing-production planning to develop a production schedule that meets demand and workstation capacity constraints, while minimizing production costs.

Since 1947, when the conventional simplex method was proposed by George Dantzig,¹ linear programming has been effectively employed to solve decision-making problems in nearly all sectors of industry. Linear programming is also commonly used in engineering and scientific analyses. The KORBX system implements new technology and is being used today to solve problems that either were too large or ran too slowly to be solved by earlier linear programming methods.

The paper is organized as follows. In the next two sections, we describe the linear programming problem and the linear programming algorithms that are available in the system. Next, the KORBX system is described. Finally, we try to characterize the performance of the system by giving results on a variety of actual and randomly generated problems from academia, industry, and government.

Panel 1. Acronyms and Terms

- CE: computational element
- IEEE: Institute of Electrical and Electronics Engineers
- IP: interactive processor
- KLP: the linear programming solver (a set of software modules)
- kmps: routine that converts the input data
- krep: routine that generates reports
- MAC: U.S. Air Force Military Airlift Command
- MCNF: multicommodity network flow
- MFLOPS: million floating-point operations per second
- MINOS 5.1: Fortran implementation of the simplex method from Stanford University's Systems Optimization Laboratory
- MPS: Mathematical Programming System
- MPS III: Ketron Management Science Inc. implementation of the simplex method
- MPSX: IBM Corporation implementation of the simplex method
- NETLIB: collection of problems distributed electronically
- opt: routine that solves the preprocessed problem
- postp: routine that undoes the preprocessor's transformations
- prep: routine that preprocesses the linear program

Linear Programming

Mathematically, there are many ways to represent a linear program. But for formulating models the following form is convenient:

    Optimize    cᵀx
    subject to  b − r ≤ Ax ≤ b
                l ≤ x ≤ u                                  (1)

The matrix A is called the constraint matrix, the vector c is the objective vector (a column vector), cᵀ is the transpose of c (a row vector), b is the right-hand side, l is the vector of lower bounds, u is the vector of upper bounds, and r is the range of the constraints. The term "optimize" stands for either "minimize" or "maximize." The function cᵀx is called the objective function. Commonly, m denotes the number of rows of A (i.e., constraints) and n the number of columns (i.e., variables). Sometimes, elements in the lower- or upper-bound vectors are negative or positive infinity, respectively.

While system (1) is convenient for modeling, it is mathematically preferable to formulate the problem as follows:

    Minimize    cᵀx
    subject to  Ax = b
                0 ≤ x ≤ u                                  (2)

Problems formulated as in system (1) are easily converted into form (2). We call system (2) the primal linear program. The constraint matrix, objective vector, right-hand-side vector, and upper-bounds vector, when written in this form, are generally different from the original ones. To simplify notation, let us assume that all the upper bounds are infinite. Note, however, that the KORBX system does indeed handle problems with finite upper bounds.

Figure 1. Polyhedron. The walls of the polyhedron are formed by the intersection of the plane {x | Ax = b} and the nonnegativity constraints xᵢ ≥ 0. The optimal point is x* and the negative of the projected cost vector is −c. The lines perpendicular to the projected cost vector are the level sets; all points on a level set have the same cost.

The constraints Ax = b form an (n − m)-dimensional hyperplane in an n-dimensional space. Each nonnegativity constraint xᵢ ≥ 0 cuts off a halfspace from the hyperplane, resulting in a multidimensional polyhedron (see Figure 1) that is called the feasible region. Associated with each point in the feasible region is a cost cᵀx. The objective is to find a feasible point that minimizes this cost; such a point is called an optimal solution. If the problem has a single optimal solution, it must be at a vertex of the polyhedron. However, most real-world problems have multiple optima. The set of multiple optima always consists of some face of the polyhedron. In either case, there are optimal solutions that are at vertices.

The simplex method starts at a vertex of the polyhedron and, by jumping to an adjacent vertex, reduces the objective function value in each iteration. In contrast, the Karmarkar algorithm² and its variants start at an interior point and then jump in a direction of descent from one interior point to another, eventually converging to an optimal solution.

Corresponding to any linear program is another linear program, called its dual. The dual linear program that corresponds to program (2), assuming the upper bounds are infinite, is:

    Maximize    bᵀw
    subject to  Aᵀw ≤ c                                    (3)

The dual linear program is important for several reasons.

AT&T Technical Journal • May/June 1989
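The primal-dual pair (2)-(3) is easy to explore numerically. The sketch below is not from the original paper (KORBX predates it); it uses the modern SciPy library on an invented three-variable problem, solving the primal in form (2) and the dual in form (3) and computing the duality gap:

```python
import numpy as np
from scipy.optimize import linprog

# Primal (form (2)): minimize c^T x  subject to  Ax = b, x >= 0
c = np.array([1.0, 2.0, 3.0])
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 1.0])
primal = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 3)

# Dual (form (3)): maximize b^T w  subject to  A^T w <= c, w free
# (linprog minimizes, so we negate the objective)
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)] * 2)

gap = primal.fun - (-dual.fun)   # duality gap c^T x* - b^T w*
print(primal.fun, -dual.fun, gap)  # the gap vanishes at optimality
```

For this data the optimal primal solution is x = (0, 1, 0) with cost 2, the dual optimum is also 2, and the gap is zero, as the duality theorem discussed next requires.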

For example, it is used to check for optimality. Indeed, the duality theorem of linear programming says:

THEOREM: If x′ is primal feasible (i.e., Ax′ = b, x′ ≥ 0) and w′ is dual feasible (i.e., Aᵀw′ ≤ c), then the duality gap cᵀx′ − bᵀw′ is nonnegative. Furthermore, the duality gap is zero if and only if x′ is optimal for the primal and w′ is optimal for the dual.

The optimizer in the KORBX system implements several variants of the Karmarkar algorithm. Each variant is an iterative algorithm. In every iteration, regardless of the method selected, a primal solution vector x and a dual solution vector w are calculated. If (within a tolerance) x is primal feasible, w is dual feasible, and the duality gap is zero, then the system terminates and the current solution is declared optimal.

Algorithms

The variants of the Karmarkar algorithm implemented in the KORBX system can be divided into three basic categories:
- Primal-affine.³⁻⁶ The primal-affine algorithm first solves an augmented linear program in order to find a primal feasible solution. Then, it generates a sequence of primal feasible solutions with decreasing objective value, until eventually the corresponding dual solution is feasible and the duality gap is zero.
- Dual-affine.⁷,⁸ The dual-affine algorithm works the other way around. It starts by finding a dual feasible solution and iterates, while maintaining dual feasibility, until eventually the corresponding primal solution is feasible and the duality gap is zero.
- Primal-dual.⁹,¹⁰ The primal-dual algorithm first achieves and then maintains both primal and dual feasibility, while all along reducing the duality gap until it is zero.

The affine algorithms (both primal and dual) are often called affine-scaling algorithms.

There are several algorithmic options that may be used with the three basic methods outlined above. We mention here the two most important examples: power-series enhancement and type of equation solver.

Power-Series Enhancement. The idea in power-series enhancement is as follows. For all three methods, at each iteration, the current solution is used to compute a direction, and a large step is then taken along this computed direction. If these long steps are replaced by shorter and shorter steps, then, in the limit, we arrive at a smooth curve that traces a path from any given feasible starting point to an optimal solution. This smooth curve is called a continuous trajectory. The step directions in the basic methods are simply first derivatives of a parametrized representation of the curve (i.e., the tangent direction).

By computing higher-order derivatives to generate a truncated power-series expansion, it is possible to follow the continuous trajectories more closely than by using only the first derivative (see Figure 2). Doing this produces an algorithm that requires fewer iterations at the expense of more work per iteration. Furthermore, it yields a more robust algorithm. Currently, the power-series enhancement is implemented only for the dual-affine and primal-dual methods. Details of the power-series enhancement for these two methods are given in Reference 11.

Equation Solver. The second important option regards the method for solving systems of equations of the form:

    AD²Aᵀy = d                                             (4)

where d is a known vector and D is a known diagonal matrix (i.e., a square matrix with zeros off the diagonal). This is the most computationally intensive part of any variant of the Karmarkar algorithm. By default, all the methods use Cholesky factorization (see, for example, Reference 12). The idea here is to first factor AD²Aᵀ into the product of a lower triangular matrix L (called the Cholesky factor) and its transpose Lᵀ:

    AD²Aᵀ = L Lᵀ
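To make the affine-scaling idea concrete, here is a bare-bones primal-affine iteration for minimizing cᵀx subject to Ax = b, x ≥ 0. This is a minimal sketch, not the KORBX implementation, and the tiny test problem is invented for illustration:

```python
import numpy as np

def primal_affine_scaling(A, b, c, x, iters=20, step=0.5):
    """Bare-bones primal affine-scaling: from an interior feasible x > 0,
    rescale by the current iterate, step along the projected descent
    direction, and damp the step to stay strictly inside x > 0."""
    for _ in range(iters):
        D2 = np.diag(x * x)                            # D = diag(x)
        w = np.linalg.solve(A @ D2 @ A.T, A @ D2 @ c)  # dual estimate (a system of form (4))
        dx = -D2 @ (c - A.T @ w)                       # descent direction; A @ dx = 0
        blocking = dx < 0
        if blocking.any():                             # shrink step to keep x > 0
            dx = dx * (step * np.min(-x[blocking] / dx[blocking]))
        x = x + dx
    return x

# Tiny example: minimize x1 + 2*x2 + 3*x3  s.t.  x1 + x2 = 1, x2 + x3 = 1
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 1.0])
c = np.array([1.0, 2.0, 3.0])
x = primal_affine_scaling(A, b, c, x=np.array([0.5, 0.5, 0.5]))
print(x, c @ x)   # approaches (0, 1, 0) with cost 2
```

Each iterate remains strictly interior and feasible (the direction lies in the null space of A), and the objective decreases toward the optimal vertex, which the iterates approach but never touch.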

To solve system (4), one exploits the fact that L is lower triangular: first solve for z in

    Lz = d

by a simple process called forward substitution. Then y is the solution to

    Lᵀy = z

obtained by a similar process called backward substitution. This method for solving systems of equations is well understood (it is essentially the same as Gaussian elimination) and yields robust codes. Reference 13 gives some details of how Cholesky factorization and forward and backward substitution are implemented in the KORBX system.

The amount of work to compute L depends greatly on the nonzero structure of AD²Aᵀ. Typically, constraint matrices have only a few nonzeros in each column. The nonzero structure of AD²Aᵀ, which is the same for any diagonal matrix D, usually is also sparse. However, the Cholesky factor L turns out to be denser than the original AD²Aᵀ. The extra nonzeros in L are referred to as fill-in. By carefully ordering the rows of A, it is possible to prevent substantial fill-in, thereby approximately minimizing both memory requirements and computational effort (see Figure 3). The most common reordering heuristic (and the one used in the KORBX system) is called minimum-degree ordering (see, for example, Reference 14).

Another method for solving system (4) is the conjugate-gradient method (see, for example, Reference 12). This is an iterative method that, at each iteration, generates an improved estimate of y. If AD²Aᵀ is approximately the identity matrix, then this method converges quickly. The preconditioned conjugate-gradient method uses an approximate Cholesky factorization to transform AD²Aᵀ into such an approximate identity matrix.

Because there are many implementation details (including the choice of preconditioner), developing a robust and quick algorithm is difficult. Some details of our implementation, including new data structures, have been described in Reference 15. Currently, the preconditioned conjugate-gradient option is available only with the dual-affine method and precludes using the power-series enhancement. The advantage of the preconditioned conjugate-gradient option is that, when properly tuned, it uses less memory than Cholesky factorization. Hence, extremely large problems can be solved.

Figure 2. Power-series trajectories. Starting from a solution near the center of the polytope, the green line is the order-1 trajectory, the blue line is the order-2 trajectory, the red line is the order-3 trajectory, and the gold line is the continuous trajectory. The optimal point is labeled x*. Note that the higher-order trajectories track the continuous trajectory better than the lower-order power-series trajectories.
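The factor-then-substitute recipe can be sketched in a few lines. This is a modern NumPy/SciPy illustration with small random data, not the KORBX code, which operates on large sparse matrices with assembly-tuned routines:

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))              # constraint matrix, m=3, n=5
D = np.diag(rng.uniform(0.5, 2.0, size=5))   # positive diagonal scaling
d = rng.standard_normal(3)

M = A @ D @ D @ A.T                  # AD^2A^T, symmetric positive definite
L = cholesky(M, lower=True)          # factor: M = L L^T

z = solve_triangular(L, d, lower=True)       # forward substitution:  L z = d
y = solve_triangular(L.T, z, lower=False)    # backward substitution: L^T y = z

print(np.allclose(M @ y, d))         # y solves system (4)
```

Because L is triangular, each substitution is a single sweep through the rows, which is why the cost of an iteration is dominated by computing the factor L itself.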


The KORBX System

The KORBX system includes the linear programming solver (a set of software modules collectively referred to as KLP) and the KORBX system processor (a parallel/vector computer on which the solver runs).

Linear Programming Solver. KLP consists of several modules:
- kmps converts the input data.
- prep preprocesses the linear program.
- opt solves the preprocessed problem.
- postp undoes the transformations of the preprocessor.
- krep generates reports.

Linear programs are usually formulated in the industry-standard Mathematical Programming System (MPS) format. Mathematically, this format describes linear programs as presented in equation (1). A utility called kmps reads an MPS-format file and creates files that store this information in a sparse, binary format. This collection of data files is called a KLP problem database. Generating the KLP problem database directly typically saves time in problem generation because kmps is bypassed.

A preprocessor, prep, reads a KLP problem database and converts the problem into form (2). Prep also uses quick heuristics to reduce the size of the problem so that the optimization will be faster. For example, if prep discovers that a variable has the same lower bound and upper bound, it will eliminate that variable from the problem until postp reinserts it. It is possible that prep may determine the problem is infeasible or unbounded and may even find an optimal solution.

The optimizer, opt, reads in the problem that prep produced, rearranges the rows of A to reduce fill-in, and solves the linear program using one of the methods described in the previous section. Although any method may be selected, the default is dual power-series order 5, where order 5 refers to the number of terms used in the power-series expansion.

Following opt, there is a postprocessor, postp, that undoes the transformations of prep. The final primal and dual solutions are stored in sparse, binary files. Finally, KLP has utilities for generating reports based on these solution files.

KORBX System Processor. The KORBX system hardware consists of eight parallel processors, called computational elements (CEs), that are used for numerically intensive calculation.
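The fixed-variable reduction that prep performs can be sketched as follows. This is a simplified illustration with invented data; the real preprocessor applies many more heuristics and works on sparse binary files:

```python
import numpy as np

def eliminate_fixed(A, b, c, lo, hi):
    """Remove variables whose lower and upper bounds coincide: substitute
    x_i = lo_i, fold the column into the right-hand side, and record the
    constant objective contribution so a postprocessor can restore x."""
    fixed = lo == hi
    keep = ~fixed
    b_new = b - A[:, fixed] @ lo[fixed]     # move fixed columns to the rhs
    shift = float(c[fixed] @ lo[fixed])     # constant cost of fixed variables
    return A[:, keep], b_new, c[keep], lo[keep], hi[keep], shift

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0]])
b = np.array([6.0, 3.0])
c = np.array([1.0, 4.0, 1.0])
lo = np.array([0.0, 2.0, 0.0])
hi = np.array([5.0, 2.0, 9.0])              # the second variable is fixed at 2

A2, b2, c2, lo2, hi2, shift = eliminate_fixed(A, b, c, lo, hi)
print(A2.shape, b2, shift)                  # one fewer column; rhs and cost adjusted
```

The reduced problem has the same optimal value once the recorded constant is added back, which is exactly the bookkeeping postp must undo.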

Figure 3. Reordering the matrix for less fill-in. (a) A matrix: shows the nonzero structure of the A matrix. (b) AD²Aᵀ matrix: shows the nonzero structure below the diagonal of the AD²Aᵀ matrix. (c) Cholesky factor L with original ordering: shows the nonzero structure below the diagonal of the Cholesky factor. After the matrix has been reordered using the minimum-degree heuristic, there are fewer nonzeros in the Cholesky factor. (d) Cholesky factor after row reordering: shows the nonzeros below the diagonal, reordered by the minimum-degree heuristic.

Six microprocessor-based interactive processors (IPs) are used for other types of work (for example, editing a file). The standard configuration includes 256 megabytes (Mbytes) of main memory, 512 kilobytes (kbytes) of high-speed cache memory, four 1.1-gigabyte (Gbyte) disk drives, a tape drive, a printer, and communications hardware. Each CE can process instructions separately and has full access to memory. Also, there are hardware-level memory and process locks. Efficient use of the eight parallel processors can lead to an eightfold speedup over single-processor times. Each CE is a vector processor. In fact, each CE includes eight vector registers, and each register can handle 32 double-precision numbers at one time. Efficient use of the vector hardware can lead to a fourfold speedup over scalar processing. However, on some sparse matrix computations, the vectorizing benefits are generally modest.

In scalar mode, each processor operates at about 1.0 million floating-point operations per second (MFLOPS) for double-precision arithmetic written in Fortran. (The floating-point hardware follows the IEEE standard.) If a Fortran routine reaps full benefit from vectorization and parallelization, then one should expect to see about 32 MFLOPS. By carefully managing main memory and cache memory, it is possible to do even better. For example, we have a dense matrix-multiplication routine, written in assembly language and optimized to the greatest extent possible, that achieves 53 MFLOPS. For linear programs, a reasonable rule of thumb is to expect about 37 MFLOPS on dense problems and about 2 MFLOPS on sparse ones.

KORBX System Software. The KORBX operating system, an extension of the UNIX operating system, has been specially designed to accommodate parallel processing. It supports virtual memory and, hence, handles processes that require up to about 1 Gbyte of memory. Included with the system are Fortran and C language compilers. The Fortran compiler has a loop analyzer that tries to vectorize and parallelize a user's program automatically. In addition, there are compiler directives, written as comment statements, that allow the programmer to control the loop analyzer. But for complicated subroutines such as some of those in opt, assembly language programming may be needed to achieve the greatest possible parallelization and vectorization. For C programmers, a library provides many routines for generating parallel algorithms.

Table I. Test problems and comparisons between MINOS and KLP

Problem name | Rows | Columns | Nonzeros in constraint matrix | Nonzeros in Cholesky factor | MINOS time, 1 processor | KORBX system time, 1 processor | KORBX system time, 8 processors
pilot4 | 410 | 1,000 | 5,141 | 15,190 | 147.7 | 63.8 | 22.9
scfxm2 | 660 | 914 | 5,183 | 9,932 | 81.5 | 31.2 | 12.9
grow15 | 300 | 645 | 5,620 | 6,321 | 42.4 | 23.6 | 11.2
fffff800 | 524 | 854 | 6,227 | 18,503 | 82.4 | 80.0 | 27.6
ship04l | 402 | 2,118 | 6,332 | 4,296 | 25.9 | 29.9 | 16.7
sctap2 | 1,090 | 1,880 | 6,714 | 12,348 | 102.3 | 71.0 | 44.5
ganges | 1,309 | 1,681 | 6,912 | 29,188 | 109.7 | 56.9 | 22.3
ship08s | 778 | 2,387 | 7,114 | 4,514 | 37.8 | 22.5 | 11.9
sierra | 1,227 | 2,036 | 7,302 | 13,030 | 223.0 | 64.7 | 34.7
scfxm3 | 990 | 1,371 | 7,777 | 14,631 | 188.8 | 58.0 | 23.3
ship12s | 1,151 | 2,763 | 8,178 | 5,529 | 89.4 | 26.7 | 13.3
grow22 | 440 | 946 | 8,252 | 9,261 | 99.0 | 34.2 | 15.5
ken7§ | 2,426 | 3,602 | 8,404 | 8,790 | 1,173.0 | 59.8 | 33.1
scsd8 | 397 | 2,750 | 8,584 | 5,588 | 429.1 | 22.1 | 10.1
sctap3 | 1,480 | 2,480 | 8,874 | 16,956 | 165.3 | 102.1 | 68.7
pilot.we | 722 | 2,789 | 9,126 | 18,303 | 1,029.6 | 109.3 | 45.2
doat | 2,617 | 4,640 | 9,427 | 8,546 | 808.8 | 75.8 | 52.2
25fv47 | 821 | 1,571 | 10,400 | 35,456 | 1,545.4 | 106.7 | 34.1
czprob | 929 | 3,523 | 10,669 | 6,779 | 327.0 | 95.3 | 56.4
ship08l | 778 | 4,283 | 12,802 | 6,484 | 103.4 | 51.3 | 28.3
pilotnov | 975 | 2,172 | 13,057 | 59,999 | 455.3 | 133.4 | 47.3
nesm | 662 | 2,923 | 13,288 | 28,524 | 478.6 | 174.4 | 59.9
pilot.ja | 940 | 1,988 | 14,698 | 52,793 | 1,310.3 | 212.6 | 69.7
ship12l | 1,151 | 5,427 | 16,170 | 9,433 | 286.5 | 64.5 | 33.5
m3000§ | 3,001 | 6,000 | 17,998 | 12,008 | 1,820.5 | 59.7 | 32.0
80bau3b | 2,262 | 9,799 | 21,002 | 44,165 | 6,460.5 | 577.9 | 373.5
ken9§ | 6,601 | 11,137 | 25,474 | 51,951 | 33,147.8 | 349.2 | 170.1
greenbea | 2,392 | 5,405 | 30,877 | 52,727 | 14,758.5 | 444.3* | 138.1*
greenbeb | 2,392 | 5,405 | 30,877 | 52,421 | 8,044.5 | 311.9 | 119.8
pilots | 1,441 | 3,652 | 43,167 | 226,172 | 15,046.5 | 1,099.5 | 261.9
jim100§ | 2,007 | 19,020 | 61,976 | 34,958 | 27,890.8 | 205.3 | 80.2
mac10§ | 15,246 | 47,333 | 101,849 | 1,838,050 | † | 12,633.5 | 2,736.6
t0lp08§ | 6,532 | 10,315 | 293,750 | 3,306,674 | ‡ | 34,506.6# | 5,286.9#

NOTE: All times are in seconds. This test set includes all the NETLIB problems that have more than 5,000 nonzeros, plus other problems from various sources. MINOS was run on one processor; KLP was run on one processor and on the standard configuration. Both systems were run using default parameters (for KLP, this means dual power series, order 5). The KLP run times include prep plus opt plus postp (excluding kmps). The MINOS times also exclude data conversion.

* Using the primal-dual power-series, order-3 method and setting the parameter phlmult to 3.5.
† Failed to reach the optimal solution after 1,000 degenerate pivots (and after 6,725.6 seconds).
‡ Problem is too large.
§ Non-NETLIB problem.
# Using the preprocessor option (lin row).
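A quick computation over three rows of Table I shows how the MINOS-to-KLP speedup grows with problem size (times copied from the table; MINOS one-processor column versus KLP eight-processor column):

```python
# (MINOS 1-processor, KLP 8-processor) times in seconds, from Table I
times = {
    "scfxm2":  (81.5, 12.9),
    "80bau3b": (6460.5, 373.5),
    "ken9":    (33147.8, 170.1),
}
for name, (minos, klp8) in times.items():
    print(f"{name}: {minos / klp8:.1f}x")   # roughly 6x, 17x, and 195x
```

The speedup rises from single digits on the small problems to two orders of magnitude on the largest, the trend discussed in the Performance section below.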

Performance

In this section, we try to characterize the performance of the KORBX system's linear programming solver. First, we compare our default variant of the Karmarkar algorithm, that is, the dual power-series order 5 (using Cholesky factorization), with a well-known implementation of the simplex method. Then, we discuss large applications supplied to us by industry and government sources, and compare the run times reported to us with the results we have obtained on the KORBX system. All experiments reported in this paper were performed on the currently released version of the KORBX system processor, which uses advanced computational elements. The KORBX system run times were obtained using KLP Release 3.0.2 (a prerelease of Customer Release 3.0).

Comparison with the Simplex Method. For comparisons of our system with the simplex method, we ran MINOS 5.1, which is a Fortran implementation of the simplex method from the Systems Optimization Laboratory at Stanford University.¹⁶ Both optimizers were run with their default parameters. We chose MINOS because it is widely available, is often used for benchmarks, and is easily ported to the KORBX system processor.

Table I shows our results. The collection of problems used includes all problems from the NETLIB suite¹⁷ that have at least 5,000 nonzeros in their constraint matrix. We have also included seven other problems; some are very sparse, and some are very dense. The table is sorted according to the number of nonzeros in the constraint matrix.

Because MINOS is designed for a serial machine, we ran MINOS on one CE and ran KLP on both one and eight CEs. (It is believed that interior point methods can be made parallel more easily than can the simplex method.) Note that some of the critical subroutines in our system are written in assembly language and are designed to exploit efficient cache and main memory management.¹⁸ Also, vectorization can result in a speedup of up to a factor of 4. Although we compiled MINOS with the loop analyzer turned on so it could reap any inherent benefit from vectorization, this seemed to have little effect.

The KLP times include the prep, opt, and postp run times. We have omitted the time for the data input routine, kmps. The MINOS times also exclude data input time. We should note that, for many small problems and some large sparse problems, the time for data input can be significant. Studies we have made indicate that kmps generally takes about the same amount of time as the analogous portion of MINOS. Note that, as the problems get larger, the relative speedup of the optimization increases.

Performance on Large-Scale Problems. In this section, we discuss the performance of the KORBX system on large-scale problems obtained from industry, government, and academia. For several of these problems, we compare results obtained on the KORBX system to the run times reported to us by customers during their product evaluation phase. Table II summarizes these problems. The reported results generally reflect IBM's simplex implementation, MPSX, which runs on an IBM 3090 mainframe computer. Depending on the model, IBM 3090 computers achieve between 7 and 14 MFLOPS. In comparison, recall that KLP runs between 2 and 37 MFLOPS, depending on the problem.

Pacific Basin facility planning. An important internal AT&T application is the Pacific Basin facility-planning problem. The problem's objective is to determine where undersea cables and satellite circuits should be installed, when they will be needed, the number of circuits needed, the technology to be used for the cables, and the routes for calls. This problem has been modeled as a linear program.¹⁹,²⁰

The KORBX system solves the 10-year planning-horizon problem (called pbfp10 in Table II) in about 21 minutes. Using a special version of the dual-affine method developed for Pacific Basin facility planning and implemented on the KORBX system processor, this problem was solved in about 4 minutes. This implementation was customized for these problems to ensure maximum use of the parallel processors. It took about 76 minutes to solve the same problem on an IBM 3090 computer using MPSX.

Table II. Summary of large-scale problems

Problem name | Rows | Columns | Nonzeros in constraint matrix | Comparative environment | Comparative time | KORBX system time
mopp2 | 3,685 | 12,701 | 83,253 | IBM 3090/MPS III | 11 | 5
airline1 | 4,420 | 6,711 | 101,377 | IBM 3090/MPSX | 500 | 23
financial | 846 | 44,974 | 136,970 | IBM 3090/MPSX | * | 12
pbfp10 | 15,425 | 34,553 | 137,866 | IBM 3090/MPSX | 76 | 18
mopp3 | 5,316 | 13,236 | 174,231 | IBM 3090/MPS III | 47 | 15
airline2 | 6,642 | 10,707 | 203,597 | IBM 3090/MPSX | † | 78
pbfp19 | 27,971 | 76,815 | 249,455 | IBM 3090/MPSX | ‡ | 53
mac30 | 46,863 | 152,058 | 326,252 | Cyber 273 and IBM 3081/MCNF | ‡ | 234
mac40 | 63,027 | 209,739 | 449,209 | Cyber 273 and IBM 3081/MCNF | ‡ | 360
mac50 | 79,019 | 267,357 | 571,792 | Cyber 273 and IBM 3081/MCNF | ‡ | 612
mopp1 | 24,902 | 43,018 | 583,732 | IBM 3090/MPS III | ‡ | 450
mlp | 20,420 | 519,660 | 2,017,380 | Honeywell DPS-6 and IBM 3081/Benders decomposition | # | 74

NOTE: Run times are in minutes. The comparative environment and run times were reported to us by the client, except for financial, pbfp10, and pbfp19, which were run at AT&T. Run times using a prerelease version of KLP Release 3.1.

* Did not solve after 120 minutes and 100,000 iterations.
† Did not solve after 1440 minutes.
‡ Problem believed unsolvable by these systems.
# Method failed.

76minutesto solve the same problemon an IBM 3090 system in 7.5 hours (usingthe primal-dual, power-series,

computerusing MPSX. order-3 method).This problemhas not been solved

The ability to solve these problemsquickly gives using commercial simplex packagesbecause oftheir size

planners the opportunity to develop a more efficient net- and speed limitations. In an attemptto obtaina reason-

workdesign. Currently, this algorithm is being used to able solution using these simplex packages, the linear

solve the 19-year planning problem (pb f p l S}, which programwas decomposed into five smallermodels ac-

MPSX cannot handle because of size limitations. cordingto rank. Interestingly, solving the larger model

Military-officer personnel planning. Anotherlinear pro- produceda planthat was closer to the force structure

gram,called mopp 1 in Table II, plans U.S. Army officer requirements by 32 percent when comparedto the

promotions (to Lieutenant, Captain, Major, Lieutenant aggregate ofthe solutions to the five smallermodels.

Colonel, and Colonel), accessions (newhires), and train- The performance ofthe KORBX system on the

ing requirements by skillcategory. The objective is to smallermodelsis also interesting. The KORBX system

develop a planthat meets the Army's force structure solved the Lieutenant model (problem mo pp Z) the

requirements as closely as possible. fastest-i.e., in 5 minutes-while, according to the Army,

This linear program (which has 21,000 con- MPS III (a commercial implementation ofthe simplex

straints and 43,000 variables) was solved on the KORBX method) on an IBM 3090 Model 200 computertook

11 minutes. The Lieutenant Colonel model (problem (called ml.p) that has 15time periods, 12ports of embar-

took 15 minutes on the KORBX system, compared to 47 minutes on the IBM 3090 computer. The performance advantage was similar on the other problems.

Military logistics planning. The U.S. Air Force Military Airlift Command has a patient evacuation problem that has been modeled as a multicommodity network-flow problem. The MAC uses this patient distribution system to determine the flow of patients moved via an airlift from an area of conflict to bases and hospitals in the continental United States. The objective is to minimize the time that patients are in the air-transport system. The constraints are:

- Demand must be met.
- Size and composition of the hospitals, staging areas, and air fleet are conserved.

The MAC has generated a series of problems based on the number of time periods (days). Using a specialized simplex code, the MAC was able to solve as large as a 5-day problem. Both the specialized simplex code and MINOS spent thousands of iterations without changing the objective value, and were unable to solve the 10-day problem (called mac1 in Table I). The KORBX system solved this problem in less than 2 hours. The dual conjugate-gradient method can solve the 50-day problem (79,000 constraints and 572,000 variables) in 10.2 hours. Table II summarizes the results of the 30-, 40-, and 50-day problems (called mac30, mac40, and mac50).

The Department of Defense Joint Chiefs of Staff has a logistics planning problem that models the feasibility of supporting military operations during a crisis. The problem is to determine if different materials (called movement requirements) can be transported overseas within strict time windows. The linear program models capacities at the embarkation and debarkation ports, capacities of the various airplanes and ships that carry the movement requirements, and penalties for missing delivery dates.

Using simulated data, we generated a problem with [...] ports of embarkation, 7 ports of debarkation, and 9 different vehicles for carrying 20,000 movement requirements. This problem, which has about 21,000 constraints and 500,000 variables, took about 75 minutes to run. The military has generated similar problems that have proved intractable, even after applying techniques such as Benders decomposition and network-flow approaches (with and without side constraints).

Airline problems. The airline industry has supplied us with several relatively large, dense linear programs (an average of 50 nonzeros per column). According to one airline, the KORBX system solved its sample problems about 20 times faster than MPSX on an IBM 3090 computer (problems airline1 and airline2 in Table II). However, this airline has developed specialized heuristics to generate initial solutions. If they start MPSX with these customized initial solutions, they can cut MPSX's run time by at least a factor of 6 from those reported in Table II. Even from a "cold start," the KORBX system solved these problems faster than MPSX from a "warm start."

Financial industry problems. We experimented with a class of bond arbitrage problems. These problems (we have three of them) have about 850 constraints and 45,000 variables. After 100,000 iterations (and 120 CPU-minutes), MPSX, running on an IBM 3090 computer, was still performing degenerate pivots at the original vertex. The KORBX system solves each of these problems in about 12 minutes. Table II shows the results for one of these problems (financial).

Multicommodity network-flow problems. An important class of linear-programming applications is the multicommodity network-flow problems. These problems are used to find a minimum-cost routing for each of several commodities through a network whose arcs have finite capacity. For these problems, there are specialized simplex-based codes that are much faster than general-purpose simplex codes.
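The coupling that makes these problems hard is that commodities are independent except for the shared arc capacities. The toy sketch below (our own illustration, not KORBX or Kennington code; all names are invented) routes two commodities over a three-node network by brute-force path enumeration, whereas the production codes discussed here solve the LP relaxation with simplex or Karmarkar-type methods:

```python
from itertools import product

# Toy multicommodity flow: each commodity picks one path from s to t;
# paths share arc capacities, and we minimize total routing cost.
arc_cost = {("s", "t"): 1, ("s", "m"): 1, ("m", "t"): 2}
arc_cap  = {("s", "t"): 1, ("s", "m"): 5, ("m", "t"): 5}

# Candidate paths from s to t, each given as a list of arcs.
paths = [[("s", "t")], [("s", "m"), ("m", "t")]]

# Two commodities, each shipping 1 unit from s to t.
demands = [1, 1]

def best_routing(paths, demands):
    """Enumerate every assignment of commodities to paths and keep the
    cheapest one that respects the joint arc capacities."""
    best = None
    for choice in product(range(len(paths)), repeat=len(demands)):
        load = {a: 0 for a in arc_cap}
        cost = 0
        for k, p in enumerate(choice):
            for a in paths[p]:
                load[a] += demands[k]
                cost += arc_cost[a] * demands[k]
        if all(load[a] <= arc_cap[a] for a in arc_cap):
            if best is None or cost < best[0]:
                best = (cost, choice)
    return best

cost, choice = best_routing(paths, demands)
print(cost, choice)
```

Both commodities prefer the direct arc (s, t), but its capacity of 1 forces one of them onto the two-hop path, so the minimum total cost is 4; that interaction through shared capacity is exactly what distinguishes multicommodity from single-commodity flow.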

Table III. Randomly generated multicommodity flow problems

Problem                     Nonzeros in          MCNF      KLP system time,   KLP system time,
name     Rows     Columns   constraint matrix    time      1 processor        8 processors
jlk001   1,152    1,890     5,300                267.1     58.7               19.1
jlk002   5,422    5,530     15,280               2,074.3   322.6              102.4
jlk003   10,836   10,530    29,420               2,093.3   429.2              180.1
jlk004   22,069   26,120    72,930               *         12,441.6           2,551.5

NOTE: All times are in seconds. The MCNF column is the run time of Kennington's specialized primal-simplex multicommodity flow solver. KLP run times include prep plus opt plus postp.
* Method reported floating-point exception after 101,927 seconds.
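The KLP times in Table III come from interior-point iterations in the Karmarkar family rather than vertex-to-vertex simplex pivots. As a minimal sketch only, in the spirit of the affine-scaling variants of references 3 through 6: the instance below (one equality constraint, chosen by us so the usual projection system reduces to a scalar; a real code factors a large sparse matrix at this step) moves a strictly interior point a fixed fraction of the way to the boundary along the scaled steepest-descent direction:

```python
# Primal affine-scaling sketch for: minimize c.x  subject to  sum(x) = 1, x >= 0.
def affine_scaling(c, x, iters=100, step=0.9):
    for _ in range(iters):
        d2 = [xi * xi for xi in x]                           # D^2, D = diag(x)
        w = sum(di * ci for di, ci in zip(d2, c)) / sum(d2)  # dual estimate (A D^2 A^T)^-1 A D^2 c
        r = [ci - w for ci in c]                             # reduced costs c - A^T w
        d = [-di * ri for di, ri in zip(d2, r)]              # search direction -D^2 r
        # Step a fraction of the distance to the nearest boundary x_i = 0.
        ratios = [-xi / di for xi, di in zip(x, d) if di < 0]
        if not ratios:
            break                                            # no blocking bound: at (or past) optimum
        t = step * min(ratios)
        x = [xi + t * di for xi, di in zip(x, d)]            # stays feasible: A d = 0, x > 0
    return x

c = [2.0, 1.0, 3.0]
x = affine_scaling(c, [1 / 3, 1 / 3, 1 / 3])
obj = sum(ci * xi for ci, xi in zip(c, x))
print(x, obj)   # approaches the vertex (0, 1, 0), objective approaches 1
```

Because the direction is orthogonal to the constraint row, every iterate remains feasible and interior, which is why such methods do not stall on degenerate vertices the way the simplex runs described above did.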

We compared the KORBX system against a specialized, simplex-based, multicommodity network-flow (MCNF) code developed at Southern Methodist University by Kennington.20 The problems we used were created with a generator developed by Ali and Kennington.21 We ran the KORBX system on one CE and then on eight CEs. Kennington's solver was compiled by the Fortran compiler with the loop analyzer turned on. However, his code was not designed for a parallel machine and, hence, benefited little from vectorization and parallelization. Table III shows the results.

Results for two other randomly generated multicommodity flow problems, ken7 and ken9, are part of the benchmarks presented earlier in the simplex-method comparison.

Summary

Karmarkar-based optimization algorithms, parallel/vector processing hardware, and sophisticated software implementation techniques have been combined, yielding the AT&T KORBX system. The system is now tackling linear-programming problems that were heretofore unsolvable by other methods.

Acknowledgments

The AT&T KORBX system is the result of long hours of hard work by many individuals. We especially owe a great deal to our colleagues on AT&T's Advanced Decision Support Systems team. In particular, we would like to acknowledge some of the people whose contributions have had significant impact on the final product: Efthymios Housos and Chih Chung Huang for work on parallelization; Bob Murray, Adrian Kester, and Srinivas Sataluri for developing and enhancing the preprocessor and postprocessor; and Karen Medhi for improvements to the dual conjugate-gradient method. We would also like to thank Narendra Karmarkar and Ram Ramakrishnan for developing prototype code for both the primal-affine and dual conjugate-gradient methods and for other help they have given us.

References

1. G. Dantzig, "Maximization of a linear function of variables subject to linear inequalities," Activity Analysis of Production and Allocation, T. C. Koopmans (ed.), John Wiley & Sons, New York, 1951, pp. 339-347.
2. N. K. Karmarkar, "A new polynomial time algorithm for linear programming," Combinatorica, Vol. 4, No. 4, 1984, pp. 373-395.
3. I. I. Dikin, "Iterative solution of problems of linear and quadratic programming," Soviet Mathematics Doklady, Vol. 8, No. 3, 1967, pp. 674-675.
4. I. I. Dikin, "On the speed of an iterative process," Upravlyaemye Sistemi, Vol. 12, No. 1, 1974, pp. 54-60.
5. E. Barnes, "A variation on Karmarkar's algorithm for solving linear programming problems," Mathematical Programming, Vol. 36, 1986, pp. 174-182.
6. R. J. Vanderbei, M. S. Meketon, and B. A. Freedman, "A modification of Karmarkar's algorithm," Algorithmica, Vol. 1, No. 4, 1986, pp. 395-407.
7. R. J. Vanderbei, "The affine-scaling algorithm for linear programs with free variables," Mathematical Programming, Vol. 43, 1989, pp. 31-44.
8. I. Adler, N. K. Karmarkar, M. G. C. Resende, and G. Veiga, "An implementation of Karmarkar's algorithm for linear programming," Technical Report ORC 86-8, Operations Research Center, University of California, Berkeley, California, 1986.
9. M. Kojima, S. Mizuno, and A. Yoshise, "A primal-dual interior point algorithm for linear programming," Report No. B-188, Department of Information Sciences, Tokyo Institute of Technology, Tokyo, Japan, 1987.
10. R. Monteiro and I. Adler, "Interior path following primal-dual algorithms, Part I: Linear programming," Mathematical Programming, to be published, 1988.
11. N. K. Karmarkar, J. C. Lagarias, L. Slutsman, and P. Wang, "Power Series Variants of Karmarkar-Type Algorithms," AT&T Technical Journal, Vol. 68, No. 3, May/June 1989, pp. 20-36.
12. G. H. Golub and C. Van Loan, Matrix Computations, The Johns Hopkins University Press, Baltimore, Maryland, 1983.
13. E. Housos, C. C. Huang, and J. M. Liu, "Parallel Algorithms for the AT&T KORBX System," AT&T Technical Journal, Vol. 68, No. 3, May/June 1989, pp. 37-47.
14. A. George and J. W-H. Liu, Computer Solution of Large Sparse Positive Definite Systems, Prentice-Hall, Englewood Cliffs, New Jersey, 1981.
15. N. K. Karmarkar and K. G. Ramakrishnan, "Implementation and Computational Results of the Karmarkar Algorithm for Linear Programming, Using an Iterative Method for Computing Projections," presented at the Thirteenth International Symposium on Mathematical Programming, Tokyo, Japan, August 1988.
16. B. A. Murtagh and M. A. Saunders, "MINOS 5.1 User's Guide," Technical Report 83-20R, Systems Optimization Laboratory, Department of Operations Research, Stanford University, Stanford, California, December 1983, revised January 1987.
17. D. Gay, "Electronic mail distribution of linear programming test problems," COAL Newsletter (Committee on Algorithms), No. 13, 1985, pp. 10-12.
18. N. F. Dinn, N. K. Karmarkar, and L. P. Sinha, "Karmarkar algorithm enables interactive planning of networks," AT&T Bell Laboratories RECORD, March 1986, pp. 11-13.
19. L. P. Sinha, B. A. Freedman, N. K. Karmarkar, A. Putcha, and K. G. Ramakrishnan, "Overseas network planning: application of Karmarkar's algorithm," Proceedings of Network '86, Tarpon Springs, Florida, 1986.
20. J. L. Kennington, "MCNF85 (Primal simplex, multi-commodity network flow solver)," Department of Industrial Engineering and Operations Research, Southern Methodist University, Dallas, Texas, 1985.
21. A. I. Ali and J. L. Kennington, "MNETGN program documentation," Technical Report No. IEOR 77003, Department of Industrial Engineering and Operations Research, Southern Methodist University, Dallas, Texas, 1977.

Biographies (continued)

Mr. Liu joined the company in 1982 and has a Ph.D. in operations research and system engineering from The University of North Carolina at Chapel Hill. Mr. Slutsman joined the company in 1981 and has an M.S. in mathematics from the University of Leningrad, U.S.S.R., and a Ph.D. in applied mathematics from Leningrad Mining Institute, U.S.S.R. Mr. Vanderbei joined the company in 1984 and has both a B.S. in chemistry and an M.S. in operations research and statistics from Rensselaer Polytechnic Institute, and both an M.S. and Ph.D. in applied mathematics from Cornell University. Mr. Wang joined the company in 1980 and has a B.S. in mathematics from the National Taiwan Normal University, Taiwan; an M.S. in operations research and systems analysis from the University of North Carolina at Chapel Hill; and a Ph.D. in mathematics from the University of Illinois at Champaign-Urbana.

(Manuscript received April 11, 1989)
