
THE INTERIOR-POINT METHOD FOR LINEAR PROGRAMMING

A variant of the interior-point method outperforms the more common simplex method for large problems, and the number of required iterations is relatively independent of problem size.

GREG ASTFALK, Convex Computer Corp.
IRVIN LUSTIG, Princeton University
ROY MARSTEN, Georgia Institute of Technology
DAVID SHANNO, Rutgers University

Since its inception in the 1940s, linear programming, which seeks to optimize linear functions given linear constraints, has been used in many applications, including capital-resource allocation, inventory control, portfolio management, and production and crew scheduling. For all these applications, the original linear-programming algorithm, called the simplex method, has served industry well.

But recent advances in algorithms and architectures are fundamentally changing linear programming, and a new class of algorithm is emerging. This method is proving to be the preferred approach for large or difficult problems in linear programming.

Changes began in 1984 when Narendra Karmarkar published his landmark paper introducing interior-point methods to solve linear-programming problems.[1] Interior-point methods are fundamentally different from the classic simplex method. The easiest way to see the difference is to look at the polytope, a generalized polygon associated with linear-programming problems, which is formed from a problem's constraints. For this example, assume the polytope is many sided and in a three-dimensional space. At each iteration, the simplex method moves along the polytope's edges toward a vertex of lower cost than the vertex of the previous iteration. In contrast, the interior-point method begins its iterations from the strict interior of the polytope. Each iteration marches through the interior, without regard for the polytope's edges, toward the lowest-cost vertex.


Karmarkar's algorithm spurred some of the mathematical-programming community to dramatically improve the simplex method. Others worked on bringing Karmarkar's method into the classical framework of optimization.

These advances, together with advances in computing hardware, have made practical solutions to linear-programming problems that were, only a few years ago, considered impractical.

In this article, we outline the formulation of a primal-dual variant of the interior-point method, which is based on three well-established optimization algorithms.

INTERIOR-POINT BUILDING BLOCKS

The theoretical foundation for interior-point methods consists of three crucial building blocks:

+ Isaac Newton's method for solving nonlinear equations, and hence for unconstrained optimization.
+ Joseph Lagrange's method for optimization with equality constraints.
+ Anthony Fiacco and Garth McCormick's barrier method for optimization with inequality constraints.
With these pieces, you can construct the primal-dual interior-point method. The primal-dual variant is the most theoretically elegant of the many variants, and also the most successful computationally. It is the result of work by many people, including Masakazu Kojima and colleagues,[2] Renato Monteiro and Ilan Adler,[3] Kevin McShane and colleagues,[4] In-Chan Choi and colleagues,[5] and Irvin Lustig and colleagues.[6]
NEWTON'S METHOD

Newton's method finds a zero of a function of a single variable, f(x) = 0. Given an initial estimate x^0, you compute a sequence of trial solutions

x^{k+1} = x^k - f(x^k)/f'(x^k)

For a system of n equations in n variables, let f(x) = [f_1(x), ..., f_n(x)]^T and x = [x_1, ..., x_n]^T. The Jacobian J(x) is defined as the matrix with the (i,j) component

J_ij(x) = ∂f_i(x)/∂x_j    (1)

Then Newton's method for the system of equations looks like

x^{k+1} = x^k - [J(x^k)]^{-1} f(x^k)    (2)

Or, if we let δx = x^{k+1} - x^k denote the displacement vector, then

J(x^k) δx = -f(x^k)

You can apply Newton's method to an unconstrained minimization problem as well. For example, to minimize g(x), take f(x) = g'(x) and use Newton's method to search for a zero of f(x). If g is a real-valued function of n variables, a local minimum of g will satisfy the following system of n equations in n variables:

∂g(x)/∂x_j = 0 for j = 1, ..., n

In this case, the Newton iteration looks like

∇²g(x^k) δx = -∇g(x^k)

where ∇g is the gradient of g and ∇²g is the Hessian of g, a matrix with the (i,j) component ∂²g(x)/∂x_i∂x_j.

If g is a convex function, then any local minimizer is also a global minimizer. If the point x* is a local minimizer of g(x) (that is, ∇g(x*) = 0) and if ∇²g(x*) has full rank, then Newton's method will converge to x* if it is started sufficiently close to x*.
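To make Equations 1 and 2 concrete, here is a minimal numpy sketch, our own illustration rather than anything from OB1, that applies Newton's method to a small invented system by repeatedly solving J(x^k) δx = -f(x^k):

    import numpy as np

    def newton_system(f, jac, x0, tol=1e-10, max_iter=50):
        """Solve f(x) = 0 by Newton's method: J(x^k) dx = -f(x^k)."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            fx = f(x)
            if np.linalg.norm(fx) < tol:
                break
            dx = np.linalg.solve(jac(x), -fx)   # displacement vector
            x = x + dx                          # x^{k+1} = x^k + dx
        return x

    # Invented example: x1^2 + x2^2 - 4 = 0 and x1 - x2 = 0.
    f = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])
    jac = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
    print(newton_system(f, jac, [1.0, 2.0]))    # converges to (sqrt(2), sqrt(2))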

LAGRANGE METHOD

In 1788, Lagrange discovered how to transform a constrained optimization problem, with equality constraints, into an unconstrained problem. To illustrate his method, assume you need to solve the problem

minimize f(x)
subject to g_i(x) = 0 for i = 1, ..., m

Here x is a vector of n independent variables. You first form a Lagrangian function

L(x, y) = f(x) - Σ_{i=1}^{m} y_i g_i(x)

The y_i's introduced in this equation are called Lagrange multipliers. In a standard linear-programming setting, you can view them as dual variables. The next step is to minimize the unconstrained function L(x, y) by solving the following system of (n + m) equations in (n + m) variables:

∂L/∂x_j = ∂f(x)/∂x_j - Σ_{i=1}^{m} y_i ∂g_i(x)/∂x_j = 0 for j = 1, ..., n
∂L/∂y_i = -g_i(x) = 0 for i = 1, ..., m

You can then solve this system of equations using Newton's method.
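When f is quadratic and the g_i are linear, the system above is itself linear, so a single Newton step solves it exactly. The sketch below is our illustration of that special case, with invented problem data:

    import numpy as np

    # minimize (x1+1)^2 + (x2+1)^2  subject to  x1 + x2 = 1
    # Lagrangian: L(x, y) = f(x) - y (x1 + x2 - 1)
    # Stationarity: 2(x_j + 1) - y = 0 for j = 1, 2;  dL/dy = -(x1 + x2 - 1) = 0
    kkt = np.array([[2.0, 0.0, -1.0],    # dL/dx1 coefficients on (x1, x2, y)
                    [0.0, 2.0, -1.0],    # dL/dx2
                    [1.0, 1.0,  0.0]])   # the equality constraint row
    rhs = np.array([-2.0, -2.0, 1.0])
    x1, x2, y = np.linalg.solve(kkt, rhs)
    print(x1, x2, y)   # 0.5 0.5 3.0; the multiplier y plays the role of a dual variable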
BARRIER METHODS

Within an LP context, you can convert general inequality constraints, such as x ≥ 0 or a_j^T x ≥ 0, to equations by adding nonnegative slack (or surplus) variables. So the only essential inequalities are the nonnegativity conditions, x ≥ 0. The idea of the barrier-function approach is to start from a point in the strict interior of the polytope defined by the inequalities (x_j > 0 for all j) and construct a barrier that prevents any variable from reaching the boundary (x_j = 0). For example, adding -log(x_j) to the objective function will cause the objective to increase without bound as x_j approaches 0. Of course, if the constrained optimum x* is on the boundary, which is always true for linear programming, the barrier will prevent you from reaching it. The solution is to use a barrier parameter that balances the contribution of the true objective function against that of the barrier function.

You can convert the minimization problem with nonnegativity conditions into an unconstrained minimization problem in the following way. Assume you have the problem

minimize f(x)
subject to x ≥ 0

You can replace this by the family of unconstrained problems

minimize B(x; μ) = f(x) - μ Σ_{j=1}^{n} log(x_j)

which is parameterized on the positive barrier parameter μ. Let x(μ) be the minimizer of B(x; μ). Fiacco and McCormick show that x(μ) approaches x*, the constrained minimizer, as μ approaches 0. The set of minimizers, x(μ), is called the central trajectory.

As a simple example, consider the problem

minimize (x_1 + 1)^2 + (x_2 + 1)^2
subject to x_1 ≥ 0, x_2 ≥ 0

The unconstrained minimum would be at (x_1*, x_2*) = (-1, -1), but the constrained minimum is at the origin (x_1*, x_2*) = (0, 0). For any μ > 0,

B(x; μ) = (x_1 + 1)^2 + (x_2 + 1)^2 - μ log(x_1) - μ log(x_2)

Setting ∂B/∂x_j = 2(x_j + 1) - μ/x_j = 0 gives the minimizer x_j(μ) = (-1 + √(1 + 2μ))/2, which approaches (0, 0) as μ approaches 0.

In general, you can't get a closed-form solution for the central trajectory, but you can use the following algorithm from Fiacco and McCormick:

1. Choose μ^0 > 0; set k = 0.
2. Find x^k(μ^k), the minimizer of B(x; μ^k).
3. If μ^k < ε, stop; otherwise, choose μ^{k+1} < μ^k.
4. Set k = k + 1 and go to step 2.

Then x^k(μ^k) approaches x* as μ^k approaches 0. In step 2, you can find x(μ) by using Newton's method to solve the system of n equations in n variables

∂B(x; μ)/∂x_j = ∂f(x)/∂x_j - μ/x_j = 0

In practice, you do not have to compute x(μ) very accurately (step 2) before reducing μ (step 3).
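The Fiacco-McCormick loop is short enough to show in full on the worked example above. This is a hedged sketch under our own choices of μ^0, the shrink factor, and a fixed inner Newton count; none of those values come from the article:

    import numpy as np

    def barrier_example(mu0=1.0, eps=1e-8, shrink=0.1):
        """Fiacco-McCormick loop for: min (x1+1)^2 + (x2+1)^2  s.t.  x >= 0."""
        x = np.array([1.0, 1.0])                     # strictly interior start
        mu = mu0
        while mu >= eps:                             # step 3: stop once mu is tiny
            for _ in range(20):                      # step 2: Newton on B(x; mu)
                grad = 2.0 * (x + 1.0) - mu / x      # dB/dx_j
                hess = 2.0 + mu / x**2               # diagonal Hessian of B
                dx = -grad / hess
                alpha = 1.0                          # damp to keep x strictly positive
                while np.any(x + alpha * dx <= 0.0):
                    alpha *= 0.5
                x = x + alpha * dx
            mu *= shrink                             # reduce the barrier parameter
        return x

    print(barrier_example())   # near (0, 0); compare (-1 + sqrt(1 + 2*mu))/2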
PRIMAL-DUAL INTERIOR-POINT METHOD

Armed with Newton's method for unconstrained minimization and the Lagrangian and barrier methods for converting constrained problems into unconstrained ones, we developed the primal-dual variant of the interior-point method. Consider the primal-dual pair of linear programs:

(P) minimize c^T x
    subject to Ax = b, x ≥ 0

(D) maximize b^T y
    subject to A^T y + z = c, z ≥ 0

The z vector introduced in this equation is the vector of dual slack variables. Lagrange's method can handle the equality constraints, while Fiacco and McCormick's barrier method can handle the nonnegativity conditions. Newton's method then optimizes the resulting unconstrained functions.

Suppose you introduce the barriers and then form two Lagrangians, one for the primal problem and one for the dual. Computing all the partial derivatives of L_P and L_D and setting them to zero, you get, because of duplication, just three sets of equations:

Ax = b
A^T y + z = c
x_j z_j = μ for j = 1, ..., n

These are just primal feasibility, dual feasibility, and an approximation to complementary slackness, respectively. That is, μ = 0 corresponds to ordinary complementary slackness. In the spirit of barrier methods, we start with μ at some positive value and let it approach zero.

Let diag{...} denote a diagonal matrix with the listed elements on its diagonal, and then define

X = diag{x_1, ..., x_n}
Z = diag{z_1, ..., z_n}
e^T = (1, ..., 1)

where e is simply a vector of all 1s. You can then write the μ-complementary slackness condition as

XZe = μe

Finally, we use Newton's method to solve the equations

Ax - b = 0
A^T y + z - c = 0
XZe - μe = 0

This involves forming the Jacobian matrix as defined by Equation 1 and the repeated application of Equation 2.

Suppose that the current point, which may be the starting point, (x^0, y^0, z^0), satisfies x^0 > 0 and z^0 > 0, and let

d_P = b - Ax^0
d_D = c - z^0 - A^T y^0

denote the primal and dual residual vectors. Then Newton's method reduces to the system of equations

[ A    0    0 ] [δx]   [ d_P      ]
[ 0    A^T  I ] [δy] = [ d_D      ]    (3)
[ Z    0    X ] [δz]   [ μe - XZe ]

for the corrections to the primal, dual, and dual slack variables. We can state a single iteration of the solution scheme as the following algorithm, which we call Algorithm 1:

1. Solve (A Z^{-1} X A^T) δy = b - μ A Z^{-1} e + A Z^{-1} X d_D
2. δz = d_D - A^T δy
3. δx = Z^{-1} [μe - XZe - X δz]
4. α_P = 0.995 min_j { -x_j/δx_j | δx_j < 0 }
5. α_D = 0.995 min_j { -z_j/δz_j | δz_j < 0 }
6. x^{k+1} = x^k + α_P δx
7. y^{k+1} = y^k + α_D δy
8. z^{k+1} = z^k + α_D δz

In steps 4 and 5, the 0.995 factor keeps us from actually touching the boundary. α_P and α_D are positive quantities that are constrained to be 0 ≤ α_P, α_D ≤ 1. As Algorithm 1 shows, we use δx as a direction in x space and (δy, δz) as a direction in (y, z) space. We then choose step sizes in the two spaces that will preserve x > 0 and z > 0.

Algorithm 1 represents one Newton iteration. Rather than taking several Newton steps to converge to the central trajectory for a fixed value of μ, we reduce μ at every step. The primal-dual method itself suggests how μ should be reduced from step to step.
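The eight steps of Algorithm 1 map almost line for line onto dense linear algebra. The following sketch is our own illustration, assuming A has full row rank and x and z are strictly positive; OB1 implements the same steps with sparse factorizations in Fortran:

    import numpy as np

    def primal_dual_step(a_mat, b, c, x, y, z, mu):
        """One iteration of Algorithm 1 (dense sketch)."""
        d_d = c - z - a_mat.T @ y              # dual residual d_D (b absorbs d_P in step 1)
        zinv = 1.0 / z
        # Step 1: (A Z^-1 X A^T) dy = b - mu A Z^-1 e + A Z^-1 X d_D
        normal = (a_mat * (zinv * x)) @ a_mat.T
        rhs = b - mu * (a_mat @ zinv) + a_mat @ (zinv * x * d_d)
        dy = np.linalg.solve(normal, rhs)
        dz = d_d - a_mat.T @ dy                # step 2
        dx = zinv * (mu - x * z - x * dz)      # step 3: Z^-1 [mu e - XZe - X dz]
        # Steps 4 and 5: ratio tests with the 0.995 safety factor
        def step_len(v, dv):
            neg = dv < 0
            return min(1.0, 0.995 * (-v[neg] / dv[neg]).min()) if neg.any() else 1.0
        a_p, a_d = step_len(x, dx), step_len(z, dz)
        # Steps 6 to 8: update, preserving x > 0 and z > 0
        return x + a_p * dx, y + a_d * dy, z + a_d * dz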
Proof of Algorithm 1. To prove that Algorithm 1 reduces the duality gap, we assume primal feasibility (Ax^0 = b) and dual feasibility (A^T y^0 + z^0 = c). Let gap(0) denote the current duality gap:

gap(0) = c^T x^0 - b^T y^0
       = (A^T y^0 + z^0)^T x^0 - (Ax^0)^T y^0
       = (z^0)^T x^0

Let α ≤ min(α_P, α_D). Then, as we move,

gap(α) = (z^0 + α δz)^T (x^0 + α δx)
       = (z^0)^T x^0 + α[(z^0)^T δx + (δz)^T x^0] + α^2 δz^T δx

But because primal and dual feasibility imply δz = -A^T δy and A δx = 0, we have δz^T δx = 0, and thus

gap(α) = gap(0) + α[(Z δx) + (X δz)]^T e

Using the third row of Equation 3, we get

gap(α) = gap(0) + α[μe - XZe]^T e
       = gap(0) + α[nμ - gap(0)]

So we see that gap(α) < gap(0) as long as μ < gap(0)/n.

In our computational experiments, we used

μ^k = gap(k)/n^2 if n ≤ 5,000
μ^k = gap(k)/n^1.5 if n > 5,000

for k = 0, 1, 2, ..., where gap(k) = c^T x^k - b^T y^k, which substantially reduces μ at each step. We solved the Newton equations for the corrections for each k, giving a sequence of solutions (x^k, y^k, z^k), and continued the steps until the duality gap was sufficiently small:

(c^T x^k - b^T y^k)/(1 + |b^T y^k|) < ε

where ε = 10^{-8} gives eight digits of agreement between the primal and dual objective values. Both the primal and dual objectives may not improve at each step, but they come closer together.
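Both rules are one-liners in code; the function names below are ours, not OB1's:

    import math

    def next_mu(gap, n):
        """Barrier parameter from the current duality gap, per the rule above."""
        return gap / (n * n) if n <= 5000 else gap / (n * math.sqrt(n))

    def converged(primal_obj, dual_obj, eps=1e-8):
        """Relative duality-gap test; eps = 1e-8 gives eight digits of agreement."""
        return (primal_obj - dual_obj) / (1.0 + abs(dual_obj)) < eps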
Implementation. The most surprising and important characteristic of the interior-point method is that the number of iterations depends very little on problem size. Almost all the problems we have solved, regardless of size, require 20 to 80 iterations (although there are exceptions to this range).

Our results do indicate that the number of iterations increases with the logarithm of the number of variables. The number of iterations the simplex method requires to reach optimality is a multiple of the number of rows; often this multiple is between 3 and 6. Given the logarithmic behavior, the advantage of the interior-point method over the simplex method depends on how efficiently you execute each step of the interior-point method.

The implementation we describe, called OB1 (for optimization with barriers), is based on the work of Roy Marsten and colleagues[7] and Lustig and colleagues.[6] It consists of approximately 22,000 lines of Fortran, including modules to read and write MPS files, problem pre- and postprocessing, modules to manipulate the data structures (matrices), a simplex algorithm to get a basis, and the routines to factor the matrix.

For each step of the interior-point method, the most important task is to solve the system of equations for the Newton corrections. This breaks down into two smaller tasks:

1. Form the ADA^T matrix, where D is a diagonal matrix that changes at each iteration (for the primal-dual method, D = Z^{-1}X).
2. Solve the system of equations for δy:

(ADA^T) d = r

where d is a direction vector and r is some modified right-side vector.

This takes about 85 percent of the time allotted for computation. The ADA^T matrix is symmetric and positive definite. Here, positive definite means that the matrix is nonsingular and has a Cholesky factorization. Other variations of the interior-point method have the same type of linear-equation system; only the definitions of D and r are different.

You can solve this equation system in at least three ways:

+ a QR factorization of D^{1/2}A^T,
+ a Cholesky factorization of ADA^T, and
+ the preconditioned conjugate-gradient, or CG, method.

For the QR method, you do the factorization D^{1/2}A^T = QR. Here the columns of Q are orthonormal and R is upper triangular. The factored system becomes

(R^T R) d = r

which you can solve by solving two triangular systems.

The CG method can be used because ADA^T is symmetric and positive definite. The number of iterations required by the CG method depends on the condition number, or eigenvalue spectrum, of ADA^T. The number of CG iterations can be reduced by preconditioning. To illustrate the CG method, assume you must compute an approximate factoring

ADA^T ≈ LL^T

where L is lower triangular. The CG method will, in general, take fewer iterations to solve the following equivalent system:

[L^{-1} ADA^T L^{-T}] d' = r'

where d' = L^T d and r' = L^{-1} r. If the approximate factoring is exact, then L is the Cholesky factor of ADA^T and the preconditioning is perfect. That is,

L^{-1} ADA^T L^{-T} = I

shows there is only one eigenvalue, and the CG method takes only one step. What this means is that all the work goes into computing L and none into the CG algorithm. This suggests that partial or incomplete Cholesky factorization should yield reasonable preconditioning.
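To make the preconditioning discussion concrete, here is a generic preconditioned CG sketch. The Jacobi (diagonal) preconditioner below merely stands in for the incomplete-Cholesky preconditioner discussed above, and the test matrix is random; neither comes from OB1:

    import numpy as np

    def pcg(m_mat, r, precond, tol=1e-10, max_iter=1000):
        """Preconditioned CG for M d = r, with M symmetric positive definite.
        precond(v) applies an approximation of M^-1."""
        d = np.zeros_like(r)
        res = r.copy()                     # residual r - M d
        s = precond(res)
        p = s.copy()
        rho = res @ s
        for _ in range(max_iter):
            q = m_mat @ p
            alpha = rho / (p @ q)
            d += alpha * p
            res -= alpha * q
            if np.linalg.norm(res) < tol:
                break
            s = precond(res)
            rho_new = res @ s
            p = s + (rho_new / rho) * p
            rho = rho_new
        return d

    # Demo on a random SPD system of the ADA^T form, with Jacobi preconditioning.
    rng = np.random.default_rng(0)
    a = rng.standard_normal((50, 80))
    dvec = rng.uniform(0.1, 10.0, 80)      # plays the role of D = Z^-1 X
    m_mat = (a * dvec) @ a.T
    r = rng.standard_normal(50)
    diag_inv = 1.0 / np.diag(m_mat)
    d = pcg(m_mat, r, lambda v: diag_inv * v)
    print(np.linalg.norm(m_mat @ d - r))   # small residual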
Most computational work to date, however, is based on complete Cholesky factorization, without conjugate gradients, as the following example illustrates.

The Cholesky factorization

ADA^T = LL^T

where L is lower triangular, reduces the equations to

(LL^T) d = r

which you solve by solving

L g = r

for g and then solving

L^T d = g

for d.
for d. Thus, columns of A with many nonzeros from netlib.
Almost all factorization time is spent in will lead to relatively dense L factors. For-
a two-line do loop that includes indirect tunately, linear-programming models logarithmic behavior. Consider the results
array references. This innermost loop can usually have fewer than 10 nonzeros per on a family of linear-programming prob-
be vectorized, yielding a dramatic im- column, regardless of how many rows. lems that originated in an airline crew
provement in performance. Some models, however, have some very scheduling (set-partitioning) problem. A-
The Cholesky factorization approach dense columns. though these are usually treated as inte-
to solving &Is equation system exploits You can deal with &Is difficulty in two ger-programming problems, we are relax-
two key facts. First, the pattern of nonze- ways, both of which involve treating the ing them to a linear-programming
ros in the Cholesky factor, L, does not dense columns separately. In one, '4dler prohlem by not insisting that the solution
depend on D and is therefore the same at and colleagues compute a be in &e space of positive
every iteration of the interior-point Cholesky factor using just integers.
method. Second, the number of nonzeros the sparse part of A, and All the problems have
in L depends on how the rows of A are then use it to precondition
ordered. The work required during fac- a CG method. The sec-
You can use several heuristic algorithms to solve this ordering problem, the most popular being minimum-degree ordering. The implementation we describe uses a more recent version, the multiple minimum-degree algorithm, developed by Joseph Liu.[8]
To illustrate the importance of row ordering, consider the netlib problem ship04s. (Netlib is a software repository that includes many linear-programming data sets. For more information, send a one-line electronic mail message, send index, to netlib@ornl.gov or netlib@research.att.com.) With the rows of A in the given order, L has 39,674 nonzeros (61-percent density). After a minimum-degree row reordering, the alternate L has 3,252 nonzeros (five-percent density). The time required for the multiple minimum-degree algorithm is usually modest, although it can become significant for very large problems. (The algorithm consists entirely of graph manipulations that do not vectorize.)
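A tiny synthetic example shows how drastically ordering controls fill-in. An "arrowhead" matrix, one dense row and column plus a diagonal, factors with complete fill if the dense node is eliminated first and with no fill at all if it is eliminated last (the matrix and numbers below are ours, not the netlib data):

    import numpy as np

    n = 8
    arrow = np.eye(n) * n                 # SPD arrowhead matrix
    arrow[0, :] = arrow[:, 0] = 1.0       # dense first row and column
    arrow[0, 0] = n

    perm = list(range(1, n)) + [0]        # reorder: dense node eliminated last
    reordered = arrow[np.ix_(perm, perm)]

    l_bad = np.linalg.cholesky(arrow)
    l_good = np.linalg.cholesky(reordered)
    print(np.count_nonzero(l_bad))        # dense factor: n(n+1)/2 = 36 nonzeros
    print(np.count_nonzero(l_good))       # sparse factor: 2n - 1 = 15 nonzeros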
The main weakness of the interior-point method shows up when there are dense columns. Marsten found it right away in his work on the notorious netlib problem israel. A single column in A with a nonzero in every row will cause ADA^T to be completely dense. This occurs regardless of the row ordering and, of course, means that L is completely dense. Thus, columns of A with many nonzeros will lead to relatively dense L factors. Fortunately, linear-programming models usually have fewer than 10 nonzeros per column, regardless of how many rows. Some models, however, have some very dense columns.

You can deal with this difficulty in two ways, both of which involve treating the dense columns separately. In one, Adler and colleagues[9] compute a Cholesky factor using just the sparse part of A, and then use it to precondition a CG method. The second way, using a Schur-complement mechanism, requires a sparse factoring of just A's sparse part and a dense factoring of a matrix whose size is equal to the number of dense columns removed from A.

After trying both approaches, we found that you can handle a few dense columns (say 50) successfully, but a much larger number (say 100) presents a very real stumbling block to the use of interior-point methods. Of course, the meaning of dense depends on the size and structure of the model as well as the actual amount of fill-in during factoring.
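The second remedy can be sketched with the Sherman-Morrison-Woodbury identity, one way to realize the Schur-complement idea: factor only the sparse part, then correct with a small dense system whose size equals the number of dense columns. Everything below, names and data alike, is our illustration:

    import numpy as np

    def solve_with_dense_columns(a_sparse, d_sparse, a_dense, d_dense, r):
        """Solve (A_s D_s A_s^T + A_d D_d A_d^T) d = r, factoring only the
        sparse part S and correcting with a k x k dense Schur system."""
        s_mat = (a_sparse * d_sparse) @ a_sparse.T   # sparse-part normal matrix S
        u = a_dense * np.sqrt(d_dense)               # full matrix is S + U U^T
        s_inv_r = np.linalg.solve(s_mat, r)          # stand-in for a sparse solve
        s_inv_u = np.linalg.solve(s_mat, u)
        k = u.shape[1]
        small = np.eye(k) + u.T @ s_inv_u            # k x k dense correction
        return s_inv_r - s_inv_u @ np.linalg.solve(small, u.T @ s_inv_r)

    rng = np.random.default_rng(1)
    m, n, k = 40, 100, 3
    a_s = rng.standard_normal((m, n)) * (rng.random((m, n)) < 0.2)  # sparse part
    a_d = rng.standard_normal((m, k))                               # dense columns
    d_s, d_d = rng.uniform(0.5, 2.0, n), rng.uniform(0.5, 2.0, k)
    r = rng.standard_normal(m)
    d = solve_with_dense_columns(a_s, d_s, a_d, d_d, r)
    full = (a_s * d_s) @ a_s.T + (a_d * d_d) @ a_d.T
    print(np.linalg.norm(full @ d - r))              # agrees with a direct solve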

PERFORMANCE

We executed the implementation just described on a Convex C210 supercomputer and observed three important performance characteristics:

+ The interior-point method exhibits logarithmic behavior in its iterations, with respect to the number of variables.
+ There is a significant benefit to using modern, vector supercomputer architecture.
+ This implementation of the interior-point method (OB1) consistently outperformed a commonly used simplex code and another commercial implementation of the interior-point method.

You can get the linear-programming problems we used to generate the data from netlib.

Logarithmic behavior. Consider the results on a family of linear-programming problems that originated in an airline crew-scheduling (set-partitioning) problem. Although these are usually treated as integer-programming problems, we relaxed them to linear programs by not insisting that the solution be in the space of positive integers.

All the problems have 837 rows. The largest problem has two million variables. The matrix generator that produces the A matrix generates a significant number of duplicate columns, which we removed during preprocessing.

Figure 1a shows the results on the number of unique variables versus the number of primal-dual iterations. We obtained a larger instance of this problem by adding additional columns to the previously solved problem. We always restarted the problem from the default starting point, without using any knowledge from previously solved subproblems. Thus, each problem is essentially independent from the others even though the smaller problems are subsets of the larger ones.

[Figure 1. Relationship of number of variables to number of iterations in (a) the interior-point method implementation OB1 and (b) in OB1 for selected netlib problems.]

[Figure 2. Vector speed-up as (a) a function of problem size (number of variables) and (b) a function of the number of nonzeros in L (the Cholesky factor).]
If you fit a regression curve y = a + b log(n), where y is the number of iterations and n is the number of unique columns, you get y = -218.07 + 51.507 log(n). The fit of this data to the curve is quite good. Clearly, this algorithm has a desirable property with regard to scaling in the problem size: the iteration count increases very slowly as the problem gets bigger.
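Such a fit is ordinary least squares on (log n, iteration) pairs. The sample points below are invented placeholders; the article's coefficients came from the runs plotted in Figure 1a, and the code only shows the mechanics:

    import numpy as np

    n_cols = np.array([5e3, 2e4, 1e5, 5e5, 2e6])    # unique columns (made up)
    iters = np.array([25, 30, 37, 45, 52])          # iteration counts (made up)
    design = np.column_stack([np.ones(len(n_cols)), np.log(n_cols)])
    (a, b), *_ = np.linalg.lstsq(design, iters, rcond=None)
    print(f"y = {a:.2f} + {b:.3f} log(n)")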
Figure 1b shows a different perspective on iteration behavior. In this figure, we plotted the iteration count for 113 problems against the logarithm of the number of variables. Interestingly, while the number of variables grew more than six orders of magnitude, the number of iterations grew by less than one order of magnitude.

The figure shows that the interior-point method requires a minimum number of iterations regardless of problem size. Even for a problem as small as four variables, the interior-point method requires eight iterations. This startup cost is offset by the phenomenally slow growth in the number of iterations with problem size, although, as we show later, it tends to make the simplex method a better choice for small problems.

Vector architecture. The interior-point method requires you to factor the ADA^T matrix at each step. Because these computations involve indirect addressing, problem size is not the only aspect that affects the algorithm's performance. The distribution of nonzeros in the matrix is also a concern.

As part of our execution on the C210, we compared CPU time for both scalar code (code compiled with vectorization disabled) and vector code.

Figure 2a shows the ratio of scalar to vector CPU time against problem size. As the figure shows, we achieved some impressive speed-ups.

There appears to be little correlation between problem size, as determined by the number of variables, and the speed-up achieved with vector code. When we compared speed-up against the number of nonzeros in L, a different picture emerged, as Figure 2b shows. The data has a stronger correlation in this figure than in Figure 2a, but it is still quite scattered. The number of nonzeros in L is related not only to the size of the A matrix, but also to the pattern of nonzeros in it. Thus, it is important to have a good ordering for the factorization to minimize fill-in.

Table 1. Minos 5.3 versus OB1 on selected netlib problems.

Model      Rows   Columns  Nonzeros   Minos iter.  Minos CPU (sec)  OB1 iter.  OB1 CPU (sec)  Minos/OB1 ratio
afiro        28                  88            14             0.03         10           0.16             0.19
chow2        49       108       442            24             0.07          9           0.27             0.26
try3                            746            65             0.24         13           0.58             0.41
israel                        2,358           262             2.46         25           5.95             0.41
stair       357       467     3,857           578            17.43         15           5.57             3.13
fffff800                                      298             6.15         29          13.59             0.45
shell       537     1,775                     329             5.87         21           5.73             1.02
nesm        663     2,923    13,988         2,992            86.84         40          29.20             2.97
25fv47      822     1,571                   6,428           295.63         27          23.29            12.69
pilot.ja                                    7,196           368.70         37          51.28             7.19
scrm3                                       1,120            40.96         20          10.21             4.01
sierra                                        849            31.24         18          10.76             2.90
ganges                                        713            27.09         17          13.87             1.95
sctap3    1,481     2,480    10,734           959            39.83         18          12.71             3.13
degen3                                      7,293           595.37         20         190.20             3.13
pilot87                                     3,612         9,109.73         41         776.71            11.73
stocfor2  2,158     2,031                                   209.37         22          20.40            10.26

Comparison to simplex code. We first compared this variant of the interior-point method to the classic simplex method using Minos 5.3, a simplex code developed at Stanford University's Systems Optimization Laboratory, on part of the netlib test suite. We chose Minos because it is well known, and its performance has been established on a variety of machines.

Because Minos is a very scalar code, our comparisons reflect not only the different algorithms, but also the different execution modes (scalar versus vector). Note also that our variant of the interior-point method easily solved all the problems in the comparison, but Minos failed to solve some of the more difficult problems presented to it.

Table 1 shows how Minos compares to OB1 on selected netlib problems. These results show that the interior-point method is not as effective on smaller problems, but as problems get larger, it becomes more effective than Minos.
Comparison to other interior-point implementations. AT&T has released Korbx, a hardware and software combination that implements Karmarkar's method on an Alliant FX/80 superminicomputer. Recently, Carolan and colleagues published performance results of Korbx benchmarked by the Military Airlift Command.[10]

We obtained 16 of the large models used in this benchmark (which are available from Jeff Kennington of Southern Methodist University, Dallas) and used them to compare OB1 to the Korbx implementation.
To be fair, the OB1 data is newer than that on Korbx, and the hardware platforms are quite different. The C210 is a single-processor machine with a peak of 50 Mflops, while the FX/80 is an eight-processor machine with a peak of 23.5 Mflops per processor. Table 2 shows the results of our comparison, along with the benchmark results.

Table 2. OB1 versus Korbx on models from the Military Airlift Command benchmark.

Model     Rows    Columns   Nonzeros   OB1 CPU (sec)   Korbx CPU (sec)
cre-a     3,516     4,067     14,087            44.8               121
cre-b     9,648    72,447    256,095           555.6             1,134
cre-c     3,068     3,678     13,244            41.1               102
                             358,171         2,840.9            15,840
pds-10   16,558    48,763    106,436         5,392.9            11,880

William Carolan and colleagues ran all four variants of the interior-point method available in Korbx on all the problems. The results in the Korbx column are the best time for each problem, although the best method varied from problem to problem.
Unfortunately, the differences in implementation and hardware make it hard to compare results, and a comparison of each implementation on a common hardware platform is not possible at this time. Korbx is proprietary, and we are not aware of any plans to port it to a platform that would let us do a side-by-side comparison of the two implementations.

The faster times of Korbx on the osa014, osa030, and osa060 problems are due to the more efficient way Korbx's multiprocessors can be used in parallel on the denser portions of L. The C210, by comparison, has only one processor.

We have developed a robust, reliable, and efficient implementation of the primal-dual interior-point method for linear programs. The immediate future holds the challenge of carrying this new method into nonlinear and integer programming, as well as to very large, specially structured linear programs.

A major research question that comes up in all these areas is how to do hot starts, that is, how to take advantage of a good starting solution. This question is not yet well understood, but is currently under active study.
REFERENCES

1. N. Karmarkar, "A New Polynomial-Time Algorithm for Linear Programming," Combinatorica, Vol. 4, No. 4, 1984, pp. 373-395.
2. M. Kojima, S. Mizuno, and A. Yoshise, "A Primal-Dual Interior-Point Algorithm for Linear Programming," in Progress in Mathematical Programming, N. Megiddo, ed., Springer-Verlag, New York, 1989, pp. 29-47.
3. R. Monteiro and I. Adler, "Interior Path-Following Primal-Dual Algorithms, Part I: Linear Programming," Mathematical Programming, May 1989, pp. 27-42.
4. K. McShane, C. Monma, and D. Shanno, "An Implementation of a Primal-Dual Interior Point Method for Linear Programming," ORSA J. Computing, Spring 1989, pp. 70-83.
5. I. Choi, C. Monma, and D. Shanno, "Further Development of a Primal-Dual Interior Point Method," ORSA J. Computing, Fall 1990, pp. 304-311.
6. I. Lustig, R. Marsten, and D. Shanno, "Computational Experience with a Primal-Dual Interior Point Method for Linear Programming," Linear Algebra and Its Applications, Vol. 152, 1991, pp. 191-222.
7. R. Marsten et al., "Implementation of a Dual Affine Interior Point Algorithm for Linear Programming," ORSA J. Computing, Fall 1989, pp. 287-297.
8. J. Liu, "Modification of the Minimum-Degree Algorithm by Multiple Elimination," ACM Trans. Mathematical Software, June 1985, pp. 141-153.
9. I. Adler et al., "An Implementation of Karmarkar's Algorithm for Linear Programming," Tech. Report 86-8, Operations Research Ctr., Univ. of Calif., Berkeley, 1986.
10. W. Carolan et al., "An Empirical Evaluation of the Korbx Algorithms for Military Airlift Applications," Tech. Report 89-OR-06, CS and Eng. Dept., Southern Methodist Univ., Dallas, 1989.

Greg Astfalk is with Convex Computer Corp. He is also an editor for SIAM News, contributing the "Applications on Advanced Architecture Computers" column. Astfalk is a member of the Society for Industrial and Applied Mathematics, American Mathematical Society, and ACM.

Irvin Lustig is an assistant professor of civil engineering and operations research at Princeton University. His research interests are the numerical aspects of interior-point and simplex methods for linear programming. Lustig is a member of the Society for Industrial and Applied Mathematics, Operations Research Society of America, Mathematical Programming Society, and ACM.

Roy Marsten is a professor of industrial and systems engineering at Georgia Tech. His research interests are in computational optimization, particularly the interior-point method. Marsten is a member of the Society for Industrial and Applied Mathematics, Operations Research Society of America, Mathematical Programming Society, and ACM.

David Shanno is a professor at Rutgers University's Center for Operations Research, where his interests are in linear and nonlinear programming. He is the author or coauthor of more than 50 publications in these fields. Shanno is a member of the Society for Industrial and Applied Mathematics, Operations Research Society, The Institute of Management Science, Mathematical Programming Society, and ACM.

Address questions about this article to Lustig at Princeton University, Princeton, NJ 08544; Internet in@basie.princeton.edu.

