
Springer Texts in Electrical Engineering
Michel Sakarovitch

Linear Programming

Consulting Editor: John B. Thomas

With 12 Illustrations

Springer Science+Business Media, LLC


Michel Sakarovitch
Université Scientifique
et Médicale de Grenoble
Laboratoire I.M.A.G.
BP 53X
38041 Grenoble Cedex
France

Library of Congress Cataloging in Publication Data

Sakarovitch, Michel
Linear programming.
(Springer texts in electrical engineering)
Includes bibliographical references and index.
1. Linear programming. I. Title. II. Series.
T57.74.S24 1983 519.7'2 83-362

© 1983 by Springer Science+Business Media New York


Originally published by Dowden & Culver, Inc. in 1983.

All rights reserved. No part of this book may be translated or reproduced in any form
without written permission from Springer Science+Business Media, LLC.

9 8 7 6 5 4 3 2

ISBN 978-0-387-90829-8 ISBN 978-1-4757-4106-3 (eBook)


DOI 10.1007/978-1-4757-4106-3
Preface

One can say that operations research consists of the application of
scientific methods to the complex problems encountered in the management of
large systems. These systems arise, for example, in industry, administration
and defense. The goal is to aid management in the determination of policy via
the use of tractable models.

A model can be defined as a schematic description of the system under
consideration (which may be a company, a market, a battle, or a transportation
network, for example). To be useful, the model must include a representation
of the interaction between the system elements. An example could be the model
of perfect gases. Perfect gases do not exist, but the concept helps us to
understand the behavior of real gases. Similarly, the atomic model helps us to
understand the structure of matter. One of the non-trivial problems of
operations research is the choice of an appropriate model. For example,
sub-atomic particles help in the understanding of superconductivity, but they
could result in an unnecessarily complex model for the behavior of gases if we
were interested only in the relationship between volume, temperature and
pressure. In fact, all scientific disciplines use models and one could even
argue that all conscious activity of the human mind is performed through
modelling.

Problems of decision making are as old as the proverbial apple. In contrast,
the serious use of specific quantitative techniques in decision processes has
emerged only recently (since World War II). There are two major reasons for this:

Organizational problems are increasing in size and complexity. As a result,
systematic methods for analyzing such problems are becoming more and more
essential.

The second reason is the development of computers. Computers are not only
necessary to solve operations research models, but they have influenced directly
our ways of thinking about complex systems. In many instances one can observe
a parallel development of operations research and computer science.

It is convenient to consider operations research as consisting of two
intimately related parts:


"Model building," i.e., the extraction from a complex reality of a scheme
that represents it adequately for the purposes under consideration.

"Model solving," i.e., the development of techniques to resolve models of
decision procedures (for example: in a given context under certain
assumptions, what is the course of action most likely to succeed).

Since operations research models are most often mathematical models solved
with the help of computer algorithms, one might be led to believe that one can
consider the field totally as an abstraction. However, this is not the case.
What makes operations research a fascinating subject for study and research is
that there is a dialectic relationship between the two parts. The mathematical
techniques should be considered in the context of the concrete problems that
they are intended to solve. On the other hand, operations research is much more
than a common-sense approach to real-life problems.

In this book we are concerned with the "model-solving" aspects of operations
research. In fact, the examples given make no pretense to be realistic models.
They are proposed as overly simplified versions in order to enable the reader,
as easily as possible, to acquaint himself with the type of problems that can be
tackled and to exemplify the mathematical techniques used. However, we would
like to emphasize that model building is both important and difficult. Being
able to define the system, choose the variables and decide what can reasonably
be neglected requires as much intelligence and creativity as solving the models.

In this text we intend to be, in fact, more specific, insofar as we limit
ourselves to solution techniques for a particular class of models that are
defined in the first chapter and are called linear programs. Thus we explore
only a fraction of the model-solving section of operations research. However,
it is also true that linear programming is the best known of the mathematical
techniques of operations research (and, some say, the most used mathematical
technique). This may well be due to the fact that it has proven to be a very
effective and efficient tool for solving practical problems. It also has given
rise to a deep and beautiful mathematical theory. In fact, the discovery of the
simplex method for solving linear programs, in 1947, by G. B. Dantzig was in
many ways the birth of operations research.

The material presented here has been developed for an undergraduate course,
first at the University of California (Berkeley) and then at the University of
Grenoble (France). The book is largely self-contained, the only prerequisite
being a year of calculus. Notions of linear algebra that are useful are
presented in some detail. In particular, I have discussed the solution of
linear systems of equations to the extent which I think necessary for the
understanding of linear programming theory. Depending on how much of the final
two chapters is included and on the initial level of the students in linear
algebra, the topics in this book can be covered in a quarter or a semester.

This course is "computer oriented" in the sense that the algorithms (which
are first depicted on simple examples) are given in a very intuitive algorithmic
language, which makes their coding on a computer rather simple.

We are most grateful to Betty Kaminski for her efforts and perseverance in
preparing the camera copy from which this book was produced.

Grenoble, February 1982


Contents

Chapter I INTRODUCTION TO LINEAR PROGRAMMING 1

1 Examples and Definition of Linear Programs 1
2 Definitions and Notations 5
3 Linear Programs in Canonical Form 9
4 Equivalent Formulations of Linear Programs 12
5 Elements of Geometry of Linear Programs 15
EXERCISES 16

Chapter II DUAL LINEAR PROGRAMS 21

1 Formal Definition of the Dual Linear Program 21
2 The Objective Function Values of Dual Linear Programs 26
3 Economic Interpretation of Duality (An Example) 29
EXERCISES 31

Chapter III ELEMENTS OF THE THEORY OF LINEAR SYSTEMS 35

1 Solution of Linear Systems (Definition); Redundancy 36
2 Solving Linear Systems Using Matrix Multiplication 40
3 Finding Equivalent Systems of Linear Equations:
  Elementary Row Operations 45
4 Pivot Operation 49
EXERCISES 55

Chapter IV BASES AND BASIC SOLUTIONS OF LINEAR PROGRAMS 57

1 Bases of a Linear Program 57
2 Writing a Linear Program in Canonical Form with
  Respect to a Basis 59
3 Feasible Bases, Optimal Bases 65
EXERCISES 67

Chapter V THE SIMPLEX ALGORITHM 70

1 A Particular Case 71
2 Solving an Example 75
3 The Simplex Algorithm: General Case 79
4 Finiteness of the Simplex Algorithm 84
EXERCISES 90

Chapter VI THE TWO PHASES OF THE SIMPLEX METHOD:
  THEORETICAL RESULTS PROVED BY APPLICATION
  OF THE SIMPLEX METHOD 95

1 The Two Phases of the Simplex Method 95
2 Results That Can Be Proved by the Simplex Method 103
EXERCISES 105

Chapter VII COMPUTATIONAL ASPECTS OF THE SIMPLEX METHOD:
  REVISED SIMPLEX ALGORITHM; BOUNDED VARIABLES 109

1 Efficiency of the Simplex Algorithm 109
2 Numerical Pitfalls 110
3 Revised Simplex Algorithm 112
4 Linear Programs with Bounded Variables 117
EXERCISES 122

Chapter VIII GEOMETRIC INTERPRETATION OF THE SIMPLEX METHOD 124

1 Convex Programming 124
2 Geometric Interpretation of the Simplex Algorithm 130
EXERCISES 139

Chapter IX COMPLEMENTS ON DUALITY: ECONOMIC INTERPRETATION
  OF DUAL VARIABLES 142

1 Theorems on Duality: Complementary Slackness Theorem 142
2 Economic Interpretation of Dual Variables 147
EXERCISES 153

Chapter X THE DUAL SIMPLEX ALGORITHM: PARAMETRIC
  LINEAR PROGRAMMING 156

1 Dual Simplex Algorithm 157
2 Parametric Linear Programming 161
EXERCISES 170

Chapter XI THE TRANSPORTATION PROBLEM 173

1 The Problem 173
2 Properties of the Transportation Problem 176
3 Solution of the Transportation Problem 182
4 The Assignment Problem 196
EXERCISES 198

REFERENCES 202
AIDE MEMOIRE AND INDEX OF ALGORITHMS 203
INDEX 205
List of Figures

Fig. I.1 Geometric Solution for (P1) 3

Fig. II.1 Possible Ranges of cx and yb 29

Fig. V.1 The Domain of Feasible Solutions of (P1) 78

Fig. VIII.1 An Example Illustrating Definition 2 126

Fig. VIII.2 An Example of a Function Which is Not Convex 128

Fig. VIII.3 x = λx' + (1 − λ)x'' 129

Fig. VIII.4 The Feasibility Domain for (P) 132

Fig. VIII.5 An Illustration of Definitions 8 and 9 135

Fig. X.1 Zmax vs. p for (Qp) 164

Fig. X.2 Geometric Solution for (Qp) 165

Fig. X.3 Zmax vs. p for (Pp) 167

Fig. X.4 The Domain of Feasible Solutions of (Pp) for Various p 168
Chapter I. Introduction to Linear Programming

The goal of this chapter is to introduce those optimization problems
which, just after World War II, G. B. Dantzig named "linear programs."
The great success of linear programming (i.e., the study of linear programs)
led authors who became interested in various optimization problems to link
the term "programming" with that of a more or less fitted adjective, thus
calling these problems convex programming, dynamic programming, integer
programming, and so on. The result is that in operations research the term
"program" has acquired the very precise meaning "optimization problem." It is
not possible, however, to use the word "programming" for the study of general
problems of optimization (hence, we say "mathematical programming"), because
more or less simultaneously the term "program" was taking on another meaning
much more in harmony with the original one -- that of a sequence of
instructions in the context of computer science. This nice example of the
development of scientific language does not make things clear for the beginner.
To avoid confusion in this book we therefore use the term "program" as
equivalent to an optimization problem and "code" or "computer code" for what is
called a program in computer science.

The notion of duality, which is central to the understanding of linear
programming, is introduced in Chapter II. Necessary notions of linear algebra
are reviewed in Chapter III, and the concept of basic solutions is defined in
Chapter IV. Chapter V is devoted to the presentation of the simplex algorithm.
The two phases of the simplex method are presented in Chapter VI together with
some theoretical results. In Chapter VII we present computational aspects of
the simplex and the revised simplex algorithm. The geometrical interpretation
of the simplex algorithm is given in Chapter VIII. Chapter IX contains some
complements on duality, and parametric linear programming is presented in
Chapter X. Finally, Chapter XI is devoted to the presentation of a very
important special linear program: the transportation problem.

1. Examples and Definition of Linear Programs


a. A production planning problem: A firm can produce two products,
product 1 and product 2, using raw materials I, II, and III. The way the
factory works can be depicted schematically by the following table:

                          Product

                         1       2

                  I      2       1

Raw Material     II      1       2

                III      0       1

As the table indicates, to produce 1 unit of product 1 requires 2 units of
raw material I and 1 unit of raw material II; and to produce 1 unit of product
2 requires 1 unit of raw material I, 2 units of raw material II, and 1 unit
of raw material III. Moreover, we know that for each unit of product 1, the
firm gets a reward of 4 units (say $4,000); for each unit of product 2, a
reward of 5 units; and that the production technology is linear, e.g., to
produce x units of product 1, 2x units of raw material I and x units of raw
material II must be used. There are 8, 7, and 3 units of raw materials I,
II, and III, respectively, available for production. Any unused material
has no salvage value. The problem is to find a production plan that will be
feasible (i.e., which will not use more raw materials than are on hand) and
which will bring the maximum possible reward.

Letting xi denote the quantity of product i produced, the problem can be
set up in the following way:

        Let z = 4x1 + 5x2 be maximum subject to

               x1, x2 >= 0

(P1)     2x1 +  x2 <= 8    (I)

          x1 + 2x2 <= 7    (II)

                x2 <= 3    (III)
This problem has the immediate geometric solution shown in Figure I.1 (next
page).

[Figure not reproduced: the (x1, x2) plane showing the feasible region of
(P1) bounded by the lines 2x1 + x2 = 8, x1 + 2x2 = 7, and x2 = 3, with the
level line z = 22 through the optimum.]

Figure I.1: Geometric Solution for (P1)
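The geometric optimum can also be checked numerically. The following sketch is not part of the book (which predates such tools); it assumes Python with SciPy is available, and negates the reward vector because `linprog` minimizes.

```python
# Sketch (not from the book): solving (P1) with SciPy's linprog,
# assuming SciPy is available.  linprog minimizes, so the reward
# vector c = (4, 5) is negated to maximize.
from scipy.optimize import linprog

c = [-4, -5]                 # maximize z = 4*x1 + 5*x2
A_ub = [[2, 1],              # (I)   raw material I:   2*x1 +   x2 <= 8
        [1, 2],              # (II)  raw material II:    x1 + 2*x2 <= 7
        [0, 1]]              # (III) raw material III:          x2 <= 3
b_ub = [8, 7, 3]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(res.x)       # optimal production plan: x1 = 3, x2 = 2
print(-res.fun)    # maximum reward: z = 22
```

The solver lands on the same vertex of the feasible polygon that the figure exhibits.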


b. A transportation problem: A special imported product S is needed
in three factories located at Denver, Phoenix, and Chicago. The weekly
consumptions of this product are, respectively, 400, 300, and 200 tons.
Product S is delivered into the country through the harbors of New York and
Seattle, the quantities available being 550 tons a week in New York and 350 in
Seattle. Transportation costs are supposed to vary in proportion to the
transported quantities, the unit costs being:

              Denver    Phoenix    Chicago

New York        5          6          3

Seattle         3          5          4

This table shows that to convey x tons from New York to Phoenix, for instance,
the cost is $6x. And the problem consists in determining an optimal "trans-
portation plan," i.e., in finding what are the quantities of product S to send
from each harbor to each factory in such a way that:

(α) Demands are satisfied (each factory receives at least what is needed).
(β) Quantities sent from each harbor do not exceed availability.
(γ) Quantities sent are non-negative.
(δ) The total transportation cost is minimum subject to the preceding
constraints.

Let us assign to New York harbor index 1, to Seattle harbor index 2, and
indices 1, 2, and 3, respectively, to the factories of Denver, Phoenix, and
Chicago. x_ij will denote the quantity of product S sent from harbor i (i = 1
or 2) to factory j (j = 1, 2, or 3) each week. The linear program is then

                x11 + x21 >= 400
                x12 + x22 >= 300        (α)
                x13 + x23 >= 200

                x11 + x12 + x13 <= 550
                x21 + x22 + x23 <= 350  (β)

(P2)            x11, x12, x13, x21, x22, x23 >= 0   (γ)

        Under these constraints, let

                z = 5x11 + 6x12 + 3x13 + 3x21 + 5x22 + 4x23   (δ)
                be minimum

A set of x_ij satisfying (α), (β), (γ) would give a "feasible" transporta-
tion schedule. We say that it is a feasible solution. The problem consists in
finding, among all feasible solutions, one that gives z its minimum value.
Here the number of variables (which is equal to 6) forbids the utilization of a
geometric approach to solve (P2).
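Although a geometric approach is out of reach, an off-the-shelf solver handles (P2) directly. The following is a sketch (not from the book), assuming Python with SciPy; the ">=" demand constraints are negated to fit the solver's "A_ub x <= b_ub" convention.

```python
# Sketch (not from the book) of (P2) in SciPy's linprog form,
# assuming SciPy is available.
from scipy.optimize import linprog

# variable order: x11, x12, x13, x21, x22, x23
cost = [5, 6, 3, 3, 5, 4]                      # (delta) objective
A_ub = [
    [-1,  0,  0, -1,  0,  0],                  # (alpha) Denver:  x11 + x21 >= 400
    [ 0, -1,  0,  0, -1,  0],                  # (alpha) Phoenix: x12 + x22 >= 300
    [ 0,  0, -1,  0,  0, -1],                  # (alpha) Chicago: x13 + x23 >= 200
    [ 1,  1,  1,  0,  0,  0],                  # (beta)  New York supply <= 550
    [ 0,  0,  0,  1,  1,  1],                  # (beta)  Seattle supply  <= 350
]
b_ub = [-400, -300, -200, 550, 350]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 6)  # (gamma)
print(res.x)      # an optimal transportation plan
print(res.fun)    # minimum weekly cost: 3700
```

The optimal plan ships all 350 Seattle tons to Denver (the cheapest route) and serves the rest from New York.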
Remark 1: One may be tempted to believe that what has been done here is
"model building" since we started with an apparently "real-world problem."
However, one must appreciate the fact that the problem, as we stated it, is in a
very academic form. It seldom occurs that real problems are so schematic and
simple. For instance, it is not obvious, in reality, that costs of shipping are
unique, are linear, or that the quantity needed each week is precisely known,
constant, and so on. As was indicated in the preface, this example (as well as
others given in this book) must not be considered as anything other than an aid
to the understanding of the theory.

Definition 1: A linear program is an optimization problem in which:
(a) The variables of the problem are constrained by a set of linear
equations and/or inequalities.
(b) Subject to these constraints, a function (called the objective func-
tion) is to be maximized (or minimized). This function depends linearly on
the variables.

Remark 2: Very often, as is true in the examples depicted above, the variables
may be interpreted as the level of activity at which some processes will
operate. The objective function may be a reward one tries to maximize or a
cost one tries to minimize.

2. Definitions and Notations


Definition 2: If I is a set, |I| denotes the "cardinality of set I," i.e.,
the number of elements in I. This notation will be used for finite sets.

Definition 3: An "m x n-matrix" A is (for us) nothing more than a rectangular
(m rows and n columns) array of (real) numbers satisfying the rules of addition
and multiplication recalled in Definition 4.
The element of row i (1 <= i <= m) and column j (1 <= j <= n) of matrix A will
be denoted A_i^j (row index as subscript, column index as superscript).

Definition 4: Given an m x n-matrix A and a p x q-matrix B: The sum C of
A and B exists only if

        m = p  and  n = q

It is denoted C = A + B and is defined by

        C_i^j = A_i^j + B_i^j

The product D of A by B (in this order), or "the product of B by A on its
left," exists only if

        n = p

It is denoted D = A x B or D = AB and defined by

        D_i^j = Σ_{l=1}^{n} A_i^l B_l^j        1 <= i <= m;  1 <= j <= q

(D is an m x q-matrix). Exercise 11 provides an intuitive background for this
definition.
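The product rule of Definition 4 can be spelled out in a few lines of plain Python. This illustration is an addition, not part of the book:

```python
# Sketch (not from the book) of the matrix product in Definition 4:
# D[i][j] = sum over l of A[i][l] * B[l][j], defined only when the
# column count of A equals the row count of B (n = p).
def mat_mul(A, B):
    m, n = len(A), len(A[0])
    p, q = len(B), len(B[0])
    assert n == p, "product AB requires n = p"
    return [[sum(A[i][l] * B[l][j] for l in range(n)) for j in range(q)]
            for i in range(m)]

A = [[2, 1],
     [1, 2],
     [0, 1]]          # the 3 x 2 matrix of (P1)
x = [[3], [2]]        # a 2 x 1-matrix, i.e. a column vector
print(mat_mul(A, x))  # [[8], [7], [2]]
```

Multiplying the constraint matrix of (P1) by a production plan recovers exactly the raw-material usage of each constraint row.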
n
Definition 5: An n-column vector† x ∈ R^n may be considered as an n x 1-
matrix. Components of x are real numbers denoted x_1, x_2, ..., x_n.

Definition 6: If x ∈ R^n and if J ⊂ {1, 2, ..., n} (read "J is a subset of the
set of indices 1, 2, ..., n"), x_J will denote the |J|-column vector, the
elements of which are x_j for j ∈ J.

Definition 7: An n-row vector† c may be considered as a 1 x n-matrix. The
components of c will be denoted c^1, c^2, ..., c^n. If J ⊂ {1, 2, ..., n}, c^J
denotes the |J|-row vector, the elements of which are c^j for j ∈ J.

Definition 8: We call "scalar product of an n-row vector c by an n-column
vector x" the scalar (i.e., the 1 x 1-matrix) obtained according to Definition 4
in multiplying the 1 x n-matrix c by the n x 1-matrix x:

        cx = Σ_{j=1}^{n} c^j x_j

† The introduction of concepts of column vector and row vector may seem an
exaggerated concern of formalism in a treatise that is aimed at practical and
computational applications. However, this light complication in the beginning
gives in the sequel a very useful consistency of notations.

Definition 9: If J ⊂ {1, 2, ..., n} and vectors c^J and x_J are defined as
stated in Definitions 6 and 7, the scalar product of c^J by x_J is

(1)     c^J x_J = Σ_{j∈J} c^j x_j
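Definitions 6 through 9 translate directly to code. The following small sketch is an addition (not from the book) and keeps the book's 1-based index convention, so position j maps to Python index j - 1:

```python
# Sketch (not from the book) of the restricted scalar product (1):
# c^J x_J = sum over j in J of c^j * x_j, with 1-based indices.
def scalar_product_on(c, x, J):
    return sum(c[j - 1] * x[j - 1] for j in J)

c = [4, 5, 0, 0, 0]          # the reward row vector of (P1)
x = [3, 2, 0, 0, 1]          # a point with its slack values appended
J = {1, 2, 5}
print(scalar_product_on(c, x, J))   # 4*3 + 5*2 + 0*1 = 22
```

With J = {1, 2, 5} this reproduces the value c^J x_J = 22 computed in the worked example later in this section.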

Definition 10: Let A be an m x n-matrix, J ⊂ {1, 2, ..., n}, and
I ⊂ {1, 2, ..., m}; let k ∈ {1, 2, ..., m}, l ∈ {1, 2, ..., n}, x be an
n-column vector and y an m-row vector. Then:

(a) A^l denotes the m-column vector made out by the l-th column of A.
(b) A_k denotes the n-row vector made out by the k-th row of A.
(c) A^J is the m x |J|-matrix made out by the union of columns A^j for j ∈ J.
The product of A^J by x_J is the m-column vector defined by

(1')    A^J x_J = Σ_{j∈J} A^j x_j

(d) A_I is the |I| x n-matrix made out by the union of rows A_i for i ∈ I.
The product of y_I by A_I is the n-row vector defined by

(1")    y_I A_I = Σ_{i∈I} y^i A_i

(e) A_I^J denotes the submatrix of A, the elements of which are A_i^j for
i ∈ I, j ∈ J. Note that if I = {i} and J = {j}, A_I^J is written A_i^j.

Depending on the circumstances, we will thus write matrix A in
one of the following forms:

            [ A_1^1  A_1^2  ...  A_1^n ]
        A = [ A_2^1  A_2^2  ...  A_2^n ]    or    A = [A^1, A^2, ..., A^n]
            [  ...    ...         ...  ]
            [ A_m^1  A_m^2  ...  A_m^n ]

or more generally, in block form, for partitions

        I^1 + I^2 + ... + I^p = {1, 2, ..., n}
        J_1 + J_2 + ... + J_q = {1, 2, ..., m}:

            [ A_{J_1}^{I^1}   A_{J_1}^{I^2}   ...   A_{J_1}^{I^p} ]
        A = [ A_{J_2}^{I^1}   A_{J_2}^{I^2}   ...   A_{J_2}^{I^p} ]
            [      ...             ...                    ...     ]
            [ A_{J_q}^{I^1}   A_{J_q}^{I^2}   ...   A_{J_q}^{I^p} ]
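The selections A^J, A_I and A_I^J of Definition 10 correspond to column, row, and cross-product indexing in array libraries. The following sketch is an addition (not from the book), assumes NumPy is available, and shifts the book's 1-based index sets to Python's 0-based indexing:

```python
# Sketch (not from the book) of Definition 10 in NumPy, assuming
# NumPy is available: column selection A^J, row selection A_I, and
# the submatrix A_I^J.
import numpy as np

A = np.array([[2, 1, 1, 0, 0],
              [1, 2, 0, 1, 0],
              [0, 1, 0, 0, 1]])
I = [1, 2]          # rows 1 and 2 in the book's 1-based numbering
J = [1, 2, 5]       # columns 1, 2 and 5

A_J  = A[:, [j - 1 for j in J]]                            # m x |J| matrix A^J
A_I  = A[[i - 1 for i in I], :]                            # |I| x n matrix A_I
A_IJ = A[np.ix_([i - 1 for i in I], [j - 1 for j in J])]   # submatrix A_I^J
print(A_IJ)
```

`np.ix_` builds the cross product of the two index sets, which is exactly the element set {A_i^j : i ∈ I, j ∈ J} of part (e).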

Definition 11: Comparison of matrices: Let A be an m x n-matrix and B
an m' x n'-matrix. It will be meaningful to compare matrix A and matrix B if
and only if m' = m and n' = n. By definition, in this case, we will say that
A = B (resp. A >= B) if

        A_i^j = B_i^j   for all i = 1, 2, ..., m; j = 1, 2, ..., n

        (resp. A_i^j >= B_i^j for all i = 1, 2, ..., m; j = 1, 2, ..., n)

Moreover, A >= 0 (resp. x >= 0) means

        A_i^j >= 0 for all i = 1, 2, ..., m; j = 1, 2, ..., n

        (resp. x_j >= 0 for all j = 1, 2, ..., n)
j
Definition 12: Matrix transposition: Let A be an m x n-matrix. The
"transpose of A" will be denoted A^T; it is the n x m-matrix, the elements of
which are

        (A^T)_j^i = A_i^j   for i = 1, 2, ..., m; j = 1, 2, ..., n

Note that:
(a) The transpose of a column vector is a row vector
(b) (A^T)^T = A
(c) (Ax)^T = x^T A^T

Definition 13: U_m (or U when no ambiguity exists) denotes the m x m-unit
matrix, i.e.,

        (U_m)_i^j = 1 if j = i, and 0 otherwise
Examples: 1. Let

            [ 2  1  1  0  0 ]        [ 8 ]
        A = [ 1  2  0  1  0 ]    b = [ 7 ]    c = [4, 5, 0, 0, 0]
            [ 0  1  0  0  1 ]        [ 3 ]
                                              y = [1, 2, 0]
        J = {1, 2, 5}

Then we have

              [ 2  1  0 ]                              [ 4 ]
              [ 1  2  1 ]                              [ 5 ]
        A^T = [ 1  0  0 ]    b^T = [8, 7, 3]    c^T =  [ 0 ]
              [ 0  1  0 ]                              [ 0 ]
              [ 0  0  1 ]                              [ 0 ]

              [ 2  1  0 ]          [ 3 ]
        A^J = [ 1  2  0 ]    x_J = [ 2 ]    A^J x_J = b
              [ 0  1  1 ]          [ 1 ]

        yA = [4, 5, 1, 2, 0] >= c

        c^J x_J = 22

2. We can write linear program (P1) in a compact matrix form. The
activity of the firm is defined by the 2-column vector

        x = [ x1 ]
            [ x2 ]

The way the firm is working is depicted by the matrix

            [ 2  1 ]
        A = [ 1  2 ]
            [ 0  1 ]

The supply in raw material and the unit reward are defined respectively by

            [ 8 ]
        b = [ 7 ]        c = [4, 5]
            [ 3 ]

so that (P1) can be written

        maximize z = cx  subject to  Ax <= b,  x >= 0
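The compact form can be verified numerically at a point, say x = (3, 2) from the geometric solution of Section 1. A sketch (not from the book), assuming NumPy:

```python
# Sketch (not from the book), assuming NumPy: checking the compact
# form Ax <= b, x >= 0, z = cx of (P1) at the point x = (3, 2).
import numpy as np

A = np.array([[2, 1], [1, 2], [0, 1]])
b = np.array([8, 7, 3])
c = np.array([4, 5])
x = np.array([3, 2])

print(A @ x)                                          # [8 7 2], componentwise <= b
print(bool(np.all(A @ x <= b) and np.all(x >= 0)))    # True: x is feasible
print(c @ x)                                          # objective value z = 22
```

The componentwise comparison `A @ x <= b` is exactly the matrix inequality of Definition 11.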

3. Linear Programs in Canonical Form

Definition 14: A linear program is in "canonical form" if it is written
in the following way:

        Find vector x such that

(2)     cx = z is maximized

        subject to the constraints

        Ax <= b,  x >= 0

where

        A is a given m x n-matrix
        b is a given m-column vector
        c is a given n-row vector
        x is an (unknown) n-column vector

Traditionally, (2) is usually written in the form

              Ax <= b,  x >= 0
(P)     {
              cx = z (Max)

where the notation

        cx = z (Max)

must be read "z denotes the value of the scalar product cx and we will try and
make z maximum." z is the "objective function."

Remark 3: Equivalently, linear program (P) can be written in detached
coefficient form:

        A_1^1 x_1 + A_1^2 x_2 + ... + A_1^n x_n <= b_1

        A_2^1 x_1 + A_2^2 x_2 + ... + A_2^n x_n <= b_2
                                                           x_j >= 0
(P)         ...                                            j = 1, ..., n

        A_m^1 x_1 + A_m^2 x_2 + ... + A_m^n x_n <= b_m

        c^1 x_1  +  c^2 x_2  + ... +  c^n x_n  = z (Max)

Remark 4: Constraints x >= 0 (which mean, see Definition 11, x_j >= 0 for
j = 1, 2, ..., n) could have been included in the set Ax <= b. To do this, it
would suffice to define the (m + n) x n-matrix A' and the (m + n)-column
vector b' by

        A' = [  A ]        b' = [ b ]
             [ -U ]             [ 0 ]

and (P) would become

              A'x <= b'
(P')    {
              cx = z (Max)

This procedure might seem very natural since nothing in the definition of
a linear program forces variables to be non-negative and we will in fact meet
linear programs with non-constrained variables. However, the usage is to
separate non-negativity constraints on the variables from the other constraints
and we will see that this is well justified:

(a) Non-negativity constraints have, in general, a very specific physical
origin (activities that cannot conceivably work at a negative level, for
instance).
(b) These constraints play a particular role in the process of solution.

Definition 15: Let us come back to the linear program in canonical form:

              Ax <= b,  x >= 0
(P)     {
              cx = z (Max)

Each vector x that satisfies the constraints

        Ax <= b,  x >= 0

is called a "feasible solution." The set of such x's is the set of feasible
solutions. For the sake of compactness, we pose

        D = {x ∈ R^n | Ax <= b, x >= 0}

D is also called the "domain of feasible solutions." Problem (P) can be
stated:

        Find x̄ ∈ D such that

        cx̄ >= cx for all x ∈ D

Such an x̄, if it exists, is called an "optimal solution."

Example: Let us go back to linear program (P1) and Figure I.1. The pairs
(x1, x2) that correspond to points inside the polygon ABCDE are feasible
solutions of (P1). x1 = 3, x2 = 2 is, as we shall prove later, an optimal
solution. We notice that this optimal solution corresponds to a vertex of the
polygon. We will see in Chapter VIII that this fact is not due to chance.

Remark 5: Given the linear program

              Ax <= b,  x >= 0
(P)     {
              cx = z (Max)

Three cases may occur:

(a) The set of feasible solutions D = {x ∈ R^n | Ax <= b, x >= 0} is nonempty
and there exists (at least) one optimal solution; example:

(4)           x1 <= 1,  x1 >= 0
              x1 = z (Max)

(b) The set of feasible solutions D is empty; example:

(5)           x1 <= -1,  x1 >= 0
              x1 = z (Max)

(c) The set of feasible solutions D is nonempty, but there does not exist
any optimal solution; example:

(6)           -x1 <= 1,  x1 >= 0
              x1 = z (Max)

The value of the objective function z is not bounded in the set D.

Remark 6: What is said in Remark 5 is just pure logic: Either D is empty or
not. If D is nonempty, an optimal solution may or may not exist. We will
prove later that in the third case, i.e., if there exist feasible solutions but
no optimal solution, we always are in the unboundedness type of situation.
This fact is not obvious. Let us consider the very simple optimization
problem (which is not a linear program; why?):

(7)           x1 < 1,  x1 >= 0
              x1 = z (Max)

The set of feasible solutions is the segment [0, 1) (open on the right side)
but there is no optimal solution.
It is to prevent situations of that type (and also because strict
inequalities never have physical significance in a model) that strict
inequalities are forbidden in linear programming.

4. Equivalent Formulations of Linear Programs

Definition 16: Recall Definition 14, where we defined a linear program in
canonical form. The linear program

              Ax = b,  x >= 0
(PS)    {
              cx = z (Max)

is said to be in "standard form." If some constraints are equations and others
are inequalities, we say that we have a "mixed form."

Remark 7: Note that linear program (PS) can easily be written in canonical
form. The i-th constraint of (PS) is A_i x = b_i, and we have

(8)     A_i x = b_i  <==>  A_i x <= b_i  and  A_i x >= b_i
                     <==>  A_i x <= b_i  and  -A_i x <= -b_i

so that (PS) can be written

              Āx <= b̄,  x >= 0
(PS')   {
              cx = z (Max)

with

        Ā = [  A ]        b̄ = [  b ]
            [ -A ]             [ -b ]

(i.e., Ā is a 2m x n-matrix and b̄ is a 2m-column vector).

Definition 17: Although optimization problems (PS) and (PS') are formally
different, we see that they are, in fact, "the same."
In general, we will say that two optimization problems (O) and (O') are
"equivalent" if to each feasible solution of one of these problems, one can
find a corresponding solution to the other in such a way that for a pair of
homologous solutions, the values of the objective functions are equal.
(Note that in the example above the correspondence was simply identity.)

Theorem 1: Any linear program can be written in canonical or in standard
form.

Proof: To prove this theorem we will show five points:

1. An inequality A_i x >= b_i can be transformed into an
inequality with a "<=" sign.
2. An equality constraint can be replaced by a set of two
inequalities.
3. An inequality can be replaced by an equation.
4. Minimization and maximization problems are equivalent.
5. If some variables are not constrained to be positive or
zero, it is, however, possible to go back to canonical or standard form.

Proof of 1 is trivial: it suffices to multiply by -1. Proof of 2 has been
given in Remark 7.

Proof of 3. We have

                            { A_i x + ξ_i = b_i
(9)     A_i x <= b_i  <==>  {
                            { ξ_i >= 0

The new variable we added to go from the inequality to the equation is called
a "slack variable." The value of this slack variable measures by how much
A_i x is less than b_i. (A_i x cannot be greater than b_i since ξ_i is
constrained to be non-negative.)
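The slack-variable construction of (9) is mechanical enough to write down in code. A sketch (not from the book), applied to the three constraints of (P1):

```python
# Sketch (not from the book) of the slack-variable construction in
# the proof of point 3: each inequality A_i x <= b_i becomes the
# equation A_i x + xi_i = b_i with a new variable xi_i >= 0, so xi_i
# records by how much A_i x falls short of b_i.
def add_slacks(A_ub, b_ub):
    m = len(A_ub)
    A_eq = [row + [1 if k == i else 0 for k in range(m)]
            for i, row in enumerate(A_ub)]
    return A_eq, b_ub          # same right-hand side, m new variables

A_eq, b_eq = add_slacks([[2, 1], [1, 2], [0, 1]], [8, 7, 3])
print(A_eq)   # [[2, 1, 1, 0, 0], [1, 2, 0, 1, 0], [0, 1, 0, 0, 1]]

# At x = (3, 2) the slack values are b - Ax = (0, 0, 1): the first
# two raw materials are exhausted; one unit of raw material III is unused.
```

The augmented matrix appends an identity block to A, which is why slack variables later provide a convenient starting basis.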
Proof of 4. Let D be the domain of feasible solutions of a linear
program. We have

        Min_{x∈D} [cx] = -Max_{x∈D} [-cx]

Assume that x̄ is a solution of Max_{x∈D} [-cx] and let x ∈ D with

        cx < cx̄

Multiplying by -1, we get

        -cx > -cx̄

which contradicts the fact that x̄ is an optimal solution of the maximization
problem. So if one of these problems has an optimal solution, so does the
other, and optimal solutions of one of these problems are solutions of the
other.

Proof of 5. We will just show that the linear program

              Ax <= b,   x1 unrestricted,   x2, ..., xn >= 0
(10)    {
              cx = z (Max)

can be written in canonical form. Extension to the general case is
straightforward. We let

(11)    x1 = x1' - x1'',   x1' >= 0,   x1'' >= 0

And we show that

              A^1 x1' - A^1 x1'' + A^J x_J <= b,   x1', x1'', x_J >= 0
(12)    {
              c^1 x1' - c^1 x1'' + c^J x_J = z (Max)

(where J = {2, 3, ..., n}) is equivalent to (10) according to Definition 17:

(i) If x1', x1'', x_J is a feasible solution of (12), then x1 = x1' - x1'',
x_J is a feasible solution of (10) with equality of objective functions.

(ii) If x1, x_J is a feasible solution of (10), then x1', x1'', x_J with

        x1' = Max [0, x1]

        x1'' = -Min [0, x1]

is a feasible solution of (12) and the values of the objective functions are
equal.

Remark 8: The method used in the proof of point 5 allows us to "reduce" un-
constrained variables to constrained ones and to show how general the canoni-
cal or standard form is. But, in practice, if we have linear programs with
unconstrained variables, it would be very inefficient to use the procedure just
described. We will show, in Chapter V, how advantage can be taken of the fact
that some (or all) variables are unconstrained.
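The variable split of (11) and the recovery rule of point (ii) can be sketched in a few lines (our own code; the names x1p/x1m for x'1/x̄1 are ours):

```python
# An unrestricted x1 is encoded as x1 = x1p - x1m with x1p, x1m >= 0.
def split(x1):
    x1m = max(0.0, -x1)   # this is  -Min[0, x1]  from point (ii)
    x1p = x1 + x1m        # equals Max[0, x1]
    return x1p, x1m

for x1 in (-4.0, 0.0, 2.5):
    x1p, x1m = split(x1)
    assert x1p >= 0 and x1m >= 0 and x1p - x1m == x1
```

At most one of the two halves is nonzero, which is why the construction is lossless.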

Example: Linear program (P1) can be written in standard form after adding
slack variables (which we call here x3, x4, x5):

        2x1 +  x2 + x3           = 8
(P1)     x1 + 2x2      + x4      = 7        x_j ≥ 0 for j = 1,2,...,5
                x2          + x5 = 3
        4x1 + 5x2                = z (Max)

Remark 9: An expression of Theorem 1 in matrix notation is given in Exercise
12.

5. Elements of Geometry of Linear Programs

This short section, which is a preview of Chapter VIII, is aimed at giving
an intuitive background to linear programming theory and to the method of
solution that will be presented later: the simplex algorithm. Let us consider
the linear program

        Ax ≤ b        x ≥ 0
(P)
        cx = z (Max)

The set of points in R^n which satisfy, for a given i ∈ {1,2,...,m},

(13)    A_i x = b_i

constitute a "hyperplane." This hyperplane separates R^n in two regions, two
"half spaces," such that two points are in the same half space if and only if
the segment joining them does not intersect the hyperplane (13).
The set of points satisfying

        Ax ≤ b

is the intersection of half spaces

        A_i x ≤ b_i        for i = 1,2,...,m

This intersection (itself intersected with the "non-negative orthant" x ≥ 0)
is a "convex polyhedron." A tetrahedron, a cube, and a diamond are examples
of convex polyhedra in R^3. A convex polygon is a convex polyhedron in R^2
(see Figure I.1).
The points of the polyhedron of feasible solutions to (P) which satisfy

        A_i x = b_i   for an i ∈ {1,2,...,m}    or    x_j = 0   for a j ∈ {1,2,...,n}

belong to a "face" of the polyhedron. Intersection of n faces determines (in
general) a "vertex" of the polyhedron. Intersection of n - 1 faces determines
(in general) an "edge" of the polyhedron. Linear program (P) consists in
finding an x and a z such that the hyperplane

        cx = z

cuts the domain of feasible solutions "as far as possible" in the direction of
increasing z (this is the procedure we used in solving example (P1) in section
1). In general, the intersection is reduced to a point x which is a vertex of
the polyhedron.
The simplex algorithm proposes a journey from a starting (nonoptimal)
vertex to an optimal vertex through a series of visits of a chain of adjacent
vertices of the polyhedron.
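The "optimum at a vertex" picture can be tried numerically. The sketch below is our own code, and the data (2x1 + x2 ≤ 8, x1 + 2x2 ≤ 7, maximize 4x1 + 5x2) is an assumption in the spirit of example (P1): it enumerates all intersections of constraint boundaries, keeps the feasible ones — the vertices — and picks the best.

```python
from itertools import combinations

# Each row (a1, a2, e) encodes  a1 x1 + a2 x2 <= e ; the last two rows are x >= 0.
rows = [(2, 1, 8), (1, 2, 7), (-1, 0, 0), (0, -1, 0)]

def intersect(r, s):
    """Intersection of the two boundary lines, or None if parallel."""
    (a, b, e), (c, d, f) = r, s
    det = a * d - b * c
    if det == 0:
        return None
    return ((e * d - b * f) / det, (a * f - e * c) / det)

feasible = lambda p: all(a * p[0] + b * p[1] <= e + 1e-9 for a, b, e in rows)
vertices = [p for r, s in combinations(rows, 2)
            if (p := intersect(r, s)) and feasible(p)]
best = max(vertices, key=lambda p: 4 * p[0] + 5 * p[1])
```

The maximum lands on the vertex (3, 2), where the hyperplane cx = z cuts the polyhedron "as far as possible"; the simplex algorithm reaches it by walking along edges instead of testing every vertex.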

EXERCISES

1. Consider the linear program

         x1 + 2x2 - 3x3 = 1
        2x1 +  x2 - 5x3 ≤ 2
(P)      x1 + 3x2 -  x3 ≥ 1        x1 ≥ 0,  x2 ≥ 0
         x1 + 2x2 + 3x3 = z (Max)

(a) Write this linear program (1) in canonical form; (2) in standard
form.
(b) (P) being written in standard form, we have m = 3, n = 6.
Write A_2, A^3, A^4. Let I = {1,3}, J = {2,4,5} and write b_I, c_J, A_I, A^J, A_I^J.

2. A manufacturer wishes to produce 100 lbs. of an alloy that is 30 percent
lead, 30 percent zinc, and 40 percent tin. Suppose that there are on the mar-
ket alloys 1, 2, 3, 4, 5, 6, 7, 8, 9 with compositions and prices as follows:

  Alloy          1     2     3     4     5     6     7     8     9   Desired
  % lead        10    10    40    60    30    30    30    50    20     30
  % zinc        10    30    50    30    30    40    20    40    30     30
  % tin         80    60    10    10    40    30    50    10    50     40
  Cost per lb. 4.1   4.3   5.8   6.0   7.6   7.5   7.3   6.9   7.3    Min

Obviously, the manufacturer can purchase alloy 5 alone, but this will cost
$760. If he buys 25 lbs. of 1, 25 lbs. of 2, and 50 lbs. of 8, he gets 100 lbs. of
mixture with the required proportion, which costs only $555. The number of
combinations that will give the desired blend seems to be infinite (is it?)

and the manufacturer would like to find a systematic way of getting a result.
Write this problem as a linear program (the cost of blending is assumed to be
0) .
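Before writing the linear program, the two purchase plans quoted in Exercise 2 can be sanity-checked numerically. The sketch below is our own verification code (not part of the exercise); the composition numbers are as we read the table.

```python
# Each alloy: (% lead, % zinc, % tin, cost per lb).
alloys = {1: (10, 10, 80, 4.1), 2: (10, 30, 60, 4.3),
          5: (30, 30, 40, 7.6), 8: (50, 40, 10, 6.9)}

def blend(plan):
    """plan: {alloy: lbs}.  Returns ([%lead, %zinc, %tin], total cost)."""
    total = sum(plan.values())
    mix = [sum(lbs * alloys[a][k] for a, lbs in plan.items()) / total
           for k in range(3)]
    cost = sum(lbs * alloys[a][3] for a, lbs in plan.items())
    return mix, cost

mix, cost = blend({1: 25, 2: 25, 8: 50})
assert mix == [30, 30, 40] and round(cost) == 555   # the $555 plan checks out
assert blend({5: 100})[1] == 760                     # alloy 5 alone costs $760
```

The linear program simply searches over all such plans for the cheapest one meeting the 30/30/40 target.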
3. A farm has two lots, A and B (200 and 400 acres, respectively). Six
kinds of cereals, I, II, III, IV, V, and VI, can be grown on lots A and B.
The profit for each 100 lbs. of cereal is:

                      I     II    III    IV     V     VI
  Profit/100 lbs.    24     31     14    18    61     47

To grow 100 lbs. of cereal we need some area (in acres) and quantities of water
(in cubic meters):

                      I      II     III     IV      V       VI
  Area on lot A     0.01   0.015   0.01   0.008   0.025   0.02
  Area on lot B     0.01   0.017   0.012  0.01    0.03    0.017
  Water (in m3)      60     80      50     70     107      90

The total volume of water that is available is 400,000 m3. We try to make a
maximum profit while respecting various constraints. Write this problem as a
linear program.

4. After n experiments a physicist is sure that a certain quantity Q
varies with variable t. He has good reasons to believe that the law of varia-
tion of Q is of the form

(*)     Q(t) = a sin t + b tan t + c

and he wants to determine "as well as possible" the values of parameters a, b,
and c from the n experiments made. These experiments have given him for
t1, t2, ..., tn values Q1, Q2, ..., Qn.
His experiments were not error free and the law (*) of variation may be
just an approximation. Thus the linear system

        a sin t_i + b tan t_i + c = Q_i        i = 1,2,...,n   (n > 3)

with three unknowns a, b, c has no solution. The physicist has two different
ideas of what a good adjustment may mean:
(a) Find the values of a, b, c that minimize

        Σ_{i=1}^{n} |a sin t_i + b tan t_i + c - Q_i|

(b) Find the values of a, b, c that minimize

        Max        (a sin t_i + b tan t_i + c - Q_i)
    i=1,2,...,n

In either case, and for physical reasons, coefficients a, b, and c must be non-
negative. Show that each of these adjustments can be written as a linear
program.

5. To feed a given animal we need four nutrients, A, B, C, and D. The
minimal quantity each animal needs per day is

        0.4 kg of A;  0.61 kg of B;  2 kg of C;  1.7 kg of D

To obtain the food we mix two flours, M and N:

        1 kg of M contains 100 g of A, no B, 100 g of C, 200 g of D
        1 kg of N contains no A, 100 g of B, 200 g of C, 100 g of D

With $1 we can buy

        4 kg of M and 8 kg of N

Write as a linear program the problem of finding the daily quantity of flours
M and N that must be bought to feed one animal at minimum cost. Solve
graphically.

6. The cost matrix of a transportation problem (analogous to problem
(P2) of Section 1) with 3 plants (corresponding to harbors) and 4 markets
(corresponding to factories) is

        M1   M2   M3   M4
  P1     4    4    9    3
  P2     3    5    8    8
  P3     2    6    5    7

Supplies are 3, 5, 7 in plants P1, P2, P3, respectively. Consumptions are
2, 5, 4, 4 in M1, M2, M3, M4, respectively. Write this transportation problem
as a linear program and check that the following shipping matrix corresponds
to a feasible solution:

        M1   M2   M3   M4
  P1     0    0    0    3
  P2     0    5    0    0
  P3     2    0    4    1

7. Show that the problem

        Ax = b        α_j ≤ x_j ≤ β_j
(P)
        cx = z (Max)

(where α_j and β_j are given reals) is a linear program. Write (P) in standard
form.
8. Show that the problem

        Ax + Uv = b        x ≥ 0
(P)          m
        cx - Σ |v_i| = z (Max)
            i=1

(where U is the m × m unit matrix) is a linear program.

9. We consider the two linear programs

        Ax ≤ b    x ≥ 0               Ax ≤ b    x_j ≥ 0,  j = 2,3,...,n
(P)                            (P1)
        cx = z (Max)                  cx = z (Max)

Show that if x̄ is an optimal solution of (P1) with x̄1 ≥ 0, then x̄ is an optimal
solution of (P).

10. Show that linear program (P2) can be written in standard form without
adding slack variables.

11. Let A be an m × n-matrix and B be an n × q-matrix. Matrix B can be
viewed as a device that transforms q-column vectors x into n-column vectors
y by

        y = Bx

Matrix A can be viewed as a device that transforms n-column vectors y into
m-column vectors z by

        z = Ay

(a) For the numerical example, show that the concatenation of the two devices
is the device C, which transforms x into z by

        z = Cx

where C = AB.

(b) Prove the property in general.

12. A is an m × n-matrix. Let I, J, K be a partition of {1,2,...,n}
(i.e., I, J, K ⊂ {1,2,...,n}; I∩J = J∩K = K∩I = ∅; I∪J∪K = {1,2,...,n}) and
L, M, N be a partition of {1,2,...,m}. Consider the linear program

        A_L x ≤ b_L        x_I ≥ 0
        A_M x ≥ b_M        x_J ≤ 0
(P)     A_N x = b_N        x_K unconstrained
        cx = z (Max)

Give the structure of Ā, b̄, c̄ and Â, b̂, ĉ if

        Āx̄ ≥ b̄    x̄ ≥ 0              Âx̂ = b̂    x̂ ≥ 0
(P̄)                           (P̂)
        c̄x̄ = z (Max)                 ĉx̂ = z (Max)

are equivalent to (P).
Chapter II. Dual Linear Programs

In this chapter we show that it is possible to associate to any linear pro-
gram another linear program (which we call "dual"). Using different types of
arguments we prove how close the relationship is between a linear program and
its dual. In fact, these two programs must be considered as two facets of the
same problem rather than as two different problems. And we will see in subse-
quent chapters that when one solves a linear program, its dual is solved at the
same time. Thus the concept of duality is very central to linear programming
and this is why it is introduced so early in the book.

1. Formal Definition of the Dual Linear Program

Definition 1: Given a linear program in canonical form

        Ax ≤ b        x ≥ 0
(P)
        cx = z (Max)

(where A is a (given) m × n-matrix, b is a (given) m-column vector, c is a (given)
n-row vector, and x is an (unknown) n-column vector), we call the "dual" of
linear program (P)

        yA ≥ c        y ≥ 0
(D)
        yb = w (Min)

(where A, b, and c are the same as in (P) and y is an (unknown) m-row vector).
This definition is due to Dantzig and von Neumann.

Remark 1: In detached coefficient form (P) is written

        A_1^1 x1 + A_1^2 x2 + ... + A_1^n xn ≤ b_1
        A_2^1 x1 + A_2^2 x2 + ... + A_2^n xn ≤ b_2        x_j ≥ 0
(P)     ..........................................        j = 1,2,...,n
        A_m^1 x1 + A_m^2 x2 + ... + A_m^n xn ≤ b_m
        c^1 x1 + c^2 x2 + ... + c^n xn = z (Max)

and (D) is written

        A_1^1 y1 + A_2^1 y2 + ... + A_m^1 ym ≥ c^1
        A_1^2 y1 + A_2^2 y2 + ... + A_m^2 ym ≥ c^2        y_i ≥ 0
(D)     ..........................................        i = 1,2,...,m
        A_1^n y1 + A_2^n y2 + ... + A_m^n ym ≥ c^n
        b_1 y1 + b_2 y2 + ... + b_m ym = w (Min)

Remark 2: We note that the variables of the dual are in a one-to-one corres-
pondence with the constraints of the linear program we started with (for con-
venience this linear program is called the "primal"), while the constraints of
the dual are in one-to-one correspondence with the variables of the primal.
This is shown in the following diagram:

                 x1     x2     x3    ...    xn
    y1          A_1^1  A_1^2  A_1^3  ...  A_1^n    ≤ b_1
    y2          A_2^1  A_2^2  A_2^3  ...  A_2^n    ≤ b_2        x_j ≥ 0
    ...          ...    ...    ...   ...   ...                  j = 1,2,...,n
    ym          A_m^1  A_m^2  A_m^3  ...  A_m^n    ≤ b_m
                 ≥      ≥      ≥     ...    ≥
                c^1    c^2    c^3    ...   c^n     ← objective function
                                                     to maximize
    y_i ≥ 0,  i = 1,2,...,m;  the right-hand column, read downward, gives the
    objective function to minimize.

Reading this table "horizontally" gives problem (P); a "vertical" reading
gives its dual.

Example: The dual of problem (P1) of Section I.1(a) is

        2y1 + y2 ≥ 4

Note that, for convenience, we have written the components of vector y as
if it were a column vector. The reader must convince himself that it makes no
difference.
Note that y1 = 1, y2 = 2 is a feasible solution of (D1). The corres-
ponding value of the objective function w is 22.
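Definition 1 can be checked mechanically. In the sketch below (our own code; the data A, b, c is our reading of example (P1) and should be taken as an assumption), we verify that y = (1, 2) satisfies yA ≥ c, y ≥ 0 and compute yb:

```python
A = [[2, 1], [1, 2]]
b = [8, 7]
c = [4, 5]

def dual_feasible(y):
    """Check yA >= c and y >= 0 componentwise."""
    yA = [sum(y[i] * A[i][j] for i in range(len(A))) for j in range(len(c))]
    return all(v >= cj for v, cj in zip(yA, c)) and all(v >= 0 for v in y)

y = [1, 2]
assert dual_feasible(y)
assert sum(yi * bi for yi, bi in zip(y, b)) == 22   # w = yb = 22, as in the text
```

Note how the j-th dual constraint is built column by column from A, exactly the "vertical reading" of Remark 2.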
Remark 3: We have spent some time in Chapter I showing that any linear program
can be written in canonical form. Thus any linear program has a dual. We will
now answer the questions:
(a) What is the dual of the dual?
(b) What is the dual of a linear program written in standard form? Under
mixed form?

Theorem 1: The dual of the dual is the primal.

Proof: Linear program (D) can be written

        -A^T y^T ≤ -c^T        y^T ≥ 0
(D)
        -b^T y^T = w' (Max)

(D) being in canonical form, we can take its dual in following the process of
Definition 1. The dual of (D) is

        -uA^T ≥ -b^T        u ≥ 0
(DD)
        -uc^T = w'' (Min)

where u is an (unknown) n-row vector. Let

        x = u^T

(DD) can be written equivalently (by taking the transpose and multiplying
constraints by -1)

        Ax ≤ b        x ≥ 0
(P)
        cx = z (Max)

q.e.d.

Remark 4: This property is a justification of the term "dual."

Remark 5: The dual of the linear program in standard form,

        Ax = b        x ≥ 0
(PS)
        cx = z (Max)

is

        yA ≥ c        y unrestricted in sign†
(DS)
        yb = w (Min)

To prove this we write (PS) in canonical form (as we learned to do in Section
I.4):

        A_i x ≤ b_i          i = 1,2,...,m
(PS')   -A_i x ≤ -b_i        i = 1,2,...,m        x ≥ 0
        cx = z (Max)

And we can apply to (PS') the process of Definition 1 to find its dual. The
constraints of (PS') (other than x ≥ 0) are in one of two groups. Let us
associate dual variable y'_i to the ith constraint of the first group and y''_i
to the ith constraint of the second group. Let y' (resp. y'') be the m-row
vector the ith component of which is y'_i (resp. y''_i). Then the dual of (PS')
is

        y'A - y''A ≥ c
(DS')   y'b - y''b = w (Min)
        y', y'' ≥ 0

By posing y = y' - y'', we see that (DS') is equivalent to (DS) (Section I.4).

Remark 6: More generally, it is convenient to be able to write the dual of a
linear program without passing through the canonical form (however, for exer-
cise and for checking, we recommend that the beginners always follow this
process).
Let us assume that the objective function is to be maximized and that the
ith constraint is

(a)  A_i x ≤ b_i

In this case, when we go to the equivalent canonical form, the constraint will
not be changed and the dual variable y_i must be ≥ 0.

(b)  A_i x ≥ b_i

In this case, when we go to the equivalent canonical form, the constraint will
be multiplied by -1 and the corresponding dual variable y_i must be ≤ 0 (-y_i ≥ 0).

†This is also denoted y ≷ 0.

(c)  A_i x = b_i

In this case, when we go to the equivalent canonical form, we have two con-
straints

        A_i x ≤ b_i            -A_i x ≤ -b_i

If we call y'_i and y''_i the corresponding dual variables, we note that each time
one of these variables is written in an expression with a coefficient, the
other one is there with the opposite coefficient. So that we can pose

        y_i = y'_i - y''_i

this variable of the dual (we say "dual variable") being not constrained.
Go on assuming that the objective function is to be maximized and that the
constraint on primal variable x_j is

(a)  x_j ≥ 0

When we go to the equivalent canonical form we do not change variables and the
corresponding constraint of the dual is

        Σ_{i=1}^{m} A_i^j y_i ≥ c^j

(b)  x_j ≤ 0

To go to the equivalent canonical form we pose x'_j = -x_j and the corresponding
constraint of the dual will be

        -Σ_{i=1}^{m} A_i^j y_i ≥ -c^j        i.e.        Σ_{i=1}^{m} A_i^j y_i ≤ c^j

(c)  x_j not constrained (x_j ≷ 0)

In this case, to go to the equivalent canonical form we pose (see Remark I.10)

        x_j = x'_j - x''_j        x'_j, x''_j ≥ 0
j j

To variable x_j will correspond the constraints

        Σ_{i=1}^{m} A_i^j y_i ≥ c^j        and        -Σ_{i=1}^{m} A_i^j y_i ≥ -c^j

i.e.

        Σ_{i=1}^{m} A_i^j y_i = c^j

These remarks can be put together as in Table II.1.

TABLE II.1: Duality Rules

    Primal-Maximization               Dual-Minimization
    ith constraint ≤                  ith variable ≥ 0
    ith constraint ≥                  ith variable ≤ 0
    ith constraint =                  ith variable unrestricted
    jth variable ≥ 0                  jth constraint ≥
    jth variable ≤ 0                  jth constraint ≤
    jth variable unrestricted         jth constraint =

    Primal-Minimization               Dual-Maximization
    ith constraint ≥                  ith variable ≥ 0
    ith constraint ≤                  ith variable ≤ 0
    ith constraint =                  ith variable unrestricted
    jth variable ≥ 0                  jth constraint ≤
    jth variable ≤ 0                  jth constraint ≥
    jth variable unrestricted         jth constraint =

The proof of the lower part of the table (which is equivalent to the first part
read from right to left) is left to the reader as an exercise.
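Table II.1 is mechanical enough to encode as a lookup table. The sketch below is our own encoding (covering only the maximization-primal half; the minimization half is the same table read right to left):

```python
# Duality rules for a maximization primal: (row/column kind, type) -> dual type.
DUAL_OF = {
    ("constraint", "<="): ("variable", ">= 0"),
    ("constraint", ">="): ("variable", "<= 0"),
    ("constraint", "="): ("variable", "unrestricted"),
    ("variable", ">= 0"): ("constraint", ">="),
    ("variable", "<= 0"): ("constraint", "<="),
    ("variable", "unrestricted"): ("constraint", "="),
}

# An equality constraint in the primal gives an unrestricted dual variable,
# exactly as in the proof of Remark 5.
assert DUAL_OF[("constraint", "=")] == ("variable", "unrestricted")
assert DUAL_OF[("variable", ">= 0")] == ("constraint", ">=")
```

Any mixed-form program can then be dualized constraint by constraint and variable by variable, without first passing through the canonical form.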

2. The Objective Function Values of Dual Linear Programs

Theorem 2: Let (P) and (D) be a couple of dual linear programs

        Ax ≤ b,  x ≥ 0                 yA ≥ c,  y ≥ 0
(P)                            (D)
        cx = z (Max)                   yb = w (Min)

For every couple of feasible solutions x̄, ȳ of (P) and (D), respectively,
we have

(1)     cx̄ ≤ ȳb

Proof: Let us write down, in detached coefficient form, that x̄ is a feasible
solution to (P):

        Σ_{j=1}^{n} A_i^j x̄_j ≤ b_i        i = 1,...,m
        x̄_j ≥ 0                            j = 1,...,n

Multiply each of these inequalities by the corresponding ȳ_i.
Since ȳ_i ≥ 0 by assumption, this operation does not change the sign of the
inequality:

        ȳ_i Σ_{j=1}^{n} A_i^j x̄_j ≤ ȳ_i b_i        i = 1,...,m

Now adding all these inequalities, we get

(2)     Σ_{i=1}^{m} Σ_{j=1}^{n} ȳ_i A_i^j x̄_j ≤ Σ_{i=1}^{m} ȳ_i b_i

Starting from the set of inequalities

        Σ_{i=1}^{m} ȳ_i A_i^j ≥ c^j        j = 1,...,n

which states that ȳ is a feasible solution of (D), and multiplying each in-
equality by the corresponding non-negative x̄_j and adding, we get

        Σ_{j=1}^{n} Σ_{i=1}^{m} ȳ_i A_i^j x̄_j ≥ Σ_{j=1}^{n} c^j x̄_j

which together with (2) gives the result.
This proof can be formulated more briefly by using matrix notation:
Since x̄ is a feasible solution to (P), we have

        Ax̄ ≤ b

Since ȳ ≥ 0, this gives

(2')    ȳAx̄ ≤ ȳb

In the same way, since ȳ is a feasible solution to (D), we have

        ȳA ≥ c

Since x̄ ≥ 0, this gives

        ȳAx̄ ≥ cx̄

which together with (2') gives the desired result.        q.e.d.

Corollary: Let x̄ be a feasible solution to (P) and ȳ be a feasible solution to
(D); then if cx̄ = ȳb, x̄ and ȳ are optimal solutions of (P) and (D), respectively.

Proof: Suppose that there exists x', a feasible solution of (P), such that
cx' > cx̄. Since cx̄ = ȳb we have cx' > ȳb, which contradicts Theorem 2.

Example: We have seen in the example following Remark 2 that y1 = 1, y2 = 2,
w = 22 is a feasible solution to the dual of (P1). This shows that the solu-
tion x1 = 3, x2 = 2, z = 22 we obtained in Section I.1(a) by geometric argu-
ments is in fact an optimal one.

Remark 7: The importance of the corollary of Theorem 2 lies in the fact that it
provides a "certificate of optimality." Assume that you have found an optimal
solution of a linear program (either by solving it with the simplex algorithm
or by a good guess or by any other way) and that you want to convince a
"supervisor" that your solution is in fact optimal. It will then suffice to
exhibit a dual feasible solution giving the same value to the objective function.
We will see later that if the solution has been obtained through the simplex
algorithm, you will have at hand, together with your optimal solution of the
primal, an optimal solution of the dual.
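Remark 7 can be turned into a small checker. The sketch below is our own code; the data is again our reading of (P1) (max 4x1 + 5x2 subject to 2x1 + x2 ≤ 8, x1 + 2x2 ≤ 7, x ≥ 0) and should be taken as an assumption:

```python
A = [[2, 1], [1, 2]]
b = [8, 7]
c = [4, 5]

def certifies(x, y):
    """True iff x is primal feasible, y is dual feasible, and cx == yb."""
    primal = all(sum(a * v for a, v in zip(row, x)) <= bi
                 for row, bi in zip(A, b)) and all(v >= 0 for v in x)
    yA = [sum(y[i] * A[i][j] for i in range(len(A))) for j in range(len(c))]
    dual = all(v >= cj for v, cj in zip(yA, c)) and all(v >= 0 for v in y)
    cx = sum(cj * v for cj, v in zip(c, x))
    yb = sum(yi * bi for yi, bi in zip(y, b))
    return primal and dual and cx == yb

assert certifies([3, 2], [1, 2])        # both objectives equal 22: optimal
assert not certifies([4, 0], [1, 2])    # cx = 16 < yb = 22: no certificate
```

This is exactly what a "supervisor" needs: feasibility of both solutions plus equality of the two objective values, by the corollary of Theorem 2.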

Remark 8: From Theorem 2 we thus conclude that for every feasible x, cx belongs
to some interval [α, z_max] and that, in the same way, for every feasible
y, yb ∈ [w_min, β]. α may be -∞ or finite and β may be +∞ or finite. Moreover,
these two intervals have at most one point in common (their endpoints), in
which case this common point corresponds to an optimal solution for both primal
and dual problems. What is left unsolved at this point is under which con-
ditions the two intervals might have a common point, or when there might be a
gap between the two intervals. (See Figure II.2.)

[Figure II.2: Possible Ranges of cx and yb — two diagrams of hatched intervals
on the real line: the range of cx (from α up to z_max) and the range of yb
(from w_min up to β), drawn once with a gap between z_max and w_min and once
with the two intervals meeting.]

We will see in Chapter VI that there is, in fact, no gap. Moreover, if one
of the intervals is empty (i.e., if one of the problems does not have a feasible
solution), while the other has a feasible solution, the latter problem does not
have an optimal solution.

3. Economic Interpretation of Duality (An Example)

Up to this point duality has been introduced in a completely abstract
fashion. With any linear program we have associated another one called its
dual. Developments of preceding sections show the strong mathematical connec-
tion between a pair of dual programs. Moreover, we will see (as announced
above) in Chapter IV that when we solve a linear program, solutions for both
primal and dual problems are found simultaneously. Duality is an intrinsic
basic property of linear programming.
In Chapter IX we will give further results of duality. We will show that
for linear programs that are models of real systems, dual variables have prac-
tical (in general economic) interpretations. We now just give an example (in-
spired from [3]) of practical interpretation of the pair of dual problems.
Let us go back to transportation problem (P2) of Section I.1(b)

        x11 + x12 + x13 ≤  550
        x21 + x22 + x23 ≤  350
        -x11 - x21      ≤ -400        x_ij ≥ 0
(P2)    -x12 - x22      ≤ -300        i = 1,2;  j = 1,2,3
        -x13 - x23      ≤ -200
        -5x11 - 6x12 - 3x13 - 3x21 - 5x22 - 4x23 = z (Max)

the dual of which is

        y1 - y3 ≥ -5
        y1 - y4 ≥ -6
        y1 - y5 ≥ -3        y_i ≥ 0
(D2)    y2 - y3 ≥ -3        i = 1,2,...,5
        y2 - y4 ≥ -5
        y2 - y5 ≥ -4
        550y1 + 350y2 - 400y3 - 300y4 - 200y5 = w (Min)
Now suppose that a transportation specialist comes to the firm that needs
product S in the factories of Denver, Phoenix, and Chicago and proposes the
following deal: I shall buy product S in harbors 1 and 2 and pay $π1 for
each ton of S in New York and $π2 for each ton in Seattle. I guarantee to
deliver 400, 300, and 200 tons, respectively, to factories 1, 2, and 3 and charge
you $η1, $η2, $η3 for a ton in each of these places (respectively). I will set
up my prices in such a way that

        η1 - π1 ≤ 5
        η2 - π1 ≤ 6
        η3 - π1 ≤ 3        η_j, π_i ≥ 0
(3)     η1 - π2 ≤ 3
        η2 - π2 ≤ 5
        η3 - π2 ≤ 4

The manager of the firm that needs product S is then forced to agree that
given the transportation costs he will be better off if he lets the transporta-
tion specialist take care of transportation. The deal is therefore closed to

the satisfaction of the transportation specialist. But the latter has some
freedom in choosing his prices (he promised only that constraints (3) will be
satisfied). Since he wants to maximize his return, he wants

        400η1 + 300η2 + 200η3 - 550π1 - 350π2 = w' (Max)

Then we see (after defining y1 = π1, y2 = π2, y3 = η1, y4 = η2, y5 = η3)
that the transportation specialist's problem is precisely (D2), the dual of
(P2).

EXERCISES

1. Write the dual of the linear program

        3x1 +  x2 - 2x3 = 4
         x1 - 2x2 + 3x3 ≤ 1        x1, x2 ≥ 0
(P)     2x1 +  x2 -  x3 ≥ 2        x3 ≤ 0
        3x1 + 4x2 + 2x3 = z (Min)

2. Show (using the example following Remark 2) that the graphical solution of
(P1) found in Section I.1(a) is in fact optimal. Hint: Use the
corollary of Theorem 2.

3. We consider the linear program

        Ax ≤ b,  x ≥ 0
(P)
        cx = z (Max)

(P) is said to be "primal feasible" if b ≥ 0 and "dual feasible" if c ≤ 0.

(a) Show that if (P) is primal feasible, (P) has an obvious feasible
solution.
(b) Show that if (P) is dual feasible, (D) the dual of (P) has an obvious
feasible solution.
(c) Show that if (P) is both primal and dual feasible, (P) has an obvious
optimal solution.
(d) Show that if for an index i, we have

        b_i < 0        and        A_i ≥ 0

(P) has no feasible solution.
(e) Show that if for an index j, we have

        c^j > 0        and        A^j ≤ 0

(P) has no optimal solution.

4. Consider the linear program

        2x1 +  x2 ≤ 6
         x1 -  x2 ≤ 1        x1, x2 ≥ 0
(P)      x1 +  x2 ≤ 3
        3x1 + 2x2 = z (Max)

(a) Write (D), the dual of (P).
(b) Check that x1 = 2, x2 = 1, y1 = 0, y2 = 1/2, y3 = 5/2
are feasible solutions to (P) and (D), respectively. Conclusion?

5. Write the dual of the linear program of Exercise I.5. A competitor Y of
the firm X that sells flours M and N decides to sell nutrients A, B, C, and
D directly. Y will sell A, B, C, and D in the proportions indicated in
Exercise I.5. He wants to be competitive with X and maximize his profit.
(a) Show that the problem of Y is a linear program. Write down this
linear program.
(b) Show that the best price system for Y is to sell D at some price
(what price?) and to give away A, B, and C (recall that the proportions of A,
B, C, and D sold are given).
(c) Deduce from the preceding that the graphical solution of the linear
program of Exercise I.5 is in fact optimal.

6. We consider the following "game" problem:

Minimize σ under the following constraints:

        x ≥ 0
         n
(P)      Σ x_j = 1
        j=1

        A_i x ≤ σ        i = 1,2,...,m

Show that (P) is a linear program, the dual of which can be written



Maximize λ under the following constraints:

        y ≥ 0
         m
(D)      Σ y_i = 1
        i=1

        yA^j ≥ λ        j = 1,2,...,n

(where A is a given m × n-matrix).

7. A function f: E → R is said to be "superadditive" (resp. "subadditive")
if

        f(u + v) ≥ f(u) + f(v)        (resp. f(u + v) ≤ f(u) + f(v))

Show that the value of the objective at the optimum of

        Ax ≤ b        x ≥ 0
(P)
        cx = z (Max)

is a superadditive function of b and a subadditive function of c.

8. Write the dual of the linear program of Exercise I.7.

9. Show that the feasible solution proposed for the transportation problem
of Exercise I.6 is in fact an optimal one.
(Hint: Try a solution to the dual given by η_j - π_i = c_ij whenever x_ij > 0.)
10. Let A be an m × m-matrix, "symmetric" (i.e., A^T = A). The following
linear program is said to be symmetric:

        Ax ≤ b        x ≥ 0
(P)
        (b^T)x = z (Max)

Show that if the linear system

        Ax = b

has a solution with x ≥ 0, this solution is an optimal solution to (P).

11. Let P1, P2, ..., Pn be n points of a network. A commodity is produced
in P1 and consumed in Pn. To each couple (Pi, Pj) we associate a nonnegative
number c_ij: the maximum quantity of commodity that can be shipped from i to j
in one day.

(a) Write, as a linear program, the problem that consists of sending a
maximum amount of commodity from P1 to Pn (the flow is conservative, i.e., there
is no accumulation of commodity at any node). What is the number of constraints,
the number of variables?
(b) Write the dual of this problem.

12. Show that in linear program (P2) inequalities can be replaced by
equations without adding slack variables. Prove that if (η1, η2, η3, π1, π2)
is a feasible solution to the dual of (P2), so is

        (η1 + a, η2 + a, η3 + a, π1 + a, π2 + a)        a ∈ R

and give an economic interpretation.
Chapter III. Elements of the Theory of Linear Systems

If we look at a linear program in standard form,

        Ax = b        x ≥ 0
(P)
        cx = z (Max)

we can see it is made of:
1. A system of linear equations Ax = b (we also say a "linear system").
2. A set of nonnegativity constraints on the variables x ≥ 0.
3. An objective function cx = z, which we try to make maximum.

Thus one must not be surprised in finding that the theory of linear sys-
tems plays a central role in linear programming. The goal of this chapter is
to recall elements of this theory that will be needed in the sequel. In the
first section we define the concept of "solution" of a (general) linear system
(which may contain a number of equations smaller than, greater than, or equal
to the number of unknowns) and of redundancy. In Sections 2 and 3 we show how
manipulations on the equations of a linear system can be viewed as matrix
multiplication of the matrix of coefficients [A,b] of the system. Finally, in
Section 4 we introduce the very important "pivot operation" and describe
(without proof) how a linear system can be solved through a sequence of pivot
operations.
In this chapter (as in the preceding ones) A is an m × n-matrix, b is
an m-column vector, and c is an n-row vector. Consider the linear system

(1)     Ax = b
In detached coefficient form, (1) is written

        A_1^1 x1 + A_1^2 x2 + ... + A_1^n xn = b_1
        A_2^1 x1 + A_2^2 x2 + ... + A_2^n xn = b_2
(1')    ....................................
        A_m^1 x1 + A_m^2 x2 + ... + A_m^n xn = b_m

(1) can also be written

(1'')   A_i x = b_i        i = 1,2,...,m


1. Solution of Linear Systems (Definition); Redundancy

It is obvious to anyone that the two (systems of) equations

(S)      x1 +  x2 - 1 = 0
(S')    2x1 + 2x2 - 2 = 0

are, in fact, identical. We now define more rigorously the equivalence of
linear systems.

Definition 1: Let Ā be an m̄ × n-matrix and b̄ be an m̄-column vector. We say
that linear system (1) is "equivalent to" the linear system

        Āx = b̄

if and only if these two systems have the same solution set, i.e.,

        {x | Ax = b} = {x | Āx = b̄}

Theorem 1: Let y be an m-row vector. (1) is equivalent to

        Ax = b
(1̄)
        (yA)x = yb

(Note that yA is an n-row vector and yb is a scalar; (1̄) is a system with m + 1
equations.)

Proof: Let x̄ be a solution of (1). We have

        A_i x̄ = b_i        for i = 1,2,...,m

Multiplying the ith equality by y_i and adding, we get

        yAx̄ = yb

Thus x̄ is a solution of (1̄). Conversely, if x̄ is a solution of (1̄), it is a
solution of (1).

Definition 2: An equation that belongs to a linear system is said to be
"redundant" if it is a linear combination of the other equations of the system.
If a system contains a redundant equation, the system itself is said to be
redundant.

Remark 1: The (m + 1)st equation of (1̄) is, by construction, a redundant one. In
general, linear system (1) will contain a redundant equation if there exists an
m-row vector y such that

(2)     yA = 0,   yb = 0,   y ≠ 0

The ith equation of (1) is then redundant if y_i ≠ 0.

Theorem 2: Assume that an equation of system (1) is redundant and denote by (1̄) the linear system obtained in deleting this redundant equation. Then (1) and (1̄) are equivalent.

Proof: Is the same as the proof of Theorem 1. If x is a solution of (1̄), it satisfies the redundant equation and thus (1). If x is a solution of (1), it is also a solution of (1̄).

Remark 2: A redundant equation can thus be deleted from a linear system without changing the solution set: it "adds no information" about the system. This is why it is called redundant.

Great care must be taken of the following fact: Redundancy is a property of an equation in relation to a whole system. A system may contain several redundant equations, but after deletion of a single one of these, the system might become nonredundant.

Definition 3: A linear system (1) is said to be "inconsistent" if there exists an m-row vector y such that

(2*)  yA = 0,  yb ≠ 0

Remark 3: From Theorem 1 it is apparent that if (2*) holds, then (1) has no solution. The (m+1)-st equation of (1̄) would read

0x = yb ≠ 0

which is clearly infeasible. In fact, we will see in Section 4 that if (1) has no solution, then one can find y such that (2*) holds, so that a system is inconsistent if and only if it has no solution.

A linear system that is neither inconsistent nor redundant is said to be of "full rank." In this case, from (2) and (2*) we have

(2')  yA = 0  ⟹  y = 0

Examples: 1. Consider the linear systems

(E1)  x1 + 2x2 + x3 = 5
      2x1 + 3x2 + x3 = 7
      x1 + 3x2 + 2x3 = 8

(E2)  x1      - x3 = -1
           x2 + x3 = 3

(E3)  x1 + x2      = 2
           x2 + x3 = 3

(E4)       x2 + x3 = 3
      x1 + x2      = 2

(E1) is redundant, as can be seen by taking y = (3, -1, -1). Deleting the first equation, we get the equivalent system

(E'1)  2x1 + 3x2 + x3 = 7
       x1 + 3x2 + 2x3 = 8

Taking y = (1, -1), we get the first equation of (E2). Taking y = (-1/3, 2/3), we get the second equation of (E2). Similarly, it can be shown that the equations of (E'1) can be generated by linear combination of those of (E2). Thus, from Theorem 1, (E'1) and (E2) are equivalent. We leave it to the reader to prove that (E2) and (E3) are equivalent.
By looking at (E'1), we cannot say much about its solution set, whereas (E2), (E3), and (E4) are written in a way that gives directly an explicit formulation of the solution set. In (E2), we can consider that x3 is a parameter (or a "free variable"; we will also say a "nonbasic variable") that can be given arbitrary values, the corresponding values of x1 and x2 being

V(E2) = {x1, x2, x3 | x1 = -1 + x3,  x2 = 3 - x3}

Similarly, in (E3), x2 is the nonbasic variable:

V(E3) = {x1, x2, x3 | x1 = 2 - x2,  x3 = 3 - x2}

2. Consider the linear system

(E*1)  x1 + 2x2 + x3 = 5
       2x1 + 3x2 + x3 = 7
       x1 + 3x2 + 2x3 = 7

Taking y = (3, -1, -1), we get (2*), which shows that (E*1) is inconsistent.
These examples should help the reader to understand what we mean by
"solving linear system (1)."
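In the same spirit as before, a short computation (a Python sketch; the data are those of (E*1)) exhibits the certificate of condition (2*):

```python
def left_multiply(y, A, b):
    """Return (yA, yb) for a row vector y."""
    m, n = len(A), len(A[0])
    yA = [sum(y[i] * A[i][j] for i in range(m)) for j in range(n)]
    yb = sum(y[i] * b[i] for i in range(m))
    return yA, yb

A = [[1, 2, 1], [2, 3, 1], [1, 3, 2]]
b_star = [5, 7, 7]                      # right-hand side of (E*1)
yA, yb = left_multiply([3, -1, -1], A, b_star)
print(yA, yb)  # [0, 0, 0] 1 : condition (2*) holds, so (E*1) has no solution
```

Adding 3 times the first equation to minus the second and minus the third yields the impossible equation 0x = 1.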

Definition 4: Solving system (1) is either:

(a) Proving that (1) does not have any solution, or

(b) Finding an equivalent system

(3)  Āx = b̄

such that there exists J ⊂ {1, 2, ..., n} for which Ā^J is, up to a permutation of rows or columns, the unit matrix.

In this case, we say that (1) has been solved with respect to the set J of indices (or variables). Set J is called "basic." The complementary set

J̄ = {1, 2, ..., n} \ J

is the "nonbasic" one.

Remark 4: (E2), (E3), (E4) may be considered, according to Definition 4, as solutions of (E1) with respect to basic sets {1,2}, {1,3}, {1,3}, respectively. (E3) and (E4) are obviously the "same" system. Since we want to say that (E4) is a solution of (E1) as well as (E3), we introduced the fact that Ā^J is "up to a permutation of rows or columns" the unit matrix.

Remark 5: Suppose that (3) is a solution of (1) with respect to basic set J. (3) can be written, by separating basic columns from nonbasic ones,

(3')  Ā^J x_J + Ā^J̄ x_J̄ = b̄

Since Ā^J is, by assumption, up to a permutation of rows, the m x m-unit matrix, one is tempted to write (3') in the following way:

(3")  x_J + Ā^J̄ x_J̄ = b̄

which makes it appear clearly that the system has been solved with respect to the basic set J of variables (give arbitrary values to the nonbasic variables and deduce, by a few multiplications and additions, the corresponding values of the basic variables).

But this formalism assumes that we know which basic variable appears in which equation, i.e., that there is a (perfect) order on the indices in J. So when one reads (3"), one must take J as an ordered set or a "list." For instance, (E3) is the solution of (E1) with respect to (1,3), whereas (E4) is the solution with respect to (3,1).

Form (3") is very convenient and will be used in the sequel. The reader is warned that, especially when we actually compute solutions of systems, this formalism contains a slight ambiguity that necessitates some care (see, for example, Remark V.1).

Remark 6: Recall that we defined "solving" system (1) with respect to the (ordered) set of indices J as finding an equivalent system

(3)  Āx = b̄

for which Ā^J is the unit matrix. (3) cannot contain a redundant equation. This can be seen either by going back to Remark 1 or by noting that no equation of (3) can be obtained as a linear combination of the other equations of the system, since each equation of (3) contains a variable that is not contained in the other equations.

2. Solving Linear Systems Using Matrix Multiplication

Remark 7: Recall that we defined (Definition I.3) the product of the m x n-matrix A by the n x q-matrix B (or the product of B by A on its left) as being the m x q-matrix C defined by

(4)  C_i^j = A_i^1 B_1^j + A_i^2 B_2^j + ... + A_i^n B_n^j

Note that (4) can also be written

(4')  C^j = A B^j

which means that to obtain the j-th column of the product, we just have to multiply matrix A by the n-column vector B^j. In other words, the j-th column of C just depends on A and on the j-th column of B.
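Formula (4') translates directly into code. The sketch below (Python, with matrices stored as lists of rows) builds the product one column at a time:

```python
def mat_vec(A, v):
    """Multiply matrix A by the column vector v."""
    return [sum(A[i][k] * v[k] for k in range(len(v))) for i in range(len(A))]

def mat_mul_by_columns(A, B):
    """Form C = A x B column by column, as in (4'): C^j = A B^j."""
    m, n, q = len(A), len(B), len(B[0])
    cols = [mat_vec(A, [B[k][j] for k in range(n)]) for j in range(q)]
    return [[cols[j][i] for j in range(q)] for i in range(m)]  # reassemble rows

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_mul_by_columns(A, B))  # [[19, 22], [43, 50]]
```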

Remark 8: We leave it as an exercise (Exercise 5) to show that the matrix product is associative, i.e.,

A x (B x C) = (A x B) x C

if the product exists.

Definition 5: Let B be an m x m-matrix and U be the m x m-unit matrix (Definition I.12). If there exists an m x m-matrix C such that

C x B = U

then

(a) B is said to be "regular";

(b) C is said to be the inverse of B and is denoted B^-1.

If no such C exists, B is nonregular or "singular."

Remark 9: We leave it as an exercise to show the following properties:

(i) B x B^-1 = B^-1 x B = U; (B^-1)^-1 = B; for a given B, B^-1, if it exists, is unique.

(ii) Let B and C be two m x m nonsingular matrices. Then

(B x C)^-1 = C^-1 x B^-1

and thus B x C is nonsingular if B and C are nonsingular.

Theorem 3: Let B be a regular m x m-matrix. Linear system (1) is equivalent to

(5)  (BA)x = Bb

(The property is true whether or not (1) is redundant.)

Proof: Let x be a solution of (1). We have

A_i x = b_i,   i = 1, 2, ..., m

Multiplying the i-th equality by B_k^i and adding, we get

(B_k A)x = B_k b,   k = 1, 2, ..., m

which shows that x is a solution of (5).

Conversely, the same proof would show that if x is a solution of (5), it is also a solution of

(B^-1 B A)x = B^-1 B b

and thus of (1), since matrix products are associative.

Remark 10: As a consequence of Theorem 3, we see that to solve the full-rank linear system (1) with respect to the basic set J, "it suffices" to multiply both matrix A and vector b on their left by B = (A^J)^-1.

Thus, full-rank system (1) can be solved with respect to basic set J (in other words, J is a basic set with respect to which (1) can be solved) if A^J is square nonsingular.

We will see in the sequel how to solve (1) without explicitly computing (A^J)^-1.

Examples: 1. For system (E'1), we have

A = [2  3  1]     b = [7]
    [1  3  2]         [8]

Take J = {1,2}:

(A^J)^-1 = [  1    -1 ]
           [-1/3  2/3 ]

The reader will check that

(A^J)^-1 A = [1  0  -1]     (A^J)^-1 b = [-1]
             [0  1   1]                  [ 3]

and we have the coefficients of (E2).
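This computation is easy to reproduce. A sketch in Python (exact rational arithmetic via the standard library's fractions module; the 2 x 2 inverse is obtained from the adjugate formula):

```python
from fractions import Fraction as F

A = [[F(2), F(3), F(1)],
     [F(1), F(3), F(2)]]       # left-hand side of (E'1)
b = [F(7), F(8)]

# A^J for J = {1,2}, inverted by the 2x2 adjugate formula.
p, q = A[0][0], A[0][1]
r, s = A[1][0], A[1][1]
det = p * s - q * r
B = [[ s / det, -q / det],
     [-r / det,  p / det]]     # B = (A^J)^-1

BA = [[sum(B[i][k] * A[k][j] for k in range(2)) for j in range(3)] for i in range(2)]
Bb = [sum(B[i][k] * b[k] for k in range(2)) for i in range(2)]
print([[int(v) for v in row] for row in BA])  # [[1, 0, -1], [0, 1, 1]]
print([int(v) for v in Bb])                   # [-1, 3]
```

These are exactly the coefficients of (E2).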

2. Consider the system

(6)  2x1 - 3x2 + 7x3 - 2x4 = 5
     3x1 + 4x2 + 2x3 - 3x4 = 33

This system is nonredundant, but J = {1,4} is not a basic set. In effect, A^{1,4} is singular. But J = {1,2} is a basic set, as we will see later.

Remark 11: To keep this text as compact as possible, the following properties will be stated without proof:

1. Given any full-rank linear system

(3)  Āx = b̄

equivalent to the full-rank system (1), there exists a nonsingular matrix B such that

Ā = BA,   b̄ = Bb

2. A corollary of the preceding is that all full-rank equivalent systems have the same number of equations.

3. Thus, if (3) is equivalent to (1), and if (3) is full rank, m̄ ≤ m.

4. If (1) is a full-rank linear system, there exists J such that A^J is square nonsingular. Thus, if (1) is full rank, m ≤ n.

Definition 6: An "eta-matrix" is a unit matrix except for one of its columns. An eta-matrix of order m is thus completely specified when we are given:

(a) The rank of its nontrivial column.

(b) The m-column vector constituting this nontrivial column.

The eta-matrix with vector d being the r-th column will be denoted

D(r; d)

Examples: The following are eta-matrices of order 4:

D1 = D(3; (4,3,2,2)) = [1  0  4  0]
                       [0  1  3  0]
                       [0  0  2  0]
                       [0  0  2  1]

D2 = D(1; (0,1,0,0)) = [0  0  0  0]
                       [1  1  0  0]
                       [0  0  1  0]
                       [0  0  0  1]

D3 = D(1; (1,2,3,0)) = [1  0  0  0]
                       [2  1  0  0]
                       [3  0  1  0]
                       [0  0  0  1]

Remark 12: We leave it as an exercise to prove the following properties:

(i) The product of two eta-matrices D(r; d) and D'(r; d') (same r) is an eta-matrix D(r; d") = D(r; d) x D'(r; d') with

d"_i = d'_i + d_i d'_r   for i ≠ r
d"_r = d_r d'_r          for i = r

(ii) An eta-matrix D(r; d) is nonsingular if and only if d_r ≠ 0; when this is the case, D'(r; d') = (D(r; d))^-1 is given by

d'_i = -d_i / d_r   for i ≠ r
d'_r = 1 / d_r      for i = r
Example: Let us go back to the preceding example. We see that D2 is singular, whereas

D1^-1 = [1  0   -2   0]       D3^-1 = [ 1  0  0  0]
        [0  1  -3/2  0]               [-2  1  0  0]
        [0  0   1/2  0]               [-3  0  1  0]
        [0  0   -1   1]               [ 0  0  0  1]
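The inversion formula of Remark 12(ii) is mechanical, as the following sketch (Python, exact arithmetic with fractions) illustrates on D1:

```python
from fractions import Fraction as F

def eta(m, r, d):
    """Eta-matrix of order m: unit matrix whose column r (1-based) is d."""
    E = [[F(int(i == j)) for j in range(m)] for i in range(m)]
    for i in range(m):
        E[i][r - 1] = F(d[i])
    return E

def eta_inverse(m, r, d):
    """Inverse of D(r; d), valid when d_r != 0 (Remark 12(ii))."""
    dr = F(d[r - 1])
    d_prime = [-F(v) / dr for v in d]
    d_prime[r - 1] = 1 / dr
    return eta(m, r, d_prime)

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

D1 = eta(4, 3, [4, 3, 2, 2])
U = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
print(mat_mul(D1, eta_inverse(4, 3, [4, 3, 2, 2])) == U)  # True
```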
3. Finding Equivalent Systems of Linear Equations: Elementary Row Operations

As announced in Remark 10, we will see how to solve (1) without computing (A^J)^-1.

Definition 7: Given the linear system

(1)  Ax = b

we introduce two transformations:

(a) ERO1(r, α): "Elementary row operation of the first kind," which is specified by parameters r ∈ {1, 2, ..., m} and α (a nonzero scalar), consists in replacing the r-th equation of (1),

A_r x = b_r

by

α A_r x = α b_r

leaving the other equations of the system as they are.

(b) ERO2(r, k, α): "Elementary row operation of the second kind," which is specified by parameters r, k ∈ {1, 2, ..., m}, r ≠ k, and scalar α, consists in adding to the k-th equation of (1) the r-th equation multiplied by α. In other words, after ERO2(r, k, α) the k-th equation of the linear system has become

(A_k + α A_r) x = b_k + α b_r

the other equations being left unchanged.

Example: Consider the linear system

(6)  2x1 - 3x2 + 7x3 - 2x4 = 5
     3x1 + 4x2 + 2x3 - 3x4 = 33

Apply ERO1(1, 1/2):

x1 - (3/2)x2 + (7/2)x3 - x4 = 5/2
3x1 + 4x2 + 2x3 - 3x4 = 33

ERO2(1, 2, -3) gives

x1 - (3/2)x2 + (7/2)x3 - x4 = 5/2
(17/2)x2 - (17/2)x3 = 51/2

ERO1(2, 2/17) gives

x1 - (3/2)x2 + (7/2)x3 - x4 = 5/2
x2 - x3 = 3

ERO2(2, 1, 3/2) gives

x1 + 2x3 - x4 = 7
x2 - x3 = 3

We see in this example that a sequence of well-chosen elementary row operations has "solved" the linear system. We will now recall the proof that such manipulations are valid.

Note that system (6) has been solved with respect to basic variables {1, 2}. If we wanted to solve it with respect to {1, 4}, after applying ERO2(1, 2, -3) we would have been blocked: the coefficient of x4 is zero in the second equation.
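The two operations are a few lines of code each. A sketch in Python (fractions for exactness) replays the sequence above on the augmented matrix of (6):

```python
from fractions import Fraction as F

def ero1(M, r, alpha):
    """ERO1(r, alpha): multiply equation r (1-based) by the nonzero scalar alpha."""
    M[r - 1] = [alpha * v for v in M[r - 1]]

def ero2(M, r, k, alpha):
    """ERO2(r, k, alpha): add alpha times equation r to equation k."""
    M[k - 1] = [vk + alpha * vr for vk, vr in zip(M[k - 1], M[r - 1])]

# Augmented matrix [A, b] of system (6).
M = [[F(2), F(-3), F(7), F(-2), F(5)],
     [F(3), F(4), F(2), F(-3), F(33)]]

ero1(M, 1, F(1, 2))
ero2(M, 1, 2, F(-3))
ero1(M, 2, F(2, 17))
ero2(M, 2, 1, F(3, 2))
print([[int(v) for v in row] for row in M])
# [[1, 0, 2, -1, 7], [0, 1, -1, 0, 3]] : solved with respect to {1, 2}
```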

Theorem 4: Elementary row operations introduced in Definition 7 do not change the solution set of (1).

Proof: Since α ≠ 0, α A_r x = α b_r has the same solution set as A_r x = b_r.

Now, by ERO2(r, k, β), the system

(1")  A_i x = b_i,   i = 1, ..., m

is transformed into the system

(1*)  A_i x = b_i,   i ≠ k
      (A_k + β A_r) x = b_k + β b_r

Let x be a solution of (1"); we have

A_i x = b_i   for i = 1, ..., m

and, in particular,

A_k x = b_k
A_r x = b_r

which implies that

β A_r x = β b_r

and thus

(A_k + β A_r) x = b_k + β b_r

so that x is a solution of (1*).

Conversely, if x is a solution of (1*),

A_i x = b_i   for i = 1, ..., m,  i ≠ k

so that

A_r x = b_r

and

(*)  β A_r x = β b_r

But we also have

(A_k + β A_r) x = b_k + β b_r

which by subtraction of (*) gives

A_k x = b_k

so that x is also a solution of (1").

Remark 13: An elementary row operation changes the matrix [A, b] of coefficients of system (1), providing -- from Theorem 4 -- an equivalent system. The same change would have been obtained through matrix multiplication.

ERO1(r, α) consists, in fact, in multiplying matrix [A, b] on its left by the eta-matrix

T(ERO1(r, α)) = D(r; d)   (see Definition 6)

with

d_i = 1   if i ≠ r
d_i = α   if i = r

Similarly, ERO2(r, k, β) consists of multiplying [A, b] on its left by the eta-matrix

T(ERO2(r, k, β)) = D(r; d)

with

d_i = β   if i = k
d_i = 0   if i ≠ k, i ≠ r
d_i = 1   if i = r

Verification of these facts is straightforward calculus.

Example: Let us go back to the solution of system (6):

A = [2  -3  7  -2]     b = [ 5]
    [3   4  2  -3]         [33]

T(ERO1(1, 1/2)) = [1/2  0]     T(ERO2(1, 2, -3)) = [ 1  0]
                  [ 0   1]                         [-3  1]

T(ERO1(2, 2/17)) = [1   0  ]   T(ERO2(2, 1, 3/2)) = [1  3/2]
                   [0  2/17]                        [0   1 ]

and the product

B = T(ERO2(2, 1, 3/2)) x T(ERO1(2, 2/17)) x T(ERO2(1, 2, -3)) x T(ERO1(1, 1/2))

is equal to

B = [ 4/17  3/17]
    [-3/17  2/17]

The reader is invited to check that

B x A = [1  0   2  -1]     B x b = [7]
        [0  1  -1   0]             [3]

or, in other words, that

B^-1 = [2  -3] = A^J,   J = {1, 2}
       [3   4]

4. Pivot Operation

Definition 8: Let [A, b] be the m x (n+1)-matrix called the "augmented matrix" of system (1). The following sequence of elementary row operations is called a "pivot operation on the element A_r^s" (1 ≤ r ≤ m; 1 ≤ s ≤ n; A_r^s ≠ 0):

Perform ERO1(r, 1/A_r^s)
for all i = 1, 2, ..., m except i = r do
    ERO2(r, i, -A_i^s)
end for all
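Packaged as a function, the definition reads as follows (a Python sketch reproducing the ERO sequence on the augmented matrix; the two pivots shown solve system (6)):

```python
from fractions import Fraction as F

def pivot(M, r, s):
    """Pivot operation on element M_r^s (1-based indices), per Definition 8."""
    r, s = r - 1, s - 1
    if M[r][s] == 0:
        raise ValueError("pivot element must be nonzero")
    M[r] = [v / M[r][s] for v in M[r]]            # ERO1(r, 1/A_r^s)
    for i in range(len(M)):
        if i != r:
            coef = M[i][s]                        # ERO2(r, i, -A_i^s)
            M[i] = [vi - coef * vr for vi, vr in zip(M[i], M[r])]

# Augmented matrix of system (6).
M = [[F(2), F(-3), F(7), F(-2), F(5)],
     [F(3), F(4), F(2), F(-3), F(33)]]
pivot(M, 1, 1)
pivot(M, 2, 2)
print([[int(v) for v in row] for row in M])
# [[1, 0, 2, -1, 7], [0, 1, -1, 0, 3]]
```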

Remark 14: In the pivot operation, ERO1(r, 1/A_r^s) is performed before entering the loop. Thus when performing elementary row operations of the second kind (in the loop), matrix A has been changed in such a way that A_r^s = 1.

Theorem 5: The pivot operation on A_r^s is possible if and only if A_r^s ≠ 0. In this case, the solution set of (1) is not changed by the pivot operation.

Proof: If A_r^s = 0, 1/A_r^s is not defined and the pivot operation cannot be performed. If A_r^s ≠ 0, the pivot operation is a sequence of valid elementary row operations. The property is then a consequence of Theorem 4.

Example: Linear system (6) has been solved, in the example following Definition 7, by the succession of two pivot operations:

(a) The first pivot operation on A_1^1 = 2 gives the new linear system (characterized by matrix Ā and vector b̄)

Ā = [1  -3/2    7/2  -1]     b̄ = [ 5/2]
    [0  17/2  -17/2   0]          [51/2]

(b) The second pivot operation on Ā_2^2 = 17/2 gives

Ã = [1  0   2  -1]     b̃ = [7]
    [0  1  -1   0]          [3]

the "solution" of (6).

We note that

Ā = DA,   b̄ = Db

with

D = [ 1/2  0] = T(ERO2(1, 2, -3)) x T(ERO1(1, 1/2))
    [-3/2  1]

and

Ã = D̃Ā,   b̃ = D̃b̄

with

D̃ = [1  3/17] = T(ERO2(2, 1, 3/2)) x T(ERO1(2, 2/17))
    [0  2/17]

Remark 15: It is a direct consequence of the definition of the pivot operation that it is defined uniquely in terms of the augmented matrix of the system. In fact, given a matrix and a nonzero element A_r^s, we can define a pivot operation on this element without speaking of the linear system. Let [Ā, b̄] be the augmented matrix of (1) after a pivot operation on A_r^s has been performed. We have

(7)  Ā_i^j = A_i^j - A_i^s A_r^j / A_r^s   if i ≠ r        b̄_i = b_i - b_r A_i^s / A_r^s   if i ≠ r
     Ā_r^j = A_r^j / A_r^s                 if i = r        b̄_r = b_r / A_r^s               if i = r

We note that for j = s, we have

Ā_i^s = 0   if i ≠ r
Ā_r^s = 1   if i = r

Thus the pivot operation "creates" or "makes appear" a unit column in the s-th column of Ā. Moreover, if A_r^j = 0 (j ≠ s) and A_r^s = 1, then Ā^j = A^j.

Remark 16: The pivot operation is a sequence of elementary row operations, and each of these elementary row operations can be expressed as the multiplication of the augmented matrix [A, b] of the system by a nonsingular matrix on its left (see Remark 13). Thus the pivot operation itself can be expressed as the multiplication of matrix [A, b] on its left by a nonsingular matrix Pi(r, s), which is the product of the matrices corresponding to the elementary row operations composing the pivot operation. Pi(r, s), called the "pivot matrix," is defined by

Pi(r, s) = T(ERO2(r, m, -A_m^s)) x ... x T(ERO2(r, r+1, -A_{r+1}^s))
           x T(ERO2(r, r-1, -A_{r-1}^s)) x ... x T(ERO2(r, 1, -A_1^s)) x T(ERO1(r, 1/A_r^s))

Pi(r, s) is the product of m eta-matrices, all with the nontrivial column in the r-th position. Thus, from Remark 12, Pi(r, s) is an eta-matrix

Pi(r, s) = D(r; d)

with

d_i = -A_i^s / A_r^s   for i ≠ r
d_r = 1 / A_r^s        for i = r

In other words, we have

[Ā, b̄] = Pi(r, s) x [A, b]

and

                            (r-th column)
Pi(r, s) = [1  0  ...  0   -A_1^s/A_r^s      0  ...  0]
           [0  1  ...  0   -A_2^s/A_r^s      0  ...  0]
           [.  .       .         .           .       .]
           [0  0  ...  1   -A_{r-1}^s/A_r^s  0  ...  0]
           [0  0  ...  0       1/A_r^s       0  ...  0]   (r-th row)
           [0  0  ...  0   -A_{r+1}^s/A_r^s  1  ...  0]
           [.  .       .         .           .       .]
           [0  0  ...  0   -A_m^s/A_r^s      0  ...  1]
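A small check of this representation (a Python sketch; the data are the augmented matrix of system (6) and the pivot on A_1^1):

```python
from fractions import Fraction as F

def pivot_matrix(M, r, s):
    """Pi(r, s) = D(r; d) with d_i = -A_i^s/A_r^s (i != r) and d_r = 1/A_r^s."""
    m = len(M)
    piv = M[r - 1][s - 1]
    d = [-M[i][s - 1] / piv for i in range(m)]
    d[r - 1] = 1 / piv
    P = [[F(int(i == j)) for j in range(m)] for i in range(m)]
    for i in range(m):
        P[i][r - 1] = d[i]
    return P

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

M = [[F(2), F(-3), F(7), F(-2), F(5)],
     [F(3), F(4), F(2), F(-3), F(33)]]
P = pivot_matrix(M, 1, 1)
PM = mat_mul(P, M)
print(PM == [[1, F(-3, 2), F(7, 2), -1, F(5, 2)],
             [0, F(17, 2), F(-17, 2), 0, F(51, 2)]])  # True
```

Multiplying [A, b] on its left by Pi(1, 1) indeed produces the matrix obtained by pivoting on A_1^1 = 2.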

Remark 17: It is a direct consequence of what precedes (see, in particular, Remark 15) that the solution of a linear system (1) with respect to a basic set of indices J can be obtained through a sequence of pivot operations performed successively on the indices of columns s ∈ J and indices of rows r such that A_r^s ≠ 0. We will not prove this fact, which is not used in the sequel. The corresponding method is known in the literature as the "Gauss-Jordan elimination method."

In this process, redundant equations will be automatically eliminated. At some point of the procedure, a redundant equation will read 0x = 0. In the same way, if (1) has no solution, then at some point of the procedure an impossible equation 0x = α ≠ 0 will appear.

If we start with a full-rank system (1) and if D(1), D(2), ..., D(m) denote the successive pivot matrices, we will have

(8)  (A^J)^-1 = D(m) x D(m-1) x ... x D(2) x D(1)

Examples: 1. Let us solve the system

(E1)  x1 + 2x2 + x3 = 5
      2x1 + 3x2 + x3 = 7
      x1 + 3x2 + 2x3 = 8

by the Gauss-Jordan elimination method. First, perform a pivot operation on A_1^1 = 1. Now, perform a pivot operation on Ā_2^2 = -1:

(Ê1)  x1      -  x3 = -1
           x2 +  x3 =  3
          0x2 + 0x3 =  0

The redundant equation is apparent and (Ê1) is identical to (E2).
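The same two pivots can be replayed in code. A Python sketch (the pivot function implements Definition 8 directly):

```python
from fractions import Fraction as F

def pivot(M, r, s):
    """Pivot on M_r^s (1-based), as in Definition 8."""
    r, s = r - 1, s - 1
    M[r] = [v / M[r][s] for v in M[r]]
    for i in range(len(M)):
        if i != r:
            coef = M[i][s]
            M[i] = [vi - coef * vr for vi, vr in zip(M[i], M[r])]

# Augmented matrix of (E1).
M = [[F(1), F(2), F(1), F(5)],
     [F(2), F(3), F(1), F(7)],
     [F(1), F(3), F(2), F(8)]]
pivot(M, 1, 1)
pivot(M, 2, 2)
print([[int(v) for v in row] for row in M])
# [[1, 0, -1, -1], [0, 1, 1, 3], [0, 0, 0, 0]] : the third row reads 0x = 0
```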

2. Let us now try to solve system (E*1). The same sequence of pivot operations as in the preceding example leads to

(Ê*1)  x1      -  x3 = -1
            x2 +  x3 =  3
           0x2 + 0x3 = -1

3. Solve

x1 + x2 + x3 - x4 = 1
x1 - x2 + x3 - 3x4 = 3
x1 + 2x2 - x3 + x4 = 4

with respect to the basic set J = {1,2,3}, the augmented matrix of this system being

[A, b] = [1   1   1  -1   1]
         [1  -1   1  -3   3]
         [1   2  -1   1   4]

The system is solved through the sequence of pivot operations shown in the following tableau:

x1   x2   x3   x4  |  b
 1    1    1   -1  |  1
 1   -1    1   -3  |  3      pivot operation r=1, s=1:  D(1) = D(1; (1, -1, -1))
 1    2   -1    1  |  4

 1    1    1   -1  |  1
 0   -2    0   -2  |  2      pivot operation r=2, s=2:  D(2) = D(2; (1/2, -1/2, 1/2))
 0    1   -2    2  |  3

 1    0    1   -2  |  2
 0    1    0    1  | -1      pivot operation r=3, s=3:  D(3) = D(3; (1/2, 0, -1/2))
 0    0   -2    1  |  4

 1    0    0  -3/2 |  4
 0    1    0    1  | -1
 0    0    1  -1/2 | -2

and it is easy to check that

D(3) x D(2) x D(1) = [-1/4   3/4   1/2]     [1   1   1]^-1
                     [ 1/2  -1/2    0 ]  =  [1  -1   1]      = (A^J)^-1
                     [ 3/4  -1/4  -1/2]     [1   2  -1]

EXERCISES

1. Prove that the following two linear systems are equivalent:

   2x1 + 5x2 + 3x3 = 12
   x1 + 2x2 + x3 = 5

2. Is the following linear system redundant?

   x1 + x2 = x3
   2x1 + 3x2 + x3 = 2
   x1 - x2 = 7x3

3. Solve the system

   x1 + x2 + x3 + 10x4 + 2x5 = 10
   x1 + x2 - x3 + x4 + 4x5 = 6
   2x1 + 3x2 - 2x3 + 3x4 + 10x5 = 17

with respect to the basis J = {1,2,3}.

4. Consider the system

(1)  2x1 + 3x2 + 4x3 = 1
     x1 - x2 + x3 = 2
     4x1 + 3x2 + 2x3 = -1
     3x1 - x2 - x3 = 0

(a) Try to solve this system by a succession of pivot operations. Clearly define these operations.

(b) Show that any equation of this system is redundant. Discard the fourth equation.

(c) Suppose that we want to know the values of x1, x2, x3 for any right-hand side. System (1) can be written as

(2)  2x1 + 3x2 + 4x3 = y1
     x1 - x2 + x3 = y2
     4x1 + 3x2 + 2x3 = y3

Solve this system (i.e., give the expressions for x1, x2, x3 as functions of y1, y2, y3) by a succession of pivot operations and write the three pivot matrices D^1, D^2, D^3.

(d) What is the inverse of

[2   3   4]
[1  -1   1]
[4   3   2]  ?

(e) Compute the product D^3 x D^2 x D^1 and explain your result.

5. Prove that matrix product is associative.

6. Prove Remark 9.

7. Prove Remarks 12 and 13.

8. Prove that if B and C are nonsingular m x m-matrices, then B x C is nonsingular. Deduce that pivot matrix Pi(r, s) introduced in Remark 16 is nonsingular. Can you prove this fact directly by finding (Pi(r, s))^-1?

9. Let

(P)  { Ax = b,  x ≥ 0
     { cx = z (Max)

be a linear program in standard form and assume that any equation of the system

Ax = b

is redundant, but that after deletion of any equation, the system obtained is nonredundant. What can be said about the dual of (P)?
Chapter IV Bases and Basic Solutions of Linear Programs

In this brief chapter, we introduce some fundamental concepts in linear programming: bases, basic solutions, canonical form associated with a given basis, feasible basis, optimal basis. These notions are a key to the understanding of the simplex algorithm.

We start with a linear program in standard form:

(P)  Ax = b,  x ≥ 0
     cx = z (Max)

where A is, as before, an m x n-matrix, b is an m-column vector, and c is an n-row vector. Moreover, we assume in this chapter that the linear system

(1)  Ax = b

is full rank (i.e., not redundant and not inconsistent). This implies that m ≤ n (see Remarks III.3 and III.11).

1. Bases of a Linear Program

Definition 1: Given a linear program (P) in standard form such that (1) is full rank, we call a "basis" of (P) a set J ⊂ {1, 2, ..., n} of indices such that A^J is square nonsingular.

In other words, J is a basis if and only if linear system (1) can be solved with respect to J (see Sections III.1 and III.2).

If J is a basis of (P), A^J is the "basis matrix" corresponding to J.

Definition 2: To a basis J of linear program (P), we associate a particular solution of (1):

x_J = (A^J)^-1 b,   x_j = 0,  j ∉ J

This solution is called the "basic solution" corresponding to basis J.

In other words, the basic solution corresponding to basis J is the solution of linear system (1) which we obtain in making all nonbasic variables equal to zero.

Example: By addition of slack variables x3, x4, x5, linear program (P1) has been written in standard form in Section I.4:

(P1)  2x1 + x2 + x3 = 8
      x1 + 2x2 + x4 = 7
      x2 + x5 = 3
      x_i ≥ 0  for i = 1, 2, ..., 5
      4x1 + 5x2 = z (Max)

{3, 4, 5} is obviously a basis of (P1). The corresponding basic solution is

x1 = x2 = 0,  x3 = 8,  x4 = 7,  x5 = 3

We will see later that {1, 2, 5} is also a basis. The corresponding basic solution is

x1 = 3,  x2 = 2,  x3 = x4 = 0,  x5 = 1
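Computing a basic solution is exactly one linear solve. A sketch (Python; Gauss-Jordan pivots on the columns of the basis, exact arithmetic), applied to (P1):

```python
from fractions import Fraction as F

def basic_solution(A, b, J):
    """Basic solution for basis J (1-based column indices): solve A^J x_J = b
    and set the nonbasic variables to zero. Assumes A^J is nonsingular."""
    m, n = len(A), len(A[0])
    M = [[F(A[i][j - 1]) for j in J] + [F(b[i])] for i in range(m)]
    for k in range(m):                         # Gauss-Jordan elimination
        r = next(i for i in range(k, m) if M[i][k] != 0)
        M[k], M[r] = M[r], M[k]
        M[k] = [v / M[k][k] for v in M[k]]
        for i in range(m):
            if i != k:
                coef = M[i][k]
                M[i] = [vi - coef * vk for vi, vk in zip(M[i], M[k])]
    x = [F(0)] * n
    for row, j in enumerate(J):
        x[j - 1] = M[row][-1]
    return x

# Constraint matrix of (P1) in standard form.
A = [[2, 1, 1, 0, 0],
     [1, 2, 0, 1, 0],
     [0, 1, 0, 0, 1]]
b = [8, 7, 3]
print([int(v) for v in basic_solution(A, b, [3, 4, 5])])  # [0, 0, 8, 7, 3]
print([int(v) for v in basic_solution(A, b, [1, 2, 5])])  # [3, 2, 0, 0, 1]
```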

Remark 1: It is important to notice that the basic solution associated to a given basis is unique.

However, given x̄, a solution of (1), there may exist several bases for which x̄ is the corresponding basic solution. In this case, there is "degeneracy." We will come back to problems posed by degeneracy in Chapter V.

A trivial case of degeneracy is the case where b = 0. Then x = 0 is the basic solution corresponding to every basis. One can also construct systems in which, for instance, the solution x1 = x2 = 1, x3 = x4 = 0 is the basic solution associated with both bases {1, 2, 3} and {1, 2, 4}.

2. Writing a Linear Program in Canonical Form with Respect to a Basis

Definition 3: We consider the (very slight) generalization of (P):

(Pζ)  Ax = b,  x ≥ 0
      cx = z (Max) - ζ     (ζ a given scalar)

which must be understood in this way: "Find x ∈ {x | Ax = b, x ≥ 0} which gives to cx a maximum value, call it x̄, and give to the objective function z the value cx̄ + ζ."

Remark 2: (Pζ) is a linear program nearly identical to (P), since when an optimal solution to (P) has been found, it is an optimal solution to (Pζ) (the two linear programs have the same feasible and optimal solutions), the only difference being the corresponding value of the objective function, the difference of the objective functions being just ζ. In particular, (P0) = (P).

Since (Pζ) is quite the same as (P), it will sometimes be denoted (P), and it may seem that we do not gain much generality by writing (Pζ) instead of (P). However, we will see in the sequel that this formalism is quite useful.

Definition 4: Given linear program (Pζ), the (m+1) x (n+1)-matrix

M = [A   b]
    [c  -ζ]

is called the "matrix of coefficients" of this linear program.

Remark 3: Let M be the matrix of coefficients of linear program (Pζ), and let r, s be such that M_r^s = A_r^s ≠ 0. A pivot operation on M_r^s transforms the matrix M into the matrix M̄ (see Remark III.15):

(2)   Ā_i^j = A_i^j - A_i^s A_r^j / A_r^s   if i ≠ r
      Ā_r^j = A_r^j / A_r^s                 if i = r

(2')  b̄_i = b_i - b_r A_i^s / A_r^s   if i ≠ r
      b̄_r = b_r / A_r^s               if i = r

(3)   c̄^j = c^j - c^s A_r^j / A_r^s

(3')  ζ̄ = ζ + c^s b_r / A_r^s

Let Pi(r, s) be the m x m pivot matrix defined in Remark III.16, and let π be the m-row vector defined by

(4)   π_i = 0             if i ≠ r
      π_i = c^s / A_r^s   if i = r

Then it is easy to check that after a pivot operation on the element M_r^s = A_r^s ≠ 0 of the matrix M, we obtain

(2*)  [Ā, b̄] = Pi(r, s) x [A, b]

(3*)  c̄ = c - πA;   ζ̄ = ζ + πb

Example: The matrix of coefficients of (P1) is

M = [2  1  1  0  0   8]
    [1  2  0  1  0   7]
    [0  1  0  0  1   3]
    [4  5  0  0  0   0]

Perform a pivot operation on M_3^2 = 1. We obtain

M̄ = [2  0  1  0  -1    5]
    [1  0  0  1  -2    1]
    [0  1  0  0   1    3]
    [4  0  0  0  -5  -15]

the pivot matrix being

Pi(3, 2) = [1  0  -1]
           [0  1  -2]
           [0  0   1]

Theorem 1: Consider the linear program

(Pζ)  Ax = b,  x ≥ 0
      cx = z (Max) - ζ

and assume that linear system Ax = b is full rank. Let B be a nonsingular m x m-matrix and y an m-row vector of coefficients. Then (Pζ) is equivalent (according to Definition I.15) to the linear program

(P̄ζ̄)  (BA)x = Bb,  x ≥ 0
      (c - yA)x = z (Max) - ζ - yb

Proof: (i) We have seen in Section III.2 that linear system (1) is equivalent to

(BA)x = Bb

provided that B is nonsingular. Thus (Pζ) and (P̄ζ̄) have the same solution sets.

(ii) The value of the objective function of (P̄ζ̄) is

z̄ = (c - yA)x + ζ + yb = cx + ζ - y(Ax - b)

Thus for all x solutions of (1) (feasible solutions of (Pζ) and (P̄ζ̄)) we have

Ax - b = 0   and   z̄ = z

Corollary: Let M be the matrix of coefficients of linear program (Pζ). Perform a pivot operation on a nonzero element M_r^s (1 ≤ r ≤ m, 1 ≤ s ≤ n). The matrix M̄ thus obtained is the matrix of coefficients of a linear program equivalent to (Pζ).

Proof: This corollary is a direct consequence of Remark 3 and of the theorem.

Definition 5: Consider a linear program (P) where (1) is full rank and let J be a basis of (P). Apply the transformation described in Theorem 1 with

(a)  B = (A^J)^-1

(b)  y = π, the solution of

(5)  πA^J = c^J

(P) becomes

(Pc)  x_J + (A^J)^-1 A^J̄ x_J̄ = (A^J)^-1 b,   x ≥ 0
      (c^J̄ - πA^J̄) x_J̄ = z (Max) - πb

which is called the "canonical form with respect to basis J."

π is perfectly defined by (5) since A^J is nonsingular (π = c^J (A^J)^-1) and is called the "multiplier vector relative to basis J." The vector

(6)  c(J) = c - πA

i.e.,

(6')  (c(J))^j = 0            if j ∈ J
      (c(J))^j = c^j - πA^j   if j ∉ J

is called the "cost vector relative to basis J."

Remark 4: It may seem awkward to call (Pc) a "canonical form" while it is a standard form. But in (Pc) we can take basic variables x_j, j ∈ J, as slack variables (see Section I.4), and (Pc) is then written

(Pc')  (A^J)^-1 A^J̄ x_J̄ ≤ (A^J)^-1 b
       (c^J̄ - πA^J̄) x_J̄ = z (Max) - πb

Remark 5: Note that a linear program (P) is said to be written in canonical form with respect to basis J if the following two conditions are satisfied:

(a) A^J is, up to a permutation of rows or columns, the unit matrix.

(b) c^J = 0.

Theorem 2: Let M be the matrix of coefficients of the linear program

(P)  Ax = b,  x ≥ 0
     cx = z (Max) - ζ

written in canonical form with respect to basis J. Let r ∈ {1, 2, ..., m} and s ∉ J be such that A_r^s ≠ 0, and let t be the unique index in J such that A_r^t = 1.

Perform a pivot operation on the element A_r^s of M. The matrix M̄ thus obtained is the matrix of coefficients of linear program (P) written in canonical form with respect to the basis

J' = J ∪ {s} \ {t}†

Moreover, the value of ζ is increased by this operation by the quantity c^s b_r / A_r^s.

Proof: From the Corollary of Theorem 1, M̄ is the matrix of coefficients of a linear program equivalent to (P), i.e., of the same linear program written under a different form. From Remark III.15, matrix Ā^{J'} is a unit matrix. From Remark 3, we have

c̄^s = 0

Definition 6: To make references to pivot operations in the sequel light and precise, we will give the name

PIVOT(p, q, r, s; M)

to the pivot operation on the element M_r^s of the p x q-matrix M, which transforms M into M̄, i.e.,

M̄_i^j = M_i^j - M_i^s M_r^j / M_r^s   if i ≠ r
M̄_r^j = M_r^j / M_r^s                 if i = r

Example: In the example preceding Theorem 1, M is transformed into

M' = M̄ = [2  0  1  0  -1    5]
         [1  0  0  1  -2    1]
         [0  1  0  0   1    3]
         [4  0  0  0  -5  -15]

† If I ⊂ J, J\I denotes the set of elements of J not in I.



by PIVOT(4,6,3,2;M). From Theorem 2, M' is the matrix of coefficients of
(P1) written in canonical form with respect to

  J' = {3,4,2} = {3,4,5} ∪ {2} \ {5}

          2x1        + x3         −  x5 = 5
           x1             + x4    − 2x5 = 1
  (P1')          x2               +  x5 = 3
          4x1                     − 5x5 = z (Max) − 15

(We see in this example how the fact that J' is the ordered set (3,4,2) must
be taken into account (see Remark III.5).)
Let us now perform PIVOT(4,6,2,1;M'). We get

               0   0   1  -2   3    3
  M'' = M̃' =   1   0   0   1  -2    1
               0   1   0   0   1    3
               0   0   0  -4   3  -19

the matrix of coefficients of linear program (P1) written in canonical form
with respect to J'' = {3,1,2} = {3,4,2} ∪ {1} \ {4}. Let us now perform
PIVOT(4,6,1,5;M''). We get:

                 0   0   1/3  -2/3   1     1
  M''' = M̃'' =   1   0   2/3  -1/3   0     3
                 0   1  -1/3   2/3   0     2
                 0   0   -1    -2    0   -22

the matrix of coefficients of linear program (P1) written in canonical form
with respect to {5,1,2} = {3,1,2} ∪ {5} \ {3}:

                  (1/3)x3 − (2/3)x4 + x5 = 1
             x1 + (2/3)x3 − (1/3)x4      = 3
  (P1''')    x2 − (1/3)x3 + (2/3)x4      = 2
                −      x3 −      2x4     = z (Max) − 22

Note that the basic solution corresponding to basis {5,1,2} is x1 = 3, x2 = 2,
x3 = x4 = 0, x5 = 1, z = 22; i.e., the solution that we proved to be optimal
in Chapter II.
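As a numerical cross-check (our own illustration, not part of the text), the
basic solution attached to a basis J can be computed directly by solving
A^J x_J = b and setting the nonbasic variables to 0:

```python
import numpy as np

# Data of (P1) in standard form: Ax = b, x >= 0, maximize cx.
A = np.array([[2., 1., 1., 0., 0.],
              [1., 2., 0., 1., 0.],
              [0., 1., 0., 0., 1.]])
b = np.array([8., 7., 3.])
c = np.array([4., 5., 0., 0., 0.])

def basic_solution(J):
    """Basic solution associated with basis J (1-indexed column labels):
    solve A^J x_J = b; all other components are 0."""
    cols = [j - 1 for j in J]
    x = np.zeros(A.shape[1])
    x[cols] = np.linalg.solve(A[:, cols], b)    # x_J = (A^J)^{-1} b
    return x

x = basic_solution([5, 1, 2])
print(x, c @ x)    # x1 = 3, x2 = 2, x5 = 1, z = 22
```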

3. Feasible Bases, Optimal Bases

Definition 7: A basis J of a linear program (P) is feasible if the corres-
ponding basic solution (see Definition 2) is feasible, i.e., if

  (A^J)^{-1} b ≥ 0

To say that a basis is feasible is an abuse of language. One ought to say "a
basis such that the corresponding basic solution is feasible."

Theorem 3: If the cost vector relative to a feasible basis J is nonpositive,
the basic solution corresponding to basis J is an optimal solution of (P).

Proof: Consider linear program (P_c) as introduced in Definition 5 and assume
that

  (7)  c(J) = c − πA ≤ 0

We have

            x_J + (A^J)^{-1} A^J̄ x_J̄ = (A^J)^{-1} b        x_J, x_J̄ ≥ 0
  (P_c)  {
            (c^J̄ − πA^J̄) x_J̄ = z (Max) − πb

Let x̄ be the basic solution of (P) corresponding to basis J and let x be
any feasible solution of (P); we have

  z = cx = Σ_{j∉J} (c^j − πA^j) x_j + πb ≤ πb

  z̄ = cx̄ = Σ_{j∉J} (c^j − πA^j) x̄_j + πb = πb      since x̄_j = 0 for j ∉ J

Thus z ≤ z̄, since each term of this sum is the product of a nonpositive
scalar (c^j − πA^j) by a non-negative one (x_j).

Example: The basic solution associated with the basis J = {5,1,2} of (P1) is
an optimal solution, as is apparent in the example ending Section 2.

Definition 8: A basis J such that the cost vector relative to J is negative
or zero is said to be "optimal." The term "optimal basis" is also (as
"feasible basis") an abuse of language.

Corollary of Theorem 3: The multiplier vector relative to an optimal basis of
linear program (P) is an optimal solution of the dual of (P):

          yA ≥ c    y unrestricted
  (D)  {
          yb = w (Min)

Moreover,

  w_min = z_max

Proof: Let J be an optimal basis of (P), x̄ be the basic solution of (P)
relative to J, and π the multiplier vector relative to J. For x = x̄,
we have (see Definition 5)

  cx̄ = πb

By assumption, we have

  c(J) = c − πA ≤ 0

which means that y = π is a feasible solution of (D), the corresponding value
of the objective function being w = πb.
  Thus we have a pair of feasible solutions x̄ and π to linear programs
(P) and (D), respectively, the corresponding values of the objective func-
tions being equal. From the Corollary of Theorem II.2, x̄ is an optimal solu-
tion of (P) and π an optimal solution of (D).
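The optimality test of Theorem 3 and the dual solution of the Corollary can be
checked numerically on (P1). A sketch (illustrative computation with our own
variable names, not the book's):

```python
import numpy as np

# Check that J = {5,1,2} is an optimal basis of (P1).
A = np.array([[2., 1., 1., 0., 0.],
              [1., 2., 0., 1., 0.],
              [0., 1., 0., 0., 1.]])
b = np.array([8., 7., 3.])
c = np.array([4., 5., 0., 0., 0.])

cols = [4, 0, 1]                           # basis J = {5, 1, 2}, 0-indexed
pi = c[cols] @ np.linalg.inv(A[:, cols])   # multiplier vector pi = c^J (A^J)^{-1}
reduced = c - pi @ A                       # cost vector relative to J
print(pi)                                  # pi is feasible for the dual: pi A >= c
print(reduced)                             # nonpositive, so J is optimal (Theorem 3)
print(pi @ b)                              # w = pi b = z_max = 22
```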

Remark 6: We wondered, at the end of Section II.2, whether it was true that
when both a linear program (P) and its dual (D) have optimal solutions, we
have z_max = w_min. The present corollary does not answer this question; we
still do not know if it is true that whenever a linear program has an optimal
solution, it has an optimal basic solution. We will prove this property in a
constructive way in subsequent chapters.

We will also prove that basic solutions correspond to vertices of the
polyhedron of feasible solutions.

Remark 7: We leave it as an exercise to show that for a linear program

          Ax = b    x ≥ 0
  (P)  {
          cx = z (Min)

Theorem 3 becomes: "If the cost vector relative to a feasible basis J is
non-negative, the basic solution corresponding to J is an optimal solution
of (P)."
  What would be the optimal solution of the linear program obtained from
(P1) by changing "Max" into "Min" in the objective function?

EXERCISES
1. Consider the linear program

          x1 + 5x2 + 4x3 ≤ 23
  (P)  { 2x1 + 6x2 + 5x3 ≤ 28        x1, x2, x3 ≥ 0
         4x1 + 7x2 + 6x3 = z (Min)

  (a) What is the dual (D) of (P)?
  (b) Write (P) in standard form (name the slack variables x4 and x5).
  (c) Is {1,4} a basis? Why?
  (d) Write (P) in canonical form with respect to the basis {1,4}.
  (e) Is {1,4} a feasible basis? Is it an optimal basis?

2. Prove that {1,2,3} is an optimal basis of the linear program of
Exercise II.4.

3. Prove that the multiplier vector relative to an optimal basis of

          Ax ≤ b    x ≥ 0
  (P)  {
          cx = z (Max)

is an optimal solution of the dual (D) of (P).

4. Prove Remark 7.

5. Consider the linear program

          x1 + 4x2 + 4x3 +  x4 ≤ 24
  (P)  { 8x1 + 6x2 + 4x3 + 3x4 ≤ 36        x_i ≥ 0    i = 1,2,3,4
         5x1 +  x2 + 6x3 + 2x4 = z (Max)

  (a) Write (P) in standard form (the slack variables are named x5 and x6).
  (b) Write (P) in canonical form with respect to the basis {3,4}.
  (c) Is {3,4} a feasible basis? Is it an optimal basis?
  (d) Assume that the right-hand side of the first constraint is changed
      from 24 into 26. What is the new value of the solution? By how
      much has the objective function been increased?
  (e) What can you say if the right-hand side of the first constraint is
      changed from 24 into 44?

6. Consider the linear program (P2) as it is written in the example preceding
Remark I.10.

  (a) Show that in (P2) the inequality signs can be changed into equality
      signs without using slack variables.
  (b) Delete the redundant constraints of (P2), if any.
  (c) Prove that {3,4,5,6} is a basis but that it is not a feasible basis.
      Check that {1,2,5,6} is a feasible basis.
  (d) Check that {1,2,3,4} is an optimal basis. What is the optimal
      solution of (P2) and of its dual (D2)?

7. Consider the dual linear programs

          Ax ≤ b    x ≥ 0              yA ≥ c    y ≥ 0
  (P)  {                         (D)  {
          cx = z (Max)                  yb = w (Min)

Let

  I ⊂ {1,2,...,m}      Ī = {1,2,...,m} \ I
  J ⊂ {1,2,...,n}      J̄ = {1,2,...,n} \ J

  (a) Prove that (I,J) is a basis of (P) if

      (i)  |Ī| + |J| = m
      (ii) [A^J, (U_m)^Ī] is nonsingular.

      Similarly, prove that (I,J) is a basis of (D) if

      (i)  |I| + |J̄| = n

      (ii) the matrix obtained by stacking A_I over (U_n)_J̄ is nonsingular.

  (b) Prove that a necessary and sufficient condition for (I,J) to be a
      basis of (P) is that A_I^J be square and nonsingular.

  (c) Prove that a necessary and sufficient condition for (I,J) to be a
      basis of (P) is that (I,J) be a basis of (D).

  (d) Prove that if (I,J) is a feasible basis of (P) and (I,J) a feasible
      basis of (D), then (I,J) is an optimal basis of (P) and (I,J) an
      optimal basis of (D).
Chapter V The Simplex Algorithm

The simplex algorithm was discovered in 1947 by G. Dantzig as a tool to
solve linear programs. The simplex algorithm is central to this course on
linear programming because it exemplifies the process of operations research
described in the preface: it is not only a very efficient way (its efficiency
is not yet completely understood) of solving practical problems, used
innumerable times by engineers, industrialists, and military people, but it
is also the basis of a mathematical theory that can be used to prove various
results.
  We will try to keep both points of view in this text, i.e., insist
on the computational aspects (see, in particular, Chapter VII) of the simplex
algorithm and show how it can be used as a mathematical tool.
  The simplex algorithm can only be applied to linear programs that are
written in canonical form with respect to a feasible basis. We show in Chapter
VI how to accomplish this.
  Here we assume that the linear program

          Ax = b    x ≥ 0
  (P)  {
          cx = z (Max)

is in fact written in canonical form with respect to the feasible basis J.
Thus the three following conditions are satisfied:

  (1) A^J is, up to a permutation of rows, the m × m unit matrix.
  (2) b ≥ 0 since J is a "feasible" basis.
  (3) c^J = 0 (see Section IV.2).

(A is, as before, an m × n-matrix, b is an m-column vector, and c is an n-row
vector.)

Remark 1: We need to know the structure of A^J. We define the mapping

  col: {1,2,...,m} → J

such that

        A_i^j = 1    if j = col(i)
  (4)
        A_i^j = 0    if j ∈ J and j ≠ col(i)

In other words, this application defines the permutation of columns that
allows us to find the unit matrix from A^J.

Example: Consider

          2x1 +  x2 + x3           = 8
           x1 + 2x2      + x4      = 7        x_i ≥ 0    i = 1,2,...,5
  (P1)           x2           + x5 = 3
          4x1 + 5x2                = z (Max)

(P1) is written in canonical form with respect to basis {3,4,5} and we have
col(1) = 3, col(2) = 4, col(3) = 5.
  The simplex algorithm is introduced for a particular case in Section 1,
and for example (P1) in Section 2. It is described in its generality in Sec-
tion 3. Proof of finiteness is given in Section 4. It will remain to show how
a general linear program can be written in canonical form with respect to a
feasible basis, i.e., how to "initiate" the simplex algorithm. This is done in
Chapter VI.

1. A Particular Case

The particular case we study here is the case where n = m+1. Without loss
of generality (by renaming the variables if necessary), we can assume that
J = {2,3,...,m,m+1}. Thus, our particular program (which we call (PP)) can be
written

          A_1^1 x1 + x2                = b_1
          A_2^1 x1      + x3           = b_2
  (PP)       ...                                  x_j ≥ 0   j = 1,2,...,m,m+1
          A_m^1 x1           + x_{m+1} = b_m
          c^1 x1                       = z (Max)

We can also consider that x2,...,x_{m+1} are slack variables (cf. Remark IV.4):

          A_1^1 x1 ≤ b_1
          A_2^1 x1 ≤ b_2
  (PPC)      ...                  x1 ≥ 0
          A_m^1 x1 ≤ b_m
          c^1 x1 = z (Max)

The domain of feasible solutions of (PPC) is a part of the half line x1 ≥ 0.
Recall that, from (2), b_i ≥ 0 for i = 1,2,...,m. Thus if A_i^1 ≤ 0, the i-th
inequality does not play any role in the definition of the set of feasible
x1's (if A_i^1 = 0, the i-th constraint is always satisfied; if A_i^1 < 0, the
i-th constraint is always satisfied for x1 ≥ 0).
  It is thus sufficient, to determine V, the domain of feasible solutions
of (PPC), to restrict one's attention to the constraints of

  (5)  I = {i | A_i^1 > 0}

When I ≠ ∅, we have

  V = {x1 | 0 ≤ x1 ≤ Min_{i∈I} [b_i / A_i^1]}

In summary, we have two cases:

  (a) I = ∅:  V = {x1 | 0 ≤ x1}
  (b) I ≠ ∅:  V = {x1 | 0 ≤ x1 ≤ Min_{i∈I} [b_i / A_i^1]}

  Let us now go back to the solution of (PPC). If c^1 ≤ 0, we know,
by Theorem IV.3, that the basic solution associated with basis J, namely,

  x1 = 0;   x_{i+1} = b_i,   i = 1,2,...,m

is an optimal solution. (PPC) is solved.
  If c^1 > 0, it is clear that it is in our interest (if we want to increase
the objective function) to have x1 as big as possible. Thus:

1. If we are in case (a), x1 can increase without bound, z increases as
   c^1 x1, and linear program (PP) does not have a bounded solution (no
   optimal solution, Remark I.5).

2. If we are in case (b) (i.e., if I ≠ ∅), we let x1 be as large as possible.
   We pose

     x̄1 = Min_{i∈I} [b_i / A_i^1]

   Let r be an index of I for which this minimum is reached:

     x̄1 = b_r / A_r^1

   The values of the basic variables will be

     x̄_{i+1} = b_i − A_i^1 x̄1      i = 1,2,...,m

   By construction, because of the definition of r, we have

     x̄_{i+1} ≥ 0 for all i,   and   x̄_{r+1} = 0

Thus x̄ is a feasible solution. We proceed now to show that x̄ is an
optimal solution.
  We have just one way to show that a solution is optimal: it is to
apply Theorem IV.3. We notice that x̄ is the basic solution associated with
the basis J' = J ∪ {1} \ {r+1}.
  Letting M denote the matrix of coefficients of (PP), we apply the pivot
operation PIVOT(m+1,m+2,r,1;M) on the nonzero element A_r^1 of M
(A_r^1 is nonzero since r ∈ I, by assumption). We obtain matrix M', which
is the matrix of coefficients of linear program (PP) written in canonical
form with respect to basis J'
(see Theorem IV.2). The reader will check that after the pivot operation
(PP) is written

          x2 − (A_1^1/A_r^1) x_{r+1}            = b_1 − (A_1^1/A_r^1) b_r
             ...
          x1 + (1/A_r^1) x_{r+1}                = b_r / A_r^1
  (PP')      ...
             − (A_m^1/A_r^1) x_{r+1} + x_{m+1}  = b_m − (A_m^1/A_r^1) b_r
             − (c^1/A_r^1) x_{r+1}              = z (Max) − c^1 b_r / A_r^1

                                        x_j ≥ 0    j = 1,2,...,m,m+1

  Since, by assumption, c^1 > 0 and A_r^1 > 0, we see that (PP') is
written in canonical form with respect to the optimal basis J'. A single
pivot operation has been sufficient to solve this particular linear program.

Geometric Interpretation: If we are in case (a), domain V is the half line
x1 ≥ 0. The objective function monotonously increases with x1 and thus is
unbounded on V.
  If we are in case (b), domain V is the segment

  0 ≤ x1 ≤ b_r / A_r^1

and the objective function increases in proportion to x1 in this segment.
Thus, the optimum will be located at the extreme point of the segment for
which x1 = b_r / A_r^1.
  Note that a segment and a half line are examples of convex polyhedra. We
check on this example that the optimal solution corresponds to a vertex of
the polyhedron.

Theorem 1: Consider the linear program

          Ax = b    x ≥ 0
  (PP)  {
          cx = z (Max)

(where A is an m × (m+1) matrix) written in canonical form with respect to a
feasible basis J, the application "col" being defined as in Remark 1. Let
s be the unique column index not belonging to J. If

  (i)   c^s > 0
  (ii)  I = {i | A_i^s > 0} ≠ ∅
  (iii) r is defined by b_r / A_r^s = Min_{i∈I} [b_i / A_i^s]

then the application of PIVOT(m+1,m+2,r,s;M) to the matrix

  M = [ A   b ]
      [ c   0 ]

of coefficients of (PP) transforms M into the matrix of coefficients of
linear program (PP) written in canonical form with respect to the optimal
basis

  J' = J ∪ {s} \ {col(r)}

Moreover, to update col, it suffices to write

  col(r) := s

(i.e., the rest of the mapping is not modified by the operation).

Proof: This theorem is just a synthesis of what has been proved in this section.
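The case analysis of this section fits in a few lines of Python (a sketch
under our own naming, not the book's code; it returns the optimal x1 of (PPC),
or None in the unbounded case (a)):

```python
from fractions import Fraction

def solve_pp(A1, b, c1):
    """One-variable case (PPC): maximize c1*x1 subject to
    A1[i]*x1 <= b[i] (with b[i] >= 0) and x1 >= 0."""
    A1 = [Fraction(a) for a in A1]
    b = [Fraction(v) for v in b]
    if c1 <= 0:
        return Fraction(0)               # the basic solution x1 = 0 is optimal
    I = [i for i in range(len(A1)) if A1[i] > 0]
    if not I:
        return None                      # case (a): z unbounded
    return min(b[i] / A1[i] for i in I)  # case (b): x1-bar = min b_i / A_i^1

print(solve_pp([1, 2, -1], [4, 6, 5], 3))   # bound reached at min(4/1, 6/2) = 3
```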

2. Solving an Example
Let us again consider

          2x1 +  x2 + x3           = 8
           x1 + 2x2      + x4      = 7
  (P1)           x2           + x5 = 3        x_j ≥ 0    j = 1,2,...,5
          4x1 + 5x2                = z (Max)

which is written in canonical form with respect to feasible basis {3,4,5}.
This basis is not optimal since we have c^1 = 4 > 0, c^2 = 5 > 0.
  It is in our interest (if we want to increase the objective function) to
increase x1 or x2. The basic idea of the simplex algorithm is to choose one
of the indices s ∉ J such that c^s > 0 (if no such index exists, the present
basic solution is optimal) and to increase x_s as much as possible.

  Suppose that we decide to increase x2, without being interested -- for
the time being -- in x1. We will even go one step further and forget about
the existence of x1. Then we will be able to use what we learned in Section 1;
we consider

           x2 + x3           = 8
          2x2      + x4      = 7
  (PP1(J,2))   x2       + x5 = 3        x2, x3, x4, x5 ≥ 0
          5x2 = z (Max)

The domain of variation of x2 is

  {x2 | 0 ≤ x2 ≤ Min[8/1, 7/2, 3/1] = 3}

Thus the optimal solution of (PP1(J,2)) is obtained for x2 = 3. We have

  z = 5 × 3 = 15

We know that if we wanted to prove that this solution is optimal, we would
have to perform a pivot operation on the element A_3^2 of the matrix of coeffi-
cients of (PP1(J,2)). We decide to perform the very same pivot operation on
the matrix of coefficients of (P1) itself. This has been done in the example
ending Section IV.2. We get

          2x1       + x3        −  x5 = 5
           x1            + x4   − 2x5 = 1
  (P1')         x2              +  x5 = 3        x_j ≥ 0    j = 1,2,...,5
          4x1                   − 5x5 = z (Max) − 15

  Basis J' = {3,4,2} (with respect to which (P1') is now written in canonical
form) is not optimal (since c^1 = 4 > 0), but we can consider that we made
some progress since, for the basic solution associated with J', the value of
the objective function is 15.
  Now we want to increase x1, forgetting about the other nonbasic
variables (i.e., leaving them at their zero value). Thus we will solve

          2x1 + x3      = 5
           x1      + x4 = 1
  (PP1'(J',1))  x2      = 3        x1, x2, x3, x4 ≥ 0
          4x1 = z (Max) − 15

The domain of variation of x1 is {x1 | 0 ≤ x1 ≤ 1}. For x1 = 1 we have

  z = 15 + 4 = 19

We have been blocked by the second equation, so we perform a pivot operation
on A_2^1. As we have seen in the example at the end of Section IV.2, linear
program (P1) will now be written in canonical form with respect to basis

  J'' = J' ∪ {1} \ {col(2)} = {1,2,3}

               x3 − 2x4 + 3x5 = 3
          x1      +  x4 − 2x5 = 1
  (P1'')       x2       +  x5 = 3        x_j ≥ 0    j = 1,2,...,5
                  − 4x4 + 3x5 = z (Max) − 19

We have again improved the value of the objective function, but basis J'' is
not yet optimal. We apply the same process to

               x3 + 3x5 = 3
          x1      − 2x5 = 1
  (PP1''(J'',5))  x2 + x5 = 3        x_j ≥ 0    j = 1,2,3,5
          3x5 = z (Max) − 19

We see that the pivot operation must be done on A_1^5 since

  x̄5 = Min[3/3, 3/1] = 1

The new basis is

  J''' = J'' ∪ {5} \ {col(1)} = {1,2,5}

(P1) written after this pivot operation on its matrix of coefficients is

                  (1/3)x3 − (2/3)x4 + x5 = 1
             x1 + (2/3)x3 − (1/3)x4      = 3
  (P1''')    x2 − (1/3)x3 + (2/3)x4      = 2
                −      x3 −      2x4     = z (Max) − 22

J''' is an optimal basis. The corresponding basic solution is

  x1 = 3,  x2 = 2,  x3 = x4 = 0,  x5 = 1,  z = 22

Geometric Interpretation: Let us draw in the plane (x1,x2) the domain of
feasible solutions of (P1). It is the convex polygon ABCDE of Figure V.1.

  [Figure V.1: The Domain of Feasible Solutions of (P1)]

To start with, the problem was written in canonical form with respect to the
basis J = {3,4,5}. The corresponding basic solution is x1 = x2 = 0: we are
at A. Then, x2 increases while x1 remains equal to 0; the maximum
value for x2 is 3 (if we want to stay in V): we have traveled along
segment AB. At B, x5 = 0 since we are limited by constraint III and since
x5 measures how far we are from the bound on this constraint. Point B cor-
responds to the basic solution associated with J'. We then let x1 increase,
leaving x5 = 0. The maximum possible value for x1 is 1: we describe seg-
ment BC. At C, x4 = 0 (since we are limited by constraint II) and the
solution is the basic solution associated with J''. We decide to increase
x5, taking from constraint III, but leaving x4 = 0: we describe segment
CD. D corresponds to the basic solution associated with J''', the optimal
solution.

Remark 2: Even on an example as simple as the solution of (P1), we see that an
index (here, index 5) which at some time leaves the basis may have to enter it
again at some further step. Thus, the movement of an index into or out of the
basis can never be considered as definitive.

Remark 3: At the beginning, we chose index 2 to enter the basis. This choice
corresponds to the largest increase of the objective function per unit of
variation of the variable, since c^2 > c^1. However, the reader is invited to
solve (P1) again with the other choice (letting index 1 first enter the
basis). He will then check that two basis changes are sufficient instead of
three.

3. The Simplex Algorithm: General Case

This section will just be an extension of the two preceding ones.

Remark 4: Let us describe the main steps of the process that ended in the
solution of (P1):

  (a) (P) is written in canonical form with respect to feasible basis J.
  (b) We associate with J its corresponding basic solution.
  (c) If J is not optimal, we choose s ∉ J such that c^s > 0.
  (d) Leaving all nonbasic variables (except x_s) at their
      0 value, increase x_s as much as possible and adjust the basic
      variables:

        x_k = b_i − A_i^s x_s      for k = col(i)

        x_k = 0                    for k ∉ J, k ≠ s

      This variation of the current solution induces an increase in the
      value of the objective function.
  (e) If A^s ≤ 0, the linear program does not have an optimal solution
      (the objective function is unbounded).
  (f) If I = {i | A_i^s > 0} ≠ ∅, let r be an index such that

  (6)   b_r / A_r^s = Min_{i∈I} [b_i / A_i^s]

(This minimum may be reached for more than one index; r is one of them.)

  (g) x_s cannot be greater than x̄_s = b_r / A_r^s.
  (h) For x_s = x̄_s, one of the basic variables, x_t with t = col(r), is
      equal to 0.
  (i) J̃ = J ∪ {s} \ {col(r)} is a feasible basis.
  (j) The new solution

        x'_s = x̄_s

        x'_k = b_i − A_i^s x̄_s     for k = col(i)

        x'_j = 0                    for the other indices j

      is the basic solution corresponding to basis J̃.
  (k) In performing the operation PIVOT(m+1,n+1,r,s;M) on the matrix
      M of coefficients of linear program (P), we obtain the matrix of
      coefficients of this linear program written in canonical form with
      respect to basis J̃. Mapping "col" is updated in posing

        col(r) := s

This process is repeated until either:

  (α) The new basis is optimal (which can be seen in step (k)).

  (β) Linear program (P) does not have an optimal solution
      (step (e)).

Remark 5: In view of Section 1, steps d to k of the preceding process can
be formulated in the following way:

  (d') Let (PP(J,s)) be the linear program obtained from (P) by
       suppressing all nonbasic variables except x_s (which is equivalent
       to fixing the values of these variables at 0). Solve (PP(J,s)) by
       the method of Section 1:

       (i)  If (PP(J,s)) does not have an optimal solution, the same is
            true for (P) (see Exercise 1).
       (ii) If (PP(J,s)) has an optimal solution, the unique step for
            solving this linear program leads to a pivot operation, which
            we apply to the matrix of coefficients of (P). We thus
            define a new feasible basis for (P).

This process is repeated until either case (α) or (β) of Remark 4 occurs.

Remark 6: If we closely examine the procedure of Remark 4, we see that:

  1. Step a is just an initialization (which is assumed to be done here).
  2. Steps c, e, f, and k correspond to operations.
  3. Steps b, d, g, h, i, and j can be considered as comments.

We can now repeat the procedure of Remark 4 in a more compact form.

Simplex Algorithm

Linear program (P) is written in canonical form with respect to a feasible
basis J. The mapping "col" is defined as in Remark 1.

REPEAT the following procedure until either an optimal basis is obtained, or
a set of feasible solutions for which z is unbounded is shown to exist.

  STEP 1: Choose an s such that c^s > 0. If such an s does not
          exist, basis J is optimal. STOP.

  STEP 2: If A^s ≤ 0, no optimal solution exists (z is unbounded). STOP.

  STEP 3: Let I = {i | A_i^s > 0} (I ≠ ∅ because of STEP 2).
          Choose an r such that

    (6)     b_r / A_r^s = Min_{i∈I} [b_i / A_i^s]

  STEP 4: Perform the pivot operation (defined by row r and column s)
          on the matrix of coefficients of linear program (P). After
          this pivot operation, (P) is written in canonical form with
          respect to

            J̃ = J ∪ {s} \ {col(r)}

          Let J := J̃; col(r) := s.
END REPEAT
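The REPEAT loop above can be sketched as follows (an illustrative
implementation, not the book's own code; it uses the largest-coefficient rule
(7) of Remark 8 below, exact rational arithmetic, and 0-indexed columns):

```python
from fractions import Fraction

def simplex(M, col):
    """M is the (m+1) x (n+1) coefficient matrix of a program already in
    canonical form w.r.t. a feasible basis; col[i] is the 0-indexed basic
    column of row i.  Returns ('optimal', M, col) or ('unbounded', s)."""
    M = [[Fraction(x) for x in row] for row in M]
    m, n = len(M) - 1, len(M[0]) - 1
    while True:
        # STEP 1: choose s with c^s > 0 (largest coefficient).
        s = max(range(n), key=lambda j: M[m][j])
        if M[m][s] <= 0:
            return 'optimal', M, col
        # STEP 2: if the column A^s is <= 0, z is unbounded.
        I = [i for i in range(m) if M[i][s] > 0]
        if not I:
            return 'unbounded', s
        # STEP 3: choose r realizing the minimum ratio (6).
        r = min(I, key=lambda i: M[i][n] / M[i][s])
        # STEP 4: pivot on M[r][s] and update the mapping col.
        piv = M[r][s]
        M[r] = [x / piv for x in M[r]]
        for i in range(m + 1):
            if i != r:
                f = M[i][s]
                M[i] = [a - f * p for a, p in zip(M[i], M[r])]
        col[r] = s

# (P1) of Section 2, in canonical form w.r.t. the feasible basis {3,4,5}:
M1 = [[2, 1, 1, 0, 0, 8],
      [1, 2, 0, 1, 0, 7],
      [0, 1, 0, 0, 1, 3],
      [4, 5, 0, 0, 0, 0]]
status, M1, col1 = simplex(M1, [2, 3, 4])
print(status, -M1[3][5], sorted(j + 1 for j in col1))   # optimal basis {1,2,5}, z = 22
```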

Remark 7: The validity of the operations of the simplex algorithm has been
proved in preceding developments:

  Step 1: See Theorem IV.3.
  Steps 2 and 3: See Remark 5 and Section 1.
  Step 4: See Theorem IV.2.

It remains to prove that the algorithm is finite. This will be done in
Section 4.

Remark 8 : In t he a lgo ri t hm, we find two i.mp r ec i s e instruction s :


• In Step 3, we hav e t o " choos e" an r E K wh er e

K Min [b /A~J}
i EI

If IKj >1 any r E K can be taken and t he pivo t ope r at i on of Step 4 wi ll be


valid (J i s a f eas i bl e bas is). The choice of r in K wil l be made more
prec i s e in Sec tion 4, t o guarantee f i nit enes s of the algo ri thm•
S
• I n St ep I , we have t o " choos e" an s s uch that c >0 •

The cor r ec t ne ss of th e algo r i t hm does not depend on which i ndex s i s chosen ,


S
as l ong a s c >O. Usu all y, one t akes , as was done i n Sec ti on 2,

s
( 7) s such t hat c Max . c j
J

Thi s choi ce is not neces s ar i ly good (as pr ogr am (PI) exemp li fi es -- s ee Remark
3); its jus t ifi ca ti on li es i n t he f ac t t hat i t gi ves t he gr ea t es t va r i at i on of
t he obj ec t i ve f uncti on by uni t i nc r ea se of t he va r iabl e .
S
Anot her argument cons i st s of as so c iat i ng wit h ea ch s s uch t ha t c >0
a l i ne index r ( s ), t he i nde x on whi ch we would perform t he pivo t ope rat i on
i n case i ndex s would be chose n . The co r re s pondi ng i nc rease of th e obje ct i ve
f unc t i on would then be

( 8)

and we choos e t he index s wi t h whi ch the l a rgest i ncrea se of t he object ive


fun cti on i s as soci at ed . The r e ader is in vited t o che ck th at applying thi s
met hod on (PI) would r esul t in so lvin g th is l inear pro gram i n two s t e ps , in -
s tead of t hree . But a gre at numb er of expe r i ment s conduct ed ove r a l ar ge s amp l e
of l i near pro grams have shown t hat i n gene ra l , th e gai nt brought by t hi s method
was not in r e lati on with i t s cos t (co mputa tion of r es ) and of ( 8) , f or al l s
S
s uch t hat C >0 )• In compute r codes , neither cri t erion (7 ) nor th e exp l or a-
t or y vari ant i s use d (se e Chap ter VII ) . For small academic examp l es, we wi ll
use c r i te r i on (7) ; i n cas e of a t i e, any cho i ce of s among t hos e wi l l do .

t Thi s gai n i s by no mea ns sys t emat i c .


The r e ader i s i nvi te d t o i nvent new
coe fficients for t he obj ec tive functi on of (PI) th at would make t he
met hod fai I .

Remark 9: The simplex algorithm (as any other algorithm) is composed of con-
ditional instructions (Steps 1 and 2) and of operations (Steps 3 and 4). To
perform any of those, we just need the matrix M of coefficients of the linear
program, which is transformed at each iteration by pivoting on element A_r^s.
Thus, the solution of a linear program by the simplex algorithm can be pre-
sented by giving the sequence of tableaus of M. We present now, under this
compact form, the solution of (P1) as it has been obtained in Section 2:

    x1    x2    x3    x4    x5      b        (P1) is in canonical
                                             form with respect to:
     2     1     1     0     0      8
     1     2     0     1     0      7
     0     1     0     0     1      3   ←
  --------------------------------------     J
     4     5*    0     0     0      0

     2     0     1     0    -1      5
     1     0     0     1    -2      1   ←
     0     1     0     0     1      3
  --------------------------------------     J'
     4*    0     0     0    -5    -15

     0     0     1    -2     3      3   ←
     1     0     0     1    -2      1
     0     1     0     0     1      3
  --------------------------------------     J''
     0     0     0    -4     3*   -19

     0     0    1/3  -2/3    1      1
     1     0    2/3  -1/3    0      3
     0     1   -1/3   2/3    0      2
  --------------------------------------     J'''
     0     0    -1    -2     0    -22

[A * near a coefficient of the objective function indicates which column
enters the basis; an ← indicates on which row we pivot.]

Remark 10: The reader is invited to verify that to solve the linear program

          Ax = b    x ≥ 0
  (P')  {
          cx = z (Min)

the only point to change in the simplex algorithm† is Step 1, which becomes:

  STEP 1': Choose an s such that c^s < 0. If such an s does not exist,
           basis J is optimal. STOP.

4. Finiteness of the Simplex Algorithm

Proof of the finiteness of the simplex algorithm is essential. Unless this
proof is undertaken, we cannot be sure that a linear program (once it is
written in canonical form with respect to a feasible basis) has either an
optimal (basic) solution or a set of solutions for which z is unbounded (see
Remark I.5).

Definition 1: The basic solution associated with the feasible basis J of the
linear program

          Ax = b    x ≥ 0
  (P)  {
          cx = z (Max)

is said to be "degenerate" if at least one component of x_J is equal to 0,
i.e., if vector (A^J)^{-1} b is not strictly positive.

Remark 11: In Step 3 of the simplex algorithm, we define

  K = {i ∈ I | b_i / A_i^s = Min_{i∈I} [b_i / A_i^s]}

and we choose r ∈ K. If |K| > 1, consider k ∈ K, k ≠ r. We will have,
after pivoting,

  b̃_k = b_k − (A_k^s / A_r^s) b_r = 0

i.e., the new basic solution is degenerate. Conversely, if the starting
solution is nondegenerate and if |K| = 1 at each iteration, it is easy to see
that the solution will remain nondegenerate. Thus degeneracy is closely
related to the fact that |K| > 1.

Theorem 2: If, at each step of the simplex algorithm, the basic solution
associated with the current basis is nondegenerate, then the algorithm termi-
nates (either in Step 1 or in Step 2) in a finite number of iterations.

Proof: From one iteration to the next, the value of z̄ (which is the value of
the objective function associated with the current basic solution) increases
by

  c^s b_r / A_r^s

†Note that unboundedness in this case (Remark 10) means z → −∞.



(see Theorem IV.2). If no solution is degenerate, this increase will always
be strictly positive. Since the basic solution is uniquely determined by the
basis, the value of z̄ is uniquely determined by the basis.
  Since z̄ strictly increases from one iteration to the next, we cannot
find, when applying the simplex algorithm, a basis that was previously met.
Since the number of bases is finite (this number is bounded by (n choose m),
the number of ways of choosing m basic columns out of n), the number of
iterations of the algorithm is also finite.

Remark 12 : Until 1951 (t he si mp lex a lgo r i t hm was found i n 194 7) , it was not
known whether i t was pos sible t hat the a l gori t hm (beca use of cycling among
degenerat e sol uti ons) was nonterminating. In 1951 , Hoffman proposed an exampl e
wher e th e systemati c choice of the first r ow in ca se of deg eneracy would l ead
t o cyc l i ng . Beal e ( 1955) pro vi ded a s i mple r exa mple . The follo wing one is an
adap tat ion by V. Chvat al of a case proposed by K. T. Marsh all and J . W. Soorb all e:

l - 5 .5x2 -
0.5x 2.5x + 9x4 + x5 0
3
0.5x 1. 5x - 0. 5x + x4 + x 0 xj > 0
l- 2 3 6
Xl + x j=1,2 , ••• , 7
7
10x l 57x 2 - 9x - 24x z(Max)
3 4

Most practical problems are degenerate. However, the occurrence of cycling in real problems is most exceptional (Ph. Wolfe reported in 1963 having come across such a case). This is why most computer codes presently used to solve linear programs do not include sophisticated routines, such as the one that we are about to present, to avoid cycling.
But from a theoretical standpoint, and especially because we use the simplex algorithm as a mathematical tool to prove theorems (see Chapter VI), we must make sure that some rule of choice for the pivot row in case of degeneracy will lead to a solution in a finite number of steps. The "perturbation technique" that we are about to present is due to Charnes [1952]. The following "smallest subscript" rule, proposed by R. G. Bland (1977), consists of:

1. Choosing the nonbasic variable for which c̄^j > 0 with the smallest subscript to enter the basis.
2. Choosing the candidate row for pivoting -- in case of a tie -- with the smallest subscript.
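Bland's two choices are easy to state operationally. The sketch below is again our own illustration on the tableau of Remark 12 (same generic pivot code as before, not the book's): with exact rational arithmetic the rule terminates, and it reaches the optimal value z = 1 of that example.

```python
from fractions import Fraction as F

T = [[F(1, 2), F(-11, 2), F(-5, 2), F(9),   F(1), F(0), F(0), F(0)],
     [F(1, 2), F(-3, 2),  F(-1, 2), F(1),   F(0), F(1), F(0), F(0)],
     [F(1),    F(0),      F(0),     F(0),   F(0), F(0), F(1), F(1)],
     [F(10),   F(-57),    F(-9),    F(-24), F(0), F(0), F(0), F(0)]]
basis = [4, 5, 6]

def bland_pivot(T, basis):
    cost = T[-1][:-1]
    # 1. entering: the improving variable with the smallest subscript
    s = next((j for j in range(len(cost)) if cost[j] > 0), None)
    if s is None:
        return False                               # optimal
    # 2. leaving: smallest ratio; ties broken by smallest basic subscript
    rows = [(T[i][-1] / T[i][s], basis[i], i)
            for i in range(len(basis)) if T[i][s] > 0]
    r = min(rows)[2]
    T[r] = [a / T[r][s] for a in T[r]]
    for i in range(len(T)):
        if i != r and T[i][s] != 0:
            T[i] = [a - T[i][s] * t for a, t in zip(T[i], T[r])]
    basis[r] = s
    return True

steps = 0
while bland_pivot(T, basis):
    steps += 1
# the objective row now reads z - (...) = -z*, so the optimum is -T[-1][-1]
print("optimal z =", -T[-1][-1], "after", steps, "pivots")
```

The same tableau that cycles under the largest-coefficient rule is solved here in finitely many pivots, as Bland's theorem guarantees.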

This rule is simpler to implement, but its proof is slightly more intricate.


For didactic reasons, we prefer to present here the perturbation technique, which enlightens the simplex algorithm itself. For a geometric interpretation, see Remark VIII.5.

Definition 2: Consider the linear program

    (P)    Ax = b        x ≥ 0
           cx = z(Max)

written in canonical form with respect to some feasible basis J. We associate with (P) the linear program

    (P(ε))    A_i x = b_i + ε^i,   i = 1, 2, ..., m        x ≥ 0
              cx = z(Max)

called a "perturbed program," in which ε is a scalar and ε^i means "ε to the power i". The basic solution associated with the feasible basis J of (P(ε)) is

    x_k = b_i + ε^i    for k = col(i),  i = 1, 2, ..., m
    x_k = 0            for k ∉ J

i.e., each component of x can be considered as a polynomial of degree m in ε:

    λ(ε) = λ^0 + λ^1 ε + λ^2 ε^2 + ... + λ^m ε^m

Definition 3: If not all coefficients of this polynomial are zero, λ(ε) keeps a constant sign in an open interval (0, h) for h > 0 small enough. This sign is that of the nonzero coefficient λ^k of lowest index k. If this sign is +, we will say that the polynomial λ(ε) is "positive" and we write

    λ(ε) ≻ 0

Given polynomials λ'(ε) and λ''(ε), we will say that λ'(ε) is "greater than" λ''(ε), and we write

    λ'(ε) ≻ λ''(ε)

if

    λ'(ε) - λ''(ε) ≻ 0

Remark 13: Relation ≻ is a total order on polynomials (see Exercise 11). Note that if λ(ε) is a positive polynomial, then

    λ(0) ≥ 0
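The sign test and the order ≻ are completely mechanical: the sign of λ(ε) for all small ε > 0 is the sign of its first nonzero coefficient. A minimal Python sketch of this comparison (our own illustration, with hypothetical helper names):

```python
from fractions import Fraction

def first_nonzero(coeffs):
    """First nonzero coefficient of a polynomial in epsilon, given as the
    list [c0, c1, ..., cm] (lowest degree first); 0 if all vanish."""
    for c in coeffs:
        if c != 0:
            return c
    return 0

def is_positive(coeffs):
    """lambda(eps) > 0 for every small enough eps > 0."""
    return first_nonzero(coeffs) > 0

def is_greater(a, b):
    """a(eps) is 'greater than' b(eps): a - b is a positive polynomial."""
    return is_positive([x - y for x, y in zip(a, b)])

# b1(eps) = eps and b2(eps) = eps^2 are both positive, and b1 dominates b2
# near 0 even though both vanish at eps = 0.
b1 = [Fraction(0), Fraction(1), Fraction(0)]
b2 = [Fraction(0), Fraction(0), Fraction(1)]
print(is_positive(b1), is_positive(b2), is_greater(b1, b2))  # True True True
```

This is exactly the tie-breaking information the perturbation adds: two right-hand sides that agree at ε = 0 can still be strictly ordered.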

Remark 14: Consider an m-column vector b(ε), each component of which is a polynomial b_i(ε) of degree m in ε:

(9)    b_i(ε) = b_i^0 + b_i^1 ε + b_i^2 ε^2 + ... + b_i^m ε^m

Let B be an m×m-matrix. The product of matrix B by vector b(ε) will naturally be

    b'(ε) = B · b(ε)

where b'(ε) is an m-column vector, each component of which is a polynomial in ε:

    b'_i(ε) = b'_i^0 + b'_i^1 ε + b'_i^2 ε^2 + ... + b'_i^m ε^m

with

    b'_i^k = Σ_l B_i^l b_l^k

Note that b(ε) can be represented by the m × (m+1)-matrix

          [ b_1^0  b_1^1  b_1^2  ...  b_1^m ]
    Q  =  [ b_2^0  b_2^1  b_2^2  ...  b_2^m ]
          [   .      .      .           .   ]
          [ b_m^0  b_m^1  b_m^2  ...  b_m^m ]

Then Q', the matrix of coefficients of b'(ε), is given by

    Q' = B Q
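The identity of Remark 14 -- applying B to a vector of polynomials is the ordinary matrix product of B with the coefficient matrix Q -- can be checked on a 2×2 example (our own illustration):

```python
from fractions import Fraction as F

def mat_mul(B, Q):
    """Ordinary matrix product: B is m x m, Q is m x (m+1)."""
    m, cols = len(B), len(Q[0])
    return [[sum(B[i][l] * Q[l][k] for l in range(m)) for k in range(cols)]
            for i in range(m)]

def poly_eval(coeffs, eps):
    return sum(c * eps**k for k, c in enumerate(coeffs))

B = [[F(2), F(1)],
     [F(0), F(3)]]
Q = [[F(5), F(1), F(0)],        # b_1(eps) = 5 + eps
     [F(4), F(0), F(1)]]        # b_2(eps) = 4 + eps^2
Qhat = mat_mul(B, Q)            # coefficient matrix of B * b(eps)

eps = F(1, 10)
b = [poly_eval(row, eps) for row in Q]                     # b(eps)
Bb = [sum(B[i][l] * b[l] for l in range(2)) for i in range(2)]
bhat = [poly_eval(row, eps) for row in Qhat]
print(Bb == bhat)  # True: evaluating BQ agrees with multiplying B by b(eps)
```

So a pivot operation on the perturbed problem only needs to act on the (m+1) columns of Q, exactly as it acts on the single column b.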

Remark 15: We will apply the simplex algorithm to the perturbed problem (P(ε)). We must specify how Steps 3 and 4 are performed (Steps 1 and 2 are unchanged).

Step 3: For i ∈ I, b_i(ε)/A_i^s is a polynomial in ε that has the same sign as b_i(ε). These polynomials can be compared using the order relation of Definition 3. Thus, we can define the index set (corresponding to K):

(10)    L = { i ∈ I : b_i(ε)/A_i^s = Min_{l∈I} [ b_l(ε)/A_l^s ] }

and we will choose r ∈ L (we will see later that |L| = 1).

Step 4: As we have seen in Chapter III, a pivot operation is equivalent to a matrix multiplication (by the pivot matrix P(r,s), see Section III.4). From Remark 14, we see that the pivot operation of Step 4 will have to be performed on the (m+1) × (m+n+1)-matrix

    [ A    Q ]
    [ c̄    ζ ]

instead of the matrix

    [ A    b ]
    [ c̄    z̄ ]

where ζ is the (m+1)-row vector, all coefficients of which are equal to 0 at the beginning.

Remark 16: At each iteration, |L| = 1. Suppose that this is not true and let k', k'' ∈ L. We have

    b_k'(ε)/A_k'^s = b_k''(ε)/A_k''^s

i.e., the matrix Q of coefficients of b(ε) has two proportional rows. But this is impossible since, from Remark 15, Q is defined by

    Q = (A^J)^{-1} Q_init

where Q_init is the value of Q at the beginning of the algorithm. Since Q_init is of rank m (it contains the unit matrix) and since (A^J)^{-1} is nonsingular, Q is also of rank m and cannot have two proportional rows.

Remark 17: At each iteration of the simplex algorithm applied to the perturbed problem (P(ε)), the polynomials b_i(ε) are positive (see Definition 3).
This property is true at the beginning. Let us show that if it is true at some iteration, it is also true at the next one. After a pivot operation, we have

    b'_r(ε) = b_r(ε)/A_r^s ≻ 0
    b'_i(ε) = b_i(ε) - A_i^s b_r(ε)/A_r^s        for i ≠ r

From (10), these polynomials are positive or zero. Since |L| = 1, they are strictly positive.

Remark 18: At each iteration of the simplex algorithm applied to the perturbed problem (P(ε)), the value of the objective function corresponding to the basic solution associated with the current feasible basis is

    ζ^0 + ζ^1 ε + ζ^2 ε^2 + ... + ζ^m ε^m

where the coefficients ζ^k are defined by the succession of pivot operations of Remark 15. The increase of the value of the objective function from one iteration to the next is thus

    c̄^s b_r(ε)/A_r^s

which is -- from Remark 17 -- a positive polynomial.

Theorem 3: The simplex algorithm applied to the perturbed program (P(ε)) is always finite.

Proof: The argument is the same as the one used for the proof of Theorem 2. The value of the objective function -- which is now a polynomial in ε -- strictly increases from iteration to iteration. Thus one cannot meet a basis that was found before. Since the number of bases is finite, so is the algorithm.

Remark 19: We now have a procedure that solves any linear program (written in canonical form with respect to a feasible basis) in a finite number of iterations. It suffices to solve the associated perturbed program (P(ε)) and to let ε = 0 when an optimal solution has been found.

Remark 20: What the artifact of the "perturbed program (P(ε))" and the definition of "positive polynomials" in fact bring is a more sensitive way of comparing rows in case of a tie in the initial linear program (since the constant term of the polynomials b_i(ε) plays a dominant role in the comparison rule).
This method can be presented differently in terms of lexicographic ordering. See Exercise 13.
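In lexicographic terms, the comparison of Remark 15 amounts to this: divide each candidate row of ε-coefficients by its pivot-column entry and keep the row whose resulting vector is lexicographically smallest (Remark 16 says it is unique). A sketch in Python (our own illustration):

```python
from fractions import Fraction as F

def lexico_ratio_test(a, Q):
    """a[i]: pivot-column entry of row i; Q[i]: the row of epsilon-
    coefficients of row i (b_i first, then the coefficients of eps,
    eps^2, ...). Return the row minimizing Q[i]/a[i] lexicographically
    over the rows with a[i] > 0."""
    best = None
    for i, (ai, row) in enumerate(zip(a, Q)):
        if ai > 0:
            key = tuple(c / ai for c in row)   # tuple '<' is lexicographic
            if best is None or key < best[0]:
                best = (key, i)
    return best[1]

# Rows 0 and 1 are tied on the plain ratio b_i/a_i (both are 0); the
# eps-coefficients break the tie: eps^2/4 is smaller than eps/2 for all
# small eps > 0, so row 1 is chosen.
a = [F(2), F(4), F(-1)]
Q = [[F(0), F(1), F(0)],
     [F(0), F(0), F(1)],
     [F(3), F(0), F(0)]]
print(lexico_ratio_test(a, Q))  # 1
```

Rows with a nonpositive pivot-column entry never compete, exactly as in the ordinary ratio test.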

EXERCISES

1. Let J be a feasible basis and linear program (P) be written in canonical form with respect to J. Assume that s ∉ J and c̄^s > 0. Show that if A^s ≤ 0, (PP(J,s)) -- as defined in Remark 5 -- has no optimal solution and that the same is true for (P).

2. Solve completely the linear program of Exercise IV.1.

3. Solve the linear program

    4x1 + 4x2 + 4x3 + x4 ≤ 44
    8x1 + 6x2 + 4x3 + 3x4 ≤ 36        x_i ≥ 0,  i = 1, 2, 3, 4
    5x1 + x2 + 6x3 + 2x4 = z(Max)

Compare your result with what was obtained in solving Exercise IV.5.

4. Solve by the simplex algorithm

    2x1 + x2 ≤ 5
    x1 - x2 ≤ 2        x1, x2 ≥ 0
    x1 + x2 ≤ 3
    3x1 + 2x2 = z(Max)

Give a graphic illustration.

5. Consider the linear program

    (P)    x2 + 3x3 + 2x4 ≤ 8
           3x1 + 2x2 + 2x3 + x4 ≤ 7        x_i ≥ 0,  i = 1, 2, 3, 4
           2x1 + 3x2 + 2x3 + 3x4 = z(Max)

(a) Write the dual (D) of (P) and give a graphic solution of (D).

(b) Solve (P) by the simplex algorithm and check the solution of (D).

(c) Call E the matrix of coefficients of the slack variables x5 and x6 in the canonical form relative to the optimal basis. Check that

    E = …

and explain why.

(d) Check that (-c̄^5, -c̄^6) is the optimal solution of (D) and explain why.

6. Solve the linear program

    -x1 + 6x2 ≤ 54
    x1 + 2x2 ≤ 14        x1, x2 ≥ 0
    3x1 - x2 ≤ 9
    3x1 + x2 = z(Max)

using the simplex algorithm.

7. Use the simplex algorithm to prove the implication

    2x1 + x2 ≤ …
    4x1 - 3x2 ≤ …        x1, x2 ≥ 0        ⇒        x1 + x2 ≤ 5
    -3x1 + 2x2 ≤ …

8. Use the simplex algorithm to prove the implication

    2x + y ≤ 3
    -2x + y ≤ …        x, y ≥ 0        ⇒        x + y ≤ 5/2
    x ≤ …

Show that we can obtain (x + y) ≤ 5/2 using only part of the hypotheses. Weaken the hypotheses to obtain the strongest implication. Check on a graphic.

9. Let

    (P)    Ax = b        x ≥ 0
           cx = z(Max)

be a linear program written in canonical form with respect to an optimal basis J.

(a) Show that if c̄^j < 0 for j ∉ J, then the optimal solution is unique.

(b) Show on an example that the optimal solution may be unique with, at the same time, some c̄^j = 0 for j ∉ J.

10. Consider the linear program

    (P)    x1 + 4x2 + 2x3 + 3x4 ≤ 20
           2x1 + x2 + 3x3 + x4 ≤ 6        x_i ≥ 0
           7x1 + 11x2 + 12x3 + 9x4 = z(Max)

(a) Find an optimal solution of (P). Is this solution unique?

(b) The coefficient of x3 in the objective function is changed from 12 to 12 - δ (δ ≥ 0). What is the variation of the optimal solution when δ varies?

(c) The right-hand side of the first inequation changes from 20 to 19. What is the corresponding variation of the optimal solution?

11. Show that if λ(ε) is not identically 0 and is not positive, then -λ(ε) is positive. Show that the sum of two positive polynomials is a positive polynomial. Show that the relation "≻" is a total order on polynomials.

12. Let (P) be a linear program written in canonical form with respect to a feasible basis J_0. Let G = (X, U) be the directed graph defined in the following way:

(i) There is a one-to-one correspondence between the nodes of G and the feasible bases of (P).

(ii) There is an arc from node x_J (corresponding to basis J) to node x_J' (corresponding to basis J') if and only if an iteration of the simplex algorithm (as presented in Section 3) may lead from J to J'.

(a) Is it possible that graph G include directed cycles? What about directed cycles of length 2?

(b) Show that there exists an elementary path in G from x_{J_0} to x_{J*}, where x_{J*} corresponds to the optimal basis J*.

(c) Show that if the rule of choice of the pivot row r ∈ K is the following: "Choose one r in K with equal probabilities among the elements of K," the algorithm is finite with probability 1.

13. Indices of components of vectors of R^k are supposed to be ordered (take, for instance, the "natural order" 1 < 2 < ... < k). We define the "lexicographic" order on vectors of R^k in the following way: a ∈ R^k is ℓ-positive if a ≠ 0 and if the first nonzero component of a is positive. We note

    a ≻_ℓ 0

a is ℓ-greater than b if a - b ≻_ℓ 0. We note

    a ≻_ℓ b

(a) Show that the order in which the words are written in a dictionary is the lexicographic order.

(b) Given a linear program written in canonical form with respect to a feasible basis J:

    (P)    Ax = b        x ≥ 0
           cx = z(Max)

we associate with (P) an ℓ-linear program

    (P_ℓ)    AX = B        X_j ≻_ℓ 0 or X_j = 0,  j = 1, 2, ..., n
             cX = Z(ℓ-Max)

where B is the m × (m+1) matrix (b, U), X is the n × (m+1) matrix of unknowns, and Z is an (m+1)-row vector (of unknowns). Show that we can define (using the lexicographic order of vectors of R^{m+1}) a finite algorithm to solve (P_ℓ).

(c) Show that if X* is an optimal solution of (P_ℓ), then x = (X*)^1, the first column of X*, is an optimal solution of (P).

(d) Deduce a finite process to solve any linear program and show that it is identical to the method developed using perturbed programs.

(e) Solve the example of Remark 12.


Chapter VI. The Two Phases of the Simplex Method:
Theoretical Results Proved by Application of the Simplex
Method

We now have to show how a given linear program can be written in canonical form with respect to a feasible basis (when the solution set is not empty). We then have to prove that a linear program with an optimal solution has a basic optimal solution. This result can be used to prove various theorems.

1. The Two Phases of the Simplex Method

Remark 1: Recall that a linear program

    (P)    Ax = b        x ≥ 0
           cx = z(Max)

is said to be written in canonical form with respect to the feasible basis J if the following three conditions are satisfied:

(1) A^J is, up to a permutation of its columns, the unit matrix. This permutation is defined by the mapping "col" (cf. Remark V.1).
(2) b ≥ 0.
(3) c_J = 0.

However, if conditions (1) and (2) are satisfied but not condition (3), we pose

    y_i = c^{col(i)}

and the linear program

    (P')    Ax = b        x ≥ 0
            (c - yA)x = z(Max) - yb

is equivalent to (P) and satisfies (1), (2), and (3). Thus we will consider that it suffices to fulfill conditions (1) and (2).

Definition 1: Let us consider the linear program

    (P)    Ax = b        x ≥ 0
           cx = z(Max)

We make no assumption on (P): the linear system Ax = b is not necessarily of full rank, and we do not assume that the set of feasible solutions to (P) is nonempty. However, without loss of generality (by multiplying by -1 the equations for which b_i < 0), we assume that

(2)    b ≥ 0

We associate with linear program (P) an auxiliary linear program

    (PA)    Ax + Uv = b        x, v ≥ 0
            Σ_{i=1}^{m} v_i = ψ(Min)

where U is the m×m unit matrix and v_1, v_2, ..., v_m are called "artificial variables."

Remark 2: The i-th constraint of (PA) is written

    A_i x + v_i = b_i

i.e., v_i measures the difference between the right-hand side b_i and A_i x. When all artificial variables are equal to 0, x, the feasible solution of (PA), is thus a feasible solution of (P).

Remark 3: We apply to (PA) the process of Remark 1: let e be the m-row vector each component of which is equal to 1. (PA) is equivalent to

    (PA')    Ax + Uv = b        x, v ≥ 0
             -eAx = ψ(Min) - eb

(PA') is written in canonical form with respect to the feasible basis

    J_0 = {n+1, n+2, ..., n+m}

Theorem 1: The simplex algorithm (as presented in Chapter V) can be applied to (PA'). In a finite number of iterations, we end up with an optimal basic solution.

Proof: The objective function of (PA') satisfies

    ψ = Σ_{i=1}^{m} v_i ≥ 0        since v_i ≥ 0,  i = 1, 2, ..., m

Thus, the algorithm cannot terminate with a set of feasible solutions for which the objective function is unbounded (ψ → -∞ is impossible). From Chapter V, the algorithm terminates (in a finite number of iterations) with an optimal basic solution.

Theorem 2: Let (x̄, v̄) be an optimal solution of (PA') (and thus of (PA)). A necessary and sufficient condition for (P) to have a feasible solution is that

(4)    v̄_i = 0        i = 1, 2, ..., m

Proof: If condition (4) is satisfied, x̄ is obviously a feasible solution of (P). Assume that (P) has a feasible solution x and that condition (4) is not satisfied. (x, v = 0) is then a feasible solution of (PA), and we have

    Σ_{i=1}^{m} v_i = 0 < Σ_{i=1}^{m} v̄_i

which contradicts the fact that (x̄, v̄) was, by assumption, an optimal solution of (PA).

Theorem 3: Let

    (PA)    Āx + B̄v = b̄        x, v ≥ 0
            c̄x + d̄v = ψ(Min)

be the linear program (PA) written in canonical form with respect to an optimal basis (which we know to exist from Theorem 1). Then if

    Ā_i = 0,    b̄_i = 0

and v_r is the basic variable in the i-th row, the r-th constraint of (P) (i.e., A_r x = b_r) was redundant.

Proof: The canonical form (PA) has been obtained through a sequence of pivot operations. The i-th equation of (PA) can thus be considered as a linear combination of the equations of the linear system

    Ax + Uv = b

the coefficient of the r-th equation in this linear combination being different from 0. The very same linear combination applied to the linear system

    Ax = b

gives the redundant equation

    0x = 0

The theorem is proved.

Theorem 4: If (P) has a feasible solution, there exists an optimal basis for (PA) containing no index r corresponding to an artificial variable v_r.

Proof: Assume that v_r is a basic variable in (PA), in the r-th row say. With no loss of generality, we can assume that B̄_r^r = 1. From Theorem 2, we have

    b̄_r = 0

(a) If Ā_r = 0, we know from Theorem 3 that the r-th equation of (PA) was redundant. We can thus suppress the r-th equation of (PA).

(b) If Ā_r ≠ 0, let Ā_r^s ≠ 0. We can then make a pivot operation on element Ā_r^s: the variable v_r leaves the basis and the variable x_s enters the basis (without change of the basic solution, since b̄_r = 0).

We can repeat the process on all artificial basic variables.

Definition 2: The solution of linear program (PA), followed by the expulsion of artificial basic variables according to the process described in the proof of Theorem 4, is called "phase I of the simplex method." After phase I has been accomplished, the linear program (P) itself is written in canonical form with respect to a feasible basis (if a feasible solution exists), and the simplex algorithm can be applied to (P): this part is called "phase II of the simplex method."
We follow here the denominations proposed by G. Dantzig: the simplex method is composed of two phases. In each of them the simplex algorithm is applied (to the auxiliary problem in phase I, to (P) in phase II).

Remark 4: Instead of (PA), we could have associated

    (P̄A)    Ax + Uv = b
             -cx + z = 0                x, v ≥ 0
             Σ_{i=1}^{m} v_i = ψ(Min)

as the auxiliary problem, in which z is a nonconstrained basic variable (and thus will never be a candidate to leave the basis). If (P̄A) is solved instead of (PA), then at the end of phase I the linear program is written in canonical form with respect to a feasible basis ((1), (2), and (3) are satisfied), and the operations described in Remark 1 need not be performed. This is what is generally done.

Remark 5: If (P) contains a variable, say x1, which is not constrained to be positive or zero, we can go back to the general case using the method of Chapter I (see Remark I.10). This is correct but not very clever. We will prefer expressing x1 as a function of the other variables in an equation where x1 has a nonzero coefficient, replacing x1 by this value in the other equations and in the objective function, and solving the reduced linear program thus obtained. Finally, the value of x1 for the optimal solution is computed from the expression giving x1 (see Exercise 3). If more than one variable is unconstrained, this procedure extends naturally.
To maintain the sparsity of matrix A, it might be advantageous not to eliminate x1 but to consider it as a basic variable which, once it has entered the basis, will never be a candidate to leave it (the same argument as in Remark 4 -- see Chapter VII).

Remark 6: In linear program (P), it may happen that a variable, say x_s, is contained in only one equation, say the r-th equation. Then if A_r^s b_r > 0, it is not necessary (and in fact, it is rather clumsy) to introduce an artificial variable corresponding to this equation. We prefer to consider x_s as the basic variable corresponding to the r-th equation.

Examples: Assume that we have to solve

    (P_α)    αx1 + 2x2 + x3 = 2
             x1 + x2 + 5x3 = 12                x1, x2, x3, x4 ≥ 0
             x1 + 2x2 + 6x3 + x4 = 13
             x1 + x2 + x3 + x4 = z(Min)

We need only add artificial variables for the first and second equations; x4 will be the basic variable associated with the third equation (see Remark 6). The auxiliary problem is written

    (PA_α)    αx1 + 2x2 + x3 + v1 = 2
              x1 + x2 + 5x3 + v2 = 12          x_i ≥ 0,  i = 1, 2, 3, 4;   v1, v2 ≥ 0
              x1 + 2x2 + 6x3 + x4 = 13
              x2 + 5x3 + z = 13
              v1 + v2 = ψ(Min)

(the fourth equation carries the original objective z = x1 + x2 + x3 + x4, with x4 eliminated by means of the third equation). Subtracting the first two equations from the objective function, we obtain

    -(α+1)x1 - 3x2 - 6x3 = ψ(Min) - 14

and (PA_α) is written in canonical form with respect to a feasible basis.



FIRST CASE: α = 1. We apply the simplex algorithm to (PA_1):

         x1    x2    x3    x4    v1    v2  |    b
          1     2     1     0     1     0  |    2    ←
          1     1     5     0     0     1  |   12
          1     2     6     1     0     0  |   13
    z     0     1     5     0     0     0  |   13
    ψ    -2    -3    -6*    0     0     0  |  -14

          1     2     1     0     1     0  |    2
         -4    -9     0     0    -5     1  |    2
         -5   -10     0     1    -6     0  |    1
    z    -5    -9     0     0    -5     0  |    3
    ψ     4     9     0     0     6     0  |   -2

This basis is optimal. The corresponding value of ψ is positive. From Theorem 2, (P_1) has no feasible solution. Take an equation of (PA_1) where an artificial variable is basic and the right-hand side is nonzero:

    -4x1 - 9x2 - 5v1 + v2 = 2

In terms of the initial variables, this equation becomes

    -4x1 - 9x2 = 2

which is clearly infeasible. In other words, phase I of the simplex method has produced a linear combination of the equations of the system (subtract 5 times the first equation from the second one) which is an infeasible equation. (P_1) has no feasible solution. We stop.

SECOND CASE: α = 0. We apply the simplex algorithm to (PA_0):

         x1    x2    x3    x4    v1    v2  |    b
          0     2     1     0     1     0  |    2    ←
          1     1     5     0     0     1  |   12
          1     2     6     1     0     0  |   13
    z     0     1     5     0     0     0  |   13
    ψ    -1    -3    -6*    0     0     0  |  -14

          0     2     1     0     1     0  |    2
          1    -9     0     0    -5     1  |    2
          1   -10     0     1    -6     0  |    1    ←
    z     0    -9     0     0    -5     0  |    3
    ψ    -1*    9     0     0     6     0  |   -2

          0     2     1     0     1     0  |    2
          0     1     0    -1     1     1  |    1    ←
          1   -10     0     1    -6     0  |    1
    z     0    -9     0     0    -5     0  |    3
    ψ     0    -1*    0     1     0     0  |   -1

          0     0     1     2    -1    -2  |    0
          0     1     0    -1     1     1  |    1
          1     0     0    -9     4    10  |   11
    z     0     0     0    -9     4     9  |   12
    ψ     0     0     0     0     1     1  |    0

Phase I is performed. We have found a feasible basis for (P_0):

    x3 + 2x4 = 0
    x2 - x4 = 1        x1, x2, x3, x4 ≥ 0
    x1 - 9x4 = 11
    9x4 = z(Min) - 12

Here the initial basis found at the end of phase I is also optimal; i.e., phase II is finished at the same time as phase I. This is just due to good luck.
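The two cases above can be reproduced mechanically. The sketch below is our own generic two-phase implementation (hypothetical helper names, Bland's rule so that termination is guaranteed, and -- unlike the book's shortcut of Remark 6 -- an artificial variable on every row); it reports infeasibility for α = 1 and the optimal cost 12 for α = 0, as in the text.

```python
from fractions import Fraction as F

def pivot(T, basis, r, s):
    piv = T[r][s]
    T[r] = [a / piv for a in T[r]]
    for i in range(len(T)):
        if i != r and T[i][s] != 0:
            T[i] = [a - T[i][s] * t for a, t in zip(T[i], T[r])]
    basis[r] = s

def simplex_min(T, basis, ncols):
    """Minimize, with Bland's rule (smallest-subscript entering/leaving)."""
    while True:
        s = next((j for j in range(ncols) if T[-1][j] < 0), None)
        if s is None:
            return
        rows = [(T[i][-1] / T[i][s], basis[i], i)
                for i in range(len(basis)) if T[i][s] > 0]
        if not rows:
            raise ValueError("unbounded")
        pivot(T, basis, min(rows)[2], s)

def two_phase(A, b, c):
    """Return (x, min cx) for Ax = b, x >= 0, or None if infeasible."""
    m, n = len(A), len(A[0])
    T = [A[i][:] + [F(int(i == j)) for j in range(m)] + [b[i]] for i in range(m)]
    T.append([(F(1) if n <= j < n + m else F(0)) - sum(T[i][j] for i in range(m))
              for j in range(n + m + 1)])          # reduced costs of psi
    basis = list(range(n, n + m))
    simplex_min(T, basis, n + m)                   # phase I
    if T[-1][-1] != 0:                             # psi > 0: infeasible
        return None
    # expel basic artificials, dropping redundant rows (proof of Theorem 4)
    for i in reversed(range(len(basis))):
        if basis[i] >= n:
            s = next((j for j in range(n) if T[i][j] != 0), None)
            if s is None:
                del T[i]
                del basis[i]
            else:
                pivot(T, basis, i, s)
    # phase II: restore the true objective, written in reduced form
    T[-1] = [F(v) for v in c] + [F(0)] * (m + 1)
    for i, j in enumerate(basis):
        if T[-1][j] != 0:
            T[-1] = [a - T[-1][j] * t for a, t in zip(T[-1], T[i])]
    simplex_min(T, basis, n)
    x = [F(0)] * n
    for i, j in enumerate(basis):
        x[j] = T[i][-1]
    return x, -T[-1][-1]

def P_alpha(alpha):                    # the example (P_alpha) of the text
    A = [[F(alpha), F(2), F(1), F(0)],
         [F(1), F(1), F(5), F(0)],
         [F(1), F(2), F(6), F(1)]]
    return two_phase(A, [F(2), F(12), F(13)], [1, 1, 1, 1])

print(P_alpha(1))                      # None: (P_1) is infeasible
print(P_alpha(0)[1])                   # 12: the optimal cost of (P_0)
```

The intermediate tableaux differ from the ones printed above (the pivot rules and the set of artificial variables differ), but the conclusions -- infeasibility for α = 1, optimum 12 for α = 0 -- are the same.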

Remark 7: Nonbasic artificial variables do not play any role. We can as well forget them. Thus when an artificial variable leaves the basis, we will suppress it.

The Simplex Method

Phase I: The linear program (P) is written in standard form. Multiply by -1 the equations for which the right-hand side is negative. Define the auxiliary problem as in Definition 1. The auxiliary problem is written in canonical form with respect to a feasible basis.
Solve the auxiliary problem using the simplex algorithm. The auxiliary linear program is then written in canonical form with respect to an optimal basis.

(a) If the value of the objective function is positive, the linear program (P) has no feasible solution. STOP.

(b) If the value of the objective function is equal to zero, apply repetitively the following process until all artificial basic variables are eliminated:

    Let v_r be a basic artificial variable. Perform a pivot operation that leads v_r out of the basis. If such a pivot operation is impossible, suppress the corresponding equation.

When there is no more artificial variable in the basis, (P) is written in canonical form with respect to a feasible basis.

Phase II: Apply the simplex algorithm to the linear program (P) as written at the end of Phase I.

2. Results That Can Be Proved by the Simplex Method

The simplex method is a finite process since the simplex algorithm itself is finite. From this finiteness, we deduce the following results:

Theorem 5 (Fundamental Theorem of Linear Programming): A linear program:

(a) which has a feasible solution has a basic feasible solution;
(b) which has an optimal solution has a basic optimal solution;
(c) which has a feasible solution and the objective function of which is bounded (from above if the objective function is to be maximized, from below if it is to be minimized) has an optimal solution.

Theorem 6: If two dual programs (P) and (D) both have a feasible solution, they both have an optimal solution and the values of the objective function at the optimums are equal.

Proof: Let x, y be a couple of feasible solutions of the dual linear programs

    (P)    Ax ≤ b        x ≥ 0            (D)    yA ≥ c        y ≥ 0
           cx = z(Max)                           yb = w(Min)

From Theorem II.2, the value of the objective function of (P) is bounded by yb, and the value of the objective function of (D) is bounded by cx. From Theorem 5, (P) and (D) both have a basic optimal solution. The property is then a consequence of the corollary of Theorem IV.3.

Theorem 7 (Farkas' Lemma): Let A be an m×n-matrix and b be an m-column vector. If for any m-row vector y such that

    yA ≤ 0

we have yb ≤ 0, then x ≥ 0 exists such that

    Ax = b

Proof: Let us consider the linear program

    (P)    yA ≤ 0
           yb = z(Max)

(P) has a feasible solution y = 0. By assumption, the objective function is bounded by 0. From Theorem 5, (P) has an optimal basic solution. From the corollary of Theorem IV.3, its dual has an optimal (and thus a feasible) solution. The dual of (P) is

    Ax = b        x ≥ 0
    0x = w(Min)

Theorem 8 (Theorem of the Alternatives): One and only one of the two following systems of constraints has a solution:

    (I)    Ax = b        (II)    yA ≤ 0
           x ≥ 0                 yb > 0

Proof: (I) and (II) cannot both have solutions x and y:

    0 ≥ yAx = yb > 0

is contradictory. If system (II) has no solution, the hypotheses of Theorem 7 are fulfilled and thus (I) has a solution.

Remark 8: Other theorems are presented as exercises. Their proofs follow the same lines.
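Theorem 8 is easy to exploit numerically: to certify that Ax = b, x ≥ 0 has no solution, it suffices to exhibit one row vector y with yA ≤ 0 and yb > 0. A Python check of such a certificate (our own toy example and helper name; y was found by inspection):

```python
def certifies_infeasible(A, b, y):
    """Theorem 8 certificate: if yA <= 0 componentwise and yb > 0, then
    Ax = b, x >= 0 has no solution, since otherwise
    0 >= (yA)x = y(Ax) = yb > 0 would hold."""
    m, n = len(A), len(A[0])
    yA = [sum(y[i] * A[i][j] for i in range(m)) for j in range(n)]
    yb = sum(y[i] * b[i] for i in range(m))
    return all(v <= 0 for v in yA) and yb > 0

# x1 + x2 = 1 and x1 + x2 = 2 obviously conflict; y = (-1, 1) certifies it:
A = [[1, 1], [1, 1]]
b = [1, 2]
print(certifies_infeasible(A, b, [-1, 1]))  # True
```

Such a y is exactly what phase I of the simplex method produces on an infeasible program, as in the first case of the worked example of Section 1.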

EXERCISES

1. Solve the linear program

    (P)    x1 + x2 ≥ 2
           -x1 + x2 ≥ 3
           x1 ≥ 4
           3x1 + 2x2 = z(Min)

2. Solve the linear program

    (P)    x1 + x2 + 2x3 ≤ 6
           2x1 - x2 + 2x3 ≥ 2
           2x2 + x3 = z(Max)

3. Solve the linear program

    (P)    -x1 + x2 + x3 + x4 ≥ …
           2x1 + 7x2 + x3 + 8x4 = 7            x1 ≤ 0;   x2, x3, x4 ≥ 0
           2x1 + 10x2 + 2x3 + 10x4 = 10
           2x1 + 18x2 + 5x3 + 14x4 = z(Max)

4. Solve the linear program

    (P)    2x1 + x2 - x3 ≥ 4
           -3x2 ≥ 2
           -3x1 + 2x2 + x3 ≥ 3                 x1, x2, x3 ≥ 0
           -x1 + 3x2 - 3x3 = z(Max)

5. Consider the linear program

    (P)    x1 - x2 ≥ 1
           2x1 - x2 ≥ 6            x1, x2 ≥ 0
           x1 + x2 = z(Min)

and the associated problem

    (PA)    x1 - x2 - x3 + x5 = 1
            2x1 - x2 - x4 + x6 = 6            x_i ≥ 0,  i = 1, ..., 6
            x1 + x2 + Mx5 + Mx6 = z(Min)

where M is a positive scalar.

(a) Explain the roles played by x3, x4, x5, x6, and M. Show that if M is large enough, the solution of (PA) is equivalent to simultaneous application of phases I and II of the simplex method.

(b) Solve (PA) with M = 100.

(c) Can this method be generalized?

6. Does the following system have a solution?

    x1 + 3x2 - 5x3 = 10
    x1 - 4x2 - 7x3 = -1
    x1 ≥ -3,   x2 ≤ 2,   x3 ≥ -1

7. Is it possible to find numbers a, b, and c such that

    a^2 + b^2 + 2c^2 = 8
    18a^2 + 11b^2 + 10c^2 = 31

8. Find a non-negative solution to the system

    x1 + 2x2 - 2x3 + x4 = 0
    -x1 + 2x2 + x3 + 2x4 = 21

9. Prove the following forms of the theorem of the alternatives: One and only one of the two following systems of constraints has a solution:

(a) (I) Ax ≤ b, x ≥ 0                          (II) uA ≥ 0, u ≥ 0, ub < 0

(b) (I) Ax = b                                 (II) uA = 0, ub < 0

(c) (I) Ax < b                                 (II) uA = 0, u ≥ 0, u ≠ 0, ub ≤ 0

(d) (Gordan, 1873):
    (I) Ax = 0, x ≥ 0, x ≠ 0                   (II) uA > 0

(e) (Stiemke, 1915):
    (I) Ax = 0, x > 0                          (II) uA ≥ 0, uA ≠ 0

(f) (Motzkin, 1936):
    (I) Ax + By + Cz = 0                       (II) uA > 0
        x, y ≥ 0, x ≠ 0                             uB ≥ 0
        (A nonempty)                                uC = 0

(g) (Motzkin-Tucker):
    (I) Ax + By + Cz = 0                       (II) uA ≥ 0, uA ≠ 0
        x > 0, y ≥ 0                                uB ≥ 0
        (A nonempty)                                uC = 0

10. Using the theorem of the alternatives, prove that the following systems have no solutions:

(a)    x1 + 3x2 - 5x3 = 2
       x1 - 4x2 - 7x3 = 3            x1, x2, x3 ≥ 0

(b)    6x1 - 5x2 ≥ 7
       -2x1 - 7x2 ≥ 2                x1, x2 ≥ 0
       -x1 + 3x2 ≤ -1

(c)    -3x1 - 2x2 + x3 + 2x4 = 5
       -2x1 - x2 + 3x3 + 5x4 = 27    x_i ≥ i,  i = 1, 2, 3, 4

11. Prove that if B is a square antisymmetric matrix, the following system has a solution:

    Bx ≥ 0        x ≥ 0        Bx + x > 0

Prove that

    -Bx ≤ 0        -Bx - Ux ≤ -e        x ≥ 0

has a solution.

12. Show that if A is a square matrix such that

    x^T A x ≥ 0        for all x

then the system

    Ax ≥ 0,   x ≥ 0,   x ≠ 0

always has a solution.


Chapter VII. Computational Aspects of the Simplex
Method: Revised Simplex Algorithm; Bounded Variables

Up to this point, we have introduced linear programming and its method of solution, the simplex algorithm, in a purely theoretical fashion. We paid no attention to the effectiveness of the algorithm, nor to the fact that one has to face special problems (and in particular, numerical ones) when the algorithm is implemented on a computer. In this chapter, we give a brief account of the efficiency of the simplex algorithm (both from a theoretical and a practical standpoint). Then we show how harmful rounding errors may be for a practical application. In the third section, we describe the "revised simplex algorithm," which has essentially been designed for computer implementation. Finally, we describe how bounded variables can be taken care of in a way that reduces computation as much as possible.

1. Efficiency of the Simplex Algorithm

We mentioned in Section I.5 (and will prove in Chapter VIII) that the simplex algorithm finds an "optimal" vertex of the polyhedron of feasible solutions of a linear program (P) after a journey along the edges of this polyhedron. This procedure may seem rather clumsy: why stay on the border of the domain of feasible solutions instead of going "directly" (but how?) to the optimal point?
However, the experience of solution of numerous linear programs leads to the conclusion that the algorithm is surprisingly efficient. For problems with m < 50 and m+n ≤ 200, Dantzig reported (cf. [2]) that the number of iterations is of the order of magnitude of 3m/2, rarely running over 3m. More recent experiments conducted on problems of larger size lead to an analogous conclusion. The number of iterations depends very loosely on the number of variables: some authors suggest that, for fixed m, the number of iterations is proportional to the logarithm of n on the average. This brilliant performance is certainly an important factor of the success that the simplex method has won, even if no satisfying explanation has been given of this efficiency.


However, this rough estimate, which is of prime importance to the practitioner, does not give a bound on the number of iterations. The simplex algorithm is "nonpolynomial" in the sense that it is always possible to construct examples for which the number of iterations is not bounded by a polynomial in n and m (in fact, the precise meaning of a nonpolynomial algorithm is slightly more sophisticated, but we will be content here with this approximation). We now present an example constructed by Klee and Minty (see Exercise 8):

    2 Σ_{j=1}^{i-1} 10^{i-j} x_j + x_i ≤ 100^{i-1}    i = 1,2,...,n

    x_j ≥ 0    j = 1,2,...,n

    Σ_{j=1}^{n} 10^{n-j} x_j = z(Max)

which requires 2^n - 1 iterations of the simplex algorithm when the criterion to choose the entering variable is c̄^s = Max_j [c̄^j]. Actually, examples can be constructed that beat any rule. But these examples are of an academic nature. On "current" (note the lack of rigorous meaning of this term) problems, the algorithm works very well.
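The construction can be checked numerically. Below is a sketch in Python with NumPy (the tableau routine, its tolerances, and all names are ours, not the book's) that runs a Dantzig-rule simplex on the instance for n = 3; it visits all 2^3 = 8 vertices, i.e., performs 2^3 - 1 = 7 iterations.

```python
import numpy as np

def simplex_dantzig(A, b, c):
    """Tableau simplex for max cx s.t. Ax <= b, x >= 0 (b >= 0 assumed).
    Entering column chosen by Dantzig's rule: largest reduced cost.
    Returns (optimal value, number of pivot iterations)."""
    m, n = A.shape
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)        # slack columns: the starting basis
    T[:m, -1] = b
    T[m, :n] = c                      # reduced-cost row; T[m, -1] holds -z
    iters = 0
    while True:
        s = int(np.argmax(T[m, :-1]))
        if T[m, s] <= 1e-9:           # no positive reduced cost: optimal
            return -T[m, -1], iters
        col = T[:m, s]
        with np.errstate(divide="ignore", invalid="ignore"):
            ratios = np.where(col > 1e-9, T[:m, -1] / col, np.inf)
        r = int(np.argmin(ratios))    # minimum-ratio (blocking) row
        T[r] /= T[r, s]               # pivot on (r, s)
        for i in range(m + 1):
            if i != r:
                T[i] -= T[i, s] * T[r]
        iters += 1

# Klee-Minty instance for n = 3.
A = np.array([[1.0, 0, 0], [20, 1, 0], [200, 20, 1]])
b = np.array([1.0, 100, 10000])
c = np.array([100.0, 10, 1])
z, it = simplex_dantzig(A, b, c)      # z = 10000 after 7 iterations
```

The optimum is the single vertex x = (0, 0, 10000), yet the largest-reduced-cost rule walks through every vertex of the distorted cube before reaching it.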
Recently [1979], Khachian proposed an algorithm that solves linear programs using a number of elementary operations (such as additions, comparisons, and multiplications) which is bounded by a polynomial in a quantity that expresses the size of the program. This beautiful mathematical result is very likely to be of moderate practical use. Although it is polynomial, Khachian's algorithm seems to be far less efficient than the simplex to solve most, if not all, linear programs.

2. Nume r ica l Pi t f a ll s

Th e presenta tion o f th e s i mp l e x a lgo ri t hm i n p rece di n g c ha pt e r s wa s co m-


p le t e ly ab s trac t in t he s e nse that we a ss umed t h a t the ca lcu lation s were co n duc te d
wi th a perfe ct p recis i on . But it is we l l known t h at n umbers i n a comput e r a re
r epre s e nt e d on ly with a ce rtain pre ci s ion (of d d i gi t s , s ay) : t he y are rounded
o ff .
To g i ve an e xa mp le of th e t yp e of prob le m roundi ng o ff c an l e ad to , let
us try and s o l ve th e fo llowi ng lin e ar sy s tem by t he Gauss-Jorda n e limin at i on
method , a s sumi ng t hat n umbe r s are rep r e s e nte d wi th th r ee si gnifican t d i git s :
Section 2. Numerical Pilfalls III

Using th e f irst equation to e l imi nate xl and rounding off i nte r me diate r esult s
to thre e s i gni f ican t dig it s, we obtain

0,

a clearly improper result. The reason for this error is that we pivoted on 0.001, which is too small a coefficient relative to the other coefficients of matrix A and to the precision used. This difficulty is prevented by numerous tricks in linear programming computer codes (scaling, search for a pivot of reasonable size, etc.).
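The phenomenon is easy to reproduce. The sketch below (Python; the 2x2 system and the three-significant-digit rounding helper are illustrative choices of ours, not the system used in the text) eliminates x1 with a tiny pivot, then repeats the computation after a row interchange, as a code with a "search for a pivot of reasonable size" would do.

```python
import math

def rnd(x, d=3):
    """Round x to d significant digits, imitating a d-digit machine word."""
    if x == 0.0:
        return 0.0
    return round(x, d - 1 - int(math.floor(math.log10(abs(x)))))

def solve2(a11, a12, b1, a21, a22, b2):
    """Gaussian elimination on a 2x2 system, pivoting on a11 and rounding
    every intermediate result to three significant digits."""
    m = rnd(a21 / a11)                        # elimination multiplier
    a22p = rnd(a22 - rnd(m * a12))            # second equation after elimination
    b2p = rnd(b2 - rnd(m * b1))
    x2 = rnd(b2p / a22p)
    x1 = rnd(rnd(b1 - rnd(a12 * x2)) / a11)   # back substitution
    return x1, x2

# System: 0.0001*x1 + x2 = 1,  x1 + x2 = 2  (true solution: x1 and x2 near 1).
bad = solve2(0.0001, 1.0, 1.0, 1.0, 1.0, 2.0)    # pivot on 0.0001: x1 = 0.0
good = solve2(1.0, 1.0, 2.0, 0.0001, 1.0, 1.0)   # rows interchanged: x1 = 1.0
```

Pivoting on the tiny coefficient wipes out x1 entirely, while the interchanged version, with a pivot of reasonable size, is correct to the working precision.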
Another effect of rounding off is the problem of "recognizing zero." Numbers that are small in absolute value have an uncertain sign, and thus (for instance) the optimality condition

    (c(J))^j ≤ 0    for all j

will have no precise meaning. Thus in computer codes, zero must be defined through two small positive numbers, t1 and t2, called "zero tolerances." A scalar a is considered to be equal to zero if

    |a| ≤ t1

Other zero tolerances will be used: for instance, we will refuse introducing x_s into the basis if this introduction leads to a pivot on element (A(J))^s_r such that

    |(A(J))^s_r| ≤ t2

(see above). Needless to say, a proper definition of these zero tolerances (which might be adaptive with the evolution of the algorithm) requires some skill and experience.
Finally, experience shows that for most practical linear programs, matrix A is very sparse (1% of nonzero terms, and even less, is frequent for matrices of large size). If this sparsity is maintained, the number of operations can be drastically decreased, whereas if the problem is treated as in Chapter V, there is no reason why the sparsity would stay (there will be "fill-in").


Computer codes will use several devices (and in particular, the so-called revised simplex algorithm) to take advantage of the built-in sparsity of matrix A without destroying it.
The conclusion of this brief section is that practical implementation of the simplex algorithm is far away from the elementary algorithm presented in Chapter V. Codes have been written that are available on most computers. These codes represent several person-years of work by specialists and can handle linear programs with several thousand constraints and a quasi-unlimited number of variables. It is not advisable, when one has to solve a linear program, to code the simplex algorithm from scratch. Utilization of one of these codes is to be preferred; the user notice is easy to understand for any reader of the present book.

3. Revised Simplex Algorithm

As we announced in the preceding section, the simplex algorithm is, in general, not implemented in the form under which it is presented in Chapter V. To limit the propagation of round-off errors, to deal in a better way with sparse matrices, and to decrease the number of operations, the "revised" form is preferred. We insist on the fact that, basically, the algorithm remains the same and that it is solely its implementation that changes. We refer to the simplex algorithm as presented in Section V.3.

Remark 1: Let J be a feasible basis of the linear program

         Ax = b    x ≥ 0
    (P)
         cx = z(Max)

We let

    (1)  A(J) = (A^J)^{-1} A    (we have (A(J))^J = I)

    (2)  b(J) = (A^J)^{-1} b

    (3)  π(J) = c^J (A^J)^{-1}

    (4)  c(J) = c - π(J) A

    (5)  ζ(J) = π(J) b

Written in canonical form with respect to basis J, the linear program (P) becomes

           A(J) x = b(J)    x ≥ 0
    (P_J)
           c(J) x = z(Max) - ζ(J)

In the revised simplex algorithm we know, at each iteration, the feasible basis J, but we do not know all the coefficients of linear program (P_J). The principle is to compute only those coefficients that are really needed. For Step 1, we need c(J): π(J) is computed by (3) and then c(J) by (4). For Step 2, we need (A(J))^s, which is given by

    (1')  (A(J))^s = (A^J)^{-1} A^s

For Step 3, we need b(J), given by (2). Then the new basis

    J := J ∪ {s} \ {col(r)}

is determined. But the pivot operation of Step 4 is not performed.

Remark 2: The main advantage of the revised simplex algorithm is not so much a reduction in the number of operations (see Exercise 1) as the fact that round-off errors cannot propagate, since we always work with the initial matrix of data A.
Linear programs in which n (the number of variables) is much larger than m (the number of equations) are frequently met. It may then happen that matrix A (or A(J)) cannot be held in the central memory of the computer, whereas A^J (or (A^J)^{-1}) is of sufficiently limited size to be contained in the fast memory. It suffices then to store matrix A on a peripheral device, by columns. Then if π(J) has been computed by (3), we can compute, using (4), successively the coefficients of c(J) by calling each column of A, one at a time. Recall that we do not have to look for Max_j (c(J))^j, since we can stop as soon as an index s such that (c(J))^s > 0 has been found. Note that, very often, computer codes will look for the largest among the p first positive (c(J))^j met; this type of heuristic approach may improve code efficiency.

Remark 3: The disadvantage of the revised simplex algorithm is that we have to compute, for each iteration, the inverse of the basic matrix, (A^J)^{-1} (which takes a number of operations of the order of m^3). From what we have seen in Chapter III, we know that

    (6)  (A^{J'})^{-1} = P(r,s) (A^J)^{-1}

where P(r,s) is the pivot matrix (cf. Section III.4), which has a single nontrivial column. It is easy to check that (6) can be performed with m^2 additions and m^2 multiplications.
If we call D1, D2, ..., D_q the q first pivot matrices and J_q the basis after the qth iteration, we have

    (6')  (A^{J_q})^{-1} = D_q D_{q-1} ... D1

(assuming that the initial basic matrix is the identity), and this matrix can be stored under a compact form (at least if q < m) using the nontrivial columns of matrices D_k. (6') is called, in linear programming literature, "the product form of the inverse."
In fact, as soon as q reaches a certain value, this method is not very good: round-off errors appear and the sequence of nontrivial columns is heavy to take care of. Thus, in the computer codes that are available, the following compromise is adopted: from time to time, the basic matrix A^J is completely inverted, and this allows a new start with fresh numerical data. The program decides itself when to perform this reinversion. The decision is taken as a function of:

(a) Controls that the code can do by itself; for instance, how far from zero is (c(J))^J?

(b) The value of q.
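The update (6) can be sketched in code as follows (Python with NumPy; the data and helper name are illustrative, not from the book). Multiplying by the pivot matrix, whose only nontrivial column is column r, costs O(m^2) rather than the O(m^3) of a fresh inversion.

```python
import numpy as np

def eta_update(Binv, a_s, r):
    """Product-form update (6): given Binv = (A^J)^{-1} and the entering
    column a_s = A^s replacing basic column r, return (A^{J'})^{-1}.
    Exploits that P(r,s) differs from the identity only in column r."""
    v = Binv @ a_s                 # (A(J))^s, the entering column updated
    eta = -v / v[r]                # nontrivial column of the pivot matrix
    eta[r] = 1.0 / v[r]
    new = Binv + np.outer(eta, Binv[r])
    new[r] = eta[r] * Binv[r]      # row r is scaled, not accumulated
    return new

# Check against a direct inverse on illustrative 2x2 data.
B = np.array([[2.0, 1.0], [1.0, 3.0]])
a_s = np.array([1.0, 1.0])
Bp = B.copy()
Bp[:, 0] = a_s                     # basic matrix after the exchange
upd = eta_update(np.linalg.inv(B), a_s, 0)
```

Storing only the eta columns, instead of full matrices, is precisely the "product form of the inverse" of (6').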

J
Remark 4: There is a very simple way to avoid inversion of matrix A^J at each iteration without using (6) and (6'). It suffices to note that we do not really need (A^J)^{-1}, but that we can obtain what we are looking for (b(J), π(J), (A(J))^s) by solving linear systems:

    (2')  A^J b(J) = b

    (3')  π(J) A^J = c^J

    (1'')  A^J (A(J))^s = A^s

We now have three linear systems to solve. The loss† in number of operations is substantial, since when we had (A^J)^{-1}, we needed only to perform the matrix multiplications (2), (3), (1'). But some advantages can compensate for this loss:

(i) No propagation of round-off errors.

(ii) Nondestruction of sparsity; advantage can be taken of a particular structure of matrix A.

Revised Simplex Algorithm

Linear program (P) is not written in canonical form with respect to a feasible basis, but a feasible basis J is known.

REPEAT the following procedure until either an optimal basis is obtained or a set of feasible solutions for which z is unbounded is shown to exist.

STEP 1: Solve the linear system

    (3')  π(J) A^J = c^J

Let c(J) = c - π(J)A and choose an s such that (c(J))^s > 0. If such an s does not exist, the basis J is optimal. STOP.

STEP 2: Solve the linear system

    (1'')  A^J (A(J))^s = A^s

If (A(J))^s ≤ 0, no optimal solution exists (z is unbounded). STOP.

STEP 3: Solve the linear system

    (2')  A^J b(J) = b

and let K = {i | (A(J))^s_i > 0} (K ≠ ∅ because of Step 2). Choose an r such that

    (b(J))_r / (A(J))^s_r = Min_{i ∈ K} [ (b(J))_i / (A(J))^s_i ]

STEP 4: Let

    J := J ∪ {s} \ {col(r)}
    col(r) := s

END REPEAT

† Note that when (A^J)^{-1} is used, there is a gain with respect to the simplex algorithm.
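The loop above can be sketched directly in terms of the three solves (Python with NumPy; the data, names, and tolerances are our own illustration, and np.linalg.solve stands in for whatever factorization a production code would use; no anti-cycling safeguard is included):

```python
import numpy as np

def revised_simplex(A, b, c, basis, tol=1e-9):
    """Revised simplex for max cx, Ax = b, x >= 0, given a feasible basis
    (list of column indices). Each pass solves the systems (3'), (1''), (2')."""
    basis = list(basis)
    m, n = A.shape
    while True:
        B = A[:, basis]
        pi = np.linalg.solve(B.T, c[basis])      # (3'): pi A^J = c^J
        cbar = c - pi @ A                        # reduced costs c(J)
        cbar[basis] = 0.0
        if np.all(cbar <= tol):                  # Step 1: optimality
            x = np.zeros(n)
            x[basis] = np.linalg.solve(B, b)     # (2'): basic values b(J)
            return x, float(c @ x)
        s = int(np.argmax(cbar))                 # entering index
        d = np.linalg.solve(B, A[:, s])          # (1''): column (A(J))^s
        if np.all(d <= tol):
            raise ValueError("z is unbounded")   # Step 2
        bbar = np.linalg.solve(B, b)
        K = d > tol
        ratios = np.full(m, np.inf)
        ratios[K] = bbar[K] / d[K]
        r = int(np.argmin(ratios))               # Step 3: blocking row
        basis[r] = s                             # Step 4: col(r) := s

# max 3x1 + 2x2  s.t.  x1 + x2 <= 4,  x1 + 3x2 <= 6  (slacks x3, x4).
A = np.array([[1.0, 1, 1, 0], [1, 3, 0, 1]])
b = np.array([4.0, 6.0])
c = np.array([3.0, 2.0, 0.0, 0.0])
x, z = revised_simplex(A, b, c, basis=[2, 3])    # start from the slack basis
```

No tableau is ever pivoted: only the basis list changes from one iteration to the next, as announced in Remark 1.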

Example: To avoid complex notations, we write π, c̄, Ā^s, b̄ instead of π(J), c(J), (A(J))^s, b(J). Assume that we want to solve the linear program (program (P2) of Chapter I; see Exercise II.12), whose right-hand sides are 550, 350, 400, and 300, the fourth constraint being

    x2 + x5 = 300

with

    x_j ≥ 0    j = 1,2,...,6
    -5x1 - 6x2 - 3x3 - 3x4 - 5x5 - 4x6 = z(Max)

We are given the feasible basis J = {1,2,4,6} with col(1) = 1, col(2) = 2, col(3) = 4, col(4) = 6. π is computed in solving

    π A^J = c^J = (-5, -6, -3, -4)

We get

    π^1 = -6,  π^2 = -4,  π^3 = 1,  π^4 = 0
    c̄^3 = -3 + 6 = 3;  c̄^5 = -5 + 4 = -1;  c̄^1 = c̄^2 = c̄^4 = c̄^6 = 0

3 is the index that "enters" the basis. We solve

    A^J b̄ = b    and    A^J Ā^3 = A^3

We get

    b̄_1 = 250,  b̄_2 = 300,  b̄_3 = 150

Thus, K = {4}, col(4) = 6. Thus, the new basis is J = {1,2,4,3} with col(1) = 1, col(2) = 2, col(3) = 4, col(4) = 3. We compute π by

    π A^J = c^J = (-5, -6, -3, -3)

We get

    π^1 = -3,  π^2 = -1,  π^3 = -2,  π^4 = -3
    c̄^5 = -5 + 4 = -1;  c̄^6 = -4 + 1 = -3;  c̄^1 = c̄^2 = c̄^3 = c̄^4 = 0

The present basis is optimal.

4. Linear Programs with Bounded Variables

Definition 1: We consider, in this section, the linear program

          Ax = b
    (PB)  α_j ≤ x_j ≤ β_j    j = 1,2,...,n
          cx = z(Max)

with α_j ∈ R ∪ {-∞}, β_j ∈ R ∪ {+∞}.

Remark 5: The "usual" linear program under standard form (say (P) of Remark 1) is a special case of (PB) with

    α_j = 0     j = 1,2,...,n
    β_j = +∞    j = 1,2,...,n

Remark 6: For the sake of simplicity in the presentation and without loss of generality (cf. Exercise 3), we will assume that

    -∞ < α_j,  β_j < ∞    j = 1,2,...,n

We pose

    x̄_j = x_j - α_j,    b̄ = b - Aα

(PB) can then be written in standard form:

           A x̄ = b̄,    x̄ + u = β - α,    x̄, u ≥ 0
    (PB')
           c x̄ = z(Max) - cα

(the u_j are slack variables). If A is an m×n-matrix, we note that (PB') contains m + n constraints and 2n variables. We will show now that (PB) can be solved as a linear program with m constraints and n variables only. Constraints x_j ≤ β_j can be taken care of in a direct way by a very slight modification of the simplex algorithm.

Definition 2: We assume that the linear system Ax = b is full rank. (If this were not the case, we could deduce an equivalent full-rank system by applying phase I of the simplex method.) A set of indices J such that A^J is square nonsingular will be called a "basis" of (PB) (note that it is not a basis of (PB')). In this context, we say that

    (c - πA) x = z(Max) - πb

with J̄ = {1,2,...,n} \ J and π the solution of π A^J = c^J, is the linear program (PB) "written in canonical form with respect to the basis J."

A solution of the linear system Ax = b such that

    x_j = α_j  or  x_j = β_j    for j ∈ J̄

is called a "basic solution associated with basis J." Note that to a given basis, there correspond 2^{n-m} basic solutions in this context.

Theorem 1: A basic feasible solution x̄ associated with basis J is an optimal solution of (PB) if the following conditions are satisfied:

    (7')   c^j - πA^j > 0  ⟹  x̄_j = β_j
    (7'')  c^j - πA^j < 0  ⟹  x̄_j = α_j

Proof: Let x̄ be the basic solution relative to basis J and x be any feasible solution. We have

    z̄ = πb + Σ_{j∈J̄} (c^j - πA^j) x̄_j

    z = πb + Σ_{j∈J̄} (c^j - πA^j) x_j

    z - z̄ = Σ_{j∈J̄} (c^j - πA^j)(x_j - x̄_j) ≤ 0

The last inequality is due to the fact that each term of the sum z - z̄ is negative or zero. The reader will note the analogy of this proof with that of Theorem IV.3.
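As a quick numerical illustration (Python with NumPy; the one-constraint instance is our own, not the book's), conditions (7') and (7'') can be checked against brute force:

```python
import numpy as np

# Theorem 1 on a tiny bounded program:
#   max 2*x1 - x2   s.t.   x1 + x2 = 3,   0 <= x1 <= 2,   0 <= x2 <= 3.
# Take basis J = {2}: A^J = (1), so pi = c^2 / 1 = -1.
pi = -1.0
cbar1 = 2.0 - pi * 1.0          # c^1 - pi A^1 = 3 > 0: (7') forces x1 = beta1
x1, x2 = 2.0, 3.0 - 2.0         # candidate basic solution: x1 at its upper bound
z_candidate = 2 * x1 - x2       # = 3

# Brute force over the feasible segment x2 = 3 - x1, x1 in [0, 2]:
grid = np.linspace(0.0, 2.0, 2001)
z_best = float(np.max(2 * grid - (3 - grid)))
```

Since the only nonbasic variable has a positive reduced cost and sits at its upper bound, the candidate satisfies (7') and (7''), and the brute-force sweep confirms it is optimal.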

Remark 7 : A basis J t ogether with ~ basic s olutio~ relative t o J , can


be found using phas e I of th e s i mpl ex met hod (see Exerci se 4) . Then we pose
' 20 C hapte r VII . Co mputational Aspects of the S implex Method

s s
If Ic - nA I = 0 , re lati on s (7 ') an d (7 ") a re verifi ed an d , from The or em I,
t he pr es en t ba s i c so l ut i on i s op tima l. As s ume t ha t we have

and thus,
s
='\ ( th e case c S - Tf As > 0 and xs = as i s s i milar ). The idea
x
co nsis t s - - as i n t h e s i mp le x a l gorithm - - o f h avin g va riabl e X de c r e a s e from
s
i t s p re sent value B wi thout c han ging the v a l ue of t he o ther no nb as ic va r ia -
s
b l es and ha vi ng t h e ba s ic va r iab l es adj us te d so t hat l in e ar s ys t e m Ax = b
r ema i n s ve rified . The dimi nut i on o f x wi ll be limit ed a t t he fi r st o ccu r -
s
rene e of one o f th e f o llowi ng e vents :

(a) x r e a ch e s i t s l ower bo un d a . Th e so l ut i on thus ob tai ne d i s s t i l l


s s
a basi c so l ut i on r el at i ve t o J . For in de x s , r el at i on (7 ") i s
now verifi ed a nd we l ook for a no the r index s uch t ha t (7 ') or (7 " ) is
not sat isfied .

( b) One of t he basi c variab l e s for reJ r e ach e s e it he r it s l owe r or


i t s upp e r bound . Then
J' = J U {s } \ { co l (r)}

i s a new basis . The l i ne a r progra m i s wri t ten i n canonical fo rm wi t h


r e s pe ct t o J I and t h e p roc ess i s r e sumed .

Remark 8 : \'Ie do not de s crib e t h e s i mp lex a lgo r i t hm fo r l i n e a r p r og r ams wi t h


bounded va r i abl e s with more de t a il s . Th e r e ader i s i nvit ed t o :

s
1. Exp r e s s what h appe ns whe n c s - TIA > 0 (c f , Remark 7) a nd ho w
th e al gor ithm p rocee ds .

2. Not e th at t he proposed a lgori t hm i s t he s i mpl ex if (PR) =: (P)


i. e ., i f t he l ower bo unds are 0 a nd t he upp er bo un ds + 00 •
I nd i c a t e how t he a lgo r i t hm i s c ha nge d i f s ome va r iab les on ly
a re bound ed .

3. Formu lat e the a l gori t hm in i t s ge nera lit y .

4. Ver i fy t ha t th e pro cedur e r e s ul t i n g f rom t h is a lgo r i t hm i s


i de ntica l t o th e one ob t a ine d t h roug h the a pp li c at i on of simplex
a l go rit hm to (PR ' ) (see Exe rc ise -5 ) .

The economy o f memo r y s pa ce and co mput e r ti me t h a t i s ob t a i ned t h r ough


thi s method o f ap p l y i n g th e s i mp le x a l go ri t hm t o linear pro gr ams h a v i n g bou nded
variabl e s i s very importan t . Si nce many " r ea l " l in e a r pro gra ms ha v e t he va l ues
Section 4 . Linear Programs with Bounded Variables 121

o f some of the i r variab les bo un de d, commercial co mputer codes all work


according t o the l i ne s we just described. Mo reover, t he revised variant of the
simp lex algorit hm can be used h ere . Th i s wi l l be done in th e following i llus -
t rative examp le .

Example: We will solve

    6x1 + 5x2 + 3x3 + x4 + 2x5 = 16
    0 ≤ x_j ≤ j    j = 1,2,...,5
    13x1 + 2x2 + 9x3 + x4 + 5x5 = z(Max)

The starting basis is J = {4} and the basic solution is

    x1 = 1,  x2 = 0,  x3 = 0,  x4 = 0,  x5 = 5

First Iteration: π = 1.

    j             1    2    3    4    5
    c^j - πA^j    7   -3    6    0    3

s = 3. If x3 increases, x4 must decrease. But x4 is already at 0. Thus, the new basis is J = {3} with the same basic solution.

Second Iteration: π = 3.

    j             1    2    3    4    5
    c^j - πA^j   -5  -13    0   -2   -1

s = 1. x1 decreases and x3 increases (x3 = 2(1 - x1)). We are in case (a) of Remark 7: we are blocked when x1 reaches its lower value 0. We still have basis J = {3}. The new basic solution is

    x1 = 0,  x2 = 0,  x3 = 2,  x4 = 0,  x5 = 5

Third Iteration: s = 5. x5 decreases and x3 increases:

    x3 = 2 + (2/3)(5 - x5)

We are here in case (b): we are blocked when x3 reaches its upper bound 3, for x5 = 7/2. The new basis is J = {5} and the basic solution is

    x1 = 0,  x2 = 0,  x3 = 3,  x4 = 0,  x5 = 7/2

Fourth Iteration: π = 5/2.

    j             1      2     3     4    5
    c^j - πA^j   -2  -21/2   3/2  -3/2    0

The present solution is optimal.
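The optimum just obtained can be cross-checked by a bang-for-buck argument (this is the kind of algorithm asked for in Exercise 7): with a single constraint and positive data, push variables to their upper bounds in decreasing order of c_j/a_j until the right-hand side is used up. A sketch in Python (the routine and its assumptions, positive data and a reachable right-hand side, are ours):

```python
def greedy_bounded_knapsack(a, c, beta, rhs):
    """Fill x_j (0 <= x_j <= beta_j) in decreasing order of c_j / a_j
    until sum a_j x_j equals rhs. Assumes all data positive and that the
    right-hand side can be met within the bounds."""
    x = [0.0] * len(a)
    for j in sorted(range(len(a)), key=lambda j: -c[j] / a[j]):
        x[j] = min(beta[j], rhs / a[j])
        rhs -= a[j] * x[j]
    return x

# The example above: a = (6,5,3,1,2), c = (13,2,9,1,5), beta_j = j, rhs = 16.
x = greedy_bounded_knapsack([6, 5, 3, 1, 2], [13, 2, 9, 1, 5],
                            [1, 2, 3, 4, 5], 16.0)
```

The ratios c_j/a_j are 13/6, 2/5, 3, 1, 5/2, so x3 is filled to its bound 3 (using 9 units), then x5 absorbs the remaining 7 units at x5 = 7/2: exactly the basic solution found by the bounded-variable simplex.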

EXERCISES

1. Count the number of operations (additions, multiplications, comparisons) needed for one iteration of the simplex algorithm and for one iteration of the revised simplex algorithm.

2. Consider the linear program

       x1 + x2 + x3 = 6
       x4 + x5 + x6 = 4
       x1 + x4 = 5                      x_j ≥ 0
       x2 + x5 = 3                      j = 1,2,...,6
       x3 + x6 = 2
       3x1 + x2 + 2x3 + 6x4 + 2x5 + 4x6 = z(Min)

   (a) Show that the constraint set is redundant. Assign the name (P') to the linear program obtained after deletion of a redundant constraint.

   (b) Show that {1,2,5,6} is a feasible basis for (P').

   (c) Solve (P') using the revised simplex algorithm.

3. Write linear program (PB) of Section 4 in standard form without making the assumption that

       α_j > -∞,  β_j < ∞    j = 1,2,...,n

   All cases will have to be taken care of.

4. How can phase I of the simplex method be extended to linear program (PB) with bounded variables? (See Section 4.)

5. Prove that the procedure presented to solve linear programs with bounded variables (PB) is identical to the one obtained through application of the simplex algorithm to (PB'). In particular, give an interpretation, in terms of (PB'), of blocking case (a) of Remark 7.

6. Solve the linear program

       4x1 + 6x2 + 4x3 + 3x4 ≤ 48
       8x1 + 4x3 + x4 ≤ 72
       1 ≤ x1 ≤ 10
       2 ≤ x2 ≤ 5
       3 ≤ x3 ≤ 6
       4 ≤ x4 ≤ 8
       5x1 + x2 + x3 + 2x4 = z(Max)

   using the method of Section 4.

7. Consider the linear program

       (K)  a1 x1 + a2 x2 + ... + an xn ≤ b
            c1 x1 + c2 x2 + ... + cn xn = z(Max)

   with a1, a2, ..., an > 0; b > 0; c1, c2, ..., cn > 0. Without loss of generality, we also assume that

       (1)  c1/a1 ≥ c2/a2 ≥ ... ≥ cn/an

   (a) Show assumption (1) can be made "without loss of generality."

   (b) Show, by a common-sense argument, that

       x1 = b/a1,  x2 = x3 = ... = xn = 0

   is an optimal solution to (K). Prove the proposition by using linear programming theory.

   (c) Propose a very simple algorithm giving directly (without pivoting or iteration) an optimal solution of linear program (KB) obtained from (K) by adding constraints

       α_j ≤ x_j ≤ β_j    j = 1,2,...,n

   (d) Check that the algorithm you found gives directly the solution of the problem solved as an example in Section 4.

8. Solve the Klee-Minty problem of Section 1 for n = 3.


Chapter VIII. Geometric Interpretation of the Simplex Method

1. Convex Programming

Definition 1: Recall that we denote by R^n the Euclidean n-dimensional space, i.e., the set of n-column vectors with real components. A "convex set" C in R^n is a set such that if two points p and q belong to C, then the whole segment [pq] belongs to C.
A closed set is one that includes its boundaries.

Example: The following sets are convex: [figure]. The following sets are not convex: [figure].

Theorem 1: The intersection of convex sets is a convex set.

Proof: Let F be a family of convex sets and let C be the intersection of the sets in F. Let p and q be two points in C; p and q are in all the convex sets of family F, and thus segment [pq] is in all the sets of F (since these sets are convex); thus [pq] is in the intersection C.

Definition 2: Let p and q be two points of R^n; recall that x belongs to segment [pq] if and only if

    x_i = λ p_i + (1 - λ) q_i    for i = 1,...,n,    0 ≤ λ ≤ 1

This set of equations, which can be written

    (1)  x = λp + (1 - λ)q    0 ≤ λ ≤ 1

is the equation of segment [pq]. By this we mean that when λ varies between 0 and 1, the point x varies along segment [pq]. Moreover, if ℓ(pq), ℓ(px), ℓ(xq) denote the lengths of segments [pq], [px], [xq], respectively, we have

    (2)  ℓ(xq) = λ ℓ(pq)
         ℓ(px) = (1 - λ) ℓ(pq)

Example: Consider in R^2 (see Figure VIII.1) the points p = (1, 2) and q = (3, -1):

    x1 = λ + 3(1 - λ)
    x2 = 2λ - (1 - λ)        0 ≤ λ ≤ 1

When λ varies between 0 and 1, point x varies along segment [pq]. For instance, we have

    λ = 1:   x = p;            λ = 1/4:  x = (5/2, -1/4)
    λ = 1/2: x = (2, 1/2);     λ = 0:    x = q
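The parametrization (1) and the length relations (2) are easy to verify numerically; a sketch in Python with NumPy, using the points p and q of this example:

```python
import numpy as np

p, q = np.array([1.0, 2.0]), np.array([3.0, -1.0])

def point(lam):
    """x = lam*p + (1 - lam)*q, a point of segment [pq] -- equation (1)."""
    return lam * p + (1 - lam) * q

x = point(0.25)                  # the lambda = 1/4 point of the example
lpq = np.linalg.norm(p - q)
# Length relations (2): l(xq) = lambda * l(pq), l(px) = (1 - lambda) * l(pq).
assert np.allclose(x, [2.5, -0.25])
assert np.isclose(np.linalg.norm(x - q), 0.25 * lpq)
assert np.isclose(np.linalg.norm(p - x), 0.75 * lpq)
```

The assertions confirm that λ = 1/4 yields (5/2, -1/4) and splits [pq] in the proportions stated in (2).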

Figure VIII.1: An Example Illustrating Definition 2  [figure]

Definition 3: If c is an n-row vector and α a scalar, the equation

    (3)  cx = c^1 x1 + c^2 x2 + ... + c^n xn = α

defines a hyperplane. Note that in R^2 a hyperplane reduces to a straight line, and in R^3 a hyperplane is a plane.
The inequality

    (4)  cx ≤ α

defines a "closed half space." It is clear that (4) defines a closed set; it is called a half space because a hyperplane (3) separates R^n into two regions and the set of points that satisfy (4) all belong to the same region.

Theorem 2: A half space is a convex set.

Proof: Let p and q both belong to the half space (4), i.e.,

    (4')   cp ≤ α

    (4'')  cq ≤ α
Now, if x is a point of segment [pq], we have, from (1),

    x = λp + (1 - λ)q    0 ≤ λ ≤ 1
    cx = c(λp + (1 - λ)q) = λ cp + (1 - λ) cq

Now since

    0 ≤ λ,    0 ≤ 1 - λ

we can multiply inequalities (4') and (4''), respectively, by λ and (1 - λ) and get

    cx = λ cp + (1 - λ) cq ≤ λα + (1 - λ)α = α

Thus x satisfies (4); any point of segment [pq] belongs to half space (4); i.e., the half space is a convex set.

Definition 4: From Theorem 1, a set that is the intersection of half spaces is convex. Such a set will be called a "convex polyhedral set" or a "convex polyhedron."

Remark 1: The set C = {x | Ax ≤ b, x ≥ 0} of feasible solutions of the linear program

         Ax ≤ b    x ≥ 0
    (P)
         cx = z(Max)

is a convex polyhedral set, since each constraint of (P),

    A_i x ≤ b_i    i = 1,2,...,m
    x_j ≥ 0        j = 1,2,...,n

defines a half space.

Definition 5: A real-valued function f defined on a convex set C is convex if

    f(λp + (1 - λ)q) ≤ λ f(p) + (1 - λ) f(q)    for all p, q ∈ C,  0 ≤ λ ≤ 1

Note that a linear function defined on a convex set is a convex function.


Definition 6: The optimization problem (mathematical program) "Find the minimum of a convex function f over a convex set C" is a "convex program."

Definition 7: Given a convex program

    Min_{x ∈ C} [f(x)]

x̄ is called a "local optimum" if there exists an open set D containing x̄ such that

    f(x̄) ≤ f(x)    for all x ∈ D

Example: Consider the function shown in Figure VIII.2.

Figure VIII.2: An Example of a Function Which is not Convex  [figure]

It is not a convex function since the curve does not lie below the segment [pq]. In this example, x̄ is a local minimum for f(x) but it is not a global minimum (x* is one).

Remark 2: From Remark 1 we deduce that a linear program (which can always be written under the form of a minimization problem) is a convex program.

Theorem 3: For a convex program, a local minimum is a global minimum.

Proof: Suppose that the theorem is not true. There exist

    x̄, a local minimum
    x*, a global minimum

i.e., f(x*) < f(x̄). For 0 < λ < 1, x = λx* + (1 - λ)x̄ is a point in C, since C is convex (see Figure VIII.3). Moreover,

    f(λx* + (1 - λ)x̄) ≤ λ f(x*) + (1 - λ) f(x̄)

since f is a convex function. Now, for 0 < λ < 1, and since f(x*) < f(x̄), we have

    λ f(x*) + (1 - λ) f(x̄) < λ f(x̄) + (1 - λ) f(x̄) = f(x̄)

Thus, for 0 < λ < 1, f(x) < f(x̄). We have seen that

    ℓ(x̄ x) = λ ℓ(x̄ x*)

so that if λ is small enough, x is as near as we want to x̄; thus there exists a set of points x, as close as we want to x̄, for which

    f(x) < f(x̄)

so that it is not true that x̄ is a local minimum, and the theorem is proved by contradiction.

Figure VIII.3: x = λx* + (1 - λ)x̄  [figure]


Remark 3: Theorem 3 is very important in practice. This property of convex programs, not permitting any local minima (other than global ones), makes solution algorithms reasonably simple. In effect, when we get a feasible point, it is sufficient to examine its neighborhood, and this usually can be done rather simply. On the other hand, the fact that Theorem 3 is not true for nonconvex programs makes the latter class of problems extremely difficult to solve. When we find a point that is suspected to be an optimal point, we have to compare it with all the other points to be sure that it is actually optimal. Integer programs (where the solution point must be an integer) are highly nonconvex and thus very difficult. Actually, no consistently "good" algorithm is known that solves integer programs.

2. Geometric Interpretation of the Simplex Algorithm

Example: Consider the linear program

         -2x1 + x2 ≤ 1          (I)
          x1 - 2x2 ≤ 5/2        (II)
    (P)   x1 - x2 ≤ 3           (III)
          (1/2)x1 + x2 ≤ 3      (IV)
          x1, x2 ≥ 0
          x1 + 2x2 = z(Max)

which is solved by using the simplex algorithm in the succession of simplex tableaus shown below. (x3, x4, x5, x6 are the slack variables, which correspond to the 1st, 2nd, 3rd, and 4th constraints, respectively.)
    x1     x2     x3    x4    x5    x6    z |  b
    -2      1      1     0     0     0      |  1
     1     -2      0     1     0     0      |  5/2
     1     -1      0     0     1     0      |  3
    1/2     1      0     0     0     1      |  3
    -----------------------------------------------
     1      2      0     0     0     0    1 |  0
            ↑
    -2      1      1     0     0     0      |  1
    -3      0      2     1     0     0      |  9/2
    -1      0      1     0     1     0      |  4
    5/2     0     -1     0     0     1      |  2
    -----------------------------------------------
     5      0     -2     0     0     0    1 | -2
     ↑
     0      1     1/5    0     0    4/5     |  13/5
     0      0     4/5    1     0    6/5     |  69/10
     0      0     3/5    0     1    2/5     |  24/5
     1      0    -2/5    0     0    2/5     |  4/5
    -----------------------------------------------
     0      0      0     0     0    -2    1 | -6

We have, successively, bases J = {3,4,5,6}, J' = {2,4,5,6}, and J'' = {2,4,5,1}; J'' is an optimal basis since the cost vector relative to J'' is nonpositive.
The three corresponding basic solutions are
    (5)    x1 = 0,    x2 = 0,     x3 = 1,  x4 = 5/2,    x5 = 3,     x6 = 3,  z = 0
    (5')   x1 = 0,    x2 = 1,     x3 = 0,  x4 = 9/2,    x5 = 4,     x6 = 2,  z = 2
    (5'')  x1 = 4/5,  x2 = 13/5,  x3 = 0,  x4 = 69/10,  x5 = 24/5,  x6 = 0,  z = 6

We can represent in the x1x2 plane the feasibility domain for Problem (P). It is the convex polygon OABCDE shown in Figure VIII.4. In this figure, we notice the following:

Figure VIII.4: The Feasibility Domain for (P)  [figure]

1. The points that correspond to basic solutions (5), (5'), (5'') are the vertices O, A, B of the polygon.

2. In the application of the simplex algorithm, when one of the nonbasic variables increases, the others remaining at zero value, we go from one vertex of the polygon to an adjacent one. For instance, when one goes from basic solution (5') to basic solution (5''), the nonbasic variable x3 remains nonbasic and thus 0, which means that the point moves along constraint I until constraint IV (which corresponds to the "blocking" row) is hit.

3. For the cost vector relative to basis J'', we have c̄^3 = 0, although 3 ∉ J''. This means that x3 can increase without z decreasing and that the optimal solution may not be unique (see Exercise V.9). In effect, we can let x3 increase from 0 up to x̄3 = 8. For x3 = x̄3, x5 hits 0. We have a new basis J''' = {2,4,3,1}. The problem can be written in canonical form with respect to this basis:

    x2 - (1/3) x5 + (2/3) x6 = 1
    x3 + (5/3) x5 + (2/3) x6 = 8
    x4 - (4/3) x5 + (2/3) x6 = 1/2
    x1 + (2/3) x5 + (2/3) x6 = 4
         0·x5   -   2 x6     = z - 6

The corresponding basic solution is

    (5''')  x1 = 4,  x2 = 1,  x3 = 8,  x4 = 1/2,  x5 = 0,  x6 = 0,  z = 6

We see, moreover, that all the points that are on the segment defined by the two optimal basic solutions (5'') and (5''') are optimal, namely all the
points of segment [BC], the equation of which is

    (6)  x1 = 4/5 + (2/5) x3,    x2 = 13/5 - (1/5) x3,    0 ≤ x3 ≤ 8

These properties are general ones, as will be seen now.
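This can be verified numerically; the sketch below (Python with NumPy) evaluates z = x1 + 2x2 at the two optimal basic solutions B = (4/5, 13/5) and C = (4, 1) and along the segment joining them, checking feasibility for the four constraints of (P):

```python
import numpy as np

B = np.array([4 / 5, 13 / 5])    # optimal vertex from basis J'' = {2,4,5,1}
C = np.array([4.0, 1.0])         # optimal vertex from basis J''' = {2,4,3,1}
c = np.array([1.0, 2.0])         # objective x1 + 2x2

for lam in np.linspace(0, 1, 11):
    x = lam * B + (1 - lam) * C
    assert np.isclose(c @ x, 6.0)        # the whole segment [BC] is optimal
    # ... and every such x satisfies the four constraints of (P):
    assert -2 * x[0] + x[1] <= 1 + 1e-9          # (I)
    assert x[0] - 2 * x[1] <= 5 / 2 + 1e-9       # (II)
    assert x[0] - x[1] <= 3 + 1e-9               # (III)
    assert 0.5 * x[0] + x[1] <= 3 + 1e-9         # (IV)
```

Every convex combination of the two optimal vertices is feasible and attains z = 6, which is exactly the statement that the edge [BC] is a set of optimal solutions.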

Definition 8: Let C be a convex set; a will be called an "extreme point" of C (or a vertex if C is a polyhedral set) if

    p, q ∈ C
    a = λp + (1 - λ)q      ⟹  p = q = a
    0 < λ < 1

Definition 9: Let C be a convex set; the segment E ⊂ C will be called an "edge" if

    e ∈ E
    p, q ∈ C
    e = λp + (1 - λ)q      ⟹  p, q ∈ E
    0 < λ < 1

Remark 4: Definitions 8 and 9 have the same form. They can be restated in the following way: "A point a (resp. a segment E) of a convex polyhedral set C is a vertex (resp. an edge) if every time this point a (resp. any point of this segment E) is the middle point of a segment contained in C, this segment is reduced to a (resp. is contained in E)."
Figure VIII.5: An Illustration of Definitions 8 and 9

Example: Consider the triangle ABC shown in Figure VIII.5. It is clear that the definitions just given are consistent with everyday language: The three vertices of this triangle are A, B, and C and the three edges are [AB], [BC], and [CA].

Definition 10: Two vertices of a convex polyhedral set are said to be "adjacent" if there is an edge joining them.

Theorem 4: Let C denote the convex polyhedral set of feasible points of the linear program

            Ax = b      x ≥ 0
    (P)
            cx = z (Max)

A basic feasible solution of (P) is a vertex of C.

Proof: Let J be a feasible basis and x̄ be the basic solution corresponding to J:

(7)      x̄_j = 0    for j ∉ J

We apply Definition 8: let p, q ∈ C be such that

    x̄ = λp + (1 − λ)q      0 < λ < 1

From (7), we have

    λp_j + (1 − λ)q_j = 0    for j ∉ J

Since λ > 0 and (1 − λ) > 0, this implies that

    p_j = q_j = 0    for j ∉ J

and from the uniqueness of the basic solution (i.e., from the fact that J is a basis), we have p_J = q_J = x̄_J. Thus

    p = q = x̄

Theorem 5: An iteration of the simplex algorithm consists of changing the current (feasible) basis J into basis J' = J ∪ {s} \ {col(r)}. The basic solutions associated with J and J' correspond to adjacent vertices.

Proof: From one iteration to the next, the solution of the linear program varies in the following way (we assume (P) written in canonical form with respect to J):

(8)      x_j = 0    for j ∉ J, j ≠ s
         0 ≤ x_s ≤ x̄_s
         x_J = b̄ − Ā^s x_s

We apply Definition 9. Let x be a point of the segment (8) and let p, q be feasible solutions of (P) such that

    x = λp + (1 − λ)q      0 < λ < 1

Using the same argument as in the proof of Theorem 4, we show that

    p_j = q_j = 0    for j ∉ J, j ≠ s

Since p and q are feasible solutions, we have

    0 ≤ p_s ≤ x̄_s
    0 ≤ q_s ≤ x̄_s

so that p and q satisfy (8). Thus (8) defines an edge of C.

Remark 5: Let us consider the following linear program in canonical form:

            Ax ≤ b      x ≥ 0
    (P)
            cx = z (Max)

which, by addition of slack variables y_i for i = 1, 2, ..., m, can be written

            Ax + Uy = b      x, y ≥ 0
    (P)
            cx = z (Max)

The set of feasible solutions

    C = {x | Ax ≤ b, x ≥ 0}

is defined by the intersection of m+n half-spaces (m half-spaces A_i x ≤ b_i, n half-spaces x_j ≥ 0). For a basic solution, we have at most m variables x_j, y_i that are positive, thus at least n of them that are equal to zero. When x_j = 0, the corresponding point belongs to the limiting hyperplane of the half-space x_j ≥ 0. When y_i = 0, the corresponding point belongs to the limiting hyperplane of the half-space

    A_i x ≤ b_i

Finally, we see that a basic solution corresponds to the intersection of at least n of the hyperplanes limiting the m+n half-spaces whose intersection constitutes C. Note that:

(a) In general, n hyperplanes of Rⁿ intersect at one point.

(b) The case where a basic solution corresponds to the intersection of more than n hyperplanes is the case of degeneracy of the basic solution.

We now get a geometric interpretation of the perturbation method we used in Section V.4 to prove finiteness of the simplex algorithm. By moving the limiting hyperplanes by infinitesimal and noncommensurable quantities ε, ε², ..., εᵐ, we make sure that each basic solution is nondegenerate, i.e., corresponds exactly to the intersection of n limiting hyperplanes.

Remark 6: Consider the linear program

            Ax = b      x ≥ 0
    (P)
            cx = z (Max)

written in canonical form with respect to feasible basis J. Let s ∉ J. If Ā^s ≤ 0, the set of solutions

(8')     x_j = 0    for j ∉ J, j ≠ s
         x_s ≥ 0
         x_J = b̄ − Ā^s x_s

is unbounded. It corresponds to an edge "which is infinite in one direction" (think of a trihedron). We will call this type of edge with just one end point a "ray."

A basic feasible solution of (P) corresponds to a vertex of the domain C of feasible points. This vertex is the end point of n−m edges or rays, corresponding to the n−m nonbasic variables. The criterion

    c̄^j ≤ 0    for j ∉ J

for optimality is a local test that decides for optimality: if the objective function does not increase along the n−m edges whose end points are the present basic solution, this solution is optimal. The choice

    c̄^s = max_j [c̄^j]

for the variable to enter the basis corresponds to the choice of the edge with the largest "slope," i.e., the largest increase of the objective function by unit increase of the entering variable. The exploratory variant (see Remark V.8) consists of examining all neighbors of the present basic solution and choosing the neighboring vertex for which the objective function is largest.
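This largest-coefficient choice is straightforward to state in code. A minimal sketch for a maximization problem in canonical form — the reduced-cost vector used at the bottom is illustrative, not taken from the text:

```python
def entering_variable(reduced_costs, basis):
    """Dantzig's largest-coefficient rule for a maximization problem:
    among the nonbasic indices, return the one whose reduced cost is
    largest and positive; return None when no reduced cost is positive,
    i.e., when the current basis is already optimal."""
    best_j, best_c = None, 0
    for j, cbar in enumerate(reduced_costs):
        if j not in basis and cbar > best_c:
            best_j, best_c = j, cbar
    return best_j

# Variables 0 and 3 are basic (reduced cost 0); index 1 wins with 5.
print(entering_variable([0, 5, -2, 0, 3], basis={0, 3}))  # -> 1
```

The exploratory variant would instead compute, for each candidate s, the actual increase obtained along the corresponding edge and keep the best neighboring vertex.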

Remark 7: We can now summarize the results of this section.

(a) The constraints of a linear program (P) define a feasible domain that is a convex polyhedron C.

(b) The basic solutions of (P) correspond to the vertices of C.

(c) A vertex of C is determined by the intersection of at least n limiting hyperplanes. A vertex determined by more than n limiting hyperplanes corresponds to a degenerate basic solution.

(d) The simplex algorithm can be interpreted as a journey along a chain of adjacent vertices of C.

(e) When we are at a vertex such that no edge corresponds to an increase in the value of the objective function, we stop: the present solution is optimal.
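Points (a)-(c) can be illustrated by brute force in the plane: a vertex of C = {x | Ax ≤ b} is an intersection of (at least) two limiting lines that satisfies all the constraints. A sketch in exact rational arithmetic; the data below (a unit square, with x₁ ≥ 0 and x₂ ≥ 0 rewritten as −x₁ ≤ 0, −x₂ ≤ 0) are a hypothetical example, not one from the text:

```python
from fractions import Fraction as F
from itertools import combinations

def solve2(M, rhs):
    """Solve a 2x2 linear system exactly; return None if singular."""
    (a, b), (c, d) = M
    det = a * d - b * c
    if det == 0:
        return None
    x = (rhs[0] * d - b * rhs[1]) / det
    y = (a * rhs[1] - c * rhs[0]) / det
    return (x, y)

def vertices(A, b):
    """All vertices of {x in R^2 : A x <= b}: intersect every pair of
    limiting lines and keep the feasible intersection points."""
    V = set()
    for (i, j) in combinations(range(len(A)), 2):
        p = solve2((A[i], A[j]), (b[i], b[j]))
        if p is not None and all(sum(r[k] * p[k] for k in range(2)) <= bi
                                 for r, bi in zip(A, b)):
            V.add(p)
    return V

# Hypothetical example: the unit square.
A = [[F(1), F(0)], [F(0), F(1)], [F(-1), F(0)], [F(0), F(-1)]]
b = [F(1), F(1), F(0), F(0)]
print(sorted((float(x), float(y)) for x, y in vertices(A, b)))
# -> [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
```

A degenerate vertex would be produced by more than one pair of indices — more than n limiting lines through one point — but appears only once in the set.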

EXERCISES

1. In general, if C is a convex polyhedral set, a subset F of C will be called a face if

    x ∈ F
    p, q ∈ C                   ⟹   p, q ∈ F
    x = λp + (1 − λ)q
    0 < λ < 1

(a) Show that vertices and edges are faces of a convex polyhedral set. Namely, a vertex is a zero-dimensional face, an edge is a one-dimensional face.

(b) List all the faces of a cube.

(c) Show that the set of optimal solutions of a linear program is a face. Show that the vertices of this set are also vertices of the set of feasible solutions.

2. Consider the following linear programs: x₁ ≥ 0, x₂ ≥ 0, max z = x₁ + 2x₂ subject to

(i)      −2x₁ + x₂ ≤ 1
         x₁ − 2x₂ ≤ 2

(ii)

(iii)

(a) Solve each of these problems using the simplex method. Follow the solution point on a diagram and give any relevant comments about the uniqueness of the solution.

(b) Write the duals of these linear programs. Give the solutions of the duals. With the help of the diagram you got in part (a), comment about the uniqueness of the solution of these dual programs.

3. Give a method of finding the second-best basic solution, the third-best, ..., the pth-best basic solution.

4. Consider the linear program

            x₁ − x₂ ≤ 3
            x₁ + x₂ ≤ 7
    (P)     x₁ − x₂ ≥            x₁, x₂ ≥ 0
            x₁ ≤ 5
            2x₁ − x₂ = z (Max)

(a) By application of phase I of the simplex method with a minimum number of artificial variables, find a feasible basis.

(b) Apply the simplex algorithm and find the optimal solution.

(c) Draw the feasible region on a graph and show the path followed by the solution point in the application of the simplex algorithm in the preceding question.

(d) Comment on the graph about the uniqueness of the solution of the dual of (P).

(e) Write and solve the dual of (P). Give all the optimal solutions of the dual. How many basic optimal solutions are there?

5. Consider the set of points in R³ that satisfy

    −x₁ + 3x₂ + x₃ ≤ 9
    4x₁ − 2x₂ + x₃ ≤ 4

(a) Compute the vertices of the polyhedron C.

(b) Find the set of points in C that maximize

6. Given a linear program

            Ax ≤ b      x ≥ 0
            cx = z (Max)

show that

(a) If the optimal solution is not unique, there exists an edge of C = {x | Ax ≤ b, x ≥ 0} parallel to cx = 0.

(b) If a coefficient c̄^j, j ∉ J, of the cost vector relative to an optimal basis J, is zero, this optimal basis is not unique.

(c) It is possible that (P) has several optimal bases but just one optimal solution. What can be said then of this optimal solution?

(d) A necessary and sufficient condition for the optimal basis of (P) (resp. of the dual (D) of (P)) not to be unique is that the optimal solution of (D) (resp. of (P)) is degenerate.

7. Given two nonempty, closed convex polyhedra C and C' with C ∩ C' = ∅, show that there exists a hyperplane that strictly separates them.

HINT: Let C = {x | Ax ≤ b}, C' = {x | A'x ≤ b'}. Show that one can find y, y' ≥ 0 such that

    yA + y'A' = 0
    yb + y'b' < 0

and that the hyperplane

    yAx = ½(yb − y'b')

answers the question.


Chapter IX. Complements on Duality: Economic
Interpretation of Dual Variables

Duality is so essential to linear programming theory that we considered it important to introduce the concept as early as possible in this course. Some results on duality were proven in Chapters II, IV, and VI as soon as the tools to establish them were available. In particular, we have seen that as soon as a linear program has been solved by the application of the simplex method, an optimal solution for its dual has also been found (corollary to Theorem IV.3).

In the first section, we gather the most important of the results about duality that have been obtained and give a few extra theorems. In the second section we give some economic interpretations of linear programming and duality theory and comment on the concept of price.

1. Theorems on Duality: Complementary Slackness Theorem

We consider the following pair of dual programs:

            Ax ≤ b      x ≥ 0              yA ≥ c      y ≥ 0
    (P)                            (D)
            cx = z (Max)                   yb = w (Min)

and we refer the reader to Chapter II for the definitions and preliminary results about duality. Let us first recall here the most important theorems obtained so far.

Theorem II.2: For every couple x, y of feasible solutions to (P) and (D), respectively, we have

    cx ≤ yb

Corollary: Let x̄, ȳ be a couple of feasible solutions to (P) and (D), respectively, such that cx̄ = ȳb. Then x̄, ȳ are a couple of optimal solutions to (P) and (D), respectively.

Corollary of Theorem IV.3: The multiplier vector relative to an optimal basis of linear program (P)† is an optimal solution of its dual (D).

Theorem VI.6: If two dual linear programs (P) and (D) both have a feasible solution, they both have an optimal solution and the values of the objective functions for the optimums are equal.

Remark 1: The following results can be considered as consequences of the preceding theorems:

(i) If one of the problems has a class of unbounded solutions (z → +∞ or w → −∞), then the other does not have a feasible solution.

(ii) If (P) (resp. (D)) has a feasible solution but not (D) (resp. not (P)), then (P) (resp. (D)) has a class of unbounded solutions.

(i) is a direct consequence of Theorem II.2. If (ii) were not true, from the fundamental theorem of linear programming (Theorem VI.5), (P) would have an optimal basis, and from the corollary of Theorem IV.3, we get a contradiction.

Remark 2: It may happen that neither (P) nor (D) has a feasible solution.

Example:

            x₁ − x₂ = 2                    y₁ + y₂ ≥ 1
    (P)     x₁ − x₂ = −1           (D)     y₁ + y₂ ≤ −1
            x₁, x₂ ≥ 0
            x₁ + x₂ = z (Max)              2y₁ − y₂ = w (Min)

These results are summarized in the following table:

                                       (P) has a feasible solution
                                    (P) has an         (P) has no        (P) has no
                                    optimal            optimal           feasible
                                    solution           solution          solution
    --------------------------------------------------------------------------------
    (D) has a   (D) has an          w_min = z_max
    feasible    optimal solution    (Theorem VI.6)     impossible        impossible
    solution
                (D) has no
                optimal solution    impossible         impossible        w → −∞
    --------------------------------------------------------------------------------
    (D) has no feasible
    solution                        impossible         z → +∞            possible

†In Chapter IV, proof of this result is given when (P) is written in standard form. We leave as an exercise the extension of the proof to the case where (P) is in canonical form.

Definition 1: Let x be a feasible solution of (P). The i-th constraint is said to be:

    "tight" if A_i x = b_i
    "slack" if A_i x < b_i

Theorem 1 (Complementary Slackness Theorem): A necessary and sufficient condition for a couple x, y of feasible solutions to (P) and (D), respectively, to be optimal is that:

(a) Whenever a constraint of one of the problems is slack, then the corresponding variable of the other problem is zero.

(b) Whenever a variable of one of the problems is positive, then the corresponding constraint of the other is tight.

Proof: Let us write problems (P) and (D) with slack variables:

            Ax + Uξ = b                    yA − ηU = c
    (P)     x, ξ ≥ 0               (D)     y, η ≥ 0
            cx = z (Max)                   yb = w (Min)

(in (P), U is the m×m unit matrix; in (D), U is the n×n unit matrix).

Let x̄, ξ̄ and ȳ, η̄ be feasible solutions to (P) and (D), respectively. Let us multiply the i-th constraint of (P) by the corresponding dual variable ȳ^i and add up for i = 1, 2, ..., m. We get

(1)      ȳAx̄ + ȳξ̄ = ȳb

Similarly, let us multiply the j-th constraint of (D) by x̄_j and add up for j = 1, 2, ..., n. We get

(1')     ȳAx̄ − η̄x̄ = cx̄

Subtracting (1') from (1), we have

(2)      ȳξ̄ + η̄x̄ = ȳb − cx̄

Necessary condition: Let x̄, ȳ be optimal solutions of (P) and (D), respectively. Then from Theorem VI.6, we have

    ȳb − cx̄ = 0

and from (2)

    ȳξ̄ + η̄x̄ = 0

But

    ȳξ̄ + η̄x̄ = Σ_{i=1}^{m} ȳ^i ξ̄_i + Σ_{j=1}^{n} η̄^j x̄_j

Each term of this sum is nonnegative, so that the sum can be zero only if each term is 0. Thus we have

(3)      ȳ^i > 0  ⟹  ξ̄_i = 0
         ξ̄_i > 0  ⟹  ȳ^i = 0
         η̄^j > 0  ⟹  x̄_j = 0
         x̄_j > 0  ⟹  η̄^j = 0

which is a statement equivalent to the last two sentences of the theorem.

Sufficient condition: If x̄, ȳ are a pair of feasible solutions to (P) and (D), respectively, and if the conditions of the theorem are fulfilled, then (3) is true, ȳξ̄ + η̄x̄ = 0, and, from (2), we have that cx̄ − ȳb = 0, so that x̄, ȳ are a pair of optimal solutions to (P) and (D).

Remark 3: The complementary slackness theorem is very important and is frequently used. Its interest lies in the fact that it makes it possible to prove, with a minimum amount of computation (without computing the primal or the dual objective function), that a given solution to a linear program is in fact an optimal one.

Example: Let us consider the pair of dual linear programs

            3x₁ + x₂ ≥ 4                   3y₁ + y₂ ≤ 1
    (P)     x₁ + 4x₂ ≥ 5           (D)     y₁ + 4y₂ ≤ 1
            x₁, x₂ ≥ 0                     y₁, y₂ ≥ 0
            x₁ + x₂ = z (Min)              4y₁ + 5y₂ = w (Max)

and assume that someone tells us that x̄₁ = 1, x̄₂ = 1 might be an optimal solution. It suffices to check that the linear system

    3y₁ + y₂ = 1     (since x̄₁ > 0)
    y₁ + 4y₂ = 1     (since x̄₂ > 0)

has a nonnegative solution. Solving this system yields

    y₁ = 3/11,   y₂ = 2/11

and this is sufficient to assure that x̄ is an optimal solution to (P) (see also Exercises 4 and 5).
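This verification can be written out mechanically: since x̄₁ > 0 and x̄₂ > 0, complementary slackness forces both constraints of (D) to be tight, which leaves a square linear system for y. A sketch in exact arithmetic, using the data of the example above (the system matrix is the transpose of A, which here equals A because A is symmetric):

```python
from fractions import Fraction as F

# Data of (P): minimize x1 + x2 s.t. 3x1 + x2 >= 4, x1 + 4x2 >= 5.
A = [[F(3), F(1)], [F(1), F(4)]]
b = [F(4), F(5)]
c = [F(1), F(1)]
x = [F(1), F(1)]                 # the proposed solution

# x1 > 0 and x2 > 0, so both dual constraints must hold with equality:
#     3 y1 +   y2 = 1
#       y1 + 4 y2 = 1
det = A[0][0] * A[1][1] - A[1][0] * A[0][1]
y1 = (c[0] * A[1][1] - A[1][0] * c[1]) / det
y2 = (A[0][0] * c[1] - c[0] * A[0][1]) / det
print(y1, y2)                    # -> 3/11 2/11

# y is nonnegative, hence feasible for (D), and the objectives agree:
z = c[0] * x[0] + c[1] * x[1]
w = y1 * b[0] + y2 * b[1]
print(z, w)                      # -> 2 2
```

The equality z = w is exactly the corollary of Theorem II.2, so no further computation is needed to conclude optimality.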

Remark 4: Theorem 1 is sometimes called the "weak" complementary slackness theorem. It may happen that for a couple x̄, ȳ of optimal solutions to (P) and (D), we have simultaneously a tight constraint and the corresponding dual variable equal to zero.

However, the (strong) complementary slackness theorem (which we will not prove here -- see Exercise 3) states that there always exists a couple x̄, ȳ of optimal solutions to (P) and (D), respectively, with the implications:

(a) Whenever a constraint of one of the problems is tight, the corresponding dual variable is positive.

(b) Whenever a variable of one of the problems is zero, the corresponding constraint of the dual is slack.

Beware that this theorem does not assure that the optimal solutions in question are basic ones. It may happen that no couple of basic optimal solutions satisfies the strong complementary slackness theorem.

2. Economic Interpretation of Dual Variables

In Section II.3, we gave an economic interpretation of the dual of the transportation problem (P₂). We now give other illustrations.

(a) The Pill Manufacturer's Problem: Suppose that a housewife has to find a minimum-cost diet by buying a combination of five foods, subject to the constraints that the diet will provide at least 21 units of vitamin A and 12 units of vitamin B, the properties of the five foods under consideration being given by

    Food              1    2    3    4    5
    Vit. A content    1    0    1    1    2
    Vit. B content    0    1    2    1    1
    Cost             20   20   31   11   12

The housewife will have to solve the following linear program:

            x₁ + x₃ + x₄ + 2x₅ ≥ 21
    (4)     x₂ + 2x₃ + x₄ + x₅ ≥ 12
            20x₁ + 20x₂ + 31x₃ + 11x₄ + 12x₅ = z (Min)
            x_j ≥ 0    j = 1, 2, ..., 5

Now assume that a merchant or pill manufacturer possesses pills of vitamin A and pills of vitamin B. He wants to know at which prices he must sell the pills
in order to:

(1) Stay competitive with the diverse foods that are on the market (in terms of vitamin supply, not in terms of cooking).

(2) Make as much money as possible if he can sell his pills to the housewife.

Calling y₁, y₂ the prices of a pill of 1 unit of vitamin A and of a pill of 1 unit of vitamin B, respectively, the pill manufacturer's problem is

            y₁ ≤ 20
            y₂ ≤ 20
            y₁ + 2y₂ ≤ 31
    (5)     y₁ + y₂ ≤ 11
            2y₁ + y₂ ≤ 12
            21y₁ + 12y₂ = w (Max)

The housewife's problem (4) is solved in the succession of simplex tableaus shown below.

     x₁     x₂     x₃     x₄     x₅     s₁     s₂       b
      1      0      1      1      2     −1      0      21    ←
      0      1      2      1      1      0     −1      12
    ------------------------------------------------------------
      *      *    −29    −29    −48    +20    +20   z − 660
                                 ↑

    1/2      0    1/2    1/2      1   −1/2      0    21/2
   −1/2      1    3/2    1/2      0    1/2     −1     3/2    ←
    ------------------------------------------------------------
     24      *     −5     −5      *     −4    +20   z − 156
                          ↑

      1     −1     −1      0      1     −1      1       9
     −1      2      3      1      0      1     −2       3
    ------------------------------------------------------------
     19     10     10      *      *      1    +10   z − 141
The optimal basis is J = {5,4}; we have

    πA^J = c^J

so that π is found by solving the system

    π₁ + π₂ = 11
    2π₁ + π₂ = 12

Thus, without doing any computation, we know that an optimal solution of the pill manufacturer's problem (5) is

    y₁ = 1,   y₂ = 10

It is easy to check that (1, 10) is feasible and we have

    w = 1 × 21 + 10 × 12 = 141

which is the cost of the minimum-cost diet found in solving (4). The solution of the pill manufacturer's problem could also have been found graphically.
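The succession of tableaus above can be reproduced mechanically. Below is a deliberately naive tableau implementation of the simplex algorithm for this minimization problem, in exact arithmetic. It starts, as in the text, from the basis {x₁, x₂} and pivots on the most negative reduced cost; when reduced costs tie its pivot choices may differ from the text's, but it reaches the same final tableau. Unboundedness and cycling are not handled:

```python
from fractions import Fraction as F

def simplex_min(T, basis):
    """Tableau simplex for a minimization problem in canonical form.
    T holds the rows [A | b] with the reduced-cost row [cbar | -z]
    appended last; 'basis' maps each row index to its basic column.
    Pivots on the most negative reduced cost until all are >= 0."""
    m = len(T) - 1
    while True:
        cost = T[-1][:-1]
        s = min(range(len(cost)), key=lambda j: cost[j])
        if cost[s] >= 0:
            return T, basis
        rows = [i for i in range(m) if T[i][s] > 0]     # ratio test
        r = min(rows, key=lambda i: T[i][-1] / T[i][s])
        piv = T[r][s]
        T[r] = [v / piv for v in T[r]]
        for i in range(m + 1):
            if i != r and T[i][s] != 0:
                f = T[i][s]
                T[i] = [a - f * p for a, p in zip(T[i], T[r])]
        basis[r] = s

# Diet problem (4) with surplus variables s1, s2 subtracted; the
# columns of x1, x2 already form a unit matrix, so J = {1, 2} starts
# (columns are 0-indexed here).
T = [[F(v) for v in row] for row in [
    [1, 0,   1,   1,   2, -1,  0,   21],
    [0, 1,   2,   1,   1,  0, -1,   12],
    [0, 0, -29, -29, -48, 20, 20, -660],   # cbar row, -z in the corner
]]
T, basis = simplex_min(T, {0: 0, 1: 1})
print(-T[-1][-1], basis)    # -> 141 {0: 4, 1: 3}
```

In the final tableau the right-hand sides give x₅ = 9, x₄ = 3 with cost 141, and the reduced costs of the two surplus columns are the pill prices y = (1, 10), as found above.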

(b) Marginal Prices for Production Problem (P₁): Assume that the manager of the firm that was depicted in Section I.1.a might buy some extra quantities of the different raw materials. The question we will try and answer now is: "What prices is the manager ready to pay for these extra quantities?" Recall that problem (P₁) was written in the following form:

            2x₁ + x₂ ≤ 8      (I)
            x₁ + 2x₂ ≤ 7      (II)
    (P₁)    x₂ ≤ 3            (III)        x₁, x₂ ≥ 0
            4x₁ + 5x₂ = z (Max)

and that the optimal solution was x₁ = 3, x₂ = 2, z = 22.

It takes the manager very little thinking to discover that, given that for the optimal solution he has got 1 unit of raw material III in excess, he is not ready to buy any extra quantity of raw material III, whatever its price might be. In economic terms, we should say that a commodity that is in excess has a value equal to zero.
Now suppose that the supplies of commodities II and III are held fixed and that we want to know whether it pays to buy extra quantities of raw material I. Not knowing the subtleties of sensitivity analysis for linear programs, the manager of the firm decides to try to solve his linear program with a supply of 9 (instead of 8) for raw material I. He actually finds that the same basis is optimal and that the solution is

    z = 22 + 1

Thus it will be rewarding to buy extra quantities of raw material I if and only if its price is less than 1. For the manager of the firm, 1 unit of raw material I is worth 1, whatever its real market price. Suppose that the market price of raw material I is less than 1 and that, desiring to earn much more money, the manager decides to buy 9 extra units. He expects his return to be increased in proportion. But solving

            2x₁ + x₂ ≤ 8 + 9
            x₁ + 2x₂ ≤ 7
    (P₁')   x₂ ≤ 3                x₁, x₂ ≥ 0
            4x₁ + 5x₂ = z (Max)
r 2
he finds that the optimal basis is J = {1,3,5} and the optimal solution is

    x₁ = 7,   x₂ = 0,   z = 28

i.e., an increase of only 6. And increasing further the availability of raw material I would be of no use. Thus we conclude that the "internal" price we determined for raw material I is valid only if we stay "in the neighborhood" of the present production plan. This is why these prices are called "marginal."

What is the marginal price of raw material II now? Solving (P₁) with a supply of 8 units (instead of 7) for raw material II leads to the optimal solution

    x₁ = 8/3,   x₂ = 8/3,   z = 22 + 2

The marginal price of raw material II (i.e., the value for this production unit of an extra unit of raw material II) is thus 2.

Finally, we see that the marginal prices (1, 2, 0) are equal to the optimal solution of the dual of (P₁). This result is not purely coincidental, as we will see now.

(c) Marginal Prices and Dual Variables: Let us consider the linear program

            Ax = b      x ≥ 0
    (P)
            cx = z (Max)

and let J be an optimal basis for (P). Let δb be an m-vector that we call a variation of b, and let us assume that δb is small enough so that

    (A^J)⁻¹(b + δb) ≥ 0

i.e., J is also an optimal basis for

            Ax = b + δb      x ≥ 0
    (P_δ)
            cx = z (Max)

The optimal basic solution of (P_δ) is then

    x_J = (A^J)⁻¹b + (A^J)⁻¹δb        x_j = 0   for j ∉ J
    z = πb + πδb

But from the corollary to Theorem IV.3, y = π is an optimal solution of the dual of (P). We thus have:

Theorem 2: The variation of the optimal value of the objective function of problem (P), for a variation δb of the right-hand side b sufficiently small for the optimal basis to remain the same, is yδb, where y is an optimal solution of the dual of (P).
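Theorem 2 is easy to check on problem (P₁) of the previous subsection. The optimal basis there consists of x₁, x₂ and the slack variable of constraint III, so the basis matrix B is built from the corresponding columns of [A U], and the multipliers π solve πB = c_J. A sketch in exact arithmetic (the tiny Gauss-Jordan solver is ours, not from the text):

```python
from fractions import Fraction as F

def solve3(M, rhs):
    """Gauss-Jordan elimination on a 3x3 system, in exact Fractions."""
    M = [row[:] + [r] for row, r in zip(M, rhs)]
    for k in range(3):
        p = next(i for i in range(k, 3) if M[i][k] != 0)
        M[k], M[p] = M[p], M[k]
        M[k] = [v / M[k][k] for v in M[k]]
        for i in range(3):
            if i != k and M[i][k] != 0:
                f = M[i][k]
                M[i] = [a - f * b for a, b in zip(M[i], M[k])]
    return [M[i][3] for i in range(3)]

# Basis matrix B of (P1): columns of x1, x2 and of the slack of III.
B = [[F(2), F(1), F(0)],
     [F(1), F(2), F(0)],
     [F(0), F(1), F(1)]]
cJ = [F(4), F(5), F(0)]     # objective coefficients of the basic variables

# pi solves pi B = cJ, i.e. the transposed system B^T pi^T = cJ^T.
Bt = [[B[i][j] for i in range(3)] for j in range(3)]
pi = solve3(Bt, cJ)
print([int(p) for p in pi])   # -> [1, 2, 0]  (the marginal prices)

# Theorem 2: for a small enough variation db, z changes by pi . db.
db = [F(1), F(0), F(0)]       # one extra unit of raw material I
print(int(sum(p * d for p, d in zip(pi, db))))   # -> 1
```

Taking db = (0, 1, 0) instead gives 2, the marginal price of raw material II computed above by re-solving the program.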

It often happens that linear programs come from economic problems. Take, for instance, problem (P) above:

1. b_i may represent the quantity of commodity i to be used if b_i > 0, or the quantity of commodity i to be produced if b_i < 0.

2. x_j represents the level of activity j.

3. c_j represents the profit produced by activity j operating at level 1 if c_j > 0.

4. −c_j represents the cost of operating activity j at level 1 if c_j < 0.

5. A is called the "matrix of technological constraints."

6. y_i is the dual variable, which is a price attached to commodity i.

Then the problem (P) and its dual (D) may be expressed as follows:

(P): Given the availability (demand) for each of the m commodities i, and a profit (cost) for each of the n activities j, what is the level of each activity such that:

    • supplied commodities will be consumed
    • demanded commodities will be produced

for a maximum total profit (a minimum total cost)?

(D): Given a unit profit (cost) for each of the n activities j, and a supply (demand) for each of the m commodities i, what must be the unit price of each commodity i such that the total value of commodities consumed minus the total value of commodities produced be minimum, subject to the constraints that for each activity j, the total value of consumed commodities minus the total value of produced commodities -- for a level of activity equal to 1 -- will be greater than or equal to the unit profit of activity j?

EXERCISES

1. What is the dual of

            x₁ + x₃ =
            x₂ + x₃ = 2
    (P)     x₁ + x₄ = 2
            x₂ + x₄ = 4
            x₁ + x₂ + x₃ + x₄ = z (Max)

Solve the dual of (P). From this solution, what can be said about (P)? Check directly on (P) that what can be said about (P) by studying its dual is in fact true.
2. Write down the complementary slackness theorem in economic terms.

3. Prove the following "strong complementary slackness theorem": If both dual linear programs (P) and (D) have feasible solutions, it is possible to find a pair of optimal solutions such that:

(i) Whenever a constraint of one problem is tight, the corresponding variable of the other is positive.

(ii) Whenever a variable of one problem is 0, the corresponding constraint of the other is slack.

HINT: Use the result of Exercise VI.11. Take

4. Using the complementary slackness theorem, show that

    x₁ = 1,  x₂ = 2,  x₃ = 0,  x₄ = 4,  x₅ = 0

is an optimal solution of

    x₁ + 2x₂ + 3x₃ − 4x₄ + 5x₅ ≤ 0
    −2x₂ + x₃ + 3x₄ + x₅ ≤
    3x₁ + 4x₃ + x₅ ≤ 3                x_j ≥ 0
    x₂ − x₃ + 2x₅ ≤ 2
    ≤
    = z (Max)

5. Consider the linear program

    x₁ + 2x₂ ≤ 14          x_i ≥ 0,  i = 1, 2
    2x₁ − x₂ ≤ 10
    x₁ − x₂ ≤ 3
    2x₁ + x₂ = z (Max)

Is solution x₁ = 20/3, x₂ = 11/3:

(a) Feasible?
(b) Basic?
(c) Optimal?
6. Show that x₁ = x₂ = 0, x₃ = 4.5, x₄ = 6 is an optimal solution of the linear program of Exercise IV.5.

7. Use the complementary slackness theorem to prove that the feasible solution of the transportation problem of Exercise I.6 is in fact optimal.

8. What can be said of the marginal prices when the optimal solution of the linear program

            Ax ≤ b      x ≥ 0
            cx = z (Max)

is degenerate? Give a geometric interpretation.

9. Let V ⊂ ℝⁿ, V' ⊂ ℝᵐ, and F: V × V' → ℝ. x̄ ∈ V, ȳ ∈ V' is a saddle point for F if:

(*)      F(x, ȳ) ≤ F(x̄, ȳ) ≤ F(x̄, y)      ∀x ∈ V, ∀y ∈ V'

Show that (Lagrangian theorem in linear programming) a necessary and sufficient condition for x̄, ȳ ≥ 0 to be a couple of optimal solutions of dual linear programs

            Ax ≤ b      x ≥ 0              yA ≥ c      y ≥ 0
    (P)                            (D)
            cx = z (Max)                   yb = w (Min)

is that x̄, ȳ is a saddle point of the Lagrangian:

    F(x, y) = cx − y(Ax − b)      x ≥ 0,  y ≥ 0

The common value of the objective functions at the optimum is F(x̄, ȳ).

HINT: To prove sufficiency, use the fact that (*) is true for any x and any y.
Chapter X. The Dual Simplex Algorithm: Parametric
Linear Programming

Consider the pair of dual linear programs

            Ax ≤ b      x ≥ 0              yA ≥ c      y ≥ 0
    (PC)                           (DC)
            cx = z (Max)                   yb = w (Min)

written in canonical form. If b ≥ 0, then x = 0 is a feasible solution of (PC). We say in this case that (PC) is "primal feasible." If c ≤ 0, then y = 0 is a feasible solution of (DC). In this case, we say that (PC) is "dual feasible." Given a linear program written in canonical form with respect to a basis, we know (from Theorem IV.3) that this basis is optimal if and only if the linear program is at the same time primal and dual feasible.

The simplex algorithm can be characterized by saying that we look for dual feasibility while maintaining primal feasibility (which has been obtained through phase I of the simplex method). In some cases, the linear program is given dual feasible but primal infeasible. It would then be neither smart nor efficient to call for phase I and phase II of the simplex method. It is better to apply the dual simplex algorithm, which looks for primal feasibility while maintaining dual feasibility.

The dual simplex algorithm is presented in the first section. In the second section, we study the variations of the optimal solution of (PC) when some of the "data" vary. In effect, some of these data are often imprecise and it is thus interesting to see how an error would affect the solution. It also happens that market prices, for instance, vary, and one wants to know how sensitive the optimal solution is to such variations. These studies are referred to as "sensitivity analysis" or "post-optimality analysis" and can be conducted in a systematic way through "parametric programming."

Another case where parametric programming can be used happens when we have two (or more) objective functions we want to maximize simultaneously. Then, studying

            Ax ≤ b      x ≥ 0
            (c + μf)x = z (Max)

(where c and f are n-row vectors and μ is a parameter) for various values of μ -- i.e., for different relative weights of these objective functions -- gives some insight into the way the optimal solution depends on each objective function.
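For a bounded program the optimum of a linear objective is attained at a vertex, so for a tiny two-variable example one can simply scan the vertices for each value of μ. The sketch below reuses the feasible region of problem (P₁) from Chapter IX with c = (4, 5); the second objective f is a purely hypothetical choice of ours:

```python
from fractions import Fraction as F

# Vertices of the feasible region of problem (P1) of Chapter IX
# (2x1 + x2 <= 8, x1 + 2x2 <= 7, x2 <= 3, x >= 0), found as in
# Chapter VIII by intersecting pairs of limiting lines.
V = [(0, 0), (4, 0), (3, 2), (1, 3), (0, 3)]
c = (4, 5)
f = (1, -2)          # a second, purely hypothetical objective

def best_vertex(mu):
    """Maximize (c + mu f) x over the vertices of the polygon; for a
    bounded program the optimum of a linear function is at a vertex."""
    w = (c[0] + mu * f[0], c[1] + mu * f[1])
    return max(V, key=lambda v: w[0] * v[0] + w[1] * v[1])

for mu in [F(0), F(1), F(2), F(3)]:
    print(mu, best_vertex(mu))
# -> 0 (3, 2)
#    1 (3, 2)
#    2 (4, 0)
#    3 (4, 0)
```

As μ grows, the optimal vertex jumps from (3, 2) to (4, 0): the optimal solution is piecewise constant in μ, which is exactly what parametric programming tracks systematically.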

1. Dual Simplex Algorithm

If the linear program

            Ax = b      x ≥ 0
    (P)
            cx = z (Max)

is written in canonical form with respect to basis J and is dual feasible, we have:

(1) A^J is, up to a permutation of columns, the unit matrix (this permutation is given by the function "col").

(2) c̄ ≤ 0.

(3) c̄^J = 0.

Each iteration of the dual simplex will consist mainly, as for the primal, in a pivot operation on the coefficients of the linear program. After this pivot operation, the linear program will be written in canonical form with respect to the basis

    J̄ = J ∪ {s} \ {col(r)}

How will we determine on which indices r and s to perform the pivot operation?

We begin by choosing a row index r such that b̄_r < 0. If such an index does not exist, J is an optimal basis. We stop. Thus the index leaving the basis will be col(r). Let us call s the index entering the basis; we examine now how s is chosen.

The cost vector c̄' relative to the basis J̄ will be equal to
Chapter X. The Dual Simplex Algorithm: Parametric Linear Programming

        c̄ = c - πA_r

In order that condition (3) be satisfied after pivoting, we must have

        c̄^s = c^s - πA_r^s = 0

and thus

        π = c^s / A_r^s

In particular, for j = col(r), we get

        c̄^j = -c^s / A_r^s

and thus, for condition (2) to be verified after pivoting, we need

        A_r^s < 0    (and then c̄^j = -c^s / A_r^s ≤ 0).
We now ask the question: What happens if no candidate column exists, i.e., if A_r ≥ 0? In this case, the rth constraint is

        A_r x = b_r < 0

which clearly is an infeasible equation for all nonnegative x's. Our problem does not have a feasible solution.
Assume now that we pivot on A_r^s < 0. We get

        c̄ = c - (c^s / A_r^s) A_r

By assumption we have

        c ≤ 0

Thus for all k such that A_r^k ≥ 0, we have c̄^k ≤ 0. In order that condition (2) be satisfied after pivoting, we then take s defined by

(4)        c^s / A_r^s = Min {c^j / A_r^j | A_r^j < 0}

Dual Simplex Algorithm

Linear program (P) is written in canonical form with respect to basis J and c ≤ 0. The mapping "col" is defined as in Remark V.1.

Repeat the following procedure until either an optimal basis is obtained or it is shown that (P) does not have a feasible solution.

Step 1: Choose an r such that b_r < 0. If such an r does not exist, the basis J is optimal. STOP.

Step 2: If A_r ≥ 0, no feasible solution exists. STOP.

Step 3: Let L = {j | A_r^j < 0} (L ≠ ∅ because of Step 2). Choose an s such that

        c^s / A_r^s = Min_{j∈L} [c^j / A_r^j]

Step 4: Perform the pivot operation (defined by row r and column s) on the matrix of coefficients of linear program (P). After this pivot operation (P) is written in canonical form with respect to

        J̄ = J ∪ {s} \ {col(r)}

Let J := J̄;  col(r) := s.

end repeat

Remark 1: The reader will note how closely the dual simplex algorithm parallels (or mirrors) the (primal) simplex (see Section V.3). The fact that the dual simplex algorithm roughly reduces to the primal simplex performed on the dual will be apparent in the following example. The proof of this property is left as an exercise.
In Step 3, the choice of s in case of a tie can be made using a perturbation technique or lexicographic rule. The principles of the revised and the dual simplex algorithm can be combined.
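The four steps can also be sketched in code. The following is a minimal illustrative Python rendering (ours, not the book's): it assumes the program max cx, Ax = b, x ≥ 0 is handed over in canonical form, i.e., the columns of the starting basis form a unit matrix and all reduced costs are nonpositive.

```python
def dual_simplex(A, b, c):
    """Dual simplex on a canonical-form tableau: max cx, Ax = b, x >= 0,
    with the basis columns forming a unit matrix and c <= 0 (dual feasible).
    Returns (optimal value, x) or None when no feasible solution exists."""
    m, n = len(A), len(c)
    A = [row[:] for row in A]; b = b[:]; c = c[:]
    z = 0.0
    # Recover the basis: for each row, the unit column it owns.
    basis = [next(j for j in range(n)
                  if A[i][j] == 1 and all(A[k][j] == 0 for k in range(m) if k != i))
             for i in range(m)]
    while True:
        # Step 1: pick a row with negative right-hand side.
        r = next((i for i in range(m) if b[i] < -1e-9), None)
        if r is None:                      # primal feasible, hence optimal
            x = [0.0] * n
            for i, j in enumerate(basis):
                x[j] = b[i]
            return z, x
        # Step 2: if row r is nonnegative, the r-th equation is infeasible.
        L = [j for j in range(n) if A[r][j] < -1e-9]
        if not L:
            return None
        # Step 3: ratio test (4) on the cost row.
        s = min(L, key=lambda j: c[j] / A[r][j])
        # Step 4: pivot on (r, s).
        piv = A[r][s]
        A[r] = [v / piv for v in A[r]]; b[r] /= piv
        for i in range(m):
            if i != r:
                f = A[i][s]
                A[i] = [A[i][j] - f * A[r][j] for j in range(n)]
                b[i] -= f * b[r]
        z += c[s] * b[r]                   # objective-constant update
        cs = c[s]
        c = [c[j] - cs * A[r][j] for j in range(n)]
        basis[r] = s
```

Applied to the system (S) of the example below, this sketch stops with z = -22 and y = (1, 2, 0, 0, 0).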

EXAMPLE: Consider (D1), the dual of linear program (P1):

        2y1 +  y2       ≥ 4
         y1 + 2y2 + y3  ≥ 5        y_i ≥ 0,   i=1,2,3
        8y1 + 7y2 + 3y3 = w (Min)

which, after addition of the slack variables y4, y5, can be written in canonical form with respect to the basis J = {4,5}:

        -2y1 -  y2       + y4      = -4
(S)      -y1 - 2y2 - y3       + y5 = -5        y_i ≥ 0,   i=1,2,...,5
        -8y1 - 7y2 - 3y3           = z (Max)

We now give the solution of this linear program using the dual simplex algorithm in tabular form (the pivot element is starred, the pivot row is arrowed):

        y1    y2    y3    y4    y5    z  |   b
        -2    -1     0     1     0    0  |  -4
        -1    -2    -1*    0     1    0  |  -5   ←
        -8    -7    -3     0     0    1  |   0
        -----------------------------------------
        -2    -1*    0     1     0    0  |  -4   ←
         1     2     1     0    -1    0  |   5
        -5    -1     0     0    -3    1  |  15
        -----------------------------------------
         2     1     0    -1     0    0  |   4
        -3*    0     1     2    -1    0  |  -3   ←
        -3     0     0    -1    -3    1  |  19
        -----------------------------------------
         0     1    2/3   1/3  -2/3   0  |   2
         1     0   -1/3  -2/3   1/3   0  |   1
         0     0    -1    -3    -2    1  |  22

The final basis {2,1} is primal feasible: the optimal solution of (D1) is y1 = 1, y2 = 2, y3 = 0, with w = 22.

Remark 2: The dual simplex algorithm is frequently used in "post-optimization." After having solved a linear program, it might happen that we want to add constraints that are not satisfied by the present "optimal" solution. These new constraints are a source of "infeasibility." It is much more efficient to apply a few steps of the dual simplex algorithm than to begin the solution again from scratch. This situation often occurs in combinatorial optimization when one solves integer linear programs (i.e., linear programs with integrality constraints on the variables; see Reference [4]).

2. Parametric Linear Programming

Parametric programming consists of studying how the optimal solution of the linear program

        Ax = b        x ≥ 0
(P)
        cx = z (Max)

varies when components of b and/or c depend linearly on one or more parameters. Study of the variation of the solution with the variation of some coefficients of A is a more intricate matter and is not presented here.
We first study numerical examples and then give a few general results.
We f i r st s t udy numerical exampl e s and t hen give a few general r e sul t s .

(a) A numerical example of variation of the solution of (P) as a function of c: We will study

        2x1 +  x2 + x3           = 8
         x1 + 2x2      + x4      = 7
(Q_μ)          x2           + x5 = 3        x_i ≥ 0,   i=1,2,...,5
        (4 + μ)x1 + 5x2          = z (Max)

We recognize, for μ = 0, problem (P1). We write (Q_μ) in canonical form with respect to the basis {5,1,2}:

             (1/3)x3 - (2/3)x4 + x5 = 1
        x1 + (2/3)x3 - (1/3)x4      = 3
(6)                                            x_i ≥ 0,   i=1,2,...,5
        x2 - (1/3)x3 + (2/3)x4      = 2

        μx1 - x3 - 2x4 = z (Max) - 22

Actually, (6) is not very satisfactory since it gives the optimal solution only for μ = 0. Let us subtract μ times the second equation from both sides of the objective function. We get

             (1/3)x3 - (2/3)x4 + x5 = 1
        x1 + (2/3)x3 - (1/3)x4      = 3
(7)                                            x_i ≥ 0,   i=1,2,...,5
        x2 - (1/3)x3 + (2/3)x4      = 2

        -((2/3)μ + 1)x3 + ((1/3)μ - 2)x4 = z (Max) - 22 - 3μ

With this form, we see that the present basis is optimal for

        (2/3)μ + 1 ≥ 0    and    (1/3)μ - 2 ≤ 0

i.e., for -3/2 ≤ μ ≤ 6. So we are naturally led to examine what happens when μ goes over these bounds.


For μ = 6, the coefficient of x4 is equal to zero in (7). We check that the objective function is then parallel to the constraint relative to raw material 1. For μ > 6, we let x4 enter the basis. We perform a pivoting as usual except that the coefficient

        c^4 = (1/3)μ - 2

is not a scalar but a function of μ.

We get

              x2                 + x5 = 3
        x1 + (1/2)x2 + (1/2)x3        = 4
(8)                                            x_i ≥ 0,   i=1,2,...,5
           (3/2)x2 - (1/2)x3 + x4     = 3

        -((1/2)μ - 3)x2 - ((1/2)μ + 2)x3 = z (Max) - 16 - 4μ

This basic solution is optimal for μ ≥ 6. Let us come back to the case μ < -3/2. We start from (7) and let x3 enter the basis. We get

        x3 - 2x4 + 3x5 = 3
        x1 + x4 - 2x5 = 1
(9)                                            x_i ≥ 0,   i=1,2,...,5
        x2 + x5 = 3

        (-μ - 4)x4 + (2μ + 3)x5 = z (Max) - 19 - μ

The basis {3,1,2} is optimal for -4 ≤ μ ≤ -3/2. For μ < -4, x4 enters the basis and we get

        2x1 + x3 - x5 = 5
        x1 + x4 - 2x5 = 1
(9')                                           x_i ≥ 0,   i=1,2,...,5
        x2 + x5 = 3

        (μ + 4)x1 - 5x5 = z (Max) - 15

The basis {3,4,2} is optimal for μ ≤ -4.



We can now summarize our results in the following table:

        μ   |  μ ≤ -4  |  -4 ≤ μ ≤ -3/2  |  -3/2 ≤ μ ≤ 6  |  μ ≥ 6
        ----|----------|-----------------|----------------|---------
        x1  |     0    |        1        |       3        |    4
        x2  |     3    |        3        |       2        |    0
        x3  |     5    |        3        |       0        |    0
        x4  |     1    |        0        |       0        |    3
        x5  |     0    |        0        |       1        |    3
        z   |    15    |      19 + μ     |     22 + 3μ    |  16 + 4μ

If we plot the value of z_max as a function of μ, we get Figure X.1.

Figure X.1: z_max vs. μ for (Q_μ)
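The function plotted in Figure X.1 can be transcribed directly from the table above. The little check below (an illustration only, not from the book) verifies that the four pieces agree at the breakpoints; since the slopes 0, 1, 3, 4 increase with μ, z_max is piecewise linear and convex in μ, in agreement with part (v) of Theorem 1 in Section 2(c).

```python
def zmax_Q(mu):
    """Optimal value of (Q_mu), read piece by piece from the summary table."""
    if mu <= -4:
        return 15.0
    if mu <= -1.5:
        return 19.0 + mu
    if mu <= 6:
        return 22.0 + 3.0 * mu
    return 16.0 + 4.0 * mu

# The pieces agree where two bases are simultaneously optimal:
assert zmax_Q(-4) == 15.0 and zmax_Q(-1.5) == 17.5 and zmax_Q(6) == 40.0
```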

The geometric interpretation of these results, shown in Figure X.2, is particularly enlightening.

Figure X.2: Geometric Solution for (Q_μ)

(b) A numerical example of variation of the solution of (P) as a function of b: We consider the linear program

        2x1 +  x2 + x3           = 8 + 2μ
         x1 + 2x2      + x4      = 7 + 7μ
(P_μ)          x2           + x5 = 3 + 2μ        x_i ≥ 0,   i=1,2,...,5
        4x1 + 5x2                = z (Max)
I

We note that for μ = 0, (P_μ) is problem (P1). We write (P_μ) in canonical form with respect to the basis {5,1,2}:

             (1/3)x3 - (2/3)x4 + x5 = 1 - 2μ
        x1 + (2/3)x3 - (1/3)x4      = 3 - μ
(10)                                           x_i ≥ 0,   i=1,2,...,5
        x2 - (1/3)x3 + (2/3)x4      = 2 + 4μ

        -x3 - 2x4 = z (Max) - 22 - 16μ

This basis is feasible, and thus optimal, for

        1 - 2μ ≥ 0,    3 - μ ≥ 0,    2 + 4μ ≥ 0

i.e.,

        -1/2 ≤ μ ≤ 1/2

Let us explore what happens when μ > 1/2. In this case, the first component of the right-hand side becomes negative. The basis {5,1,2} stays optimal but is no longer feasible: (10) stays dual feasible. We are led to perform a step of the dual simplex algorithm. We pivot on the first row and fourth column and we get

        -(1/2)x3 + x4 - (3/2)x5 = -3/2 + 3μ
        x1 + (1/2)x3 - (1/2)x5 = 5/2
(11)                                           x_i ≥ 0,   i=1,2,...,5
        x2 + x5 = 3 + 2μ

        -2x3 - 3x5 = z (Max) - 25 - 10μ

The basis {4,1,2} remains feasible for all μ ≥ 1/2. Let us come back to (10) and try μ < -1/2. Infeasibility appears in the third row and we perform again a step of the dual simplex algorithm. We get

        x2 + x5 = 3 + 2μ
        x1 + 2x2 + x4 = 7 + 7μ
(12)                                           x_i ≥ 0,   i=1,2,...,5
        -3x2 + x3 - 2x4 = -6 - 12μ

        -3x2 - 4x4 = z (Max) - 28 - 28μ

The basis {5,1,3} remains feasible, and thus optimal, for -1 ≤ μ ≤ -1/2. For μ < -1, the second equation is infeasible: its right-hand side is negative while all its coefficients are nonnegative, so (P_μ) has no feasible solution.
These results can be summarized in the following table:

        μ   |  -1  | -1<μ<-1/2 | -1/2 | -1/2<μ<1/2 | 1/2 |  μ>1/2
        ----|------|-----------|------|------------|-----|-----------
        x1  |   0  |   7 + 7μ  |  7/2 |    3 - μ   | 5/2 |    5/2
        x2  |   0  |     0     |   0  |   2 + 4μ   |  4  |  3 + 2μ
        x3  |   6  |  -6 - 12μ |   0  |      0     |  0  |     0
        x4  |   0  |     0     |   0  |      0     |  0  | -3/2 + 3μ
        x5  |   1  |   3 + 2μ  |   2  |   1 - 2μ   |  0  |     0
        z   |   0  |  28 + 28μ |  14  |  22 + 16μ  |  30 | 25 + 10μ

And if we plot the value of z_max as a function of μ, we get Figure X.3.

Figure X.3: z_max vs. μ for (P_μ)
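As with (Q_μ), the value function of Figure X.3 can be transcribed from the table. This time the slopes 28, 16, 10 decrease with μ, so z_max is piecewise linear and concave, in agreement with part (iv) of Theorem 1 in Section 2(c). A small illustrative check (not part of the text):

```python
def zmax_P(mu):
    """Optimal value of (P_mu); the problem is infeasible for mu < -1."""
    if mu < -1:
        return None
    if mu <= -0.5:
        return 28.0 + 28.0 * mu
    if mu <= 0.5:
        return 22.0 + 16.0 * mu
    return 25.0 + 10.0 * mu

# Continuity at the breakpoints visible in Figure X.3:
assert zmax_P(-1) == 0.0 and zmax_P(-0.5) == 14.0 and zmax_P(0.5) == 30.0
```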



On Figure X.4, we sketch the domain of feasible solutions for various values of μ.

Figure X.4: The Domain of Feasible Solutions of (P_μ) for Various μ

(c) General results in parametric programming: Let us consider the following linear programs:

        Ax = b + μd        x ≥ 0
(P_μ)
        cx = z (Max)

        Ax = b             x ≥ 0
(Q_μ)
        (c + μf)x = w (Max)

where μ is a scalar, d is an m-column vector, and f is an n-row vector.



The domain of feasible solutions for (Q_μ) does not depend on μ. We assume that

        {x | Ax = b, x ≥ 0} ≠ ∅

We let z(μ) (resp. w(μ)) denote the value of the objective function of (P_μ) (resp. of (Q_μ)) for an optimal solution.

Definition 1: Recall (cf. Definition VIII.5) that a real-valued function g defined on a convex set C is convex if

        g(λp + (1 - λ)q) ≤ λg(p) + (1 - λ)g(q)        for all p, q ∈ C and 0 ≤ λ ≤ 1

The function g is said to be "concave" if -g is convex.

Theorem 1: We have the following properties:

(i) The set of values of μ for which (P_μ) has a feasible solution is an interval [a,b] (the bounds of which may be infinite).

(ii) If for some μ0 ∈ [a,b], (P_μ0) has a finite optimal solution, the same is true for (P_μ) for all μ ∈ [a,b].

(iii) The set of values of μ for which a certain basis of (P_μ) (resp. of (Q_μ)) is optimal is an interval.

(iv) z(μ) is a piecewise linear concave function of μ.

(v) w(μ) is a piecewise linear convex function of μ.

Proof: (i) The set

        {(x, μ) | Ax = b + μd, x ≥ 0}

is a convex polyhedron of R^{n+1}. Its projection on the μ-axis is thus an interval.

(ii) The dual of (P_μ) has a feasible solution or not independently of the value of μ. The property is then a consequence of the duality theorem (see Section IX.1).

(iii) These sets are convex, as can be seen by writing (P_μ) (resp. (Q_μ)) in canonical form with respect to an optimal basis.

(iv) Let I be an interval for which J is an optimal basis of (P_μ) (resp. (Q_μ)). In this interval z(μ) (resp. w(μ)) is a linear function of μ, as can be seen by writing (P_μ) (resp. (Q_μ)) in canonical form with respect to J.
Let μ̄, μ̂ ∈ [a,b] and let x̄, x̂ be the corresponding optimal solutions of (P_μ̄) and (P_μ̂), respectively. Let

        x = λx̄ + (1 - λ)x̂        0 ≤ λ ≤ 1

Then x is a feasible solution of (P_μ) for

        μ = λμ̄ + (1 - λ)μ̂

And thus

(*)        z(μ) ≥ cx = λcx̄ + (1 - λ)cx̂ = λz(μ̄) + (1 - λ)z(μ̂)

which is precisely the concavity of z(μ).

(v) The dual of (Q_μ) is a problem of type (P_μ) except for the objective function, which is to be minimized.

EXERCISES

1. Use the dual simplex algorithm to solve

(P)        ...
           x1 + x2 = z (Max)

2. Solve, using the dual simplex algorithm, the linear program

        x1 - x2 ≤ ...
        x1 ≥ ...
        x1 + x2 ≥ 3        x1, x2 ≥ 0
        x1 + 2x2 = z (Min)

3. Solve, using the dual simplex algorithm, the linear program

        x1 + x2 ≥ 2
        -x1 + x2 ≥ 3        x1, x2 ≥ 0
        x1 ≥ 4
        3x1 + 2x2 = z (Min)

4. Describe the lexicographic method applied to the dual simplex algorithm.

5. Consider the linear program

        Ax ≤ b        x ≥ 0
(P)
        cx = z (Max)

which is neither primal feasible (some b_i are negative) nor dual feasible. Let e be the m-column vector each component of which is equal to 1, and let μ̄ = -Min_i [b_i].
Prove that applying the technique of Section 2(b) to solve

        Ax ≤ b + μe        x ≥ 0
        cx = z (Max)

starting with μ = μ̄ and letting μ decrease to 0 gives an alternative method of initialization (an alternative phase 1).
Use this technique to solve

        x1 + 2x2 - 3x3 - x4 ≤ -1
        2x1 + x2 + 2x3 + 3x4 ≤ 3        x_i ≥ 0,   i=1,2,...,4
        x1 + 3x2 - x3 + x4 = z (Max)

6. Solve the linear program

        ax1 + ... + 2x3 ≤ 6
        ...
        ... = z (Max)        x_i ≥ 0,   i=1,2,...

(a) For a = -1.

(b) For all values of a.

7. Give all the optimal solutions of

        x1 + x2 ≥ 2
(P_a)   x1 + 2x2 ≥ 3        x1, x2 ≥ 0
        (2 + a)x1 + 4x2 = z (Min)

when a varies.

8. Give all the optimal solutions of

        x1 + x2 ≥ 3 - a
(Q_a)   x1 + 2x2 ≥ 2 + a
        (2 + a)x1 + 4x2 = z (Max)

9. Solve the parametric program

        -2x1 + x2 ≤ ... + μ
(P_μ)    x1 - 2x2 ≤ 2 - 2μ
         x1 + x2 ≤ 3 + 6μ
         x1 + 2x2 = z (Max)
Chapter XI. The Transportation Problem

A large number of linear programs actually solved are transportation problems or possess a structure of the same type. This special structure allows very efficient implementation of the simplex algorithm (so that very large transportation problems can be solved). This structure also has a great theoretical interest since network flow problems present the very same structure. For these two reasons, the transportation problem deserves special study in a course on linear programming.

1. The Problem

Definition 1: We call the following linear program the "transportation problem":

         q
         Σ  t_kℓ = a_k        k=1,2,...,p
        ℓ=1

         p
(T)      Σ  t_kℓ = b_ℓ        ℓ=1,2,...,q        t_kℓ ≥ 0
        k=1

         q    p
         Σ    Σ  d_kℓ t_kℓ = z (Min)
        ℓ=1  k=1

where

• the p×q quantities t_kℓ are the variables or unknowns;
• the coefficients a_k, b_ℓ, and d_kℓ are given, with

        a_k ≥ 0        for k=1,2,...,p
        b_ℓ ≥ 0        for ℓ=1,2,...,q

(1)        Σ_{k=1}^p a_k = Σ_{ℓ=1}^q b_ℓ

Remark 1: If a_k < 0 (resp. b_ℓ < 0), the kth (resp. ℓth) constraint of (T) is infeasible. In addition:

        a_k = 0  ⟹  t_kℓ = 0    for ℓ=1,2,...,q
        b_ℓ = 0  ⟹  t_kℓ = 0    for k=1,2,...,p

Let us add up the p first constraints and the q last constraints of (T). We find

(1')        Σ_{k=1}^p a_k = Σ_{k=1}^p Σ_{ℓ=1}^q t_kℓ = Σ_{ℓ=1}^q b_ℓ

Thus, if (1) is not satisfied, (T) has no feasible solution.


Now assume that we add the same constant α to each coefficient d_kℓ. The value of the objective function of (T) will be increased by

        Σ_{k=1}^p Σ_{ℓ=1}^q α t_kℓ = α Σ_{k=1}^p a_k = α Σ_{ℓ=1}^q b_ℓ

For every feasible solution of (T), its objective function is increased by a constant term: the value of the optimal solution does not change, and we can assume, without loss of generality, that

        d_kℓ ≥ 0        k=1,2,...,p;  ℓ=1,2,...,q

Remark 2: Let us consider the problem met by an industrialist who wants to transport at minimal cost a certain commodity from p factories (in the kth factory a quantity a_k of this commodity is available) to q warehouses (the demand in the ℓth warehouse is b_ℓ); the unit cost of shipping (i.e., the cost of shipping one unit) from factory k to warehouse ℓ is d_kℓ. The formulation of this problem is as follows (t_kℓ denotes the amount of commodity shipped from factory k to warehouse ℓ):

         q
         Σ  t_kℓ ≤ a_k        k=1,2,...,p
        ℓ=1

         p
(2)      Σ  t_kℓ ≥ b_ℓ        ℓ=1,2,...,q        t_kℓ ≥ 0
        k=1

         q    p
         Σ    Σ  d_kℓ t_kℓ = z (Min)
        ℓ=1  k=1

Adding up the p first and the q last inequations, we get

(1")        Σ_{ℓ=1}^q b_ℓ ≤ Σ_{k=1}^p Σ_{ℓ=1}^q t_kℓ ≤ Σ_{k=1}^p a_k

This relation has the following interpretation: in order that linear program (2) have a feasible solution, it is necessary that the total demand be not greater than the total availability of the commodity. We will thus assume that (1") is satisfied.
Let us now add a fictitious warehouse of index 0 for which the demand is

        b_0 = Σ_{k=1}^p a_k - Σ_{ℓ=1}^q b_ℓ

and assume that the transportation costs d_k0 from the factories to this fictitious warehouse equal 0. We then get a new linear program (2'). We leave it as an exercise (Exercise 2) to show that (2') is in fact equivalent to (2) and of the same form as (2).
For (2'), the double inequality (1") is in fact an equality (due to the value we have given to b_0). From this can be deduced the fact that the inequalities in (2') can be replaced by equalities (there can be no feasible solution for which a single inequality is slack).† We conclude this remark by saying that (T) is an appropriate model for the transportation problem.

† The same argument was used in Exercise II.12.
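The fictitious-warehouse construction can be written out in a few lines (an illustrative sketch; the warehouse of index 0 from the text is appended here as the last column):

```python
def balance(a, b, d):
    """Turn the inequality problem (2) into the balanced problem (T) by
    adding a dummy warehouse that absorbs the surplus at zero cost."""
    surplus = sum(a) - sum(b)
    # condition (1''): total demand must not exceed total availability
    assert surplus >= 0
    return a[:], b[:] + [surplus], [row[:] + [0] for row in d]
```

After balancing, total availability equals total demand, so the inequalities can be replaced by equalities as argued above.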



Remark 3: A problem of this type was investigated by Monge. The present formalism was first studied by Hitchcock, Kantorovitch, and Koopmans. Problem (P2) of Chapter I is a transportation problem.
The mathematical properties of linear programs that are transportation problems come from the fact that the matrix A has a very special structure. This special structure is due to the fact that there is a graph on which the problem is defined. Solution methods other than the revised simplex algorithm that we are about to present here exist to solve transportation problems. We can, in particular, cite the "Hungarian method" of H.W. Kuhn, a method that is also named "primal-dual" (see [6]).

2. Properties of the Transportation Problem

Remark 4: The transportation problem can be written

        Ax = f        x ≥ 0
(T)
        cx = z (Min)

where

• A is a (p+q) × pq matrix;
• f = [a; b] is a (p+q)-column vector;
• c is a pq-row vector;
• x is a pq-column vector.

To make clear the correspondence between c and d on the one hand, and x and t on the other hand, we pose

        c_j = d_kℓ
                            for    j = q(k - 1) + ℓ
(3)     x_j = t_kℓ

In the sequel, a column index of matrix A will sometimes be denoted by j, sometimes by the couple (k,ℓ); j and (k,ℓ) are related through (3).

Remark 5: Each variable t_kℓ appears once and only once (with coefficient 1) in the group of the p first equations of (T) (actually in the kth equation of this group). Each variable t_kℓ appears once and only once (with coefficient 1) in the group of the q last equations of (T) (actually in the ℓth equation of this group). Thus matrix A has the following properties:

(i) A column of A has exactly two nonzero elements, which are equal to 1.

(ii) One of these nonzero elements belongs to the group of the p first rows, the other to the group of the q last rows.

(iii) Any (p+q)-column vector with properties (i) and (ii) is a column of A.

Example: Matrix A for problem (P2) is

Definition 2: We will give the name "nonsingular triangular" matrix (in brief, "triangular" matrix) to a square nonsingular matrix B satisfying the following (recursive) definition:

(i) A nonzero scalar is a triangular matrix of dimension 1.

(ii) B has a row with exactly one nonzero element. The submatrix obtained from B by deleting this row and the column containing the nonzero element is triangular.

Remark 6: This definition is a very slight extension of the usual concept of triangular matrix: given a triangular matrix as defined here, there exists a proper way of permuting its rows and columns that gives a triangular matrix as usually defined.

Remark 7: Not e t hat if B is a t r i angul ar mat rix, linear sys tem

(4 ) Bx f

can be solved very easi ly by substit ution . Let i be a r ow of B contain i ng


178 Chapter XI. The Transportation Problem

just one non- zero e l ement , s ay We have

(4 ' ) x.
J

Subs t i t ut i ng
x by it s val ue (4 ' ) i n the other equa t i ons of (4) , we get a
j
linear s ys tem of di mens i on n- l (if B was of dimen s i on n) t he mat rix of whic h
i s tri angul ar.
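Remark 7 is easy to animate in code. The sketch below (illustrative, with no claim to efficiency) solves Bx = f for a "triangular" matrix in the generalized sense of Definition 2, repeatedly finding a row with a single surviving nonzero entry and substituting:

```python
def solve_triangular(B, f):
    """Substitution solver for matrices that are triangular in the sense
    of Definition 2: some remaining row always has exactly one nonzero
    element in the remaining columns."""
    n = len(B)
    rows, cols = set(range(n)), set(range(n))
    x = [0.0] * n
    f = f[:]
    for _ in range(n):
        i = next(r for r in rows
                 if sum(1 for c in cols if B[r][c] != 0) == 1)
        j = next(c for c in cols if B[i][c] != 0)
        x[j] = f[i] / B[i][j]            # equation (4')
        rows.discard(i); cols.discard(j)
        for r in rows:                   # substitute x_j away
            f[r] -= B[r][j] * x[j]
    return x
```

Note that the matrix need not be triangular in the usual sense: a row/column permutation of a classical triangular matrix is handled just as well.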

Theorem 1: Every square nonsingular submatrix of A (the constraint matrix of linear program (T)) is triangular.

Proof: Let B be a square nonsingular submatrix of A of dimension k. Assume that B is not triangular but that every submatrix of A of dimension less than k is triangular (then k ≥ 2 from the definition of a triangular matrix of dimension 1).
From Remark 5(i), every column of B contains at most two nonzero elements. Assume that there exists a row of B with just one nonzero element. Deleting this row and the column of B containing the nonzero element, we obtain a submatrix of A of dimension k-1, which, by the assumption of minimality on k, is triangular. Thus B would be triangular. Consequently, every row of B contains at least two nonzero elements.
If we count the number of nonzero elements of B by looking at the rows, we find at least 2k elements. By looking at columns, we find at most 2k elements. Thus B has exactly 2k nonzero elements: exactly two nonzero elements in each row and two nonzero elements in each column.
From Remark 5(ii), each column contains a 1 in the group of the p first equations and another in the group of the q last equations. We now consider the k-row vector y defined by

        y_i = +1    if the ith row of B belongs to the group of the p first rows of A
        y_i = -1    if the ith row of B belongs to the group of the q last rows of A

We have y ≠ 0, yB = 0. Thus B is singular and we have a contradiction.



Corollary:† Matrix A is totally unimodular; i.e., every square submatrix of A has a determinant that is equal to +1, -1, or 0.

Proof: After a permutation on rows and columns (which does not change the absolute value of the determinant), a triangular matrix B can be written under the classical triangular form with

        B_i^j = 0        if j > i

The value of the determinant of such a matrix is equal to the product of its diagonal elements, which are all equal to 1 if B is a submatrix of A.

Remark 8: The example of Section VII.3 consists of solving (P2) by the revised simplex algorithm. The reader will check that all basic matrices are triangular and that this was a great help in actually solving the different linear systems.

Remark 9: The matrix

has exactly two 1's in each column. It is not triangular since property (ii) of Remark 5 is not satisfied.

Remark 10: Every basic solution of linear program (T) will be integer valued if the components of vectors a and b are integers. Let I be a subset of the set of rows of A such that

        |I| = rank(A)

i.e., rows not belonging to I correspond to redundant equations of the set of constraints Ax = f. Let J be a basis of (T). The basic solution corresponding to J is

        x_j = 0        j ∉ J
(4")
        A_I^J x_J = f_I

† This property will not be used in the sequel.



But A_I^J is a square nonsingular submatrix of A, i.e., A_I^J is triangular and all nonzero elements of A_I^J are equal to 1. Thus (4") is a system of type (4) that will be solved by additions and subtractions. If f_I has integer components, x_J will also have integer components.

Theorem 2: A possesses p+q-1 linearly independent columns. Any constraint of (T) is redundant. After deletion of a single constraint, the linear system thus obtained is of full rank.

Proof: Let y be the (p+q)-row vector the components of which are

        y_i = 1     if 1 ≤ i ≤ p
        y_i = -1    if p+1 ≤ i ≤ p+q

From Remark 5(i) and 5(ii), we have

        yA = 0

Thus the linear system Ax = f is redundant. Let

        J = {1, 2, ..., q, q+1, 2q+1, 3q+1, ..., (p-1)q+1};        |J| = p+q-1
        I = {1, 2, ..., p-1, p, p+2, p+3, ..., p+q};               |I| = p+q-1

It is easy to check that A_I^J is triangular.
Since all components of y are different from 0, any constraint can be deleted and the system thus obtained is nonredundant.

Example: For p = 4, q = 7, matrix A_I^J has the following shape (dots stand for zeros; the first q columns correspond to t_11,...,t_1q, the last p-1 columns to t_21, t_31, t_41):

        1 1 1 1 1 1 1 . . .
        . . . . . . . 1 . .
        . . . . . . . . 1 .
        . . . . . . . . . 1
        . 1 . . . . . . . .
        . . 1 . . . . . . .
        . . . 1 . . . . . .
        . . . . 1 . . . . .
        . . . . . 1 . . . .
        . . . . . . 1 . . .

Theorem 3: A necessary and sufficient condition for a feasible solution of (T) to be an optimal solution is that one can find

        u_1, u_2, ..., u_p
        v_1, v_2, ..., v_q

such that

(5)        u_k + v_ℓ ≤ d_kℓ                   k=1,2,...,p;  ℓ=1,2,...,q

(6)        t_kℓ (d_kℓ - u_k - v_ℓ) = 0        k=1,2,...,p;  ℓ=1,2,...,q

Proof: The dual of (T) is

        u_k + v_ℓ ≤ d_kℓ        k=1,2,...,p;  ℓ=1,2,...,q
(T*)     p              q
         Σ  a_k u_k  +  Σ  b_ℓ v_ℓ = w (Max)
        k=1            ℓ=1

Condition (5) guarantees that the u_k and v_ℓ are a feasible solution of (T*). If (5) and (6) are satisfied, we have a couple of feasible solutions to (T) and (T*) that satisfy the complementary slackness theorem.

Remark 11: If u_k, v_ℓ (k=1,2,...,p; ℓ=1,2,...,q) satisfy constraints (5), so do

        u_k + α,  v_ℓ - α        k=1,2,...,p;  ℓ=1,2,...,q

for any α. Thus the solution of (T*) is defined up to an additive constant. This comes from the fact that, y being defined as in the proof of Theorem 2, we have yA = 0. We will take advantage of this fact in the sequel by systematically imposing u_1 = 0.
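Theorem 3 turns optimality into a purely mechanical check. The helper below (an illustration; the variable names are ours) tests conditions (5) and (6) for given t, d, u, v:

```python
def is_optimal(t, d, u, v, eps=1e-9):
    """Check conditions (5) and (6) of Theorem 3 for a feasible shipment t."""
    p, q = len(d), len(d[0])
    for k in range(p):
        for l in range(q):
            if u[k] + v[l] > d[k][l] + eps:
                return False    # (5) violated: dual infeasible
            if t[k][l] > eps and abs(d[k][l] - u[k] - v[l]) > eps:
                return False    # (6) violated: complementary slackness fails
    return True
```

For instance, in a small 2x2 problem with costs d = [[1,2],[2,1]] and shipment t = [[1,0],[0,1]], the potentials u = (0,0), v = (1,1) certify optimality.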

3. Solution of the Transportation Problem

Remark 12: Transportation problem (T) and its algorithm of solution are presented in the following table of dimensions (p+1) × (q+1):

The problem consists of finding t_kℓ, u_k, v_ℓ such that:

(i) The sum of the t_kℓ on the kth row is equal to a_k.

(ii) The sum of the t_kℓ on the ℓth column is equal to b_ℓ.

(iii) u_k and v_ℓ satisfy constraints (5).

(iv) Relations (6) are satisfied.

We note that this presentation of the transportation problem is much more compact than the usual formalism, since matrix A has p+q rows and p×q columns, as opposed to the p+1 rows and q+1 columns of the preceding table. In fact, each entry of this table corresponds to a column of A.

Example: We consider the transportation problem (T0) defined by the table (availabilities a_k on the left, demands b_ℓ on top, costs d_kℓ inside):

              |   1     6     3     2
        ------|----------------------
          6   |   2     3    13     7
          2   |   ...   0     6     2
          4   |   5     8    15     9

The corresponding linear program is

        t_11 + t_12 + t_13 + t_14 = 6
        t_21 + t_22 + t_23 + t_24 = 2
        t_31 + t_32 + t_33 + t_34 = 4
        t_11 + t_21 + t_31 = 1;   t_12 + t_22 + t_32 = 6;   t_13 + t_23 + t_33 = 3;   t_14 + t_24 + t_34 = 2        t_kℓ ≥ 0
        2t_11 + 3t_12 + 13t_13 + 7t_14 + ... + 0t_22 + 6t_23 + 2t_24 + 5t_31 + 8t_32 + 15t_33 + 9t_34 = z (Min)
Remark 13: As announced earlier, we will solve transportation problems through the revised simplex algorithm. This means that we will carry a primal feasible solution and dual variables satisfying the complementary slackness relations. When the dual variables provide a feasible solution of the dual problem, we will be done. In other words, after an initialization step, we will carry t_kℓ, u_k, and v_ℓ satisfying (i), (ii), and (iv) of Remark 12. We will stop when (iii) is also verified.

Remark 14: To initialize, we look for a feasible basis J and the corresponding basic solution. J will denote the set of couples (k,ℓ) in the basis. From Theorem 2, we have |J| = p+q-1. The procedure is the following:

(i) If p = 1, q > 1,

        J = {(1,1), (1,2), ..., (1,q)}

(ii) If p > 1, q = 1,

        J = {(1,1), (2,1), (3,1), ..., (p,1)}

(iii) If p > 1 and q > 1, we choose an entry from table T, for instance† entry (r,s) such that

(7)        d_rs = Min {d_kℓ | k=1,2,...,p;  ℓ=1,2,...,q}

We let

        J := J ∪ {(r,s)}

(8)        t_rs = Min (a_r, b_s)

(8) is the maximum amount of commodity that can be transported from r to s without violating nonnegativity constraints on the other variables (look at row sums and column sums in tableau T).

(iv) If a_r ≤ b_s, we have t_rs = a_r and all the other variables t_rℓ for ℓ ≠ s will be equal to 0. The rth row of tableau T is saturated. We can now consider that we want to find a feasible solution for the transportation problem obtained from (T) by suppressing the rth row of T and replacing b_s by b_s - a_r. (In this new transportation problem, we ask that the sum on column s be only equal to b_s - a_r.)

† Choice (7) is not essential. We can also take r = s = 1. We then have, for evident reasons when one looks at table T, a method called the "north-west corner method." (7) is a heuristic approach which in general will give a starting solution closer to the optimal.

(v) If a_r > b_s, we have t_rs = b_s and all the variables t_ks for
k ≠ r will be equal to 0. The s-th column of table T is
saturated. We define a new transportation problem by suppressing
the s-th column of table T and replacing a_r with a_r - b_s.

Example: We apply this procedure to find an initial solution to (T₀). The
solution is given in the following table:

We begin by posing t₂₂ = 2, J = {(2,2)}. We suppress the second row and
b₂ changes from 6 to 4. Then we pose t₁₁ = 1; J := J ∪ {(1,1)}, we sup-
press the first column, and a₁ is decreased from 6 to 5. Then we pose
t₁₂ = 4, J := J ∪ {(1,2)}, we suppress the second column, and a₁ decreases
from 5 to 1. Then we pose t₁₄ = 1, J := J ∪ {(1,4)}, we suppress the
first row, and b₄ is decreased from 2 to 1.
The reduced transportation table is then

We have p = 1; we let t₃₃ = 3, t₃₄ = 1, J := J ∪ {(3,3), (3,4)}.

We now present the initialization algorithm more formally:

Initialization routine

t_kℓ := 0 for k=1,2,...,p; ℓ=1,2,...,q
J := ∅; P := {1,2,...,p}; Q := {1,2,...,q}

while |P| > 1 and |Q| > 1 do
    d_rs := Min{k∈P; ℓ∈Q} [d_kℓ]; J := J ∪ {(r,s)}
        /* if more than one couple (r,s) is a candidate, choose any one */
    if a_r ≤ b_s then
        t_rs := a_r; b_s := b_s - a_r; P := P \ {r}
    else
        t_rs := b_s; a_r := a_r - b_s; Q := Q \ {s}
    end if
end while

if |P| = 1 then
    let r be the unique element in P
    for all s ∈ Q do
        J := J ∪ {(r,s)}; t_rs := b_s
    end for all
else /* then |Q| = 1 */
    let s be the unique element of Q
    for all r ∈ P do
        J := J ∪ {(r,s)}; t_rs := a_r
    end for all
end if

end initialization routine
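In executable form, the routine can be sketched as follows (an illustrative Python version in our own naming, not the book's; replacing the argmin of rule (7) by the fixed choice r = s = 1 would give the northwest corner method):

```python
# Illustrative sketch of the initialization routine: costs d (p x q),
# row sums a, column sums b, with sum(a) == sum(b).
def initial_basis(d, a, b):
    p, q = len(a), len(b)
    a, b = a[:], b[:]                      # work on copies of the margins
    t = [[0] * q for _ in range(p)]
    J = []                                 # couples (k, l) of the basis
    P, Q = set(range(p)), set(range(q))
    while len(P) > 1 and len(Q) > 1:
        # rule (7): cheapest remaining entry; ties broken arbitrarily
        r, s = min(((k, l) for k in P for l in Q),
                   key=lambda c: d[c[0]][c[1]])
        J.append((r, s))
        if a[r] <= b[s]:                   # case (iv): row r is saturated
            t[r][s] = a[r]; b[s] -= a[r]; P.remove(r)
        else:                              # case (v): column s is saturated
            t[r][s] = b[s]; a[r] -= b[s]; Q.remove(s)
    if len(P) == 1:                        # one row left: fill it
        r = P.pop()
        for s in Q:
            J.append((r, s)); t[r][s] = b[s]
    else:                                  # one column left: fill it
        s = Q.pop()
        for r in P:
            J.append((r, s)); t[r][s] = a[r]
    return t, J                            # |J| == p + q - 1 (Theorem 4)
```

The returned basis always has p+q-1 couples (Theorem 4), even when some of them carry a zero value.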

Theorem 4: The set of indices found at the end of the initialization routine
is a feasible basis.

Proof: Let n be the number of times we go through the "while" loop. At each
step |P| + |Q| is decreased by one unit. At the exit of the loop we have,
say, |P| = 1. Thus we will then have

|P| + |Q| = p + q - n,    |Q| = p + q - n - 1

The loop "for all" that follows will go through |Q| steps.


For each iteration in the "while" loop and each iteration in the "for all"
loop, |J| is increased by one unit. Thus at the end of the application of
the routine, we have

|J| = n + |Q| = p + q - 1

If J is really a basis, the solution we have found (and which is feasible by
construction) is the basic solution associated with J since

t_kℓ = 0 for (k,ℓ) ∉ J   (by construction)

Let K = {2,3,...,p, p+1,...,p+q}. It remains to show that A_K^J
is in fact a basis. We just have to prove that

(*)    A_K^J x = 0  ⟹  x = 0

Consider the first index we introduced in J, say (r,s), and assume that
a_r ≤ b_s (the same argument holds in the other case). Then

(r,ℓ) ∈ J  ⟹  ℓ = s   (by construction)

and thus x_rs = 0 is the unique possibility for the r-th equation of (*).
The argument follows along this line on the successively reduced transportation
problems.

Corollary: Transportation problem (T), as defined in Definition 1, always
has a feasible solution.

Remark 15: To apply the revised simplex algorithm, we need to know the multi-
plier vector π relative to basis J. Thus we have to solve the linear
system

(9)    π A^J = c^J

where π is the (p+q-1)-row vector (u₂, u₃, ..., u_p, v₁, v₂, ..., v_q). From
the structure of matrix A (see Remark 5), (9) can be written

(9')    u_k + v_ℓ = d_kℓ   for all (k,ℓ) ∈ J

A^J being triangular, (9) (or (9')) can be solved very easily, as we see
in the next example.

Example: Entries corresponding to columns in J are those where the value of
t_kℓ ≠ 0 (the other entries correspond to nonbasic columns). Looking at
table T₀⁽¹⁾, we begin with u₁ = 0. We find immediately

v₂ = 3,  v₄ = 7

since

u₁ + v₂ = 3 and u₁ + v₄ = 7 (from (6))

Then, from v₂ = 3, we deduce that u₂ = -3, and from v₄ = 7 that u₃ = 2.
Finally, from u₃ = 2 we get v₃ = 13. And we get the table T₀⁽²⁾ shown
on the next page.

[Table T₀⁽²⁾]

Next, this procedure is presented more formally.

Dual variables computation routine

u₁ := 0; P := P' := {1}; Q := Q' := ∅

while |P| < p or |Q| < q do
    for all k ∈ P' and ℓ ∈ {1,2,...,q} \ Q such that (k,ℓ) ∈ J do
        v_ℓ := d_kℓ - u_k; Q' := Q' ∪ {ℓ}; Q := Q ∪ {ℓ}
    end for all
    P' := ∅
    for all ℓ ∈ Q' and k ∈ {1,2,...,p} \ P such that (k,ℓ) ∈ J do
        u_k := d_kℓ - v_ℓ; P' := P' ∪ {k}; P := P ∪ {k}
    end for all
    Q' := ∅
end while
end routine
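The same computation can be sketched in Python (our own formulation: rather than carrying P, P', Q, Q' explicitly, it propagates over the basic couples until every u_k and v_ℓ is known, which amounts to the same thing on a basis):

```python
# Solve u_k + v_l = d_kl over the basic couples J, with u[0] fixed to 0
# (the book's convention u_1 = 0), by propagation: system (9') is
# triangular, so each pass over J determines at least one new value.
def dual_variables(d, J, p, q):
    u, v = [None] * p, [None] * q
    u[0] = 0
    J = set(J)
    while None in u or None in v:
        progress = False
        for (k, l) in J:
            if u[k] is not None and v[l] is None:
                v[l] = d[k][l] - u[k]; progress = True
            elif v[l] is not None and u[k] is None:
                u[k] = d[k][l] - v[l]; progress = True
        if not progress:      # J does not connect all rows and columns
            raise ValueError("J is not a basis")
    return u, v
```

Applied to a 2×2 basis {(0,0), (0,1), (1,0)} (0-based) with costs [[1,2],[3,4]], it returns u = (0, 2) and v = (1, 2).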

Remark 16: We will not give a formal proof of this routine. P' and Q' are
the sets of indices for which values of u_k and v_ℓ have been computed in the
preceding step. The reader will understand the mechanism of the algorithm by
applying it to the preceding example.

Remark 17: If the constraints

d_kℓ - u_k - v_ℓ ≥ 0

are satisfied for all (k,ℓ) ∉ J, the current solution (basic solution relative
to basis J) is optimal (see Theorem 3). If this is not the case, we choose a
column index, for instance the couple (r,s) given by

(10)    d_rs - u_r - v_s = Min{(k,ℓ)∉J} [d_kℓ - u_k - v_ℓ]

for the column index to enter the basis. We now have to decide which index
will leave the basis. Before giving a general answer, we will go back to our
example.

Example: If we look at table T₀⁽²⁾ and if we apply criterion (10), we find
that (2,3) is the entering variable. Denote by θ the value of t₂₃. What
is the maximum possible value of θ when we adjust the basic variables? We
consider the table

[Table T₀⁽³⁾]

Since we want to leave the nonbasic variables different from t_rs at their
zero level, we see that t₁₁ must stay equal to 1 for any value of θ. Con-
sidering the second row and the third column, the adjustment leads us to

t₂₂ = 2 - θ,   t₃₃ = 3 - θ

This implies, in considering the second column and the third row, that

t₁₂ = 4 + θ,   t₃₄ = 1 + θ

Finally, considering the first row and the fourth column, we have

t₁₄ = 1 - θ

We have described a sort of cycle. The maximum possible value for θ is
θ = 1: in effect, for θ > 1, t₁₄ would go negative. Thus, index (1,4) goes
out of the basis. We thus have the new basis, the new basic solution, and,
by calling the "dual variables computation routine," the new values of the
dual variables. These results are given in the following table:

[Table T₀⁽⁴⁾]    J' = {(1,1), (1,2), (2,2),
                       (2,3), (3,3), (3,4)}

Now (3,1) is a candidate to enter the basis. We see that if t₃₁ = θ, there
will be some adjustments in all the basic variables except t₃₄, which is the
only basic variable in its column that will not be able to change. We write,
in the following table (shown on the next page), the adjustments of basic
variables.

[Table T₀⁽⁵⁾]

The maximum possible value for θ is θ = 1. For θ = 1 we have two basic
variables that reach 0 simultaneously: t₁₁ and t₂₂. This is a case of de-
generacy. We will choose (arbitrarily) one of those indices as the leaving
index: let (2,2) leave the basis. (1,1) is still in the basis, but t₁₁ = 0
for the basic solution. Calling the "dual variables computation routine,"
we get

[Table T₀⁽⁶⁾]    J'' = {(1,1), (1,2), (2,3),
                        (3,1), (3,3), (3,4)}

(Note that 0 in entry (1,1) means that (1,1) is a basic index even if t₁₁ = 0.)

Index (1,3) is now a candidate to enter the basis. The chain of adjustments
on the basic variables is given by the following table:

[Table T₀⁽⁷⁾]

The maximum possible value for θ is θ = 0: (1,3) enters the basis and (1,1)
leaves the basis, but the basic solution does not change. We get

[Table T₀⁽⁸⁾]

This solution is optimal. We are done.

Remark 18: Let us go back to the general case of Remark 17 and let σ be the
index of the variable entering the basis (σ = q(r-1) + s). To find the index
that leaves the basis, we study the variation of the basic solution x_J when
x_σ = θ varies. We have

(11)    A^J x_J = b - θ A^σ

and we look for

θ̄ = the maximum value of θ for which (11) is satisfied and x_J ≥ 0

For θ = θ̄, one of the basic variables (at least) will vanish. We choose this
one (or one of these) to leave the basis.
The next step is a study of the way x_J depends on θ in (11). To make
things clear, call x̄_J the (unique) solution of

A^J x̄_J = b

(the current basic solution). Here, we just give the results of the study of
this dependence (these results are a consequence of the structure of matrix A
or, said in another way, of the network problem which is embedded in the
transportation problem).

(i) J ∪ {σ} can be partitioned into three subsets J', J⁺, J⁻ with

j ∈ J'  ⟹  x_j does not depend on θ

j ∈ J⁺  ⟹  x_j = x̄_j + θ

j ∈ J⁻  ⟹  x_j = x̄_j - θ

(ii) The indices of J' can be found in the following way. First look
for a row or a column of T that contains a single element of J̄ = J ∪ {σ}. The
corresponding variable cannot vary. Delete these indices (which are alone in
their rows or columns) from J̄ and start again. Repeat until no further dele-
tion is possible.

(iii) Let J̃ = J⁺ ∪ J⁻ be the set of indices that remain. For each row or
column of T we have exactly 0 or 2 indices of J̃, one belonging to J⁺, the
other to J⁻. Since σ ∈ J⁺, determination of J⁺ and J⁻ is an easy task.

Remark 19: Once we have determined J', J⁺ and J⁻, we have

θ̄ = Min{j ∈ J⁻} [x̄_j]

and the index of the variable to leave is

τ ∈ J⁻ with x̄_τ = θ̄

We now give more formally the procedure of change of basis.

Change of basis routine

/* (r,s) is the index of the variable to enter the basis */

J' := ∅; J⁺ := {(r,s)}; J⁻ := ∅; P := {1,2,...,p};
Q := {1,2,...,q}; J̄ := J ∪ {(r,s)}

repeat
    EXIST := FALSE
    for all k ∈ P do
        if |{ℓ' | (k,ℓ') ∈ J̄ \ J'}| = 1 then
            ℓ := unique element({ℓ' | (k,ℓ') ∈ J̄ \ J'})
            J' := J' ∪ {(k,ℓ)}; P := P \ {k}; EXIST := TRUE
        end if
    end for all

    for all ℓ ∈ Q do
        if |{k' | (k',ℓ) ∈ J̄ \ J'}| = 1 then
            k := unique element({k' | (k',ℓ) ∈ J̄ \ J'})
            J' := J' ∪ {(k,ℓ)}; Q := Q \ {ℓ}; EXIST := TRUE
        end if
    end for all
until EXIST = FALSE

k := r; j := s
iterate
    ℓ := unique element({ℓ' | (k,ℓ') ∈ J̄ \ J'; ℓ' ≠ j}); j := k
    J⁻ := J⁻ ∪ {(k,ℓ)}
    exit if ℓ = s
    k := unique element({k' | (k',ℓ) ∈ J̄ \ J'; k' ≠ j}); J⁺ := J⁺ ∪ {(k,ℓ)}; j := ℓ
end iterate

Find (k̄,ℓ̄) such that t_k̄ℓ̄ = Min{(k,ℓ) ∈ J⁻} [t_kℓ] = θ̄

t_kℓ := t_kℓ + θ̄   for all (k,ℓ) ∈ J⁺

t_kℓ := t_kℓ - θ̄   for all (k,ℓ) ∈ J⁻

J := J̄ \ {(k̄,ℓ̄)}

end routine
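The pruning of Remark 18(ii), the alternation of Remark 18(iii), and the adjustment of Remark 19 can be sketched in Python (our own naming; cells are plain index pairs, so the 1-based pairs of the tables in this section work directly):

```python
# Sketch of the change of basis: prune couples that are alone in their row
# or column of J ∪ {enter} (Remark 18(ii)), then walk the remaining cycle,
# alternating row and column moves; the entering cell belongs to J+.
def pivot_cycle(J, enter):
    cells = set(J) | {enter}
    changed = True
    while changed:                         # delete "singleton" couples
        changed = False
        for c in list(cells):
            alone_in_row = sum(1 for x in cells if x[0] == c[0]) == 1
            alone_in_col = sum(1 for x in cells if x[1] == c[1]) == 1
            if alone_in_row or alone_in_col:
                cells.remove(c); changed = True
    plus, minus = [enter], []
    cur, along_row = enter, True
    while True:
        k, l = cur
        if along_row:                      # the other cell in row k
            cur = next(x for x in cells if x[0] == k and x != cur)
            minus.append(cur)
        else:                              # the other cell in column l
            cur = next(x for x in cells if x[1] == l and x != cur)
            if cur == enter:
                break
            plus.append(cur)
        along_row = not along_row
    return plus, minus

def apply_pivot(t, plus, minus):
    # Remark 19: theta-bar is the smallest basic value on the J- cells;
    # in the degenerate case several cells tie and any of them may leave.
    theta = min(t[k][l] for (k, l) in minus)
    for (k, l) in plus:
        t[k][l] += theta
    for (k, l) in minus:
        t[k][l] -= theta
    leave = next((k, l) for (k, l) in minus if t[k][l] == 0)
    return theta, leave
```

On the example of this section (basis {(1,1), (1,2), (1,4), (2,2), (3,3), (3,4)}, entering cell (2,3)), the routine reproduces the cycle found by hand: J⁺ = {(2,3), (1,2), (3,4)}, J⁻ = {(2,2), (1,4), (3,3)}, then θ̄ = 1 and (1,4) leaves.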

We let the reader check that this routine performs the change of basis and the
change of basic solution as indicated in Remarks 18 and 19. The solution al-
gorithm for the transportation problem is given next.

Algorithm PRIMAL TRANSPORTATION

call initialization routine
iterate
    call dual variables computation routine
    find (r,s) such that d_rs - u_r - v_s = Min{(k,ℓ)} [d_kℓ - u_k - v_ℓ]
    exit if d_rs - u_r - v_s ≥ 0
    call change of basis routine
end iterate

end algorithm
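As an overall illustration, the three routines can be combined into a single self-contained function (a sketch in our own notation: minimum-cost initialization, dual computation by propagation, pricing by criterion (10), and pivoting along the cycle; no anti-cycling safeguard beyond keeping zero-valued basic cells is included):

```python
# Self-contained sketch of PRIMAL TRANSPORTATION (0-based indices).
def transport(d, a, b):
    p, q = len(a), len(b)
    a, b = a[:], b[:]
    t = [[0] * q for _ in range(p)]
    J, P, Q = set(), set(range(p)), set(range(q))
    while len(P) > 1 and len(Q) > 1:       # initialization routine
        r, s = min(((k, l) for k in P for l in Q), key=lambda c: d[c[0]][c[1]])
        J.add((r, s))
        if a[r] <= b[s]:
            t[r][s] = a[r]; b[s] -= a[r]; P.remove(r)
        else:
            t[r][s] = b[s]; a[r] -= b[s]; Q.remove(s)
    if len(P) == 1:
        r = P.pop()
        for s in Q: J.add((r, s)); t[r][s] = b[s]
    else:
        s = Q.pop()
        for r in P: J.add((r, s)); t[r][s] = a[r]
    while True:
        u, v = [None] * p, [None] * q      # dual variables computation
        u[0] = 0
        while None in u or None in v:
            for (k, l) in J:
                if u[k] is not None and v[l] is None:
                    v[l] = d[k][l] - u[k]
                elif v[l] is not None and u[k] is None:
                    u[k] = d[k][l] - v[l]
        r, s = min(((k, l) for k in range(p) for l in range(q)),
                   key=lambda c: d[c[0]][c[1]] - u[c[0]] - v[c[1]])
        if d[r][s] - u[r] - v[s] >= 0:     # criterion (10): optimal
            return t, sum(d[k][l] * t[k][l] for k in range(p) for l in range(q))
        cells = set(J) | {(r, s)}          # change of basis: prune singletons
        changed = True
        while changed:
            changed = False
            for c in list(cells):
                if (sum(1 for x in cells if x[0] == c[0]) == 1 or
                        sum(1 for x in cells if x[1] == c[1]) == 1):
                    cells.remove(c); changed = True
        plus, minus, cur, along_row = [(r, s)], [], (r, s), True
        while True:                        # walk the cycle
            k, l = cur
            cur = next(x for x in cells
                       if (x[0] == k if along_row else x[1] == l) and x != cur)
            if along_row:
                minus.append(cur)
            elif cur == (r, s):
                break
            else:
                plus.append(cur)
            along_row = not along_row
        theta = min(t[k][l] for (k, l) in minus)
        for (k, l) in plus: t[k][l] += theta
        for (k, l) in minus: t[k][l] -= theta
        J.add((r, s))
        J.remove(next(c for c in minus if t[c[0]][c[1]] == 0))
```

For instance, on costs [[1,2,3],[2,1,9]] with a = (10,10) and b = (5,5,10), the minimum-cost start costs 70 and a single pivot reaches the optimal value 45.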

4. The Assignment Problem

Definition 3: Let us consider the following problem: n persons must be
assigned to n jobs (one job for one person). There is a cost d_kℓ associated
with the assignment of person k to job ℓ (think of a training cost). The
problem consists of assigning each person to a job (and each job to a person)
at a total minimum cost. This problem, known in the literature as the "assign-
ment problem," can be formulated as a (0,1) integer programming problem by
posing

t_kℓ = 1 if the k-th person is assigned to the ℓ-th job, 0 if not

and

(A)    Σ_{k=1}^{n} t_kℓ = 1    ℓ = 1,2,...,n

       Σ_{ℓ=1}^{n} t_kℓ = 1    k = 1,2,...,n

       Σ_{k=1}^{n} Σ_{ℓ=1}^{n} d_kℓ t_kℓ = z (Min)
Section 4. The Assignment Problem 197

Theorem 5: Problem (A) can be solved ignoring the integrality constraints
t_kℓ ∈ {0,1}. More precisely, every basic optimal solution of

(A')   Σ_{k=1}^{n} t_kℓ = 1    ℓ = 1,2,...,n

       Σ_{ℓ=1}^{n} t_kℓ = 1    k = 1,2,...,n

       t_kℓ ≥ 0

       Σ_{k=1}^{n} Σ_{ℓ=1}^{n} d_kℓ t_kℓ = z (Min)

is an optimal solution of (A).

Proof: (A') is a transportation problem. From Remark 10, every basic solu-
tion of (A') is integer valued. Now the constraints

Σ_{k=1}^{n} t_kℓ = 1,   t_kℓ ≥ 0

of (A') imply that t_kℓ ≤ 1. Thus every basic feasible solution of (A') is
a feasible solution of (A). Hence every basic optimal solution of (A') is
an optimal solution of (A) (since the constraints of (A') are included in the
constraints of (A)).

Remark 20: Because of Theorem 5, the tradition is that problem (A') itself
is called an assignment problem.
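On very small instances the 0-1 problem (A) can also be checked directly: an assignment is a permutation of {1,...,n}, so brute-force enumeration gives the optimum to compare against. This is only an illustration (n! growth); the transportation algorithm of Section 3, or the Hungarian method cited in the index, is the practical tool.

```python
# Brute-force check of the assignment problem (A): enumerate all n!
# permutations and keep the cheapest one. Illustrative for small n only.
from itertools import permutations

def solve_assignment(d):
    n = len(d)

    def cost(perm):
        return sum(d[k][perm[k]] for k in range(n))

    best = min(permutations(range(n)), key=cost)
    return [(k, best[k]) for k in range(n)], cost(best)
```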

Remark 21: For any basic solution of the assignment problem, we will have
2n-1 variables in the basis and only n variables equal to 1 (corresponding
to the assignment chosen). The assignment problem will be very degenerate (at
each step, n-1 basic variables equal 0). However, no perturbation method or
lexicographic variant is necessary. But it is of prime importance to contin-
uously keep a basis of appropriate dimension, with basic variables at level 0.

EXERCISES

1. Show that the linear program (T') obtained from (T) of Definition 1 by
changing "Min" into "Max" in the objective function is a transportation
problem.

How is the "PRIMAL TRANSPORTATION" algorithm to be changed to solve (T')?

2. Prove that linear programs (2) and (2') (see Remark 2) are equivalent.

3. We consider transportation problem (T) defined by the table

(a) Show, without doing any calculation, that this transportation problem
has an optimal solution with even components.

(b) Show that the solution is optimal. Is it unique?

(c) a₁ and b₂ are changed simultaneously and their value is increased by
δ. How do the solution and the value of the objective function vary
as functions of δ?

(d) Was it possible to foresee that in some domain the value of the ob-
jective function would decrease when δ increases?
Exercises 199

4. Solve the transportation problem

5. Solve the transportation problem (example of [6])

3 3 6 2 2

6. Consider the transportation problem

(a) Find a solution of this problem for α = 7, β = 8.

(b) What is the optimal solution for α = 3, β = 8?

(c) For α = 7, give the set of values of β for which the solution of
question (a) is optimal.

(d) What is the set of values of (α,β) for which the solution of question
(a) is optimal?

(e) Give all the solutions of this transportation problem for all values
of (α,β).

7. Solve the assignment problem defined by the cost matrix

8. We consider an assignment problem for which the objective function is to be
maximized (think of d_kℓ as returns instead of costs).
Solve the example defined by the return matrix.

[return matrix]

9. Consider the assignment problem with the cost matrix

5 5 2 3
5 3 4
4 0 2 0
1 4 5 3 3
2 5 4 6 6

(a) Give an optimal solution.

(b) Give all optimal solutions.


References

[1] Chvátal, V., Linear Programming, W. H. Freeman & Co., 1982.

[2] Dantzig, G. B., Linear Programming and Extensions, Princeton University
Press, 1963.

[3] Gale, D., The Theory of Linear Economic Models, McGraw-Hill, New York, 1960.

[4] Garfinkel, R. S. and G. L. Nemhauser, Integer Programming, John Wiley, 1972.

[5] Gass, S., Linear Programming: Methods and Applications, McGraw-Hill, New
York, 1958.

[6] Ford, L. R. and D. R. Fulkerson, Flows in Networks, Princeton University
Press, 1962.

[7] Simonnard, M., Linear Programming, Prentice-Hall, Englewood Cliffs, New
Jersey, 1965.

Aide Memoire and Index of Algorithms
AID E MEMOIRE

Given a linear program in standard form

(P)    Ax = b,  x ≥ 0
       cx = z (Max)

where A is an m×n matrix, we call:

Feasible solution: A vector x such that

x ≥ 0,  Ax = b

Optimal solution: A feasible solution which maximizes z.

Basis: A set J of m indices such that A^J is nonsingular.

Basic variables: The variables whose indices are in the basis.

Basic solution corresponding to basis J: The solution obtained with the
non-basic variables equal to 0, i.e.,

x^J = (A^J)⁻¹ b,   x_j = 0 for j ∉ J

The basic solution which corresponds to a given basis is unique.

Feasible basis: A basis such that the corresponding basic solution is feasible,
i.e., a basis J such that

(A^J)⁻¹ b ≥ 0
Multiplier vector relative to a basis J (or simplex multipliers):

π = c^J (A^J)⁻¹

Cost vector relative to a basis J:

c̄ = c - πA

Optimal basis: A basis such that the corresponding basic solution is optimal;
by virtue of Theorem IV.3, a basis is optimal if and only if

c̄ = c - πA ≤ 0

Basic optimal solution: A basic feasible solution which is optimal or, in other
words, a basic solution which corresponds to an optimal basis.

INDEX OF ALGORITHMS

1. Pivot operation: how to perform a pivot operation on the coefficients of a
linear system - p. 49.

2. Simplex algorithm: how to solve a linear program when it is written in
canonical form with respect to a feasible basis - p. 81.

3. Phase I of the simplex method: how to write a linear program in canonical
form with respect to a feasible basis - p. 103.

4. Revised simplex algorithm: another way of implementing the simplex
algorithm - p. 115.

5. Simplex algorithm with bounded variables: how to provide, in a compact way,
for bounds on the values of variables - p. 119.

6. Dual simplex algorithm: how to solve a linear program which is dual
feasible but primal infeasible - p. 159.

7. Transportation algorithm: how to solve the transportation problem - pp. 186,
189, 195 and 196.
Index

adjacent vertices 135
alternatives (theorem of) 105
artificial variables 96
assignment problem 196
augmented matrix 49
auxiliary problem 96
basic solution 59
basic variables 39, 58
basis 58
blocking row 77
bounded variables 117
canonical form 9, 62
cardinality (of a set) 5
closed set 124
column vector 5
comparison of matrices 7
complementary slackness 144, 153
concave function 169
constraints 4
convex set 124
convex function 127
convex polyhedron 127
convex program 128
cost vector (relative to a basis) 62
cycling 85
Dantzig 1, 21
degeneracy 59
dual linear programs and rules 21, 26
duality theory 21, 142
dual simplex algorithm 157
dual variables 25
economic interpretation of duality 29, 147
edge 134
elementary row operation 45
equivalent linear programs 12
equivalent linear systems 36
exploratory variant 82
extreme point 134
face 139
Farkas' lemma 104
feasible basis 65
feasible solution 4, 10
finiteness of simplex algorithm 84
free variable 38
full rank 37
Gauss-Jordan elimination method 52
global minimum 128
Gordan theorem 107
halfspace 126
hyperplane 126
Hungarian method 176
inconsistent system 37
inverse of a matrix 41
Lagrangian theorem 155
lexicographic ordering 93
lexicographic variant (of simplex) 93
local minimum 128
marginal price 151
matrix 5
  comparison of matrices 7
  matrix inverse 41
  matrix multiplication 5
  matrix transposition 7
  regular, singular matrix 41
  unit matrix 7
minimum (local and global) 128
mixed form of a linear program 12
Motzkin's theorem 107
multiplier vector 62
notations 5, 6, 7
non-basic variables 38, 39
objective function 4, 9
optimal basis 66
optimal solution 10


parametric linear programming 161
perturbation method 86
Phase I (of simplex) 98
Phase II (of simplex) 99
pivot matrix 51
pivot operation 49
primal 22
price 151
positive polynomial 86
postoptimality analysis 156
postoptimization 161
product form of the inverse 114
production planning
ray 138
redundancy 36
redundant equation, system 36
regular matrix 41
revised simplex algorithm 112
row operation 45
row vector 5
scalar product 5
sensitivity analysis 156
simplex algorithm 81
simplex method 103
singular matrix 41
slack constraint 144
slack variable 13
solution of a linear system 39
standard form 12
Stiemke theorem 107
system of linear equations 35
tight constraint 144
transposition of a matrix 7
transportation problem 3, 173
triangular matrix 177
unit matrix 7
vector 5
vertex 134
Ville's theorem 107
von Neumann 21

You might also like