
LINEAR PROGRAMMING

ANTONIO CONEJO

JUNE 1997
Linear Programming

CONTENTS

1. REFERENCES

2. THEORY

• THE PROBLEM
• GEOMETRICAL VIEWPOINT (CANONICAL FORM)
• ALGEBRAIC VIEWPOINT (STANDARD FORM)
• THE BASIS
• BASIC FEASIBLE SOLUTIONS
• GEOMETRY
• LP FUNDAMENTAL THEOREM
• MATRIX FORM
• THE SIMPLEX MECHANISM
• THE SIMPLEX ALGORITHM
• INITIAL BASIC FEASIBLE SOLUTION
• DEGENERATE SOLUTIONS AND CYCLING
• SENSITIVITIES AND LAGRANGE MULTIPLIERS

3. ADVANCED TOPICS

• THE DUAL PROBLEM
• WEAK AND STRONG DUALITY THEOREMS
• COMPLEMENTARY SLACKNESS THEOREMS
• LP IN NETWORKS
• PARAMETRIC LP
• INTERIOR POINT ALGORITHMS

4. MPS FILE

5. AVAILABLE SOFTWARE


REFERENCES

1. D.G. LUENBERGER. "LINEAR AND NONLINEAR PROGRAMMING". SECOND EDITION. ADDISON-WESLEY PUBLISHING COMPANY. READING, MASSACHUSETTS, 1984. (excellent)

2. V. CHVÁTAL. "LINEAR PROGRAMMING". W.H. FREEMAN AND COMPANY. NEW YORK, 1983. (advanced topics, theoretical emphasis)

3. M.S. BAZARAA, J.J. JARVIS, H.D. SHERALI. "LINEAR PROGRAMMING AND NETWORK FLOWS". SECOND EDITION. JOHN WILEY AND SONS. NEW YORK, 1990. (advanced topics, computational emphasis)

4. H. KARLOFF. "LINEAR PROGRAMMING". BIRKHÄUSER. BOSTON, 1991. (terse, computer science style)

5. C.H. PAPADIMITRIOU, K. STEIGLITZ. "COMBINATORIAL OPTIMIZATION: ALGORITHMS AND COMPLEXITY". PRENTICE-HALL, INC. ENGLEWOOD CLIFFS, NEW JERSEY, 1982. (clever ideas)

6. WOOD, WOLLENBERG (the power perspective)

7. NAGRATH, KOTHARI (introductory stuff)


THEORY


WHY STUDY LP?

• MANY ELECTRIC POWER MODELS USE IT

• THE SOLUTION OF A WELL-FORMULATED LP PROBLEM CAN ALWAYS BE FOUND (ROBUSTNESS)

• IT IS POSSIBLE TO SOLVE HUGE LP PROBLEMS (e.g., WITH 25,000 × 50,000 CONSTRAINT MATRICES) IN A REASONABLE AMOUNT OF TIME (MINUTES) USING A MEDIUM-RANGE WORKSTATION

• STUDYING LP IS THE NATURAL WAY TO GET INTRODUCED TO OPTIMIZATION TECHNIQUES


EXAMPLE/MOTIVATION (1/3)

• WE WANT TO MAXIMIZE THE ENERGY PRODUCTION BENEFITS OF RUNNING 2 GENERATORS

• BENEFIT OF PRODUCING WITH GENERATOR 1, c1 = 3 [u.m./MWh]

• BENEFIT OF PRODUCING WITH GENERATOR 2, c2 = 5 [u.m./MWh]

• MAXIMUM OUTPUT POWER OF GENERATOR 1, b1 = 4 MW

• MAXIMUM OUTPUT POWER OF GENERATOR 2, b2 = 6 MW

• BOTH GENERATORS SHARE THE SAME COOLING SYSTEM, SO THEIR JOINT PRODUCTIONS SHOULD SATISFY THE RELATION BELOW

3x1 + 2x2 ≤ 18 MW


EXAMPLE/MOTIVATION (2/3)

TO MAXIMIZE BENEFITS (ONE-HOUR TIME PERIOD) WE HAVE TO SOLVE THE FOLLOWING PROBLEM

MAXIMIZE 3x1 + 5x2
SUBJECT TO x1 ≤ 4
x2 ≤ 6
3x1 + 2x2 ≤ 18
x1 ≥ 0
x2 ≥ 0

Something similar to this formulation will be called canonical form
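A quick way to see what the solution will turn out to be: evaluate the benefit 3x1 + 5x2 at the corner points of the feasible region (they are listed on the next slide); a short Python sketch:

```python
# Corner points of the feasible region of the maximizing example
# (x1 <= 4, x2 <= 6, 3*x1 + 2*x2 <= 18, x1 >= 0, x2 >= 0).
corners = [(0, 0), (4, 0), (4, 3), (2, 6), (0, 6)]

# All listed corners satisfy the constraints.
feasible = all(x1 <= 4 and x2 <= 6 and 3 * x1 + 2 * x2 <= 18
               and x1 >= 0 and x2 >= 0 for x1, x2 in corners)

# Benefit 3*x1 + 5*x2 at each corner; the best corner is the optimum.
profits = {p: 3 * p[0] + 5 * p[1] for p in corners}
best = max(profits, key=profits.get)
# best == (2, 6), profits[best] == 36 [u.m.]
```

The optimum lands on a corner; the rest of the notes develop why this is always the case.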


EXAMPLE/MOTIVATION (3/3)

CANONICAL FORM

[Figure: feasible region in the (x1, x2) plane, bounded by x1 ≥ 0, x2 ≥ 0, x1 ≤ 4, x2 ≤ 6 and 3x1 + 2x2 ≤ 18, with corner points (0,0), (4,0), (4,3), (2,6) and (0,6)]


CANONICAL FORM (1/3)


GEOMETRIC INTERPRETATION

MINIMIZE z = c^T x          (c: cost vector)
SUBJECT TO Ax ≥ b           (A: constraint matrix; b: right-hand side vector)
x ≥ 0
x ∈ R^n

x n-DIMENSIONAL UNKNOWN VECTOR

c n-DIMENSIONAL COST VECTOR

A m × n CONSTRAINT COEFFICIENT MATRIX

b m-DIMENSIONAL RIGHT-HAND SIDE VECTOR


CANONICAL FORM (2/3)


EXAMPLE (MAXIMIZING)

THE CONTOURS OF A LINEAR FUNCTION ARE STRAIGHT LINES

[Figure: contours of the objective function over the feasible region; the objective increases toward the corner (2,6), which is the optimal solution]

THE CANONICAL FORM MAKES AN INTUITIVE GEOMETRICAL INTERPRETATION POSSIBLE (IN R^2)


CANONICAL FORM (3/3)


EXAMPLE (MINIMIZING)

MINIMIZAR z = 3 x1 5 x 2
SUJETO A x1  4
x2  6
3 x1 2 x 2  18
x1  0
x2  0

Canonical
form


STANDARD FORM

MINIMIZE z = c^T x
SUBJECT TO Ax = b
x ≥ 0
x ∈ R^n

Standard form

x n-DIMENSIONAL UNKNOWN VECTOR

c n-DIMENSIONAL COST VECTOR

A m × n CONSTRAINT COEFFICIENT MATRIX

b m-DIMENSIONAL RIGHT-HAND SIDE VECTOR (b ≥ 0)

m < n FOR THE PROBLEM TO BE CONSISTENT


CONVERSION TO STANDARD FORM (1/2)

1. a) Σj aij xj ≤ bi
   b) Σj aij xj + yi = bi ; yi ≥ 0

2. a) Σj aij xj ≥ bi
   b) Σj aij xj - yi = bi ; yi ≥ 0

3. a) -∞ ≤ xi ≤ ∞
   b) xi = yi - zi ; yi ≥ 0 , zi ≥ 0

4. a) MAXIMIZE z
   b) MINIMIZE -z

THE STANDARD FORM MAKES AN EASY ALGEBRAIC MANIPULATION POSSIBLE. THIS IS THE REASON TO CONVERT ANY FORM TO STANDARD FORM.


CONVERSION TO STANDARD FORM (2/2)


EXAMPLE

PREVIOUS EXAMPLE IN STANDARD FORM

Standard
form

MINIMIZE z = -3x1 - 5x2 + 0x3 + 0x4 + 0x5
SUBJECT TO x1 + x3 = 4
x2 + x4 = 6
3x1 + 2x2 + x5 = 18
x1, x2, x3, x4, x5 ≥ 0

x3 , x4 , x5
are slack variables
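A quick numerical check of this conversion (a Python sketch with numpy; the matrix below collects the coefficients of x1, ..., x5 from the constraints above):

```python
import numpy as np

# Standard-form data: columns correspond to (x1, x2, x3, x4, x5).
A = np.array([[1., 0., 1., 0., 0.],
              [0., 1., 0., 1., 0.],
              [3., 2., 0., 0., 1.]])
b = np.array([4., 6., 18.])
c = np.array([-3., -5., 0., 0., 0.])

# The corner (x1, x2) = (2, 6) of the canonical problem corresponds to
# slacks x3 = 4 - 2 = 2, x4 = 6 - 6 = 0, x5 = 18 - 3*2 - 2*6 = 0.
x = np.array([2., 6., 2., 0., 0.])
assert np.allclose(A @ x, b) and (x >= 0).all()   # feasible in standard form
assert c @ x == -36.0                             # same objective value
```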


ALGEBRAIC VIEWPOINT

WE USE STANDARD FORM!!!

LET US CONSIDER THE PARTITION

A = [B | N]

x = [xB ; xN]

c = [cB ; cN]

B m-DIMENSIONAL SQUARE SUBMATRIX (OF MATRIX A)


WHICH IS CALLED THE BASIS

xB m-DIMENSIONAL VECTOR OF BASIC VARIABLES. WE


SAY THAT THE BASIC VARIABLES ARE IN THE BASIS

xN VECTOR OF NONBASIC VARIABLES. WE SAY THAT THE


NONBASIC VARIABLES ARE NOT IN THE BASIS

cB VECTOR OF COST COEFFICIENTS CORRESPONDING TO


THE BASIC VARIABLES

cN VECTOR OF COST COEFFICIENTS CORRESPONDING TO


THE NONBASIC VARIABLES


SOLUTIONS

{x : Ax = b}                          SOLUTION

{x : Ax = b , x ≥ 0}                  FEASIBLE SOLUTION

{x : Ax = b , xB ≥ 0 , xN = 0}        BASIC FEASIBLE SOLUTION

BASIC FEASIBLE SOLUTIONS CAN BE OBTAINED BY SOLVING

xB = B^-1 b

AND MAKING

xN = 0
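For instance, taking the basis formed by the columns of x1, x2 and x3 of the example on the next slides, a minimal numpy sketch:

```python
import numpy as np

A = np.array([[1., 0., 1., 0., 0.],
              [0., 1., 0., 1., 0.],
              [3., 2., 0., 0., 1.]])
b = np.array([4., 6., 18.])

basis = [0, 1, 2]              # columns of x1, x2, x3
B = A[:, basis]                # the basis matrix
x_B = np.linalg.solve(B, b)    # x_B = B^-1 b, with nonbasic x4 = x5 = 0
# x_B -> [2., 6., 2.]: a basic feasible solution (all components >= 0)
```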


EXAMPLE (1/3)

 1 0 1 0 0 4
   
A =  0 1 0 1 0 b = 6 
 3 2 0 0 1 18

WE ARE GONNA GENERATE ALL BASIC FEASIBLE


SOLUTIONS. WE WILL SEE THAT THEY ARE IMPORTANT

1
 x1   1 0 1  4   2
 x    0 1 0  6    6  z  36 ( ) x4  0
 2       x5  0
 x 3   3 2 0 18  2

1
 x 1   1 0 0  4   4
 x    0 1 1  6    3  z  27 x3  0
 2       x5  0
 x 4   3 2 0 18  3

1
 x 1   1 0 0  4   4 
 x    0 1 0  6    6   OUT x3  0
 2       x4  0
 x 5   3 2 1 18  6

1
 x 1   1 1 0 4 6
 x    0 0 1  6    2  OUT x2  0
 3       x5  0
 x 4   3 0 0 18  6 

1
 x 1   1 1 0  4    
 x    0 0 0  6     OUT x2  0
 3       x4  0
 x 5   3 0 1 18 


EXAMPLE (2/3)

1
 x 1   1 0 0  4   4 
 x    0 1 0  6    6  z  12 x2  0
 4       x3  0
 x 5   3 0 1 18  6

1
 x 2   0 1 0 4 9
 x    1 0 1  6    4   NO x1  0
 3       x5  0
 x 4   2 0 0 18  3

1
 x 2   0 1 0  4   6 
 x    1 0 0  6    4   z  30 x1  0
 3       x4  0
 x 5   2 0 1 18  6

1
 x 2   0 0 0  4    
 x    1 1 0  6     OUT x1  0
 4       x3  0
 x 5   2 0 1 18 

1
 x 3   1 0 0  4   4 
 x    0 1 0  6    6   z  0 x1  0
 4       x2  0
 x 5   0 0 1 18 18


EXAMPLE (3/3)

[Figure: the five basic feasible solutions are the corner points (0,0), (4,0), (4,3), (2,6) and (0,6) of the feasible region]


SOLUTION

IF AN LP HAS AN OPTIMAL SOLUTION, THIS SOLUTION IS A BASIC FEASIBLE SOLUTION (LP FUNDAMENTAL THEOREM)

• LOOK FOR ALL BASIC FEASIBLE SOLUTIONS

• EVALUATE THE OBJECTIVE FUNCTION FOR EVERY BASIC FEASIBLE SOLUTION

• CHOOSE THE BASIC FEASIBLE SOLUTION WITH THE LOWEST OBJECTIVE FUNCTION VALUE

IS THIS A GOOD IDEA, OR NOT?

CAN WE FIND AN UPPER BOUND ON THE NUMBER OF BASIC FEASIBLE SOLUTIONS?

C(n, m) = n! / (m! (n - m)!)


SOME GEOMETRICAL CONCEPTS (1/3)

• HYPERPLANE (a straight line in the plane)

H = {x : a^T x = b} ; a, x ∈ R^n , b ∈ R

• CLOSED HALF-SPACES (each a half-plane)

H+ = {x : a^T x ≥ b}
H- = {x : a^T x ≤ b}

• CONVEX POLYTOPE: INTERSECTION OF A FINITE NUMBER OF CLOSED HALF-SPACES

a1^T x ≥ b1
a2^T x ≥ b2          (feasible region of an LP problem in canonical form)
...
am^T x ≥ bm

• CONVEX POLYTOPE (feasible region of an LP problem in standard form)

Ay = b
y ≥ 0

• POLYHEDRON: NONEMPTY AND BOUNDED CONVEX POLYTOPE


SOME GEOMETRICAL CONCEPTS (2/3)

Polyhedra are convex sets

• CONVEX SETS

THE SET C ⊂ R^n IS CONVEX IF FOR ALL x1, x2 ∈ C AND EVERY REAL NUMBER α, 0 ≤ α ≤ 1, THE POINT αx1 + (1 - α)x2 ∈ C

The corners of a polyhedron are its extreme points

• EXTREME POINTS

IF C IS A CONVEX SET, x ∈ C IS AN EXTREME POINT OF C IF THERE DO NOT EXIST TWO DIFFERENT POINTS x1, x2 ∈ C SUCH THAT x = αx1 + (1 - α)x2 FOR SOME α, 0 < α < 1.


SOME GEOMETRICAL CONCEPTS (3/3)

IF K IS A CONVEX POLYTOPE DEFINED BY THE FEASIBLE REGION OF AN LP PROBLEM, THAT IS

Ax = b
x ≥ 0          (1)

x IS AN EXTREME POINT OF K IF AND ONLY IF x IS A BASIC FEASIBLE SOLUTION OF (1)

[Figure: each extreme point of the feasible region (canonical form) corresponds to a basic feasible solution (standard form); corner points (0,0), (4,0), (4,3), (2,6), (0,6)]


LP FUNDAMENTAL THEOREM

IF AN LP PROBLEM HAS AN OPTIMAL SOLUTION, THAT SOLUTION IS A BASIC FEASIBLE SOLUTION; THAT IS, AN EXTREME POINT OF THE CONVEX POLYTOPE DEFINED BY THE FEASIBLE REGION OF THE CONSIDERED LP PROBLEM


SIMPLEX ALGORITHM
REFORMULATION OF THE STANDARD FORM (1/2)

MINIMIZE z = c^T x
SUBJECT TO Ax = b
x ≥ 0

MINIMIZE z = cB^T xB + cN^T xN
SUBJECT TO B xB + N xN = b
xB ≥ 0
xN ≥ 0

SOLVING FOR THE BASIC VARIABLES:

xB = B^-1 b - B^-1 N xN
z = cB^T B^-1 b - (cB^T B^-1 N - cN^T) xN

DEFINING

b~ = B^-1 b
π^T = cB^T B^-1          ("dual" variable vector)
Y = B^-1 N

We are gonna move from corner to corner cleverly: that is, always reducing the value of the objective function until no further reduction is possible, that is, until we reach the minimizer (optimum)


SIMPLEX ALGORITHM
REFORMULATION OF THE STANDARD FORM (2/2)

MINIMIZE z = cB^T b~ - (π^T N - cN^T) xN
SUBJECT TO xB = b~ - Y xN
xB ≥ 0
xN ≥ 0

DEFINING THE REDUCED COST VECTOR

d^T = π^T N - cN^T

THE PROBLEM BECOMES

MINIMIZE z = cB^T b~ - d^T xN     (objective function as a function of the nonbasic variables)
SUBJECT TO xB = b~ - Y xN         (basic variables as a function of the nonbasic variables)
xB ≥ 0
xN ≥ 0

Moving from corner to corner is like going from Barcelona to Huelva along the coast.
Is it a good idea?


PIVOTING

 0 
  
        
        0 
   ~      xNj  j
i  xBi   bi    Yij   
         0 
     
xB ~
b Y  
 0 

xN


SIMPLEX MECHANISM

• IF dj IS POSITIVE, THE OBJECTIVE FUNCTION DECREASES IF THE NONBASIC VARIABLE xNj INCREASES

• IF THE NONBASIC VARIABLE xNj INCREASES, THE BASIC VARIABLE xBi DECREASES IF Yij IS POSITIVE

• IF SOME Yij's ARE POSITIVE, THE CORRESPONDING BASIC VARIABLES xBi DECREASE IF THE NONBASIC VARIABLE xNj INCREASES

• THE NONBASIC VARIABLE xNj CAN BE INCREASED UNTIL THE FIRST BASIC VARIABLE BECOMES ZERO, THAT IS, UNTIL b~i - Yij xNj BECOMES 0 FOR THE FIRST i

• THE VALUE OF xNj BECOMES

minimum over 1 ≤ i ≤ m of { b~i / Yij : Yij > 0 }

• THE VALUE OF xBi BECOMES 0


SIMPLEX ALGORITHM

1. GET A BASIC FEASIBLE SOLUTION (we'll see how)

2. FIND OUT WHETHER THE CURRENT SOLUTION IS THE MINIMIZER: IF ALL dj ≤ 0, STOP; THE CURRENT SOLUTION IS THE MINIMIZER. OTHERWISE GO ON

3. FIND OUT WHICH NONBASIC VARIABLE WILL ENTER THE BASIS, xNj (FOR INSTANCE, THE ONE WITH THE MOST POSITIVE dj)

4. FIND OUT WHICH BASIC VARIABLE WILL LEAVE THE BASIS, xBi

5. BUILD THE NEW BASIS

6. GET A NEW BASIC FEASIBLE SOLUTION

7. GO TO 2
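These steps can be condensed into a minimal revised simplex in Python (numpy). This is an illustrative sketch under the most-positive-dj entering rule, not robust production code:

```python
import numpy as np

def simplex(c, A, b, basis):
    """Minimal revised simplex for: minimize c @ x  s.t.  A @ x = b, x >= 0.
    `basis` holds m starting basic column indices; returns (x, z)."""
    m, n = A.shape
    basis = list(basis)
    while True:
        B = A[:, basis]
        x_B = np.linalg.solve(B, b)                     # step 1: x_B = B^-1 b
        pi = np.linalg.solve(B.T, c[basis])             # step 2: pi^T = c_B^T B^-1
        nonbasic = [j for j in range(n) if j not in basis]
        d = {j: pi @ A[:, j] - c[j] for j in nonbasic}  # reduced costs d_j
        s = max(d, key=d.get)                           # most positive d_j enters
        if d[s] <= 1e-9:                                # all d_j <= 0: optimal
            x = np.zeros(n)
            x[basis] = x_B
            return x, c @ x
        y = np.linalg.solve(B, A[:, s])                 # step 3: Y_s = B^-1 a_s
        if (y <= 1e-9).all():
            raise ValueError("unbounded problem")
        # step 4: ratio test picks the leaving variable, then pivot
        ratio, r = min((x_B[i] / y[i], i) for i in range(m) if y[i] > 1e-9)
        basis[r] = s

# Running example in standard form; start from the slack basis (x3, x4, x5).
c = np.array([-3., -5., 0., 0., 0.])
A = np.array([[1., 0., 1., 0., 0.],
              [0., 1., 0., 1., 0.],
              [3., 2., 0., 0., 1.]])
b = np.array([4., 6., 18.])
x, z = simplex(c, A, b, basis=[2, 3, 4])
# x -> [2., 6., 2., 0., 0.],  z -> -36.0
```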


THE ALGORITHM (1/2)

STEP 0

SET ν = 0

GET AN INITIAL BASIC FEASIBLE SOLUTION:

A = [B(ν) | N(ν)] , x(ν) = [xB(ν) ; 0] , c(ν) = [cB(ν) ; cN(ν)]

LET Ω(B) = {i : xi(ν) basic} AND Ω(N) = {j : xj(ν) nonbasic}

STEP 1

COMPUTE xB(ν) = B(ν)^-1 b = b~(ν)

STEP 2

COMPUTE π(ν)T = cB(ν)T B(ν)^-1 AND d(ν)T = π(ν)T N(ν) - cN(ν)T

LET Ω(C) = {j : dj(ν) > 0}

IF Ω(C) = ∅, STOP; x(ν) IS THE MINIMIZER

OTHERWISE, SELECT A NONBASIC VARIABLE xNs(ν) TO ENTER THE BASIS SO THAT

ds(ν) = maximum { dj(ν) : j ∈ Ω(C) }


THE ALGORITHM (2/2)

STEP 3

COMPUTE Ys(ν) = B(ν)^-1 Ns(ν)

IF VECTOR Ys(ν) ≤ 0, STOP; THE OPTIMAL SOLUTION IS UNBOUNDED

OTHERWISE, SELECT A BASIC VARIABLE xBr(ν) TO LEAVE THE BASIS SO THAT

b~r(ν) / Yrs(ν) = minimum over 1 ≤ i ≤ m of { b~i(ν) / Yis(ν) : Yis(ν) > 0 }

STEP 4

SET Ω(B) ← Ω(B) \ {r} ∪ {s} AND Ω(N) ← Ω(N) \ {s} ∪ {r}

SET ν ← ν + 1

CALCULATE B(ν), N(ν), cB(ν) AND cN(ν)

GO TO STEP 1

NOTE THAT Ws IS COLUMN s OF MATRIX W


EXAMPLE

[Figure: the simplex path moves along the corners of the feasible region: initial basic feasible solution at (0,0) with z = 0, first jump to (4,0) with z = -12, second jump to (4,3) with z = -27, last jump to the minimizer (2,6) with z = -36]

This is like going from Barcelona to Huelva along the coast!!!


HOW TO GET AN INITIAL BASIC FEASIBLE SOLUTION (1/2)

MINIMIZE c^T x          (standard form)
SUBJECT TO Ax = b
x ≥ 0
x ∈ R^n

AN INITIAL BASIC FEASIBLE SOLUTION CAN BE OBTAINED BY SOLVING THE FOLLOWING LP PROBLEM

MINIMIZE Σi yi , i = 1, ..., m
SUBJECT TO Ax + y = b
x ≥ 0
y ≥ 0
x ∈ R^n
y ∈ R^m

Solving this problem is called phase one of the simplex method

WHERE

y = (y1, ..., ym)^T IS A COLUMN VECTOR OF ARTIFICIAL VARIABLES

IT SHOULD BE NOTED THAT THIS SECOND PROBLEM HAS A TRIVIAL BASIC FEASIBLE SOLUTION, y = b
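A minimal numpy sketch of this trivial starting point (assembling the phase-one constraint matrix as [A | I] for the running example; the assembly details are an assumption):

```python
import numpy as np

A = np.array([[1., 0., 1., 0., 0.],
              [0., 1., 0., 1., 0.],
              [3., 2., 0., 0., 1.]])
b = np.array([4., 6., 18.])
m, n = A.shape

A_aug = np.hstack([A, np.eye(m)])        # phase-one matrix [A | I]
x0 = np.concatenate([np.zeros(n), b])    # trivial BFS: x = 0, y = b
assert np.allclose(A_aug @ x0, b)        # feasible by construction
phase1_cost = x0[n:].sum()               # initial phase-one objective: 28
```

Phase one then drives Σi yi down to zero; the original problem is feasible exactly when the phase-one optimum is zero.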


HOW TO GET AN INITIAL BASIC FEASIBLE SOLUTION (2/2)
PENALTY METHOD

MINIMIZE c^T x
SUBJECT TO Ax = b
x ≥ 0
x ∈ R^n

IS EQUIVALENT TO (they have the same solution for M big enough)

MINIMIZE c^T x + M^T y
SUBJECT TO Ax + y = b
x ≥ 0
y ≥ 0
x ∈ R^n
y ∈ R^m

WHICH HAS AN INITIAL BASIC FEASIBLE SOLUTION y = b

M IS AN m-DIMENSIONAL COLUMN VECTOR OF LARGE POSITIVE CONSTANTS


DEGENERATE SOLUTIONS AND CYCLING

NONDEGENERATE SOLUTION

xNi = 0   ∀i ∉ BASIS
xBj > 0   ∀j ∈ BASIS

DEGENERATE SOLUTION (some variables of the basis are zero, by chance)

xNi = 0   ∀i ∉ BASIS
xBj = 0   SOME j ∈ BASIS
xBj > 0   OTHER j ∈ BASIS

A DEGENERATE SOLUTION MAY INDUCE THE SIMPLEX ALGORITHM TO CYCLE

APPROPRIATE PROCEDURES ARE AVAILABLE TO AVOID CYCLING (FOR INSTANCE, BLAND'S RULE)


SENSITIVITY (1/2)
IF B* IS THE OPTIMAL BASIS

xB* = B*^-1 b          (1)

z* = cB^T xB*          (2)

THE "DUAL VARIABLES" CAN BE COMPUTED AS

π*T = cB^T B*^-1       (3)

LET Δb BE AN INCREMENT IN b SO THAT THE BASIS DOES NOT CHANGE

b' = b + Δb

THEN:

xB' = xB* + ΔxB
z' = z* + Δz

FROM (1) AND (2)

ΔxB = B*^-1 Δb

Δz = cB^T ΔxB

USING THESE TWO LAST EQUATIONS AND EQUATION (3) WE GET

Δz = cB^T ΔxB = cB^T B*^-1 Δb = π*T Δb

AND FINALLY

Δz = π*T Δb


SENSITIVITY (2/2)

Δz = π*T Δb

THAT IS

∂z/∂bj = πj     j = 1, 2, ..., m

πj IS THE MARGINAL CHANGE IN THE OBJECTIVE FUNCTION AS A RESULT OF A MARGINAL CHANGE IN THE RIGHT-HAND SIDE COEFFICIENT OF CONSTRAINT j (AS LONG AS THE BASIS REMAINS UNCHANGED)
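This sensitivity result can be checked numerically on the running example, whose optimal basis collects the columns of x1, x2 and x3 (a Python sketch):

```python
import numpy as np

# Optimal basis of the running example: columns of x1, x2, x3.
B = np.array([[1., 0., 1.],
              [0., 1., 0.],
              [3., 2., 0.]])
c_B = np.array([-3., -5., 0.])
b = np.array([4., 6., 18.])

pi = np.linalg.solve(B.T, c_B)     # pi^T = c_B^T B^-1  (dual variables)
# pi -> [0., -3., -1.]

db = np.array([0., 0., 1.])        # marginal increase of the third RHS
dz_pred = pi @ db                  # predicted change: pi^T db = -1

# Re-solving with the perturbed RHS gives the same change in z.
z_old = c_B @ np.linalg.solve(B, b)
z_new = c_B @ np.linalg.solve(B, b + db)
assert np.isclose(z_new - z_old, dz_pred)
```

The components of π match the DUAL PRICES that LINDO reports for the same problem later in these notes.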


ADVANCED TOPICS


LP IN NETWORKS

MINIMIZE c^T x

SUBJECT TO Âx = b

x ≥ 0

x ∈ R^n

Â: NETWORK STRUCTURE

NOTE:

• A SPECIALIZED SIMPLEX CAN BE USED

• THE SPECIALIZED SIMPLEX IS ABOUT 100 TIMES FASTER THAN THE STANDARD SIMPLEX

Unfortunately, the network structure is not usually strict


PARAMETRIC LP

MINIMIZE c^T x

SUBJECT TO Ax = b + θ Δb

x ≥ 0

x ∈ R^n

WHERE θ IS A SCALAR PARAMETER

NOTE:

• USING APPROPRIATE RULES, THE MINIMIZER x*(θ) CAN BE OBTAINED WITHOUT SOLVING ADDITIONAL LP PROBLEMS

• THE COST COEFFICIENT VECTOR "c" AND THE CONSTRAINT COEFFICIENT MATRIX "A" CAN ALSO BE PARAMETERIZED


INTERIOR POINT ALGORITHMS

• A NONLINEAR PROGRAMMING PHILOSOPHY IS USED

• THESE ALGORITHMS SEEM TO BE COMPETITIVE FOR LARGE-SCALE PROBLEMS

• POLYNOMIAL ALGORITHM (¡¡¡!!!)

• KARMARKAR, 1984

• FURTHER DETAILS: HILLIER ET AL., CHVÁTAL

1. F.S. HILLIER, G.J. LIEBERMAN. "INTRODUCTION TO OPERATIONS RESEARCH". FOURTH EDITION. MCGRAW-HILL PUBLISHING COMPANY. NEW YORK, 1986

2. V. CHVÁTAL. "LINEAR PROGRAMMING". W.H. FREEMAN AND COMPANY. NEW YORK, 1983


DUAL PROBLEMS: MOTIVATION (1/3)


PRIMAL PROBLEM

• WE CAN BUY n NATURAL OILS AT COSTS c1, c2, ..., cn

• EVERY NATURAL OIL CONTAINS m LUBRICANT ELEMENTS. THE COEFFICIENT aij REPRESENTS THE AMOUNT OF LUBRICANT i IN ONE UNIT OF NATURAL OIL j

• WE WANT TO PRODUCE A MIXTURE OF NATURAL OILS SO THAT IT CONTAINS AT LEAST THE AMOUNT OF EVERY LUBRICANT GIVEN BY b1, b2, ..., bm

• LET x1, x2, ..., xn BE THE QUANTITIES OF NATURAL OIL TO BE USED IN THE MIXTURE SO THAT ITS COST IS MINIMUM. THESE QUANTITIES CAN BE OBTAINED BY SOLVING THE LP PROBLEM

MINIMIZE Σj cj xj                               (PRIMAL PROBLEM)
SUBJECT TO Σj aij xj ≥ bi , i = 1, 2, ..., m
xj ≥ 0 , j = 1, 2, ..., n


DUAL PROBLEMS: MOTIVATION (2/3)


DUAL PROBLEM

• PEPE PÉREZ MANUFACTURES THE m LUBRICANTS AND SELLS THEM AT PRICES GIVEN BY λ1, λ2, ..., λm

• WE ALREADY KNOW THAT THE AMOUNTS OF LUBRICANTS REQUIRED TO MANUFACTURE THE MIXTURE ARE b1, b2, ..., bm; AND THAT THE COEFFICIENT aij REPRESENTS THE AMOUNT OF LUBRICANT i IN ONE UNIT OF NATURAL OIL j

• WE ALSO KNOW THAT THE n NATURAL OILS ARE SOLD IN THE MARKETS AT PRICES GIVEN BY c1, c2, ..., cn


DUAL PROBLEMS: MOTIVATION (3/3)


DUAL PROBLEM

• FOR PEPE'S BUSINESS TO HAVE A FUTURE, THE COST OF MANUFACTURING A GIVEN NATURAL OIL USING SYNTHETIC LUBRICANTS SHOULD BE AT MOST THE MARKET PRICE OF THE CONSIDERED NATURAL OIL, THAT IS

Σi λi aij ≤ cj ; j = 1, 2, ..., n

• IF PEPE WANTS TO MAXIMIZE HIS SELLING BENEFITS, HE SHOULD SELL HIS LUBRICANTS AT PRICES λ1, λ2, ..., λm, WHICH CAN BE OBTAINED BY SOLVING:

MAXIMIZE Σi λi bi                               (DUAL PROBLEM)
SUBJECT TO Σi λi aij ≤ cj , j = 1, 2, ..., n
λi ≥ 0 , i = 1, 2, ..., m


DUALITY THEORY (1/3)


SYMMETRIC DUAL PROBLEMS (1/2)

THE LINEAR DUAL PROBLEM OF THE LINEAR PROBLEM

MINIMIZE c^T x          (primal problem)
SUBJECT TO Ax ≥ b
x ≥ 0

IS THE LINEAR PROBLEM

MAXIMIZE λ^T b          (dual problem)
SUBJECT TO λ^T A ≤ c^T
λ ≥ 0


DUALITY THEORY (2/3)


SYMMETRIC DUAL PROBLEMS (2/2)

THE LINEAR DUAL PROBLEM OF THE LINEAR PROBLEM

MAXIMIZE c^T x          (primal problem)
SUBJECT TO Ax ≤ b
x ≥ 0

IS THE LINEAR PROBLEM

MINIMIZE λ^T b          (dual problem)
SUBJECT TO λ^T A ≥ c^T
λ ≥ 0


DUALITY THEORY (3/3)


(CONVERSION)

MAXIMIZE c^T x
SUBJECT TO Ax ≤ b
x ≥ 0

MINIMIZE -c^T x
SUBJECT TO -Ax ≥ -b
x ≥ 0

DUAL: MAXIMIZE -λ^T b
SUBJECT TO -λ^T A ≤ -c^T
λ ≥ 0

MINIMIZE λ^T b
SUBJECT TO λ^T A ≥ c^T
λ ≥ 0

The second dual couple follows from the first dual couple


EXAMPLE

COST COEFFICIENT VECTOR c = (-3, -5)^T ; RIGHT-HAND SIDE VECTOR b = (-4, -6, -18)^T

CONSTRAINT COEFFICIENT MATRIX

    [ -1  0 ]
A = [  0 -1 ]
    [ -3 -2 ]

MINIMIZE -3x1 - 5x2
SUBJECT TO -x1 ≥ -4
-x2 ≥ -6
-3x1 - 2x2 ≥ -18
x1 ≥ 0 , x2 ≥ 0

(The example we have used all the time)

SOLUTION: (x1, x2) = (2, 6) , z* = -36 , (λ1*, λ2*, λ3*) = (0, 3, 1)


EXAMPLE AND SOLUTION


(LINDO)

MIN - 3 X1 - 5 X2
SUBJECT TO
2) - X1 >= - 4
3) - X2 >= - 6
4) - 3 X1 - 2 X2 >= - 18
END

LP OPTIMUM FOUND AT STEP 2

OBJECTIVE FUNCTION VALUE

1) -36.000000

VARIABLE VALUE REDUCED COST
X1 2.000000 .000000
X2 6.000000 .000000

ROW SLACK OR SURPLUS DUAL PRICES


2) 2.000000 .000000
3) .000000 -3.000000
4) .000000 -1.000000

NO. ITERATIONS= 2

RANGES IN WHICH THE BASIS IS UNCHANGED:

OBJ COEFFICIENT RANGES


VAR CURRENT ALLOWABLE ALLOWABLE
COEF INCREASE DECREASE
X1 -3.000000 3.000000 4.500000
X2 -5.000000 3.000000 INFINITY

RIGHTHAND SIDE RANGES


ROW CURRENT ALLOWABLE ALLOWABLE
RHS INCREASE DECREASE
2 -4.000000 2.000000 INFINITY
3 -6.000000 3.000000 3.000000
4 -18.000000 6.000000 6.000000


EXAMPLE

COST COEFFICIENT VECTOR c = (-3, -5)^T ; RIGHT-HAND SIDE VECTOR b = (-4, -6, -18)^T ; CONSTRAINT COEFFICIENT MATRIX "A" AS IN THE PRIMAL

MAXIMIZE -4λ1 - 6λ2 - 18λ3
SUBJECT TO -λ1 - 3λ3 ≤ -3
-λ2 - 2λ3 ≤ -5
λ1 ≥ 0 , λ2 ≥ 0 , λ3 ≥ 0

SOLUTION: (λ1*, λ2*, λ3*) = (0, 3, 1) , z* = -36 , (x1, x2) = (2, 6)


EXAMPLE SOLUTION
(LINDO)

MAX - 4 L1 - 6 L2 - 18 L3
SUBJECT TO
2) - L1 - 3 L3 <= - 3
3) - L2 - 2 L3 <= - 5
END

LP OPTIMUM FOUND AT STEP 2

OBJECTIVE FUNCTION VALUE

1) -36.000000

VARIABLE VALUE REDUCED COST


L1 .000000 2.000000
L2 3.000000 .000000
L3 1.000000 .000000

ROW SLACK OR SURPLUS DUAL PRICES


2) .000000 2.000000
3) .000000 6.000000

NO. ITERATIONS= 2

RANGES IN WHICH THE BASIS IS UNCHANGED:

OBJ COEFFICIENT RANGES


VAR CURRENT ALLOWABLE ALLOWABLE
COEF INCREASE DECREASE
L1 -4.000000 2.000000 INFINITY
L2 -6.000000 3.000000 3.000000
L3 -18.000000 6.000000 6.000000

RIGHTHAND SIDE RANGES


ROW CURRENT ALLOWABLE ALLOWABLE
RHS INCREASE DECREASE
2 -3.000000 3.000000 4.500000
3 -5.000000 3.000000 INFINITY


DUALITY THEORY
NON-SYMMETRIC DUAL PROBLEMS

PRIMAL                              DUAL

MINIMIZE c^T x                      MAXIMIZE λ^T b
SUBJECT TO Ax = b                   SUBJECT TO λ^T A ≤ c^T
x ≥ 0                               λ UNRESTRICTED IN SIGN

MAXIMIZE c^T x                      MINIMIZE λ^T b
SUBJECT TO Ax = b                   SUBJECT TO λ^T A ≥ c^T
x ≥ 0                               λ UNRESTRICTED IN SIGN


NON-SYMMETRIC DUALS

MINIMIZE c^T x
SUBJECT TO Ax = b
x ≥ 0

MINIMIZE c^T x
SUBJECT TO Ax ≥ b
-Ax ≥ -b
x ≥ 0

MINIMIZE c^T x
SUBJECT TO Ax ≥ b     : u
-Ax ≥ -b              : v
x ≥ 0

DUAL: MAXIMIZE u^T b - v^T b
SUBJECT TO u^T A - v^T A ≤ c^T
u ≥ 0
v ≥ 0          (λ = u - v)

MAXIMIZE λ^T b
SUBJECT TO λ^T A ≤ c^T

A new dual couple follows from the first dual couple


DUALITY
CONVERSION RULES

PRIMAL                      DUAL

MINIMIZATION                MAXIMIZATION

CONSTRAINT ≥                λ ≥ 0 (b > 0)
CONSTRAINT ≤                λ ≤ 0 (b > 0)
CONSTRAINT =                λ UNRESTRICTED (λ <> 0)

MAXIMIZATION                MINIMIZATION

CONSTRAINT ≤                λ ≥ 0 (b > 0)
CONSTRAINT ≥                λ ≤ 0 (b > 0)
CONSTRAINT =                λ UNRESTRICTED (λ <> 0)


SIGN OF λ (1/2)
EXAMPLE (1)

MAX 3 X1 + 5 X2
SUBJECT TO
2) X1 <= 4
3) X2 <= 6
4) 3 X1 + 2 X2 <= 18
END

LP OPTIMUM FOUND AT STEP 2

OBJECTIVE FUNCTION VALUE          (MAXIMIZE: λ = ∂z/∂b)

1) 36.000000

VARIABLE VALUE REDUCED COST
X1 2.000000 .000000
X2 6.000000 .000000

ROW SLACK OR SURPLUS DUAL PRICES


2) 2.000000 .000000
3) .000000 3.000000
4) .000000 1.000000

NO. ITERATIONS= 2

MIN - 3 X1 - 5 X2
SUBJECT TO
2) X1 <= 4
3) X2 <= 6
4) 3 X1 + 2 X2 <= 18
END

LP OPTIMUM FOUND AT STEP 2

OBJECTIVE FUNCTION VALUE          (MINIMIZE: λ = ∂z/∂b)

1) -36.000000

VARIABLE VALUE REDUCED COST
X1 2.000000 .000000
X2 6.000000 .000000

ROW SLACK OR SURPLUS DUAL PRICES


2) 2.000000 .000000
3) .000000 3.000000
4) .000000 1.000000

NO. ITERATIONS= 2


SIGN OF λ (2/2)
EXAMPLE (2)

MIN - 3 X1 - 5 X2
SUBJECT TO
2) - X1 >= - 4
3) - X2 >= - 6
4) - 3 X1 - 2 X2 >= - 18
END

LP OPTIMUM FOUND AT STEP 2

OBJECTIVE FUNCTION VALUE          (MINIMIZE: λ = ∂z/∂b)

1) -36.000000

VARIABLE VALUE REDUCED COST
X1 2.000000 .000000
X2 6.000000 .000000
ROW SLACK OR SURPLUS DUAL PRICES
2) 2.000000 .000000
3) .000000 -3.000000
4) .000000 -1.000000

NO. ITERATIONS= 2

MAX 3 X1 + 5 X2
SUBJECT TO
2) - X1 >= - 4
3) - X2 >= - 6
4) - 3 X1 - 2 X2 >= - 18
END

LP OPTIMUM FOUND AT STEP 2

OBJECTIVE FUNCTION VALUE          (MAXIMIZE: λ = ∂z/∂b)

1) 36.000000

VARIABLE VALUE REDUCED COST
X1 2.000000 .000000
X2 6.000000 .000000

ROW SLACK OR SURPLUS DUAL PRICES


2) 2.000000 .000000
3) .000000 -3.000000
4) .000000 -1.000000

NO. ITERATIONS= 2


DUALITY THEOREMS

MINIMIZE c^T x                      MAXIMIZE λ^T b
SUBJECT TO Ax ≥ b                   SUBJECT TO λ^T A ≤ c^T
x ≥ 0                               λ ≥ 0

WEAK DUALITY THEOREM

IF x IS FEASIBLE FOR THE PRIMAL PROBLEM AND λ IS FEASIBLE FOR THE DUAL PROBLEM, THEN

λ^T b ≤ c^T x

THAT IS

λ^T b ≤ λ^T Ax ≤ c^T x

STRONG DUALITY THEOREM

IF x* IS OPTIMAL FOR THE PRIMAL PROBLEM AND λ* IS OPTIMAL FOR THE DUAL PROBLEM, THEN

c^T x* = λ*^T b
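Both theorems can be checked numerically on the running example written as min c^T x subject to Ax ≥ b, x ≥ 0, using the optimal points from the example slides and an arbitrary feasible primal point (a Python sketch):

```python
import numpy as np

# Running example as: minimize c^T x  s.t.  A x >= b, x >= 0.
c = np.array([-3., -5.])
A = np.array([[-1.,  0.],
              [ 0., -1.],
              [-3., -2.]])
b = np.array([-4., -6., -18.])

x_feas  = np.array([1., 1.])     # an arbitrary feasible primal point
x_opt   = np.array([2., 6.])     # optimal primal point
lam_opt = np.array([0., 3., 1.]) # optimal dual point

assert (A @ x_feas >= b).all()                                  # primal feasible
assert (lam_opt @ A <= c + 1e-9).all() and (lam_opt >= 0).all() # dual feasible
assert lam_opt @ b <= c @ x_feas          # weak duality: lam^T b <= c^T x
assert np.isclose(c @ x_opt, lam_opt @ b) # strong duality: both equal -36
```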


DUAL VARIABLE COMPUTATION

THE STRONG DUALITY THEOREM SAYS

c^T x* = λ*^T b          (1)

WHERE B IS THE OPTIMAL BASIS MATRIX

xB* = B^-1 b             (2)

BECAUSE xB* IS A BASIC SOLUTION

c^T x* = cB^T xB*        (3)

USING (1), (2) AND (3)

c^T x* = cB^T xB* = cB^T B^-1 b = λ*^T b

THEREFORE

λ*^T = cB^T B^-1

FINALLY

λ* = (B^-1)^T cB

WHICH MAKES IT POSSIBLE TO CALCULATE λ* ONCE THE BASIS B IS KNOWN


COMPLEMENTARY SLACKNESS (1/2)


SYMMETRIC DUAL PROBLEMS

MINIMIZE c^T x                      MAXIMIZE λ^T b
SUBJECT TO Ax ≥ b                   SUBJECT TO λ^T A ≤ c^T
x ≥ 0                               λ ≥ 0

LET

x BE A FEASIBLE SOLUTION FOR THE PRIMAL PROBLEM

λ BE A FEASIBLE SOLUTION FOR THE DUAL PROBLEM

THEN

A NECESSARY AND SUFFICIENT CONDITION FOR BOTH TO BE OPTIMAL IS THAT, FOR ALL i AND j,

1) λj > 0 ⇒ aj x = bj (positive sensitivity ⇒ binding constraint)

2) aj x > bj ⇒ λj = 0 (non-binding constraint ⇒ zero sensitivity)

3) xi > 0 ⇒ λ^T ai = ci (positive variable ⇒ binding dual constraint)

4) λ^T ai < ci ⇒ xi = 0 (non-binding dual constraint ⇒ zero variable)

WHERE aj IS THE j-TH ROW OF MATRIX A AND ai IS THE i-TH COLUMN OF MATRIX A
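The four conditions can be checked numerically at the optimal pair of the running example (a Python sketch; the points (2, 6) and (0, 3, 1) come from the earlier example slides):

```python
import numpy as np

c = np.array([-3., -5.])
A = np.array([[-1.,  0.],
              [ 0., -1.],
              [-3., -2.]])
b = np.array([-4., -6., -18.])
x   = np.array([2., 6.])      # primal optimum
lam = np.array([0., 3., 1.])  # dual optimum

slack = A @ x - b             # row slacks a_j x - b_j -> [2., 0., 0.]
red   = c - lam @ A           # column gaps c_i - lam^T a_i -> [0., 0.]

# 1)-2): lam_j > 0 only on binding rows; a slack row has lam_j = 0.
assert all(not (lam[j] > 0 and slack[j] > 1e-9) for j in range(3))
# 3)-4): x_i > 0 only where the dual constraint is binding.
assert all(not (x[i] > 0 and red[i] > 1e-9) for i in range(2))
```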


COMPLEMENTARY SLACKNESS (2/2)


NON-SYMMETRIC DUAL PROBLEMS

MINIMIZE c^T x                      MAXIMIZE λ^T b
SUBJECT TO Ax = b                   SUBJECT TO λ^T A ≤ c^T
x ≥ 0

LET

x BE A FEASIBLE SOLUTION FOR THE PRIMAL PROBLEM

λ BE A FEASIBLE SOLUTION FOR THE DUAL PROBLEM

THEN

A NECESSARY AND SUFFICIENT CONDITION FOR BOTH TO BE OPTIMAL IS THAT, FOR ALL i,

1) xi > 0 ⇒ λ^T ai = ci (positive variable ⇒ binding constraint)

2) λ^T ai < ci ⇒ xi = 0 (non-binding constraint ⇒ zero variable)

(ai IS THE i-TH COLUMN OF MATRIX A)


MPS FILE

NAME          (MIN)
ROWS
 N  1
 G  2
 L  3
 G  4
 L  5
 L  6
 G  7
 L  8
 G  9
 L  10
 G  11
 E  12
 E  13
 E  14
COLUMNS
 P1  1   6.0000000
 P1  2   1.0000000
 P1  3   1.0000000
 P1  12  1.0000000
 P2  1   7.0000000
 P2  4   1.0000000
 P2  5   1.0000000
 P2  13  1.0000000
 T1  6   2.5000000
 T1  7   2.5000000
 T1  10  3.5000000
 T1  11  3.5000000
 T1  12  -6.0000000
 T1  13  2.5000000
 T1  14  3.5000000
 T2  6   -2.5000000
 T2  7   -2.5000000
 T2  8   3.0000000
 T2  9   3.0000000
 T2  12  2.5000000
 T2  13  -5.5000000
 T2  14  3.0000000
RHS
 RHS  2   .150000000
 RHS  3   .600000000
 RHS  4   .100000000
 RHS  5   .400000000
 RHS  6   .300000000
 RHS  7   -.300000000
 RHS  8   .400000000
 RHS  9   -.400000000
 RHS  10  .500000000
 RHS  11  -.500000000
 RHS  14  .850000000
BOUNDS
 FR  LINDOBND  T1  .000000000
 FR  LINDOBND  T2  .000000000
ENDATA

AVAILABLE SOFTWARE

• LINDO (TO PLAY AROUND)

• GAMS (FOR PROTOTYPING)

• MATLAB (NOT SO GOOD!)

• MINOS (TO CODE SERIOUS MODELS)

• IMSL (WHEN THINGS ARE NOT SO BIG)

• CPLEX (EXPENSIVE)

• OSL (EXPENSIVE)

