
Optimization Methods: Linear Programming - Revised Simplex Method

Module 3, Lecture Notes 5

Revised Simplex Method, Duality and Sensitivity Analysis

Introduction

In the previous class, the simplex method was discussed, in which the entire simplex tableau needs to be computed at each iteration. The revised simplex method is an improvement over the simplex method: it is computationally more efficient and accurate. Duality of an LP problem is a useful property that makes the problem easier to solve in some cases and leads to the dual simplex method; it is also helpful in sensitivity or post-optimality analysis of the decision variables. In this lecture, the revised simplex method, duality of LP, the dual simplex method, and sensitivity or post-optimality analysis will be discussed.

Revised Simplex Method

The benefit of the revised simplex method is most clearly seen for large LP problems. In the simplex method the entire tableau is updated at every iteration, even though only a small part of it is actually used. The revised simplex method uses exactly the same steps as the simplex method; the only difference lies in the details of computing the entering and departing variables, as explained below. Consider the following LP problem, in general notation, after transforming it to its standard form and incorporating all required slack, surplus and artificial variables:

(Z)     c1 x1  + c2 x2  + c3 x3  + ... + cn xn  + Z = 0
(xi)    c11 x1 + c12 x2 + c13 x3 + ... + c1n xn     = b1
(xj)    c21 x1 + c22 x2 + c23 x3 + ... + c2n xn     = b2
  :                                                   :
(xl)    cm1 x1 + cm2 x2 + cm3 x3 + ... + cmn xn     = bm

(The variables shown in parentheses on the left denote the basic variables of the respective rows.)

As the revised simplex method is mostly beneficial for large LP problems, it will be discussed in the context of matrix notation. The above LP problem can be expressed in matrix form as follows:


Minimize z = C^T X
subject to:  A X = B
with:  X >= 0


where

X = [x1  x2  ...  xn]^T,   C = [c1  c2  ...  cn]^T,   B = [b1  b2  ...  bm]^T,   0 = [0  0  ...  0]^T,

and

A = | c11  c12  ...  c1n |
    | c21  c22  ...  c2n |
    |  :    :          : |
    | cm1  cm2  ...  cmn |

It can be noted for the subsequent discussion that the column vector of A corresponding to a decision variable xk is [c1k  c2k  ...  cmk]^T.

Let X_S be the column vector of basic variables, C_S the row vector of cost coefficients corresponding to X_S, and S the basis matrix corresponding to X_S.
1. Selection of entering variable

For each of the nonbasic variables, calculate the coefficient (WP - c), where P is the column vector associated with the nonbasic variable at hand, c is the cost coefficient associated with that nonbasic variable, and W = C_S S^-1. For a maximization (minimization) problem, the nonbasic variable with the lowest negative (highest positive) coefficient so calculated is the entering variable.
2. Selection of departing variable

a. A new column vector U is calculated as U = S^-1 B.

b. Corresponding to the entering variable, another vector V is calculated as V = S^-1 P, where P is the column vector corresponding to the entering variable.

c. The lengths of U and V are the same (= m). For i = 1, ..., m, the ratios U(i)/V(i) are calculated, provided V(i) > 0, and the index i = r giving the least ratio is noted. The r-th basic variable of the current basis is the departing variable. If V(i) <= 0 for all i, further calculation is stopped, concluding that a bounded solution does not exist for the LP problem at hand.


3. Update to new basis


The inverse of the old basis S is updated to the inverse of the new basis S_new as

S_new^-1 = E S^-1

where E is an m x m identity matrix whose r-th column is replaced by the vector n = (n(1), n(2), ..., n(m))^T defined by

n(i) = -V(i)/V(r)   for i not equal to r
n(r) =  1/V(r)      for i = r

S^-1 is replaced by S_new^-1 and steps 1 through 3 are repeated. If all the coefficients (WP - c) calculated in step 1 are positive (negative) in the case of a maximization (minimization) problem, then the optimum solution has been reached, and the optimal solution is

X_S = S^-1 B   and   z = C_S X_S
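For concreteness, the three steps above can be collected into a short routine. The following is a minimal sketch, assuming a minimization problem already in standard form (minimize C^T X subject to AX = B, X >= 0) with a known starting basic feasible solution; the function name, the NumPy data layout and the explicit inversion of the starting basis are illustrative choices, not part of the notes.

```python
import numpy as np

def revised_simplex(A, b, c, basis, max_iter=100):
    """Revised simplex sketch for:  Minimize c^T x  subject to  A x = b, x >= 0.
    `basis` lists the m column indices of an initial basic feasible solution."""
    A, b, c = np.asarray(A, float), np.asarray(b, float), np.asarray(c, float)
    m, n = A.shape
    basis = list(basis)
    S_inv = np.linalg.inv(A[:, basis])            # S^-1 of the starting basis

    for _ in range(max_iter):
        W = c[basis] @ S_inv                      # W = C_S S^-1

        # Step 1: entering variable -- for minimization, the nonbasic variable
        # with the highest positive coefficient (W P - c) enters.
        nonbasic = [j for j in range(n) if j not in basis]
        coeffs = np.array([W @ A[:, j] - c[j] for j in nonbasic])
        if np.all(coeffs <= 1e-9):                # all negative: optimum reached
            x = np.zeros(n)
            x[basis] = S_inv @ b                  # X_S = S^-1 B
            return x, float(c @ x)                # z = C_S X_S
        k = nonbasic[int(np.argmax(coeffs))]

        # Step 2: departing variable -- ratio test on U = S^-1 B and V = S^-1 P.
        U, V = S_inv @ b, S_inv @ A[:, k]
        if np.all(V <= 0):
            raise RuntimeError("no bounded solution exists")
        ratios = np.where(V > 0, U / np.where(V > 0, V, 1.0), np.inf)
        r = int(np.argmin(ratios))                # r-th basic variable departs

        # Step 3: product-form update of the basis inverse, S_new^-1 = E S^-1.
        E = np.eye(m)
        E[:, r] = -V / V[r]
        E[r, r] = 1.0 / V[r]
        S_inv = E @ S_inv
        basis[r] = k

    raise RuntimeError("iteration limit reached")
```

In practice S^-1 is never recomputed by explicit inversion at every step; maintaining it through the product-form update above (or an equivalent factorization) is what makes the revised method economical for large problems.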

Duality of LP problems
Each LP problem (called the primal in this context) is associated with a counterpart known as the dual LP problem. Instead of the primal, solving the dual is sometimes easier when (a) the dual has fewer constraints than the primal (the time required for solving an LP problem is directly affected by the number of constraints, i.e., by the number of iterations necessary to converge to an optimum solution, which in the simplex method usually ranges from 1.5 to 3 times the number of structural constraints in the problem), and (b) the dual involves maximization of the objective function (it may then be possible to avoid the artificial variables that would otherwise be needed in a primal minimization problem).

The dual LP problem can be constructed by defining a new decision variable for each constraint in the primal and a new constraint for each variable in the primal. The coefficient of the j-th variable in the dual's objective function is the j-th component of the primal's requirement vector (the right-hand-side values of the primal constraints), while the dual's requirement vector consists of the cost coefficients of the decision variables in the primal objective function.


The coefficients of each constraint in the dual (i.e., its row vectors) are the column vectors associated with the corresponding decision variables in the coefficient matrix of the primal problem. In other words, the coefficient matrix of the dual is the transpose of the primal's coefficient matrix. Finally, maximizing the primal problem is equivalent to minimizing the dual, and their respective optimal values are exactly equal. When a primal constraint is a less-than-or-equal-to inequality, the corresponding variable in the dual is non-negative, and an equality constraint in the primal means that the corresponding dual variable is unrestricted in sign. Obviously, the dual of the dual is the primal. In summary, the following relationships exist between the primal and the dual.

Primal                                                Dual
Maximization                                          Minimization
Minimization                                          Maximization
i-th variable                                         i-th constraint
j-th constraint                                       j-th variable
Variable xi >= 0                                      i-th constraint with <= sign if the dual is a maximization, and with >= sign if the dual is a minimization
i-th variable unrestricted in sign                    i-th constraint with = sign
j-th constraint with = sign                           j-th variable unrestricted in sign
Right hand side of the j-th constraint                Cost coefficient of the j-th variable in the objective function
Cost coefficient of the i-th variable in the          Right hand side of the i-th constraint
objective function

See the pictorial representation below for better understanding and quick reference.


Consider a maximization primal in which the first constraint is an equality, the second is of <= type, and the decision variables carry mixed sign restrictions; the dual variable marked against each constraint is the one it generates:

Maximize Z = c1 x1 + c2 x2 + ... + cn xn
subject to
c11 x1 + c12 x2 + ... + c1n xn  =  b1        (dual variable y1)
c21 x1 + c22 x2 + ... + c2n xn  <= b2        (dual variable y2)
  :
cm1 x1 + cm2 x2 + ... + cmn xn  <= bm        (dual variable ym)
x1 >= 0, x2 unrestricted, ..., xn >= 0

The dual is read off column by column: the right hand sides b1, b2, ..., bm become the cost coefficients of the dual objective function (a minimization, since the primal is a maximization); the coefficients of the 1st, 2nd, ..., m-th primal constraints become the coefficients of y1, y2, ..., ym in the dual constraints; and the cost coefficients c1, c2, ..., cn become the right hand sides of the dual constraints. The = sign of the 1st primal constraint makes y1 unrestricted, and the <= sign of the 2nd makes y2 >= 0; likewise, x1 >= 0 makes the 1st dual constraint a >= inequality, while x2 unrestricted makes the 2nd dual constraint an equality.

Dual Problem

Minimize Z = b1 y1 + b2 y2 + ... + bm ym
subject to
c11 y1 + c21 y2 + ... + cm1 ym >= c1
c12 y1 + c22 y2 + ... + cm2 ym  =  c2
  :
c1n y1 + c2n y2 + ... + cmn ym >= cn
y1 unrestricted, y2 >= 0, ..., ym >= 0


It may be noted that, before finding its dual, all the constraints should be transformed to less-than-or-equal-to or equal-to type for a maximization problem, and to greater-than-or-equal-to or equal-to type for a minimization problem. This can be done by multiplying both sides of a constraint by -1, so that the inequality sign gets reversed. The construction of a dual is illustrated with the following example.

Primal

Maximize Z = 4 x1 + 3 x2
subject to
x1 + (2/3) x2 <= 6000
x1 - x2 >= 2000
x1 <= 4000
x1 unrestricted, x2 >= 0

Dual

Minimize Z = 6000 y1 - 2000 y2 + 4000 y3
subject to
y1 - y2 + y3 = 4
(2/3) y1 + y2 >= 3
y1 >= 0, y2 >= 0, y3 >= 0

It may be noted that the second constraint in the primal is transformed to -x1 + x2 <= -2000 before constructing the dual.
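The construction just illustrated is purely mechanical: once every constraint of a maximization primal is of <= type, the dual is obtained by transposing the coefficient matrix and swapping the cost and requirement vectors. Below is a minimal sketch using the data of the example above; the function name and data layout are assumptions for illustration only.

```python
import numpy as np

def dual_of_max_problem(c, A, b):
    """For the symmetric pair
         Primal:  Maximize c^T x   subject to  A x <= b,    x >= 0
         Dual:    Minimize b^T y   subject to  A^T y >= c,  y >= 0
    return the dual data (cost vector, coefficient matrix, requirement vector).
    An unrestricted primal variable turns the corresponding dual constraint
    into an equality, as in the table of relationships above."""
    A = np.asarray(A, dtype=float)
    return np.asarray(b, dtype=float), A.T, np.asarray(c, dtype=float)

# Data of the example primal, with the second constraint already rewritten
# as  -x1 + x2 <= -2000  so that every constraint is of <= type.
c = [4, 3]
A = [[1, 2 / 3],
     [-1, 1],
     [1, 0]]
b = [6000, -2000, 4000]

b_dual, A_dual, c_dual = dual_of_max_problem(c, A, b)
# b_dual = [6000, -2000, 4000] are the dual objective coefficients, and the
# rows of A_dual give  y1 - y2 + y3 (>=) 4  and  (2/3) y1 + y2 >= 3;
# because x1 is unrestricted, the first dual constraint is in fact an equality.
```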

Primal-Dual relationships
The following points are important to note regarding the primal-dual relationship:

1. If one problem (either the primal or the dual) has an optimal feasible solution, the other problem also has an optimal feasible solution, and the optimal objective function values of the two problems are equal.
2. If one problem has no feasible solution (is infeasible), the other problem is either infeasible or unbounded.
3. If one problem is unbounded, the other problem is infeasible.
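These properties can be checked numerically for the example above. The sketch below uses scipy.optimize.linprog purely as an off-the-shelf verification aid (it is not part of the notes); the unrestricted variable x1 is handled through the bounds argument, and >= rows are rewritten as <= rows because linprog expects A_ub x <= b_ub.

```python
from scipy.optimize import linprog

# Primal:  Maximize 4 x1 + 3 x2
#   x1 + (2/3) x2 <= 6000,   x1 - x2 >= 2000,   x1 <= 4000,
#   x1 unrestricted, x2 >= 0.  (linprog minimizes, so the objective is negated.)
primal = linprog(c=[-4, -3],
                 A_ub=[[1, 2 / 3], [-1, 1], [1, 0]],
                 b_ub=[6000, -2000, 4000],
                 bounds=[(None, None), (0, None)],
                 method="highs")

# Dual:  Minimize 6000 y1 - 2000 y2 + 4000 y3
#   y1 - y2 + y3 = 4,   (2/3) y1 + y2 >= 3,   y1, y2, y3 >= 0.
dual = linprog(c=[6000, -2000, 4000],
               A_ub=[[-2 / 3, -1, 0]],        # (2/3) y1 + y2 >= 3 rewritten as <=
               b_ub=[-3],
               A_eq=[[1, -1, 1]],
               b_eq=[4],
               bounds=[(0, None)] * 3,
               method="highs")

# By point 1 above, the two optimal objective values should coincide
# (the primal value is -primal.fun because of the sign change).
print(-primal.fun, dual.fun)
```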


Dual Simplex Method


Computationally, the dual simplex method is the same as the simplex method. However, their approaches differ: the simplex method starts with a non-optimal but feasible solution, whereas the dual simplex method starts with an optimal but infeasible solution. The simplex method maintains feasibility during successive iterations, whereas the dual simplex method maintains optimality. The steps involved in the dual simplex method are as follows (a schematic code sketch of one iteration is given after the steps):

1. All the constraints (except those with an equality (=) sign) are modified to less-than-or-equal-to (<=) form. Constraints with a greater-than-or-equal-to (>=) sign are multiplied by -1 throughout so that the inequality sign gets reversed. Finally, all these constraints are transformed to equalities by introducing the required slack variables.
2. The modified problem, as in step 1, is expressed in the form of a simplex tableau. If all the cost coefficients are positive (i.e., the optimality condition is satisfied) and one or more basic variables have negative values (i.e., the solution is infeasible), then the dual simplex method is applicable.
3. Selection of the exiting variable: the basic variable with the most negative value is the exiting variable. If there are two candidates, either one is selected. The row of the selected exiting variable is marked as the pivotal row.
4. Selection of the entering variable: the cost coefficients corresponding to all the negative elements of the pivotal row are identified, and their ratios are calculated after changing the sign of the pivotal-row elements, i.e.,

   ratio = cost coefficient / (-1 x element of the pivotal row).

   The column corresponding to the minimum ratio is identified as the pivotal column, and the associated decision variable is the entering variable.
5. Pivotal operation: the pivotal operation is exactly the same as in the simplex method, with the pivotal element being the element at the intersection of the pivotal row and the pivotal column.
6. Check for optimality: if all the basic variables have non-negative values, the optimum solution is reached. Otherwise, steps 3 to 5 are repeated until the optimum is reached.
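A minimal sketch of one such iteration is given below, assuming the tableau is stored as a NumPy array with the Z row first and the right-hand-side (br) values in the last column; the function name and this layout are illustrative assumptions rather than the notes' notation.

```python
import numpy as np

def dual_simplex_iteration(T, basis):
    """One dual-simplex iteration on tableau T (row 0 = Z row, last column = br).
    `basis` holds the column index of the basic variable of each constraint row.
    Returns True once every basic variable is non-negative (optimum reached)."""
    br = T[1:, -1]
    if np.all(br >= 0):                          # step 6: optimality check
        return True

    # Step 3: exiting variable -- basic variable with the most negative value.
    r = int(np.argmin(br)) + 1                   # +1 skips the Z row

    # Step 4: entering variable -- among the negative elements of the pivotal
    # row, take the minimum ratio |cost coefficient / pivotal-row element|
    # (the magnitudes reproduce the ratios shown in the tableaus below).
    cols = [j for j in range(T.shape[1] - 1) if T[r, j] < 0]
    if not cols:     # no negative element: infeasible (standard dual-simplex result)
        raise RuntimeError("no feasible solution exists")
    s = min(cols, key=lambda j: abs(T[0, j] / T[r, j]))

    # Step 5: pivotal operation about the element T[r, s].
    T[r, :] /= T[r, s]
    for i in range(T.shape[0]):
        if i != r:
            T[i, :] -= T[i, s] * T[r, :]
    basis[r - 1] = s
    return False
```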


Consider the following problem:

Minimize Z = 2 x1 + x2
subject to
x1 >= 2
3 x1 + 4 x2 <= 24
4 x1 + 3 x2 >= 12
-x1 + 2 x2 >= 1

By multiplying the >= constraints by -1 and introducing slack variables, the problem is reformulated with equality constraints as follows:

Minimize Z = 2 x1 + x2
subject to
 -x1             + x3                      =  -2
 3 x1 + 4 x2          + x4                 =  24
-4 x1 - 3 x2               + x5            = -12
   x1 - 2 x2                    + x6       =  -1

Expressing the problem in tableau form:

Iteration 1

Basis    Z    x1    x2    x3   x4   x5   x6     br
Z        1    -2    -1     0    0    0    0      0
x3       0    -1     0     1    0    0    0     -2
x4       0     3     4     0    1    0    0     24
x5       0    -4    -3     0    0    1    0    -12   <- pivotal row
x6       0     1    -2     0    0    0    1     -1
Ratios        0.5   1/3    --   --   --   --

The pivotal column is the x2 column (minimum ratio 1/3), and the pivotal element is -3.
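For reference, the starting tableau above can be driven to the optimum with the dual_simplex_iteration sketch given after the method's steps; the array below simply transcribes the tableau (omitting the all-zero Z column), so the column indices and basis list are illustrative bookkeeping only.

```python
import numpy as np

# Columns: x1, x2, x3, x4, x5, x6, br  (Z row first; the Z column is omitted).
T = np.array([
    [-2., -1., 0., 0., 0., 0.,   0.],   # Z row
    [-1.,  0., 1., 0., 0., 0.,  -2.],   # x3 row
    [ 3.,  4., 0., 1., 0., 0.,  24.],   # x4 row
    [-4., -3., 0., 0., 1., 0., -12.],   # x5 row (pivotal row)
    [ 1., -2., 0., 0., 0., 1.,  -1.],   # x6 row
])
basis = [2, 3, 4, 5]                    # x3, x4, x5, x6 are basic initially

while not dual_simplex_iteration(T, basis):
    pass                                # each pass reproduces the next tableau

# After termination the br column holds the basic-variable values and
# T[0, -1] the optimal objective value (5.5, as found below).
```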


Tableaus for successive iterations are shown below; the pivotal row, pivotal column and pivotal element of each tableau are marked as before.

Iteration 2

Basis    Z    x1     x2    x3    x4    x5     x6     br
Z        1   -2/3     0     0     0   -1/3     0      4
x3       0    -1      0     1     0    0       0     -2   <- pivotal row
x4       0   -7/3     0     0     1    4/3     0      8
x2       0    4/3     1     0     0   -1/3     0      4
x6       0   11/3     0     0     0   -2/3     1      7
Ratios        2/3    --    --    --    --     --

Pivotal column: x1; pivotal element: -1.

Iteration 3

Basis    Z    x1    x2    x3     x4    x5     x6     br
Z        1     0     0   -2/3     0   -1/3     0    16/3
x1       0     1     0   -1       0    0       0     2
x4       0     0     0   -7/3     1    4/3     0    38/3
x2       0     0     1    4/3     0   -1/3     0     4/3
x6       0     0     0   11/3     0   -2/3     1    -1/3   <- pivotal row
Ratios       --    --     --     --    0.5    --

Pivotal column: x5; pivotal element: -2/3.

Iteration 4

Basis    Z    x1    x2    x3     x4    x5    x6      br
Z        1     0     0   -2.5     0    0    -0.5     5.5
x1       0     1     0   -1       0    0     0       2
x4       0     0     0    5       1    0     2      12
x2       0     0     1   -0.5     0    0    -0.5     1.5
x5       0     0     0   -5.5     0    1    -1.5     0.5


Since all the br values are now non-negative, the optimum solution has been reached. The optimal solution is Z = 5.5 with x1 = 2 and x2 = 1.5.

Solution of the Dual from the Final Simplex Tableau of the Primal

Primal

Maximize Z = 4 x1 - x2 + 2 x3
subject to
2 x1 + x2 + 2 x3 <= 6
x1 - 4 x2 + 2 x3 <= 0
5 x1 - 2 x2 - 2 x3 <= 4
x1, x2, x3 >= 0

Dual

Minimize Z' = 6 y1 + 0 y2 + 4 y3
subject to
2 y1 + y2 + 5 y3 >= 4
y1 - 4 y2 - 2 y3 >= -1
2 y1 + 2 y2 - 2 y3 >= 2
y1, y2, y3 >= 0

[Final simplex tableau of the primal: the Z-row coefficients of the slack variables of the three constraints are marked y1, y2 and y3.]

As illustrated above, the solution of the dual can be read from the Z row of the final primal tableau as the coefficients of the slack variables of the respective constraints: y1 = 1, y2 = 1/3 and y3 = 1/3, with Z = Z' = 22/3.
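The same values can be recovered without the tableau, as the simplex multipliers y = C_S S^-1 of the optimal basis. The sketch below assumes, as the all-positive dual values above indicate, that the optimal basis of the primal consists of the structural variables x1, x2 and x3; NumPy and the variable names are used only for illustration.

```python
import numpy as np

# Primal data:  Maximize 4 x1 - x2 + 2 x3  subject to the three <= constraints.
A = np.array([[2.,  1.,  2.],
              [1., -4.,  2.],
              [5., -2., -2.]])
b = np.array([6., 0., 4.])
c = np.array([4., -1., 2.])

# With x1, x2, x3 basic at the optimum, the basis matrix S is A itself, so the
# dual (shadow-price) vector y = C_S S^-1 solves  S^T y = C_S.
y = np.linalg.solve(A.T, c)
x = np.linalg.solve(A, b)              # optimal primal solution X_S = S^-1 B

print(y)                               # should reproduce y1, y2, y3 quoted above
print(c @ x, b @ y)                    # equal optimal values, Z = Z'
```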


Sensitivity or post optimality analysis


A dual variable associated with a constraint indicates the change in the optimal Z value for a small change in the RHS of that constraint. Thus,

Delta Z = y_j Delta b_i

where y_j is the dual variable associated with the i-th constraint, Delta b_i is the small change in the RHS of the i-th constraint, and Delta Z is the resulting change in the objective function.

For example, let the i-th constraint of an LP problem be 2 x1 + x2 <= 50 and the optimum value of the objective function be 250. What happens if the RHS of the i-th constraint changes to 55, i.e., the constraint becomes 2 x1 + x2 <= 55? To answer this, let the optimum value of the dual variable y_j associated with the i-th constraint be 2.5 (say). Then Delta b_i = 55 - 50 = 5 and y_j = 2.5, so Delta Z = y_j Delta b_i = 2.5 x 5 = 12.5, and the revised optimum value of the objective function is 250 + 12.5 = 262.5.

It may be noted that Delta b_i should be chosen small enough that it does not cause a change in the optimal basis.
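Continuing with the primal of the previous section, the relation Delta Z = y_j Delta b_i can be checked directly by re-solving with a perturbed right hand side, assuming the perturbation is small enough that the optimal basis (x1, x2, x3) does not change; the arrays repeat the illustrative data used in the previous sketch.

```python
import numpy as np

A = np.array([[2., 1., 2.], [1., -4., 2.], [5., -2., -2.]])
b = np.array([6., 0., 4.])
c = np.array([4., -1., 2.])

y = np.linalg.solve(A.T, c)            # dual values of the three constraints
z_old = c @ np.linalg.solve(A, b)      # current optimal Z

delta_b = np.array([0.5, 0., 0.])      # small change in the RHS of constraint 1
z_new = c @ np.linalg.solve(A, b + delta_b)   # same basis, perturbed RHS

print(z_new - z_old, y @ delta_b)      # both equal Delta Z = y1 * Delta b1
```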
