Chapter-2 LP
s_{n+k} = b_k − Σ_{j=1}^{n} a_{kj} x_j ≥ 0, so that Σ_{j=1}^{n} a_{kj} x_j + s_{n+k} = b_k. Such a variable s_{n+k} is known as a slack variable. Similarly, consider an inequality of the form a_{k1} x_1 + a_{k2} x_2 + … + a_{kn} x_n ≥ b_k. Introduce a variable s_{n+k} = −b_k + Σ_{j=1}^{n} a_{kj} x_j ≥ 0, so that Σ_{j=1}^{n} a_{kj} x_j − s_{n+k} = b_k. Such a variable s_{n+k} is known as a surplus variable. To each slack and/or surplus variable, assign a cost coefficient of zero in the objective function, i.e. the slack and surplus variables do not contribute to the objective function.
Managerial Significance of the Slack and the Surplus Variables:
Let us take the constraint 4X1 + 3X2 ≤ 240 (hours of carpentry time) from Example 3.1 discussed earlier. Here we think that the best combination of tables and chairs in the ABC furniture case may not necessarily use all the time available in each department. We must therefore add to each inequality a variable which will take up the slack, that is, the time not used in each department. This variable is called the slack variable. In this case, if we write the above inequality as an equality as follows
4X1 + 3X2 + S1 = 240
then S1 represents the unused time in the carpentry department (in general, the unused amount of a resource).
Similarly, a constraint of the form 4X1 + 3X2 ≥ 240, if converted to an equality, will look like
4X1 + 3X2 − S2 = 240.
Here the variable S2 represents the amount by which the carpentry hours will exceed 240 in the solution. Hence it is the surplus amount of the resource required.
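The conversion just described can be sketched as a short helper (a minimal sketch; the function name and the (coefficients, sense, rhs) encoding are illustrative, not from the text):

```python
def to_standard_form(constraints):
    """Convert constraints (coeffs, sense, rhs) into equalities by appending
    one slack (for <=) or surplus (for >=) variable per constraint, each of
    which carries a zero cost coefficient in the objective function."""
    m = len(constraints)
    rows = []
    for k, (coeffs, sense, rhs) in enumerate(constraints):
        extra = [0] * m                        # columns for the m new variables
        extra[k] = 1 if sense == "<=" else -1  # slack: +s_{n+k}, surplus: -s_{n+k}
        rows.append((list(coeffs) + extra, rhs))
    return rows

# The carpentry constraint 4X1 + 3X2 <= 240, plus a >= version of it:
rows = to_standard_form([([4, 3], "<=", 240), ([4, 3], ">=", 240)])
print(rows[0])  # ([4, 3, 1, 0], 240)  i.e. 4X1 + 3X2 + S1 = 240
print(rows[1])  # ([4, 3, 0, -1], 240) i.e. 4X1 + 3X2 - S2 = 240
```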
Note : After introducing slack / surplus variables, any given LPP can be expressed as under :
Maximize Z = c_1 x_1 + c_2 x_2 + … + c_n x_n
subject to the constraints
a_{k1} x_1 + a_{k2} x_2 + … + a_{kn} x_n + s_{n+k} = b_k, k = 1, 2, …, m
and x_j ≥ 0, j = 1, 2, …, n + m.
Using matrix notation, the above LPP in canonical form as well as standard form can be expressed as follows :
Canonical Form : Maximize (Minimize) Z = cᵀx subject to Ax ≤ (≥) b, x ≥ 0.
Standard Form : Maximize (Minimize) Z = cᵀx subject to Ax = b, x ≥ 0.
where c, x ∈ Rⁿ, b ∈ Rᵐ and A = (a_{ij})_{m×n} is a real valued matrix with rank equal to m ≤ n. Thus, A will have m linearly independent columns.
2.5 Solving the Linear Programming Problem:
Let us introduce some definitions for our standard LPP :
Solution : Any x ∈ Rⁿ which satisfies Ax = b is a solution.
Feasible solution : Any x ∈ Rⁿ which satisfies Ax = b, x ≥ 0 is called a feasible solution to the given LPP. The set S_F = { x ∈ Rⁿ : Ax = b, x ≥ 0 } is known as the set of all feasible solutions.
Basic Solution : Any solution x in which at most m variables are non-zero is called a basic solution.
Basic Feasible Solution : Any feasible solution x ∈ Rⁿ in which k (≤ m) variables have positive values and the rest (n − k) have zero values is called a basic feasible solution. If k = m, the basic feasible solution is called non-degenerate. If k < m, the basic feasible solution is called degenerate.
Our aim is to obtain a basic feasible solution to the given LPP which optimizes the objective function.
Optimum Solution : Any feasible solution x ∈ Rⁿ which optimizes the objective function Z = cᵀx is known as the optimum solution to the given LPP.
Optimum Basic Feasible Solution : A basic feasible solution is said to be optimum if it optimizes the objective function.
Unbounded Solution : If the value of the objective function can be increased or decreased indefinitely without violating the constraints, then the solution is known as an unbounded solution.
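The definitions above can be made concrete by enumerating basic solutions directly: choose m of the n columns, solve the resulting square system, and flag the non-negative ones as basic feasible solutions. A minimal sketch with exact rationals (the two-constraint system at the end is an assumed illustration, not from the text):

```python
from fractions import Fraction
from itertools import combinations

def solve_square(B, b):
    """Solve B y = b by Gaussian elimination with exact rationals;
    return None if B is singular."""
    m = len(B)
    M = [[Fraction(v) for v in row] + [Fraction(bi)] for row, bi in zip(B, b)]
    for col in range(m):
        piv = next((r for r in range(col, m) if M[r][col] != 0), None)
        if piv is None:
            return None
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(m):
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * p for a, p in zip(M[r], M[col])]
    return [M[r][m] for r in range(m)]

def basic_solutions(A, b):
    """Enumerate basic solutions: pick m of the n columns, set the rest to 0."""
    m, n = len(A), len(A[0])
    out = []
    for cols in combinations(range(n), m):
        B = [[A[i][j] for j in cols] for i in range(m)]
        y = solve_square(B, b)
        if y is None:
            continue
        x = [Fraction(0)] * n
        for j, v in zip(cols, y):
            x[j] = v
        out.append((cols, x, all(v >= 0 for v in x)))  # feasibility flag
    return out

# x1 + x2 + s1 = 4, x1 + 3x2 + s2 = 6 (from x1 + x2 <= 4, x1 + 3x2 <= 6):
A = [[1, 1, 1, 0], [1, 3, 0, 1]]
b = [4, 6]
sols = basic_solutions(A, b)
for cols, x, feasible in sols:
    print(cols, [str(v) for v in x], "BFS" if feasible else "infeasible")
```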
Let us discuss some of the fundamental results.
Consider the LPP :
Maximize (Minimize) Z = cᵀx subject to Ax = b, x ≥ 0.
Let S_F = { x ∈ Rⁿ : Ax = b, x ≥ 0 } denote the set of all feasible solutions.
Theorem 2.1 S_F is a convex set.
Proof : Let x_1, x_2 ∈ S_F and λ ∈ [0, 1] be any scalar. Then A x_1 = b and A x_2 = b, x_1 ≥ 0, x_2 ≥ 0. Consider a convex combination of x_1 and x_2, say x_λ = λ x_1 + (1 − λ) x_2. Obviously, x_λ ≥ 0.
Further, A x_λ = A(λ x_1 + (1 − λ) x_2) = λ A x_1 + (1 − λ) A x_2 = λ b + (1 − λ) b = b, implying x_λ ∈ S_F. Hence, S_F is a convex set.
Note 1: If S_F is a null set then there is no solution to the given LPP.
Note 2: If S_F is a closed bounded convex set, i.e. a convex polyhedron, the given LPP will have an optimum solution assigning a finite value to the objective function.
Note 3: If S_F is a convex set unbounded in some direction of Rⁿ, then the LPP will have a solution, but the optimum value of the objective function may be finite or infinite.
Theorem 2.2 Suppose the set S_F of feasible solutions to the given LPP is non-empty. Then the optimum solution to the LPP (if it exists) lies at a vertex of the convex polygon.
Proof : Suppose S_F has p vertices, say x_1, x_2, …, x_p. Let x_0 be the optimum basic feasible solution to the given LPP. Two cases may arise :
Case (i) : x_0 is a vertex of the convex polygon. Then the result is obvious.
Case (ii) : Let x_0 be an interior point of S_F. Then x_0 can be expressed as a convex combination of the vertices. That is, there exist scalars λ_1, λ_2, …, λ_p with 0 ≤ λ_j ≤ 1, 1 ≤ j ≤ p and Σ_{j=1}^{p} λ_j = 1 such that
x_0 = Σ_{j=1}^{p} λ_j x_j (2.6)
Since x_0 is optimum, we have
cᵀx_0 ≥ cᵀx_j for all 1 ≤ j ≤ p (2.7)
In particular, let x_m be the vertex such that
cᵀx_m ≥ cᵀx_j for all 1 ≤ j ≤ p (2.8)
From (2.7) and (2.8), cᵀx_0 ≥ cᵀx_m (2.9)
Again, cᵀx_m ≥ cᵀx_j for all 1 ≤ j ≤ p, so λ_j cᵀx_m ≥ λ_j cᵀx_j for all 1 ≤ j ≤ p. Summing, Σ_{j=1}^{p} λ_j cᵀx_m ≥ Σ_{j=1}^{p} λ_j cᵀx_j, which implies cᵀx_m ≥ cᵀ Σ_{j=1}^{p} λ_j x_j, i.e. cᵀx_m ≥ cᵀx_0 (2.10)
From (2.9) and (2.10), it follows that cᵀx_0 = cᵀx_m.
Thus, there always exists a vertex x_m ∈ S_F such that cᵀx_m is the optimum value. Thus, if a basic feasible solution to a given LPP exists, then one of the vertices will give the optimum value of the objective function.
Theorem 2.3 The set of optimal solutions to the LPP is convex.
Proof : Let S_F0 denote the set of optimal solutions. If S_F0 is empty or a singleton then it is convex. Let S_F0 contain more than one solution, say x_10, x_20 ∈ S_F0. Then cᵀx_10 = cᵀx_20 = max Z. Consider a convex combination of x_10 and x_20 as w_0 = λ x_10 + (1 − λ) x_20, 0 ≤ λ ≤ 1. Then cᵀw_0 = cᵀ{λ x_10 + (1 − λ) x_20} = λ cᵀx_10 + (1 − λ) cᵀx_20 = λ max Z + (1 − λ) max Z = max Z. Thus, w_0 ∈ S_F0, and hence S_F0 is convex.
Theorem 2.4 If the convex set of feasible solutions of Ax = b, x ≥ 0 is a convex polyhedron, then at least one of the extreme points gives an optimal basic feasible solution. If the optimal solution occurs at more than one extreme point, the value of the objective function will be the same for all convex combinations of these extreme points.
Proof : Let x_1, x_2, …, x_k be the extreme points of the feasible region F of the LPP defined in (2.3) - (2.5). Suppose x_m is the extreme point among x_1, x_2, …, x_k at which the value of the objective function is maximum, say Z*. Then
Z* = cᵀx_m (2.11)
Now, consider a point x_0 ∈ S_F which is not an extreme point, and let Z_0 be the corresponding value of the objective function. Then
Z_0 = cᵀx_0 (2.12)
Since x_0 is not an extreme point, it can be expressed as a convex combination of the extreme points x_1, x_2, …, x_k of the feasible region F, where F is assumed to be a closed and bounded set. Then there exist scalars λ_1, λ_2, …, λ_k with Σ_{j=1}^{k} λ_j = 1, 0 ≤ λ_j ≤ 1, 1 ≤ j ≤ k such that x_0 = λ_1 x_1 + λ_2 x_2 + … + λ_k x_k. Therefore (2.12) becomes
Z_0 = cᵀ{λ_1 x_1 + λ_2 x_2 + … + λ_k x_k} = λ_1 cᵀx_1 + λ_2 cᵀx_2 + … + λ_k cᵀx_k ≤ cᵀx_m, i.e. Z_0 ≤ Z* (from (2.11)),
which shows that an extreme point solution is at least as good as any other feasible solution in F.
Second part of the theorem :
Let x_1, x_2, …, x_r (r ≤ k) be the extreme points of the feasible region F at which the objective function assumes the same optimum value. This means Z* = cᵀx_1 = cᵀx_2 = … = cᵀx_r.
Further, let x = λ_1 x_1 + λ_2 x_2 + … + λ_r x_r, with Σ_{j=1}^{r} λ_j = 1, 0 ≤ λ_j ≤ 1, 1 ≤ j ≤ r, be a convex combination of x_1, x_2, …, x_r. Then
cᵀx = cᵀ{λ_1 x_1 + λ_2 x_2 + … + λ_r x_r} = λ_1 cᵀx_1 + λ_2 cᵀx_2 + … + λ_r cᵀx_r = λ_1 Z* + λ_2 Z* + … + λ_r Z* = Z* Σ_{j=1}^{r} λ_j = Z*,
which completes the proof.
Theorem 2.5 If there exists a feasible solution to the LPP, then there exists a basic feasible solution to the given LPP.
Proof : Consider Maximize Z = c_1 x_1 + c_2 x_2 + … + c_n x_n
subject to the constraint
x_1 a_1 + x_2 a_2 + … + x_n a_n = b, where a_j = (a_{1j}, a_{2j}, …, a_{mj})ᵀ is the j-th column of A.
Suppose that there exists a feasible solution to the above LPP in which k > m variables have positive values. Without loss of generality, we assume that the first k variables have positive values. Then x_1 a_1 + x_2 a_2 + … + x_k a_k = b.
Since each a_j ∈ Rᵐ and k > m, {a_1, a_2, …, a_k} forms a linearly dependent set. So there exist scalars λ_1, λ_2, …, λ_k, not all zero, such that Σ_{j=1}^{k} λ_j a_j = 0. Assume that x_r > 0 and λ_r ≠ 0, so that a_r is a linear combination of the remaining vectors of the set: Σ_{j≠r} λ_j a_j + λ_r a_r = 0, i.e. a_r = − Σ_{j≠r} (λ_j / λ_r) a_j.
We have x_r a_r + Σ_{j≠r} x_j a_j = b, i.e. Σ_{j≠r} x_j a_j − x_r Σ_{j≠r} (λ_j / λ_r) a_j = b, i.e. Σ_{j≠r} (x_j − (λ_j / λ_r) x_r) a_j = b. Put x_j′ = x_j − (λ_j / λ_r) x_r. Then Σ_{j≠r} x_j′ a_j = b.
Thus, x′ gives a new solution to the given LPP which depends on (k − 1) variables. In order that the new solution be feasible, we require x_j′ ≥ 0, i.e. x_j − (λ_j / λ_r) x_r ≥ 0, j = 1, 2, …, k, which for λ_j > 0 requires x_r / λ_r ≤ x_j / λ_j.
Thus, if we choose r such that x_r / λ_r = min_j { x_j / λ_j : λ_j > 0 }, then the new solution will also be feasible. Thus, we get a new feasible solution in which at most (k − 1) variables have positive values. This process can be continued till we get a feasible solution in which at most m variables have positive values.
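The reduction step in the proof can be carried out mechanically: find a dependence relation among the columns supporting the positive variables, then apply the minimum ratio x_j / λ_j over λ_j > 0. A sketch with exact rationals (the three-column example at the end is an assumed illustration, not from the text):

```python
from fractions import Fraction

def null_vector(cols):
    """Return a nonzero rational vector lam with sum_j lam_j * cols[j] = 0,
    assuming the columns are linearly dependent (more columns than rows)."""
    m, k = len(cols[0]), len(cols)
    # Row-reduce the m x k matrix whose j-th column is cols[j].
    M = [[Fraction(cols[j][i]) for j in range(k)] for i in range(m)]
    pivots, r = [], 0
    for c in range(k):
        piv = next((i for i in range(r, m) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [v / M[r][c] for v in M[r]]
        for i in range(m):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * p for a, p in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    free = next(c for c in range(k) if c not in pivots)  # a non-pivot column
    lam = [Fraction(0)] * k
    lam[free] = Fraction(1)
    for row, pc in zip(M[:r], pivots):
        lam[pc] = -row[free]
    return lam

def reduce_support(cols, x):
    """One step of Theorem 2.5: drive one positive variable to zero while
    keeping the combination sum_j x_j * cols[j] unchanged."""
    lam = null_vector(cols)
    if not any(l > 0 for l in lam):
        lam = [-l for l in lam]          # ensure some positive multiplier
    theta = min(xj / lj for xj, lj in zip(x, lam) if lj > 0)
    return [xj - theta * lj for xj, lj in zip(x, lam)]

# Columns a1 = (1,0), a2 = (0,1), a3 = (1,1) are dependent; x = (1,1,1)
# solves x1*a1 + x2*a2 + x3*a3 = (2,2).
cols = [(1, 0), (0, 1), (1, 1)]
x = [Fraction(1), Fraction(1), Fraction(1)]
x_new = reduce_support(cols, x)
print(x_new)  # [2, 2, 0]: one variable dropped, combination still (2, 2)
```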
Now let us discuss methods of solving LPP.
2.6 Graphical Method of solving LPP :
LPP involving two decision variables can be solved graphically. Using the results proved in section 2.5, the optimal solution to the LPP can be found by evaluating the value of the objective function at each vertex of the feasible region. Theorem 2.2 also states that an optimal solution to the LPP will occur only at one of the extreme points. The algorithm to solve the LPP using the graphical method is as follows :
2.6.1 Extreme point approach :
Step 1 : Formulate the LPP as discussed in section 2.3.
Step 2 : Plot all constraints on the graph paper and shade the feasible region.
Step 3 : List all extreme points of the feasible region. Evaluate the value of the objective function at each extreme point; the extreme point of the feasible region that optimizes (maximizes or minimizes) the objective function value gives the required optimal basic feasible solution.
2.6.2 Iso-profit (cost) function line approach :
Follow Step 1 and Step 2 as stated in 2.6.1.
Step 3 : Draw an iso-profit (iso-cost) line for a small value of the objective function without violating any of the constraints of the given LPP.
Step 4 : Move the iso-profit (iso-cost) lines parallel to themselves in the direction of increasing (or decreasing) objective function.
Step 5 : The feasible extreme point for which the value of the iso-profit (iso-cost) line is maximum (minimum) is the optimal solution. This means that while moving the iso-profit line in the required direction, the last point after which we move out of the feasible region is the required optimal solution.
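The extreme point approach lends itself to a direct computation: intersect each pair of boundary lines, discard infeasible intersection points, and evaluate the objective at the survivors. A sketch, applied to the data of Example 2.9 below (the tuple encoding of the constraints is my own):

```python
from fractions import Fraction
from itertools import combinations

def corner_points(lines):
    """Intersect every pair of boundary lines a*x + b*y = c and keep the
    points satisfying all constraints a*x + b*y <= c together with x, y >= 0."""
    pts = set()
    all_lines = lines + [(1, 0, 0, None), (0, 1, 0, None)]  # axes x=0, y=0
    for (a1, b1, c1, _), (a2, b2, c2, _) in combinations(all_lines, 2):
        det = a1 * b2 - a2 * b1
        if det == 0:
            continue  # parallel lines never intersect
        x = Fraction(c1 * b2 - c2 * b1, det)
        y = Fraction(a1 * c2 - a2 * c1, det)
        if x >= 0 and y >= 0 and all(a * x + b * y <= c for a, b, c, _ in lines):
            pts.add((x, y))
    return pts

# Example 2.9: maximize 2X + 4Y subject to the three machine-center limits.
lines = [(4, 6, 120, "A"), (2, 6, 72, "B"), (0, 1, 10, "C")]
best = max(corner_points(lines), key=lambda p: 2 * p[0] + 4 * p[1])
print(best, 2 * best[0] + 4 * best[1])  # (24, 4) with profit 64
```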
We discuss the steps involved in solving a simple linear programming model graphically with the help of the following
example.
Example 2.9: The PQR Company manufactures products X and Y. Each unit of X yields an incremental profit of
Rs.2, and each unit of Y, Rs.4. A unit of X requires four hours of processing at Machine Center A and two hours at
Machine Center B. A unit of Y requires six hours at Machine Center A, six hours at Machine Center B, and one hour at
Machine Centre C. Machine Center A has a maximum of 120 hours of available capacity per day. Machine Center B
has 72 hours, and Machine Center C has 10 hours. If the company wishes to maximize profit, how many units of X and
Y should be produced per day?
Solution: To maximize profit the objective function may be stated as
Maximize Z = 2X + 4Y
The maximization will be subject to the following constraints:
4X + 6Y ≤ 120 (Machine Center A constraint)
2X + 6Y ≤ 72 (Machine Center B constraint)
1Y ≤ 10 (Machine Center C constraint)
X, Y ≥ 0
1. Formulate the problem in mathematical terms. The equations for the problem are given above.
2. Plot constraint equations. The constraint equations are easily plotted by letting one variable equal zero and solving for the axis intercept of the other. (The inequality portions of the restrictions are disregarded for this step.) For the machine center A constraint equation, when X = 0, Y = 20, and when Y = 0, X = 30. For the machine center B constraint equation, when X = 0, Y = 12, and when Y = 0, X = 36. For the machine center C constraint equation, Y = 10 for all values of X. These lines are graphed.
3. Determine the area of feasibility. The direction of the inequality signs in each constraint determines the area where a feasible solution is found. In this case, all inequalities are of the less-than-or-equal-to variety, which means that it would be impossible to produce any combination of products that would lie to the right of any constraint line on the graph. The region of feasible solutions is shown on the graph and forms a convex polygon.
Plot the objective function. The objective function may be plotted by assuming some arbitrary total profit figure and then solving for the axis coordinates, as was done for the constraint equations. Other terms for the objective function, when used in this context, are the iso-profit or equal contribution line, because it shows all possible production combinations for any given profit figure.
4. Find the optimum point. It can be shown mathematically that the optimal combination of decision variables is always
found at an extreme point (corner point) of the convex polygon. In the graph there are four corner points (excluding the
origin), and we can determine which one is the optimum by either of two approaches. The first approach is to find the
values of the various corner solutions algebraically. This entails simultaneously solving the equations of various pairs
of intersecting lines and substituting the quantities of the resultant variables in the objective function. For example, the
calculations for the intersection of 2X + 6Y = 72 and Y = 10 are as follows:
Substituting Y = 10 in 2X + 6Y = 72 gives 2X + 6(10) = 72, 2X = 12, or X = 6. Substituting X = 6 and Y = 10 in the objective function, we get Profit = Rs.2X + Rs.4Y = Rs.2(6) + Rs.4(10) = Rs.12 + Rs.40 = Rs.52
A variation of this approach is to read the X and Y quantities directly from the graph and substitute these
quantities into the objective function, as shown in the previous calculation. The drawback in this approach is that in
problems with a large number of constraint equations, there will be many possible points to evaluate, and the procedure
of testing each one mathematically is inefficient.
The second and generally preferred approach entails using the objective function or iso-profit line directly to
find the optimum point. The procedure involves simply drawing a straight line parallel to any arbitrarily selected initial
iso-profit line so that the iso-profit line is farthest from the origin of the graph. (In cost-minimization problems, the
objective would be to draw the line through the point closest to the origin.) In the figure, the dashed line labeled Rs.2X
+ Rs.4Y = Rs.64 intersects the most extreme point. Note that the initial arbitrarily selected iso-profit line is necessary to
display the slope of the objective function for the particular problem. This is important since a different objective
function (try profit = 3X + 3Y) might indicate that some other point is farthest from the origin. Given that Rs.2X + Rs.4Y = Rs.64 is optimal, the amount of each variable to produce can be read from the graph: 24 units of product X and four units of product Y. No other combination of the products yields a greater profit.
Fig 2.1 : Feasible region for Example 2.9, showing the constraint lines and the iso-profit lines 2X + 4Y = 32 and 2X + 4Y = 64.
Getting back to the problem, we now evaluate the value of the objective function at all the corner points of the feasible region and select the point which gives the maximum value as the optimal solution. Here the corner points are (0,0), (0,10), (6,10), (24,4) and (30,0). Of these, the maximum value of the objective function occurs at the point (24,4). So the solution is : 24 units of product X and 4 units of product Y.
Example 2.10 Solve graphically the LPP :
Maximize z = 45x1 + 80x2
Subject to the constraints : 5x1 + 20x2 ≤ 400, 10x1 + 15x2 ≤ 450, and x1, x2 ≥ 0.
Solution :
Fig. 2.2 : Feasible region for Example 2.10, with vertices (0,0), (0,20), (24,14) and (45,0).
The vertices of the shaded region are (0,0), (0,20), (45,0) and (24,14). The values of the objective function z at these extreme points are 0, 1600, 2025 and 2200 respectively. The maximum value z = 2200 occurs at x1 = 24 and x2 = 14.
Example 2.11 Solve graphically the LPP : Maximize z = 7x1 + 3x2
Subject to the constraints :
x1 + 2x2 ≥ 3, x1 + x2 ≤ 4, 0 ≤ x1 ≤ 5/2, 0 ≤ x2 ≤ 3/2, and x1, x2 ≥ 0.
Solution :
Fig. 2.3 : Feasible region bounded by x1 + 2x2 = 3, x1 + x2 = 4, x1 = 5/2 and x2 = 3/2.
The vertices of the convex polygon are (0, 1.5), (2.5, 0.25) and (2.5, 1.5). The values of the objective function z at these points are 4.5, 18.25 and 22 respectively, of which the maximum, 22, is obtained at (2.5, 1.5).
2.7 Special cases in LP :
2.7.1 Alternative (or Multiple) Optimal Solution : We try to understand the concept of alternative or multiple solutions by considering the example :
Maximize P = 4x1 + 4x2
subject to the constraints :
x1 + 2x2 ≤ 10
6x1 + 6x2 ≤ 36
x1 ≤ 6 and x1, x2 ≥ 0.
Fig. 2.4 : Feasible region with intercepts (0, 5), (0, 6), (6, 0) and (10, 0); the iso-profit line coincides with the edge 6x1 + 6x2 = 36.
It can be observed in Fig. 2.4 that the iso-profit line coincides with the edge of the convex feasible region. Thus
there will be infinitely many points at which the objective function is maximum. Hence, any point on the iso-profit
line will give optimum solutions and these solutions will yield the same maximum value of the objective function.
2.7.2 An Unbounded Solution :
We have discussed in section 2.5 that when the value of the decision variables in an LP is allowed to increase indefinitely without violating the feasibility conditions, the solution is said to be unbounded. Here, the value of the objective function may tend to infinity.
Example 2.12 Solve (if possible) the following LPP :
Maximize z = 3x1 + 4x2
subject to the constraints :
x1 - x2 = -1
- x1 + x2 ≥ 0
and x1, x2 ≥ 0.
Fig. 2.5 : The feasible region is unbounded.
The feasible region suggests that the given LP has an unbounded solution.
2.7.3 Infeasible Solution :
An infeasible solution occurs when no values of the variables satisfy all the constraints simultaneously; equivalently, the LP is infeasible when the feasible region is empty.
Example 2.13 Verify that the following LP has no feasible solution.
Maximize z = 5x1 + 3x2
subject to the constraints : 4x1 + 2x2 ≤ 8, x1 ≥ 3, x2 ≥ 7 and x1, x2 ≥ 0.
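Both special cases can be checked numerically (a small sketch; the helper names are mine, and the constraint senses follow the statements of Examples 2.12 and 2.13 as given above):

```python
def feasible_2_12(x1, x2):
    """Constraints of Example 2.12 as stated above (an assumed reading)."""
    return x1 - x2 == -1 and -x1 + x2 >= 0 and x1 >= 0 and x2 >= 0

# Points (t, t + 1) stay feasible for every t >= 0 while z = 3t + 4(t + 1)
# grows without bound, so the LP is unbounded:
print(all(feasible_2_12(t, t + 1) for t in range(0, 1000, 50)))  # True

def feasible_2_13(x1, x2):
    """Constraints of Example 2.13."""
    return 4 * x1 + 2 * x2 <= 8 and x1 >= 3 and x2 >= 7

# x1 >= 3 already forces 4*x1 + 2*x2 >= 12 > 8, so nothing is feasible:
print(any(feasible_2_13(x1 / 2, x2 / 2)
          for x1 in range(0, 21) for x2 in range(0, 21)))  # False
```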
2.7.4 Redundant Constraint : A constraint which does not affect the feasible region is said to be a redundant constraint. Thus, a redundant constraint will not have any effect on the optimum value of the objective function.
2.8 Simplex Method :
In general, a system need not be restricted to only two decision variables. In this section we explore an algebraic technique which solves the LPP iteratively in a finite number of steps. This method is known as the simplex method. In this method, we start with one of the vertices (extreme points) of S_F and at each step or iteration move to an adjacent vertex in such a way that the value of the objective function improves at each iteration. This method either gives an optimum solution (if it exists) or gives an indication that the given LPP has an unbounded solution.
Consider the LPP :
Maximize (Minimize) Z = cᵀx subject to Ax = b, x ≥ 0. Assume that rank(A, b) = rank(A) = m ≤ n. This means that the set of constraint equations is consistent, all m rows of A are linearly independent, and the number of constraints is less than or equal to the number of decision variables.
Further, Ax = b can be written as
x_1 a_1 + x_2 a_2 + … + x_n a_n = b
i.e. a_j is the column of A associated with the variable x_j, j = 1, 2, …, n. Since the rank of A is equal to m, there are m linearly independent columns of A, and these m linearly independent columns will form a basis of Rᵐ. Let B : m × m denote the matrix formed by m linearly independent columns of A; then B = (b_1, b_2, …, b_j, …, b_m) will represent the basis matrix. Obviously, B : m × m is non-singular, so that B⁻¹ exists. Any column b_i of B is some column a_j of A. Note that it is not necessary that the arrangement of the columns in B be in accordance with those of A.
Any vector a_j of A can be expressed as a linear combination of the columns of B, i.e. for any a_j there exist scalars y_{1j}, y_{2j}, …, y_{mj} such that a_j = Σ_{i=1}^{m} y_{ij} b_i, or a_j = B y_j, j = 1, 2, …, n, where y_j = (y_{1j}, y_{2j}, …, y_{mj})ᵀ. Thus,
A = (a_1, a_2, …, a_j, …, a_n) = (B y_1, B y_2, …, B y_j, …, B y_n) = B (y_1, y_2, …, y_j, …, y_n) = BY
i.e. A = BY implies Y = B⁻¹A, or y_j = B⁻¹ a_j, j = 1, 2, …, n.
With this discussion, we are ready to study the simplex method.
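The relation y_j = B⁻¹a_j can be illustrated numerically; note in particular that the basic columns themselves map to unit vectors. A sketch with an assumed 2 × 2 basis (the matrix entries are illustrative, not from the text):

```python
from fractions import Fraction

def mat_inv_2x2(B):
    """Inverse of a 2x2 matrix with exact rationals."""
    (a, b), (c, d) = B
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

def mat_vec(M, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in M]

# Columns of an assumed A, with the basis B taken from the first two columns:
A_cols = [(2, 1), (1, 3), (1, 0), (0, 1)]   # a1, a2, a3, a4
B = [[2, 1], [1, 3]]                         # basis (a1, a2)
B_inv = mat_inv_2x2(B)
Y = [mat_vec(B_inv, a) for a in A_cols]      # y_j = B^{-1} a_j
print(Y[0], Y[1])  # the basic columns reproduce the unit vectors e1, e2
```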
Consider the LPP :
Maximize Z = cᵀx subject to Ax = b, x ≥ 0.
The aim is to obtain an optimum basic feasible solution for the given LPP. Since the simplex method is an iterative method, we assume that an initial basic feasible solution is available.
Let B : m × m denote a basis matrix, say B = (b_1, b_2, …, b_j, …, b_m). Each b_i is some a_j of A, i = 1, 2, …, m and j = 1, 2, …, n. The columns of A included in B are called basic vectors and those which are not in B are called non-basic vectors. The variables in x corresponding to the vectors in the basis matrix B are known as basic variables and the rest are known as non-basic variables. The constraint equations Ax = b can now be written as B x_B + R x_R = b, where B : m × m is the basis matrix and R : m × (n − m) is the non-basis matrix formed by the non-basic vectors of A; x_B : m × 1 is the vector corresponding to the basic variables and x_R : (n − m) × 1 is the vector corresponding to the non-basic variables. Taking x_R = 0, we get B x_B = b, or x_B = B⁻¹ b.
The basis matrix B : m × m is chosen in such a way that x_B = B⁻¹ b ≥ 0. Then we have a basic feasible solution to the given LPP.
Let c_B : m × 1 denote the cost vector corresponding to the variables in x_B. Then the value of the objective function for this solution is
Z_B = c_Bᵀ x_B + c_Rᵀ x_R = c_Bᵀ x_B = c_Bᵀ B⁻¹ b.
Further, corresponding to the above basis matrix we define, for each j = 1, 2, …, n, a new quantity
z_j = c_Bᵀ y_j = Σ_{i=1}^{m} c_{Bi} y_{ij} = c_Bᵀ B⁻¹ a_j.
The quantity z_j − c_j (or c_j − z_j), j = 1, 2, …, n is known as the net evaluation. After obtaining a basic feasible solution, check the following :
1) whether the basic feasible solution is optimum or not; and
2) if not, obtain a new improved basic feasible solution. This can be done by removing one of the basic vectors from the matrix B = (b_1, b_2, …, b_r, …, b_m) and inserting a non-basic vector of A = (a_1, a_2, …, a_j, …, a_n) in its place.
The problem is : which basic vector should be removed from B, and which of the non-basic vectors should be introduced in its place?
Suppose we remove a basic vector b_r from B and introduce a non-basic vector a_j of A in its place. Let B* denote the new basis matrix obtained by putting a_j in place of b_r; then
B* = (b_1, b_2, …, b_{r−1}, a_j, b_{r+1}, …, b_m)
= (b_1, b_2, …, b_{r−1}, b_r + a_j − b_r, b_{r+1}, …, b_m)
= (b_1, b_2, …, b_{r−1}, b_r, b_{r+1}, …, b_m) + (0, 0, …, 0, a_j − b_r, 0, …, 0)
Therefore, B* = B + (a_j − b_r) e_rᵀ, where e_r ∈ Rᵐ is the r-th unit vector. By the Sherman-Morrison formula, using B⁻¹ a_j = y_j and B⁻¹ b_r = e_r, this gives
B*⁻¹ = B⁻¹ − B⁻¹(a_j − b_r) e_rᵀ B⁻¹ / [1 + e_rᵀ B⁻¹ (a_j − b_r)] = B⁻¹ − B⁻¹(a_j − b_r) e_rᵀ B⁻¹ / y_{rj} = B⁻¹ − (y_j − e_r) e_rᵀ B⁻¹ / y_{rj}
In order that B*⁻¹ exists, the necessary condition is y_{rj} ≠ 0. Thus, a_j can replace b_r if and only if y_{rj} ≠ 0. The new solution is then
x_B* = B*⁻¹ b = [B⁻¹ − (y_j − e_r) e_rᵀ B⁻¹ / y_{rj}] b = B⁻¹ b − (y_j − e_r) e_rᵀ B⁻¹ b / y_{rj} = x_B − (y_j − e_r) e_rᵀ x_B / y_{rj}
Therefore, x_B* = x_B − (x_{Br} / y_{rj}) (y_j − e_r).
Hence, the new solution is x_{Bi}* = x_{Bi} − (x_{Br} / y_{rj}) y_{ij}, i = 1, 2, …, m, i ≠ r (because in e_r the i-th element is zero), and
x_{Br}* = x_{Br} − (x_{Br} / y_{rj}) (y_{rj} − 1) = x_{Br} / y_{rj}.
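The rank-one inverse-update formula derived above (the Sherman-Morrison identity) can be verified numerically by comparing it against a direct inversion of the new basis. A sketch with an assumed 2 × 2 basis (the matrix entries are illustrative):

```python
from fractions import Fraction

def inv2(B):
    """Inverse of a 2x2 matrix with exact rationals."""
    (a, b), (c, d) = B
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# Basis B with columns b1 = (2,1), b2 = (1,3); replace b2 by a new column a_j.
a_j = (1, 1)
B = [[2, 1], [1, 3]]
B_inv = inv2(B)
y_j = matvec(B_inv, a_j)                 # y_j = B^{-1} a_j
r = 1                                    # 0-based index of the leaving column
assert y_j[r] != 0                       # pivot condition y_rj != 0

# Rank-one update: B*^{-1} = B^{-1} - (y_j - e_r) e_r^T B^{-1} / y_rj
e = [Fraction(0), Fraction(0)]
e[r] = Fraction(1)
upd = [[B_inv[i][k] - (y_j[i] - e[i]) * B_inv[r][k] / y_j[r] for k in range(2)]
       for i in range(2)]

direct = inv2([[2, 1], [1, 1]])          # B* has columns b1 and a_j
print(upd == direct)  # True
```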
We thus have a new basic solution. In order that the new solution be feasible, we require x_{Bi}* ≥ 0, i = 1, 2, …, m. Since x_{Br} ≥ 0, x_{Br}* = x_{Br} / y_{rj} ≥ 0 if y_{rj} > 0. Thus, we require y_{rj} > 0. Further, x_{Bi} − (x_{Br} / y_{rj}) y_{ij} ≥ 0 implies x_{Br} / y_{rj} ≤ x_{Bi} / y_{ij} for every i with y_{ij} > 0. Hence,
x_{Br} / y_{rj} = min_i { x_{Bi} / y_{ij} : y_{ij} > 0 }. (2.13)
Thus, the vector b_r to be removed from B should be chosen in accordance with (2.13). Let c_B* denote the new cost vector corresponding to the new solution; then
c_B* = (c_{B1}, c_{B2}, …, c_{B,r−1}, c_j, c_{B,r+1}, …, c_{Bm})ᵀ = (c_{B1}, c_{B2}, …, c_{B,r−1}, c_{Br} + c_j − c_{Br}, c_{B,r+1}, …, c_{Bm})ᵀ = c_B + (c_j − c_{Br}) e_r
and the new value of the objective function is
Z* = c_B*ᵀ x_B* = [c_Bᵀ + (c_j − c_{Br}) e_rᵀ] [x_B − (x_{Br} / y_{rj}) (y_j − e_r)]. Put θ = x_{Br} / y_{rj}; then
Z* = [c_Bᵀ + (c_j − c_{Br}) e_rᵀ] [x_B − θ (y_j − e_r)] = Z − θ (z_j − c_j). (2.14)
Our aim is to maximize Z, and so we require Z* > Z, equivalently (since θ ≥ 0) z_j − c_j < 0 (or c_j − z_j > 0). Therefore, the vector a_j of A to be introduced into the basis matrix B must be such that z_j − c_j < 0 (or c_j − z_j > 0). Note that the determination of a_j does not require information about b_r. However, the determination of the vector b_r to be removed from the basis requires information about both r and j.
We should first determine the vector a_j to be introduced into the new basis and then, using (2.13), determine the vector b_r to be removed from B. Continuing in this way for a finite number of steps, we can ultimately obtain the optimum solution. The new Y-matrix is
Y* = B*⁻¹ A = [B⁻¹ − (y_j − e_r) e_rᵀ B⁻¹ / y_{rj}] A = B⁻¹ A − (y_j − e_r) e_rᵀ B⁻¹ A / y_{rj}
= Y − (1 / y_{rj}) (y_j − e_r) e_rᵀ Y = Y − (1 / y_{rj}) (y_j − e_r) (y_{r1}, y_{r2}, …, y_{rj}, …, y_{rn}).
Comparing the elements on both sides, we get
y_{ik}* = y_{ik} − (y_{ij} / y_{rj}) y_{rk}, i = 1, 2, …, m, i ≠ r, k = 1, 2, …, n,
y_{rk}* = y_{rk} − ((y_{rj} − 1) / y_{rj}) y_{rk} = y_{rk} / y_{rj}, k = 1, 2, …, n,
and in particular y_{rj}* = y_{rj} / y_{rj} = 1.
Note :
1. While discussing the simplex method, we have assumed an initial basic feasible solution with a basis matrix B. If B = I then B⁻¹ = I, the initial solution is x_B = B⁻¹ b = b, and the Y-matrix is Y = B⁻¹ A = A. The net evaluations are z_j − c_j = − c_j + c_Bᵀ B⁻¹ a_j = − c_j + c_Bᵀ a_j, j = 1, 2, …, n, and the value of the objective function is Z = c_Bᵀ x_B = c_Bᵀ b. Thus, we observe that if the initial basis matrix is a unit matrix, then it is easy to obtain the initial solution and the related parameters. Hence, in order to obtain an initial basic feasible solution, we shall assume that a unit matrix is present as a sub-matrix of the coefficient matrix A.
2. For choosing the incoming vector a_j for the next basis, choose the z_j − c_j (c_j − z_j) which is most negative (positive). If two or more z_j − c_j (c_j − z_j) have the same most negative (positive) value, choose any one of the corresponding vectors to enter the basis.
3. After choosing the incoming vector, choose the outgoing vector which satisfies (2.13). If this minimum value is attained for more than one vector, choose any one of the corresponding basis vectors from B.
The following two theorems are stated without proof.
Theorem 2.6 Every basic feasible solution to a LPP corresponds to a vertex of the set of feasible solutions.
Theorem 2.7 For the LPP : Maximize Z = cᵀx subject to Ax = b, x ≥ 0, a necessary and sufficient condition for a basic feasible solution x_B = B⁻¹ b corresponding to a basis matrix B : m × m to be optimum is that z_j − c_j ≥ 0 (or c_j − z_j ≤ 0) for all j = 1, 2, …, n.
2.8.1 Simplex Algorithm : (Maximization Case)
To find an optimum basic feasible solution to a standard LPP (maximization case, all constraints of the ≤ type, all b_k, i.e. all R.H.S. values, positive), perform the following steps :
Step 1 : Select an initial basic feasible solution to initiate the procedure.
Step 2 : Test for optimality as discussed in section 2.8. That is, if all z_j − c_j ≥ 0 (c_j − z_j ≤ 0), then the basic feasible solution is optimal. If for at least one column z_j − c_j < 0 (or c_j − z_j > 0) and every element in that column is non-positive, then there exists an unbounded solution to the given problem. If at least one z_j − c_j < 0 (c_j − z_j > 0) and each such column has at least one positive element in some row, then the solution can be improved.
Step 3 : To select the variable entering the basis, select the variable that has the most negative z_j − c_j value (or most positive c_j − z_j). The column so selected is called the key column.
Step 4 : After selecting the key column, the next step is to decide the outgoing variable using (2.13). The ratio in (2.13) is called the replacement ratio (RR). The replacement ratio restricts the number of units of the incoming variable that can be obtained from the exchange. The row selected in this manner is called the key row. The element at the intersection of the key row and the key column is called the key element.
Note that division by a negative or zero element in the key column is not allowed. Denote these cases by a dash.
Step 5 : Now we find the new solution. If the key element is 1, then the key row is unchanged in the next simplex table. If the key element is other than 1, then divide each element in the key row (including the element in the x_B column) by the key element to form the new row. The new values of the elements in the remaining rows for the next iteration can be evaluated by performing elementary row operations so that all elements except the key element in the key column become zero. This can also be calculated as follows :
(new row numbers) = (numbers in old row) − [(number above or below the key element) × (corresponding number in the new row, that is, the row replaced in the previous step)]
If the new solution so obtained satisfies Step 2, terminate the process; otherwise perform Step 4 and Step 5 again.
Repeat the steps for a finite number of iterations until an optimal basic feasible solution is obtained, i.e. no further improvement is possible.
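The steps above can be collected into a compact tableau implementation (a sketch using exact rational arithmetic; the small problem solved at the end is an assumed illustration, not one of the worked examples):

```python
from fractions import Fraction

def simplex_max(c, A, b):
    """Tableau simplex for: maximize c^T x s.t. A x <= b, x >= 0, b >= 0.
    Follows the steps of section 2.8.1; returns (optimal value, solution)."""
    m, n = len(A), len(c)
    # Initial tableau with an identity sub-matrix of slack columns (Step 1).
    T = [[Fraction(A[i][j]) for j in range(n)]
         + [Fraction(1 if k == i else 0) for k in range(m)]
         + [Fraction(b[i])] for i in range(m)]
    cost = [Fraction(cj) for cj in c] + [Fraction(0)] * (m + 1)
    basis = list(range(n, n + m))                  # slacks are basic at start
    while True:
        # Net evaluations z_j - c_j = c_B^T y_j - c_j (Step 2).
        net = [sum(cost[basis[i]] * T[i][j] for i in range(m)) - cost[j]
               for j in range(n + m)]
        j = min(range(n + m), key=lambda k: net[k])  # key column (Step 3)
        if net[j] >= 0:
            break                                    # optimal
        ratios = [(T[i][-1] / T[i][j], i) for i in range(m) if T[i][j] > 0]
        if not ratios:
            raise ValueError("unbounded")            # no positive key entry
        _, r = min(ratios)                           # key row (Step 4)
        piv = T[r][j]
        T[r] = [v / piv for v in T[r]]               # normalize key row (Step 5)
        for i in range(m):
            if i != r and T[i][j] != 0:
                T[i] = [a - T[i][j] * p for a, p in zip(T[i], T[r])]
        basis[r] = j
    x = [Fraction(0)] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    z = sum(Fraction(ci) * xi for ci, xi in zip(c, x))
    return z, x

# An assumed small problem: maximize 3x1 + 2x2, x1 + x2 <= 4, x1 + 3x2 <= 6.
z, x = simplex_max([3, 2], [[1, 1], [1, 3]], [4, 6])
print(z, x)  # 12, [4, 0]
```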
Example 2.14 Use the simplex method to maximize z = 5x1 + 4x2 subject to the constraints : 4x1 + 5x2 ≤ 10, 3x1 + 2x2 ≤ 9, 8x1 + 3x2 ≤ 12 and x1, x2 ≥ 0.
Solution : To write the given LPP in standard form, we add slack variables s1, s2 and s3 to the constraints. Thus the LPP is
Maximize z = 5x1 + 4x2 + 0s1 + 0s2 + 0s3 subject to the constraints :
4x1 + 5x2 + s1 = 10
3x1 + 2x2 + s2 = 9
8x1 + 3x2 + s3 = 12
and x1, x2, s1, s2, s3 ≥ 0.
Putting x1 = x2 = 0, we get the first iteration as
           cj        5     4     0     0     0
cB    B    xB        x1    x2    s1    s2    s3     RR
0     s1   10        4     5     1     0     0      10/4
0     s2   9         3     2     0     1     0      9/3
0     s3   12        8     3     0     0     1      12/8
z = 0      zj - cj  -5    -4     0     0     0
Clearly, the most negative z_j − c_j corresponds to x1, so x1 will enter the basis. The minimum replacement ratio corresponds to s3, so s3 will leave the basis. Thus the key column corresponds to x1 and the key row to s3. The leading (key) element is 8, which is other than 1, so divide all elements of the key row by 8 and use elementary row transformations so that the entries of the key column in the first and second rows become zero. The new iteration table is
           cj        5     4      0     0     0
cB    B    xB        x1    x2     s1    s2    s3     RR
0     s1   4         0     7/2    1     0    -1/2    8/7
0     s2   9/2       0     7/8    0     1    -3/8    36/7
5     x1   3/2       1     3/8    0     0     1/8    4
z = 15/2   zj - cj   0    -17/8   0     0     5/8
Clearly, the most negative z_j − c_j corresponds to x2, so x2 will enter the basis. The minimum replacement ratio corresponds to s1, so s1 will leave the basis. Thus the key column corresponds to x2 and the key row to s1. The leading (key) element is 7/2, which is other than 1, so divide all elements of the key row by 7/2 and use elementary row transformations so that the entries of the key column in the second and third rows become zero. The new iteration table is
           cj        5      4      0       0      0
cB    B     xB      x1     x2     s1      s2     s3
4     x2    8/7      0      1     2/7      0    -1/7
0     s2    7/2      0      0    -1/4      1    -1/4
5     x1   15/14     1      0    -3/28     0    5/28
z = 139/14  zj - cj  0      0    17/28     0    9/28
Since all zj - cj ≥ 0, the solution x1 = 15/14, x2 = 8/7 maximizes z, with maximum z = 139/14.
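The computation above can be cross-checked with a compact tableau simplex for maximization problems with ≤ constraints. This is an illustrative sketch (the function name and row layout are our own, not from the text); applied to Example 2.14 it reproduces x1 = 15/14, x2 = 8/7 and z = 139/14 exactly.

```python
from fractions import Fraction

def simplex_max(c, A, b):
    """Maximize c.x subject to A x <= b, x >= 0 (with b >= 0), using the
    tableau method with the slack variables as the starting basis."""
    m, n = len(A), len(c)
    # Rows: [b | A | I]; the cost row holds zj - cj (starts as -cj).
    rows = [[Fraction(b[i])] + [Fraction(v) for v in A[i]]
            + [Fraction(int(j == i)) for j in range(m)] for i in range(m)]
    cost = [Fraction(0)] + [-Fraction(v) for v in c] + [Fraction(0)] * m
    basis = [n + i for i in range(m)]
    while True:
        # Entering variable: most negative zj - cj.
        key_col = min(range(1, n + m + 1), key=lambda j: cost[j])
        if cost[key_col] >= 0:
            break
        # Leaving variable: minimum positive replacement ratio.
        ratios = [(rows[i][0] / rows[i][key_col], i)
                  for i in range(m) if rows[i][key_col] > 0]
        key_row = min(ratios)[1]
        basis[key_row] = key_col - 1
        ke = rows[key_row][key_col]
        rows[key_row] = [v / ke for v in rows[key_row]]
        for i in range(m):
            if i != key_row:
                f = rows[i][key_col]
                rows[i] = [v - f * w for v, w in zip(rows[i], rows[key_row])]
        f = cost[key_col]
        cost = [v - f * w for v, w in zip(cost, rows[key_row])]
    x = [Fraction(0)] * n
    for i in range(m):
        if basis[i] < n:
            x[basis[i]] = rows[i][0]
    z = sum(Fraction(ci) * xi for ci, xi in zip(c, x))
    return x, z

x, z = simplex_max([5, 4], [[4, 5], [3, 2], [8, 3]], [10, 9, 12])
```

The intermediate tableaus generated by this routine coincide with the three iteration tables worked out by hand above.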
Example 2.15 Maximize Z = 2x1 + 4x2,
subject to 2x1 + 3x2 ≤ 48,
x1 + 3x2 ≤ 42,
x1 + x2 ≤ 21,
and x1, x2 ≥ 0
We will check optimality here by evaluating the NER (net evaluation row) as Cj - Zj. We will also format the table in a different way so that the reader becomes accustomed to both ways: a continuous table in which all the iterations are recorded.
Introducing the slack variables and entering the values in the simplex table we get,
            Cj     2      4      0      0      0
     Basis        x1     x2     s1     s2     s3    R.H.S     R.R
0     s1           2      3      1      0      0     48     48/3 = 16
0     s2           1      3      0      1      0     42       14
0     s3           1      1      0      0      1     21       21
      NER = Cj-Zj  2      4      0      0      0    Z = 0
0     s1           1      0      1     -1      0      6        6
4     x2          1/3     1      0     1/3     0     14       42
0     s3          2/3     0      0    -1/3     1      7      21/2
      NER = Cj-Zj 2/3     0      0    -4/3     0    Z = 56
2     x1           1      0      1     -1      0      6
4     x2           0      1    -1/3    2/3     0     12
0     s3           0      0    -2/3    1/3     1      3
      NER = Cj-Zj  0      0    -2/3   -2/3     0    Z = 60
As all NER (Cj - Zj) entries are ≤ 0, the optimality criterion is satisfied and the solution obtained is optimal.
Thus, the final solution is x1 = 6, x2 = 12 and maximum Z = 60.
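Because this problem has only two variables, the simplex answer can also be verified independently by enumerating the corner points of the feasible region. The sketch below (our own illustrative check, not part of the text) intersects every pair of constraint boundary lines and keeps the feasible intersections.

```python
from fractions import Fraction
from itertools import combinations

# Constraints of Example 2.15 as a1*x1 + a2*x2 <= rhs, including x1, x2 >= 0
# rewritten as -x1 <= 0 and -x2 <= 0.
cons = [(2, 3, 48), (1, 3, 42), (1, 1, 21), (-1, 0, 0), (0, -1, 0)]

def corner_points():
    """Intersect every pair of boundary lines; yield the intersections
    that satisfy all constraints (the feasible vertices)."""
    for (a, b, p), (c, d, q) in combinations(cons, 2):
        det = a * d - b * c
        if det == 0:
            continue  # parallel boundary lines: no intersection point
        x1 = Fraction(p * d - b * q, det)
        x2 = Fraction(a * q - p * c, det)
        if all(e * x1 + f * x2 <= g for e, f, g in cons):
            yield x1, x2

best = max(corner_points(), key=lambda v: 2 * v[0] + 4 * v[1])
```

The maximum of Z = 2x1 + 4x2 over the vertices is attained at (6, 12), agreeing with the simplex table.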
Example 2.16 Minimize Z = x1 - 3x2 + 2x3
subject to 3x1 - x2 + 3x3 ≤ 7, -2x1 + 4x2 ≤ 12, -4x1 + 3x2 + 8x3 ≤ 10 and x1, x2, x3 ≥ 0
This being a minimization problem with all constraints of ≤ type, we will first convert it into a maximization problem by multiplying the objective function Z by -1 and then maximizing the same,
i.e. Maximize Z* = -Z = -x1 + 3x2 - 2x3
Preparing the simplex table and solving,
            Cj    -1      3     -2      0      0      0
     Basis        x1     x2     x3     s1     s2     s3    RHS     R.R
0     s1           3     -1      3      1      0      0      7      -7
0     s2          -2      4      0      0      1      0     12       3
0     s3          -4      3      8      0      0      1     10     10/3
      NER = Cj-Zj -1      3     -2      0      0      0    Z* = 0
0     s1          5/2     0      3      1     1/4     0     10       4
3     x2         -1/2     1      0      0     1/4     0      3      -6
0     s3         -5/2     0      8      0    -3/4     1      1     -2/5
      NER = Cj-Zj 1/2     0     -2      0    -3/4     0    Z* = 9
-1    x1           1      0     6/5    2/5   1/10     0      4
3     x2           0      1     3/5    1/5   3/10     0      5
0     s3           0      0     11      1    -1/2     1     11
      NER = Cj-Zj  0      0    -13/5  -1/5   -4/5     0    Z* = 11
As all NER (Cj - Zj) entries are ≤ 0, the optimality criterion is satisfied and the solution obtained is optimal. The optimum value of the original objective function is obtained by taking -Z*.
Thus, the final solution is x1 = 4, x2 = 5, x3 = 0 and minimum Z = -Z* = -11.
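The min/max conversion and the final answer can be checked directly: the point read off the final table must satisfy all three original constraints, and the original objective there must equal -Z*. A small sketch with exact fractions (an illustrative check of ours):

```python
from fractions import Fraction as F

x1, x2, x3 = F(4), F(5), F(0)            # solution read off the final table
constraints = [
    (3 * x1 - x2 + 3 * x3, 7),           #  3x1 -  x2 + 3x3 <= 7
    (-2 * x1 + 4 * x2, 12),              # -2x1 + 4x2       <= 12
    (-4 * x1 + 3 * x2 + 8 * x3, 10),     # -4x1 + 3x2 + 8x3 <= 10
]
feasible = all(lhs <= rhs for lhs, rhs in constraints)
z = x1 - 3 * x2 + 2 * x3                 # original (minimization) objective
z_star = -z                              # value of the maximized Z* = -Z
```

The first two constraints are binding at this point, as the zero slacks s1 and s2 in the final table indicate.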
Example 2.17 : Use the simplex method to maximize z = 2x1 + 3x2 subject to the constraints : -x1 + 2x2 ≤ 4, x1 + x2 ≤ 6, x1 + 3x2 ≤ 9 and x1, x2 unrestricted.
Solution : s1, s2, s3 are slack variables introduced in the given three constraints. Since x1 and x2 are unrestricted, we introduce the non-negative variables x1' ≥ 0, x1'' ≥ 0 and x2' ≥ 0, x2'' ≥ 0 so that x1 = x1' - x1'' and x2 = x2' - x2''.
           cj        2     -2      3     -3      0      0      0
cB    B     xB     x1'   x1''    x2'   x2''    s1     s2     s3      RR
0     s1     4      -1      1      2     -2      1      0      0       2
0     s2     6       1     -1      1     -1      0      1      0       6
0     s3     9       1     -1      3     -3      0      0      1       3
z = 0     zj - cj   -2      2     -3      3      0      0      0
x2' enters the basis and s1 leaves the basis. The iterative table is
           cj        2     -2      3     -3      0      0      0
cB    B     xB     x1'   x1''    x2'   x2''    s1     s2     s3      RR
3     x2'    2    -1/2    1/2      1     -1    1/2      0      0       -
0     s2     4     3/2   -3/2      0      0   -1/2      1      0      8/3
0     s3     3     5/2   -5/2      0      0   -3/2      0      1      6/5
z = 6     zj - cj -7/2    7/2      0      0    3/2      0      0
x1' enters the basis and s3 leaves the basis. The iterative table is
           cj        2     -2      3     -3      0      0      0
cB    B     xB     x1'   x1''    x2'   x2''    s1     s2     s3      RR
3     x2'  13/5      0      0      1     -1    1/5      0    1/5      13
0     s2   11/5      0      0      0      0    2/5      1   -3/5     11/2
2     x1'   6/5      1     -1      0      0   -3/5      0    2/5       -
z = 51/5  zj - cj    0      0      0      0   -3/5      0    7/5
s1 enters the basis and s2 leaves the basis. The iterative table is
           cj        2     -2      3     -3      0      0      0
cB    B     xB     x1'   x1''    x2'   x2''    s1     s2     s3
3     x2'   3/2      0      0      1     -1      0   -1/2    1/2
0     s1   11/2      0      0      0      0      1    5/2   -3/2
2     x1'   9/2      1     -1      0      0      0    3/2   -1/2
z = 27/2  zj - cj    0      0      0      0      0    3/2    1/2
Since the zj - cj are all non-negative, the optimum solution x1' = 9/2 and x2' = 3/2 with maximum z = 27/2 is obtained.
Therefore x1 = x1' - x1'' = 9/2 - 0 = 9/2 and x2 = x2' - x2'' = 3/2 - 0 = 3/2 is the required basic feasible solution.
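The substitution x = x' - x'' works because any real number can be written as the difference of two non-negative numbers. The sketch below (our own illustration) shows one such decomposition and verifies the optimum just obtained.

```python
from fractions import Fraction as F

def split(x):
    """Write x as (x', x'') with x', x'' >= 0 and x = x' - x''
    (the simplex-friendly form of an unrestricted variable)."""
    return (max(x, 0), max(-x, 0))

# Optimum of Example 2.17: x1' = 9/2, x1'' = 0 and x2' = 3/2, x2'' = 0.
x1 = F(9, 2) - F(0)
x2 = F(3, 2) - F(0)
z = 2 * x1 + 3 * x2
ok = (-x1 + 2 * x2 <= 4) and (x1 + x2 <= 6) and (x1 + 3 * x2 <= 9)
```

At the optimum the second and third constraints are binding, which is why s2 and s3 have left the basis.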
2.9 Minimization Case :
Consider the LPP
Minimize Z = c1x1 + c2x2 + ... + cnxn subject to the constraints ai1x1 + ai2x2 + ... + ainxn ≥ bi (i = 1, 2, ..., m) and xj ≥ 0.
An inequality of ≥ type should be transformed by subtracting a surplus variable, i.e.
ai1x1 + ai2x2 + ... + ainxn - si = bi, with xj, si ≥ 0
By putting xj = 0, j = 1, 2, ..., n, we get an initial solution si = -bi, which violates the non-negativity criterion for the surplus variables. To preserve the non-negativity of the surplus variables we add artificial variables (say) Ai, i = 1, 2, ..., m, to get an initial basic feasible solution. Thus we have the constraint equations
ai1x1 + ai2x2 + ... + ainxn - si + Ai = bi, with xj, si, Ai ≥ 0
Now the resultant LPP has n decision variables, m surplus variables and m artificial variables. An initial basic feasible solution of the resultant LPP can be obtained by putting the (n + m) decision and surplus variables equal to zero; i.e. the iteration starts with Ai = bi, i = 1, 2, ..., m. The artificial variables do not contribute any value to the optimal solution; they are added only to retain the feasibility condition of the LPP. We will discuss the following two methods to remove the artificial variables from the optimal solution:
1. Two-Phase method
2. Big M method (penalty method)
Note : For constraints with equality, we will add only the artificial variables.
2.9.1 Two-Phase Method :
In phase I of this method, we minimize the sum of the artificial variables subject to the constraints of the given LPP. Phase II then optimizes the original objective function, taking the final iteration of phase I as its initial iteration. Let us study the steps to be performed in solving an LPP by the two-phase method.
Step 1 Check the non-negativity of the bi (constant terms). If some of them are negative, make them positive by multiplying those constraints by -1.
Step 2 Subtract surplus variables and add artificial variables to reformulate the inequality constraints as equations.
Step 3 Initialize the iterative step by taking Ai = bi.
Step 4 Assign a cost -1 to each artificial variable for a maximization problem (+1 for minimization) and a cost 0 to all other variables (surplus and decision variables) of the LPP in the objective function. Thus the objective function for phase I will be
Maximize z* = -A1 - A2 - ... - Ap (p ≤ m).
Step 5 Solve the problem written in step 4 until one of the following three cases arises :
1. All zj - cj ≥ 0 and at least one artificial variable occurs in the optimum basis at a positive level, hence max z* < 0 : the LPP has no feasible solution.
2. All zj - cj ≥ 0, max z* = 0 and at least one artificial variable occurs in the optimum basis (at zero level).
3. All zj - cj ≥ 0 and no artificial variable appears in the optimum basis.
If case 2 or 3 occurs, go to phase II.
Step 6 Use the optimum basic feasible solution of phase I as the initial solution for the given LPP. Assign the actual costs to the original variables and 0 to the other variables in the objective function. Use the simplex method to improve the solution.
Note : Maximize z = - Minimize (- z).
Example 2.18 Use the two-phase method to minimize z = x1 + x2 subject to the constraints : 2x1 + x2 ≥ 4, x1 + 7x2 ≥ 7, and x1, x2 ≥ 0.
Solution : In order to get constraint equations, introduce surplus variables s1, s2 ≥ 0 and artificial variables A1, A2 ≥ 0. The LPP converted to the maximization form is
Maximize z = -x1 - x2 + 0s1 + 0s2 - A1 - A2 subject to the constraints :
2x1 + x2 - s1 + A1 = 4,
x1 + 7x2 - s2 + A2 = 7,
and x1, x2, s1, s2, A1, A2 ≥ 0.
Phase I : Here the objective function is Maximize z* = 0x1 + 0x2 + 0s1 + 0s2 - A1 - A2 subject to the above constraints. Initialize the solution by putting x1 = x2 = s1 = s2 = 0; then A1 = 4 and A2 = 7. The simplex table is
           cj        0      0      0      0     -1     -1
cB    B     xB      x1     x2     s1     s2     A1     A2      RR
-1    A1     4       2      1     -1      0      1      0       4
-1    A2     7       1      7      0     -1      0      1       1
z* = -11  zj - cj   -3     -8      1      1      0      0
x2 enters the basis and A2 leaves the basis. The new iterative table is
           cj        0      0      0      0     -1     -1
cB    B     xB      x1     x2     s1     s2     A1     A2      RR
-1    A1     3     13/7     0     -1     1/7     1    -1/7    21/13
0     x2     1      1/7     1      0    -1/7     0     1/7      7
z* = -3   zj - cj  -13/7    0      1    -1/7     0     8/7
x1 enters the basis and A1 leaves the basis. The new iterative table is
           cj        0      0      0      0     -1     -1
cB    B     xB      x1     x2     s1     s2     A1     A2
0     x1   21/13     1      0    -7/13   1/13   7/13  -1/13
0     x2   10/13     0      1     1/13  -2/13  -1/13   2/13
z* = 0    zj - cj    0      0      0      0      1      1
Since the zj - cj are all non-negative and no artificial variable appears in the basis, the optimum basic feasible solution to the objective function of phase I is obtained; go to phase II.
Phase II : Consider the objective function with the original costs associated to the decision variables, i.e. Maximize z = -x1 - x2 + 0s1 + 0s2. Here we initialize the solution with the last table of phase I.
           cj       -1     -1      0      0
cB    B     xB      x1     x2     s1     s2
-1    x1   21/13     1      0    -7/13   1/13
-1    x2   10/13     0      1     1/13  -2/13
z = -31/13  zj - cj  0      0     6/13   1/13
Since the zj - cj are all non-negative, the optimum basic feasible solution x1 = 21/13, x2 = 10/13 with minimum z = 31/13 is obtained.
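The phase II result can be verified by substitution: at the optimum both ≥ constraints are binding, so the surplus variables are zero, and the original objective evaluates to 31/13. A brief check of ours:

```python
from fractions import Fraction as F

x1, x2 = F(21, 13), F(10, 13)    # optimum found by the two-phase method
s1 = 2 * x1 + x2 - 4             # surplus in 2x1 +  x2 >= 4
s2 = x1 + 7 * x2 - 7             # surplus in  x1 + 7x2 >= 7
z = x1 + x2                      # original minimization objective
```

Both surpluses vanish, confirming that the optimum lies at the intersection of the two constraint lines.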
Example 2.19 Use the two-phase method to minimize z = x1 - 2x2 - 3x3 subject to the constraints : -2x1 + x2 + 3x3 = 2, 2x1 + 3x2 + 4x3 = 1, and x1, x2, x3 ≥ 0.
Solution : Since both constraints are equalities, to get the constraint equations we introduce only artificial variables A1, A2 ≥ 0. The constraint equations are
-2x1 + x2 + 3x3 + A1 = 2,
2x1 + 3x2 + 4x3 + A2 = 1,
and x1, x2, x3, A1, A2 ≥ 0.
Phase I : Here the objective function is Maximize z* = 0x1 + 0x2 + 0x3 - A1 - A2 subject to the above constraints. Initialize the solution by putting x1 = x2 = x3 = 0; then A1 = 2 and A2 = 1. The simplex table is
           cj        0      0      0     -1     -1
cB    B     xB      x1     x2     x3     A1     A2      RR
-1    A1     2      -2      1      3      1      0      2/3
-1    A2     1       2      3      4      0      1      1/4
z* = -3   zj - cj    0     -4     -7      0      0
x3 enters the basis and A2 leaves the basis. The new iterative table is
           cj        0      0      0     -1     -1
cB    B     xB      x1     x2     x3     A1     A2
-1    A1    5/4    -7/2   -5/4     0      1    -3/4
0     x3    1/4     1/2    3/4     1      0     1/4
z* = -5/4  zj - cj  7/2    5/4     0      0     7/4
Since the zj - cj are all non-negative, an optimum basic feasible solution to the reduced (phase I) problem is attained; but at the same time the artificial variable A1 appears in the basis at a positive level, so the given LPP does not possess any feasible solution.
2.9.2 Big M Method :
Another method to solve an LPP involving constraints of ≥ or = type is the Big M method, in which a high penalty cost is associated with the artificial variables. The computational algorithm is as follows :
Step 1 Write the given LPP in standard maximization form. Add slack, surplus and artificial variables in the constraints as stated in the previous two sections, but assign a very large negative value -M as the coefficient of each artificial variable in the objective function.
Step 2 Proceed according to the simplex method. At any iteration of the simplex method one of the following three cases can arise :
1. All net evaluations zj - cj (j = 1, 2, ..., n) are non-negative and no artificial variable is present in the basis : the current solution is optimal. (Alternatively, with the other convention, all cj - zj ≤ 0.)
2. All net evaluations zj - cj (j = 1, 2, ..., n) are non-negative, there is at least one artificial variable in the basis, and the objective function value z contains M : the LPP has no feasible solution.
3. At least one net evaluation zj - cj (j = 1, 2, ..., n) is negative, indicating that some variable is trying to enter the basis, but all RR entries for that column are negative or undefined : the problem has an unbounded solution.
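The penalty idea can be seen numerically: with M large, any solution that keeps an artificial variable positive is swamped by the penalty and is dominated by any genuinely feasible solution. An illustrative sketch (M = 10**6 is our arbitrary choice), using the first constraint of the next example, 2x1 + x2 ≥ 2:

```python
M = 10**6   # "big M": any positive artificial value swamps the true profit

def penalized_z(x1, x2, a1):
    """Penalized objective z = 3x1 - x2 - M*A1, where A1 is the
    artificial variable of the constraint 2x1 + x2 >= 2."""
    return 3 * x1 - x2 - M * a1

# The starting basic solution keeps A1 = 2 in the basis: heavily penalized.
start = penalized_z(0, 0, 2)
# A solution with A1 driven to zero, e.g. x1 = 1, x2 = 0 (so 2x1 + x2 = 2
# meets the >= 2 constraint exactly), scores far better.
feasible = penalized_z(1, 0, 0)
```

This is exactly why the simplex iterations push the artificial variables out of the basis first.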
Example 2.20 Use the Big M method to maximize z = 3x1 - x2 subject to the constraints : 2x1 + x2 ≥ 2, x1 + 3x2 ≤ 3, x2 ≤ 4 and x1, x2 ≥ 0.
Solution : Introduce surplus variable s1 and artificial variable A1 in the first constraint, and slack variables s2 and s3 in the second and third constraints. The modified LPP is
Maximize z = 3x1 - x2 + 0s1 + 0s2 + 0s3 - MA1 subject to the constraints :
2x1 + x2 - s1 + A1 = 2
x1 + 3x2 + s2 = 3
x2 + s3 = 4
and x1, x2, s1, s2, s3, A1 ≥ 0.
Putting x1 = x2 = s1 = 0 gives the initial iterate A1 = 2, s2 = 3 and s3 = 4. The iterative table is
           cj        3     -1      0      0      0     -M
cB    B     xB      x1     x2     s1     s2     s3     A1      RR
-M    A1     2       2      1     -1      0      0      1       1
0     s2     3       1      3      0      1      0      0       3
0     s3     4       0      1      0      0      1      0       -
z = -2M   zj - cj  -2M-3  -M+1     M      0      0      0
x1 enters the basis and A1 leaves the basis. The new iterative table is
           cj        3     -1      0      0      0
cB    B     xB      x1     x2     s1     s2     s3      RR
3     x1     1       1     1/2   -1/2     0      0       -
0     s2     2       0     5/2    1/2     1      0       4
0     s3     4       0      1      0      0      1       -
z = 3     zj - cj    0     5/2   -3/2     0      0
s1 will enter the basis and s2 will leave.
           cj        3     -1      0      0      0
cB    B     xB      x1     x2     s1     s2     s3
3     x1     3       1      3      0      1      0
0     s1     4       0      5      1      2      0
0     s3     4       0      1      0      0      1
z = 9     zj - cj    0     10      0      3      0
The optimum basic feasible solution is x1 = 3, x2 = 0 and maximum z = 9.
Example 2.21 Use the Big M method to maximize z = 6x1 + 4x2 subject to the constraints : 2x1 + 3x2 ≤ 30, 3x1 + 2x2 ≤ 24, x1 + x2 ≥ 3 and x1, x2 ≥ 0.
Solution : Introduce slack variables s1 and s2 in the first and second constraints, and surplus variable s3 and artificial variable A1 in the third constraint. The modified LPP is
Maximize z = 6x1 + 4x2 + 0s1 + 0s2 + 0s3 - MA1 subject to the constraints :
2x1 + 3x2 + s1 = 30
3x1 + 2x2 + s2 = 24
x1 + x2 - s3 + A1 = 3, and x1, x2, s1, s2, s3, A1 ≥ 0.
Putting x1 = x2 = s3 = 0 gives the initial iterate s1 = 30, s2 = 24 and A1 = 3. The iterative table is
           cj        6      4      0      0      0     -M
cB    B     xB      x1     x2     s1     s2     s3     A1      RR
0     s1    30       2      3      1      0      0      0      15
0     s2    24       3      2      0      1      0      0       8
-M    A1     3       1      1      0      0     -1      1       3
z = -3M   zj - cj  -M-6   -M-4     0      0      M      0
x1 enters the basis and A1 leaves the basis. The new iterative table is
           cj        6      4      0      0      0
cB    B     xB      x1     x2     s1     s2     s3      RR
0     s1    24       0      1      1      0      2      12
0     s2    15       0     -1      0      1      3       5
6     x1     3       1      1      0      0     -1       -
z = 18    zj - cj    0      2      0      0     -6
s3 enters the basis and s2 leaves the basis. The new iterative table is
           cj        6      4      0      0      0
cB    B     xB      x1     x2     s1     s2     s3
0     s1    14       0     5/3     1    -2/3     0
0     s3     5       0    -1/3     0     1/3     1
6     x1     8       1     2/3     0     1/3     0
z = 48    zj - cj    0      0      0      2      0
Since all zj - cj ≥ 0, the optimum solution x1 = 8 and x2 = 0 is attained with maximum z = 48.
It is observed from the table that the net evaluation corresponding to the non-basic variable x2 is zero, which indicates that there is an alternative optimal solution to the LPP. Entering x2 into the basis (in place of s1, which the replacement ratios select) gives the alternative solution:
           cj        6      4      0      0      0
cB    B     xB      x1     x2     s1     s2     s3
4     x2   42/5      0      1     3/5   -2/5     0
0     s3   39/5      0      0     1/5    1/5     1
6     x1   12/5      1      0    -2/5    3/5     0
z = 48    zj - cj    0      0      0      2      0
Here the optimum solution is x1 = 12/5 and x2 = 42/5 with maximum z = 48.
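When a non-basic variable has a zero net evaluation, every convex combination of the two optima is also optimal. A quick check of ours for the two solutions above:

```python
from fractions import Fraction as F

p1 = (F(8), F(0))             # first optimum
p2 = (F(12, 5), F(42, 5))     # alternative optimum

def z(x1, x2):
    return 6 * x1 + 4 * x2

def feasible(x1, x2):
    return (2 * x1 + 3 * x2 <= 30 and 3 * x1 + 2 * x2 <= 24
            and x1 + x2 >= 3 and x1 >= 0 and x2 >= 0)

# A convex combination of the two optima (here the midpoint) is again
# feasible and attains the same objective value 48.
lam = F(1, 2)
mid = tuple(lam * a + (1 - lam) * b for a, b in zip(p1, p2))
```

Geometrically, the whole edge of the feasible region between the two vertices is optimal.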
Example 2.22 Maximize Z = x1 + 3x2,
subject to
x1 + x2 ≥ 3,
-x1 + x2 ≤ 2,
x1 - 2x2 ≤ 2
and x1, x2 ≥ 0
Introducing the slack, surplus and artificial variables we get,
Maximize Z = x1 + 3x2 + 0s1 + 0s2 + 0s3 - MA1,
subject to
x1 + x2 - s1 + A1 = 3,
-x1 + x2 + s2 = 2,
x1 - 2x2 + s3 = 2,
and x1, x2, s1, s2, s3, A1 ≥ 0.
Preparing the simplex table and solving,
            Cj     1      3      0     -M      0      0
     Basis        x1     x2     s1     A1     s2     s3    RHS     R.R
-M    A1           1      1     -1      1      0      0      3       3
0     s2          -1      1      0      0      1      0      2       2
0     s3           1     -2      0      0      0      1      2      -1
      NER = Cj-Zj 1+M    3+M    -M      0      0      0    Z = -3M
-M    A1           2      0     -1      1     -1      0      1      1/2
3     x2          -1      1      0      0      1      0      2      -2
0     s3          -1      0      0      0      2      1      6      -6
      NER = Cj-Zj 4+2M    0     -M      0   -(M+3)    0    Z = 6 - M
1     x1           1      0    -1/2     --   -1/2     0     1/2     -1
3     x2           0      1    -1/2     --    1/2     0     5/2     -5
0     s3           0      0    -1/2     --    3/2     1    13/2    -13
      NER = Cj-Zj  0      0      2      --    -1      0    Z = 8
Here s1 is trying to enter the basis (its NER entry is positive), but as all its R.R entries are negative, the given problem has an unbounded solution.
Example 2.23 Maximize Z = 5x1 - x2,
subject to x1 + x2 ≥ 2, x1 + 2x2 ≤ 2, 2x1 + x2 ≤ 2 and x1, x2 ≥ 0
Introducing the slack, surplus and artificial variables we get,
Maximize Z = 5x1 - x2 + 0s1 + 0s2 + 0s3 - MA1
subject to x1 + x2 - s1 + A1 = 2
x1 + 2x2 + s2 = 2
2x1 + x2 + s3 = 2
and x1, x2, s1, s2, s3, A1 ≥ 0.
Preparing the simplex table and solving,
            Cj     5     -1      0     -M      0      0
     Basis        x1     x2     s1     A1     s2     s3    RHS     R.R
-M    A1           1      1     -1      1      0      0      2       2
0     s2           1      2      0      0      1      0      2       2
0     s3           2      1      0      0      0      1      2       1
      NER = Cj-Zj 5+M   -1+M    -M      0      0      0    Z = -2M
-M    A1           0     1/2    -1      1      0    -1/2     1       2
0     s2           0     3/2     0      0      1    -1/2     1      2/3
5     x1           1     1/2     0      0      0     1/2     1       2
      NER = Cj-Zj  0   (M-7)/2  -M      0      0  -(M+5)/2  Z = -M+5
-M    A1           0      0     -1      1    -1/3   -1/3    2/3
-1    x2           0      1      0      0     2/3   -1/3    2/3
5     x1           1      0      0      0    -1/3    2/3    2/3
      NER = Cj-Zj  0      0     -M      0  -(M-7)/3 -(M+11)/3  Z = (8-2M)/3
As all NER entries are ≤ 0, the optimality criterion is satisfied, but the Z value in the final table contains the coefficient M of the artificial variable (A1 remains in the basis at a positive level). Hence, it is a case of no feasible solution.
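The infeasibility can also be confirmed directly: adding the two ≤ constraints gives 3x1 + 3x2 ≤ 4, i.e. x1 + x2 ≤ 4/3, which contradicts the requirement x1 + x2 ≥ 2. A one-line certificate of ours in exact arithmetic:

```python
from fractions import Fraction as F

# Add x1 + 2x2 <= 2 and 2x1 + x2 <= 2: (1+2)x1 + (2+1)x2 <= 4,
# i.e. 3(x1 + x2) <= 4, so every feasible point has x1 + x2 <= 4/3.
upper_bound = F(2 + 2, 3)
required = F(2)     # but the first constraint needs x1 + x2 >= 2
```

Since 4/3 < 2, no point can satisfy all three constraints simultaneously, which is exactly what the persisting artificial variable signals.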
2.10 Duality in LPP :
From both the theoretical and practical points of view, the theory of duality is one of the most important and interesting concepts in linear programming. The basic idea behind duality theory is that every linear programming problem has an associated linear program called its dual, such that a solution to the original linear program also gives a solution to its dual. Thus, whenever a linear program is solved by the simplex method, we are actually getting solutions for two linear programming problems. The original problem is called the primal problem.
Although the idea of duality is essentially mathematical, we shall see that duality has important interpretations which can help managers answer questions about alternative courses of action and their relative values.
Let us understand the concepts and the managerial significance of duality with the help of the following example.
Example 2.24: ABC Company makes three products, T, C and B, which must be processed through the Assembly, Finishing and Packaging departments. The three departments have at most 60, 40 and 80 hours available respectively. The profit on one unit of each of the products is Rs. 2 per T, Rs. 4 per C and Rs. 3 per B. The other data are given below.
            Hours required for 1 unit of product
              T    C    B
Assembly      3    4    2
Finishing     2    1    2
Packaging     1    3    2
The problem can be formulated as:
Maximize 2T + 4C + 3B
subject to the constraints :
3T + 4C + 2B ≤ 60
2T + 1C + 2B ≤ 40
1T + 3C + 2B ≤ 80
and T, C and B ≥ 0
Let the slack variables SA, SF and SP be the unused hours in the three departments. So the above LP becomes
Maximize 2T + 4C + 3B + 0SA + 0SF + 0SP
subject to the constraints :
3T + 4C + 2B + SA = 60
2T + 1C + 2B + SF = 40
1T + 3C + 2B + SP = 80
and T, C, B, SA, SF, SP ≥ 0
The following table gives the simplex solution of the above problem.
           cj        2      4      3      0      0      0
cB    B     xB       T      C      B     SA     SF     SP      RR
0     SA    60       3      4      2      1      0      0      15
0     SF    40       2      1      2      0      1      0      40
0     SP    80       1      3      2      0      0      1     80/3
z = 0     cj - zj    2      4      3      0      0      0
4     C     15      3/4     1     1/2    1/4     0      0      30
0     SF    25      5/4     0     3/2   -1/4     1      0     50/3
0     SP    35     -5/4     0     1/2   -3/4     0      1      70
z = 60    cj - zj   -1      0      1     -1      0      0
4     C    20/3     1/3     1      0     1/3   -1/3     0
3     B    50/3     5/6     0      1    -1/6    2/3     0
0     SP   80/3    -5/3     0      0    -2/3   -1/3     1
z = 230/3  cj - zj -11/6    0      0    -5/6   -2/3     0
This being a maximization problem, the optimality criterion is satisfied as all N.E.R entries, that is all cj - zj entries, are ≤ 0.
Recall that
(a) Each positive number in the cj - zj row represents the net profit obtainable if 1 unit of the variable heading that column were added to the solution.
(b) Each negative number (a net loss) in the cj - zj row indicates the decrease in total profit if 1 unit of the variable heading that column were added to the product mix. A negative number in the cj - zj row under one of the columns representing time has another interpretation also: it represents the amount of increase in total profit if the number of hours available in that department could be increased by 1.
We see from the table that the optimal solution is to produce 20/3 units of C, 50/3 units of B and no units of T. The total contribution for this product mix is about Rs. 76.67. The values under the SA, SF and SP columns in the cj - zj row indicate that removing 1 productive hour from each of the three departments would reduce the total contribution by Rs. 5/6, 2/3 and 0 respectively. This can also be taken to mean that if additional capital were available to expand productive time in these departments, the value to ABC Company of 1 more hour in each of these departments would be Rs. 5/6, 2/3 and 0; i.e. adding another hour of Assembly time will increase profit by Rs. 5/6, adding another hour of Finishing time will increase profit by Rs. 2/3, and adding another hour of Packaging time will leave profit unchanged. These three values, 5/6, 2/3 and 0, are called dual prices, shadow prices, or simply the unit worth of a resource. To be more specific, if adding another hour to each department costs the same, we would add the time to the Assembly department, for there it is worth Rs. 5/6, which is more than 2/3 or 0.
This primal was concerned with maximizing the contribution from the three products; the dual will be concerned with evaluating the time used in the three departments to produce T, C and B.
The production manager of the ABC Company recognizes that the productive capacity of the three departments is a valuable resource to the firm; he wonders whether it would be possible to place a monetary value on its worth. He soon comes to think in terms of how much he would receive from another manufacturer, a renter who wants to rent all the capacity in ABC Company's three departments. He reasons along the following lines.
Suppose the rental charges were Rs. A per hour of Assembly time, Rs. F per hour of Finishing time and Rs. P per hour of Packaging time; then the cost to the renter of all the time would be
Total rent paid = 60A + 40F + 80P
and of course the renter would want to set the rental prices in such a way as to minimize the total rent he would have to pay; so the objective of the dual is
Minimize : 60A + 40F + 80P
The production manager of ABC Company will not rent out his time unless the rent offered enables him to net as much as he would if he used the time to produce products T, C and B for ABC Company. This observation leads to the constraints of the dual.
To make one unit of T requires 3 Assembly hours, 2 Finishing hours and 1 Packaging hour. The time that goes into making one unit of T could be rented out for Rs. 3A + 2F + 1P. If the manager used all that time to make T, he would earn Rs. 2 in contribution to profit, and so he will not rent out the time unless
3A + 2F + 1P ≥ 2
and this gives the first constraint of the dual. Similar reasoning with respect to C and B gives the other two dual constraints
4A + 1F + 3P ≥ 4
2A + 2F + 2P ≥ 3
So the entire dual problem, which determines for the manager of the ABC Company the value of the productive resources of the company (its plant hours), is:
Minimize 60A + 40F + 80P
subject to the constraints :
3A + 2F + 1P ≥ 2
4A + 1F + 3P ≥ 4
2A + 2F + 2P ≥ 3
and A, F, and P ≥ 0
We add appropriate surplus and artificial variables as follows and then solve the dual problem. Only the initial and the final tables are shown below.
Minimize 60A + 40F + 80P
subject to the constraints :
3A + 2F + 1P - S1 + A1 = 2
4A + 1F + 3P - S2 + A2 = 4
2A + 2F + 2P - S3 + A3 = 3
with A, F, and P ≥ 0
           cj       60     40     80      0      0      0      M      M      M
cB    B     xB       A      F      P     S1     S2     S3     A1     A2     A3     RR
M     A1     2       3      2      1     -1      0      0      1      0      0     2/3
M     A2     4       4      1      3      0     -1      0      0      1      0      1
M     A3     3       2      2      2      0      0     -1      0      0      1     3/2
z = 9M    cj - zj  60-9M  40-5M  80-6M    M      M      M      0      0      0
                              (intermediate iterations omitted)
60    A     5/6      1      0     2/3     0    -1/3    1/6     0     1/3   -1/6
0     S1   11/6      0      0     5/3     1    -1/3   -5/6    -1     1/3    5/6
40    F     2/3      0      1     1/3     0     1/3   -2/3     0    -1/3    2/3
z = 230/3  cj - zj   0      0    80/3     0    20/3   50/3     M   M-20/3  M-50/3
This being a minimization problem, the optimality criterion is satisfied as all N.E.R entries, that is all cj - zj row entries, are ≥ 0.
The optimum solution to the dual problem indicates that the worth to ABC Company of 1 productive hour in the Assembly department is Rs. 5/6 (A = 5/6 in the final table), in the Finishing department Rs. 2/3, and in the Packaging department Rs. 0 (P is not in the basis).
Of course, these are the same values we got by looking at the cj - zj row in the final table of the primal problem. Thus, when we solved the primal, we also got the solution to the dual. Does solving the dual also give us the solution to the primal? Yes: if we look at the values contained under the S1, S2 and S3 columns in the cj - zj row of the final dual table, we find 0, 20/3 and 50/3, which are the optimal values of T, C and B in the primal.
Now look at the two problem formulations again.
Primal Problem                        Dual Problem
Maximize: 2T + 4C + 3B                Minimize: 60A + 40F + 80P
Subject to:                           Subject to:
3T + 4C + 2B ≤ 60                     3A + 2F + 1P ≥ 2
2T + 1C + 2B ≤ 40                     4A + 1F + 3P ≥ 4
1T + 3C + 2B ≤ 80                     2A + 2F + 2P ≥ 3
and T, C and B ≥ 0                    and A, F, and P ≥ 0
Some direct observations from the formulations above:
1. The objective function coefficients of the primal problem have become the right-hand-side constants of the dual. Similarly, the right-hand-side constants of the primal have become the objective function coefficients of the dual.
2. The inequalities have been reversed in the constraints.
3. The objective is changed from maximization in the primal to minimization in the dual.
4. Each column in the primal corresponds to a constraint (row) in the dual. Thus, the number of dual constraints is equal to the number of primal variables.
5. Each constraint (row) in the primal corresponds to a column in the dual. Hence, there is one dual variable for every primal constraint.
6. The transpose of the technological (input-output) coefficient matrix of the primal becomes the technological (input-output) coefficient matrix of the dual.
Further economic interpretations of duality:
1. Suppose (T, C, B) is a feasible solution to the primal (a level of output that can be achieved with the current resources) and (A, F, P) is a feasible solution of the dual (a set of rents which would induce the manager to rent out the plant rather than to use it himself). Then
2T + 4C + 3B ≤ 60A + 40F + 80P
2. The optimal values are the same in both problems. This is always the case. It means that the value to ABC Company of all its productive resources is precisely equal to the profit the firm can make if it employs these resources in the best possible way. In this way, the profit made on the firm's output is used to derive the imputed values of the inputs used to produce that output.
3. In the above problem, the dual variable P was equal to 0 and not all the packaging time was used. This is entirely reasonable: since the company already has excess packaging time, additional packaging time cannot be profitably used and so it is worthless. This is half of what is called the principle of complementary slackness.
4. With A = 5/6, F = 2/3 and P = 0, we get 3A + 2F + 1P = 23/6. This shows us that the value of the time needed to produce one unit of T is Rs. 23/6, but one unit of T contributes a profit of only Rs. 2. Since the time needed to produce T is worth more than the return on it, the optimal solution to the primal does not produce any T. This is the other half of the principle of complementary slackness.
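These observations can be verified numerically for the ABC example. The sketch below (our own check, with Fi standing for the Finishing price F) confirms equal optimal values and both halves of complementary slackness in exact arithmetic.

```python
from fractions import Fraction as F

T, C, B = F(0), F(20, 3), F(50, 3)       # optimal primal solution
A, Fi, P = F(5, 6), F(2, 3), F(0)        # optimal dual prices

primal_z = 2 * T + 4 * C + 3 * B
dual_z = 60 * A + 40 * Fi + 80 * P

# Complementary slackness: a positive dual price forces its primal
# constraint to be binding, and a positive primal variable forces its
# dual constraint to be binding.
assembly_slack = 60 - (3 * T + 4 * C + 2 * B)    # price A > 0  => slack 0
packaging_slack = 80 - (1 * T + 3 * C + 2 * B)   # price P = 0  => slack > 0
t_reduced = (3 * A + 2 * Fi + 1 * P) - 2         # T = 0 => dual slack > 0
```

The primal and dual objectives agree (both 230/3, about Rs. 76.67), the assembly constraint is binding, and producing T is priced out by 23/6 - 2 = 11/6.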
Now we look at the concept of duality by introducing mathematical rigour.
To every LPP there is an associated LPP called the dual of the given LPP. The given problem is called the primal problem; i.e. every LPP, when expressed in its standard form, has an associated unique LPP based on the same data. These primal-dual pairs are inter-related, and the variables of the pair have interesting implications in econometrics, production engineering, etc. Let us define the dual of
Primal : Maximize z = c^T x, x in R^n, subject to the constraints : Ax ≤ b, x ≥ 0, b in R^m, A an m x n matrix.
Dual : Minimize z* = b^T w, w in R^m, subject to the constraints : A^T w ≥ c, w ≥ 0.
Note : In order to convert any problem into its dual, the primal LPP must be expressed in the maximization form with all constraints of ≤ type or = type. Thus, primal-dual pairs are as follows :
   Primal problem                              Dual problem
1  Maximize z = c^T x                          Minimize z* = b^T w
   subject to Ax ≤ b, x ≥ 0                    subject to A^T w ≥ c, w ≥ 0
2  Maximize z = c^T x                          Minimize z* = b^T w
   subject to Ax = b, x ≥ 0                    subject to A^T w ≥ c, w unrestricted
3  Minimize z = c^T x                          Maximize z* = b^T w
   subject to Ax = b, x ≥ 0                    subject to A^T w ≤ c, w unrestricted
4  Minimize (maximize) z = c^T x               Maximize (minimize) z* = b^T w
   subject to Ax = b, x unrestricted           subject to A^T w = c, w unrestricted
Given an LPP, it can be directly converted into its dual using the following table :
   Primal Problem                              Dual Problem
1  Maximization with constraints ≤ or =        Minimization with constraints ≥ or =
2  No. of constraints                          No. of variables
3  Coefficients of the objective function      RHS of constraints
4  Input-output matrix A                       Input-output matrix A^T
5  j-th constraint of = type                   j-th variable unrestricted in sign
6  k-th variable unrestricted in sign          k-th constraint of = type
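For the standard pair (rows 1 of the table above), the mechanical rules are: swap c and b, transpose A, and flip the optimization sense. The sketch below (our own illustration) encodes just this standard case and checks it on Example 2.26 further on; taking the dual twice returns the primal, anticipating Theorem 2.6.

```python
def dual(kind, c, A, b):
    """Dual of the standard pair: ('max', c, A, b) means maximize c.x
    s.t. A x <= b, x >= 0; ('min', c, A, b) means minimize c.x
    s.t. A x >= b, x >= 0. Swap c and b, transpose A, flip the sense."""
    At = [list(col) for col in zip(*A)]
    return ('min' if kind == 'max' else 'max', b, At, c)

# Example 2.26: minimize 4x1 + 6x2 + 18x3
# subject to x1 + 3x2 >= 3, x2 + 2x3 >= 5, x >= 0.
primal = ('min', [4, 6, 18], [[1, 3, 0], [0, 1, 2]], [3, 5])
d = dual(*primal)     # maximize 3w1 + 5w2 subject to A^T w <= c
dd = dual(*d)         # the dual of the dual is the primal again
```

The constructed dual has constraints w1 ≤ 4, 3w1 + w2 ≤ 6 and 2w2 ≤ 18, matching the hand derivation in Example 2.26.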
Example 2.25 Write the dual of the LPP :
Maximize z = 8x1 + 6x2 subject to the constraints : x1 - x2 ≤ 3/5, x1 - x2 ≥ 2, x1, x2 ≥ 0
Solution : Write the given LPP in maximization form with all constraints of ≤ type; so the primal is
Maximize z = 8x1 + 6x2 subject to the constraints : x1 - x2 ≤ 3/5, -x1 + x2 ≤ -2, x1, x2 ≥ 0
Let w1 and w2 be the dual variables. Then the dual problem is
Minimize z* = (3/5)w1 - 2w2 subject to the constraints : w1 - w2 ≥ 8, -w1 + w2 ≥ 6, w1, w2 ≥ 0
Example 2.26 Write the dual of the LPP :
Minimize z = 4x1 + 6x2 + 18x3 subject to the constraints : x1 + 3x2 ≥ 3, x2 + 2x3 ≥ 5, x1, x2, x3 ≥ 0
Solution : The primal is
Minimize z = 4x1 + 6x2 + 18x3 subject to the constraints : x1 + 3x2 + 0x3 ≥ 3, 0x1 + x2 + 2x3 ≥ 5, x1, x2, x3 ≥ 0
Let w1 and w2 be the dual variables corresponding to each of the primal constraints. Then the dual problem is
Maximize z* = 3w1 + 5w2 subject to the constraints : w1 + 0w2 ≤ 4, 3w1 + w2 ≤ 6, 0w1 + 2w2 ≤ 18. Rewriting the dual problem:
Maximize z* = 3w1 + 5w2 subject to the constraints : w1 ≤ 4, 3w1 + w2 ≤ 6, 2w2 ≤ 18, w1, w2 ≥ 0
Example 2.27 Write the dual of the LPP :
Minimize z = 7x1 + 3x2 + 8x3 subject to the constraints : 8x1 + 2x2 + x3 ≥ 3, 3x1 + 6x2 + 4x3 ≥ 4, 4x1 + x2 + 5x3 ≥ 1, x1 + 5x2 + 2x3 ≥ 7, x1, x2, x3 ≥ 0
Solution : The primal is already a minimization problem with all constraints of ≥ type. Let w1, w2, w3 and w4 be the dual variables corresponding to each of the primal constraints. Then the dual problem is
Maximize z* = 3w1 + 4w2 + w3 + 7w4
subject to the constraints : 8w1 + 3w2 + 4w3 + w4 ≤ 7, 2w1 + 6w2 + w3 + 5w4 ≤ 3, w1 + 4w2 + 5w3 + 2w4 ≤ 8, w1, w2, w3, w4 ≥ 0.
Example 2.28 Write the dual of the LPP :
Maximize z = 3x1 + x2 + x3 - x4 subject to the constraints : x1 + 5x2 + 3x3 + 4x4 ≤ 4, x1 + x2 = -1, x3 - x4 ≤ -5, x1, x2, x3, x4 ≥ 0
Solution : Writing the primal with a positive right-hand side in the equality constraint:
Maximize z = 3x1 + x2 + x3 - x4 subject to the constraints : x1 + 5x2 + 3x3 + 4x4 ≤ 4, -x1 - x2 = 1, x3 - x4 ≤ -5, x1, x2, x3, x4 ≥ 0
Let w1, w2 and w3 be the dual variables corresponding to each of the primal constraints; since the second constraint is an equality, w2 is unrestricted. The dual problem is
Minimize z* = 4w1 + w2 - 5w3
subject to the constraints : w1 - w2 ≥ 3, 5w1 - w2 ≥ 1, 3w1 + w3 ≥ 1, 4w1 - w3 ≥ -1, with w1, w3 ≥ 0 and w2 unrestricted.
Example 2.29 Obtain dual of LPP :
Minimize z = x
1
- 3x
2
- 2x
3
subject to the constraints : 3x
1
- x
2
+ 2x
3
7, 2x
1
- 4x
2
12, - 4x
1
+ 3x
2
+ 8x
3
= 10, x
1
, x
2
0 and x
3
is
unrestricted.
Solution : Putting x
3
= x
3
- x
3
, we have primal as
Minimize z = x
1
- 3x
2
2(x
3
- x
3
)
69
subject to the constraints : -3x
1
+ x
2
- 2(x
3
- x
3
) - 7, 2x
1
- 4x
2
12,
- 4x
1
+ 3x
2
+ 8(x
3
- x
3
)
= 10; x
1
, x
2
, x
3
, x
3
0
Also as the third constraint is an equality, we convert it into inequalities as follows :
-4x
1
+ 3x
2
+ 8(x
3
- x
3
)
10 and -4x
1
+ 3x
2
+ 8(x
3
- x
3
)
10
Rewriting primal problem as minimization problem with all constraints - type :
Minimize z = x
1
- 3x
2
2(x
3
- x
3
)
subject to the constraints : -3x
1
+ x
2
- 2(x
3
- x
3
) - 7, 2x
1
- 4x
2
12,
4x
1
- 3x
2
- 8(x
3
- x
3
)
- 10, -4x
1
+ 3x
2
+ 8(x
3
- x
3
)
10; x
1
, x
2
, x
3
, x
3
0
Let w1, w2, w3 and w4 be the dual variables corresponding to each of the primal constraints. Then the dual problem is
Maximize z* = -7w1 + 12w2 - 10w3 + 10w4
subject to the constraints : -3w1 + 2w2 + 4w3 - 4w4 ≤ 1, w1 - 4w2 - 3w3 + 3w4 ≤ -3,
-2w1 - 8w3 + 8w4 ≤ -2, 2w1 + 8w3 - 8w4 ≤ 2; w1, w2, w3, w4 ≥ 0.
The third and the fourth constraints taken together can be written as 2w1 + 8w3 - 8w4 = 2. In the objective function and the constraints, w3 and w4 occur only in the combination w3 - w4, so put w = w3 - w4; then w is unrestricted in sign. Rewriting the dual problem :
Maximize z* = -7w1 + 12w2 - 10w
subject to the constraints : -3w1 + 2w2 + 4w ≤ 1, w1 - 4w2 - 3w ≤ -3, 2w1 + 8w = 2; w1 ≥ 0, w2 ≥ 0 and w unrestricted.
Instead of working out the dual in the above manner, the following layout, based on the table given earlier, can also be used :
Primal (minimize, with ≥ constraints) :
Minimize z = x1 - 3x2 - 2x3
subject to the constraints :
-3x1 + x2 - 2x3 ≥ -7,
2x1 - 4x2 ≥ 12,
4x1 - 3x2 - 8x3 = -10,
x1, x2 ≥ 0 and x3 unrestricted.
Dual :
Maximize z* = -7w1 + 12w2 - 10w
subject to the constraints :
-3w1 + 2w2 + 4w ≤ 1,
w1 - 4w2 - 3w ≤ -3,
-2w1 - 8w = -2,
w1 ≥ 0, w2 ≥ 0 and w unrestricted.
2.10.1 Duality Theorems :
Theorem 2.6 The dual of the dual is the primal.
Proof : Consider the standard LPP :
Primal : Find x ∈ R^n which maximizes z = c^T x
subject to the constraints : Ax ≤ b, x ≥ 0, b ∈ R^m, A an m × n matrix. (2.15)
Then the dual of (2.15) is : find w ∈ R^m which minimizes z* = b^T w
subject to the constraints : A^T w ≥ c, w ≥ 0. (2.16)
Eq. (2.16) can be written as : find w ∈ R^m which maximizes z* = -b^T w
subject to the constraints : -A^T w ≤ -c, w ≥ 0. (2.17)
Now the dual of (2.17) is : find x ∈ R^n which minimizes z** = (-c)^T x
subject to the constraints : -(A^T)^T x ≥ -b, x ≥ 0, b ∈ R^m. (2.18)
Rewriting (2.18), note that minimizing (-c)^T x is the same as maximizing c^T x, and -Ax ≥ -b is the same as Ax ≤ b; so we get (2.15), i.e. the dual of the dual is the primal.
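The proof's bookkeeping can be mimicked programmatically for the symmetric (canonical) form: represent an LP as a tuple, transpose the matrix to form the dual, and check that doing it twice returns the original problem. A minimal sketch of ours, with illustrative data not taken from the text:

```python
def dual_of(lp):
    # Symmetric-form rule proved above: the dual of
    #   max c'x s.t. Ax <= b, x >= 0   is   min b'w s.t. A'w >= c, w >= 0,
    # and vice versa.  Transposing twice restores A, and swapping (c, b)
    # twice restores the objective and the right-hand sides.
    sense, c, A, b = lp
    AT = [list(col) for col in zip(*A)]
    return ("min" if sense == "max" else "max", b, AT, c)

primal = ("max", [3, 5], [[1, 2], [3, 1]], [4, 6])   # illustrative data
assert dual_of(primal)[0] == "min"
assert dual_of(dual_of(primal)) == primal            # dual of dual is primal
```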
Theorem 2.7 If x0 is a feasible solution of the primal (2.15) and w0 is a feasible solution of the dual problem (2.16), then c^T x0 ≤ b^T w0.
Proof : Since x0 is a feasible solution of the primal (2.15), Ax0 ≤ b. Pre-multiplying by w0^T ≥ 0, we get w0^T Ax0 ≤ w0^T b, or
w0^T Ax0 ≤ b^T w0 (2.19)
(because both sides are 1 × 1 matrices of real numbers). Now w0 is a feasible solution of the dual problem (2.16), so A^T w0 ≥ c. Taking the transpose, we get w0^T A ≥ c^T; post-multiplying by x0 ≥ 0 gives
w0^T Ax0 ≥ c^T x0 (2.20)
From (2.19) and (2.20), c^T x0 ≤ w0^T Ax0 ≤ b^T w0, which is the result.
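Theorem 2.7 can be checked numerically on any primal-feasible and dual-feasible pair, optimal or not. A small sketch with made-up data (ours, not from the text):

```python
A = [[1, 2], [3, 1]]   # max c'x s.t. Ax <= b, x >= 0
b = [4, 6]
c = [3, 5]
x0 = [1, 1]            # primal-feasible: A x0 = [3, 4] <= [4, 6]
w0 = [3, 1]            # dual-feasible:  A' w0 = [6, 7] >= [3, 5]

dot = lambda u, v: sum(p * q for p, q in zip(u, v))
assert all(dot(row, x0) <= bi for row, bi in zip(A, b))        # Ax0 <= b
assert all(dot(col, w0) >= ci for col, ci in zip(zip(*A), c))  # A'w0 >= c
assert dot(c, x0) <= dot(b, w0)   # weak duality: c'x0 = 8 <= 18 = b'w0
```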
Theorem 2.8 The value of the objective function z* for any feasible solution of the dual is not less than the value of the objective function z for any feasible solution of the primal.
Proof : Consider the primal problem (2.15) : find x ∈ R^n which maximizes z = c^T x
subject to the constraints : Ax ≤ b, x ≥ 0, b ∈ R^m, A an m × n matrix.
Then the dual (2.16) is : find w ∈ R^m which minimizes z* = b^T w
subject to the constraints : A^T w ≥ c, w ≥ 0.
Introducing the necessary slack and surplus variables in the constraints of (2.15) and (2.16), we get
Primal : Maximize z = c1 x1 + c2 x2 + ... + cn xn
subject to the constraints : a11 x1 + a12 x2 + ... + a1n xn + xn+1 = b1 ;
a21 x1 + a22 x2 + ... + a2n xn + xn+2 = b2 ;
:
am1 x1 + am2 x2 + ... + amn xn + xn+m = bm ;
and x1, x2, ..., xn+m ≥ 0;
and its dual : Minimize z* = b1 w1 + b2 w2 + ... + bm wm
subject to the constraints : a11 w1 + a21 w2 + ... + am1 wm - wm+1 = c1 ;
a12 w1 + a22 w2 + ... + am2 wm - wm+2 = c2 ;
:
a1n w1 + a2n w2 + ... + amn wm - wm+n = cn ;
and w1, w2, ..., wm+n ≥ 0.
(Here xn+1, ..., xn+m are slack variables and wm+1, ..., wm+n are surplus variables.)
Let x1, x2, ..., xn+m and w1, w2, ..., wm+n be any feasible solutions of the primal and the dual systems above, respectively. Multiply the primal constraints by w1, w2, ..., wm respectively and add; similarly, multiply the dual constraints by x1, x2, ..., xn respectively and add. Subtracting the two resulting equations, we get
z* - z = x1 wm+1 + ... + xn wm+n + w1 xn+1 + ... + wm xn+m.
Clearly, the RHS is non-negative, so z* - z ≥ 0, i.e. z ≤ z*.
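Since both optima are attained at vertices of the respective feasible regions (when they exist), the conclusion z ≤ z*, with equality at the optimum, can be tested by brute-force vertex enumeration. The sketch below is ours (exact fractions; exponential in problem size, so only for tiny instances); it solves the primal-dual pair from the first worked example of this section and checks that the two optimal values coincide:

```python
from fractions import Fraction
from itertools import combinations

def solve_square(M, rhs):
    # Exact Gauss-Jordan elimination; returns None for a singular system.
    n = len(M)
    A = [[Fraction(v) for v in row] + [Fraction(r)] for row, r in zip(M, rhs)]
    for col in range(n):
        piv = next((r for r in range(col, n) if A[r][col] != 0), None)
        if piv is None:
            return None
        A[col], A[piv] = A[piv], A[col]
        p = A[col][col]
        A[col] = [v / p for v in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [a - f * v for a, v in zip(A[r], A[col])]
    return [row[n] for row in A]

def optimum(obj, cons, sense):
    # Try every choice of n active constraints (including x_j >= 0); each
    # non-singular choice is a candidate vertex.  Keep the best feasible one.
    n = len(obj)
    rows = cons + [([int(i == j) for j in range(n)], 0, ">=") for i in range(n)]
    def sat(co, b, k, x):
        lhs = sum(ci * xi for ci, xi in zip(co, x))
        return lhs <= b if k == "<=" else lhs >= b
    best = None
    for active in combinations(rows, n):
        x = solve_square([co for co, _, _ in active], [b for _, b, _ in active])
        if x is None or not all(sat(co, b, k, x) for co, b, k in rows):
            continue
        val = sum(ci * xi for ci, xi in zip(obj, x))
        if best is None or (val < best if sense == "min" else val > best):
            best = val
    return best

# The primal-dual pair from the first example of this section:
p_opt = optimum([7, 3, 8],
                [([8, 2, 1], 3, ">="), ([3, 6, 4], 4, ">="),
                 ([4, 1, 5], 1, ">="), ([1, 5, 2], 7, ">=")], "min")
d_opt = optimum([3, 4, 1, 7],
                [([8, 3, 4, 1], 7, "<="), ([2, 6, 1, 5], 3, "<="),
                 ([1, 4, 5, 2], 8, "<=")], "max")
assert p_opt is not None and p_opt == d_opt   # z and z* agree at the optimum
```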
Review Exercise
Solve graphically :
Q. Maximize Z = 90x1 + 60x2
subject to : 5x1 + 8x2 ≤ 2000, x1 ≤ 175, x2 ≤ 225, 7x1 + 4x2 ≤ 1400; x1, x2 ≥ 0.
Ans. x1 = 800/9, x2 = 1750/9, max z = 59000/3.
Q. Maximize Z = 60x1 + 40x2
subject to : x1 ≤ 25, x2 ≤ 35, 2x1 + x2 = 60; x1, x2 ≥ 0.
Ans. x1 = 25, x2 = 10, max z = 1900.
Q. Maximize Z = 30x1 + 40x2
subject to : 4x1 + 6x2 ≤ 180, x1 ≤ 20, x2 ≥ 10, x1 + x2 ≤ 40; x1, x2 ≥ 0.
Ans. x1 = 20, x2 = 50/3 ≈ 16.67, max z = 3800/3 ≈ 1266.67.
Q. Minimize Z = 4x1 + 3x2
subject to : x1 + 3x2 ≥ 9, 2x1 + 3x2 ≥ 12, x1 + x2 ≥ 5; x1, x2 ≥ 0.
Ans. x1 = 0, x2 = 5, min z = 15.
Q. Maximize Z = x1 + 3x2
subject to : x1 + 2x2 ≤ 9, x1 - x2 ≥ 2, x1 + 4x2 ≤ 11; x1, x2 ≥ 0.
Ans. x1 = 7, x2 = 1, max z = 10.
Q. Maximize Z = 10x1 + 8x2
subject to : 2x1 + x2 ≤ 20, x1 + 3x2 ≤ 30, x1 - 2x2 ≤ 15; x1, x2 ≥ 0.
Ans. x1 = 6, x2 = 8, max z = 124.
Q. Solve graphically : A diet-conscious housewife wishes to ensure a certain minimum intake of vitamins A, B and C for the family. The minimum daily needs of vitamins A, B and C for the family are 30, 20 and 16 units respectively. To supply these minimum requirements, the housewife relies on two fresh foodstuffs. The first provides 7, 5, 2 units of the three vitamins per gram respectively, and the second provides 2, 4, 8 units of the same three vitamins per gram. The first foodstuff costs Rs. 3 per gram and the second Rs. 2 per gram. How many grams of each foodstuff should the housewife buy every day to keep her food bill as low as possible?
Ans. x1 = 4, x2 = 1, min z = 14.
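The answer can be reproduced by enumerating the corner points of the feasible region, which is exactly what the graphical method does. A sketch of ours in plain Python with exact fractions:

```python
from fractions import Fraction as F
from itertools import combinations

# Constraints a1*x1 + a2*x2 >= r: the three vitamin rows plus x1, x2 >= 0.
cons = [(7, 2, 30), (5, 4, 20), (2, 8, 16), (1, 0, 0), (0, 1, 0)]
cost = (3, 2)                       # Rs. 3 and Rs. 2 per gram

best = None
for (a1, a2, r1), (b1, b2, r2) in combinations(cons, 2):
    det = a1 * b2 - a2 * b1
    if det == 0:
        continue                    # parallel lines: no corner point here
    x = F(r1 * b2 - r2 * a2, det)   # Cramer's rule for the 2x2 system
    y = F(a1 * r2 - b1 * r1, det)
    if all(a * x + b * y >= r for a, b, r in cons):
        val = cost[0] * x + cost[1] * y
        if best is None or val < best[0]:
            best = (val, x, y)

assert best == (14, 4, 1)           # min cost Rs. 14 at x1 = 4, x2 = 1
```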
Q. (a) Maximize Z = 40x1 + 30x2
subject to : x1 + 2x2 ≤ 40, 4x1 + 3x2 ≤ 120; x1, x2 ≥ 0.
Ans. x1 = 30, x2 = 0, max z = 1200 or x1 = 24, x2 = 8, max z = 1200.
(b) Maximize Z = 3x1 - x2
subject to : 15x1 - 5x2 ≤ 30, 10x1 + 30x2 ≤ 120; x1, x2 ≥ 0.
Ans. x1 = 2, x2 = 0, max z = 6 or x1 = 3, x2 = 3, max z = 6.
Q. (a) Maximize Z = 5x1 + 3x2
subject to : 4x1 + 2x2 ≤ 8, x1 ≥ 4, x2 ≥ 6; x1, x2 ≥ 0.
(b) Maximize Z = 6x1 - 4x2
subject to : 2x1 + 4x2 ≤ 4, 4x1 + 8x2 ≥ 16; x1, x2 ≥ 0.
Ans. Infeasible solution in both cases.
Q. (a) Maximize Z = 4x1 + 2x2
subject to : x1 ≥ 4, x2 ≤ 2; x1, x2 ≥ 0.
(b) Maximize Z = x1 + x2
subject to : x1 + 4x2 ≥ 10, 3x1 + 2x2 ≥ 2; x1, x2 ≥ 0.
Ans. Unbounded solution in both cases.
Simplex method
Q. Find the maximum value of p = x + 2y + 3z
subject to : 7x + z ≤ 6, x + 2y ≤ 20, 3y + 4z ≤ 0; x ≥ 0, y ≥ 0, z ≥ 0.
Ans. x = 6/7 ≈ 0.8571, y = 0, z = 0, max p = 6/7 ≈ 0.8571.
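These tabulated answers can be reproduced with a small tableau simplex. The sketch below is ours (exact fractions; first-negative entering rule and minimum-ratio leaving rule, which is fine for these tiny textbook problems though it carries no anti-cycling guarantee):

```python
from fractions import Fraction as F

def simplex(c, A, b):
    # Tableau simplex for: max c'x  s.t.  Ax <= b, x >= 0, with b >= 0,
    # so the all-slack basis is immediately feasible.
    m, n = len(A), len(c)
    T = [[F(A[i][j]) for j in range(n)] + [F(int(i == k)) for k in range(m)]
         + [F(b[i])] for i in range(m)]
    z = [F(-cj) for cj in c] + [F(0)] * (m + 1)   # reduced costs | value
    while True:
        col = next((j for j in range(n + m) if z[j] < 0), None)
        if col is None:
            return z[-1]                          # optimal value reached
        ratios = [(T[i][-1] / T[i][col], i) for i in range(m) if T[i][col] > 0]
        if not ratios:
            return None                           # objective is unbounded
        _, row = min(ratios)                      # minimum-ratio rule
        p = T[row][col]
        T[row] = [v / p for v in T[row]]
        for i in range(m):
            if i != row and T[i][col] != 0:
                f = T[i][col]
                T[i] = [a - f * v for a, v in zip(T[i], T[row])]
        f = z[col]
        z = [a - f * v for a, v in zip(z, T[row])]

# The first exercise above:
assert simplex([1, 2, 3], [[7, 0, 1], [1, 2, 0], [0, 3, 4]], [6, 20, 0]) == F(6, 7)
```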
Q. Find the maximum value of p = 2x - 3y + 5z
subject to : 2x + y ≤ 16, y + z ≤ 10, x + y + z ≤ 20; x ≥ 0, y ≥ 0, z ≥ 0.
Ans. x = 8, y = 0, z = 10, max p = 66.
Q. Find the minimum value of z = x1 - 3x2 + 2x3
subject to : 3x1 - x2 + 2x3 ≤ 7, -2x1 + 4x2 ≤ 12, -4x1 + 3x2 + 8x3 ≤ 10; x1, x2, x3 ≥ 0.
Ans. x1 = 4, x2 = 5, x3 = 0, min z = -11.
Q. Find the maximum value of p = 2x + 4y + z + w
subject to : x + 3y + w ≤ 4, 2x + y ≤ 3, y + 4z + w ≤ 3; x, y, z, w ≥ 0.
Ans. x = 1, y = 1, z = 0.5, w = 0, max p = 13/2.
Q. Find the maximum value of p = 107x + y + 2z
subject to : 14x + y - 6z + 3w = 7, 16x + 0.5y - 6z ≤ 5, 3x - y - z ≤ 0; x, y, z, w ≥ 0.
[Hint : divide the first equation by 3 (the coefficient of w) and then treat w as the slack variable.]
Ans. Unbounded solution.
Q. Find the maximum value of p = 2x + 4y + 3z
subject to : 3x + 4y + 2z ≤ 60, x + 3y + 2z ≤ 80, 2x + y + 2z ≤ 40; x, y, z ≥ 0.
Ans. x = 0, y = 20/3, z = 50/3, max p = 230/3.
Q. Find the maximum value of p = 2x + 4y + z + w
subject to : 2x + y + 2z + 3w ≤ 12, 2x + y + 4z ≤ 16, 3x + 2z + 2w ≤ 20; x, y, z, w ≥ 0.
Ans. x = 0, y = 12, z = 0, w = 0, max p = 48.
Solve the following LP problems using the Big M method.
Q. Find the minimum value of p = 4x + 8y + 3z
subject to : 3x + 2y + z ≥ 3, 2x + y + 2z ≥ 3; x, y, z ≥ 0.
Ans. x = 3/4, y = 0, z = 3/4, min p = 21/4.
Q. Find the maximum value of p = 2x + y + 3z
subject to : 2x + 3y + 4z = 12, x + y + 2z ≤ 5; x, y, z ≥ 0.
Ans. x = 3, y = 2, z = 0, max p = 8.
Q. Find the minimum value of p = 5x + 2y + 10z
subject to : x - z ≤ 10, y + z ≥ 10; x, y, z ≥ 0.
Ans. x = 0, y = 10, z = 0, min p = 20.
Solve the following LP problems to show that they have alternative optimal solutions.
Q. Find the minimum value of p = 2x + 8y
subject to : 2x + 2y ≥ 14, 5x + y ≥ 10, x + 4y ≥ 12; x, y ≥ 0.
Ans. (i) x = 12, y = 0, min p = 24. (ii) x = 16/3, y = 5/3, min p = 24.
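Both stated solutions can be verified to be feasible and to give the same objective value, which is what "alternative optima" means here: the objective 2x + 8y is parallel to the binding constraint x + 4y ≥ 12. A small check of ours:

```python
from fractions import Fraction as F

cons = [(2, 2, 14), (5, 1, 10), (1, 4, 12)]   # a*x + b*y >= r
for x, y in [(12, 0), (F(16, 3), F(5, 3))]:
    assert x >= 0 and y >= 0
    assert all(a * x + b * y >= r for a, b, r in cons)
    assert 2 * x + 8 * y == 24                # identical objective value
```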
Q. Find the maximum value of p = x + 2y + 3z - w
subject to : x + 2y + 3z = 15, x + 2y + z + w ≤ 10, 2x + y + 5z ≤ 20; x, y, z, w ≥ 0.
Ans. (i) x = y = z = 5/2, w = 0, max p = 15, or (ii) x = 0, y = 15/7, z = 25/7, w = 0, max p = 15.
Solve the following LP problems to show that they have unbounded solutions.
Q. Find the maximum value of p = x + 2y + 3z
subject to : x + y + z ≥ 500, x + 2y + 3z ≥ 700, -y + 3z ≥ 0; x, y, z ≥ 0.
Ans. The solution is unbounded.
Q. Find the maximum value of p = 50x + 150y + 100z
subject to : 5x + 5y + 5z ≥ 2500, 5x + 10y + 15z ≥ 3500, 3x - y + 3z ≥ 0; x, y, z ≥ 0.
Ans. The solution is unbounded.
Solve the following LP problems to show that they have no feasible solutions.
Q. Find the minimum value of p = x - 2y - 3z
subject to : 2x + y + 2z = 2, 2x + 3y + 2z = 1; x, y, z ≥ 0.
Ans. There is no feasible solution (subtracting the equalities gives 2y = -1, i.e. y = -1/2 < 0).
Q. Find the maximum value of p = x + 3y
subject to : x - y ≥ 1, 3x - y ≤ -3; x, y ≥ 0.
Ans. There is no feasible solution.