
Krishna's
TEXT BOOK on
Linear Programming
(For B.A. and B.Sc. IIIrd year students of All Colleges affiliated to Universities in Uttar Pradesh)

As per U.P. UNIFIED Syllabus


(w.e.f. 2013-2014)

By

A. R. Vasishtha
Retd. Head, Dep’t. of Mathematics
Meerut College, Meerut (U.P.)


KRISHNA Prakashan Media (P) Ltd.


KRISHNA HOUSE, 11, Shivaji Road, Meerut-250 001 (U.P.), India
Jai Shri Radhey Shyam

Dedicated
to

Lord

Krishna
Authors & Publishers
Preface
This book on LINEAR PROGRAMMING has been specially written
according to the latest Unified Syllabus to meet the requirements of the B.A.
and B.Sc. Part-III Students of all Universities in Uttar Pradesh.

The subject matter has been discussed in such a simple way that the students
will find no difficulty in understanding it. The proofs of various theorems and
examples have been given in minute detail. Each chapter of this book
contains complete theory and a fairly large number of solved examples.
Sufficient problems have also been selected from various university examination
papers. At the end of each chapter an exercise containing objective questions has
been given.

We have tried our best to keep the book free from misprints. The authors
shall be grateful to the readers who point out errors and omissions which, in spite
of all care, might have crept in.

The authors, in general, hope that the present book will be warmly received
by the students and teachers. We shall indeed be very thankful to our colleagues
for recommending this book to their students.

The authors wish to express their thanks to Mr. S.K. Rastogi, M.D.,
Mr. Sugam Rastogi, Executive Director, Mrs. Kanupriya Rastogi, Director and
entire team of KRISHNA Prakashan Media (P) Ltd., Meerut for bringing
out this book in the present nice form.

The authors will feel amply rewarded if the book serves the purpose for
which it is meant. Suggestions for the improvement of the book are always
welcome.

August, 2013 — Authors


Syllabus
Linear Programming
U.P. UNIFIED (w.e.f. 2013-14)

B.A./B.Sc. Paper-IV (Optional) M.M. : 33 / 65

Unit-I
Linear programming problems, Statement and formation of general linear programming
problems, Graphical method, Slack and surplus variables, Standard and matrix forms of
linear programming problem, Basic feasible solution.

Unit-II
Convex sets, Fundamental theorem of linear programming, Simplex method, Artificial
variables, Big-M method, Two phase method.

Unit-III
Resolution of degeneracy, Revised simplex method, Sensitivity Analysis.

Unit-IV
Duality in linear programming problems, Dual simplex method, Primal-dual method,
Integer programming.

Unit-V
Transportation problems, Assignment problems.
Brief Contents
Dedication
Preface
Syllabus
Brief Contents

Linear Programming

Unit-I
1. Mathematical Preliminaries (Some Basic Concepts on Matrices and Linear Algebra)
2. Linear Programming Problem (Formulation and Graphical Solution)

Unit-II
3. Convex Sets and Their Properties
4. Simplex Method (Theory and Application)

Unit-III
5. Resolution of Degeneracy
6. Revised Simplex Method
7. Sensitivity Analysis

Unit-IV
8. Duality in Linear Programming
9. Integer Programming

Unit-V
10. The Transportation Problem
11. The Assignment Problem


Unit-I

Chapter-1: Mathematical Preliminaries (Some Basic Concepts on Matrices and Linear Algebra)

Chapter-2: Linear Programming Problem (Formulation and Graphical Solution)

1.1 Some Basic Concepts


Consider the system of equations

2x + 9y + 7z = 4,
3x + 4y − 3z = 5,
6x + 8y − 3z = 1,
4x − 2y + z = 2.

Here x, y and z are unknowns and their coefficients are all numbers. Arranging the
coefficients in the order in which they occur in the equations and enclosing them in
square brackets, we obtain a rectangular array of the form

2 9 7
3 4 −3 
 
6 8 −3 
4 −2 1

This rectangular array is an example of a matrix. The horizontal lines (→) are called rows
or row vectors, and the vertical lines (↓) are called columns or column vectors of the matrix.
There are 4 rows and 3 columns in this matrix. Therefore it is a matrix of the type 4 × 3.
The numbers 3, 4, −3, 2, etc., constituting this matrix are called its elements. The
difference between a matrix and a number should be clearly understood. A matrix is not a
number; it has no numerical value. It is a new kind of object formed from numbers.

It is just an ordered collection of numbers arranged in the form of a rectangular array.

By itself, 7 is a number, but in our notation of matrices [7] is a matrix of the type 1 × 1,
and we cannot write 7 = [7]: there is no relation of equality between a matrix and a
number.

We shall use capital letters (in bold type or in italic type) to denote matrices.

0 0 0 
5 0 1
Thus A =  and B = 0 0 0 
6 1 7 2 ×3  
0 0 0  3 ×3

are both matrices. They are of the type 2 × 3 and 3 × 3 respectively.

Sometimes we also use the brackets ( ) or the double bars ‖ ‖ in place of the square
brackets [ ] to denote matrices.

2 2 3 + 5 i 9  7 7
Thus A = , B =  , C = ,
2 2  −4 3 − 5 i 7 7

are all matrices each of the type 2 × 2.

1.2 Matrix
A set of mn numbers (real or complex) arranged in the form of a rectangular array having
m rows and n columns is called an m × n matrix [to be read as an 'm by n' matrix].

An m × n matrix is usually written as

A = [ a11  a12  ...  a1n ]
    [ a21  a22  ...  a2n ]
    [ a31  a32  ...  a3n ]
    [ ...  ...  ...  ... ]
    [ am1  am2  ...  amn ]

In compact form the above matrix is represented by A = [aij], i = 1, 2, ..., m; j = 1, 2, ..., n,
or simply by [aij]m×n. We write the general element of the matrix and enclose it in brackets
of the type [ ] or of the type ( ).

The numbers a11, a12 etc. of this rectangular array are called the elements of the matrix.

The element aij belongs to the i-th row and the j-th column and is sometimes called the
(i, j)-th element of the matrix. Thus in the element aij the first suffix i will always denote
the number of the row and the second suffix j the number of the column in which the
element occurs. In a matrix, the number of rows and the number of columns need not be equal.

1.3 Special Types of Matrices


1.3.1 Square Matrix
An m × n matrix for which m = n (i.e., the number of rows is equal to the number of columns)
is called a square matrix of order n. It is also called an n-rowed square matrix. The
elements a11, a22, a33, ..., ann are called the diagonal elements and the line along which
they lie is called the principal diagonal of the matrix. For example, the matrix

0 1 2 3
2 3 1 0
A = 
5 0 1 1
0 0 1 2 
 4 ×4

is a square matrix of order 4. The elements 0, 3, 1, 2 constitute the principal diagonal of


this matrix.

1.3.2 Unit Matrix or Identity Matrix


A square matrix each of whose diagonal elements is 1 and each of whose non-diagonal
elements is equal to zero is called a unit matrix or an identity matrix and is denoted by
I; the symbol In will denote a unit matrix of order n. Thus a square matrix A = [aij] is a
unit matrix if aij = 1 when i = j and aij = 0 when i ≠ j.

1.3.3 Null Matrix or Zero Matrix


An m × n matrix whose elements are all 0 is called the null matrix (or zero matrix) of the
type m × n. It is usually denoted by O, or more precisely by Om,n. Often a null matrix is
simply denoted by the symbol 0, read as 'zero'.

1.3.4 Row Matrices and Column Matrices


Any 1 × n matrix, which has only one row and n columns, is called a row matrix or a row
vector. Similarly any m × 1 matrix, which has m rows and only one column, is called a
column matrix or a column vector.

For example, X = [2  7  −8  5  11] is a row matrix of the type 1 × 5, while

Y = [  2 ]
    [ −9 ]
    [ 11 ]

is a column matrix of the type 3 × 1.

1.4 Submatrices of a Matrix


Any matrix obtained by omitting some rows and columns from a given m × n matrix A is
called a submatrix of A.

The matrix A itself is a sub-matrix of A as it can be obtained from A by omitting no rows


or columns.

A square submatrix of a square matrix A is called a principal submatrix, if its diagonal


elements are also the diagonal elements of the matrix A. Principal submatrices are
obtained only by omitting corresponding rows and columns.

 1 2 3 9
1 2 3   
For Example: The matrix   is a submatrix of the matrix A = 7 11 6 5  as it
0 2 1
0 2 1 8
can be obtained from A by omitting the second row and the fourth column.

1.5 Equality of Two Matrices


Two matrices A = [aij ] and B = [bij ] are said to be equal if
1. they are of the same size and
2. the elements in the corresponding places of the two matrices are the same i.e.,
aij = bij for each pair of subscripts i and j.

If two matrices A and B are equal, we write A = B. If two matrices A and B are not equal,
we write A ≠ B. If two matrices are not of the same size, they cannot be equal.

1.6 Addition of Matrices


Let A and B be two matrices of the same type m × n. Then their sum (to be denoted by
A + B) is defined to be the matrix of the type m × n obtained by adding the corresponding
elements of A and B. Thus if A = [aij]m×n and B = [bij]m×n, then A + B = [aij + bij]m×n.

Note that A + B is also a matrix of the type m × n.

For example, if

A = [ 3   2  −1 ]
    [ 4  −3   1 ] (2 × 3)

and

B = [ 1  −2   7 ]
    [ 3   2  −1 ] (2 × 3),

then

A + B = [ 3 + 1   2 − 2   −1 + 7 ]   =   [ 4   0   6 ]
        [ 4 + 3  −3 + 2    1 − 1 ]       [ 7  −1   0 ] (2 × 3)

Important Note : It should be noted that addition is defined only for matrices which are
of the same size. If two matrices A and B are of the same size, they are said to be
conformable for addition. If the matrices A and B are not of the same size, we cannot
find their sum.
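As an illustrative aside (not part of the original text), the sum above can be checked numerically; the NumPy sketch below only assumes that entrywise addition is attempted for arrays of the same shape :

# A minimal NumPy sketch of matrix addition (Section 1.6); illustrative only.
import numpy as np

A = np.array([[3, 2, -1],
              [4, -3, 1]])       # a matrix of the type 2 x 3
B = np.array([[1, -2, 7],
              [3, 2, -1]])       # another 2 x 3 matrix

assert A.shape == B.shape         # the matrices must be conformable for addition
print(A + B)                      # expected: [[4 0 6], [7 -1 0]]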

1.7 Properties of Matrix Addition


1. Matrix Addition is Commutative : If A and B be two m × n matrices, then
A + B = B + A.
2. Matrix Addition is Associative : If A, B, C be three matrices each of the type
m × n, then
(A + B) + C = A + (B + C).
3. Existence of Additive Identity : If O be the m × n matrix each of whose elements is
zero, then A + O = A = O + A for every m × n matrix A.
4. Existence of the Additive Inverse :
Negative of a Matrix : Let A = [aij]m×n. Then the negative of the matrix A is
defined as the matrix [−aij]m×n and is denoted by −A.

The matrix −A is the additive inverse of the matrix A. Obviously,

−A + A = O = A + (−A).

Here O is the null matrix of the type m × n. It is the identity element for matrix addition.
Subtraction of Two Matrices : If A and B are two m × n matrices, then we define

A − B = A + (−B).

Thus the difference A − B is obtained by subtracting from each element of A the
corresponding element of B.
5. Cancellation laws hold good in the case of addition of matrices, i.e., if A, B, C are
three m × n matrices, then
A + B = A + C ⇒ B = C (left cancellation law)
and B + A = C + A ⇒ B = C (right cancellation law).
6. The equation A + X = O has a unique solution X = −A in the set of all m × n
matrices.

1.8 Multiplication of a Matrix by a Scalar


Let A be any m × n matrix and k any number (real or complex) called scalar. The m × n
matrix obtained by multiplying every element of the matrix A by k is called the scalar
multiple of A by k and is denoted by k A or A k. Symbolically, if A = [aij ]m× n, then
k A = A k = [kaij ]m× n.

3 2 −1
For example, if k = 2 and A =  ,
4 − 3 1
2 ×3

2 × 3 2 × 2 2 × −1 6 4 −2 
then 2A =   =
2 × 4 2 × −3 2 × 1 8 −6 2 
2 ×3 .
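A similar sketch (again an addition to the text) shows scalar multiplication acting on every element, and that kA = Ak for a scalar k :

# Scalar multiple of a matrix (Section 1.8); illustrative only.
import numpy as np

A = np.array([[3, 2, -1],
              [4, -3, 1]])
k = 2
print(k * A)                          # expected: [[6 4 -2], [8 -6 2]]
print(np.array_equal(k * A, A * k))   # True: kA = Ak for a scalar k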

1.8.1 Properties of Multiplication of a Matrix by a Scalar


Theorem 1: If A and B are two matrices each of the type m × n, then k(A + B) = kA + kB,
i.e., scalar multiplication of matrices distributes over the addition of matrices.

Theorem 2: If p and q are two scalars and A is any m × n matrix, then (p + q)A = pA + qA.

Theorem 3: If p and q are two scalars and A is any m × n matrix, then p(qA) = (pq)A.

Theorem 4: If A be any m × n matrix and k be any scalar, then (−k)A = −(kA) = k(−A).

Theorem 5: If A be any m × n matrix, then

(i) 1A = A,
(ii) (−1)A = −A.

Theorem 6: If A and B are two m × n matrices, then −(A + B) = −A − B.

1.9 Multiplication of Two Matrices


Let A = [aij]m×n and B = [bjk]n×p be two matrices such that the number of columns in A
is equal to the number of rows in B. Then the m × p matrix C = [cik]m×p such that

cik = Σ (j = 1 to n) aij bjk    [note that the summation is with respect to the repeated suffix j]

is called the product of the matrices A and B in that order, and we write C = AB.

The product AB of two matrices A and B exists if and only if the number of columns in A
is equal to the number of rows in B. Two such matrices are said to be conformable for
multiplication. The rule of multiplication is row-by-column multiplication.

For example, if

A = [ a11  a12 ]
    [ a21  a22 ]
    [ a31  a32 ] (3 × 2)

and

B = [ b11  b12 ]
    [ b21  b22 ] (2 × 2),

then

AB = [ a11 b11 + a12 b21   a11 b12 + a12 b22 ]
     [ a21 b11 + a22 b21   a21 b12 + a22 b22 ]
     [ a31 b11 + a32 b21   a31 b12 + a32 b22 ] (3 × 2)

Important Note : If the product AB exists, then it is not necessary that the product BA
will also exist. For example if A is a 4 × 5 matrix and B is a 5 × 3 matrix, then the product
AB exists while the product BA does not exist.
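The row-by-column rule and the conformability condition can be illustrated with a short NumPy sketch (added here for illustration; the 4 × 5 and 5 × 3 shapes are the ones used in the note above) :

# Matrix product (Section 1.9): AB exists only when cols(A) = rows(B).
import numpy as np

A = np.ones((4, 5))               # a 4 x 5 matrix
B = np.ones((5, 3))               # a 5 x 3 matrix

C = A @ B                         # defined: 5 columns of A match 5 rows of B
print(C.shape)                    # (4, 3), i.e., an m x p matrix

# The product BA is not defined here: B has 3 columns but A has 4 rows,
# so B @ A raises a ValueError about mismatched dimensions.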

1.10 Properties of Matrix Multiplication


1. Matrix multiplication is associative, if conformability is assured; i.e.,
A(BC) = (AB)C if A, B, C are m × n, n × p, p × q matrices respectively.

2. Multiplication of matrices is distributive with respect to addition of matrices i.e.,

A(B + C) = AB + AC,
where A, B, C are any three, m × n, n × p, n × p matrices respectively.

3. The multiplication of matrices is not always commutative.


Whenever AB = BA, the matrices A and B are said to commute.

If AB = −BA the matrices A and B are said to anti-commute.

4. If A be any m × n matrix and On,p be an n × p null matrix, then A On,p = Om,p, where
Om,p is an m × p null matrix.
Similarly if Om,n be an m × n null matrix and A be any n × p matrix, then
Om,n A = Om,p.
If A be any n-rowed square matrix and O be an n-rowed null matrix, then
AO = OA = O.

5. The equation AB = O does not necessarily imply that at least one of the matrices A
and B must be a zero matrix; in other words, the product of two matrices can be a
zero matrix while neither of them is a zero matrix.

6. In the case of matrix multiplication, if AB = O, then it does not necessarily follow
that BA = O.

7. If A be an m × n matrix and In denotes the n-rowed unit matrix, it is easily seen that
A In = A = Im A.

1.11 Determinant of a Square Matrix


Let A be any square matrix. The determinant formed by the elements of A is said to be
the determinant of the matrix A. This is denoted by |A| or det A. Since in a determinant the
number of rows is equal to the number of columns, therefore only square matrices can
have determinants.

12 0 13  12 0 13
Hence, if  
A = 15 12 11 , then det A =| A | = 15 12 11
 
13 11 14  13 11 14

1.11.1 Difference between a Matrix and a Determinant


1. A matrix A cannot be reduced to a number, whereas a determinant can be
reduced to a number.
2. The number of rows may or may not be equal to the number of columns in a matrix,
while in a determinant the number of rows is equal to the number of columns.

3. On interchanging the rows and columns, a different matrix is formed in general,
while in a determinant an interchange of rows and columns does not change the
value of the determinant.

1.12 Non-Singular and Singular Matrices


A square matrix A is said to be non-singular or singular according as | A | ≠ 0 or | A | = 0.

1.13 Transpose of a Matrix


Let A = [aij]m×n. Then the n × m matrix obtained from A by changing its rows into
columns and its columns into rows is called the transpose of A and is denoted by the
symbol A′ or Aᵀ.
The operation of interchanging rows with columns is called transposition. Symbolically,
if

A = [aij]m×n,

then A′ = [bji]n×m, where bji = aij,

i.e., the (j, i)-th element of A′ is the (i, j)-th element of A.

For example, the transpose of the 3 × 4 matrix

A = [ 1  2  3  4 ]
    [ 2  3  4  1 ]
    [ 3  4  2  1 ] (3 × 4)

is the 4 × 3 matrix

A′ = [ 1  2  3 ]
     [ 2  3  4 ]
     [ 3  4  2 ]
     [ 4  1  1 ] (4 × 3)

The first row of A is the first column of A ′. The second row of A is the second column of
A′. The third row of A is the third column of A′.
Theorems : If A′ and B′ be the transposes of A and B respectively, then

1. (A′)′ = A;
2. (A + B)′ = A′ + B′, A and B being of the same size;
3. (kA)′ = kA′, k being any complex number;
4. (AB)′ = B′A′, A and B being conformable for multiplication.
The above law (4) is called the reversal law for transposes, i.e., the transpose of the
product is the product of the transposes taken in the reverse order.
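The reversal law (4) is easy to spot-check numerically (an added sketch with arbitrary conformable matrices) :

# Reversal law for transposes (Section 1.13): (AB)' = B'A'; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 4))
B = rng.random((4, 2))

print(np.allclose((A @ B).T, B.T @ A.T))   # True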

1.14 Adjoint or Adjugate of a Square Matrix


Let A = [aij]n×n be any n × n matrix. The transpose B′ of the matrix B = [Cij]n×n, where Cij
denotes the cofactor of the element aij in the determinant |A|, is called the adjoint of the
matrix A and is denoted by the symbol adj A.

Thus the adjoint of a matrix A is the transpose of the matrix formed by the cofactors of A,
i.e., if

A = [ a11  a12  ...  a1n ]
    [ a21  a22  ...  a2n ]
    [ ...  ...  ...  ... ]
    [ an1  an2  ...  ann ],

then

adj A = the transpose of the matrix [ C11  C12  ...  C1n ]
                                    [ C21  C22  ...  C2n ]
                                    [ ...  ...  ...  ... ]
                                    [ Cn1  Cn2  ...  Cnn ]

      = the matrix [ C11  C21  ...  Cn1 ]
                   [ C12  C22  ...  Cn2 ]
                   [ ...  ...  ...  ... ]
                   [ C1n  C2n  ...  Cnn ]

Note : Sometimes the adjoint of a matrix is also called the adjugate of that matrix.

Theorem : If A is any square matrix of order n, then A (adj A) = |A| In = (adj A) A.

1.15 Invertible Matrices : Inverse or Reciprocal of a Matrix


Let A be any n-rowed square matrix. Then a matrix B, if it exists, such that

AB = BA = In

is called the inverse of the matrix A.

Note : For the products AB, BA to be both defined, it is necessary that A and B are both
square matrices of the same order. Thus non-square matrices cannot possess an inverse.

Existence of the Inverse (Theorem) : The necessary and sufficient condition for a
square matrix A to possess an inverse is that |A| ≠ 0.
Important :

1. If A be an invertible matrix, then the inverse of A is (1/|A|) adj A. It is usual to denote
the inverse of A by A⁻¹.

Thus, A⁻¹ = (1/|A|) adj A, provided |A| ≠ 0.

2. Computation of the Inverse by Partitioning : A non-singular square matrix in
partitioned form is expressed as follows :

M = [ I  Q ]
    [ O  R ]

where I (a unit matrix) and R are square submatrices and Q, O (a null matrix) are two
matrices.
If R⁻¹ exists and is known, then the inverse of the matrix M is given by

M⁻¹ = [ I   −QR⁻¹ ]
      [ O    R⁻¹  ]
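Both results can be verified numerically (an added sketch, not from the text). Here adj A is recovered as |A| · A⁻¹, which merely restates the identity A⁻¹ = (1/|A|) adj A; the partitioned blocks I, Q, O, R below are small arbitrary choices :

# Inverse and the partitioned-inverse formula (Sections 1.14-1.15); illustrative.
import numpy as np

A = np.array([[12.0, 0, 13],
              [15, 12, 11],
              [13, 11, 14]])
detA = np.linalg.det(A)                    # about 681, non-zero
adjA = detA * np.linalg.inv(A)             # since A_inv = adj A / |A|
print(np.allclose(A @ adjA, detA * np.eye(3)))   # A (adj A) = |A| I_n

# Partitioned form M = [[I, Q], [O, R]] with inverse [[I, -Q R_inv], [O, R_inv]].
I2, O2 = np.eye(2), np.zeros((2, 2))
Q = np.array([[1.0, 2], [3, 4]])
R = np.array([[2.0, 1], [1, 1]])
M = np.block([[I2, Q], [O2, R]])
R_inv = np.linalg.inv(R)
M_inv = np.block([[I2, -Q @ R_inv], [O2, R_inv]])
print(np.allclose(M @ M_inv, np.eye(4)))         # True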

Theorem 1: If A, B be two n-rowed non-singular matrices, then AB is also non-singular
and

(AB)⁻¹ = B⁻¹ A⁻¹,

i.e., the inverse of a product is the product of the inverses taken in the reverse order.

Theorem 2: If A be an n × n non-singular matrix, then (A′)⁻¹ = (A⁻¹)′, i.e., the operations
of transposing and inverting are commutative.

1.16 Vectors
Any ordered n-tuple of numbers is called an n-vector. By an ordered n-tuple we mean a set
consisting of n numbers in which the place of each number is fixed. If x1, x2 ,..., x n be any
n numbers, then the ordered n-tuple X = ( x1, x2 ,...., x n) is called an n-vector. The ordered
triad ( x1, x2 , x3 ) is called a 3-vector. Similarly (1, 0, 1, –1) and (1, 8, –5, 7) are 4-vectors.
The n numbers x1, x2 ,...., x n are called components of the n-vector X = ( x1, x2 ,...., x n). A
vector may be written either as a row vector or as a column vector. If A be a matrix of the
type m × n, then each row of A will be an n-vector and each column of A will be an
m-vector. A vector whose components are all zero is called a zero vector and will be denoted
by O.

If k be any number and X be any vector, then relative to the vector X, k is called a scalar.

Algebra of Vectors : Since an n-vector is nothing but a row matrix or a column matrix,
therefore we can develop an algebra of vectors in the same manner as the algebra of
matrices.

Equality of Two Vectors : Two n-vectors X and Y where X = ( x1, x2 ,..., x n) and
Y = ( y1, y2 ,..., yn) are said to be equal if and only if their corresponding components are
equal i.e., if x i = yi, for all i = 1, 2,...., n

For example, if X = (1, 4, 7) and Y = (1, 4, 7), then X = Y.

Multiplication of a Vector by a Scalar (Number) :

If k be any number and X = (x1, x2, ..., xn), then by definition

kX = (kx1, kx2, ..., kxn).

The vector kX is called the scalar multiple of the vector X by the scalar k.

Properties of Addition and Scalar Multiplication of Vectors : If X, Y, Z be any three


n-vectors and p, q be any two numbers, then obviously

1. X + Y = Y + X

2. X + (Y + Z) = (X + Y) + Z

3. p(X + Y) = pX + pY

4. (p + q)X = pX + qX

5. p(qX) = (pq)X

1.17 Linear Dependence and Linear Independence of Vectors
Linearly Dependent Set of Vectors : A set of r n-vectors X1, X2, ..., Xr is said to be
linearly dependent if there exist r scalars (numbers) k1, k2, ..., kr, not all zero, such that

k1X1 + k2X2 + ... + krXr = O,

where O denotes the n-vector whose components are all zero.

Linearly Independent Set of Vectors : A set of r n-vectors X1, X2, ..., Xr is said to be
linearly independent if every relation of the type

k1X1 + k2X2 + ... + krXr = O

implies k1 = k2 = ... = kr = 0.
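In computations, linear dependence is usually tested by stacking the vectors as the rows of a matrix and comparing its rank with the number of vectors (an added sketch; the third vector below is deliberately the sum of the first two) :

# Testing linear dependence (Section 1.17) via matrix rank; illustrative only.
import numpy as np

X1 = np.array([1, 0, 1, -1])
X2 = np.array([1, 8, -5, 7])
X3 = X1 + X2                                  # dependent by construction

rank = np.linalg.matrix_rank(np.vstack([X1, X2, X3]))
print(rank)        # 2 < 3, so the set is linearly dependent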

1.18 A Vector as a Linear Combination of Vectors


A vector X which can be expressed in the form

X = k1X1 + k2X2 + ... + krXr

is said to be a linear combination of the set of vectors X1, X2, ..., Xr.

Here k1, k2, ..., kr are numbers.

The following two results are quite obvious :



1. If a set of vectors is linearly dependent, then at least one member of the set can be
expressed as a linear combination of the remaining members.
2. If a set of vectors is linearly independent then no member of the set can be
expressed as a linear combination of the remaining members.

1.19 Systems of Linear Non-Homogeneous Equations


It may seem that we can solve every pair of simultaneous equations of the type

a1 x + b1 y = c1,
a2 x + b2 y = c2,

but it is not so. For example, consider the simultaneous equations

3x + 4y = 5,
6x + 8y = 13.

There is no set of values of x and y which satisfies both these equations. Such equations
are said to be inconsistent.

Let us take another example. Consider the simultaneous equations

3x + 4y = 5,
6x + 8y = 10.

These equations are consistent since there exist values of x and y which satisfy both of
them. We see that x = −(4/3)c + 5/3, y = c constitute a solution of these equations,
where c is arbitrary. Thus these equations possess an infinite number of solutions.

Now we shall discuss the nature of solutions of a system of non-homogeneous linear


equations. Let

a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
...  ...  ...  ...  ...  ...                    ...(1)
am1 x1 + am2 x2 + ... + amn xn = bm

be a system of m non-homogeneous equations in n unknowns x1, x2 ,..., x n.

If we write

A = [ a11  a12  ...  a1n ]         X = [ x1 ]          B = [ b1 ]
    [ a21  a22  ...  a2n ]             [ x2 ]              [ b2 ]
    [ ...  ...  ...  ... ]             [ ... ]             [ ... ]
    [ am1  am2  ...  amn ] (m × n)     [ xn ] (n × 1)      [ bm ] (m × 1)

where A, X, B are m × n, n × 1 and m × 1 matrices respectively, the above equations can be
written in the form of a single matrix equation AX = B.

Any set of values of x1, x2 ,..., x n which simultaneously satisfy all these equations is called
a solution of the system (1). When the system of equations has one or more solutions,
the equations are said to be consistent, otherwise they are said to be inconsistent.

The matrix

[A B] = [ a11  a12  ...  a1n  b1 ]
        [ a21  a22  ...  a2n  b2 ]
        [ ...  ...  ...  ...  ... ]
        [ am1  am2  ...  amn  bm ]

is called the augmented matrix of the given system of equations.

1.20 Condition for a System of n Equations in n Unknowns to have a Unique Solution
Theorem 1: If A be an n-rowed non-singular matrix, X be an n × 1 matrix, B be an n × 1
matrix, the system of equations AX = B has a unique solution.
Proof: If A be an n-rowed non-singular matrix, the ranks of the matrices A and [A B] are
both n. Therefore the system of equations AX = B is consistent, i.e., possesses a solution.

Pre-multiplying both sides of AX = B by A⁻¹, we have

A⁻¹AX = A⁻¹B ⇒ IX = A⁻¹B ⇒ X = A⁻¹B,

which is a solution of the equation AX = B.

To show that the solution is unique, let us suppose that X1 and X2 are two solutions of
AX = B.

Then AX1 = B, AX2 = B ⇒ AX1 = AX2 ⇒ A⁻¹AX1 = A⁻¹AX2 ⇒ IX1 = IX2 ⇒ X1 = X2.

Hence the solution is unique.
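A numerical sketch (added for illustration) using the first three equations of Section 1.1: since the 3 × 3 coefficient matrix is non-singular, the solution X = A⁻¹B is unique; np.linalg.solve is preferred over forming A⁻¹ explicitly :

# Unique solution of AX = B when A is non-singular (Section 1.20); illustrative.
import numpy as np

A = np.array([[2.0, 9, 7],
              [3, 4, -3],
              [6, 8, -3]])
B = np.array([4.0, 5, 1])

print(np.linalg.det(A))            # non-zero (about -57), so A is non-singular
X = np.linalg.solve(A, B)          # computes A_inv @ B without forming A_inv
print(np.allclose(A @ X, B))       # True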

1.21 Working Rule for Finding the Solution of the Equation AX = B
Suppose the coefficient matrix A is of the type m × n, i.e., we have m equations in n
unknowns. Write the augmented matrix [A B] and reduce it to an Echelon form by
applying only E-row transformations on it. This Echelon form will enable us to know the
ranks of the augmented matrix [A B] and of the coefficient matrix A. Then the following
different cases arise :

Case I : Rank A < Rank [A B].

In this case the equations AX = B are inconsistent i.e., they have no solution.

Case II : Rank A = Rank [A B] = r (say).

In this case the equations AX = B are consistent, i.e., they possess a solution. If r < m, then
in the process of reducing the matrix [A B] to Echelon form, (m − r) equations will be
eliminated. The given system of m equations will then be replaced by an equivalent
system of r equations. From these r equations we shall be able to express the values of
some r unknowns in terms of the remaining n − r unknowns, which can be given
arbitrarily chosen values.

If r = n, then n − r = 0, so that no variable is to be assigned arbitrary values and therefore


in this case there will be a unique solution.

If r < n, then n − r variables can be assigned arbitrary values. So in this case there will be an
infinite number of solutions. Only n − r + 1 solutions will be linearly independent and the
rest of the solutions will be linear combinations of them.

If m < n, then r ≤ m < n. Thus in this case n − r > 0. Therefore when the number of
equations is less than the number of unknowns, the equations will always have an infinite
number of solutions, provided they are consistent.
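The three cases of the working rule can be mechanized by comparing ranks (an added sketch; the two 2 × 2 systems are the inconsistent and consistent examples of Section 1.19) :

# Consistency via ranks (Section 1.21): compare rank(A) with rank([A B]).
import numpy as np

def classify(A, b):
    aug = np.hstack([A, b.reshape(-1, 1)])        # augmented matrix [A B]
    rA = np.linalg.matrix_rank(A)
    rAug = np.linalg.matrix_rank(aug)
    if rA < rAug:
        return "inconsistent"                     # Case I: no solution
    if rA == A.shape[1]:
        return "unique solution"                  # Case II with r = n
    return "infinitely many solutions"            # Case II with r < n

A = np.array([[3.0, 4], [6, 8]])
print(classify(A, np.array([5.0, 13])))           # inconsistent
print(classify(A, np.array([5.0, 10])))           # infinitely many solutions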

1.22 Some Useful Results of Linear Algebra


1. Basis : A subset S of E n is said to be a basis of E n if
(i) S is linearly independent,
(ii) S generates E n, i.e., E n = L(S).

2. Generating Set : A set of vectors α1, α2, ..., αk ∈ E n is said to generate E n if every
vector belonging to E n can be expressed as a linear combination of α1, α2, ..., αk.

3. Any two bases of E n have the same number of elements.

4. Each set of n + 1 or more vectors of E n is linearly dependent (L.D.).

5. If a set S of n vectors of E n spans E n, then S is a basis of E n.

6. If S is a basis of E n, then every vector α ∈ E n can be uniquely expressed as a linear
combination of vectors of S.

2.1 Introduction
Linear Programming (L.P.) is a branch of Operations Research. In order to understand
the importance of its study, we must know something of its history and evolution. Its
main origin was during the Second World War (1939-1945). At that time, the military
management in England called upon a team of scientists to study the strategic and
tactical problems related to the air and land defence of the country. Since they had
very limited military resources, it was necessary to decide upon their most effective
utilization, e.g., efficient transportation, effective bombing, adequate food supply, etc.

Following the end of the war, the success of the military teams attracted the attention of
industrial managers who were seeking solutions to their complex executive-type
problems. The most common problem was to find what methods should be adopted so
that the total cost is minimum or the total profit is maximum. The first mathematical
technique in this field, called the Simplex Method of Linear Programming, was
developed, and its applications have since grown through the efforts and co-operation of
interested individuals in both academic institutions and industry.

While making use of the techniques of Linear Programming, a mathematical model of
the problem is formulated. This model is actually a simplified representation of the
problem, in which only the most important features are considered for reasons of
simplicity. Then an optimal or most favourable solution is found.

Thus this technique is concerned with optimization problems. A decision which, taking
into account all the circumstances, can be considered the best one is called an
optimal decision.

Thus, Linear Programming is the mathematical tool used in solving problems in business,
industry, commerce, military operations, etc. Here the problem before us is usually to
maximize the profit or minimize the cost keeping in mind the restrictions imposed upon
us by the limitations of various resources.

2.2 Linear Programming & Linear Programming Problem


Linear Programming : Linear programming is an important optimization (maximization or
minimization) technique used in decision making in business and everyday life for obtaining the
maximum or minimum value, as required, of a linear expression subject to a certain number
of given linear restrictions.

Linear Programming Problem (L.P.P.) : The linear programming problem in general calls for
optimizing (Maximizing/minimizing) a linear function of variables called the objective function
subject to a set of linear equations and/or linear inequations called the constraints or restrictions.

The term linear means that all the relations governing the problem are linear and the term
programming refers to the process of determining a particular programme or plan of
action.

Objective Function : The function which is to be optimized (maximized or minimized)


is called an objective function.

Constraints : The linear inequations (or equations) under which the objective
function is to be optimized are called the constraints.

2.3 Basic Requirements of a Linear Programming Problem (L.P.P.)
1. Well-defined Objective Function : The objective function of the L.P.P. should be
clear and well-defined. The objective may be either to maximize the profit by
utilizing the available resources or it may be to minimize the cost by using a limited
amount of various resources required for production.

2. Quantitative Measurement of Elements : Each element must be capable of
being measured quantitatively. Numerical data must be given so that
the relationships among the elements may be considered.

3. Presence of Constraints and Restrictions : The available resources which are to


be allocated among various competing activities should be limited.

4. Alternate Lines of Action : There must be different lines of action available to
perform the job.

5. Non-negative Restrictions : All the variables considered for making decisions


assume non-negative values.

6. Linearity : A primary requirement of a L.P.P. is that both the objective function


and all the constraints must be expressed in terms of linear equations and inequations.
Linearity implies that products of variables such as xy, x²y, etc., powers of
variables such as x², x³, etc., and combinations of variables of the type ax + b log y,
etc., are not allowed.

7. Additivity : Additivity means that if it takes x minutes on machine A to make


product P and y minutes to make product Q, then the total time on machine A
devoted to this production is x + y, provided the time required to change the
machine from product P to product Q is negligible.

8. Multiplicativity : Multiplicativity means that if the cost of production of 1 unit of


an item is ` 10, then the cost of production of x units of the same item is ` 10 x.
In terms of profit we can say that the profit from selling a given number of units is
the profit per unit multiplied by the number of units sold.

9. Divisibility : Divisibility means that the fractional levels of variables must be


permissible besides integral values.

10. Finite Number of Constraints : The number of activities involved in a problem


should be finite, thus leading to a finite number of constraints in the problem.

2.4 Mathematical Description of a General Linear Programming Problem
A general linear programming problem can be stated as follows :

Find x1, x2 ,..., x n which optimize the linear function

Z = c1 x1 + c2 x2 + ...+ cn x n ...(1)

subject to the constraints

a11 x1 + a12 x2 + ... + a1n xn (≤ = ≥) b1
a21 x1 + a22 x2 + ... + a2n xn (≤ = ≥) b2
...  ...  ...  ...  ...  ...                    ...(2)
am1 x1 + am2 x2 + ... + amn xn (≤ = ≥) bm

and the non-negative restrictions

x1, x2 ,..., x n ≥ 0, …(3)

where all a11, a12 ,...., amn; b1, b2 ,..., bm; c1, c2 ,...., cn are constants and x1, x2 ,..., x n are
variables.

The function Z given in (1) is called the objective function, the conditions given in (2)
are called the linear constraints and the conditions given in (3) are called the
non-negative restrictions of the linear programming problem.

This linear programming problem may be stated in matrix form as follows :

Find x1, x2 ,..., x n, so as to optimize

Z = cx

subject to A x (≤ = ≥) b

and x ≥ 0,

where A = [aij]m×n is called the coefficient matrix,

c = [c1, c2, ..., cn] is a row matrix known as the price vector,

b = [b1, b2, ..., bm]T is a column matrix called the requirement vector,

x = [x1, x2, ..., xn]T is a column matrix of variables,

and 0 is a null matrix of the type n × 1.

Note : For the matrix form all constraints in L.P.P. should have the same inequality (i.e.,
≥ or ≤ ) or equality sign.
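The matrix form above is exactly what standard LP software consumes. As an illustrative sketch (not part of the text), SciPy's linprog minimizes c x subject to A x ≤ b and x ≥ 0, so a maximization problem is passed in with the price vector negated; the data below are assumed example values :

# Matrix form of an L.P.P. (Section 2.4) fed to a solver; illustrative only.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0])             # price vector (assumed example data)
A = np.array([[1.0, 1.0],            # coefficient matrix
              [2.0, 1.0]])
b = np.array([4.0, 5.0])             # requirement vector

# Maximize Z = c x  <=>  minimize (-c) x, subject to A x <= b and x >= 0.
res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)               # here the maximum is at x = [0, 4], Z = 8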

2.5 Mathematical Formulation of a Linear Programming Problem
To make use of the linear programming technique it is important to recognize a problem
which can be handled by linear programming and then to formulate its mathematical
model.

Working Rule to form a Linear Programming Problem : The following steps are
helpful in the formulation of a linear programming problem :

1. Identify the variables in the L.P.P. and denote them by x1, x2, x3, etc.

2. Identify the objective function and express it as a linear function of the variables
x1, x2 , x3 etc.

3. Find the type of the objective function. It may be in the form of maximizing profits
or minimizing costs.

4. Identify all the constraints and express them as linear inequations or equations.

Example 1: A goldsmith manufactures necklaces and bracelets. The total number of


necklaces and bracelets that he can handle per day is at most 24. It takes one hour to make
a bracelet and half an hour to make a necklace. It is assumed that he can work for a
maximum of 16 hours a day. Further the profit on a bracelet is ` 300 and the profit on a
necklace is ` 100. Formulate this problem as a linear programming problem so as to
maximize the profit.

Solution: Suppose the goldsmith manufactures x1 necklaces and x2 bracelets per day.

Since the profit on a necklace is ` 100 and profit on a bracelet is ` 300, therefore the total
profit Z in ` is given by

Z = 100 x1 + 300 x2 ...(1)

Since it takes half an hour to make one necklace, the time required to make x1
necklaces = (1/2) x1 hours.

Again it takes one hour to make one bracelet, so the time required to make x2 bracelets

= 1 · x2 hours = x2 hours.

Therefore, the total time required to make x1 necklaces and x2 bracelets

= (x1/2 + x2) hours. ...(2)

Since the total time available per day is 16 hours, therefore

x1/2 + x2 ≤ 16, or x1 + 2 x2 ≤ 32. ...(3)

The total number of necklaces and bracelets that the goldsmith can manufacture in a day
is at most 24, so we have

x1 + x2 ≤ 24. ...(4)

Also the number of necklaces and bracelets manufactured can never be negative,
therefore

x1 ≥ 0, x2 ≥ 0. ...(5)

Hence, the linear programming problem formulated from the given problem is as follows:

Maximize Z = 100 x1 + 300 x2

subject to the constraints

x1 + 2 x2 ≤ 32

x1 + x2 ≤ 24

and the non-negative restrictions x1 ≥ 0, x2 ≥ 0.
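As an added numerical check (not in the book), this L.P.P. can be solved with SciPy; the optimum should occur at the vertex x1 = 0, x2 = 16 of the feasible region, giving Z = 4,800 :

# Example 1 (goldsmith) solved numerically; illustrative only.
import numpy as np
from scipy.optimize import linprog

c = np.array([100.0, 300.0])             # profit coefficients
A_ub = np.array([[1.0, 2.0],             # x1 + 2 x2 <= 32  (working hours)
                 [1.0, 1.0]])            # x1 + x2  <= 24  (pieces per day)
b_ub = np.array([32.0, 24.0])

res = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(res.x, -res.fun)                   # expected: about [0, 16] and 4800.0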

Example 2: A tyre factory produces three types of tyres T1, T2 , T3 . Three different types
of chemicals say C1, C2 , C3 are required for production. One T1 tyre needs 2 units of C1, 3
units of C3 ; one T2 tyre needs 3 units of C1, 2 units of C2 and 2 units of C3 ; and one T3
tyre needs 5 units of C2 and 4 units of C3 . The factory has only a stock of 20 units of
C1, 25 units of C2 and 30 units of C3 . Further the profit from the sale of one tyre T1 is ` 6,
one tyre T2 is ` 10 and of one tyre T3 is ` 8. Assuming that the factory can sell all that it
produces, formulate a linear programming problem to maximize its profit.

Solution: Let the factory produce x1 tyres of type T1, x2 tyres of type T2 and x3 tyres of
type T3 .

The given information can be put in tabular form as below :


Chemicals                  Tyre T1 (x1)   Tyre T2 (x2)   Tyre T3 (x3)   Total Chemical Available

Chemical C1                     2              3              0                  20
Chemical C2                     0              2              5                  25
Chemical C3                     3              2              4                  30
Profit in ` (per tyre)          6             10              8

The total profit Z in ` is given by

Z = 6 x1 + 10 x2 + 8 x3 . ...(1)

The total quantity of chemical C1 required = (2 x1 + 3 x2 ) units.

Since the factory has a stock of 20 units of chemical C1, therefore we have

2 x1 + 3 x2 ≤ 20. ...(2)

Similarly, considering the total quantity of the chemicals C2 and C3 required, we have

2 x2 + 5 x3 ≤ 25 ...(3)

and 3 x1 + 2 x2 + 4 x3 ≤ 30 ...(4)

Also, we have

x1 ≥ 0, x2 ≥ 0, x3 ≥ 0, ...(5)

since the factory cannot produce negative quantities.

Hence, the linear programming problem formulated from the given problem is as follows:

Maximize Z = 6 x1 + 10 x2 + 8 x3

subject to the constraints

2 x1 + 3 x2 ≤ 20

2 x2 + 5 x3 ≤ 25

3 x1 + 2 x2 + 4 x3 ≤ 30

and the non-negative restrictions

x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.

Example 3: The objective of a diet problem is to ascertain the quantities of certain foods
that should be eaten to meet certain nutritional requirements at minimum cost. The
consideration is limited to milk, green vegetables and eggs, and to vitamins A, B and C. The
number of milligrams of each of these vitamins contained within a unit of each food and
their daily minimum requirements, along with the cost of each food, are given in the table
below :

Vitamin       Litre of Milk   Kg. of Vegetables   Dozen of Eggs   Minimum Daily Requirement

A                   1                 1                 10                 1 mg.
B                 100                10                 10                50 mg.
C                  10               100                 10                10 mg.
Cost in `          20                10                  8

Formulate a linear programming problem for this diet problem. [Meerut 2004]

Solution: Let the daily diet consist of x1 litres of milk, x2 kgs. of vegetables and x3 dozens
of eggs.

∴ the total cost Z per day in ` is given by

Z = 20 x1 + 10 x2 + 8 x3 ...(1)

Total amount of vitamin A in the daily diet is

( x1 + x2 + 10 x3 ) mg.,

which should be at least equal to 1 mg., therefore

x1 + x2 + 10 x3 ≥ 1. ...(2)

Similarly, considering the total amounts of vitamins B and C in the daily diet, we have

100 x1 + 10 x2 + 10 x3 ≥ 50 ...(3)

and 10 x1 + 100 x2 + 10 x3 ≥ 10 ...(4)

Since the quantities of different food items to be consumed cannot be negative, therefore
we have

x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.

Hence, the linear programming problem formulated from the given diet problem is :

Minimize Z = 20 x1 + 10 x2 + 8 x3 ,

subject to the constraints

x1 + x2 + 10 x3 ≥ 1

100 x1 + 10 x2 + 10 x3 ≥ 50

10 x1 + 100 x2 + 10 x3 ≥ 10

and the non-negative restrictions

x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.
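Minimization with '≥' constraints fits the same solver once each constraint is multiplied by −1 to obtain the '≤' form (an added sketch for this diet problem) :

# Example 3 (diet problem) with >= constraints; illustrative only.
import numpy as np
from scipy.optimize import linprog

c = np.array([20.0, 10.0, 8.0])        # cost per litre / kg / dozen
A_ge = np.array([[1.0, 1, 10],         # vitamin A
                 [100, 10, 10],        # vitamin B
                 [10, 100, 10]])       # vitamin C
b_ge = np.array([1.0, 50, 10])

# A_ge x >= b_ge  <=>  -A_ge x <= -b_ge, which is the form linprog accepts.
res = linprog(c, A_ub=-A_ge, b_ub=-b_ge, bounds=[(0, None)] * 3)
print(res.x, res.fun)                  # optimal diet and its minimum cost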

Example 4: According to the medical experts it is necessary for an adult to consume at


least 75 gms of proteins, 85 gms of fats and 300 gms of carbohydrates daily. The following
table gives the analysis of the food items readily available in the market with their
respective costs.

Food Type        Food value (in gm.) per 100 gms          Cost in ` per kg.
                 Proteins     Fats     Carbohydrates

A                  18.0       15.0          —                    3.0
B                  16.0        4.0         7.0                   4.0
C                   4.0       20.0         2.5                   2.0
D                   5.0        8.0        40.0                   1.5

Minimum daily
requirement        75.0       85.0       300.0

Formulate a linear programming problem for an optimum diet.

Solution: Let the daily diet consist of x1 kg. of food A, x2 kg. of food B, x3 kg. of food C
and x4 kg. of food D.

Then the total cost per day in ` is

Z = 3 x1 + 4 x2 + 2 x3 + 1. 5 x4 . ...(1)

Total amount of proteins in the daily diet is

(180 x1 + 160 x2 + 40 x3 + 50 x4) gms.

Since the minimum daily requirement of proteins is 75 gms, therefore we have

180 x1 + 160 x2 + 40 x3 + 50 x4 ≥ 75 ...(2)

Similarly, considering the total amounts of fats and carbohydrates in the diet, we have

150 x1 + 40 x2 + 200 x3 + 80 x4 ≥ 85 ...(3)

and 70 x2 + 25 x3 + 400 x4 ≥ 300. ...(4)

Since, the daily diet cannot contain quantities with negative values of any food item,
therefore

x1 ≥ 0, x2 ≥ 0, x3 ≥ 0, x4 ≥ 0. ...(5)

Hence, the linear programming problem formulated for the given diet problem is :

Minimize Z = 3 x1 + 4 x2 + 2 x3 + 1. 5 x4

subject to the constraints

180 x1 + 160 x2 + 40 x3 + 50 x4 ≥ 75

150 x1 + 40 x2 + 200 x3 + 80 x4 ≥ 85

70 x2 + 25 x3 + 400 x4 ≥ 300

and the non-negative restrictions

x1 ≥ 0, x2 ≥ 0, x3 ≥ 0 and x4 ≥ 0.

Example 5: A manufacturer of patent medicines proposes to prepare a production plan
for medicines A and B. There are sufficient ingredients available to make 20,000 bottles
of medicine A and 40,000 bottles of medicine B, but there are only 45,000 bottles into
which either of the medicines can be filled. Further, it takes three hours to prepare enough
material to fill 1,000 bottles of medicine A and one hour to prepare enough material to fill
1,000 bottles of medicine B, and there are 66 hours available for this operation. The profit
is ` 8 per bottle for medicine A and ` 7 per bottle for medicine B. Formulate this problem
as a L.P.P.

Solution: Suppose, the manufacturer produces x1 thousand bottles of medicine A and x2


thousand bottles of medicine B.

Since there are only 45,000 bottles available for filling the medicines A and B, therefore
we have x1 + x2 ≤ 45.

It takes three hours to prepare enough material to fill 1,000 bottles of medicine A.
Therefore, the time required to fill x1 thousand bottles of medicine A is 3 x1 hours.

Similarly, the time required to fill x2 thousand bottles of medicine B is x2 hours.

Thus, the total time required to fill x1 thousand bottles of medicine A and x2 thousand
bottles of medicine B is (3 x1 + x2 ) hours.

The total time available for this operation is 66 hours. So, we have 3 x1 + x2 ≤ 66.

There are sufficient ingredients available to make 20 thousand bottles of medicine A and
40 thousand bottles of medicine B, therefore we have x1 ≤ 20, x2 ≤ 40.

Since the number of bottles cannot be negative, therefore we have x1 ≥ 0, x2 ≥ 0.

The profit per bottle for medicine A is ` 8 and for medicine B is ` 7. So, the total profit in
` on x1 thousand bottles of medicine A and x2 thousand bottles of medicine B is given by

Z = 8 × 1,000 x1 + 7 × 1,000 x2 = 8,000 x1 + 7,000 x2.

Hence, the linear programming problem formulated for the given problem is

Maximize Z = 8,000 x1 + 7,000 x2,

subject to the constraints :

x1 + x2 ≤ 45

3 x1 + x2 ≤ 66

x1 ≤ 20

x2 ≤ 40

and the non-negative restrictions x1 ≥ 0, x2 ≥ 0.

Example 6: A firm manufactures two types of products A and B and sells them at a profit
of ` 2 on type A and ` 3 on type B. Each product is processed on two machines E and F.
Type A requires one minute of processing time on E and two minutes on F; type B requires
one minute on E and one minute on F. The machine E is available for not more than 6
hours 40 minutes while machine F is available for 10 hours during any working day.
Formulate the problem as a linear programming problem.

Solution: Let the manufacturer produce x1 units of the product of type A and x2 units of
the product of type B.

The given information can be systematically arranged in the form of the following table :

Machine             Time of processing (minutes per unit)       Time Available (minutes)
                    Type A (x1 units)     Type B (x2 units)

E                           1                     1                       400
F                           2                     1                       600

Profit per unit            ` 2                   ` 3

Since the machine E takes 1 minute time for processing a unit of type A and 1 minute
time for processing a unit of type B, therefore the total time required on machine E is
( x1 + x2 ) minutes.

But the machine E is available for not more than 6 hours and 40 minutes = 400 minutes,
therefore we have

x1 + x2 ≤ 400.

Similarly, the total time (in minutes) required on machine F is 2 x1 + x2 .

Since the machine F is available for not more than 10 hours = 600 minutes, therefore we
have

2 x1 + x2 ≤ 600.

Since it is not possible to produce negative quantities, so we have x1 ≥ 0, x2 ≥ 0.

The profit on type A is ` 2 per unit so the profit on selling x1 units of type A will be ` 2 x1.
Similarly, the profit on selling x2 units of type B will be ` 3 x2 .

Therefore, the total profit (in `) on selling x1 units of type A and x2 units of type B is

Z = 2 x1 + 3 x2 .

Hence, the required linear programming problem formulated for the given problem is

Maximize Z = 2 x1 + 3 x2

subject to the conditions

x1 + x2 ≤ 400

2 x1 + x2 ≤ 600

x1 ≥ 0, x2 ≥ 0.

Example 7: A manufacturer produces three models (I, II, and III) of a certain product. He
uses two types of raw materials (A and B) of which 4,000 and 6,000 units respectively are
available. The raw material requirements per unit of the three models are given below :

Raw Materials        Requirement per unit of given model
                         I          II          III

A                        2           3            5
B                        4           2            7

The labour time for each unit of model I is twice that of model II and three times that of
model III. The entire labour force of the factory can produce the equivalent of 2,500 units
of model I. A market survey indicates that the minimum demand of the three models are
500, 500 and 375 units respectively. However, the ratio of the number of units produced
must be equal to 3:2:5. Assume that the profits per unit of models I, II and III are ` 60,
40 and 100 respectively. Formulate the problem as a L.P.P. in order to determine the
number of units of each product which will maximize profit.

Solution: Let the manufacturer produce x1, x2 , x3 units of models I, II and III
respectively. Since the profit per unit on model I, II and III is ` 60, ` 40 and ` 100
respectively, the objective function is to maximize the profit :

Z = 60 x1 + 40 x2 + 100 x3 . ...(1)

The raw materials used and available give rise to the following constraints respectively.

2 x1 + 3 x2 + 5 x3 ≤ 4,000 ...(2)

4 x1 + 2 x2 + 7 x3 ≤ 6,000 ...(3)

Now if t be the labour time required for one unit of model I, then the time required for
one unit of model II will be t/2 and that for model III will be t/3. As the factory can
produce the equivalent of 2,500 units of model I, the restriction on the production time will be

t x1 + (t/2) x2 + (t/3) x3 ≤ 2,500 t, i.e., 6 x1 + 3 x2 + 2 x3 ≤ 15,000. ...(4)

Now according to the market demand, we must have

x1 ≥ 500, x2 ≥ 500, x3 ≥ 375. ...(5)

Further the ratio of the number of units of different types of models is 3:2:5 i.e.,

x1 = 3 k, x2 = 2 k, x3 = 5 k.

These give rise to the constraints

(1/3) x1 = (1/2) x2 and (1/2) x2 = (1/5) x3. ...(6)

Thus, the linear programming problem is as follows :

Maximize Z = 60 x1 + 40 x2 + 100 x3

subject to the conditions :

2 x1 + 3 x2 + 5 x3 ≤ 4,000
4 x1 + 2 x2 + 7 x3 ≤ 6,000
6 x1 + 3 x2 + 2 x3 ≤ 15,000
2 x1 = 3 x2, 5 x2 = 2 x3 and x1 ≥ 500, x2 ≥ 500, x3 ≥ 375.

Note : x1, x2, x3 will automatically be non-negative due to (5). So we need not
mention that x1, x2, x3 ≥ 0; even if written, these would be redundant constraints.
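Equality constraints such as (6) are passed to a solver separately from the inequalities (an added sketch using SciPy's A_eq/b_eq arguments). As it happens, the stated data appear mutually incompatible — the ratios make the raw material A usage 18.5 x2 while x2 ≥ 500 forces that above 4,000 — so the solver is expected to report infeasibility rather than an optimum :

# Example 7 with mixed inequality and equality constraints; illustrative only.
import numpy as np
from scipy.optimize import linprog

c = np.array([60.0, 40.0, 100.0])          # profit per unit
A_ub = np.array([[2.0, 3, 5],              # raw material A
                 [4, 2, 7],                # raw material B
                 [6, 3, 2]])               # labour-time equivalent
b_ub = np.array([4000.0, 6000, 15000])
A_eq = np.array([[2.0, -3, 0],             # 2 x1 = 3 x2
                 [0, 5, -2]])              # 5 x2 = 2 x3
b_eq = np.zeros(2)
bounds = [(500, None), (500, None), (375, None)]   # minimum demands

res = linprog(-c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print((res.x, -res.fun) if res.success else res.message)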

Example 8: A complete unit of a certain product consists of four units of component A and
three units of component B. The two components (A and B) are manufactured from two
different raw materials of which 100 units and 200 units respectively are available.
Three departments are engaged in the production process with each department using a
different method for manufacturing the components. The following table gives the raw
material requirements per production run and the resulting units of each component. The
objective is to determine the number of production runs for each department which will
maximize the total number of component units of the final product:

Department        Input per run (units)               Output per run (units)
                  Raw Material I   Raw Material II    Component A   Component B

1                        7                5                 6             4
2                        4                8                 5             8
3                        2                7                 7             3

Formulate a linear programming model to the above problem.



Solution: Let x1, x2 , x3 be the number of production runs for the departments 1, 2, 3
respectively.

The raw materials available give the restrictions

7 x1 + 4 x2 + 2 x3 ≤ 100 ...(1)

5 x1 + 8 x2 + 7 x3 ≤ 200 ...(2)

The total number of units produced by three departments :

6 x1 + 5 x2 + 7 x3 (component A) and 4 x1 + 8 x2 + 3 x3 (component B).

Now the final product requires 4 units of component A and 3 units of component B. So the
maximum number of units of the final product cannot exceed the smaller of

(1/4)(6 x1 + 5 x2 + 7 x3) and (1/3)(4 x1 + 8 x2 + 3 x3).

Thus the objective function becomes

Max. Z = min { (1/4)(6 x1 + 5 x2 + 7 x3), (1/3)(4 x1 + 8 x2 + 3 x3) }.

Since this objective function is not linear, a suitable transformation can be used to
convert it into a L.P.P.

Let min { (1/4)(6 x1 + 5 x2 + 7 x3), (1/3)(4 x1 + 8 x2 + 3 x3) } = v.

Then (1/4)(6 x1 + 5 x2 + 7 x3) ≥ v and (1/3)(4 x1 + 8 x2 + 3 x3) ≥ v.

Thus we get the L.P.P. as follows :

Maximize Z = v,

subject to the constraints :

6 x1 + 5 x2 + 7 x3 − 4 v ≥ 0

4 x1 + 8 x2 + 3 x3 − 3 v ≥ 0

7 x1 + 4 x2 + 2 x3 ≤ 100

5 x1 + 8 x2 + 7 x3 ≤ 200

and the non-negative restrictions x1, x2 , x3 , v ≥ 0.
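The auxiliary variable v simply becomes a fourth decision variable when the problem is fed to a solver (an added sketch; columns are x1, x2, x3, v, and the '≥ 0' rows are negated into '≤' form) :

# Example 8: maximizing a minimum via the auxiliary variable v; illustrative.
import numpy as np
from scipy.optimize import linprog

# Variables: [x1, x2, x3, v]; maximize v  <=>  minimize -v.
c = np.array([0.0, 0, 0, -1])
A_ub = np.array([
    [-6.0, -5, -7, 4],      # 6 x1 + 5 x2 + 7 x3 - 4v >= 0
    [-4, -8, -3, 3],        # 4 x1 + 8 x2 + 3 x3 - 3v >= 0
    [7, 4, 2, 0],           # raw material I:  <= 100
    [5, 8, 7, 0],           # raw material II: <= 200
])
b_ub = np.array([0.0, 0, 100, 200])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
print(res.x, -res.fun)      # runs per department and max complete units v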



Example 9: The owner of Metro Sports wishes to determine how many advertisements
to place in the selected three monthly magazines A, B and C. His objective is to advertise in
such a way that total exposure to principal buyers of expensive sports goods is maximized.
Percentages of readers for each magazine are known. Exposure in any particular magazine
is the number of advertisements placed multiplied by the number of principal buyers. The
following data may be used :

Exposure Category                     Magazine A     Magazine B     Magazine C

Readers (in lakhs)                         1             0.6            0.4
Principal Buyers                         10%             15%             7%
Cost per advertisement (in `)          5,000           4,500          4,250

The budgeted amount is at most ` 1 lakh for the advertisements. The owner has already
decided that magazine A should have no more than 6 advertisements and that B and C
each have at least two advertisements. Formulate an LP model for the problem.

Solution: Let x1, x2 , x3 be the number of advertisement in magazines A, B and C


respectively.

Then the LP formulation will be as follows :

Maximize Z = (10% of 1,00,000) x1 + (15% of 60,000) x2 + (7% of 40,000) x3

or Maximize Z = 10,000 x1 + 9,000 x2 + 2,800 x3

subject to the constraints : 5,000 x1 + 4,500 x2 + 4,250 x3 ≤ 1,00,000

x1 ≤ 6, x2 ≥ 2, x3 ≥ 2

and the non-negativity restrictions x1, x2, x3 ≥ 0.

Example 10: A manufacturer of biscuits is considering four types of gift packs containing
three types of biscuits : orange cream (OC), chocolate cream (CC), and wafers (W).
Market research conducted recently to assess the preferences of the consumers shows the
following types of assortments to be in good demand :

Assortment     Contents                          Selling price per kg (Rs.)

A              Not less than 40% of OC                     20
               Not more than 20% of CC
               Any quantity of W

B              Not less than 20% of OC                     25
               Not more than 40% of CC
               Any quantity of W

C              Not less than 50% of OC                     22
               Not more than 10% of CC
               Any quantity of W

D              No restrictions                             12

For the biscuits, the manufacturing capacity and costs are given below :

Biscuit variety                   OC      CC      W

Plant capacity (kg/day)          200     200     150
Manufacturing cost (Rs./kg)        8       9       7

Formulate a linear programming model to find the production schedule which maximizes
the profit assuming that there are no market restrictions.

Solution: Let xij kg. be the quantity of the j-th type of biscuit used in the i-th assortment,
where i = A, B, C, D and j = OC, CC, W (numbered 1, 2, 3 respectively).

Then in assortment A, xA1, xA2, xA3 denote the quantities in kg. of the OC, CC and W types of
biscuits respectively. Similarly we can interpret for the other assortments.

Now the given data can be put in the form of L.P.P. as follows :

Maximize Z = 20 (xA1 + xA2 + xA3) + 25 (xB1 + xB2 + xB3)
           + 22 (xC1 + xC2 + xC3) + 12 (xD1 + xD2 + xD3)
           − 8 (xA1 + xB1 + xC1 + xD1) − 9 (xA2 + xB2 + xC2 + xD2)
           − 7 (xA3 + xB3 + xC3 + xD3)

or Maximize Z = 12 xA1 + 11 xA2 + 13 xA3 + 17 xB1 + 16 xB2 + 18 xB3
              + 14 xC1 + 13 xC2 + 15 xC3 + 4 xD1 + 3 xD2 + 5 xD3

Subject to the constraints :

xA1 ≥ 0.40 (xA1 + xA2 + xA3)  }
xA2 ≤ 0.20 (xA1 + xA2 + xA3)  }  gift pack A

xB1 ≥ 0.20 (xB1 + xB2 + xB3)  }
xB2 ≤ 0.40 (xB1 + xB2 + xB3)  }  gift pack B

xC1 ≥ 0.50 (xC1 + xC2 + xC3)  }
xC2 ≤ 0.10 (xC1 + xC2 + xC3)  }  gift pack C

Plant capacity constraints are :

xA1 + xB1 + xC1 + xD1 ≤ 200
xA2 + xB2 + xC2 + xD2 ≤ 200
xA3 + xB3 + xC3 + xD3 ≤ 150

and the non-negativity restrictions are xij ≥ 0 (i = A, B, C, D; j = 1, 2, 3).

Example 11: A company has two grades of inspectors, I and II, who are to be assigned for
a quality control inspection. It is required that at least 2,000 pieces be inspected per 8-hour
day. Grade I inspectors can check pieces at the rate of 50/hour with an accuracy of
97%; grade II inspectors can check pieces at the rate of 40/hour with an accuracy of 95%. The
wage rate of a Grade I inspector is ` 4.50/hour and that of Grade II is ` 2.50/hour. Each time
an error is made by an inspector, the cost to the company is one rupee. The company has
available for the inspection job 10 grade I and 5 grade II inspectors. Formulate the
problem to minimize the total cost of inspection.

Solution: Let the no. of inspectors of grade I and II be x1 and x2 respectively. They will
inspect (8 × 50) x1 + (8 × 40) x2 pieces daily. As the company requires at least 2,000 items
to be inspected daily, so the constraint is

(8 × 50) x1 + (8 × 40) x2 ≥ 2000 i.e., 5 x1 + 4 x2 ≥ 25. ...(1)

The limitations on the available numbers of inspectors give the constraints

x1 ≤ 10, x2 ≤ 5 ...(2)

Further the company is bearing two types of costs, the wages of inspectors and the costs
of inspection errors.

The cost of each inspector per hour is as follows :

Grade I : ` [4.50 + 1 × (3/100) × 50] = ` 6.00 per hour

Grade II : ` [2.50 + 1 × (5/100) × 40] = ` 4.50 per hour

Now the objective of the company is to minimize the total cost of inspection at the above
rates for each day.

Thus the objective function is

Minimize Z = 8 × 6.00 × x1 + 8 × 4.50 × x2 = 48 x1 + 36 x2 ...(3)

Hence the required L.P.P. is

Minimize Z = 48 x1 + 36 x2

subject to the constraints : 5 x1 + 4 x2 ≥ 25

x1 ≤ 10, x2 ≤ 5

and the non-negativity restrictions x1, x2 ≥ 0.
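A quick numerical check (an added sketch, ignoring the integrality of the number of inspectors): rewriting 5 x1 + 4 x2 ≥ 25 as −5 x1 − 4 x2 ≤ −25 and encoding the availability limits as bounds, the solver should settle on x1 = 1, x2 = 5 with daily cost ` 228 :

# Example 11 (inspection cost); illustrative only.
import numpy as np
from scipy.optimize import linprog

c = np.array([48.0, 36.0])                   # daily cost per inspector
A_ub = np.array([[-5.0, -4.0]])              # 5 x1 + 4 x2 >= 25, negated
b_ub = np.array([-25.0])
bounds = [(0, 10), (0, 5)]                   # at most 10 grade I, 5 grade II

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, res.fun)                        # expected: about [1, 5] and 228.0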


Exercise

1. A furniture dealer deals in two items, viz., tables and chairs. He has ` 10,000 to
invest and space to store at most 60 pieces (including both tables and chairs). A
table costs him ` 500 and a chair ` 100. He can sell all the items that he buys, earning
a profit of ` 50 for each table and ` 15 for each chair. Formulate this problem as a
LPP so that he maximizes the profit.

2. A factory produces two products A and B. Each unit of product A requires 2 hrs of
moulding, 3 hrs of grinding and 4 hrs of polishing, and each unit of product B
requires 4 hrs of moulding, 2 hrs of grinding and 2 hrs of polishing. The moulding
machine can work for 20 hrs, the grinding machine for 24 hrs and the polishing machine
for 13 hrs. The profit is ` 5 per unit of A and ` 3 per unit of B. Assuming
that the factory can sell all that it produces, formulate the problem as a LPP to
maximize the profit.

3. A dietician decides a certain minimum intake of vitamins A, B and C for a family.


The minimum daily needs of the vitamins A, B, C are 30, 20, 16 units respectively.
For the supply of these, the dietician depends on two types of foods X and Y. The
first one gives 7, 5, 2 units per gram of vitamins A, B, C respectively. The second one
gives 2, 4, 8 units per gram of these vitamins respectively. The first food costs ` 2
per gram and the second ` 1 per gram.
How many grams of each food stuff should the family buy every day to keep the food
expense at a minimum? Formulate a linear programming problem for this problem.

4. A firm can produce two products A and B during a given period of time. Each of
these products requires four different operations, viz., Grinding, Turning,
Assembling and Testing. The requirement in hours per unit of manufacturing of
these products is as given :

Product     Grinding    Turning    Assembling    Testing

A               1           3           4            5

B               2           1           3            4

The available capacities of these operations in hours for the given time period are :
30 for grinding, 60 for turning, 200 for assembling and 200 for testing.
Profit on each unit of A is ` 3 and that for each unit of B is ` 2. Formulate the
problem as a linear programming model to maximize the profit assuming that the
firm can sell all the items that it produces at the prevailing market price.

5. A toy company manufactures two types of dolls; an ordinary doll A and a deluxe
doll B. Each doll of type B takes twice as long to produce as one of type A. It is given
that the company would have time to make a maximum of 2000 dolls per day if it
produces only the ordinary version. The supply of plastic is sufficient to produce
1500 dolls per day (both A and B combined). The deluxe version requires a fancy
dress of which there are only 600 pieces per day available. If the company makes a
profit of ` 3 and ` 5 per doll respectively on doll A and doll B, formulate the problem
as a linear programming problem to maximize the profit. [Meerut 2007 (BP)]

6. A furniture firm manufactures chairs and tables, each requiring the use of three
machines A, B and C. Production of one chair requires 2 hours on machine A, 1
hour on machine B and 1 hour on machine C. Each table requires 1 hour each on
machines A and B and 3 hours on machine C. The profit realized by selling one
chair is ` 30 while for a table the figure is ` 60. The total time available per week on
machine A is 70 hours, on machine B is 40 hours, and on machine C is 90 hours.
Formulate the linear programming problem to maximize the profit. [Meerut 2008]

7. A diet is to contain at least 4000 units of carbohydrates, 500 units of fat and 300
unit of protein. Two foods A and B are available. Food A costs ` 2 per unit and food
B costs ` 4 per unit. A unit of food A contains 10 units of carbohydrates, 20 units of
fat and 15 units of protein. A unit of food B contains 25 units of carbohydrates, 10
units of fat and 20 units of protein. Formulate the problem as a LPP so as to find the
minimized cost for a diet that consists of a mixture of these two foods and also
meets the minimum nutrition requirements. [Gorakhpur 2009]

8. A resourceful home decorator manufactures two types of lamps, say A and B. Both
lamps go through two technicians, first a cutter, second a finisher. Lamp A requires
2 hours of the cutter's time and 1 hour of the finisher's time. Lamp B requires 1 hour
of cutter's and 2 hours of finisher's time. The cutter has 104 hours and finisher has
76 hours of time available each month. Profit on the lamp A is ` 6 and on the lamp B
is ` 11. Assuming that he can sell all that he produces, how many of each type of
lamps should he manufacture per month to obtain the best returns?
Formulate a LPP for this problem.

9. The manager of an oil refinery must decide on the optimal mix of 2 possible
blending processes of which the inputs and outputs per production run are as
follows :

                 Input                     Output
Process     Crude A    Crude B     Gasoline X    Gasoline Y

1               6          4            6             9

2               5          6            5             5

The maximum amounts available of crudes A and B are 500 units and 400 units
respectively. Market demand shows that at least 300 units of gasoline X and 260
units of gasoline Y must be produced. The profits per production run from process 1
and 2 are ` 40 and ` 50 respectively. Formulate the LPP for maximizing the profit.

10. A factory produces two products A and B. To manufacture one unit of product A, a
machine has to work for 1½ hours and a craftsman has to work for 2 hours. To
manufacture one unit of product B, the machine has to work for 3 hours and the
craftsman for one hour. In a week the factory can avail of 80 hours of machine time
and 70 hours of craftsman's time. The profit on the sale of each unit of A and B is of
` 10 and ` 8 respectively. If the manufacturer can sell all the items produced, how
many of each should be produced to get the maximum profit per week ?
Formulate the problem as a LPP.

11. A company manufactures two kinds of leather purses, A and B. A is a high quality
purse and B is lower quality. The sales of each of these purses A and B earn profit of
` 4 and ` 3 respectively. Each purse of type A requires twice as much time as a purse
of type B, and if all purses are of type B, the company could make 1000 purses per
day. The supply of leather is sufficient for only 800 purses per day (both A and B
combined). Purse A requires a fancy buckle, and only 400 buckles per day are
available. There are only 700 buckles available for purse B. What should be the
daily production of each type of purse to get the maximum profit ? Formulate the
problem as a LPP.

12. A firm can produce three types of cloth, say : A, B and C. Three kinds of wool are
required for it, say red wool, green wool and blue wool. One unit length of type A
cloth needs 2 yards of red wool and 3 yards of blue wool; one unit length of type B
cloth needs 3 yards of red wool, 2 yards of green wool and 2 yards of blue wool and
one unit length of type C cloth needs 5 yards of green wool and 4 yards of blue
wool. The firm has only a stock of 8 yards of red wool, 10 yards of green wool and 15
yards of blue wool. It is assumed that the income obtained from one unit length of
type A cloth is ` 3.00, of type B cloth is ` 5.00 and of type C cloth is ` 4.00.
Formulate this problem as a linear programming model to maximize the income
from the finished cloth.

13. A firm manufactures 3 products A, B and C. The profits are ` 3, ` 2 and ` 4
respectively. The firm has 2 machines and below is the required processing time in
minutes for each machine on each product :

                Product
Machine     A       B       C

G           4       3       5

H           2       2       4

Machines G and H have 2,000 and 2,500 machine-minutes available, respectively. The firm
must manufacture 100 A's, 200 B's and 50 C's, but not more than 150 A's. Set up a
linear programming problem to maximize profit.

14. A farmer has a 100-acre farm. He can sell all the tomatoes, lettuce or radishes he can raise.
The price he can obtain is ` 1.00 per kg for tomatoes, ` 0.75 a head for lettuce and
` 2.00 per kg for radishes. The average yield per acre is 2,000 kg of tomatoes, 3,000
heads of lettuce and 1,000 kg of radishes. Fertilizer is available at ` 0.50 per kg and
the amount required per acre is 100 kg each for tomatoes and lettuce and 50 kg for
radishes. Labour required for sowing, cultivating and harvesting per acre is 5
man-days for tomatoes and radishes, and 6 man-days for lettuce. A total of 400
man-days of labour are available at ` 20.00 per man-day. Formulate this problem as
a linear programming model to maximize the farmer's total profit.
15. A city hospital has the following minimal daily requirements for nurses :

Period     Clock time (24 hr. day)     Minimum number of nurses required

1            6 A.M. — 10 A.M.                        2
2           10 A.M. —  2 P.M.                        7
3            2 P.M. —  6 P.M.                       15
4            6 P.M. — 10 P.M.                        8
5           10 P.M. —  2 A.M.                       20
6            2 A.M. —  6 A.M.                        6

Nurses report to the hospital at the beginning of each period and work for 8
consecutive hours. The hospital wants to determine the minimum number of nurses
to be employed so that the requirement of each period is met. Formulate this as a
L.P.P. by setting up appropriate constraints and objective function. [Meerut 2005, 08 (BP)]

1. Maximize Z = 50 x + 15 y, subject to the constraints 5 x + y ≤ 100, x + y ≤ 60 and the non-negative restrictions x ≥ 0, y ≥ 0.

2. Maximize Z = 5 x + 3 y, subject to the constraints 2 x + 4 y ≤ 20, 3 x + 2 y ≤ 24, 4 x + 2 y ≤ 13 and x ≥ 0, y ≥ 0.

3. Minimize Z = 2 x + y, subject to the constraints 7 x + 2 y ≥ 30, 5 x + 4 y ≥ 20, 2 x + 8 y ≥ 16 and x ≥ 0, y ≥ 0.

4. Maximize Z = 3 x + 2 y, subject to the constraints x + 2 y ≤ 30, 3 x + y ≤ 60, 4 x + 3 y ≤ 200, 5 x + 4 y ≤ 200 and x ≥ 0, y ≥ 0.

5. Maximize Z = 3 x + 5 y, subject to the constraints x + 2 y ≤ 2000, x + y ≤ 1500, y ≤ 600 and x ≥ 0, y ≥ 0.

6. Maximize Z = 30 x + 60 y, subject to the constraints 2 x + y ≤ 70, x + y ≤ 40, x + 3 y ≤ 90 and x ≥ 0, y ≥ 0.

7. Minimize Z = 2 x + 4 y, subject to the constraints 10 x + 25 y ≥ 4000, 20 x + 10 y ≥ 500, 15 x + 20 y ≥ 300 and x, y ≥ 0.

8. Maximize Z = 6 x + 11 y, subject to the constraints 2 x + y ≤ 104, x + 2 y ≤ 76 and x ≥ 0, y ≥ 0.

9. Maximize Z = 40 x + 50 y, subject to the constraints 6 x + 5 y ≤ 500, 4 x + 6 y ≤ 400, 6 x + 5 y ≥ 300, 9 x + 5 y ≥ 260 and x, y ≥ 0.

10. Maximize Z = 10 x + 8 y, subject to the constraints 1.5 x + 3 y ≤ 80, 2 x + y ≤ 70 and x ≥ 0, y ≥ 0.

11. Maximize Z = 4 x + 3 y, subject to the constraints 2 x + y ≤ 1000, x + y ≤ 800, x ≤ 400, y ≤ 700 and x ≥ 0, y ≥ 0.

12. Maximize Z = 3 x1 + 5 x2 + 4 x3, subject to the constraints 2 x1 + 3 x2 ≤ 8, 2 x2 + 5 x3 ≤ 10, 3 x1 + 2 x2 + 4 x3 ≤ 15 and x1, x2, x3 ≥ 0.

13. Maximize Z = 3 x1 + 2 x2 + 4 x3, subject to the constraints 4 x1 + 3 x2 + 5 x3 ≤ 2000, 2 x1 + 2 x2 + 4 x3 ≤ 2500, 100 ≤ x1 ≤ 150, x2 ≥ 200, x3 ≥ 50.

14. Maximize Z = 1850 x1 + 2080 x2 + 1875 x3, subject to the constraints x1 + x2 + x3 ≤ 100, 5 x1 + 6 x2 + 5 x3 ≤ 400 and x1, x2, x3 ≥ 0.

15. Minimize Z = x1 + x2 + x3 + x4 + x5 + x6, subject to the constraints x1 + x2 ≥ 7, x2 + x3 ≥ 15, x3 + x4 ≥ 8, x4 + x5 ≥ 20, x5 + x6 ≥ 6, x6 + x1 ≥ 2 and x1, x2, x3, x4, x5, x6 ≥ 0.

2.6 Some Important Definitions


1. Solution of a L.P.P. : A set of values of the variables x1, x2 ,..., x n satisfying the
constraints of a L.P.P. is called a solution of the L.P.P.

2. Feasible Solution of a L.P.P. : A set of values of the variables x1, x2 ,..., x n


satisfying the constraints and the non-negative restrictions of a L.P.P. is called a
feasible solution of the L.P.P. [Meerut 2004, 06 (BP), 08 (BP), 10]

3. Optimal (or optimum) Solution of a L.P.P. : A feasible solution of a L.P.P. is said


to be optimal (or optimum) if it also optimizes (i.e., maximizes or minimizes as the
case may be) the objective function of the problem. [Meerut 2007]

4. Unbounded Solution : If the value of the objective function can be increased or
decreased indefinitely without violating the constraints, such solutions are called
unbounded solutions. In this case we say that the problem has an unbounded
solution. [Meerut 2006 (BP)]

5. Fundamental Extreme Point Theorem : An optimum solution of a L.P.P., if it
exists, occurs at one of the extreme points (i.e., corner points) of the convex polygon
of the set of all feasible solutions. (For proof see chapter 3.) [Meerut 2007 (BP), 09]

2.7 Solution of Simultaneous Linear Inequations


The graph or the solution set of a system of simultaneous linear inequations is the
region containing the points ( x, y) which satisfy all the inequations of the given system
simultaneously.

To draw the graph of the simultaneous linear inequations i.e., to find the solution set of
the simultaneous linear inequations, we find the region of the xy-plane, common to all
the portions comprising the solution sets of the given inequations. If there is no region
common to all the solutions of the given inequations, we say that the solution set of the
system of inequations is empty. It should be noted that the solution set of simultaneous
linear inequations may be an empty set, or it may be the region bounded by the straight
lines corresponding to the given linear inequations, or it may be an unbounded region
with straight line boundaries.

Example 1: Draw the graph of the solution set of the inequations

x + y ≤ 5, 4 x + y ≥ 4, x + 5 y ≥ 5, x ≤ 4, y ≤ 3.

Solution: Consider the equations

x + y = 5, 4 x + y = 4, x + 5 y = 5, x = 4, y = 3.

First we shall consider each inequation separately.

Region Represented by x + y ≤ 5 : The straight line x + y = 5 meets the x-axis at (5, 0)


and y-axis at (0, 5). Since the given inequality is not strict, we join these points by a thick
line. Clearly, the point (0,0) not lying on the line x + y = 5 satisfies the inequation
x + y ≤ 5. Therefore, out of the two portions of the xy-plane divided by the line x + y = 5,
the one containing the origin (0, 0) along with the line represents the solution set of the
inequation x + y ≤ 5.

Region Represented by 4 x + y ≥ 4 : The straight line 4 x + y = 4 meets the x-axis at (1, 0)
and the y-axis at (0, 4). Since the given inequality is not strict, we join these points by a thick
line. Clearly, the point (0, 0) not lying on the line 4 x + y = 4 does not satisfy the
inequation 4 x + y ≥ 4. Therefore, out of the two portions of the xy-plane divided by the
line 4 x + y = 4, the portion not containing the origin along with the line represents the
solution set of the inequation 4 x + y ≥ 4.

Region Represented by x + 5 y ≥ 5 : The straight line x + 5 y = 5 meets the x-axis at (5, 0)
and the y-axis at (0, 1). Since the given inequality is not strict, we join these points by a
thick line. Clearly, the point (0, 0) not lying on the line x + 5 y = 5 does not satisfy the
inequation x + 5 y ≥ 5. Therefore, out of the two portions of the xy-plane divided by the
line x + 5 y = 5, the portion not containing the origin along with the line represents the
solution set of the inequation x + 5 y ≥ 5.

[Fig. 2.1 : the lines x + y = 5, 4 x + y = 4, x + 5 y = 5, x = 4, y = 3 and the common shaded region]

Region Represented by x ≤ 4 : Clearly, the line x = 4 is parallel to the y-axis at a distance of
4 units from the origin, lying to the right hand side of the y-axis. Since the given inequality
is not strict, we draw this line as a thick line. The point (0, 0) not lying on the line x = 4
satisfies the inequation x ≤ 4, therefore out of the two portions of the xy-plane divided by
this line, the portion containing the origin along with the line represents the solution set
of the inequation x ≤ 4.

Region Represented by y ≤ 3 : Clearly, the line y = 3 is parallel to x-axis at a distance of


3 units from the origin lying above x-axis. Since the inequality is not strict, we draw this
line as a thick line. The point (0, 0) not lying on the line y = 3 satisfies the inequation
y ≤ 3, therefore out of the two portions of the xy-plane divided by this line, the portion
containing the origin along with the line represents the solution set of the inequation
y ≤ 3.

Hence, the shaded region i.e., the region common to the given five inequations,
represents the solution set of the given system of inequations.

Example 2: Exhibit graphically the solution set of the linear inequations

3 x + 2 y ≥ 6, x ≥ 1, y ≥ 1 .

Solution: Consider the equations 3 x + 2 y = 6, x = 1, y = 1.

To find the solution set of the given system of inequations, we first find the solution sets
of the given distinct inequations separately.

Region Represented by 3 x + 2 y ≥ 6 : The straight line 3 x + 2 y = 6 meets x-axis at


(2, 0) and the y-axis at (0, 3). Since the given inequality is not strict, we join these points by a
thick line. Clearly, the point (0, 0) not lying on the line 3 x + 2 y = 6 does not satisfy the
inequation 3 x + 2 y ≥ 6.

Therefore out of the portions of the xy-plane divided by the line 3 x + 2 y = 6, the portion
not containing the origin along with the line represents the solution set of the inequation
3 x + 2 y ≥ 6.
Y
Region Represented by x ≥ 1 : Clearly,
3
the line x = 1 is parallel to y-axis at a
distance of 1 unit from the origin lying 2
to the right hand side of y-axis. Since y=1 1
the given inequality is not strict, we
draw this line as a thick line. The point
(0, 0) not lying on the line x = 1 does not X' O 1 2 3 X
satisfy the inequation x ≥ 1, therefore 3x+2y=6
out of the two portions of the xy-plane
x=1
divided by this line, the portion not Y'
Fig. 2.2
containing the origin along with the line
represents the solution set of the

inequation x ≥ 1.

Region Represented by y ≥ 1 : Clearly, the line y = 1 is parallel to x-axis at a distance of


1 unit from the origin lying above x-axis. Since the given inequality is not strict, we draw
this line as a thick line. The point (0, 0) not lying on the line y = 1 does not satisfy the
inequation y ≥ 1, therefore out of the two portions of the xy-plane divided by this line, the
portion not containing the origin along with the line represents the solution set of the
inequation y ≥ 1.

Hence, the shaded region common to the given three inequations represents the solution
set of the given system of inequations. We observe that the solution set of the given
system of linear inequations is an unbounded region.

Example 3: Draw the diagram of the solution set of the system of linear inequations :

2 x + 3 y ≤ 6, x + 4 y ≤ 4, x ≥ 0 , y ≥ 0 .

Solution: Changing the given inequations into equations, we get

2 x + 3 y = 6, x + 4 y = 4, x = 0, y = 0.

Region Represented by 2 x + 3 y ≤ 6 : The line 2 x + 3 y = 6 meets x-axis at (3, 0) and


y-axis at (0, 2). Join the points (3, 0) and (0, 2) by a thick line to obtain the graph of the
line 2 x + 3 y = 6. Since the point (0, 0) satisfies the inequation 2 x + 3 y ≤ 6, therefore the
portion of the xy-plane divided by the line 2 x + 3 y = 6 containing the origin along with
the line represents the solution set of the inequation 2 x + 3 y ≤ 6.

Region Represented by x + 4 y ≤ 4 : The line x + 4 y = 4 meets the x-axis at (4, 0) and
the y-axis at (0, 1). Join the points (4, 0) and (0, 1) by a thick line to obtain the graph of
the line x + 4 y = 4. The point (0, 0) does not lie on the line x + 4 y = 4. Since (0, 0)
satisfies the inequation x + 4 y ≤ 4, therefore the portion of the xy-plane divided by the
line x + 4 y = 4 containing the origin along with the line represents the solution set of the
inequation x + 4 y ≤ 4.

[Fig. 2.3 : the lines 2 x + 3 y = 6, x + 4 y = 4, x = 0, y = 0 and the common shaded region]

Region Represented by x ≥ 0 : The inequation x ≥ 0 represents the region lying on the


right hand side of the y-axis including y-axis.

Region Represented by y ≥ 0 : The inequation y ≥ 0 represents the region lying above


the x-axis.

The common region (shaded region in the figure) of the above four regions represents the
solution set of the given system of inequations.

Example 4: Exhibit graphically the solution set of the following system of linear
inequations :
x + y ≥ 1, − 3 x − 4 y ≥ −12, − x + 2 y ≥ −2, x ≥ 0 , y ≥ 0 .

Solution: Region Represented by x + y ≥ 1 : The straight line x + y = 1 meets x-axis at


(1, 0) and y-axis at (0, 1). We join the points (1, 0) and (0, 1) by a thick line. Since the
origin (0, 0) does not satisfy the inequation x + y ≥ 1, therefore the portion of the xy-plane
divided by the line x + y = 1 which does not contain the origin along with the line
represents the solution set of the inequation x + y ≥ 1.

Region Represented by −3 x − 4 y ≥ −12, i.e., by 3 x + 4 y ≤ 12 : The straight line
3 x + 4 y = 12 meets the x-axis at (4, 0) and the y-axis at (0, 3). We join the points (4, 0)
and (0, 3) by a thick line. Since the point (0, 0) satisfies the inequation 3 x + 4 y ≤ 12,
therefore the portion of the xy-plane divided by the line 3 x + 4 y = 12 which contains the
origin along with the line represents the inequation −3 x − 4 y ≥ −12.

Region Represented by − x + 2 y ≥ −2 : The straight line − x + 2 y = −2 meets the x-axis
at (2, 0) and the y-axis at (0, −1). We join the points (2, 0) and (0, −1) by a thick line. Since
the origin (0, 0) satisfies the inequation − x + 2 y ≥ −2, therefore the portion of the
xy-plane divided by the line − x + 2 y = −2 which contains the origin along with the line
represents the inequation − x + 2 y ≥ −2.

[Fig. 2.4 : the lines x + y = 1, 3 x + 4 y = 12, − x + 2 y = −2, x = 0, y = 0 and the common shaded region]

Region Represented by x ≥ 0 : The inequation x ≥ 0 represents the region lying on the


right hand side of the y-axis including y-axis.

Region Represented by y ≥ 0 : The inequation y ≥ 0 represents the region lying above


the x-axis including x-axis.

Hence, the shaded region, i.e., the region common to the given five inequations,
represents the solution set of the given system of inequations.

2.8 Methods for the Solution of a L.P.P.


[Meerut 2006 (B.P.)]

In general we use the following three methods for the solution of a L.P.P.
1. Graphical (or Geometrical) Method : If the objective function Z is a function of
two variables only, then the problem can be solved by the graphical method. A problem
of three variables can also be solved by this method, but the procedure becomes quite complicated.

2. Analytic Method (Trial and Error Method) : A L.P.P. having more than two
variables cannot conveniently be solved by the graphical method, as even the problem of three
variables becomes quite complicated. In such cases, the analytic method (trial
and error method) can be useful.
3. Simplex Method : This is the most powerful method to solve a L.P.P. as any
problem can be solved by this method. This method is an algebraic procedure which
progressively approaches the optimal solution. This method is discussed in chapter
four.

2.9 Graphical Method to Solve a L.P.P.


[Meerut 2007]
There are two techniques of solving a L.P.P. by graphical method :
(i) Corner-Point method and
(ii) Iso-Profit or Iso-Cost Method.
The graphical method of solving a L.P.P. is based on the principle of the extreme point
theorem. (See article 3.8, theorem 9 of chapter 3, 'Convex Sets and Their Properties'.)

While solving a L.P.P. graphically by this method, we first obtain the region in the
xy-plane containing all points that simultaneously satisfy all constraints including
non-negative restrictions. This polygonal region so obtained is called the convex
polygon of the set of all feasible solutions of the L.P.P. It is also called the permissible
region for the values of the variables.

Now we determine the vertices (or corner points) of this convex polygon. These vertices
are called the extreme points of the set of all feasible solutions of the L.P.P.

After obtaining the extreme points we find the values of the objective function Z at all
these points. The point where the objective function attains its optimum value
(maximum or minimum value of Z as the case may be) gives the optimal (or optimum)
value of the given L.P.P.

If two vertices of the convex polygon give the same optimal value of the objective
function, then all points on the line segment joining these two vertices give the optimal
value of the objective function, and the L.P.P. is said to have an infinite number of optimal
solutions.

Procedure to Solve a L.P.P. by Graphical Method : In this method we proceed as


follows :

Step 1 : Consider each constraint as an equation.

Step 2 : Draw lines in the plane corresponding to each equation (obtained in step 1) and
the non-negative restrictions.

Method to draw the lines : Putting x2 = 0 in the equation of the line, find x1; then putting
x1 = 0, find x2. Thus we get the points of intersection of the line with the axes, i.e., ( x1, 0)
and (0, x2 ). The line is drawn by joining these two points on the axes.

Step 3 : Now we find the permissible region for the values of the variables which is the
region bounded by these lines such that every point of this region satisfies all the
constraints and the non-negative restrictions.

Working Rule for Finding the Permissible Region (Feasible Region) : Consider the
constraint ax + by ≤ c or ax + by ≥ c, where c > 0. The line ax + by = c (drawn in step 2)
divides the xy-plane into two regions, one containing and the other not containing the
origin. Since (0, 0) satisfies the inequality ax + by ≤ c, for the inequality ax + by ≤ c the
feasible region is the region which contains the origin. Also (0, 0) does not satisfy the
inequality ax + by ≥ c, so for the inequality ax + by ≥ c the feasible region is the region
which does not contain the origin. See fig. 2.5(a).

Again consider the constraint y − mx ≤ 0 or y − mx ≥ 0, where m > 0.

The line y − mx = 0 (drawn in step 2) divides the xy-plane into two regions, one containing
the +ve x-axis and the other containing the +ve y-axis. For the inequality y − mx < 0, the
feasible region is the region which contains the positive x-axis; for the inequality
y − mx > 0, the feasible region is the region which contains the positive y-axis. See fig. 2.5(b).
[Fig. 2.5 : (a) the regions ax + by < c and ax + by > c on the two sides of the line ax + by = c;
(b) the regions y − mx < 0 and y − mx > 0 on the two sides of the line y − mx = 0]

Thus, we find the feasible regions corresponding to all the inequalities. Then the region
which is common to all these regions is the permissible region (i.e., feasible region) for the
values of the variables. This permissible region is shaded.

Step 4 : Here we find the point in the permissible region (obtained in step 3) which
gives the optimum value of the objective function Z. This point will be one of the extreme
points (vertices) of the convex polygon enclosing the permissible region.

This point can be attained by one of the following two methods.



Method 1 : Corner Point Method : Determine the vertices of the convex polygon,
which are the points of intersection of the straight lines forming its boundary. These
vertices are called the extreme points of the set of all feasible solutions of the L.P.P. Then
find the values of the objective function Z at all these points. The point where the
objective function Z attains its optimum value (maximum or minimum value as the case
may be) gives the optimum (or optimal) solution of the L.P.P.

If two vertices of the convex polygon give the same optimum value of the objective
function Z, then all points on the line segment joining these two vertices will give the
optimum value of the objective function Z. In this case the L.P.P. is said to have an infinite
number of optimum solutions.

Method 2 : Iso-Profit or Iso-Cost Method : Here, to find the vertex of the convex
polygon which gives the optimum value of the objective function Z, we draw a straight line
in the feasible region corresponding to the equation obtained by giving some convenient
value k to the objective function. This line is called an iso-profit or iso-cost line, since
every point on this line within the permissible region yields the same value of Z. We
can also take k = 0, which gives a line passing through the origin and parallel to the
iso-profit lines. Thus, we draw the line (dotted line) through the origin corresponding to Z = 0.
Then for the maximization problem the extreme point of the permissible region which is
farthest away (i.e., at greatest distance) from this line and for the minimization problem
the extreme point of the permissible region which is nearest to this line gives the
optimum values of Z. To obtain this extreme point of the permissible region, giving the
optimum value of the objective function Z, we go on drawing lines parallel to the line
Z = 0. The farthest extreme point is the vertex of the permissible region through which
one of the parallel lines passes and after which it leaves this region and the nearest
extreme point is the vertex of the permissible region through which the parallel line
enters this region.

Note : If there is no permissible region then we say that the problem has no solution.
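Note : The corner-point method described above is easy to mechanize. The following illustrative sketch in the Python language (the function name and structure are ours, not from the text) writes every constraint, including the non-negative restrictions, in the form a1 x + a2 y ≤ b, computes the pairwise intersections of the corresponding lines, rejects the infeasible ones, and evaluates Z at the surviving extreme points. It assumes the optimum is attained at a vertex, i.e., that the problem is neither infeasible nor unbounded; it returns None when the permissible region is empty.

from itertools import combinations
import numpy as np

def corner_point_method(c, A, b, maximize=True):
    # Optimize Z = c . (x, y) over {(x, y) >= 0 : A (x, y) <= b} by enumerating vertices.
    A = np.vstack([A, -np.eye(2)])            # append x >= 0, y >= 0 as -x <= 0, -y <= 0
    b = np.concatenate([b, np.zeros(2)])
    vertices = []
    for i, j in combinations(range(len(b)), 2):
        M = A[[i, j]]
        if abs(np.linalg.det(M)) < 1e-9:      # parallel lines: no intersection point
            continue
        p = np.linalg.solve(M, b[[i, j]])
        if np.all(A @ p <= b + 1e-9):         # keep only feasible intersection points
            vertices.append(p)
    if not vertices:
        return None                           # empty permissible region: no solution
    values = [float(np.dot(c, v)) for v in vertices]
    k = int(np.argmax(values)) if maximize else int(np.argmin(values))
    return vertices[k], values[k]

# Example 1 below: minimize Z = 20 x1 + 10 x2 subject to x1 + 2 x2 <= 40,
# 3 x1 + x2 >= 30, 4 x1 + 3 x2 >= 60 (the ">=" rows are negated into "<=" form).
A = np.array([[1.0, 2.0], [-3.0, -1.0], [-4.0, -3.0]])
b = np.array([40.0, -30.0, -60.0])
print(corner_point_method([20, 10], A, b, maximize=False))   # expected: (6, 12), Z = 240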

2.10 Limitations of the Graphical Method


[Meerut 2007, 09; Kanpur 2011]

This method can be applied to problems involving only two variables while most of the
practical situations do involve more than two variables. Therefore it is not a powerful
tool for the solution of L.P.P.

Example 1: Solve by graphical method, the linear programming problem :

Minimize Z = 20 x1 + 10 x2

subject to the constraints, x1 + 2 x2 ≤ 40

3 x1 + x2 ≥ 30

4 x1 + 3 x2 ≥ 60

and the non-negative restrictions x1, x2 ≥ 0 . [Meerut 2005, 07 (BP), 11, 12(BP); Kanpur 2007]

Solution: Step 1: Considering the constraints as equations, we get the following


equations
x1 + 2 x2 = 40, 3 x1 + x2 = 30, 4 x1 + 3 x2 = 60.

Step 2 : Draw lines corresponding to each equation.

Step 3 : The shaded region PQRSP is the permissible region of the problem.

Step 4 : By corner-point method : Solving simultaneously the equations of the


corresponding intersecting lines, the coordinates of the vertices of the convex polygon are

P (40, 0), Q (4,18), R (6,12) and S (15, 0)


[Fig. 2.6 : the lines x1 + 2 x2 = 40, 3 x1 + x2 = 30, 4 x1 + 3 x2 = 60, the dotted line
Z = 20 x1 + 10 x2 = 0 and the permissible region PQRSP]

Now the values of the objective function Z at these vertices (corner points) are as given
in the table below :

Point ( x1, x2 )      Value of the objective function Z = 20 x1 + 10 x2

P (40, 0)             Z = 20 × 40 + 10 × 0 = 800

Q (4, 18)             Z = 20 × 4 + 10 × 18 = 260

R (6, 12)             Z = 20 × 6 + 10 × 12 = 240 (Mini.)

S (15, 0)             Z = 20 × 15 + 10 × 0 = 300

Obviously, Z is minimum at R (6,12).

Hence, the optimal solution of the given L.P.P. is

x1 = 6, x2 = 12 and minimum Z = 240.

By Iso-profit Method : Here, we draw the line through the origin corresponding to
Z = 0, which is parallel to iso-profit line.

Z = 0 ⇒ 20 x1 + 10 x2 = 0 ⇒ x1/x2 = −1/2.

The dotted line through the origin is shown in the figure. Drawing parallel lines away
from the origin O, we see that the nearest such line (since it is a minimization problem)
meeting the permissible region passes through the vertex R (6, 12).

Hence, the optimal solution is

x1 = 6, x2 = 12 and Minimum Z = 20 × 6 + 10 × 12 = 240.

Example 2: Using Iso-profit line method, compute the maximum value of the expression
2 x1 + 3 x2 subject to the conditions :

2 x1 + x2 ≤ 5

x1 − x2 ≤ 1

x2 ≤ 2

and the non-negative restrictions x1 ≥ 0 , x2 ≥ 0 .

Verify the maximum value by computing the values at the boundary points of the polygon.
[Meerut 2008]

Solution: Converting the given conditions into equations, and drawing these lines, the
permissible region is OPQRSO (shaded region) which is the set of all feasible solution of
the problem.

Here, we are to maximize the objective function

Z = 2 x1 + 3 x2 .
49

Taking Z = 0, we get x1/x2 = −3/2.

The line corresponding to Z = 0 is shown by the dotted line through the origin O, which
is parallel to the iso-profit line. Now drawing parallel lines away from the origin (since it
is a maximization problem), shown by dotted lines, we see that the farthest line from the
origin O passes through the vertex R (3/2, 2).

[Fig. 2.7 : the lines 2 x1 + x2 = 5, x1 − x2 = 1, x2 = 2, the dotted line Z = 2 x1 + 3 x2 = 0
and the permissible region OPQRSO]

Hence, the optimal solution is

x1 = 3/2, x2 = 2

and Max. Z = 2 × 3/2 + 3 × 2 = 9.

Verification : Solving simultaneously the equations of the corresponding intersecting


lines of the permissible region, the coordinates of the vertices (corners) of the convex
polygon are O (0, 0), P (1, 0), Q (2, 1), R (3/2, 2) and S (0, 2).

The values of the objective function Z at these corner points are as follows :

Point ( x1, x2 )      Value of the objective function Z = 2 x1 + 3 x2

O (0, 0)              Z = 0 + 0 = 0

P (1, 0)              Z = 2 × 1 + 0 = 2

Q (2, 1)              Z = 2 × 2 + 3 × 1 = 7

R (3/2, 2)            Z = 2 × 3/2 + 3 × 2 = 9 (Max.)

S (0, 2)              Z = 0 + 3 × 2 = 6

3  3
Obviously, the maximum value of Z is 9 at R  , 2 i.e., when x1 = , x2 = 2.
2  2

Example 3: A goldsmith manufactures necklaces and bracelets. The total number of


necklaces and bracelets that he can handle per day is at most 24. It takes one hour to make
a bracelet and half an hour to make a necklace. It is assumed that he can work for a
maximum of 16 hours a day. Further the profit on a bracelet is ` 300 and the profit on a
necklace is ` 100. Find how many of each should be produced daily to maximize the profit.
50

Solution: Formulation of the problem as a linear programming problem : Proceeding as


in Example 1 page 21, the mathematical form of the given problem as a L.P.P. is as
follows:

Maximize Z = 100 x1 + 300 x2

subject to the constraints x1 + 2 x2 ≤ 32

x1 + x2 ≤ 24

and the non-negative restrictions

x1 ≥ 0, x2 ≥ 0 .

where x1 and x2 are the numbers of necklaces and bracelets respectively to be


manufactured per day.
[Fig. 2.8 : the lines x1 + 2 x2 = 32, x1 + x2 = 24 and the permissible region OSTPO]

Solution of the Problem : Proceeding stepwise, the permissible region is the shaded
region OSTPO, which is the set of all points which simultaneously satisfy all the
constraints and the non-negative restrictions. Solving simultaneously the equations of
the corresponding intersecting lines, we get the co-ordinates of the vertices of the
shaded region as

P (0,16), O (0, 0), S (24, 0), T (16, 8).

Now the values of the objective function at these corner points are as given in the table
below :

Point ( x1, x2 ) Value of the objective function Z = 100 x1 + 300 x2

P (0,16) 100 × 0 + 300 × 16 = 4800 (Max.)

O(0, 0) 100 × 0 + 300 × 0 = 0

S (24, 0) 100 × 24 + 300 × 0 = 2400

T (16, 8) 100 × 16 + 300 × 8 = 4000

Clearly, Z is maximum at P (0,16). Hence, x1 = 0, x2 = 16 is the optimal solution of the


given problem and the optimal value of Z is 4800, i.e., the goldsmith earns the maximum
profit when he makes 16 bracelets and no necklaces a day, and his maximum profit in
this case is ` 4800.
51

Example 4: A toy company manufactures two types of dolls; a basic version-doll A and a
deluxe version-doll B. Each doll of type B takes twice as long to produce as one of type A,
and the company would have time to make a maximum of 2000 dolls per day if it produces
only the basic version. The supply of plastic is sufficient to produce 1500 dolls per day (both A
and B combined). The deluxe version requires a fancy dress of which there are only 600
pieces per day available. If the company makes a profit of ` 3 and ` 5 per doll respectively,
on dolls A and B, how many of each should be produced per day in order to maximize
the profit? [Meerut 2009]

Solution: Formulation of the problem as linear programming problem : Proceeding


as in Ques. 5, page 34, the mathematical form of the given problem as a L.P.P. is as
follows :
Maximize Z = 3 x1 + 5 x2

subject to the constraints x1 + 2 x2 ≤ 2000

x1 + x2 ≤ 1500

x2 ≤ 600

and the non-negative restrictions

x1 ≥ 0, x2 ≥ 0.

Solution of the Problem : Proceeding stepwise, the permissible region is the shaded
region OPQRSO which is the set of all feasible solutions of the problem.

[Fig. 2.9 : the lines x1 + 2 x2 = 2000, x1 + x2 = 1500, x2 = 600, the dotted line
Z = 3 x1 + 5 x2 = 0 and the permissible region OPQRSO]

To find the maximum value of the objective function Z.

By Corner Point Method : Solving simultaneously the equations of the corresponding


intersecting lines of the feasible region, we get the coordinates of the corner points of the
feasible region as O (0, 0), P (1500, 0), Q (1000, 500), R (800, 600) and S (0, 600).

The values of the objective function Z at these corner points are as given below :

Point ( x1, x2 ) Value of the objective function Z = 3 x1 + 5 x2

O(0, 0) Z = 3 ×0 + 5 ×0 = 0
P (1500, 0) Z = 3 × 1500 + 5 × 0 = 4500
Q(1000, 500) Z = 3 × 1000 + 5 × 500 = 5500 (Max.)
R (800, 600) Z = 3 × 800 + 5 × 600 = 5400
S (0, 600) Z = 3 × 0 + 5 × 600 = 3000

Thus, Z is maximum at the corner point Q i.e., for x1 = 1000 and x2 = 500 and the
maximum value of Z is 5500. The optimal solution of the problem is x1 = 1000, x2 = 500.

Hence, 1000 dolls of type A and 500 dolls of type B should be produced per day to
maximize the profit and the maximum profit per day is ` 5500.

The optimal solution may also be found by the following alternative method.

By Iso-profit Line Method : To find the maximum value of the objective function Z, we
draw the dotted line Z = 3 x1 + 5 x2 = 0 passing through the origin, which is parallel to
iso-profit line. We now move this line parallel to itself away from the origin so that its
distance from the origin becomes maximum yet it has at least one point in the feasible
region. We see that in this position this line passes through only one point Q(1000, 500)
of the feasible region. This line is an iso-profit line having only one point viz., Q in the
feasible region which will yield the maximum value of Z.

Hence, Z is maximum for x1 = 1,000 and x2 = 500 and the maximum value of Z is

3 × 1,000 + 5 × 500 = ` 5,500.

Example 5: A dietician mixes two types of food in such a way that the vitamin contents of
the mixture contain at least 8 units of vitamin A and 10 units of vitamin C. Food X
contains 2 units/kg of vitamin A and 1 unit/kg of vitamin C while food Y contains 1
unit/kg of vitamin A and 2 units/kg of vitamin C. One kg of food X costs ` 5 whereas one kg
of food Y costs ` 7. Determine the minimum cost of such a mixture.

Solution: Formulation of the problem as L.P.P. : Let the dietician mix x1 kg of food X
and x2 kg of food Y. Then

x1 ≥ 0, x2 ≥ 0
53

Since the minimum requirement of vitamin A is 8 units, therefore

2 x1 + x2 ≥ 8

Similarly, since the minimum requirement of vitamin C is 10 units, therefore

x1 + 2 x2 ≥ 10.

The total cost Z in ` of purchasing x1 kg of food X and x2 kg of food Y is

Z = 5 x1 + 7 x2 .

Hence, the mathematical form of the given problem as a L.P.P. is as follows :

Minimize Z = 5 x1 + 7 x2

subject to the constraints 2 x1 + x2 ≥ 8

x1 + 2 x2 ≥ 10

and the non-negative restrictions

x1 ≥ 0, x2 ≥ 0.

Solution of the Problem : Proceeding stepwise, the permissible region is the shaded
region, which is unbounded and is the set of all feasible solutions of the problem.

[Fig. 2.10 : the lines 2 x1 + x2 = 8, x1 + 2 x2 = 10, the dotted line Z = 5 x1 + 7 x2 = 0
and the unbounded permissible region]

To find the minimum value of the objective function Z.

By Corner Point Method : Solving simultaneously the equations of the corresponding


intersecting lines of the permissible region, the co-ordinates of the corner points of this
region are P (0, 8), T (2, 4) and S (10, 0).
54

Now the values of the objective function at these corner points are as given in the table
below :

Point ( x1, x2 ) Value of the objective function Z = 5 x1 + 7 x2

P (0, 8) Z = 5 × 0 + 7 × 8 = 56

T (2, 4) Z = 5 × 2 + 7 × 4 = 38 (Min.)

S (10, 0) Z = 5 × 10 + 7 × 0 = 50

Clearly, Z is minimum at T (2, 4). Hence x1 = 2 and x2 = 4 is the optimal solution of the
given problem and the optimal value is Z = 38.

Hence, the dietician should mix 2 kg of food X and 4 kg of food Y to make the required
mixture at minimum cost. The minimum cost in this case is ` 38.

By Iso-cost Method : To find the minimum value of Z, we draw the dotted line
corresponding to Z = 5 x1 + 7 x2 = 0 passing through the origin which is parallel to iso-cost
line. We now move this line parallel to itself so that it passes through only one point
[here the corner point T (2, 4)] of the feasible region. This line is an iso-cost line having
only one point viz. T in the feasible region which gives the minimum value of Z.

Hence, Z is minimum for x1 = 2 and x2 = 4 and the minimum value of Z is

5 × 2 + 7 × 4 = ` 38.

Example 6: A farm is engaged in breeding hens. In view of the need to ensure certain
nutrients (say x1, x2 , x3 ), it is necessary to buy two types of food, say A and B. One unit of
food A contains 36 units of x1, 3 units of x2 and 20 units of x3 . One unit of food B
contains 6 units of x1, 12 units of x2 and 10 units of x3 . The minimum daily requirement
of x1, x2 and x3 is 108, 36 and 100 units respectively. The cost of food A is ` 20 per unit
whereas food B costs ` 40 per unit. Find the minimum food cost so as to meet the minimum
daily requirement of nutrients.

Solution: Formulation of the Problem as L.P.P. : Let x units of food A and y units of
food B be bought to fulfill the minimum requirement of the nutrients x1, x2 , x3 and to
minimize the cost.

If Z be the cost in ` of the two types of food bought, then Z = 20 x + 40 y.

According to the minimum daily requirements of x1, x2 , x3 we have

36 x + 6 y ≥ 108, 3 x + 12 y ≥ 36, 20 x + 10 y ≥ 100

Also, since the quantity of each type of food bought cannot be negative, therefore
x ≥ 0, y ≥ 0.
55

Hence, the mathematical form of the given problem as a L.P.P. is as follows :

Minimize Z = 20 x + 40 y

subject to the constraints 36 x + 6 y ≥ 108, 3 x + 12 y ≥ 36, 20 x + 10 y ≥ 100

and the non-negative restrictions x ≥ 0, y ≥ 0.

Solution of the Problem : Proceeding stepwise, the permissible region (the set of all
points satisfying all the constraints and the non-negative restrictions) consists of the
shaded region YLSTPX which is unbounded.

[Fig. 2.11 : the lines 36 x + 6 y = 108, 3 x + 12 y = 36, 20 x + 10 y = 100, the dotted line
Z = 20 x + 40 y = 0 and the unbounded permissible region YLSTPX]

To find the minimum value of Z, we draw the dotted line through the origin
corresponding to Z = 20 x + 40 y = 0, which is parallel to iso-cost line. We now move this
line parallel to itself so that it enters the permissible region through a point [here the
corner point T (4, 2)]. This line is an iso-cost line having only one point viz. T in the
feasible region which will give the minimum value of Z.

Therefore, Z is minimum for x = 4 and y = 2 and the minimum value of Z is


20 × 4 + 40 × 2 = ` 160.

Hence, 4 units of food A and 2 units of food B should be bought to fulfill the minimum
requirements of x1, x2 , x3 at a minimum cost of ` 160.
56

Example 7: Old hens can be bought at ` 2 each and young ones at ` 5 each. The old hens
lay 3 eggs per week and the young ones lay 5 eggs per week, each egg being worth 30 paise.
A hen (young or old) costs ` 1 per week to feed. I have only ` 80 to spend for hens. How
many of each kind should I buy to give me a maximum profit, assuming that I cannot house
more than 20 hens ?

Solution: Formulation of the Problem as L.P.P. : Suppose I buy x1 old hens and x2
young hens.

Since old hens lay 3 eggs per week and the young ones lay 5 eggs per week, the total
number of eggs I have per week is 3 x1 + 5 x2 . Consequently, each egg being worth 30
paise, my total income per week is ` 0.3 (3 x1 + 5 x2 ).

Also, the expenses for feeding ( x1 + x2 ) hens, at the rate of ` 1 per hen per week

= ` ( x1 + x2 )

Thus, the total profit Z in ` earned per week is given as

Z = 0.3 (3 x1 + 5 x2 ) − ( x1 + x2 ) = − 0.1 x1 + 0.5 x2

Since the cost of one old hen is ` 2 and that of one young hen is ` 5 and I have only ` 80 to
spend for hens, therefore 2 x1 + 5 x2 ≤ 80

Since I cannot house more than 20 hens, therefore x1 + x2 ≤ 20.

Again, the number of hens purchased, whether old ones or young ones, cannot be
negative.

Therefore, x1 ≥ 0 and x2 ≥ 0.

Hence, the given problem formulated as L.P.P. is as follows :

Maximize Z = − 0 .1x1 + 0 . 5 x2

subject to the constraints 2 x1 + 5 x2 ≤ 80, x1 + x2 ≤ 20

and the non-negative restrictions x1 ≥ 0 and x2 ≥ 0.

Solution of the Problem


Proceeding stepwise, the permissible region (the set of all points satisfying all the
constraints and the non-negative restrictions) consists of the shaded region OBECO.

To find the maximum value of the objective function Z, we draw the dotted line through
the origin corresponding to Z = − 0.1 x1 + 0.5 x2 = 0, which is parallel to the iso-profit line.

We now move this line parallel to itself away from the origin so that it leaves the region
through only one point [here the corner point B (0, 16)] of the feasible region. This line is
an iso-profit line having only one point, viz. B, in the feasible region which will yield the
maximum value of Z.
57

[Fig. 2.12 : the lines x1 + x2 = 20, 2 x1 + 5 x2 = 80, the dotted line
Z = − 0.1 x1 + 0.5 x2 = 0 and the permissible region OBECO]

Thus, Z is maximum for x1 = 0 and x2 = 16 and the maximum value of Z

= 0 . 5 × 16 − 0 .1 × 0 = ` 8.

Hence, I should buy only 16 young hens and no old hens in order to get the maximum
profit of ` 8 per week.

Note : If the dotted line through the origin is moved in the opposite direction then it will
pass through the vertex C (20, 0) for which

Z = − 0 .1 × 20 + 0 . 5 × 0 = −2 < 8
∴ Z is not max. at C (20, 0).

By Corner Point Method : The point giving the maximum value of the objective
function may also be located by finding the values of the objective function at all the
different corner points of the feasible region.
Solving simultaneously the equations of the corresponding intersecting lines of the
feasible region, we get the co-ordinates of the vertices of the feasible region as
O (0, 0), B (0, 16), E (20/3, 40/3) and C (20, 0). The values of the objective function at
these corner points are as given below :

Point ( x1, x2 )      Value of the objective function Z = − 0.1 x1 + 0.5 x2

O (0, 0)              Z = − 0.1 × 0 + 0.5 × 0 = 0

B (0, 16)             Z = − 0.1 × 0 + 0.5 × 16 = 8 (Max.)

E (20/3, 40/3)        Z = − 0.1 × 20/3 + 0.5 × 40/3 = 6

C (20, 0)             Z = − 0.1 × 20 + 0.5 × 0 = −2
Thus, the maximum value of Z is 8 and is attained when x1 = 0 and x2 = 16.
58

Example 8: Suresh has two factories of toys, one located in city X and the other in city Y.
Both these factories manufacture the same type of toys. From these locations a certain
number of toys are delivered to each of the three depots situated at places A, B and C. The
weekly requirements of the depots are respectively 5, 5 and 4 units, while the production
capacity of the factories at X and Y are respectively 8 and 6 units. The transportation cost
in ` per unit from a factory to a depot is as given in the table.

To Depot
A B C
From Factory

X 16 10 15

Y 10 12 10

How many units should be transported from each factory to each depot in order that the
transportation cost is minimum ?

Solution: Formulation of the problem as L.P.P. : Let x1 and x2 units of toys be


transported from the factory at X to the depots at A and B respectively. Since the
production capacity of the factory at X is 8 units, so 8 − x1 − x2 units will be transported
from the factory at X to the depot at C.

Now the number of units supplied to the depots cannot be negative, therefore

x1 ≥ 0, x2 ≥ 0 and 8 − x1 − x2 ≥ 0 ⇒ x1 + x2 ≤ 8

The weekly requirement of the depot at A is 5 units. Since x1 units are supplied from the
factory at X, the remaining 5 − x1 units are to be supplied from the factory at Y.

∴ 5 − x1 ≥ 0 ⇒ x1 ≤ 5.

Similarly, 5 − x2 units are to be supplied from the factory at Y to the depot at B.

∴ 5 − x2 ≥ 0 ⇒ x2 ≤ 5

Since the production capacity of the factory at Y is 6 units, therefore

6 − (5 − x1 + 5 − x2 ), i.e., x1 + x2 − 4

units of toys are to be supplied from this factory to the depot at C.

On account of the non-negative restrictions, we must have

x1 + x2 − 4 ≥ 0 ⇒ x1 + x2 ≥ 4.

The total transportation cost Z in ` is given by

Z = 16 x1 + 10 x2 + 15(8 − x1 − x2 ) + 10(5 − x1) + 12(5 − x2 ) + 10( x1 + x2 − 4)



or Z = x1 − 7 x2 + 190.

Hence the given problem formulated as L.P.P. is as follows ;

Minimize Z = x1 − 7 x2 + 190

subject to the constraints x1 + x2 ≤ 8

x1 ≤ 5

x2 ≤ 5

x1 + x2 ≥ 4

and the non-negative restrictions x1 ≥ 0, x2 ≥ 0.

Solution of the Problem : Proceeding stepwise, the permissible region (the set of all
points satisfying all the constraints and the non-negative restrictions) consists of the
shaded region EFCGHDE.
[Fig. 2.13 : the lines x1 + x2 = 8, x1 + x2 = 4, x1 = 5, x2 = 5, the dotted line
x1 − 7 x2 = 0 and the permissible region EFCGHDE]

To find the minimum value of the objective function Z, we draw a dotted line through
the origin corresponding to x1 − 7 x2 = 0, which is an iso-cost line. We first move this
line parallel to itself in the direction of x2 decreasing, so that it passes through only one
point [here the corner point C (5, 0)] of the feasible region. At C (5, 0), Z = 195 > 190. So
we move the line OL parallel to itself away from the origin towards the positive direction
of the x2-axis so that it passes through only one point [here the corner point D (0, 5)] of the
feasible region. This line is an iso-cost line having only one point, viz. D, in the feasible
region which will give the minimum value of Z.
60

Thus, Z is minimum for x1 = 0 and x2 = 5 and the minimum value of Z in this case is
Z = 0 − 7 × 5 + 190 = 155.

Hence, the optimal transportation strategy is to supply 0, 5 and 3 units from the factory
at X and 5, 0 and 1 units from the factory at Y to the depots at A, B and C respectively. In
this case the transportation cost is minimum and is ` 155.
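Note : This reduced two-variable problem is easy to verify numerically. In the sketch below (ours, assuming SciPy is installed) the simple restrictions x1 ≤ 5 and x2 ≤ 5 are passed through the bounds argument, and the constant 190 of the objective function, which a solver ignores, is added back at the end.

from scipy.optimize import linprog

# Sketch only: minimize Z = x1 - 7 x2 + 190 (the constant term is handled separately).
c = [1, -7]
A_ub = [[1, 1],      # x1 + x2 <= 8
        [-1, -1]]    # x1 + x2 >= 4, rewritten as -x1 - x2 <= -4
b_ub = [8, -4]
bounds = [(0, 5), (0, 5)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, res.fun + 190)   # expected: x1 = 0, x2 = 5, minimum Z = 155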

Example 9: The post master of a local post office wishes to hire extra helpers during the
deepawali season, because of a large increase in the volume of mail handling and delivery.
Keeping in view the limited office space and the budgetary condition, the number of
temporary helpers must not exceed 10. According to past experience, a man can handle
300 letters and 80 packages per day, and a woman can handle 400 letters and 50
packages per day. It is believed that the daily volume of extra mail and packages will be no
less than 3400 and 680 respectively. A man receives ` 25 a day and a woman receives
` 22 a day. How many men and women helpers should be hired to keep the pay-roll at a
minimum ? [Kanpur 2008]

Solution: Formulation of the Problem as L.P.P. : Let x1 men and x2 women be hired
by the post master to keep the pay-roll at a minimum.

If Z be the daily pay-roll in ` of the extra helpers, then

Z = 25 x1 + 22 x2

Since the maximum number of helpers can be only 10, therefore x1 + x2 ≤ 10.

Given that a man can handle 300 letters daily and a woman can handle 400 letters daily
and that the number of extra letters expected daily is not less than 3400, we must have

300 x1 + 400 x2 ≥ 3400.

Similarly, 80 x1 + 50 x2 ≥ 680.

Since the number of men and women hired cannot be negative, we have

x1 ≥ 0, x2 ≥ 0.

Hence, the L.P.P. formulated for the given problem is as follows :

Minimize Z = 25 x1 + 22 x2

subject to the constraints x1 + x2 ≤ 10

300 x1 + 400 x2 ≥ 3400

80 x1 + 50 x2 ≥ 680

and the non-negative restrictions x1 ≥ 0, x2 ≥ 0.



Solution of the Problem : Proceeding stepwise, the permissible region (the set of all
points satisfying all the constraints and the non-negative restrictions) consists of the
point G (6, 4) only.

[Fig. 2.14 : the lines x1 + x2 = 10, 300 x1 + 400 x2 = 3400, 80 x1 + 50 x2 = 680;
the permissible region reduces to the single point G (6, 4)]

Therefore, the optimal solution is x1 = 6, x2 = 4 and the optimal value of

Z = 25 × 6 + 22 × 4 = 238.

Hence, 6 men and 4 women helpers should be hired by the post master to meet the
seasonal requirements and keep the pay-roll at a minimum of ` 238.
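Note : This problem is a good test case for a numerical solver, because the feasible region has shrunk to the single point G (6, 4). The sketch below (ours, assuming SciPy is installed) confirms that linprog still locates it.

from scipy.optimize import linprog

# Sketch only: minimize Z = 25 x1 + 22 x2; the ">=" rows are negated into "<=" form.
c = [25, 22]
A_ub = [[1, 1],          # x1 + x2 <= 10
        [-300, -400],    # 300 x1 + 400 x2 >= 3400
        [-80, -50]]      # 80 x1 + 50 x2 >= 680
b_ub = [10, -3400, -680]
res = linprog(c, A_ub=A_ub, b_ub=b_ub)
print(res.x, res.fun)    # expected: the unique feasible point x1 = 6, x2 = 4, Z = 238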

Example 10: Solve by graphical method the L.P.P.

Minimize Z = 5 x1 + 6 x2

subject to the constraints x1 + x2 ≥ 50 , x1 + 2 x2 ≤ 40 , 3 x1 + 4 x2 ≤ 100

and the non-negative restrictions x1 ≥ 0 and x2 ≥ 0 .

Solution: Solving simultaneously the inequations of the constraints and the non-negative
restrictions by graphical method, we see that there exist no values of x1 and x2 that
simultaneously satisfy all the constraints and the non-negative restrictions.
62

[Fig. 2.15 : the lines x1 + x2 = 50, x1 + 2 x2 = 40, 3 x1 + 4 x2 = 100;
no point satisfies all the constraints simultaneously]

Hence, the given problem does not have any feasible solution.
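Note : A numerical solver detects the same situation through its status flag: for an empty permissible region, SciPy's linprog reports failure with status 2 (infeasible). A minimal sketch, assuming SciPy is installed:

from scipy.optimize import linprog

# Sketch only: Example 10, whose constraints are mutually inconsistent.
c = [5, 6]
A_ub = [[-1, -1],   # x1 + x2 >= 50, rewritten as -x1 - x2 <= -50
        [1, 2],     # x1 + 2 x2 <= 40
        [3, 4]]     # 3 x1 + 4 x2 <= 100
b_ub = [-50, 40, 100]
res = linprog(c, A_ub=A_ub, b_ub=b_ub)
print(res.success, res.status)   # expected: False, 2 (no feasible solution exists)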

Problems Having Unbounded Solution

Example 11: Solve graphically the following L.P.P.


Max. Z = 3 x1 + 2 x2

subject to x1 − x2 ≤ 1,

x1 + x2 ≥ 3

and x1, x2 ≥ 0 . [Meerut 2009 (BP)]


Solution: Proceeding stepwise, the permissible region is shaded in the figure, which is
unbounded. From the figure it is clear that the dotted line through the origin representing
Z = 0 can be moved parallel to itself in the direction of Z increasing and still have some
points in the permissible region.

Thus Z can be made arbitrarily large and so the problem has no finite maximum value of
Z. Hence the problem has unbounded solution.

[Fig. 2.16 : the lines x1 − x2 = 1, x1 + x2 = 3, the dotted line Z = 3 x1 + 2 x2 = 0
and the unbounded permissible region]
problem has unbounded solution.

Example 12: Solve by graphical method, the L.P.P.

Maximize Z = −3 x1 + 2 x2

subject to x1 ≤ 3

x1 − x2 ≤ 0

and x1, x2 ≥ 0 . [Meerut 2010]

Solution: Proceeding stepwise, the permissible region is shaded in the figure, which is
unbounded. From the figure it is obvious that the dotted line through the origin
representing Z = 0 can be moved parallel to itself in the direction of Z increasing and still
have some points in the permissible region.

Thus, Z can be made arbitrarily large and so the problem has no finite maximum value of
Z. Hence, the problem has unbounded solution.

[Fig. 2.17 : the lines x1 = 3, x1 − x2 = 0, the dotted line Z = −3 x1 + 2 x2 = 0
and the unbounded permissible region]
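Note : Unboundedness also shows up in a solver's status flag: when the objective function can be improved indefinitely over the permissible region, SciPy's linprog returns status 3 (unbounded). A minimal sketch for Example 12, assuming SciPy is installed (the maximization is again turned into a minimization by negating Z):

from scipy.optimize import linprog

# Sketch only: maximize Z = -3 x1 + 2 x2, i.e., minimize 3 x1 - 2 x2.
c = [3, -2]
A_ub = [[1, 0],    # x1 <= 3
        [1, -1]]   # x1 - x2 <= 0
b_ub = [3, 0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub)
print(res.success, res.status)   # expected: False, 3 (Z can be made arbitrarily large)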

Solve graphically the following linear programming problems :

1. Max. Z = x1 + x2, subject to x1 + x2 ≤ 2000, x1 + x2 ≤ 1500, x2 ≤ 600 and x1, x2 ≥ 0.

2. Max. Z = 8 x1 + 7 x2, subject to 3 x1 + x2 ≤ 66,000, x1 + x2 ≤ 45,000, x1 ≤ 20,000, x2 ≤ 40,000 and x1, x2 ≥ 0.

3. Min. Z = 1.5 x1 + 2.5 x2, subject to x1 + 3 x2 ≥ 3, x1 + x2 ≥ 2 and x1, x2 ≥ 0. [Kanpur 2010]

4. Max. Z = 6 x1 + 11 x2, subject to 2 x1 + x2 ≤ 104, x1 + 2 x2 ≤ 76 and x1, x2 ≥ 0. [Kanpur 2012]

5. Max. Z = 5 x1 + 7 x2, subject to x1 + x2 ≤ 4, 3 x1 + 8 x2 ≤ 24, 10 x1 + 7 x2 ≤ 35 and x1, x2 ≥ 0.

6. Min. Z = 3 x1 + 5 x2, subject to −3 x1 + 4 x2 ≤ 12, 2 x1 − x2 ≥ −2, 2 x1 + 3 x2 ≥ 12, x1 ≤ 4, x2 ≥ 2 and x1, x2 ≥ 0.

7. Max. Z = 3 x1 + 2 x2, subject to x1 + x2 ≤ 4, x1 − x2 ≤ 2 and x1, x2 ≥ 0. [Meerut 2009]

8. Max. Z = 3 x1 + 4 x2, subject to x1 − x2 ≤ −1, − x1 + x2 ≤ 0 and x1, x2 ≥ 0. [Meerut 2011 (BP)]

9. Min. Z = x1 + x2, subject to 5 x1 + 10 x2 ≤ 50, x1 + x2 ≥ 1, x2 ≤ 4 and x1, x2 ≥ 0.

10. Max. Z = 0.75 x1 + x2, subject to x1 − x2 ≥ 0, −0.5 x1 + x2 ≤ 1 and x1, x2 ≥ 0. [Meerut 2006 (BP), 08 (BP), 12]

11. Max. Z = 3 x1 + 4 x2, subject to 4 x1 + 2 x2 ≤ 80, 2 x1 + 5 x2 ≤ 180 and x1, x2 ≥ 0. [Gorakhpur 2008, 10]

12. Max. Z = 3 x1 + 5 x2, subject to x1 + 2 x2 ≤ 20, x1 + x2 ≤ 15, x2 ≤ 6 and x1, x2 ≥ 0. [Gorakhpur 2009]

13. Max. Z = 6 x1 − 2 x2, subject to 2 x1 − x2 ≤ 2, x1 ≤ 3 and x1, x2 ≥ 0.

14. Max. Z = 3 x1 + 4 x2, subject to 5 x1 + 4 x2 ≤ 200, 3 x1 + 5 x2 ≤ 150, 5 x1 + 4 x2 ≥ 100, 8 x1 + 4 x2 ≥ 80 and x1, x2 ≥ 0. [Meerut 2004, 07]

15. Max. Z = x1 + (1/2) x2, subject to 3 x1 + 2 x2 ≤ 12, 5 x1 ≤ 10, x1 + x2 ≥ 8, − x1 + x2 ≥ 4 and x1, x2 ≥ 0. [Meerut 2006]

16. Min. Z = 2 x1 − 10 x2, subject to x1 − x2 ≥ 0, x1 − 5 x2 ≥ −5 and x1, x2 ≥ 0.

17. Max. Z = 3 x1 − 2 x2, subject to x1 + x2 ≤ 1, 2 x1 + 2 x2 ≥ 4 and x1, x2 ≥ 0.

18. Max. Z = x1 + x2, subject to x1 − x2 ≥ 0, −3 x1 + x2 ≥ 3 and x1, x2 ≥ 0.

19. Max. Z = 7 x1 + 3 x2, subject to x1 + 2 x2 ≤ 3, x1 + x2 ≤ 4, 0 ≤ x1 ≤ 5/2, 0 ≤ x2 ≤ 3/2.

20. Max. Z = 7 x1 + 3 x2, subject to x1 + 2 x2 ≥ 3, x1 + x2 ≤ 4, 0 ≤ x1 ≤ 5/2, 0 ≤ x2 ≤ 3/2.

21. Min. Z = − x1 + 2 x2, subject to − x1 + 3 x2 ≤ 10, x1 + x2 ≤ 6, x1 − x2 ≤ 2 and x1, x2 ≥ 0.

22. Min. Z = 5 x1 − 2 x2, subject to 2 x1 + 3 x2 ≥ 1 and x1, x2 ≥ 0.

23. Max. Z = 5 x1 + 3 x2, subject to 3 x1 + 5 x2 ≤ 15, 5 x1 + 2 x2 ≤ 10 and x1, x2 ≥ 0. [Gorakhpur 2011]

24. Max. Z = 2 x1 + 3 x2, subject to x1 + x2 ≤ 1, 3 x1 + x2 ≤ 4 and x1, x2 ≥ 0.

25. Max. Z = 2 x1 + x2, subject to x1 + 2 x2 ≤ 10, x1 + x2 ≤ 6, x1 − x2 ≤ 2, x1 − 2 x2 ≤ 1 and x1, x2 ≥ 0.

26. Max. Z = −3 x1 + 2 x2, subject to x1 ≤ 3, x1 − x2 ≤ 0 and x1, x2 ≥ 0. [Meerut 2010]

27. Obtain graphically the maximum value of Z = min {(3 x1 − 10), (−5 x1 + 5)}, 0 ≤ x1 ≤ 5. [Kanpur 2009]

28. A soft drink plant has two bottling machines A and B. It produces and sells 8-ounce and
16-ounce bottles. The following data is available :

Machine 8 Ounce 16 Ounce

A 100/minute 40/minute
B 60/minute 75/minute

The machines can be run 8 hrs. per day, 5 days per week. Weekly production of the
drinks cannot exceed 3,00,000 ounces and the market can absorb 25,000 eight
ounce bottles and 7,000 sixteen ounce bottles per week. Profit on these bottles is 15
paise and 25 paise per bottle respectively. The planner wishes to maximize his
profit subject to all the production and marketing restrictions. Formulate it as a
linear programming problem and solve graphically.
29. The ABC Electric Appliance Company produces two products; refrigerators and
coolers. Production takes place in two separate departments. Refrigerators are
produced in Department I and coolers are produced in Department II. The
company's two products are produced and sold on a weekly basis. The weekly
production cannot exceed 25 refrigerators in Department I and 35 coolers in
Department II, because of the limited available facilities in these two departments.

The company regularly employs a total of 60 workers in the two departments. A


refrigerator requires 2 man-weeks of labour, while a cooler requires 1 man-week of
labour. A refrigerator contributes a profit of ` 60 and a cooler contributes a profit of
` 40. How many units of refrigerators and coolers should the company produce to
realize maximum profit ?

30. A firm manufactures two types of products A and B and sells each of them at a profit
of ` 2 per product. Each product is processed on two machines P and Q. Type A
product requires one minute of processing time on machine P and two minutes on
machine Q. Type B product requires one minute on machine P and one minute on
machine Q. The machine P is available for not more than 6 hours and 40 minutes
while machine Q is available for 10 hours during any working day.
Formulate the given problem as a LPP and find how many products of each type
should the firm produce each day in order to get maximum profit.

31. An automobile manufacturer makes automobiles and trucks in a factory that is


divided into two shops. Shop A, which performs the basic assembly operation, must
work 5 man-days on each truck but only 2 man-days on each automobile. Shop B,
which performs finishing operation, must work 3 man-days for each automobile or
truck that it produces. Because of men and machine limitations, shop A has 180
man-days per week available while shop B has 135 man-days per week. If the
manufacturer makes a profit of ` 300 on each truck and ` 200 on each automobile,
how many of each should he produce to maximize his profit ?

32. A pineapple firm produces two products-canned pineapple and canned juice. The
specific amounts of material, labour and equipment required to produce each
product and the availability of each of these resources are shown in the table given
below :

Canned Canned Available


juice Pineapple Resources

Labour (man-hour) 3 2.0 12.0


Equipment (machine hours) 1 2.3 6.9
Material (unit) 1 1.4 4.9

Assuming one unit each of canned juice and canned pineapple has profit margins of
` 2 and ` 1 respectively, determine the product mix that will maximize the profit.

33. Diet Problem : Consider two different types of foodstuffs, say F1 and F2 . Assume
that these foodstuffs contain vitamins V1, V2 and V3 respectively. Minimum daily
requirement of three vitamins are 1 mg. of V1, 50 mg. of V2 and 10 mg. of V3 .
Suppose that the food stuff F1 contains 1 mg. of V1, 100 mg of V2 and 10 mg. of V3 .
Whereas the food stuff F2 contains 1 mg. of V1, 10 mg. of V2 and 100 mg. of V3 . Cost of
one unit of foodstuff F1 is ` 1 and that of F2 is ` 1.5.

Find the minimum cost diet that would supply the body at least the minimum
requirements of each vitamin by graphical method.

34. A company sells two different products A and B. The company makes a profit of
` 40 and ` 30 per unit on products A and B respectively. The products are produced
in a common production process and are sold in two different markets. The
production process has a capability of 30,000 man-hours. It takes 3 hours to
produce one unit of A and one hour to produce one unit of B. The market has been
surveyed, and company officials feel that the maximum number of units of A that
can be sold is 8,000 and the maximum of B is 12,000 units. Subject to these
limitations, the products can be sold in any convex combination. Formulate the
above problem as a L.P.P. and solve it by graphical method.

35. A farm is engaged in breeding pigs. The pigs are fed on various products grown on
the farm. In view of the need to ensure certain nutrient constituents, it is necessary
to buy products (say A and B) in addition. The contents of the various products, per
unit, in nutrients (vitamins, proteins, etc.) are given in the following table :

Nutrient content in Min. amount of


Nutrients
A B nutrient

M1 36 6 108
M2 3 12 36
M3 20 10 100

The last column of the above table gives the minimum amount of nutrient
constituents M1, M2 , M3 which must be given to the pigs. If the products A and B
cost ` 20 and ` 40 per unit respectively, how much each of these two products
should be bought so that the total cost is minimized ?

36. A company produces two types of leather belts, say type A and B. Belt A is of
superior quality and belt B is of a lower quality. Profits on the two types of belts are
40 and 30 paise per belt, respectively. Each belt of type A requires twice as much
time as required by a belt of type B. If all belts were of type B, the company would
produce 1000 belts per day. But the supply of leather is sufficient only for 800 per
day. Belt A requires a fancy buckle and 400 fancy buckles are available for this, per
day. For belt of type B, only 700 buckles are available per day. How should the
company manufacture the two types of belts in order to have maximum overall
profit?

37. A person requires 10, 12 and 12 units of chemicals A, B and C respectively for his
garden. A liquid product contains 5, 2 and 1 units of A, B and C respectively per jar. A
dry product contains 1, 2 and 4 units of A, B and C per carton. If the liquid product
sells for ` 3 per jar and the dry product sells for ` 2 per carton, how many of each should
be purchased to minimize the cost and meet the requirements?

38. A publisher sells a hard cover edition of a text book for ` 72 and a paperback edition
of the same text for ` 40. Costs to the publisher are ` 56 and ` 28 per book
respectively in addition to weekly costs of ` 9600. Both types of books require 5
minutes of printing time, although hard cover requires 10 minutes binding time
and the paperback requires only 2 minutes. Both the binding and printing
operations have 4800 minutes available each week. How many of each type of book
should be produced in order to maximize profit ?

39. A manufacturer has employed 5 skilled men and 10 semi-skilled men and makes an
article in two qualities : deluxe model and an ordinary model. The making of a
deluxe model requires 2 hrs work by a skilled man and 2 hrs work by a semi-skilled
man. The ordinary model requires 1 hr by a skilled man and 3 hrs by a semi-skilled
man. By union rules no man may work more than 8 hrs per day. The manufacturer's
clear profit on deluxe model is ` 15 and on an ordinary model is ` 10. How many of
each type should be made in order to maximize his total daily profit ?

1. Infinite number of solutions. One is x1 = 1000, x2 = 500 ; Z = 1500.
2. x1 = 10,500, x2 = 34,500 ; Z = 3,25,500.

3. x1 = 3/2, x2 = 1/2 ; Z = 3.5.
4. x1 = 44, x2 = 16 ; Z = 440.

5. x1 = 1. 6, x2 = 2 . 4 ; Z = 24 . 8. 6. x1 = 3, x2 = 2 ; Z = 19.

7. x1 = 3, x2 = 1; Z = 11. 8. No feasible solution.

9. Infinite number of solutions. 10. Unbounded solution.

11. x1 = 5/2, x2 = 35 ; Z = 295/2. 12. x1 = 10, x2 = 5 ; Z = 55.

13. x1 = 3, x2 = 4 ; Z = 10. 14. x1 = 400/13, x2 = 150/13 ; Z = 1800/13.

15. No feasible solution. 16. x1 = 5/4, x2 = 5/4 ; Z = −10.

17. No solution. 18. No feasible solution.

19. x1 = 5/2, x2 = 1/4 ; Z = 73/4. 20. x1 = 5/2, x2 = 3/2 ; Z = 22.

21. x1 = 2, x2 = 0 ; Z = −2. 22. x1 = 0, x2 = 1/3 ; Z = −2/3.


23. x1 = 20/19, x2 = 45/19 ; Z = 235/19. 24. x1 = 0, x2 = 1 ; Z = 3.

25. x1 = 4, x2 = 2 ; Z = 10. 26. Unbounded solution.



27. Max. Z = −10 for x1 = 0.

28. Let x1 and x2 be the number of bottles of 8 and 16 ounces respectively.


L.P.P. is Max. Z = 0 .15 x1 + 0 . 25 x2
s.t. 8 x1 + 16 x2 ≤ 3, 00, 000,
x1/100 + x2/40 ≤ 2,400, x1/60 + x2/75 ≤ 2,400,
x1 ≤ 25, 000, x2 ≤ 7, 000 and x1, x2 ≥ 0.
Solution is x1 = 25, 000, x2 = 6, 250, Z = 5, 312 . 50.

29. Refrigerators = 12 . 5, Coolers = 35 ; Max. profit = ` 2,150.

30. Let x1 and x2 be the numbers of products A and B to be manufactured.


L.P.P. is Max. Z = 2 x1 + 2 x2
s.t. x1 + x2 ≤ 400, 2 x1 + x2 ≤ 600
and x1, x2 ≥ 0.
Solution is x1 = 200, x2 = 200
or x1 = 0, x2 = 400
or any point on line segment joining these two points. Z = ` 800.

31. Trucks = 30, Automobiles = 15 per week ; Max. Profit = ` 12,000.

32. Canned juice = 4, Canned pineapple = 0 ; profit = ` 8.

33. F1 = 1 unit, F2 = 0 unit ; cost = ` 1.

34. If the numbers of units of products A and B be x1 and x2 respectively, then the L.P.P. is


Max. Z = 40 x1 + 30 x2
s.t. 3 x1 + x2 ≤ 30, 000, x1 ≤ 8, 000, x2 ≤ 12, 000, x1, x2 ≥ 0.
Solution is x1 = 6,000, x2 = 12,000, Z = ` 6,00,000

35. 4 units of product A, 2 units of product B ; cost = ` 160.

36. 200 belts of type A, 600 belts of type B ; profit = ` 260.

37. 1 unit of liquid product, 5 cartons of dry product, cost = `13.

38. Maximum profit is ` 3360 when 360 books of hard cover edition and 600 books
of paper back edition are sold.

39. Maximum profit is ` 350 when 20 ordinary and 10 deluxe models are made.
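A reader with Python available can cross-check an answer such as that of problem 28. The sketch below is ours, not the book's; it assumes the scipy library, whose linprog routine minimizes, so the profit coefficients are negated.

    from scipy.optimize import linprog

    # Problem 28: Max Z = 0.15*x1 + 0.25*x2  ->  minimize -Z
    c = [-0.15, -0.25]
    A_ub = [[8, 16],              # weekly ounces      <= 3,00,000
            [1/100, 1/40],        # machine A minutes  <= 2,400
            [1/60, 1/75]]         # machine B minutes  <= 2,400
    b_ub = [300000, 2400, 2400]
    bounds = [(0, 25000), (0, 7000)]   # market limits on x1, x2

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    print(res.x, -res.fun)   # x1 = 25,000, x2 = 6,250, Z = 5,312.50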

2.11 Slack and Surplus Variables


[Meerut 2006 (BP), 07 (BP), 09, 11, 12 ; Kanpur 2007; Gorakhpur 2008, 09, 10, 11]

2.11.1 Slack Variables


If a constraint has a sign ≤, then in order to make it an equality we have to add something
positive to the left hand side.

The non-negative variables which are added to L.H. sides of the constraints to
convert them into equalities are called the slack variables.

For example, consider the linear programming problem

Max. Z = 2 x1 + 3 x2 + 4 x3

x1 + x2 + x3 ≤ b1 
s.t.  …(1)
2 x1 + 4 x2 − x3 ≤ b2 

x1, x2 , x3 ≥ 0.

In order to convert constraints (1) into equalities (equations) we add two non-negative
variables x4 and x5 on L.H.S. of (1), then we have

x1 + x2 + x3 + x4 = b1

and 2 x1 + 4 x2 − x3 + x5 = b2 .

Hence x4 and x5 are called the slack variables.

2.11.2 Surplus Variables


If a constraint has a sign ≥, then in order to make it an equality we have to subtract
something positive from its L.H.S. The non-negative variables which are subtracted
from the L.H. sides of the constraints to convert them into equalities are called the
surplus variables.

For example, consider the linear programming problem

Max. Z = 2 x1 + 4 x2 + 4 x3

x1 + x2 + 2 x3 ≤ b1 
 ...(2)
s.t. 2 x1 + 4 x2 + 6 x3 ≥ b2 
x1 − 2 x2 + 4 x3 ≥ b3 

The constraints (2) are inequalities. In order to convert them into equalities we have to
add something to the L.H.S. of the first and subtract something from the L.H. sides of the
second and the third inequalities of (2), i.e., we have the following equalities

x1 + x2 + 2 x3 + x4 = b1

2 x1 + 4 x2 + 6 x3 − x5 = b2

x1 − 2 x2 + 4 x3 − x6 = b3

Here, x4 is the slack variable while x5 and x6 are the surplus variables.
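This bookkeeping is mechanical and easy to automate. The following Python sketch is an illustration of ours (it assumes numpy; the function name is our own): it appends a +1 column for every ≤ constraint (slack) and a −1 column for every ≥ constraint (surplus), matching the variables x4, x5, x6 introduced above.

    import numpy as np

    def add_slack_surplus(A, signs):
        # One new column per inequality row: +1 entry for '<=' (slack),
        # -1 entry for '>=' (surplus); '=' rows get no new variable.
        A = np.asarray(A, dtype=float)
        cols = []
        for i, s in enumerate(signs):
            if s in ('<=', '>='):
                col = np.zeros((A.shape[0], 1))
                col[i, 0] = 1.0 if s == '<=' else -1.0
                cols.append(col)
        return np.hstack([A] + cols) if cols else A

    # Constraints (2) above: one '<=' row followed by two '>=' rows.
    print(add_slack_surplus([[1, 1, 2], [2, 4, 6], [1, -2, 4]],
                            ['<=', '>=', '>=']))
    # the last three columns belong to x4 (slack) and x5, x6 (surplus)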

2.12 Standard form of Linear Programming Problem


The standard form of a L.P.P. has the following characteristics :
1. The objective function is of maximization form.
2. All the constraints are equations.
3. The right hand side elements of all the constraints are non-negative.
4. All the variables are non-negative i.e., ≥ 0

Working Rule to find the Standard form of a L.P.P. : To convert a L.P.P. to the
standard form proceed stepwise as follows :

Step 1 : To convert the objective function to the maximization form :

If the objective function is

Min. Z = f ( x)

Then multiply both sides of the objective function by –1 and put − Z = Z ′ to get the
corresponding maximization form of the objective function.

i.e., the corresponding objective function in maximization form is

Max. Z ′ = − Z = − f ( x)

Step 2 : To make right hand side element of each constraint non-negative, if not.

If the right hand side of a constraint is negative, then it is made non-negative by
multiplying both sides of this constraint by –1.

Remember that, if a constraint contains an inequality sign (i.e., ≥ or ≤ sign), then
multiplication of both sides of it by –1 will reverse the sign of the inequality.

e.g., on multiplying both sides by –1, the inequality 2 x1 − 3 x2 ≥ −3 reduces to
−2 x1 + 3 x2 ≤ 3.

Step 3 : To convert all the constraints to equations if not.

All the constraints containing an inequality sign are converted to equations by the
introduction of non-negative variables called slack or surplus variables.
1. The constraint with ≤ sign is reduced to an equation by adding a non-negative
variable on the left hand side.

e.g., the constraint 2 x1 + 5 x2 ≤ 7 is reduced to the equation 2 x1 + 5 x2 + x3 = 7, by adding
the non-negative variable x3 (called slack variable) on the left hand side.
2. The constraint with ≥ sign, is reduced to an equation by subtracting a non-negative
variable from the left hand side.
e.g., the constraint 3 x1 − 5 x2 ≥ 9 is reduced to equation 3 x1 − 5 x2 − x4 = 9, by
subtracting the non-negative variable x4 (called surplus variable) from the left hand
side.

Step 4 : To make all the variables non-negative


If a variable say x is unrestricted in sign i.e., it can have positive, negative or zero value,
then it is replaced by the difference of two non-negative variables.

i.e., write x = x′ − x′′ where x′ ≥ 0 and x′′ ≥ 0 in the objective function and in all equations
obtained in step 3.
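A quick numerical check of Step 4 (our illustration, not the book's): every real value x splits into non-negative parts x′ = max(x, 0) and x′′ = max(−x, 0) with x = x′ − x′′.

    # Step 4 numerically: an unrestricted x becomes x' - x'' with x', x'' >= 0
    for x in (-3.5, 0.0, 2.25):
        x_pos, x_neg = max(x, 0.0), max(-x, 0.0)
        assert x == x_pos - x_neg and x_pos >= 0 and x_neg >= 0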

2.13 Matrix form of a Linear Programming Problem


A L.P. Problem in standard form can be expressed in matrix form as follows :
Max. Z = c x
subject to A x = b, b ≥0
and x ≥0
where, c = row matrix (row vector) of coefficients of variables (given, slack and surplus)
in objective function Z.
x = Column matrix (or column vector) of the variables (given, slack and surplus)
b = Column matrix (or column vector) of R.H.S elements in the constraint
equations
and A = Matrix of the coefficients of variables in the constraint equations.
For clear understanding of the method, see the following examples.

Example 1: Convert the following L.P.P. into standard form.


Max. Z = 2 x1 + 3 x2 + 5 x3
subject to, 5 x1 − 4 x2 + 3 x3 ≤ 7
2 x1 + 5 x2 − 4 x3 ≥ 2
4 x1 + 3 x2 + 7 x3 ≤ 8
and x1, x2 , x3 ≥ 0 [Gorakhpur 2011]

Also express the L.P.P. in the matrix form.



Solution: For clear understanding we shall convert the given L.P.P. into standard form
stepwise as discussed in article 2.12.

Step 1 : Here the given objective function Z = 2 x1 + 3 x2 + 5 x3 is of maximization form.

Step 2 : In the given problem, right hand side element of each constraint is positive
(non-negative)

Step 3 : Adding the slack variables x4 and x6 (both non-negative) in the first and third
constraints respectively and subtracting the surplus variable x5 (non-negative) from the
second constraint, the constraints of the given L.P.P., written as equations, are

5 x1 − 4 x2 + 3 x3 + x4 = 7

2 x1 + 5 x2 − 4 x3 − x5 = 2

4 x1 + 3 x2 + 7 x3 + x6 = 8

Step 4 : All the variables, x1, x2 , x3 , x4 , x5 , x6 are non-negative i.e., ≥ 0.

Taking coefficients 0 for slack and surplus variables in the objective function Z, the
standard form of given L.P.P. is

Max. Z = 2 x1 + 3 x2 + 5 x3 + 0. x4 + 0. x5 + 0. x6

subject to,

5 x1 − 4 x2 + 3 x3 + x4 = 7

2 x1 + 5 x2 − 4 x3 − x5 = 2

4 x1 + 3 x2 + 7 x3 + x6 = 8

and x1, x2 , x3 , x4 , x5 , x6 ≥ 0

∴ The matrix form of the L.P.P. is

Max. Z = c x

subject to, Ax = b and x ≥0

where, c = [2 3 5 0 0 0]

x = [ x1 x2 x3 x4 x5 x6 ]T , b = [7 2 8]T

5 −4 3 1 0 0 
and A = 2 5 −4 0 −1 0 
 
4 3 7 0 0 1

Example 2: Convert the following L.P.P. into standard form.

Min. Z = 2 x1 + x2 + 4 x3

subject to, −2 x1 + 4 x2 ≤ 4

x1 + 2 x2 + x3 ≥ 5

2 x1 + 3 x3 ≤ 2

and x1, x2 ≥ 0 , x3 unrestricted in sign. [Kanpur 2010]

Also express the L.P.P. in the matrix form.

Solution: Stepwise solution of the given L.P.P. is as follows :

Step 1 : Taking Z ′ as the objective function in maximization form, we have

Max. Z ′ = − Z = −2 x1 − x2 − 4 x3 ...(1)

Step 2 : In the given L.P.P. right hand side of each constraint is positive.

Step 3 : Adding the slack variables x4 and x6 (both non-negative) in the first and third
constraints respectively and subtracting the surplus variable x5 (non-negative) from the
second constraint, the constraints of the L.P.P. as equations are

−2 x1 + 4 x2 + x4 = 4 

x1 + 2 x2 + x3 − x5 = 5  ...(2)
2 x1 + 3 x3 + x6 = 2 

Step 4 : Here x3 is unrestricted in sign, so putting x3 = x3′ − x3′′ , where x3′ , x3′′ ≥ 0, in (1)
and (2), the standard form of the given L.P.P. is

Max. Z ′ = −2 x1 − x2 − 4 x3′ + 4 x3′′ + 0. x4 + 0. x5 + 0. x6

subject to, −2 x1 + 4 x2 + x4 =4

x1 + 2 x2 + x3′ − x3′′ − x5 =5

2 x1 + 3 x3′ − 3 x3′′ + x6 =2

and x1, x2 , x3′ , x3′′ , x4 , x5 , x6 ≥0

∴ The matrix form of the given L.P.P. is

Max. Z ′ = c x

subject to A x=b and x ≥0

where, c = [−2 − 1 − 4 4 0 0 0]

x = [ x1 x2 x3′ x3′′ x4 x5 x6 ]T , b = [4 5 2]T

 −2 4 0 0 1 0 0 
and A =  1 2 1 −1 0 −1 0 .
 
 2 0 3 −3 0 0 1

Reduce the following L.P.P. into standard form. Also find their matrix form

1. Max. Z = x1 + 2 x2 2. Min. Z = 12 x1 + 5 x2
subject to, 2 x1 + 3 x2 ≤ 6 subject to 5 x1 + 3 x2 ≥ 15
x1 + 7 x2 ≥ 4 7 x1 − 2 x2 ≤ 14
and x1, x2 ≥ 0 and x1, x2 ≥ 0
3. Max. Z = 2 x1 + 5 x2 + 9 x3 4. Max. Z = 2 x1 + 6 x2 + 5 x3
subject to 5 x1 − 4 x2 + 3 x3 ≤ 7 subject to 5 x1 − 2 x2 + 3 x3 ≤ 9
3 x1 + 5 x2 + 6 x3 ≥ 16 x1 + x2 + x3 ≥ 5
4 x1 + 3 x2 + 5 x3 ≤ 9 x1 + 2 x2 = 7
and x1, x2 , x3 ≥ 0 [Gorakhpur 2010] and x1, x2 , x3 ≥ 0 [Kanpur 2008]
5. Max. Z = 3 x1 + 11x2 6. Max. Z = 3 x1 + 5 x2 + 7 x3
subject to 2 x1 + 3 x2 ≤ 6 subject to 6 x1 − 4 x2 ≤ 5
x1 + 7 x2 ≥ 4 3 x1 + 2 x2 + 5 x3 ≥ 11
x1 + x2 = 3 4 x1 + 3 x3 ≤ 2
where, x1, x2 ≥ 0 [Kanpur 2007] x1, x2 ≥ 0 and x3 unrestricted in sign
[Gorakhpur 2008]
7. Max. Z = 3 x1 + 2 x2 + 5 x3 8. Min. Z = x1 − 2 x2 + x3
subject to 2 x1 + 3 x2 ≤ 3 subject to 2 x1 + 3 x2 + 4 x3 ≥ − 4
x1 + 2 x2 + 3 x3 ≥ 5 3 x1 + 5 x2 + 2 x3 ≥ 7
3 x1 + 2 x3 ≤ 2 x1 ≥ 0, x2 ≥ 0, x3 is unrestricted in sign
and x1, x2 ≥ 0, x3 is unrestricted in sign. [Gorakhpur 2009]
9. Max. Z = x1 + x2 10. Max. Z = 2 x1 + 3 x2 + 4 x3
subject to x1 ≤ 3, x2 ≥ 1, subject to x1 + x2 + x3 ≥ 5
and x1, x2 ≥ 0 x1 + 2 x2 = 7
5 x1 − 2 x2 + 3 x3 ≤ 9
and x1, x2 , x3 ≥ 0

1. Max. Z = x1 + 2 x2 + 0. x3 + 0 x4 2. Max. Z ′ = −12 x1 − 5 x2 + 0. x3 + 0. x4


subject to 2 x1 + 3 x2 + x3 = 6 subject to 5 x1 + 3 x2 − x3 = 15
x1 + 7 x2 − x4 = 4 7 x1 − 2 x2 + x4 = 14
and x1, x2 , x3 , x4 ≥ 0 and x1, x2 , x3 , x4 ≥ 0
Matrix form Max. Z = c x Matrix form Max. Z ′ = c x
subject to A x = b where subject to A x = b, where
c = [1, 2, 0, 0] , c = [−12, − 5, 0, 0],
x = [ x1, x2 , x3 , x4 ]T x = [ x1, x2 , x3 , x4 ]T

b = [6, 4]T , b = [15, 14]T ,

2 3 1 0  5 3 −1 0 
A=  and x ≥ 0 A= , x ≥ 0
1 7 0 −1 7 −2 0 1

3. Max. Z = 2 x1 + 5 x2 + 9 x3 + 0. x4 4 Max. Z = 2 x1 + 6 x2 + 5 x3
+0. x5 + 0. x6 +0. x4 + 0. x5
subject to 5 x1 − 4 x2 + 3 x3 + x4 = 7 subject to 5 x1 − 2 x2 + 3 x3 + x4 = 9
3 x1 + 5 x2 + 6 x3 − x5 = 16 x1 + x2 + x3 − x5 = 5
4 x1 + 3 x2 + 5 x3 + x6 = 9 x1 + 2 x2 = 7
and x1, x2 , x3 , x4 , x5 , x6 ≥ 0 and x1, x2 , x3 , x4 , x5 ≥ 0
Matrix form : Max. Z = c x Matrix form Max. Z = c x
subject to A x = b, where subject to A x = b, where
c = [2, 5, 9, 0, 0, 0], c = [2, 6, 5, 0, 0],
x = [ x1, x2 , x3 , x4 , x5 , x6 ]T x = [ x1, x2 , x3 , x4 , x5 ]T

b = [7, 16, 9]T , b = [9, 5, 7]T ,

5 −4 3 1 0 0  5 −2 3 1 0 
A = 3 5 6 0 −1 0  , x ≥ 0 A = 1 1 1 0 −1 , x ≥ 0
   
4 3 5 0 0 1 1 2 0 0 0 

5. Max. Z = 3 x1 + 11x2 + 0. x3 + 0. x4 6. Max.


subject to 2 x1 + 3 x2 + x3 = 6 Z = 3 x1 + 5 x2 + 7 x3′ − 7 x3′′ + 0. x4

x1 + 7 x2 − x4 = 4 +0. x5 + 0. x6

x1 + x2 = 3 subject to 6 x1 − 4 x2 + x4 = 5

where x1, x2 , x3 , x4 ≥ 0 3 x1 + 2 x2 + 5 x3′ − 5 x3′′ − x5 = 11

Matrix form : Max. Z = c x 4 x1 + 3 x3′ − 3 x3′′ + x6 = 2

subject to A x = b, where and x1, x2 , x3′ , x3′′ , x4 , x5 , x6 ≥ 0



c = [3, 11, 0, 0], Matrix form : Max. Z = c x


subject to A x = b, where
x = [ x1, x2 , x3 , x4 ]T
c = [3, 5, 7, − 7, 0, 0, 0]
b = [6, 4, 3]T ,
x = [ x1, x2 , x3′ , x3′′ , x4 , x5 , x6 ]T ,
2 3 1 0 
A = 1 7 0 −1, x ≥ 0 b = [5, 11, 2]T
 
1 1 0 0  6 −4 0 0 1 0 0 
A = 3 2 5 −5 0 −1 0 ,
 
4 0 3 −3 0 0 1
x ≥0
7. Max. Z = 3 x1 + 2 x2 + 5 x3′ − 5 x3′′ + 0. x4 8. Max. Z ′ = − x1 + 2 x2 − x3′
+0. x5 + 0. x6
+ x3′′ + 0. x4 + 0. x5
subject to 2 x1 + 3 x2 + x4 = 3
s.t. −2 x1 − 3 x2 − 4 x3′ + 4 x3′′ + x4 = 4
x1 + 2 x2 + 3 x3′ − 3 x3′′ − x5 = 5
3 x1 + 5 x2 + 2 x3′ − 2 x3′′ − x5 = 7
3 x1 + 2 x3′ − 2 x3′′ + x6 = 2
and x1, x2 , x3′ , x3′′ , x4 , x5 ≥ 0
and x1, x2 , x3′ , x3′′ , x4 , x5 , x6 ≥ 0
Matrix form : Max. Z ′ = c x
Matrix form : Max. Z = c x
subject to A x = b where
subject to A x = b, where
c = [−1, 2, − 1, 1, 0, 0]
c = [3, 2, 5, − 5, 0, 0, 0]
x = [ x1, x2 , x3′ , x3′′ , x4 , x5 ]T ,
x = [ x1, x2 , x3′ , x3′′ , x4 , x5 , x6 ]T ,
b = [4, 7]T
b = [3, 5, 2]T
 −2 −3 −4 4 1 0 
2 3 0 0 1 0 0  A= ,
 3 5 2 −2 0 −1
A = 1 2 3 −3 0 −1 0 , x ≥ 0
  x ≥0
3 0 2 −2 0 0 1
9. Max. Z = x1 + x2 + 0. x3 + 0. x4 10. Max. Z = 2 x1 + 3 x2 + 4 x3
subject to x1 + x3 = 3, x2 − x4 = 1, +0. x4 + 0. x5
and x1, x2 , x3 , x4 ≥ 0 subject to x1 + x2 + x3 − x4 = 5
Matrix form : Max. Z = c x x1 + 2 x2 = 7
subject to A x = b, where 5 x1 − 2 x2 + 3 x3 + x5 = 9
c = [1, 1, 0, 0], and x1, x2 , x3 , x4 , x5 ≥ 0
Matrix form : Max. Z = c x
x = [ x1, x2 , x3 , x4 ]T
subject to A x = b, where
b = [3, 1]T ,
c = [2, 3, 4, 0, 0],
1 0 1 0 
A= , x ≥ 0 x = [ x1, x2 , x3 , x4 , x5 ]T
0 1 0 −1
b = [5, 7, 9]T ,
1 1 1 −1 0 
A = 1 2 0 0 0 , x ≥ 0
 
5 −2 3 0 1

2.14 Basic Solution (B.S.)


[Kanpur 2012; Gorakhpur 2008]

Consider a system A x = b of m equations in n unknowns (n > m) and r ( A) = r ( A b) = m


i.e., none of the equations is redundant.
A solution obtained by setting any (n − m) variables to zero is called a basic solution, provided the
determinant of the coefficients of the remaining m variables is not zero.
Such m variables (any of them may be zero) are called basic variables and remaining
(n − m) zero valued variables are called non-basic variables.

Thus for a solution to be basic, at least n − m variables must be zero.

Here we note that the matrix formed by the coefficients of the m basic variables, or say
formed by the vectors associated to the basic variables is non-singular as its determinant
does not vanish. Hence the vectors associated to the basic variables are L.I.

Thus a solution in which the vectors associated to m variables are L.I. and remaining n − m variables
are zero is called a basic solution.

Hence a basic solution can be constructed by selecting the m L.I. vectors out of n and
setting the variables associated to the remaining (n − m) columns to zero. If B is the matrix
of m L.I. vectors of A and xB is the column vector of the corresponding variables (basic
variables), then the basic solution is given by

B xB = b or x B = B −1b.

The number of basic solutions thus obtained will be at the most nCm = n! / {m! (n − m)!},
since m vectors out of n can be selected in nCm ways. Note that a B.S. corresponds to some basis.
Basic solutions are of two types : Non-degenerate basic solutions and degenerate basic solutions.

Degenerate Basic Solution : A basic solution is said to be degenerate basic solution if at


least one of the basic variables is zero. [Meerut 2007, Kanpur 2010]

Non-degenerate Basic Solution : A basic solution is said to be non-degenerate basic


solution if none of the basic variables is zero.
Thus a non-degenerate basic solution contains exactly m non-zero and (n − m) zero
variables.

2.15 Theorem
A necessary and sufficient condition for the existence and non-degeneracy of all the basic solutions of
A x = b is that every set of m columns of the augmented matrix A b = [ A, b] is L.I.
Proof : The condition is necessary : Suppose all the basic solutions of the system
Ax = b exist and are non-degenerate. Therefore, every set of m-column vectors of A are L.I.

Let α1, α 2 ,...., α m be one set of m-column vectors of A, then the given system gives

α1 x1 + α 2 x2 + ...+ α m x m = b.

Now since each solution is non-degenerate, x1 ≠ 0 and hence by replacement theorem


vector b can replace α1 in the basis α1, α 2 ,..., α m, Thus vectors b, α 2 , α 3 ,... α m also form a
basis and hence they are L.I. Similarly b can replace α2 as x2 is also non-zero. Hence α1, b,
α3 ,..., αm are also L.I. In a similar way we can show that b with any (m − 1) vectors of A
forms a L.I. set. Hence every set of m columns of the augmented matrix A b = [ A, b] is L.I.

The Condition is Sufficient : Suppose every set of m column vectors of A b = [ A, b] is


L.I. Then obviously every set of m-column vectors of A is also L.I. and hence all the basic
solutions exist.

Now consider any m-column vectors α1, α 2 ,..., α m of A which are L.I. and let x1, x2 ,..., x m
be the corresponding basic solution. Then

x1 α1 + x2 α 2 + ... + x m α m = b

or −1. b + x1 α1 + x2 α 2 + ... + x m α m = 0

Now if x1 = 0, then we have

−1. b + x2 α 2 + ... + x m α m = 0 ,

which shows that the vectors b, α 2 ,..., α m are L.D.

But this is a contradiction as we have already assumed that m vectors of [ A, b] are L.I.

Hence, x1 ≠ 0.

Similarly, we can show that x2 ,..., x m are also different from zero i.e., the solution is
non-degenerate.

In this way, it can be shown that all the basic solutions are non-degenerate.

Corollary 1: The necessary and sufficient condition for any given basic solution
x B = B −1b to be non-degenerate is the linear independence of b and every (m −1) columns
from B.

Proof: The condition is necessary : We know that if a solution is non-degenerate


( x i ≠ 0, i = 1, 2,..., m), b can replace any column of B and still a basis is maintained.

Hence b and any (m −1) columns of B are L.I.

The condition is sufficient : Let b and any (m −1) columns of B be L.I. Any column of B
can be replaced by b and still a basis is maintained. Hence x1,..., x m are all non-zero.

2.16 Some Definitions

1. A Basic Feasible Solution (B.F.S.) : In a L.P.P. a feasible solution which is also


basic is called a basic feasible solution. In other words, a feasible solution to a
L.P.P. in which the vectors associated to non-zero variables (basic variables) are L.I.
is called a basic feasible solution. [Meerut 2007; Kanpur 2010, 12; Gorakhpur 2008]
In a L.P.P. having m constraints, at most m vectors may be L.I. Hence a B.F.S.
cannot have more than m non-zero (i.e., positive) variables. Thus for a F.S. to be a
B.F.S. at least (n − m) variables must vanish.
B.F.S. are also finite in number and their maximum number is n Cm, where n is the
number of variables and m the number of constraints (m ≤ n).

2. Basic Variables : The variables associated to B.F.S. are called basic variables.

3. Non-degenerate B.F.S. : A B.F.S. of a L.P.P. is said to be non-degenerate B.F.S. if


none of the basic variables is zero. [Meerut 2006 (BP), 09, 11 (BP)]

4. Degenerate B.F.S. : A B.F.S. of a L.P.P. is said to be degenerate B.F.S. if at least one


of the basic variables is zero. [Meerut 2006 (BP), 07, 09]

Example 1: Show that the feasible solution x1 = 1, x2 = 0 , x3 = 1 and Z = 6 to the system


of equations
x1 + x2 + x3 = 2

x1 − x2 + x3 = 2

xi ≥ 0

which minimizes Z = 2 x1 + 3 x2 + 4 x3 is not basic.

Solution: In the given F.S. there are only two non-zero variables namely x1 and x3 . If
vectors associated to these variables be α1 and α 3 then the determinant of column vectors
corresponding to the variables x1, x3 is | 1 1 ; 1 1 | = 1 − 1 = 0.

∴ the vectors α1, α 3 are L.D.

Hence the solution is not a basic solution.



Example 2: Find all the basic solutions of the following system :

x1 + 2 x2 + x3 = 4

2 x1 + x2 + 5 x3 = 5

and prove that they are non-degenerate. [Meerut 2007 (BP), 11 (BP); Kanpur 2007, 11]

Solution: The given system of equations can be expressed in matrix form as A x = b


where A = (α1, α 2 , α 3 )

α1 = [1, 2]T , α2 = [2, 1]T , α3 = [1, 5]T , x = [ x1, x2 , x3 ]T , b = [4, 5]T .

Here n = number of variables = 3 and m = number of equations = 2. Hence there can be at


most 3 C2 = 3 basic solutions.

Three sets of two vectors out of α1, α 2 , α 3 are as follows :

1 2  1 1 2 1
B1 = [α1, α 2 ] =  , B2 = [α1, α 3 ] = 2 5 , B3 = [α 2 , α 3 ] = 1 5
2 1     

We have | B1| = −3 ≠ 0, | B2| = 3 ≠ 0, | B3 | = 9 ≠ 0.

It follows that every set of two vectors of A is L.I. Hence all the three basic solutions exist.
If xB1 , xB2 , xB3 are the vectors of corresponding basic variables, then

xB1 = B1−1 b = (−1/3) [1 −2 ; −2 1] [4, 5]T = [2, 1]T
xB2 = B2−1 b = (1/3) [5 −1 ; −2 1] [4, 5]T = [5, −1]T
xB3 = B3−1 b = (1/9) [5 −1 ; −1 2] [4, 5]T = [5/3, 2/3]T

To find all the basic Solutions : In basis B1, basic vectors are α1, α 2 . So

xB1 = [ x1, x2 ]T = [2, 1]T ⇒ x1 = 2, x2 = 1.

These two variables are the basic variables and the remaining variable x3 is non-basic. It
will have the value zero. Thus the basic solution associated to the basis B1 is given by
(2, 1, 0). Similarly other basic solutions are (5, 0, −1) and (0, 5/3, 2/3).

In all the three basic solutions none of the basic variables is zero, hence they are all
non-degenerate.
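The computation of Example 2 generalizes directly. This sketch (ours, assuming numpy) tries every basis B of the system, skips those whose determinant vanishes, and solves B xB = b.

    import numpy as np
    from itertools import combinations

    A = np.array([[1, 2, 1],
                  [2, 1, 5]], dtype=float)
    b = np.array([4, 5], dtype=float)
    m, n = A.shape

    for cols in combinations(range(n), m):     # at most nCm = 3C2 = 3 bases
        B = A[:, cols]
        if abs(np.linalg.det(B)) < 1e-12:
            continue                           # columns L.D.: no basic solution
        x = np.zeros(n)
        x[list(cols)] = np.linalg.solve(B, b)
        print(cols, x)
    # prints the basic solutions (2, 1, 0), (5, 0, -1) and (0, 5/3, 2/3)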

Example 3: Which of the following vectors x = [ x1, x2 , x3 , x4 , x5, x6 , x7 ] is a B.F.S. of


the system
x1 + 2 x2 + x3 + 3 x4 + x5 = 9

2 x1 + x2 + 3 x4 + x6 = 9

− x1 + x2 + x3 + x7 = 0,

x1, x2 , . . . , x7 ≥ 0

(i) x1 = [2, 2, 0 , 1, 0 , 0 , 0 ] (ii) x 2 = [0 , 0 , 9, 0 , 0 , 9, − 9 ]

(iii) x 3 = [3, 3, 0 , 0 , 0 , 0 , 0 ] (iv) x 4 = [0 , 0 , 0 , 0 , 9, 9, 0 ]

(v) x 5 = [1, 0 , 0 , 0 , 8, 7, 1] (vi) x 6 = [0 , 0 , 0 , 3, 0 , 0 , 0 ]

Solution: We have denoted the column vector by symbol [ ].


(i) The vectors associated to non-zero variables are

α1 = [1, 2, − 1], α 2 = [2, 1, 1], α 4 = [3, 3, 0].

Now |α1 α2 α4 | = |1 2 3 ; 2 1 3 ; −1 1 0| = 0

∴ vectors α1, α 2 , α 4 associated to non-zero variables are L.D.

Hence x1 is not a B.F.S.

(ii) The vectors associated to non-zero variables are

α 3 = [1, 0, 1], α 6 = [0, 1, 0], α 7 = [0, 0, 1].

Now |α3 α6 α7 | = |1 0 0 ; 0 1 0 ; 1 0 1| = 1 ≠ 0

∴ vectors α 3 , α 6 , α 7 are L.I.

Hence x 2 is a basic solution but not a basic feasible solution as x7 is negative.

(iii) This solution contains 5 zero variables. So it may be a B.F.S. Now vectors
associated to non-zero variables are

α1 = [1, 2, − 1], α 2 = [2, 1, 1].

Now these vectors together with the vector α 5 = [1, 0, 0] are L.I. as

|α1 α2 α5 | = |1 2 1 ; 2 1 0 ; −1 1 0| = 3 ≠ 0

Hence x 3 is a B.F.S. with x1, x2 , x5 as basic variables.

Since one of the basic variables namely x5 vanishes, so this is a degenerate B.F.S.

(iv) This solution contains 5 zero variables. So it may be a B.F.S. Now vectors
associated to non-zero variables are α 5 = [1, 0, 0], α 6 = [0, 1, 0]. Obviously
these vectors together with α 7 = [0, 0, 1] are L.I. Hence x 4 is a B.F.S. with
x5 , x6 , x7 as basic variables. Since one of the basic variables namely x7 vanishes, so
this is degenerate B.F.S.

(v) This solution does not contain at least 7 − 3 = 4 zero variables, i.e., it contains more
than 3 (the number of equations) non-zero variables, so it is not a B.F.S.

(vi) This solution contains one non-zero variable. The vector associated to this variable
is α 4 = [3, 3, 0].
We have |α4 α6 α7 | = |3 0 0 ; 3 1 0 ; 0 0 1| = 3 ≠ 0

∴ vectors α 4 , α 6 , α 7 are L.I.


Hence x 6 is a B.F.S. with x4 , x6 , x7 as basic variables. Since basic variables x6 , x7
also vanish, so it is a degenerate B.F.S.
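Checks of this kind reduce to a rank computation on the columns associated to the non-zero variables. The following sketch is ours, not the book's (it assumes numpy):

    import numpy as np

    A = np.array([[ 1, 2, 1, 3, 1, 0, 0],
                  [ 2, 1, 0, 3, 0, 1, 0],
                  [-1, 1, 1, 0, 0, 0, 1]], dtype=float)

    def classify(x, A, tol=1e-9):
        x = np.asarray(x, dtype=float)
        if (x < -tol).any():
            return "not feasible"
        support = np.flatnonzero(np.abs(x) > tol)   # non-zero variables
        if np.linalg.matrix_rank(A[:, support]) < len(support):
            return "feasible but not basic (columns L.D.)"
        return "basic feasible solution"

    print(classify([2, 2, 0, 1, 0, 0, 0], A))  # (i): not basic
    print(classify([3, 3, 0, 0, 0, 0, 0], A))  # (iii): a (degenerate) B.F.S.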

2.17 Analytic Method (Trial and Error Method)


As already stated, the Graphical method can be used to solve the L.P.P. in two variables
only. L.P.P. in three variables can also be solved by Graphical method but it is
complicated enough. Simplex method (discussed in chapter 4) is the powerful method
(or technique) to solve a L.P.P. in any number of variables.

Here we give analytic method (trial and error method) to solve a L.P.P. We know that for a
system of m equations in n unknowns (n > m), a solution in which at least n − m variables
have their values equal to zero is called a basic solution. These (n − m) variables are called
non-basic variables. The remaining m variables whose values may or may not be equal to
zero are called basic variables, the vectors in the coefficient matrix corresponding to these
basic variables should be L.I.

In analytical method giving zero values to the non-basic variables in the given equations,
we get equations in basic variables. Solving these equations we can get the basic solutions
of the problem. Then the basic solutions (B.S.) which also satisfy the condition x i ≥ 0, ∀ i
called the basic feasible solutions (B.F.S.) are obtained. For all these B.F. solutions the
values of objective function Z are computed and the B.F.S. giving the optimal value

(maximum or minimum value as required) of Z is taken as the optimal solution of the


given L.P.P.

Drawbacks of Analytical Method


1. For large values of m and n, it is extremely difficult and time consuming to solve
various sets of simultaneous equations.
2. Some of the sets give infeasible solutions. There is no technique to detect all such
sets in advance and avoid solving them.

For clear understanding of the method, see following illustrative examples.

Example 1: Find an optimal solution of the following L.P.P. without using the simplex
algorithm.
Max. Z = 2 x1 + 3 x2 + 4 x3 + 7 x4

subject to 2 x1 + 3 x2 − x3 + 4 x4 = 8

x1 − 2 x2 + 6 x3 − 7 x4 = −3

and xi ≥ 0 , i = 1, 2, 3, 4. [IAS Mains 2001]

Solution: In matrix form the above system of equations can be written as

Ax =b

2 3 −1 4 
where A= = (α1 α 2 α 3 α 4 ),
1 −2 6 −7

 x1 
2   3  −1  4 x  8
2
where α1 =  , α 2 =  , α 3 =  , α 4 =  , x =  , b =  .
1  −2   6 −7  x3   −3 
 x4 

Here order of A is 2 × 4 i.e., m = 2, n = 4.

Since n − m = 4 − 2 = 2, so the basic solutions are obtained by taking 2 variables equal to


zero at a time. In this problem out of 4 variables 2 variables can be taken equal to zero in
4C2 = 6 ways. Hence the problem can have at the most 6 basic solutions,
which can be computed as follows :

S.N. | Basic variables | Non-basic variables | Equations to obtain B.S. | Basic solution (if exists) | Is it B.F.S.? | Value of Z for B.F.S.
1. | x1, x2 | x3 = x4 = 0 | 2x1 + 3x2 = 8, x1 − 2x2 = −3 | x1 = 1, x2 = 2, x3 = x4 = 0 | Yes | 8
2. | x1, x3 | x2 = x4 = 0 | 2x1 − x3 = 8, x1 + 6x3 = −3 | x1 = 45/13, x3 = −14/13, x2 = x4 = 0 | No | -
3. | x1, x4 | x2 = x3 = 0 | 2x1 + 4x4 = 8, x1 − 7x4 = −3 | x1 = 22/9, x4 = 7/9, x2 = x3 = 0 | Yes | 31/3
4. | x2, x3 | x1 = x4 = 0 | 3x2 − x3 = 8, −2x2 + 6x3 = −3 | x2 = 45/16, x3 = 7/16, x1 = x4 = 0 | Yes | 163/16
5. | x2, x4 | x1 = x3 = 0 | 3x2 + 4x4 = 8, −2x2 − 7x4 = −3 | x2 = 44/13, x4 = −7/13, x1 = x3 = 0 | No | -
6. | x3, x4 | x1 = x2 = 0 | −x3 + 4x4 = 8, 6x3 − 7x4 = −3 | x3 = 44/17, x4 = 45/17, x1 = x2 = 0 | Yes | 491/17 (Max.)

Hence the optimal solution of the given L.P.P. is

x1 = x2 = 0, x3 = 44/17, x4 = 45/17 and maximum Z = 491/17.
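The whole table can be generated mechanically; the sketch below is our own code for the trial and error method (assuming numpy), enumerating the basic solutions, discarding the infeasible ones and keeping the best value of Z.

    import numpy as np
    from itertools import combinations

    A = np.array([[2, 3, -1, 4],
                  [1, -2, 6, -7]], dtype=float)
    b = np.array([8, -3], dtype=float)
    c = np.array([2, 3, 4, 7], dtype=float)
    m, n = A.shape

    best = None
    for cols in combinations(range(n), m):       # the 4C2 = 6 candidate bases
        B = A[:, cols]
        if abs(np.linalg.det(B)) < 1e-12:
            continue
        x = np.zeros(n)
        x[list(cols)] = np.linalg.solve(B, b)
        if (x >= -1e-9).all():                   # keep only the B.F.S.
            z = c @ x
            if best is None or z > best[0]:
                best = (z, x)

    print(best)   # Z = 491/17 at (0, 0, 44/17, 45/17)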

Example 2: Find all the optimal B.F. solutions of the following L.P.P. without using the
simplex method.
Max. Z = 6 x1 + 2 x2 + 9

subject to 3 x1 + x2 ≤ 6

x1 + 3 x2 ≤ 6

and x1, x2 ≥ 0 . [Meerut 2006]

Solution: Introducing the slack variables x3 and x4 , the problem reduces to

Max. Z = 6 x1 + 2 x2 + 9

subject to 3 x1 + x2 + x3 = 6

x1 + 3 x2 + x4 = 6
and x i ≥ 0, ∀ i = 1, 2, 3, 4.
In matrix form the above system of equations can be written as
Ax =b

3 1 1 0 
where A=  = (α1 α 2 α 3 α 4 ).
1 3 0 1

 x1 
3  1 1 0  x  6 
2
Here α1 =  , α 2 =  , α 3 =  , α 4 =  , x =  , b =  .
1
  3
  0
  1
  x
 3 6 
 x4 

Here n = 4 and m = 2. So the basic solutions are obtained by taking n − m = 4 − 2 = 2,


variables equal to zero at a time. Here out of 4 variables 2 can be taken to zero in 4 C2 = 6
number of ways.

Hence the problem can have at the most 6 basic solutions which may be B.F.S. also.
These solutions are computed as follows :

S.N. | Basic variables | Non-basic variables | Equations to obtain B.S. | Basic solution (if exists) | Is it B.F.S.? | Value of Z for B.F.S.
1. | x1, x2 | x3 = x4 = 0 | 3x1 + x2 = 6, x1 + 3x2 = 6 | x1 = 3/2, x2 = 3/2, x3 = x4 = 0 | Yes | 21 (Max.)
2. | x1, x3 | x2 = x4 = 0 | 3x1 + x3 = 6, x1 = 6 | x1 = 6, x3 = −12, x2 = x4 = 0 | No | -
3. | x1, x4 | x2 = x3 = 0 | 3x1 = 6, x1 + x4 = 6 | x1 = 2, x4 = 4, x2 = x3 = 0 | Yes | 21 (Max.)
4. | x2, x3 | x1 = x4 = 0 | x2 + x3 = 6, 3x2 = 6 | x2 = 2, x3 = 4, x1 = x4 = 0 | Yes | 13
5. | x2, x4 | x1 = x3 = 0 | x2 = 6, 3x2 + x4 = 6 | x2 = 6, x4 = −12, x1 = x3 = 0 | No | -
6. | x3, x4 | x1 = x2 = 0 | x3 = 6, x4 = 6 | x3 = 6, x4 = 6, x1 = x2 = 0 | Yes | 9

Here we obtain two optimal basic feasible solutions of the problem

(i) x1 = 3/2, x2 = 3/2 and (ii) x1 = 2, x2 = 0

For both B.F.S.'s the optimal value of Z is 21.

Note : If we solve this L.P.P. by Graphical method then we see that every point on the
line segment joining points (2, 0) and (3/2, 3/2) gives B.F.S. and optimal Z = 21.

1. Find all basic solutions for the system of simultaneous equations :

2 x1 + 3 x2 + 4 x3 = 5

3 x1 + 4 x2 + 5 x3 = 6. [Meerut 2008 (BP), 12; Kanpur 2007]

2. Determine all basic feasible solutions of the system of equations :

2 x1 + x2 + 4 x3 = 11

3 x1 + x2 + 5 x3 = 14. [Meerut 2006 (BP), 12 (BP)]

3. Is x1 = 1, x2 = 1/2, x3 = x4 = x5 = 0 a basic solution to the following equations :

x1 + 2 x2 + x3 + x4 = 2, x1 + 2 x2 + (1/2) x3 + x5 = 2.
2

3  −1 1 1


4. If a1 =  , a2 =  , a3 =   and b =  
4  2  4  4 
then determine whether all possible basic solutions exist for the following set of
equations :

[a1 a2 a3 ] x = b.

5. Find a B.F.S. of the system

x1 + 2 x3 = 1, x2 + x3 = 4, x1, x2 , x3 ≥ 0.

6. Does the system x1 + x2 = 1, x1 − x3 = 2, x1, x2 , x3 ≥ 0 have a F.S. ?

7. Find all the B.F.S. of the following system :

8 x1 + 6 x2 + 13 x3 + x4 + x5 = 6
9 x1 + x2 + 2 x3 + 6 x4 + 10 x5 = 10.

8. Determine two different B.F.S. of the L.P.P.

Max. Z = 2 x1 + 3 x2 + x3 + 0. x4 + 0. x5 − M. x6

subject to 2 x1 + 3 x2 + 4 x3 + x4 = 6

x1 + 2 x2 + 2 x3 − x5 + x6 = 3

and x1, x2 , x3 , x4 , x5 , x6 ≥ 0.

1. (0, −1, 2), (−1/2, 0, 3/2), (−2, 3, 0).

2. (i) x1 = 3, x2 = 5, x3 = 0 (ii) x1 = 1/2, x2 = 0, x3 = 5/2.

3. No

4. Yes

5. x1 = 1, x2 = 4, x3 = 0 and x1 = 0, x2 = 7/2, x3 = 1/2

6. No

7. (2/3, 0, 0, 2/3, 0), (50/71, 0, 0, 0, 26/71), (0, 26/35, 0, 54/35, 0),
(0, 50/59, 0, 0, 54/59), (0, 0, 13/38, 59/38, 0), (0, 0, 25/64, 0, 59/64).

8. (0, 2, 0, 0, 1, 0) and (3, 0, 0, 0, 0, 0). Max. Z = 6.

2.18 Applications of Linear Programming Technique


[Meerut 2007]

In this section we discuss important applications of linear programming in our life.


1. Personnel Assignment Problem : Linear programming technique can be used to
allocate the available man power to various jobs in the best way (which minimizes
the total cost or maximizes the total profit).
2. Transportation Problem : This type of problem occurs very frequently in practical
life or say matches with many real situations. In general transportation problems
are concerned with the distributions of certain products from several sources to
number of destinations at minimum cost.
3. Production Management : Linear programming can be applied in production
management for determining product mix, product smoothing and assembly time
balancing.
4. Agricultural Applications : Linear programming can be applied in agricultural
planning for allocating the available limited resources such as acreage, labour, water
supply and working capital to the various crops so that the total net revenue may be
maximum.
5. Military Applications : Linear programming technique may be applied in
selecting an air weapon system against guerillas so as to keep them pinned down
utilising the minimum amount of aviation gasoline ; in maximizing the total

tonnage of bombs dropped on the enemy's targets ; in determining the


transportation schedule of the essential items such as weapons, medicines etc. in
least time ; in determining the number of defence units used in the attack in order
to provide the required level of protection at the lowest possible cost.

6. Blending Problems : Such problems arise when a new product is to be prepared by


mixing the certain other things, maintaining some composition formula of the new
product. By using the L.P. technique, we can determine the most economical blend
of the new product subject to the availability of the raw materials and the minimum
or maximum limitations of the constituents. Such problems occur frequently in the
petroleum, paint and steel industries etc.

7. Marketing Management : Linear programming helps in analysing the


effectiveness of an advertising campaign and its timing, based on the available advertising
media. It also helps a travelling salesman in finding the shortest route for his tour.

8. Feed Mix : The problem of finding the best diet i.e., the combination of food that
can be supplied at minimum cost, satisfying the daily requirement of the nutrients
is also a L.P.P.

9. Trim Loss Problems : If an item is made in a particular size by cutting it from a


sheet of standard size, the optimum combination of requirements can be
determined so that trim loss may be minimum.

10. Efficiency in the Operation of a System of Dams : In this problem, we determine


variations in water storage of dams which generate power so as to maximize the
energy obtained from the entire system. The physical limitations of storage appear
as inequalities.

2.19 Limitations of Linear Programming


In spite of wide area of applications, in practical situations, the linear programming
technique has certain limitations as follows :

1. The L.P. technique is applicable to that class of problems in which the objective
function and the constraints both are linear. But in many practical problems, it is
not possible to express both the objective function and the constraints in linear
form.

2. There is no guarantee of having integral valued solutions. For example, in finding


out how many men and machines would be required to perform a particular job,
rounding off the solution to the nearest integer will not give an optimal solution.

3. The effect of time and uncertainty is not considered in linear programming model.
Thus the model should be defined in such a way that any change due to internal as
well as external factors can be incorporated.

4. Parameters appearing in the model are assumed to be constant while in real life
situations they are neither constant nor deterministic.

5. Linear programming deals with only single objective whereas in real life situations
problems occur with multi objectives.

6. All practical L.P.P. involve extensive calculations and hence computer facility is
required. In many situations the computations become tedious even on large digital
computers.

2.20 Advantages of Linear Programming


It indicates how the available resources can be used in the best way. It improves the
quality of decisions. Thus it helps in attaining the optimum use of productive resources
and man-power. It also reflects the drawbacks of the production process. The
modification in mathematical solution is also possible by using linear programming
method.

Multiple Choice Questions


1. The extreme point of the convex set of feasible solutions of the L.P.P.
Max Z = 10 x1 + 15 x2 s.t. x1 + x2 = 2, 3 x1 + 2 x2 ≤ 6, x1, x2 ≥ 0 are
(a) (2, 0), (0, 2) (b) (2, 0), (0, 3)
(c) (0, 2), (0, 3) (d) (0, 0), (0, 3)
2. If there is no feasible region in a L.P.P. then we say that the problem has
(a) Infinite solutions (b) No solution
(c) Unbounded solution (d) None of these
3. The solution of the L.P.P. Max. Z = 5 x1 + 7 x2 s.t.
3 x1 + 2 x2 ≤ 12, 2 x1 + 3 x2 ≤ 13, x1, x2 ≥ 0 is
(a) x1 = 2, x2 = 3, Z = 31 (b) x1 = 4, x2 = 0, Z = 20
13 91
(c) x1 = 0, x2 = ,Z = (d) None of these
3 3
4. Given the following set of equations :
x1 + 4 x2 − x3 = 3, 5 x1 + 2 x2 + 3 x3 = 4
The B.F.S. involving x1 and x2 is
 5 11  5 
(a)  , , 0 (b)  , 0, 0
 9 18  9 

 11 
(c) 0, , 0 (d) None of these
 18 

5. The maximum number of basic solutions to a set of m simultaneous equations in n


unknowns (n ≥ m) is
(a) m (b) n− m
n
(c) Cm (d) None of these

Fill in the Blank


1. In L.P.P. the function which is to be optimized is called an ............... function.
2. The linear inequations (or equations) under which the objective function is to be
optimized are called ............... .
3. A set of values of the variables x1, x2 ,..., x n satisfying the constraints and
non-negative restrictions of a L.P.P. is called a ............... solution of the L.P.P.
4. If the value of the objective function Z can be increased or decreased indefinitely,
such solutions are called ............... solutions.
5. A necessary and sufficient condition for the existence and non-degeneracy of all the
basic solutions of A x = b (m equations in n unknowns), is that every set of m
columns of the augmented matrix [A, b] is ............... .
6. The non-negative variables which are added to L.H.S. to the constraints to convert
them into equalities are called the ............... variables.
7. The non-negative variables which are subtracted from the L.H.S. of the constraints
to convert them into equalities are called the ............... variables.
8. The non-basic variables in a basic solution are ............... valued variables.
9. A feasible solution to a L.P.P. in which the vectors associated to non-zero variables
are ............... is called a B.F.S.
10. A L.P.P. of ............... variables can be solved graphically. [Meerut 2005]

True / False
1. A basic feasible solution is said to be optimum, if it optimizes the objective
function.
2. A basic feasible solution is called degenerate if one or more basic variables are
zero-valued.
3. The L.P.P. optimize Z = x1 + x2 , s.t. x1 + x2 ≤ 1, 3 x1 + x2 ≥ 3, x1, x2 ≥ 0 has single
point as its feasible region.
4. The L.P.P. Max. Z = x1 + x2 , s.t. x1 − x2 ≥ 0, 3 x1 − x2 ≤ −3, x1, x2 ≥ 0 has a feasible
solution.
5. By graphical method any L.P.P. can be solved easily. [Meerut 2004, 2005]

Multiple Choice Questions


1. (a) 2. (b)

3. (a) 4. (a)

5. (c)

Fill in the Blank


1. objective 2. constraints

3. feasible 4. unbounded

5. linearly independent 6. slack

7. surplus 8. zero

9. L.I. 10. two

True / False
1. True 2. True

3. True 4. False

5. False
Unit-2

Chapter-3: Convex Sets and Their Properties

Chapter-4: Simplex Method (Theory and Application)



3.1 Definitions
1. Point Sets : Point sets are sets whose elements are points or vectors in E n or R n
(n-dimensional Euclidean space).
For example
(i) A linear equation in two variables x1, x2 , i.e., a1 x1 + a2 x2 = b represents a line in
two dimensions. This line may be considered as a set of those points ( x1, x2 )
which satisfy a1 x1 + a2 x2 = b. This set of points can be written as

S1 = {( x1, x2 ) : a1 x1 + a2 x2 = b}.
(ii) Consider the set of points lying inside a circle of unit radius with centre at the
origin, in two dimensional space ( E2 ). Obviously the points ( x1, x2 ) of this set
satisfy the inequality x1² + x2² < 1.

This set of points can be written as


S2 = {( x1, x2 ) : x1² + x2² < 1}.

These sets may contain either a finite or infinite number of elements. Usually,
however, they will contain an infinite number of elements. Further, we shall
always assume that there is at least one element in a set unless otherwise
stated.
2. Hypersphere : A hypersphere in E n with centre at a and radius ε > 0 is defined to be
the set of points

X = {x :|x − a|= ε}
i.e., the equation of a hypersphere in E n is

( x1 − a1)² + ( x2 − a2 )² + ... + ( xn − an)² = ε²

where a = (a1, a2 ,..., an), x = ( x1, x2 ,..., x n),

which represents a circle in E2 and a sphere in E3 .

The set of points inside the hypersphere is the set

X = {x :| x − a|< ε }.

3. An ε-neighbourhood : An ε-neighbourhood about the point a is defined as the set


of points lying inside the hypersphere with centre at a and radius ε > 0.

i.e., the ε-neighbourhood about the point a is the set of points

X = {x : | x − a |< ε}.

4. An Interior Point : A point a is an interior point of the set S if there exists an


ε-neighbourhood about a which contains only points of the set S.
An interior point of S must be an element of S.

5. Boundary Point : A point a is a boundary point of the set S if every


ε-neighbourhood about a (ε > 0 may be, however, small) contains points which are
in the set and the points which are not in the set.
A boundary point of S does not have to be an element of S.

6. An Open Set : A set S is said to be an open set if it contains only the interior points.

7. A Closed Set : A set S is said to be a closed set if it contains all its boundary points.

8. Lines : In E n, the line through two points x1 and x 2 , x1 ≠ x 2 is defined to be the set
of points
X = {x : x = λ x1 + (1 − λ ) x 2 , for all real λ }.

9. Line Segments : In E n, the line segment joining two points x1 and x 2 is defined to
be the set of points
X = {x : x = λ x1 + (1 − λ ) x 2 , 0 ≤ λ ≤ 1}.
Note that the restriction 0 ≤ λ ≤ 1 restricts the point x to lie within the line joining
the points x1 and x 2 . Line segment of x1, x 2 is also denoted by [x1 : x 2 ].

10. Hyperplane : A hyperplane is defined as the set of points satisfying

c1 x1 + c2 x2 + ....+ cn x n = z (not all ci = 0)

or cx =z

For prescribed values of c1, c2 ,..., cn and z.



For optimum value of z this hyperplane is called optimal hyperplane.


The vector c is called a vector normal to the hyperplane and ± c/|c| are called unit
normals.
It can be easily seen that hyperplanes are closed sets.
A hyperplane divides the whole space E n into three mutually disjoint sets given by

X1 = {x : c x > z}

X2 = {x : c x = z}

X3 = {x : c x < z}.

The sets X1 and X3 are called open half spaces.

The sets {x : cx ≤ z} and {x : cx ≥ z} are called closed half spaces.

Note : The objective function of a L.P.P. represents a hyperplane and each constraint
(sign ≤, ≥) is a closed half space produced by the hyperplane given by the constraint by
taking (=) sign in place of ≤ and ≥.

Parallel Hyperplanes : Two hyperplanes c1x = z1 and c2 x=z2 are said to be parallel if
they have the same unit normals i.e., if c1 = λ c2 for some λ, λ being non-zero.

3.2 Convex Combination


1. A convex combination of a finite number of points x1, x 2 ,...., x n is defined as a
point

x = λ 1 x1 + λ 2 x 2 + ...+ λ n x n,

where λ i is real and ≥ 0, ∀ i, and Σ (i =1 to n) λ i = 1.

2. The convex linear combination of two points x1 and x 2 is given by

x = λ 1 x1 + λ 2 x 2 , s.t. λ 1, λ 2 ≥ 0, λ 1 + λ 2 = 1. [Meerut 2009 (BP), 10, 11, 12]

It can also be written as

x = λ x1 + (1 − λ) x 2 , 0 ≤ λ ≤ 1.
This shows that the line segment of the two points x1 and x 2 is nothing but the set
of all possible convex combinations of the two points x1 and x 2 .
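Numerically, sweeping λ from 0 to 1 traces the whole segment; a tiny sketch of ours (assuming numpy):

    import numpy as np

    x1 = np.array([0.0, 0.0])
    x2 = np.array([4.0, 2.0])
    for lam in np.linspace(0.0, 1.0, 5):
        p = lam * x1 + (1 - lam) * x2   # lam >= 0, 1 - lam >= 0, lam + (1 - lam) = 1
        print(lam, p)                   # p runs over the segment [x1 : x2]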

3.3 Convex Set


[Meerut 2004, 08 (BP), 09, 10, 11, 12; Gorakhpur 2009, 10, 11]

A set of points is said to be convex if for any two points in the set, the line segment
joining these two points is also in the set. In other words a set is convex if the convex
combination of any two points in the set, is also in the set.

[Fig. 3.1 : examples of convex sets and non-convex sets.]

It can be easily seen that the convex combination of any number of points in the convex
set also belongs to the set.

By convention a set of one point is always convex.
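The definition suggests a simple numerical test, sketched below in our own code (assuming numpy). Sampling convex combinations can only refute convexity, never prove it; here the unit disc passes while an annulus fails.

    import numpy as np

    def refutes_convexity(contains, a, b, lams=np.linspace(0.0, 1.0, 101)):
        # True if some convex combination of a and b escapes the set
        return any(not contains(lam * a + (1 - lam) * b) for lam in lams)

    disc = lambda p: p @ p <= 1.0           # x1^2 + x2^2 <= 1 : convex
    ring = lambda p: 0.5 <= p @ p <= 1.0    # annulus : not convex

    a, b = np.array([0.0, 0.9]), np.array([0.0, -0.9])
    print(refutes_convexity(disc, a, b))    # False: the segment stays inside
    print(refutes_convexity(ring, a, b))    # True: the midpoint (0, 0) escapes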

3.4 Extreme Point of a Convex Set


A point x in a convex set C is called an extreme point if x cannot be expressed as a
convex combination of any two distinct points x1 and x 2 in C.

In other words, a point x in a convex set C is an extreme point of C if it does not lie on the
line segment of any two points, different from x in the set.

Mathematically, a point x is an extreme point of a convex set if there do not exist other
points x1, x 2 (x1 ≠ x 2 ) in the set such that

x = λ x1 + (1 − λ) x 2 , 0 < λ < 1.

For example: The set C = {( x1, x2 ) : x1² + x2² ≤ 1} is convex. Every point on the

circumference is an extreme point. Thus a convex set may also have infinite number of
extreme points.

The polygons which are convex sets have the extreme points as their vertices.

Obviously, an extreme point is a boundary point of the set.

It is important to note that all boundary points of a convex set are not necessarily
extreme points.

A point of C which is not an extreme point, is referred as an internal point of C.



3.5 Convex Hull


[Meerut 2006(BP)]

The convex hull C( X ) of any given set of points X is the set of all convex combinations of
sets of points from X.

In other words, the intersection of all convex sets, containing X ⊂ E n, is called the convex
hull of X and is denoted by < X >. Thus, the convex hull of a set X ⊂ E n, is the smallest
convex set containing X.

For example: If X is the set of eight vertices of a cube, then the convex hull C( X ) is the
whole cube.

3.6 Convex Function


[Meerut 2006 (BP); Gorakhpur 2009, 10]

A function f (x ) is said to be strictly convex at x if for any two other distinct points x1
and x 2

f {λx1 + (1 − λ)x 2 } < λ f (x1) + (1 − λ ) f (x 2 ), where 0 < λ < 1.

On the other hand, a function f (x ) is strictly concave if − f (x ) is strictly convex.

3.7 Convex Polyhedron


The set of all convex combinations of finite number of points is called the convex
polyhedron generated by these points.

Alternatively, if the set X consists of a finite number of points, the convex hull of X is
called a convex polyhedron with vertices at these points.

For example: The set of the area of a triangle is a convex polyhedron of the set of its
vertices.

3.8 Some Important Theorems


Theorem 1: A hyperplane is a convex set. [Meerut 2007]

Proof: Consider the hyperplane

X = {x : cx = z}.

Let x1 and x 2 be any two points in the hyperplane X.



∴ c x1 = z and c x 2 = z.

If x 3 = λ x1 + (1 − λ ) x 2 , 0 ≤ λ ≤ 1

then c x 3 = λ c x1 + (1 − λ ) c x 2 ,

= λ z + (1 − λ )z = z

which implies that

x 3 = λ x1 + (1 − λ ) x 2 is also a point in the hyperplane X.

Hence by definition, the hyperplane X is a convex set.

Theorem 2: The closed half spaces H1 = {x : c x ≥ z } & H2 = {x : c x ≤ z } are convex sets.

Proof: Let x1 and x 2 be any two points of H1. Then

c x1 ≥ z, c x 2 ≥ z.

If 0 ≤ λ ≤ 1, then c [λ x1 + (1 − λ) x 2 ] = λ c x1 + (1 − λ ) cx 2

≥ λ z + (1 − λ ) z = z.

Hence x1, x 2 ∈ H1 and 0 ≤ λ ≤ 1 ⇒ λ x1 + (1 − λ) x 2 ∈ H1.

So H1 is a convex set.

Similarly, if x1, x 2 ∈ H2 , 0 ≤ λ ≤ 1, then replacing the inequality sign ≥ by ≤ in above, we


get λ x1 + (1 − λ) x 2 ∈ H2 .

So H2 is also a convex set.

Corollary : The open half spaces : {x : cx > z} and {x : cx < z} are convex sets.

Theorem 3(a): Intersection of two convex sets is also a convex set.


[Meerut 2007 (BP), 08, 12]

Proof: Consider two convex sets X1 and X2 . Let X3 be the intersection of sets X1 and X2 ,
i.e., X3 = X1 ∩ X2 .

Now x1 ∈ X1 ∩ X2 ⇒ x1 ∈ X1 and x1 ∈ X2

x 2 ∈ X1 ∩ X2 ⇒ x 2 ∈ X1 and x 2 ∈ X2 .

Since X1 and X2 are convex sets,

∴ x1, x 2 ∈ X1 ⇒ λ x1 + (1 − λ ) x 2 ∈ X1 0 ≤ λ ≤1

and x1, x 2 ∈ X2 ⇒ λ x1 + (1 − λ ) x 2 ∈ X2 0 ≤ λ ≤1

Thus λ x1 + (1 − λ ) x 2 ∈ X1 and λ x1 + (1 − λ ) x 2 ∈ X2

⇒ λ x1 + (1 − λ ) x 2 ∈ X1 ∩ X2 0 ≤ λ ≤ 1.

Hence by definition, X3 = X1 ∩ X2 is a convex set.

Theorem 3(b): Intersection of any finite number of convex sets is also a convex set.

Proof: Let X1, X2 ,..., X n be n convex sets and

X = X1 ∩ X2 ∩ ... ∩ X n.

Now x1 ∈ X1 ∩ X2 ∩ ... ∩ X n ⇒ x1 ∈ X i , ∀ i = 1, 2, ..., n

and x 2 ∈ X1 ∩ X2 ∩ ... ∩ X n ⇒ x 2 ∈ X i , ∀ i = 1, 2, ..., n

Since X i is a convex set for i = 1, 2, ..., n

∴ x1, x 2 ∈ X i ⇒ λ x1 + (1 − λ ) x 2 ∈ X i , ∀ i = 1, 2, ..., n, where 0 ≤ λ ≤ 1

⇒ λ x1 + (1 − λ ) x 2 ∈ X1 ∩ X2 ∩ ... ∩ X n , 0 ≤ λ ≤ 1

i.e., x1 ∈ X1 ∩ X2 ∩ ... ∩ X n and x 2 ∈ X1 ∩ X2 ∩ ... ∩ X n

⇒ λ x1 + (1 − λ ) x 2 ∈ X1 ∩ X2 ∩ ... ∩ X n, 0 ≤ λ ≤ 1.

Hence by definition X1 ∩ X2 ∩ ... ∩ X n is a convex set.

Theorem 3(c): Arbitrary intersection of convex sets is also a convex set.

Theorem 4: The set of all convex combinations of a finite number of points x1, x 2 , . . . . , x n
is a convex set.

Proof: Let X be the set of all convex combinations of a finite number of points.

i.e., X = { x : x = Σ (i =1 to n) λ i x i , Σ (i =1 to n) λ i = 1, λ i ≥ 0 }

Let u, v ∈ X
n n
∴ u = Σ ai x i , Σ ai = 1, ai ≥ 0.
i =1 i =1

n n
and v = Σ bi x i , Σ bi = 1, bi ≥ 0.
i =1 i =1

Consider w = λ u + (1 − λ ) v, 0 ≤ λ ≤ 1
n n
∴ w = λ Σ ai x i + (1 − λ ) Σ bi x i
i =1 i =1

n
= Σ {λ ai + (1 − λ ) bi} x i
i =1

n
= Σ ci x i where ci = λ ai + (1 − λ ) bi
i =1

n n
Now Σ ci = Σ {λ ai + (1 − λ ) bi}
i =1 i =1

n n
= λ Σ ai + (1 − λ ) Σ bi = λ.1 + (1 − λ).1 = 1.
i =1 i =1

Also ci = λ ai + (1 − λ )bi ≥ 0, ∀ i.
n
Hence w = Σ ci x i is a convex combination of x1, x 2 ,...., x n
i =1

i.e., w = λ u + (1 − λ ) v ∈ X , 0 ≤ λ ≤ 1.

Hence by definition X is a convex set.

Theorem 5: Let S and T be two convex sets in En , then for any scalars α , β, αS + βT is
also convex.

Proof: Let x, y be two points of αS + βT .

Then x = α u1 + βv1 and y = α u2 + βv2 ...(1)

where u1, u2 ∈ S and v1, v2 ∈T .

For any scalar λ, 0 ≤ λ ≤ 1, we have

λ x + (1 − λ ) y = λ (α u1 + βv1) + (1 − λ ) (α u2 + βv2 )

= α [λ u1 + (1 − λ ) u2 ] + β [λ v1 + (1 − λ ) v2 ] ...(2)

But S and T are convex sets,

u1, u2 ∈ S ⇒ λ u1 + (1 − λ )u2 ∈ S, 0 ≤ λ ≤ 1 ...(3)

and v1, v2 ∈ T ⇒ λv1 + (1 − λ )v2 ∈ T , 0 ≤ λ ≤ 1. ...(4)

Now from (2), (3) and (4) we have

λ x + (1 − λ ) y ∈αS + βT , 0 ≤ λ ≤ 1.

Thus x, y ∈ α S + βT ⇒ [x : y] ⊂ α S + βT .

Hence α S + βT is a convex set.

Corollary : If S and T are two convex sets in Eⁿ, then S + T and S − T are also convex sets.

Theorem 6: A set C is convex iff every convex linear combination of points in C, also
belongs to C.

Proof: Let every convex linear combination of points in C belong to C.

Then, in particular convex linear combination of every two points in C also belongs to C.

Hence C is a convex set.

Conversely let C be a convex set. Then to prove that convex linear combination of any
number of points in C also belongs to C.

We shall use the induction principle.

Since C is convex, the convex linear combination of two points in C belongs to C. Thus
the result is true for n = 2.

Now suppose the convex linear combination of any n points in C, belongs to C.

Let x1, x 2 ,...., x n be any n points in C. Then by assumption,


λ1 x1 + λ 2 x 2 + ... + λ n x n ∈ C where λ i ≥ 0, ∑ λ i = 1.

Let x = µ1 x1 + µ2 x 2 + .... + µ n x n + µ n+1 x n+1 ...(1)

i.e., x is a convex linear combination of (n + 1) points of C, where µi ≥ 0 and Σ µi = 1 (i = 1, 2, ..., n + 1).

Now we shall show that x ∈ C.

If µ n+1 = 0 then x becomes a convex linear combination of x1, x 2 ,..., x n which by


assumption, belongs to C and hence the result holds in this case. Also if µ n+1 = 1, then the
result is trivially true.

Let µ n+1 be neither 0 nor 1. Then

x = (µ1 + µ2 + ... + µn) · (µ1x1 + µ2x2 + ... + µnxn)/(µ1 + µ2 + ... + µn) + µn+1 xn+1 ...(2)
= (Σ µi)(Σ ai xi) + µn+1 xn+1,
where ai = µi/(µ1 + µ2 + ... + µn), i = 1, 2, ..., n.
As each µi ≥ 0, we have ai ≥ 0,
and also Σ ai = Σ µi/(µ1 + µ2 + ... + µn) = (Σ µi)/(Σ µi) = 1.

Thus Σ ai xi = y (say) is also a convex linear combination of x1, x2, ..., xn. So y belongs to C, by assumption.

Now x = (µ1 + µ2 + ...+ µ n) y + µ n+1 x n+1 ...(3)

But (µ1 + µ2 + ...+ µ n) ≥ 0, µ n+1 ≥ 0 and (µ1 + µ2 + ...+ µ n) + µ n+1 = 1.

It follows that x is a convex combination of two points y and x n+1 of C. So x belongs to C.

Hence convex linear combination of (n + 1) points of C also belongs to C. So by induction


principle the result is true.

Theorem 7: The set of all feasible solutions (if not empty) of a L.P.P. is a convex set.
[Kanpur 2008; Gorakhpur 2007]

Proof: Let X be the set of all feasible solutions of a L.P.P.

Ax = b, x ≥ 0. ...(1)

Case I : If the set X has only one element, then X is a convex set. Hence the theorem is true
in this case.

Case II : If the set X has at least two elements.

Let x1 and x 2 be any two distinct elements in X.

∴ A x1 = b, x1 ≥ 0.

and A x 2 = b, x 2 ≥ 0.

If x 3 = λ x1 + (1 − λ ) x 2 , 0 ≤ λ ≤1

then A x3 = λ A x1 + (1 − λ) A x2
= λb + (1 − λ)b = b.

Also since x1 ≥ 0, x 2 ≥ 0, λ ≥ 0, 1 − λ ≥ 0, as 0 ≤ λ ≤ 1

∴ x 3 = λ x1 + (1 − λ ) x 2 ≥ 0

i.e., x3 satisfies (1). Thus x3 = λx1 + (1 − λ)x2 is also a F.S. and so belongs to the set X.

But x3 is a convex combination of the two distinct points x1 and x2 in X.

Hence by definition the set X is a convex set.

Note : Since the convex combinations of two points are infinite in number so from the
above theorem we conclude that :

If a given L.P.P. has two feasible solutions, then it has an infinite number of feasible solutions.

Theorem 8: Every basic feasible solution of the system A x = b, x ≥ 0 is an extreme point


of the convex set of feasible solutions and conversely. [Meerut 2004]

Proof: To prove that every B.F.S. is an extreme point of the convex set of all feasible solutions.

Let x be a B.F.S. of A x = b, which is an n-component vector containing both zero


(non-basic) and non-zero (basic) variables. Let x B and B be the column vector of m basic
variables and the matrix of vectors associated to basic variables in the B.F.S. x
respectively, then

x = [x B , 0 ], ...(1)

where 0 is a null vector of (n − m) components,

and A x = b ⇒ B. x B = b. ...(2)

Now we have to prove that x is an extreme point.

We shall prove this by using contradiction.

Suppose that x is not an extreme point. If X is the convex set of all feasible solutions of
A x = b, then x ∈ X.

If x is not an extreme point then there exist two distinct points x1 and x 2 in X such that

x = λ x1 + (1 − λ ) x 2 , 0 < λ < 1 ...(3)

But x1 and x 2 can be expressed as

x1 = [u1, v1] and x 2 = [u2 , v2 ] ...(4)

where u1 and u2 are vectors of m components of x1 and x 2 respectively and v1, v2 are
(n − m) components vectors.

Substituting the values of x and x1, x 2 from (1) and (4) in (3), we have

[x B , 0] = λ [ u1, v1] + (1 − λ ) [u2 , v2 ], 0 < λ < 1

= [λ u1 + (1 − λ ) u2 , λ v1 + (1 − λ ) v2 ]

∴ 0 = λ v1 + (1 − λ ) v2 , 0 < λ < 1 ...(5)

Now 1 > λ > 0, 1 − λ > 0, and the components of v1 and v2 are ≥ 0.

The relation (5) can only be satisfied when v1 = 0 and v2 = 0

∴ x1 = [u1, 0 ], x 2 = [u2 , 0 ].

Since x1 and x 2 are in X, therefore from (2) we have

A x1 = B u1 = b and A x 2 = B u2 = b

i.e., b = B x B = B u1 = B u2

which gives x B = u1 = u2

∴ x = x1 = x 2

which is a contradiction to the assumption that x1 ≠ x2,

i.e., x cannot be expressed as a convex combination of any two distinct points in the set of
all feasible solutions. Hence x must be an extreme point.

Converse : To prove that every extreme point of the convex set of feasible solutions is a B.F.S.

Let x = [ x1, x2 ,...., x n] be an extreme point. Now in order to prove that x is a B.F.S., we
shall prove that the vectors associated with the positive elements of x are L.I.

Suppose that k components (variables) in x are non-zero and (n − k) components are zero.

We can assume these components as the first k components of x.

∴ Σ xi αi = b, xi > 0, i = 1, 2, ..., k ...(6)

where α i is the column vector in A associated to the i-th variable in x.

If possible, let the column vectors α1, α2, ..., αk of matrix A be L.D. Then there exist some scalars λi (i = 1, 2, ..., k), with at least one of them non-zero, s.t.,

Σ λi αi = 0 ...(7)

From (6) and (7), for some arbitrary δ > 0, we have


Σ xi αi ± δ Σ λi αi = b
or Σ (xi ± δλi) αi = b.

From this it is obvious that the two points
x1* = [x1 + δλ1, x2 + δλ2, ..., xk + δλk, 0, 0, ..., 0]
and x2* = [x1 − δλ1, x2 − δλ2, ..., xk − δλk, 0, 0, ..., 0],
each having (n − k) zero components,

satisfy A x = b.

Also since xi > 0, therefore, taking δ s.t.
0 < δ < min { xi/|λi| : λi ≠ 0, i = 1, 2, ..., k },
we conclude that the first k components of x1* and x2* are always positive. But the remaining components of x1* and x2* are zero, from which it follows that x1* and x2* are feasible solutions different from x.

Now x1* + x2* = 2 [x1, x2, ..., xk, 0, 0, ..., 0]
or (1/2) x1* + (1/2) x2* = [x1, x2, ..., xk, 0, 0, ..., 0] = x
or x = λ x1* + (1 − λ) x2*, where λ = 1/2,
i.e., x can be expressed as a convex combination of two distinct feasible solutions x1* and

x *2 . But this is a contradiction as x is an extreme point. Hence the vectors α 1, α 2 ,...., α k


are L.I. Further, we know that at most m vectors of Eᵐ can be L.I. So the number of vectors α1, α2, ..., αk cannot be more than m, and hence the extreme point x will have at most m non-zero variables, i.e., at least (n − m) variables will be zero.

Thus x is a B.F.S. Hence, every extreme point of the convex set of feasible solutions is a
B.F.S.

Corollary 1: The extreme points of the convex set of feasible solutions are finite in
number.

From the above theorem, we conclude that there is only one extreme point for a given
B.F.S. and vice-versa. That is, there is a one-to-one correspondence between the extreme
points and the B.F. solutions in the absence of degeneracy. Also in case of degeneracy
corresponding to an extreme point with the number of non-zero variables less than m, we
can form more than one degenerate B.F.S. Hence the number of extreme points of the
feasible region is finite and it cannot exceed the number of its B.F. solutions.

Corollary 2: An extreme point can have at most m positive xi's, where m is the number of
constraints.

Corollary 3: In an extreme point, vectors associated to the positive x i' s are L.I.

Theorem 9: Fundamental extreme point theorem : If the convex set of the feasible solutions of A x = b, x ≥ 0 is a convex polyhedron, then at least one of the extreme points gives an optimal solution. [Meerut 2007 (BP), 08, 09, 10, 12]

Proof: In the Corollary 1 of last theorem we have proved that the extreme points of the
convex set of feasible solutions of A x = b, x ≥ 0 are finite in number.

Let x1, x 2 ,...., x k be the extreme points of the set X of all the feasible solutions of
A x = b, x ≥ 0. Let the objective function Z, which is to be maximized, be given by

Z = c x.

If x * ∈ X is the optimal solution, then

Max. Z = c x *.

Now if x * is an extreme point, then the theorem is proved.

Now if x * is not an extreme point in X, then since X is convex polyhedron, therefore x *


can be expressed as a convex combination of the extreme points of X,
i.e., x* = λ1x1 + λ2x2 + ... + λkxk = Σ λi xi, λi ≥ 0 and Σ λi = 1

∴ Z * = c x * = c (λ 1x1 + λ 2 x 2 + ....+ λ k x k )

= (λ1 c x1 + λ 2 c x 2 + ...+ λ k c x k )

If maximum of c x i is c x p, then

Z * ≤ (λ1 + λ 2 + ...+ λ k ). c x p

or Z * ≤ c x p.

But Z * is the maximum value of Z. Therefore,

Z * = c x p or c x * = c x p

i.e., the optimal value Z* is attained at x p (one of the extreme points).

Hence the optimal solution is attained at the extreme point.

This proves the theorem.

Theorem 10: If the objective function of a L.P.P. assumes its optimal value at more than
one extreme point, then every convex combination of these extreme points gives the optimal
value of the objective function.

Proof: Let us consider the L.P.P. as follows :



Max. Z = c x

subject to A x = b, x ≥ 0.

Let x1, x 2 ,...., x k be the extreme points of the feasible region. If the objective function Z
assumes its optimal value Z * at the extreme points x1, x 2 ,..., x p, ( p ≤ k) then

Z * = c x1 = c x 2 = .... = c x p.

If x 0 is the convex combination of the extreme points x1, x 2 ,..., x p, then

x0 = λ1x1 + λ2x2 + ... + λpxp, λi ≥ 0, Σ λi = 1

∴ c x 0 = c [λ 1 x1 + λ 2 x 2 + ... + λ p x p]

= λ 1 c x1 + λ 2 c x 2 + .... + λ p c x p

= λ1 Z* + λ2 Z* + ... + λp Z*
= (λ1 + λ2 + ... + λp) Z* = Z*. [∵ Σ λi = 1]

Hence the optimal value Z * is also attained at x0 which is the convex combination of the
extreme points at which optimal value occurs. Hence the theorem.

Example 1: A hyperplane is given by the equation 3 x1 + 2 x2 + 4 x3 + 7 x4 = 8.

Find the half spaces in which the points (−6, 1, 7, 2) and (1, 2, −4, 1) lie.

Solution: The given equation of the hyperplane is

3 x1 + 2 x2 + 4 x3 + 7 x4 = 8.

Substituting (− 6, 1, 7, 2) in the L.H.S., we get

L.H.S. = 3(−6) + 2 .1 + 4 . 7 + 7 . 2 = 26 > 8 = R.H.S.

∴ The point (−6, 1, 7, 2) lies in the open half space,

3 x1 + 2 x2 + 4 x3 + 7 x4 > 8.

Again substituting (1, 2, − 4, 1) in the L.H.S., we get

L.H.S. = 3 .1 + 2 . 2 + 4(−4) + 7 .1 = −2 < 8 = R.H.S.



∴ The point (1, 2, − 4, 1) lies in the open half space

3 x1 + 2 x2 + 4 x3 + 7 x4 < 8.
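Note : This classification is easy to verify mechanically. The following sketch (plain Python; the function name classify is ours, not part of the text) evaluates c x for each point and reports the open half space containing it :

    # Classify points against the hyperplane c.x = z of Example 1.
    c = [3, 2, 4, 7]
    z = 8

    def classify(point):
        value = sum(ci * xi for ci, xi in zip(c, point))
        if value > z:
            return "lies in the open half space c.x > 8"
        if value < z:
            return "lies in the open half space c.x < 8"
        return "lies on the hyperplane"

    for p in [(-6, 1, 7, 2), (1, 2, -4, 1)]:
        print(p, classify(p))   # values 26 and -2 respectively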

Example 2: Sketch the convex polygon spanned by the following points in a two dimensional Euclidean space. Which of these points are vertices? Express the other point as a convex linear combination of the vertices :
(0, 0), (0, 1), (1, 0), (1/2, 1/4).

Solution: The convex combinations of the pairs of points (0, 0), (1, 0); (0, 0), (0, 1) and (1, 0), (0, 1) give the line segments OA, OB and AB respectively. Thus the set of convex combinations of the points (0, 0), (1, 0) and (0, 1) is the triangle OAB together with its interior.

The points O(0, 0), A(1, 0) and B(0, 1) are the vertices, and the point C(1/2, 1/4) is an interior point of the convex polygon spanned by the given points.

[Fig. 3.2 : the triangle OAB with vertices O(0, 0), A(1, 0), B(0, 1) and the interior point C(1/2, 1/4).]

To express (1/2, 1/4) as a linear combination of (0, 0), (0, 1), (1, 0), let
(1/2, 1/4) = λ1(0, 0) + λ2(0, 1) + λ3(1, 0),
where λ1 + λ2 + λ3 = 1 and λ1, λ2, λ3 ≥ 0.
∴ (1/2, 1/4) = (λ3, λ2), which gives λ2 = 1/4, λ3 = 1/2.
∴ λ1 = 1 − λ2 − λ3 = 1 − 1/4 − 1/2 = 1/4.
Thus (1/2, 1/4) = (1/4)(0, 0) + (1/4)(0, 1) + (1/2)(1, 0).

Example 3: Show that C = {( x1, x2 ) : 2 x1 + 3 x2 = 7} ⊂ R2 is a convex set.


[Meerut 2008; Gorakhpur 2009]

Solution: Let u, v ∈ C, where u = (u1, u2 ), v = (v1, v 2 ).

Then 2 u1 + 3 u2 = 7 and 2 v1 + 3 v 2 = 7. ...(1)

If w = (w1, w 2 ) is a point on the line segment joining the points u and v, then

w = λ u + (1 − λ ) v, 0 ≤ λ ≤ 1

⇒ (w1, w2 ) = λ ( u1, u2 ) + (1 − λ ) (v1, v2 )

= (λ u1 + (1 − λ ) v1, λ u2 + (1 − λ ) v2 )

⇒ w1 = λ u1 + (1 − λ ) v1 and w2 = λ u2 + (1 − λ ) v 2 .

Now 2 w1 + 3 w 2 = 2 [λ u1 + (1 − λ ) v1] + 3[λu2 + (1 − λ ) v 2 ]

= λ [2 u1 + 3 u2 ] + (1 − λ ) [2 v1 + 3 v 2 ]

= λ . 7 + (1 − λ ) . 7 = 7, using (1)

∴ w = (w1, w2 ) ∈ C.

Hence the set C is a convex set.

Example 4: Show that S = {( x1, x2 , x3 ): 2 x1 − x2 + x3 ≤ 4} ⊂ R3 , is a convex set.


[Meerut 2005, 07 (BP)]

Solution: Let x = ( x1, x2 , x3 ) and y = ( y1, y2 , y3 ) be any two points of S. Then we have

2 x1 − x2 + x3 ≤ 4 and 2 y1 − y2 + y3 ≤ 4. ...(1)

If w = (w1, w2 , w3 ) is a point on the line segment joining the points x and y, then

w = λ x + (1 − λ ) y, 0 ≤ λ ≤ 1

⇒ (w1, w2 , w3 ) = λ ( x1, x2 , x3 ) + (1 − λ ) ( y1, y2 , y3 )

⇒ w1 = λ x1 + (1 − λ ) y1, w2 = λ x2 + (1 − λ ) y2 , w3 = λ x3 + (1 − λ ) y3 .

Now 2 w1 − w2 + w3 = λ (2 x1 − x2 + x3 ) + (1 − λ ) (2 y1 − y2 + y3 )

≤ 4λ + 4(1 − λ) = 4, [using (1)]

∴ w = (w1, w2 , w3 ) ∈ S.

Hence the set S is a convex set.

Example 5: Examine the convexity of the sets :

(i) A = {( x1, x2 ) ∈ R2 : 4 x1 + 3 x2 ≤ 6, x1 + x2 ≥ 1} [Meerut 2007, 09]

(ii) B = {( x1, x2 ) : x1 ≥ 2, x1 ≤ 3}.

Solution: (i) Given, A = {( x1, x2 ) ∈ R 2 : 4 x1 + 3 x2 ≤ 6, x1 + x2 ≥ 1}.

If u = ( x1, x2 ) ∈ A then 4 x1 + 3 x2 ≤ 6 and x1 + x2 ≥ 1



and if v = ( y1, y2 ) ∈ A then 4 y1 + 3 y2 ≤ 6 and y1 + y2 ≥ 1. ...(1)

If w = (z1, z2 ) is a point on the line segment joining points u and v, then

w = λ u + (1 − λ ) v 0 ≤ λ ≤1

⇒ (z1, z2 ) = λ ( x1, x2 ) + (1 − λ ) ( y1, y2 )

= (λ x1 + (1 − λ ) y1, λ x2 + (1 − λ ) y2 )

⇒ z1 = λ x1 + (1 − λ ) y1 and z2 = λ x2 + (1 − λ ) y2 .

Now 4 z1 + 3 z2 = λ (4 x1 + 3 x2 ) + (1 − λ ) (4 y1 + 3 y2 ) ≤ λ . 6 + (1 − λ ) . 6

[∵ 0 ≤ λ ≤ 1, 0 ≤ 1 − λ ≤ 1, 4x1 + 3x2 ≤ 6 and 4y1 + 3y2 ≤ 6]

i.e., 4 z1 + 3 z2 ≤ 6

and z1 + z2 = λ ( x1 + x2 ) + (1 − λ ) ( y1 + y2 ) ≥ λ .1 + (1 − λ).1

i.e., z1 + z2 ≥ 1 [∵ 0 ≤ λ ≤ 1, 0 ≤ 1 − λ ≤ 1, x1 + x2 ≥ 1, y1 + y2 ≥ 1]

∴ w = (z1, z2 ) ∈ A.

Hence the set A is a convex set.

(ii) Hint : Proceed as in part (i); the set B is a convex set.

Aliter : Obviously each of the given sets is the intersection of two half spaces, viz.,
H1 = {( x1, x2 ) : 4x1 + 3x2 ≤ 6} and H2 = {( x1, x2 ) : x1 + x2 ≥ 1} in part (i),
and H1 = {( x1, x2 ) : x1 ≥ 2} and H2 = {( x1, x2 ) : x1 ≤ 3} in part (ii).
Since H1 and H2 are convex sets, the intersection H1 ∩ H2 is also a convex set in each case.

Example 6: For any points x, y ∈ R n , show that the line segment [x : y ] is a convex set.

Solution: Let u, v ∈[x : y]. Then

u = λ 1x + (1 − λ1) y, 0 ≤ λ 1 ≤ 1

and v = λ 2 x + (1 − λ 2 ) y, 0 ≤ λ 2 ≤ 1.

Now let w be a point on the line segment joining the points u and v.

Then w = λ u + (1 − λ ) v, 0 ≤ λ ≤ 1.

= λ [λ 1 x + (1 − λ1) y] + (1 − λ ) [λ 2 x + (1 − λ 2 ) y]

= [λλ 1 + (1 − λ) λ 2 ] x + [λ (1 − λ1) + (1 − λ) (1 − λ 2 )] y.

If we take µ = λλ1 + (1 − λ) λ 2 , then

1 − µ = 1 − λλ 1 − (1 − λ) λ 2 = λ + (1 − λ) − λλ 1 − (1 − λ) λ 2

= λ (1 − λ 1) + (1 − λ) (1 − λ 2 ).

Since 0 ≤ λ ≤ 1, 0 ≤ λ1 ≤ 1 and 0 ≤ λ2 ≤ 1, we have 0 ≤ λλ1 + (1 − λ)λ2 ≤ 1. Thus
w = µx + (1 − µ)y, 0 ≤ µ ≤ 1

∴ w ∈ [x : y].

Hence the set [x : y] is a convex set.

Example 7: Let A be an m × n matrix and b an m-vector, then show that


{x ∈ Rⁿ : A x ≤ b} is a convex set.

Solution: Let A = [aij ]m× n, x = ( x1, x2 ,..., x n) and b = (b1, b2 ,...., bm), then the set

S = {x ∈ Rⁿ : A x ≤ b} can be written as the m inequalities :

a11 x1 + a12 x2 + ...+ a1n x n ≤ b1

a21 x1 + a22 x2 + ... + a2 n x n ≤ b2

... ... ... ... ...

am1 x1 + am2 x2 + ... + amn x n ≤ bm.

Thus the set S is the intersection of m half spaces

Hi = {( x1, x2, ..., xn ) : ai1x1 + ai2x2 + ... + ainxn ≤ bi}, i = 1, 2, ..., m.

It follows that S = H1 ∩ H2 ∩ ... ∩ Hm is convex, as each half space is convex.

Example 8: Show that the set S = { x : x = ( x1, x2, x3 ), x1² + x2² + x3² ≤ 1} is a convex set. [Meerut 2006 (BP)]

Solution: Let x, y ∈ S, where x = ( x1, x2 , x3 ), y = ( y1, y2 , y3 ).

Then, by the given condition, we have

x1² + x2² + x3² ≤ 1 ...(1)
and y1² + y2² + y3² ≤ 1. ...(2)



If z = (z1, z2 , z3 ) is a point on the line segment joining the points x and y, then

z = λ x + (1 − λ ) y, 0 ≤ λ ≤ 1

⇒ (z1, z2 , z3 ) = λ ( x1, x2 , x3 ) + (1 − λ ) ( y1, y2 , y3 )

⇒ z1 = λ x1 + (1 − λ ) y1, z2 = λ x2 + (1 − λ ) y2 , z3 = λ x3 + (1 − λ ) y3 .

Now z1² + z2² + z3² = [λx1 + (1 − λ)y1]² + [λx2 + (1 − λ)y2]² + [λx3 + (1 − λ)y3]²
= λ²(x1² + x2² + x3²) + (1 − λ)²(y1² + y2² + y3²) + 2λ(1 − λ)(x1y1 + x2y2 + x3y3)
≤ λ²·1 + (1 − λ)²·1 + 2λ(1 − λ)(x1y1 + x2y2 + x3y3). [using (1) and (2)]
By Lagrange's identity, we have
(x1² + x2² + x3²)(y1² + y2² + y3²) − (x1y1 + x2y2 + x3y3)² = Σ (x1y2 − x2y1)² ≥ 0
∴ x1y1 + x2y2 + x3y3 ≤ √(x1² + x2² + x3²) √(y1² + y2² + y3²) ≤ 1. [using (1) and (2)]
Thus z1² + z2² + z3² ≤ λ² + (1 − λ)² + 2λ(1 − λ) = 1.

∴ z = (z1, z2 , z3 ) ∈ S.

Hence the set S is a convex set.
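Note : The inequality chain above can be spot-checked numerically: pull random points into the closed unit ball and verify that every convex combination of a pair stays inside. A small NumPy sketch, purely illustrative :

    import numpy as np

    rng = np.random.default_rng(0)
    for _ in range(1000):
        # scale each sample into the closed unit ball
        x = rng.uniform(-1, 1, 3); x /= max(1.0, np.linalg.norm(x))
        y = rng.uniform(-1, 1, 3); y /= max(1.0, np.linalg.norm(y))
        lam = rng.uniform()
        z = lam * x + (1 - lam) * y
        assert np.linalg.norm(z) <= 1 + 1e-12   # z remains in S
    print("all sampled convex combinations lie in the ball")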

Example 9: Express any point w inside a triangle as a convex combination of the vertices
(extreme points) x1, x 2 , x 3 of the triangle.

Solution: Let ABC be the triangle with vertices x1, x2, x3 respectively, and let P be any point w inside the triangle. Join the points A and P and extend this line to meet the base BC at D(u). Since D is a point on the line segment BC, u can be written as a convex combination of x2 and x3.

Thus u = λ1 x2 + (1 − λ1) x3, 0 ≤ λ1 ≤ 1. ...(1)

[Fig. 3.3 : the triangle with vertices A(x1), B(x2), C(x3); P(w) lies inside the triangle and D(u) lies on BC.]

Again P is a point on the line segment AD, therefore

w = λ 2 x1 + (1 − λ 2 ) u, 0 ≤ λ 2 ≤ 1

= λ 2 x1 + (1 − λ 2 ) [λ1x 2 + (1 − λ1) x 3 ], [using (1)]

= λ 2 x1 + λ1(1 − λ 2 ) x 2 + (1 − λ1) (1 − λ 2 ) x 3

or w = µ1x1 + µ2 x 2 + µ3 x 3 ...(2)

where µ1 = λ 2 , µ2 = λ1(1 − λ 2 ), µ3 = (1 − λ1) (1 − λ 2 ).

Each of the µ1, µ2 , µ3 lies between 0 and 1 i.e., 0 ≤ µ i ≤ 1, i = 1, 2, 3 as 0 ≤ λ 1 ≤ 1 and


0 ≤ λ 2 ≤ 1.

Also µ1 + µ2 + µ3 = λ 2 + λ1(1 − λ 2 ) + (1 − λ1) (1 − λ 2 )

= λ 2 + λ1 − λ1λ 2 + 1 − λ1 − λ 2 + λ1λ 2 = 1

⇒ µ1x1 + µ2 x 2 + µ3 x 3 is a convex combination of the points x1, x 2 , x 3 .

Hence (2) is the required convex combination for the point w.
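Note : In coordinates, the weights µ1, µ2, µ3 can be computed directly by solving the linear system w = µ1x1 + µ2x2 + µ3x3 together with µ1 + µ2 + µ3 = 1. A NumPy sketch with an illustrative triangle of our own choosing (the same data as multiple choice question 6 below) :

    import numpy as np

    # vertices of a triangle and a point w inside it (illustrative data)
    x1, x2, x3 = np.array([0., 0.]), np.array([2., 0.]), np.array([1., 1.])
    w = np.array([0.3, 0.2])

    # two coordinate equations plus the normalisation mu1 + mu2 + mu3 = 1
    A = np.array([[x1[0], x2[0], x3[0]],
                  [x1[1], x2[1], x3[1]],
                  [1.,    1.,    1.]])
    mu = np.linalg.solve(A, np.append(w, 1.0))
    print(mu)       # [0.75 0.05 0.2]; all weights >= 0, so w is inside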

Example 10: Find the extreme points, if any, of the following sets :
(i) S = {( x, y ) : x² + y² ≤ 25}

(ii) {( x, y ):| x|≤ 1, | y|≤ 1} [Meerut 2007]

Solution: (i) Draw the region of the set S. It represents the boundary and interior of the
circle with centre at (0, 0) and radius 5.

Every point on its circumference is an extreme point.

(ii) We have | x| ≤ 1 ⇒ −1 ≤ x ≤ 1 and | y| ≤ 1 ⇒ −1 ≤ y ≤ 1.

Thus the set S represents the square region bounded by the lines x = 1, x = −1, y = 1, y = −1. The extreme points of this convex set are the vertices (1, 1), (−1, 1), (−1, −1) and (1, −1).

Example 11: Is the union of two convex sets necessarily a convex set? [Meerut 2006 (BP)]

Solution: No. The union of two convex sets may or may not be a convex set.

For example: Consider S1 = {( x, y) : x ≥ 2} and T1 = {( x, y) : x ≥ 3}

Then S1 ∪ T1 = {( x, y) : x ≥ 2}

Obviously S1 ∪ T1 is a convex set.

Again consider S2 = {( x, y) : x ≥ 2} and T2 = {( x, y) : y ≥ 1}.

Then S2 ∪ T2 = {( x, y) : x ≥ 2 or y ≥ 1}.

Now the points A (9 / 4, 1 / 2) and B(1 / 2, 5 / 4) are the points of S2 ∪ T2 but their
mid-point P(11 / 8, 7 / 8) is not the point of the set S2 ∪ T2 as 11 / 8 < 2 and 7 / 8 < 1.

Thus S2 ∪ T2 is not a convex set.

Hence the union of two convex sets is not necessarily a convex set.
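Note : The counterexample amounts to a one-line arithmetic check, sketched below in plain Python :

    # Midpoint test for S2 U T2, where S2 = {x >= 2} and T2 = {y >= 1}.
    A = (9/4, 1/2)      # belongs to S2
    B = (1/2, 5/4)      # belongs to T2
    P = ((A[0] + B[0]) / 2, (A[1] + B[1]) / 2)
    print(P, P[0] >= 2 or P[1] >= 1)   # (1.375, 0.875) False: not convex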

Example 12: Find all the basic feasible solutions for the equations

2 x1 + 6 x2 + 2 x3 + x4 = 3

6 x1 + 4 x2 + 4 x3 + 6 x4 = 2

xi ≥ 0

and determine the associated general convex combination of the extreme point solutions.
[Meerut 2007; Kanpur 2010]

Solution: In matrix form the given system of equations can be written as

Ax =b

where A = (α1, α2, α3, α4),
α1 = [2, 6], α2 = [6, 4], α3 = [2, 4], α4 = [1, 6], x = [x1, x2, x3, x4], b = [3, 2].

This problem can have at most ⁴C₂ = 6 basic solutions. Now the six sets of two vectors are

2 6  2 2 
B1 = [α1, α 2 ] =   , B2 = [α1, α 3 ] = 6 4 
6 4   

2 1 6 2 
B3 = [α1, α 4 ] =   , B4 = [α 2 , α 3 ] = 4 4 
6 6   

6 1 2 1
B5 = [α 2 , α 4 ] = 
4 6  , B6 = [α 3 , α 4 ] = 4 6 .
   

Here | B1| = −28, | B2 | = −4, | B3 | = 6, | B4 | = 16, | B5 | = 32, | B6 | = 8.

Since none of these is zero, therefore all these sets are L.I.

Hence all the six basic solutions exist.

If x Bi, i = 1, 2, ..., 6 are the vectors of the basic variables associated to the sets Bi, i = 1, 2, ..., 6 respectively, then
x B1 = [x1, x2] = B1⁻¹ b = (1/−28) [4 −6; −6 2] [3, 2] = [0, 1/2]
x B2 = [x1, x3] = B2⁻¹ b = (1/−4) [4 −2; −6 2] [3, 2] = [−2, 7/2]
x B3 = [x1, x4] = B3⁻¹ b = (1/6) [6 −1; −6 2] [3, 2] = [8/3, −7/3]
x B4 = [x2, x3] = B4⁻¹ b = (1/16) [4 −2; −4 6] [3, 2] = [1/2, 0]
x B5 = [x2, x4] = B5⁻¹ b = (1/32) [6 −1; −4 6] [3, 2] = [1/2, 0]
x B6 = [x3, x4] = B6⁻¹ b = (1/8) [6 −1; −4 2] [3, 2] = [2, −1].

Thus it is obvious that out of these only three basic solutions are B.F.S. (in which the variables are non-negative). But the B.F.S.'s correspond to the extreme points. Hence the only three extreme point solutions are given by
x1 = [0, 1/2, 0, 0], x2 = [0, 1/2, 0, 0], x3 = [0, 1/2, 0, 0].
Here x1 = x2 = x3. Hence there is a unique extreme point solution.

Note : To find the basic solution x B1 we can also proceed as follows. Putting x3 = 0, x4 = 0 in the given equations, we get 2x1 + 6x2 = 3 and 6x1 + 4x2 = 2; solving, x1 = 0, x2 = 1/2, etc.
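The whole enumeration of Example 12 can also be automated. The sketch below (NumPy; all names are ours) loops over the ⁴C₂ column pairs, skips singular bases, and keeps the basic solutions whose entries are non-negative :

    import numpy as np
    from itertools import combinations

    A = np.array([[2., 6., 2., 1.],
                  [6., 4., 4., 6.]])
    b = np.array([3., 2.])

    for cols in combinations(range(4), 2):      # the 6 candidate bases
        B = A[:, cols]
        if abs(np.linalg.det(B)) < 1e-12:       # singular: no basic solution
            continue
        xB = np.linalg.solve(B, b)
        if np.all(xB >= 0):                     # basic AND feasible
            x = np.zeros(4)
            x[list(cols)] = xB
            print(cols, x)   # each feasible basis gives x = (0, 1/2, 0, 0)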

1. Which of the following sets are convex ?


(i) A = {( x1, x2 ) : x1x2 ≤ 1, x1 ≥ 0, x2 ≥ 0}
(ii) A = {( x1, x2 ) : x2² − 3 ≥ −x1², x1 ≥ 0, x2 ≥ 0}
2. Prove that the set {( x1, x2 ) : x1² + x2² ≤ 4} is a convex set. [Meerut 2011 (BP)]
3. Define a convex set. Show that the set S = {( x1, x2 ) : 3x1² + 2x2² ≤ 6} is convex.
[Meerut 2006]

4. Show that the set S = {( x1, x2 ) : 2 x1 + 3 x2 = 11} ⊂ R 2 is a convex set. [Meerut 2010]

5. Given two planes a1x + b1y + c1z + d1 = 0, a2x + b2y + c2z + d2 = 0 in R³, prove that


their intersection is a convex set but their union is not.

6. Show that S = {( x1, x2 , x3 ) : 2 x1 − x2 + x3 ≤ 4, x1 + 2 x2 − x3 ≤ 1} is a convex set.

7. Examine the convexity of the following sets :
(i) S = {( x1, x2 ) : x1²/4 + x2²/9 ≤ 1}
(ii) S = {( x1, x2 ) : x1² + x2² ≤ 1, x1 + x2 ≥ 1}

8. If S1 and S2 be two non-empty disjoint convex sets and S be a set such that if x1 ∈ S1
and x 2 ∈ S2 then x1 − x 2 ∈ S. Show that S is also a convex set and does not contain
the origin. [Meerut 2008 (BP); 09 (BP)]

9. Show that the set of all the internal points of a convex set S is a convex set.
[Meerut 2011, 12 (BP)]

10. Determine whether the vector [7, 0] is a convex combination of the vectors [6, 3],
[9, –6], [1, 2], [1, –1].

11. Give an example of a convex set whose every boundary point is an extreme point.

12. Determine the convex hull of the following sets :


(i) A = {( x1, x2 ) : x1² + x2² = 1}
(ii) A = {x1, x2}.
13. If x1, x2 ∈ S ⇒ (1/2)(x1 + x2) ∈ S, then is the set S convex or not?

14. Can there be any convex set without any extreme point? Prove that an extreme
point of a convex set is a boundary point of the set.

15. Is the set S = {x : x ∈ Eᵐ, |x| = 1} convex ?



16. Find the extreme points of the polygonal convex set X determined by the system
2 x1 + x2 + 9 ≥ 0, − x1 + 3 x2 + 6 ≥ 0, x1 + x2 ≤ 0, x1 + 2 x2 − 3 ≤ 0.
17. A and B are two convex sets in R n, C is a set in R n defined as
C = {z ∈ R n : z = x + y, x ∈ A, y ∈ B}

Examine convexity of C.

 3
18. Express (2, 1), 0,  , if possible, as a convex combinations of (1, 1) and (–1, 2).
 2

Multiple Choice Questions


 1 1
1. In a two dimensional Euclidean space the points, (0, 0), (0, 1), (1, 0),  ,  span
 2 4
the convex polygon. Then the vertices of the polygon are :
 1 1
(a) (0, 0), (1, 0), (0, 1) (b) (0, 0), (1, 0),  , 
 2 4
 1 1
(c) (0, 0), (0, 1),  ,  (d) None of these
 2 4

2. The maximum number of extreme points for a L.P.P. max. Z = c x subject to


A x = b, x ≥ 0, where A is m × n matrix, is equal to :
(a) m!/(n!(n − m)!) (b) n!/(m!(n − m)!)

(c) m (d) n

3. A rectangle with sides a1 and a2 (a1 ≠ a2 ) is placed with one corner at the origin and
two of its sides along the axes. The interior of the rectangle plus its edges form a :
(a) Convex set (b) Non-convex set
(c) Polyhedron convex set (d) None of these

4. Which of the following sets in E² is not a convex set ?
(a) {( x1, x2 ) : x1² + x2² ≤ 1}
(b) {( x1, x2 ) : x1² + x2² ≤ 4}
(c) {( x1, x2 ) : x1² + x2² ≥ 1, x1² + x2² ≤ 4}
(d) {( x1, x2 ) : x1 ≥ 0}.

5. The convex hull of the set of all the points on the boundary of the circle is the :
(a) Interior of the circle (b) Whole circle
(c) Boundary of the circle (d) None of these

6. Consider the triangle with vertices (0, 0), (2, 0), (1, 1). The expression of the point (.3, .2) as a convex combination of these vertices is :

(a) . 75 (0, 0) + . 05 (2, 0) + . 2 (1, 1)

(b) .25 (0, 0) + .50(2, 0) + .25(1, 1)

(c) .30(0, 0) + .60(2, 0) + .10(1, 1)

(d) None of these

7. The extreme points of the set {( x, y) :| x| ≤ 1, | y| ≤ 1} are :

(a) (1, 1), (1, –1) (b) (1, 1), (1, –1), (–1, 1)

(c) (1, 1), (1, –1), (–1, 1), (–1, –1) (d) (1, 1), (–1, –1)

Fill in the Blank


1. The convex linear combination of two points x1 and x 2 is given by
x = λ 1 x1 + λ 2 x 2 , s.t. λ 1, λ 2 ≥ 0, λ1 + λ 2 = ......

2. A hyperplane is a ......... set.

3. Set of all feasible solutions of a L.P.P. is a .......... set.

4. Any point on the line segment joining two points in R n can be expressed as a convex
combination of .......... points.

5. The polygons which are convex sets have the extreme points as their .......... .

6. The optimal solution of L.P. problem occurs at an .......... point. [Meerut 2004]

7. Every extreme point of a convex set is a .......... point of the set.

True or False
1. A vertex is a boundary point but all boundary points are not vertices.

2. The intersection of an arbitrary family of convex sets in Rⁿ is not necessarily


convex.

3. Every convex set in Rⁿ is a polygon also.

4. A line passing through two distinct points x1 and x 2 is the set of all the points x
such that x = λ x1 + (1 − λ ) x 2 , λ ∈ [0, 1] .

5. The number of edges that can emanate from any given extreme point of the convex
set of F.S. is two.

6. The set of all feasible solutions (if not empty) of a L.P.P. is a convex set.

7. If a L.P.P. has two feasible solutions, then it has an infinite number of feasible
solutions.

8. The basic feasible solutions of a L.P.P. are infinite in number.

9. The extreme points of the convex set of feasible solutions to a L.P.P. are finite in
number.

10. Every B.F.S. to a L.P.P. is not an extreme point of the convex set of feasible
solutions.

11. If the convex set of feasible solutions to a L.P.P. is a convex polyhedron, then at
least one B.F.S. is optimal.

Answers

Exercise
1. (i) No (ii) No
7. (i) Convex (ii) Convex
10. Yes
11. The set {x : |x| ≤ 1} in R².

12. (i) {( x1, x2 ) : x1² + x2² ≤ 1} (ii) Line segment [x1 : x2].

13. Convex
14. Yes
15. No
 3 −3 
16.  ,  , (−3, − 3), (−7, 5), (−3, 3).
2 2 

17. Convex
18. (2, 1) cannot be written as a convex combination of (1, 1) and (−1, 2);
(0, 3/2) = (1/2)(1, 1) + (1/2)(−1, 2).

Multiple Choice Questions


1. (a) 2. (b)
3. (c) 4. (c)
5. (b) 6. (a)
7. (c)

Fill in the Blank


1. 1 2. convex
3. convex 4. two
5. vertices 6. extreme
7. boundary

True or False
1. True 2. False 3. False
4. False 5. True 6. True
7. True 8. False 9. True
10. False 11. True

4.1 Introduction
In chapter 2 we have discussed the formulation of linear programming problems and the graphical method of solving them. It was observed in the graphical approach to the
solution of L.P.P. that in a given situation the set of constraints determines the feasible
region and objective function gives the optimal point, the one that minimizes or
maximizes, as the case may be. But the graphical method suffers from a great limitation
that it can handle problems involving only two decision variables. Whereas in real world
situations, we frequently encounter such problems where more than two variables are
involved.

It is sometimes impossible or requires great labour to search for optimal solution from
amongst all the feasible solutions which may be infinite in number. Fundamental
theorem of linear programming makes it easy to find an optimal solution because it deals
with basic feasible solutions only. But it is also not an easy job to enumerate all the B.F.S.
even for small values of m (number of constraints) and n (number of variables). To
overcome this difficulty Simplex Method or Simplex Algorithm was developed by
George Dantzig in 1947. This method provides an efficient technique which can be
applied for solving linear programming problems of any magnitude (involving two or
more decision variables).

The simplex method is an iterative procedure for finding, in a systematic manner, the optimal solution to a linear programming problem. The simplex algorithm according to

its iterative search selects the optimal solution from among the set of feasible solutions to
the problem.

The simplex method consists of the following main steps (iterations) :


1. Finding a trial B.F.S. to given problem.
2. Testing whether this B.F.S. is optimal or not.
3. Improving the first trial solution (if it is not optimal) by a set of rules and repeating
the above procedure till an optimal solution is attained.

In the simplex method we concentrate on the maximization problem only, because a minimization problem can easily be converted into a maximization problem.

In this chapter, we shall deal with only those problems for which initial basic feasible
solution is non-degenerate.

4.2 Some Definitions and Notations


Here we shall introduce some notations and definitions which are extremely useful in the
discussion of simplex algorithm.

Consider a linear programming problem which after introducing slack and surplus
variables is as follows :

Max. Z = c1 x1 + c2 x2 + ...+ cn x n + 0 x n+1 + 0 x n+ 2 + ...+0 x n+ m

subject to

a11 x1 + a12 x2 + ....+ a1 j x j + ...+ a1n x n + x n+1 = b1

a21 x1 + a22 x2 + ...+ a2 j x j + ...+ a2 n x n + x n+ 2 = b2


... ... ... ... ... ...
... ... ... ... ... ...

am1 x1 + am2 x2 + ... + amj x j + ... + amn x n + x n+ m = bm,

x i ≥ 0 for all i = 1, 2,...., N , where N = n + m

b1, b2 ,..., bm are all positive.

The above linear programming problem can be easily converted to matrix form

Max. Z = c x

subject to A x = b, x ≥ 0 ,

where A = [aij ]m× N is the coefficient matrix of order m × N ,

x = [ x1, x2 , x3 ,...., x n,...., x N ]N ×1



c = (c1, c2 ,...., cn, 0, 0, ..., 0)1× N

and b = [b1, b2 , ...., bm]m×1

where c is a row vector of order 1× N, x and b are column vectors of order N ×1 and m ×1
respectively.

If we denote the j-th column of the matrix A by α j , j = 1, 2,..., N , we can write

A = (α 1, α 2 ,...., α N )

Let B be a non-singular sub-matrix of A of order m × m whose column vectors are m linearly


independent columns selected from A. If we denote these columns by β1, β2 ,..., β m, then

B = (β1, β2 ,..., β m)

and is called the basis matrix.

Since the vectors β1, β2 ,..., β m are linearly independent they form a basis for E m.

Therefore each α j ∈ A ⊂ E m can be expressed as a linear combination of vectors of B.

Thus, we can write

α j = β1 y1j + β2 y2j + ... + βm ymj
= (β1, β2, ..., βm) [y1j, y2j, ..., ymj]
or α j = B Y j, where Y j = [y1j, y2j, ..., ymj],
y1j, y2j, ..., ymj being the scalars required to express α j in such a form.
Also α j = B Y j ⇒ Y j = B⁻¹ α j.

The vector Y j will change if the columns of A forming B change.

Any basis matrix B will provide a basic solution to A x = b.

The variables corresponding to β1, β2 ,..., β m are denoted by x B1, x B2 ,..., x Bm respectively
and are called basic variables.

We denote the column vector of these m basic variables by x B


∴ x B = [ x B1, x B2 ,...., x Bm]

and x B = B −1 b.

If x B = B⁻¹ b ≥ 0, it is called a Basic Feasible Solution (B.F.S.) of the L.P.P.

Since x B1, x B2 ,...., x Bm are the basic variables, the remaining N − m variables belonging
to x are called non-basic variables.

We shall denote the coefficients of basic variables x B1, x B2 ,...., x Bm in the objective
function Z by cB1, cB2 ,..., cBm.

Corresponding to any x B , c B will represent the row vector containing the constants
cB1, cB2 ,..., cBm.

∴ c B = (cB1, cB2 ,...., cBm).

Since for any basic feasible solution all non-basic variables are zero therefore the
objective function Z becomes

Z = cB1 x B1 + cB2 x B2 + ....+ cBm x Bm + 0

= (cB1, cB2 ,...., cBm) [ x B1, x B2 ,...., x Bm]

= c B x B.

Finally, we introduce a new variable Z j , given by

Z j = y1 j cB1 + y2 j cB2 + ...+ ymj cBm

= (cB1, cB2 ,...., cBm) [ y1 j , y2 j ,...., ymj ]

= c BY j .

There exists Z j for each α j and it changes as the columns of A forming B change.

Note : For convenience, we shall represent column vectors by [ ] without using transpose
symbol and row vector by ( ). There should be no confusion in understanding scalar
multiplication of two vectors c and x which is written as c x in place of c. x.
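Note : These definitions translate directly into matrix computations. The NumPy sketch below, with illustrative data of our own choosing, computes x B = B⁻¹b, Y j = B⁻¹α j and Z j = c B Y j for one choice of basis (here the slack columns) :

    import numpy as np

    # illustrative data: A x = b after adding two slack variables
    A = np.array([[1., 2., 1., 0.],
                  [3., 1., 0., 1.]])
    b = np.array([4., 6.])
    c = np.array([5., 4., 0., 0.])

    basis = [2, 3]                  # columns of A forming the basis matrix B
    B_inv = np.linalg.inv(A[:, basis])

    x_B = B_inv @ b                 # basic variables: x_B = B^(-1) b
    c_B = c[basis]
    for j in range(A.shape[1]):
        Y_j = B_inv @ A[:, j]       # coordinates of alpha_j in the basis
        Z_j = c_B @ Y_j             # Z_j = c_B Y_j
        print(j, Y_j, Z_j, c[j] - Z_j)
    print("Z =", c_B @ x_B)         # value of the objective function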

4.3 Fundamental Theorem of Linear Programming


Theorem If a linear programming problem

Max. Z = c x subject to A x = b, x ≥ 0,

where A is an m × N , ( N = m + n) matrix of coefficients given by A = (α1, α 2 , . . . . , α N )


has an optimal feasible solution, then at least one basic feasible solution must be optimal.
[Meerut 2009 (BP) 10, 11, 11 (BP); Kanpur 2010, 12]

Proof: Let x * = [ x1, x2 ,....., x N ]



be an optimal feasible solution of the given linear programming problem and

Z* = Σ ci xi (i = 1, 2, ..., N)

be the corresponding optimum value of the objective function.

Also suppose that k (k ≤ N ) variables in x * are non-zero and the remaining N − k variables

are zero. For the sake of convenience, we can assume that the first k variables of x * are
non-zero.
Thus x * = [x1, x2, ..., xk, 0, 0, ..., 0], the last N − k components being zero.
∴ Σ xi αi = b ...(1)
and Z* = Σ ci xi, ...(2)
both sums running over i = 1, 2, ..., k.

Now two possibilities may arise :

The vectors α 1, α 2 ,...., α k may be either linearly independent or dependent.

Case 1: If α1, α 2 , . . . . , α k are linearly independent.

Any feasible solution for which the vectors α i, i = 1, 2, ..., k associated with non-zero
variables x i, i = 1, 2,..., k are L.I. is called a basic feasible solution. Thus x * is a B.F.S.
which is also optimal.

Hence the result of the theorem is true.

The solution x * is degenerate if k < m and non-degenerate if k = m.

Case 2: If α1, α 2 , . . . . . , α k are linearly dependent and k > m.

In this case, we shall reduce the number of non-zero variables step by step until we obtain
a B.F.S. from feasible solution.

Since α1, α 2 ,...., α k are linearly dependent therefore there exist scalars λ 1, λ 2 ,...., λ k
such that

λ1α1 + λ2α2 + ... + λkαk = 0
or Σ λi αi = 0 ...(3)

and at least one λ i ≠ 0.



We can assume that at least one λ i is positive because if none is positive, then we can
multiply the equation (3) by –1 and get positive λ i.

Now, suppose
v = max (λi/xi), the maximum being taken over 1 ≤ i ≤ k. ...(4)

Obviously v is positive since x i > 0, i = 1, 2,...., k and at least one λ i is positive.

Multiplying both sides of (3) by 1/v and subtracting from (1), we get
Σ (xi − λi/v) αi = b. ...(5)
(5) gives
x′ = [x1 − λ1/v, x2 − λ2/v, ..., xk − λk/v, 0, 0, ..., 0],
a new solution of the matrix equation A x = b.

Also from (4), for i = 1, 2, ..., k, we have
v ≥ λi/xi or xi ≥ λi/v [∵ xi > 0, v > 0]
or xi − λi/v ≥ 0.

Since all the components of x ′ are non-negative therefore x′ is a feasible solution to the
given L.P.P.
Also v = λi/xi for at least one i, 1 ≤ i ≤ k;
i.e., for this value of i, v − λi/xi = 0, so that xi − λi/v = 0; therefore the new feasible solution x′ cannot have more than k − 1 non-zero variables.

In this way, we have derived from the given optimal feasible solution a new feasible solution which contains a smaller number of non-zero variables. This solution is a B.F.S. if the
column vectors associated to non-zero variables in this new solution are L.I. If these
associated vectors are not L.I., we shall repeat the whole reduction procedure as
explained above. Continuing in this way for finite number of times, we will derive a
solution in which columns corresponding to positive variables are L.I. i.e., we will obtain
a B.F.S. of the system.

Now it remains to prove that x′ is also an optimal solution.



Let Z′ be the new value of the objective function corresponding to the new solution x′.
Then
Z′ = Σ ci (xi − λi/v) = Σ ci xi − (1/v) Σ ci λi
or Z′ = Z* − (1/v) Σ ci λi, from (2). ...(6)

Now for optimality, we must have

Z′ = Z*

i.e., x′ will be an optimal solution only if
Σ ci λi = 0. ...(7)
i =1

We shall prove this result by contradiction.

If possible, let us assume that Σ ci λi ≠ 0.
Then either Σ ci λi < 0 ...(8)
or Σ ci λi > 0. ...(9)

In either of these two cases (8) and (9), we can find a real number r such that
r Σ ci λi > 0
[in case (8) r will be negative and in case (9) r will be positive]
or Σ ci (r λi) > 0
or Σ ci (r λi) + Σ ci xi > Σ ci xi
or Σ ci (xi + r λi) > Z*, from (2). ...(10)

Multiplying equation (3) by r and adding to (1), we get


Σ xi αi + Σ r λi αi = b
or Σ (xi + r λi) αi = b.
Therefore, [x1 + rλ1, x2 + rλ2, ..., xk + rλk, 0, 0, ..., 0], with the last N − k components zero,

is also a solution of the matrix equation A x = b for all values of r.

Furthermore, we can choose r in infinitely many ways for which the above solution also
satisfies the non-negative restrictions.

Let us choose r such that

xi + r λi ≥ 0, i = 1, 2, ..., k
or r λi ≥ −xi
∴ r ≥ −xi/λi if λi > 0,
r ≤ −xi/λi if λi < 0,
and r is unrestricted if λi = 0.

Thus xi + r λi ≥ 0 for i = 1, 2, ..., k if we select r such that
max {−xi/λi : λi > 0} ≤ r ≤ min {−xi/λi : λi < 0}. ...(11)
Furthermore, we have
max {−xi/λi : λi > 0} < 0 and min {−xi/λi : λi < 0} > 0,
which means that the interval given by (11) is non-empty.
Thus r lies in the non-empty interval given in (11).

Consequently, an infinite number of solutions given by


[x1 + rλ1, x2 + rλ2, ..., xk + rλk, 0, 0, ..., 0]

satisfy the non-negative restrictions as well.


Now, returning to result (10), we find that Σ ci (xi + r λi) yields a value of the

objective function which is strictly greater than the greatest value (or optimal value) Z *
of objective function, which is impossible.

∴ We must have Σ ci λi = 0
or Z′ = Z*.
Hence x′ = [x1 − λ1/v, x2 − λ2/v, ..., xk − λk/v, 0, 0, ..., 0]

is also an optimal solution.

This proves the theorem.

4.4 Reduction of Feasible Solution to Basic Feasible Solution
Theorem If a linear programming problem

Max. Z = c x subject to A x = b, x ≥ 0 ,

where A = (α1, α 2 , . . . , α N ) is the coefficient matrix of order m × N , ( N = m + n) , has at


least one feasible solution, then it has at least one basic feasible solution also.

Proof: Consider an arbitrary feasible solution of the given linear programming problem

x * = ( x1, x2 ,...., x N ), x i ≥ 0. ...(1)

Let us assume that k, (k ≤ N ) variables in x ∗ have positive values and the remaining N − k
variables are zero. We can also assume that the variables have been numbered in such a
way that the first k variables are non-zero.

Thus x * = [x1, x2, ..., xk, 0, 0, ..., 0], the last N − k components being zero.

Also, we have Σ xi αi = b (i = 1, 2, ..., k). ...(2)

Now two possibilities may arise :

The vectors α 1, α 2 ,...., α k may be either linearly independent or dependent.

Case 1: If α1, α 2 , . . . , α k are linearly independent.

Any feasible solution for which the vectors αi, i = 1, 2, ..., k associated with the non-zero variables xi, i = 1, 2, ..., k are L.I. is called a basic feasible solution. Thus x * is a B.F.S.

Hence, the result of the theorem is true.

The solution x * is degenerate if k < m and non-degenerate if k = m.

Case 2: If α1, α 2 , . . . , α k are linearly dependent and k > m.

In this case we shall reduce the number of non-zero variables step by step until we obtain
a B.F.S. from feasible solution.

Since α1, α 2 ,...., α k are linearly dependent therefore there exist scalars λ 1, λ 2 ,...., λ k
such that

λ1α1 + λ2α2 + ... + λkαk = 0
or Σ λi αi = 0 ...(3)

and at least one λ i ≠ 0.

We can assume that at least one λ i is positive because if none is positive, then we can
multiply the equation (3) by –1 and get positive λ i.

Now, suppose
v = max (λi/xi), the maximum being taken over 1 ≤ i ≤ k. ...(4)

Obviously, v is positive since x i > 0, i = 1, 2,...., k and at least one λ i is positive.

Multiplying both sides of (3) by 1/v and subtracting from (2), we get
Σ (xi − λi/v) αi = b. ...(5)
(5) gives x′ = [x1 − λ1/v, x2 − λ2/v, ..., xk − λk/v, 0, 0, ..., 0],
a new solution of the matrix equation A x = b.

Also from (4), for i = 1, 2, ..., k, we have
v ≥ λi/xi or xi ≥ λi/v [∵ xi > 0, v > 0]
or xi − λi/v ≥ 0.

Since all the components of x ′ are non-negative therefore x ′ is a feasible solution to the
given L.P.P.

Also v = λi/xi for at least one i, 1 ≤ i ≤ k;
i.e., for this value of i, v − λi/xi = 0, so that xi − λi/v = 0; therefore the new feasible solution x′ cannot have more than k − 1 non-zero variables.

In this way, we have derived from the given feasible solution a new feasible solution which contains a smaller number of non-zero variables. This solution is a B.F.S. if the
column vectors associated to non-zero variables in this new solution are L.I. If these
associated vectors are not L.I., we shall repeat the whole reduction procedure as
explained above. Continuing in this way for finite number of times we will derive a
solution in which columns corresponding to positive variables are L.I. i.e., we will obtain
a B.F.S. of the system.

Hence, the theorem is proved.

Note : 1. By applying the above mentioned procedure we can obtain a basic feasible
solution from any feasible solution.

2. If the given feasible solution is optimal, then basic feasible solution is also optimal.

Example 1: If x1 = 1, x2 = 0 , x3 = 1 be a feasible solution of the L.P.P.

Min. Z = 2 x1 + 3 x2 + 4 x3

subject to x1 + x2 + x3 = 2

x1 − x2 + x3 = 0 , x1, x2 , x3 ≥ 0 ,

then show that the given feasible solution is not basic.

Solution: The given system of constraint equations can be written in matrix form A x = b.

or [1 1 1; 1 −1 1] [x1, x2, x3] = [2, 0].
∴ α1 = [1, 1], α2 = [1, −1], α3 = [1, 1], b = [2, 0].

The given feasible solution x1 = 1, x2 = 0, x3 = 1 will not be basic if the vectors associated
to non-zero variables are not linearly independent.

i.e., if α1 and α 3 are linearly dependent.

If we choose λ 1 = −1 and λ 2 = 1, we have

1 1 −1 + 1 0 


λ 1α1 + λ 2 α 3 = −1   + 1   =   =   = 0.
1
 1 −1 + 1 0 

∴ The vectors α1 and α 3 are linearly dependent i.e., are not L.I.

Hence the given feasible solution is not basic.

Aliter : |(α1, α3)| = |1 1; 1 1| = 0 ⇒ the vectors α1 and α3 are not L.I.
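Note : The 'Aliter' generalises: a set of m column vectors in Eᵐ is L.I. iff the determinant of the square matrix they form is non-zero, which is a one-line check (NumPy, illustrative) :

    import numpy as np

    a1 = np.array([1., 1.])
    a3 = np.array([1., 1.])
    print(np.linalg.det(np.column_stack([a1, a3])))   # 0.0 => L.D., not L.I.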

Example 2: If x1 = 2, x2 = 3, x3 = 1 be a feasible solution of the linear programming


problem

Max. Z = x1 + 2 x2 + 4 x3

subject to 2 x1 + x2 + 4 x3 = 11

3 x1 + x2 + 5 x3 = 14

x1, x2 , x3 ≥ 0 ,

then find a basic feasible solution from the given feasible solution. [Gorakhpur 2007]

Solution: The given L.P.P. can be expressed as

Max. Z = x1 + 2 x2 + 4 x3 ,

subject to Ax =b

or [2 1 4; 3 1 5] [x1, x2, x3] = [11, 14]
or α1x1 + α2x2 + α3x3 = b, ...(1)
where α1 = [2, 3], α2 = [1, 1], α3 = [4, 5], b = [11, 14].

Since x1 = 2, x2 = 3, x3 = 1 is a feasible solution to the given L.P.P., therefore from (1)

2α1 + 3α 2 + α 3 = b. ...(2)

Now the vectors α1, α 2 , α 3 associated with non-zero variables x1, x2 , x3 will be linearly
dependent if one of these vectors can be expressed as the linear combination of the
remaining two vectors.

Let α1 = aα2 + bα3 ...(3)
or [2, 3] = a[1, 1] + b[4, 5] = [a + 4b, a + 5b]
⇒ a + 4b = 2 and a + 5b = 3 ⇒ a = −2, b = 1.

Substituting the values of a and b in (3), we get

α1 = −2α 2 + α 3

or α1 + 2α 2 − α 3 = 0 ...(4)
or Σ λi αi = 0, which gives λ1 = 1, λ2 = 2, λ3 = −1.

Now we shall determine which of the three variables x1, x2 , x3 should be zero.

Let v = max (λi/xi) over 1 ≤ i ≤ 3 = max (λ1/x1, λ2/x2, λ3/x3)
= max (1/2, 2/3, −1/1) = 2/3 = λ2/x2.

∴ x2 should be zero, for which we should eliminate α 2 between (2) and (4).

Eliminating α 2 between (2) and (4), we get

2α1 − 3(α1 − α3)/2 + α3 = b
or (1/2)α1 + (5/2)α3 = b.
∴ The new F.S. is x1 = 1/2, x2 = 0, x3 = 5/2.

Now the column vectors α1 and α 3 corresponding to basic (non-zero) variables x1 and x3
are L.I. as

|(α1, α3)| = |2 4; 3 5| = −2 ≠ 0,

so this new F.S. is B.F.S.

Hence, the B.F.S. obtained from the given F.S. is

x1 = 1/2, x2 = 0, x3 = 5/2.

Note :
1. We can also find the new B.F.S. as follows.

We have v = 2/3.
Since xi − λi/v ≥ 0 for all i = 1, 2, 3,
∴ the new F.S. is
(x1 − λ1/v, x2 − λ2/v, x3 − λ3/v).
We have x1 − λ1/v = 2 − 1/(2/3) = 1/2, x2 − λ2/v = 3 − 2/(2/3) = 0, x3 − λ3/v = 1 + 1/(2/3) = 5/2.
Hence, the new F.S. is (1/2, 0, 5/2), which is also a B.F.S.

2. Another new B.F.S. can be obtained as follows :

Equation (4), can be written as

−α 1 − 2α 2 + α 3 = 0 ...(5)

∴ λ 1 = −1, λ 2 = −2, λ 3 = 1

∴ v = max (λi/xi) over 1 ≤ i ≤ 3 = 1/1 = λ3/x3.
∴ x3 should be zero and α3 should be eliminated between (2) and (5).
Eliminating α3 between (2) and (5), we have
3α1 + 5α2 = b.
∴ The new F.S., which is also a B.F.S., is
x1 = 3, x2 = 5, x3 = 0.

Example 3: Consider the set of equations


5 x1 − 4 x2 + 3 x3 + x4 = 3

2 x1 + x2 + 5 x3 − 3 x4 = 0

x1 + 6 x2 − 4 x3 + 2 x4 = 15,

x1, x2 , x3 , x4 ≥ 0 .

If x1 = 1, x2 = 2, x3 = 1, x4 = 3 is a feasible solution, then find a basic feasible solution.


[Meerut 2005]

Solution: The given set of equations can be expressed in the matrix form

A x =b

or [5 −4 3 1; 2 1 5 −3; 1 6 −4 2] [x1, x2, x3, x4] = [3, 0, 15]

α1 x1 + α 2 x2 + α 3 x3 + α 4 x4 = b ...(1)

5   −4   3  1  3
where α1 = 2 , α 2 =  1, α 3 =  5 , α 4 = −3 , b =  0  .
         
1  6  −4   2  15 

Since x1 = 1, x2 = 2, x3 = 1, x4 = 3 is a feasible solution to the given set of equations,


therefore from (1)

α1 + 2α 2 + α 3 + 3α 4 = b. ...(2)

Now the vectors α1, α 2 , α 3 , α 4 associated with non-zero variables x1, x2 , x3 , x4 will be
linearly dependent if one of these vectors can be expressed as the linear combination of
the remaining three vectors.

Let α1 = aα 2 + bα 3 + cα 4 ...(3)

5   −4   3  1
or 2  = a  1 + b  5  + c −3 
       
1  6  −4   2 

−4 a + 3 b + c  5 
or  a + 5 b − 3 c  = 2 
   
6 a − 4 b + 2 c  1

−4 a + 3 b + c = 5
or a + 5b − 3c = 2
6a − 4b + 2c = 1

Solving these, we get a = 22/43, b = 139/86, c = 189/86.

Substituting these values in (3), we get


(22/43)α2 + (139/86)α3 + (189/86)α4 = α1

or 86α 1 − 44α 2 − 139α 3 − 189α 4 = 0 ...(4)


or Σ λi αi = 0, which gives λ1 = 86, λ2 = −44, λ3 = −139, λ4 = −189.

Using v = max (λi/xi) over 1 ≤ i ≤ 4, we have
v = max (λ1/x1, λ2/x2, λ3/x3, λ4/x4) = max (86/1, −44/2, −139/1, −189/3) = 86 = λ1/x1.
2 3 4

∴ The variable x1 should be zero.


138

i.e., the vector α1 should be eliminated between (2) and (4).
Eliminating α1 between (2) and (4), we get
(44α2 + 139α3 + 189α4)/86 + 2α2 + α3 + 3α4 = b
or 0·α1 + (216/86)α2 + (225/86)α3 + (447/86)α4 = b.
∴ x1 = 0, x2 = 216/86, x3 = 225/86, x4 = 447/86 is the new feasible solution.

Since the vectors α2, α3, α4 corresponding to the non-zero (basic) variables are L.I., as
|(α2, α3, α4)| = |−4 3 1; 1 5 −3; 6 −4 2| = −86 ≠ 0,

∴ This new F.S. is B.F.S.

Note : Writing (4) as

−86α1 + 44α2 + 139α3 + 189α4 = 0
and proceeding as above, another B.F.S. obtained is
x1 = 225/139, x2 = 234/139, x3 = 0, x4 = 228/139.
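Note : The reduction carried out in Examples 2 and 3 is mechanical enough to script. The helper below (our own sketch, using NumPy) finds a dependence Σ λi αi = 0 among the columns of the positive variables, computes v = max (λi/xi), and returns the shrunken solution x − λ/v; each call makes at least one more variable zero :

    import numpy as np

    def reduce_once(A, x, tol=1e-9):
        """One reduction step: make one more variable of a F.S. zero."""
        pos = np.where(x > tol)[0]
        M = A[:, pos]
        if np.linalg.matrix_rank(M, tol) == pos.size:
            return x                      # columns L.I.: x is already basic
        lam = np.linalg.svd(M)[2][-1]     # a dependence: M @ lam ~ 0
        if lam.max() <= 0:
            lam = -lam                    # ensure some positive lambda_i
        v = (lam / x[pos]).max()          # v = max lambda_i / x_i
        x_new = x.copy()
        x_new[pos] = x[pos] - lam / v     # new feasible solution
        return x_new

    A = np.array([[2., 1., 4.],
                  [3., 1., 5.]])
    x = np.array([2., 3., 1.])            # the F.S. of Example 2
    print(reduce_once(A, x))              # [0.5, 0, 2.5] or [3, 5, 0],
                                          # depending on the sign of lam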

1. State and prove the fundamental theorem of linear programming.


2. Show that if a linear programming problem has a feasible solution, it also has basic
feasible solution.
3. Prove that if the system A x = b of m linear equations in n unknowns (m ≤ n) with rank A = m has a feasible solution, then it has a basic feasible solution also.
4. If x1 = 2, x2 = 4 and x3 = 1 be a F.S. to the system of equations

2 x1 − x2 + 2 x3 = 2, x1 + 4 x2 = 18,
then find two basic feasible solutions.
5. (2, 1, 3) is a feasible solution of the set of equations

4 x1 + 2 x2 − 3 x3 = 1, 6 x1 + 4 x2 − 5 x3 = 1.

Reduce the feasible solution to a B.F.S. of the set. [Meerut 2006]

6. x1 = 1, x2 = 1, x3 = 1, x4 = 0 is a feasible solution to the system of equations

x1 + 2 x2 + 4 x3 + x4 = 7, 2 x1 − x2 + 3 x3 − 2 x4 = 4.
Reduce the feasible solution to two different basic feasible solutions.

7. x1 = 2, x2 = 4, x3 = 5 is a F.S. of the system of equations

2 x1 − x2 + 2 x3 = 10

x1 + 4 x2 = 18

x1, x2 , x3 ≥ 0

Reduce this F.S. to B.F.S.


8. (1, 1, 1) is a feasible solution of the system of equations

x1 + x2 + 2 x3 = 4

2 x1 − x2 + x3 = 2

Reduce the F.S. to B.F.S.


9. Consider the system of equations

2 x1 − 3 x2 + 4 x3 + 6 x4 = 25

x1 + 2 x2 + 3 x3 − 3 x4 + 5 x5 = 12
If x1 = 2, x2 = 1, x3 = 3, x4 = 2, x5 = 1 is a F.S., then reduce it to a B.F.S. of the
system.
10. Consider the set of equations

2 x1 − 3 x2 + 4 x3 + 6 x4 = 21

x1 + 2 x2 + 3 x3 − 3 x4 + 5 x5 = 9
If x1 = 2, x2 = 1, x3 = 2, x4 = 2, x5 = 1 is a F.S., then reduce it to two different basic
feasible solutions.

4. x1 = 26/9, x2 = 34/9, x3 = 0; x1 = 0, x2 = 9/2, x3 = 13/4
5. x1 = 1, x2 = 0, x3 = 1
6. x1 = 0, x2 = 1/2, x3 = 3/2, x4 = 0; x1 = 3, x2 = 2, x3 = 0, x4 = 0
7. x1 = 58/9, x2 = 26/9, x3 = 0
8. x1 = 0, x2 = 0, x3 = 2; x1 = 2, x2 = 2, x3 = 0
9. x1 = 49/4, x2 = 0, x3 = 0, x4 = 1/12, x5 = 0
10. x1 = 0, x2 = 0, x3 = 39/10, x4 = 9/10, x5 = 0
and x1 = 39/4, x2 = 0, x3 = 0, x4 = 1/4, x5 = 0

4.5 To Determine Improved B.F.S. from a Given B.F.S.


The following theorem helps us to develop a procedure for determining another basic
feasible solution from a given B.F.S. which gives a better value of objective function.

Theorem Let x B = B−1 b be a B.F.S. of a linear programming problem with Z = c Bx B


as the value of the objective function. If for any column α j in A, but not in B, the condition

c j − Z j > 0 or Z j − c j < 0 holds and if at least one yij > 0 , i = 1, 2, . . . , m, then it is

possible to obtain a new B.F.S. by replacing one of the columns in B by α j and if Z′ is the

new value of objective function, then Z′ ≥ Z.

If the initial B.F.S. x B = B−1b is non-degenerate, then Z′ > Z.

Proof: Consider the L.P.P.

Max. Z = c x, subject to A x = b, x ≥ 0,

where A = (α 1, α 2 , ...., α N ), N = m + n,

basis matrix B = (β1, β2 , ...., β m).

Let x B = [ x B1, x B2 ,...., x Bm] be a B.F.S. of the given L.P.P.

Since the vectors β1, β2 ,...., β m are in the basis of A, therefore we can express α j as a linear
combination of β's.

m
∴ αj = Σ yij β i = y1 j β1 + y2 j β2 + .... + ymj β m. ...(1)
i =1

If yrj ≠ 0, then α j can replace β r in B and B is still a basis matrix.

Let yrj ≠ 0, then from (1), we have

βr = (1/yrj) αj − (y1j/yrj) β1 − ... − (y(r−1)j/yrj) βr−1 − (y(r+1)j/yrj) βr+1 − ... − (ymj/yrj) βm
or βr = (1/yrj) αj − Σ (yij/yrj) βi, (i = 1, 2, ..., m, i ≠ r). ...(2)
i≠ r

Now, we have

B xB = b

or (β1, β2 ,..., β r ,...., β m) [ x B1, x B2 , ...., x Br ,...., x Bm] = b

or β1 x B1 + β2 x B2 + ... + β r x Br + .... + β m x Bm = b

or Σ βi x Bi + βr x Br = b, (i = 1, 2, ..., m, i ≠ r) ...(3)

Putting for β r from (2) in (3), we get

Σ βi { x Bi − (yij/yrj) x Br } + (x Br/yrj) αj = b, (i ≠ r) ...(4)
or Σ x′Bi βi + x′Br αj = b, (i ≠ r) ...(5)

where x′Bi = x Bi − (yij/yrj) x Br, i = 1, 2, ..., m, i ≠ r,
and x′Br = x Br/yrj, i = r. ...(6)

Comparing (3) and (5), we observe that the new basic solution of A x = b is given by

x ′B = [ x′Bi, x′Br ], i = 1, 2,...., m, i ≠ r

where the values of x′Bi, x′Br are given by (6).

This basic solution will be feasible if we have

x Bi − (yij/yrj) x Br ≥ 0, i = 1, 2, ..., m, i ≠ r,
and x Br/yrj ≥ 0, i = r. ...(7)

Since x B is the initial B.F.S., we have

x Br ≥ 0, r = 1, 2, ...., m.

Thus we observe that (7) will hold only if

yr j > 0 and yij ≤ 0, i = 1, 2,..., m, i ≠ r .

If yr j > 0 and yij > 0, then (7) is satisfied only if

x Bi/yij − x Br/yrj ≥ 0
or x Br/yrj ≤ x Bi/yij

or x Br/yrj = min { x Bi/yij : yij > 0 } = v. ...(8)

Thus a new B.F.S. can be obtained from the initial B.F.S. by replacing the column vector βr of the basis matrix B by αj, where r is selected such that
v = x Br/yrj = min { x Bi/yij : yij > 0 }.

If we have v = 0, which is possible only when x Br = 0 then it means that the initial B.F.S.
is degenerate.

Now we shall prove Z′ ≥ Z.

The value of the objective function for the initial B.F.S. x B is

Z = cB x B

= (cB1, cB2 ,...., cBm) [ x B1, x B2 , ...., x Bm]

= Σ cBi x Bi.

Corresponding to the new B.F.S. x ′B , the value of the objective function is Z ′, so we have

Z′ = Σ c′Bi x′Bi, ...(9)

where c′Bi are the coefficients of the basic variables x′Bi (i = 1, 2,...., m) in the objective
function.

Obviously, c′Bi = cBi, i = 1, 2,...., m, i ≠ r

and c′Br = c j .

∴ We can write

Z′ = Σ cBi x′Bi + cj x′Br, (i = 1, 2, ..., m, i ≠ r). ...(10)

Substituting for x′Bi, x′Br from (6) in (10), we get
Z′ = Σ cBi { x Bi − (yij/yrj) x Br } + cj (x Br/yrj), (i ≠ r).
Since the term cBi { x Bi − (yij/yrj) x Br } = 0 when i = r, it can be included in the summation without changing the value of Z′.
∴ Z′ = Σ cBi x Bi − (x Br/yrj) Σ cBi yij + (x Br/yrj) cj
= Z + (x Br/yrj)(cj − Zj), where Zj = Σ cBi yij
= Z + v (cj − Zj), from (8). ...(11)

From (11), we observe that the new value of the objective function is equal to the original
value of the objective function plus the quantity v (c j − Z j ).

We have Z ′ ≥ Z if v (c j − Z j ) ≥ 0.

Since v ≥ 0 therefore Z ′ ≥ Z only if c j − Z j ≥ 0.

Hence by choosing the vector α j for which c j − Z j > 0 and at least one yij > 0, we obtain a new
improved value of the objective function.

If the initial B.F.S. is non-degenerate, then v > 0 and in that case Z′ > Z.

Note :
1. If B′ = (β′1, β′2, ..., β′m) is the new non-singular matrix obtained from B by replacing βr with αj, then β′i = βi, i ≠ r, and β′r = αj.
2. If v = 0, then the initial B.F.S. is degenerate. From (6), we have

x′Bi = x Bi, i = 1, 2,...., m, i ≠ r

x′Br = x Br = 0, i = r .

Since the values of the variables common to the initial and new solutions do not change
therefore the new B.F.S. is also degenerate.
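Note : Computationally, the content of this theorem is the minimum ratio test: given the entering column's coordinates Y j = B⁻¹α j and the current x B, the leaving row r and the updated basic solution follow from (6) and (8). A NumPy sketch with made-up numbers :

    import numpy as np

    x_B = np.array([4., 6.])    # current basic feasible solution (illustrative)
    y_j = np.array([2., 1.])    # Y_j = B^(-1) alpha_j for the entering column

    # minimum ratio test (8): r minimises x_Bi / y_ij over rows with y_ij > 0
    ratios = np.full_like(x_B, np.inf)
    mask = y_j > 1e-12
    ratios[mask] = x_B[mask] / y_j[mask]
    r = int(np.argmin(ratios))
    v = ratios[r]

    x_B_new = x_B - v * y_j     # formula (6) for i != r ...
    x_B_new[r] = v              # ... while the entering variable takes x_Br/y_rj
    print(r, x_B_new)           # r = 0, new basic values [2., 4.]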

4.6 Conditions for the Existence of Unbounded Solutions


[Meerut 2007]

In 4.5, we have proved that if for any column α j in A but not in B the condition
c j − Z j > 0 holds and if at least one yi j > 0, i = 1, 2,...., m, then it is possible to find an
improved B.F.S.

Now the question is: what will be the result if, for at least one αj, all yij ≤ 0, i = 1, 2, ..., m?

To answer this question we shall prove the following theorem:

Theorem (Unbounded Solutions) : If for any basic feasible solution x B = B−1 b to


A x = b there is some column α j in A but not in B for which c j − Z j > 0 and yij ≤ 0,

i = 1, 2, . . . . , m, then if the objective function is to be maximized, the problem has an


unbounded solution.

Proof: Consider the linear programming problem

Max. Z = c x subject to A x = b, x ≥ 0 ,

where A = (α1, α 2 ,..., α N ), N = m + n,

basis matrix B = (β1, β2 ,...., β m).

Let x B = [ x B1, x B2 , ...., x Bm] be a B.F.S. of the given L.P.P. Then, we have

m
B xB = b or Σ x Bi β i = b ...(1)
i =1

and the corresponding value of the objective function is

m
Z = c B x B = Σ cBi x Bi ...(2)
i =1

Adding and subtracting λα j in (1), we get

m
Σ x Bi β i − λα j + λα j = b ...(3)
i =1

where λ is some scalar.

Let αj ∈ A s.t. αj ∉ B.

Since the vectors β1, β2, ...., βm form a basis, we can express αj as a linear combination of the β's
m
∴ αj = Σ yi j β i
i =1

m
or − λα j = − λ Σ yij β i.
i =1

Substituting the above value of − λα j in (3), we get



m m
Σ x Bi β i − λ Σ yij β i + λα j = b
i =1 i =1

m
or Σ ( x Bi − λ yij ) β i + λα j = b
i =1

Thus, we obtain a new solution

x ′B = [ x′B1, x′B2 ,...., x′Bm, λ] ...(4)

where x′Bi = x Bi − λyij , i = 1, 2,...., m.

When λ > 0, we have xBi − λyij ≥ 0 since yij ≤ 0, i = 1, 2,...., m; therefore (4) gives a feasible solution in which the number of positive variables is less than or equal to (m + 1). It may be less than (m + 1) because xBi − λyij may be zero for some i. If the number of positive variables in this solution equals (m + 1), then it is a non-basic feasible solution.

If Z ′ is the new value of the objective function corresponding to this new solution, then
we have
Z′ = Σ_{i=1}^{m} cBi (xBi − λyij) + cj λ

or  Z′ = Σ_{i=1}^{m} cBi xBi + λ (cj − Σ_{i=1}^{m} cBi yij) = Z + λ (cj − Zj).

Since cj − Zj > 0 (given), Z′ can be made as large as we please by taking λ sufficiently large. We know that a L.P.P. has an unbounded solution if the value of its objective function can be increased or decreased arbitrarily.

Hence, the given L.P.P. has an unbounded solution.

Note : If for some α j , Z j − c j > 0 and yij ≤ 0, i = 1, 2,...., m, then the L.P.P. has an

unbounded solution if the objective function is to be minimised.
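
The unboundedness criterion just proved is easy to test mechanically: a column with cj − Zj > 0 whose entries yij are all non-positive signals an unbounded maximum. A small illustrative sketch (Python/NumPy; all names are ours):

import numpy as np

def is_unbounded(c_j_minus_Z_j, y_j):
    """True when the column proves the maximization problem unbounded."""
    return c_j_minus_Z_j > 0 and np.all(y_j <= 0)

print(is_unbounded(5.0, np.array([-1.0, 0.0, -2.0])))  # True: Z grows without bound
print(is_unbounded(5.0, np.array([-1.0, 3.0, -2.0])))  # False: the ratio test applies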

4.7 Condition for Improved Basic Feasible Solution


to Become Optimal

Theorem: Let xB = B−1 b be a basic (degenerate or non-degenerate) feasible solution of the L.P.P. Max. Z = c x subject to A x = b, x ≥ 0. Let Z* = cB xB be the value of the objective function at any iteration of the simplex method. If cj − Zj ≤ 0 for every column αj in A but not in B, then Z* is the optimum value of the objective function Z and xB is an optimal basic feasible solution.

Proof: Suppose x B = [ x B1, x B2 , ...., x Bm] is the B.F.S. of the given L.P.P.

We have

A = (α1, α 2 ,...., α N ), N = m + n

and the basis matrix B = (β1, β2 , ...., β m).

∴ B xB = b or x B = B −1 b. ...(1)

The value of the objective function for the B.F.S. x B is

m
Z * = c B x B = Σ cBi x Bi. ...(2)
i =1

Also, we are given that c j − Z j ≤ 0 for every column α j in A but not in B.

Let x′ = [ x1′ , x2′ ,...., x′N ] be any feasible solution and Z ′ be the value of the objective
function for this solution.

Then, we have

A x′ = b        ...(3)

and  Z′ = c x′ = Σ_{j=1}^{N} cj x′j.        ...(4)

Now Z * will be the optimum value of the objective function if we have Z * ≥ Z ′.

To prove Z * ≥ Z ′ we proceed in the following manner :

We have x B = B −1 b = B −1( Ax ′ ), from (3)

= (B −1 A)x ′

= Y x ′, where Y = (Y1, Y2 ,...., Y N )

= (Y1, Y2 ,....., Y N ) [ x1′ , x2′ ,...., x′N ]

     [ y11  y12  ...  y1j  ...  y1N ]   [ x1′ ]
     [ y21  y22  ...  y2j  ...  y2N ]   [ x2′ ]
  =  [ ...  ...  ...  ...  ...  ... ]   [  ⋮  ]
     [ ym1  ym2  ...  ymj  ...  ymN ]   [ xN′ ]

or  [xB1, xB2, ...., xBi, ...., xBm]

  = [Σ_{j=1}^{N} y1j x′j, Σ_{j=1}^{N} y2j x′j, ...., Σ_{j=1}^{N} yij x′j, ..., Σ_{j=1}^{N} ymj x′j]

Equating i-th component on both sides, we get

N
x Bi = Σ yij x′j . ...(5)
j =1

We have cj − Zj ≤ 0 for all j for which αj is not in B. We now show that cj − Zj = 0 for all those j for which αj is in B, so that cj − Zj ≤ 0 holds for every column of A.

Now if α j = β i, then

α j = β i = 0 . β1 + 0 . β2 + .... + 0 . β i−1 + 1. β i + 0 . β i+1 + ...+0 . β m

which shows that Y j = ei, a vector whose i-th component is unity and all other
components are zero.

Again α j = β i gives c j = cBi

∴ c j − Z j = c j − c BY j = c j − c B ei = c j − cBi = 0

Thus c j − Z j = 0 for all those j for which α j ∈ B.

∴ c j − Z j ≤ 0 for all α j in A

or  cj ≤ Zj

or  cj x′j ≤ Zj x′j,        [∵ x′j ≥ 0]

or  Σ_{j=1}^{N} cj x′j ≤ Σ_{j=1}^{N} Zj x′j

N
or Z ′ ≤ Σ x′j (c BY j ), from (4)
j =1

N  m 
= Σ x′j  Σ cBi yij 
j =1 i = 1 

m  N 
= Σ cBi  Σ x′j yij 
i =1 j =1 

m
= Σ cBi x Bi, from (5)
i =1

∴ Z ′ ≤ Z *.

Hence, Z * is the optimum value of the objective function.
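
In code, the optimality test of this theorem amounts to computing every net evaluation ∆j = cj − cB Yj and checking the sign. A minimal sketch (Python/NumPy; the data are read from the final table of Example 1 in 4.11 below, everything else is illustrative):

import numpy as np

def net_evaluations(c, c_B, Y):
    # Y holds one column y_j = B^{-1} a_j per variable of A
    return c - c_B @ Y

c   = np.array([40.0, 35.0, 0.0, 0.0])
c_B = np.array([35.0, 40.0])                   # basis (Y2, Y1)
Y   = np.array([[0.0, 1.0,  2/3, -1/3],        # rows of the final tableau
                [1.0, 0.0, -1/2,  1/2]])
deltas = net_evaluations(c, c_B, Y)
print(deltas)                  # [0, 0, -10/3, -25/3] -> all <= 0
print(np.all(deltas <= 0))     # True, so the current B.F.S. is optimal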



4.8 Alternative Optimal Solutions


The given linear programming problem is said to possess an alternative optimal solution
if the set of variables giving the optimal value of the objective function is not unique.

Theorem (Conditions for Alternative Optimum Solutions)

Suppose there exists an optimal B.F.S. to a L.P.P. and


(i) If for some α j in A but not in B, c j − Z j = 0 and yij ≤ 0 for all i = 1, 2, . . . , m, then a

non-basic alternative optimal solution will exist.


(ii) If for some α j in A but not in B, c j − Z j = 0 and yij > 0 for at least one i, then an

alternative basic optimum solution will exist.

Proof: (i) We have discussed in 4.6 that if we introduce the column vector α j in B where

α j is in A but not in B and yij ≤ 0 for all i = 1, 2,...., m, then a non-basic feasible solution
x ′B with (m + 1) number of positive variables is given by

x ′B = [ x′B1, x′B2 ,...., x′Bm, λ]

where x′Bi = x Bi − λ yij , i = 1, 2,...., m, λ > 0

and the value of the objective function for this new F.S. is given by

Z ′ = Z + λ (c j − Z j ) .

If cj − Zj = 0, then Z′ = Z,

i.e., the value of the objective function for this new non-basic F.S. is also equal to the optimal value Z. Hence, this new non-basic F.S. is an alternative optimal solution of the given L.P.P.

(ii) We have shown in the theorem of 4.5 that if yij > 0 for at least one i = 1, 2,...., m, then by replacing one column βr in B by the column αj which is in A but not in B, we obtain a new B.F.S. x′B given by

x′B = [x′B1, x′B2, ..., x′Bm]

where  x′Bi = xBi − (yij/yrj) xBr,  i = 1, 2,...., m, i ≠ r

       x′Br = xBr/yrj,  i = r

and    xBr/yrj = min_i { xBi/yij : yij > 0 }.

The value of the objective function for this new B.F.S. is given by

Z′ = Z + (xBr/yrj) (cj − Zj) = Z,  since cj − Zj = 0,

i.e., the value of the objective function for this new B.F.S. is also equal to the optimal value Z. Hence, this new B.F.S. is an alternative optimal B.F.S.

4.9 Inconsistency & Redundancy in Constraint Equations


4.9.1 Redundancy in Constraint Equations
By redundancy in constraint equations we mean that the system has more constraint equations than are actually needed; in other words, it has more constraint equations than the number of variables.

This is the situation when

r ( A) = r ( Ab) = k ≤ n < m.

In this case, there will be (m − k) redundant equations.

4.9.2 Inconsistency
As already defined, the set of constraints (linear equations) is said to be inconsistent if
r ( A) ≠ r ( Ab).

Before solving a L.P. problem by the simplex method, we should have r(A) = r(Ab), i.e., the constraint equations (after introducing the slack and artificial variables) should be consistent; indeed, in the simplex method we always have r(A) = r(Ab) = m.

If the system A x = b involves artificial variables, then we cannot say whether this system
is consistent or there is any redundancy. Below we give the cases (without proof) to
decide about the consistency and redundancy in such systems.

Case 1 : If the basis B contains no artificial vector and the optimality condition is
satisfied (at any iteration), then the current solution is a B.F.S. of the problem.

Case 2 : If one or more artificial vectors appear in the basis B at zero level, i.e., the values of the artificial variables corresponding to artificial vectors in B are zero, and the optimality condition is satisfied (at any iteration), then the system is consistent. Furthermore, if yrj = 0 ∀ j and xBr = 0, where r corresponds to the row containing an artificial vector, then the r-th constraint equation is redundant.

Case 3 : If at least one artificial vector appears in the basis B at a positive level i.e., the
value of at least one artificial variable corresponding to artificial vector in B is non-zero
and the optimality condition is satisfied (at any iteration), then there exists no feasible
solution of the problem.
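
The rank conditions of this article can be checked numerically. A hedged sketch (Python/NumPy; classify_system is our name, and the sample data are illustrative, not from the book):

import numpy as np

def classify_system(A, b):
    r_A  = np.linalg.matrix_rank(A)
    r_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
    if r_A != r_Ab:
        return "inconsistent"                       # r(A) != r(Ab)
    m = A.shape[0]
    if r_A < m:
        return f"consistent, {m - r_A} redundant equation(s)"
    return "consistent, no redundancy"

A = np.array([[1.0, 2.0], [2.0, 4.0]])
print(classify_system(A, np.array([3.0, 6.0])))     # 1 redundant equation
print(classify_system(A, np.array([3.0, 7.0])))     # inconsistent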

4.10 To Determine Starting B.F.S.


In this article, we are going to discuss a method for finding a most convenient initial
B.F.S. to a linear programming problem.

In the constraints of a general linear programming problem, there may be any of the three
signs ≤, =, ≥ . Let us assume that the requirement vector b ≥ 0 (if any of the bi's is negative,
multiply the corresponding constraint by –1).

Case 1 : To find initial B.F.S. when all the original constraints have ≤ sign.
To convert all the constraints into equations we insert slack variables only. The
equations obtained are as follows :

a11 x1 + a12 x2 + .... + a1n x n + 1. x n+1 = b1

a21 x1 + a22 x2 + .... + a2 n x n + 1. x n+ 2 = b2

.... .... .... ....

.... .... .... ....

am1 x1 + am2 x2 + .... + amn x n + 1. x n+ m = bm

Here x n+1, x n+ 2 , ....., x n+ m are slack variables.

In matrix form these equations can be written as

[ a11  a12  ....  a1n   1  0 .... 0 ]  [ x1   ]     [ b1 ]
[ a21  a22  ....  a2n   0  1 .... 0 ]  [ x2   ]     [ b2 ]
[ .... ....  ....  .... .. .. .... . ]  [  ⋮   ]  =  [  ⋮ ]
[ am1  am2  ....  amn   0  0 .... 1 ]  [ xn   ]     [ bm ]
                                       [ xn+1 ]
                                       [  ⋮   ]
                                       [ xn+m ]

Taking the initial basis matrix

B = I m, (unit matrix of order m × m).

∴ The initial basic solution is given by

x B = B −1 b = I m b = b ≥ 0

Thus, the initial B.F.S. is

x n+1 = x B1 = b1, x n+ 2 = x B2 = b2 , ...., x n+ m = x Bm = bm,

which can be obtained by writing all the non-basic variables (i.e., given variables)
x1, x2 ,...., x n equal to zero and solving the equations for the remaining variables (i.e.,
slack variables) x n+1, x n+ 2 , ...., x n+ m.
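
Case 1 is simple enough to state as code: appending slack columns gives [A | I], the identity columns form the starting basis, and xB = b. A minimal sketch (Python/NumPy; the data are taken from Example 1 of 4.11 below, everything else is illustrative):

import numpy as np

A = np.array([[2.0, 3.0], [4.0, 3.0]])   # original "<=" constraint coefficients
b = np.array([60.0, 96.0])
A_aug = np.hstack([A, np.eye(2)])        # append slack columns -> [A | I]
B = A_aug[:, 2:]                         # initial basis = identity matrix
x_B = np.linalg.solve(B, b)              # = b, since B = I
print(x_B)                               # [60. 96.]  (x3 = 60, x4 = 96)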

Case 2 : To find initial B.F.S. when all the original constraints have ≥ sign.
First we convert all the constraints into equations by inserting surplus variables. The
equations obtained are as follows :

a11 x1 + a12 x2 + .... + a1n x n − x n+1 = b1

a21 x1 + a22 x2 + .... + a2 n x n − x n+ 2 = b2


.... .... .... .... ….
.... .... .... .... ….

am1 x1 + am2 x2 + ....+ amn x n − x n+ m = bm.

Here x n+1, x n+ 2 ,...., x n+ m are surplus variables.

In matrix form, these equations can be written as

[ a11  a12  ....  a1n   −1   0 ....  0 ]  [ x1   ]     [ b1 ]
[ a21  a22  ....  a2n    0  −1 ....  0 ]  [ x2   ]     [ b2 ]
[ .... ....  ....  ....  ..  .. .... .. ]  [  ⋮   ]  =  [  ⋮ ]
[ am1  am2  ....  amn    0   0 .... −1 ]  [ xn   ]     [ bm ]
                                          [ xn+1 ]
                                          [  ⋮   ]
                                          [ xn+m ]

If we take the initial basis matrix B = − I m,

we have x B = B −1 b = − I m b = − b ≤ 0, which is not a B.F.S.

To avoid this difficulty we add one more variable to each constraint. These variables are
called “artificial variables”.

Adding surplus and artificial variables, the constraints of the given L.P.P. change to the
following equations :

a11 x1 + a12 x2 + .... + a1n x n − x n +1 + x n+ m+1 = b1

a21 x1 + a22 x2 + ....+ a2 n x n − x n+ 2 + x n+ m+ 2 = b2

.... .... .... .... .... ….

.... .... .... .... .... ….

am1 x1 + am2 x2 + ...+ amn x n − x n+ m + x n+ m+ m = bm

Here x n+ m+1, x n+ m+ 2 ,...., x n+ m+ m are the artificial variables.

In matrix form, these equations can be written as



[ a11  a12  ...  a1n   −1   0 ...  0   1  0 ... 0 ]  [ x1     ]     [ b1 ]
[ a21  a22  ...  a2n    0  −1 ...  0   0  1 ... 0 ]  [  ⋮     ]     [ b2 ]
[ ...  ...  ...  ...   ..  .. ... ..   .. .. ... . ]  [ xn     ]  =  [  ⋮ ]
[ am1  am2  ...  amn    0   0 ... −1   0  0 ... 1 ]  [ xn+1   ]     [ bm ]
                                                     [  ⋮     ]
                                                     [ xn+m   ]
                                                     [ xn+m+1 ]
                                                     [  ⋮     ]
                                                     [ xn+m+m ]

Now taking the basis matrix B = I m.

∴ x B = B −1 b = I m b = b ≥ 0,

which is a basic feasible solution.

∴ The B.F.S. is x n+ m+1 = x B1 = b1, x n+ m+ 2 = x B2 = b2 ,..., x n+ m+ m = x Bm = bm, which


can be obtained by writing all the non-basic variables (i.e., given variables and surplus
variables) x1, x2 ,...., x n; x n+1, x n+ 2 ,...., x n+ m equal to zero and solving the equations for
the remaining basic variables (i.e., artificial variables) x n+ m+1, x n+ m+ 2 ,...., x n+ m+ m.

Case 3 : To find initial B.F.S. when the constraints have ‘≤’, ‘≥’ and ‘=’ signs.

In this case, we convert the constraints into equations by inserting slack, surplus and
artificial variables. Here the basis matrix B = I m is obtained by introducing unit column
vectors corresponding to the slack and artificial variables.

To obtain the initial B.F.S. we put all the non-basic variables equal to zero and solve for
remaining basic variables. Here also the initial B.F.S. x B = b ≥ 0 .

Note :
1. It is important to note that in all the three cases the initial B.F.S. consists of the
constants b1, b2 ,...., bm, (note that b1, b2 ,...., bm are all non-negative).
2. Sometimes an identity matrix is already present in A without introducing artificial variables. In such cases there is no need to introduce artificial variables.

4.11 Computational Procedure of Simplex Method


(The Maximization Problem)
[Meerut 2004]
It consists of the following steps systematically :

Step 1: If the given problem is of minimization, convert it into the maximization


problem.
For this, multiply both sides of the objective function by –1 and put − Z = Z ′.

If v is the maximum value of Z ′, then −v will be the minimum value of Z.

Step 2 : Make all the bi 's non-negative.

The R.H.S. of each of the constraints should be non-negative. If the L.P.P. has a
constraint for which a negative bi is given, it should be converted into positive value by
multiplying both sides of the constraints by –1.

For example, if the given constraint is 7 x1 − 8 x2 ≥ −3, it will change to −7 x1 + 8 x2 ≤ 3.

Step 3 : Convert inequalities of constraints into equations.


For this introduce slack or surplus variables. The coefficients of slack or surplus variables
in the objective function are zero. Introduce artificial variables, if necessary.

Step 4 : Find the initial B.F.S.


Follow the method discussed in 4.10 to find the initial B.F.S. If artificial variables are
introduced in the L.P.P., then follow the two-phase method or Big M-method for solving
such problems.

Step 5 : Construct the initial simplex table (starting simplex table).

(See the initial simplex table below.)

It should be remembered that the values of non-basic variables are always zero at each
iteration.

Step 6 : Test the initial B.F.S. for optimality.


Compute the net evaluation ∆ j for each variable x j by using the formula ∆ j = c j − c B Y j .

Note that in the starting simplex table ∆ j 's are same as c j 's. Also ∆ j 's corresponding to
basic variables are zero.

Optimality Test :
(i) If ∆ j ≤ 0 for all j, the solution under consideration is optimal.

(a) Alternative optimal solution will exist if any ∆ j (for non-basic variable) is also zero.
[Meerut 2006 (BP)]
Initial Simplex Table

            cj →   c1        c2        ...  ck        ...  cn        cn+1      cn+2      ...  cn+m      Min.
B      cB    xB    Y1(=α1)   Y2(=α2)   ...  Yk(=αk)   ...  Yn(=αn)   Yn+1(β1)  Yn+2(β2)  ...  Yn+m(βm)  ratio
Yn+1   cB1=0 xB1=b1  y11=a11  y12=a12  ...  y1k=a1k   ...  y1n=a1n   1         0         ...  0
Yn+2   cB2=0 xB2=b2  y21=a21  y22=a22  ...  y2k=a2k   ...  y2n=a2n   0         1         ...  0
⋮
Yn+r   cBr=0 xBr=br  yr1=ar1  yr2=ar2  ...  yrk=ark   ...  yrn=arn   0   0  ...(1 in the r-th place)... 0   →
⋮
Yn+m   cBm=0 xBm=bm  ym1=am1  ym2=am2  ...  ymk=amk   ...  ymn=amn   0         0         ...  1
Z = cB xB = 0   ∆j   ∆1        ∆2       ...  ∆k ↑      ...  ∆n        ∆n+1      ∆n+2      ...  ∆n+m

(b) If ∆ j < 0 for all j, corresponding to non-basic variables, then the solution is
unique optimal solution.
(ii) If ∆ j > 0 for any j i.e., if at least one ∆ j is positive the solution under test is not

optimal. In this case, we must proceed to improve the solution (step 7).
(iii) If corresponding to maximum positive ∆ j , all elements of the column Y j are negative
or zero, the solution under test will be unbounded.
(iv) If optimality condition is satisfied but the value of at least one artificial variable
present in the basis is non-zero, the problem will have no feasible solution.
Step 7 : Select the entering (incoming) vector and departing (outgoing) vector.

To improve the initial B.F.S. we select the vector entering the basis matrix and the vector
to be removed from the basis matrix by the following rules :

(i) To find Incoming Vector : The incoming vector α k is always selected


corresponding to the largest positive value of ∆ j .
If maximum value of ∆ j occurs for more than one α j , then we can select any of these
vectors as incoming vector.

(ii) To find Outgoing Vector : The departing vector β r is selected corresponding to


that value of r for which
xBr/yrk = min_i { xBi/yik : yik > 0 },

if α k is the incoming vector.

Note : If the above mentioned minimum value is not unique, then more than one
variable will vanish in the next solution. As a result the next solution will be a degenerate
B.F.S. for which the outgoing vector is selected in a different way.

Step 8 : If αk is the entering vector and βr is the outgoing vector, then the element yrk which lies at the intersection of the minimum-ratio arrow (→) and the incoming-vector arrow (↑) is called the pivot element (key element).

We enclose this element in a box.

In order to bring αk into the basis in place of βr, unity must occupy the position of the key element. In other words, the key element yrk should be 1. If it is not so, divide all the elements of this row by the key element yrk. Then subtract suitable multiples of the row containing the key element from all other rows to obtain zeros at all other positions of the column Yk. Now bring Yk (= αk) in place of βr and construct the next simplex table.

Thus, we get an improved basic feasible solution.

Step 9 : Test the improved B.F.S. for optimality.

If it is not optimum, repeat steps 7 and 8 until we obtain an optimum solution.
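
Steps 5–9 can be collected into one short routine. The following is a compact, hedged sketch (Python/NumPy) for a maximization problem already in standard form (A x = b, b ≥ 0) whose last m columns form the identity basis; it mirrors the tableau arithmetic of this article but is an illustration, not production code:

import numpy as np

def simplex_max(c, A, b):
    m, n = A.shape
    basis = list(range(n - m, n))            # slack columns assumed basic
    T = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    while True:
        c_B = c[basis]
        deltas = c - c_B @ T[:, :n]          # net evaluations Delta_j (step 6)
        if np.all(deltas <= 1e-9):
            x = np.zeros(n)
            x[basis] = T[:, -1]
            return x, c @ x                  # optimal solution and value
        k = int(np.argmax(deltas))           # incoming vector (step 7(i))
        col = T[:, k]
        if np.all(col <= 0):
            raise ValueError("unbounded solution")
        ratios = [T[i, -1] / col[i] if col[i] > 0 else np.inf for i in range(m)]
        r = int(np.argmin(ratios))           # outgoing vector (step 7(ii))
        T[r] /= T[r, k]                      # make the key element unity (step 8)
        for i in range(m):
            if i != r:
                T[i] -= T[i, k] * T[r]
        basis[r] = k

# Example 1 below: Max Z = 40x1 + 35x2 with slacks x3, x4 appended.
c = np.array([40.0, 35.0, 0.0, 0.0])
A = np.array([[2.0, 3.0, 1.0, 0.0], [4.0, 3.0, 0.0, 1.0]])
x, Z = simplex_max(c, A, np.array([60.0, 96.0]))
print(x, Z)    # [18. 8. 0. 0.] 1000.0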



Example 1: Maximize Z = 40 x1 + 35 x2

subject to 2 x1 + 3 x2 ≤ 60

4 x1 + 3 x2 ≤ 96,

x1, x2 ≥ 0 . [Meerut 2008]

For the clear understanding of the simplex method the solution to this problem is
illustrated below in a step-wise manner.

Solution: Step 1: The given problem is of maximization and all the bi's are already
positive.

Step 2 : Converting the inequalities of constraints into equations by introducing slack


variables x3 and x4 , we get

2 x1 + 3 x2 + x3 = 60 and 4 x1 + 3 x2 + x4 = 96

The coefficients of slack variables are zero in the objective function. Therefore, the given
problem becomes

Maximize Z = 40 x1 + 35 x2 + 0 x3 + 0 x4

subject to 2 x1 + 3 x2 + x3 + 0 x4 = 60

4 x1 + 3 x2 + 0 x3 + x4 = 96

x1, x2 , x3 , x4 ≥ 0.

This is the standardised form of the given problem.

Step 3 : The simplex method begins with an initial basic feasible solution in which the values of the non-basic variables are zero.

∴ x1 = 0, x2 = 0, which gives x3 = 60, x4 = 96.

Step 4 : Constructing the initial simplex table.

First Simplex Table

              cj →   40    35    0        0
B     cB    xB       Y1    Y2    Y3(β1)   Y4(β2)    Min. ratio xB/Y1, yi1 > 0
Y3    0     60       2     3     1        0         60/2 = 30
Y4    0     96       4     3     0        1         96/4 = 24 (min) →
Z = cB xB = 0   ∆j   40    35    0        0
                     ↑

Step 5 : Computing ∆ j for all non-basic variables x j , j = 1, 2, using the formula

∆ j = c j − c BY j

∆1 = c1 − c B Y1 = 40 − (0, 0) (2, 4) = 40

∆2 = c2 − c B Y2 = 35 − (0, 0) (3, 3) = 35.

Since all ∆ j 's are not less than or equal to zero therefore the solution is not optimal. So we
proceed to next step to improve the solution.

Step 6 : Incoming and outgoing vectors.

Since ∆1 = 40 is the maximum value of ∆ j , j = 1, 2, therefore α 1 = Y1 is the incoming

vector.

To find outgoing vector

xB x x   60 96 
Now =  B1 , B2  =  ,  = (30, 24).
Y1  y11 y21   2 4 

x Br x  x x  x
= min  Bi , yi1 > 0  = min  B1 , B2  = 24 = B2
yr1 i  yi1  i  y11 y21  y21

∴ r = 2 which gives β2 = Y4 as the outgoing vector.

∴ y21 = a21 is the key element which is equal to 4.

To bring Y1 in place of Y4 we proceed in the following manner :

Dividing the second row containing the key element a21 = 4 by 4 to make it unity.

Now we shall subtract appropriate multiples of this new row from the other remaining
row so as to obtain zeros in the remaining positions of Y1.

Subtract 2 times this new second row from the first row, and subtract 40 times it from the row containing the ∆j's to obtain the new values of the ∆j's.

Constructing the second simplex table in which β2 (Y4 ) is replaced by α1(= Y 1).
Second Simplex Table

              cj →   40       35    0        0
B     cB    xB       Y1(β2)   Y2    Y3(β1)   Y4       Min. ratio xB/Y2, yi2 > 0
Y3    0     12       0        3/2   1        −1/2     12/(3/2) = 8 (min) →
Y1    40    24       1        3/4   0        1/4      24/(3/4) = 32
Z = cB xB = 960  ∆j  0        5     0        −10
                              ↑

Also computing ∆j by using the formula ∆j = cj − cB Yj for x2, x4, we get

∆2 = c2 − cB Y2 = 35 − (0, 40) (3/2, 3/4) = 5

∆4 = c4 − cB Y4 = 0 − (0, 40) (−1/2, 1/4) = −10

∆1 and ∆3 are zero as they correspond to unit column vectors.

Since all ∆ j 's are not less than or equal to zero therefore the solution obtained is not
optimal.

∆2 = 5 is the maximum value of ∆ j .

∴ α 2 = Y2 is the incoming vector.

x Br x 
Now = min  Bi , yi2 > 0 
yr2 i  yi2 

x x  x B1
= min  B1 , B2  = min [8, 32] = 8 =
 y12 y22  y12

∴ r = 1 which gives β1 = Y3 as the outgoing vector.


3
∴ y12 = a12 = is the key element.
2

To bring Y2 in place of Y3 we proceed in the following manner :

Dividing the first row by 3/2 to make the key element unity.

Subtracting 3/4 times of the first row thus obtained from the second row and subtracting
5 times of it from the row containing ∆ j 's to obtain new values of ∆ j 's.

Constructing the third simplex table in which Y3 is replaced by Y2 .

Third Simplex Table

              cj →   40       35       0        0
B     cB    xB       Y1(β2)   Y2(β1)   Y3       Y4
Y2    35    8        0        1        2/3      −1/3
Y1    40    18       1        0        −1/2     1/2
Z = cB xB = 1000  ∆j 0        0        −10/3    −25/3
Since all ∆ j ≤ 0, therefore the above solution is optimal.


Hence the optimal solution is
x1 = 18, x2 = 8 and Max. Z = 1000.

This solution is the unique optimal solution, since ∆3, ∆4 < 0 for the non-basic variables x3, x4.

The above solution with its different computational steps can be more conveniently
represented by a single table as shown below :

              cj →   40    35    0       0
B     cB    xB       Y1    Y2    Y3      Y4       Min. ratio xB/Y1, yi1 > 0
Y3    0     60       2     3     1       0        60/2 = 30
Y4    0     96       4     3     0       1        96/4 = 24 (min) →
Z = cB xB = 0   ∆j   40    35    0       0        (next: xB/Y2, yi2 > 0)
                     ↑
Y3    0     12       0     3/2   1       −1/2     8 (min) →
Y1    40    24       1     3/4   0       1/4      32
Z = cB xB = 960  ∆j  0     5     0       −10
                           ↑
Y2    35    8        0     1     2/3     −1/3
Y1    40    18       1     0     −1/2    1/2
Z = cB xB = 1000 ∆j  0     0     −10/3   −25/3
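
The answer can also be cross-checked with SciPy's linprog, which minimizes, so we negate the objective. This assumes SciPy is available and is only a verification, not part of the book's method:

from scipy.optimize import linprog

res = linprog(c=[-40, -35],
              A_ub=[[2, 3], [4, 3]],
              b_ub=[60, 96],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # approximately [18. 8.] and 1000.0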

Example 2: Solve by simplex method the following L.P.P. :


Maximize Z = 3 x1 + 5 x2 + 4 x3

subject to 2 x1 + 3 x2 ≤ 8, 2 x2 + 5 x3 ≤ 10

3 x1 + 2 x2 + 4 x3 ≤ 15, x1, x2 , x3 ≥ 0 . [Meerut 2007 (BP), 11; Kanpur 2010]

Solution: The given problem is of maximization and all the bi's are positive.

Converting the inequalities of constraints into equations by introducing slack variables


x4 , x5 , x6 . Also the coefficients of slack variables are zero in the objective function. Thus
the given problem becomes

Max Z = 3 x1 + 5 x2 + 4 x3 + 0 x4 + 0 x5 + 0 x6

subject to 2 x1 + 3 x2 + 0 x3 + x4 =8

0 x1 + 2 x2 + 5 x3 + x5 = 10

3 x1 + 2 x2 + 4 x3 + x6 = 15

x1, x2 ,..., x6 ≥ 0.

Taking x1 = 0, x2 = 0, x3 = 0, we get x4 = 8, x5 = 10, x6 = 15, which is the initial B.F.S.

The solution to the problem using simplex method is given in the following table :

              cj →  3      5     4    0       0      0
B     cB    xB      Y1     Y2    Y3   Y4      Y5     Y6      Min. ratio xB/Y2, yi2 > 0
Y4    0     8       2      3     0    1       0      0       8/3 (min) →
Y5    0     10      0      2     5    0       1      0       10/2 = 5
Y6    0     15      3      2     4    0       0      1       15/2
Z = cB xB = 0   ∆j  3      5     4    0       0      0       (next: xB/Y3, yi3 > 0)
                           ↑
Y2    5     8/3     2/3    1     0    1/3     0      0       —
Y5    0     14/3    −4/3   0     5    −2/3    1      0       14/15 (min) →
Y6    0     29/3    5/3    0     4    −2/3    0      1       29/12
Z = cB xB = 40/3  ∆j −1/3  0     4    −5/3    0      0       (next: xB/Y1, yi1 > 0)
                                 ↑
Y2    5     8/3     2/3    1     0    1/3     0      0       4
Y3    4     14/15   −4/15  0     1    −2/15   1/5    0       —
Y6    0     89/15   41/15  0     0    −2/15   −4/5   1       89/41 (min) →
Z = cB xB = 256/15  ∆j 11/15 0   0    −17/15  −4/5   0
                       ↑
Y2    5     50/41   0      1     0    15/41   8/41   −10/41
Y3    4     62/41   0      0     1    −6/41   5/41   4/41
Y1    3     89/41   1      0     0    −2/41   −12/41 15/41
Z = cB xB = 765/41  ∆j 0   0     0    −45/41  −24/41 −11/41

In the last table all ∆j's ≤ 0, therefore it gives the optimal solution.

∴ Optimal solution is

x1 = 89/41, x2 = 50/41, x3 = 62/41 and Max. Z = 765/41

Example 3: Solve by simplex method the following L.P.P. :


Maximize Z = 3 x1 + 2 x2 + 5 x3

subject to x1 + 2 x2 + x3 ≤ 430

3 x1 + 2 x3 ≤ 460

x1 + 4 x2 ≤ 420 ,

x1, x2 , x3 ≥ 0 . [Meerut 2008 (BP), 12; Gorakhpur 2010, 11]

Solution: The given problem is of maximization and all the bi's are positive.

Converting the inequalities of the constraints into equations by introducing slack


variables x4 , x5 and x6 the given problem becomes

Max. Z = 3 x1 + 2 x2 + 5 x3 + 0 x4 + 0 x5 + 0 x6

subject to x1 + 2 x2 + x3 + x4 = 430

3 x1 + 0 x2 + 2 x3 + x5 = 460

x1 + 4 x2 + 0 x3 + x6 = 420

x1, x2 ,..., x6 ≥ 0.

Taking x1 = 0, x2 = 0, x3 = 0, we get x4 = 430, x5 = 460, x6 = 420 which is the initial


B.F.S.

The solution to the problem using simplex algorithm is given in the following table :

              cj →  3      2    5     0     0      0
B     cB    xB      Y1     Y2   Y3    Y4    Y5     Y6     Min. ratio xB/Y3, yi3 > 0
Y4    0     430     1      2    1     1     0      0      430
Y5    0     460     3      0    2     0     1      0      230 (min) →
Y6    0     420     1      4    0     0     0      1      —
Z = cB xB = 0   ∆j  3      2    5     0     0      0      (next: xB/Y2, yi2 > 0)
                                ↑
Y4    0     200     −1/2   2    0     1     −1/2   0      100 (min) →
Y3    5     230     3/2    0    1     0     1/2    0      —
Y6    0     420     1      4    0     0     0      1      420/4
Z = 1150    ∆j      −9/2   2    0     0     −5/2   0
                           ↑
Y2    2     100     −1/4   1    0     1/2   −1/4   0
Y3    5     230     3/2    0    1     0     1/2    0
Y6    0     20      2      0    0     −2    1      1
Z = 1350    ∆j      −4     0    0     −1    −2     0

In the last table all ∆ j ≤ 0, therefore the solution is optimal.

Hence, the optimal solution is

x1 = 0, x2 = 100, x3 = 230 and Max. Z = 1350.

Example 4: Solve by simplex method the following L.P.P. :


Minimize Z = x2 − 3 x3 + 2 x5

subject to 3 x2 − x3 + 2 x5 ≤ 7

−2 x2 + 4 x3 ≤ 12

−4 x2 + 3 x3 + 8 x5 ≤ 10 ,

x2 , x3 , x5 ≥ 0 . [Meerut 2005, 10]

Solution: The given problem is of minimization. Convert it to maximization problem by


taking the objective function as Z ′ = − Z.

The objective function becomes

Max. Z ′ = − Z = − x2 + 3 x3 − 2 x5 .

Converting the inequalities of constraints into equations introducing slack variables


x1, x4 and x6 , the given problem becomes

Max. Z ′ = − Z = 0 . x1 − x2 + 3 x3 + 0 . x4 − 2 x5 + 0 . x6

subject to x1 + 3 x2 − x3 + 0 x4 + 2 x5 + 0 x6 = 7

0 x1 − 2 x2 + 4 x3 + x4 + 0 x5 + 0 x6 = 12

0 x1 − 4 x2 + 3 x3 + 0 x4 + 8 x5 + x6 = 10,

x1, x2 ,..., x6 ≥ 0.

Taking x2 = 0, x3 = 0, x5 = 0, we get x1 = 7, x4 = 12, x6 = 10, which is the initial B.F.S.

The solution to the problem using simplex algorithm is given below :

              cj →  0     −1     3     0      −2     0
B     cB    xB      Y1    Y2     Y3    Y4     Y5     Y6     Min. ratio xB/Y3, yi3 > 0
Y1    0     7       1     3      −1    0      2      0      —
Y4    0     12      0     −2     4     1      0      0      3 (min) →
Y6    0     10      0     −4     3     0      8      1      10/3
Z′ = cB xB = 0  ∆j  0     −1     3     0      −2     0      (next: xB/Y2, yi2 > 0)
                                 ↑
Y1    0     10      1     5/2    0     1/4    2      0      4 (min) →
Y3    3     3       0     −1/2   1     1/4    0      0      —
Y6    0     1       0     −5/2   0     −3/4   8      1      —
Z′ = 9      ∆j      0     1/2    0     −3/4   −2     0
                          ↑
Y2    −1    4       2/5   1      0     1/10   4/5    0
Y3    3     5       1/5   0      1     3/10   2/5    0
Y6    0     11      1     0      0     −1/2   10     1
Z′ = 11     ∆j      −1/5  0      0     −4/5   −12/5  0

Since all ∆ j 's ≤ 0, therefore the solution given by last table is optimal.

Hence, the optimal solution is

x2 = 4, x3 = 5, x5 = 0 and Min. Z = − Z ′ = −11

Note : Another form of Example 4 is

Min. Z = x1 − 3 x2 + 2 x3

subject to 3 x1 − x2 + 2 x3 ≤ 7

−2 x1 + 4 x2 ≤ 12

−4 x1 + 3 x2 + 8 x3 ≤ 10, x1, x2 , x3 ≥ 0

Example 5: Solve by simplex method the following L.P.P. :


Maximize Z = 2 x1 + 5 x2 + 7 x3

subject to 3 x1 + 2 x2 + 4 x3 ≤ 100

x1 + 4 x2 + 2 x3 ≤ 100

x1 + x2 + 3 x3 ≤ 100 ,

x1, x2 , x3 ≥ 0 . [Meerut 2009, 11 (BP); Kanpur 2009]

Solution: The given problem is of maximization and all bi's are positive.

Introducing slack variables x4 , x5 , x6 to convert constraint inequalities into equations,


the given problem becomes

Z = 2 x1 + 5 x2 + 7 x3 + 0 x4 + 0 x5 + 0 x6

subject to 3 x1 + 2 x2 + 4 x3 + x4 = 100

x1 + 4 x2 + 2 x3 + x5 = 100

x1 + x2 + 3 x3 + x6 = 100.

The starting B.F.S. is

x1 = 0, x2 = 0, x3 = 0, x4 = 100, x5 = 100, x6 = 100.

The solution to the problem using simplex method is given below :

              cj →  2      5      7    0      0      0
B     cB    xB      Y1     Y2     Y3   Y4     Y5     Y6     Min. ratio xB/Y3, yi3 > 0
Y4    0     100     3      2      4    1      0      0      25 (min) →
Y5    0     100     1      4      2    0      1      0      50
Y6    0     100     1      1      3    0      0      1      100/3
Z = cB xB = 0   ∆j  2      5      7    0      0      0      (next: xB/Y2, yi2 > 0)
                                  ↑
Y3    7     25      3/4    1/2    1    1/4    0      0      50
Y5    0     50      −1/2   3      0    −1/2   1      0      50/3 (min) →
Y6    0     25      −5/4   −1/2   0    −3/4   0      1      neg.
Z = cB xB = 175  ∆j −13/4  3/2    0    −7/4   0      0
                           ↑
Y3    7     50/3    5/6    0      1    1/3    −1/6   0
Y2    5     50/3    −1/6   1      0    −1/6   1/3    0
Y6    0     100/3   −4/3   0      0    −5/6   1/6    1
Z = cB xB = 200  ∆j −3     0      0    −3/2   −1/2   0

In the last table all ∆ j ≤ 0, therefore the solution is optimal.

Hence, the optimal solution is

x1 = 0, x2 = 50/3, x3 = 50/3 and Max. Z = 200.

4.12 Artificial Variables Technique


Sometimes in linear programming problems constraints may also have ≥ and = signs after
ensuring that all bi ≥ 0. In such problems, we first introduce surplus variables to convert
inequalities into equations. In such cases, basis matrix is not obtained as an identity
matrix in the starting simplex table. To overcome this difficulty we introduce new
variables, called, the artificial variables to each of such constraints. These variables are
fictitious and cannot have any physical meaning.

The artificial variables are introduced for the limited purpose of obtaining an initial
solution. It is not relevant whether the objective function is of the maximization or
minimization type. Since artificial variables do not represent any quantity relating to the
decision problem they must be driven out of the system and must not be present in the
final solution (if at all they do, it represents a situation of infeasibility which is discussed
later in this chapter).

The L.P.P. involving artificial variables can be solved by two methods :


1. Method 1 : Big M-Method (Charnes' M-Method)
2. Method 2 : Two Phase Method

4.12.1 Method 1: Big M-Method (Charnes' M-Method)

For the solution of the L.P.P. involving artificial variables a method known as the Big-M Method or Charnes' M-Method was developed by A. Charnes. In this method, a very large negative price, say −M, is assigned to each artificial variable in the objective function.

∴ The objective function is written as

Z = c x + 0 . x_slack + 0 . x_surplus − M . x_artificial

Due to this large negative price −M in the objective function, it cannot be improved in the presence of the artificial variables. So first we have to remove these artificial vectors from the basis matrix. Once an artificial column vector corresponding to an artificial variable leaves the basis, we forget about it forever and never consider it to enter into the basis again at any iteration.

For clear understanding of the procedure see following examples :

Note : A solution to the problem which does not contain an artificial variable in the
basis, represents a feasible solution to the problem.

Example 1: Apply Big-M Method to solve the following L.P.P. :

Max. Z = 2 x1 + 4 x2

subject to 2 x1 + x2 ≤ 18

3 x1 + 2 x2 ≥ 30

x1 + 2 x2 = 26

x1, x2 ≥ 0 . [Meerut 2007 (BP), 10, 12 (BP)]



Solution: The problem is of maximization and all bi's are positive.

Introducing the necessary slack variable x3, surplus variable x4 and artificial variables xa1, xa2, and assigning large negative costs −M to the artificial variables, the problem reduces to the form
to the form

Max. Z = 2 x1 + 4 x2 + 0 . x3 + 0 . x4 − M xa1 − M xa2

subject to 2 x1 + x2 + x3 = 18

3 x1 + 2 x2 − x4 + xa1 = 30

x1 + 2 x2 + xa2 = 26.

Taking x1 = 0, x2 = 0, x4 = 0, we get x3 = 18, xa1 = 30, xa2 = 26, which is the initial (starting) B.F.S.

The solution to the problem using simplex method is given in the following table :

              cj →  2       4       0    0     −M   −M
B     cB    xB      Y1      Y2      Y3   Y4    A1   A2     Min. ratio xB/Y2, yi2 > 0
Y3    0     18      2       1       1    0     0    0      18
A1    −M    30      3       2       0    −1    1    0      15
A2    −M    26      1       2       0    0     0    1      13 (min) →
Z = cB xB = −56M  ∆j 2+4M   4+4M    0    −M    0    0      (next: xB/Y1, yi1 > 0)
                            ↑
Y3    0     5       3/2     0       1    0     0           10/3
A1    −M    4       2       0       0    −1    1           2 (min) →
Y2    4     13      1/2     1       0    0     0           26
Z = 52 − 4M  ∆j     2M      0       0    −M    0
                    ↑
Y3    0     2       0       0       1    3/4
Y1    2     2       1       0       0    −1/2
Y2    4     12      0       1       0    1/4
Z = 52      ∆j      0       0       0    0

Computation of ∆ j
For the first table
∆1 = c1 − c BY1 = 2 − (0, − M, − M) (2, 3, 1) = 2 + 4 M

∆2 = c2 − c BY2 = 4 − (0, − M, − M) (1, 2, 2) = 4 + 4 M

∆4 = c4 − c BY4 = 0 − (0, − M, − M) (0, − 1, 0) = − M

For the second table,

∆1 = c1 − cB Y1 = 2 − (0, −M, 4) (3/2, 2, 1/2) = 2M

∆2 = 0 = ∆3 = ∆5,  ∆4 = c4 − cB Y4 = 0 − (0, −M, 4) (0, −1, 0) = −M

For the third table

∆1 = 0 = ∆2 = ∆3 , ∆4 = c4 − c B Y4 = 0 − (0, 2, 4) (3 / 4, − 1 / 2, 1 / 4) = 0.

In the last table all ∆ j ≤ 0, and no artificial variable appears in the basis, therefore this
solution is optimal.

Hence, the optimal solution is x1 = 2, x2 = 12 and Max. Z = 52.
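
The Big-M idea can also be imitated numerically: solve the same standardised problem with a large finite M and check that the artificial variables leave the solution. A hedged sketch with SciPy (the value M = 1e6 is an illustrative choice, not part of the method; variable order is x1, x2, x3 (slack), x4 (surplus), xa1, xa2):

import numpy as np
from scipy.optimize import linprog

M = 1e6                                       # illustrative "big M"
c = [-2, -4, 0, 0, M, M]                      # minimize -Z, artificials penalised
A_eq = [[2, 1, 1, 0, 0, 0],
        [3, 2, 0, -1, 1, 0],
        [1, 2, 0, 0, 0, 1]]
b_eq = [18, 30, 26]
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 6)
print(np.round(res.x, 6), -res.fun)           # x1 = 2, x2 = 12, artificials 0, Z = 52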

Example 2: Apply Big-M Method to solve the following L.P.P. :


Max. Z = x1 + 2 x2 + 3 x3 − x4

subject to x1 + 2 x2 + 3 x3 = 15

2 x1 + x2 + 5 x3 = 20

x1 + 2 x2 + x3 + x4 = 10 ,

x1, x2 , x3 , x4 ≥ 0 . [Meerut 2009; Gorakhpur 2008]

Solution: The given problem is of maximization and all bi's are positive. Also the
constraints are equations. Examining the constraints we observe that in order to obtain a
unit matrix of order 3 we need two more unit vectors as one unit vector is formed by the
coefficients of x4. Therefore, introducing two artificial variables xa1 and xa2 in the first two constraints and assigning the large negative cost −M to the artificial variables, the given problem becomes

Max. Z = x1 + 2 x2 + 3 x3 − x4 − M xa1 − M xa2

subject to x1 + 2 x2 + 3 x3 + xa1 = 15

2 x1 + x2 + 5 x3 + xa2 = 20

x1 + 2 x2 + x3 + x4 = 10.

Taking x1 = 0, x2 = 0, x3 = 0, we get xa1 = 15, xa2 = 20, x4 = 10, which is the initial B.F.S.

The solution to the problem using simplex method is given in the following table :

              cj →  1          2           3      −1   −M   −M
B     cB    xB      Y1         Y2          Y3     Y4   A1   A2     Min. ratio xB/Y3, yi3 > 0
A1    −M    15      1          2           3      0    1    0      5
A2    −M    20      2          1           5      0    0    1      4 (min) →
Y4    −1    10      1          2           1      1    0    0      10
Z = −35M − 10  ∆j   3M+2       3M+4        8M+4   0    0    0      (next: xB/Y2, yi2 > 0)
                                           ↑
A1    −M    3       −1/5       7/5         0      0    1           15/7 (min) →
Y3    3     4       2/5        1/5         1      0    0           20
Y4    −1    6       3/5        9/5         0      1    0           10/3
Z = −3M + 6  ∆j     (−M+2)/5   (7M+16)/5   0      0    0           (next: xB/Y1, yi1 > 0)
                               ↑
Y2    2     15/7    −1/7       1           0      0                neg.
Y3    3     25/7    3/7        0           1      0                25/3
Y4    −1    15/7    6/7        0           0      1                5/2 (min) →
Z = 90/7    ∆j      6/7        0           0      0
                    ↑
Y2    2     5/2     0          1           0      1/6
Y3    3     5/2     0          0           1      −1/2
Y1    1     5/2     1          0           0      7/6
Z = 15      ∆j      0          0           0      −1

Computation of ∆ j . By ∆ j = c j − c B Y j

For the first table,

∆1 = c1 − c B Y1 = 1 − (− M, − M, − 1) (1, 2, 1) = 3 M + 2,

∆2 = c2 − c B Y2 = 2 − (− M, − M, − 1) (2, 1, 2) = 3 M + 4

∆3 = c3 − c B Y3 = 3 − (− M, − M, − 1) (3, 5, 1) = 8 M + 4

∆4 = 0 = ∆5 = ∆6

For the second table,

∆1 = 1 − (−M, 3, −1) (−1/5, 2/5, 3/5) = (−M + 2)/5,

∆2 = 2 − (−M, 3, −1) (7/5, 1/5, 9/5) = (7M + 16)/5

∆3 = 0 = ∆4 = ∆5

For the third table,

∆1 = 1 − (2, 3, −1) (−1/7, 3/7, 6/7) = 6/7,  ∆2 = 0 = ∆3 = ∆4

In the last table all ∆ j ≤ 0, therefore this solution is optimal.

Hence, the optimal solution is

x1 = x2 = x3 = 5/2, x4 = 0 and Max. Z = 15

Infeasible Solution
Example 3: Apply Big-M method to solve the L.P.P.
Max. Z = − x1 − x2

subject to 3 x1 + 2 x2 ≥ 30

−2 x1 + 3 x2 ≤ −30

x1 + x2 ≤ 5,

x1, x2 ≥ 0 . [Meerut 2008, 09 (BP)]

Solution: The given problem is of maximization.

Since bi in second constraint is negative therefore multiplying both sides by −1, we get

2 x1 − 3 x2 ≥ 30.

To convert inequalities of constraints into equations, introducing slack, surplus and


artificial variables wherever necessary, the given problem becomes

Max. Z = − x1 − x2 + 0 x3 + 0 x4 + 0 x5 − M xa1 − M xa2

subject to 3 x1 + 2 x2 − x3 + xa1 = 30

2 x1 − 3 x2 − x4 + xa2 = 30

x1 + x2 + x5 = 5

where x1, x2, x3, x4, x5, xa1, xa2 ≥ 0.

Taking x1 = 0, x2 = 0, x3 = 0, x4 = 0, we get x5 = 5, xa1 = 30, xa2 = 30, which is the initial B.F.S.

The solution to the problem using simplex method is given below :

              cj →  −1      −1       0    0    0        −M   −M
B     cB    xB      Y1      Y2       Y3   Y4   Y5       A1   A2     Min. ratio xB/Y1, yi1 > 0
A1    −M    30      3       2        −1   0    0        1    0      10
A2    −M    30      2       −3       0    −1   0        0    1      15
Y5    0     5       1       1        0    0    1        0    0      5 (min) →
Z = cB xB = −60M  ∆j 5M−1   −(M+1)   −M   −M   0        0    0
                     ↑
A1    −M    15      0       −1       −1   0    −3       1    0
A2    −M    20      0       −5       0    −1   −2       0    1
Y1    −1    5       1       1        0    0    1        0    0
Z = −35M − 5  ∆j    0       −6M      −M   −M   −(5M−1)  0    0

Since all ∆j ≤ 0, the solution obtained is optimal. But the artificial column vectors A1, A2 (corresponding to the artificial variables xa1, xa2) appear in the basis at positive levels, which implies that the given L.P.P. has no feasible solution.

Note : Sometimes the constraints may be inconsistent so that there is no feasible solution to the problem. Such a situation is called infeasibility. In the case of infeasibility, one or more artificial variables appear in the basis at a positive level in the final simplex table.
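
The same infeasibility can be confirmed numerically; with SciPy, linprog reports an infeasible status (status code 2 with the default solver) for the constraints of Example 3. Illustrative only:

from scipy.optimize import linprog

res = linprog(c=[1, 1],                       # minimize x1 + x2 (= -Z)
              A_ub=[[-3, -2], [-2, 3], [1, 1]],
              b_ub=[-30, -30, 5])
print(res.status, res.success)                # 2 False -> no feasible solution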

4.12.2 Method 2: Two Phase Method [Kanpur 2012]

As an alternative to the Big-M method, there is another method for dealing with linear
programming problems involving artificial variables. This is called the two phase
method and as its name suggests it separates the solution procedure into two phases. In
phase I, all the artificial variables are eliminated from the basis.

In phase II, we use the solution from phase I as the initial basic feasible solution and use
the simplex method to determine the optimal solution.

4.12.2.1 Computational Procedure of Two Phase Method


Phase I : Step 1 : After making all bi's positive we convert each of the constraints into
equations, by introducing slack, surplus and artificial variables.

Step 2 : Assign zero coefficient to each of the primary, slack and surplus variables and the
coefficient (–1) to each of the artificial variables in the objective function. As a result the
new objective function is

Z ′ = − (sum of the artificial variables)

Step 3 : Solve the problem formed in step 2 by applying the simplex method. If the
original problem has a feasible solution, then this problem shall have an optimal solution
with optimal value of the objective function Z ′ equal to zero as each of the artificial
variables will be equal to zero.

If max Z ′ < 0 and at least one artificial variable appears in the optimum basis at a positive
level, then the given problem does not possess any feasible solution.

If max Z ′ = 0 and at least one artificial variable appears in the optimum basis at zero level,
we proceed to phase II.

If max Z ′ = 0 and no artificial variable appears in the optimum basis then also we proceed
to phase II.

Phase II : In phase II start with the optimal solution contained in the final simplex table
of the phase I. Assign the actual costs to the variables in the objective function and a zero
cost to every slack and surplus variable. Eliminate the artificial variables which are
non-basic at the end of the phase-I. Remove c j row values of the optimum table and
replace them by c j values of the original problem. Now apply simplex algorithm to the
problem contained in the new table to obtain the optimal solution.
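
Phase I itself is just an auxiliary L.P.P. and can be stated as code. A hedged sketch using the data of Example 1 below (variable order x1, x2, x3, x4, xa1, xa2; SciPy assumed available): we minimize the sum of the artificial variables, and an optimum of 0 certifies that a feasible basis for the original problem exists.

from scipy.optimize import linprog

A_eq = [[20, 50, -1, 0, 1, 0],
        [80, 50, 0, -1, 0, 1]]
b_eq = [4800, 7200]
phase1 = linprog(c=[0, 0, 0, 0, 1, 1],        # Phase I objective: xa1 + xa2
                 A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 6)
print(phase1.fun)   # 0.0 -> the original constraints are feasible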

4.12.3 Disadvantages of Big-M Method Over Two Phase Method


1. We can always use Big-M method to check the existence of a feasible solution but
its computational procedure may be inconvenient because of the manipulation of
the constant M. Two phase method eliminates the artificial variables in the
beginning.
2. When we solve a problem on a digital computer we have to assign some numerical
value to M which must be larger than the values c1, c2 ,... present in the objective
function. But a computer has only a fixed number of digits.

Example 1: Solve the following problem by using the two phase method :

Min. Z = 40 x1 + 24 x2

subject to 20 x1 + 50 x2 ≥ 4800

80 x1 + 50 x2 ≥ 7200 , x1, x2 ≥ 0 . [Meerut 2008(BP)]

Solution: The given problem is of minimization. Converting it to maximization by


taking the objective function as
Z ′ = − Z = −40 x1 − 24 x2

Introducing the surplus variables x3, x4 and artificial variables xa1, xa2, the constraint inequalities reduce to the following equations :

20 x1 + 50 x2 − x3 + xa1 = 4800

80 x1 + 50 x2 − x4 + xa2 = 7200,

x1, x2, x3, x4, xa1, xa2 ≥ 0.

Phase I : Assigning cost −1 to the artificial variables and cost 0 to all other variables, the new objective function of the auxiliary problem becomes

Max. Z″ = 0 x1 + 0 x2 + 0 x3 + 0 x4 − xa1 − xa2

subject to the constraints given above.

Taking x1 = 0, x2 = 0, x3 = 0, x4 = 0, we get xa1 = 4800, xa2 = 7200, which is the initial B.F.S.

Now applying the simplex method in the usual manner, we have the following table :

              cj →  0      0      0      0       −1   −1
B     cB    xB      Y1     Y2     Y3     Y4      A1   A2     Min. ratio xB/Y1, yi1 > 0
A1    −1    4800    20     50     −1     0       1    0      4800/20 = 240
A2    −1    7200    80     50     0      −1      0    1      7200/80 = 90 (min) →
Z″ = cB xB = −12000  ∆j 100  100  −1     −1      0    0      (next: xB/Y2, yi2 > 0)
                        ↑
A1    −1    3000    0      75/2   −1     1/4     1           80 (min) →
Y1    0     90      1      5/8    0      −1/80   0           144
Z″ = −3000  ∆j      0      75/2   −1     1/4     0
                           ↑
Y2    0     80      0      1      −2/75  1/150
Y1    0     40      1      0      1/60   −1/60
Z″ = 0      ∆j      0      0      0      0

Since all ∆ j ≤ 0 and no artificial variable appears in the basis therefore an optimum
solution to the auxiliary problem has been attained.

Phase II : Now assigning the actual costs to the original variables and cost zero to the
surplus variables, the objective function becomes

Max. Z ′ = −40 x1 − 24 x2 + 0 x3 + 0 x4 .

Replace the cj row values in the final simplex table of phase I by the cj values of the original objective function. Also delete the artificial variables from the final simplex table of phase I; this gives the first simplex table of phase II. The solution of the problem, applying the simplex method in the usual manner, is given in the following table :

              cj →  −40    −24    0      0
B     cB    xB      Y1     Y2     Y3     Y4        Min. ratio xB/Y3, yi3 > 0
Y2    −24   80      0      1      −2/75  1/150     neg.
Y1    −40   40      1      0      1/60   −1/60     2400 (min) →
Z′ = cB xB = −3520  ∆j 0   0      2/75   −38/75
                                  ↑
Y2    −24   144     8/5    1      0      −1/50
Y3    0     2400    60     0      1      −1
Z′ = −3456  ∆j      −8/5   0      0      −12/25

Since all ∆ j ≤ 0, therefore the solution obtained is optimal.

Hence the optimal solution is x1 = 0, x2 = 144 and Min. Z = − Z ′ = 3456.

Infeasible Solution
Example 2: Solve the following L.P.P. by using the two phase method :

Min. Z = x1 − 2 x2 − 3 x3

subject to −2 x1 + x2 + 3 x3 = 2

2 x1 + 3 x2 + 4 x3 = 1

x1, x2 , x3 ≥ 0 . [Meerut 2005, 09 (BP), 10]

Solution: Converting the objective function to maximization form by substituting


Z = − Z ′, we get

Max. Z ′ = − x1 + 2 x2 + 3 x3

Introducing the artificial variables xa1 and xa2, the constraint equations become

−2 x1 + x2 + 3 x3 + xa1 = 2

2 x1 + 3 x2 + 4 x3 + xa2 = 1

where x1, x2, x3, xa1, xa2 ≥ 0.

Phase I : Assigning cost −1 to the artificial variables and cost 0 to all other variables, the new objective function of the auxiliary problem is

Max. Z′ = 0 x1 + 0 x2 + 0 x3 − xa1 − xa2,

subject to the constraints mentioned above.

Taking x1 = 0, x2 = 0, x3 = 0, we get xa1 = 2, xa2 = 1, which is the starting B.F.S.

Now apply simplex method in the usual manner to remove artificial variables.

              cj →  0      0      0    −1   −1
B     cB    xB      Y1     Y2     Y3   A1   A2     Min. ratio xB/Y3, yi3 > 0
A1    −1    2       −2     1      3    1    0      2/3
A2    −1    1       2      3      4    0    1      1/4 (min) →
Z′ = −3     ∆j      0      4      7    0    0
                                  ↑
A1    −1    5/4     −7/2   −5/4   0    1    −3/4
Y3    0     1/4     1/2    3/4    1    0    1/4
Z′ = −5/4   ∆j      −7/2   −5/4   0    0    −7/4

Since all ∆j's are negative or zero, an optimum basic feasible solution to the auxiliary problem has been attained. But the artificial variable xa1 (column vector A1) appears in the basic solution at a positive level. Hence the original L.P.P. does not possess any feasible solution.

4.13 Miscellaneous Problems


4.13.1 L.P.P. Having Unbounded Solution
A L.P.P. has unbounded solution if the feasible region is unbounded such that value of
the objective function can be increased or decreased (as required) indefinitely. It is,
however, not necessary that an unbounded feasible region should yield an unbounded
value for the objective function.

For clear understanding see the following examples :



L.P.P. Having Unbounded Feasible Region But Bounded Optimal Solution

Example 1: Max. Z = 6 x1 − 2 x2

subject to 2 x1 − x2 ≤ 2

x1 ≤ 4

x1, x2 ≥ 0 .

Solution: The given problem is of maximization and all bi's are positive.

To convert inequalities of constraints into equations introducing slack variables x3 and


x4 , given problem becomes

Max. Z = 6 x1 − 2 x2 + 0 x3 + 0 x4

subject to 2 x1 − x2 + x3 =2

x1 + x4 = 4.

Taking x1 = 0, x2 = 0, we get x3 = 2, x4 = 4 which is the initial B.F.S.

The solution to the problem using simplex method is given below :

              cj →  6      −2     0      0
B     cB    xB      Y1     Y2     Y3     Y4       Min. ratio xB/Y1, yi1 > 0
Y3    0     2       2      −1     1      0        2/2 = 1 (min) →
Y4    0     4       1      0      0      1        4
Z = cB xB = 0   ∆j  6      −2     0      0        (next: xB/Y2, yi2 > 0)
                    ↑
Y1    6     1       1      −1/2   1/2    0        —
Y4    0     3       0      1/2    −1/2   1        6 (min) →
Z = 6       ∆j      0      1      −3     0
                           ↑
Y1    6     4       1      0      0      1
Y2    −2    6       0      1      −1     2
Z = 12      ∆j      0      0      −2     −2

Since all ∆ j ≤ 0, therefore the solution obtained is optimal.

Hence the optimal solution is x1 = 4, x2 = 6 and Max. Z = 12.


From the first simplex table, we observe that the elements of the column Y2 are either negative or zero, which indicates that the feasible region is unbounded.

Hence, a linear programming problem having unbounded feasible region may have a
bounded optimal solution.

L.P.P. Having Unbounded Solutions

Example 2: Max. Z = 10 x1 + 20 x2

subject to 2 x1 + 4 x2 ≥ 16

x1 + 5 x2 ≥ 15, x1, x2 ≥ 0 .

Solution: The given problem is of maximization and all bi's are positive.

Introducing surplus variables x3, x4 and artificial variables xa1, xa2, the given problem becomes

Max. Z = 10 x1 + 20 x2 + 0 x3 + 0 x4 − M xa1 − M xa2

subject to 2 x1 + 4 x2 − x3 + xa1 = 16

x1 + 5 x2 − x4 + xa2 = 15,  x1, x2, x3, x4, xa1, xa2 ≥ 0.

Taking x1 = 0, x2 = 0, x3 = 0, x4 = 0, we get xa1 = 16, xa2 = 15, which is the starting B.F.S.

The solution to the problem using simplex method is given in the following table :

              cj →  10          20      0     0           −M   −M
B     cB    xB      Y1          Y2      Y3    Y4          A1   A2     Min. ratio xB/Y2, yi2 > 0
A1    −M    16      2           4       −1    0           1    0      16/4
A2    −M    15      1           5       0     −1          0    1      15/5 = 3 (min) →
Z = −31M    ∆j      3M+10       9M+20   −M    −M          0    0      (next: xB/Y1, yi1 > 0)
                                ↑
A1    −M    4       6/5         0       −1    4/5         1           10/3 (min) →
Y2    20    3       1/5         1       0     −1/5        0           15
Z = −4M + 60  ∆j    6(M+5)/5    0       −M    4(M+5)/5    0           (next: xB/Y3, yi3 > 0)
                    ↑
Y1    10    10/3    1           0       −5/6  2/3                     —
Y2    20    7/3     0           1       1/6   −1/3                    14 (min) →
Z = 80      ∆j      0           0       5     0
                                        ↑
Y1    10    15      1           5       0     −1
Y3    0     14      0           6       1     −2
Z = 150     ∆j      0           −30     0     10

In the last table we observe that Y4 is the incoming vector but we cannot find outgoing
vector because all the elements in this column are negative. Therefore, the solution is
unbounded.

Note : If in a situation there is at least one ∆j greater than zero but there are no non-negative ratios (the min. ratio → ∞), then also we have an unbounded solution.

4.13.2 L.P.P. Having More than One Optimum Solution.


Example 3: Max. Z = 6 x1 + 4 x2

subject to 2 x1 + 3 x2 ≤ 30

3 x1 + 2 x2 ≤ 24,

x1 + x2 ≥ 3, x1, x2 ≥ 0 . [Kanpur 2011]

Solution: To convert inequalities into equations, introducing slack, surplus and artificial
variables, the problem becomes

Max. Z = 6 x1 + 4 x2 + 0 x3 + 0 x4 + 0 x5 − Mx a

subject to 2 x1 + 3 x2 + x3 = 30

3 x1 + 2 x2 + x4 = 24

x1 + x2 − x5 + x a =3

x1, x2 , x3 , x4 , x5 , x a ≥ 0.

The starting B.F.S. is

x1 = 0, x2 = 0, x3 = 30, x4 = 24, x5 = 0, x a = 3.

The solution to the problem using simplex method is given below :

              cj →  6      4      0    0     0     −M
B     cB    xB      Y1     Y2     Y3   Y4    Y5    A1     Min. ratio xB/Y1, yi1 > 0
Y3    0     30      2      3      1    0     0     0      15
Y4    0     24      3      2      0    1     0     0      8
A1    −M    3       1      1      0    0     −1    1      3 (min) →
Z = cB xB = −3M  ∆j 6+M    4+M    0    0     −M    0      (next: xB/Y5, yi5 > 0)
                    ↑
Y3    0     24      0      1      1    0     2            12
Y4    0     15      0      −1     0    1     3            5 (min) →
Y1    6     3       1      1      0    0     −1           —
Z = 18      ∆j      0      −2     0    0     6            (next: xB/Y2, yi2 > 0)
                                             ↑
Y3    0     14      0      5/3    1    −2/3  0            42/5 (min) →
Y5    0     5       0      −1/3   0    1/3   1            —
Y1    6     8       1      2/3    0    1/3   0            12
Z = 48      ∆j      0      0      0    −2    0
                           ↑

Since all ∆ j ≤ 0, therefore the solution is optimal.

The optimal solution is x1 = 8, x2 = 0, Max. Z = 48.

Here, corresponding to the non-basic variable x2 in the last table, ∆2 = 0 and Y2 is not in the basis B; therefore an alternative optimal solution also exists. Thus, the problem does not have a unique solution.

Taking Y2 as incoming and Y3 as outgoing vector we obtain the following simplex table :

              cj →  6      4      0      0      0
B     cB    xB      Y1     Y2     Y3     Y4     Y5
Y2    4     42/5    0      1      3/5    −2/5   0
Y5    0     39/5    0      0      1/5    1/5    1
Y1    6     12/5    1      0      −2/5   3/5    0
Z = cB xB = 48  ∆j  0      0      0      −2     0

Since all ∆ j ≤ 0, therefore this solution is also optimal having the same maximum value of
Z.

Second optimal solution is

x1 = 12/5, x2 = 42/5 and Max. Z = 48

Hence, two optimal solutions of the problem are

1. x1 = 8, x2 = 0

2. x1 = 12/5, x2 = 42/5, each with Max. Z = 48.

We know that any convex combination of optimal basic feasible solutions is also an optimal solution.

Thus if we obtain two alternative optimum solutions, then we can obtain any number of
optimum solutions.

For any arbitrary value of λ such that 0 ≤ λ ≤ 1 the following table gives different
optimum solutions which are infinite in number :

Variables    I Sol.    II Sol.    Gen. Sol.
x1           8         12/5       8λ + (12/5)(1 − λ)
x2           0         42/5       0λ + (42/5)(1 − λ)

Taking a particular value of λ, say λ = 1/2, the third optimal solution is

x1 = 26/5, x2 = 21/5 and Max. Z = 48.
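
The convex-combination rule above is easy to check numerically. A tiny illustrative sketch (Python/NumPy; data are the two optimal solutions just found):

import numpy as np

sol1 = np.array([8.0, 0.0])
sol2 = np.array([12/5, 42/5])
lam = 0.5
x = lam * sol1 + (1 - lam) * sol2
print(x, 6 * x[0] + 4 * x[1])     # [5.2 4.2] = (26/5, 21/5), Z = 48.0 again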

4.13.3 L.P.P. with Unrestricted Variables


Example 4: Max. Z = 2 x1 + 3 x2

subject to − x1 + 2 x2 ≤ 4

x1 + x2 ≤ 6

x1 + 3 x2 ≤ 9, x1, x2 are unrestricted. [Meerut 2006 (BP)]

Solution: The given problem is of maximization and all bi's are positive. We know that
the simplex method is applicable if the variables are non-negative. In the given problem,
x1 and x2 are unrestricted (may be +, – or zero). In such cases, we replace all these
variables by the difference of two non-negative variables and solve the problem in usual
manner.

∴ Making the transformation

x1 = x1′ − x1′′ and x2 = x2′ − x2′′



such that x1′ , x1′′, x2′ , x2′′ ≥ 0.

The given problem becomes

Max. Z = 2 ( x1′ − x1′′ ) + 3 ( x2′ − x2′′ )

subject to − ( x1′ − x1′′ ) + 2( x2′ − x2′′ ) ≤ 4

( x1′ − x1′′ ) + ( x2′ − x2′′ ) ≤ 6

( x1′ − x1′′ ) + 3 ( x2′ − x2′′ ) ≤ 9, x1′ , x2′ , x1′′, x2′′ ≥ 0.

To change constraint inequalities into equations introducing slack variables x3 , x4 and


x5 , the given L.P.P. becomes

Max. Z = 2 x1′ − 2 x1′′ + 3 x2′ − 3 x2′′ + 0 . x3 + 0 . x4 + 0 . x5

− x1′ + x1′′ + 2 x2′ − 2 x2′′ + x3 =4

x1′ − x1′′ + x2′ − x2′′ + x4 =6

x1′ − x1′′ + 3 x2′ − 3 x2′′ + x5 = 9.

Taking x1′ = 0, x1′′ = 0, x2′ = 0, x2′′ = 0 we get x3 = 4, x4 = 6, x5 = 9, which is the starting


B.F.S.

The solution to the problem using simplex algorithm is given below :

              cj →  2      −2     3     −3    0      0     0
B     cB    xB      Y1′    Y1″    Y2′   Y2″   Y3     Y4    Y5     Min. ratio xB/Y2′, y′i2 > 0
Y3    0     4       −1     1      2     −2    1      0     0      2 (min) →
Y4    0     6       1      −1     1     −1    0      1     0      6
Y5    0     9       1      −1     3     −3    0      0     1      3
Z = cB xB = 0   ∆j  2      −2     3     −3    0      0     0      (next: xB/Y1′, y′i1 > 0)
                                  ↑
Y2′   3     2       −1/2   1/2    1     −1    1/2    0     0      —
Y4    0     4       3/2    −3/2   0     0     −1/2   1     0      8/3
Y5    0     3       5/2    −5/2   0     0     −3/2   0     1      6/5 (min) →
Z = cB xB = 6   ∆j  7/2    −7/2   0     0     −3/2   0     0      (next: xB/Y3, yi3 > 0)
                    ↑
Y2′   3     13/5    0      0      1     −1    1/5    0     1/5    13
Y4    0     11/5    0      0      0     0     2/5    1     −3/5   11/2 (min) →
Y1′   2     6/5     1      −1     0     0     −3/5   0     2/5    —
Z = cB xB = 51/5  ∆j 0     0      0     0     3/5    0     −7/5
                                              ↑
Y2′   3     3/2     0      0      1     −1    0      −1/2  1/2
Y3    0     11/2    0      0      0     0     1      5/2   −3/2
Y1′   2     9/2     1      −1     0     0     0      3/2   −1/2
Z = cB xB = 27/2  ∆j 0     0      0     0     0      −3/2  −1/2

Since all ∆ j ≤ 0, therefore the solution is optimal.

∴ The optimal solution is x1′ = 9/2, x1″ = 0, x2′ = 3/2, x2″ = 0, Max. Z = 27/2

⇒ x1 = x1′ − x1″ = 9/2, x2 = x2′ − x2″ = 3/2, Max. Z = 27/2.
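
The split x = x′ − x″ used above hands the problem to any solver that requires non-negative variables. A hedged cross-check with SciPy on the split form of Example 4 (linprog also accepts free bounds directly, but the split mirrors the hand computation; variable order x1′, x1″, x2′, x2″):

from scipy.optimize import linprog

c = [-2, 2, -3, 3]                       # minimize -Z = -(2x1 + 3x2)
A_ub = [[-1, 1, 2, -2],
        [1, -1, 1, -1],
        [1, -1, 3, -3]]
b_ub = [4, 6, 9]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
x1 = res.x[0] - res.x[1]                 # recover the unrestricted variables
x2 = res.x[2] - res.x[3]
print(x1, x2, -res.fun)                  # 4.5, 1.5 and Z = 13.5 (= 27/2)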

Example 5: Minimize Z = x1 + x2 + x3

subject to x1 − 3 x2 + 4 x3 = 5

x1 − 2 x2 ≤ 3

2 x2 − x3 ≥ 4,

x1, x2 ≥ 0 and x3 is unrestricted.

Solution: First we shall convert the minimization problem into maximization by


substituting Z ′ = − Z.

The objective function changes to

Max. Z ′ = − x1 − x2 − x3 .

Since x3 is unrestricted therefore replacing it by

x3′ − x3′′ where x3′ , x3′′ ≥ 0.

Also introducing slack variable x4, surplus variable x5 and artificial variables xa1 and xa2, the given problem becomes

Max. Z′ = − x1 − x2 − x3′ + x3″ + 0 x4 + 0 x5 − M xa1 − M xa2

subject to x1 − 3 x2 + 4 x3′ − 4 x3″ + xa1 = 5

x1 − 2 x2 + x4 = 3

0 x1 + 2 x2 − x3′ + x3″ − x5 + xa2 = 4,

x1, x2, x3′, x3″, x4, x5, xa1, xa2 ≥ 0.

Now solve the problem using simplex algorithm.

(Proceed as in Ex. 4 above).

The optimal solution is

x1 = 0, x2 = 21/5, x3 = 22/5 and Min. Z = 43/5.

4.13.4 L.P.P. with Some Constant Term in the Objective Function


In a L.P.P., when the objective function contains a constant term, then the simplex
method is applied by leaving this constant term in the beginning and optimal solution is
obtained. In the end, the constant term (which was left initially) is added to the optimal
value of the objective function.

For clear understanding of the method see the following examples :

Example 6: Solve the L.P.P.

Max. Z = 2 x1 − x2 + x3 + 50

subject to 2 x1 + 2 x2 − 6 x3 ≤ 16

12 x1 − 3 x2 + 3 x3 ≥ 6

−2 x1 − 3 x2 + x3 ≤ 4

and x1, x2 , x3 ≥ 0 .

Solution: Leaving the constant term 50 from the objective function in the beginning,
introducing slack, surplus and artificial variables, the given problem reduces to

Max. Z′ = 2 x1 − x2 + x3 + 0 x4 + 0 x5 + 0 x6 − M xa1

subject to 2 x1 + 2 x2 − 6 x3 + x4 = 16

12 x1 − 3 x2 + 3 x3 − x5 + xa1 = 6

−2 x1 − 3 x2 + x3 + x6 = 4

and x1, x2, x3, x4, x5, x6, xa1 ≥ 0.

Taking x1 = 0 = x2 = x3 = x5, we get x4 = 16, xa1 = 6, x6 = 4, which is the starting B.F.S.

The solution of the problem by simplex method is given in the following table :

              cj →  2        −1       1      0    0       0    −M
B     cB    xB      Y1       Y2       Y3     Y4   Y5      Y6   A      Min. ratio xB/Y1, yi1 > 0
Y4    0     16      2        2        −6     1    0       0    0      16/2
A     −M    6       12       −3       3      0    −1      0    1      6/12 (min) →
Y6    0     4       −2       −3       1      0    0       1    0      —
Z′ = cB xB = −6M  ∆j 2+12M   −1−3M    1+3M   0    −M      0    0      (next: xB/Y3, yi3 > 0)
                     ↑
Y4    0     15      0        5/2      −13/2  1    1/6     0           —
Y1    2     1/2     1        −1/4     1/4    0    −1/12   0           2 (min) →
Y6    0     5       0        −7/2     3/2    0    −1/6    1           10/3
Z′ = 1      ∆j      0        −1/2     1/2    0    1/6     0           (next: xB/Y5, yi5 > 0)
                                      ↑
Y4    0     28      26       −4       0      1    −2      0           —
Y3    1     2       4        −1       1      0    −1/3    0           —
Y6    0     2       −6       −2       0      0    1/3     1           6 (min) →
Z′ = 2      ∆j      −2       0        0      0    1/3     0
                                                  ↑
Y4    0     40      −10      −16      0      1    0       6
Y3    1     4       −2       −3       1      0    0       1
Y5    0     6       −18      −6       0      0    1       3
Z′ = 4      ∆j      4        2        0      0    0       −1
                    ↑

Here Y1 is the entering vector, but all the elements in its column are negative, so we cannot select the outgoing vector. Hence, the solution of the problem is unbounded.

4.13.5 L.P.P. with Lower Bounds of Some or All Variables,


Other than Zero
In some L.P.P.s, lower bounds may be specified for some variables; for example, it may be stipulated that x1 ≥ c1, x2 ≥ c2, etc. In such problems, we substitute x1 = c1 + y1, x2 = c2 + y2, etc., and then solve the problem in terms of y1 and y2, where y1 ≥ 0, y2 ≥ 0, etc.

For clear understanding of the method see the following example :



Example 7: Solve the L.P.P.


Max. Z = 3 x1 + 5 x2 + 4 x3

subject to 2 x1 − 3 x2 ≤ 8

2 x2 + 5 x3 ≤ 10

3 x1 + 2 x2 + 4 x3 ≤ 15

x1 ≥ 2, x2 ≥ 4, x3 ≥ 0 .

Solution: Taking x1 = y1 + 2, x2 = y2 + 4, x3 = y3 ; the given problem reduces to

Max. Z = 3 y1 + 5 y2 + 4 y3 + 26

subject to 2 y1 − 3 y2 ≤ 16, 2 y2 + 5 y3 ≤ 2,

3 y1 + 2 y2 + 4 y3 ≤ 1

and y1, y2 , y3 ≥ 0.

Introducing the slack variables y4 , y5 , y6 and leaving constant term 26 from Z in the
beginning, the above L.P.P. reduces to

Max. Z ′ = 3 y1 + 5 y2 + 4 y3 where Z = Z ′ + 26

subject to 2 y1 − 3 y2 + y4 = 16

2 y2 +5 y3 + y5 =2

3 y1 + 2 y2 + 4 y3 + y6 =1

and y1, y2 , y3 ≥ 0.

Taking y1 = 0 = y2 = y3 , we get y4 = 16, y5 = 2, y6 = 1 which is the starting B.F.S.

The solution by simplex method is shown in the following table :

              cj →  3      5     4    0    0    0
B     cB    xB      Y1     Y2    Y3   Y4   Y5   Y6     Min. ratio xB/Y2, yi2 > 0
Y4    0     16      2      −3    0    1    0    0      —
Y5    0     2       0      2     5    0    1    0      2/2
Y6    0     1       3      2     4    0    0    1      1/2 (min) →
Z′ = cB xB = 0  ∆j  3      5     4    0    0    0
                           ↑
Y4    0     35/2    13/2   0     6    1    0    3/2
Y5    0     1       −3     0     1    0    1    −1
Y2    5     1/2     3/2    1     2    0    0    1/2
Z′ = cB xB = 5/2  ∆j −9/2  0     −6   0    0    −5/2

Since no ∆ j > 0, so this solution is optimal.

∴ Optimal solution is

y1 = 0, y2 = 1 2 , y3 = 0, Max. Z ′ = 5 2

∴ x1 = y1 + 2 = 2, x2 = y2 + 4 = 9 2, Max. Z = Z ′ + 26 = 57 2

i.e., x1 = 2, x2 = 9 2, Max. Z = 57 2.

4.14 Solution of Simultaneous Linear Equations by Simplex Method

To solve a system of n simultaneous linear equations by the simplex method, if the non-negativity restriction on the variables is not given, we replace each variable by the difference of two non-negative variables. We also add a (non-negative) artificial variable to each equation (if required) so as to obtain a basis matrix (identity matrix). A dummy objective function is also introduced, in which every given variable is assigned cost 0 (zero) and every artificial variable is assigned cost −1.

The reformulated L.P.P. is then solved by using the simplex method.

For clear understanding of the method, see the following examples :
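As a machine check of this formulation (a sketch assuming SciPy; the data is that of Example 1 below), note that the dummy objective Max. Z = −Σ xa is the same as minimizing Σ xa, which linprog does directly. The solver may return a different vertex of the solution set than the hand computation below, since this system has infinitely many non-negative solutions:

# Phase-I style L.P.P. for solving A x = b, x >= 0 (SciPy assumed):
# cost 0 on the given variables, the artificials are penalized.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1, 0, -1, 4],     # x1       - x3 + 4x4 = 3
              [2, -1, 0, 0],     # 2x1 - x2            = 3
              [3, -2, 0, -1]])   # 3x1 - 2x2      - x4 = 1
b = np.array([3, 3, 1])
m, n = A.shape

c = np.r_[np.zeros(n), np.ones(m)]          # Min sum(x_a)
res = linprog(c, A_eq=np.c_[A, np.eye(m)], b_eq=b)
x = res.x[:n]
print(np.allclose(A @ x, b), res.fun)       # True, 0.0 -- x solves the system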

Example 1: Solve the following system of linear equations by using simplex method :
x1 − x3 + 4 x4 = 3 , 2 x1 − x2 = 3

3 x1 − 2 x2 − x4 = 1, x1, x2 , x3 , x4 ≥ 0 . [Meerut 2001, 06, 07]

Solution: Here the variables are non-negative. Adding the (non-negative) artificial variables xa1, xa2, xa3 to the given equations and introducing the dummy objective function Z with cost zero for each given variable and cost −1 for each artificial variable, the given system can be written as a L.P.P. in the following form :

Max. Z = 0x1 + 0x2 + 0x3 + 0x4 − 1xa1 − 1xa2 − 1xa3

subject to x1 − x3 + 4x4 + xa1 = 3
2x1 − x2 + xa2 = 3
3x1 − 2x2 − x4 + xa3 = 1

and x1, x2, x3, x4, xa1, xa2, xa3 ≥ 0.

Taking x1 = 0, x2 = 0, x3 = 0, x4 = 0, we get xa1 = 3, xa2 = 3, xa3 = 1, which is the initial B.F.S.

The solution of the problem using the simplex algorithm is given in the following table (artificial columns are dropped as they leave the basis) :

cj :                0      0      0      0     −1     −1     −1
B      cB     xB     Y1     Y2     Y3     Y4     A1     A2     A3     Min. ratio (xB/Y1, yi1 > 0)
A1     −1      3      1      0     −1      4      1      0      0     3/1
A2     −1      3      2     −1      0      0      0      1      0     3/2
A3     −1      1      3     −2      0     −1      0      0      1     1/3 (min.) →
Z = −7        Δj :    6     −3     −1      3      0      0      0     ↑ (Y4 enters next: xB/Y4, yi4 > 0)

A1     −1     8/3     0     2/3    −1    13/3     1      0            8/13 (min.) →
A2     −1     7/3     0     1/3     0     2/3     0      1            7/2
Y1      0     1/3     1    −2/3     0    −1/3     0      0            —
Z = −5        Δj :    0      1     −1      5      0      0            ↑ (Y4 enters; then xB/Y2, yi2 > 0)

Y4      0    8/13     0    2/13  −3/13     1             0            4 (min.) →
A2     −1   25/13     0    3/13   2/13     0             1            25/3
Y1      0    7/13     1   −8/13  −1/13     0             0            —
Z = −25/13    Δj :    0    3/13   2/13     0             0            ↑ (Y2 enters; then xB/Y3, yi3 > 0)

Y2      0      4      0      1    −3/2   13/2            0            —
A2     −1      1      0      0     1/2   −3/2            1            2 (min.) →
Y1      0      3      1      0     −1      4             0            —
Z = −1        Δj :    0      0     1/2   −3/2            0            ↑ (Y3 enters)

Y2      0      7      0      1      0      2
Y3      0      2      0      0      1     −3
Y1      0      5      1      0      0      1
Z = 0         Δj :    0      0      0      0

Since all ∆ j ≤ 0, therefore this solution is optimal.

Hence, the solution of the given system is

x1 = 5, x2 = 7, x3 = 2, x4 = 0.

Example 2: Solve the following system of linear equations by simplex method :

2 x1 + x2 = 1

3 x1 + 4 x2 = 12 .

Solution: Here the two variables x1 and x2 are not required to be non-negative, so we write x1 = x1′ − x1″ and x2 = x2′ − x2″, where x1′, x1″, x2′, x2″ ≥ 0.

Adding the (non-negative) artificial variables xa1 and xa2 to the two equations and introducing the dummy objective function Z with cost zero for each variable x1′, x1″, x2′, x2″ and cost −1 for each artificial variable xa1, xa2, the given problem as a L.P.P. is as follows :

Max. Z = 0x1′ + 0x1″ + 0x2′ + 0x2″ − xa1 − xa2,

subject to 2x1′ − 2x1″ + x2′ − x2″ + xa1 = 1
3x1′ − 3x1″ + 4x2′ − 4x2″ + xa2 = 12

and x1′, x1″, x2′, x2″, xa1, xa2 ≥ 0.

Taking x1′ = 0, x1″ = 0, x2′ = 0, x2″ = 0, we get xa1 = 1, xa2 = 12, which is the starting B.F.S.

The solution of the problem using simplex method is given in the following table :

cj :                0      0      0      0     −1     −1
B      cB     xB     Y1′    Y1″    Y2′    Y2″    A1     A2     Min. ratio (xB/Y2′, yi2′ > 0)
A1     −1      1      2     −2      1     −1      1      0     1/1 (min.) →
A2     −1     12      3     −3      4     −4      0      1     12/4
Z = −13       Δj :    5     −5      5     −5      0      0     ↑ (Y2′ enters; then xB/Y1″, yi1″ > 0)

Y2′     0      1      2     −2      1     −1      1      0     —
A2     −1      8     −5      5      0      0     −4      1     8/5 (min.) →
Z = −8        Δj :   −5      5      0      0     −5      0     ↑ (Y1″ enters)

Y2′     0    21/5     0      0      1     −1    −3/5    2/5
Y1″     0     8/5    −1      1      0      0    −4/5    1/5
Z = 0         Δj :    0      0      0      0     −1     −1

Since all Δj ≤ 0, the solution is optimal.

The optimal solution is x1′ = 0, x1″ = 8/5, x2′ = 21/5, x2″ = 0

i.e., x1 = x1′ − x1″ = −8/5, x2 = x2′ − x2″ = 21/5.
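A sketch of the same sign-splitting device in solver form (SciPy assumed): the unrestricted x is replaced by x′ − x″ with x′, x″ ≥ 0, artificials are appended and minimized. Since A here is non-singular, the recovered x is the unique solution, whatever split the solver returns:

# Sign-splitting trick of Example 2 (SciPy assumed).
import numpy as np
from scipy.optimize import linprog

A = np.array([[2.0, 1.0],        # 2x1 +  x2 =  1
              [3.0, 4.0]])       # 3x1 + 4x2 = 12
b = np.array([1.0, 12.0])
m, n = A.shape

# Columns: x' (n of them), x'' (n), artificials (m); minimize artificials.
c = np.r_[np.zeros(2 * n), np.ones(m)]
res = linprog(c, A_eq=np.c_[A, -A, np.eye(m)], b_eq=b)
x = res.x[:n] - res.x[n:2 * n]   # recover x = x' - x''
print(np.round(x, 6))            # [-1.6  4.2], i.e. x1 = -8/5, x2 = 21/5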

4.15 Inverse of a Matrix by Simplex Method

[Meerut 2007]

Let A be an n × n non-singular real matrix. To find the inverse of the matrix A by the simplex method, proceed as follows :

Introducing a dummy n × 1 real matrix b, consider the following system of equations :

Ax = b, x ≥ 0.

Now introduce the artificial variables xa ≥ 0 and a dummy objective function Z, with cost zero for the variables in x and cost −1 for each artificial variable. Then, using the simplex method, find the solution of the L.P.P. so formed :

Max. Z = 0x − 1xa
subject to Ax = b, x, xa ≥ 0.

In the final simplex table giving the optimal solution (i.e., when all Δj ≤ 0), if the columns of A have become the columns of the unit matrix I (i.e., A has been converted to a unit matrix, or equivalently all the variables of the vector x are in the basis), then the inverse of the matrix A is the matrix formed, in proper order, by the column vectors which were the columns of the initial basis.

If in the final simplex table giving the optimal solution the matrix A has not been converted to a unit matrix (i.e., some variables of the vector x are not in the basis and one or more artificial variables appear in the basis), then we continue the simplex method by removing an artificial variable from the basis and introducing a remaining variable (not yet in the basis) into the basis. Under this operation (iteration) the solution remains optimal, though it may be feasible or infeasible. The process is continued till A is converted to a unit matrix. Finally, the inverse of A is read off as above.

4.15.1 Method to Find the Dummy n × 1 Real Matrix b

Although there is no particular method to find the dummy n × 1 real matrix b, in most cases it can be taken as

b = [b1, b2, ..., bn], where bi = ai1 + ai2 + ... + ain, i = 1, 2, ..., n,

i.e., each bi is the sum of the entries of the i-th row of A (so that x = [1, 1, ..., 1] is an obvious solution of Ax = b).

For clear understanding of the method see the following example :
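It is also worth seeing the tableau mechanics in isolation. The sketch below (NumPy assumed; naive diagonal pivoting, which happens to meet no zero pivot on the matrix of Example 1) pivots every column of A into the basis, turning the block [A | I] into [I | A⁻¹]; the inverse appears under the columns of the initial basis, exactly as in the final simplex table:

# Tableau mechanics behind Art. 4.15 (NumPy assumed; the full simplex
# bookkeeping is stripped away, only the pivoting remains).
import numpy as np

A = np.array([[4.0, 3.0],
              [3.0, 2.0]])
n = A.shape[0]
T = np.hstack([A, np.eye(n)])        # initial tableau [A | I]

for k in range(n):                   # bring column k of A into the basis
    T[k] /= T[k, k]                  # normalize the pivot row
    for i in range(n):
        if i != k:
            T[i] -= T[i, k] * T[k]   # clear column k in the other rows

print(T[:, n:])                      # [[-2.  3.] [ 3. -4.]] = A^(-1)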

Example 1: Apply simplex method to find the inverse of the matrix

| 4  3 |
| 3  2 |.                                                   [Meerut 2008]

Solution: Let A = | 4  3 |  and  b = | b1 = Σ a1j = 4 + 3 |  =  | 7 |
                  | 3  2 |           | b2 = Σ a2j = 3 + 2 |     | 5 |

be the dummy real column matrix. Consider the system of equations

Ax = b,

i.e.,  | 4  3 | | x1 |  =  | 7 |   i.e.,  4x1 + 3x2 = 7
       | 3  2 | | x2 |     | 5 |          3x1 + 2x2 = 5.

Introducing the artificial variables xa1, xa2 and the dummy objective function Z with cost 0 for each variable x1, x2 and cost −1 for each artificial variable xa1, xa2, the resulting L.P.P. is

Max. Z = 0x1 + 0x2 − 1xa1 − 1xa2

subject to 4x1 + 3x2 + xa1 = 7
3x1 + 2x2 + xa2 = 5

and x1, x2, xa1, xa2 ≥ 0.

Taking x1 = 0, x2 = 0, we get xa1 = 7, xa2 = 5, which is the starting B.F.S.

The solution of the above L.P.P. by simplex method is given in the following table :

cj :                0      0     −1     −1
B      cB     xB     Y1     Y2     A1     A2     Min. ratio (xB/Y1, yi1 > 0)
A1     −1      7      4      3      1      0     7/4
A2     −1      5      3      2      0      1     5/3 (min.) →
Z = −12       Δj :    7      5      0      0     ↑ (Y1 enters; then xB/Y2, yi2 > 0)

A1     −1     1/3     0     1/3     1    −4/3    1 (min.) →
Y1      0     5/3     1     2/3     0     1/3    5/2
Z = −1/3      Δj :    0     1/3     0    −7/3    ↑ (Y2 enters)

Y2      0      1      0      1      3     −4
Y1      0      1      1      0     −2      3
Z = 0         Δj :    0      0     −1     −1

Since in the last table no Δj > 0, the solution is optimal.

Rearranging the rows of the last table so that the columns Y1, Y2 (i.e., the columns of A) form the unit matrix in proper order, we get

B      cB     xB     Y1     Y2     A1     A2
Y1      0      1      1      0     −2      3
Y2      0      1      0      1      3     −4

Since the given matrix A (given by the columns Y1, Y2) has been converted to a unit matrix, A⁻¹ is given by the columns A1, A2 of the initial basis.

Hence  A⁻¹ = | −2   3 |
             |  3  −4 |.

Solve the following problems using simplex method :

1. (i) Max. Z = 3 x1 + 2 x2 (ii) Max. Z = 5 x1 + 3 x2

subject to x1 + x2 ≤ 4 subject to 3 x1 + 5 x2 ≤ 15

x1 − x2 ≤ 2, 5 x1 + 2 x2 ≤ 10

x1, x2 ≥ 0 x1, x2 ≥ 0 [Meerut 2004]

(iii) Min. Z = x1 − 3 x2 + 2 x3 (iv) Max. Z = 5 x1 + 10 x2 + 8 x3

subject to 3 x1 − x2 + 2 x3 ≤ 7 subject to 3 x1 + 5 x2 + 2 x3 ≤ 60

−2 x1 + 4 x2 ≤ 12 4 x1 + 4 x2 + 4 x3 ≤ 72

−4 x1 + 3 x2 + 8 x3 ≤ 10 2 x1 + 4 x2 + 5 x3 ≤ 100

x1, x2 , x3 ≥ 0 x1, x2 , x3 ≥ 0
[Meerut 2005] [Meerut 2009 (BP), 12 (BP)]

(v) Max. Z = 2 x1 + 4 x2 + x3 + x4 (vi) Max. Z = 2 x1 + x2

subject to subject to x1 − x2 ≤ 10

x1 + 3 x2 + x4 ≤ 4 2 x1 − x2 ≤ 40

2 x1 + x2 ≤ 3 x1, x2 ≥ 0. [Kanpur 2007]


x2 + 4 x3 + x4 ≤ 3

x1, x2 , x3 , x4 ≥ 0

2. (i) Max. Z = 7 x1 + 5 x2 (ii) Max. Z = 3 x1 + 2 x2


subject to subject to
− x1 − 2 x2 ≥ −6 2 x1 + x2 ≤ 40
4 x1 + 3 x2 ≤ 12 x1 + x2 ≤ 24
x1, x2 ≥ 0 2 x1 + 3 x2 ≤ 60
x1, x2 ≥ 0

(iii) Max. Z = 3 x1 + 5 x2 (iv) Max. Z = 2 x1 + 3 x2 + 3 x3


subject to subject to
3 x1 + 2 x2 ≤ 18 3 x1 + 2 x2 + x3 ≤ 3
x1 ≤ 4 2 x1 + x2 + 2 x3 ≤ 4
x2 ≤ 6, x1, x2 , x3 ≥ 0. [Kanpur 2008]
x1, x2 ≥ 0 [Meerut 2004]
3. Max. Z = 5 x1 + 7 x2 4. Max. Z = x1 − x2 + 3 x3
subject to the constraints subject to
x1 + x2 ≤ 4 x1 + x2 + x3 ≤ 10
3 x1 − 8 x2 ≤ 24 2 x1 − x3 ≤ 2
10 x1 + 7 x2 ≤ 35 2 x1 − 2 x2 + 3 x3 ≤ 1
x1, x2 ≥ 0. x1, x2 , x3 ≥ 0.

5. Max. Z = 4 x1 + 10 x2 6. Max. Z = 2 x1 + 4 x2
subject to subject to
2 x1 + x2 ≤ 50 2 x1 + 3 x2 ≤ 48
2 x1 + 5 x2 ≤ 100 x1 + 3 x2 ≤ 42
2 x1 + 3 x2 ≤ 90 x1 + x2 ≤ 21
x1, x2 ≥ 0. [Meerut 2002] x1, x2 ≥ 0.

7. Max. Z = 2 x1 + x2 8. Max. Z = 3 x1 + 4 x2
subject to subject to
x1 + 2 x2 ≤ 10 x1 − x2 ≤ 1
x1 + x2 ≤ 6 − x1 + x2 ≤ 2
x1 − x2 ≤ 2 x1, x2 ≥ 0.
x1 − 2 x2 ≤ 1
x1, x2 ≥ 0.

9. Max. Z = 4 x1 + 5 x2 + 9 x3 + 11x4 10. Max. Z = 8 x1 + 11x2


subject to the constraints subject to
x1 + x2 + x3 + x4 ≤ 15 3 x1 + x2 ≤ 7
7 x1 + 5 x2 + 3 x3 + 2 x4 ≤ 120 x1 + 3 x2 ≤ 8
3 x1 + 5 x2 + 11x3 + 15 x4 ≤ 100 x1, x2 ≥ 0.
x1, x2 , x3 , x4 ≥ 0.

11. Min. Z = x1 + x2 + 3 x3 12. Max. Z = 3 x1 + 2 x2 − 2 x3


subject to subject to
3 x1 + 2 x2 + x3 ≤ 3 x1 + 2 x2 + 2 x3 ≤ 10
2 x1 + x2 + 2 x3 ≤ 2 2 x1 + 4 x2 + 3 x3 ≤ 15
x1, x2 , x3 ≥ 0. [Kanpur 2008] x1, x2 , x3 ≥ 0.

13. Max. Z = 7 x1 + x2 + 2 x3 14. Max. Z = 2 x1 + 4 x2 + 3 x3


subject to subject to
x1 + x2 − 2 x3 ≤ 10 3 x1 + 4 x2 + 2 x3 ≤ 60
4 x1 + x2 + x3 ≤ 20 2 x1 + x2 + 2 x3 ≤ 40
x1, x2 , x3 ≥ 0. x1 + 3 x2 + 2 x3 ≤ 80
x1, x2 , x3 ≥ 0.

15. Max. Z = 5 x1 + 2 x2 + 3 x3 − x4 + x5 16. Min. Z = 4 x1 + 8 x2 + 3 x3


subject to the constraints subject to
x1 + 2 x2 + 2 x3 + x4 = 8, x1 + x2 ≥ 2
3 x1 + 4 x2 + x3 + x5 = 7, 2 x1 + x3 ≥ 5
x1, x2 , x3 , x4 , x5 ≥ 0. x1, x2 , x3 ≥ 0.

17. Max. Z = 7 x1 + 3 x2 + x3 18. Max. Z = x1 + 2 x2 + 0 x3 + 0 x4


subject to subject to
x1 + x2 = 2 x1 − x2 + x3 = 4
3 x1 + x3 = 1 x1 − 5 x2 + x4 = 8
x1, x2 , x3 ≥ 0. x1, x2 , x3 , x4 ≥ 0. [Kanpur 2008]

Solve the following L.P.P. using two phase method :

19. Min. Z = x1 + x2 20. Max. Z = 3 x1 − x2


subject to subject to
2 x1 + x2 ≥ 4 2 x1 + x2 ≥ 2
x1 + 7 x2 ≥ 7 x1 + 3 x2 ≤ 2
x1, x2 ≥ 0. x2 ≤ 4
x1, x2 ≥ 0. [Meerut 2011]

21. Max. Z = 5 x1 + 8 x2 22. Max. Z = 107 x1 + x2 + 2 x3


subject to subject to
3 x1 + 2 x2 ≥ 3 14 x1 + x2 − 6 x3 + 3 x4 = 7

x1 + 4 x2 ≥ 4                       16 x1 + (1/2) x2 − 6 x3 ≤ 5
x1 + x2 ≤ 5
3 x1 − x2 − x3 ≤ 0
x1, x2 ≥ 0. [Meerut 2004, 11 (BP)]
x1, x2 , x3 , x4 ≥ 0. [Kanpur 2009]

23. Max. Z = 3 x1 + 2 x2 + x3 + 4 x4            24. Min. Z = (15/2) x1 − 3 x2

subject to subject to

4 x1 + 5 x2 + x3 + 5 x4 = 5 3 x1 − x2 − x3 ≥ 3

2 x1 − 3 x2 − 4 x3 + 5 x4 = 7 x1 − x2 + x3 ≥ 2

x1 + 4 x2 + 5 x3 − 4 x4 = 6 x1, x2 , x3 ≥ 0.

x1, x2 , x3 , x4 ≥ 0.

Apply Big-M method to solve the following problems :

25. Max. Z = −2 x1 − x2 26. Max. Z = 3 x1 − x2

subject to subject to

3 x1 + x2 = 3 2 x1 + x2 ≥ 2
4 x1 + 3 x2 ≥ 6 x1 + 3 x2 ≤ 3
x1 + 2 x2 ≤ 4 x2 ≤ 4
x1, x2 ≥ 0. x1, x2 ≥ 0. [Meerut 2012]

[Kanpur 2012; Gorakhpur 2009]


27. Max. Z = 8 x2 28. Max. Z = 4 x1 + 5 x2 − 3 x3
subject to subject to

x1 − x2 ≥ 0 x1 + x2 + x3 = 10
2 x1 + 3 x2 ≤ −6 x1 − x2 ≥ 1
x1, x2 are unrestricted. 2 x1 + 3 x2 + x3 ≤ 40
x1, x2 , x3 ≥ 0. [Meerut 2007]

29. Min. Z = 5 x1 + 6 x2 30. Max. Z = 4 x1 + 2 x2

subject to subject to

2 x1 + 5 x2 ≥ 1500 3 x1 + x2 ≤ 27
3 x1 + x2 ≥ 1200 x1 + x2 ≥ 21
x1, x2 ≥ 0. x1, x2 , ≥ 0. [Meerut 2011]

31. Max. Z = 2 x1 + x2 + 3 x3 32. Min. Z = 4 x1 + 6 x2


subject to subject to
x1 + x2 + x3 ≤ 5 x1 + 2 x2 ≥ 80
2 x1 + 3 x2 + 4 x3 ≤ 12                 3 x1 + x2 ≥ 75
x1, x2 , x3 ≥ 0. [Kanpur 2007] x1, x2 ≥ 0.

33. Min. Z = 2 y1 + 3 y2 34. Min. Z = x1 + x2 + 3 x3


subject to y1 + y2 ≥ 5 subject to 3 x1 + 2 x2 + x3 ≤ 3
y1 + 2 y2 ≥ 6 2 x1 + x2 + 2 x3 ≥ 3
y1, y2 ≥ 0. x1, x2 , x3 ≥ 0. [Meerut 2006]

Solve the following system of linear equations by simplex method :

35. 3 x1 + 2 x2 = 4, 4 x1 − x2 = 6. 36. 3 x1 + 2 x2 = 5, 5 x1 + x2 = 9.
[Gorakhpur 2008]

37. x1 + x2 = 1, 2 x1 + x2 = 3.

38. Find the inverse of the matrix

(i)  | 1  2 |   [Meerut 2005]        (ii)  | 3   2 |   [Gorakhpur 2007, 09]
     | 3  2 |                              | 4  −1 |

(iii) | 4  1  2 |                    (iv)  | 4  1 |.   [Meerut 2006]
      | 0  1  0 |                          | 2  9 |
      | 8  4  5 |

39. Food A contains 20 units of vitamin X and 40 units of vitamin Y per gram. Food B
contains 30 units each of vitamin X and Y. The daily minimum human
requirements of vitamins X and Y are 900 units and 1200 units respectively. How
many grams of each type of food should be consumed so as to minimize the cost if
food A costs 60 paise per gram and food B costs 80 paise per gram?

40. A finished product must weigh exactly 150 gms. The two raw materials used in
manufacturing the product are : A with cost of ` 2 per unit and B with a cost of ` 8
per unit. At least 14 units of B and not more than 20 units of A must be used. Each
unit of A and B weighs 5 and 10 grams respectively.

How much of each type of raw material should be used for each unit of the final
product in order to minimize the cost ?

41. A company produces two types of products say type A and B. Product B is of
superior quality and product A is of a lower quality. Profits on the two types of
products are ` 30 and ` 40 respectively. The data on resources required and capacity
available is given below:

                                     Requirements                 Capacity available
Types of Products             Product A      Product B            per month

Raw materials (kg)                60            120                 12,000
Machining (hours per piece)        8              5                    600
Assembly (man hours)               3              4                    500

How should the company manufacture the two types of products in order to have a
maximum overall profit ?

Multiple Choice Questions


1. A slack variable is introduced if the given constraint has a sign :
(a) ≥ (b) ≤
(c) = (d) None of these

2. Simplex method to solve linear programming problems was developed by :


(a) Newton (b) Lagrange
(c) George Dantzig (d) None of these

3. If a constraint has ≤ sign, we introduce :


(a) Surplus variable (b) Artificial variable
(c) Slack variable (d) None of these

4. Solution of the L.P.P. :


Max. Z = 10 x1 + 6 x2
subject to
x1 + x2 ≤ 2, 2 x1 + x2 ≤ 4, 3 x1 + 8 x2 ≤ 12,
x1, x2 ≥ 0 is
(a) x1 = 2, x2 = 0, Max. Z = 20 (b) x1 = 0, x2 = 0, Max. Z = 0
(c) x1 = 1, x2 = 7, Max. Z = −2 (d) x1 = 0, x2 = 3, Max. Z = 1

Fill in the Blank


1. Simplex method was developed by ................... in 1947.

2. According to fundamental theorem of linear programming, we can search the


...................solution among the basic feasible solutions.

3. If a constraint has a sign ≤, in order to convert it into an equation we use


................... variables.

4. If a constraint has a ≥ sign, then to change it into an equation we introduce


................... variables.
5. A linear programming problem is said to have an ................... solution if the
objective function can be increased or decreased indefinitely.
6. To convert the problem of minimization into the maximization problem we
multiply both sides by ................... .
7. To solve a L.P.P. by simplex method all bi's should be ................... . [Meerut 2005]

8. If in the simplex table all ∆ j ≤ 0, the solution under test is ................... .


[Meerut 2004, 05]

9. The linear programming problem has no ................... solution if the solution


contains one or more artificial variables as basic variables.
10. If in the final simplex table all ∆ j < 0, the optimal solution is ................... .
11. If corresponding to maximum positive ∆ j all minimum ratios are negative or → ∞,
the solution under test is ................... .

True/False
1. Fundamental theorem of L.P.P. states that if the given L.P.P. has an optimal
solution, then at least one basic solution must be optimal.
2. The problem has no feasible solution if the value of at least one artificial variable
present in the basis is non-zero and the optimality condition is satisfied.
3. In the phase I of two phase method, we remove artificial variables from the basis
matrix.
4. If we have to solve a linear programming problem by simplex method the variable x
should be unrestricted in sign. [Meerut 2005]

Answers
1. (i) x1 = 3, x2 = 1, Max. Z = 11
(ii) x1 = 20/19, x2 = 45/19, Max. Z = 235/19
(iii) x1 = 4, x2 = 5, x3 = 0, Min. Z = −11
(iv) x1 = 0, x2 = 8, x3 = 10, Max. Z = 160
(v) x1 = 1, x2 = 1, x3 = 1/2, x4 = 0, Max. Z = 13/2
(vi) x1 = 30, x2 = 20, Max. Z = 80
2. (i) x1 = 3, x2 = 0, Max. Z = 21        (ii) x1 = 16, x2 = 8, Max. Z = 64
(iii) x1 = 2, x2 = 6, Max. Z = 36         (iv) x1 = 0, x2 = 2/3, x3 = 5/3, Max. Z = 7

3. x1 = 0, x2 = 4, Max. Z = 28
4. x1 = 0, x2 = 29/5, x3 = 21/5, Max. Z = 34/5
5. x1 = 0, x2 = 20, Max. Z = 200 or x1 = 75/4, x2 = 25/2, Max. Z = 200
6. x1 = 6, x2 = 12, Z = 60
7. x1 = 4, x2 = 2, Max. Z = 10
8. unbounded
9. x1 = 50/7, x2 = 0, x3 = 55/7, x4 = 0, Max. Z = 695/7
10. x1 = 13/8, x2 = 17/8, Max. Z = 291/8
11. x1 = x2 = x3 = 0, Max. Z = 0
12. x1 = 15/2, x2 = 0, x3 = 0, Max. Z = 45/2
13. x1 = 0, x2 = 0, x3 = 20, Max. Z = 40
14. x1 = 0, x2 = 20/3, x3 = 50/3, Max. Z = 230/3
15. x1 = 6/5, x2 = 0, x3 = 17/5, x4 = 0, x5 = 0, Max. Z = 18/5
16. x1 = 5/2, x2 = 0, x3 = 0, Min. Z = 10
17. x1 = 1/3, x2 = 5/3, x3 = 0, Max. Z = 22/3
18. unbounded
19. x1 = 21/13, x2 = 10/13, Min. Z = 31/13
20. x1 = 2, x2 = 0, Max. Z = 6
21. x1 = 0, x2 = 5, Max. Z = 40
22. unbounded
23. no feasible solution
24. x1 = 5/4, x2 = 0, x3 = 3/4, Min. Z = 75/8
25. x1 = 3/5, x2 = 6/5, Max. Z = −12/5
26. x1 = 3, x2 = 0, Max. Z = 9
27. x1 = −6/5, x2 = −6/5, Max. Z = −48/5
28. x1 = 11/2, x2 = 9/2, Max. Z = 89/2
29. x1 = 4500/13, x2 = 2100/13, Min. Z = 2700
30. x1 = 0, x2 = 27, Max. Z = 54
31. x1 = 4, x2 = 0, x3 = 1, Max. Z = 11

32. x1 = 14, x2 = 33, Min. Z = 254
33. y1 = 4, y2 = 1, Min. Z = 11
34. x1 = 3/4, x2 = 0, x3 = 3/4, Min. Z = 3
35. x1 = 16/11, x2 = −2/11
36. x1 = 13/7, x2 = −2/7
37. x1 = 2, x2 = −1

38. (i) (1/4) | −2   2 |           (ii) (1/11) |  1   2 |
              |  3  −1 |                       |  4  −3 |

(iii) (1/4) |  5   3  −2 |         (iv) (1/34) |  9  −1 |
            |  0   4   0 |                     | −2   4 |
            | −8  −8   4 |

39. Food A = 15 gm, food B = 20 gm, total cost = ` 25.
40. 2, 14; min. cost = ` 116.
41. 200/11 units of type A, 1000/11 units of type B, max. profit = ` 46,000/11.

Multiple Choice Questions


1. (b) 2. (c)
3. (c) 4. (a)

Fill in the Blank


1. George Dantzig 2. optimal
3. slack 4. surplus
5. unbounded 6. (−1)
7. non-negative 8. optimal
9. feasible 10. unique
11. unbounded

True/False
1. True 2. True 3. True
4. False
Unit-3

Chapter-5: Resolution of Degeneracy

Chapter-6: Revised Simplex Method

Chapter-7: Sensitivity Analysis



5.1 Introduction

In chapter 2 we defined that a B.F.S. of a L.P. problem is said to be a degenerate B.F.S. if at least one of the basic variables is zero. So far we have considered L.P. problems in which the minimum ratio rule yields only one vector to be deleted from the basis. But there are L.P. problems in which we get more than one vector that may be deleted from the basis.

Thus, if min { xBi / yik , yik > 0 } (αk being the incoming vector) occurs at i = i1, i2, ..., is,

i.e., the minimum occurs for more than one value of i, then the problem is to select the vector to be deleted from the basis. In such cases if we choose one vector, say βi (i being one of i1, i2, ..., is), and delete it from the basis, then the next solution (obtained at this iteration) may be a degenerate B.F.S. Such a problem is called the problem of degeneracy. It can be seen that when the simplex method is applied to a degenerate B.F.S. to get a new B.F.S., the value of the objective function may remain unchanged, i.e., the value of the objective function is not improved. In some cases, due to the presence of degeneracy, the same sequence of simplex tables is repeated for ever, without ever reaching the optimal solution. This phenomenon is called cycling.

The procedure which prevents cycling within the simplex routine, so that an optimal solution is obtained in a finite number of steps, is called the resolution of degeneracy.

5.2 Conditions for the Occurrence of Degeneracy in a L.P.P.
[Kanpur 2010, 11; Meerut 2001 (BP), 06 (BP)]

In a L.P.P. degeneracy may appear in the following two ways :

1. Degeneracy appears in a L.P.P. at the very first iteration when some component of the vector b, i.e., some bi, is zero.

2. If none of the components of b is zero at some iteration but the choice of the outgoing vector βr at that iteration is not unique, the next solution is bound to be degenerate.

5.3 Computational Procedure to Resolve Degeneracy by Charnes' Perturbation Method

If Yk (= αk) is the incoming vector (i.e., the vector entering the basis) and Min { xBi / yik , yik > 0 } is not unique, i.e., the same minimum ratio xBi/yik, yik > 0, occurs for more than one value of i, say in the i1-th, i2-th, i3-th, ... rows, then let the set I1 = {i1, i2, i3, ...}.

Then, to select the vector to be deleted from the basis (i.e., the outgoing vector), proceed as follows :

1. Denote the columns of the unit matrix in proper order (i.e., the columns corresponding to the basic variables in proper order) in the simplex table by Ȳ1, Ȳ2, etc. Thus,

Ȳ1 = [1, 0, 0, ..., 0]ᵗ, Ȳ2 = [0, 1, 0, ..., 0]ᵗ, etc.

2. Compute Min (i ∈ I1) { (element of Ȳ1 in i-th row) / (element of Yk in i-th row) }.

If this minimum is unique, say attained for i = ir, then the ir-th vector is taken as the outgoing vector, and the element where this ir-th row is intersected by the column Yk is taken as the key element. If this minimum is not unique, then let I2 be the set of all those values of i ∈ I1 for which there is a tie; obviously I2 ⊂ I1. Then proceed to the next step.

3. Compute Min (i ∈ I2) { (element of Ȳ2 in i-th row) / (element of Yk in i-th row) }.

If this minimum is unique, say attained for i = is, then the is-th vector is taken as the outgoing vector, and the element where this is-th row is intersected by the column Yk is taken as the key element.
If this minimum is also not unique, then let I3 be the set of those values of i ∈ I2 for which there is a tie; obviously I3 ⊂ I2 ⊂ I1. Then proceed to the next step.

4. Compute Min (i ∈ I3) { (element of Ȳ3 in i-th row) / (element of Yk in i-th row) }.

If this minimum is unique, say attained for i = it ∈ I3, then the it-th vector is taken as the outgoing vector, and the element where this it-th row is intersected by the column Yk is taken as the key element.
If this minimum is also not unique, then proceed similarly.
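A small sketch of this tie-breaking routine (the helper name and the 0-based row indices are our own, not the text's): among the tied rows of I1, the ratios (element of Ȳ1)/(element of Yk), then (element of Ȳ2)/(element of Yk), and so on, are compared until a unique minimizer survives:

# Charnes-style tie-breaking, as in the steps above (hypothetical helper).
def resolve_degeneracy(tied_rows, unit_cols, pivot_col):
    """tied_rows: rows attaining the minimum ratio (the set I1, 0-based);
    unit_cols: the columns Y-bar-1, Y-bar-2, ... in proper order;
    pivot_col: the incoming column Yk.  Returns the outgoing row."""
    candidates = sorted(tied_rows)
    for col in unit_cols:                      # step 2, step 3, step 4, ...
        ratios = {i: col[i] / pivot_col[i] for i in candidates}
        best = min(ratios.values())
        candidates = [i for i in candidates if ratios[i] == best]
        if len(candidates) == 1:               # unique minimum: done
            break
    return candidates[0]

# Example 3 below: rows 0 and 1 tie with ratio 2; the unit columns are
# e1, e2, e3 and the incoming column is Y1 = (1, 5, 3).  Row 1 (the Y4
# row) must leave, as found by hand there.
print(resolve_degeneracy({0, 1},
                         [(1, 0, 0), (0, 1, 0), (0, 0, 1)],
                         (1, 5, 3)))           # 1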

¤ Problems of Degeneracy at the Initial Stage

Example 1: Solve the following L.P.P. (problem with some xBi = 0 for which yik > 0).

Max. Z = 2 x1 + 3 x2 + 10 x3

subject to x1 + 2 x3 = 0

x2 + x3 = 1

x1, x2 , x3 ≥ 0

Solution: The given L.P.P. can be written as

Max. Z = 2x1 + 3x2 + 10x3

subject to 1.x1 + 0.x2 + 2x3 = 0
0.x1 + 1.x2 + 1.x3 = 1.

Here the basis matrix I2 (formed by the coefficients of x1 and x2) exists, so there is no need to introduce artificial variables.

Taking x3 (non-basic variable) = 0, we have x1 = 0, x2 = 1, so the starting B.F.S. is x1 = 0, x2 = 1, x3 = 0. This B.F.S. is degenerate, as the basic variable x1 = 0.

The solution by simplex method is given in the following table :

cj :                2      3     10
B      cB     xB     Y1     Y2     Y3     Min. ratio (xB/Y3, yi3 > 0)
Y1      2      0      1      0      2     0/2 (min.) →
Y2      3      1      0      1      1     1/1
Z = 3         Δj :    0      0      3     ↑ (Y3 enters)

Y3     10      0     1/2     0      1
Y2      3      1    −1/2     1      0
Z = 3         Δj :  −3/2     0      0

Here no Δj > 0, so this solution is optimal.

Hence, the optimal solution is x1 = 0, x2 = 1, x3 = 0, Max. Z = 3, which is degenerate, as the basic variable x3 = 0.

Note : In this example it is important to note that we have obtained an optimal degenerate solution from a degenerate solution, without any improvement in the value of Z.

Example 2: Solve the following L.P.P. (problem with some x Bi = 0 for which yik < 0).

Max. Z = 2 x1 + 3 x2 + 10 x3

subject to x1 − 2 x3 = 0

x2 + x3 = 1

x1, x2 , x3 ≥ 0 .

Solution: The given L.P.P. can be written as

Max. Z = 2x1 + 3x2 + 10x3

subject to 1.x1 + 0.x2 − 2x3 = 0
0.x1 + 1.x2 + 1.x3 = 1.

Here the basis matrix I2 (formed by the coefficients of x1 and x2) exists, so there is no need to introduce artificial variables. Taking x3 (non-basic variable) = 0, we get x1 = 0, x2 = 1, so the starting B.F.S. is x1 = 0, x2 = 1, x3 = 0, which is degenerate as the basic variable x1 = 0.

The solution by simplex method is given in the following table :

cj :                2      3     10
B      cB     xB     Y1     Y2     Y3     Min. ratio (xB/Y3, yi3 > 0)
Y1      2      0      1      0     −2     —
Y2      3      1      0      1      1     1/1 (min.) →
Z = 3         Δj :    0      0     11     ↑ (Y3 enters)

Y1      2      2      1      2      0
Y3     10      1      0      1      1
Z = 14        Δj :    0    −11      0

Here Δ2 = c2 − cB·Y2 = 3 − (2, 10)·(2, 1) = −11, and Δ1 = 0 = Δ3.

This solution x1 = 2, x2 = 0, x3 = 1, Max. Z = 14 is the optimal solution (non-degenerate).

Note : In this example it may be noted that a non-degenerate optimal solution is obtained from a degenerate B.F.S., and the value of the objective function is also improved.

Example 3: Solve the L.P.P.

Max. Z = 5 x1 + 3 x2

subject to x1 + x2 ≤ 2

5 x1 + 2 x2 ≤ 10

3 x1 + 8 x2 ≤ 12

and x1, x2 ≥ 0 .

Solution: Introducing the slack variables x3, x4, x5, the given L.P.P. can be written as

Max. Z = 5x1 + 3x2

subject to x1 + x2 + x3 = 2
5x1 + 2x2 + x4 = 10
3x1 + 8x2 + x5 = 12

and x1, x2, ..., x5 ≥ 0.

Taking x1 = 0, x2 = 0, we have x3 = 2, x4 = 10, x5 = 12, which is the starting B.F.S.

Starting Simplex Table

cj :                5      3      0      0      0
B      cB     xB     Y1     Y2     Y3     Y4     Y5     Min. ratio (xB/Y1, yi1 > 0)
                    (Ȳ4)   (Ȳ5)   (Ȳ1)   (Ȳ2)   (Ȳ3)
Y3      0      2      1      1      1      0      0     2/1 = 2
Y4      0     10      5      2      0      1      0     10/5 = 2 →
Y5      0     12      3      8      0      0      1     12/3 = 4
Z = 0         Δj :    5      3      0      0      0     ↑ (Y1 enters)

Here Δ1 = c1 − cB·Y1 = 5, Δ2 = c2 − cB·Y2 = 3, Δ3 = 0 = Δ4 = Δ5, which are positive or zero, therefore this solution is not optimal.

Since Max. Δj = 5 = Δ1,

∴ the entering (incoming) vector is Y1 (= α1).

By the minimum ratio rule we find that the minimum is not unique but occurs for i = 1 and i = 2 both.

Thus the problem is a problem of degeneracy.

∴ To select the vector to be deleted from the basis, we proceed as explained in article 5.3.

1. First of all we re-number the columns, starting from the identity matrix, in the proper order as Ȳ1, Ȳ2, Ȳ3, Ȳ4, Ȳ5.

∴ Ȳ1 = Y3, Ȳ2 = Y4, Ȳ3 = Y5, Ȳ4 = Y1, Ȳ5 = Y2.

2. Since the minimum ratio occurs for i = 1 and 2,

∴ I1 = {1, 2}, and the incoming vector is Ȳ4 (= Y1).

∴ Compute Min (i ∈ I1) { (element of Ȳ1 in i-th row) / (element of Ȳ4 in i-th row) }

= Min { (element of Ȳ1 in 1st row)/(element of Ȳ4 in 1st row), (element of Ȳ1 in 2nd row)/(element of Ȳ4 in 2nd row) }

= Min { 1/1, 0/5 } = 0/5.

This minimum is unique and corresponds to i = 2.

∴ The vector in the second row, i.e., Y4, is to be deleted, and the key element is y21 = 5. Further computation by simplex method is shown in the following table.

cj :                5      3      0      0      0
B      cB     xB     Y1     Y2     Y3     Y4     Y5     Min. ratio (xB/Y2, yi2 > 0)
Y3      0      0      0     3/5     1    −1/5     0     0 (min.) →
Y1      5      2      1     2/5     0     1/5     0     5
Y5      0      6      0    34/5     0    −3/5     1     15/17
Z = 10        Δj :    0      1      0     −1      0     ↑ (Y2 enters)

Y2      3      0      0      1     5/3   −1/3     0
Y1      5      2      1      0    −2/3    1/3     0
Y5      0      6      0      0   −34/3    5/3     1
Z = 10        Δj :    0      0    −5/3   −2/3     0

Since no Δj > 0, ∴ the solution is optimal.

∴ The optimal solution is x1 = 2, x2 = 0 and Max. Z = 10.

Example 4: Solve the L.P.P.

Max. Z = 2 x1 + x2 ,

subject to 4 x1 + 3 x2 ≤ 12

4 x1 + x2 ≤ 8

4 x1 − x2 ≤ 8 and x1, x2 ≥ 0 [Kanpur 2008]

Solution: Introducing the slack variables x3, x4 and x5, the given L.P.P. can be written as

Max. Z = 2x1 + x2

subject to 4x1 + 3x2 + x3 = 12
4x1 + x2 + x4 = 8
4x1 − x2 + x5 = 8

and x1, x2, x3, x4, x5 ≥ 0.

Taking x1 = 0, x2 = 0, we have x3 = 12, x4 = 8, x5 = 8, which is the starting B.F.S.

∴ The starting simplex table is

cj :                2      1      0      0      0
B      cB     xB     Y1     Y2     Y3     Y4     Y5     Min. ratio (xB/Y1, yi1 > 0)
                    (Ȳ4)   (Ȳ5)   (Ȳ1)   (Ȳ2)   (Ȳ3)
Y3      0     12      4      3      1      0      0     12/4 = 3
Y4      0      8      4      1      0      1      0     8/4 = 2
Y5      0      8      4     −1      0      0      1     8/4 = 2 →
Z = 0         Δj :    2      1      0      0      0     ↑ (Y1 enters)

Here Δ1 = c1 − cB·Y1 = 2, Δ2 = c2 − cB·Y2 = 1, Δ3 = 0 = Δ4 = Δ5, which are positive or zero, therefore the solution is not optimal.

Since Max. Δj = 2 = Δ1,

∴ the entering (incoming) vector is Y1 (= α1).

By the minimum ratio rule we find that the minimum is not unique but occurs for i = 2 and i = 3 both.

Thus the problem is a problem of degeneracy.

∴ To select the vector to be deleted from the basis, we proceed as explained in article 5.3.

1. First of all we re-number the columns of the above table, starting from the identity matrix, in the proper order as Ȳ1, Ȳ2, Ȳ3, Ȳ4, Ȳ5.

∴ Let Ȳ1 = Y3 = β1, Ȳ2 = Y4 = β2, Ȳ3 = Y5 = β3, Ȳ4 = Y1, Ȳ5 = Y2.

2. Since the minimum ratio occurs for i = 2 and 3, ∴ I1 = {2, 3}, and the incoming vector is Ȳ4 (= Y1).

∴ Compute Min (i ∈ I1) { (element of Ȳ1 in i-th row) / (element of Ȳ4 in i-th row) }

= Min { 0/4, 0/4 } = Min {0, 0},   (i = 2, 3)

Since this minimum is not unique and there is a tie at i = 2, 3,

∴ let I2 = {2, 3} ⊆ I1, and proceed to the next step.

Compute Min (i ∈ I2) { (element of Ȳ2 in i-th row) / (element of Ȳ4 in i-th row) }

= Min { 1/4, 0/4 },   (i = 2, 3)

= 0,

i.e., the minimum occurs at i = 3. ∴ The vector in the 3rd row, i.e., Y5, is the outgoing vector, and the key element is y31 = 4. Thus we enter Y1 in place of Y5.
The computational work by simplex method is shown in the following table.

cj :                2      1      0      0      0
B      cB     xB     Y1     Y2     Y3     Y4     Y5     Min. ratio (xB/Y2, yi2 > 0)
Y3      0      4      0      4      1      0     −1     4/4 = 1
Y4      0      0      0      2      0      1     −1     0/2 = 0 (min.) →
Y1      2      2      1    −1/4     0      0     1/4    —
Z = 4         Δj :    0     3/2     0      0    −1/2    ↑ (Y2 enters)

                                                        (xB/Y5, yi5 > 0)
Y3      0      4      0      0      1     −2      1     4/1 = 4 (min.) →
Y2      1      0      0      1      0     1/2   −1/2    —
Y1      2      2      1      0      0     1/8    1/8    2/(1/8) = 16
Z = 4         Δj :    0      0      0    −3/4    1/4    ↑ (Y5 enters)

Y5      0      4      0      0      1     −2      1
Y2      1      2      0      1     1/2   −1/2     0
Y1      2     3/2     1      0    −1/8    3/8     0
Z = 5         Δj :    0      0    −1/4   −1/4     0

Since no Δj > 0, ∴ the solution is optimal.

∴ The optimal solution is x1 = 3/2, x2 = 2, and Max. Z = 5.

Example 5: Solve the following L.P.P. by simplex method.

Max. Z = 5 x1 − 2 x2 + 3 x3

subject to 2 x1 + 2 x2 − x3 ≥ 2

3 x1 − 4 x2 ≤ 3

x2 + 3 x3 ≤ 5

and x1, x2 , x3 ≥ 0 [Meerut 2006, 06 (BP)]

Solution: Introducing the surplus variable x4, the slack variables x5, x6 and the artificial variable xa, the given L.P.P. can be written as

Max. Z = 5x1 − 2x2 + 3x3 + 0.x4 + 0.x5 + 0.x6 − M.xa

subject to 2x1 + 2x2 − x3 − x4 + xa = 2
3x1 − 4x2 + x5 = 3
x2 + 3x3 + x6 = 5

and x1, x2, ..., x6, xa ≥ 0.

Taking x1 = 0 = x2 = x3 = x4, we get xa = 2, x5 = 3, x6 = 5, which is the starting B.F.S.

The computation of the solution by simplex method is given in the following tables.

cj :                 5       −2        3       0      0      0     −M
B      cB     xB      Y1       Y2       Y3      Y4     Y5     Y6     A     Min. ratio (xB/Y1, yi1 > 0)
A      −M      2       2        2       −1      −1      0      0     1     2/2 = 1 →
Y5      0      3       3       −4        0       0      1      0     0     3/3 = 1
Y6      0      5       0        1        3       0      0      1     0     —
Z = −2M       Δj : 5 + 2M   −2 + 2M   3 − M     −M      0      0     0     ↑ (Y1 enters)

Here we note that Max. Δj = Δ1, so Y1 is the incoming vector, and by the minimum ratio rule we find the same minimum ratio in row 1 and row 2. So it is a case of degeneracy. But here in the first row we have the artificial vector A; giving preference to A to leave the basis, we choose A as the outgoing vector. Here there is no need to apply the rule for resolving degeneracy.

Thus, entering Y1 in place of A in the basis, y11 = 2 is the key element; further computations are shown in the following table :

cj :                5     −2      3      0      0      0
B      cB     xB     Y1     Y2     Y3     Y4     Y5     Y6     Min. ratio (xB/Y3, yi3 > 0)
Y1      5      1      1      1    −1/2   −1/2     0      0     —
Y5      0      0      0     −7     3/2    3/2     1      0     0 (min.) →
Y6      0      5      0      1      3      0      0      1     5/3
Z = 5         Δj :    0     −7    11/2    5/2     0      0     ↑ (Y3 enters)

                                                               (xB/Y2, yi2 > 0)
Y1      5      1      1    −4/3     0      0     1/3     0     —
Y3      3      0      0   −14/3     1      1     2/3     0     —
Y6      0      5      0     15      0     −3     −2      1     5/15 = 1/3 (min.) →
Z = 5         Δj :    0    56/3     0     −3   −11/3     0     ↑ (Y2 enters)

                                                               (xB/Y4, yi4 > 0)
Y1      5    13/9     1      0      0    −4/15   7/45   4/45   —
Y3      3    14/9     0      0      1     1/15   2/45  14/45   70/3 (min.) →
Y2     −2     1/3     0      1      0    −1/5   −2/15   1/15   —
Z = 101/9     Δj :    0      0      0    11/15 −53/45 −56/45   ↑ (Y4 enters)

Y1      5    23/3     1      0      4      0     1/3    4/3
Y4      0    70/3     0      0     15      1     2/3   14/3
Y2     −2      5      0      1      3      0      0      1
Z = 85/3      Δj :    0      0    −11      0    −5/3  −14/3

Since no Δj > 0, this solution is optimal. Hence the optimal solution of the given problem is x1 = 23/3, x2 = 5, x3 = 0 and Max. Z = 85/3.

1. What is degeneracy ? Discuss a method to resolve degeneracy in L.P.P.

2. What do you mean by degeneracy ? Discuss in detail the necessary and sufficient
conditions for the occurrence of degeneracy.

3. What are the problems caused by degeneracy ? Describe a procedure to avoid these
problems.

4. Explain what is meant by degeneracy and cycling in linear programming. How can their effects be overcome ?

5. Prove that the degeneracy may appear in a L.P.P. at the very first iteration when
some component of vector b [i.e., bi] is zero.
Solve the following L.P.P.

6. Max. Z = 3 x1 + 9 x2 7. Max. Z = 3 x1 + 5 x2

subject to x1 + 4 x2 ≤ 8 subject to x1 + x3 = 4

x1 + 2 x2 ≤ 4 x2 + x4 = 6
and x1, x2 ≥ 0 3 x1 + 2 x2 + x5 = 12
and x i ≥ 0, i = 1, 2,...,5.

8. Max. Z = 3 x1 + 5 x2 + 4 x3 9. Max. Z = x1 − x2 + 3 x3

subject to 2 x1 + 3 x3 ≤ 18 subject to x1 + x2 + x3 ≤ 10

2 x2 + 5 x3 ≤ 18 2 x1 − x3 ≤ 2

3 x1 + 2 x2 + 4 x3 ≤ 25 2 x1 − 2 x2 + 3 x3 ≤ 0
and x1, x2 , x3 ≥ 0 and x1, x2 , x3 ≥ 0

10. Max. Z = 6 x1 + 3 x2 + 5              11. Max. Z = (3/4) x1 − 20 x2 + (1/2) x3 − 6 x4
subject to x1 + 3 x2 ≤ 9                  subject to
x1 + x2 ≥ 5                               (1/4) x1 − 8 x2 − x3 + 9 x4 ≤ 0
2 x1 + x2 ≤ 8                             (1/2) x1 − 12 x2 − (1/2) x3 + 3 x4 ≤ 0
and x1, x2 ≥ 0                            0. x1 + 0. x2 + 1. x3 + 0. x4 ≤ 1
                                          and x1, x2, x3, x4 ≥ 0

Multiple Choice Questions


1. If one or more of the basic variable in a B.F.S. is zero then the solution is called :
(a) Non-degenerate (b) Degenerate
(c) Optimal (d) None of these
2. If the choice of outgoing vector at any iteration in simplex method is not unique
then the next solution is bound to :
(a) Degenerate (b) Optimal
(c) Non-degenerate (d) None of these
3. When the simplex method is applied to a degenerate B.F.S. to get a new B.F.S., the
value of the objective function :
(a) Will increase (b) Will decrease
(c) May remain unchanged (d) None of these

Fill in the Blank


1. In a L.P.P. a B.F.S. is degenerate B.F.S. if at least one of the ................. is zero.
2. The procedure which prevent cycling within the simplex routine is called the
resolution of ................. .
3. An optimal degenerate solution may be obtained from an ................. solution.
4. A non-degenerate optimal solution may be obtained from a degenerate ................. .
5. The method to resolve degeneracy is ................. method.

Answers
6. x1 = 0, x2 = 2, Z = 18
7. x1 = 0, x2 = 6, x3 = 4, x4 = 0, x5 = 0, Z = 30
8. x1 = 7/3, x2 = 9, x3 = 0, Z = 52
9. x1 = 0, x2 = 6, x3 = 4, Z = 6
10. x1 = 3, x2 = 2, Z = 29
11. x1 = 1, x2 = 0, x3 = 1, x4 = 0, Z = 5/4

Multiple Choice Questions


1. (b) 2. (a)
3. (c)

Fill in the Blank


1. basic variables 2. degeneracy
3. degenerate 4. B.F.S.
5. Charnes' perturbation

6.1 Introduction

The revised simplex method is an efficient computational procedure for solving linear programming problems on digital computers. The revised simplex method solves a linear programming problem in the same way as the simplex method; the "revised" aspect concerns the procedure of changing tableaux. In the revised simplex method we compute and store only the information that is currently needed.

There are two standard forms for the revised simplex method.

Standard Form I : In standard form I, it is assumed that an identity matrix is present


after introducing the slack and surplus variables only. That is, here in this form the
artificial variables are not needed.

Standard Form II : In standard form II, it is assumed that the artificial variables are
needed for getting an initial identity matrix. In this case two phase method of ordinary
simplex method will be used in a slightly different way to handle artificial variables.

6.2 Revised Simplex Method in Standard Form I
(Formulation of a L.P.P. in the Form of Revised Simplex)

In the revised simplex method the objective function is treated as if it were another constraint. Whereas in the simplex method we deal with an m-dimensional basis, in the revised simplex method we deal with an (m + 1)-dimensional basis in standard form I and with an (m + 2)-dimensional basis in standard form II.

The L.P.P. in its standard form is

Max. Z = c1x1 + c2x2 + ... + cnxn

subject to a11x1 + a12x2 + ... + a1nxn ≤ b1
a21x1 + a22x2 + ... + a2nxn ≤ b2
..... ..... ..... ..... .....
am1x1 + am2x2 + ... + amnxn ≤ bm

and xi ≥ 0, i = 1, 2, ..., n.

Considering the objective function as an additional constraint, in which Z is to be as large as possible and unrestricted in sign, and introducing the slack and surplus variables, we get the following (m + 1) constraints :

Z − c1x1 − c2x2 − ... − cnxn − 0.xn+1 − ... − 0.xn+m = 0
a11x1 + a12x2 + ... + a1nxn + xn+1 = b1
... ... ... ... ...                                                  ...(1)
am1x1 + am2x2 + ... + amnxn + xn+m = bm

Thus we have to find the solution of the system (1) of (m + 1) simultaneous equations in the n + m + 1 variables Z, x1, x2, ..., xn, xn+1, ..., xn+m, such that Z is as large as possible and unrestricted in sign.

In more symmetric notation, the equations (1) can be written as follows :

1.x0 + a01x1 + a02x2 + ... + a0nxn + a0,n+1 xn+1 + ... + a0,n+m xn+m = 0
0.x0 + a11x1 + a12x2 + ... + a1nxn + 1.xn+1 + ... + 0.xn+m = b1
... ... ... ... ...                                                  ...(2)
0.x0 + am1x1 + am2x2 + ... + amnxn + 0.xn+1 + ... + 1.xn+m = bm

where Z = x0 and −cj = a0j, j = 1, 2, ..., n + m.

In matrix form, the system (2) can be written as

| 1   a01  a02  ...  a0n   a0,n+1  ...  a0,n+m |   | x0   |   | 0  |
| 0   a11  a12  ...  a1n     1     ...    0    |   | x1   |   | b1 |
| ..  ...  ...  ...  ...    ...    ...   ...   | . | ...  | = | .. |
| 0   am1  am2  ...  amn     0     ...    1    |   | xn+m |   | bm |

or  | 1  a0 | | Z |   | 0 |
    | 0   A | | x | = | b |                                          ...(3)

where a0 = (a01, a02, ..., a0,n+m) (an (n + m)-dimensional row vector),

x = [x1, x2, ..., xn+m] (column vector of the given, slack and surplus variables),
b = [b1, b2, ..., bm] (column vector of requirements),
0 = [0, 0, ..., 0] (m-dimensional column vector of zeros),

and A = the m × (n + m) matrix of the coefficients of the given, slack and surplus variables.

In terms of the usual notation, (3) can be written as

| 1  −c | | Z |   | 0 |
| 0   A | | x | = | b |                                              ...(4)

since a0 = (a01, a02, ..., a0,n+m) = (−c1, −c2, ..., −cn, 0, 0, ..., 0) = −c.

Equations (1), (2) and (4) are referred to as standard form I for the revised simplex method.

6.3 Notation for Standard Form I

We have A = (α1, α2, ..., αn+m),

where α1, α2, ..., αn+m are the m-dimensional column vectors of A.

We shall use the superscript (1) on all vectors to indicate that they have (m + 1) components in standard form I.

Thus, (i) corresponding to each vector αj of A, we define a new (m + 1)-component vector αj(1) given by

αj(1) = [−cj, a1j, a2j, ..., amj] = [−cj, αj] = [a0j, αj], j = 1, 2, ..., (n + m).

We can also write

αj(1) = | −cj |  =  | a0j |
        |  αj |     |  αj |

(ii) Corresponding to the m-component vector b, we define the (m + 1)-component vector

b(1) = [0, b1, b2, ..., bm] = [0, b].

(iii) The (m + 1)-component column vector corresponding to Z (or x0) will be denoted by e1.
(iv) The basis matrix in revised simplex method, standard form I, will be denoted by B1. The subscript 1 indicates that the matrix B1 is of order (m + 1) × (m + 1). e1 (defined in (iii)) will always be the first column of B1, and the remaining m columns are any m of the αj(1), j = 1, 2, ..., (n + m), which are linearly independent; these are denoted by βi(1), i = 1, 2, ..., m.

∴ B1 = (e1, β1(1), β2(1), ..., βm(1)) = (β0(1), β1(1), β2(1), ..., βm(1))

     | 1   −cB1  −cB2  ...  −cBm |
     | 0    β11   β12  ...   β1m |
   = | 0    β21   β22  ...   β2m |
     | ..   ...   ...  ...   ... |
     | 0    βm1   βm2  ...   βmm |

   = | 1  −CB |
     | 0    B |

where CB = [cB1, cB2, ..., cBm], the cBi, i = 1, 2, ..., m, being the coefficients in the objective function Z of the basic variables xBi, i = 1, 2, ..., m (for Ax = b),

and B = | β11  β12  ...  β1m |
        | β21  β22  ...  β2m |
        | ...  ...  ...  ... |
        | βm1  βm2  ...  βmm |  = (β1, β2, ..., βm), the basis matrix for Ax = b.

6.4 To Find the Inverse of the Basis Matrix (i.e., B1⁻¹) and the Basic Solution in the Standard Form I

1. To find B1⁻¹ : From 6.3 we have

B1 = | 1  −CB |
     | 0    B |

Since B is non-singular, B⁻¹ exists; using article 1.15, the inverse of the matrix B1 is given by

B1⁻¹ = | 1   CB B⁻¹ |         (comparing with article 1.15, we see
       | 0     B⁻¹  |          that here I = 1, R = B and Q = −CB)

Note : We have seen that we always start with B = Im (the m × m identity matrix).

∴ B⁻¹ = Im⁻¹ = Im, and then

B1⁻¹ = | 1   CB·Im |  =  | 1   CB |
       | 0     Im  |     | 0   Im |

Also, if after ensuring that all bi ≥ 0 only the slack variables are added and B = Im, then CB = (cB1, cB2, ..., cBm) = (0, 0, ..., 0) = 0, and then

B1⁻¹ = | 1   0 |  = Im+1.
       | 0  Im |
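A quick numeric check of this block form (NumPy assumed; the B and CB used here are illustrative, not from any example of the text):

# Check: for B1 = [[1, -C_B], [0, B]], B1^(-1) = [[1, C_B B^(-1)], [0, B^(-1)]].
import numpy as np

B  = np.array([[2.0, 1.0],
               [1.0, 3.0]])                    # any non-singular basis B
CB = np.array([[4.0, 5.0]])                    # row vector of basic costs
Binv = np.linalg.inv(B)

B1    = np.block([[np.eye(1), -CB], [np.zeros((2, 1)), B]])
B1inv = np.block([[np.eye(1), CB @ Binv], [np.zeros((2, 1)), Binv]])
print(np.allclose(B1 @ B1inv, np.eye(3)))      # True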

2. To find Yj(1) for αj(1) not in the basis matrix B1

Since any αj(1) not in the basis matrix B1 can be expressed as a linear combination of the column vectors of B1, we have

αj(1) = y0j β0(1) + y1j β1(1) + ... + ymj βm(1)
      = (β0(1), β1(1), ..., βm(1)) . [y0j, y1j, ..., ymj]
      = B1 Yj(1)

∴ Yj(1) = B1⁻¹ αj(1),                                                 ...(1)

where Yj(1) = [y0j, y1j, ..., ymj],

or Yj(1) = | 1   CB B⁻¹ | | −cj |   | −cj + CB B⁻¹ αj |
           | 0     B⁻¹  | |  αj | = |      B⁻¹ αj     |

         = | −cj + CB Yj |          (since B⁻¹ αj = Yj = [y1j, y2j, ..., ymj])
           |      Yj     |

or Yj(1) = | −cj + Zj |             (since Zj = CB Yj)
           |     Yj   |

or Yj(1) = | −Δj |                  (since Δj = cj − Zj)              ...(2)
           |  Yj |

It is important to note that the first component of Yj(1) is Zj − cj = −Δj, which is used to test the solution for optimality.

It is clear from (1) and (2) that

−Δj = Zj − cj = (first row of B1⁻¹) × (αj(1) not in the basis B1).     ...(3)

Thus the great advantage of treating the objective function as one of the constraints is that −Δj = Zj − cj for any αj(1) not in the basis B1 can be easily computed by taking the product of the first row of B1⁻¹ with αj(1).

3. To find the initial basic solution

If xB(1) is the basic solution corresponding to the basis matrix B1, then

xB(1) = B1⁻¹ b(1) = | 1   CB B⁻¹ | | 0 |   | CB (B⁻¹ b) |
                    | 0     B⁻¹  | | b | = |    B⁻¹ b   |

                  = | CB xB |               (since B⁻¹ b = xB)
                    |   xB  |

or xB(1) = | CB xB |  =  | Z  |                                       ...(4)
           |   xB  |     | xB |

xB(1) given by (4) is a basic solution, not necessarily a B.F.S., since Z may also be negative. From (4) we conclude that the first component of xB(1) immediately gives us the value of the objective function, while the second component xB gives the B.F.S. (corresponding to the basis B) of the original constraint system Ax = b.

6.5 Computational Procedure of the Revised Simplex Method in Standard Form I

Step 1 : If the problem is one of minimization, convert it into a maximization problem.

Step 2 : Express the given problem in revised simplex form, standard I.

After ensuring that all bi ≥ 0, express the given problem in revised simplex form standard I as in article 6.2.

Step 3 : Find the initial basic feasible solution and the basis matrix B1.

In this step we proceed to obtain the initial basis matrix B1 as an identity matrix.

Then the initial solution is given by

xB(1) = [0, b1, b2, ..., bm].

Step 4 : Construction of the starting table for revised simplex method

Now we construct the simplex table for the revised simplex method as follows :

Variables        B1⁻¹                              Solution           Yk(1)            Min. ratio
in the basis     e1   β1(1)  β2(1)  ...  βm(1)     xB(1) = B1⁻¹ b(1)  = B1⁻¹ αk(1)     Min { xBi/yik, yik > 0 }

Z                1     0      0     ...   0         0                 Zk − ck = −Δk
xB1              0     1      0     ...   0         b1                y1k
xB2              0     0      1     ...   0         b2                y2k
...              ..    ..     ..    ...   ..        ...               ...
xBm              0     0      0     ...   1         bm                ymk

Step 5 : Test of optimality

This is done by computing Δj for all αj(1) not in the basis B1 by the formula

Δj = −(first row of B1⁻¹) × αj(1).

The B.F.S. is optimal iff all Δj ≤ 0.

Step 6 : To find the vectors entering (incoming) and leaving (outgoing) the basis

(i) To find the incoming vector : The incoming vector is taken as αk(1) if Δk = Max {Δj} over those j for which αj(1) is not in the basis B1.

(ii) To find the outgoing vector : First compute Yk(1) by the formula

Yk(1) = B1⁻¹ αk(1) = [−Δk, y1k, y2k, ..., ymk].

The vector βr(1) to be removed from the basis is determined by the minimum ratio rule. It is taken corresponding to that value of r for which

xBr/yrk = Min { xBi/yik, yik > 0 },

where αk(1) is the incoming vector and Yk(1) is the column vector corresponding to αk(1).

Step 7 : Determination of the improved solution

When αk(1) is the incoming vector and βr(1) the outgoing vector, the element yrk is called the key element.

In order to bring αk(1) in place of βr(1) we proceed as in the ordinary simplex method. Bringing αk(1) in place of βr(1), we construct the new (revised) simplex table. In this way we get an improved B.F.S.

Step 8 : Now test the above improved B.F.S. for optimality as in step 5.

If this solution is not optimal, repeat steps 6 and 7 until an optimal solution is finally obtained.

For a clear understanding of the procedure, a few illustrative examples are given here.
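First, the whole procedure in one compact sketch (NumPy assumed; the function name, tolerances and the elementary "eta" update of B1⁻¹ are our choices, not the text's notation). It keeps B1⁻¹ explicitly and updates it at each pivot instead of rewriting whole tableaux, in the spirit of steps 1 to 8; run on the data of Example 1 below, it reproduces Max. Z = 5:

# Revised simplex, standard form I (NumPy assumed): Max c.x, A x <= b,
# x >= 0, all b_i >= 0.  B1^(-1) is maintained explicitly.
import numpy as np

def revised_simplex(c, A, b):
    m, n = A.shape
    # (m+1)-row system: first row is the objective constraint Z - c.x = 0.
    A1 = np.vstack([np.r_[-c, np.zeros(m)],            # a0 = -c, slacks cost 0
                    np.hstack([A, np.eye(m)])])
    A1 = np.hstack([np.r_[1.0, np.zeros(m)][:, None], A1])   # column e1 for Z
    b1 = np.r_[0.0, b]
    basis = [0] + list(range(n + 1, n + m + 1))        # Z and the slack columns
    B1inv = np.eye(m + 1)                              # basis starts as identity
    while True:
        xB1 = B1inv @ b1                               # (Z, x_B) = B1^(-1) b^(1)
        # Delta_j = -(first row of B1inv) . alpha_j^(1) for non-basic j:
        deltas = {j: -B1inv[0] @ A1[:, j]
                  for j in range(1, n + m + 1) if j not in basis}
        k, dk = max(deltas.items(), key=lambda kv: kv[1])
        if dk <= 1e-9:
            return xB1, basis                          # all Delta_j <= 0: optimal
        Yk = B1inv @ A1[:, k]                          # Yk^(1) = B1^(-1) alpha_k^(1)
        ratios = [(xB1[i] / Yk[i], i) for i in range(1, m + 1) if Yk[i] > 1e-9]
        if not ratios:
            raise ValueError("unbounded")
        _, r = min(ratios)                             # minimum ratio rule
        E = np.eye(m + 1)                              # eta matrix of the pivot
        E[:, r] = -Yk / Yk[r]
        E[r, r] = 1 / Yk[r]
        B1inv = E @ B1inv                              # update B1^(-1)
        basis[r] = k

# Example 1 below: Max Z = x1 + 2x2 with the three constraints of the text.
xB1, basis = revised_simplex(np.array([1.0, 2.0]),
                             np.array([[1.0, 1], [1, 2], [3, 1]]),
                             np.array([3.0, 5, 6]))
print(xB1[0])      # 5.0 = Max Z (at x2 = 5/2, as found by hand below)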

Example 1: Solve the following L.P.P. by revised simplex method :


Max. Z = x1 + 2 x2

subject to x1 + x2 ≤ 3

x1 + 2 x2 ≤ 5

3 x1 + x2 ≤ 6

x1, x2 ≥ 0 .

Solution: Step 1 : The given problem is a maximization L.P.P. and all bi ≥ 0.

Step 2 : To express the given L.P.P. in revised simplex form

The given L.P.P. in revised simplex form is as follows :

Max. Z = x1 + 2x2 + 0.x3 + 0.x4 + 0.x5

subject to Z − x1 − 2x2 + 0.x3 + 0.x4 + 0.x5 = 0
x1 + x2 + x3 + 0.x4 + 0.x5 = 3
x1 + 2x2 + 0.x3 + 1.x4 + 0.x5 = 5
3x1 + x2 + 0.x3 + 0.x4 + x5 = 6

and x1, x2, ..., x5 ≥ 0.

Since here the unit matrix I4 is obtained without the use of artificial variables, the problem is of standard form I.

In matrix form the system of constraint equations can be written as

        e1  α1(1) α2(1) α3(1) α4(1) α5(1)
      |  1   −1    −2     0     0     0  |   | Z  |   | 0 |
      |  0    1     1     1     0     0  |   | x1 |   | 3 |
      |  0    1     2     0     1     0  | . | x2 | = | 5 |
      |  0    3     1     0     0     1  |   | x3 |   | 6 |
                                             | x4 |
                                             | x5 |

with β0(1) = e1, β1(1) = α3(1), β2(1) = α4(1), β3(1) = α5(1).

Step 3 : To find the initial basic solution and the basis matrix B1

Here xB(1) = [0, 3, 5, 6] is the initial B.F.S., and the basis matrix B1 is given by

B1 = (β0(1), β1(1), β2(1), β3(1)) = (e1, α3(1), α4(1), α5(1)) = I4,

which is a unit matrix. ∴ B1⁻¹ = I4⁻¹ = I4.

Step 4 : Construction of the starting simplex table

First Revised Simplex Table

Variables       B1⁻¹                                Solution    Yk(1) = Y2(1)    Min. ratio
in the basis    e1      β1(1)    β2(1)    β3(1)     xB(1)       = B1⁻¹ α2(1)     (xB/Y2, yi2 > 0)
                       (α3(1))  (α4(1))  (α5(1))

Z               1        0        0        0         0          −2 = −Δ2
x3 (xB1)        0        1        0        0         3           1 (y12)          3/1
x4 (xB2)        0        0        1        0         5           2 (y22)          5/2 (min.) → outgoing
x5 (xB3)        0        0        0        1         6           1 (y32)          6/1

Step 5 : Test of optimality

Here α1(1) and α2(1) are not in the basis B1. Therefore we compute Δ1 and Δ2 to test the optimality of the solution :

[Δ1, Δ2] = −(first row of B1⁻¹) . [α1(1), α2(1)]

                          | −1  −2 |
         = −(1, 0, 0, 0)  |  1   1 |  = [1, 2]
                          |  1   2 |
                          |  3   1 |

∴ Δ1 = 1, Δ2 = 2.

Since Δ1 and Δ2 are both positive, the solution is not optimal.

Step 6 : To find the incoming and outgoing vectors

(i) Incoming vector : Since

Δk = Max {Δ1, Δ2} = Max {1, 2} = 2 = Δ2, ∴ k = 2,

i.e., α2(1) is the vector that must enter the basis, i.e., the variable x2 will enter the B.F.S.

The column vector Y2(1) corresponding to α2(1) is given by

Y2(1) = B1⁻¹ α2(1) = I4 . [−2, 1, 2, 1] = [−2, 1, 2, 1].

(ii) Outgoing vector :

Now xBr/yr2 = Min { xBi/yi2, yi2 > 0 } = Min { 3/1, 5/2, 6/1 } = 5/2 = xB2/y22. ∴ r = 2.

Hence β2(1) (= α4(1)) is the outgoing vector, and so y22 = 2 is the key element.

Step 7 : Determination of the improved solution

Since y22 = 2 is the key element (marked in the table above), we divide the third row of the table by 2 and then add 2, −1 and −1 times the row thus obtained to the first, second and fourth rows respectively.

Thus we get the following table giving the improved B.F.S. :

Second Revised Simplex Table

Variables       B1⁻¹                                Solution
in the basis    e1      β1(1)    β2(1)    β3(1)     xB(1)
                       (α3(1))  (α2(1))  (α5(1))

Z               1        0        1        0         5
x3 (xB1)        0        1      −1/2       0        1/2
x2 (xB2)        0        0       1/2       0        5/2
x5 (xB3)        0        0      −1/2       1        7/2

The improved solution from the above table is

Z = 5 ; x2 = 5/2, x3 = 1/2, x5 = 7/2.

Step 8 : Test of optimality for the solution given in the above table

Now α1(1) and α4(1) are the two columns corresponding to the variables not in the basis; therefore we compute Δ1 and Δ4 :

[Δ1, Δ4] = −(first row of B1⁻¹) . [α1(1), α4(1)]

                          | −1   0 |
         = −(1, 0, 1, 0)  |  1   0 |  = [0, −1]
                          |  1   1 |
                          |  3   0 |

∴ Δ1 = 0 and Δ4 = −1.

Here both Δ1 ≤ 0 and Δ4 ≤ 0. Therefore the above solution is optimal.

Hence the optimal solution of the given L.P.P. is

x1 = 0, x2 = 5/2, and Max. Z = 5.

Note : Δ1 = 0 indicates that the problem has an alternative optimal solution also.

Example 2: Solve the following L.P.P. by revised simplex method :


Max. Z = 3 x1 + x2 + 2 x3 + 7 x4

subject to 2 x1 + 3 x2 − x3 + 4 x4 ≤ 40

−2 x1 + 2 x2 + 5 x3 − x4 ≤ 35

x1 + x2 − 2 x3 + 3 x4 ≤ 100

and x1 ≥ 2, x2 ≥ 1, x3 ≥ 3, x4 ≥ 4.

Solution: Step 1 : Since the lower bounds of the variables are not zero, we substitute x1 = y1 + 2, x2 = y2 + 1, x3 = y3 + 3, x4 = y4 + 4 in the given L.P.P., which then reduces to the following form :

Max. Z′ = Z − 41 = 3y1 + y2 + 2y3 + 7y4

subject to 2y1 + 3y2 − y3 + 4y4 ≤ 20
−2y1 + 2y2 + 5y3 − y4 ≤ 26
y1 + y2 − 2y3 + 3y4 ≤ 91

and y1 ≥ 0, y2 ≥ 0, y3 ≥ 0, y4 ≥ 0.

Step 2 : To express the L.P.P. in revised simplex form

This L.P.P. in revised simplex form I is as follows :

Max. Z′ = 3y1 + y2 + 2y3 + 7y4

s.t. Z′ − 3y1 − y2 − 2y3 − 7y4 + 0.y5 + 0.y6 + 0.y7 = 0
2y1 + 3y2 − y3 + 4y4 + 1.y5 + 0.y6 + 0.y7 = 20
−2y1 + 2y2 + 5y3 − y4 + 0.y5 + 1.y6 + 0.y7 = 26
y1 + y2 − 2y3 + 3y4 + 0.y5 + 0.y6 + 1.y7 = 91.

Since here the unit matrix I4 is obtained without the use of artificial variables, the problem is of standard form I.

In matrix form the above system of constraint equations can be written as

        e1  α1(1) α2(1) α3(1) α4(1) α5(1) α6(1) α7(1)
      |  1   −3    −1    −2    −7     0     0     0  |   | Z′ |   |  0 |
      |  0    2     3    −1     4     1     0     0  |   | y1 |   | 20 |
      |  0   −2     2     5    −1     0     1     0  | . | .. | = | 26 |
      |  0    1     1    −2     3     0     0     1  |   | y7 |   | 91 |

with β0(1) = e1, β1(1) = α5(1), β2(1) = α6(1), β3(1) = α7(1).

Step 3 : To find the initial basic solution and the basis matrix B1

Here xB(1) = [0, 20, 26, 91] is the initial B.F.S., and the basis matrix B1 is given by

B1 = (β0(1), β1(1), β2(1), β3(1)) = (e1, α5(1), α6(1), α7(1)) = I4, which is a unit matrix.

∴ B1⁻¹ = I4⁻¹ = I4.

Step 4 : Construction of the starting simplex table

The first revised simplex table is as follows :

Variables       B1⁻¹                                Solution    Yk(1) = Y4(1)    Min. ratio
in the basis    e1      β1(1)    β2(1)    β3(1)     xB(1)       = B1⁻¹ α4(1)     (xB/Y4, yi4 > 0)
                       (α5(1))  (α6(1))  (α7(1))

Z′              1        0        0        0          0         −7
y5 (xB1)        0        1        0        0         20          4               20/4 = 5 (min.) → outgoing
y6 (xB2)        0        0        1        0         26         −1               —
y7 (xB3)        0        0        0        1         91          3               91/3

Step 5 : Test of optimality

Here we compute Δj for all αj(1), j = 1, 2, 3, 4, not in the basis :

[Δ1, Δ2, Δ3, Δ4] = −(first row of B1⁻¹) . [α1(1), α2(1), α3(1), α4(1)]

                                | −3  −1  −2  −7 |
               = −(1, 0, 0, 0)  |  2   3  −1   4 |  = [3, 1, 2, 7]
                                | −2   2   5  −1 |
                                |  1   1  −2   3 |

∴ Δ1 = 3, Δ2 = 1, Δ3 = 2, Δ4 = 7,

which are all positive, so this solution is not optimal.

Step 6 : To find the incoming and outgoing vectors

(i) Incoming vector :

Since Δk = Max Δj = 7 = Δ4, ∴ k = 4,

i.e., α4(1) is the vector entering the basis.

The column vector Y4(1) corresponding to α4(1) is given by

Y4(1) = B1⁻¹ α4(1) = I4 . [−7, 4, −1, 3] = [−7, 4, −1, 3].

(ii) Outgoing vector :

Now xBr/yr4 = Min { xBi/yi4, yi4 > 0 } = Min { 20/4, 91/3 } = 20/4 = xB1/y14. ∴ r = 1.

Hence β1(1) (= α5(1)) is the outgoing vector.

∴ Key element = y14 = 4.

Step 7 : Determination of the improved solution

In order to bring α4(1) in place of β1(1) (= α5(1)) in B1, we divide the second row by 4 and then add 7, 1 and −3 times the resulting row to the first, third and fourth rows respectively.

Thus we get the following table giving the improved B.F.S. :

Second Revised Simplex Table

Variables       B1⁻¹                                Solution    Yk(1) = Y3(1)    Min. ratio
in the basis    e1      β1(1)    β2(1)    β3(1)     xB(1)       = B1⁻¹ α3(1)     (xB/Y3, yi3 > 0)
                       (α4(1))  (α6(1))  (α7(1))

Z′              1       7/4       0        0         35         −15/4
y4 (xB1)        0       1/4       0        0          5         −1/4             —
y6 (xB2)        0       1/4       1        0         31         19/4             124/19 (min.) → outgoing
y7 (xB3)        0      −3/4       0        1         76         −5/4             —

Step 8 : Test of optimality for the solution given in the above table

We compute Δj for all αj(1), j = 1, 2, 3, 5, not in the basis :

[Δ1, Δ2, Δ3, Δ5] = −(first row of B1⁻¹) . [α1(1), α2(1), α3(1), α5(1)]

                                  | −3  −1  −2  0 |
               = −(1, 7/4, 0, 0)  |  2   3  −1  1 |  = [−1/2, −17/4, 15/4, −7/4]
                                  | −2   2   5  0 |
                                  |  1   1  −2  0 |

∴ Δ1 = −1/2, Δ2 = −17/4, Δ3 = 15/4, Δ5 = −7/4.

Since Δ3 = 15/4 is positive, this solution is not optimal.
Step 9 : To find the entering and outgoing vectors

Since Δk = Max Δj = 15/4 = Δ3,

∴ k = 3, i.e., α3(1) is the vector entering the basis.

The column vector Y3(1) corresponding to α3(1) is given by

Y3(1) = B1⁻¹ α3(1) = [−15/4, −1/4, 19/4, −5/4].

By the minimum ratio rule we find β2(1) (= α6(1)) to be the outgoing vector.

∴ Key element = 19/4.

Step 10 : To find the improved solution

In order to bring α3(1) in place of β2(1) (= α6(1)) in the basis B1, we divide the third row by 19/4 and then add 15/4, 1/4 and 5/4 times the resulting row to the first, second and fourth rows respectively.

Thus the simplex table giving the next improved solution is as follows :

Variables       B1⁻¹                                Solution    Yk(1) = Y1(1)    Min. ratio
in the basis    e1      β1(1)    β2(1)    β3(1)     xB(1)       = B1⁻¹ α1(1)     (xB/Y1, yi1 > 0)
                       (α4(1))  (α3(1))  (α7(1))

Z′              1      37/19    15/19     0        1130/19     −13/19
y4 (xB1)        0       5/19     1/19     0         126/19       8/19            63/4 (min.) → outgoing
y3 (xB2)        0       1/19     4/19     0         124/19      −6/19            —
y7 (xB3)        0     −13/19     5/19     1        1599/19     −17/19            —

Step 11 : Test of optimality for the solution given in the above table

We compute Δj for all αj(1), j = 1, 2, 5, 6, not in the basis :

[Δ1, Δ2, Δ5, Δ6] = −(first row of B1⁻¹) . [α1(1), α2(1), α5(1), α6(1)]

                                        | −3  −1  0  0 |
               = −(1, 37/19, 15/19, 0)  |  2   3  1  0 |  = [13/19, −122/19, −37/19, −15/19]
                                        | −2   2  0  1 |
                                        |  1   1  0  0 |

∴ Δ1 = 13/19, Δ2 = −122/19, Δ5 = −37/19, Δ6 = −15/19.

Since Δ1 > 0, this solution is not optimal.

Step 12 : To find the entering and outgoing vectors

Since Δk = Max Δj = 13/19 = Δ1,

∴ k = 1, i.e., α1(1) is the vector entering the basis. The column vector Y1(1) corresponding to α1(1) is given by

Y1(1) = B1⁻¹ α1(1) = [−13/19, 8/19, −6/19, −17/19].

By the minimum ratio rule we find that β1(1) (= α4(1)) is the outgoing vector. ∴ Key element = y11 = 8/19.

Step 13 : To find the improved solution

In order to bring α1(1) in place of β1(1) (= α4(1)), we divide the second row by 8/19 and then add 13/19, 6/19 and 17/19 times the resulting row to the first, third and fourth rows respectively.

Thus we get the following table, giving the improved B.F.S. :

Variables in B1−1
the basis
β0(1) β1(1) β2(1) β3(1) Solution
e1 α1(1) α 3(1) α 7(1) x B(1)

Z′ 1 19 8 78 0 281 4
y1 ( x B1) 0 58 18 0 63 4
y3 ( x B2 ) 0 14 14 0 23 2
y7 ( x B3 ) 0 −1 8 38 1 393 4

Step 14 : Test of optimality for the solution given in the above table

We compute Δj for αj(1), j = 2, 4, 5, 6, not in the basis :

[Δ2, Δ4, Δ5, Δ6] = −(first row of B1⁻¹) . [α2(1), α4(1), α5(1), α6(1)]

                                     | −1  −7  0  0 |
               = −(1, 19/8, 7/8, 0)  |  3   4  1  0 |  = [−63/8, −13/8, −19/8, −7/8]
                                     |  2  −1  0  1 |
                                     |  1   3  0  0 |

∴ Δ2 = −63/8, Δ4 = −13/8, Δ5 = −19/8, Δ6 = −7/8.

Since all Δj < 0, the above solution is optimal,

i.e., y1 = 63/4, y2 = 0, y3 = 23/2, y4 = 0 and Max. Z′ = 281/4.

Hence the optimal solution of the given problem is

x1 = y1 + 2 = 71/4, x2 = y2 + 1 = 1, x3 = y3 + 3 = 29/2, x4 = y4 + 4 = 4,

and Max. Z = Max. Z′ + 41 = 281/4 + 41 = 445/4.

1. Formulate a L.P. problem in the form of revised simplex.


2. Explain the revised simplex method and compare it with the simplex method.
3. Develop the computational algorithm for solving a linear programming problem by
revised simplex method.
Solve the following L.P.P.'s by revised simplex method :

4. Max. Z = 2x1 + x2
   subject to 3x1 + 4x2 ≤ 6, 6x1 + x2 ≤ 3, x1, x2 ≥ 0.

5. Max. Z = 3x1 + 2x2 + 5x3
   subject to x1 + 2x2 + x3 ≤ 430, 3x1 + 2x3 ≤ 460, x1 + 4x2 ≤ 420, x1, x2, x3 ≥ 0.

6. Max. Z = 5x1 + 3x2
   subject to 3x1 + 5x2 ≤ 15, 5x1 + 2x2 ≤ 10, x1, x2 ≥ 0.

7. Max. Z = 6x1 − 2x2 + 3x3
   subject to 2x1 − x2 + 2x3 ≤ 2, x1 + 4x3 ≤ 4, x1, x2, x3 ≥ 0.

8. Max. Z = x1 + x2 + 3x3
   subject to 3x1 + 2x2 + x3 ≤ 3, 2x1 + x2 + 2x3 ≤ 2, x1, x2, x3 ≥ 0.

9. Max. Z = 3x1 + 5x2
   subject to x1 ≤ 4, x2 ≤ 6, 3x1 + 2x2 ≤ 18, x1, x2 ≥ 0.

Answers

4. x1 = 2/7, x2 = 9/7, Max. Z = 13/7
5. x1 = 0, x2 = 100, x3 = 230, Max. Z = 1350
6. x1 = 20/19, x2 = 45/19, Max. Z = 235/19
7. x1 = 4, x2 = 6, x3 = 0, Max. Z = 12
8. x1 = 0, x2 = 0, x3 = 1, Max. Z = 3
9. x1 = 2, x2 = 6, Max. Z = 36

6.6 Revised Simplex Method in Standard Form II


Standard form II is used when artificial variables are needed for getting an initial basis matrix as an identity matrix. In this case the two phase method is used to handle the artificial variables. Phase I consists of finding an initial basic feasible solution by driving all the artificial variables to zero, while in Phase II we start with the initial B.F.S. obtained in Phase I and continue till the optimal solution is found.

Phase I : To simplify notation, we assume that the initial matrix does not contain any positive unit vector. In other words, we assume that the basis of the original problem contains all the artificial vectors corresponding to the artificial variables introduced in all the m constraints. Here we consider one more objective function Za, known as the artificial objective function, defined as

Max. Z a = − x1a − x2 a − ... − x ma

where x1a , x2 a ,..., x ma are the artificial variables introduced in the first, second ,..., and
mth constraints respectively. In the objective function Z a the prices of all the artificial
vectors are taken as −1.

Here in this case we have two objective functions, so in place of considering the problem
with (m + 1) constraints (as in standard form I) we have to consider the problem in the
revised form with (m + 2) constraints in which m constraints correspond to the constraints
of the given problem and the other two constraints correspond to each of the objective
functions Z and Z a .

Thus in standard form II of the revised simplex method, the system of constraint
equations can be written as

Z − c1x1 − c2x2 − ... − cnxn + 0.x1a + 0.x2a + ... + 0.xma = 0

Za + x1a + x2a + ... + xma = 0

a11x1 + a12x2 + ... + a1nxn + x1a = b1

a21x1 + a22x2 + ... + a2nxn + x2a = b2                              ...(1)

... ... ... ...

am1x1 + am2x2 + ... + amnxn + xma = bm

xj ≥ 0, xia ≥ 0, j = 1, 2, ..., n; i = 1, 2, ..., m.

Since all the artificial variables x ia ≥ 0, thus in phase I, our problem is to maximize Z a
first, subject to the constraint equations (1) with Z a and Z both unrestricted in sign.

Now there are two possibilities.


1. Max. Za = 0 : In this case (clear from the 2nd constraint) x1a = 0, x2a = 0, ..., xma = 0, i.e., all the artificial variables are automatically driven to zero. The values of the original variables x1, x2, ..., xn for this Max. Za will form the B.F.S. in Phase I with which we shall start Phase II.
2. Max. Za < 0 : In this case it is clear that at least one artificial variable has a positive value. Hence in this case there exists no feasible solution of the original problem.

Phase II : After driving all the artificial variables equal to zero in phase I we get a B.F.S.
of the problem.

We enter phase II with this B.F.S. obtained in phase I and proceed to get the optimal
solution exactly similarly as in revised simplex method in standard form I.

6.7 Notations, Basis and its Inverse in Standard Form II


1. Notations : The (m + 2) constraints given in (1) of article 6.6 can be expressed in matrix form as follows :

$$\begin{bmatrix}
1 & 0 & -c_1 & -c_2 & \cdots & -c_n & 0 & 0 & \cdots & 0\\
0 & 1 & 0 & 0 & \cdots & 0 & 1 & 1 & \cdots & 1\\
0 & 0 & a_{11} & a_{12} & \cdots & a_{1n} & 1 & 0 & \cdots & 0\\
0 & 0 & a_{21} & a_{22} & \cdots & a_{2n} & 0 & 1 & \cdots & 0\\
\vdots & \vdots & \vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots\\
0 & 0 & a_{m1} & a_{m2} & \cdots & a_{mn} & 0 & 0 & \cdots & 1
\end{bmatrix}
\begin{bmatrix} Z\\ Z_a\\ x_1\\ x_2\\ \vdots\\ x_n\\ x_{1a}\\ \vdots\\ x_{ma} \end{bmatrix}
=\begin{bmatrix} 0\\ 0\\ b_1\\ b_2\\ \vdots\\ b_m \end{bmatrix}   ...(2)$$

The columns of the coefficient matrix are, in order, e1(2), e2(2), α1(2), α2(2), ..., αn(2), αn+1(2), αn+2(2), ..., αn+m(2); the last m of them (the artificial columns) form the basis vectors β1(2), β2(2), ..., βm(2),

or [e1(2), e2(2), α1(2), α2(2), ..., αn(2), αn+1(2), αn+2(2), ..., αn+m(2)] xB(2) = b(2)

where αj(2) = [−cj, 0, a1j, a2j, ..., amj] = [−cj, 0, αj] for j = 1, 2, ..., n

and αj(2) = [0, 1, ej−n] for j = n + 1, n + 2, ..., n + m.

Note that αj(2), j = n + 1, ..., n + m are the columns of the coefficients of the artificial variables x1a, ..., xma.

xB(2) = [Z, Za, x1, ..., xn, x1a, ..., xma]

b(2) = [0, 0, b1, b2, ..., bm]

e1(2 ) and e2(2 ) are (m + 2) component column vectors corresponding to the two
objective functions Z and Z a respectively.

2. Basis : In standard form II we have (m + 2) constraints, so we shall need a basis matrix of order (m + 2) × (m + 2). Using the subscript 2 on B for the basis matrix in standard form II, we have

B2 = [e1(2), e2(2), αn+1(2), αn+2(2), ..., αn+m(2)]   (columns corresponding to Z, Za, x1a, x2a, ..., xma)

$$B_2=\begin{bmatrix}
1 & 0 & 0 & 0 & \cdots & 0\\
0 & 1 & 1 & 1 & \cdots & 1\\
0 & 0 & 1 & 0 & \cdots & 0\\
0 & 0 & 0 & 1 & \cdots & 0\\
\vdots & \vdots & \vdots & \vdots & & \vdots\\
0 & 0 & 0 & 0 & \cdots & 1
\end{bmatrix}
=\begin{bmatrix} 1 & 0 & \mathbf{0}\\ 0 & 1 & \mathbf{1}_m\\ \mathbf{0} & \mathbf{0} & I_m \end{bmatrix}
=\begin{bmatrix} 1 & 0 & -C_B\\ 0 & 1 & -C_{Ba}\\ \mathbf{0} & \mathbf{0} & B \end{bmatrix}   ...(3)$$

Here B = Im is the basis of the original problem,

−CB = 0 = (0, 0, ..., 0) and −CBa = 1m = (1, 1, ..., 1),

i.e., CB = (cB1, cB2, ..., cBm) = (0, 0, ..., 0) and CBa = (cBa1, cBa2, ..., cBam) = (−1, −1, ..., −1),

where cBj, cBaj are the coefficients of the basic variables x1a, x2a, ..., xma in the given objective function Z and the artificial objective function Za respectively.

3. Inverse of the basis

We have

$$B_2=\begin{bmatrix} 1 & 0 & -C_B\\ 0 & 1 & -C_{Ba}\\ \mathbf{0} & \mathbf{0} & B \end{bmatrix}
=\begin{bmatrix} I_2 & -C_B^{(2)}\\ \mathbf{0} & B \end{bmatrix}$$

where CB(2) = [CB, CBa].

Since B⁻¹ exists and is known, therefore using article 1.15, B2⁻¹ is given by

$$B_2^{-1}=\begin{bmatrix} I_2 & C_B^{(2)}B^{-1}\\ \mathbf{0} & B^{-1} \end{bmatrix}   ...(4)$$

$$\text{i.e.,}\quad B_2^{-1}=\begin{bmatrix} 1 & 0 & C_B B^{-1}\\ 0 & 1 & C_{Ba}B^{-1}\\ \mathbf{0} & \mathbf{0} & B^{-1} \end{bmatrix}
=\begin{bmatrix} 1 & 0 & C_B\\ 0 & 1 & C_{Ba}\\ \mathbf{0} & \mathbf{0} & I_m \end{bmatrix}$$

Since we always start with B = Im,

∴ B⁻¹ = Im, CB B⁻¹ = CB Im = CB, and CBa B⁻¹ = −(1, 1, ..., 1) Im = CBa.
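The block forms (3) and (4) are easy to verify numerically. The following is a minimal sketch (my own illustration, not part of the text) using NumPy, for a hypothetical problem with m = 2 constraints, where B = I2, CB = (0, 0) and CBa = (−1, −1):

```python
import numpy as np

m = 2
B = np.eye(m)                    # starting basis of the original problem
CB = np.zeros(m)                 # prices of the artificial variables in Z
CBa = -np.ones(m)                # prices of the artificial variables in Za

# Assemble B2 as in (3): rows for Z and Za, then the m constraint rows.
B2 = np.eye(m + 2)
B2[0, 2:] = -CB
B2[1, 2:] = -CBa
B2[2:, 2:] = B

# Inverse predicted by (4).
Binv = np.linalg.inv(B)
B2inv = np.eye(m + 2)
B2inv[0, 2:] = CB @ Binv
B2inv[1, 2:] = CBa @ Binv
B2inv[2:, 2:] = Binv

assert np.allclose(B2 @ B2inv, np.eye(m + 2))   # (4) gives the true inverse
```

The assertion at the end confirms that the matrix built from (4) is indeed the inverse of B2.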

4. Properties of B2⁻¹

(i) We have

$$B_2^{-1}\alpha_j^{(2)}=\begin{bmatrix} 1 & 0 & C_B B^{-1}\\ 0 & 1 & C_{Ba}B^{-1}\\ \mathbf{0} & \mathbf{0} & B^{-1} \end{bmatrix}
\begin{bmatrix} -c_j\\ 0\\ \alpha_j \end{bmatrix}
=\begin{bmatrix} -c_j+C_B B^{-1}\alpha_j\\ C_{Ba}B^{-1}\alpha_j\\ B^{-1}\alpha_j \end{bmatrix}
=\begin{bmatrix} C_B Y_j-c_j\\ C_{Ba}Y_j-0\\ Y_j \end{bmatrix}
\qquad[\because B^{-1}\alpha_j=Y_j=[y_{1j},...,y_{mj}]]$$

$$=\begin{bmatrix} Z_j-c_j\\ Z_{ja}-c_{ja}\\ Y_j \end{bmatrix}
=\begin{bmatrix} -\Delta_j\\ -\Delta_{ja}\\ Y_j \end{bmatrix}$$

From this we conclude that :

when we multiply the first row of B2⁻¹ with αj(2), we get Zj − cj = −∆j;

when we multiply the second row of B2⁻¹ with αj(2), we get Zja − cja = −∆ja;

and when we multiply the last m rows of B2⁻¹ with αj(2), we get Yj.

(ii) We have

$$B_2^{-1}b^{(2)}=\begin{bmatrix} 1 & 0 & C_B B^{-1}\\ 0 & 1 & C_{Ba}B^{-1}\\ \mathbf{0} & \mathbf{0} & B^{-1} \end{bmatrix}
\begin{bmatrix} 0\\ 0\\ b \end{bmatrix}
=\begin{bmatrix} C_B B^{-1}b\\ C_{Ba}B^{-1}b\\ B^{-1}b \end{bmatrix}
=\begin{bmatrix} C_B x_B\\ C_{Ba}x_B\\ x_B \end{bmatrix}
=\begin{bmatrix} Z\\ Z_a\\ x_B \end{bmatrix}
\qquad[\because B^{-1}b=x_B]$$

From this we conclude that :

when we multiply the first row of B2⁻¹ with b(2), we get Z (the value of the objective function of the original problem);

when we multiply the second row of B2⁻¹ with b(2), we get Za (the value of the artificial objective function);

and when we multiply the last m rows of B2⁻¹ with b(2), we get xB, the solution of the original problem.
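Properties (i) and (ii) can likewise be checked with a few matrix-vector products. A small sketch follows (again my own, with data chosen to match Example 1 of article 6.8 below: constraint column α1 = (2, 1) with cost c1 = −1 in Max. Z′, and b = (6, 2)):

```python
import numpy as np

# Starting data: m = 2, B = I2, CB = (0, 0), CBa = (-1, -1).
Binv = np.eye(2)
CB, CBa = np.zeros(2), -np.ones(2)
B2inv = np.block([[np.eye(2), np.vstack([CB @ Binv, CBa @ Binv])],
                  [np.zeros((2, 2)), Binv]])

cj, aj = -1.0, np.array([2.0, 1.0])             # x1's data in Example 1 below
aj2 = np.concatenate([[-cj, 0.0], aj])          # alpha_j^(2) = [-cj, 0, alpha_j]
b2 = np.concatenate([[0.0, 0.0], [6.0, 2.0]])   # b^(2) = [0, 0, b]

minus_dj, minus_dja, *Yj = B2inv @ aj2          # property (i): [-Dj, -Dja, Yj]
Z, Za, *xB = B2inv @ b2                         # property (ii): [Z, Za, xB]
print(minus_dj, minus_dja, Yj)   # 1.0, -3.0 (i.e. Delta_1 = -1, Delta_1a = 3), Yj = [2, 1]
print(Z, Za, xB)                 # 0.0, -8.0, xB = [6, 2]
```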

6.8 Computational Procedure of the Revised Simplex Method in Standard Form II
When artificial variables are needed to convert the constraints of the L.P.P. into equalities, the two phase method is applied step by step as follows :

Phase I
Step 1 : To express the given problem in revised simplex form standard II

Convert the given L.P.P. (maximization problem) in the form of revised simplex method
standard II as in article 6.6.

Step 2 : To find the initial basic feasible solution and the basis matrix B2

In this step we proceed to obtain the initial basis matrix B2 and its inverse B2 −1 by
relations (3) and (4) of article 6.7 respectively.

Then the first B.F.S. xB is obtained by multiplying the last m rows of B2⁻¹ by b(2), giving

xB(2) = B2⁻¹ b(2) = [0, Za, xB], where initially Z = 0 and Za = −(b1 + b2 + ... + bm).

In this solution xB ≥ 0, but there is no restriction on the signs of Z and Za.

Step 3 : Construction of the starting table for revised simplex method

Now we construct the simplex table for revised simplex method as follows :

Variables      B2⁻¹                                        Solution            Yk(2) =       Min. Ratio
in the basis   e1(2)   e2(2)   β1(2), β2(2), ..., βm(2)    xB(2) = B2⁻¹ b(2)   B2⁻¹ αk(2)    Min {xBi/yik, yik > 0}

Z              1       0
Za             0       1
xB1            0       0
...
xBi            0       0
...
xBm            0       0

Here in Phase I we maximize Z a , not Z.

Step 4 : (i) If Z a = 0, then we conclude that all artificial variables are zero and so go to
phase II.

(ii) If Za < 0, we compute ∆ja for all αj(2) not in the basis B2, by the formula

∆ja = cja − Zja = −(second row of B2⁻¹) × (αj(2))

and then continue to step 5 and so on.

Step 5 : To find the vectors entering and leaving the basis

When Za < 0 and all ∆ja ≤ 0, then Za is maximal and hence no feasible solution exists. If at least one ∆ja > 0, then we proceed to find the entering and leaving vectors.

(i) To find entering vector : The entering vector is taken as αk(2) if

∆ka = Max over j of {∆ja}

for those j for which αj(2) is not in the basis B2.

(ii) To find outgoing vector : First compute Yk(2) by the formula Yk(2) = B2⁻¹ αk(2).

The vector βr(2) to be removed from the basis is determined by the minimum ratio rule; it is taken corresponding to that value of r for which

xBr/yrk = Min over i of {xBi/yik, yik > 0}

where αk(2) is the entering vector and Yk(2) is the column vector corresponding to αk(2).

Step 6 : Determination of the improved solution

After determining the entering and outgoing vectors, we get the next revised simplex table as usual.

We repeat steps 4 to 6 till we get

Max. Za = 0 or all ∆ja (for Phase I) ≤ 0.

If Max. Za = 0, then we conclude that all the artificial variables are zero and so we proceed to Phase II.

If all ∆ja (for Phase I) are ≤ 0 and Max. Za < 0, then no feasible solution exists.

Thus we proceed to Phase II only when Max. Za = 0.

Phase II : In Phase II, Za is treated like an artificial variable, so it is removed from the basic solution. Thus we remove the second row and second column from the constraints (2) of article 6.7. The reason is that in Phase II we deal with the original objective function Z, which is to be maximized, and so the prices of all the artificial variables are zero.

Now the basis matrix B2 of order (m + 2) × (m + 2) is reduced to B1 of order (m + 1) × (m + 1).

Hence we now proceed exactly as in the revised simplex method in standard form I.
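The Phase I bookkeeping described above amounts to a handful of matrix-vector products per iteration. The outline below is a sketch of mine (not the book's notation; the function and variable names are illustrative): it prices the non-basic columns with the second row of B2⁻¹, picks the entering vector, and applies the minimum ratio rule.

```python
import numpy as np

def phase1_iteration(B2inv, cols2, xB2):
    """One pricing-and-ratio step of Phase I (illustrative sketch).

    B2inv : current (m+2) x (m+2) inverse of the basis matrix B2
    cols2 : dict {j: alpha_j^(2)} for the non-basic columns (NumPy vectors)
    xB2   : current solution column [Z, Za, xB1, ..., xBm]
    """
    # Delta_ja = -(second row of B2inv) . alpha_j^(2)
    deltas = {j: -float(B2inv[1] @ a) for j, a in cols2.items()}
    k = max(deltas, key=deltas.get)
    if deltas[k] <= 0:
        return None                       # Za is maximal; stop Phase I
    Yk = B2inv @ cols2[k]                 # column vector of the entering alpha_k^(2)
    ratios = [(xB2[i] / Yk[i], i) for i in range(2, len(xB2)) if Yk[i] > 0]
    if not ratios:
        raise ValueError("no yik > 0: problem is unbounded")
    r = min(ratios)[1]                    # outgoing row by the minimum ratio rule
    return k, r, Yk                       # next, pivot on the key element Yk[r]
```

With the data of Example 1 below, the first call would return k = 2 (α2(2) enters) and the x1a row as outgoing, with key element 5, matching the hand computation.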

Example 1: Solve the following L.P.P. by revised simplex method :


Mini Z = x1 + 2 x2

subject to 2 x1 + 5 x2 ≥ 6

x1 + x2 ≥ 2

and x1, x2 ≥ 0

Solution: The given problem is a minimization problem. Converting the minimization


objective function Z to the maximization function Z ′, we have

Max. Z ′ = − Z = − x1 − 2 x2 .

Here all bi ≥ 0.

Introducing the surplus variables x3 , x4 and artificial variables x1a , x2 a the constraints of
the given L.P.P. reduce to

2 x1 + 5 x2 − x3 + x1a =6

x1 + x2 − x4 + x2 a =2

Since artificial variables are needed to get an identity matrix, this problem will be solved by the two phase method.

Phase I
Step 1 : To express the given problem in revised simplex form standard II

First we convert the given L.P.P. into the revised simplex form standard II and get the following constraints :

Z ′ + 1. x1 + 2. x2 + 0. x3 + 0. x4 + 0. x1a + 0. x2 a = 0

Za + x1a + x2 a =0

2 x1 + 5 x2 − x3 + x1a =6

x1 + x2 − x4 + x2 a =2

x j ≥ 0, x ia ≥ 0, j = 1, 2, 3, 4,; i = 1, 2

where Z a is the artificial objective function to be maximized in phase I.

i.e., Max. Z a = − x1a − x2 a .

The system of constraints can be expressed in matrix form as follows :

$$\begin{bmatrix}
1 & 0 & 1 & 2 & 0 & 0 & 0 & 0\\
0 & 1 & 0 & 0 & 0 & 0 & 1 & 1\\
0 & 0 & 2 & 5 & -1 & 0 & 1 & 0\\
0 & 0 & 1 & 1 & 0 & -1 & 0 & 1
\end{bmatrix}
\begin{bmatrix} Z'\\ Z_a\\ x_1\\ x_2\\ x_3\\ x_4\\ x_{1a}\\ x_{2a} \end{bmatrix}
=\begin{bmatrix} 0\\ 0\\ 6\\ 2 \end{bmatrix}=b^{(2)}$$

The columns are, in order, e1(2), e2(2), α1(2), α2(2), α3(2), α4(2), α5(2), α6(2); the artificial columns α5(2), α6(2) form the basis vectors β1(2), β2(2).
Step 2 : To find the initial basic feasible solution and the basis matrix B2

Here the basis matrix B2 is given by

$$B_2=\begin{bmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 1 & 1\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{bmatrix}
=\begin{bmatrix} 1 & 0 & -C_B\\ 0 & 1 & -C_{Ba}\\ \mathbf{0} & \mathbf{0} & B \end{bmatrix}$$

where CB = (0, 0), CBa = (−1, −1), B = I2.

Since B⁻¹ = I2⁻¹ = I2,

$$∴\quad B_2^{-1}=\begin{bmatrix} 1 & 0 & C_B B^{-1}\\ 0 & 1 & C_{Ba}B^{-1}\\ \mathbf{0} & \mathbf{0} & B^{-1} \end{bmatrix}
=\begin{bmatrix}
1 & 0 & 0 & 0\\
0 & 1 & -1 & -1\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{bmatrix}$$

Now

$$B_2^{-1}\,b^{(2)}=\begin{bmatrix}
1 & 0 & 0 & 0\\
0 & 1 & -1 & -1\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} 0\\ 0\\ 6\\ 2 \end{bmatrix}
=\begin{bmatrix} 0\\ -8\\ 6\\ 2 \end{bmatrix}=x_B^{(2)}$$

Step 3 : Construction of the starting simplex table for revised simplex method

The starting simplex table for revised simplex method is as follows :


(Phase I)

Variables in   B2⁻¹                                   Solution     Yk(2) = Y2(2)   Min. Ratio
the basis      e1(2)   e2(2)   β1(2)      β2(2)       xB(2) =      = B2⁻¹ α2(2)    xB/Y2, yi2 > 0
                               (α5(2))    (α6(2))     B2⁻¹ b(2)

Z′             1       0       0          0           0            2
Za             0       1       −1         −1          −8           −6
x1a (xB1)      0       0       1          0           6            5               6/5 (Min) →
x2a (xB2)      0       0       0          1           2            1               2/1

                               ↑ Outgoing vector

Step 4 : Since Za = −8 < 0,

∴ we compute ∆ja for all αj(2), j = 1, 2, 3, 4 not in the basis B2 :

$$[\Delta_{1a},\ \Delta_{2a},\ \Delta_{3a},\ \Delta_{4a}]=-(\text{second row of } B_2^{-1})\,[\alpha_1^{(2)}\ \alpha_2^{(2)}\ \alpha_3^{(2)}\ \alpha_4^{(2)}]$$

$$=-(0,\ 1,\ -1,\ -1)\begin{bmatrix}
1 & 2 & 0 & 0\\
0 & 0 & 0 & 0\\
2 & 5 & -1 & 0\\
1 & 1 & 0 & -1
\end{bmatrix}=[3,\ 6,\ -1,\ -1]$$

∴ ∆1a = 3, ∆2a = 6, ∆3a = −1, ∆4a = −1.

Since some ∆ja > 0, we can improve Za. So we find the entering and outgoing vectors to improve Za.

(i) To find entering vector : Since ∆ ka = Max. ∆ ja = 6 = ∆2 a .

∴ k = 2, i.e., α 2(2 ) is the entering vector and x2 the variable entering in B.F.S.

(ii) To find outgoing vector : Since α 2(2 ) is entering vector,

∴ we compute

Y2(2 ) = B2 −1 α 2(2 ) = [2, − 6, 5, 1].

Applying the minimum ratio rule, βr(2) is the outgoing vector if

xBr/yr2 = Min {xBi/yi2, yi2 > 0} = Min {6/5, 2/1} = 6/5 = xB1/y12

∴ r = 1, i.e., β1(2) is the outgoing vector and the key element = 5.

Step 5 : To find the improved solution : In order to bring α2(2) in place of β1(2) in the basis matrix, we divide the third row by 5 and then add −2, 6 and −1 times of it to the first, second and fourth rows respectively. Thus the next simplex table is as follows :

Variables in   B2⁻¹                                   Solution   Yk(2) = Y1(2)   Min. Ratio
the basis      e1(2)   e2(2)   β1(2)      β2(2)       xB(2)      = B2⁻¹ α1(2)    xB/Y1, yi1 > 0
                               (α2(2))    (α6(2))

Z′             1       0       −2/5       0           −12/5      1/5
Za             0       1       1/5        −1          −4/5       −3/5
x2 (xB1)       0       0       1/5        0           6/5        2/5             (6/5)/(2/5) = 3
x2a (xB2)      0       0       −1/5       1           4/5        3/5             (4/5)/(3/5) = 4/3 (Min) →

Since Za = −4/5 < 0,

we compute ∆ja for all αj(2), j = 1, 3, 4, 5 not in the basis :

$$[\Delta_{1a},\ \Delta_{3a},\ \Delta_{4a},\ \Delta_{5a}]=-(\text{second row of } B_2^{-1})\,[\alpha_1^{(2)}\ \alpha_3^{(2)}\ \alpha_4^{(2)}\ \alpha_5^{(2)}]$$

$$=-(0,\ 1,\ \tfrac{1}{5},\ -1)\begin{bmatrix}
1 & 0 & 0 & 0\\
0 & 0 & 0 & 1\\
2 & -1 & 0 & 1\\
1 & 0 & -1 & 0
\end{bmatrix}=[\tfrac{3}{5},\ \tfrac{1}{5},\ -1,\ -\tfrac{6}{5}]$$

∴ ∆1a = 3/5, ∆3a = 1/5, ∆4a = −1, ∆5a = −6/5.

Since some ∆ja > 0, we proceed to find the entering and outgoing vectors to improve Za.

∆ka = Max. ∆ja = 3/5 = ∆1a

∴ α1(2) is the entering vector.

To find the outgoing vector we first compute

Y1(2) = B2⁻¹ α1(2) = [1/5, −3/5, 2/5, 3/5].

Applying the minimum ratio rule we find that β2(2) (= α6(2)) is the outgoing vector.

∴ Key element = 3/5.

Step 6 : Proceeding as usual, the next simplex table is as follows :

Variables in   B2⁻¹                                   Solution
the basis      e1(2)   e2(2)   β1(2)      β2(2)       xB(2)
                               (α2(2))    (α1(2))

Z′             1       0       −1/3       −1/3        −8/3
Za             0       1       0          0           0
x2 (xB1)       0       0       1/3        −2/3        2/3
x1 (xB2)       0       0       −1/3       5/3         4/3

Since Za = 0, we conclude that all the artificial variables have been driven to zero.

∴ Now we enter the phase II and forget all the artificial variables.

Phase II

Now we enter phase II to maximize Z ′. For this we remove Z a from the above table, i.e.,
we construct the following table in phase II by removing second row and second column
of the above table :

Variables in   B1⁻¹                         Solution
the basis      e1(1)   β1(1)      β2(1)     xB(1)
                       (α2(1))    (α1(1))

Z′             1       −1/3       −1/3      −8/3
x2 (xB1)       0       1/3        −2/3      2/3
x1 (xB2)       0       −1/3       5/3       4/3

Now we compute ∆j for all αj(1), j = 3, 4 not in the basis B1 :

$$[\Delta_3,\ \Delta_4]=-(\text{first row of } B_1^{-1})\,[\alpha_3^{(1)}\ \alpha_4^{(1)}]
=-\left(1,\ -\tfrac{1}{3},\ -\tfrac{1}{3}\right)
\begin{bmatrix} 0 & 0\\ -1 & 0\\ 0 & -1 \end{bmatrix}
=\left[-\tfrac{1}{3},\ -\tfrac{1}{3}\right]$$

∴ ∆3 = −1/3, ∆4 = −1/3,

which are both negative.

Hence the solution given in the above table is optimal.

The optimal solution is

x1 = 4/3, x2 = 2/3, Max. Z′ = −8/3

∴ Min. Z = −Z′ = 8/3.
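As an independent cross-check (my own addition, assuming SciPy is available), the original minimization problem can be handed directly to scipy.optimize.linprog; the ≥ constraints are negated into the ≤ form the routine expects:

```python
from scipy.optimize import linprog

# Min Z = x1 + 2*x2  s.t.  2*x1 + 5*x2 >= 6,  x1 + x2 >= 2,  x1, x2 >= 0
res = linprog(c=[1, 2],
              A_ub=[[-2, -5], [-1, -1]],   # negate the >= rows to get <=
              b_ub=[-6, -2],
              bounds=[(0, None), (0, None)])
print(res.x, res.fun)   # expected: x = (4/3, 2/3), Min Z = 8/3
```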

6.9 Advantages and Disadvantages of the Revised Simplex Method over the Original Simplex Method
6.9.1 Advantages
The revised simplex method has the following advantages over the simplex method.
1. The revised simplex method provides more information at less computational effort.
2. In the revised simplex method we have to make fewer entries in each tableau. In the simplex method we have to record (n + 1)(m + 1) entries in each tableau, while in the revised simplex method these are only (m + 1)(m + 2). Thus, when n is large in comparison to m, a great deal of labour is saved; for example, with n = 100 and m = 10, a simplex tableau has 101 × 11 = 1111 entries against only 11 × 12 = 132 in the revised method.
3. The revised simplex method generates the inverse of the current basis matrix automatically.

6.9.2 Disadvantages
While solving a numerical problem by the revised simplex method, the original column vectors not in the basis matrix are also required at every iteration. Thus there may be more computational mistakes in the revised simplex method than in the simplex method.

1. Describe the revised simplex method, when artificial vectors are added to obtain an
identity matrix for the initial basis matrix.

Solve the following L.P. problems by revised simplex method :

2. Min. Z = 2x1 + x2
   subject to 3x1 + x2 ≤ 3, 4x1 + 3x2 ≥ 6, x1 + 2x2 ≤ 3, x1, x2 ≥ 0.

3. Max. Z = 5x1 + 3x2
   subject to 4x1 + 5x2 ≥ 10, 5x1 + 2x2 ≤ 10, 3x1 + 8x2 ≤ 12, x1, x2 ≥ 0.

4. Min. Z = x1 + x2
   subject to x1 + 2x2 ≥ 7, 4x1 + x2 ≥ 6, x1, x2 ≥ 0.

5. Max. Z = −5x2
   subject to x1 + x2 ≤ 2, x1 + 5x2 ≥ 10, x1, x2 ≥ 0.

Multiple Choice Questions


1. In standard form I of revised simplex method we do not need :
(a) Slack variables (b) Surplus variables
(c) Artificial variables (d) None of these
2. In standard form II of revised simplex method we need :
(a) Artificial variables (b) Slack variables
(c) Surplus variables (d) None of these
3. The number of additional constraints in standard form I of revised simplex method
is :
(a) 0 (b) 1
(c) 2 (d) None of these
4. The number of additional constraints in standard form II of revised simplex method
is :
(a) 0 (b) 1
(c) 2 (d) None of these
5. In standard form I of revised simplex method the basis matrix is denoted by :
(a) B (b) B0
(c) B1 (d) B2

6. In standard form II of revised simplex method the basis matrix is denoted by :


(a) B0 (b) B1

(c) B (d) B2

Fill in the Blank


1. In standard form I of revised simplex method an identity matrix is available after
adding ............... and ............... variables.
2. In standard form I of revised simplex method, the objective function is also treated
as another (additional) ............... .
3. In standard form II of revised simplex method the artificial variables are needed to
get an ............... .
4. For the solution of a problem by revised simplex method we use two ...............
method in standard form II.
5. In revised simplex method the inverse of the current ............... matrix is obtained
automatically.

Answers

Exercise
2. x1 = 3/5, x2 = 6/5, Min. Z = 12/5

3. x1 = 28/17, x2 = 15/17, Max. Z = 185/17

4. x1 = 5/7, x2 = 22/7, Min. Z = 27/7

5. x1 = 0, x2 = 2, Max. Z = −10

Multiple Choice Questions


1. (c) 2. (a)
3. (b) 4. (c)
5. (c) 6. (d)

Fill in the Blank


1. slack, surplus 2. constraint
3. identity matrix 4. phase
5. basis

Sensitivity Analysis

7.1 Introduction

The optimal solution of a linear programming problem, Max. Z = c x, subject to A x = b and x ≥ 0, depends upon the parameters aij, bi and cj. So far we have
assumed that these parameters are given (constant), but there are problems where these
are not constant and vary from time to time. For example, in a diet-problem the cost of
any individual feed will vary from time to time. Here in this chapter we are interested to
know the limits of variations of these parameters so that the solution remains optimal
feasible. In other words, we wish to see the sensitiveness of the optimal feasible solution
corresponding to the variations of these parameters.

The investigations that deal with changes in the optimal solutions due to discrete variations in the
parameters aij , bi and c j are called sensitivity analysis.

The purpose of this chapter, i.e., the objective of sensitivity analysis, is to reduce to a minimum the additional computational effort which arises in solving the problem as a new one. In most cases it is not necessary to solve the problem again; a relatively small amount of calculation applied to the old optimal solution will be sufficient.

In this chapter we shall discuss the following changes (variations) in the L.P.P. :
1. Variations in the price vector c.
2. Variations in the requirement vector b.
3. Variations in the elements aij of the coefficient matrix.
4. Addition of a new variable to the problem.

7.2 Variation in The Price Vector c


Consider the L.P.P. Max. Z = c x, subject to A x = b, x ≥ 0. If x B is the optimal basic
feasible solution and B the optimal basis matrix, we have

x B = B −1b.

It is clear that xB is independent of c; therefore a change in some component cj of c will not change xB, i.e., xB will always remain a basic feasible solution.

The condition of optimality for the solution xB is ∆j = cj − Zj ≤ 0 for all j not in the basis, which is satisfied before any change in cj. But when cj changes, the condition of optimality may not remain satisfied. The change in the price vector c may be made in the following two ways :
1. Variation in c j ∉ c B (i.e., change in c j which is the price of the non-basic

variable x j ).

Since x B is an optimal solution of the L.P.P. (maximization problem)

∴ ∆ j = c j − Z j ≤ 0, for all j not in the basis.

If ∆ ck is the change in the cost ck and ck is not present in c B (the vector of the costs
associated to basic variables) then c B remains unaltered with this change. Also there is no
change in B.

∴ Z j = c B B −1 α j = c B Y j also remains unaltered.

If xB is to remain an optimal solution of the given L.P.P. when ck changes to ck + ∆ck, then, since cj − Zj remains unaltered for all j (≠ k) not in the basis, we must have

(ck + ∆ck) − Zk ≤ 0

or ∆ck ≤ Zk − ck (= −∆k).   ...(1)

Hence, if ck ∉c B changes to ck + ∆ ck such that ∆ ck ≤ Z k − ck (= − ∆ k ), the value of the


objective function and the optimal solution of the problem remains unchanged. Note
that there is no lower bound to ∆ ck .
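In computational terms the bound (1) is a single dot product. A minimal sketch follows (mine, using NumPy; the function name is illustrative):

```python
import numpy as np

def nonbasic_cost_upper_bound(cB, Yk, ck):
    """Largest increase of a non-basic cost ck keeping xB optimal (max. problem).

    cB : prices of the basic variables; Yk = B^-1 alpha_k (final tableau column).
    Returns Zk - ck = -Delta_k; ck may decrease without bound."""
    return float(cB @ Yk) - ck

# Data of Example 1 below: cB = (0, 5), Y1 = (1/3, 2/3), c1 = 3.
print(nonbasic_cost_upper_bound(np.array([0, 5]), np.array([1/3, 2/3]), 3))  # 1/3
```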

2. Variation in c j ∈ c B (i.e., change in c j which is the price of the basic variable).

We know that

$$Z_j=c_B B^{-1}\alpha_j=c_B Y_j=\sum_{i=1}^{m} c_{Bi}\,y_{ij}$$

Let ∆cBk be the change in cBk, the price of the basic variable xBk. If Zj* is the value of Zj in the new solution, then

$$Z_j^*=\sum_{\substack{i=1\\ i\neq k}}^{m} c_{Bi}\,y_{ij}+(c_{Bk}+\Delta c_{Bk})\,y_{kj}
=\sum_{i=1}^{m} c_{Bi}\,y_{ij}+y_{kj}\,\Delta c_{Bk}=Z_j+y_{kj}\,\Delta c_{Bk}$$

∴ cj − Zj* = cj − (Zj + ykj ∆cBk) = (cj − Zj) − ykj ∆cBk

The solution xB will remain optimal for the change ∆cBk in cBk if

cj − Zj* ≤ 0 for all j not in the basis

or (cj − Zj) − ykj ∆cBk ≤ 0, i.e., ykj ∆cBk ≥ cj − Zj

∴ ∆cBk ≥ (cj − Zj)/ykj, for ykj > 0

and ∆cBk ≤ (cj − Zj)/ykj, for ykj < 0.   ...(2)

Hence, the range of ∆cBk (change in the price cBk of the basic variable xBk) so that the solution remains optimal is given by

$$\max_{y_{kj}>0}\left\{\frac{c_j-Z_j}{y_{kj}}\right\}\le \Delta c_{Bk}\le \min_{y_{kj}<0}\left\{\frac{c_j-Z_j}{y_{kj}}\right\}   ...(3)$$

for all j corresponding to which αj is not in the optimal basis.

If no ykj > 0, there is no lower bound to ∆cBk, and if no ykj < 0, there is no upper bound to ∆cBk.
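The bounds in (3) can be read off the optimal tableau mechanically. A short sketch (my own, in Python), where deltas[j] stands for ∆j = cj − Zj and yk[j] for ykj, taken over the non-basic columns only:

```python
import numpy as np

def basic_cost_range(deltas, yk):
    """Range of Delta_cBk from (3); deltas[j] = cj - Zj, yk[j] = ykj, non-basic j."""
    lower = max((d / y for d, y in zip(deltas, yk) if y > 0), default=-np.inf)
    upper = min((d / y for d, y in zip(deltas, yk) if y < 0), default=np.inf)
    return lower, upper

# Example 1 below (k = 2): non-basic j = 1, 4 with Delta_1 = -1/3, Delta_4 = -5/3
# and y21 = 2/3, y24 = 1/3.
print(basic_cost_range([-1/3, -5/3], [2/3, 1/3]))   # (-0.5, inf)
```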

To find the change in the value of the objective function :

The value of the objective function for the price vector c is given by

$$Z=c_B\,x_B=\sum_{i=1}^{m} c_{Bi}\,x_{Bi}.$$

When cBk ∈ cB is changed to cBk + ∆cBk, if Z* is the new value of the objective function, then

$$Z^*=\sum_{\substack{i=1\\ i\neq k}}^{m} c_{Bi}\,x_{Bi}+(c_{Bk}+\Delta c_{Bk})\,x_{Bk}
=\sum_{i=1}^{m} c_{Bi}\,x_{Bi}+x_{Bk}\,\Delta c_{Bk}=Z+x_{Bk}\,\Delta c_{Bk}.$$

Hence, if ∆cBk (the change in cBk) satisfies (3), the solution xB remains optimal and the value of the objective function is changed by the amount xBk ∆cBk, where xBk is the basic variable corresponding to cBk.

Note : If the variation in cj is obtained as a ≤ cj ≤ b, the student is advised to check the optimality of the solution for the extreme values of cj, i.e., for cj = a and cj = b. If the solution is not optimal for cj = a, then the variation in cj will be given by a < cj ≤ b; similarly for cj = b.

Example 1: The linear programming problem is


Max. Z = 3 x1 + 5 x2

subject to x1 + x2 ≤ 1

2 x1 + 3 x2 ≤ 1

and x1, x2 ≥ 0

Obtain the variations in c j ( j = 1, 2 ) which are permitted without changing the optimal

solution.
Solution: Introducing the slack variables x3 and x4, the given L.P.P. becomes

Max. Z = 3x1 + 5x2 + 0.x3 + 0.x4

subject to x1 + x2 + x3 = 1

2x1 + 3x2 + x4 = 1

x1, x2, x3, x4 ≥ 0

Taking x1 = 0, x2 = 0, we get x3 = 1, x4 = 1, which is the starting B.F.S.

The solution of the problem by simplex method is given in the following table :

B    cB    cj →          3      5      0      0      Min. Ratio
           xB            Y1     Y2     Y3     Y4     xB/Y2, yi2 > 0

Y3   0     1             1      1      1      0      1/1 = 1
Y4   0     1             2      3      0      1      1/3 (Min) →

Z = cB xB = 0    ∆j      3      5      0      0
                                ↑
Y3   0     2/3           1/3    0      1      −1/3
Y2   5     1/3           2/3    1      0      1/3

Z = cB xB = 5/3  ∆j      −1/3   0      0      −5/3

Hence, the optimal solution of the L.P.P. is

x1 = 0, x2 = 1/3 and Max. Z = 5/3.

Here cB = (cB1, cB2) = (0, 5) = (c3, c2).

To find variation in c1 : Here c1 ∉ cB.

∴ From (1), article 7.2, the change ∆c1 in c1 so that the solution remains optimal is given by

∆c1 ≤ Z1 − c1 = −∆1, or ∆c1 ≤ 1/3.

∴ The range over which c1 can vary maintaining the optimality of the solution given in the above table is

−∞ < c1 ≤ c1 + ∆c1, i.e., −∞ < c1 ≤ 3 + 1/3, i.e., −∞ < c1 ≤ 10/3.

The value of the objective function will not change in this case.

To find variation in c2 : Here c2 ∈ cB and c2 = cBk = cB2 = 5, i.e., k = 2.

∴ From (3), article 7.2, the range of ∆cB2 (change in cB2) is given by

$$\max_{y_{2j}>0}\left\{\frac{c_j-Z_j}{y_{2j}}\right\}\le \Delta c_{B2}\le \min_{y_{2j}<0}\left\{\frac{c_j-Z_j}{y_{2j}}\right\}$$

Here k = 2 and y21 = 2/3 > 0, y24 = 1/3 > 0. We cannot consider y22 and y23, as Y2 (= α2 = β2) and Y3 (= α3 = β1) are in the basis.

Since no y2j < 0, there is no upper bound to ∆cB2.

The change ∆cB2 is given by

Max. {∆1/y21, ∆4/y24} ≤ ∆cB2 < ∞

or Max. {(−1/3)/(2/3), (−5/3)/(1/3)} ≤ ∆cB2 < ∞

or −1/2 ≤ ∆cB2 < ∞

⇒ cB2 − 1/2 ≤ c2 < cB2 + ∞, or 5 − 1/2 ≤ c2 < 5 + ∞, or 9/2 ≤ c2 < ∞.

Example 2: For the L.P.P. Max. Z = 5 x1 + 3 x2

subject to 3 x1 + 5 x2 ≤ 15

5 x1 + 2 x2 ≤ 10

and x1, x2 ≥ 0

Find an optimal solution.

Hence, find how far the component c1 of the vector c of the function Z = c x can be
increased without destroying the optimality of the solution.

Solution: Introducing the slack variables x3, x4, the given L.P.P. becomes

Max. Z = 5x1 + 3x2 + 0.x3 + 0.x4

subject to 3x1 + 5x2 + x3 = 15

5x1 + 2x2 + x4 = 10

and x1, x2, x3, x4 ≥ 0

Taking x1 = 0, x2 = 0, we get x3 = 15, x4 = 10 which is the starting B.F.S.

The solution of the problem by simplex method is given in the following table :

B    cB    cj →          5      3      0       0       Min. Ratio
           xB            Y1     Y2     Y3      Y4

Y3   0     15            3      5      1       0       15/3 = 5          (xB/Y1, yi1 > 0)
Y4   0     10            5      2      0       1       10/5 = 2 (Min) →

Z = cB xB = 0      ∆j    5      3      0       0
                         ↑
Y3   0     9             0      19/5   1       −3/5    45/19 (Min) →     (xB/Y2, yi2 > 0)
Y1   5     2             1      2/5    0       1/5     5

Z = cB xB = 10     ∆j    0      1      0       −1
                                ↑
Y2   3     45/19         0      1      5/19    −3/19
Y1   5     20/19         1      0      −2/19   5/19

Z = cB xB = 235/19 ∆j    0      0      −5/19   −16/19

In the last table, since all ∆j ≤ 0, this solution is optimal.

Hence, the optimal solution of the problem is

x1 = 20/19, x2 = 45/19 and Max. Z = 235/19.

Here cB = (cB1, cB2) = (3, 5) = (c2, c1).

To find variation in c1 : Here c1 ∈ cB and c1 = cBk = cB2 = 5, i.e., k = 2.

The range of variation ∆cB2 for the solution to remain optimal is given by

$$\max_{y_{2j}>0}\left\{\frac{c_j-Z_j}{y_{2j}}\right\}\le \Delta c_{B2}\le \min_{y_{2j}<0}\left\{\frac{c_j-Z_j}{y_{2j}}\right\}$$

Here y23 = −2/19 < 0, y24 = 5/19 > 0; we cannot consider y21 and y22 as Y1 (= α1 = β2), Y2 (= α2 = β1) are in the basis.

∴ (c4 − Z4)/y24 ≤ ∆cB2 ≤ (c3 − Z3)/y23

or (−16/19)/(5/19) ≤ ∆cB2 ≤ (−5/19)/(−2/19)

or −16/5 ≤ ∆c1 ≤ 5/2

⇒ 5 − 16/5 ≤ c1 ≤ 5 + 5/2   [∵ given c1 = 5]

or 9/5 ≤ c1 ≤ 15/2,

which gives the variation in c1 without affecting the optimality of the solution.

Example 3: Solve the following L.P.P.

Max. Z = − x2 + 3 x3 − 2 x5

subject to x1 + 3 x2 − x3 + 2 x5 = 7

−2 x2 + 4 x3 + x4 = 12

−4 x2 + 3 x3 + 8 x5 + x6 = 10

and x j ≥ 0 , j = 1, 2, . . . , 6

Find the variations of the costs c1, c2 , c3 , c4 , c5 and c6 for which the optimal solution
remains optimal.

Solution: The columns corresponding to the coefficients of x1, x4 and x6 form a unit
matrix, so x1, x4 , x6 may be taken as the basic variables and thus there is no need to
consider artificial variables.

Taking x2 = 0, x3 = 0, x5 = 0, we get x1 = 7, x4 = 12, x6 = 10, which is the starting B.F.S.

The solution of the problem by simplex method is given in the following table.

B    cB    cj →          0      −1     3       0      −2      0      Min. Ratio
           xB            Y1     Y2     Y3      Y4     Y5      Y6     xB/Y3, yi3 > 0

Y1   0     7             1      3      −1      0      2       0      —
Y4   0     12            0      −2     4       1      0       0      12/4 = 3 (Min) →
Y6   0     10            0      −4     3       0      8       1      10/3

Z = cB xB = 0     ∆j     0      −1     3       0      −2      0
                                       ↑              (xB/Y2, yi2 > 0)
Y1   0     10            1      5/2    0       1/4    2       0      10/(5/2) = 4 (Min) →
Y3   3     3             0      −1/2   1       1/4    0       0      —
Y6   0     1             0      −5/2   0       −3/4   8       1      —

Z = cB xB = 9     ∆j     0      1/2    0       −3/4   −2      0
                                ↑
Y2   −1    4             2/5    1      0       1/10   4/5     0
Y3   3     5             1/5    0      1       3/10   2/5     0
Y6   0     11            1      0      0       −1/2   10      1

Z = cB xB = 11    ∆j     −1/5   0      0       −4/5   −12/5   0

In the last table all ∆j ≤ 0, therefore this solution is optimal.

Hence the optimal solution of this L.P.P. is

x1 = 0, x2 = 4, x3 = 5, x4 = 0, x5 = 0, x6 = 11 and Max. Z = 11.

Here cB = (cB1, cB2, cB3) = (−1, 3, 0) = (c2, c3, c6).

To find variations in c1, c4, c5 ∉ cB :

Since for ck ∉ cB, ∆ck ≤ Zk − ck (= −∆k),

∴ ∆c1 ≤ −∆1, i.e., ∆c1 ≤ 1/5;  ∆c4 ≤ −∆4, i.e., ∆c4 ≤ 4/5;

and ∆c5 ≤ −∆5, i.e., ∆c5 ≤ 12/5.

∵ c1 = 0, c4 = 0 and c5 = −2,

∴ c1 ≤ 0 + ∆c1, i.e., c1 ≤ 1/5; similarly c4 ≤ 4/5, c5 ≤ 2/5.

There are no lower bounds to c1, c4, c5.

To find variations in c2, c3, c6 ∈ cB : The variation in cj = cBk ∈ cB is given by

$$\max_{y_{kj}>0}\left\{\frac{c_j-Z_j}{y_{kj}}\right\}\le \Delta c_{Bk}\le \min_{y_{kj}<0}\left\{\frac{c_j-Z_j}{y_{kj}}\right\}$$

for all j corresponding to which αj (= Yj) is not in the optimal basis.

Variation in c2 ∈ cB : Here c2 = cBk = cB1 = −1, i.e., k = 1.

∴ The range of variation ∆cB1 is given by

$$\max_{y_{1j}>0}\left\{\frac{c_j-Z_j}{y_{1j}}\right\}\le \Delta c_{B1}\le \min_{y_{1j}<0}\left\{\frac{c_j-Z_j}{y_{1j}}\right\}$$

Here y11 = 2/5 > 0, y14 = 1/10 > 0, y15 = 4/5 > 0 and no y1j < 0. We cannot consider y12, y13, y16 as Y2 (= α2 = β1), Y3 (= α3 = β2), Y6 (= α6 = β3) are in the basis.

∴ Max. {(c1 − Z1)/y11, (c4 − Z4)/y14, (c5 − Z5)/y15} ≤ ∆cB1 < ∞

or Max. {(−1/5)/(2/5), (−4/5)/(1/10), (−12/5)/(4/5)} ≤ ∆c2 < ∞

or Max. (−1/2, −8, −3) ≤ ∆c2 < ∞, or −1/2 ≤ ∆c2 < ∞

⇒ −1 − 1/2 ≤ c2 < −1 + ∞, i.e., −3/2 ≤ c2 < ∞   [∵ c2 = −1]

Variation in c3 ∈ cB : Here c3 = cBk = cB2, i.e., k = 2.

∴ The range of variation ∆cB2 is given by

$$\max_{y_{2j}>0}\left\{\frac{c_j-Z_j}{y_{2j}}\right\}\le \Delta c_{B2}\le \min_{y_{2j}<0}\left\{\frac{c_j-Z_j}{y_{2j}}\right\}$$

Here y21 = 1/5 > 0, y24 = 3/10 > 0, y25 = 2/5 > 0 and no y2j < 0. We cannot consider y22, y23, y26 as Y2 (= α2 = β1), Y3 (= α3 = β2), Y6 (= α6 = β3) are in the basis.

∴ Max. {(c1 − Z1)/y21, (c4 − Z4)/y24, (c5 − Z5)/y25} ≤ ∆cB2 < ∞

or Max. {(−1/5)/(1/5), (−4/5)/(3/10), (−12/5)/(2/5)} ≤ ∆cB2 < ∞

or Max. (−1, −8/3, −6) ≤ ∆c3 < ∞, or −1 ≤ ∆c3 < ∞

⇒ 3 − 1 ≤ c3 < 3 + ∞, i.e., 2 ≤ c3 < ∞   [∵ c3 = 3]

Variation in c6 ∈ cB : Here c6 = cBk = cB3, i.e., k = 3.

∴ The range of variation ∆cB3 is given by

$$\max_{y_{3j}>0}\left\{\frac{c_j-Z_j}{y_{3j}}\right\}\le \Delta c_{B3}\le \min_{y_{3j}<0}\left\{\frac{c_j-Z_j}{y_{3j}}\right\}$$

Here y31 = 1 > 0, y35 = 10 > 0 and y34 = −1/2 < 0. We cannot consider y32, y33, y36 as Y2 (= α2 = β1), Y3 (= α3 = β2), Y6 (= α6 = β3) are in the basis.

∴ Max. {(c1 − Z1)/y31, (c5 − Z5)/y35} ≤ ∆cB3 ≤ Min. {(c4 − Z4)/y34}

or Max. {(−1/5)/1, (−12/5)/10} ≤ ∆c6 ≤ Min. {(−4/5)/(−1/2)}

or Max. (−1/5, −6/25) ≤ ∆c6 ≤ Min. (8/5), or −1/5 ≤ ∆c6 ≤ 8/5

⇒ 0 − 1/5 ≤ c6 ≤ 0 + 8/5, i.e., −1/5 ≤ c6 ≤ 8/5   [∵ c6 = 0]

Hence, the variations of the costs c1, c2, ..., c6 for which the optimal solution remains optimal are as follows :

−∞ < c1 ≤ 1/5,  −3/2 ≤ c2 < ∞,  2 ≤ c3 < ∞,  −∞ < c4 ≤ 4/5,

−∞ < c5 ≤ 2/5,  −1/5 ≤ c6 ≤ 8/5.

7.3 Variation in the Requirement Vector b


We know that the condition of optimality for the B.F.S. of a L.P.P. is ∆j = cj − Zj ≤ 0. Since ∆j does not involve any of the bi, if any component bi of the requirement vector b = [b1, b2, ..., bm] is changed, this change will not affect the condition of optimality. Hence, if any component bi is changed to bi + ∆bi, the new solution thus obtained will remain optimal. But xB = B⁻¹ b depends on b; therefore any change in b may affect the feasibility of the optimal solution, i.e., the optimal solution obtained by changing b may or may not be feasible. Thus, a change in bi must be of a magnitude that preserves the feasibility of the given solution.

Let the component bl of the requirement vector b be changed to bl + ∆bl. If the new requirement vector is b*, then

b* = [b1, b2, ..., bl + ∆bl, ..., bm]

If xB* is the solution of the new L.P.P. obtained by changing bl to bl + ∆bl, then

xB* = B⁻¹ b*, where B is the optimal basis.

Let B⁻¹ = (β1, β2, ..., βl, ..., βm)

$$=\begin{bmatrix}
\beta_{11} & \beta_{12} & \cdots & \beta_{1l} & \cdots & \beta_{1m}\\
\beta_{21} & \beta_{22} & \cdots & \beta_{2l} & \cdots & \beta_{2m}\\
\vdots & & & \vdots & & \vdots\\
\beta_{i1} & \beta_{i2} & \cdots & \beta_{il} & \cdots & \beta_{im}\\
\vdots & & & \vdots & & \vdots\\
\beta_{m1} & \beta_{m2} & \cdots & \beta_{ml} & \cdots & \beta_{mm}
\end{bmatrix}$$

Since b* = [b1, b2, ..., bl + ∆bl, ..., bm]   (only the l-th component is changed)

= [b1 + 0, b2 + 0, ..., bl + ∆bl, ..., bm + 0]

= [b1, b2, ..., bl, ..., bm] + [0, 0, ..., ∆bl, ..., 0]

= b + [0, 0, ..., ∆bl, ..., 0]

∴ xB* = B⁻¹ b*

= B⁻¹ {b + [0, 0, ..., ∆bl, ..., 0]}

= B⁻¹ b + B⁻¹ [0, 0, ..., ∆bl, ..., 0]

= xB + (β1, β2, ..., βl, ..., βm)·[0, 0, ..., ∆bl, ..., 0]   (∆bl in the l-th component)

= xB + βl ∆bl

= [xB1, xB2, ..., xBl, ..., xBm] + [β1l, β2l, ..., βll, ..., βml] ∆bl

= [xB1 + β1l ∆bl, ..., xBl + βll ∆bl, ..., xBm + βml ∆bl]   ...(1)

Now if the solution xB* is feasible, then

xBi + βil ∆bl ≥ 0 for all i = 1, 2, ..., m

or βil ∆bl ≥ −xBi

∴ ∆bl ≥ −xBi/βil, for βil > 0

and ∆bl ≤ −xBi/βil, for βil < 0.

Hence the range of ∆bl, so that the optimal solution xB* also remains feasible, is given by

$$\max_{\beta_{il}>0}\left\{-\frac{x_{Bi}}{\beta_{il}}\right\}\le \Delta b_l\le \min_{\beta_{il}<0}\left\{-\frac{x_{Bi}}{\beta_{il}}\right\}   ...(2)$$

and the new value of the optimal solution is given by (1).

To find the change in the value of the objective function :

The value of the objective function for the requirement vector b is given by

$$Z=c_B\,x_B=\sum_{i=1}^{m} c_{Bi}\,x_{Bi}$$

When bl is changed to bl + ∆bl, if Z* is the new value of the objective function, then

Z* = cB xB*

= (cB1, cB2, ..., cBm)·[xB1 + β1l ∆bl, ..., xBm + βml ∆bl]

$$=\sum_{i=1}^{m} c_{Bi}(x_{Bi}+\beta_{il}\,\Delta b_l)
=\sum_{i=1}^{m} c_{Bi}\,x_{Bi}+\sum_{i=1}^{m} c_{Bi}\,\beta_{il}\,\Delta b_l
=Z+\sum_{i=1}^{m} c_{Bi}\,\beta_{il}\,\Delta b_l$$

Hence, if ∆bl (the change in bl) satisfies (2), then the solution xB* given by (1) is also an optimal feasible solution and the value of the objective function is changed by the amount Σ (i = 1 to m) cBi βil ∆bl.
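The feasibility range (2) needs only the l-th column of B⁻¹ and the current basic solution xB. A compact sketch (mine, in Python):

```python
import numpy as np

def rhs_range(beta_l, xB):
    """Range of Delta_b_l keeping xB* = xB + beta_l * Delta_b_l >= 0, as in (2).

    beta_l : l-th column of B^-1; xB : current basic solution."""
    lower = max((-x / b for x, b in zip(xB, beta_l) if b > 0), default=-np.inf)
    upper = min((-x / b for x, b in zip(xB, beta_l) if b < 0), default=np.inf)
    return lower, upper

# Example 4 below, l = 3: beta_3 = (-1, 1, 4) and xB = (6, 4, 10).
print(rhs_range([-1, 1, 4], [6, 4, 10]))   # (-2.5, 6.0)
```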

Example 4: Given the following L.P.P.

Max. Z = − x1 + 2 x2 − x3

subject to 3 x1 + x2 − x3 ≤ 10

− x1 + 4 x2 + x3 ≥ 6

x2 + x3 ≤ 4

and x1, x2 , x3 ≥ 0 .

Find the separate range of b1, b2 and b3 (the constants on the right hand sides of the
constraints) consistent with the optimal solution.

Solution: Introducing the slack variables x4, x6, the surplus variable x5 and the artificial variable xa, the given L.P.P. reduces to

Max. Z = −x1 + 2x2 − x3 + 0.x4 + 0.x5 + 0.x6 − M xa

subject to 3x1 + x2 − x3 + x4 = 10

−x1 + 4x2 + x3 − x5 + xa = 6

x2 + x3 + x6 = 4

and x1, x2, ..., x6, xa ≥ 0

Taking x1 = x2 = x3 = x5 = 0, we get x4 = 10, xa = 6, x6 = 4, which is the starting B.F.S.


The solution of the problem by simplex method is given in the following table.

B    cB    cj →      −1       2        −1      0      0      0      −M     Min. Ratio
           xB        Y1       Y2       Y3      Y4     Y5     Y6     A1     xB/Y2, yi2 > 0

Y4   0     10        3        1        −1      1      0      0      0      10/1 = 10
A1   −M    6         −1       4        1       0      −1     0      1      6/4 = 3/2 (Min) →
Y6   0     4         0        1        1       0      0      1      0      4/1 = 4

Z = −6M    ∆j        −1 − M   2 + 4M   −1 + M  0      −M     0      0
                              ↑                       (xB/Y5, yi5 > 0)
Y4   0     17/2      13/4     0        −5/4    1      1/4    0      −1/4   (17/2)/(1/4) = 34
Y2   2     3/2       −1/4     1        1/4     0      −1/4   0      1/4    —
Y6   0     5/2       1/4      0        3/4     0      1/4    1      −1/4   (5/2)/(1/4) = 10 (Min) →

Z = 3      ∆j        −1/2     0        −3/2    0      1/2    0      −M − 1/2
                                                      ↑
Y4   0     6         3        0        −2      1      0      −1     0
Y2   2     4         0        1        1       0      0      1      0
Y5   0     10        1        0        3       0      1      4      −1

Z = 8      ∆j        −1       0        −3      0      0      −2     −M

In the last table all ∆j ≤ 0, therefore this solution is optimal.

The optimal solution is x1 = 0, x2 = 4, x3 = 0, Max. Z = 8.

Here B = (α4, α2, α5), b = [10, 6, 4],

xB = [x4, x2, x5] = [xB1, xB2, xB3] = [6, 4, 10]

$$∴\quad B^{-1}=(\beta_1,\ \beta_2,\ \beta_3)=\begin{bmatrix} 1 & 0 & -1\\ 0 & 0 & 1\\ 0 & -1 & 4 \end{bmatrix}*$$

To find variation in b1 : After changing b1 to b1 + ∆b1, the new requirement vector is given by

b* = [10 + ∆b1, 6, 4]

* B⁻¹ is obtained from the last simplex table; it consists of the columns corresponding to the first basis (α4, A1, α6).

From relation (2), article 7.3, the range of ∆b1 consistent with the optimal feasible solution is given by

$$\max_{\beta_{i1}>0}\left\{-\frac{x_{Bi}}{\beta_{i1}}\right\}\le \Delta b_1\le \min_{\beta_{i1}<0}\left\{-\frac{x_{Bi}}{\beta_{i1}}\right\}$$

Here β11 = 1 > 0, and since no βi1 < 0, there is no upper bound to ∆b1.

∴ We have Max. {−xB1/β11} ≤ ∆b1 < ∞, or −6/1 ≤ ∆b1 < ∞.

∴ The range for b1 is −6 + 10 ≤ b1 < 10 + ∞, or 4 ≤ b1 < ∞   [∵ b1 = 10]

To find variation in b2 : After changing b2 to b2 + ∆b2, the new requirement vector is

b** = [10, 6 + ∆b2, 4]

From relation (2), article 7.3, the range of ∆b2 consistent with the optimal feasible solution is given by

$$\max_{\beta_{i2}>0}\left\{-\frac{x_{Bi}}{\beta_{i2}}\right\}\le \Delta b_2\le \min_{\beta_{i2}<0}\left\{-\frac{x_{Bi}}{\beta_{i2}}\right\}$$

Here β32 = −1 < 0, and since no βi2 > 0, there is no lower bound to ∆b2.

∴ We have −∞ < ∆b2 ≤ Min. {−xB3/β32}, or −∞ < ∆b2 ≤ −10/(−1) = 10.

∴ The range of variation of b2 is

−∞ < b2 ≤ 6 + 10, i.e., −∞ < b2 ≤ 16   [∵ b2 = 6]

To find variation in b3 : After changing b3 to b3 + ∆b3, the new requirement vector is

b*** = [10, 6, 4 + ∆b3]

From relation (2), article 7.3, the range of ∆b3 consistent with the optimal feasible solution is given by

$$\max_{\beta_{i3}>0}\left\{-\frac{x_{Bi}}{\beta_{i3}}\right\}\le \Delta b_3\le \min_{\beta_{i3}<0}\left\{-\frac{x_{Bi}}{\beta_{i3}}\right\}$$

Here β23 = 1 > 0, β33 = 4 > 0 and β13 = −1 < 0.

∴ Max. {−xB2/β23, −xB3/β33} ≤ ∆b3 ≤ Min. {−xB1/β13}

or Max. {−4/1, −10/4} ≤ ∆b3 ≤ Min. {−6/(−1)}

or −5/2 ≤ ∆b3 ≤ 6

∴ The range of variation of b3 is

4 − 5/2 ≤ b3 ≤ 4 + 6, or 3/2 ≤ b3 ≤ 10   [∵ b3 = 4]

Example 5: Find the optimum solution of the problem :

Maximize Z = 6 x1 + 8 x2

Subject to the constraints :


5 x1 + 10 x2 ≤ 60

4 x1 + 4 x2 ≤ 40

and x1, x2 ≥ 0

Apply sensitivity analysis to find the solution of the given L.P.P. if :

(i) the right hand side vector [60, 40] of the constraints of the L.P.P. is changed to [40, 20];

(ii) the right hand side vector [60, 40] of the constraints is changed to [20, 40].   [Meerut 2007]

Solution: The given problem can be written as

A x = b   ...(1)

$$\text{where}\quad A=\begin{bmatrix} 5 & 10\\ 4 & 4 \end{bmatrix}=[\alpha_1,\ \alpha_2],\quad
x=\begin{bmatrix} x_1\\ x_2 \end{bmatrix},\quad
b=\begin{bmatrix} 60\\ 40 \end{bmatrix}$$

Introducing the slack variables x3 , x4 the given L.P.P. reduces to :

Max. Z = 6 x1 + 8 x2 + 0. x3 + 0. x4

subject to 5 x1 + 10 x2 + x3 = 60

4 x1 + 4 x2 + x4 = 40

and x1, x2 , x3 , x4 ≥ 0

Taking x1 = 0, x2 = 0, we get x3 = 60, x4 = 40, which is the starting basic feasible solution of the problem :

B    cB    cj →        6      8      0       0       Min. Ratio
           xB          Y1     Y2     Y3      Y4      xB/Y2, yi2 > 0

Y3   0     60          5      10     1       0       60/10 = 6 (Min) →
Y4   0     40          4      4      0       1       40/4 = 10

Z = cB xB = 0    ∆j    6      8      0       0
                              ↑                      (xB/Y1, yi1 > 0)
Y2   8     6           1/2    1      1/10    0       6/(1/2) = 12
Y4   0     16          2      0      −2/5    1       16/2 = 8 (Min) →

Z = cB xB = 48   ∆j    2      0      −4/5    0
                       ↑
Y2   8     2           0      1      1/5     −1/4
Y1   6     8           1      0      −1/5    1/2

Z = cB xB = 64   ∆j    0      0      −2/5    −1

Since no ∆j > 0, this solution is optimal. The optimal solution is

x1 = 8, x2 = 2 and Max. Z = 64.

In the last simplex table, the part corresponding to matrix A reduced to a unit matrix, written in proper order, is

B    cB    xB    Y1    Y2    Y3      Y4

Y1   6     8     1     0     −1/5    1/2
Y2   8     2     0     1     1/5     −1/4

$$\text{Hence}\quad B^{-1}=\begin{bmatrix} -1/5 & 1/2\\ 1/5 & -1/4 \end{bmatrix}
=\frac{1}{20}\begin{bmatrix} -4 & 10\\ 4 & -5 \end{bmatrix}$$

From (1), A x = b ⇒ x = [x1, x2] = B⁻¹ b

60  40 
(i) When b =   is changed to b′ =  , the new values of the basic variables in the
40  20 
table will become x B = B −1b′

 x  1 −4 10  40  2 
i.e., xB =  1 =    =  
 x2  20  4 −5  20  3 

i.e., x1 = 2, x2 = 3
Since both x1 and x2 are non-negative, so this solution is basic feasible solution. The
new optimal value of Z = 6 × 2 + 8 × 3 = 36

60  20 
(ii) When b =   is changed to b′′ =  , the new values of the basic variables in final
40  40 
iteration in the above table, will become

 x1  1 −4 10  20  16 


x B = B −1b′′ i.e., x B =   =  4 −5    =  
 x2  20   40  −6 

i.e., x1 = 16, x2 = −6
Since x2 = −6 is negative, so this solution becomes infeasible. Dual Simplex
algorithm (see 8.11) can be used to clear this infeasibility. The modified simplex
table (from last table of previous table) can be written as

B    cB    cj →        6      8      0       0
           xB          Y1     Y2     Y3      Y4

Y1   6     16          1      0      −1/5    1/2
Y2   8     −6          0      1      1/5     −1/4   → (leaving)

Z = cB xB = 48   ∆j    0      0      −2/5    −1
                                             ↑
Y1   6     4           1      2      1/5     0
Y4   0     24          0      −4     −4/5    1

Z = cB xB = 24   ∆j    0      −4     −6/5    0

∵ x2 = −6 is the negative basic variable, β2 (= Y2) is the leaving vector.

∵ ∆k/y2k = Min over j of {∆j/y2j, y2j < 0} = Min. {∆4/y24} = Min. {(−1)/(−1/4)} = 4 = ∆4/y24,

∴ k = 4, i.e., α4 (= Y4) is the entering vector and key element = y24 = −1/4.



Proceeding as usual, the last simplex table is shown above.

Here all ∆j ≤ 0, so the solution x1 = 4, x2 = 0 is optimal and feasible. Hence when b is changed to b″, the basic feasible optimal solution is x1 = 4, x2 = 0 and Max. Z = 24.
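A quick numerical check of part (ii) (my own addition, assuming SciPy is available): solving the modified problem from scratch should reproduce the dual simplex result.

```python
from scipy.optimize import linprog

# Max Z = 6 x1 + 8 x2 with the changed right hand side b'' = (20, 40):
res = linprog(c=[-6, -8],                  # negate the costs to maximize
              A_ub=[[5, 10], [4, 4]],
              b_ub=[20, 40],
              bounds=[(0, None)] * 2)
print(res.x, -res.fun)                     # expected: x = (4, 0), Max Z = 24
```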

7.4 Variation in the Element alk of the Coefficient Matrix A


Let the element alk of the l-th row and k-th column of the coefficient matrix A be changed
to alk + ∆alk . Now there are two possibilities according as alk is or not the element of the
optimal basis B.

Case I : When alk is not an element of the optimal basis B.

If alk is not an element of the optimal basis B, a change in alk will not affect B, and so B⁻¹ remains the same. Thus a change in such an alk will not affect the solution xB = B⁻¹ b; hence the feasibility of the solution xB is not affected. But the change in alk may change the optimality of the solution, i.e., the solution xB may not be the optimal solution of the new L.P.P. (obtained by changing alk to alk + ∆alk). Thus we have to find the range of variation of alk so that the solution still remains optimal.

Since all the column vectors except αk of the matrix A remain unaffected by the variation ∆alk in alk,

∴ ∆ j = c j − Z j ≤ 0 for all j (≠ k) not in the basis.

Thus, the solution x B will remain optimal for the new L.P.P. also, if

∆*k = ck − Z *k ≤ 0 ...(1)

where ∆*k and Z *k are ∆ k and Z k for the new L.P.P.

Let B⁻¹ = (β1, β2, ..., βl, ..., βm)

and αk = [a1k, a2k, ..., alk, ..., amk].

If αk* is αk for the new L.P.P., then

αk* = [a1k, a2k, ..., alk + ∆alk, ..., amk]

= [a1k + 0, a2k + 0, ..., alk + ∆alk, ..., amk + 0]

= [a1k, a2k, ..., alk, ..., amk] + [0, 0, ..., ∆alk, ..., 0]

= αk + [0, 0, ..., ∆alk, ..., 0]   (∆alk in the l-th component)

and Zk* = cB B⁻¹ αk*

= cB B⁻¹ {αk + [0, 0, ..., ∆alk, ..., 0]}

= cB B⁻¹ αk + cB B⁻¹ [0, 0, ..., ∆alk, ..., 0]

= Zk + cB (β1, β2, ..., βl, ..., βm)·[0, 0, ..., ∆alk, ..., 0]

= Zk + cB (βl ∆alk)

= Zk + ∆alk · cB βl

= Zk + ∆alk (cB1, cB2, ..., cBm)·[β1l, β2l, ..., βml]

$$=Z_k+\Delta a_{lk}\sum_{i=1}^{m} c_{Bi}\,\beta_{il}$$

∴ From (1), the solution xB will remain optimal if

$$\Delta_k^*=c_k-Z_k^*=c_k-Z_k-\Delta a_{lk}\sum_{i=1}^{m} c_{Bi}\,\beta_{il}\le 0$$

$$\text{or}\quad \Delta a_{lk}\sum_{i=1}^{m} c_{Bi}\,\beta_{il}\ge c_k-Z_k\ (=\Delta_k)$$

$$∴\quad \Delta a_{lk}\ge \frac{\Delta_k}{\sum_{i=1}^{m} c_{Bi}\,\beta_{il}}\quad\text{for}\quad \sum_{i=1}^{m} c_{Bi}\,\beta_{il}>0$$

$$\text{and}\quad \Delta a_{lk}\le \frac{\Delta_k}{\sum_{i=1}^{m} c_{Bi}\,\beta_{il}}\quad\text{for}\quad \sum_{i=1}^{m} c_{Bi}\,\beta_{il}<0$$

Hence, the range of ∆alk (change in alk ∉ B), so that the solution xB remains optimal and feasible, is given by

$$\frac{\Delta_k}{S}\le \Delta a_{lk}\quad(\text{when } S>0)\qquad\text{and}\qquad \Delta a_{lk}\le \frac{\Delta_k}{S}\quad(\text{when } S<0),   ...(2)$$

$$\text{where}\quad S=\sum_{i=1}^{m} c_{Bi}\,\beta_{il}.$$

If S = 0, ∆alk is unrestricted.

If S > 0, there is no upper bound to ∆alk, and if S < 0, there is no lower bound to ∆alk.

Note : There is no change in the value of the objective function if ∆alk satisfies (2).
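For Case I the whole computation reduces to the single sum S = Σ cBi βil. A small sketch (mine, NumPy-based):

```python
import numpy as np

def nonbasic_alk_range(cB, beta_l, delta_k):
    """Range of Delta_a_lk for a column alpha_k outside the basis, from (2)."""
    S = float(np.dot(cB, beta_l))          # S = sum_i cBi * beta_il
    if S > 0:
        return delta_k / S, np.inf         # no upper bound
    if S < 0:
        return -np.inf, delta_k / S        # no lower bound
    return -np.inf, np.inf                 # S = 0: unrestricted

# Example 6 below, a23: cB = (3, 10, 5), beta_2 = (-5/27, 1/3, 1/27), Delta_3 = -82/27.
print(nonbasic_alk_range([3, 10, 5], [-5/27, 1/3, 1/27], -82/27))  # (-1.025, inf)
```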

Case II : When alk is an element of the optimal basis B.

If alk, an element of the optimal basis B, is changed to alk + ∆alk, the optimal basis B will certainly be changed and hence B⁻¹ will also be changed. Consequently xB = B⁻¹ b and Zj = cB B⁻¹ αj will also change. A change in Zj may disturb the optimality condition ∆j = cj − Zj ≤ 0, while a change in xB may disturb the feasibility of the solution. Hence our aim is to find the range of variation of alk so that neither the optimality nor the feasibility of the solution is disturbed.

When the element alk ∈ B is changed to alk + ∆alk, let the optimal basis B, the solution xB and Zj be represented by B*, xB* and Zj* respectively.

Let B = (b1, b2, ..., bm) and αk = bp.

∴ alk = blp and alk + ∆alk = blp + ∆blp.

0 0 ... 0 ... 0
 
0 0 ... 0 ... 0
 M 
M M M
∴ B* = B +  ∆blp
0 0 ... .. . 0 ← l - th row
 M 
M M M
0 ... 0 ... 0 ... 0
 ↑
p− th column

= B + ∆blp.0 lp, where 0 lp is the null matrix except for the (l, p)-th element which equals to
unity.

= B (I + B −1 ∆blp.0 lp)

−1
∴ B* = {B (I + B −1 ∆blp.0 lp)}−1 = (I + B −1 ∆blp.0 lp)−1. B −1
...(3)
−1
[Q ( AB) = B −1 A −1]

If B −1 = (β1, β2 ,..., β m), then


268

0 0 ... 0 ... 0 
0 0 ... 0 ... 0 
 
−1 −1  M M M M
I + B ∆blp.0 lp = I + B  
0 0 ... ∆blp ... 0 
M M M M
 
0 0 ... 0 ... 0 

β11 β12 ... β1l ... β1m  0 0 ... 0... 0 


β ... β2 m  0 0
 21
β22 ... β2 l
  ... 0... 0 

 M M M M  M M M M
= I + → β β p2 ... β pl

... β pm  × 0 0 
... ∆blp ... 0  ← l-th row
p− th row  p1 
 M M M M  M M M M
 
β m1 β m2 ... β ml ... β mm  0 0 ... 0

.. . 0 
 ↑   ↑ 
l − th column p− th column

1 0 ... 0 ... 0 0 0 ... β1l ∆blp ... 0 


0  0 0 ... β2 l ∆blp ... 0 
1 ... 0 ... 0 
 
M M M M M M M M
= → 0  + 0 
0 ... β pl ∆blp ... 0  ←
p− th row 
0 ... 1 ... 0  p− th row
M M M M M M M M
   
0 0 ... 0 ... 1 0 0 ... β ml ∆blp ... 0 
 ↑   ↑ 
p − th column p− th column

1 0 ... β1l ∆blp ... 0 


0 1 ... β2 l ∆blp ... 0 

M M M M
= 0 
0 ... 1 + β pl ∆blp ... 0  ←
p− th row
M M M M
 
0 0 ... β ml ∆blp ... 1
 ↑ 
p− th column

 −β1l ∆blp 
1 0 ... ... 0 
 D 
 
 −β2 l ∆blp 
0 1 ... 0
−1 −1  M
D 
∴ (I + B ∆blp.0 lp) =  M M ... M  ...(4)
 1 
0 0 ...
D
0
M M M M
 
 −β ml ∆blp 
0 0 ...
D
... 1
 
269

 −1 Adj C 
where D = 1 + β lp ∆blp. Q C = 
 | C| 

1. To find the range of variation of alk (= blp) for the feasibility of the solution :

The new solution is given by

xB* = B*⁻¹ b = (I + B⁻¹ ∆blp 0lp)⁻¹ B⁻¹ b   [from (3)]

= (I + B⁻¹ ∆blp 0lp)⁻¹ xB   [since xB = B⁻¹ b]

Substituting from (4),

$$x_B^*=\left[\,x_{B1}-\frac{\beta_{1l}\,\Delta b_{lp}}{D}x_{Bp},\ \ x_{B2}-\frac{\beta_{2l}\,\Delta b_{lp}}{D}x_{Bp},\ \ldots,\ \frac{1}{D}x_{Bp},\ \ldots,\ x_{Bm}-\frac{\beta_{ml}\,\Delta b_{lp}}{D}x_{Bp}\,\right]   ...(5)$$

(the p-th component being xBp / D).

The solution xB* is feasible if xB* ≥ 0.

∴ From (5), the solution xB* is feasible if

$$x_{Bi}-\frac{\beta_{il}\,\Delta b_{lp}}{D}\,x_{Bp}\ge 0,\ \text{for all } i=1,2,...,m\ (i\ne p)   ...(6)$$

$$\text{and}\quad \frac{1}{D}\,x_{Bp}\ge 0.   ...(7)$$

Since xBp ≥ 0,

∴ (7) will hold if we have

D = 1 + βpl ∆blp > 0   ...(8)

∴ From (6) (assuming that (8) is true), we have

xBi D − βil ∆blp xBp ≥ 0, or xBi (1 + βpl ∆blp) − βil ∆blp xBp ≥ 0

or xBi ≥ (βil xBp − βpl xBi) ∆blp for all i ≠ p

∴ ∆blp ≤ xBi/(βil xBp − βpl xBi), for βil xBp − βpl xBi > 0

and ∆blp ≥ xBi/(βil xBp − βpl xBi), for βil xBp − βpl xBi < 0.

Hence, the range of ∆alk (= ∆blp), alk ∈ B, so that the solution remains feasible, is given by

$$\max_{P_i<0}\left\{\frac{x_{Bi}}{P_i}\right\}\le \Delta a_{lk}\le \min_{P_i>0}\left\{\frac{x_{Bi}}{P_i}\right\}   ...(9)$$

where D = 1 + βpl ∆blp = 1 + βpl ∆alk > 0,

Pi = βil xBp − βpl xBi.

If no Pi < 0, there is no lower bound to ∆alk,

and if no Pi > 0, there is no upper bound to ∆alk.

2. To find the range of variation of alk for the optimality of the solution :

For optimality of the solution xB*, we must have

∆j* = cj − Zj* ≤ 0 for all j not in the basis.

Now Zj* = cB B*⁻¹ αj = cB (I + B⁻¹ ∆blp 0lp)⁻¹ B⁻¹ αj = cB (I + B⁻¹ ∆blp 0lp)⁻¹ Yj,

since B⁻¹ αj = Yj = [y1j, y2j, ..., ypj, ..., ymj].

Using (4) and simplifying as before in (1), we have

$$Z_j^*=c_B\left[\,y_{1j}-\frac{\beta_{1l}\,\Delta b_{lp}}{D}y_{pj},\ \ y_{2j}-\frac{\beta_{2l}\,\Delta b_{lp}}{D}y_{pj},\ \ldots,\ \frac{1}{D}y_{pj},\ \ldots,\ y_{mj}-\frac{\beta_{ml}\,\Delta b_{lp}}{D}y_{pj}\,\right]$$

(the p-th component being ypj / D)

$$=\sum_{\substack{i=1\\ i\ne p}}^{m} c_{Bi}\left(y_{ij}-\frac{\beta_{il}\,\Delta b_{lp}}{D}\,y_{pj}\right)+\frac{c_{Bp}\,y_{pj}}{D}$$

$$=\sum_{i=1}^{m} c_{Bi}\left(y_{ij}-\frac{\beta_{il}\,\Delta b_{lp}}{D}\,y_{pj}\right)-c_{Bp}\left(y_{pj}-\frac{\beta_{pl}\,\Delta b_{lp}}{D}\,y_{pj}\right)+\frac{c_{Bp}\,y_{pj}}{D}$$

$$=\sum_{i=1}^{m} c_{Bi}\,y_{ij}-\frac{1}{D}\sum_{i=1}^{m} c_{Bi}\,\beta_{il}\,\Delta b_{lp}\,y_{pj}-\frac{1}{D}\left[c_{Bp}\,(D-\beta_{pl}\,\Delta b_{lp})\,y_{pj}-c_{Bp}\,y_{pj}\right]$$

$$=Z_j-\frac{1}{D}\sum_{i=1}^{m} c_{Bi}\,\beta_{il}\,\Delta b_{lp}\,y_{pj},$$

since Σ cBi yij = Zj and D − βpl ∆blp = 1.

∴ For optimality of the solution xB*,

$$\Delta_j^*=c_j-Z_j^*=c_j-Z_j+\frac{1}{D}\sum_{i=1}^{m} c_{Bi}\,\beta_{il}\,\Delta b_{lp}\,y_{pj}\le 0\quad\text{for all } j\text{ not in the basis}$$

or, assuming that D = 1 + βpl ∆blp > 0,

$$D\,(c_j-Z_j)+\sum_{i=1}^{m} c_{Bi}\,\beta_{il}\,\Delta b_{lp}\,y_{pj}\le 0$$

$$\text{or}\quad (1+\beta_{pl}\,\Delta b_{lp})(c_j-Z_j)+\sum_{i=1}^{m} c_{Bi}\,\beta_{il}\,\Delta b_{lp}\,y_{pj}\le 0$$

$$\text{or}\quad \left[\beta_{pl}\,(c_j-Z_j)+\left(\sum_{i=1}^{m} c_{Bi}\,\beta_{il}\right)y_{pj}\right]\Delta b_{lp}\le -(c_j-Z_j)$$

or Qj ∆blp ≤ −∆j,

$$\text{where}\quad Q_j=\beta_{pl}\,(c_j-Z_j)+\left(\sum_{i=1}^{m} c_{Bi}\,\beta_{il}\right)y_{pj}=\beta_{pl}\,\Delta_j+\left(\sum_{i=1}^{m} c_{Bi}\,\beta_{il}\right)y_{pj}$$

∴ ∆blp ≤ −∆j/Qj for Qj > 0, and ∆blp ≥ −∆j/Qj for Qj < 0.

Hence, the range of ∆alk (= ∆blp) so that the solution remains optimal is given by

$$\max_{Q_j<0}\left\{-\frac{\Delta_j}{Q_j}\right\}\le \Delta a_{lk}\le \min_{Q_j>0}\left\{-\frac{\Delta_j}{Q_j}\right\}   ...(10)$$

for all j not in the basis.

If no Qj < 0, there is no lower bound to ∆alk, and if no Qj > 0, there is no upper bound to ∆alk.

Hence, a change ∆alk in alk (an element of the basis matrix B) keeps the solution feasible and optimal provided (8), (9) and (10) are all satisfied.

Note : It is important to note that alk, the (l, k)-th element of the coefficient matrix A, is the (l, p)-th element blp of the basis matrix B.
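For Case II, conditions (9) and (10) can be evaluated mechanically from the final tableau. A sketch of that bookkeeping follows (my own, in Python; all inputs are tableau quantities, and condition (8) must still be verified separately):

```python
import numpy as np

def basic_alk_range(Binv, xB, Ys, deltas, cB, l, p):
    """Intersect the feasibility range (9) with the optimality range (10).

    Binv : B^-1; xB : basic solution; cB : basic prices (array-like)
    Ys   : {j: Yj} and deltas : {j: Delta_j} for the non-basic columns j
    l, p : alk lies in row l of A and is the (l, p)-th element of B
           (zero-based indices here)."""
    Binv, xB, cB = np.asarray(Binv), np.asarray(xB), np.asarray(cB)
    beta_l = Binv[:, l]
    # Feasibility (9): Pi = beta_il * xBp - beta_pl * xBi for i != p.
    P = beta_l * xB[p] - beta_l[p] * xB
    lo = max((xB[i] / P[i] for i in range(len(xB)) if i != p and P[i] < 0),
             default=-np.inf)
    hi = min((xB[i] / P[i] for i in range(len(xB)) if i != p and P[i] > 0),
             default=np.inf)
    # Optimality (10): Qj = beta_pl * Delta_j + (cB . beta_l) * y_pj.
    s = float(cB @ beta_l)
    for j, Yj in Ys.items():
        Q = beta_l[p] * deltas[j] + s * np.asarray(Yj)[p]
        if Q > 0:
            hi = min(hi, -deltas[j] / Q)
        elif Q < 0:
            lo = max(lo, -deltas[j] / Q)
    return lo, hi   # condition (8), D = 1 + beta_pl * Delta > 0, checked separately

# Example 6 below (a11: l = 0, p = 1 in zero-based indexing):
Binv = [[5/9, -5/27, -1/9], [0, 1/3, 0], [-1/9, 1/27, 2/9]]
Ys = {3: [-22/27, 2/3, 26/27], 5: [5/9, 0, -1/9],
      6: [-5/27, 1/3, 1/27], 7: [-1/9, 0, 2/9]}
deltas = {3: -82/27, 5: -10/9, 6: -80/27, 7: -7/9}
print(basic_alk_range(Binv, [56/27, 5/3, 5/27], Ys, deltas, [3, 10, 5], 0, 1))
# expected: (-1.0, 2.24), i.e. -1 <= Delta_a11 <= 56/25
```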

Example 6: Solve the following L.P.P.


Max. Z = 10 x1 + 3 x2 + 6 x3 + 5 x4

subject to x1 + 2 x2 + x4 ≤ 6

3 x1 + 2 x3 ≤ 5

x2 + 4 x3 + 5 x4 ≤ 3

and x1, x2 , x3 , x4 ≥ 0

Compute the limits for a11 and a23 so that the new solution remains optimal feasible
solution.

Solution: The given L.P.P in standard form can be written as follows :

Max. Z = 10 x1 + 3 x2 + 6 x3 + 5 x4 + 0. x5 + 0. x6 + 0. x7

subject to x1 + 2 x2 + 0. x3 + x4 + x5 = 6

3 x1 + 0. x2 + 2 x3 + 0. x4 + x6 = 5

0. x1 + 1x2 + 4. x3 + 5. x4 + x7 = 3

and x1, x2 ,..., x7 ≥ 0

Taking x1 = 0 = x2 = x3 = x4 we get x5 = 6, x6 = 5, x7 = 3

which is the starting B.F.S.

The solution of the problem by simplex method is given in the following table.

B c B (C B ) cj 10 3 6 5 0 0 0 Min. Ratio

x B( X B ) Y1 Y2 Y3 Y4 Y5 Y6 Y7 X B Y1, xi1 > 0

Y5 0 6 1 2 0 1 1 0 0 6 1=6
Y6 0 5 3 0 2 0 0 1 0 5 3 (min)

Y7 0 3 0 1 4 5 0 0 1

Z = c B. x B ∆j 10 3 6 5 0 0 0 X B Y4 , yi4 > 0
=0 ↑ ↓

Y5 0 13 3 0 2 −2 3 1 1 −1 3 0 13 / 3
Y1 10 5 3 1 0 2 3 0 0 13 0 
Y7 0 3 0 1 4 5 0 0 1 3 5 (min)

Z = c B. x B ∆j 0 3 −2 3 5 0 −10 3 0 X B Y2 , yi2 > 0


= 50 3 ↑ ↓

Y5 0 56 15 0 9 5 −22 15 0 1 −1 3 −1 5 56 27 (min)

Y1 10 53 1 0 2 3 0 0 13 0

Y4 5 35 0 15 4 5 1 0 0 15
3

Z = c B. x B ∆j 0 2 −14 3 0 0 −10 3 −1
= 59 3 ↑ ↓

Y2 3 56 27 0 1 −22 27 0 5 9 −5 27 −1 9
Y1 10 5 3 1 0 23 0 0 13 0
Y4 5 5 27 0 0 26 27 1 −1 9 1 27 2 9

Z = c B. x B ∆j 0 0 −82 27 0 −10 9 −80 27 −7 9


= 643 27
274

In the last table all ∆j ≤ 0, therefore this solution is optimal.

∴ The optimal solution of the given problem is given by

x1 = 5/3, x2 = 56/27, x3 = 0, x4 = 5/27 and Z = 643/27.

∴ B = (Y2, Y1, Y4) = (b1, b2, b3), cB = (cB1, cB2, cB3) = (3, 10, 5),

xB = [xB1, xB2, xB3] = [56/27, 5/3, 5/27].

Since the initial (starting) basis matrix was (Y5, Y6, Y7), from the above table (columns α5, α6, α7)

$$B^{-1}=(\beta_1,\ \beta_2,\ \beta_3)=\begin{bmatrix} 5/9 & -5/27 & -1/9\\ 0 & 1/3 & 0\\ -1/9 & 1/27 & 2/9 \end{bmatrix}$$

i.e., β11 = 5/9, β21 = 0, β31 = −1/9;  β12 = −5/27, β22 = 1/3, β32 = 1/27;  β13 = −1/9, β23 = 0, β33 = 2/9.

To find the limits of variation of a11 for the feasibility of the solution :

From (9) of article 7.4, the range of variation of alk = a11 for the feasibility of the solution is given by

$$\max_{P_i<0}\left\{\frac{x_{Bi}}{P_i}\right\}\le \Delta a_{lk}\le \min_{P_i>0}\left\{\frac{x_{Bi}}{P_i}\right\}   ...(1)$$

where D = 1 + βpl ∆blp > 0 and Pi = βil xBp − βpl xBi.

Here alk = a11 ∈ B. Since a11 ∈ α1 (= Y1) = b2,

∴ alk = a11 = b12 = blp, i.e., l = 1, k = 1, p = 2,

and i = 1, 2, 3 (since there are only three rows in A);

xB1 = 56/27, xB2 = 5/3, xB3 = 5/27.

∴ P1 = β11 xB2 − β21 xB1 = (5/9)(5/3) − 0 = 25/27 > 0

P2 = β21 xB2 − β21 xB2 = 0·(5/3) − 0 = 0

P3 = β31 xB2 − β21 xB3 = (−1/9)(5/3) − 0 = −5/27 < 0

∴ From (1), we have

Max. {xB3/P3} ≤ ∆a11 ≤ Min. {xB1/P1}

or (5/27)/(−5/27) ≤ ∆a11 ≤ (56/27)/(25/27)

or −1 ≤ ∆a11 ≤ 56/25   ...(A)

Also D = 1 + βpl ∆alk = 1 > 0 holds, since βpl = β21 = 0.

Again from (10), article 7.4, the range of variation of alk = a11 for the optimality of the solution is given by

$$\max_{Q_j<0}\left\{-\frac{\Delta_j}{Q_j}\right\}\le \Delta a_{lk}\le \min_{Q_j>0}\left\{-\frac{\Delta_j}{Q_j}\right\}   ...(2)$$

$$\text{where}\quad Q_j=\beta_{pl}\,\Delta_j+\left(\sum_{i=1}^{m} c_{Bi}\,\beta_{il}\right)y_{pj}\quad\text{for all } j\text{ not in the basis.}$$

Here Qj = β21 ∆j + (Σ(i = 1 to 3) cBi βi1) y2j

= β21 ∆j + (cB1 β11 + cB2 β21 + cB3 β31) y2j

= 0 + (3·(5/9) + 10·0 − 5·(1/9)) y2j = (10/9) y2j

for all j = 3, 5, 6, 7 not in the basis.

Q3 = (10/9) y23 = (10/9)(2/3) = 20/27 > 0

Q5 = (10/9) y25 = (10/9)·0 = 0

Q6 = (10/9) y26 = (10/9)(1/3) = 10/27 > 0

Q7 = (10/9) y27 = (10/9)·0 = 0

Since no Qj < 0, there is no lower bound to ∆alk (= ∆a11).

∴ From (2), we have

−∞ < ∆a11 ≤ Min. {−∆3/Q3, −∆6/Q6}

or −∞ < ∆a11 ≤ Min. {(82/27)/(20/27), (80/27)/(10/27)}

or −∞ < ∆a11 ≤ 41/10   ...(B)

Since the range for ∆a11 so that the solution remains optimal and feasible is given by (A) and (B) both,

∴ both (A) and (B) are satisfied if −1 ≤ ∆a11 ≤ 56/25.

Since a11 = 1, the limits of variation of a11 are

−1 + 1 ≤ a11 ≤ 56/25 + 1, or 0 ≤ a11 ≤ 81/25.

To find limits of variation of a23 : Here a23 ∈α 3 (Y3 ) which is not in B. From (2),
article 7.4 the range of ∆alk (change in alk ∉ B) so that the solution remains optimal and
feasible is given by
∆k ∆k
≤ ∆alk ≤
 m   m 
   
∑
i = 1
cBi β il  > 0

∑
i = 1
cBi β il  < 0

   

Here alk = a23 = 2 ∴ l = 2, k = 3

∆k = ∆3 = −82/27, m = 3 (number of constraints)

∑_{i=1}^{m} cBi βil = ∑_{i=1}^{3} cBi βi2 = cB1 β12 + cB2 β22 + cB3 β32

= 3·(−5/27) + 10·(1/3) + 5·(1/27) = 80/27 > 0

∴ There is no upper bound to ∆a23 (the sum is positive, not negative)

∴ from (3), we have

∆3 / (∑_{i=1}^{3} cBi βi2) ≤ ∆a23 < ∞

or (−82/27)/(80/27) ≤ ∆a23 < ∞ or −41/40 ≤ ∆a23 < ∞

Since a23 = 2,

∴ the limits of variation of a23 are

−41/40 + 2 ≤ a23 < ∞ or 39/40 ≤ a23 < ∞
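Note : The one-sided bound of rule (3) can be checked the same way; this short sketch (again with our own names) reproduces −41/40 :

import numpy as np

# Sketch: bound on Delta a23 for the non-basic column alpha_3.
c_B = np.array([3.0, 10.0, 5.0])
B_inv = np.array([[5/9, -5/27, -1/9],
                  [0.0,  1/3,   0.0],
                  [-1/9, 1/27,  2/9]])
denom = c_B @ B_inv[:, 1]        # sum of c_Bi beta_i2 = 80/27 > 0
print((-82/27) / denom)          # -1.025 = -41/40, the lower bound on Delta a23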



7.5 Addition of a New Variable to the Problem


If a new variable is introduced in a L.P.P. whose optimal solution has been obtained, then
the solution of the problem will remain feasible. Addition of an extra variable x n + 1 to the
problem will introduce an extra column say α n + 1 to the coefficient matrix A and an extra
cost cn + 1 will be introduced in the price vector c. Thus, the addition of this extra variable

may affect the optimality of the problem.

For the same basis B the solution x B of the original problem will remain optimal
(maxima) if

cn + 1 − Z n + 1 ≤ 0

or cn+1 − cB B−1 αn+1 ≤ 0        ...(1)

i.e., if (1) is satisfied then the new variable becomes just like a non-basic variable having
zero value.

If cn + 1 − Z n + 1 > 0, then the solution x B is no more optimal for the new problem and can
be improved by introducing α n + 1 in the basis. Here, we can start with the last simplex
table giving the optimal feasible solution of the original problem by introducing one
more column corresponding to the variable x n + 1.
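In code this optimality test is a single comparison. The helper below is a minimal sketch (the function name is ours); it evaluates the sign of cn+1 − cB B−1 αn+1 :

import numpy as np

# Sketch: does the current optimum survive the addition of a new variable?
def stays_optimal(c_B, B_inv, alpha_new, c_new):
    reduced_cost = c_new - c_B @ B_inv @ alpha_new
    return reduced_cost <= 0     # True => the new variable stays non-basic

# Data of Example 7 below: c_B = (0, 5), B^(-1) = [[1, 0], [0, 1/2]], alpha_5 = (1, 2)
print(stays_optimal(np.array([0.0, 5.0]),
                    np.array([[1.0, 0.0], [0.0, 0.5]]),
                    np.array([1.0, 2.0]), 7.0))   # False, since Delta_5 = 2 > 0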

Example 7: Solve the following L.P.P.

Max. Z = 3 x1 + 5 x2

subject to x1 + x3 = 4

3 x1 + 2 x2 + x4 = 18

and x1, x2 , x3 , x4 ≥ 0

If a new variable x5 is introduced in the above L.P.P. with price 7, then we have the
following problem :
Max. Z′ = 3 x1 + 5 x2 + 7 x5

subject to x1 + x3 + x5 = 4

3 x1 + 2 x2 + x4 + 2 x5 = 18

and x1, x2 , x3 , x4 , x5 ≥ 0

Find the solution of the new L.P.P.



Solution: First part : Taking x1 = 0, x2 = 0, we get x3 = 4, x4 = 18 which is the starting


B.F.S.

(Here, the columns of the coefficients of x3 , x4 form a unit matrix, therefore x3 and x4
may be taken as the basic variables).

Proceeding as usual the successive simplex tables for the original L.P.P. are as follows :

B     cB    xB      Y1     Y2     Y3     Y4     Min. Ratio xB/Y2, yi2 > 0

Y3    0     4       1      0      1      0      —
Y4    0     18      3      2      0      1      18/2 = 9 (min) →

Z = cB·xB = 0       ∆j     3      5      0      0           ↑ (Y2 enters)

Y3    0     4       1      0      1      0
Y2    5     9       3/2    1      0      1/2

Z = cB·xB = 45      ∆j     −9/2   0      0      −5/2

In the last table all ∆ j ≤ 0, therefore this solution is optimal.

∴ Optimal solution of the given L.P.P. is

x1 = 0, x2 = 9, x3 = 4, x4 = 0,

Max. Z = 45

From the final table the solution of the dual of the given L.P.P. is

w1 = 0, w2 = 5/2

Min. Z D = 45

Revised L.P.P. : The dual of the new L.P.P. is the same as that of the original L.P.P. with one more constraint w1 + 2w2 ≥ 7, which corresponds to the new variable x5.

The optimal solution of the dual of the original L.P.P. is w1 = 0, w2 = 5/2, which does not
satisfy this constraint. Thus, we see that the optimal solution of the dual of the given
L.P.P. is not optimal solution of the dual of the revised L.P.P. Hence, the optimal
solution of the given L.P.P. is not optimal for the revised problem and can be improved
by introducing α 5 = [1, 2], (column of A corresponding to the new variable introduced) in
the basis.

Thus, consider one more column Y5 in the above table.



1 0 
Since B −1 =  
0 1 2 

1 0  1 1
∴ Y5 = B −1α 5 =  .   =  
0 1 2  2  1

∆5 = c5 − c BY5 = 7 − (0, 5),(11


, ) =2

We get the following simplex table for revised L.P.P.

B     cB    xB      Y1     Y2     Y3     Y4     Y5     Min. Ratio xB/Y5, yi5 > 0

Y3    0     4       1      0      1      0      1      4/1 (min) →
Y2    5     9       3/2    1      0      1/2    1      9/1

Z′ = cB·xB = 45     ∆j     −9/2   0      0      −5/2   2           ↑ (Y5 enters)

Y5    7     4       1      0      1      0      1
Y2    5     5       1/2    1      −1     1/2    0

Z′ = cB·xB = 53     ∆j     −13/2  0      −2     −5/2   0

In the last table all ∆ j ≤ 0, therefore this solution is optimal.

∴ Optimal solution of revised L.P.P. is

x1 = 0, x2 = 5, x3 = 0, x4 = 0, x5 = 4 and Max. Z ′ = 53
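This optimum can also be confirmed with a standard solver. A sketch using scipy.optimize.linprog (which minimizes, so the objective of the revised problem is negated) :

from scipy.optimize import linprog

# Sketch: cross-check the revised L.P.P.  Max. Z' = 3x1 + 5x2 + 7x5.
res = linprog(c=[-3, -5, 0, 0, -7],
              A_eq=[[1, 0, 1, 0, 1],     # x1 + x3 + x5 = 4
                    [3, 2, 0, 1, 2]],    # 3x1 + 2x2 + x4 + 2x5 = 18
              b_eq=[4, 18], bounds=[(0, None)] * 5)
print(res.x, -res.fun)                   # x2 = 5, x5 = 4, Max. Z' = 53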

7.6 Addition of a New Constraint to the Problem


Let a new constraint be introduced to L.P.P. whose optimal solution has been obtained.
Here, we assume that the additional constraint does not introduce any new variable with
non-zero-price.

Let Z* be the optimal (maximal) value of the objective function for the new L.P.P. while it was Z for the original problem. Suppose, if possible, Z* > Z. Since the new optimal solution satisfies the first m constraints as well as the additional new constraint, it is also a feasible solution of the original problem, giving the original objective function a value Z* greater than Z; this contradicts the fact that Z is the optimal value of the original L.P.P. Hence Z* cannot be more than Z.

∴ We conclude that Z* ≤ Z (for maximization).

Thus, we have the following two cases :



Case I : If the optimal solution of the original L.P.P. satisfies the new constraint, it is also
an optimal solution of the new L.P.P. In this case the additional constraint is redundant.

Case II : If the optimal solution of the original L.P.P. does not satisfy the new constraint,
a new optimal solution of the new L.P. problem must be obtained as follows :

To find the new Optimal solution of the new enlarged problem.


Let B and B1 be the optimal bases of the original and the enlarged L.P. problems respectively. Clearly B1 is a square matrix of order (m + 1) if B is a square matrix of order m.

∴ We can write

B1 = [ B    0 ]
     [ α   ±1 ]        ...(1)

The last column of B1 corresponds to the slack, surplus or artificial vector associated with
the additional new constraint and α is the row vector of the coefficients, in the new
constraint, of the variables which correspond to the vectors in the optimal basis B.

Since B−1 exists and is known, the inverse of B1 is given by

B1−1 = [ B−1         0 ]
       [ ∓ α B−1    ±1 ]        ...(2)

Let am + 1, j be the coefficient of x j in the (m + 1)th constraint and α *j the column vector of

the coefficients of x j in the enlarged problem. Also if Y j* and Z *j are Y j and Z j for the new

problem, then we have


Yj* = B1−1 αj* = [ B−1         0 ] [ αj     ]
                 [ ∓ α B−1    ±1 ] [ am+1,j ]

   = [ Yj                    ]
     [ ∓ α B−1 αj ± am+1,j   ]

or Yj* = [ Yj               ]
         [ ∓ α Yj ± am+1,j  ]

Now Zj* = cB1 Yj* = (cB, cB,m+1) [ Yj               ]
                                 [ ∓ α Yj ± am+1,j  ]

or Zj* = cB Yj + cB,m+1 · (∓ α Yj ± am+1,j)        ...(3)
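Formula (2) is straightforward to exercise numerically. The sketch below (the helper name is ours) assembles B1−1 from B−1 for a new ≤ constraint, where the added slack column makes the corner entry +1 :

import numpy as np

# Sketch of (2): enlarge B^(-1) by one row and column for a new <= constraint.
def enlarged_basis_inverse(B_inv, alpha):
    m = B_inv.shape[0]
    top = np.hstack([B_inv, np.zeros((m, 1))])
    bottom = np.hstack([-alpha @ B_inv, [1.0]])   # (-alpha B^(-1) | 1)
    return np.vstack([top, bottom])

B_inv = np.array([[1.0, 0.0], [0.0, 0.5]])
print(enlarged_basis_inverse(B_inv, np.array([2.0, 3.0])))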

1. If slack or surplus variable is introduced in the additional constraint.

In this case cB, m + 1 = 0



∴ from (3), Zj* = cB Yj = Zj

∴ cj − Zj* = cj − Zj

i.e., cj − Zj* remains unchanged in this case. Since the optimal solution of the original problem does not satisfy the new constraint, the slack or surplus variable introduced in the new constraint is negative. Hence, we can apply the dual simplex algorithm to find an optimal feasible solution of the new problem.
2. If the artificial variable is introduced in the additional constraint i.e., if the
additional constraint is a perfect equality. In this case the additional vector is an
artificial vector. Now there are two possibilities :
(i) If the artificial variable in the basic solution is negative, then assigning a price
to the artificial variable we can use the dual simplex algorithm for the removal
of the artificial variable from the basis.
(ii) If the artificial variable in the basic solution is positive, then assigning a price
− M to the artificial variable we can use the standard simplex method for the
removal of the artificial variable from the basis. It is important to note that in
this case c j − Z j will be changed.
Note : If the addition of new constraint alters the nature of the problem, then the new
problem must be solved as a fresh problem.

Example 8: Consider the following table which presents an optimal solution to some
linear programming problem.

            cj      2      4      1      3      2      0       0       0
B     cB    xB      Y1     Y2     Y3     Y4     Y5     Y6      Y7      Y8

Y1    2     3       1      0      0      −1     0      0.5     0.2     −1
Y2    4     1       0      1      0      2      1      −1      0       0.5
Y3    1     7       0      0      1      −1     −2     5       −0.3    2

Z = cB xB = 17      ∆j     0      0      0      −2     0      −2      −0.1    −2

If the additional constraint 2 x1 + 3 x2 − x3 + 2 x4 − 4 x5 ≤ 5 were annexed to the system,


would there be any change in the optimal solution ? Justify your answer.

Solution: From the table the optimal solution of the given L.P. problem is
x1 = 3, x2 = 1, x3 = 7, x4 = 0 = x5 = x6 = x7 = x8

which also satisfies the new additional constraint

2 x1 + 3 x2 − x3 + 2 x4 − 4 x5 ≤ 5

Thus, the optimal solution of the given problem will not be changed if we introduce the
above constraint to it.

Hence, the additional constraint is redundant and the optimal sol. of the given L.P.P. is
also the optimal solution of the new L.P.P.
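Case I of article 7.6 can always be verified mechanically by substituting the current optimum into the new constraint, as this short sketch shows :

import numpy as np

# Sketch: the optimum of Example 8 satisfies the annexed constraint,
# so the constraint is redundant.
x_opt = np.array([3, 1, 7, 0, 0])                 # x1, ..., x5 from the table
print(np.array([2, 3, -1, 2, -4]) @ x_opt <= 5)   # True: 2 <= 5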

1. Find an optimal solution to the following L.P.P.

Max. Z = 15 x1 + 45 x2

subject to x2 ≤ 50

x1 + 1.6 x2 ≤ 240

0.5 x1 + 2.0 x2 ≤ 162

and x1, x2 ≥ 0.

If Max. Z = ∑ ci xi (i = 1, 2) and c2 is kept fixed at 45, find how much c1 can be


changed without affecting the above optimal solution.
2. Find an optimal solution to the following L.P.P.

Max. Z = 3 x1 + 5 x2

subject to x1 ≤ 4

x2 ≤ 6

3 x1 + 2 x2 ≤ 18

and x1, x2 ≥ 0,

what happens to this optimal solution if the objective function is changed to

Z * = 3 x1 + x2 .

3. Find an optimal solution to the following L.P.P.


Max. Z = 15 x1 + 45 x2

subject to x1 + 1.6 x2 ≤ 240

0.5 x1 + 2 x2 ≤ 162

x2 ≤ 50, x1, x2 ≥ 0
If Max. Z = ∑ cj xj, j = 1, 2, and c2 is fixed at 45, determine how much c1 can be
changed without affecting the above solution.

4. Solve the following L.P.P. and find how much can c1 be changed without affecting
the optimal solution.

Max. Z = 15 x1 + 45 x2 = c1 x1 + c2 x2

subject to x1 + 1.6 x2 ≤ 240

0.5 x1 + 2 x2 + x3 = 162

x2 + x4 = 50

and x1, x2 , x3 , x4 ≥ 0
5. Solve the L.P.P.

Max. Z = 3 x1 + 4 x2 + x3 + 7 x4

subject to 8 x1 + 3 x2 + 4 x3 + x4 ≤ 7

2 x1 + 6 x2 + x3 + 5 x4 ≤ 3

x1 + 4 x2 + 5 x3 + 2 x4 ≤ 8, and x1, x2 , x3 , x4 ≥ 0
Discuss the effect of discrete changes in c on the optimality of an optimum basic
feasible solution to the above L.P.P.
6. For the L.P.P. given in Example 4, determine the effect of discrete changes in
cj (j = 1, 2, 3) on the optimal solution.
7. For the L.P.P. given in Example 3, find the limits of variation of b2 without affecting the
optimality of the solution. [Meerut 2006]

8. Discuss the effect of discrete changes in the requirement (on the right hand sides of
the inequalities) for the L.P.P. given in Q.5.
9. Solve the problem.

(i) Max. Z = 5 x1 + 12 x2 + 4 x3

subject to, x1 + 2 x2 + x3 ≤ 5

2 x1 − x2 + 3 x3 = 2

and x1, x2 , x3 ≥ 0

Also
(ii) Discuss the effect of changing the requirement vector from [5, 2]′ to [7, 2]′ on the
optimum solution.
(iii) Discuss the effect of changing the requirement vector from [5, 2]′ to [3, 9]′ on the
optimum solution.
10. Consider the following optimum simplex table for a maximization problem (with all
constraints of ≤ type), where x4 is slack and a1 is an artificial variable. Let a new
variable x5 ≥ 0 be introduced in the problem with a cost 30 assigned to it in the

objective function. Also suppose that the coefficients of x5 in the two constraints
are 5 and 7 respectively.

B     cB    xB      Y1     Y2     Y3      Y4       A1

Y2    12    8/5     0      1      −1/5    2/5      −1/5
Y1    5     9/5     1      0      7/5     1/5      2/5

Z = 141/5           ∆j     0      0      −3/5     −29/5    −M + 2/5

Discuss the effect of this new variable on the optimality of the given problem.
11. The following table gives the optimal solution to a L.P.P.

Max. Z = 3 x1 + 5 x2 + 4 x3 ,

subject to 2 x1 + 3 x2 ≤ 8

2 x2 + 5 x3 ≤ 10,

3 x1 + 2 x2 + 4 x3 ≤ 15

x1, x2 , x3 ≥ 0
For the above L.P.P. calculate the following :
(i) How much c3 and c4 can be increased before the present basic solution will no
longer be optimal ? Also, find the change in the value of the objective function
if possible.
(ii) How much b2 can be changed maintaining the feasibility of the solution ?
(iii) Find the limits for the changes in a14 and a24 so that the new solution remains
optimal feasible solution.

B     cB    xB        Y1     Y2     Y3     Y4       Y5        Y6

Y2    5     50/41     0      1      0      15/41    8/41      −10/41
Y3    4     62/41     0      0      1      −6/41    5/41      4/41
Y1    3     89/41     1      0      0      −2/41    −12/41    15/41

Z = cB xB = 765/41    ∆j     0      0      0      −45/41    −24/41    −11/41

12. Consider the following table which presents an optimal solution to some L.P.P.

B cB cj 2 3 1 0 0

xB Y1 Y2 Y3 Y4 Y5
Y1 2 1 1 0 0.5 4 −0.5
Y2 3 2 0 1 1 −1 2

Z =8 ∆j 0 0 −3 −5 −5

For the above problem, assuming that Y4 and Y5 were, in that order, in the initial
identity matrix basis, calculate the following :
(i) How much can b1 and b2 be increased without affecting the optimality and
feasibility of the solution ?
(ii) How much c3 can be increased before the present basic solution will no longer
be optimal ?
13. Discuss the effect of adding a new non-negative variable x8 in the Q.5 on the
optimality of its optimum solution. It is given that the coefficient of x8 in the
constraints of the problem are 2, 7 and 3 respectively, the cost component
associated with x8 being 5.
Also explain the situation when we have c8 = 10 instead of 5.
14. Consider the L.P.P.

Min. Z = x2 − 3 x3 + 2 x5

subject to 3 x2 − x3 + 2 x5 ≤ 7

−2 x2 + 4 x3 ≤ 12

−4 x2 − 3 x3 + 8 x5 ≤ 10

and x2 , x3 , x5 ≥ 0

The optimal table is given as follows :

B     cB    cj      0       −1     3      0        −2      0
            xB      Y1      Y2     Y3     Y4       Y5      Y6

Y2    −1    4       2/5     1      0      1/10     4/5     0
Y3    3     5       1/5     0      1      3/10     2/5     0
Y6    0     41      11/5    0      0      13/10    62/5    1

Z = cB xB = 11      ∆j      −1/5   0      0        −4/5    −12/5   0

(The table solves the equivalent problem Max. (−Z); hence Min. Z = −11.)

(i) Formulate the dual problem for this primal problem.


(ii) What are the optimal values of dual variables ?
(iii) How much can c5 be decreased before Y5 goes into the basis ?
(iv) How much can the 7 in the first constraint be increased before the basis would
change ?

Multiple Choice Questions


1. In sensitivity analysis we deal with changes in the optimal solutions due to discrete
variations in the parameters :
(a) ci (b) bi
(c) aij (d) All of these
2. In a L.P.P. if the component c j ∉c B of the price vector c changes then its optimal

basic feasible solution :


(a) Also changes (b) Remains unchanged
(c) Is reduced to trivial solution (d) None of these
3. To maintain optimality of current optimal solution for a change ∆ck in the
coefficient ck of non-basic variable x k , we have :
(a) ∆ck = Z k − ck (b) ∆ck > Z k − ck
(c) ∆ck ≤ Z k − ck (d) None of these
4. If alk changes to alk + ∆alk , where alk is not an element of optimal basis B, so that the
solution x B remains optimal and feasible, then the value of the objective function :
(a) Decreases (b) Increases
(c) Does not change (d) None of these
5. By the addition of a new variable with non-zero cost to the problem the optimal
solution :
(a) Changes (b) May change
(c) Does not change (d) None of these
6. By the addition of a new variable with non-zero cost to the L.P. problem, its existing
optimal solution can further be improved if :
(a) cj − Z j < 0 (b) cj − Z j > 0
(c) cj − Z j = 0 (d) None of these
7. Addition of a new constraint to the existing constraints of a L.P.P. will cause :
(a) A change in the objective function coefficients
(b) No change in the objective function coefficients
(c) A change in the existing optimal solution even if it satisfies the new constraint
(d) None of these.

Fill in the Blank


1. The range of ∆cBk, the change in cBk, the coefficient of the basic variable xBk, to
maintain the optimality of the current optimal solution is given by

Max. {(cj − Zj)/ykj : ykj > 0} ≤ ∆cBk ≤ Mini. {(.......)/ykj : ykj < 0}.

2. If ∆bl is the change in bl so that the optimal solution x * B also remains feasible, then
the value of the objective function is improved by the amount ............
3. If an additional constraint with ‘=’ sign is introduced in the L.P.P. and the artificial
variable used in this constraint appears in the basis with positive value, then for the
removal of this artificial variable from the basis we assign price − M to this variable
and use the ................. method.

True / False
1. If the coefficient ck of the non-basic variable xk changes to ck + ∆ck, to maintain the
optimality of the current optimal solution, the value of the objective function is
increased by ∆ck · xk.
2. If the coefficient cBk of the basic variable x k changes to cBk + ∆cBk to maintain the
optimality of the current optimal solution, the value of the objective function is
increased by x Bk . ∆cBk
3. The range of ∆bl, so that the optimal solution x*B also remains optimal, is given by

Max. {−xBi/βil : βil > 0} ≤ ∆bl ≤ Min. {−xBi/βil : βil < 0}

4. The addition of a new constraint to a L.P.P. will not change the existing optimal
solution if this optimal solution satisfy the new constraint.

Multiple Choice Questions


1. (d) 2. (b)

3. (c) 4. (c)

5. (b) 6. (b)

7. (a)

Fill in the Blank


1. cj − Zj            2. ∑_{i=1}^{m} cBi βil ∆bl

3. Standard Simplex

True / False
1. False 2. True

3. True 4. True
mmm
Unit-4

Chapter-8: Duality in Linear Programming

Chapter-9: Integer Programming



8.1 Introduction
The concept of duality was one of the most important discoveries in the early
development of linear programming. It explains that corresponding to every linear
programming problem there is another linear programming problem. The original
problem is known as ‘primal’ and the other, the related problem is known as the ‘dual’.
The two problems are replicas of each other. If the primal problem is of maximization
type, the dual will be of the minimization type. If the optimal solution to one problem is
known to us, we can easily find the optimal solution of the other. This fact is important
because at times it is easier to solve the dual than the primal.

To understand the concept of duality more clearly, consider the following diet problem :

The amount of two vitamins A and B per unit present in two different foods A1 and A2
respectively are given below :

Vitamin            Food A1      Food A2      Minimum daily requirement

A                  6            9            60
B                  4            13           108
Cost (per unit)    ₹ 12         ₹ 18

The objective of the diet problem is to ascertain the quantities of foods A1 and A2 that
should be eaten to meet the minimum daily requirements of vitamins A and B at a
minimum cost.

Let x1 and x2 be the number of units of foods A1 and A2 to be purchased respectively.


Then the problem is to find the values of x1 and x2 which minimize

Z x = 12 x1 + 18 x2

subject to the constraints

6 x1 + 9 x2 ≥ 60, 4 x1 + 13 x2 ≥ 108, x1, x2 ≥ 0

We shall consider this linear programming problem as the primal problem. Now we shall
formulate the dual corresponding to this primal.

Suppose a wholesale dealer sells two vitamins A and B. Customers purchase the two
vitamins from him in the form of two foods A1 and A2 (as given in the above table). The
foods A1 and A2 have their market value only because of their vitamin contents. Now the
dealer has to fix-up the maximum per unit selling prices for the two vitamins A and B in
such a way that the resulting prices of the foods A1 and A2 do not exceed their existing
market prices.

Suppose the dealer decides to fix-up the two prices y1 and y2 respectively.

Then the problem is to determine the values of y1 and y2 which maximize

Z y = 60 y1 + 108 y2

subject to 6 y1 + 4 y2 ≤ 12

9 y1 + 13 y2 ≤ 18

y1, y2 ≥ 0

Observing the above primal and the dual a little closely, we find that
1. Primal is a minimization problem while the dual is a maximization problem.
2. The constraint values 60, 108 of the primal have become the coefficients of the dual
variables y1 and y2 in the objective function of the dual in that order. The
coefficients for the variables x1 and x2 in the objective function of the primal have
become the constraint values in the dual.
3. The constraint coefficient matrix of the dual is the transpose of the constraint
coefficient matrix of the primal.
4. The direction of inequalities in the dual is the reverse of that in the primal.

We can represent the primal-dual relationship in the following form :



Primal                                          Dual

Minimize                                        Maximize
Zx = (12  18) [x1, x2]′ = c x                   Zy = (60  108) [y1, y2]′ = b′ y

subject to A x ≥ b :                            subject to A′ y ≤ c′ :

[6    9] [x1]    [ 60]                          [6    4] [y1]    [12]
[4   13] [x2] ≥  [108]                          [9   13] [y2] ≤  [18]

x1, x2 ≥ 0                                      y1, y2 ≥ 0

8.2 Standard or Canonical Form of a Primal


[Meerut 2007 (BP), 09 (BP), 12 (BP)]

While dealing with duality theory, the given L.P.P. (known as primal problem) should be
changed to the standard (or canonical) form. A linear programming problem is said to be
in standard primal form (or canonical primal form) if
1. For a maximization L.P.P. all the constraints have ≤ sign.
2. For a minimization L.P.P. all the constraints have ≥ sign.

8.3 Methods to Convert a L.P.P. into its Dual


[Gorakhpur 2011]

8.3.1 Dual of a Symmetrical Primal Form


Consider the following linear programming problem, we may call it the symmetrical
(standard canonical) primal form :

Primal L.P.P. : Find the variables x1, x2 ,..., x n which maximize

Z p = c1 x1 + c2 x2 + ... + cn x n

subject to,

a11 x1 + a12 x2 + ... + a1n x n ≤ b1

a21 x1 + a22 x2 + ... + a2 n x n ≤ b2


... ... ...
... ... ...

am1 x1 + am2 x2 + ... + amn x n ≤ bm



and x1, x2 ,..., x n ≥ 0 ,

the signs of the parameters a, b, c are arbitrary.

To obtain the dual of the above primal, following steps are required :
(i) Minimize the objective function instead of maximizing it.
(ii) Interchange the role of constant terms and the coefficients of the objective
function.
(iii) Find A′, where A′ denotes the transpose of the coefficient matrix A.
(iv) Reverse the direction of inequalities.

Thus the dual problem is :

Find the variables w1, w2 ,..., wm which minimize

Z w = b1w1 + b2 w2 + ... + bmwm

subject to a11w1 + a21w2 + ... + am1wm ≥ c1

a12 w1 + a22 w2 + ... + am2 wm ≥ c2


... ... ...
... ... ...

a1nw1 + a2 nw2 + ... + amnwm ≥ cn

and w1, w2, ..., wm ≥ 0

8.3.2 Matrix Form of Symmetric Primal-Dual Problem [Meerut 2004]

Primal Problem : Find a column vector x ∈ R n which maximizes

Z x = c x, c ∈ R n

subject to A x ≤ b, b ∈ R m,

x ≥ 0 and A is an m × n real matrix.

Dual Problem : Find a column vector w ∈ R m which minimizes

Z w = b′ w

subject to A′ w ≥ c′

w ≥ 0 , A′, b′, c′ are the transposes of A, b and c respectively.

For example : Consider the symmetric primal problem

Max. Z x = 5 x1 + 9 x2

subject to x1 ≤ 6

x1 + x2 ≤ 13

x2 ≤ 8,

x1, x2 ≥ 0.

The corresponding dual problem is

Min. Z w = 6 w1 + 13 w2 + 8w3

subject to w1 + w2 ≥ 5

w2 + w3 ≥ 9,

w1, w2 , w3 ≥ 0.

Note : The following table gives a simple and convenient method to remember the
primal-dual relationship :

               ( x1     x2    ...    xn )          Min.

[ w1 ]     [ a11    a12    ...    a1n ]       [ b1 ]
[ w2 ]     [ a21    a22    ...    a2n ]   ≤   [ b2 ]
[ ... ]    [ ...    ...    ...    ... ]       [ ... ]
[ wm ]     [ am1    am2    ...    amn ]       [ bm ]

Max.       ( c1     c2    ...    cn )

For primal constraints, read across the table, while for dual constraints read down the
columns.
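Since the dual of a symmetric primal is obtained purely by transposition, it can also be generated mechanically. A minimal Python sketch (the function name is ours), applied to the example of article 8.3.2 :

import numpy as np

# Sketch: dual data (cost, matrix, RHS) of  Max. c x, A x <= b, x >= 0.
def symmetric_dual(c, A, b):
    A = np.asarray(A)
    return np.asarray(b), A.T, np.asarray(c)   # Min. b'w, A'w >= c', w >= 0

b_d, A_d, c_d = symmetric_dual([5, 9], [[1, 0], [1, 1], [0, 1]], [6, 13, 8])
print(b_d)   # [ 6 13  8]        -> Min. 6w1 + 13w2 + 8w3
print(A_d)   # [[1 1 0]
             #  [0 1 1]]         -> w1 + w2 >= 5,  w2 + w3 >= 9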

8.3.3 Dual of an Unsymmetric Primal Form

Consider the following L.P.P., which we may call the unsymmetric primal form :

Primal Problem : Find a column vector x ∈ R n which maximizes

Z x = c x, c ∈ R n

subject to A x = b, b ∈ R m

x ≥ 0 and A is an m × n real matrix.

Dual Problem : Find a column vector w ∈ R m which minimizes

Z w = b′ w

subject to A′ w ≥ c′.

In this case the dual variables are unrestricted in sign.



8.3.4 Dual of an L.P.P. with Mixed Restrictions


Sometimes a given L.P.P. contains a mixture of inequalities (≥, ≤), equations,
non-negative variables and unrestricted variables. To obtain its dual we proceed in
the following manner :
1. If a constraint is an equation (has = sign), replace it by two constraints involving
the inequalities going in opposite directions.

For example

The equation 2 x1 + 5 x2 = 9 is replaced by

In maximization problem

2 x1 + 5 x2 ≤ 9 …(1)

and 2 x1 + 5 x2 ≥ 9 …(2)

In minimization problem

2 x1 + 5 x2 ≥ 9 ...(3)

and 2 x1 + 5 x2 ≤ 9 ...(4)
In the problem of maximization, all constraints should have ≤ sign. So we multiply
both sides of constraint (2) by –1.
So in maximization problem, the equation 2 x1 + 5 x2 = 9 is replaced by 2 x1 + 5 x2 ≤ 9
and −2 x1 − 5 x2 ≤ −9.
Similarly, if the problem is of minimization, all constraints should have ≥ sign.
So in this case the equation say 2 x1 + 5 x2 = 9 is replaced by 2 x1 + 5 x2 ≥ 9 and
−2 x1 − 5 x2 ≥ −9.
2. If there is some unrestricted variable, replace it by the difference of two
non-negative variables.
3. Now to find the dual problem proceed as in article 8.3.1.

Note : The dual variables corresponding to primal equality constraints must be


unrestricted in sign and those associated with primal inequalities must be non-negative.

Example 1: Find the dual of the following L.P.P. :


Min. Z = 10 x1 + 20 x2

subject to 3 x1 + 2 x2 ≥ 18

x1 + 3 x2 ≥ 8

2 x1 − x2 ≤ 6, x1, x2 ≥ 0 .

Solution: The given L.P.P. in the standard primal form is

Min. Z = 10 x1 + 20 x2

subject to 3 x1 + 2 x2 ≥ 18

x1 + 3 x2 ≥ 8

−2 x1 + x2 ≥ −6

and x1, x2 ≥ 0

The matrix form of this problem is

Min. Z = 10 x1 + 20 x2 = (10, 20) [ x1, x2 ] = c x

 3 2  18
   x1   
subject to  1 3    ≥  8
x
−2 1  2  −6 

or A x ≥ b, x1, x2 ≥ 0.

∴ The dual of this problem is

Max. Z D = b′ y = (18, 8, −6) [y1, y2, y3] = 18 y1 + 8 y2 − 6 y3

subject to A′ y ≤ c′ or
                  [y1]
[3   1   −2]      [y2]   ≤   [10]
[2   3    1]      [y3]       [20]

or [3 y1 + y2 − 2 y3 ]   ≤   [10]
   [2 y1 + 3 y2 + y3 ]       [20]

or 3 y1 + y2 − 2 y3 ≤ 10, 2 y1 + 3 y2 + y3 ≤ 20, y1, y2, y3 ≥ 0.

Hence dual of the given problem is

Max. Z D = 18 y1 + 8 y2 − 6 y3

subject to 3 y1 + y2 − 2 y3 ≤ 10, 2 y1 + 3 y2 + y3 ≤ 20, y1, y2 , y3 ≥ 0.
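As a numerical check (not part of the derivation above), both problems can be solved with scipy.optimize.linprog; by the duality theorems of article 8.4 they must attain the same optimal value :

from scipy.optimize import linprog

# Sketch: the primal of Example 1 and the dual just obtained agree in value.
p = linprog(c=[10, 20], A_ub=[[-3, -2], [-1, -3], [2, -1]], b_ub=[-18, -8, 6])
d = linprog(c=[-18, -8, 6], A_ub=[[3, 1, -2], [2, 3, 1]], b_ub=[10, 20])
print(p.fun, -d.fun)     # the two printed optima coincide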

Example 2: Write the dual of the following problem :


Min. Z = 3 x1 − 2 x2 + 4 x3

subject to 3 x1 + 5 x2 + 4 x3 ≥ 7

6 x1 + x2 + 3 x3 ≥ 4

7 x1 − 2 x2 − x3 ≤ 10

x1 − 2 x2 + 5 x3 ≥ 3

4 x1 + 7 x2 − 2 x3 ≥ 2

x1, x2 , x3 ≥ 0 . [Gorakhpur 2009]



Solution: The standard primal form of the given L.P.P. is

Min. Z = 3 x1 − 2 x2 + 4 x3

subject to 3 x1 + 5 x2 + 4 x3 ≥ 7

6 x1 + x2 + 3 x3 ≥ 4

−7 x1 + 2 x2 + x3 ≥ −10

x1 − 2 x2 + 5 x3 ≥ 3

4 x1 + 7 x2 − 2 x3 ≥ 2, x1, x2 , x3 ≥ 0.

The matrix form of the above problem is

Min. Z = (3, − 2, 4) [ x1, x2 , x3 ] = c . x

 3 5 4  7
 6 1 3  x1   4 

    
subject to  −7 2 1  x2  ≥ −10  or A x ≥ b, x1, x2 , x3 ≥ 0.
 1 −2 5   x3   3 

 4 7 −2   2 

∴ The dual of the given primal is

Max. Z D = b′ y = (7, 4, − 10, 3, 2) [ y1, y2 , y3 , y4 , y5 ]

= 7 y1 + 4 y2 − 10 y3 + 3 y4 + 2 y5

subject to A′ y ≤ c′

                          [y1]
[3   6   −7   1    4]     [y2]        [ 3]
[5   1    2  −2    7]     [y3]   ≤    [−2]
[4   3    1   5   −2]     [y4]        [ 4]
                          [y5]

or [3 y1 + 6 y2 − 7 y3 + y4 + 4 y5]        [ 3]
   [5 y1 + y2 + 2 y3 − 2 y4 + 7 y5]   ≤    [−2]
   [4 y1 + 3 y2 + y3 + 5 y4 − 2 y5]        [ 4]

Hence the dual problem of the given L.P.P. is

Max. Z D = 7 y1 + 4 y2 − 10 y3 + 3 y4 + 2 y5

subject to 3 y1 + 6 y2 − 7 y3 + y4 + 4 y5 ≤ 3

5 y1 + y2 + 2 y3 − 2 y4 + 7 y5 ≤ −2

4 y1 + 3 y2 + y3 + 5 y4 − 2 y5 ≤ 4,

y1, y2 , y3 , y4 , y5 ≥ 0.

Example 3: Write the dual of the following problem :

Min. Z = 2 x2 + 5 x3

subject to x1 + x2 ≥ 2

2 x1 + x2 + 6 x3 ≤ 6

x1 − x2 + 3 x3 = 4

x1, x2 , x3 ≥ 0 . [Meerut 2005]

Solution: First we shall convert the given problem to standard primal form.
1. Since the problem is of minimization therefore all the constraints should have the
sign ≥.
2. Multiplying the second constraint by −1, it becomes

−2 x1 − x2 − 6 x3 ≥ −6
3. Since the third constraint is an equality, so replacing it by the following two
constraints :

x1 − x2 + 3 x3 ≥ 4

and x1 − x2 + 3 x3 ≤ 4

or x1 − x2 + 3 x3 ≥ 4

and − x1 + x2 − 3 x3 ≥ −4

∴ The given problem in standard primal form is

Min. Z = 0 x1 + 2 x2 + 5 x3

subject to x1 + x2 ≥ 2

−2 x1 − x2 − 6 x3 ≥ −6

x1 − x2 + 3 x3 ≥ 4

− x1 + x2 − 3 x3 ≥ −4

x1, x2 , x3 ≥ 0.

The matrix form of the above problem is

Min. Z = (0, 2, 5) [ x1, x2 , x3 ] = c . x

 1 1 0  2
−2 −1 −6   x1  −6 
subject to    x  ≥   or A x ≥ b, x , x , x ≥ 0
 1 −1 3   2   4  1 2 3
 −1 1 −3   x3  −4 
   
300

∴ The dual of the given primal is

Max. Z D = b′ . y = (2, − 6, 4, − 4) [ y1, y2 , y3′ , y3′′ ]

= 2 y1 − 6 y2 + 4 ( y3′ − y3′′ )

subject to A′ y ≤ c′
[ 1   −2    1   −1] [y1 ]        [0]
[ 1   −1   −1    1] [y2 ]   ≤    [2]
[ 0   −6    3   −3] [y3′]        [5]
                    [y3′′]

or [ y1 − 2 y2 + y3′ − y3′′        ]        [0]
   [ y1 − y2 − y3′ + y3′′          ]   ≤    [2],
   [ 0·y1 − 6 y2 + 3 y3′ − 3 y3′′  ]        [5]

y1, y2 , y3′ , y3′′ ≥ 0

or Max. Z D = 2 y1 − 6 y2 + 4( y3′ − y3′′ )

subject to

y1 − 2 y2 + ( y3′ − y3′′ ) ≤ 0

y1 − y2 − ( y3′ − y3′′ ) ≤ 2

−6 y2 + 3 ( y3′ − y3′′ ) ≤ 5,

y1, y2 , y3′ , y3′′ ≥ 0

Substituting y3 = y3′ − y3′′ , the required dual is

Max. Z D = 2 y1 − 6 y2 + 4 y3

subject to

y1 − 2 y2 + y3 ≤ 0, y1 − y2 − y3 ≤ 2,−6 y2 + 3 y3 ≤ 5,

y1, y2 ≥ 0 and y3 is unrestricted in sign.

Note : The variable corresponding to the equality equation (here third equation) in the
constraint will be unrestricted in sign in dual problem.

Example 4: Write the dual of the following L.P.P. :


Max. Z = 2 x1 + 3 x2 + x3

subject to 4 x1 + 3 x2 + x3 = 6

x1 + 2 x2 + 5 x3 = 4,

x1, x2 , x3 ≥ 0 .

Solution: First we shall convert the given L.P.P. into standard primal form.

Here both constraints are equalities, so replacing each by two inequalities, we get the
constraints,

4 x1 + 3 x2 + x3 ≤ 6 and 4 x1 + 3 x2 + x3 ≥ 6

x1 + 2 x2 + 5 x3 ≤ 4 and x1 + 2 x2 + 5 x3 ≥ 4.

Since the given problem is of maximization, so all the constraints should have the sign ≤.

The standard primal form of the given L.P.P. is

Max. Z = 2 x1 + 3 x2 + x3

subject to 4 x1 + 3 x2 + x3 ≤ 6

−4 x1 − 3 x2 − x3 ≤ −6

x1 + 2 x2 + 5 x3 ≤ 4

− x1 − 2 x2 − 5 x3 ≤ −4,

x1, x2 , x3 ≥ 0.

The matrix form of the above problem is

Max. Z = 2 x1 + 3 x2 + x3 = (2, 3, 1) [ x1, x2 , x3 ] = c. x

subject to

 4 3 1  6
−4 −3 −1  x1  −6 
  x  ≤  
 1 2 5  2   4
 −1 −2 −5   x3  −4 
   

or A x ≤ b,

x1, x2 , x3 ≥ 0.

∴ The dual to the primal is

Min. Z D = b′ . y = (6, − 6, 4, − 4) [ y1′ , y1′′, y2′ , y2′′ ]

= 6 ( y1′ − y1′′) + 4 ( y2′ − y2′′ )

subject to A′ y ≥ c′ or

[4   −4    1   −1] [y1′ ]        [2]
[3   −3    2   −2] [y1′′]   ≥    [3]
[1   −1    5   −5] [y2′ ]        [1]
                   [y2′′]

or [4 y1′ − 4 y1′′ + y2′ − y2′′       ]        [2]
   [3 y1′ − 3 y1′′ + 2 y2′ − 2 y2′′   ]   ≥    [3]
   [ y1′ − y1′′ + 5 y2′ − 5 y2′′      ]        [1]

or 4 ( y1′ − y1′′) + y2′ − y2′′ ≥ 2

3 ( y1′ − y1′′) + 2 ( y2′ − y2′′ ) ≥ 3

( y1′ − y1′′) + 5 ( y2′ − y2′′ ) ≥ 1,

y1′ , y1′′, y2′ , y2′′ ≥ 0.

Substituting y1 = y1′ − y1′′, y2 = y2′ − y2′′ , the required dual is

Min. Z D = 6 y1 + 4 y2

subject to 4 y1 + y2 ≥ 2, 3 y1 + 2 y2 ≥ 3, y1 + 5 y2 ≥ 1,

where y1, y2 are unrestricted in sign.

Example 5: Write the dual of the following problem :


Min. Z = x1 + x2 + x3

subject to x1 − 3 x2 + 4 x3 = 5, x1 − 2 x2 ≤ 3, 2 x2 − x3 ≥ 4,

x1, x2 ≥ 0 , x3 is unrestricted. [Meerut 2006]

Solution: First we shall write the given L.P.P. in the standard primal form, substituting
x3 = x3′ − x3′′ , x3′ ≥ 0, x3′′ ≥ 0. The given problem can be written in the standard primal
form as

Min. Z = x1 + x2 + x3′ − x3′′

subject to x1 − 3 x2 + 4( x3′ − x3′′ ) ≥ 5

− x1 + 3 x2 − 4 ( x3′ − x3′′ ) ≥ −5

− x1 + 2 x2 ≥ −3

2 x2 − ( x3′ − x3′′ ) ≥ 4,

x1, x2 , x3′ , x3′′ ≥ 0.

The matrix form of the above problem is

Min. Z = (1, 1, 1, − 1) [ x1, x2 , x3′ , x3′′ ] = c . x

subject to
[ 1   −3    4   −4] [x1 ]        [ 5]
[−1    3   −4    4] [x2 ]   ≥    [−5]
[−1    2    0    0] [x3′]        [−3]
[ 0    2   −1    1] [x3′′]       [ 4]

or A x ≥ b, x1, x2, x3′, x3′′ ≥ 0

Now the dual of the given primal is

Max. Z D = b′ . y = (5, − 5, − 3, 4) [ y1′ , y1′′, y2 , y3 ]

= 5 ( y1′ − y1′′) − 3 y2 + 4 y3

subject to A′ y ≤ c′

 1 −1 −1 0   y1′   1
 −3 3 2 2   y1′′   1
or    ≤ 
 4 −4 0 −1  y2   1
 −4 4 0 1  y3  −1

or ( y1′ − y1′′) − y2 ≤ 1

−3 ( y1′ − y1′′) + 2 y2 + 2 y3 ≤ 1

4 ( y1′ − y1′′) − y3 ≤ 1

−4 ( y1′ − y1′′) + y3 ≤ −1,

y1′ , y1′′, y2 , y3 ≥ 0.

Substituting y1 = y1′ − y1′′, the required dual is

Max. Z D = 5 y1 − 3 y2 + 4 y3

subject to y1 − y2 ≤ 1, − 3 y1 + 2 y2 + 2 y3 ≤ 1, 4 y1 − y3 ≤ 1, 4 y1 − y3 ≥ 1

y2 , y3 ≥ 0 and y1 is unrestricted.

Hence the required dual is

Max. Z D = 5 y1 − 3 y2 + 4 y3

subject to y1 − y2 ≤ 1, − 3 y1 + 2 y2 + 2 y3 ≤ 1, 4 y1 − y3 = 1,

y2 , y3 ≥ 0, y1 is unrestricted in sign.

Example 6: Write the dual of the following problem :

Max. Z = 3 x1 + 5 x2 + 7 x3

subject to x1 + x2 + 3 x3 ≤ 10

4 x1 − x2 + 2 x3 ≥ 15,

x1, x2 ≥ 0 , x3 is unrestricted.

Solution: First we shall write the given L.P.P. in the standard primal form.
Since the given problem is of maximization, all the constraints should have the
sign ≤.

Substituting x3 = x3′ − x3′′ , x3′ ≥ 0, x3′′ ≥ 0, the standard primal form of the given problem is

Max. Z = 3 x1 + 5 x2 + 7 ( x3′ − x3′′ )

subject to

x1 + x2 + 3 x3′ − 3 x3′′ ≤ 10

−4 x1 + x2 − 2 x3′ + 2 x3′′ ≤ −15

x1, x2 , x3′ , x3′′ ≥ 0.

The matrix form of the above problem is

Max. Z = (3, 5, 7, − 7) [ x1, x2 , x3′ , x3′′ ] = c . x

subject to

[ 1    1    3   −3] [x1 ]        [ 10]
[−4    1   −2    2] [x2 ]   ≤    [−15]
                    [x3′]
                    [x3′′]

or A x ≤ b, x1, x2 , x3′ , x3′′ ≥ 0.

∴ The dual of the given problem is

Min. Z D = b′ y = (10, − 15) [ y1, y2 ] = 10 y1 − 15 y2

subject to A′ y ≥ c′

 1 −4   3  y1 − 4 y2   3 
 1 1  y1   5   y + y   5
or   ≥  or  1 2 ≥ 
 3 −2   y2   7  3 y1 − 2 y2   7
 −3  −3 y + 2 y   
 2  −7
   1 2   −7

or Min. Z D = 10 y1 − 15 y2

subject to y1 − 4 y2 ≥ 3, y1 + y2 ≥ 5, 3 y1 − 2 y2 ≥ 7, − 3 y1 + 2 y2 ≥ −7

or y1 − 4 y2 ≥ 3, y1 + y2 ≥ 5, 3 y1 − 2 y2 ≥ 7, 3 y1 − 2 y2 ≤ 7

y1, y2 ≥ 0.

Hence the required dual problem is

Min. Z D = 10 y1 − 15 y2

subject to y1 − 4 y2 ≥ 3, y1 + y2 ≥ 5, 3 y1 − 2 y2 = 7, y1, y2 ≥ 0.

1. Describe a method to convert a L.P.P. into its dual. [Gorakhpur 2011]

Find the dual of the following linear programming problems :

2. Max. Z = x1 − x2 + 3 x3 3. Max. Z = 3 x1 + 5 x2 + 4 x3
subject to subject to
x1 + x2 + x3 ≤ 10 2 x1 + 3 x2 ≤ 8
2 x1 − x3 ≤ 2 2 x2 + 5 x3 ≤ 10
2 x1 − 2 x2 + 3 x3 ≤ 6 3 x1 + 2 x2 + 4 x3 ≤ 15
x1, x2 , x3 ≥ 0. x1, x2 , x3 ≥ 0. [Meerut 2012]
4. Max. Z = 5 x1 + 3 x2 5. Min. Z = 4 x1 + 6 x2 + 18 x3
subject to subject to
3 x1 + 5 x2 ≤ 15 x1 + 3 x3 ≥ 3
5 x1 + 2 x2 ≤ 15 x2 + 2 x3 ≥ 5
x1, x2 ≥ 0. [Gorakhpur 2008] x1, x2 , x3 ≥ 0.
6. Max. Z = 3 x1 + 4 x2 7. Min. Z = 3 x1 + x2
subject to subject to
2 x1 + 6 x2 ≤ 16 2 x1 + 3 x2 ≥ 2
5 x1 + 2 x2 ≥ 20 x1 + x2 ≥ 1
x1, x2 ≥ 0. [Kanpur 2007] x1, x2 ≥ 0. [Kanpur 2012]
8. Min. Z = 2 x1 + 2 x2 + 4 x3 9. Min. Z = 7 x1 + 3 x2 + 8 x3
subject to subject to
2 x1 + 3 x2 + 5 x3 ≥ 2 8 x1 + 2 x2 + x3 ≥ 3
3 x1 + x2 + 7 x3 ≤ 3 3 x1 + 6 x2 + 4 x3 ≥ 4
x1 + 4 x2 + 6 x3 ≤ 5 4 x1 + x2 + 5 x3 ≥ 1
x1, x2 , x3 ≥ 0. [Meerut 2011 (BP)] x1 + 5 x2 + 2 x3 ≥ 7
x1, x2 , x3 ≥ 0.
10. Max. Z = 2 x1 + 5 x2 + 6 x3 11. Min. Z = 3 x1 − 2 x2 + 4 x3
subject to subject to
5 x1 + 6 x2 − x3 ≤ 3 3 x1 + 5 x2 + 4 x3 ≤ 7
−2 x1 + x2 + 4 x3 ≤ 4 6 x1 + 3 x2 + 2 x3 ≤ 9
x1 − 5 x2 + 3 x3 ≤ 1 2 x1 + 7 x2 + 9 x3 ≥ 5
−3 x1 − 3 x2 + 7 x3 ≤ 6 4 x1 − 2 x2 + 7 x3 ≥ 1
x1, x2 , x3 ≥ 0. [Kanpur 2011] x1, x2 , x3 ≥ 0. [Gorakhpur 2011]

12. Max. Z = 3 x1 + x2 + 4 x3 + x4 + 9 x5 13. Max. Z = 4 x1 − 2 x2 + 7 x3


subject to subject to
4 x1 − 5 x2 − 9 x3 + x4 − 2 x5 ≤ 6 3 x1 + 5 x2 + 4 x3 ≥ 9
2 x1 + 3 x2 + 4 x3 − 5 x4 + x5 ≤ 9 4 x1 + x2 + 3 x3 ≥ 7
x1 + x2 − 5 x3 − 7 x4 + 11x5 ≤ 10, 7 x1 − 3 x2 + x3 ≤ 10
x1, x2 , x3 , x4 , x5 ≥ 0. x1 + 5 x2 − 2 x3 ≥ 2
4 x1 − 2 x2 + 7 x3 ≥ 8
x1, x2 , x3 ≥ 0. [Gorakhpur 2010]
14. Max. Z = x1 + 3 x2 15. Max. Z = 3 x1 + x2 + x3 − x4
subject to subject to
3 x1 + 2 x2 ≤ 6 x1 + 5 x2 + 3 x3 + 4 x4 ≤ 5
3 x1 + x2 = 4 x1 + x2 = −1
x1, x2 ≥ 0. x3 − x4 ≥ −5
x1, x2 , x3 , x4 ≥ 0.
16. Min. Z = 2 x1 + 3 x2 + 4 x3 17. Max. Z = 3 x1 − 2 x2
subject to subject to
2 x1 + 3 x2 + 5 x3 ≥ 2 x1 + x2 ≤ 5
3 x1 + x2 + 7 x3 = 3 x1 ≤ 4
x1 + 4 x2 + 6 x3 ≤ 5 1 ≤ x2 ≤ 6. [Kanpur 2010]
x1, x2 ≥ 0, x3 is unrestricted.
[Kanpur 2009]
18. Min. Z = x1 − 3 x2 − 2 x3 19. Min. Z = x1 + x2 + x3
subject to subject to
3 x1 − x2 + 2 x3 ≤ 7 x1 − 3 x2 + 4 x3 = 5
2 x1 − 4 x2 ≥ 12 x1 − 2 x2 ≤ 3
−4 x1 + 3 x2 + 8 x3 = 10 2 x2 − x3 ≥ 4
x1, x2 ≥ 0, x3 is unrestricted. x1, x3 ≥ 0, x2 is unrestricted.
[Gorakhpur 2007]
20. Max. Z = 3 x1 + x2 + 2 x3 − x4 21. Max. Z = 4 x1 + 2 x2
subject to subject to
2 x1 − x2 + 3 x3 + x4 = 1 x1 − 2 x2 ≥ 2
x1 + x2 − x3 + x4 = 3 x1 + 2 x2 = 8
x1, x2 , x3 ≥ 0, x4 is unrestricted. x1 − x2 ≤ 10
x1 ≥ 0, x2 unrestricted in sign.

2. Min. Z D = 10 y1 + 2 y2 + 6 y3 3. Min. Z D = 8 y1 + 10 y2 + 15 y3

subject to subject to

y1 + 2 y2 + 2 y3 ≥ 1 2 y1 + 3 y3 ≥ 3

− y1 + 2 y3 ≤ 1 3 y1 + 2 y2 + 2 y3 ≥ 5

y1 − y2 + 3 y3 ≥ 3 5 y2 + 4 y3 ≥ 4

y1, y2 , y3 ≥ 0 y1, y2 , y3 ≥ 0

4. Min. Z D = 15 y1 + 15 y2 5. Max. Z D = 3 y1 + 5 y2

subject to subject to

3 y1 + 5 y2 ≥ 5 y1 ≤ 4, y2 ≤ 6

5 y1 + 2 y2 ≥ 3 3 y1 + 2 y2 ≤ 18

y1, y2 ≥ 0 y1, y2 ≥ 0

6. Min. Z D = 16 y1 − 20 y2 7. Max. Z D = 2 y1 + y2

subject to subject to

2 y1 − 5 y2 ≥ 3 2 y1 + y2 ≤ 3

6 y1 − 2 y2 ≥ 4 3 y1 + y2 ≤ 1

y1, y2 ≥ 0 y1, y2 ≥ 0.

8. Max. Z D = 2 y1 − 3 y2 − 5 y3 9. Max. Z D = 3 y1 + 4 y2 + y3 + 7 y4

subject to subject to

2 y1 − 3 y2 − y3 ≤ 2 8 y1 + 3 y2 + 4 y3 + y4 ≤ 7

3 y1 − y2 − 4 y3 ≤ 2 2 y1 + 6 y2 + y3 + 5 y4 ≤ 3

5 y1 − 7 y2 − 6 y3 ≤ 4 y1 + 4 y2 + 5 y3 + 2 y4 ≤ 8

y1, y2 , y3 ≥ 0. y1, y2 , y3 , y4 ≥ 0.

10. Min. Z D = 3 y1 + 4 y2 + y3 + 6 y4 11. Max. Z D = −7 y1 − 9 y2 + 5 y3 + y4

subject to subject to

5 y1 − 2 y2 + y3 − 3 y4 ≥ 2 −3 y1 − 6 y2 + 2 y3 + 4 y4 ≤ 3

6 y1 + y2 − 5 y3 − 3 y4 ≥ 5 5 y1 + 3 y2 − 7 y3 + 2 y4 ≥ 2

− y1 + 4 y2 + 3 y3 + 7 y4 ≥ 6 −4 y1 − 2 y2 + 9 y3 + 7 y4 ≤ 4

y1, y2, y3, y4 ≥ 0 y1, y2, y3, y4 ≥ 0

12. Min. Z D = 6 y1 + 9 y2 + 10 y3 13. Min. Z D = −9 y1 − 7 y2 + 10 y3


subject to −2 y4 − 8 y5
subject to
4 y1 + 2 y2 + y3 ≥ 3
−3 y1 − 4 y2 + 7 y3 − y4 − 4 y5 ≥ 4
−5 y1 + 3 y2 + y3 ≥ 1
5 y1 + y2 + 3 y3 + 5 y4 − 2 y5 ≤ 2
−9 y1 + 4 y2 − 5 y3 ≥ 4
−4 y1 − 3 y2 + y3 + 2 y4 − 7 y5 ≥ 7
y1 − 5 y2 − 7 y3 ≥ 1
y1, y2 , y3 , y4 , y5 ≥ 0
−2 y1 + y2 + 11 y3 ≥ 9
y1, y2 , y3 ≥ 0

14. Min. Z D = 6 y1 + 4 y2 15. Min. Z D = 5 y1 − y2 + 5 y3


subject to subject to
3 y1 + 3 y2 ≥ 1 y1 + y2 ≥ 3
2 y1 + y2 ≥ 3 5 y1 + y2 ≥ 1
y1 ≥ 0, y2 is unrestricted 3 y1 − y3 ≥ 1
4 y1 + y3 ≥ −1
y1, y3 ≥ 0, y2 is unrestricted

16. Max. Z D = 2 y1 + 3 y2 − 5 y3 17. Min. Z D = 5 y1 + 4 y2 − y3 + 6 y4


subject to subject to
2 y1 + 3 y2 − y3 ≤ 2 y1 + y2 = 3
3 y1 + y2 − 4 y3 ≤ 3 − y1 + y3 − y4 ≤ 2
5 y1 + 7 y2 − 6 y3 = 4 y1, y2 , y3 , y4 ≥ 0
y1, y3 ≥ 0, y2 is unrestricted

18. Max. Z D = −7 y1 + 12 y2 + 10 y3 19. Max. Z D = 5 y1 − 3 y2 + 4 y3


subject to subject to
−3 y1 + 2 y2 − 4 y3 ≤ 1 y1 − y2 ≤ 1
− y1 + 4 y2 − 3 y3 ≥ 3 −3 y1 + 2 y2 + 2 y3 = 1
2 y1 − 8 y3 = 2 4 y1 − y3 ≤ 1
y1, y2 ≥ 0, y3 is unrestricted y2 , y3 ≥ 0, y1 is unrestricted

20. Min. Z D = y1 + 3 y2 21. Min. Z D = −2 y1 + 8 y2 + 10 y3


subject to subject to
2 y1 + y2 ≥ 3 − y1 + y2 + y3 ≥ 4
− y1 + y2 ≥ 1 2 y1 + 2 y2 − y3 = 2
3 y1 − y2 ≥ 2 y1, y3 ≥ 0, y2 is unrestricted
y1 + y2 = −1
y1, y2 are unrestricted

8.4 Duality Theorems


We observe that the dual of a primal problem is also a linear programming problem;
therefore we can also construct the dual of a dual problem.

Here we shall prove a number of fundamental theorems to describe the relation between
the primal and its dual. The relationship between the primal and dual is extremely useful
in the development of mathematical programming.

Theorem 1: The dual of a dual of the given primal is the primal itself. [Kanpur 2011]

Proof: Suppose the given linear programming problem is

Primal Problem

Max. Z P = c1 x1 + c2 x2 + ... + cn x n

subject to

a11 x1 + a12 x2 + ... + a1n x n ≤ b1 


a21 x1 + a22 x2 + ... + a2 n x n ≤ b2 

... ... ... 

... ... ...  ...(1)
am1 x1 + am2 x2 + ... + amn x n ≤ bm 

x1, x2 ,..., x n ≥ 0. 

Dual Problem

The dual of the above primal can be written as

Min. Z D = b1w1 + b2 w2 + ... + bmwm

subject to

a11w1 + a21w2 + ... + am1wm ≥ c1 


a12 w1 + a22 w2 + ... + am2 wm ≥ c2 

... ... ... 
... ... ...  ...(2)

a1nw1 + a2 nw2 + ... + amnwm ≥ cn 

w1, w2 ,..., wm ≥ 0. 

Now we have to construct the dual of the above dual. First we shall change the above dual
to standard maximization form.

Max. (− Z D ) = − b1w1 − b2 w2 − ... − bmwm



subject to

− a11w1 − a21w2 − ... − am1wm ≤ − c1 


− a12 w1 − a22 w2 − ... − am2 wm ≤ − c2 

... ... ... 
... ... ... 
 ...(3)
− a1nw1 − a2 nw2 − ... − amnwm ≤ − cn 

w1, w2 ,..., wm ≥ 0. 

Dual of the dual : Considering the above dual a primal, its dual can be written as

Min. Zy = − c1 y1 − c2 y2 − ... − cn yn

subject to
− a11 y1 − a12 y2 − ... − a1n yn ≥ − b1
− a21 y1 − a22 y2 − ... − a2n yn ≥ − b2
... ... ...
... ... ...
− am1 y1 − am2 y2 − ... − amn yn ≥ − bm,
y1, y2, ..., yn ≥ 0        ...(4)

Changing the above problem to maximization and multiplying each constraint by −1, we
get

Max. Z ′y = c1 y1 + c2 y2 + ... + cn yn,(− Z y = Z ′y say)


subject to a11 y1 + a12 y2 + ... + a1n yn ≤ b1 

a21 y1 + a22 y2 + ... + a2 n yn ≤ b2 
... ... ... 
... ... ...  ...(5)

am1 y1 + am2 y2 + ... + amn yn ≤ bm,
y1, y2 ,..., yn ≥ 0 

which is identical to the given linear programming problem (primal problem).

Hence the dual of the dual is primal.

Theorem 2: If x is any feasible solution to the primal problem Max. ZP = c x, subject to

A x ≤ b, x ≥ 0, and w is any feasible solution to the dual problem

Min. ZD = b′ w, subject to A′ w ≥ c′, w ≥ 0, then c x ≤ b′ w, i.e., ZP ≤ ZD.

Proof: Consider the primal problem


Max. Z P = c x 

subject to Ax ≤ b  ...(1)
x ≥ 0 

Let x = [ x1, x2 ,..., x n] be any feasible solution to (1).

The dual of the above primal is

Min. Z D = b′ w  ...(2)
subject to A′ w ≥ c′, w ≥ 0.

Let w = [w1, w2 ,..., wm] be any feasible solution to the dual (2).

Now w (an m-component column vector) is the feasible solution of the dual and A x ≤ b are
the constraints of the primal (1). If we multiply both sides of A x ≤ b by w′ (whose
components are non-negative), the sign of the inequality remains unchanged.

∴ w′ ( A x ) ≤ w′ b

⇒ ( A′ w)′ x ≤ (b′ w)′ ...(3)

Similarly x (n-component column vector) is the feasible solution of the primal (1) and
A′ w ≥ c′ denotes the constraints of the dual (2) so we can write

x ′ ( A′ w) ≥ x ′ c′

⇒ x ′ (w′ A)′ ≥ (c x )′

⇒ [(w′ A) x ]′ ≥ (c x )′

⇒ (w′ A) x ≥ c x

⇒ ( A′ w)′ x ≥ c x. ...(4)

From relations (3) and (4), we have

c x ≤ ( A′ w)′ x ≤ (b′ w)′

⇒ c x ≤ (b′ w)′ ⇒ c x ≤ b′ w

⇒ ZP ≤ ZD.
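A quick numerical illustration of the theorem (ours, using the diet pair of article 8.1): every value of the maximization problem is bounded above by every value of the minimization problem, so in particular the two optima satisfy ZP ≤ ZD :

from scipy.optimize import linprog

# Sketch: weak duality on the diet pair of article 8.1.
mini = linprog(c=[12, 18], A_ub=[[-6, -9], [-4, -13]], b_ub=[-60, -108])
maxi = linprog(c=[-60, -108], A_ub=[[6, 4], [9, 13]], b_ub=[12, 18])
print(-maxi.fun <= mini.fun + 1e-9)   # True; the optima are in fact equal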

Theorem 3: If x̂ is a feasible solution to the primal problem Max. ZP = c x subject to
A x ≤ b, x ≥ 0, and ŵ is a feasible solution to its dual

Min. ZD = b′ w subject to A′ w ≥ c′, w ≥ 0,

such that c x̂ = b′ ŵ, then x̂ is the optimal solution of the primal and ŵ is the optimal
solution of the dual problem.
Or
The necessary and sufficient condition for any linear programming problem and its dual to
have optimal solutions is that both have feasible solutions.

Proof: Consider the primal problem

Max. Z P = c x 
 ...(1)
subject to A x ≤ b, x ≥ 0.

Let x be any feasible solution to (1).

The dual of the problem (1) is

Min. Z D = b′ w 
 ...(2)
subject to A′ w ≥ c′, w ≥ 0 

Let ŵ be the given feasible solution to the dual (2).

Proceeding as in theorem 2 above, we have

c x ≤ b′ ŵ

⇒ c x ≤ c x̂, since c x̂ = b′ ŵ

⇒ the value of the objective function of the primal problem at the feasible solution x̂ is
greater than or equal to its value at any other feasible solution x.

Hence x̂ is the optimal solution of the primal problem (for the maximization problem).

Again suppose w is any feasible solution of the dual problem (2), and x̂ is the given feasible
solution of the primal problem (1).

Proceeding as in theorem 2, we have

c x̂ ≤ b′ w

⇒ b′ ŵ ≤ b′ w, since c x̂ = b′ ŵ

⇒ the value of the objective function of the dual problem at the given feasible
solution ŵ is less than or equal to its value at any other feasible solution w.

Hence ŵ is the optimal solution of the dual problem (for the minimization problem).

Theorem 4: (Basic Duality Theorem) : If x 0 is an optimum solution to the primal, then


there exists a feasible solution w0 to the dual such that

c x 0 = b′ w0 ,

where b′ is the transpose of b.

Proof: Consider the following primal and dual problems :



Primal Problem
Max. Z P = c x 
subject to Ax ≤ b, x ≥ 0  ...(1)

Dual Problem
Min. Z D = b′ w 
subject to A′ w ≥ c′, w ≥ 0  ...(2)

Suppose x 0 is an optimum solution to the primal. Then to solve the primal problem by
simplex method, introduce slack variables to each of the constraints.

Now (1) can be written as


Max. Z P = c x
subject to A x + Ix s = b

where x s ∈ R mrepresents the vector of slack variables and I is the associated m × m

identity matrix.

Suppose x 0 = (x B , 0) is an optimum solution to the primal problem where x B denotes


the optimum basic feasible solution.

We know x B = B −1 b, B is the optimal basis of A.

∴ The optimal primal objective function is

Z = c x0 = c B x B ,

c B is the row vector containing the prices of the basic variables.

We know

Zj − cj = cB Yj − cj = { cB B−1 αj − cj,  ∀ αj ∈ A
                       { cB B−1 ej − 0,   ∀ ej ∈ I.

Since x0 is an optimal solution, we have

Zj − cj ≥ 0, ∀ j

⇒ cB B−1 αj ≥ cj, cB B−1 ej ≥ 0, ∀ j

⇒ cB B−1 A ≥ c, cB B−1 ≥ 0 (in matrix form)

⇒ A′ (cB B−1)′ ≥ c′, (cB B−1)′ ≥ 0

⇒ A′ w0 ≥ c′ ; w0 ≥ 0, taking w0 = (cB B−1)′ ∈ Rm

⇒ w0 is a feasible solution of the dual problem. Also the corresponding dual objective
function is

b′ w0 = w0′ b = cB B−1 b = cB xB = c x0

Hence corresponding to a given optimal solution x0 of the primal there exists a feasible
solution w0 of the dual such that

c x0 = b′ w0

Note : Similarly we can prove the following theorem :

If w0 is an optimum solution to the dual, then there exists a feasible solution x 0 to the
primal such that c x 0 = b′ w0

Theorem 5: (Fundamental Duality Theorem)


(i) If either the primal or the dual problem has a finite optimal solution then the other
problem also has a finite optimal solution and the optimal values of the objective
function in both the problems are the same.
(ii) If primal (dual) problem has an unbounded optimum solution, the other problem has
either no solution at all or an unbounded solution.

Proof: (i) Consider the primal and the dual problems

Primal Problem
Max. Z P = c x 
 ...(1)
subject to Ax ≤ b, x ≥ 0 

Dual Problem
Min. Z D = b′ w 

subject to A′ w ≥ c′  ...(2)
w ≥ 0 

To prove the theorem we shall construct an optimal solution to the dual from a given
optimal solution of the primal.
Let us assume that the primal has a finite optimal feasible solution x B .
To solve the primal (1) by simplex method, introduce slack variables to each of the
constraints.
Now (1) can be written as
Max. Z P = c x 

subject to A x + I x s = b  ...(3)
x ≥ 0 , x s ≥ 0, 

where x s ∈ R m represents the vector of slack variables and I is the associated m × m


identity matrix. B is the basis matrix and suppose c B is the m-component row vector
containing the prices of the basic variables.

Now x B is the optimal solution to the primal so we have

cj − Zj ≤ 0, ∀ j

We know Zj = cB Yj = cB B−1 αj

∴ cj − cB B−1 αj ≤ 0, ∀ j

or cB B−1 αj ≥ cj        ...(4)

⇒ cB B−1 (α1, α2, ..., αn) ≥ (c1, c2, ..., cn)

⇒ cB B−1 A ≥ c        ...(5)

Taking (ŵ)′ = cB B−1, where ŵ = [w1, w2, ..., wm].

Then from (5), we have

(ŵ)′ A ≥ c ⇒ [(ŵ)′ A]′ ≥ c′

⇒ A′ ŵ ≥ c′

⇒ ŵ is a solution of the dual (2).

Again considering the relation (4) with αj corresponding to the slack variables, we have

cB B−1 ej ≥ 0, j = 1, 2, ..., m (here cj = 0),

⇒ (ŵ)′ ej ≥ 0 ⇒ (ŵ)′ ≥ 0

⇒ ŵ ≥ 0

⇒ ŵ is a feasible solution of the dual (2).

Now it remains to show that ŵ is an optimal solution to the dual (2).

We have

ZD = b′ ŵ = [(ŵ)′ b]′ = (ŵ)′ b = (cB B−1) b = cB (B−1 b) = cB xB = ZP.

Since ŵ and xB are the feasible solutions of the dual (2) and the primal (1) respectively
and ZD = ZP, therefore ŵ is the optimal solution of the dual (2). (See the result of
theorem 3.)
(ii) We shall prove it by contradiction. Suppose, when the primal problem has an
unbounded solution, the dual problem has a finite optimal solution.
We have proved in theorem 1 that dual of the dual is the original primal. Also in
part (i) of this theorem we have proved that if a primal has a finite optimal solution,
the dual also has a finite optimal solution.
Thus if we consider the dual as the primal, then its dual (which is the original
primal) must have a finite optimal solution, which contradicts the hypothesis.
Hence the dual has no finite optimal solution. If the primal problem has an
unbounded solution, either the dual has no solution or an unbounded solution.
Similarly, we can prove that if the dual has an unbounded solution, the primal has
no solution or an unbounded solution.

Theorem 6: If any of the constraints in the primal is a perfect equality, the corresponding
dual variable is unrestricted in sign.

Proof: Suppose in the given primal the k-th constraint is an equality. Writing the primal
in the standard primal form, we have

Max. Z = c1 x1 + c2 x2 + ... + cn x n

subject to
a11 x1 + a12 x2 + ... + a1n x n ≤ b1
a21 x1 + a22 x2 + ... + a2 n x n ≤ b2
... ... ...
... ... ...
ak1 x1 + ak 2 x2 + ... + akn x n ≤ bk
− ak1 x1 − ak 2 x2 − ... − akn x n ≤ − bk
... ... ...
... ... ...
am1 x1 + am2 x2 + ... + amn x n ≤ bm,
x1, x2 ,..., x n ≥ 0

The dual of the above primal is

Min. Z D = b1w1 + b2 w2 + ... + bk (w′k − w′′k ) + ... + bmwm

subject to
a11w1 + a21w2 + ... + ak1 (w′k − w′′k ) + ... + am1wm ≥ c1
a12 w1 + a22 w2 + ... + ak 2 (w′k − w′′k ) + ... + am2 wm ≥ c2
... ... ...
... ... ...
a1nw1 + a2 nw2 + ... + akn (w′k − w′′k ) + ... + amnwm ≥ cn
w1, w2 , ..., w′k , w′′k ,..., wm ≥ 0.

Substituting wk = w′k − w′′k in the above dual, we get

Min. Z D = b1w1 + b2 w2 + ... + bk wk + ... + bmwm

subject to
a11w1 + a21w2 + ... + ak1 wk + ... + am1wm ≥ c1
a12 w1 + a22 w2 + ... + ak 2 wk + ... + am2 wm ≥ c2
... ... ...
... ... ...
a1nw1 + a2 nw2 + ... + akn wk + ... + amnwm ≥ cn
w1, w2 ,..., wk −1, wk + 1,..., wm ≥ 0,

wk is unrestricted in sign because

wk > 0 if w′k > w′′k and wk < 0 if w′k < w′′k .

Theorem 7: If any variable of the primal is unrestricted in sign, the corresponding


constraint in the dual will be a strict equality.

Proof: Consider the primal in which the k-th variable is unrestricted in sign.

Max. Z = c1 x1 + c2 x2 + ... + ck x k + ... + cn x n

subject to
a11 x1 + a12 x2 + ... + a1k x k + ... + a1n x n ≤ b1
a21 x1 + a22 x2 + ... + a2 k x k + ... + a2 n x n ≤ b2
... ... ...
... ... ...
am1 x1 + am2 x2 + ... + amk x k + ... + amn x n ≤ bm
x1, x2 ,..., x k −1, x k + 1,..., x n ≥ 0,

x k is unrestricted in sign.

Substituting x k = x′k − x′′k , x′k ≥ 0, x′′k ≥ 0 in the given primal, it changes to

Max. Z = c1 x1 + c2 x2 + ... + ck ( x′k − x′′k ) + ... + cn x n

subject to
a11 x1 + a12 x2 + ... + a1k (x′k − x′′k ) + ... + a1n x n ≤ b1
a21 x1 + a22 x2 + ... + a2 k (x′k − x′′k ) + ... + a2 n x n ≤ b2
... ... ... ...
... ... ... ...
am1 x1 + am2 x2 + ... + amk (x′k − x′′k ) + ... + amn x n ≤ bm,
x1, x2 ,..., x k −1, x′k , x′′k , x k + 1,..., x n ≥ 0

Writing the dual of the above primal, we have

Min. Z D = b1w1 + b2 w2 + ... + bmwm,



subject to

a11w1 + a21w2 + ... + am1wm ≥ c1


a12 w1 + a22 w2 + ... + am2 wm ≥ c2
... ... ...
... ... ...
a1k w1 + a2 k w2 + ... + amk wm ≥ ck
− a1k w1 − a2 k w2 − ... − amk wm ≥ − ck
... ... ...
... ... ...
a1nw1 + a2 nw2 + ... + amnwm ≥ cn,
w1, w2 ,..., wm ≥ 0

The two constraints

a1k w1 + a2 k w2 + ... + amk wm ≥ ck

and − a1k w1 − a2 k w2 − ... − amk wm ≥ − ck

are equivalent to the single equation

a1k w1 + a2 k w2 + ... + amk wm = ck .

Hence if the k-th variable of the primal is unrestricted, the k-th constraint in the dual is an
equality.

8.5 Existence Theorems


1. There exists a bounded (finite) optimum solution to a linear programming problem if
and only if there exists a feasible solution to both the primal and its dual.
2. If there does not exist any feasible solution to the dual (primal) but there exists at
least one to the primal (dual), then there does not exist any finite optimum solution
to the primal (dual).
3. If there does not exist any finite optimum solution to the primal (dual) then there
does not exist any feasible solution to the dual (primal).

Proof: 1. Suppose the primal and dual problems are respectively

Primal problem :  Max. Z = c x subject to A x ≤ b, x ≥ 0

Dual problem :  Min. ZD = b′ w subject to A′ w ≥ c′, w ≥ 0.


Again suppose there exists an optimum feasible solution to the primal problem. Then by
fundamental theorem of duality the dual problem has at least one feasible solution.

Conversely, let us assume that both the primal and the dual possess feasible solutions x̂
and ŵ respectively. Then c x̂ and b′ ŵ are both finite and c x̂ ≤ b′ ŵ, i.e., b′ ŵ acts as an
upper bound on c x̂, although not necessarily the least upper bound. Hence the primal
must have a finite optimum solution.

2. Consider the primal problem

Max. Z = c x,

subject to A x ≤ b, x ≥ 0

The corresponding dual problem is

Min. Z D = b′ w

subject to A′ w ≥ c′, w ≥ 0
Suppose there does not exist any feasible solution to the dual but there does exist
one to the primal, say x̂. Then c x̂ is the value of the primal objective function.

If we suppose that x̂ is an optimum solution to the primal then by the fundamental
theorem of duality there must exist a feasible solution to the dual, which contradicts
the hypothesis. Therefore no feasible solution to the primal can be optimal.

Similarly, we can start with the primal.


3. Suppose the primal problem does not possess any finite optimum solution.
Without loss of generality, it can be assumed that there exists a feasible solution to
the primal. Now if we assume that the dual problem possesses a feasible solution,
then by existence theorem 1 (above) the primal problem must possess a finite
optimum solution. This contradicts the hypothesis; hence the dual problem does
not possess a feasible solution.

8.6 Complementary Slackness Theorems


For the optimal feasible solutions of the primal and the dual systems
1. If the inequality occurs in the i-th relation of either system (the corresponding slack or
surplus variable is positive), then the i-th variable of its dual is zero.
2. If the j-th variable is positive in either system, the j-th relation of its dual holds as a
strict equality (i.e., the corresponding slack or surplus variable w m + j = 0).

Proof: The objective function of the primal and dual problems in explicit form can be
written as

Max. Z P = c1 x1 + c2 x2 + ... + cn x n (primal) ...(1)



Min. Z D = b1w1 + b2 w2 + ... + bmwm (dual) ...(2)

After introducing the non-negative slack variables x n + 1, x n + 2 ,..., x n + m the primal


constraint equations can be written as

a11 x1 + a12 x2 + ... + a1n x n + x n + 1 = b1 


a21 x1 + a22 x2 + ... + a2 n x n + x n + 2 = b2 

... ... ... ... 
... ... ... ... 
 ...(3)
am1 x1 + am2 x2 + ... + amn x n + x n + m = bm 
x1, x2 ,..., x n + m ≥ 0. 

Again introducing the non-negative surplus variables wm + 1, wm + 2 ,..., wm + n the dual


constraint equations can be written as
a11w1 + a21w2 + ... + am1wm − wm + 1 = c1 
a12 w1 + a22 w2 + ... + am2 wm − wm + 2 = c2 

... ... ... ... 
... ... ... ... 
 ...(4)
a1nw1 + a2 nw2 + ... + amnwm − wm + n = cn 
w1, w2 ,..., wm, wm + 1,..., wm + n ≥ 0. 

Now, multiplying the equations (3) by w1, w2, ..., wm respectively and then adding the
resulting equations, we have

x1 ∑_{i=1}^{m} ai1 wi + x2 ∑_{i=1}^{m} ai2 wi + ... + xn ∑_{i=1}^{m} ain wi

+ xn+1 w1 + xn+2 w2 + ... + xn+m wm = ∑_{i=1}^{m} bi wi        ...(5)

Subtracting (5) from (1), we have

(c1 − ∑_{i=1}^{m} ai1 wi) x1 + (c2 − ∑_{i=1}^{m} ai2 wi) x2 + ... + (cn − ∑_{i=1}^{m} ain wi) xn

− w1 xn+1 − w2 xn+2 − ... − wm xn+m = ZP − ∑_{i=1}^{m} bi wi = ZP − ZD        ...(6)

From (4), we have

− wm+j = cj − (a1j w1 + a2j w2 + ... + amj wm), j = 1, 2, ..., n

or − wm+j = cj − ∑_{i=1}^{m} aij wi for all j = 1, 2, ..., n.        ...(7)

Using (7) in (6), we have

− (wm + 1 x1 + wm + 2 x2 + ... + wm + n x n)

− (w1 x n + 1 + w2 x n + 2 + ... + wm x n + m) = Z P − Z D ...(8)

Now if x * = [ x1*, x2*,..., x n*] and w * = [w1*, w2*,..., wm*] be the optimal solutions to the
primal and the dual problems respectively, then by Duality theorem

Z P * = Z D*

Thus for the optimal solutions x * and w* of the primal and dual problems the
corresponding slack and surplus variables are

x*n+ i ≥ 0, i = 1, 2,...., m and w*m+ j ≥ 0, j = 1, 2,...., n.

From equation (8), we have

(w*m+1 x1* + w*m+2 x2* + ... + w*m+n xn*) + (w1* x*n+1 + w2* x*n+2 + ... + w*m x*n+m) = 0        ...(9)

Since all the variables in (9) are non-negative, their product terms in (9) are also
non-negative. For the validity of relation (9) each term must individually be equal to zero,

i.e., w*m+j x*j = 0 (∀ j = 1, 2, ..., n)        ...(10)

and w*i x*n+i = 0 (∀ i = 1, 2, ..., m)        ...(11)

1. From (11), if x*n+i > 0 then we must have w*i = 0, i.e., if the slack variable in the i-th
relation of the primal is positive then the i-th variable of the dual is zero.
Again from (10), if w*m+j > 0 then we must have x*j = 0, i.e., if the surplus variable in
the j-th relation of the dual is positive then the j-th variable of the primal (dual of the
dual) is zero.
This proves part (1) of the theorem.

2. Also from (10), if x_j* > 0, then we must have w*_{m+j} = 0, i.e., if the j-th variable in the
primal is positive then the j-th relation in the dual holds as a strict equality (as
w*_{m+j} = 0).
Again from (11), if w_i* > 0, then x*_{n+i} = 0, i.e., if the i-th variable in the dual is
positive then the i-th relation in the primal (dual of the dual) holds as a strict equality
(as x*_{n+i} = 0).
This proves part (2) of the theorem.

Alternative Statement : A necessary and sufficient condition for any pair of feasible
solutions to the primal and dual to be optimal is that

w_i x_{n+i} = 0, i = 1, 2, ..., m, where x_{n+i} is the slack variable in the primal,

and x_j w_{m+j} = 0, j = 1, 2, ..., n, where w_{m+j} is the surplus variable for the dual.
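These conditions are easy to verify numerically. The sketch below (an illustration, not part
of the text; it assumes Python with NumPy and SciPy available) solves the primal-dual pair
of Example 4 of this section with SciPy's general-purpose LP routine linprog and checks
the two complementary slackness products :

import numpy as np
from scipy.optimize import linprog

# Primal (data of Example 4 below): Max Z_P = c.x  s.t.  A x <= b, x >= 0.
c = np.array([30.0, 23.0, 29.0])
A = np.array([[6.0, 5.0, 3.0],
              [4.0, 2.0, 5.0]])
b = np.array([26.0, 7.0])

primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 3)  # linprog minimizes
# Dual: Min Z_D = b.w  s.t.  A'w >= c, w >= 0  (rewritten as -A'w <= -c).
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 2)

x, w = primal.x, dual.x
slack = b - A @ x          # primal slack variables x_{n+i}
surplus = A.T @ w - c      # dual surplus variables w_{m+j}
print(c @ x, b @ w)        # 80.5  80.5   (Max Z_P = Min Z_D)
print(w * slack)           # [0. 0.]      (w_i x_{n+i} = 0)
print(x * surplus)         # [0. 0. 0.]   (x_j w_{m+j} = 0)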

Primal and Dual Correspondence : With the help of the duality theorems developed
in articles 8.5 and 8.6 we note the following correspondence between the primal and the
dual problems :

S.No.  Primal                                    Dual
1.     Objective function Max. Z_P.              Objective function Min. Z_D.
2.     Requirement vector.                       Price vector.
3.     Coefficient matrix A.                     Transpose of the coefficient matrix, A' or A^T.
4.     Constraints with ≤ sign.                  Constraints with ≥ sign.
5.     Relation.                                 Variable.
6.     i-th inequality.                          i-th variable w_i ≥ 0.
7.     i-th constraint an equality.              i-th variable w_i unrestricted in sign.
8.     Variable.                                 Relation.
9.     i-th variable x_i ≥ 0.                    i-th relation an inequality.
10.    i-th variable x_i unrestricted in sign.   i-th constraint a strict equality.
11.    i-th slack variable positive.             i-th variable zero.
12.    i-th variable zero.                       i-th surplus variable positive.
13.    Finite optimal solution.                  Finite optimal solution with equal
                                                 optimal value of objective function.
14.    Unbounded solution.                       No solution or an unbounded solution.
15.    No finite optimal solution.               No feasible solution.

8.7 Rules for Obtaining the Solution to the Dual From


the Final Simplex Table of the Primal and Vice-versa
From the final simplex table of the primal problem we can also read the optimal solution
of the dual and vice-versa. For this we observe the following rules :

1. The optimal value of the primal objective function is equal to the optimal value of
the dual objective function.
i.e., Max. Z_P = Min. Z_D.

This has been proved in Duality Theorem.

2. The Δ_j's (Δ_j = c_j − Z_j) with sign changed for the slack (or surplus) variables in the
final simplex table for the primal are the values of the corresponding optimal dual
variables in the final simplex table for the dual.

Proof: In the final simplex table (for the primal), corresponding to slack and surplus
variables we have c_j = 0, so that

−Δ_j = −(c_j − Z_j) = −(0 − Z_j) = Z_j = c_B Y_j = c_B B^{−1} α_j,

where B is the optimal basis and c_B the corresponding price vector.

But, for the i-th slack variable, α_j becomes the unit vector e_i with 1 at the i-th place.

∴ −Δ_j = c_B B^{−1} e_i = (w_1, w_2, ..., w_m) e_i = w_i,

where w_i is the i-th dual variable.

3. If either problem has an unbounded solution, then the other has no feasible
solution.

Note : It is always advantageous to apply the simplex method to the problem
having the smaller number of constraints. Therefore, we first solve the problem
(primal or dual) with the smaller number of constraints by the simplex method and then
read the solution of the other from the final simplex table according to the rules
described above.
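Rule 2 can be checked directly with a little linear algebra: the optimal dual variables are
w = c_B B^{−1}, where B is the optimal basis matrix. A minimal NumPy sketch (assuming the
data of Example 4 below, whose optimal basis consists of the slack x_4 and the variable x_2) :

import numpy as np

# Optimal basis of Example 4: the columns of x4 (slack) and x2.
B   = np.array([[1.0, 5.0],
                [0.0, 2.0]])   # [alpha_4, alpha_2]
c_B = np.array([0.0, 23.0])    # costs of the basic variables (x4, x2)

w = c_B @ np.linalg.inv(B)     # dual solution, equal to the negated Delta_j's
print(w)                       # [ 0.  11.5]  i.e.  w1 = 0, w2 = 23/2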

8.8 Some Useful Aspects of Duality


In some cases it is easier to solve a linear programming problem through its dual. Since the
number of slack and surplus variables equals the number of constraints, a primal with far
more constraints than original variables leads to a dual with fewer constraints; in such a
case we should opt to solve the problem through its dual.

Duality plays an important role not only in linear programming but also in physics,
economics, engineering etc.

In physics, we use it in parallel circuit and series circuit theory.

In economics it is used in the formulation of input and output systems.

In game theory, we use it to find the optimal strategies of player B when he minimizes
his losses: using duality, player A's problem can be changed into player B's problem and
vice-versa.

Example 1: Write the dual of the following problem and solve it :

Max. Z = 4 x1 + 2 x2

subject to − x1 − x2 ≤ −3

− x1 + x2 ≤ −2, x1, x2 ≥ 0 .

Hence or otherwise write down the solution of the primal.

Solution: The given problem is in standard primal form.

∴ The dual to the given primal is

Min. Z D = −3 w1 − 2 w2

subject to − w1 − w2 ≥ 4

− w1 + w2 ≥ 2

w1, w2 ≥ 0.

Changing the objective function to maximization and introducing surplus variables
w_3 ≥ 0, w_4 ≥ 0 and artificial variables w_{a1}, w_{a2} ≥ 0, the above dual problem reduces to

Max. Z'_D = 3w_1 + 2w_2 + 0w_3 + 0w_4 − Mw_{a1} − Mw_{a2}

subject to  −w_1 − w_2 − w_3 + w_{a1} = 4
            −w_1 + w_2 − w_4 + w_{a2} = 2

Taking w_1 = 0 = w_2 = w_3 = w_4, we get w_{a1} = 4, w_{a2} = 2, which is the initial B.F.S. of the
dual problem.
problem.
Now applying the simplex method to obtain the optimal solution, we have

      c_j            3        2      0     0     −M    −M    Min. ratio
B     c_B    w_B    W_1      W_2    W_3   W_4   A_1   A_2   w_B/w_{i2}, w_{i2} > 0
A_1   −M     4      −1       −1     −1    0     1     0     —
A_2   −M     2      −1       1      0     −1    0     1     2 (min) →
Z'_D = −6M   Δ_j    3 − 2M   2      −M    −M    0     0
                             ↑
A_1   −M     6      −2       0      −1    −1    1     1
W_2   2      2      −1       1      0     −1    0     1
Z'_D = 4 − 6M  Δ_j  5 − 2M   0      −M    2 − M 0     −2

In the last simplex table no Δ_j > 0 and a non-zero artificial variable appears in the basis,
therefore the dual problem does not possess any feasible solution. Since the primal problem
clearly possesses a feasible solution (e.g., x_1 = 3, x_2 = 0), it follows from the existence
theorems that the given primal problem has an unbounded solution.
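This conclusion is easy to confirm with a solver (a sketch, not part of the original solution;
SciPy's linprog flags an unbounded problem with status code 3) :

from scipy.optimize import linprog

# Primal of Example 1: Max Z = 4x1 + 2x2, -x1 - x2 <= -3, -x1 + x2 <= -2.
res = linprog([-4.0, -2.0], A_ub=[[-1.0, -1.0], [-1.0, 1.0]],
              b_ub=[-3.0, -2.0], bounds=[(0, None)] * 2)
print(res.status)   # 3, i.e. the objective is unbounded above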

Example 2: Write the dual of the following linear programming problem and hence solve
it :
Max. Z = 3 x1 − 2 x2

subject to x1 ≤ 4

x2 ≤ 6

x1 + x2 ≤ 5

− x2 ≤ −1

x1, x2 ≥ 0 . [Meerut 2007 (BP), 08]

Solution: The given problem is in standard primal form. The dual of the given primal is

Min. Z D = 4 w1 + 6 w2 + 5 w3 − w4

subject to w1 + w3 ≥ 3

w2 + w3 − w4 ≥ −2,

w1, w2 , w3 , w4 ≥ 0.

Changing the dual problem to maximization, multiplying both sides of the second constraint
by −1 to make the R.H.S. positive, and introducing the surplus variable w_5 and slack
variable w_6 to change the inequalities into equations, the dual problem becomes

Max. Z D′ = −4 w1 − 6 w2 − 5 w3 + w4 + 0 w5 + 0 w6

subject to w1 + w3 − w5 = 3

− w2 − w3 + w4 + w6 = 2,

w1, w2 ,..., w6 ≥ 0.

Here we have not introduced the artificial variable in the first constraint equation as w1
will serve the purpose.

Taking w2 = 0, w3 = 0, w4 = 0, w5 = 0, we get w1 = 3, w6 = 2 which is the starting B.F.S.

The solution by simplex method is given in the following table :


      c_j           −4    −6    −5    1     0     0     Min. ratio
B     c_B    w_B   W_1   W_2   W_3   W_4   W_5   W_6   w_B/w_{i4}, w_{i4} > 0
W_1   −4     3     1     0     1     0     −1    0     —
W_6   0      2     0     −1    −1    1     0     1     2 (min) →
Z'_D = −12   Δ_j   0     −6    −1    1     −4    0
                                     ↑
W_1   −4     3     1     0     1     0     −1    0
W_4   1      2     0     −1    −1    1     0     1
Z'_D = −10   Δ_j   0     −5    0     0     −4    −1

Since all ∆ j ≤ 0, therefore the last table gives the optimal solution of the dual.

The optimal solution of the dual is

w1 = 3, w2 = 0, w3 = 0, w4 = 2,

Min. Z D = −Max. Z D′ = 10.

To read the solution of the primal from the final simplex table of the dual.

The optimal solution of the primal problem is

x1 = − ∆5 = 4, x2 = − ∆6 = 1 and Max. Z = Min. Z D = 10.

Example 3: Use duality to solve

Min. Z = 3 x1 + x2

subject to x1 + x2 ≥ 1

2 x1 + 3 x2 ≥ 2, x1, x2 ≥ 0 . [Meerut 2008 (BP)]

Solution: The given L.P.P. is in the standard primal form. The dual problem is given by

Max. Z D = w1 + 2 w2

subject to w1 + 2 w2 ≤ 3

w1 + 3 w2 ≤ 1, w1, w2 ≥ 0.

Introducing slack variables w3 and w4 to change the constraint inequalities into


equations, the dual problem becomes

Max. Z_D = w_1 + 2w_2 + 0w_3 + 0w_4

subject to w1 + 2 w2 + w3 =3

w1 + 3 w2 + w4 = 1,

w1, w2 , w3 , w4 ≥ 0.

Taking w1 = 0, w2 = 0, we get w3 = 3, w4 = 1, which is initial B.F.S.

The solution by simplex method is given in the following table :

      c_j          1      2     0     0      Min. ratio
B     c_B    w_B   W_1    W_2   W_3   W_4    w_B/W_2, w_{i2} > 0
W_3   0      3     1      2     1     0      3/2
W_4   0      1     1      3     0     1      1/3 (min) →
Z_D = 0      Δ_j   1      2     0     0
                          ↑
                                             w_B/W_1, w_{i1} > 0
W_3   0      7/3   1/3    0     1     −2/3   7
W_2   2      1/3   1/3    1     0     1/3    1 (min) →
Z_D = 2/3    Δ_j   1/3    0     0     −2/3
                   ↑
W_3   0      2     0      −1    1     −1
W_1   1      1     1      3     0     1
Z_D = 1      Δ_j   0      −1    0     −1

In the last table, since all Δ_j ≤ 0, the dual problem has the optimal solution

w_1 = 1, w_2 = 0, Max. Z_D = 1.

Now the solution of the original primal problem, read from the last simplex table of the dual, is

x_1 = −Δ_3 = 0, x_2 = −Δ_4 = 1, Min. Z = Max. Z_D = 1.

Example 4: Apply simplex method to solve the following problem :

Max. Z = 30 x1 + 23 x2 + 29 x3

subject to 6 x1 + 5 x2 + 3 x3 ≤ 26

4 x1 + 2 x2 + 5 x3 ≤ 7

x1, x2 , x3 ≥ 0 .

Hence or otherwise find the solution to the dual of the above problem.

Solution: The given problem is of maximization. Introducing slack variables x4 , x5 to


change the constraint inequalities into equations, we get

Max. Z = 30 x1 + 23 x2 + 29 x3 + 0 x4 + 0 x5 ,

subject to 6 x1 + 5 x2 + 3 x3 + x4 = 26

4 x1 + 2 x2 + 5 x3 + x5 = 7,

x1, x2 ,...., x5 ≥ 0.

Taking x1 = 0, x2 = 0, x3 = 0, we get x4 = 26, x5 = 7, which is the starting B.F.S.

The solution of the problem by simplex method is given in the following table :

      c_j           30    23    29      0     0      Min. ratio
B     c_B    x_B    Y_1   Y_2   Y_3     Y_4   Y_5    x_B/Y_1, y_{i1} > 0
Y_4   0      26     6     5     3       1     0      26/6
Y_5   0      7      4     2     5       0     1      7/4 (min) →
Z = c_B x_B = 0  Δ_j  30  23    29      0     0
                  ↑
                                                     x_B/Y_2, y_{i2} > 0
Y_4   0      31/2   0     2     −9/2    1     −3/2   31/4
Y_1   30     7/4    1     1/2   5/4     0     1/4    7/2 (min) →
Z = 105/2    Δ_j    0     8     −17/2   0     −15/2
                          ↑
Y_4   0      17/2   −4    0     −19/2   1     −5/2
Y_2   23     7/2    2     1     5/2     0     1/2
Z = 161/2    Δ_j    −16   0     −57/2   0     −23/2

In the last simplex table all Δ_j ≤ 0, therefore the optimal solution is

x_1 = 0, x_2 = 7/2, x_3 = 0 and Max. Z = 161/2.

Dual Problem
The given problem is in standard primal form. The dual of the given problem is

Min. Z D = 26 w1 + 7w2

subject to 6 w1 + 4 w2 ≥ 30

5 w1 + 2 w2 ≥ 23

3 w1 + 5 w2 ≥ 29,

w1, w2 ≥ 0.

To read the solution of the dual from the final simplex table of the primal problem :

w_1 = −Δ_4 = 0, w_2 = −Δ_5 = 23/2

and Min. Z_D = Max. Z = 161/2.

Example 5: Write the dual of the following problem :


Max. Z = 5 x1 − 2 x2 + 3 x3

subject to 2 x1 + 2 x2 − x3 ≥ 2

3 x1 − 4 x2 ≤ 3

x2 + 3 x3 ≤ 5, and x1, x2 , x3 ≥ 0 [Meerut 2006]

Hence or otherwise write the solution of the primal.

Solution: Writing the given L.P.P. into standard primal form,

we get

Max. Z = 5 x1 − 2 x2 + 3 x3

subject to

−2 x1 − 2 x2 + x3 ≤ −2

3 x1 − 4 x2 ≤ 3

x2 + 3 x3 ≤ 5, and x1, x2 , x3 ≥ 0

The dual of the given primal is

Min. Z D = −2 w1 + 3 w2 + 5 w3

subject to

−2 w1 + 3 w2 ≥ 5

−2 w1 − 4 w2 + w3 ≥ −2

w1 + 3 w3 ≥ 3, and w1, w2 , w3 ≥ 0.

Now to solve the dual problem by the simplex method, converting it to a maximization
problem and multiplying both sides of the second constraint by −1, we get

Max. Z'_D = 2w_1 − 3w_2 − 5w_3

subject to  −2w_1 + 3w_2 + 0w_3 ≥ 5
            2w_1 + 4w_2 − w_3 ≤ 2
            w_1 + 0w_2 + 3w_3 ≥ 3,
            w_1, w_2, w_3 ≥ 0.

Introducing surplus variables w_4, w_6, slack variable w_5 and artificial variables
w_{a1}, w_{a2}, the above dual problem changes to

Max. Z'_D = 2w_1 − 3w_2 − 5w_3 + 0w_4 + 0w_5 + 0w_6 − Mw_{a1} − Mw_{a2}

subject to  −2w_1 + 3w_2 + 0w_3 − w_4 + w_{a1} = 5
            2w_1 + 4w_2 − w_3 + 0w_4 + w_5 = 2
            w_1 + 0w_2 + 3w_3 − w_6 + w_{a2} = 3
            w_1, w_2, ..., w_6, w_{a1}, w_{a2} ≥ 0.

Taking w_1 = 0 = w_2 = w_3 = w_4 = w_6, we get w_5 = 2, w_{a1} = 5, w_{a2} = 3, which is the
starting B.F.S.

The solution of this dual problem by the simplex method is given in the following table :

      c_j            2        −3        −5        0     0      0     −M    −M     Min. ratio
B     c_B    w_B    W_1      W_2       W_3       W_4   W_5    W_6   A_1   A_2    w_B/w_{i2}, w_{i2} > 0
A_1   −M     5      −2       3         0         −1    0      0     1     0      5/3
W_5   0      2      2        4         −1        0     1      0     0     0      2/4 (min) →
A_2   −M     3      1        0         3         0     0      −1    0     1      —
Z'_D = −8M   Δ_j    2 − M    3M − 3    3M − 5    −M    0      −M    0     0
                             ↑
                                                                                 w_B/w_{i3}, w_{i3} > 0
A_1   −M     7/2    −7/2     0         3/4       −1    −3/4   0     1     0      14/3
W_2   −3     1/2    1/2      1         −1/4      0     1/4    0     0     0      —
A_2   −M     3      1        0         3         0     0      −1    0     1      1 (min) →
Z'_D = −3/2 − 13M/2   Δ_j   7/2 − 5M/2   0   15M/4 − 23/4   −M   3/4 − 3M/4   −M   0   0
                                       ↑
                                                                                 w_B/w_{i6}, w_{i6} > 0
A_1   −M     11/4   −15/4    0         0         −1    −3/4   1/4   1            11 (min) →
W_2   −3     3/4    7/12     1         0         0     1/4    −1/12 0            —
W_3   −5     1      1/3      0         1         0     0      −1/3  0            —
Z'_D = −29/4 − 11M/4   Δ_j   65/12 − 15M/4   0   0   −M   3/4 − 3M/4   3M/12 − 23/12   0
                                                              ↑
W_6   0      11     −15      0         0         −4    −3     1
W_2   −3     5/3    −2/3     1         0         −1/3  0      0
W_3   −5     14/3   −14/3    0         1         −4/3  −1     0
Z'_D = −85/3   Δ_j  −70/3    0         0         −23/3 −5     0

In the last simplex table all Δ_j ≤ 0, therefore it gives the optimal solution of the dual problem.

The optimal solution of the dual is

w_1 = 0, w_2 = 5/3, w_3 = 14/3

and Min. Z_D = −Max. Z'_D = 85/3.

Now the optimal solution of the primal is

x_1 = −Δ_4 = 23/3, x_2 = −Δ_5 = 5, x_3 = −Δ_6 = 0

and Max. Z = Min. Z_D = 85/3.

Example 6: One unit of product A contributes ` 7 and requires 3 units of raw material
and 2 hours of labour. One unit of product B contributes ` 5 and requires one unit of raw
material and one hour of labour. Availability of the raw material at present is 48 units
and there are 40 hours of labour.
(i) Formulate the linear programming problem.
(ii) Write the dual and solve it by simplex method. Also find the optimal product mix.
[Meerut 2009, 11 (BP), 12]

Solution: The linear programming problem corresponding to the given information is

Max. Z = 7 x1 + 5 x2

subject to 3 x1 + x2 ≤ 48

2 x1 + x2 ≤ 40

x1, x2 ≥ 0

The given L.P.P. is in standard primal form.

The dual problem can be written as

Min. Z D = 48w1 + 40 w2

subject to 3 w1 + 2 w2 ≥ 7

w1 + w2 ≥ 5,

w1, w2 ≥ 0

To solve the dual problem by the simplex method, changing the dual objective function to
maximization and introducing surplus variables w_3, w_4 and artificial variables
w_{a1}, w_{a2} respectively to change the constraint inequalities into equations, the dual
problem can be written as

Max. Z'_D = −48w_1 − 40w_2 + 0w_3 + 0w_4 − Mw_{a1} − Mw_{a2}

subject to  3w_1 + 2w_2 − w_3 + w_{a1} = 7
            w_1 + w_2 − w_4 + w_{a2} = 5
            w_1, w_2, w_3, w_4, w_{a1}, w_{a2} ≥ 0.

Taking w_1 = 0 = w_2 = w_3 = w_4, we get w_{a1} = 7, w_{a2} = 5, which is the starting B.F.S.

The solution by the simplex method is given in the following table :

      c_j            −48       −40        0           0       −M           −M
B     c_B    w_B     W_1       W_2        W_3         W_4     A_1          A_2      Min. ratio
A_1   −M     7       3         2          −1          0       1            0        7/3 (min) →    (w_B/w_{i1})
A_2   −M     5       1         1          0           −1      0            1        5
Z'_D = −12M  Δ_j     4M − 48   3M − 40    −M          −M      0            0
                     ↑
W_1   −48    7/3     1         2/3        −1/3        0       1/3          0        7/2 (min) →    (w_B/w_{i2})
A_2   −M     8/3     0         1/3        1/3         −1      −1/3         1        8
Z'_D = −112 − 8M/3   Δ_j   0   M/3 − 8    M/3 − 16    −M      16 − 4M/3    0
                               ↑
W_2   −40    7/2     3/2       1          −1/2        0       1/2          0        —              (w_B/w_{i3})
A_2   −M     3/2     −1/2      0          1/2         −1      −1/2         1        3 (min) →
Z'_D = −140 − 3M/2   Δ_j   12 − M/2   0   M/2 − 20    −M      20 − 3M/2    0
                                          ↑
W_2   −40    5       1         1          0           −1      0            1
W_3   0      3       −1        0          1           −2      −1           2
Z'_D = −200  Δ_j     −8        0          0           −40     −M           40 − M

In the last table all ∆ j ≤ 0, therefore this solution is optimal.

∴ Solution of the dual problem is

w1 = 0, w2 = 5, Min. Z D = −Max. Z ′D = 200.

Hence the solution of the primal problem is

x1 = − ∆3 = 0, x2 = − ∆4 = 40 and Max. Z = 200.

Thus the optimal product mix is :

No units of product A and 40 units of product B, for a total contribution of ` 200.
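As a quick cross-check (a sketch using SciPy, not part of the original solution), the primal
L.P.P. of this example can be fed directly to a solver :

from scipy.optimize import linprog

res = linprog([-7.0, -5.0], A_ub=[[3.0, 1.0], [2.0, 1.0]],
              b_ub=[48.0, 40.0], bounds=[(0, None)] * 2)
print(res.x, -res.fun)   # [ 0. 40.]  200.0 -- the same product mix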
Exercise 8.2

1. Find the solution of the following problem by simplex method :

Max. Z = 40x_1 + 50x_2

subject to 2x_1 + 3x_2 ≤ 3, 8x_1 + 4x_2 ≤ 5, x_1, x_2 ≥ 0.
Write the dual of the given L.P.P. Also find the solution of the dual problem.

Use principle of duality to solve the following problems :

2. Min. Z = x1 − x2

subject to 2 x1 + x2 ≥ 2, − x1 − x2 ≥ 1 and x1, x2 ≥ 0.

3. Max. Z = 3 x1 + 2 x2
subject to 2 x1 + x2 ≤ 5, x1 + x2 ≤ 3 and x1, x2 ≥ 0. [Meerut 2010]

4. Max. Z = 2 x1 + x2

subject to x1 + 2 x2 ≤ 10, x1 + x2 ≤ 6

x1 − x2 ≤ 2, x1 − 2 x2 ≤ 1 and x1, x2 ≥ 0. [Meerut 2009 (BP); Gorakhpur 2007]

5. Min. Z = 2 x1 + 2 x2

subject to 2 x1 + 4 x2 ≥ 1, x1 + 2 x2 ≥ 1.

2 x1 + x2 ≥ 1, and x1, x2 ≥ 0. [Meerut 2011]

6. Max. Z = 3 x1 + 2 x2

subject to x1 + x2 ≥ 1, x1 + x2 ≤ 7,

x1 + 2 x2 ≤ 10, x2 ≤ 3 and x1, x2 ≥ 0. [Kanpur 2011; Meerut 2004, 12 (BP)]

7. Formulate the dual of the given L.P.P. and hence solve it.

Min. Z = 3 x1 − 2 x2 + 4 x3

subject to 3 x1 + 5 x2 + 4 x3 ≥ 7, 6 x1 + x2 + 3 x3 ≥ 4,

7 x1 − 2 x2 − x3 ≤ 10, x1 − 2 x2 + 5 x3 ≥ 3

4 x1 + 7 x2 − 2 x3 ≥ 2
and x1, x2 , x3 ≥ 0.

8. Use duality theory to solve the given L.P.P.

Min. Z = 4 x1 + 3 x2 + 6 x3

subject to x1 + x3 ≥ 2, x2 + x3 ≥ 5 and x1, x2 , x3 ≥ 0.

9. Solve the following L.P.P. using duality theory :

Min. Z = 50 x1 − 80 x2 − 140 x3 ,

subject to x1 − x2 − 3 x3 ≥ 4, x1 − 2 x2 − 2 x3 ≥ 3
and x1, x2 , x3 ≥ 0.

10. Apply the principle of duality to solve the following L.P.P. :

Max. Z = 3 x1 − 2 x2

subject to x1 + x2 ≤ 5, x1 ≥ 4, 1 ≤ x2 ≤ 6
and x1, x2 ≥ 0.

11. Solve the following primal problem and its dual by simplex method :

Max. Z = 5 x1 + 12 x2 + 4 x3

subject to x1 + 2 x2 + x3 ≤ 5, 2 x1 − x2 + 3 x3 = 2, x1, x2 , x3 ≥ 0.
Verify that the solution of the primal can be read from the optimal table of the dual
and vice-versa. [Meerut 2007]

12. Write the dual of the following primal :

Max. Z = 40 x1 + 35 x2

subject to 2 x1 + 3 x2 ≤ 60, 4 x1 + 3 x2 ≤ 96, x1, x2 ≥ 0.

Solve the primal and the dual by simplex method. Compare the optimal solutions of
the two problems. [Meerut 2006 (BP)]

13. Use duality to solve the following L.P.P. :

Max. Z = 4 x1 + 3 x2

subject to x1 ≤ 6, x2 ≤ 8, x1 + x2 ≤ 7, 3 x1 + x2 ≤ 15, − x2 ≤ 1 and x1, x2 ≥ 0.

14. Find the dual of the following L.P.P. :


Max. Z = 2 x1 − x2
subject to x1 + x2 ≤ 10, − 2 x1 + x2 ≤ 2
4 x1 + 3 x2 ≥ 12, x1, x2 ≥ 0.

Solve the primal problem by simplex method and deduce from it the solution to the
dual problem.

15. Find the dual of the following problem :


Min. Z = 6 x + 5 y + 2 z
subject to x + 3 y + 2 z ≥ 5, 2 x + 2 y + z ≥ 2,
4 x − 2 y + 3 z ≥ −1, x, y, z ≥ 0.
Hence or otherwise solve the primal problem.

16. Use duality to solve the problem

Min. Z = 10 x1 + 6 x2 + 2 x3

subject to − x1 + x2 + x3 ≥ 1, 3 x1 + x2 − x3 ≥ 2, and x1, x2 , x3 ≥ 0.

17. Write the dual of the following problem :

Max. Z = 2 x1 + 3 x2 ,

subject to 2 x1 + 2 x2 ≤ 10, 2 x1 + x2 ≤ 6,

x1 + 2 x2 ≤ 6, x1, x2 ≥ 0.
Solve the primal and then find the solution to the dual.

18. Using duality solve the following L.P.P. :

Max. Z = 0.7 x1 + 0.5 x2

subject to x1 ≥ 4, x2 ≥ 6, x1 + 2 x2 ≥ 20, 2 x1 + x2 ≥ 18

x1, x2 ≥ 0. [Gorakhpur 2010]

19. Using duality solve the following L.P.P. :

Max. Z = 2 x1 + x2

subject to − x1 + 2 x2 ≤ 2, x1 + x2 ≤ 4, x1 ≤ 3, x1, x2 ≥ 0. [Gorakhpur 2009]

20. A company makes three products X , Y , Z out of three raw materials A, B, C. The
number of units of raw materials required to produce one unit of the product is as
given in the following table :

Raw material X Y Z

A 1 2 1

B 2 1 4

C 2 5 1

The unit profit contributions of the products X, Y and Z are ` 40, 25 and 50
respectively. The numbers of units of raw materials available are 36, 60 and 45
respectively.
(i) Determine the product mix that will maximize the total profit.
(ii) Through the final simplex table write the solution to the dual problem.
Answers

1. x_1 = 3/16, x_2 = 7/8, max. Z = 51.25;
   Dual : w_1 = 15, w_2 = 5/4, min. Z_D = 51.25
2. No feasible solution.
3. x_1 = 2, x_2 = 1, max. Z = 8
4. x_1 = 4, x_2 = 2, max. Z = 10
5. x_1 = 1/3, x_2 = 1/3, min. Z = 4/3
6. x_1 = 7, x_2 = 0, max. Z = 21
7. No feasible solution.
8. x_1 = 0, x_2 = 3, x_3 = 2, min. Z = 21
9. No feasible solution.
10. x_1 = 4, x_2 = 1, max. Z = 10
11. x_1 = 9/5, x_2 = 8/5, x_3 = 0, max. Z = 28 1/5;
    Dual : w_1 = 29/5, w_2 = −2/5, min. Z_D = 28 1/5
12. x_1 = 18, x_2 = 8, max. Z = 1000;
    Dual : w_1 = 10/3, w_2 = 25/3, min. Z_D = 1000
13. x_1 = 4, x_2 = 3, max. Z = 25
14. x_1 = 10, x_2 = 0, max. Z = 20;
    Dual : w_1 = 2, w_2 = 0, w_3 = 0, min. Z_D = 20
15. x = 0, y = 0, z = 5/2, min. Z = 5
16. x_1 = 1/4, x_2 = 5/4, x_3 = 0, min. Z = 10
17. x_1 = 2, x_2 = 2, max. Z = 10;
    Dual : w_1 = 0, w_2 = 1/3, w_3 = 4/3, min. Z_D = 10
18. Unbounded.
19. x_1 = 3, x_2 = 1, max. Z = 7
20. x = 20, y = 0, z = 5, max. P = 1050;
    Dual : r = 0, s = 10, t = 10, min. R = 1050

8.9 Dual Simplex Method


For a L.P.P. (maximization problem) the optimality criterion of the simplex method,
c_j − Z_j = c_j − c_B B^{−1} α_j ≤ 0 for all j, where B is the basis, depends only on α_j and c_j and is
independent of the requirement vector b. Thus a basic solution with all c_j − Z_j ≤ 0
need not be feasible, but any basic feasible solution with all c_j − Z_j ≤ 0 will certainly be an
optimal solution. The dual simplex method exploits this remark. Thus, whereas the
regular simplex method starts with a basic feasible but non-optimal solution and
proceeds towards optimality, the dual simplex method starts with a basic infeasible but
optimal solution and proceeds towards feasibility.

The dual simplex algorithm was discovered by C.E. Lemke, a student of Charnes, while
applying the simplex method to the dual of a L.P.P.

8.10 Advantage of Dual Simplex Algorithm


The advantage of the dual simplex algorithm over the other methods is that it does
not require any artificial variables. Hence a great deal of computational labour is saved
whenever this method is applicable.

8.11 Computational Procedure of the


Dual Simplex Algorithm
[Meerut 2008]
Step I : To put the given L.P.P. in standard primal form
1. If the problem is of minimization, convert it into a maximization problem.
2. Write all the constraints in the form of inequalities involving the ≤ sign.

Note that by doing so some b_i's may change to negative values.

Step II : To find initial basic solution


Introduce the slack variables in the constraints to reduce them to equalities. Then to find
the initial basic solution take all the given variables equal to zero and calculate the values
of the slack variables. This solution will be the starting (initial) basic solution, which
may not be feasible. Let X B = ( x B1, x B2 ,..., x Bm) be the initial basic solution
corresponding to the basis matrix B = (β1, β2 ,..., β m).

Step III : Construction of the starting simplex table


Construct the simplex table as usual in simplex method.

Step IV : To test the initial solution for optimality


Compute Δ_j = c_j − Z_j = c_j − c_B Y_j for each column.

1. If all Δ_j ≤ 0 and all x_{Bi} are non-negative, the solution found above is an optimum
basic feasible solution.
2. If all Δ_j ≤ 0 and at least one x_{Bi} is negative, then proceed to step V.
3. If any Δ_j > 0 then the method fails.

Step V : To find the vectors entering (incoming) and leaving (outgoing) the basis

Here the outgoing vector is determined first.

To determine the outgoing vector : β_r, i.e., the r-th column in the basis (the
corresponding basic variable x_{Br}), is the outgoing vector if

x_{Br} = Min. {x_{Bi} : x_{Bi} < 0}.

To determine the incoming vector (α_k) : If β_r is the outgoing vector, then α_k is taken
as the entering (incoming) vector for the value of k for which

Δ_k / y_{rk} = Min._j {Δ_j / y_{rj} : y_{rj} < 0}.

If all y_{rj} ≥ 0, then the problem has no F.S.

Step VI : Test of optimality

If, on entering the vector α_k in place of β_r in the basis, all basic variables become
non-negative, then this solution is an optimal F.S. But if at least one basic variable is
negative, then this solution is not an optimal F.S. In that case repeat steps V and VI
iteratively till an optimal F.S. is obtained.
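The procedure above is mechanical enough to automate. The following is a minimal NumPy
sketch (an illustration written for this article, not the book's notation) : it assumes the
problem has already been put in the standard primal form of Step I, i.e., Max Z = c.x
subject to Ax ≤ b, x ≥ 0 with all c_j ≤ 0, so that the all-slack starting basis satisfies the
optimality criterion.

import numpy as np

def dual_simplex(c, A, b, max_iter=100):
    """Dual simplex for Max c.x s.t. A x <= b, x >= 0, assuming c <= 0."""
    m, n = A.shape
    T = np.hstack([A.astype(float), np.eye(m)])   # rows of the tableau
    cost = np.concatenate([c, np.zeros(m)])       # costs of all n+m variables
    basis = list(range(n, n + m))                 # start with the slack basis
    xB = b.astype(float).copy()
    for _ in range(max_iter):
        if (xB >= -1e-9).all():                   # feasible, hence optimal
            x = np.zeros(n + m)
            x[basis] = xB
            return x[:n], cost[:n] @ x[:n]
        r = int(np.argmin(xB))                    # outgoing row: most negative x_Br
        delta = cost - cost[basis] @ T            # Delta_j = c_j - Z_j  (all <= 0)
        neg = T[r] < -1e-9
        if not neg.any():
            raise ValueError("no feasible solution exists")
        ratios = np.full(n + m, np.inf)
        ratios[neg] = delta[neg] / T[r][neg]      # Delta_j / y_rj for y_rj < 0
        k = int(np.argmin(ratios))                # incoming column: minimum ratio
        xB[r] /= T[r, k]; T[r] /= T[r, k]         # pivot on (r, k)
        for i in range(m):
            if i != r:
                xB[i] -= T[i, k] * xB[r]
                T[i] -= T[i, k] * T[r]
        basis[r] = k
    raise RuntimeError("iteration limit reached")

# Example 1 below in this form: Max Z_P = -3x1 - x2,
# subject to -x1 - x2 <= -1, -2x1 - 3x2 <= -2.
x, z = dual_simplex(np.array([-3.0, -1.0]),
                    np.array([[-1.0, -1.0], [-2.0, -3.0]]),
                    np.array([-1.0, -2.0]))
print(x, -z)   # [0. 1.]  1.0, i.e. Min Z = 1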

Example 1: Solve the following L.P.P. by the dual simplex algorithm :


Min. Z = 3 x1 + x2

subject to x1 + x2 ≥ 1

2 x1 + 3 x2 ≥ 2

and x1, x2 ≥ 0 . [Meerut 2006; Gorakhpur 2008, 11]

Solution: For a clear understanding of the procedure we shall solve this example step
by step.

Step I : First we write the given L.P.P. in the standard primal form.

Max. Z P = − Z = −3 x1 − x2

subject to − x1 − x2 ≤ −1

−2 x1 − 3 x2 ≤ −2

and x1, x2 ≥ 0

Since the objective function is of maximization and all c_j ≤ 0,
it is possible to find an infeasible but optimal basic solution, and hence we can solve
this problem by the dual simplex algorithm.

Step II : Introducing the slack variables x3 and x4 the constraints reduce to the following
equalities :

− x1 − x2 + x3 = −1

−2 x1 − 3 x2 + x4 = −2

Taking x1 = 0 = x2 , we get x3 = −1, x4 = −2, which is the starting basic infeasible solution
to the primal.

Step III : The starting simplex table is as follows :

      c_j              −3    −1    0          0
B     c_B    X_B(=x_B)  Y_1   Y_2   Y_3(β_1)   Y_4(β_2)
Y_3   0      −1        −1    −1    1          0
Y_4   0      −2        −2    −3    0          1    →
Z_P = c_B x_B = 0  Δ_j  −3    −1    0          0
                              ↑

Step IV : Δ_1 = c_1 − Z_1 = c_1 − c_B Y_1 = −3

Δ_2 = c_2 − Z_2 = c_2 − c_B Y_2 = −1

Thus, the starting solution x_1 = 0, x_2 = 0, x_3 = −1, x_4 = −2 is a basic solution which is
infeasible but optimal.

Step V : To find the improved solution we shall determine the leaving vector and
entering vector to the basis.

To determine the leaving vector (β r )

Since x_{Br} = Min. {x_{Bi} : x_{Bi} < 0} = Min. {x_{B1}, x_{B2}} = Min. {−1, −2} = −2 = x_{B2},

∴ r = 2, i.e., β_2 (= Y_4) is the leaving vector.

To determine the entering vector (α_k) :

Δ_k / y_{rk} = Δ_k / y_{2k} = Min._j {Δ_j / y_{2j} : y_{2j} < 0}
            = Min. {Δ_1/y_{21}, Δ_2/y_{22}} = Min. {−3/−2, −1/−3} = 1/3 = Δ_2/y_{22}

∴ k = 2, i.e., α_2 (= Y_2) is the entering vector.

∴ The key element is y_{22} = −3.

Proceeding as usual the second simplex table is as follows :

      c_j              −3     −1        0         0
B     c_B    X_B(=x_B)  Y_1    Y_2(β_2)  Y_3(β_1)  Y_4
Y_3   0      −1/3      −1/3   0         1         −1/3   →
Y_2   −1     2/3       2/3    1         0         −1/3
Z_P = −2/3   Δ_j       −7/3   0         0         −1/3
                                                  ↑

The solution given in this table is x_1 = 0, x_2 = 2/3, x_3 = −1/3, x_4 = 0, which is infeasible
but optimal, hence it can be improved further, for which we again find the leaving and
entering vectors.

To determine the leaving vector (β_r) :

Since x_{Br} = Min. {x_{Bi} : x_{Bi} < 0} = Min. {x_{B1}} = −1/3 = x_{B1},

∴ r = 1, i.e., β_1 (= Y_3) is the leaving vector.

To determine the entering vector (α_k) :

Δ_k / y_{rk} = Δ_k / y_{1k} = Min._j {Δ_j / y_{1j} : y_{1j} < 0} = Min. {Δ_1/y_{11}, Δ_4/y_{14}}
            = Min. {(−7/3)/(−1/3), (−1/3)/(−1/3)} = Min. {7, 1} = 1 = Δ_4/y_{14}

∴ k = 4, i.e., α_4 (= Y_4) is the entering vector.

∴ The key element is y_{14} = −1/3.
Proceeding as usual the third simplex table is as follows :

      c_j              −3    −1        0     0
B     c_B    X_B(=x_B)  Y_1   Y_2(β_2)  Y_3   Y_4(β_1)
Y_4   0      1         1     0         −3    1
Y_2   −1     1         1     1         −1    0
Z_P = −1     Δ_j       −2    0         −1    0

Since all Δ_j ≤ 0 and all x_{Bi} > 0, the solution is optimal and feasible, which is

x_1 = 0, x_2 = 1, max. Z_P = −1, i.e., min. Z = 1.

The above solution in different steps can be more conveniently represented by a single
table as shown below :

      c_j             −3      −1     0      0      Min. x_{Br}, x_{Br} < 0
B     c_B    x_B     Y_1     Y_2    Y_3    Y_4
Y_3(β_1)  0   −1      −1      −1     1      0      —
Y_4(β_2)  0   −2      −2      −3     0      1      −2 (x_{B2}) →
Z_P = c_B x_B = 0  Δ_j  −3     −1     0      0
Min. ratio Δ_j/y_{2j}, y_{2j} < 0 :   −3/−2 = 3/2   −1/−3 = 1/3 ↑   —   —
Y_3(β_1)  0   −1/3    −1/3    0      1      −1/3   −1/3 (x_{B1}) →
Y_2(β_2)  −1  2/3     2/3     1      0      −1/3
Z_P = −2/3   Δ_j      −7/3    0      0      −1/3
Min. ratio Δ_j/y_{1j}, y_{1j} < 0 :   (−7/3)/(−1/3) = 7   —   —   (−1/3)/(−1/3) = 1 ↑
Y_4   0      1        1       0      −3     1      All x_{Br} > 0
Y_2   −1     1        1       1      −1     0
Z_P = −1     Δ_j      −2      0      −1     0

From the last table x_1 = 0, x_2 = 1, x_3 = 0, x_4 = 1, which is feasible and optimal as x_j ≥ 0 and
Δ_j ≤ 0 for all j = 1, 2, 3, 4.

Hence the optimal solution of the given L.P.P. is x_1 = 0, x_2 = 1, Min. Z = −Max. Z_P = 1.

Example 2: Solve the following L.P.P. by the dual simplex algorithm :


Min. Z = 3 x1 + 2 x2 + x3 + 4 x4

subject to 2 x1 + 4 x2 + 5 x3 + x4 ≥ 10

3 x1 − x2 + 7 x3 − 2 x4 ≥ 2

5 x1 + 2 x2 + x3 + 6 x4 ≥ 15

and x1, x2 , x3 , x4 ≥ 0 .

Solution: Step I : The given L.P.P. in the standard primal form is

Max. Z P = −3 x1 − 2 x2 − x3 − 4 x4

subject to −2 x1 − 4 x2 − 5 x3 − x4 ≤ −10

−3 x1 + x2 − 7 x3 + 2 x4 ≤ −2

−5 x1 − 2 x2 − x3 − 6 x4 ≤ −15

and x1, x2 , x3 , x4 ≥ 0.

Since the objective function is of maximization and all c_j < 0, we can solve this L.P.P. by
the dual simplex algorithm.

Step II : Introducing the slack variables x5 , x6 and x7 the constraints of the above
problem reduce to the following equalities :

−2 x1 − 4 x2 − 5 x3 − x4 + x5 = −10

−3 x1 + x2 − 7 x3 + 2 x4 + x6 = −2

−5 x1 − 2 x2 − x3 − 6 x4 + x7 = −15

Taking x1 = 0 = x2 = x3 = x4 , we get x5 = −10, x6 = −2, x7 = −15 which is the starting


basic solution to the primal and is infeasible.

The solution to the problem using dual simplex algorithm is given in the following table :

      c_j            −3      −2       −1      −4      0      0     0      Min. x_{Br}, x_{Br} < 0
B     c_B    x_B    Y_1     Y_2      Y_3     Y_4     Y_5    Y_6   Y_7
Y_5   0      −10    −2      −4       −5      −1      1      0     0      —
Y_6   0      −2     −3      1        −7      2       0      1     0      —
Y_7   0      −15    −5      −2       −1      −6      0      0     1      −15 (x_{B3}) →
Z = c_B x_B = 0  Δ_j  −3     −2       −1      −4      0      0     0
Min. ratio Δ_j/y_{3j}, y_{3j} < 0 :   −3/−5 = 3/5 (min) ↑   −2/−2 = 1   −1/−1 = 1   −4/−6 = 2/3
Y_5   0      −4     0       −16/5    −23/5   7/5     1      0     −2/5   −4 (x_{B1}) →
Y_6   0      7      0       11/5     −32/5   28/5    0      1     −3/5
Y_1   −3     3      1       2/5      1/5     6/5     0      0     −1/5
Z = −9       Δ_j    0       −4/5     −2/5    −2/5    0      0     −3/5
Min. ratio Δ_j/y_{1j}, y_{1j} < 0 :   —   (−4/5)/(−16/5) = 1/4   (−2/5)/(−23/5) = 2/23 (min) ↑   —   —   —   (−3/5)/(−2/5) = 3/2
Y_3   −1     20/23  0       16/23    1       −7/23   −5/23  0     2/23   All x_{Br} > 0
Y_6   0      289/23 0       153/23   0       84/23   −32/23 1     −1/23
Y_1   −3     65/23  1       6/23     0       29/23   1/23   0     −5/23
Z = −215/23  Δ_j    0       −12/23   0       −12/23  −2/23  0     −13/23

The solution given in the last table is

x_1 = 65/23, x_2 = 0, x_3 = 20/23, x_4 = 0 = x_5, x_6 = 289/23, x_7 = 0,

which is feasible and optimal (since all Δ_j ≤ 0).

Hence the optimal feasible solution of the given L.P.P. is

x_1 = 65/23, x_2 = 0, x_3 = 20/23, x_4 = 0

and Min. Z = −Max. Z_P = 215/23.

Example 3: Use dual simplex method to solve the following L.P.P. :

Min. Z = 6 x1 + 7 x2 + 3 x3 + 5 x4

subject to, 5 x1 + 6 x2 − 3 x3 + 4 x4 ≥ 12

x2 + 5 x3 − 6 x4 ≥ 10

2 x1 + 5 x2 + x3 + x4 ≥ 8

and x_1, x_2, x_3, x_4 ≥ 0 . [Meerut 2006 (BP)]

Solution: Step I : The given L.P.P. in the standard primal form is

Max. Z P = −6 x1 − 7 x2 − 3 x3 − 5 x4

subject to −5 x1 − 6 x2 + 3 x3 − 4 x4 ≤ −12

− x2 − 5 x3 + 6 x4 ≤ −10

−2 x1 − 5 x2 − x3 − x4 ≤ −8

and x1, x2 , x3 , x4 ≥ 0

Since the objective function is of maximization and all c_j < 0, we can solve this L.P.P. by
the dual simplex algorithm.

Step II : Introducing the slack variables x5 , x6 and x7 the constraints of the above
problem reduce to the following equalities :

−5 x1 − 6 x2 + 3 x3 − 4 x4 + x5 = −12

− x2 − 5 x3 + 6 x4 + x6 = −10

−2 x1 − 5 x2 − x3 − x4 + x7 = −8

Taking x1 = 0 = x2 = x3 = x4 , we get x5 = −12, x6 = −10, x7 = −8 is the starting basic


solution to the primal and is infeasible.

The solution to the problem using the dual simplex algorithm is given in the following table :

      c_j            −6      −7      −3       −5       0      0      0      Min. x_{Br}, x_{Br} < 0
B     c_B    x_B    Y_1     Y_2     Y_3      Y_4      Y_5    Y_6    Y_7
Y_5   0      −12    −5      −6      3        −4       1      0      0      −12 (x_{B1}) →
Y_6   0      −10    0       −1      −5       6        0      1      0      —
Y_7   0      −8     −2      −5      −1       −1       0      0      1      —
Z_P = c_B x_B = 0  Δ_j  −6    −7      −3       −5       0      0      0
Min. ratio Δ_j/y_{1j}, y_{1j} < 0 :   −6/−5 = 6/5   −7/−6 = 7/6 (min) ↑   —   −5/−4 = 5/4
Y_2   −7     2      5/6     1       −1/2     2/3      −1/6   0      0      —
Y_6   0      −8     5/6     0       −11/2    20/3     −1/6   1      0      −8 (x_{B2}) →
Y_7   0      2      13/6    0       −7/2     7/3      −5/6   0      1      —
Z_P = −14    Δ_j    −1/6    0       −13/2    −1/3     −7/6   0      0
Min. ratio Δ_j/y_{2j}, y_{2j} < 0 :   —   —   (−13/2)/(−11/2) = 13/11 (min) ↑   —   (−7/6)/(−1/6) = 7
Y_2   −7     30/11  25/33   1       0        2/33     −5/33  −1/11  0      All x_{Br} > 0
Y_3   −3     16/11  −5/33   0       1        −40/33   1/33   −2/11  0
Y_7   0      78/11  18/11   0       0        −21/11   −8/11  −7/11  1
Z_P = −258/11  Δ_j  −38/33  0       0        −271/33  −32/33 −13/11 0

The solution given in the last table is

x_1 = 0 = x_4 = x_5 = x_6, x_2 = 30/11, x_3 = 16/11 and x_7 = 78/11,

which is feasible and optimal (since all Δ_j ≤ 0).

Hence, the optimal feasible solution of the given L.P.P. is

x_1 = 0, x_2 = 30/11, x_3 = 16/11, x_4 = 0 and Min. Z = −Max. Z_P = 258/11.
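The dual_simplex sketch given after article 8.11 reproduces this result (assuming that
function is available in the session) :

import numpy as np

x, z = dual_simplex(np.array([-6.0, -7.0, -3.0, -5.0]),
                    np.array([[-5.0, -6.0, 3.0, -4.0],
                              [0.0, -1.0, -5.0, 6.0],
                              [-2.0, -5.0, -1.0, -1.0]]),
                    np.array([-12.0, -10.0, -8.0]))
print(x, -z)   # x = [0, 30/11, 16/11, 0],  Min Z = 258/11 = 23.4545...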


Exercise 8.3

1. Define the dual of a L.P.P.


2. Write a short note on duality theory.
3. Prove that the dual of the dual of a L.P.P. is the problem itself.
4. What are the advantages of duality method ?
5. What are the advantages of dual simplex algorithm ?
6. Write short note on “Reading the solution to the dual from the final simplex table
of the primal”.

Solve the following L.P.P. by the dual simplex algorithm :

7. Min. Z = x_1 + 2x_2
   subject to 2x_1 + x_2 ≥ 4, x_1 + 7x_2 ≥ 7 and x_1, x_2 ≥ 0.

8. Min. Z = x_1 + 2x_2 + 3x_3
   subject to 2x_1 − x_2 + x_3 ≥ 4, x_1 + x_2 + 2x_3 ≤ 8, x_2 − x_3 ≥ 2 and x_1, x_2, x_3 ≥ 0.

9. Max. Z = −2x_1 − 2x_2 − 4x_3
   subject to 2x_1 + 3x_2 + 5x_3 ≥ 2, 3x_1 + x_2 + 7x_3 ≤ 3, x_1 + 4x_2 + 6x_3 ≤ 5
   and x_1, x_2, x_3 ≥ 0.

10. Min. Z = 2x_1 + x_2
    subject to 2x_1 + x_2 ≥ 3, 4x_1 + 3x_2 ≥ 6, x_1 + 2x_2 ≥ 3 and x_1, x_2 ≥ 0.

11. Max. Z = −2 x1 − x3

subject to x1 − 2 x2 + 4 x3 ≥ 8

x1 + x2 − x3 ≥ 5

and x1, x2 , x3 ≥ 0

12. Can you apply Dual Simplex Algorithm to the following L.P.P. ?

Max. Z = 5 x1 + 3 x2

subject to 3 x1 + 5 x2 ≤ 15, 5 x1 + 2 x2 ≤ 10 and x1, x2 ≥ 0.

13. A diet-conscious housewife wishes to ensure certain minimum intakes of vitamins
A, B and C for the family. The minimum daily needs (quantities) of the vitamins A, B
and C for the family are respectively 30, 20 and 16 units. For the supply of these
minimum vitamin requirements, the housewife relies on two fresh foods. The first
one provides 7, 5, 2 units of the three vitamins per gram respectively and the second
one provides 2, 4, 8 units of the same three vitamins per gram of the food stuff
respectively. The first food stuff costs ` 3 per gram and the second ` 2 per gram. The
problem is how many grams of each food stuff should the housewife buy every day
to keep her food bill as low as possible ?
(i) Formulate the underlying L.P.P.
(ii) Write the “Dual” problem.
(iii) Solve the “Dual” problem by using simplex method.
(iv) Solve the primal problem graphically.
(v) Interpret the dual problem and its solution.

Multiple Choice Questions


1. In standard primal form if the problem is of maximization, all the constraints
involve the sign :
(a) ≥ (b) ≤
(c) = (d) Unrestricted
2. If the standard primal problem is of minimization, all the constraints involve the
sign :
(a) ≥ (b) ≤
(c) = (d) None of these
3. If a finite optimal feasible solution exists for the primal then the dual has :
(a) Unbounded solution (b) No solution
(c) A finite feasible optimal solution (d) None of these
4. If the primal problem has an unbounded solution, the dual problem has :
(a) A finite optimal feasible solution
(b) No solution
(c) Either no solution or an unbounded solution
(d) None of these
5. If the i-th slack variable of the primal is positive, then the i-th variable of the dual is :
(a) + ive (b) – ive
(c) Zero (d) Unrestricted
6. If both the primal and dual problems have finite optimal solutions and Z P , Z D are
the optimal values of the objective functions of the primal and dual respectively
then we have :
(a) ZP > ZD (b) ZP < ZD
(c) ZP = ZD (d) None of these

Fill in the Blank


1. If the primal problem is a maximization problem, its dual will be a ..................
problem.
2. If any constraint in the primal is a perfect equality, the corresponding dual variable
is .................. in sign.
3. If the primal problem has an unbounded solution, the dual has either no solution or
an .................. solution. [Meerut 2005]

4. If the primal has a finite optimal solution then the values of the objective functions
of the primal and dual are .................. .
5. If x is any feasible solution to the primal and w is any feasible solution to the dual
problem then Z p .................. Z D where Z P and Z D are the objective functions of
the primal and dual respectively.

True or False
1. The dual of a dual is the primal itself.
2. If the primal problem has a finite feasible solution, the dual problem has no
solution.
3. The coefficient matrix of the dual is obtained by transposing the coefficient matrix
of the primal.
4. If the primal is a maximization problem, the dual is also a maximization problem.
5. Requirement vector of the primal is the price vector of the dual.
6. In a standard primal problem, if all the constraints have the sign ≥, it is a
maximization problem.
Answers

Exercise 8.3
7. x_1 = 21/13, x_2 = 10/13, Min. Z = 41/13
8. x_1 = 3, x_2 = 2, x_3 = 0, Min. Z = 7
9. x_1 = 0, x_2 = 2/3, x_3 = 0, Max. Z = −4/3
10. x_1 = 3/5, x_2 = 6/5, Min. Z = 12/5
11. x_1 = 0, x_2 = 14, x_3 = 9, Max. Z = −9
12. No
13. If x_1, x_2 grams of the two food stuffs are purchased, then
(i) L.P.P. is Min. Z = 3x_1 + 2x_2,
    subject to 7x_1 + 2x_2 ≥ 30, 5x_1 + 4x_2 ≥ 20,
    2x_1 + 8x_2 ≥ 16 and x_1, x_2 ≥ 0
(ii) Dual : Max. Z_D = 30w_1 + 20w_2 + 16w_3
    subject to 7w_1 + 5w_2 + 2w_3 ≤ 3,
    2w_1 + 4w_2 + 8w_3 ≤ 2 and w_1, w_2, w_3 ≥ 0
(iii) w_1 = 5/13, w_2 = 0, w_3 = 2/13, Max. Z_D = ` 14
(iv) x_1 = 4, x_2 = 1, Min. Z = ` 14

Multiple Choice Questions


1. (b) 2. (a)
3. (c) 4. (c)
5. (c) 6. (c)

Fill in the Blank


1. minimization 2. unrestricted
3. unbounded 4. equal
5. ≤

True or False
1. True 2. False
3. True 4. False
5. True 6. False

9.1 Introduction
The 'Integer Programming Problem', abbreviated as I.P.P., is a special class of linear
programming problem (L.P.P.) where all or some of the variables in the optimal
solution are restricted to assume non-negative integer values.

Thus, the general I.P.P. can be stated as follows :

Optimize the linear function

Z = Σ_{j=1}^n c_j x_j        ...(1)

subject to the constraints

Σ_{j=1}^n a_{ij} x_j ≤ b_i,  i = 1, 2, ..., m        ...(2)

and x_j ≥ 0,

where some (or all) x_j are integers.

There are two types of Integer Programming Problems.



1. All Integer Programming Problem (All I.P.P.) : An I.P.P. is termed as all I.P.P.
(or pure I.P.P.) if all the variables in the optimal solution are restricted to assume
non-negative integer values. [Agra 2000, 01]

2. Mixed Integer Programming Problem (Mixed I.P.P.) : An I.P.P. is termed as


mixed I.P.P. if only some variables in the optimal solution are restricted to assume
non-negative integer values while the remaining variables are free to take any
non-negative values.

9.2 Importance or Need of I.P.P.


Quite often in business and industry the variables involved in many decision-making
situations are discrete in nature. For example, in a factory manufacturing trucks or cars,
the quantity (or number) manufactured can be a whole (discrete) number only, as a
fraction of a truck or car is of no use. In assignment problems and travelling salesman
problems the variables involved can assume integer values only. In the allocation of
goods, a shipment must involve a discrete number of trucks. In sequencing and routing
decisions we require discrete values of the variables.
Thus, we come across many integer programming problems and hence need some
systematic procedure for obtaining the exact optimal integer solution to such problems.

9.3 Solution of I.P.P.


A systematic procedure for solving an all I.P.P. was first developed by R.E. Gomory in
1958. He also extended the procedure to solve the mixed I.P.P. He derived algorithms to
find the optimum solution of a given I.P.P. in a finite number of steps, making use of the
familiar dual simplex method.

After this several algorithms came up to solve the I.P.P. An efficient method with a
relatively new approach, developed by A.H. Land and A.G. Doig, is the “Branch and Bound
Technique”.
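Modern solvers make such problems easy to experiment with, which is useful for checking
the hand computations that follow. The sketch below (assuming SciPy ≥ 1.9, whose linprog
accepts an integrality argument with the "highs" method) solves Example 1 of this chapter
both as an ordinary L.P.P. and as an I.P.P. :

import numpy as np
from scipy.optimize import linprog

# Example 1 below: Max Z = 3x2 s.t. 3x1 + 2x2 <= 7, x1 - x2 >= -2
# (the second constraint rewritten as -x1 + x2 <= 2).
c = np.array([0.0, -3.0])                       # minimize -Z
A = np.array([[3.0, 2.0], [-1.0, 1.0]])
b = np.array([7.0, 2.0])
bounds = [(0, None)] * 2

relaxed = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
integer = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs",
                  integrality=np.ones(2))       # 1 marks an integer variable
print(relaxed.x, -relaxed.fun)   # [0.6 2.6]  7.8 -- the LP optimum 39/5
print(integer.x, -integer.fun)   # x2 = 2, Z = 6  -- the integer optimum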

9.4 Gomory's All I.P.P. Method


[Meerut 2007]
In this method, the I.P.P. is first solved by the regular simplex method, ignoring the
integer condition on the variables. If all the variables in the optimum solution thus
obtained have integer values, the current solution is the desired optimum integer
solution; otherwise the L.P.P. under consideration is modified by inserting a new
constraint, known as “Gomory's constraint”, which cuts off some non-integer values of
the variables but does not eliminate any feasible integer solution. Then an optimum
solution to this modified I.P.P. is obtained by using the standard algorithm. If all the
variables in the solution so obtained are integers, then the optimum solution of the given
I.P.P. is attained; otherwise another Gomory constraint is inserted in the above L.P.P.
and again this new problem is solved to get an integer-valued optimum solution. This
procedure is repeated iteratively until the required integer-valued optimum solution is
obtained.

9.5 Construction of Gomory's Constraint and


Gomory's Cutting Plane
The construction of the Gomory's constraint is based on the fact that a solution which
satisfies the constraints in the given I.P.P. also satisfies any other constraint derived by
adding or subtracting two or more constraints or multiplying a constraint by a non-zero
number.

Now first we introduce two notations as follows :

[a] = the integral part of the number a,
      i.e., the greatest integer less than or equal to a,

and f = the positive fractional part of the number a;

thus we have a = [a] + f, where clearly 0 ≤ f < 1.

For example,
(i) if a = 4 1/3, then [a] = 4 and f = 1/3, so that 4 1/3 = 4 + 1/3;
(ii) if a = −4 1/3, then [a] = −5 and f = 2/3, so that −4 1/3 = −5 + 2/3.
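In code the two notions correspond to math.floor and the remainder (a small illustrative
snippet, not part of the text) :

import math

for a in (4 + 1/3, -(4 + 1/3)):
    integral = math.floor(a)   # [a], the greatest integer <= a
    f = a - integral           # positive fractional part, 0 <= f < 1
    print(integral, f)
# prints  4  0.333...   and   -5  0.666...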

Now we proceed to the construction of the Gomory constraint, as follows :

Let the optimum solution of the maximization L.P.P. (ignoring the condition of integer
values of the variables) obtained by the simplex method be expressed by the following
table. Note that in table 9.1 the basic variables x_{B1}, x_{B2}, ..., x_{Bm} are arranged in
order, for convenience.

Table 9.1
B     c_B      c_j :   c_1       c_2       ...  c_i       ...  c_m       c_{m+1}     ...  c_n
               x_B     Y_1(β_1)   Y_2(β_2)   ...  Y_i(β_i)   ...  Y_m(β_m)   Y_{m+1}     ...  Y_n
Y_1   c_{B1}   x_{B1}  1         0         ...  0         ...  0         y_{1,m+1}   ...  y_{1n}
Y_2   c_{B2}   x_{B2}  0         1         ...  0         ...  0         y_{2,m+1}   ...  y_{2n}
...
Y_i   c_{Bi}   x_{Bi}  0         0         ...  1         ...  0         y_{i,m+1}   ...  y_{in}
...
Y_m   c_{Bm}   x_{Bm}  0         0         ...  0         ...  1         y_{m,m+1}   ...  y_{mn}
values x_j :           x_{B1}    x_{B2}    ...  x_{Bi}    ...  x_{Bm}    0           ...  0

Let the i-th basic variable x_{Bi} be non-integer (note that 1 ≤ i ≤ m).

∴ Using the i-th row of the above table 9.1, we have

x_{Bi} = 0·x_1 + 0·x_2 + ... + 1·x_i + ... + 0·x_m + y_{i,m+1} x_{m+1} + ... + y_{in} x_n
       = x_i + Σ_{j=m+1}^n y_{ij} x_j

∴ x_i = x_{Bi} − Σ_{j=m+1}^n y_{ij} x_j        ...(1)

Let x_{Bi} = [x_{Bi}] + f_{Bi} and y_{ij} = [y_{ij}] + f_{ij},

where [x_{Bi}] = integral part of x_{Bi}, i.e., [x_{Bi}] ≤ x_{Bi},
[y_{ij}] = integral part of y_{ij}, i.e., [y_{ij}] ≤ y_{ij},
f_{Bi} = positive fractional part of x_{Bi}, i.e., 0 ≤ f_{Bi} < 1,
and f_{ij} = positive fractional part of y_{ij}, i.e., 0 ≤ f_{ij} < 1.

Thus, from (1), we have

x_i = {[x_{Bi}] + f_{Bi}} − Σ_{j=m+1}^n {[y_{ij}] + f_{ij}} x_j

or f_{Bi} − Σ_{j=m+1}^n f_{ij} x_j = x_i − [x_{Bi}] + Σ_{j=m+1}^n [y_{ij}] x_j        ...(2)

Now if the variables x_i (i = 1, 2, ..., m) and x_j (j = m+1, ..., n) are all integers, then the
R.H.S. of (2) is an integer and hence the L.H.S. f_{Bi} − Σ_{j=m+1}^n f_{ij} x_j of (2) must also
be an integer.

Since Σ_{j=m+1}^n f_{ij} x_j is non-negative, f_{Bi} − Σ_{j=m+1}^n f_{ij} x_j ≤ f_{Bi} < 1,

i.e., f_{Bi} − Σ_{j=m+1}^n f_{ij} x_j is an integer less than 1. Thus, it can either be zero or a
negative integer.

Hence, we have the inequality

f_{Bi} − Σ_{j=m+1}^n f_{ij} x_j ≤ 0

or −Σ_{j=m+1}^n f_{ij} x_j ≤ −f_{Bi}

or −Σ_{j∈R} f_{ij} x_j ≤ −f_{Bi},        ...(3)

where R is the set of indices corresponding to all non-basic variables.

This is called the Gomory constraint.

Introducing the non-negative slack variable x_{G1}, the above inequality reduces to the
constraint equation

−Σ_{j∈R} f_{ij} x_j + x_{G1} = −f_{Bi}.        ...(4)

By definition x_{G1} must also be an integer.

The constraint equation (4) is called the Gomory constraint equation or the Gomory
cutting plane.

Adding the Gomory constraint equation (4) to the optimum simplex table 9.1 we obtain
the following new table 9.2 :
Table 9.2
B      c_B      c_j :   c_1      c_2   ...  c_i      ...  c_m      c_{m+1}      ...  c_n      c_{G1} = 0
                x_B     Y_1(β_1)  Y_2(β_2) ... Y_i(β_i) ... Y_m(β_m)  Y_{m+1}      ...  Y_n      Y_{G1}(β_{m+1})
Y_1    c_{B1}   x_{B1}  1        0     ...  0        ...  0        y_{1,m+1}    ...  y_{1n}   0
Y_2    c_{B2}   x_{B2}  0        1     ...  0        ...  0        y_{2,m+1}    ...  y_{2n}   0
...
Y_i    c_{Bi}   x_{Bi}  0        0     ...  1        ...  0        y_{i,m+1}    ...  y_{in}   0
...
Y_m    c_{Bm}   x_{Bm}  0        0     ...  0        ...  1        y_{m,m+1}    ...  y_{mn}   0
Y_{G1} 0        −f_{Bi} 0        0     ...  0        ...  0        −f_{i,m+1}   ...  −f_{in}  1
values x_j :            x_{B1}   x_{B2} ... x_{Bi}   ...  x_{Bm}   0            ...  0        −f_{Bi}

Since −f_{Bi} is negative, the optimum solution given by the above table is not feasible,
hence we apply the dual simplex algorithm to obtain the optimum feasible solution.

If all the variables in the solution thus obtained are integers, the process ends;
otherwise we construct a second Gomory constraint from the resulting simplex table,
introduce it in that table and solve by the dual simplex algorithm. The process is repeated
until an integer-valued solution is obtained.

9.6 All-Integer Cutting Plane Algorithm i.e.,


Computational Procedure for the Solution of All
I.P.P. by Gomory Method
It consists of the following steps systematically :

Step 1 : If the problem is of minimization, convert it into a maximization problem.

Step 2 : Make all the b_i's positive.

Step 3 : Convert the constraints into equations by introducing the non-negative slack
and/or surplus variables.

Step 4 : Obtain the optimum solution of the given L.P.P., ignoring the integer condition
on the variables, by using the simplex algorithm.

Step 5 : Test the integrality of the optimum solution obtained in step 4. Now there
are two possibilities :
1. The optimum solution has all integer values; then the required solution has been
obtained.
2. The optimum solution does not have all integral values; then proceed to the next
step.

Step 6 : If only one variable, say x_k = x_{Bi}, has a fractional value, then corresponding to
the i-th row, in which this fractional variable lies in the optimal simplex table (obtained
in step 4), form the Gomory constraint by using the formula

−Σ_{j∈R} f_{ij} x_j ≤ −f_{Bi},        ...(1)

where R is the set of indices corresponding to all non-basic variables.

However, if more than one variable is fractional, then select that non-integral variable
which has the largest fractional part.

Introducing the slack variable, say x_{G1}, obtain the Gomory constraint equation

−Σ_{j∈R} f_{ij} x_j + x_{G1} = −f_{Bi}.

Step 7 : Add the Gomory constraint equation at the bottom of the optimal simplex
table obtained in step 4. The solution in this table will be an infeasible optimal
solution, as −f_{Bi} < 0 and Δ_j ≤ 0 for all j. Now use the dual simplex method to change
the infeasible solution to a feasible optimum solution. Here the slack variable x_{G1} will be
taken as the first leaving basic variable in the above table.

Step 8 : Test the integrality of the optimum feasible solution obtained in step 7. Now
again there are two possibilities :
1. The optimum solution obtained in step 7 has all integral values; then the required
solution is attained.
2. The optimum solution does not have all integral values. In this case repeat steps 6 to
8, until the required optimum solution is obtained.
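Step 6 is easily expressed in code. The sketch below (an illustration with a hypothetical
helper name, not the book's notation) builds the Gomory constraint from one fractional row
of an optimal tableau; the data used is the x_1-row of Example 1 below, where
x_{B1} = 3/5 with non-basic columns x_3, x_4 :

import math

def gomory_cut(x_Bi, row, nonbasic):
    """Return coefficients and R.H.S. of  -sum f_ij x_j <= -f_Bi."""
    f_Bi = x_Bi - math.floor(x_Bi)                 # fractional part of x_Bi
    coeffs = {j: -(row[j] - math.floor(row[j]))    # -f_ij for each non-basic j
              for j in nonbasic}
    return coeffs, -f_Bi

# x1-row of the optimal tableau of Example 1: y13 = 1/5, y14 = -2/5.
coeffs, rhs = gomory_cut(3/5, {3: 1/5, 4: -2/5}, nonbasic=[3, 4])
print(coeffs, "<=", rhs)   # coefficients -1/5 and -3/5, R.H.S. -3/5,
                           # i.e. the cut  -(1/5)x3 - (3/5)x4 <= -3/5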

Example 1: Solve the following L.P.P. by Gomory technique :

Maximize, Z = 3 x2 ,

Subject to the constraints


3 x1 + 2 x2 ≤ 7

x1 − x2 ≥ −2

x1, x2 ≥ 0 and are integers. [Meerut 2005]

Solution: We shall solve this example stepwise, so that the students may understand the
procedure.

Step 1 : The problem is of maximization.

Step 2 : Making all the bi's positive the constraints reduce to

3 x1 + 2 x2 ≤ 7

− x1 + x2 ≤ 2

Step 3 : Now the inequalities are converted to equalities by the introduction of slack
variables x3 and x4 as follows :

3 x1 + 2 x2 + x3 =7

− x1 + x2 + x4 =2

Step 4 : Now we solve the given L.P.P. by the simplex method, ignoring the integer
condition on the variables. Taking x_1 = 0, x_2 = 0, we get x_3 = 7, x_4 = 2, which is the
starting B.F.S. The solution of the problem by the simplex method is given in the
following table :

Table 9.3
      c_j          0     3     0      0      Min. ratio
B     c_B    x_B   Y_1   Y_2   Y_3    Y_4    x_B/Y_2, y_{i2} > 0
Y_3   0      7     3     2     1      0      7/2
Y_4   0      2     −1    1     0      1      2/1 (min) →
Z = 0        Δ_j   0     3     0      0
                         ↑
                                             x_B/Y_1, y_{i1} > 0
Y_3   0      3     5     0     1      −2     3/5 (min) →
Y_2   3      2     −1    1     0      1      —
Z = 6        Δ_j   3     0     0      −3
                   ↑
Y_1   0      3/5   1     0     1/5    −2/5
Y_2   3      13/5  0     1     1/5    3/5
Z = 39/5     Δ_j   0     0     −3/5   −9/5

Thus, the optimum solution obtained is

x_1 = 3/5, x_2 = 13/5 = 2 + 3/5, Z = 39/5.

Step 5 : Since the optimum solution obtained above does not have all integer values, we
proceed to the next step.

Step 6 : Construction of the first Gomory constraint.

Since the fractional parts in the values of x_1, x_2 are each equal to 3/5, we select at
random any one of these. Let us choose x_1 = x_{B1}, which lies in the first row of the last
part of the above optimum simplex table 9.3.

∴ Here i = 1, m = 2, n = 4.

∴ Putting these values in (1) of article 9.6, the corresponding Gomory constraint is
given by

−Σ_{j∈R} f_{1j} x_j ≤ −f_{B1}        ...(1)

Here, from the optimum simplex table, R = {3, 4},

x_{B1} = 3/5 ∴ f_{B1} = 3/5,

y_{13} = 1/5 and y_{14} = −2/5 = −1 + 3/5

∴ f_{13} = 1/5 and f_{14} = 3/5.

Substituting in (1), the first Gomory constraint is*

−f_{13} x_3 − f_{14} x_4 ≤ −f_{B1}

or −(1/5)x_3 − (3/5)x_4 ≤ −3/5.

Adding the non-negative slack variable x_{G1}, the corresponding Gomory constraint
equation is given by

−(1/5)x_3 − (3/5)x_4 + x_{G1} = −3/5.

Step 7 : Adding the above new constraint in the optimum simplex table, we get the
following table 9.4 :

Table 9.4
      c_j            0         3         0      0      0
B     c_B    x_B    Y_1(β_1)   Y_2(β_2)   Y_3    Y_4    Y_{G1}(β_3)
Y_1   0      3/5    1          0          1/5    −2/5   0
Y_2   3      13/5   0          1          1/5    3/5    0
Y_{G1} 0     −3/5   0          0          −1/5   −3/5   1    →
Z = 39/5     Δ_j    0          0          −3/5   −9/5   0
                                          ↑

* Another method of finding the Gomory constraint is as follows. Taking the first row as
the source row, the corresponding equation is

1·x_1 + 0·x_2 + (1/5)x_3 − (2/5)x_4 = 3/5

or x_1 + (1/5)x_3 + (−1 + 3/5)x_4 = 3/5

or (1/5)x_3 + (3/5)x_4 = 3/5 + (−x_1 + x_4)

Since all variables must have non-negative integral values, the L.H.S. is non-negative
and so the R.H.S. should also be non-negative.

∴ (1/5)x_3 + (3/5)x_4 = 3/5 + (non-negative integer)

∴ (1/5)x_3 + (3/5)x_4 ≥ 3/5

or −(1/5)x_3 − (3/5)x_4 ≤ −3/5.

Since here x_{G1} = −3/5 < 0, the solution given by the above table is not feasible.

∴ Now we proceed by using the dual simplex algorithm.

Taking the leaving vector as Y_{G1}, i.e., x_{Br} = x_{B3}, ∴ r = 3.

To determine the entering vector (α_k) :

Δ_k / y_{rk} = Δ_k / y_{3k} = Min._j {Δ_j / y_{3j} : y_{3j} < 0}
            = Min. {Δ_3/y_{33}, Δ_4/y_{34}} = Min. {(−3/5)/(−1/5), (−9/5)/(−3/5)}
            = Min. {3, 3} ∴ k = 3 or 4.

Taking k = 3, i.e., taking Y_3 (= α_3) as the entering vector, the revised simplex table is

Table 9.5
      c_j           0          3         0          0      0
B     c_B    x_B   Y_1(β_1)    Y_2(β_2)   Y_3(β_3)   Y_4    Y_{G1}
Y_1   0      0     1           0          0          −1     1
Y_2   3      2     0           1          0          0      1
Y_3   0      3     0           0          1          3      −5
Z = 6        Δ_j   0           0          0          0      −3

which shows that the optimal feasible solution is integer-valued.

Hence, the required solution is

x_1 = 0, x_2 = 2 and Maximum Z = 6.

Note 1 : All the calculations done in step 7 may also be done in a single table as follows.

Adding the Gomory constraint equation in the last part of the table 9.3, we get the
following table, in which the basic variable x_{G1} = −3/5 < 0,

i.e., the solution is not feasible.

So we proceed by using the dual simplex method as follows :

Table 9.6
      c_j           0     3     0      0      0
B     c_B    x_B   Y_1   Y_2   Y_3    Y_4    Y_{G1}
Y_1   0      3/5   1     0     1/5    −2/5   0
Y_2   3      13/5  0     1     1/5    3/5    0
Y_{G1} 0     −3/5  0     0     −1/5   −3/5   1    →
Z = c_B x_B = 39/5  Δ_j  0     0      −3/5   −9/5  0
Min. ratio Δ_j/y_{3j}, y_{3j} < 0 :   —   —   (−3/5)/(−1/5) = 3 ↑   (−9/5)/(−3/5) = 3   —
Y_1   0      0     1     0     0      −1     1
Y_2   3      2     0     1     0      0      1
Y_3   0      3     0     0     1      3      −5
Z = c_B x_B = 6     Δ_j  0     0      0      0     −3

From the above table the optimal and integral solution of the given problem is

x_1 = 0, x_2 = 2 and Max. Z = 6.

Note 2 : Another optimal solution of the problem is

x_1 = 1, x_2 = 2 and Maximum Z = 6,

which is obtained by choosing Y_4 as the entering vector in the first part of table 9.4 or 9.6.

9.7 Graphical Interpretation of Cutting Plane Method


The shaded area shown by dots in Fig. 9.1 is the permissible region for the values of
x_1, x_2; Z is maximum at the point P(3/5, 13/5).

∴ The solution of the given problem, ignoring the integer values of x_1, x_2, is

x_1 = 3/5, x_2 = 13/5 and Max. Z = 39/5.

To find the integer-valued solution, we add the following constraint, known as the Gomory
constraint :

−(1/5)x_3 − (3/5)x_4 ≤ −3/5

or x_3 + 3x_4 ≥ 3 (see step 6)        ...(1)

[Fig. 9.1 : The feasible region bounded by the lines 3x_1 + 2x_2 = 7 and −x_1 + x_2 = 2 in
the (x_1, x_2)-plane, with the LP optimum P(3/5, 13/5) and the cutting plane x_2 = 2
passing through the integer points (0, 2) and (1, 2).]

Adding the non-negative slack variables x_3, x_4, the given inequalities reduce to the
following equalities :

3x_1 + 2x_2 + x_3 = 7 and −x_1 + x_2 + x_4 = 2,

giving x_3 = 7 − 3x_1 − 2x_2 and x_4 = 2 + x_1 − x_2.

Substituting in (1), the Gomory constraint in terms of x_1 and x_2 is given by

(7 − 3x_1 − 2x_2) + 3(2 + x_1 − x_2) ≥ 3

or x_2 ≤ 2.

Drawing the line x_2 = 2, the above feasible region is cut down to the shaded region shown
by the dots and crosses (×) together.

Thus, the required optimal integer-valued solution is

x_1 = 0, x_2 = 2 and max. Z = 6

or x_1 = 1, x_2 = 2 and max. Z = 6.

Example 2: The owner of a ready-made garments makes two types of shirts known as Zee
shirts and Button Down shirts. He makes profit of ` 1 and ` 4 per shirt on Zee shirts and
Button-Down shirt respectively. He has two tailors (Tailor A and Tailor B) at his
disposal to stitch these shirts. Tailor A and Tailor B can devote at the most 7 hours and
15 hours per day respectively. Both these shirts are to be stitched by both the tailors.
Tailors A and Tailors B spend two hours and five hours respectively in stitching a Zee
shirt and four hours and three hours respectively in stitching a Button-Down shirt. How
many shirts of both the types should be stitched in order to maximize daily profit?
(i) Set up and solve the L.P.P.
(ii) If the optimal solution is not-integer-valued, use the Gomory technique to derive the
optimal integer solution.

Solution: (i) Formulation as L.P.P.

Let the owner manufacture x1 and x2 number of shirts of two types respectively.

                          Zee shirts (x_1)   BD shirts (x_2)   Availability (hrs.)
Tailor A (time in hrs.)   2                  4                 7
Tailor B (time in hrs.)   5                  3                 15
Profit (in `)             1                  4

Profit Z = ` (1x1 + 4 x2 )

Time devoted by tailor A = 2 x1 + 4 x2 ≤ 7

Time devoted by tailor B = 5 x1 + 3 x2 ≤ 15

Hence the given problem formulated as L.P.P. is

Max. Z = x1 + 4 x2

subject to 2 x1 + 4 x2 ≤ 7

5 x1 + 3 x2 ≤ 15

and x1, x2 ≥ 0, both integers.

(ii) Solution of the above L.P.P. : The problem is of maximization and both bi's are
positive. Introducing the non-negative slack variables x3 and x4 , the given problem
becomes.

Max. Z = x1 + 4 x2 + 0 . x3 + 0 . x4

subject to 2 x1 + 4 x2 + x3 =7

5 x1 + 3 x2 + x4 = 15

x1, x2 , x3 , x4 ≥ 0

Now we solve the problem by simplex method, ignoring the integer condition of
variables.

Taking x1 = 0, x2 = 0, we get x3 = 7, x4 = 15, which is the starting B.F.S.

The solution of the problem by simplex method is given in the following table :
Table 9.7
      c_j          1     4     0      0      Min. ratio
B     c_B    x_B   Y_1   Y_2   Y_3    Y_4    x_B/Y_2, y_{i2} > 0
Y_3   0      7     2     4     1      0      7/4 (min) →
Y_4   0      15    5     3     0      1      15/3
Z = c_B x_B = 0    Δ_j   1     4      0      0
                         ↑
Y_2   4      7/4   1/2   1     1/4    0
Y_4   0      39/4  7/2   0     −3/4   1
Z = c_B x_B = 7    Δ_j   −1    0      −1     0

From the above table the optimal solution is

x_1 = 0, x_2 = 7/4 = 1 + 3/4, x_3 = 0, x_4 = 39/4 = 9 + 3/4, Max. Z = 7.

This solution is not integer-valued, so we form a Gomory constraint.

Here the fractional part in both x_2 and x_4 is equal to 3/4, so we can consider any one of
these. Considering x_2 = 7/4 = 1 + 3/4 = x_{B1}, which lies in row one, the corresponding
equation is

(1/2)x_1 + 1·x_2 + (1/4)x_3 + 0·x_4 = 7/4 = 1 + 3/4

or (1/2)x_1 + (1/4)x_3 = 3/4 + (1 − x_2)

∴ (1/2)x_1 + (1/4)x_3 ≥ 3/4, since 1 − x_2 is a non-negative integer,

or −(1/2)x_1 − (1/4)x_3 ≤ −3/4,

which is the Gomory constraint.

Introducing the non-negative slack variable x_{G1}, the corresponding Gomory constraint
equation is

−(1/2)x_1 − (1/4)x_3 + x_{G1} = −3/4.

Adding this Gomory constraint equation at the bottom of the last part of the above table,
we get the table in which the basic variable x_{G1} = −3/4 < 0,

i.e., the solution is not feasible.

∴ We shall proceed by the dual simplex method as follows :

Table 9.8
      c_j            1      4     0      0     0
B     c_B    x_B    Y_1    Y_2   Y_3    Y_4   Y_{G1}
Y_2   4      7/4    1/2    1     1/4    0     0
Y_4   0      39/4   7/2    0     −3/4   1     0
Y_{G1} 0     −3/4   −1/2   0     −1/4   0     1    →
Z = c_B x_B = 7     Δ_j    −1    0      −1    0     0
Min. ratio Δ_j/y_{3j}, y_{3j} < 0 :   (−1)/(−1/2) = 2 (min) ↑   —   (−1)/(−1/4) = 4   —   —
Y_2   4      1      0      1     0      0     1
Y_4   0      9/2    0      0     −5/2   1     7
Y_1   1      3/2    1      0     1/2    0     −2
Z = c_B x_B = 11/2  Δ_j    0     0      −1/2  0     −2

From the above table, the optimal solution is

x_1 = 3/2 = 1 + 1/2, x_2 = 1, x_4 = 9/2 = 4 + 1/2, Max. Z = 11/2.

This solution is also not integer-valued, so we form a second Gomory constraint.

Here the fractional part in both x_1 and x_4 is 1/2, so we can take any one of these. Taking
x_1 = 3/2 = 1 + 1/2 = x_{B3}, which is in the third row,

the equation corresponding to this third row is

1·x_1 + 0·x_2 + (1/2)x_3 + 0·x_4 − 2x_{G1} = 1 + 1/2

or (1/2)x_3 = 1/2 + (1 − x_1 + 2x_{G1})

∴ (1/2)x_3 ≥ 1/2, since 1 − x_1 + 2x_{G1} is a non-negative integer,

or −(1/2)x_3 ≤ −1/2,

which is the second Gomory constraint. Introducing the non-negative slack variable
x_{G2}, the corresponding Gomory constraint equation is

−(1/2)x_3 + x_{G2} = −1/2.

Adding this Gomory constraint equation at the bottom of the last part of the above table,
we get the table in which the basic variable x_{G2} = −1/2 < 0,

i.e., the solution is not feasible.

∴ We shall proceed by using the dual simplex method as follows :

Table 9.9
      c_j            1      4     0      0     0      0
B     c_B    x_B    Y_1    Y_2   Y_3    Y_4   Y_{G1}  Y_{G2}
Y_2   4      1      0      1     0      0     1      0
Y_4   0      9/2    0      0     −5/2   1     7      0
Y_1   1      3/2    1      0     1/2    0     −2     0
Y_{G2} 0     −1/2   0      0     −1/2   0     0      1    →
Z = c_B x_B = 11/2  Δ_j    0     0      −1/2  0      −2    0
Min. ratio Δ_j/y_{4j}, y_{4j} < 0 :   —   —   (−1/2)/(−1/2) = 1 (min) ↑   —   —   —
Y_2   4      1      0      1     0      0     1      0
Y_4   0      7      0      0     0      1     7      −5
Y_1   1      1      1      0     0      0     −2     1
Y_3   0      1      0      0     1      0     0      −2
Z = c_B x_B = 5     Δ_j    0     0      0      0      −2    −1

From the table the optimal and integral solution of the given problem is

x_1 = 1, x_2 = 1 and Max. Z = ` 5.

Hence the owner of the ready-made garments should make one Zee shirt and one
Button-Down shirt to get a maximum profit of ` 5.

Exercise 9.1

1. Describe Gomory's method of solving an all integer programming problem.


[Meerut 2007]
Find the optimum solutions of the following I.P.P.s by Gomory's cutting plane
method :

2. Max. Z = x1 − 2 x2 3. Max. Z = x1 + 2 x2
subject to the constraints subject to the constraints
4 x1 + 2 x2 ≤ 15 x1 + x2 ≤ 7
x1, x2 ≥ 0 and are integers. 2 x1 ≤ 11, 2 x2 ≤ 7
x1, x2 ≥ 0 and are integers.
[Meerut 2006 (BP)]
365

4. Max. Z = x1 + x2 5. Max. Z = 4 x1 + 3 x2
subject to the constraints subject to the constraints
3 x1 + 2 x2 ≤ 5 x1 + 2 x2 ≤ 4
x2 ≤ 2 2 x1 + x2 ≤ 6
x1, x2 ≥ 0 and are integers. x1, x2 ≥ 0 and are integers.
[Meerut 2005(BP)]

6. Max. Z = x1 − x2 7. Max. Z = 11x1 + 4 x2


subject to the constraints subject to the constraints
x1 + 2 x2 ≤ 4 − x1 + 2 x2 ≤ 4
6 x1 + 2 x2 ≤ 9 5 x1 + 2 x2 ≤ 16
x1, x2 ≥ 0 and are integers. 2 x1 − x2 ≤ 4
x1, x2 ≥ 0 and are integers.

8. Max. Z = 3 x2 9. Max. Z = 7 x1 + 9 x2
subject to the constraints subject to the constraints
3 x1 + 2 x2 ≤ 7 − x1 + 3 x2 ≤ 6
x1 − x2 ≤ −2 7 x1 + x2 ≤ 35
x1, x2 ≥ 0 x1, x2 ≥ 0 and integers.

10. Max. Z = x4 + 5 x2 11. Max. Z = 2 x1 + 2 x2


subject to the constraints subject to the constraints
x1 + 10 x2 ≤ 20 5 x1 + 3 x2 ≤ 8
x1 ≤ 2 x1 + 2 x2 ≤ 4
x1, x2 ≥ 0 and are integers. x1, x2 ≥ 0 and are integers.

12. Max. Z = x1 + x2 13. Max. Z = x1 + x2


subject to the constraints subject to the constraints
2 x1 + 5 x2 ≤ 16 − x1 + 7 x2 ≤ 21
6 x1 + 5 x2 ≤ 30 7 x1 − x2 ≤ 21
x1, x2 ≥ 0 and are integers. x1, x2 ≥ 0 and are integers.

14. Max. Z = 5 x1 + 8 x2 15. Max. Z = 2 x1 + 3 x2


subject to the constraints subject to the constraints
x1 + 2 x2 ≤ 8 −3 x1 + 7 x2 ≤ 14
4 x1 + x2 ≤ 10 7 x1 − 3 x2 ≤ 14
x1, x2 ≥ 0 and are integers. x1, x2 ≥ 0 and are integers.
366

Answers

2. x1 = 3, x2 = 0, max. Z = 3
3. x1 = 4, x2 = 3, max. Z = 10
4. x1 = 0, x2 = 2 or x1 = 1, x2 = 1, max. Z = 2
5. x1 = 3, x2 = 0, max. Z = 12
6. x1 = 1, x2 = 0, max. Z = 1
7. x1 = 2, x2 = 3, max. Z = 34
8. x1 = 0, x2 = 3, max. Z = 9
9. x1 = 4, x2 = 3, max. Z = 55
10. x1 = 0, x2 = 2, max. Z = 10
11. x1 = 1, x2 = 1 or x1 = 0, x2 = 2, max. Z = 4
12. x1 = 5, x2 = 0 or x1 = 3, x2 = 2, max. Z = 5
13. x1 = 3, x2 = 3, max. Z = 6
14. x1 = 0, x2 = 4, max. Z = 32
15. x1 = 3, x2 = 3, max. Z = 15

9.8 Mixed-Integer Cutting Plane Algorithm


i.e., the computational procedure for the solution of a mixed-integer programming
problem using Gomory's cutting plane.

The solution method for a mixed-integer programming problem is quite similar to that
for an all-integer programming problem, except that the procedure for obtaining the
Gomory's constraint is slightly different.

The computational procedure consists of the following steps, systematically :

Step 1 : Formulate the given L.P.P. into the standard maximization form and determine
its optimum solution, ignoring the integer condition on the variables, by using the simplex
algorithm.

Step 2 : Test the integrability of the optimum solution obtained in the above step. Now
there are two possibilities :
(i) The optimum solution has integral values of all integer restricted variables; then the
required solution has been obtained.
(ii) The optimum solution does not have integral values of all integer restricted
variables; then proceed to the next step.

Step 3 : If only one of the integer restricted variables has a fractional value, then
corresponding to the row in which this fractional variable lies in the optimum simplex
table, form the Gomory's constraint by using the following formula :
If the basic variable xBi = xk (an integer restricted variable) has a fractional value and lies
in the i-th row of the optimum simplex table, then the formula for obtaining the Gomory's
constraint is as follows :

− ∑_{j∈R+} yij xj − [fBi/(fBi − 1)] ∑_{j∈R−} yij xj ≤ − fBi

where xBi = [xBi] + fBi,

[xBi] = integral part of xBi, i.e., the largest integer s.t. [xBi] ≤ xBi,

fBi = fractional part of xBi, s.t. 0 ≤ fBi < 1,

R+ = { j : yij ≥ 0 }, R− = { j : yij < 0 }

and R = R+ ∪ R− is the set of indices corresponding to all the non-basic variables.

Thus, introducing the non-negative slack variable xG1, the Gomory's constraint
equation, i.e., the Gomory's cutting plane, is given by

− ∑_{j∈R+} yij xj − [fBi/(fBi − 1)] ∑_{j∈R−} yij xj + xG1 = − fBi

However, if more than one integer restricted variable is not an integer, then the
non-integral variable which contains the largest fractional part is taken.

Step 4 : Adding the Gomory's constraint equation at the bottom of the optimum simplex
table obtained in step 1, obtain the new optimum solution by using the dual simplex
algorithm. Note that the slack variable xG1 will be taken as the first leaving basic variable
in the above table.

Step 5 : Test the integrability of the optimum solution obtained in step 4. Now again
there are two possibilities :
(i) The optimum solution obtained in step 4 has integral values of all integer
restricted variables; then the required solution has been obtained.
(ii) The optimum solution does not have integral values of all integer restricted
variables. In this case repeat steps 3 and 4, until the required optimal solution is
obtained. (A code sketch of the cut formula of step 3 is given below.)
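
The following sketch (a hypothetical helper, using exact fractions) evaluates the formula of step 3: coefficients in R+ enter the cut as −yij, those in R− as −[fBi/(fBi − 1)] yij. Applied to the data of Example 1 below it reproduces the cut −(1/3)x3 − (1/3)x4 ≤ −1/3.

import math
from fractions import Fraction

def mixed_gomory_cut(y, x_b):
    # y   : {column index j : y_ij} for the non-basic variables of the source row
    # x_b : value of the basic variable x_Bi in that row
    f = x_b - math.floor(x_b)              # fBi, the fractional part
    cut = {}
    for j, yij in y.items():
        if yij >= 0:                       # j in R+
            cut[j] = -yij
        else:                              # j in R-
            cut[j] = -(f / (f - 1)) * yij
    return cut, -f                         # cut:  sum_j cut[j] * x_j <= -f

row = {3: Fraction(1, 3), 4: Fraction(-2, 3)}   # y13, y14 of Example 1
print(mixed_gomory_cut(row, Fraction(1, 3)))    # ({3: -1/3, 4: -1/3}, -1/3)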

Example 1: Solve the following mixed-integer programming problem, using Gomory's


cutting plane method :
Max. Z = x1 + x2

subject to the constraints


3 x1 + 2 x2 ≤ 5, x2 ≤ 2,

x1, x2 ≥ 0 and x1 is an integer.



Solution: Step 1 : Introducing the slack variables x3 and x4, the given L.P.P. in the
standard maximization form is

Max. Z = x1 + x2 + 0·x3 + 0·x4

s.t. 3x1 + 2x2 + x3 = 5

0·x1 + x2 + x4 = 2

and x1, x2, x3, x4 ≥ 0

Ignoring the integer condition for the variable x1, and proceeding as usual by the simplex
method, all the computation work is shown in the following table. The initial B.F.S.,
obtained by taking x1 = 0, x2 = 0, is x3 = 5, x4 = 2.

Table 9.10
         cj →     1      1      0      0
B     cB    xB    Y1     Y2     Y3     Y4    Min. Ratio xB/yi2 (yi2 > 0)

Y3    0     5      3      2      1      0    5/2
Y4    0     2      0      1      0      1    2/1 (min) →

Z = 0       ∆j     1      1      0      0
                          ↑                  Min. Ratio xB/yi1 (yi1 > 0)

Y3    0     1      3      0      1     −2    1/3 (min) →
Y2    1     2      0      1      0      1    —

Z = 2       ∆j     1      0      0     −1
                   ↑

Y1    1    1/3     1      0     1/3   −2/3
Y2    1     2      0      1      0      1

Z = 7/3     ∆j     0      0    −1/3   −1/3

Since ∆j ≤ 0 ∀ j, the optimal solution ignoring the integer condition is

x1 = 1/3, x2 = 2, Max. Z = 7/3

Step 2 : In the above optimum solution x1 = 1/3 is fractional, while the condition is that x1
is an integer. So we proceed to the next step.

Step 3 : It is a problem of mixed integer programming, so the Gomory's constraint is
obtained by the formula

− ∑_{j∈R+} yij xj − [fBi/(fBi − 1)] ∑_{j∈R−} yij xj ≤ − fBi

Here x1 = xBi = xB1 = [xB1] + fB1 = 1/3 = 0 + 1/3, ∴ fB1 = 1/3, i = 1

R+ = { j = 3 : y13 = 1/3 > 0 }, R− = { j = 4 : y14 = −2/3 < 0 }

∴ The Gomory's constraint is

− y13 x3 − [(1/3)/((1/3) − 1)] y14 x4 ≤ −1/3

or − (1/3) x3 − (−1/2)(−2/3) x4 ≤ −1/3

Introducing the non-negative slack variable xG1, the corresponding Gomory's constraint
equation, i.e., the Gomory cutting plane, is

− (1/3) x3 − (1/3) x4 + xG1 = −1/3

Step 4 : Adding the Gomory's constraint equation at the bottom of the last part of the
above table, we get the following table, in which the basic variable xG1 = −1/3 < 0,

i.e., the solution is not feasible.

So we proceed by using the dual simplex method as follows :

Table 9.11
         cj →     1      1      0      0      0
B     cB    xB    Y1     Y2     Y3     Y4    YG1

Y1    1    1/3     1      0     1/3   −2/3     0
Y2    1     2      0      1      0      1      0
YG1   0   −1/3     0      0    −1/3   −1/3     1   →

Z = cB xB = 7/3   ∆j      0      0    −1/3   −1/3     0
Min. Ratio ∆j/y3j (y3j < 0) :   —    —    (−1/3)/(−1/3) = 1    (−1/3)/(−1/3) = 1 (selected)
                                                                ↑

Y1    1     1      1      0      1      0     −2
Y2    1     1      0      1     −1      0      3
Y4    0     1      0      0      1      1     −3

Z = cB xB = 2     ∆j      0      0      0      0     −1

From the above table, the optimal solution of the given problem with x1 as an integer is

x1 = 1, x2 = 1 and max. Z = 2.

Note : In the last table 9.11 there is a tie in the minimum ratio; if we take Y3 as the entering
vector in place of Y4, then the alternate optimal solution is x1 = 0, x2 = 2 and max. Z = 2.

Example 2: Solve the following mixed-integer programming problem, using Gomory's


cutting plane method :

Max. Z = 3 x1 + x2 + 3 x3

subject to the constraints

− x1 + 2 x2 + x3 ≤ 4, 4 x2 − 3 x3 ≤ 2, x1 − 3 x2 + 2 x3 ≤ 3,

and x1, x2, x3 ≥ 0, where x1 and x3 are integers.

Solution: Step 1 : Introducing the slack variables x4, x5 and x6, the given L.P.P. in the
standard maximization form is

Max. Z = 3 x1 + x2 + 3 x3

s.t. −x1 + 2x2 + x3 + x4 = 4

4x2 − 3x3 + x5 = 2

x1 − 3x2 + 2x3 + x6 = 3

x1, x2, x3, x4, x5, x6 ≥ 0.

Ignoring the integer condition for the variables x1 and x3, and proceeding as usual, the final
simplex table, giving the optimum solution, is as follows :

Table 9.12

         cj →     3      1      3      0      0      0
B     cB    xB    Y1     Y2     Y3     Y4     Y5     Y6

Y3    3   10/3     0      0      1     4/9    1/9    4/9
Y2    1     3      0      1      0     1/3    1/3    1/3
Y1    3   16/3     1      0      0     1/9    7/9   10/9

Z = 29      ∆j     0      0      0     −2     −3     −5

∵ ∆j ≤ 0 ∀ j, this solution, ignoring the integer condition, is

x1 = 16/3 = 5 + 1/3, x2 = 3, x3 = 10/3 = 3 + 1/3 and Max. Z = 29

Step 2 : Here x1 and x3 are not integers, as required, so we proceed to the next step.

Step 3 : Here x1 = 16/3 = 5 + 1/3, x3 = 10/3 = 3 + 1/3

So we first consider x3, to change its value to an integral value.

∵ x3 = xBi = xB1 = 10/3 = 3 + 1/3 = [xB1] + fB1, ∴ i = 1, fB1 = 1/3



∴ The Gomory's constraint is given by

− ∑_{j∈R+} y1j xj − [fB1/(fB1 − 1)] ∑_{j∈R−} y1j xj ≤ − fB1   ...(1)

for i = 1, i.e., in the first row of the above table, for the non-basic variables

R+ = { j = 4, 5, 6 : y1j > 0 }, R− = { j : y1j < 0 } = φ

i.e., y14 = 4/9, y15 = 1/9, y16 = 4/9 > 0

∴ The Gomory's constraint is

− y14 x4 − y15 x5 − y16 x6 − [(1/3)/((1/3) − 1)]·0 ≤ −1/3

or − (4/9) x4 − (1/9) x5 − (4/9) x6 ≤ −1/3

Introducing the non-negative slack variable xG1, the Gomory's constraint equation (i.e.,
Gomory's cutting plane) is

− (4/9) x4 − (1/9) x5 − (4/9) x6 + xG1 = −1/3

Adding this Gomory's constraint equation at the bottom of the above table 9.12, we get
the following table in which the basic variable xG1 = −1/3 < 0, i.e., the solution is not
feasible. ∴ We proceed by using the dual simplex method as follows :
Table 9.13

         cj →     3      1      3      0      0      0      0
B     cB    xB    Y1     Y2     Y3     Y4     Y5     Y6    yG1 (β4)

Y3    3   10/3     0      0      1     4/9    1/9    4/9     0
Y2    1     3      0      1      0     1/3    1/3    1/3     0
Y1    3   16/3     1      0      0     1/9    7/9   10/9     0
yG1   0   −1/3     0      0      0    −4/9   −1/9   −4/9     1   →

Z = 29      ∆j     0      0      0     −2     −3     −5      0
Min. Ratio ∆j/y4j (y4j < 0) :   (−2)/(−4/9) = 9/2 (min)   (−3)/(−1/9) = 27   (−5)/(−4/9) = 45/4
                                 ↑

Y3    3     3      0      0      1      0      0      0      1
Y2    1   11/4     0      1      0      0     1/4     0     3/4
Y1    3   21/4     1      0      0      0     3/4     1     1/4
Y4    0    3/4     0      0      0      1     1/4     1    −9/4

Z = 55/2    ∆j     0      0      0      0    −5/2    −3    −9/2

From the above table the optimal solution with x3 as an integer is

x1 = 21/4 = 5 + 1/4, x2 = 11/4 = 2 + 3/4, x3 = 3, x4 = 3/4, max. Z = 55/2

Here x1 is not an integer (as required). So we form another Gomory constraint.

Now x1 lies in the 3rd row and

x1 = xBi = xB3 = 21/4 = 5 + 1/4 = [xB3] + fB3, ∴ i = 3 and fB3 = 1/4.

Also in the 3rd row, for the non-basic variables,

R+ = { j = 5, 6, 7 : y3j > 0 } and R− = { j : y3j < 0 } = φ

∴ From (1), the second Gomory's constraint is

− y35 x5 − y36 x6 − y37 xG1 − [(1/4)/((1/4) − 1)]·0 ≤ −1/4

or − (3/4) x5 − 1·x6 − (1/4) xG1 ≤ −1/4

Introducing the non-negative slack variable xG2, the corresponding Gomory constraint
equation is

− (3/4) x5 − x6 − (1/4) xG1 + xG2 = −1/4.

Adding this second Gomory constraint equation at the bottom of the last part of the above
table, we get the table in which the basic variable xG2 = −1/4 < 0,

i.e., the solution is not feasible,

So we proceed by using Dual simplex method as follows :

Table 9.14

         cj →     3      1      3      0      0      0      0      0
B     cB    xB    Y1     Y2     Y3     Y4     Y5     Y6    yG1   yG2 (β5)

Y3    3     3      0      0      1      0      0      0      1      0
Y2    1   11/4     0      1      0      0     1/4     0     3/4     0
Y1    3   21/4     1      0      0      0     3/4     1     1/4     0
Y4    0    3/4     0      0      0      1     1/4     1    −9/4     0
yG2   0   −1/4     0      0      0      0    −3/4    −1    −1/4     1   →

Z = 55/2    ∆j     0      0      0      0    −5/2    −3    −9/2     0
Min. Ratio ∆j/y5j (y5j < 0) :   (−5/2)/(−3/4) = 10/3   (−3)/(−1) = 3 (min)   (−9/2)/(−1/4) = 18
                                                        ↑

Y3    3     3      0      0      1      0      0      0      1      0
Y2    1   11/4     0      1      0      0     1/4     0     3/4     0
Y1    3     5      1      0      0      0      0      0      0      1
Y4    0    1/2     0      0      0      1    −1/2     0    −5/2     1
Y6    0    1/4     0      0      0      0     3/4     1     1/4    −1

Z = 107/4   ∆j     0      0      0      0    −1/4     0   −15/4    −3

From the above table the optimal solution of the given problem, with x1 and x3 having
integral values, is x1 = 5, x2 = 11/4, x3 = 3 and max. Z = 107/4.

Find the optimum solution to the following mixed-integer programming problems
by Gomory's cutting plane method :

1. Max. Z = 3x1 + 4x2
   subject to the constraints
   3x1 − x2 ≤ 12
   3x1 + 11x2 ≤ 66
   x1, x2 ≥ 0 and x2 is an integer

2. Max. Z = 7x1 + 9x2
   subject to the constraints
   −x1 + 3x2 ≤ 6
   7x1 + x2 ≤ 35
   x1, x2 ≥ 0 and x1 is an integer

3. Max. Z = 4x1 + 6x2 + 2x3
   subject to the constraints
   4x1 − 4x2 ≤ 5, −x1 + 6x2 ≤ 5
   −x1 + x2 + x3 ≤ 5, x1, x2, x3 ≥ 0, x1 and x3 are integers.

Answers

1. x1 = 16/3, x2 = 4, Z = 32
2. x1 = 4, x2 = 10/3, Z = 58
3. x1 = 2, x2 = 1, x3 = 6, Z = 26

9.9 The Branch-and-Bound Technique


[Meerut 2006 (BP), 08]

The branch-and-bound method was first developed by A.H. Land and A.G. Doig and was
further developed by J.D.C. Little et al. and other researchers.
This technique is applicable to all integer programming problems as well as mixed
integer programming problems. It is the most general technique for the solution of an
I.P.P. in which only a few or all of the variables are constrained by their upper or lower
bounds, or by both.

The technique, called the Branch-and-Bound technique, for a maximization problem


is discussed below :

Let the given I.P.P. be as follows :

Max. Z = ∑_{j=1}^{n} cj xj   ...(1)

subject to the constraints

∑_{j=1}^{n} aij xj ≤ bi, i = 1, 2, ..., m   ...(2)

xj is integral valued for j = 1, 2, ..., r ≤ n   ...(3)

and xj ≥ 0, for j = r + 1, r + 2, ..., n   ...(4)

Also let there exist lower and upper bounds for the optimum values of each integer valued
variable xj, such that

Lj ≤ xj ≤ Uj, j = 1, 2, ..., r.   ...(5)

Thus, any optimum solution of (1) to (5) must satisfy exactly one of the constraints

xj ≤ [xj]   ...(6)

and xj ≥ [xj] + 1   ...(7)

Thus, ignoring the integer restriction (3), if xt* is the value of the variable xt in the
optimum solution of the above L.P.P. given by (1) to (5), then in an integer valued
solution we have

either Lt ≤ xt ≤ [xt*]   ...(8)

or [xt*] + 1 ≤ xt ≤ Ut   ...(9)

For example, if x1 = 3.5 ignoring the integer constraint, then in an integer valued solution
either L1 ≤ x1 ≤ 3 or 4 ≤ x1 ≤ U1.

Thus, the given I.P.P. given by (1) to (5) has two sub-I.P. problems :
(i) given by (1), (2), (3), (4) and (8),
and (ii) given by (1), (2), (3), (4) and (9).

In the above two sub-problems constraint (5) is modified only for xt (i.e., for xj, j = t).

Now solve these two sub-I.P. problems. If the two problems possess integer valued
solutions, then the solution having the larger value of Z is taken as the optimum solution
of the given problem. If either of these sub-problems does not have an integer valued
solution, then sub-divide it again into two sub-problems and proceed similarly till an
optimum integral valued solution is obtained.

9.10 Branch-and-Bound Algorithm


The systematic step by step solution of an I.P.P. by Branch-and-Bound technique is as
follows :

Step 1 : Solve the given I.P.P. ignoring the integer valued condition.

Step 2 : Test the integrability of the optimum solution obtained in step 1. Now there are
two possibilities :
(i) The optimum solution is integral valued; then the required solution is obtained.
(ii) The optimum solution is not integral valued; then proceed to the next step 3.

Step 3 : If the optimal value xt* of the variable xt is fractional, then form two
sub-problems :
Sub-problem 1 : the given problem with the one more constraint xt ≤ [xt*]
Sub-problem 2 : the given problem with the one more constraint xt ≥ [xt*] + 1

Step 4 : Solve the two sub-problems 1 and 2 obtained in step 3. Now there are three
possibilities :
(i) If the optimal solutions of the two sub-problems are integral valued, then the required
solution is the one which gives the larger value of Z.
(ii) If the optimal solution of one sub-problem is integer valued and the other
sub-problem has no feasible optimal solution, then the required solution is that of
the sub-problem having the integer valued solution.
(iii) If the optimal solution of one sub-problem is integer valued while that of the other
sub-problem is fractional valued, then record the integer valued solution and repeat
steps 3 and 4 for the fractional valued sub-problem. Continue steps 3 and 4
iteratively, till all integral valued solutions are recorded.

Step 5 : From all the recorded integral valued solutions choose the one which gives
the largest value of Z. This is the required optimal solution of the problem. A code
sketch of the whole procedure is given below.
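
The following Python sketch implements the above steps, using scipy.optimize.linprog (an assumption; any L.P. solver would serve) for the L.P. relaxations. Maximization is handled by minimizing −Z, and the two sub-problems of step 3 are created by tightening the bounds of the branching variable.

import math
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub, bounds):
    best_z, best_x = -math.inf, None
    stack = [list(bounds)]                  # each entry: (low, up) bounds per variable
    while stack:
        bnd = stack.pop()
        res = linprog([-ci for ci in c], A_ub=A_ub, b_ub=b_ub, bounds=bnd)
        if not res.success or -res.fun <= best_z:
            continue                        # infeasible, or cannot beat the incumbent
        t = next((i for i, v in enumerate(res.x)
                  if abs(v - round(v)) > 1e-6), None)
        if t is None:                       # all integral: record the solution (step 4)
            best_z, best_x = -res.fun, [round(v) for v in res.x]
        else:                               # branch on x_t <= [x_t*] and x_t >= [x_t*]+1
            lo, up = bnd[t]
            stack.append(bnd[:t] + [(lo, math.floor(res.x[t]))] + bnd[t+1:])
            stack.append(bnd[:t] + [(math.ceil(res.x[t]), up)] + bnd[t+1:])
    return best_x, best_z

# Example 1 below: Max Z = 7x1 + 9x2, -x1 + 3x2 <= 6, 7x1 + x2 <= 35, 0 <= x1, x2 <= 7
print(branch_and_bound([7, 9], [[-1, 3], [7, 1]], [6, 35], [(0, 7), (0, 7)]))
# expected: ([4, 3], 55.0), as obtained graphically in Example 1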

Example 1: Use Branch-and-Bound technique to solve the following problem :

Max. Z = 7 x1 + 9 x2

subject to − x1 + 3 x2 ≤ 6

7 x1 + x2 ≤ 35

0 ≤ x1, x2 ≤ 7

x1, x2 are integers.

Solution: Step 1 : The given problem can be written as

Max. Z = 7 x1 + 9 x2

subject to − x1 + 3 x2 ≤ 6

7 x1 + x2 ≤ 35

x1 ≤ 7

x2 ≤ 7

and x1, x2 ≥ 0 and integers.

Ignoring the condition of integer values of variables, the optimal solution of the given
problem, by graphical method, is given by
[Fig. 9.2 : The feasible region bounded by −x1 + 3x2 = 6, 7x1 + x2 = 35, x1 = 7, x2 = 7 and the
axes; the objective line Z = 7x1 + 9x2 attains its maximum at the vertex (9/2, 7/2).]

x1 = 9/2 = 4.5, x2 = 7/2 = 3.5 and Max. Z = 63

This value of Z represents the initial upper bound on the value of the objective function Z.
So the value of the objective function Z for integral values of the variables in the
permissible region cannot exceed 63.

Step 2 : Here neither x1 nor x2 is integral. We consider x1 first.

∵ x1 = 9/2 = 4.5 and [x1] = 4.

Step 3 : Now we form the following two sub-problems :

Sub-Problem 1

Max. Z = 7x1 + 9x2
subject to −x1 + 3x2 ≤ 6
7x1 + x2 ≤ 35
x1 ≤ 4
x2 ≤ 7
x1, x2 ≥ 0

Sub-Problem 2

Max. Z = 7x1 + 9x2
subject to −x1 + 3x2 ≤ 6
7x1 + x2 ≤ 35
x1 ≥ 5
x2 ≤ 7
x1, x2 ≥ 0

Step 4 : By the graphical method the optimal solutions of the above two sub-problems are
as follows (see fig. 9.3) :

Sub-Problem 1 : x1 = 4, x2 = 10/3, Max. Z = 58

Sub-Problem 2 : x1 = 5, x2 = 0, Max. Z = 35, which has integral values.

Since the solution of Sub-Problem 1 is not integral, as x2 = 10/3 = 3 + 1/3, i.e., [x2] = 3,

[Fig. 9.3 : The feasible regions of sub-problems 1 and 2, separated by the lines x1 = 4 and
x1 = 5, with optima at (4, 10/3) and (5, 0) respectively.]

∴ We sub-divide sub-problem 1 into the following two sub-problems :

Sub-Problem 3

Max. Z = 7x1 + 9x2
subject to −x1 + 3x2 ≤ 6
7x1 + x2 ≤ 35
x1 ≤ 4
x2 ≤ 3
x1, x2 ≥ 0

Sub-Problem 4

Max. Z = 7x1 + 9x2
subject to −x1 + 3x2 ≤ 6
7x1 + x2 ≤ 35
x1 ≤ 4
x2 ≥ 4
x1, x2 ≥ 0

By the graphical method, the optimal solutions of the two sub-problems 3 and 4 are as
follows :

Sub-Problem 3 : x1 = 4, x2 = 3, Max. Z = 55, which is integral valued.

Sub-Problem 4 : No feasible solution.

The entire procedure (Branch-and-Bound) is given in the following figure 9.5.

[Fig. 9.4 : The feasible regions of sub-problems 3 and 4, cut by the lines x2 = 3 and x2 = 4;
sub-problem 3 attains its optimum at (4, 3), while sub-problem 4 has an empty feasible
region.]

Given Problem : x1 = 9/2 = 4.5, x2 = 7/2 = 3.5, Max. Z = 63
  x1 ≤ 4 → Sub-Problem 1 : x1 = 4, x2 = 10/3 = 3 + 1/3, Max. Z = Z1 = 58
      x2 ≤ 3 → Sub-Problem 3 : x1 = 4, x2 = 3, Max. Z = Z3 = 55
      x2 ≥ 4 → Sub-Problem 4 : No F.S.
  x1 ≥ 5 → Sub-Problem 2 : x1 = 5, x2 = 0, Max. Z = Z2 = 35

Fig. 9.5

The solutions of sub-problems 2 and 3 are integer valued and Z3 > Z2 . Hence the
optimal integral solution of the given problem is x1 = 4, x2 = 3 and Max. Z = 55.

1. Describe the Branch-and-Bound technique to solve an integer programming problem.
[Meerut 2006 (BP), 08 (BP)]

2. Write a short note on the Branch-and-Bound technique. [Meerut 2008]

Use the Branch-and-Bound technique to solve the following I.P.P.s :

3. Max. Z = x1 + x2
   subject to 3x1 + 2x2 ≤ 12
   x2 ≤ 2
   x1, x2 ≥ 0 and are integers.

4. Max. Z = x1 + x2
   subject to 4x1 − x2 ≤ 10
   2x1 + 5x2 ≤ 10
   4x1 − 3x2 ≤ 6
   x1, x2 = 0, 1, 2, ...

5. Max. Z = 7x1 + 9x2
   subject to the constraints
   −x1 + 3x2 ≤ 6
   7x1 + x2 ≤ 36
   x2 ≤ 7
   x1, x2 ≥ 0 and x1, x2 are integers.

6. Min. Z = 4x1 + 3x2
   subject to the constraints
   5x1 + 3x2 ≥ 30
   x1 ≤ 4, x2 ≤ 6
   x1, x2 ≥ 0 and are integers.   [Agra 2002]

7. Max. Z = x1 + x2
   subject to x1 + 7x2 ≤ 28
   14x1 + 4x2 ≤ 63
   x1, x2 ≥ 0 and integers.

8. Min. Z = 3x1 + 2.5x2
   subject to x1 + 2x2 ≥ 20
   3x1 + 2x2 ≥ 50
   x1, x2 ≥ 0 and integers.

9. Max. Z = 2x1 + 3x2
   s.t. −3x1 + 7x2 ≤ 14
   7x1 − 3x2 ≤ 14
   x1, x2 ≥ 0 and integers.

10. Max. Z = 2x1 + x2
    s.t. x1 ≤ 3/2
    x2 ≤ 3/2
    x1, x2 ≥ 0 and integers.

Multiple Choice Questions


1. Rounding off the optimum values of the variables to the nearest integer values in an
L.P. problem :
(a) May not satisfy all the given constraints
(b) The value of the objective function so obtained may not be the optimal value
(c) Both (a) and (b) are true
(d) None of these

2. While solving an I.P.P., any non-integer variable in the solution obtained by the
simplex method is picked up to :
(a) Enter the solution (b) Leave the solution
(c) Obtain the Gomory cut constraint (d) None of these

3. Addition of the Gomory constraint to the optimum simplex table :

(a) Makes the previous optimal solution infeasible
(b) Makes the previous optimal solution non-optimal
(c) Leaves the previous optimal solution feasible and optimal
(d) None of these

4. The infeasible solution obtained by the addition of the Gomory constraint equation
to the optimal simplex table is changed to feasible optimal solution by using :
(a) Simplex method (b) Dual simplex method
(c) Revised simplex method (d) None of these

5. Gomory's cutting plane method may be used for the solution of :


(a) All integer programming problem
(b) Mixed-integer programming problem
(c) Both (a) and (b)
(d) None of (a) and (b)

6. Branch-and-Bound technique may be used for the solution of :


(a) All-integer programming problem
(b) Mixed-integer programming problem
(c) Both (a) and (b)
(d) None of (a) and (b)

Fill in the Blank(s)


1. An L.P.P. in which some or all of the variables must take non-negative integral values
is referred to as ..................... .

2. An integer programming problem can be solved by ..................... cutting plane


method.

3. Gomory's cutting plane method was developed by ..................... .

4. Branch-and-Bound method to solve I.P.P. was developed by ..................... and


..................... .

Exercise 9.3
3. x1 = 2, x2 = 2; x1 = 3, x2 = 1; or x1 = 4, x2 = 0; Z = 4
4. x1 = 2, x2 = 1, Z = 3
5. x1 = 4, x2 = 3, Z = 55
6. x1 = 3, x2 = 5, Z = 27
7. x1 = 3, x2 = 3, Z = 6
8. x1 = 14, x2 = 4, Z = 52
9. x1 = 3, x2 = 3, Z = 15
10. x1 = 1, x2 = 1, Z = 3

Multiple Choice Questions


1. (c) 2. (c)

3. (a) 4. (b)

5. (c) 6. (c)

Fill in the Blank


1. I.P.P. 2. Gomory

3. R.E. Gomory 4. A.H. Land and A.G. Doig


Unit-5

Chapter-10: The Transportation Problem

Chapter-11: The Assignment Problem



10.1 Introduction
As discussed earlier, the most general method for solving a linear programming
problem is the simplex method. However, there is a particular class of linear
programming problems, related to a lot of very practical problems, generally called
‘transportation problems’. It is far simpler to solve these problems by the
‘transportation technique’ than by the ‘simplex method’.

10.2 The Nature of Transportation Problem


Let there be more than one centre, called ‘origins’ or ‘sources’, from where various
amounts of a single homogeneous commodity need to be transported to more than one
place, called ‘destinations’. The costs of transporting from each of the origins to each of
the destinations are different and known. The problem is to transport the amounts of the
commodity from the various origins to the various destinations in such a way that the total
cost of transportation is a minimum. This type of problem is known as a transportation
problem. It occurs very frequently in practical life, i.e., it matches many real situations.
For example, suppose a fan manufacturing concern has m factories located in different
cities, and the total supply of the manufactured product is absorbed by n retail dealers in n
different cities of the country. Then the transportation problem is to form the transportation
schedule that will minimize the total cost of transporting fans from the different factory
locations to the different retail dealers.

10.3 Mathematical Formulation of Transportation Problem


[Kanpur 2012; Gorakhpur 2011]

Let there be m origins and n destinations (n may or may not be equal to m), with the i-th
origin possessing ai units of a certain product and the j-th destination requiring bj units of
the same product. Assume that the total available is equal to the total required,

i.e., ∑_{i=1}^{m} ai = ∑_{j=1}^{n} bj   ...(1)

Let cij be the cost of transporting one unit of the product from the i-th origin to the j-th
destination and xij be the quantity transported from the i-th origin to the j-th destination.
Then the problem is to determine non-negative (≥ 0) values of xij, satisfying the availability
restrictions as well as the requirement restrictions, in such a way that the total
transportation cost is minimized,

i.e., find xij (≥ 0) for i = 1, 2, ..., m ; j = 1, 2, ..., n which minimize

Z = ∑_{i=1}^{m} ∑_{j=1}^{n} cij xij   ...(2)

such that ∑_{j=1}^{n} xij = ai, i = 1, 2, ..., m   ...(3)

and ∑_{i=1}^{m} xij = bj, j = 1, 2, ..., n   ...(4)

The equations (3) and (4) may be called the row and column equations respectively.

Note : 1. The objective function (2) and the constraint equations (3) and (4) are all
linear in xij, so the above problem may be looked upon as a linear programming problem.
Thus a transportation problem is a special type of L.P.P.

2. Since all xij ≥ 0, it follows that each ai ≥ 0 and each bj ≥ 0.
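
Since, by Note 1, (2)-(4) constitute an L.P.P., a transportation problem can be handed directly to a general L.P. solver. The sketch below (function name hypothetical) builds one equality row per origin and per destination and calls scipy.optimize.linprog; on the 4 × 3 problem used later in article 10.11, the minimum cost works out to 76 (the initial solutions found there cost 102, 83 and 80).

import numpy as np
from scipy.optimize import linprog

def solve_transportation(cost, a, b):
    m, n = len(a), len(b)
    A_eq, b_eq = [], []
    for i in range(m):                     # row equations (3): sum_j x_ij = a_i
        row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1
        A_eq.append(row); b_eq.append(a[i])
    for j in range(n):                     # column equations (4): sum_i x_ij = b_j
        col = np.zeros(m * n); col[j::n] = 1
        A_eq.append(col); b_eq.append(b[j])
    res = linprog(np.ravel(cost), A_eq=A_eq, b_eq=b_eq)   # x_ij >= 0 by default
    return res.x.reshape(m, n), res.fun

cost = [[2, 7, 4], [3, 3, 1], [5, 4, 7], [1, 6, 2]]
x, z = solve_transportation(cost, [5, 8, 7, 14], [7, 9, 18])
print(z)                                   # 76.0, the minimum transportation cost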

10.4 Tabular Representation of Transportation Problem


Suppose there are m factories (origins) Fi (i = 1, 2,..., m) and n warehouses (destinations)
Wj ( j = 1, 2,..., n). The transportation problem as described above can be represented in a
tabular form as follows :

Destination →    W1    W2    ⋯    Wj    ⋯    Wn    Available
Origin ↓
F1              c11   c12    ⋯   c1j    ⋯   c1n      a1
F2              c21   c22    ⋯   c2j    ⋯   c2n      a2
⋮                ⋮     ⋮          ⋮          ⋮        ⋮
Fi              ci1   ci2    ⋯   cij    ⋯   cin      ai
⋮                ⋮     ⋮          ⋮          ⋮        ⋮
Fm              cm1   cm2    ⋯   cmj    ⋯   cmn      am
Requirement      b1    b2    ⋯    bj    ⋯    bn    ∑ai = ∑bj

The calculations are made directly on the transportation array given below, which gives
the current trial solution.

Destination →    W1    W2    ⋯    Wj    ⋯    Wn    Available
Origin ↓
F1              x11   x12    ⋯   x1j    ⋯   x1n      a1
F2              x21   x22    ⋯   x2j    ⋯   x2n      a2
⋮                ⋮     ⋮          ⋮          ⋮        ⋮
Fi              xi1   xi2    ⋯   xij    ⋯   xin      ai
⋮                ⋮     ⋮          ⋮          ⋮        ⋮
Fm              xm1   xm2    ⋯   xmj    ⋯   xmn      am
Requirement      b1    b2    ⋯    bj    ⋯    bn    ∑ai = ∑bj

The above two tables can be combined together by writing the costs cij within brackets ( ).

10.5 Feasible Solution, Basic Feasible Solution and


Optimum Solution of Transportation Problem
Now we shall define a few terms that are used in the transportation problem.
1. Feasible Solution (F.S.) : A feasible solution to a transportation problem is a set of
non-negative individual allocations (xij ≥ 0) which satisfies the row and column sum
restrictions [i.e., equations (3) and (4) of article 10.3].

2. Basic Feasible Solution (B.F.S.) : A feasible solution of an m by n transportation


problem is said to be basic if the total number of positive allocations is equal to
m + n −1 i.e., one less than the sum of the number of rows and columns.
[Meerut 2007 (BP), 08, 12 (BP)]

3. Optimum solution : A feasible solution (not necessarily basic) is said to be


optimum if it minimizes the total transportation cost.

10.6 Existence of Feasible Solution


Theorem A necessary and sufficient condition for the existence of feasible solution of a
transportation problem is

∑ ai = ∑ bj (i = 1, 2, . . . , m ; j = 1, 2, . . . , n).
[Kanpur 2009]

Proof: The condition is necessary : Let there exist a feasible solution to the
transportation problem. Then

∑_{j=1}^{n} xij = ai, i = 1, 2, ..., m and ∑_{i=1}^{m} xij = bj, j = 1, 2, ..., n

Summing over all i and j respectively, we get

∑_{i=1}^{m} ∑_{j=1}^{n} xij = ∑_{i=1}^{m} ai and ∑_{j=1}^{n} ∑_{i=1}^{m} xij = ∑_{j=1}^{n} bj

⇒ ∑_{i=1}^{m} ai = ∑_{j=1}^{n} bj.

The condition is sufficient : Let ∑_{i=1}^{m} ai = ∑_{j=1}^{n} bj = k (say).

If xij = λi bj for all i and j, where λi ≠ 0 is any real number, then

∑_{j=1}^{n} xij = ∑_{j=1}^{n} λi bj = λi ∑_{j=1}^{n} bj = k λi ⇒ λi = (1/k) ∑_{j=1}^{n} xij = ai/k

Thus, xij = λi bj = ai bj / k, for all i and j.

As ai ≥ 0, bj ≥ 0, so xij ≥ 0 for all i and j.

Hence a feasible solution exists.
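
A short numerical check of this sufficiency argument, on an arbitrary balanced problem (data chosen for illustration):

import numpy as np

a, b = np.array([5, 8, 7, 14]), np.array([7, 9, 18])    # both sum to k = 34
x = np.outer(a, b) / a.sum()                            # x_ij = a_i b_j / k
print(np.allclose(x.sum(axis=1), a), np.allclose(x.sum(axis=0), b))   # True True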



Note : Such a transportation problem in which ∑ ai = ∑ bj is termed as balanced


transportation problem. Hence a balanced transportation problem always has a F.S.

10.7 Basic Feasible Solution of a Transportation Problem


[Meerut 2004]

A transportation problem is a special case of a linear programming problem. So the
definition of a B.F.S. is the same as given earlier for an L.P.P. But we find that in a
transportation problem, out of the mn unknowns there are only m + n − 1 basic variables.
This happens due to redundancy in the constraints of the transportation problem. The
condition ∑ai = ∑bj can be used to reduce one constraint. This can be easily justified by
proving the following theorem :

Theorem Out of the (m + n) equations, there are only m + n − 1 independent equations in a
transportation problem, m and n being the number of origins and destinations, and any
one equation can be dropped as the redundant equation.

Proof: Consider the m row equations and n − 1 column equations of the transportation
problem as

∑_{j=1}^{n} xij = ai, i = 1, 2, ..., m   ...(1)

and ∑_{i=1}^{m} xij = bj, j = 1, 2, ..., n − 1   ...(2)

Now adding the m origin constraints given in (1), we get

∑_{i=1}^{m} ∑_{j=1}^{n} xij = ∑_{i=1}^{m} ai   ...(3)

Also, adding the (n − 1) destination constraints given in (2), we get

∑_{j=1}^{n−1} ∑_{i=1}^{m} xij = ∑_{j=1}^{n−1} bj   ...(4)

Subtracting (4) from (3), we get

∑_{i=1}^{m} ∑_{j=1}^{n} xij − ∑_{j=1}^{n−1} ∑_{i=1}^{m} xij = ∑_{i=1}^{m} ai − ∑_{j=1}^{n−1} bj

or ∑_{i=1}^{m} [∑_{j=1}^{n} xij − ∑_{j=1}^{n−1} xij] = ∑_{j=1}^{n} bj − ∑_{j=1}^{n−1} bj   [∵ ∑ai = ∑bj]

or ∑_{i=1}^{m} xin = bn, which is the n-th destination constraint.
i =1

It follows that if m + n − 1 constraints are satisfied then the (m + n)-th constraint will be
automatically satisfied, due to the condition ∑ai = ∑bj. Thus we have only (m + n − 1)
linearly independent equations; out of the (m + n) equations, any one is redundant.

It indicates that a B.F.S. will contain at most m + n − 1 positive variables, the others being
zero.

Hence the theorem is proved.

10.8 Existence of an Optimal Solution


Theorem There always exists an optimal solution to a balanced transportation problem.
Proof: We have ∑_{i=1}^{m} ai = ∑_{j=1}^{n} bj.

It follows that a feasible solution of the problem exists, i.e., xij ≥ 0 for all i and j.

From the constraints of the problem, each xij ≤ min (ai, bj).

Thus 0 ≤ xij ≤ min (ai, bj), i.e., the feasible region of the problem is non-empty, closed and
bounded.

Hence, there exists an optimal solution.

10.9 Loops in Transportation Table and Their Properties


[Meerut 2008, 09 (BP)]
Loop : Definition : An ordered set of four or more cells is said to form a loop if it has the
following properties :
1. Any two adjacent cells of the set lie either in the same row or in the same column
and
2. No three or more adjacent cells lie in the same row or in the same column.

The first cell of the set will follow the last one in the set.

We get a closed path satisfying the above conditions (1) and (2) if we join the cells of a
loop by horizontal and vertical line segments.

Consider the two sets L = {(1,1), (4,1), (4,4), (2,4), (2,3), (1,3)}

and L′ = {(3,1), (3,4), (2,4), (2,3), (2,2), (4,2), (1,1)},

where the (i, j)-th cell of the transportation table is denoted by (i, j). Then it can be
observed that the set L forms a loop while the set L′ does not form a loop, because the
three cells (2,4), (2,3) and (2,2) lie in the same row. A diagrammatic illustration is given
below, the cells being placed in their table positions (joining adjacent cells of L by
horizontal and vertical segments gives a closed path; for L′ it does not) :

Loop                                  Non-Loop

(1,1)           (1,3)                 (1,1)

        (2,3)   (2,4)                         (2,2)   (2,3)   (2,4)

                                      (3,1)                   (3,4)

(4,1)           (4,4)                         (4,2)

Properties : (i) Every loop has an even number of cells.


(ii) A feasible solution to a transportation problem is basic if and only if, the
corresponding cells in the transportation table do not form a loop.

10.10 Solution of a Transportation Problem


[Meerut 2004]

The solution of a Transportation problem consists of the following two steps :

Step 1 : To find an initial basic feasible solution.


Step 2 : To obtain an optimal solution by making successive improvements to the initial
basic feasible solution (obtained in step 1) until no further decrease in the transportation
cost is possible.

10.11 Methods to Find an Initial Basic Feasible Solution


Here we describe some simple methods to obtain the initial basic feasible solution.

10.11.1 Method 1 : North-West Corner Rule [Meerut 2005]

In this rule we have the following steps :

Step 1 : Start with the cell (1, 1) at the North-West corner, i.e., the top-most left corner,
and allocate there the maximum possible amount. Thus x11 = min (a1, b1).

Step 2 : (i) If b1 < a1 then x11 = b1 and there is still some quantity available in row 1.
So move to the right hand cell (1, 2) and make the second allocation of amount
x12 = min (a1 − x11, b2) in the cell (1, 2).
(ii) If b1 > a1, then x11 = a1 and there is still some requirement left in column 1. So move
vertically downwards to the cell (2, 1) and make the second allocation of amount
x21 = min (a2, b1 − x11) in this cell.
(iii) If b1 = a1, then x12 = 0 or x21 = 0.

Start from the new North-West corner of the reduced transportation table and allocate
there as much as possible.

Step 3 : Repeat steps 1 and 2 until all the available quantity is exhausted or all the
requirement is satisfied. A code sketch of the rule is given below.
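
A minimal sketch of the rule (function name hypothetical); on the example that follows it reproduces the allocations x11 = 5, x21 = 2, x22 = 6, x32 = 3, x33 = 4, x43 = 14 of total cost ₹102.

def north_west_corner(a, b):
    a, b = list(a), list(b)            # copied, because the procedure consumes them
    m, n = len(a), len(b)
    x = [[0] * n for _ in range(m)]
    i = j = 0
    while i < m and j < n:
        q = min(a[i], b[j])            # allocate as much as possible at (i, j)
        x[i][j] = q
        a[i] -= q; b[j] -= q
        if a[i] == 0 and i < m - 1:    # row exhausted: move down
            i += 1
        else:                          # column satisfied: move right
            j += 1
    return x

print(north_west_corner([5, 8, 7, 14], [7, 9, 18]))
# [[5, 0, 0], [2, 6, 0], [0, 3, 4], [0, 0, 14]] -- cost 102, as in the example below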

The following example explains the method :

Find the initial basic feasible solution of the following transportation problem :

To
                W1     W2     W3    Available
      F1         2      7      4        5
From  F2         3      3      1        8
      F3         5      4      7        7
      F4         1      6      2       14
Requirement      7      9     18       34

First we construct an empty 4 by 3 matrix complete with row and column requirements.

Start with the cell (1, 1) at the North-West corner (top-most left corner) and allocate it
maximum possible amount. Thus x11 = 5 as minimum of a1 = 5 and b1 = 7 is 5.

To
                W1      W2      W3     Available
      F1      5 (2)                        5
From  F2      2 (3)   6 (3)                8
      F3              3 (4)   4 (7)        7
      F4                     14 (2)       14
Requirement     7       9      18

There is no amount left available at source 1, so in place of moving to the right we move
vertically downwards to the cell (2, 1) and allocate as much as possible there. Column 1
still needs the amount 2 and the amount 8 is available in row 2, so we allocate the
maximum amount 2 to the cell (2, 1), i.e., x21 = 2. Thus the allocations for column 1 are
complete. Now we move to the right of the cell (2, 1). Since the amount 6 is still available
in row 2 and the amount 9 is needed in column 2, we allocate the maximum amount 6 in
the cell (2, 2), i.e., x22 = 6. This completes the allocations for row 2. Now we move
vertically downwards to the cell (3, 2). In column 2 the amount 3 is still needed and in
row 3 the amount available is 7, so we allocate the maximum amount 3 in the cell (3, 2), i.e.,
x32 = 3. Thus the allocations for column 2 are complete. Now the amount 4 is still available
in row 3 and the amount 18 is needed in column 3, so we move to the cell (3, 3) and allocate
the maximum amount 4 to this cell, i.e., x33 = 4. Thus there is no amount left available at
source 3. Now we move downwards to the cell (4, 3). The amount 14 is still needed in
column 3 and an equal amount 14 is available in row 4, so we allocate the amount 14 to
the cell (4, 3), i.e., x43 = 14. The resulting feasible solution is shown in the above table.
The allocations in the cells are such that the total in each row and each column is the
same as shown against the respective rows and columns.

Multiplying each individual allocation by its corresponding unit cost in ( ), and adding,
the total cost corresponding to this feasible solution is

= ₹ (5·2 + 2·3 + 6·3 + 3·4 + 4·7 + 14·2) = ₹102.

Note : In this method we always move to the right or down, so no loop can be formed
by drawing horizontal and vertical lines through the allocations. Also at each step
(allocation) at least one row or column is discarded from further consideration, while the
last allocation discards both a row and a column simultaneously, so we cannot get more
than (m + n − 1) individual positive allocations. Thus we always get a non-degenerate basic
feasible solution by the North-West corner rule.

10.11.2 Method 2 : Lowest Cost Entry (Matrix Minima) Method


In this method we have the following steps :

Step 1 : Examine the cost matrix carefully and find the lowest cost. Let it be cij. Then
allocate as much as possible in the cell (i, j), i.e., xij = min (ai, bj).

Step 2 : (i) If xij = ai, then the capacity of the i-th origin is completely exhausted. In this
case cross out the i-th row of the transportation table and decrease the requirement bj by
ai. Now go to step 3.
(ii) If xij = bj, then the requirement of the j-th destination is completely satisfied. In this
case cross out the j-th column of the transportation table and decrease ai by bj. Now
go to step 3.
(iii) If xij = ai = bj, then either cross out the i-th row or the j-th column, but not both. Now
go to step 3.

Step 3 : Repeat steps 1 and 2 for the reduced transportation table until all the available is
exhausted or all the requirement is satisfied. A code sketch of the method follows.
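
A minimal sketch of the method (function name hypothetical). Ties between equal lowest costs are broken here by cell position, which happens to match the choices made in the example below; other tie-breaks may give a different, equally valid B.F.S.

def matrix_minima(cost, a, b):
    a, b = list(a), list(b)
    m, n = len(a), len(b)
    x = [[0] * n for _ in range(m)]
    live = {(i, j) for i in range(m) for j in range(n)}   # cells not yet crossed out
    while live:
        i, j = min(live, key=lambda c: (cost[c[0]][c[1]], c))   # cheapest open cell
        q = min(a[i], b[j])
        x[i][j] = q
        a[i] -= q; b[j] -= q
        if a[i] == 0:                  # origin exhausted: cross out row i
            live -= {(i, t) for t in range(n)}
        else:                          # destination satisfied: cross out column j
            live -= {(t, j) for t in range(m)}
    return x

cost = [[2, 7, 4], [3, 3, 1], [5, 4, 7], [1, 6, 2]]
print(matrix_minima(cost, [5, 8, 7, 14], [7, 9, 18]))   # total cost 83 here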

Note : If the cell of lowest cost is not unique, we can select any one of these cells.

The method is well explained by taking the same numerical example as in method 1.

To
                W1      W2      W3
      F1       (2)    2 (7)   3 (4)      5
From  F2       (3)     (3)    8 (1)      8
      F3       (5)    7 (4)    (7)       7
      F4      7 (1)    (6)    7 (2)     14
                7       9      18
First we write the cost and requirement matrix. We examine the cost matrix and find that
the lowest cost 1 occurs in cell (2, 3) and in cell (4, 1). We choose any one of these, say the
cell (2, 3), and allocate the maximum possible amount 8 to this cell. This exhausts the
availability of F2 and leaves the requirement 10 of W3. Now, leaving the second row, in
the reduced transportation table we find the lowest cost 1 in cell (4, 1). Here we
allocate the maximum possible amount 7. This satisfies the requirement of W1 and leaves
the availability 7 in F4. Leaving the first column, in the reduced transportation table we
find the lowest cost 2 in the cell (4, 3). We allocate the maximum possible amount 7 to
this cell. This exhausts the availability of F4 and leaves the requirement 3 of W3. Leaving
the 4th row, in the reduced transportation table, we find the lowest cost 4 in cell
(1, 3) and in cell (3, 2). We allocate the amount 3 in the cell (1, 3). This satisfies the
requirement of W3 and leaves the availability 2 in F1. In order to exhaust the availability of
F1 we allocate the amount 2 to the cell (1, 2). To complete the requirement of 9 units in
column 2, we allocate the amount 7 to the cell (3, 2).

Thus we get the required B.F.S. shown in the above table.

The transportation cost = ₹ (2·7 + 3·4 + 8·1 + 7·4 + 7·1 + 7·2) = ₹83

10.11.3 Method 3 : Unit Cost Penalty Method


(Vogel's Approximation Method)
In this method we have the following steps :

Step 1 : Identify the smallest and next-to-smallest costs in each row of the
transportation table and find the difference between them for each row. Write these
differences alongside the transportation table against the respective rows, enclosing
them in parentheses. Write the similar differences for each column below the
corresponding column. These are called ‘penalties’.

Step 2 : Now select the row or column for which the penalty is the largest. If a tie occurs,
use any arbitrary tie-breaking choice. Allocate the maximum possible amount to the cell
with the lowest cost in that particular row or column. Let the largest penalty correspond to the
i-th row and let cij be the smallest cost in the i-th row. Allocate the amount xij = min (ai, bj)
in the cell (i, j). Then cross out the i-th row or the j-th column in the usual manner and
construct the reduced matrix with the remaining availabilities and requirements.

Step 3 : Now compute the row and column penalties for the reduced transportation table
and repeat step 2. We continue this process until all the available quantity is
exhausted or all the requirements are satisfied. A code sketch of the method follows.
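
A minimal sketch of Vogel's method (function name hypothetical). Penalty ties are broken arbitrarily here, so the allocations may differ from the text's under other tie-breaking choices; on the example below this version reproduces the B.F.S. of cost ₹80.

def vogel(cost, a, b):
    a, b = list(a), list(b)
    rows, cols = set(range(len(a))), set(range(len(b)))
    x = [[0] * len(b) for _ in a]

    def penalty(costs):                    # smallest minus next-smallest cost
        s = sorted(costs)
        return s[1] - s[0] if len(s) > 1 else s[0]

    while rows and cols:
        rp = {i: penalty([cost[i][j] for j in cols]) for i in rows}
        cp = {j: penalty([cost[i][j] for i in rows]) for j in cols}
        i_star, j_star = max(rp, key=rp.get), max(cp, key=cp.get)
        if rp[i_star] >= cp[j_star]:       # largest penalty: pick its cheapest cell
            i = i_star; j = min(cols, key=lambda j: cost[i][j])
        else:
            j = j_star; i = min(rows, key=lambda i: cost[i][j])
        q = min(a[i], b[j])
        x[i][j] = q; a[i] -= q; b[j] -= q
        if a[i] == 0: rows.discard(i)      # cross out the exhausted row
        else: cols.discard(j)              # or the satisfied column
    return x

cost = [[2, 7, 4], [3, 3, 1], [5, 4, 7], [1, 6, 2]]
print(vogel(cost, [5, 8, 7, 14], [7, 9, 18]))   # B.F.S. of total cost 80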

The method is well explained by taking the same numerical example as in method 1.

First we write the cost and requirement matrix and compute the penalties as follows :
                W1      W2      W3    Available   Penalties
F1            5 (2)     (7)     (4)       5          (2)
F2             (3)      (3)     (1)       8          (2)
F3             (5)      (4)     (7)       7          (1)
F4             (1)      (6)     (2)      14          (1)
Requirement     7        9      18
Penalties      (1)      (1)     (1)

We find that the maximum penalty (2) is associated with row 1 and row 2, so we may
select any one of these. If we select row 1, then we allocate the maximum possible
amount to the lowest cost cell in this row, i.e., cell (1, 1). Thus x11 = min (5, 7) = 5. This
exhausts the availability of F1, so we cross out row 1. Leaving this row, the reduced
cost and requirement matrix is as follows :
                W1      W2      W3    Available   Penalties
F2             (3)      (3)    8 (1)      8          (2)
F3             (5)      (4)     (7)       7          (1)
F4             (1)      (6)     (2)      14          (1)
Requirement     2        9      18
Penalties      (2)      (1)     (1)

We note that the amount still needed in column 1 is 2.

Since the maximum penalty (2) is associated with the first row and the first column of this
reduced table, we may select any one of these. We select the first row (that of F2) and
allocate the maximum possible amount 8 to the cell with the lowest cost 1 in this row. This
exhausts the availability of F2 and leaves the requirement 10 for W3. Again leaving the
row of F2, the reduced transportation table is as follows :
                W1      W2      W3    Available   Penalties
F3             (5)      (4)     (7)       7          (1)
F4             (1)      (6)   10 (2)     14          (1)
Requirement     2        9      10
Penalties      (4)      (2)     (5)

In this table, the maximum penalty (5) is associated with column 3, so the maximum
possible amount 10 is allocated to the cell with the lowest cost 2 in this column. This
completes the requirement of W3. After leaving the column corresponding to W3, the
remaining table is as follows :

                W1      W2    Available   Penalties
F3             (5)    7 (4)       7          (1)
F4            2 (1)   2 (6)       4          (5)
Requirement     2        9
Penalties      (4)      (2)

In this table, the maximum penalty (5) is associated with the second row, so the maximum
possible amount 2 is allocated to the cell with the lowest cost 1 in this row.

The remaining amount 2 available at F4 is allocated to the cell with cost 6. Lastly, to meet
the requirement of W2, the amount 7 is allocated to the cell with cost 4.

Thus we get the required B.F.S. as shown in the following table :

                W1      W2      W3    Available
F1            5 (2)     (7)     (4)       5
F2             (3)      (3)    8 (1)      8
F3             (5)    7 (4)     (7)       7
F4            2 (1)   2 (6)  10 (2)      14
Requirement     7        9      18

The total transportation cost

= ₹ (5·2 + 8·1 + 7·4 + 2·1 + 2·6 + 10·2) = ₹80

Note : If in the selected row or column the minimum cost is not unique, then allocate in
that cell in which more allocation can be made at the lower cost.

Important : Although Vogel's method takes more time as compared to the other methods,
it reduces the time needed to reach the optimal solution. To obtain the optimal solution
the students are advised to find the initial B.F.S. by Vogel's method.

1. Define transportation problem. [Meerut 2008 (BP)]

2. What is transportation problem ? Give the mathematical formulation of


transportation problem. [Kanpur 2012; Gorakhpur 2011]

3. If all the sources are emptied and all the destinations are filled show that

∑ ai = ∑ bj is a necessary and sufficient condition for the existence of a feasible


solution to a transportation problem.

4. Prove that there are only m + n −1 independent equations in a transportation


problem, m and n being the no. of origins and destinations respectively and that any
one equation can be dropped as the redundant equation.

5. Explain the North-West corner rule for obtaining an initial basic feasible solution of
a transportation problem.

6. Explain the lowest cost entry method for obtaining an initial basic feasible solution
of a transportation problem.

7. Explain Vogel's Approximation Method of solving a transportation problem.

8. Use North-West corner rule to determine an initial basic feasible solution to the
following transportation problem :

(i)              To
          I    II   III   IV   Supply
     A   13    11    15   20      2
From B   17    14    12   13      6
     C   18    18    15   12      7
Demand    3     3     4    5            [Meerut 2004]

(ii)           Destination
          D1   D2   D3   D4   Supply
     O1    6    4    1    5      14
Origin O2  8    9    2    7      16
     O3    4    3    6    2       5
Demand     6   10   15    4


9. Determine an initial B.F.S. to the following transportation problem by using the


North-West corner rule :
Destination
          I    II   III   IV    V   Supply
     A    2    11    10    3    7      4
Origin B  1     4     7    2    1      8
     C    3     9     4    8   12      9
Demand    3     3     4    5    6
10. Using ‘lowest cost entry method’ find the initial B.F.S. of the following
transportation problem :
(i) Destinations
A B C D Supply

I 1 5 3 3 34

Origins II 3 3 1 2 15

III 0 2 2 3 12

IV 2 7 2 4 19

Demand 21 25 17 17 [Meerut 2009]

(ii) Destination
D1 D2 D3 D4 Capacity

O1 1 2 3 4 6

Origin O2 4 3 2 0 8

O3 0 2 2 1 10

Demand 4 6 8 6
11. Obtain an initial B.F.S. to the following transportation problem using Vogel's
approximation method :
To
           I    II   III   IV   Available
     A     5     1     3    3      34
From B     3     3     5    4      15
     C     6     4     4    3      12
     D     4     1     4    2      19
Requirement 21  25    17   17

12. Determine an initial B.F.S. to the following transportation table using (i) matrix
minima method (ii) Vogel's approximation method :
Destination
D1 D2 D3 D4 Supply

O1 1 2 1 4 30

Origin O2 3 3 2 1 50

O3 4 2 5 9 20

Demand 20 40 30 10 100

13. Find the initial basic feasible solution of the following transportation problem using
(i) North-West corner rule (ii) matrix minima method (iii) Vogel's approximation
method:
Warehouse
W1 W2 W3 W4 Capacity

F1 19 30 50 10 7

Factory F2 70 30 40 60 9

F3 40 8 70 20 18

Requirement 5 8 7 14

14. Find initial basic feasible solutions of the following transportation problems by
Vogel's Approximation method :

(i)            Destination
          D1   D2   D3   D4   Supply
     O1    5    8    3    6      30
Sources O2 4    5    7    4      50
     O3    6    2    4    6      20
Demand    30   40   20   10     100        [Kanpur 2009]

(ii)           Destination
          D1   D2   D3   D4   Supply
     S1    3    7    6    4       5
Sources S2 2    4    3    2       2
     S3    4    3    8    5       3
Demand     3    3    2    2                [Gorakhpur 2007]

(iii)          Destination
          D1   D2   D3   D4   Supply
     O1   11   13   17   14     250
Origin O2 16   18   14   10     300
     O3   21   24   13   10     400
Demand   200  225  275  250     950        [Gorakhpur 2008]

(iv)           Destination
          D1   D2   D3   D4   Supply
     O1   21   16   25   13      11
Origin O2 17   18   14   23      13
     O3   32   27   18   41      19
Demand     6   10   12   15      43        [Gorakhpur 2009, 10]

(v)            Stores
            S1   S2   S3   S4   Supply
     A       5    1    3    3      34
Warehouse B  3    3    5    4      15
     C       6    4    4    3      12
     D       4    1    4    2      19
Demand      21   25   17   17      80      [Kanpur 2008]

Answers

8. (i) x11 = 2, x21 = 1, x22 = 3, x23 = 2, x33 = 2, x34 = 5
   (ii) x11 = 6, x12 = 8, x22 = 2, x23 = 14, x33 = 1, x34 = 4

9. x11 = 3, x12 = 1, x22 = 2, x23 = 4, x24 = 2, x34 = 3, x35 = 6

10. (i) x11 = 9, x12 = 8, x14 = 17, x23 = 15, x31 = 12, x42 = 17, x43 = 2
    (ii) x12 = 6, x23 = 2, x24 = 6, x31 = 4, x33 = 6

11. x12 = 25, x13 = 9, x21 = 15, x33 = 8, x34 = 4, x41 = 6, x44 = 13

12. For (i) and (ii) both : x11 = 20, x13 = 10, x22 = 20, x23 = 20, x24 = 10, x32 = 20

13. (i) x11 = 5, x12 = 2, x22 = 6, x23 = 3, x33 = 4, x34 = 14
    (ii) x14 = 7, x21 = 2, x23 = 7, x31 = 3, x32 = 8, x34 = 7
    (iii) x11 = 5, x14 = 2, x23 = 7, x24 = 2, x32 = 8, x34 = 10

14. (i) x11 = 10, x13 = 20, x21 = 20, x22 = 20, x24 = 10, x32 = 20
    (ii) x11 = 3, x14 = 2, x23 = 2, x32 = 3
    (iii) x11 = 200, x12 = 50, x22 = 175, x24 = 125, x33 = 275, x34 = 125
    (iv) x14 = 11, x21 = 6, x22 = 3, x24 = 4, x32 = 7, x33 = 12
    (v) x12 = 25, x13 = 9, x21 = 15, x31 = 4, x33 = 8, x41 = 2, x44 = 17

10.12 Non-Degenerate Basic Feasible Solution


A feasible solution of an m by n transportation problem is said to be a non-degenerate basic
feasible solution if it has the following two properties :
1. The total number of positive allocations is exactly equal to m + n − 1.
2. These allocations are in independent positions.

In other words, if a F.S. involves exactly (m + n − 1) independent individual positive
allocations, then it is known as a non-degenerate B.F.S.; otherwise, it is said to be a
degenerate B.F.S.

Independent position of a set of allocations means that it is always impossible to form any
closed loop through these allocations. A loop may or may not involve all the allocations. It
consists of (at least 4) horizontal and vertical lines with an allocation at each corner,
which, in turn, is the join of a horizontal and a vertical line.

[Two illustrative tables, with the positions of the allocations indicated by *, show
allocations in independent positions and in non-independent positions respectively.]

10.13 Optimality Test


After obtaining an initial basic feasible solution to a given transportation problem, we
test this solution for optimality, i.e., whether or not the solution thus obtained minimizes
the total transportation cost.

It is important to note that the optimality test is applicable only to a non-degenerate B.F.S.,
i.e., to a F.S. consisting of (m + n − 1) allocations in independent positions.

10.13.1 The Stepping Stone Method of Testing the Optimality


Here we determine a value, termed the cell evaluation, corresponding to each empty cell
of the matrix. For this we start with an empty cell (i.e., a cell in which there is no
allocation) and allocate 1 unit to this cell. To maintain the row and column sums we
make the necessary adjustments in the solution. Then the net change in the total cost
resulting from the adjustment, called the cell evaluation of that cell, is calculated. If the cell
evaluation is positive then the new solution increases the total transportation cost, while
if the cell evaluation is negative, then the new solution reduces the transportation cost. Thus if
all the cell evaluations are greater than or equal to zero then we cannot decrease the total
cost any more. Hence the solution under test is the required optimal solution.

Since there are mn − (m + n − 1) = (m − 1)(n − 1) empty cells, there are (m − 1)(n − 1)
such cell evaluations. So the process of computing the cell evaluations individually for all the
unoccupied cells is very complicated. To avoid this complication, we prove the following
result :

Theorem If we have a feasible solution consisting of m + n − 1 independent allocations,
and if numbers ui and vj satisfy crs = ur + vs for each occupied cell (r, s), then the cell
evaluation dij corresponding to each empty cell (i, j) is given by dij = cij − (ui + vj).

Proof: The transportation problem is to find xij ≥ 0 which minimize

Z = ∑_{i=1}^{m} ∑_{j=1}^{n} cij xij   ...(1)

subject to the restrictions

∑_{j=1}^{n} xij = ai or 0 = ai − ∑_{j=1}^{n} xij, i = 1, 2, ..., m   ...(2)

and ∑_{i=1}^{m} xij = bj or 0 = bj − ∑_{i=1}^{m} xij, j = 1, 2, ..., n   ...(3)

Multiplying (2) by ui (i = 1, 2, ..., m), (3) by vj (j = 1, 2, ..., n) and adding to (1), we have

Z = ∑_{i=1}^{m} ∑_{j=1}^{n} cij xij + ∑_{i=1}^{m} ui (ai − ∑_{j=1}^{n} xij) + ∑_{j=1}^{n} vj (bj − ∑_{i=1}^{m} xij)

or Z = ∑_{i=1}^{m} ∑_{j=1}^{n} [cij − (ui + vj)] xij + ∑_{i=1}^{m} ui ai + ∑_{j=1}^{n} vj bj   ...(4)

But it is given that for each occupied cell (r, s) (cell with a positive allocation)

crs = ur + vs.   ...(5)

Hence in the objective function (4), all the terms of the positive allocations vanish, as their
coefficients are zero. Thus, for this feasible solution the value of the objective function
(4) reduces to

Z = ∑_{i=1}^{m} ui ai + ∑_{j=1}^{n} vj bj.   ...(6)

Let us determine the cell evaluation for the empty cell (h, k). When we allocate one unit
to this empty cell, the positive allocations become (m + n) in number and hence they
become dependent in position. So a closed loop can be formed. Let the closed loop thus
formed be as shown below, where +1 and −1 denote the adjustments made at the corner
cells :

(h, k) +1 ——— (h, s) −1
   |              |
(r, k) −1 ——— (r, s) +1

Here the cells (h, s), (r, s) and (r, k) are all occupied cells and hence

chs = uh + vs, crs = ur + vs, crk = ur + vk.

We have to decrease the individual allocations at the (h, s) and (r, k) cells and increase that
at the cell (r, s) by 1 unit, to maintain the row and column sums. So the values of the individual
allocations in these occupied cells are changed, but in the objective function (4) they
contribute nothing, as their coefficients are necessarily zero. Thus, corresponding to this
new solution [allocating 1 unit at the cell (h, k)], the value of the objective function is given
by

Z′ = chk − (uh + vk) + ∑_{i=1}^{m} ui ai + ∑_{j=1}^{n} vj bj   ...(7)

Hence the cell evaluation is given by

dhk = Z′ − Z = chk − (uh + vk).

Thus, in general, for each empty cell (i, j) the cell evaluation is given by

dij = cij − (ui + vj).

This completes the proof of the theorem.

Note 1 : The above result is proved by considering a square (or rectangle) shaped loop. It
can be generalised by considering a loop of an arbitrary shape connecting the empty cell to
the occupied cells.

Note 2 : Since there are (m + n − 1) equations of the form (5) in (m + n) unknowns ui and
vj, on assigning an arbitrary value to one of the ui or vj the rest of the (m + n − 1)
unknowns can be easily found. Generally, we choose that ui or vj = 0 for which
the corresponding row or column has the maximum number of individual allocations.
The numbers ui and vj are called fictitious costs or shadow costs.

10.14 The Modified Distribution (MODI) Method


The modified distribution (MODI) method is similar to the stepping stone method
except that here each of the unoccupied cells is evaluated more efficiently. In
the stepping stone method a closed path is traced for each of the unoccupied cells, their
cell evaluations (the net changes in the total cost resulting from the adjustment) are
calculated, and the unoccupied cell with the most negative cell evaluation is selected to
enter the next solution. In the modified distribution method, however, the improvement
(opportunity) costs of all the unoccupied cells are calculated without tracing their
respective closed paths; we need to trace only one closed path, after identifying the
unoccupied cell with the most negative value. Hence a lot of time and energy can be saved
by choosing the modified distribution (MODI) method over the stepping stone method.

10.15 Transportation Algorithm for Minimization


Problem (MODI) Method
For a minimization transportation problem the transportation algorithm gives an iterative
procedure. The procedure determines an optimum solution in a finite number of steps,
which are as follows :

Step 1 : Construct a transportation table, entering the capacities a1, a2, ..., am of the
origins and the requirements b1, b2, ..., bn of the destinations. Enter the costs cij in ( ) at
the upper left corners of all the cells.

Step 2 : Find an initial basic feasible solution (allocations in independent positions) of
the problem by one of the methods given in article 10.11. Enter these allocations at the
centres of the cells.

Step 3 : Find a set of (m + n) numbers ui (i = 1, 2, ..., m) and vj (j = 1, 2, ..., n), one for each
row and column, such that for each occupied cell (r, s)

crs = ur + vs.

Step 4 : Calculate the value ui + vj for each unoccupied cell (i, j) and enter it at the upper
right corner of the corresponding cell.

Step 5 : Calculate the cell evaluation dij = cij − (ui + vj) for each unoccupied cell (i, j)
and enter it at the lower right corner of the corresponding cell. [Thus each unoccupied cell
carries cij at the upper left, ui + vj at the upper right and dij at the lower right corner.]

Step 6 : Examine the sign of each dij for the unoccupied cells and apply the optimality test :

(i) If all dij > 0, then the solution under test is optimal and unique.

(ii) If all dij ≥ 0, with at least one dij = 0, then the solution under test is optimal and an
alternative optimal solution exists.
(iii) If at least one dij < 0, then the solution is not optimal. In this case go to step 7.

Step 7 : Select the minimum (most negative) dij. Form a closed loop joining this cell of
minimum dij with cells of positive allocations. Then allocate in this cell as much as possible,
vacating at least one of the pre-occupied cells and maintaining the row and column sum
restrictions. This will give a new B.F.S.

Step 8 : Repeat steps 3 to 6 to test the optimality of this new B.F.S. Continue the
process until an optimum basic feasible solution is obtained. A code sketch of steps 3 to 5
follows.
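
Steps 3 to 5 admit a short sketch in code (function name hypothetical; it assumes a non-degenerate B.F.S., so that the occupied cells connect all the rows and columns). The potential fixed to zero here is u1 rather than the choice made in the examples, but the evaluations dij come out the same.

def modi_evaluations(cost, occupied):
    m, n = len(cost), len(cost[0])
    u, v = {0: 0}, {}                      # fix one potential arbitrarily (step 3)
    while len(u) < m or len(v) < n:
        for (r, s) in occupied:            # propagate c_rs = u_r + v_s
            if r in u and s not in v:
                v[s] = cost[r][s] - u[r]
            elif s in v and r not in u:
                u[r] = cost[r][s] - v[s]
    return {(i, j): cost[i][j] - (u[i] + v[j])    # steps 4-5: d_ij for empty cells
            for i in range(m) for j in range(n) if (i, j) not in occupied}

# Example 1 below (0-based indices): occupied cells of the Vogel B.F.S.
cost = [[5, 3, 6, 2], [4, 7, 9, 1], [3, 4, 7, 5]]
occ = {(0, 1), (0, 2), (1, 0), (1, 3), (2, 0), (2, 2)}
print(modi_evaluations(cost, occ))         # all >= 0; the cell (2, 1) evaluates to 0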

Example 1: Determine the optimum basic feasible solution to the following transportation
problem :
           D1    D2    D3    D4    ai ↓
O1          5     3     6     2     19
O2          4     7     9     1     37
O3          3     4     7     5     34
bj →       16    18    31    25              [Meerut 2005]

Solution: Step 1 : Using Vogel's method, the initial basic feasible solution is obtained as
follows :
            D1         D2         D3         D4
O1         (5)        (3) 18     (6) 1      (2)        19
O2         (4) 12     (7)        (9)        (1) 25     37
O3         (3) 4      (4)        (7) 30     (5)        34
            16         18         31         25

Total transportation cost

= ₹ (18·3 + 1·6 + 12·4 + 25·1 + 4·3 + 30·7) = ₹355.

Step 2 : Now ui (i = 1, 2, 3) and vj (j = 1, 2, 3, 4) are to be determined by means of the unit
costs in the respective occupied cells only. For each occupied cell (r, s), crs = ur + vs.

Since all the rows contain the same (maximum) number of allocations, we take any one of
the ui (say u3) equal to zero.

When u3 = 0, v3 = 7 (since c33 = u3 + v3 ; c33 = 7).

Similarly c31 = u3 + v1 or 3 = 0 + v1 or v1 = 3.

Again c21 = u2 + v1 or 4 = u2 + 3 or u2 = 1.

In the same way, c24 = 1 = u2 + v4 gives v4 = 0 ;

c13 = 6 = u1 + v3 gives u1 = −1 and c12 = 3 = u1 + v2 gives v2 = 4.

Step 3 : Now we find the evaluation ui + v j for each unoccupied cell (i, j) and enter it at

the upper right corner of the corresponding unoccupied cell.

Step 4 : Then we find the cell evaluation dij = cij − (ui + vj) for each unoccupied cell (i, j) and enter it at the lower right corner of the corresponding unoccupied cell.

Thus we get the following table :


ui ↓
(5) (2) (3) (6) (2) (–1)
18 1 –1
(3) (3)
(4) (7) (5) (9) (8) (1)
12 25 1
(2) (1)
(3) (4) (4) (7) (5) (0)
4 30 0
(0) (5)
vj →
3 4 7 0

Step 5 : Since all dij ≥ 0, the solution under test is optimal. But an alternative optimal

solution will also exist as d32 = 0.

Thus the solution of the given problem is

x12 = 18, x13 = 1, x21 = 12, x24 = 25, x31 = 4, x33 = 30

and minimum transportation cost = ` 355.
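As a cross-check, the same problem can be posed as an ordinary linear programming problem and handed to a general solver. A minimal sketch using SciPy's linprog (assuming SciPy is available; this merely verifies the MODI result, it is not part of the method):

import numpy as np
from scipy.optimize import linprog

cost = np.array([[5, 3, 6, 2], [4, 7, 9, 1], [3, 4, 7, 5]], dtype=float)
supply, demand = [19, 37, 34], [16, 18, 31, 25]
m, n = cost.shape

A_eq, b_eq = [], []
for i in range(m):                      # each origin ships out its capacity
    row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1
    A_eq.append(row); b_eq.append(supply[i])
for j in range(n):                      # each destination receives its requirement
    col = np.zeros(m * n); col[j::n] = 1
    A_eq.append(col); b_eq.append(demand[j])

res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.fun)                          # 355.0, agreeing with the result above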



Example 2: Find the optimum solution to the following transportation problem :

D1 D2 D3 D4 Supply

O1 23 27 16 18 30

O2 12 17 20 51 40

O3 22 28 12 32 53

Demand 22 35 25 41
[Meerut 2009]

Solution: Step 1: By Vogel's method an initial B.F.S. of the given problem is given in the
following table :
D1 D2 D3 D4 ai ↓

(23) (27) (16) (18)


O1 30 30

(12) (17) (20) (51)


O2 5 35 40

(22) (28) (12) (32)


O3 17 25 11 53

bj → 22 35 25 41

Total transportation cost

= ` (18 . 30 + 12 . 5 + 17 . 35 + 22 .17 + 12 . 25 + 32 .11) = ` 2221.

Step 2 : Now we determine a set of ui and v j such that for each occupied cell

(r , s), crs = ur + vs .

For this we choose u3 = 0 (since row 3 contains maximum number of allocations).

Since c31 = 22 = u3 + v1, c33 = 12 = u3 + v3 , c34 = 32 = u3 + v4

∴ v1 = 22, v3 = 12, v4 = 32.

Also c21 = 12 = u2 + v1, c22 = 17 = u2 + v2 , c14 = 18 = u1 + v4

∴ u2 = −10, v2 = 27, u1 = −14.

Step 3 : Now we find the evaluation ui + vj for each unoccupied cell (i, j) and enter it at the upper right corner of the corresponding unoccupied cell.

Step 4 : Then we find the cell evaluation dij = cij − (ui + vj) for each unoccupied cell (i, j) and enter it at the lower right corner of the corresponding unoccupied cell.

Thus we get the following table :


ui ↓
(23) (8) (27) (13) (16) (–2) (18)
30 –14
(15) (14) (18)
(12) (17) (20) (2) (51) (22)
5 35 –10
(18) (29)
(22) (28) (27) (12) (32)
17 25 11 0
(1)
vj → 22 27 12 32

Step 5 : Since all dij for empty cells are > 0, the solution under test is optimal.

Thus the solution of the given problem is

x14 = 30, x21 = 5, x22 = 35, x31 = 17, x33 = 25, x34 = 11

and minimum transportation cost = ` 2221.

Example 3: Solve the following transportation problem in which cell entries represent unit
cost :
To
O/D D1 D2 D3 Available
O1 2 7 4 5
From O2 3 3 1 8
O3 5 4 7 7
O4 1 6 2 14
Required 7 9 18 34

[Meerut 2006 (BP), 07 (BP), 08 (BP); Kanpur 2010, 11]

Solution: Step 1 : Using Vogel's method, the initial B.F.S. is obtained as given below :
ai ↓
(2) (7) (4)
5 5

(3) (3) (1)


8 8

(5) (4) (7)


7 7

(1) (6) (2)


2 2 10 14
bj → 7 9 18

Total transportation cost = ` (2 . 5 + 1. 8 + 4 . 7 + 1. 2 + 6 . 2 + 2 .10) = ` 80

Step 2 : To test the solution for optimality, we find a set of

ui (i = 1, 2, 3, 4) and v j ( j = 1, 2, 3)

such that for each occupied cell (r , s), crs = ur + vs .

For this let us choose u4 = 0 (as row 4 contains maximum number of allocations).

Now c41 = 1 = u4 + v1, c42 = 6 = u4 + v2 , c43 = 2 = u4 + v3

∴ v1 = 1, v2 = 6, v3 = 2.

Also c11 = 2 = u1 + v1, c23 = 1 = u2 + v3 , c32 = 4 = u3 + v2

∴ u1 = 1, u2 = −1, u3 = −2.

Step 3 : Now we find the evaluation ui + v j for each unoccupied cell (i, j) and enter it at

the upper right corner of the corresponding unoccupied cell.

Step 4 : Then we find the cell evaluation dij = cij − (ui + v j ) for each unoccupied cell (i, j)

and enter it at the lower right corner of the corresponding unoccupied cell.

Thus we get the following table :


ui ↓
(2) (7) (7) (4) (3)
5 1
(0) (1)
(3) (0) (3) (5) (1)
8 –1
(3) (–2)
(5) (–1) (4) (7) (0)
7 –2
(6) (7)
(1) (6) (2)
2 2 10 0

vj → 1 6 2

Step 5 : From the above table, we observe that the cell evaluation d22 = −2 is negative. Therefore the solution under test is not optimal. The solution can be improved as shown in the next step.

Step 6 : Since the minimum dij is d22 = −2 (negative), we allocate as much as possible (say, θ) to the cell (2, 2).

Now that we have decided to include one more cell in the solution, the occupied cells become dependent. Therefore we first identify the closed loop joining cell (2, 2) with the occupied cells.

It is easily seen by the following rule that at most θ = 2 units can be shifted from cell (4, 2) to cell (2, 2) while still satisfying the row and column totals and the non-negativity restrictions on the allocations.

Rule to determine θ : The value of θ, in general, is obtained by equating to zero the


minimum of the allocations containing − θ (not + θ) at the corners of the closed loop.

Here min [8 − θ, 2 − θ] = 0

or 2 −θ = 0

or θ = 2 units.
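This rule is also easy to mechanize. The fragment below is our own illustration (the dictionary-of-allocations representation is an assumption, not the book's notation); it applies the θ-adjustment along a closed loop whose corners are listed in order, the entering cell first:

def apply_theta(alloc, loop):
    # corners alternate +theta, -theta, +theta, ... starting at the entering cell
    minus = loop[1::2]
    theta = min(alloc[c] for c in minus)          # the rule stated above
    for k, cell in enumerate(loop):
        alloc[cell] = alloc.get(cell, 0) + (theta if k % 2 == 0 else -theta)
    for c in minus:                               # vacate one emptied cell; any
        if alloc[c] == 0:                         # other cell emptied at the same
            del alloc[c]                          # time stays basic with a zero
            break                                 # (degenerate) allocation
    return alloc, theta

# the loop of this example, 0-indexed: (2,2) -> (2,3) -> (4,3) -> (4,2)
alloc = {(0, 0): 5, (1, 2): 8, (2, 1): 7, (3, 0): 2, (3, 1): 2, (3, 2): 10}
alloc, theta = apply_theta(alloc, [(1, 1), (1, 2), (3, 2), (3, 1)])
print(theta, alloc[(1, 1)])                       # 2 2, as computed above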

Thus cell (2, 2) enters the solution, while cell (4, 2) leaves the solution, i.e., it becomes empty. Thus an improved B.F.S. is obtained. The illustration is shown in the following tables :

(2) (7) (4)


5 5 5 5

(3) (3) (1)


θ 8–θ 8 2 6 8

(5) (4) (7)


7 7 7 7

(1) (6) (2)


2 2–θ 10+θ 14 2 12 14

7 9 18 7 9 18

Total transportation cost for this B.F.S.

= ` (2 . 5 + 3 . 2 + 1. 6 + 4 . 7 + 1. 2 + 2 .12)

= ` 76,

which is less than that for the initial B.F.S.

Now we shall test this improved B.F.S. for optimality.

Step 7 : Proceeding as in steps 2, 3 and 4 we get the following table :



ui ↓
(2) (7) (5) (4) (3)
5 1
(2) (1)
(3) (0) (3) (1)
2 6 –1
(3)
(5) (1) (4) (7) (2)
7 0
(4) (5)
(1) (6) (4) (2)
12 0
(2)
vj→ 1 4 2

Step 8 : Since dij > 0 for all empty cells, the solution under test is optimal.

Thus the solution of the given problem is

x11 = 5, x22 = 2, x23 = 6, x32 = 7, x41 = 2, x43 = 12

and minimum transportation cost = ` 76.

Example 4: There are three parties who supply and three who require the following
quantities of coal :

Party 1 14 tons Consumer A 6 tons


Party 2 12 tons Consumer B 10 tons
Party 3 5 tons Consumer C 15 tons

The cost matrix is as follows :


A B C

1 6 8 4

2 4 9 3

3 1 2 6

Find the schedule of transportation policy which minimizes the cost.

Solution: By matrix minima method the initial B.F.S. of the problem is as follows :

Destinations
A B C
Available
(6) (8) (4)
1 1 10 3 14

(4) (9) (3)


2 12 12
Origins

(1) (2) (6)


3 5 5

Required 6 10 15

Total transportation cost = ` (6 .1 + 8 .10 + 4 . 3 + 3 .12 + 1. 5) = ` 139.

To test the solution for optimality : Finding the set of ui (i = 1, 2, 3), v j ( j = 1, 2, 3) such

that for occupied cells crs = ur + vs and then entering evaluations (ui + v j ) and dij in the
unoccupied cells, we get the following table :
ui↓
(6) (8) (4)
1 10 3 0

(4) (5) (9) (7) (3)


12 –1
(–1) (2)
(1) (2) (3) (6) (–1)
5 –5
(–1) (7)

vj→ 6 8 4

Since the dij are not all ≥ 0, the solution under test is not optimal.

First iteration : Here two cell evaluations are negative, and both are equally the most negative. We may include any one of them in the solution. Let us allocate at cell (2, 1) as much as possible.
Improved solution
(6) (8) (4)
1–θ 10 3+θ 10 4

(4) (9) (3)


θ 12 – θ 1 11

(1) (2) (6)


5 5

Here min [1 − θ,12 − θ] = 0

⇒ 1− θ = 0

⇒ θ = 1.

Thus cell (2,1) enters the solution while cell (1, 1) leaves the solution i.e., it becomes
empty.

Transportation cost = ` (8 .10 + 4 . 4 + 4 .1 + 3 .11 + 1. 5)

= ` 138.

To test the improved solution for optimality : We have the following table giving all
the necessary information :
ui↓
(6) (5) (8) (4)
10 4 0
(1)
(4) (9) (7) (3)
1 11 –1
(2)
(1) (2) (4) (6) (0)
5 –4
(–2) (6)
vj→ 5 8 4

Since the dij are not all ≥ 0, the solution under test is not optimal.

Second Iteration : Since the most negative cell evaluation is d32 = −2, we allocate as much as possible to cell (3, 2).

Next Improved solution

(6) (8) (4)


10 – θ 4+θ 5 9

(4) (9) (3)


1+θ 11 – θ 6 6

(1) (2) (6)


5–θ θ 5

Here min [5 − θ, 10 − θ, 11 − θ] = 0 ⇒ 5 − θ = 0 ⇒ θ = 5.

Thus cell (3, 2) enters the solution while cell (3, 1) leaves the solution.

Transportation cost = ` (8 . 5 + 4 . 9 + 4 . 6 + 3 . 6 + 2 . 5) = ` 128.



To test the next improved solution for optimality.


ui ↓
(6) (5) (8) (4)
5 9 1
(1)
(4) (9) (7) (3)
6 6 0
(2)
(1) (–1) (2) (6) (–2)
5 –5
(2) (8)

vj → 4 7 3

Since all dij for empty cells are > 0, the solution under test is optimal.

Thus the solution of the given problem is

x12 = 5, x13 = 9, x21 = 6, x23 = 6, x32 = 5

and minimum cost = ` 128.

Example 5: Given the following data :

Destinations
1 2 3 Capacity

1 2 2 3 10

Sources 2 4 1 2 15

3 1 3 × 40

Demand 20 15 30

The cost of shipment from the third source to the third destination is not known. How many units should be transported from the sources to the destinations so that the total cost of transporting all the units to their destinations is a minimum ?

Solution: Since the cost c33 is unknown, we assign a large cost, say M, to this cell.

Then using Vogel's method an initial B.F.S. is obtained as shown in the table.

1 2 3
ai ↓
(2) (2) (3)
1 10 10

(4) (1) (2)


2 15 15

(1) (3) (M)


3 20 15 5 40

bj → 20 15 30

To test the solution for optimality we have the following table :


ui ↓
(2) (4 – M) (2) (6 – M) (3)
10 3–M
(M – 2) (M – 4)

(4) (3 – M) (1) (5 – M) (2)


15 2–M
(M + 1) (M – 4)

(1) (3) (M)


20 15 5 0

vj → 1 3 M

Since M is very large, we observe that all the cell evaluations dij ≥ 0. Hence the current
solution is optimum.

Thus the solution of the given problem is

x13 = 10, x23 = 15, x31 = 20, x32 = 15, x33 = 5.

Note : The cell (3, 3) also appears in the solution, although the cost of shipment for it is not known. Such a solution is known as a pseudo optimum basic feasible solution.

10.16 Degeneracy in Transportation Problems


[Meerut 2006 (BP)]

We recall that a B.F.S. to an m-origin and n-destination transportation problem will contain at most (m + n −1) independent non-zero allocations. If this number is exactly (m + n −1), the B.F.S. is said to be non-degenerate, otherwise it is said to be degenerate. Thus degeneracy in a transportation problem occurs whenever the number of independent individual allocations is less than (m + n −1).

Degeneracy in a transportation problem can occur in two ways :

1. The B.F.S. may be degenerate from the initial stage onward.

2. It may become degenerate at any intermediate stage, when the selection of one entering cell empties two or more pre-occupied cells simultaneously.

In such cases, to resolve degeneracy, we allocate an extremely small amount (close to zero) to one or more empty cells of the matrix (generally the lowest cost cells, if possible), so that the total number of occupied (allocated) cells becomes (m + n −1) at independent positions.

The extremely small quantity usually denoted by ∆ (delta) or ε (epsilon) satisfies the
following conditions :

1. ∆ < x ij for x ij > 0

2. x ij + ∆ = x ij = x ij − ∆, x ij > 0

3. ∆ + 0 = ∆.

4. If more than one ∆ is introduced in the solution, then
(i) if ∆, ∆′ are in the same row, ∆ < ∆′ when ∆ is to the left of ∆′ and
(ii) if ∆, ∆′ are in the same column, ∆ < ∆′ when ∆ is above ∆′.

The above rules show that even after introducing ∆ the original solution of the problem is not changed; ∆ is merely a device for applying the optimality test. As ∆ has no physical significance, it is ultimately omitted.
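In a computer implementation the symbol ∆ is usually realized as a number too small to disturb any comparison. The sketch below is only an illustration of this idea; it pads a degenerate solution up to (m + n − 1) occupied cells, but it does not itself verify that the chosen cells are in independent positions, which must still be checked:

EPS = 1e-9          # computational stand-in for Delta: x + EPS behaves like x

def resolve_degeneracy(alloc, m, n, cost):
    # occupy additional empty cells with EPS, lowest cost first,
    # until there are m + n - 1 occupied cells
    need = (m + n - 1) - len(alloc)
    empties = sorted(((cost[i][j], (i, j)) for i in range(m) for j in range(n)
                      if (i, j) not in alloc))
    for _, cell in empties:
        if need == 0:
            break
        alloc[cell] = EPS
        need -= 1
    return alloc

# degenerate B.F.S. of Example 1 below: 4 occupied cells, m + n - 1 = 5
alloc = {(0, 0): 1, (1, 0): 3, (2, 1): 2, (2, 2): 2}
cost = [[50, 30, 220], [90, 45, 170], [250, 200, 50]]
print(len(resolve_degeneracy(alloc, 3, 3, cost)))   # 5; Delta lands in cell (1, 2)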

Following examples will make the procedure clear :

Example 1: A manufacturer wants to ship 8 loads of his product as shown in the table. The matrix gives the mileage from origin O to destination D. Shipping costs are ` 10 per load per mile. What shipping schedule should be used ?

D1 D2 D3 Available

O1 50 30 220 1

O2 90 45 170 3

O3 250 200 50 4

Required 4 2 2

Solution: The initial B.F.S. by Vogel's method is obtained as follows :



D1 D2 D3
ai ↓
(50) (30) (220)
O1 1 1

(90) (45) (170)


O2 3 3

(250) (200) (50)


O3 2 2 4

bj → 4 2 2

Since the total number of allocations is 4, which is one less than m + n − 1 = 5, this solution is a degenerate solution. So the attempt to assign ui and vj values to the above table will not succeed.

To resolve this degeneracy we allocate a very small amount ∆ to some suitable cell. We allocate ∆ to the cell (1, 2), getting 5 allocations at independent positions.

(50) (30) (220)


1 ∆ 1+∆ = 1

(90) (45) (170)


3 3

(250) (200) (50)


2 2 4

4 2+ ∆ = 2 2

To test the solution for optimality we have the following table :
ui ↓
(50) (30) (220) –120
1+θ ∆−θ –170
340
(90) (45) 70 (170) –80
3−θ θ –130
–25 250
(250) 220 (200) (50)
2 2 0
30
vj → 220 200 50

Since d22 = −25 < 0, the solution under test is not optimal. Now we shall allocate to this cell (2, 2) as much as possible. Thus we take ∆ from cell (1, 2) to cell (2, 2) and form the new table to check the solution for optimality.
ui ↓
(50) (30) 5 (220) –145
1 –195
25 365
(90) (45) (170) –105
3 ∆ –155
275
(250) 245 (200) (50)
2 2 0
5
vj → 245 200 50

Since all dij for empty cells are > 0, the solution under test is optimal.

Thus the solution of the given problem is

x11 = 1, x21 = 3, x32 = 2, x33 = 2

and minimum mileage = 50 .1 + 90 . 3 + 200 . 2 + 50 . 2 = 820 i.e., minimum cost = ` 8200.

Example 2: Solve the following transportation problem :


Destinations
D1 D2 D3 ai ↓

O1 7 4 0 5

Sources O2 6 8 0 15

O3 3 9 0 9

bj → 15 6 8
[Meerut 2006]

Solution: By ‘North-West Corner Rule’ the non-degenerate initial B.F.S. is obtained in


the following table :
D1 D2 D3

(7) (4) (0)


O1 5 5

(6) (8) (0)


O2 10 5 15

(3) (9) (0)


O3 1 8 9

15 6 8

Now to test this solution for optimality we get the following table in usual manner :
ui ↓
(7) (4) 9 (0) 0
5 0
–5 0
(6) (8) (0) –1
10 5 –1
1
(3) 7 (9) (0)
1 8 0
–4
vj → 7 9 0

Since the cell evaluations are not all ≥ 0, the solution under test is not optimal.

The largest negative cell evaluation is d12 = −5, so we allocate as much as possible to this
cell (1, 2).

(7) (4) (0)


5–θ θ 5 5 5

(6) (8) (0)


10 + θ 5–θ 15 15 15

(3) (9) (0)


1 8 9 1 8 9

15 6 8 15 6 8

Here the maximum value of θ is obtained by the usual rule :

min [5 − θ, 5 − θ] = 0 i.e., 5 − θ = 0 i.e., θ = 5.

Since two cells vacate simultaneously, the number of allocations becomes less than m + n −1 i.e., 5. Hence this is a degenerate solution, and technically we cannot apply the optimality test. A negligible quantity ∆ may be introduced in the independent cell (3, 1), although the least cost independent cell is (2, 3).

(7) (4) (0)


5 5

(6) (8) (0)


15 15

(3) (9) (0)


∆ 1 8 9+∆ = 9

15 +∆ = 15 6 8

To test this solution for optimality we have the following table :


ui ↓
(7) –2 (4) (0) –5
5 –5
9 5
(6) (8) 12 (0) 3
15 3
–4 –3
(3) (9) (0)
∆ 1 8 0

vj →
3 9 0

Since the cell evaluations are not all ≥ 0, the solution under test is not optimal. The most negative cell evaluation is d22 = −4, so we allocate as much as possible to the cell (2, 2).

(7) (4) (0)


5 5 5

(6) (8) (0)


15 – θ θ 14 1 15

(3) (9) (0)


∆+θ 1–θ 8 1 8 9

15 6 8

Here maximum value of θ is obtained by usual rule :

min (15 − θ,1 − θ) = 0 i.e., 1 − θ = 0 i.e., θ = 1.

To test this new improved solution for optimality we have the following table :
ui ↓
(7) 2 (4) (0) –1
5 –4
5 1
(6) (8) (0) 3
14 1 0
–3
(3) (9) 5 (0)
1 8 –3
4
vj → 6 8 3

Since the cell evaluations are not all ≥ 0, the solution under test is not optimal. The most negative cell evaluation is d23 = −3, so we allocate as much as possible to the cell (2, 3).

(7) (4) (0)


5 5 5

(6) (8) (0)


14 – θ 1 θ 6 1 8 15

(3) (9) (0)


8–θ 9 9

15 6 8

Here maximum value of θ is obtained by usual rule :

min (14 − θ, 8 − θ) = 0 i.e., 8 − θ = 0 or θ = 8.

To test this new improved solution for optimality we have the following table :
ui ↓
(7) 2 (4) (0) –4
5 –4
5 4
(6) (8) (0)
6 1 8 0

(3) (9) 5 (0) –3


9 –3
4 3
vj → 6 8 0

Since all the cell evaluations for empty cells are positive, the solution under test is
optimal.

Thus the solution of the given problem is

x12 = 5, x21 = 6, x22 = 1, x23 = 8, x31 = 9

and minimum transportation cost = ` (4 . 5 + 6 . 6 + 8 .1 + 0 . 8 + 3 . 9) = ` 91.

10.17 Unbalanced Transportation Problem


[Meerut 2008, 10; Gorakhpur 2007, 11]
If in a transportation problem the sum of all available quantities is not equal to the sum of all requirements, i.e., if ∑ ai ≠ ∑ bj (the sums running over i = 1, 2, ..., m and j = 1, 2, ..., n), then such a problem is called an unbalanced transportation problem.

An unbalanced transportation problem may occur in two different forms :



1. Shortage in availability, i.e., ∑ ai < ∑ bj : To modify this type of unbalanced transportation problem to the balanced type, we introduce a dummy source (row) in the transportation table. The unit transportation costs from this dummy source to every destination are all set equal to zero. The availability at this dummy source is taken to be equal to the difference ∑ bj − ∑ ai .
2. Excess of availability, i.e., ∑ ai > ∑ b j : To modify this type of unbalanced

transportation problem to balanced type we introduce a dummy destination


column in the transportation table. The unit transportation costs to this dummy
destination from any source are all set equal to zero. The requirement at this
dummy destination is assumed to be equal to the difference ∑ ai − ∑ bj .
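In code, both modifications amount to padding the cost matrix with a zero-cost row or column. A minimal sketch (the function name balance is our own):

def balance(cost, supply, demand):
    # pad with a dummy source (row) or a dummy destination (column) of zero costs
    s, d = sum(supply), sum(demand)
    cost = [row[:] for row in cost]
    if s < d:                                  # shortage in availability
        cost.append([0] * len(demand))
        supply = supply + [d - s]
    elif s > d:                                # excess of availability
        for row in cost:
            row.append(0)
        demand = demand + [s - d]
    return cost, supply, demand

# data of Example 1 below: total supply 35 against total demand 17
cost, supply, demand = balance([[4, 3, 2], [2, 5, 0], [3, 8, 6]],
                               [10, 13, 12], [8, 5, 4])
print(demand)                                  # [8, 5, 4, 18] : dummy D4 added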

Example 1: Solve the following unbalanced transportation problem (symbols have their
usual meanings) :
D1 D2 D3 ai ↓

O1 4 3 2 10

O2 2 5 0 13

O3 3 8 6 12

bj → 8 5 4

Solution: Here ∑ ai = 35, ∑ bj = 17. Since ∑ ai is greater than ∑ bj , the problem


is of unbalanced type. We convert this problem to a balanced one by introducing a
fictitious destination D4 with requirement 35 − 17 = 18 having all the transportation costs
equal to zero. The balanced transportation table is given below :
D1 D2 D3 D4 ai ↓

O1 4 3 2 0 10

O2 2 5 0 0 13

O3 3 8 6 0 12

bj → 8 5 4 18

Applying the Vogel's method in the usual manner, the initial B.F.S. is obtained as given
below :

D1 D2 D3 D4
ai ↓
(4) (3) (0) (0)
O1 5 5 10

(2) (5) (0) (0)


O2 8 4 1 13

(3) (8) (6) (0)


O3 12 12

bj → 8 5 4 18

This gives the transportation cost

= ` (3 . 5 + 0 . 5 + 2 . 8 + 0 . 4 + 0 .1 + 0 .12) = ` 31.

To test this solution for optimality we have the following table :


ui ↓
(4) 2 (3) (0) 0 (0)
5 5 0
2 2
(2) (5) 3 (0) (0)
8 4 1 0
2
(3) 2 (8) 3 (6) 0 (0)
12 0
1 5 6
vj → 2 3 0 0

Since all dij ≥ 0, so the solution under test is optimal.

Thus the solution of the given problem is x12 = 5, x21 = 8, x23 = 4 and min. cost = ` 31.

(Allocations in dummy column are not considered).


Exercise 10.2

1. Explain how it is tested whether a B.F.S. of a transportation problem is optimal or


not ?
2. How does the problem of degeneracy arise in a transportation problem ?
[Meerut 2008 (BP)]
3. Explain how to solve the degeneracy in transportation problems ?
4. Explain briefly the step-wise description of the computational procedure for solving
the transportation problem.
5. What do you mean by an unbalanced transportation problem ? Explain how to
convert an unbalanced transportation problem into a balanced one ?
[Meerut 2008; Gorakhpur 2007, 08, 11]
6. Obtain an initial feasible solution to the following transportation problem.
Is this solution an optimal solution ? If not, obtain the optimal solution.
W1 W2 W3 W4 ai ↓

F1 19 30 50 10 7

F2 70 30 40 60 9

F3 40 8 70 20 18

bj → 5 8 7 14 [Meerut 2008, 12]


7. Solve the transportation problem.
D1 D2 D3 D4 Available

O1 1 2 1 4 30

O2 3 3 2 1 50

O3 4 2 5 9 20

Required 20 40 30 10 100 Total [Kanpur 2007]


8. Determine the optimum basic feasible solution to the following transportation
problem :
D1 D2 D3 D4 D5 Capacity

O1 5 5 6 4 2 9

O2 6 9 7 8 5 13

O3 5 6 4 6 3 9

Demand 3 7 8 5 8
where Oi and D j denote i-th origin and j-th destination respectively.
[Meerut 2012 (BP)]

9. Is x13 = 50, x14 = 20, x21 = 55, x31 = 30, x32 = 35, x34 = 25 an optimum solution
of the following transportation problem ?
Available units

6 1 9 3 70

11 5 2 8 55

10 12 4 7 90

Required units 85 35 50 45

If not, modify it to obtain an optimum basic feasible solution.


[Meerut 2011 (BP); Gorakhpur 2007]

10. A company has four plants P1, P2 , P3 , P4 from which it supplies to three markets
M1, M2 , M3 . Determine the optimal transportation plan from the following data
giving the plant to market shifting costs, quantities available at each plant and
quantities required at each market.

P1 P2 P3 P4 Required

M1 19 14 23 11 11

M2 15 16 12 21 13

M3 30 25 16 39 19

Available 6 10 12 15

11. The following table gives the cost for transporting material from supply points
A, B, C and D to demand points E, F, G, H and J.
E F G H J

A 8 10 12 17 15

B 15 13 18 11 9
From
C 14 20 6 10 13

D 13 19 7 5 12

The present allocation is as follows :


A to E 90; A to F 10; B to F 150; C to F 10; C to G 50; C to J 120; D to H 210; D to J 70.
Check if this allocation is optimum. If not, find an optimum schedule.
12. Solve the following transportation problem for minimum cost :

To
I II III IV

A 15 10 17 18 2

From B 16 13 12 13 6

C 12 17 20 11 7

3 3 4 5 [Meerut 2009 (BP)]

13. Solve the following problem :


To Supply

21 16 25 13 11

From 17 18 14 23 13

32 27 18 41 19

Demand 6 10 12 15 43 Total [Meerut 2011]

14. Obtain an optimum B.F.S. to the following degenerate transportation problem :


To Available

7 3 4 2

2 1 3 3
From
3 4 6 5

Demand 4 1 5 [Meerut 2007]

15. Given below the unit cost array with supplies ai ; i = 1, 2, 3 and demand bj ;
j = 1, 2, 3, 4.
ai ↓

8 10 7 6 50

12 9 4 7 40

9 11 10 8 30
bj →
25 32 40 23

Find the optimal solution to the above problem.


16. A company has three plants A, B and C and three warehouses X , Y and Z. Number
of units available at the plants is 60, 70 and 80 respectively. Demands at X , Y and
Z are 50, 80 and 80 respectively. Unit costs of transportation are as follows :

X Y Z

A 8 7 3

B 3 8 9

C 11 3 5

What would be your transportation plan ? Give minimum distribution cost.


17. The cost-requirement table for the transportation problem is given below :
To
W1 W2 W3 W4 W5 Available

F1 4 3 1 2 6 40

F2 5 2 3 4 5 30
From
F3 3 5 6 3 2 20

F4 2 4 4 5 3 10
Required 30 30 15 20 5
Obtain the optimal solution of the problem.
18. Solve the transportation problem where all entries are unit costs.
D1 D2 D3 D4 D5 ai ↓

O1 73 40 9 79 20 8

O2 62 93 96 8 13 7

O3 96 65 80 50 65 9

O4 57 58 29 12 87 3

O5 56 23 87 18 12 5
bj →
6 8 10 4 4 [Meerut 2004]

19. Solve the following transportation problem (cell entries represent unit cost) :
Available

5 3 7 3 8 5 3

5 6 12 5 7 11 4

2 1 2 4 8 2 2

9 6 10 5 10 9 8

Required 3 3 6 2 1 2 17

20. Solve the following cost minimizing transportation problems :


(i)
D1 D2 D3 D4 Available
O1 5 3 6 5 15
O2 10 7 12 4 11
O3 7 5 8 4 13
Demand 8 12 13 6

(ii)
D1 D2 D3 D4 Available
O1 1 2 3 4 6
O2 4 3 2 0 8
O3 0 2 2 1 10
Demand 4 6 8 6

21. Consider the following unbalanced transportation problem :


To
1 2 3 Supply
1 5 1 7 10
From 2 6 4 6 80
3 3 2 5 15
Demand 75 20 50

Since there is not enough supply, some of the demands at these destinations may
not be satisfied. Suppose there are penalty costs for every unsatisfied demand unit
which are given by 5, 3 and 2 for destination 1, 2 and 3 respectively. Find the
optimal solution.
22. Solve the following transportation problem :
To
D1 D2 D3 D4 ai ↓

O1 5 3 6 2 19

From O2 4 7 9 1 37
O3 3 4 7 5 34
bj →
16 18 32 25 [Meerut 2010]

Multiple Choice Questions


1. An assignment problem is a special case of an m × n transportation problem in which :
(a) m=n (b) m = 2n
(c) 2m = n (d) None of these
2. In a basic feasible solution of an m by n transportation problem the number of positive allocations is at most :
(a) m+ n (b) m + n −1
(c) m−n (d) None of these

3. The necessary and sufficient condition for the existence of a feasible solution of a
transportation problem is :
(a) ∑ ai = ∑ bj (b) ∑ ai ≠ ∑ bj
(c) ∑ ai = 0 (d) ∑ bj = 0
4. In a transportation problem a loop may be defined as an ordered set of at least :
(a) 3 cells (b) 4 cells
(c) 5 cells (d) 6 cells
5. If we have a feasible solution consisting of m + n −1 independent allocations, and if
numbers ui and v j satisfying crs = ur + vs , for each occupied cell (r , s) then the
evaluation dij corresponding to each empty cell (i, j) is given by :
(a) dij = cij − (ui + v j ) (b) dij = cij + (ui + v j )
(c) dij = cij − (ui − v j ) (d) dij = cij + (ui − v j )
6. To improve the current B.F.S. if it is not optimal we allocate to the cell for which dij
is :
(a) Minimum and negative (b) Maximum and positive
(c) 0 (d) None of these
7. In a transportation problem the solution under test will be optimal if all the cell
evaluations are :
(a) <0 (b) >0
(c) ≤0 (d) ≥0
8. In Vogel's approximation method we select the row or column for which the penalty
is the :
(a) Largest (b) Smallest
(c) Zero (d) None of these
9. To find an initial B.F.S. we start with the cell (1, 1) in :
(a) North-West Corner Rule (b) Lowest Cost Entry Method
(c) Vogel's approximation method (d) None of these
10. To find an initial B.F.S. by Matrix Minima Method, we first choose the cell with :
(a) Zero cost (b) Highest cost
(c) Lowest cost (d) None of these

Fill in the Blank


1. The transportation problem is to transport various amounts of a single
homogeneous commodity, that are initially stored at various origins, to different
destinations in such a way that the total transportation cost is ................. .
2. In a balanced m × n transportation problem ∑ ai (i = 1, 2, ..., m) = ................. , where ai's are capacities of the sources and bj's are requirements of the destinations.



3. The transportation problem can be regarded as a generalization of the ................. .


[Meerut 2005]

4. A feasible solution of m by n transportation problem is said to be non-degenerate


basic solution if number of positive allocations is exactly equal to ................. .
5. By North-West corner rule we always get a ................. basic feasible solution.
[Meerut 2005]

6. The optimality test is applicable to a F.S. consisting of ................. allocations in


independent positions.
7. In a transportation problem the solution under test will be optimal and unique if all
the cell evaluations are ................. .
8. In Vogel's approximation method the differences of the smallest and second
smallest costs in each row and column are called ................. .
9. In computational procedure of optimality test we choose that ui or v j = 0 for which
the corresponding row or column has the ................. number of individual
allocations.
10. The iterative procedure of determining an optimum solution of a minimization
transportation problem is known as ................. .

True or False
1. A feasible solution is said to be optimal if it minimizes the total transportation cost.
[Meerut 2005]

2. In a non-degenerate basic feasible solution the number of positive allocations is


exactly equal to m + n − 2.
3. We always get a non-degenerate basic feasible solution by North-West corner rule.
4. A balanced transportation problem always has a F.S.
5. There always exists a B.F.S. of a balanced transportation problem.
6. Every loop will contain an odd number of cells. [Meerut 2004]

7. In optimality test for all the occupied cells, cell evaluations dij are non-zero.
8. If the cell evaluations dij are > 0 for all the unoccupied cells, then the optimum
solution is unique.
9. There are m + n independent equations in a transportation problem, m and n being
the number of origins and destinations.
10. In a non-degenerate basic feasible solution the allocations must be in independent
positions.

Exercise 10.2
6. Initial B.F.S. is x11 = 5, x14 = 2, x23 = 7, x24 = 2, x32 = 8, x34 = 10 ; not optimal.
Optimal solution is x11 = 5, x14 = 2, x22 = 2, x23 = 7, x32 = 6, x34 = 12

7. x11 = 20, x13 = 10, x22 = 20, x23 = 20, x24 = 10, x32 = 20 ; min. cost = ` 180

8. x12 = 4, x14 = 5, x21 = 3, x22 = 2, x25 = 8, x32 = 1, x33 = 8 min cost = ` 154

9. No ; x12 = 30, x14 = 40, x22 = 5, x23 = 50, x31 = 85, x34 = 5
and min. cost = ` 1160

10. x14 = 11, x21 = 6, x22 = 3, x24 = 4, x32 = 7, x33 = 12 ; min. cost = ` 710.

11. No; x12 = 100, x22 = 70, x25 = 80, x31 = 90, x33 = 50, x35 = 40, x44 = 210, x45 = 70
or x12 = 100, x22 = 70, x25 = 80, x31 = 20, x33 = 50, x35 = 110, x41 = 70, x44 = 210

12. x12 = 2, x22 = 1, x23 = 4, x24 = 1, x31 = 3, x34 = 4 ; min. cost = ` 174

13. x14 = 11, x21 = 6, x22 = 3, x24 = 4, x32 = 7, x33 = 12 ; min. cost = ` 796

14. x13 = 2, x22 = 1, x23 = 2, x31 = 4, x33 = 1 ; min. cost = ` 33

15. x11 = 25, x12 = 2, x14 = 23, x23 = 40, x32 = 30 ; min. cost = ` 848

16. x13 = 60, x21 = 50, x23 = 20, x32 = 80 ; min. cost = ` 750

17. x11 = 5, x13 = 15, x14 = 20, x22 = 30, x31 = 15, x35 = 5, x41 = 10; min.cost = ` 210

18. x13 = 8, x24 = 4, x25 = 3, x31 = 5, x32 = 4, x41 = 1, x43 = 2, x52 = 4, x55 = 1 ;
min. cost = ` 1102

19. x12 = 1, x16 = 2, x21 = 3, x25 = 1, x33 = 2, x42 = 2, x43 = 4, x44 = 2;


min. cost = ` 101

20. (i) x11 = 8, x12 = 7, x22 = 5, x24 = 6, x33 = 13 ; min cost = ` 224
(ii) x12 = 6, x23 = 2, x24 = 6, x31 = 4, x33 = 6 ; min. cost = ` 28

21. x12 = 10, x21 = 60, x22 = 10, x23 = 10, x31 = 15 ; min. cost = ` 515

22. x12 = 18, x13 = 6, x21 = 12, x24 = 25, x31 = 4, x33 = 30, min. cost = ` 355.

Multiple Choice Questions


1. (a) 2. (b)
3. (a) 4. (b)
5. (a) 6. (a)
7. (d) 8. (a)
9. (a) 10. (c)

Fill in the Blank


1. minimum 2. ∑ bj ( j = 1, 2, ..., n)

3. assignment problem 4. m + n −1
5. non-degenerate. 6. m + n −1
7. >0 8. penalties
9. maximum 10. MODI Method

True or False
1. True 2. False
3. True 4. True
5. True 6. False
7. False 8. True
9. False 10. True

11.1 Introduction
In the present chapter we deal with a special type of linear programming problem generally called the ‘Assignment problem’. Although the simplex method is powerful enough to solve all linear programming problems, problems of this type may be solved by a very interesting method called the ‘Assignment Technique’, which is described in this chapter.

The classical problems where the objective is to assign a number of origins (jobs) to the
equal number of destinations (persons) at a minimum cost (or maximum profit) are
called ‘Assignment problems’. Here we make the assumption that each person can perform
each job but with varying degree of efficiency.

Such types of problems may consist of assigning men to offices, classes to rooms, drivers
to trucks, trucks to delivery routes or problems to research teams, etc.

11.2 The Nature of Assignment Problem


Let there be n jobs to be performed and for doing these jobs n persons are available.
Assume that each person can do each job at a time, though with varying degree of
efficiency. Let cij be the cost (payment) of assigning i-th person to the j-th job. Then the
problem is to find an assignment (which job should be assigned to which person) so that
the total cost for performing all jobs is minimum.

The above assignment problem can be stated in the form of n × n matrix [cij ] of real
numbers called cost matrix or effectiveness matrix as follows :

Cost Matrix

                          Jobs
               1     2     3    ...    j    ...    n
          1   c11   c12   c13   ...   c1j   ...   c1n
          2   c21   c22   c23   ...   c2j   ...   c2n
          3   c31   c32   c33   ...   c3j   ...   c3n
Persons   .    .     .     .           .           .
          i   ci1   ci2   ci3   ...   cij   ...   cin
          .    .     .     .           .           .
          n   cn1   cn2   cn3   ...   cnj   ...   cnn

11.3 Mathematical Formulation of Assignment problem


[Kanpur 2012; Gorakhpur 2010]

Mathematically an assignment problem can be stated as follows :

Minimize the total cost

Z = ∑ ∑ cij xij , the double sum running over i = 1, 2,..., n and j = 1, 2,..., n,

where xij = 1 if the i-th person is assigned to the j-th job, and xij = 0 if not,

subject to the conditions

(i) ∑ xij = 1, summed over j = 1, 2,..., n (only one job is done by the i-th person, i = 1, 2,..., n),

(ii) ∑ xij = 1, summed over i = 1, 2,..., n (only one person should be assigned to the j-th job, j = 1, 2,..., n).
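For small n this formulation can be checked directly: the feasible xij satisfying conditions (i) and (ii) correspond exactly to the permutations p of the n jobs, with xij = 1 precisely when j = p(i), so Z may be minimized by enumeration. A sketch (using, for illustration, the cost matrix of Example 1 of article 11.6):

from itertools import permutations

def brute_force_assignment(c):
    # person i is assigned job p[i]; every permutation p is a feasible x_ij
    n = len(c)
    best = min(permutations(range(n)),
               key=lambda p: sum(c[i][p[i]] for i in range(n)))
    return best, sum(c[i][best[i]] for i in range(n))

c = [[8, 26, 17, 11],
     [13, 28, 4, 26],
     [38, 19, 18, 15],
     [19, 26, 24, 10]]
print(brute_force_assignment(c))   # ((0, 2, 1, 3), 41) : the n! growth is what
                                   # makes the method of article 11.6 necessary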

11.4 Difference Between a Transportation Problem


and an Assignment Problem
[Meerut 2006 (BP), 07; Kanpur 2011, 12]

An assignment problem may be regarded as a special case of the transportation problem. Here the persons represent the origins (sources) and the jobs represent the destinations. Here m = n, i.e., the number of persons (or origins) is equal to the number of jobs, and all ai and bj are unity, i.e., ai = 1 for all i and bj = 1 for all j. In an assignment problem each person (or origin) is associated with one and only one job, i.e., xij is limited to the two values 0 and 1 only. In these circumstances exactly n of the xij can be non-zero (i.e., 1), one for each person and one for each job.

11.5 Fundamental Theorems


Now we shall prove two important theorems on which the solution to an assignment
problem is fundamentally based.

Theorem 1: (Reduction Theorem) : In an assignment problem if we add (or subtract) a constant to every element of a row (or column) of the cost matrix [cij], then an assignment which minimizes the total cost for one matrix also minimizes the total cost for the other matrix.

Or

Mathematical Statement of Reduction Theorem : If xij = Xij minimizes Z = ∑ ∑ cij xij over all xij = 0 or 1 such that ∑ xij = 1 (summed over j) and ∑ xij = 1 (summed over i), then xij = Xij also minimizes Z′ = ∑ ∑ c′ij xij , where c′ij = cij ± ai ± bj and ai , bj are some real numbers for i, j = 1, 2, . . . , n.

Proof: We have

Z′ = ∑ ∑ c′ij xij = ∑ ∑ (cij ± ai ± bj) xij

   = ∑ ∑ cij xij ± ∑ ∑ ai xij ± ∑ ∑ bj xij

   = Z ± ∑i ai (∑j xij) ± ∑j bj (∑i xij)      [since Z = ∑ ∑ cij xij]

   = Z ± ∑i ai . 1 ± ∑j bj . 1                [since ∑j xij = 1 = ∑i xij]

   = Z ± ∑ ai ± ∑ bj .

Since the terms ∑ ai and ∑ bj are independent of the xij's, it follows that Z′ is minimized whenever Z is minimized, and conversely.

Hence x ij = X ij which minimizes Z will also minimize Z ′.
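The theorem is easy to verify numerically: reducing rows and columns changes Z by the same constant for every assignment, so the ordering of the assignments, and hence the minimizing Xij, is unchanged. A small check along the lines of the enumeration sketch of article 11.3 (assuming NumPy):

import numpy as np
from itertools import permutations

def best_assignment(c):
    n = len(c)
    return min(permutations(range(n)),
               key=lambda p: sum(c[i][p[i]] for i in range(n)))

c = np.array([[8, 26, 17, 11], [13, 28, 4, 26], [38, 19, 18, 15], [19, 26, 24, 10]])
reduced = c - c.min(axis=1, keepdims=True)      # subtract each row minimum
reduced = reduced - reduced.min(axis=0)         # then each column minimum
print(best_assignment(c) == best_assignment(reduced))   # True : same minimizer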

Theorem 2: If all cij ≥ 0 and there exists a solution xij = Xij which satisfies ∑ ∑ cij xij = 0, then this solution is an optimal solution for the problem (i.e., it minimizes the objective function).

Proof: Since all cij ≥ 0 and all xij ≥ 0, the objective function Z = ∑ ∑ cij xij cannot be negative. The minimum possible value that Z can attain is 0.

Hence the solution xij = Xij for which ∑ ∑ cij xij = 0 is an optimal solution.
i =1 j =1

11.6 Assignment Algorithm


(Hungarian Assignment Method)
From the two theorems of 11.5 we get a powerful method known as Hungarian
assignment method for solving an assignment problem. Various steps of the
computational procedure for obtaining an optimal assignment are as follows :

Step 1 : Subtract the minimum element of each row of the cost matrix, from all the
elements of the respective rows. Further, modify the resulting matrix by subtracting the

minimum element of each column from all the elements of the respective columns. These
operations create zeros.

Step 2 : Make assignments using only zeros. If a complete assignment is possible then
this is the required optimal assignment plan and if not then we shall modify the cost
matrix to create some more zeros in it.

Thus at the end of step 1 the question arises whether a complete assignment is possible or not. This can be decided easily in the case of smaller cost matrices, but it is not so easy in the case of larger cost matrices. For this we apply the following procedure :

Starting with row 1 of the matrix obtained in step 1, examine the rows successively until a row with exactly one zero element is found. Mark ‘□’ (a small square denoting an assignment) at this zero, as an assignment will be made there. Mark ‘×’ at all other zeros lying in the column containing the assigned zero. This eliminates the possibility of making a further assignment in that column. Continue in this manner until all the rows have been examined.

When the set of rows has been completely examined, an identical procedure is applied successively to the columns. Starting with column 1, examine all the columns until a column containing exactly one unmarked zero is found. Then make an assignment in that position (indicated by ‘□’) and mark ‘×’ at all zeros in the row containing this marked zero. Proceed in this way until the last column is examined.

Continue the above operations on rows and columns successively until we reach one of the two situations :
(i) all the zeros have been marked ‘□’ or ‘×’ ;
(ii) the remaining zeros lie at least two in each row and column.

In situation (i) we have a maximal assignment (as many assignments as we can make), and in situation (ii) we still have some zeros to be treated. To handle such situations of zeros there is again an algorithm, but it is complicated enough; to avoid this highly complicated algorithm we use the trial and error method to break up such ties of zeros.

Now there are two possibilities :

(i) If there is an assignment in every row and every column (i.e., the total number of marked ‘□’ zeros is exactly n), then we have obtained a complete optimal assignment plan.
(ii) If every row and every column do not contain an assignment (i.e., the total number of marked ‘□’ zeros is less than n), then we shall modify the cost matrix by creating some more zeros in it.

Step 3 : If in step 2 every row and every column of the matrix do not contain an assignment, then draw the minimum number of horizontal and vertical lines to cover all the zeros at least once in the resulting matrix.

Rule to draw minimum number of lines

(i) Mark (√) all rows that do not have any marked ‘□’ zero.
(ii) Mark (√) columns which have zeros in the marked rows.
(iii) Mark (√) rows (not already marked) which have assignments in the marked columns.
(iv) Repeat steps (ii) and (iii) until the chain of marking ends.
(v) Draw lines through the unmarked rows and through the marked columns.

This will give us the minimal system of lines.

Note 1: The lines thus drawn (horizontal and vertical both) are the minimum number of lines passing through all the zeros of the matrix. It can be shown that the minimum number of lines required to pass through all the zeros of the matrix is the same as the maximum number of assigned independent zeros of the matrix.

Thus if the number of lines is exactly n, then the complete assignment plan is obtained, while if the number of lines is less than n, then the complete assignment is not possible.
2. These lines cover all the zeros and each line passes through one and only one marked zero (assignment). If there were two marked ‘□’ zeros in a row, it would follow that we are assigning two jobs to one person, which is a violation of the hypothesis. Thus no line passes through more than one marked ‘□’ zero.

Step 4 : Select the smallest of the elements that do not have a line through them, subtract it from all the elements that do not have a line through them, add it to every element that lies at the intersection of two lines and leave the remaining elements of the matrix unchanged. In the modified matrix the number of zeros is increased (it never decreases) compared with that in step 2. Now apply step 2 to this new matrix. If a complete optimal assignment is still not possible in this matrix, then repeat steps 3 and 4 iteratively. Continue the process until the minimum number of lines is n.

Thus exactly one marked ‘□’ zero in each row and each column of the matrix is obtained. The assignment corresponding to these marked ‘□’ zeros will give the optimal assignment.

Note : The procedure of subtracting the minimum of all the uncovered elements from all such elements and adding this minimum element to the elements placed at the intersections does not change the optimum solution, i.e., the two matrices will have the same optimal solution. For the above mentioned two operations of addition and subtraction are the resultant of subtracting the chosen minimum element from all the uncrossed rows and adding it to all the crossed columns, and by the reduction theorem such operations do not change the optimum solution.
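Library implementations of the Hungarian method are available; SciPy, for instance, provides linear_sum_assignment. A sketch (assuming SciPy is installed; the matrix is that of Example 1 below):

import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[8, 26, 17, 11],        # Example 1: tasks A-D vs subordinates I-IV
                 [13, 28, 4, 26],
                 [38, 19, 18, 15],
                 [19, 26, 24, 10]])
rows, cols = linear_sum_assignment(cost)
print([(int(i), int(j)) for i, j in zip(rows, cols)])   # [(0, 0), (1, 2), (2, 1), (3, 3)]
print(int(cost[rows, cols].sum()))                      # 41 man hours, as in Example 1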

Example 1: A department head has four subordinates, and four tasks have to be
performed. Subordinates differ in efficiency and tasks differ in their intrinsic difficulty.
Time each man would take to perform each task is given in the effectiveness matrix below.
How should the tasks be allocated, one to a man, so as to minimize the total man hours ?
[Kanpur 2007]

Subordinates

I II III IV
A 8 26 17 11
B 13 28 4 26
Tasks C 38 19 18 15
D 19 26 24 10

Solution: We shall solve this problem step by step to understand the method described
above.
Step 1 : Subtracting the minimum element of each row from every element of the
corresponding row, the matrix reduces to

I II III IV
A 0 18 9 3
B 9 24 0 22
C 23 4 3 0
D 9 16 14 0

Now subtracting the minimum element of each column from every element of the
corresponding column, the matrix reduces to

I II III IV
A 0 14 9 3
B 9 20 0 22
C 23 0 3 0
D 9 12 14 0

Step 2 : Starting with row 1 of this reduced matrix, we examine the rows one by one until a row containing only one zero element is found. We mark ‘□’ at this zero, i.e., make an assignment, and mark a cross ‘×’ over all zeros lying in the column containing the assigned zero. Continue in this manner until all the rows have been examined. Thus we get the following matrix.

I II III IV

A 0 14 9 3

B 9 20 0 22

C 23 0 3 0

D 9 12 14 0

Now starting with column 1, examine all the columns until a column containing only one unmarked zero (column 2) is found. We mark ‘□’ at this zero, i.e., make an assignment, and cross the other zeros lying in the row containing this marked zero. Continue in this manner until all the columns have been examined. Thus we get the following matrix.

I II III IV

A 0 14 9 3

B 9 20 0 22

C 23 0 3 0

D 9 12 14 0

At this stage we see that all zeros have been either assigned or crossed out and every row
and every column have an assignment. Hence an optimal solution has been obtained.
The optimal solution is

A → I, B → III, C → II, D → IV,

And the minimum total man hours (from the original matrix)

= 8 + 4 + 19 + 10 = 41.

Example 2: Solve the minimal assignment problem whose effectiveness matrix is

Jobs

I II III IV

A 2 3 4 5

B 4 5 6 7

Operators C 7 8 9 8

D 3 5 8 4
[Meerut 2005; Kanpur 2009]

Solution: Step 1 : Subtracting the minimum element of each row from every element of
the corresponding row, the matrix reduces to

0 1 2 3

0 1 2 3

0 1 2 1

0 2 5 1

Now subtracting the minimum element of each column from every element of the
corresponding column, the matrix reduces to

0 0 0 2

0 0 0 2

0 0 0 0

0 1 3 0

Step 2 : Now test whether it is possible to make an assignment using only zeros. Here none of the rows or columns contains exactly one zero, therefore we start with row 1, searching for two zeros. Examining rows successively, it is observed that row 4 has two zeros. Now, arbitrarily make an assignment (indicated by ‘□’) at one of these two zeros, say the zero in column 1, and cross the other zeros in row 4 and column 1 (table 1). Now we examine the columns and find column 4, which contains only one unmarked zero, in row 3. We make an assignment (indicated by ‘□’) at this zero and cross all other zeros of this row (table 2). Now again we check the rows and columns for one unmarked zero. There is no such row or column. So we start with row 1, searching for two unmarked zeros, and find row 1 containing two such zeros. We mark ‘□’ at any one of these zeros, say the zero of column 2, and cross the other zeros of row 1 (not already crossed) and of column 2. Now the second row contains only one unmarked zero, in the third column, where we make an assignment (indicated by ‘□’) (table 3).

At this stage all zeros have been either assigned or crossed out. We observe that every row
and every column have one assignment, so we have the complete ‘zero assignment’.

Tables 1, 2, 3 show the necessary steps for reaching the optimal assignment.

Table 1 Table 2 Table 3

×
0 0 0 2 ×
0 0 0 2 ×
0 0 ×
0 2

×
0 0 0 2 ×
0 0 0 2 ×
0 ×
0 0 2

×
0 0 0 0 ×
0 ×
0 ×
0 0 ×
0 ×
0 ×
0 0

0 1 3 ×
0 0 1 3 ×
0 0 1 3 ×
0

Thus we get the following optimal assignment :



A → II, B → III, C → IV, D → I.

Minimum cost = 3 + 6 + 8 + 3 = 20
Note : In this example other optimal assignments are also possible. Students must try to find them. Each will have the same cost 20.

Example 3: A car hire company has one car at each of five depots a, b, c, d and e. A
customer requires a car in each town namely A, B, C, D and E. Distance (in kms) between
depots (origins) and towns (destinations) are given in the following distance matrix :

a b c d e

A 160 130 175 190 200

B 135 120 130 160 175

C 140 110 155 170 185

D 50 50 80 80 110

E 55 35 70 80 105

How should cars be assigned to customers so as to minimize the distance travelled ?

Solution: Step 1 : Subtracting the minimum element of each row from every element of
the corresponding row, the matrix reduces to

30 0 45 60 70

15 0 10 40 55

30 0 45 60 75

0 0 30 30 60

20 0 35 45 70

Now subtracting the minimum element of each column from every element of the
corresponding column, the matrix reduces to

30 0 35 30 15

15 0 0 10 0

30 0 35 30 20

0 0 20 0 5

20 0 25 15 15

Step 2 : Now we give the zero assignments in our usual manner. Row 1 has a single zero, in column 2. Make an assignment by marking ‘□’ at it and delete the other zeros (if any) in column 2 by marking ‘×’. After examining the set of rows completely, an identical procedure is applied successively to the columns.

30 0 35 30 15

15 ×
0 0 10 ×
0

30 ×
0 35 30 20

0 ×
0 20 ×
0 5

20 ×
0 25 15 15

Now column 1 has a single zero, in row 4. Make an assignment by marking ‘□’ at this zero and cross the other zero of row 4 which is not yet crossed. Column 3 has a single zero, in row 2; make an assignment at this zero by putting ‘□’ and cross the other zero of that row which is not yet crossed. At this stage all zeros have been either assigned or crossed out. It is observed that row 3, row 5, column 4 and column 5 each has no assignment. Hence the required solution cannot be obtained at this stage. So we proceed to the next step.

Step 3 : In this step we draw minimum number of lines to cover all zeros at least once.
For this we proceed as follows :
(i) Mark (√) row 3 and row 5 as they have no assignments.
(ii) Mark (√) column 2 as having zeros in the marked rows 3 and 5.
(iii) Mark (√) row 1 as it contains assignment in the marked column 2.
No further rows or columns will be required to mark during this procedure.
(iv) Now draw line L1 through marked column 2. Then draw lines L2 and L3 through
unmarked rows 2 and 4.

The required lines will be L1, L2 and L3 . No zero is left uncovered.

L1

30 0 35 30 15 3 (4)

L2 15 ×
0 0 10 ×
0

30 ×
0 35 30 20 3 (1)

L3 0 ×
0 20 ×
0 5

20 ×
0 25 15 15 3 (2)

(3)

Step 4 : In this step we select the smallest element among all uncovered elements of the
matrix of step 3.

Here this element is 15. Subtracting this element 15 from all the elements that do not
have a line through them and adding to every element that lies at the intersection of two
lines and leaving the remaining elements unchanged we get the following matrix.

15 0 20 15 0
15 15 0 10 0
15 0 20 15 5
0 15 20 0 5
5 0 10 0 0

Step 5 : Now again performing the step 2 we make the zero assignments. It is observed
that there are no remaining zeros and every row and column has an assignment as shown
in the table.

15 ×
0 20 15 0

15 15 0 10 ×
0

15 0 20 15 5

0 15 20 ×
0 5

5 ×
0 10 0 ×
0

Thus the complete optimal assignment plan is given by

A → e, B → c, C → b, D → a, E → d.

From the original matrix, the minimum cost (distance travelled)

= (200 + 130 + 110 + 50 + 80) kms. = 570 kms.

Example 4: Solve the assignment problem represented by the following matrix :

I II III IV V VI
A 9 22 58 11 19 27
B 43 78 72 50 63 48
C 41 28 91 37 45 33
D 74 42 27 49 39 32
E 36 11 57 22 25 18
F 3 56 53 31 17 28
[Kanpur 2009]

Solution: Step 1: Subtracting the minimum element of each row from every element of
the corresponding row and then subtracting the minimum element of each column from
every element of the corresponding column, the matrix reduces to

0 13 49 0 0 13

0 35 29 5 10 0

13 0 63 7 7 0

47 15 0 20 2 0

25 0 46 9 4 2

0 53 50 26 4 20

Step 2 : Make the ‘zero assignments’ in usual manner. The illustration is shown in the
table.

×
0 13 49 0 ×
0 13

×
0 35 29 5 10 0

13 ×
0 63 7 7 ×
0

47 15 0 20 2 ×
0

25 0 46 9 4 2

0 53 50 26 4 20

Since row 3 and column 5 have no assignments so we proceed to the next step.
Step 3 : Draw the minimum number of lines to cover all zeros at least once. For this we proceed as follows :
(i) Mark (√) row 3 as having no assignment.
(ii) Mark (√) columns 2 and 6 as having zeros in the marked row 3.
(iii) Mark (√) rows 5 and 2 as having assignments in the marked columns 2 and 6.
(iv) Mark (√) column 1 (not already marked) as having a zero in the marked row 2.
(v) Then mark (√) row 6 as having an assignment in the marked column 1.

Now draw lines L1, L2, L3 through the marked columns 1, 2, 6 respectively and L4, L5 through the unmarked rows 1, 4 respectively. In this way a minimal set of five lines (5 < 6) covering all the zeros is obtained.

L1 L2 L3

L4 ×
0 13 49 0 ×
0 13

×
0 35 29 5 10 0 3(5)

3(1)
13 ×
0 63 7 7 ×
0

L5 47 15 0 20 2 ×
0

25 0 46 9 4 2 3(4)

0 53 50 26 4 20 3(7)

3 3 3
(6) (2) (3)

Step 4 : Now the smallest element among all uncovered elements is 4. Subtracting this element 4 from all the uncovered elements, adding it to every element that lies at the intersection of two lines and leaving the remaining elements unchanged, the matrix of step 3 reduces to the new form as shown in the table.

4 17 49 0 0 17

0 35 25 1 6 0

13 0 59 3 3 0

51 19 0 20 2 4

25 0 42 5 0 2

0 53 46 22 0 20

Step 5 : Repeating step 2, make the ‘zero assignments’ as shown in the following table. Thus exactly one marked ‘□’ zero in each row and each column of the matrix is obtained.

4 17 49 0 ×
0 17

0 35 25 1 6 ×
0

13 ×
0 59 3 3 0

51 19 0 20 2 4

25 0 42 5 ×
0 2

×
0 53 46 22 0 20

Thus the optimal assignment is

A → IV, B → I, C → VI, D → III, E → II, F → V.

From the original matrix, minimum cost = 11 + 43 + 33 + 27 + 11 + 17 = 142.

Note : Another optimal solution of this assignment problem is shown in the following
table i.e., A → IV, B → VI, C → II, D → III, E → V, F → I.

4 17 49 0 ×
0 17

×
0 35 25 1 6 0

13 0 59 3 3 ×
0

51 19 0 20 2 4

25 ×
0 42 5 0 2

0 53 46 22 ×
0 20

From the original matrix, minimum cost = 11 + 48 + 28 + 27 + 25 + 3 = 142.

Example 5: An airline that operates seven days a week has the time table shown below. Crews must have a minimum layover of 5 hours between flights. Obtain the pairing of flights that minimizes layover time away from home. For any given pair the crew will be based at the city that results in the smaller layover.

Delhi-Jaipur Jaipur-Delhi

Flight No. Departure Arrival Flight No. Departure Arrival

1 7.00 A.M. 8.00 A.M. 101 8.00 A.M. 9.15 A.M.

2 8.00 A.M. 9.00 A.M. 102 8.30 A.M. 9.45 A.M.

3 1.30 P.M. 2.30 P.M. 103 12.00 Noon 1.15 P.M.

4 6.30 P.M. 7.30 P.M. 104 5.30 P.M. 6.45 P.M.

For each pair, mention the town where the crew should be based.

Solution: First we construct the tables for layover times between the flights. Suppose we
pair the flight no. 1 with flight no. 103 when crew is based at Delhi. Then the time of stay
at Jaipur will be the layover time away from home. Now a plane of flight no. 1 which
reaches Jaipur at 8.00 A.M., cannot fly at 12.00 Noon on the same day as minimum
layover time is 5 hours. So it will depart Jaipur on the next day which will result in a
layover time of 28 hours. Similarly, other layover times can be calculated.

Tables for layover times in hours


When crew based at Delhi When crew based at Jaipur

101 102 103 104 101 102 103 104

1 24 24.5 28 9.5 1 21.75 21.25 17.75 12.25

2 23 23.5 27 8.5 2 22.75 22.25 18.75 13.25

3 17.5 18 21.5 27 3 28.25 27.75 24.25 18.75

4 12.5 13 16.5 22 4 9.25 8.75 5.25 23.75

To avoid the fractions we measure the layover times in terms of quarter hour (0.25 hr. or
15 minutes) as one unit of time. Thus multiplying the above tables by 4, the modified
tables are as follows :
When crew based at Delhi When crew based at Jaipur

101 102 103 104 101 102 103 104

1 96 98 112 38 1 87 85 71 49

2 92 94 108 34 2 91 89 75 53

3 70 72 86 108 3 113 111 97 75

4 50 52 66 88 4 37 35 21 95

As a next step we combine the above two tables, choosing that base which gives a lesser
layover time for each pairing. The layover times marked with ‘*’ denote that crew is based
at Jaipur, otherwise the crew is based at Delhi.
Minimum layover time table

101 102 103 104

1 87* 85* 71* 38

2 91* 89* 75* 34

3 70 72 86 75*

4 37* 35* 21* 88

Now this is a usual minimal assignment problem. Solving it by usual assignment


technique, finally we get the following table.

101 102 103 104

1 4* 0* 0* 0

2 12* 8* 8* 0

3 0 0 28 50*

4 4* 0* 0* 100

Giving the zero assignments, we get the following tables :


101 102 103 104 101 102 103 104

1 4 0 * ×
0 ×
0 1 4* ×
0 0 * ×
0

2 12* 8* 8* 0 2 12* 8* 8* 0

3 0 ×
0 28 50* 3 0 ×
0 28 50*

4 4* ×
0 0 * 100 4 4* 0 * ×
0 100

From the above tables, two optimal assignments are


(i) (1 → 102)*, (2 → 104), (3 → 101), (4 → 103) *
(ii) (1 → 103)*, (2 → 104), (3 → 101), (4 → 102) *.

In both the cases minimum layover time is 210 quarter hours i.e., 52 hours 30 minutes.
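The layover arithmetic itself is a small computation on the 24-hour clock. The helper below is our own (times are taken as fractional hours); it reproduces the entries of the tables above:

def layover(arrive, depart, min_gap=5.0):
    # hours the crew waits between arriving and departing (24-hour clock);
    # if the gap is under min_gap, the departure slips to the next day
    gap = (depart - arrive) % 24
    while gap < min_gap:
        gap += 24
    return gap

print(layover(8.0, 12.0))    # flight 1 with 103, crew at Delhi : 28 hours
print(layover(9.25, 7.0))    # flight 101 with 1, crew at Jaipur : 21.75 hours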

11.7 The Maximal Assignment Problems


We have so far discussed the problem of minimizing the total cost. Sometimes the assignment problem deals with the maximization of an objective function rather than its minimization. For example, the problem may be to assign persons to jobs in such a way that the expected profit is maximum. Such a problem may be solved easily by first converting it to a minimization problem and then using the usual procedure of the assignment algorithm. This conversion can be very easily done by modifying the given profit matrix to a cost matrix in either of the following two ways.
1. Select the greatest element of the given profit matrix and then subtract each element of the
matrix from this greatest element to get the modified matrix.
For, if [cij ] is the given profit matrix and crk is the greatest element of this matrix
then the modified matrix will be [cij′ ], where cij′ = crk − cij . It can be shown that if

x ij = X ij maximizes Z = ∑ ∑ cij x ij , then x ij = X ij minimizes the function


Z ′ = ∑ ∑ cij′ x ij , where cij′ = crk − cij . It follows from the relation

Z ′ = ∑ ∑ cij′ x ij = ∑ ∑ (crk − cij ) x ij = ∑ ∑ crk x ij − ∑ ∑ cij x ij = ncrk − Z.

2. Place minus sign before each element of the profit matrix to get the modified matrix.
In this case if [cij ] is the given profit matrix then the modified matrix will be [cij′ ]

where cij′ = − cij . It can be shown that if x ij = X ij maximizes Z = ∑ ∑ cij x ij then

x ij = X ij minimizes Z ′ = ∑ ∑ cij′ x ij .
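Both conversions are one line each in code. A sketch applying method 1 to the profit matrix of Example 1 below, with SciPy's linear_sum_assignment doing the minimization (recent versions of the routine also accept maximize=True directly, which is method 2 in spirit):

import numpy as np
from scipy.optimize import linear_sum_assignment

profit = np.array([[5, 11, 10, 12, 4],
                   [2, 4, 6, 3, 5],
                   [3, 12, 5, 14, 6],
                   [6, 14, 4, 11, 7],
                   [7, 9, 8, 12, 5]])
cost = profit.max() - profit              # method 1 : c'_ij = c_rk - c_ij
rows, cols = linear_sum_assignment(cost)  # minimize the modified matrix
print(int(profit[rows, cols].sum()))      # 50, the maximum expected profit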

Example 1: A company has 5 jobs to be done. The following matrix shows the return in rupees on assigning the i-th (i = 1, 2, 3, 4, 5) machine to the j-th job ( j = A, B, C, D, E). Assign the five jobs to the five machines so as to maximize the total expected profit.

Jobs

A B C D E

1 5 11 10 12 4

2 2 4 6 3 5

Machines 3 3 12 5 14 6

4 6 14 4 11 7

5 7 9 8 12 5
[Meerut 2007]

Solution: First we shall convert the problem from maximization to minimization. The
greatest element of the given matrix is 14. Subtracting all the elements of the given
matrix from 14, the modified matrix is as follows.

9 3 4 2 10

12 10 8 11 9

11 2 9 0 8

8 0 10 3 7

7 5 6 2 9

Now we shall follow the usual procedure of solving an assignment problem.


Step 1: Subtracting the minimum element of each row from every element of the
corresponding row and then subtracting minimum element of each column from every
element of the corresponding column, the matrix reduces to

3 1 2 0 7

0 2 0 3 0

7 2 9 0 7

4 0 10 3 6

1 3 4 0 6

Step 2 : Giving zero assignments in the usual manner we observe that rows 3, 5 and
columns 3, 5 have no assignments. So we draw minimum number of lines to cover all the
zeros at least once. The number of such lines is 3.

L1

3 1 2 0 7 3 (4)

L2 0 2 ×
0 3 ×
0

7 2 9 ×
0 7 3 (1)

L3 4 0 10 3 6

1 3 4 ×
0 6 3 (2)

3
(3)

Step 3 : In this table the smallest of the uncovered elements is 1. Subtracting this
element from all uncovered elements, adding to each element that is at the intersection
of two lines and leaving all remaining elements unchanged, the reduced matrix is as
follows.

2 0 1 0 6

0 2 0 4 0

6 1 8 0 6

4 0 10 4 6

0 2 3 0 5

Step 4 : Giving zero assignments in the usual manner, we observe that row 1 and column 5 have no zero assignments. So we again draw the minimum number of lines to cover all zeros at least once. The number of such lines is 4.

L1 L2

2 ×
0 1 ×
0 6
3(1)
L3 ×
0 2 0 4 ×
0

6 1 8 0 6 3(5)

4 0 10 4 6 3(4)

L4 0 2 3 ×
0 5

3 3
(2) (3)

Step 5 : In the last reduced table the smallest uncovered element is 1. Subtracting this element 1 from all uncovered elements, adding it to each element that lies at the intersection of two lines, and leaving remaining elements unchanged, the reduced matrix is

1 0 0 0 5
0 3 0 5 0
5 1 7 0 5
3 0 9 4 5
0 3 3 1 5

Step 6 : Giving zero assignments in the usual manner, we observe that each row and each column has an assignment.
           A     B     C     D     E
     1     1    ×0    [0]   ×0     5
     2    ×0     3    ×0     5    [0]
     3     5     1     7    [0]    5
     4     3    [0]    9     4     5
     5    [0]    3     3     1     5

Hence the optimal assignment of jobs that maximizes the profit is

Machine → Job : 1 → C, 2 → E, 3 → D, 4 → B, 5 → A.

From the given matrix, the maximum profit = ₹ (10 + 5 + 14 + 14 + 7) = ₹ 50.
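For a mechanical cross-check, SciPy's linear_sum_assignment solves the assignment problem directly, and its maximize=True option makes the manual conversion unnecessary. An illustrative sketch, assuming SciPy is installed :

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    profit = np.array([[5, 11, 10, 12, 4],
                       [2,  4,  6,  3, 5],
                       [3, 12,  5, 14, 6],
                       [6, 14,  4, 11, 7],
                       [7,  9,  8, 12, 5]])

    rows, cols = linear_sum_assignment(profit, maximize=True)
    for r, c in zip(rows, cols):
        print(f"machine {r + 1} -> job {'ABCDE'[c]}")
    # The optimal value is unique even if alternative optimal pairings exist.
    print("maximum profit =", profit[rows, cols].sum())   # 50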

Example 2: A company has four territories open and four salesmen available for
assignment. The territories are not equally rich in their sales potential ; it is estimated
that a typical salesman operating in each territory would bring in the following annual
sales :

Territory : I II III IV
Annual sales (`) : 60,000 50,000 40,000 30,000

Four salesmen are also considered to differ in their ability : it is estimated that, working under the same conditions, their yearly sales would be proportionately as follows :

Salesman : A B C D
Proportion : 7 5 5 4
If the criterion is maximum expected total sales, the intuitive answer is to assign the best salesman to the richest territory, the next best salesman to the second richest, and so on. Verify this answer by the assignment technique.
Solution: First we construct the effectiveness matrix.

The sum of the proportions of sales of the four salesmen = 7 + 5 + 5 + 4 = 21.

Considering ₹ 10,000 as one unit, the annual sales in the four territories by the four salesmen are as follows :

Salesman A : (7/21) × 6, (7/21) × 5, (7/21) × 4, (7/21) × 3, i.e., 42/21, 35/21, 28/21, 21/21
Salesman B : (5/21) × 6, (5/21) × 5, (5/21) × 4, (5/21) × 3, i.e., 30/21, 25/21, 20/21, 15/21
Salesman C : (5/21) × 6, (5/21) × 5, (5/21) × 4, (5/21) × 3, i.e., 30/21, 25/21, 20/21, 15/21
Salesman D : (4/21) × 6, (4/21) × 5, (4/21) × 4, (4/21) × 3, i.e., 24/21, 20/21, 16/21, 12/21

In order to avoid fractional values, we multiply each entry by 21 (i.e., we consider the sales over 21 years).

Thus, the maximum sale matrix is obtained as follows :

Sales (in ₹ 10,000) →           6     5     4     3
Sales proportion ↓              I    II   III    IV

        7         A            42    35    28    21
        5         B            30    25    20    15
        5         C            30    25    20    15
        4         D            24    20    16    12
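As a quick check, the scaled matrix is just the outer product of the proportion vector and the sales vector (an illustrative sketch, assuming NumPy) :

    import numpy as np

    proportions = np.array([7, 5, 5, 4])   # salesmen A, B, C, D
    sales = np.array([6, 5, 4, 3])         # territories I-IV, in Rs 10,000

    # Entry (i, j) = proportion_i x sale_j; the common factor 1/21 is
    # dropped, which does not affect the optimal assignment.
    print(np.outer(proportions, sales))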

This is a ‘maximization’ problem. To convert it into a ‘minimization’ one, let us multiply each element of the above matrix by −1. Thus the resulting matrix becomes :

         I     II    III    IV
A      −42   −35   −28   −21
B      −30   −25   −20   −15
C      −30   −25   −20   −15
D      −24   −20   −16   −12

Now we shall solve the minimization problem by the usual assignment algorithm.

Step 1 : Subtracting the minimum element of each row from every element of the
corresponding row and then subtracting the minimum element of each column from
every element of the corresponding column, the reduced matrix is
0    3    6    9
0    1    2    3
0    1    2    3
0    0    0    0

Step 2 : Giving zero assignments in the usual manner, we observe that rows 2, 3 and columns 3, 4 have no assignments. So we draw the minimum number of lines to cover all zeros at least once. The number of such lines is 2.
          L1
           I    II   III    IV
     1    [0]    3     6     9     ✓(4)
     2    ×0     1     2     3     ✓(1)
     3    ×0     1     2     3     ✓(2)
L2   4    ×0   [0]   ×0    ×0
         ✓(3)

Step 3 : In the last table the smallest uncovered element is 1. Subtracting this element 1 from all uncovered elements, adding it to each element that lies at the intersection of two lines, and leaving remaining elements unchanged, the matrix reduces to

0    2    5    8
0    0    1    2
0    0    1    2
1    0    0    0

Step 4 : Giving zero assignments in the usual manner, we observe that row 3 and column
4 have no assignments.
          L1    L2
           I    II   III    IV
     1    [0]    2     5     8     ✓(4)
     2    ×0   [0]     1     2     ✓(5)
     3    ×0   ×0      1     2     ✓(1)
L3   4     1   ×0    [0]   ×0
         ✓(2) ✓(3)
So we again draw minimum number of lines to cover all zeros at least once. The number
of such lines is 3.

Step 5 : In this table the smallest uncovered element is 1. Subtracting this element 1 from all uncovered elements, adding it to each element that lies at the intersection of two lines, and leaving remaining elements unchanged, we get the following reduced matrix.

0    2    4    7
0    0    0    1
0    0    0    1
2    1    0    0

Step 6 : Giving ‘zero assignments’, we get the two optimal assignments shown in the following tables :

           I    II   III    IV              I    II   III    IV
A        [0]    2     4     7      A      [0]    2     4     7
B        ×0   [0]   ×0      1      B      ×0   ×0    [0]     1
C        ×0   ×0    [0]     1      C      ×0   [0]   ×0      1
D         2     1   ×0    [0]      D       2     1   ×0    [0]

Thus two optimal solutions are

(i) A → I, B → II, C → III, D → IV

(ii) A → I, B → III, C → II, D → IV.

From both solutions it is obvious that the best salesman A is assigned to the richest territory I, and the worst salesman D to the poorest territory IV. Salesmen B and C, being equally good, may be assigned to either II or III. This verifies the given intuitive answer.

11.8 Unbalanced Assignment Problem


[Meerut 2004, 07 (BP), 09 (BP), 12, 12 (BP)]

In case the number of tasks (jobs) is not equal to the number of facilities (persons), the assignment problem is called an unbalanced assignment problem. Thus the cost matrix of an unbalanced assignment problem is not a square matrix. To solve such a problem we add dummy (fictitious) rows or columns with zero costs to the given matrix so as to make it a square matrix, as sketched below. Then the usual assignment algorithm can be applied to the resulting balanced assignment problem.
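The balancing step can be written in a few lines (an illustrative sketch, assuming NumPy). As an aside, SciPy's linear_sum_assignment accepts rectangular cost matrices directly, so explicit padding is only needed when working by hand or with a square-matrix implementation :

    import numpy as np

    def balance(cost):
        # Pad a rectangular cost matrix with zero-cost dummy rows or
        # columns so that it becomes square.
        m, n = cost.shape
        size = max(m, n)
        square = np.zeros((size, size), dtype=cost.dtype)
        square[:m, :n] = cost
        return square

    # The 4-task x 3-subordinate matrix of Example 1 below gains a
    # zero-cost 4th column (a fictitious subordinate).
    c = np.array([[ 9, 26, 15],
                  [13, 27,  6],
                  [35, 20, 15],
                  [18, 30, 20]])
    print(balance(c))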
Example 1: A department head has four tasks to be performed and three subordinates. The subordinates differ in efficiency. The estimates of the time each subordinate would take to perform each task are given in the matrix below. How should he allocate the tasks, one to each man, so as to minimize the total man-hours ?
Subordinates
1 2 3
I 9 26 15
Tasks II 13 27 6
III 35 20 15
IV 18 30 20

Solution: Since the matrix is not square, it is an unbalanced assignment problem. We introduce one fictitious subordinate (a 4th column with zero costs) to get a square matrix. The resulting matrix is shown in the following table. Now the problem can be solved by the usual method.
1 2 3 4
I 9 26 15 0
II 13 27 6 0
III 35 20 15 0
IV 18 30 20 0

Step 1 : Subtracting the minimum element of each row from every element of the
corresponding row and then subtracting the minimum element of each column from
every element of the corresponding column, the matrix reduces to

0 6 9 0
4 7 0 0
26 0 9 0
9 10 14 0

Step 2 : Giving zero assignments in the usual manner, we observe that each row and each column has a zero assignment.
            1     2     3     4
I         [0]     6     9    ×0
II          4     7    [0]   ×0
III        26   [0]     9    ×0
IV          9    10    14    [0]
Hence the optimal assignment is as follows.

Tasks → Subordinates : I → 1, II → 3, III → 2.

Task IV remains unassigned (it goes to the fictitious subordinate 4).

From the original matrix, the total time (man-hours) = 9 + 6 + 20 = 35 hours.

Example 2: A company is faced with the problem of assigning six different machines to
five different jobs. The costs are estimated as follows (in hundreds of rupees) :

                          Jobs
                 1     2     3     4     5

            1   2.5    5     1     6     1
            2    2     5    1.5    7     3
Machines    3    3    6.5    2     8     3
            4   3.5    7     2     9    4.5
            5    4     7     3     9     6
            6    6     9     5    10     6

Solve the problem assuming that the objective is to minimize the total cost. [Meerut 2006]

Solution: Since the matrix is not square, it is an unbalanced assignment problem. We introduce one fictitious job (a 6th column with zero costs) to get a square matrix. Further, the matrix involves decimal elements; we can make them whole numbers by multiplying each element of the cost matrix by 2. The new modified cost matrix is shown in the following table.

         1     2     3     4     5     6

1        5    10     2    12     2     0
2        4    10     3    14     6     0
3        6    13     4    16     6     0
4        7    14     4    18     9     0
5        8    14     6    18    12     0
6       12    18    10    20    12     0

Now this problem can be solved by the usual procedure :

Step 1 : Subtracting the smallest element in each row from every element of the
corresponding row and then subtracting the smallest element in each column from every
element of the corresponding column, we get the following table :
1    0    0    0    0    0
0    0    1    2    4    0
2    3    2    4    4    0
3    4    2    6    7    0
4    4    4    6   10    0
8    8    8    8   10    0

Step 2 : Giving zero assignments in the usual manner, we observe that rows 4, 5, 6 and columns 3, 4, 5 have no zero assignments.
                                         L1
           1     2     3     4     5     6
L2   1     1    [0]   ×0    ×0    ×0    ×0
L3   2    [0]   ×0     1     2     4    ×0
     3     2     3     2     4     4    [0]    ✓(5)
     4     3     4     2     6     7    ×0     ✓(1)
     5     4     4     4     6    10    ×0     ✓(2)
     6     8     8     8     8    10    ×0     ✓(3)
                                       ✓(4)

So we draw minimum number of lines to cover all zeros at least once.

Step 3 : The smallest element among all the uncovered elements is 2. Subtracting this element 2 from all uncovered elements, adding it to each element that lies at the intersection of two lines, and leaving all other elements unchanged, we get the following matrix.
1 0 0 0 0 2
0 0 1 2 4 2
0 1 0 2 2 0
1 2 0 4 5 0
2 2 2 4 8 0
6 6 6 6 8 0

Step 4 : Giving zero assignments in the usual manner, we observe that row 6 and column
5 have no zero assignments.

So we again draw minimum number of lines to cover all zeros at least once.
                                         L1
           1     2     3     4     5     6
L2   1     1    ×0    ×0    [0]   ×0     2
L3   2    ×0    [0]    1     2     4     2
L4   3    [0]    1    ×0     2     2    ×0
L5   4     1     2    [0]    4     5    ×0
     5     2     2     2     4     8    [0]    ✓(3)
     6     6     6     6     6     8    ×0     ✓(1)
                                       ✓(2)
Step 5 : The smallest of the uncovered elements is 2. Subtracting this element 2 from all uncovered elements, adding it to each element that lies at the intersection of two lines, and leaving all remaining elements unchanged, we get the following table.

1 0 0 0 0 4
0 0 1 2 4 4
0 1 0 2 2 2
1 2 0 4 5 2
0 0 0 2 6 0
4 4 4 4 6 0

Step 6 : Giving zero assignments in the usual manner, we observe that row 2 and column
5 have no zero assignments.
So we again draw minimum number of lines to cover all zeros at least once.
          L1    L2    L3                 L4
           1     2     3     4     5     6
L5   1     1    ×0    ×0    [0]   ×0     4
     2    ×0    ×0     1     2     4     4     ✓(1)
     3    [0]    1    ×0     2     2     2     ✓(4)
     4     1     2    [0]    4     5     2     ✓(6)
     5    ×0    [0]   ×0     2     6    ×0     ✓(7)
     6     4     4     4     4     6    [0]    ✓(9)
         ✓(2) ✓(3) ✓(5)                ✓(8)
Step 7 : The smallest element among the uncovered elements is 2. Subtracting this element 2 from all the uncovered elements, adding it to each element that lies at the intersection of two lines, and leaving remaining elements unchanged, we get the following table.
3 2 2 0 0 6
0 0 1 0 2 4
0 1 0 0 0 2
1 2 0 2 3 2
0 0 0 0 4 0
4 4 4 2 4 0

Step 8 : Giving zero assignments in the usual manner, we get a number of optimal
assignments. Here two optimal assignments are given as follows :

           1     2     3     4     5     6             1     2     3     4     5     6
1          3     2     2    [0]   ×0     6     1       3     2     2    [0]   ×0     6
2         ×0    [0]    1    ×0     2     4     2      [0]   ×0     1    ×0     2     4
3         ×0     1    ×0    ×0    [0]    2     3      ×0     1    ×0    ×0    [0]    2
4          1     2    [0]    2     3     2     4       1     2    [0]    2     3     2
5         [0]   ×0    ×0    ×0     4    ×0     5      ×0    [0]   ×0    ×0     4    ×0
6          4     4     4     2     4    [0]    6       4     4     4     2     4    [0]

Hence two of the optimal assignments are

Machine → Job :
(i) 1 → 4, 2 → 2, 3 → 5, 4 → 3, 5 → 1
and (ii) 1 → 4, 2 → 1, 3 → 5, 4 → 3, 5 → 2.

The 6th machine is left unassigned.

In both cases, from the original matrix, the total minimum cost = 20, i.e., ₹ 2,000.

11.9 Restrictions on Assignment


Sometimes, due to some restrictions (technical, legal or otherwise), the assignment of a particular facility to a particular job is not permitted. To overcome this difficulty we assign a very high cost (say, an infinite cost) to the corresponding cell, so that the activity is automatically excluded from the optimal solution; a sketch follows.
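In code, the ‘infinite cost’ is usually just a finite number that is large compared with every genuine cost. An illustrative sketch (assuming NumPy/SciPy), using the data of Example 1 below; recent SciPy versions also accept np.inf in forbidden cells provided a feasible assignment exists :

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    BIG = 10**6   # effectively "infinite" relative to the real times

    # Example 1 below: engineer 2 is not allowed to design project B.
    time = np.array([[12, 10,  10,  8],
                     [14, BIG, 15, 11],
                     [ 6, 10,  16,  4],
                     [ 8, 10,   9,  7]])

    rows, cols = linear_sum_assignment(time)
    for r, c in zip(rows, cols):
        print(f"engineer {r + 1} -> project {'ABCD'[c]}")
    print("total time =", time[rows, cols].sum())   # 36, avoiding 2 -> B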
Example 1: Four engineers are available to design four projects. Engineer 2 is not competent to design project B. Given the following time estimates needed by each engineer to design a given project, find how the engineers should be assigned to projects so as to minimize the total design time of the four projects.
Projects

A B C D
1 12 10 10 8
Engineers 2 14 Not suitable 15 11
3 6 10 16 4
4 8 10 9 7

Solution: To avoid the assignment 2 → B, we take its time to be very large (say ∞). Then
the cost matrix of the resulting assignment problem is shown in the following table :
12 10 10 8
14 ∞ 15 11
6 10 16 4
8 10 9 7

Now we apply the assignment technique in the usual manner.

Step 1 : Subtracting the minimum element of each row from every element of the
corresponding row and then subtracting minimum element of each column from every
element of the corresponding column, the reduced matrix is
3 0 0 0
2 ∞ 2 0
1 4 10 0
0 1 0 0

Step 2 : Giving zero assignments in the usual manner, we observe that row 3 and column 3 have no zero assignments. So we draw the minimum number of lines to cover all zeros at least once. The number of such lines is 3.
                             L1
           A     B     C     D
L2   1     3    [0]   ×0    ×0
     2     2     ∞     2    [0]    ✓(3)
     3     1     4    10    ×0     ✓(1)
L3   4    [0]    1    ×0    ×0
                            ✓(2)
Step 3 : In the above table, the smallest of the uncovered elements is 1. Subtracting this element 1 from all uncovered elements, adding it to each element that lies at the intersection of two lines, and leaving remaining elements unchanged, we get the following matrix.

3    0    0    1
1    ∞    1    0
0    3    9    0
0    1    0    1

Step 4 : Giving zero assignments in the usual manner, we observe that each row and each column has a zero assignment.
           A     B     C     D
1          3    [0]   ×0     1
2          1     ∞     1    [0]
3         [0]    3     9    ×0
4         ×0     1    [0]    1

Hence the optimal assignment is

Engineer → Project : 1 → B, 2 → D, 3 → A, 4 → C.

From the given matrix total minimum time = 10 + 11 + 6 + 9 = 36.

Exercise 11.1

1. Define assignment problem. [Meerut 2008 (BP), 09]

2. What is an assignment problem ? Give two areas of its applications.


[Meerut 2006 (BP)]

3. Give the mathematical formulation of an assignment problem.


4. Define unbalanced assignment problem. [Meerut 2007 (BP), 09 (BP), 12, 12 (BP)]

5. Explain the difference between a transportation problem and an assignment


problem. [Meerut 2006 (BP), 07; Kanpur 2012; Gorakhpur 2008, 09, 10, 11]

6. Give an algorithm to solve an assignment problem.


7. Describe the Hungarian method of solving the assignment problem.
8. A computer centre has got three expert programmers. The centre needs three
application programmes to be developed. The head of the computer centre, after
studying carefully the programmes to be developed, estimates the computer time in minutes required by the experts for the application programmes as follows :
Programme
A B C
1 120 100 80
Programmer 2 70 90 110
3 110 140 120

Assign the programmers to the programmes in such a way that the total computer
time is least. [Agra 2003]

9. Find the optimal solution for the assignment problem with the following cost
matrix :
I II III IV V
A 11 17 8 16 20
B 9 7 12 6 15
C 13 16 15 12 16
D 21 24 17 28 26
E 14 10 12 11 15
[Meerut 2009, 11]

10. Find the optimal assignment for the problems with given cost matrix

(i) I II III IV (ii) 1 2 3 4

A 5 3 1 8 A 10 12 19 11

B 7 9 2 6 B 5 10 7 8

C 6 4 5 7 C 12 14 13 11

D 5 7 7 6 D 8 15 11 9

[Meerut 2007 (BP)] [Meerut 2008; Kanpur 2012; Gorakhpur 2007]

11. Solve the following minimal assignment problems :

(i) Jobs (ii) Man

1 2 3 4 5 1 2 3 4 5

A 8 4 2 6 1 I 12 8 7 15 4

B 0 9 5 5 4 II 7 9 17 14 10

Person C 3 8 9 2 6 Job III 9 6 12 6 7

D 4 3 1 0 3 IV 7 6 14 6 10

E 9 5 8 9 5 V 9 6 12 10 6
[Meerut 2004]
(iii) Job (iv) Man

I II III IV V I II III IV V

A 5 11 10 12 4 A 1 3 2 3 6

B 2 4 6 3 5 B 2 4 3 1 5

Machine C 3 12 5 14 6 Job C 5 6 3 4 6

D 6 14 4 11 7 D 3 1 4 2 2

E 7 9 8 12 5 E 1 5 6 5 4
[Meerut 2008 (BP), 10, 11 (BP), 12 (BP)] [Meerut 2009 (BP), 12]

12. One car is available at each of the stations 1, 2, 3, 4, 5, 6 and one car is required at
each of the stations 7, 8, 9, 10, 11, 12. The distances between the various stations
are given in the matrix below. How should the cars be despatched so as to minimize
the total mileage covered ?
7 8 9 10 11 12
1 41 72 39 52 25 51
2 22 29 49 65 81 50
3 27 39 60 51 32 32
4 45 50 48 52 37 43
5 29 40 39 26 30 33
6 82 40 40 60 51 30

13. A national truck-rental service has a surplus of one truck in each of the cities 1, 2, 3,
4, 5, 6 and a deficit of one truck in each of the cities 7, 8, 9, 10, 11, 12. The
distances (in kilometre) between the cities with a surplus and the cities with a
deficit are displayed below :
To
7 8 9 10 11 12
1 31 62 29 42 15 41
2 12 19 39 55 71 40
From 3 17 29 50 41 22 22
4 35 40 38 42 27 33
5 19 30 29 16 20 23
6 72 30 30 50 41 20

How should the trucks be dispatched so as to minimize the total distance travelled ?
14. Five wagons are available at five stations 1, 2, 3, 4 and 5. These are required at five
stations I, II, III, IV and V. The mileages between various stations are given by the
following table :
I II III IV V

1 10 5 9 18 11
2 13 9 6 12 14

3 3 2 4 4 5

4 18 9 12 17 15
5 11 6 14 19 10

How should the wagons be transported so as to minimize the total mileage covered?
15. A marketing manager has 5 salesmen and 5 sales-districts. Considering the
capabilities of the salesmen and the nature of districts, the marketing manager
estimates that sales per month (in hundred rupees) for each salesman in each
district would be as follows :

Districts

A B C D E

1 32 38 40 28 40

2 40 24 28 21 36
Salesmen 3 41 27 33 30 37

4 22 38 41 36 36

5 29 33 40 35 39

Find the assignment of salesmen to districts that will result in maximum sales.
16. Find the minimum cost solution for the 5 × 5 assignment problem whose cost
coefficients are as given below :

I II III IV V
1 −2 −4 −8 −6 −1
2 0 −9 −5 −5 −4
3 −3 −8 0 −2 −6
4 −4 −3 −1 0 −3
5 −9 −5 −9 −9 −5

17. Alpha corporation has four plants each of which can manufacture any of the four
products. Production costs differ from plant to plant as do sales revenue. From the
following data, obtain which product each plant should produce to maximize
profit?
Sales revenue (` 1,000) Production cost (` 1,000)


Product Product

Plant ↓ 1 2 3 4 Plant ↓ 1 2 3 4

A 50 68 49 62 A 49 60 45 61

B 60 70 51 74 B 55 63 45 69

C 55 67 53 70 C 52 62 49 68

D 58 65 54 69 D 55 64 48 66

[Hint : Construct the profit matrix by using : Profit = revenue – cost].


18. An airline that operates seven days a week has a time table shown below. Crews
must have minimum layover of 6 hours between flights. Obtain the pairing of
flights that minimizes layover time away from home. For any given pairing the crew
will be based at the city that results in the smaller layover.

Delhi-Calcutta Calcutta-Delhi
Flight No. Departure Arrival Flight No. Departure Arrival
1 7.00 A.M. 9.00 A.M. 101 9.00 A.M. 11.00 A.M.
2 9.00 A.M. 11.00 A.M. 102 10.00 A.M. 12.00 Noon
3 1.30 P.M. 3.30 P.M. 103 3.30 P.M. 5.30 P.M.
4 7.30 P.M. 9.30 P.M. 104 8.00 P.M. 10.00 P.M.

For each pair also mention the town where the crew should be based.
19. Solve the following minimal assignment problems.

(i) Men (ii) Men


A B C D A B C D
I 20 22 28 15 I 12 30 21 15
Jobs II 16 20 12 13 Jobs II 18 33 9 31
III 19 23 14 25 III 44 25 24 21
IV 10 16 12 10 IV 23 30 28 14

[Kanpur 2007, 10, 11]

20. Solve the following minimal assignment problems.

(i) Jobs (ii) Jobs


I II III IV 1 2 3 4
A 8 10 7 9 I 42 35 28 21
Operators B 3 8 5 6 Machines II 30 25 20 15
C 10 12 11 9 III 30 25 20 15
D 6 13 9 7 IV 24 20 16 12
[Kanpur 2008] [Kanpur 2010]
(iii) Jobs (iv) Jobs


I II III IV 1 2 3 4
A 2 3 4 5 I 20 22 28 15
Operators B 4 5 6 7 II 16 20 12 13
C 7 8 9 8 III 19 23 14 25
D 3 5 8 4 IV 10 16 12 10

[Kanpur 2009]

21. Four jobs are to be done on four different machines. The cost (in rupees) of
producing i-th job on the j-th machine is given below :

             Machines →
Jobs ↓     M1    M2    M3    M4

J1 15 13 14 17
J2 11 12 15 13
J3 13 12 10 11
J4 15 17 14 16

Assign the jobs to different machines so as to minimize the total cost. What is the
minimum total cost ? [Gorakhpur 2009]

22. Four jobs are to be done on four different machines. The cost (in rupees) of
producing i-th job on the j-th machine is given below :

             Machines →
Jobs ↓     M1    M2    M3    M4

J1 15 11 13 15
J2 17 12 12 13
J3 14 15 10 14
J4 16 13 11 17

Assign the jobs to different machines so as to minimize the total cost. What is the
minimum total cost ? [Gorakhpur 2008, 10]

23. Four salesmen are to be assigned to four districts. Estimates of the sales revenue (in hundreds of ₹) are as below :

Salesmen / Districts A B C D
1 320 350 400 280
2 400 250 300 220
3 420 270 340 300
4 250 390 410 350
Give the assignment pattern that maximizes the sales revenue.


24. The jobs A, B, C are to be assigned to three machines X , Y , Z . The processing costs
(`) are as given in the matrix shown below. Find the allocation which will minimize
the overall processing cost.
Machine
X Y Z
A 19 28 31
Job B 11 17 16
C 12 15 13

25. A company has 4 machines to do 3 jobs. Each job can be assigned to one and only
one machine. The cost of each job on each machine is given in the following table :
Machine
W X Y Z
A 18 24 28 32
Job B 8 13 17 19
C 10 15 19 22

What are the job assignments which will minimize the cost ? [Meerut 2004]

26. A company is faced with the problem of assigning 4 machines to 6 different jobs (one machine to one job only). The profits are estimated as follows :
Machine
A B C D
1 3 6 2 6
2 7 1 4 4
Job 3 3 8 5 8
4 6 4 3 7
5 5 2 4 3
6 5 7 6 4

Solve the problem to maximize the total profit.


27. The owner of a small machine shop has four persons available to assign to jobs for
the day. Five jobs are offered with the expected profit in rupees for each person on
each job being as follows :
Job
A B C D E
1 6.20 7.80 5.00 10.10 8.20
2 7.10 8.40 6.10 7.30 5.90
Person 3 8.70 9.20 11.10 7.10 8.10
4 4.80 6.40 8.70 7.70 8.00
Find the assignment of persons to jobs that will result in a maximum profit. Which
job should be declined ?
28. Five operators have to be assigned to five machines. The assignment costs are given
in the table below :
                        Machine
                 I    II   III    IV     V
          A      5     5     ∞     2     6
          B      7     4     2     3     4
Operator  C      9     3     5     ∞     3
          D      7     2     6     7     2
          E      6     5     7     9     1

Operator A cannot operate machine III and operator C cannot operate machine IV.
Find the optimal assignment schedule.
29. There are 3 persons P1, P2 and P3 and 5 jobs J1, J2, ..., J5. Each person can do only one job and a job is to be done by one person only. Using the Hungarian method, find which 2 jobs should be left undone in the following cost minimizing assignment problem.

J1 J2 J3 J4 J5

P1 7 8 6 5 9

P2 9 6 7 6 10

P3 8 7 9 5 6

30. Use the Hungarian method to find which of the two jobs should be left undone
when each of the 4 persons will do only one job in the following cost minimizing
assignment problem :

Job

J1 J2 J3 J4 J5 J6
P1 10 9 11 12 8 5

Person P2 12 10 9 11 9 4

P3 8 11 10 7 12 6

P4 10 7 8 10 10 5
Answers
Exercise 11.1

8. 1 → C, 2 → B, 3 → A.
9. A → I, B → IV, C → V, D → III, E → II ; min. cost = 60.
10. (i) A → III, B → IV, C → II, D → I, min. cost = 16
(ii) A → 2, B → 3, C → 4, D → 1, min. cost = 38
11. (i) A → 5, B → 1, C → 4, D → 3, E → 2 ; min. cost = 9
(ii) I → 3, II → 1, III → 2, IV → 4, V → 5
or I → 3, II → 1, III → 4, IV → 2, V → 5.
(iii) A → V, B → IV, C → I, D → III, E → II
(iv) A → I, B → IV, C → III, D → II, E → V.
12. 1 → 11, 2 → 8, 3 → 7, 4 → 9, 5 → 10, 6 → 12.
13. 1 → 11, 2 → 8, 3 → 7, 4 → 9, 5 → 10, 6 → 12.
14. 1 → I, 2 → III, 3 → IV, 4 → II, 5 → V ; 39 miles.
15. 1 → B, 2 → A, 3 → E, 4 → C, 5 → D
or 1 → B, 2 → A, 3 → E, 4 → D, 5 → C, etc.
16. 1 → III, 2 → II, 3 → V, 4 → I, 5 → IV.
17. A → 2, B → 4, C → 1, D → 3.
18. (1 → 103), (2 → 104), (3 → 101), (4 → 102) ; crews based at Delhi, Delhi, Delhi and Calcutta respectively.
19. (i) I → B, II → D, III → C, IV → A
or I → D, II → B, III → C, IV → A
(ii) I → A, II → C, III → B, IV → D
20. (i) A → III, B → I, C → II, D → IV (ii) I → 4, II → 3, III → 2, IV → 1
(iii) A → I, B → II, C → III, D → IV or A → II, B → III, C → IV, D → I
or A → III, B → II, C → IV, D → I etc.
(iv) I → 2, II → 4, III → 3, IV → 1 or I → 4, II → 2, III → 3, IV → 1.
21. J1 → M2, J2 → M1, J3 → M4, J4 → M3 ; min. cost ₹ 49.
22. J1 → M2, J2 → M4, J3 → M1, J4 → M3 ; min. cost ₹ 49.
23. 1 → C, 2 → A, 3 → D, 4 → B.
24. A → X, B → Y , C → Z
25. A → W , B → X , C → Y or A → W , B → Y , C → X . No job to machine Z.
26. 2 → A, 3 → B, 4 → D, 6 → C ; max. profit = 28.
27. 1 → D, 2 → B, 3 → C, 4 → E ; Job A should be declined.
28. A → IV, B → III, C → II, D → I, E → V
or A → IV, B → III, C → V, D → II, E → I
29. P1 → J4 , P2 → J2 , P3 → J5 ; Jobs J1 and J3 left undone.
30. P1 → J5 , P2 → J6 , P3 → J4 , P4 → J2 ; Jobs J1 and J3 left undone.
11.10 The Travelling Salesman (Routing) Problem


Suppose a salesman wants to visit a certain number of cities.
He knows the distances (or time or cost) of journey between every pair of cities allotted
to him. His problem is to choose such a route which starts from his home city, passes
through each city once and only once and returns to his home city in the shortest possible
distance (or in least time or at the least cost).

The above problem may be classified in two forms :


1. Symmetrical : If the distance (or time or cost) between every pair of cities is
independent of the direction of journey the problem is said to be symmetrical.
2. Asymmetrical : If for one or more pairs of cities the distance (or time or cost) changes with the direction of journey, the problem is said to be asymmetrical. For example, going uphill from city A to city B takes longer than coming downhill from B to A; similarly, flying from East to West usually takes longer than from West to East on account of prevailing winds.

Further, we note that for two cities there is only one possible route, i.e., there is no choice. In the case of three cities, say A, B and C, with one of them (say A) as the home city, there are two possible routes : A → B → C and A → C → B. For four cities there are 3 ! = 6 possible routes. In general, there are (n − 1) ! possible routes if there are n cities. Thus it is practically impossible to find the best route by trying each one, which is why the travelling salesman problem is regarded as a puzzle by mathematicians. The best procedure is to solve the problem as if it were an assignment problem : we formulate the travelling salesman problem as an assignment problem with an additional restriction on the choice of route.

Formulation of a Travelling Salesman Problem as Assignment Problem

Let xij = 1 if the salesman goes directly from city Ai to city Aj, and xij = 0 otherwise. Also, let cij be the distance (or time or cost) from city Ai to city Aj. Then our problem is to minimize Z = ∑ ∑ cij xij with the additional restriction that the xij must be chosen so that no city is visited twice before the tour of all the cities is completed. In particular, he cannot go directly from city Ai to Ai itself. To rule this out in the minimization process we adopt the convention cii = ∞, so that xii can never be unity. Also we note that exactly one xij = 1 for each value of i and for each value of j. The distance (or time or cost) matrix for this problem is given in the following table :

                          To
            A1     A2    ...     An
      A1     ∞    c12    ...    c1n
      A2    c21     ∞    ...    c2n
From         ⋮      ⋮              ⋮
      An    cn1   cn2    ...      ∞
We can omit the variable x ij from the problem specification. Our problem is to determine
a set of n elements of this matrix, one in each row and one in each column, so as to
minimize the sum of the elements determined above.
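Since there are (n − 1) ! tours, exhaustive checking is workable only for small n, but for the 4-6 city problems of this chapter it is a convenient way to verify answers. A brute-force sketch in plain Python (this is not the book's method, which adapts the assignment technique; the 4-city matrix is a small symmetric example) :

    from itertools import permutations

    def tsp_bruteforce(c):
        # Return (best_cost, best_tour) over all tours that start and end
        # at city 0, where c[i][j] is the cost of going directly from i to j.
        n = len(c)
        best_cost, best_tour = float("inf"), None
        for perm in permutations(range(1, n)):
            tour = (0,) + perm + (0,)
            cost = sum(c[a][b] for a, b in zip(tour, tour[1:]))
            if cost < best_cost:
                best_cost, best_tour = cost, tour
        return best_cost, best_tour

    INF = float("inf")
    c = [[INF, 4, 7, 3],
         [4, INF, 6, 3],
         [7, 6, INF, 7],
         [3, 3, 7, INF]]
    print(tsp_bruteforce(c))   # (19, (0, 2, 1, 3, 0)), i.e. 1 -> 3 -> 2 -> 4 -> 1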

Note : A problem similar to travelling salesman arises when n items say Ai, i = 1, 2,..., n are
to be produced on a machine in continuation, given that cij (i, j = 1, 2,..., n) is the setup
cost of the machine when item Ai is followed by A j . Here two additional restrictions are
imposed. One restriction is that we do not follow Ai again by Ai. The other restriction is
that we do not produce an item again until all items are produced once.

Solution Procedure : The problem can be solved by the assignment technique. In some cases (when the additional restriction is violated) we use the method of enumeration, assigning the next minimum element of the matrix in place of a zero.

The following examples will make the procedure clear.

Example 1: Given the matrix of setup costs, show how to sequence the production so as to
minimize the setup cost per cycle.
To
A1 A2 A3 A4 A5
A1 ∞ 2 5 7 1
A2 6 ∞ 3 8 2
From A3 8 7 ∞ 4 7
A4 12 4 6 ∞ 5
A5 1 3 2 8 ∞
[Meerut 2007]

Solution: Consider the problem as an assignment problem. Applying the assignment technique, we get the following matrix, showing a solution in terms of the assigned zeros, marked ‘[0]’ (crossed zeros are marked ‘×0’) :

                           To
             A1     A2     A3     A4     A5
      A1      ∞      1      3      6    [0]
      A2      4      ∞    [0]      6    ×0
From  A3      4      3      ∞    [0]      3
      A4      8    [0]      1      ∞      1
      A5    [0]      2    ×0       7      ∞
The solution to the assignment problem given by the above matrix is

A1 → A5, A5 → A1, A2 → A3, A3 → A4, A4 → A2.

This solution indicates producing product A1, then A5, and then A1 again, without producing the products A2, A3 and A4, which violates the additional restriction of producing each product once and only once before returning to the first product. So this is not a solution of the travelling salesman problem.
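Whether an assignment is a valid tour can be tested mechanically by following the successors from any city and checking that the walk passes through all n cities before returning. A small illustrative sketch in plain Python (the helper name is hypothetical) :

    def is_single_tour(succ):
        # succ[i] is the city visited immediately after city i (0-indexed).
        # Returns True iff the successors form one cycle through all cities.
        seen, city = set(), 0
        while city not in seen:
            seen.add(city)
            city = succ[city]
        return len(seen) == len(succ)

    # The assignment found above: A1->A5, A2->A3, A3->A4, A4->A2, A5->A1.
    print(is_single_tour([4, 2, 3, 1, 0]))   # False: subtours (A1 A5)(A2 A3 A4)
    # A valid tour such as A1->A2->A3->A4->A5->A1 passes the test.
    print(is_single_tour([1, 2, 3, 4, 0]))   # True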

Now we try to find the next best solution which also satisfies the additional restriction. The next minimum (non-zero) element in the matrix is 1, and it occurs at three places : cells (1, 2), (4, 3) and (4, 5). We try to bring a 1 into the solution.

Start by making a unity-assignment in the cell (1, 2) instead of the zero assignment in the cell (1, 5). Then no other assignment can be made in the first row or the second column, so the assignment in cell (4, 2) is shifted to cell (4, 5). The best solution of the problem lies in the marked elements shown in the table.
                           To
             A1     A2     A3     A4     A5
      A1      ∞    [1]      3      6      0
      A2      4      ∞    [0]      6    ×0
From  A3      4      3      ∞    [0]      3
      A4      8      0      1      ∞    [1]
      A5    [0]      2    ×0       7      ∞

Thus, the required solution of the problem is

A1 → A2 → A3 → A4 → A5 → A1.

For this solution, the cost in the reduced matrix is 2.

On the other hand, if we bring the element 1 in the cell (4, 3) into the solution, then no feasible route is available in terms of zeros, nor one for which the reduced matrix gives a cost less than 2.

Hence, the most suitable sequence is A1 → A2 → A3 → A4 → A5 → A1.

The minimum setup cost = 2 + 3 + 4 + 5 + 1 = 15.
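The sequence found above can be confirmed by exhaustive search over all (5 − 1) ! = 24 cycles; a self-contained illustrative sketch in plain Python :

    from itertools import permutations

    INF = float("inf")
    setup = [[INF, 2, 5, 7, 1],
             [6, INF, 3, 8, 2],
             [8, 7, INF, 4, 7],
             [12, 4, 6, INF, 5],
             [1, 3, 2, 8, INF]]

    best = min(
        (sum(setup[a][b] for a, b in zip((0,) + p, p + (0,))), p)
        for p in permutations(range(1, 5))
    )
    print(best)   # (15, (1, 2, 3, 4)), i.e. A1 -> A2 -> A3 -> A4 -> A5 -> A1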

Example 2: Solve the travelling salesman problem given by the following data :
c12 = 20 , c13 = 4, c14 = 10 , c23 = 5, c34 = 6

c25 = 10 , c35 = 6, c45 = 20 , where c ij = c ji ,

and there is no route between cities i and j if the value for c ij is not given above.
[Meerut 2005, 06 (BP)]
Solution: Consider the given problem as an assignment problem.


Taking cij = ∞ for i = j, the cost matrix is as follows :
1 2 3 4 5
1 ∞ 20 4 10 ∞
2 20 ∞ 5 ∞ 10
3 4 5 ∞ 6 6
4 10 ∞ 6 ∞ 20
5 ∞ 10 6 20 ∞

If there is no route between cities i and j then we have taken cij = ∞ to avoid the
possibility of going from i-th station to j-th station.

Now we shall solve the problem by the usual assignment algorithm. The following tables show the necessary steps for reaching the solution.

Table 1
             1      2      3      4      5
1            ∞     15    [0]      4      ∞     ✓
2           15      ∞    ×0       ∞      3     ✓
3          [0]    ×0       ∞    ×0     ×0
4            4      ∞    ×0       ∞     12     ✓
5            ∞      3    ×0      12      ∞     ✓
                          ✓

Table 2
             1      2      3      4      5
1            ∞     12    [0]      1      ∞     ✓
2           12      ∞    ×0       ∞    [0]
3          [0]    ×0       ∞    ×0     ×0
4            1      ∞    ×0       ∞      9     ✓
5            ∞    [0]    ×0       9      ∞
                          ✓

Table 3
             1      2      3      4      5
1            ∞     11    [0]    ×0       ∞
2           12      ∞      1      ∞    [0]
3          ×0     ×0       ∞    [0]    ×0
4          [0]      ∞    ×0       ∞      8
5            ∞    [0]      1      9      ∞

or

Table 4
             1      2      3      4      5
1            ∞     11    ×0     [0]      ∞
2           12      ∞      1      ∞    [0]
3          [0]    ×0       ∞    ×0     ×0
4          ×0       ∞    [0]      ∞      8
5            ∞    [0]      1      9      ∞

Hence the optimal solution of the assignment problem is

1 → 3, 3 → 4, 4 → 1, 2 → 5, 5 → 2 , from table 3

or 1 → 4, 4 → 3, 3 → 1, 2 → 5, 5 → 2 , from table 4.
But none of these solutions is the solution for the travelling salesman problem, as it is not
allowed to return to the starting city 1 without visiting the cities 2 and 5.
Now we try to find the best solution which satisfies the restrictions of the travelling
salesman by shifting the positions of assignments.

Firstly we consider the solution in table 3.


1. We shift the assignment from one zero to another zero in the same row, to get the desired result. In row one, if we make an assignment at the 0 in cell (1, 4) in place of the 0 in cell (1, 3), then we get two assignments in column 4 and no assignment in column 3. So we should shift the assignment at the 0 in cell (3, 4) to cell (3, 3), which is not possible as this route is not allowed (shown by ∞ in cell (3, 3)). So we cannot shift the assignment from cell (1, 3) to cell (1, 4).
Similarly, in row 3, we cannot shift the assignment from cell (3, 4) to cell (3, 1).
If we shift the assignment from the 0 in cell (3, 4) to cell (3, 2), then we should shift the assignment at the 0 in cell (5, 2) to the 9 in cell (5, 4), so that no row or column contains more than one assignment. Thus we get the following table.
Table 5
             1      2      3      4      5
1            ∞     11    [0]    ×0       ∞
2           12      ∞      1      ∞    [0]
3          ×0     [0]      ∞    ×0     ×0
4          [0]      ∞    ×0       ∞      8
5            ∞    ×0       1    [9]      ∞

This feasible solution is 1 → 3 → 2 → 5 → 4 → 1, which satisfies the restrictions of the salesman. In this case the cost is increased by 9 − 0 = 9 in comparison with the cost 0 in table 3.
In row 4, we cannot shift the assignment from the 0 in cell (4, 1) to cell (4, 3), since if we did so we should have to shift the assignment from cell (1, 3) to cell (1, 1), which is not allowed.
2. Since the smallest element other than 0 in table 3 is 1, we try to bring a 1 into the solution. The element 1 occurs at two places, cells (2, 3) and (5, 3), so we shall consider both cases separately until an acceptable solution is attained.

Making an assignment at the 1 in cell (2, 3) instead of the 0 in cell (2, 5), we should shift the assignment from cell (1, 3) to (1, 5), which is not allowed as there is ∞ in cell (1, 5).

Again, shifting the assignment from the 0 in cell (5, 2) to the 1 in cell (5, 3), we should shift the assignment from cell (1, 3) to cell (1, 2), so that no row or column contains more than one assignment. Thus we get the following table.

Table 6
             1      2      3      4      5
1            ∞   [11]    ×0     ×0       ∞
2           12      ∞      1      ∞    [0]
3          ×0     ×0       ∞    [0]    ×0
4          [0]      ∞    ×0       ∞      8
5            ∞    ×0     [1]      9      ∞

This feasible solution is 1 → 2 → 5 → 3 → 4 → 1. In this case the cost is increased by (11 + 1) − 0 = 12 in comparison with the cost 0 in table 3.
3. The next smallest element greater than 1 in table 3 is 8, in cell (4, 5). So we shift the assignment from the 0 in cell (4, 1) to the 8 in cell (4, 5), and then we should shift the assignment from cell (2, 5) to the 12 in cell (2, 1). Thus we get the following table.
Table 7
             1      2      3      4      5
1            ∞     11    [0]    ×0       ∞
2         [12]      ∞      1      ∞    ×0
3          ×0     ×0       ∞    [0]    ×0
4          ×0       ∞    ×0       ∞    [8]
5            ∞    [0]      1      9      ∞

This feasible solution is 1 → 3 → 4 → 5 → 2 → 1.

In this case the cost is increased by (12 + 8) − 0 = 20 in comparison with the cost 0 in table 3.

4. The next smallest element greater than 8 in table 3 is 9, in cell (5, 4). Shifting the assignment from the 0 in cell (5, 2) to cell (5, 4), we should shift the assignment from cell (3, 4) to cell (3, 2), so that no row or column contains more than one assignment. By this shifting we get the same solution as in table 5, in which the increase in cost is 9.

Other shiftings of assignments from a 0 in any cell of table 3 to any other cell do not give any new solution. Thus, from all the possible solutions above, we note that the minimum increase in cost is 9, for the solution given in table 5.

Hence the optimal feasible solution of the problem, i.e., the optimum route of the salesman, is 1 → 3 → 2 → 5 → 4 → 1, given in table 5.

The corresponding total cost = 4 + 5 + 10 + 20 + 10 = 49.

Proceeding similarly from table 4, we get the optimal route 1 → 4 → 5 → 2 → 3 → 1 with the same total cost 49.
Exercise 11.2

1. Write a short note on the travelling salesman problem.

2. State the travelling salesman problem and formulate it as an assignment problem.
3. How can the travelling salesman problem be solved using the assignment algorithm ?
[Meerut 2007]

4. Solve the ‘travelling salesman problem’ given by the following data :


c12 = 4, c13 = 7, c14 = 3, c23 = 6, c24 = 3 and c34 = 7, where cij = c ji. [Meerut 2004]

5. Solve the travelling salesman problem in the matrix shown below :

1 2 3 4 5
1 ∞ 6 12 6 4
2 6 ∞ 10 5 4
3 8 7 ∞ 11 3
4 5 4 11 ∞ 5
5 5 2 7 8 ∞

6. A salesman has to visit five cities, A, B, C, D and E. The distances (in hundred
miles) between the five cities are as follows :

                     To
            A     B     C     D     E
      A     ∞     7     6     8     4
      B     7     ∞     8     5     6
From  C     6     8     ∞     9     7
      D     8     5     9     ∞     8
      E     4     6     7     8     ∞

If the salesman starts from city A and has to come back to city A, which route should he select so that the total distance travelled is minimum ? [Gorakhpur 2007]

7. Solve the following travelling salesman problem

To
1 2 3 4 5 6
1 ∞ 20 23 27 29 34
2 21 ∞ 19 26 31 24
From 3 26 28 ∞ 15 36 26
4 25 16 25 ∞ 23 18
5 23 40 23 31 ∞ 10
6 27 18 12 35 16 ∞ [Meerut 2006]
Multiple Choice Questions


1. The complete optimal assignment is obtained if, in the reduced cost matrix of order n, the number of assigned (marked) zeros is
(a) Less than n (b) Greater than n
(c) Exactly n (d) None of these
2. An optimal assignment exists if the total reduced cost of the assignment is
(a) Zero (b) One
(c) Two (d) None of these
3. In an unbalanced assignment problem to form a square matrix fictitious rows or
columns are added in the matrix with costs
(a) 1 (b) 0
(c) ∞ (d) None of these
4. If a salesman wants to visit n cities then the number of possible routes is
(a) n! (b) (n −1) !
(c) n (d) None of these
5. In the process of drawing minimum number of lines to cover all the zeros of the
reduced matrix we draw lines through
(a) Marked columns (b) Unmarked rows
(c) Unmarked rows and marked columns
(d) None of these.

Fill in the Blank


1. The problems where the objective is to assign a number of origins to the equal
number of destinations at a ................. cost are called ‘Assignment problems’.
2. In an assignment problem, if xij denotes that the i-th person is to be assigned the j-th job, then

      n                                n
      ∑  xij = .................  and   ∑  xij = ................. .
     i=1                              j=1

3. In an assignment problem with cost (cij), if all cij ≥ 0, then a feasible solution (xij) which satisfies

      n    n
      ∑    ∑  cij xij = ................. ,
     i=1  j=1

   is optimal for the problem. [Meerut 2005]

4. In travelling salesman problem the elements of the leading diagonal of the cost
matrix are taken to be ................. .
5. If the cost matrix of an assignment problem is not a square matrix (number of
sources is not equal to the number of destinations), the assignment problem is
called an ................. assignment problem.
True or False
1. If in an assignment problem, a constant is added or subtracted to every element of a
row (or column) of the cost matrix [cij ], then an assignment which minimizes the
total cost for one matrix, also minimizes the total cost for the other matrix.
2. For solving an assignment problem we modify the cost matrix by creating zeros in it. [Meerut 2004]

3. If there is no solution to the travelling salesman problem among zeros then the best
solution lies in assigning the greatest element of the reduced matrix in place of zero.
4. The procedure of subtracting the minimum element not covered by any line from all the uncovered elements, and adding the same element to all the elements lying at the intersection of two lines, results in a matrix with optimal assignments different from those of the original matrix.
Answers
Exercise 11.2
4. 1 → 3 → 2 → 4 → 1 or 1 → 4 → 2 → 3 → 1 ; min. cost = 19
5. 1 → 3 → 5 → 2 → 4 → 1 ; min. cost = 27
6. A → E → B → D → C → A or A → C → D → B → E → A ; min. distance = 30 hundred miles.
7. 1 → 5 → 6 → 3 → 4 → 2 → 1 ; min. cost = 103.

Multiple Choice Questions


1. (c) 2. (a)
3. (b) 4. (b)
5. (c)

Fill in the Blank


1. minimum 2. 1 ; 1
3. 0 4. ∞
5. unbalanced

True or False
1. True 2. True
3. False 4. False