
CHAPTER 0

OPERATIONS RESEARCH

0.1 Introduction

The term Operations Research was introduced in 1940 in the United Kingdom. The new
science came into existence in a military context. During World War II, military management
called on scientists from various disciplines and organised them into teams to assist in solving
strategic and tactical problems, i.e., to discuss, evolve and suggest ways and means to
improve the execution of various military operations. This new approach to the systematic and
scientific study of the operations of a system was called Operations Research or Operational
Research (OR).

0.2 Mathematical Preliminaries

 
Let X = (x1, x2, …, xn) be an ordered n-tuple and Rn = {X | X = (x1, x2, …, xn)}. If X
and Rn satisfy the postulates suggested below, then we say X is an n-dimensional vector and Rn is
an n-dimensional vector space.

Definition:

Let V be a set such that X, Y, Z ∈ V and a, b ∈ R; then the following postulates hold:

(i) X + Y ∈ V
(ii) X + Y = Y + X
(iii) (X + Y) + Z = X + (Y + Z)
(iv) There exists an element 0 ∈ V, called the null vector or zero vector, such that
X + 0 = X.
(v) There exists an element -X ∈ V, called the additive inverse, such that X + (-X) = 0.
(vi) aX ∈ V
(vii) a(X + Y) = aX + aY
(viii) (a + b)X = aX + bX
(ix) (ab)X = a(bX)
(x) 1·X = X (1 is the identity in the real field)

Then V is called a vector space and its elements are called vectors.

Eg: Let X = (x1, x2, …, xn) be an ordered n-tuple of real numbers and Rn be the set of all such
n-tuples. If we define the sum of two n-tuples as

X + Y = (x1 + y1, x2 + y2, …, xn + yn)

and the product with the real number a as aX = (ax1, ax2, …, axn),
then Rn is a vector space, with zero vector 0 = (0, 0, …, 0).

Definition: A subset W of V is called a subspace of the vector space V if W is itself a vector
space with respect to the operations suggested in the definition of the vector space.

Linear Dependence

Let Xi, i = 1, 2, …, m be vectors of V. Then X is called a linear combination of the
vectors Xi if

X = Σ(i=1 to m) ai Xi,  ai ∈ R

Definition: The vectors Xi, i = 1, 2, …, m of V are said to be linearly dependent if there
exist real numbers ai, not all zero, such that

Σ(i=1 to m) ai Xi = 0

If, however, this is so only if ai = 0 for all i, then the vectors are said to be linearly independent.
Definition: V is said to be of dimension m, if there exists at least one set of m linearly
independent vectors in V, while every set of m + 1 vectors in V is linearly dependent. The
linearly independent set is called a basis of V.

Theorem: A set of m linearly independent vectors in a vector space V of dimension m spans V.

Proof: Let Yi, i = 1, 2, …, m be m linearly independent vectors in V.

Let X be any vector ∈ V. Since V is of dimension m, the set of m + 1 vectors X, Y1, …, Ym
must be linearly dependent,

i.e., a0 X + Σ(i=1 to m) ai Yi = 0, where a0 ≠ 0.

(For a0 = 0 the Yi's would be linearly dependent vectors, which is contrary to the hypothesis.)


∴ X = Σ(i=1 to m) (−ai/a0) Yi,

i.e., any vector of V can be expressed as a linear combination of the linearly independent
vectors Yi.

Note: The set of linearly independent vectors spanning a vector space is not unique, so the basis
of a vector space is not unique. But once a basis is chosen, every vector of V has a unique
representation as a linear combination of the vectors of the chosen basis.

Euclidean Space

The inner product ⟨X, Y⟩ of any two vectors X and Y of V is a real number

satisfying the following properties:

(i) ⟨X, Y⟩ = ⟨Y, X⟩

(ii) ⟨X + Z, Y⟩ = ⟨X, Y⟩ + ⟨Z, Y⟩, Z ∈ V

(iii) ⟨aX, Y⟩ = a ⟨X, Y⟩; a ∈ R

(iv) ⟨X, X⟩ > 0 if X ≠ 0; ⟨X, X⟩ = 0 if X = 0

Two non-zero vectors are said to be orthogonal if their inner product is zero.

Definition: A vector space with an inner product defined on it is called a Euclidean space.

Let Rn be the set of ordered n-tuples of real numbers, and for every pair of

n-tuples X, Y ∈ Rn let

(i) X + Y = Y + X = (x1 + y1, x2 + y2, …, xn + yn) ∈ Rn

(ii) aX = (ax1, ax2, …, axn) ∈ Rn, a ∈ R

(iii) the inner product X'Y = Y'X = x1y1 + x2y2 + … + xnyn

be defined. Then the n-tuples are called vectors and Rn is called a Euclidean space.

Norm of a Vector

A norm is defined for measuring the size of a vector. There are many ways of defining a
norm. The following are the properties which all the various definitions of a norm must
possess.
Definition: Let X, Y be vectors. Then any real number ‖X‖ satisfying (i), (ii) and (iii),
where

(i) ‖X‖ ≥ 0, and ‖X‖ = 0 ⟺ X = 0

(ii) ‖aX‖ = |a| ‖X‖, a ∈ R

(iii) ‖X + Y‖ ≤ ‖X‖ + ‖Y‖,

defines a norm of X.

Eg: Let ‖X‖ = |x1| + |x2| + … + |xn|.

Then ‖X‖ satisfies all the three conditions and hence it is a norm of X.

The Euclidean norm of an n-vector X is defined as

(X'X)^(1/2) = (x1² + x2² + … + xn²)^(1/2),

and is usually denoted by ‖X‖.
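Both norms mentioned above are easy to compute numerically. The following Python sketch (assuming NumPy is available; the vector is an arbitrary illustrative choice) evaluates the sum-of-absolute-values norm from the example and the Euclidean norm of the same vector.

import numpy as np

X = np.array([3.0, -4.0, 12.0])

# Sum-of-absolute-values norm from the example: |x1| + |x2| + ... + |xn|
norm_abs = np.sum(np.abs(X))

# Euclidean norm: (X'X)^(1/2) = (x1^2 + ... + xn^2)^(1/2)
norm_euclid = np.sqrt(X @ X)

print(norm_abs)     # 19.0
print(norm_euclid)  # 13.0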

0.3 Linear Algebraic Equations

Consider a system of m linear equations in n unknowns x1 , x 2 , ..., x n of the form

a11x1 + a12x2 + … + a1nxn = b1
a21x1 + a22x2 + … + a2nxn = b2        (1)
………………………………
am1x1 + am2x2 + … + amnxn = bm

where aij, i = 1, 2, …, m and j = 1, 2, …, n, are real numbers. We can represent (1) in matrix
notation as AX = B, where

A = [ a11  a12  …  a1n
      a21  a22  …  a2n
      …    …    …   …
      am1  am2  …  amn ],   X = (x1, x2, …, xn)T and B = (b1, b2, …, bm)T.

Let Pj, j = 1, 2, …, n denote the column vectors of A and Qi, i = 1, 2, …, m denote the
row vectors of A.

A  Q1 , Q2 ...Qm  or A   P1 P2 ... Pn 


T
Then,

 (1) may also be written as either  P1 P2 ... Pn  X  B or

Q1 Q2 ...Qm 
T
XB
(2)

Let AX = B be a system of linear equations, where A is a square matrix. If A is non-singular,
the unique solution of AX = B is given by X = A⁻¹B.

If B = 0, i.e., AX = 0, where A is non-singular, the system has the unique solution

X = 0        (*)

which is called the trivial solution.
If A is singular, there can be infinitely many non-trivial solutions to AX = 0.

Note: A square matrix A is said to be singular if |A| = 0 (in which case A⁻¹ is not defined) and
non-singular if |A| ≠ 0.

Consider the system of linear equations given in (1). The system is said to be consistent if it
has a solution; otherwise it is inconsistent.

(1) can be represented as (Q1, Q2, …, Qm)T X = B as in (2), where Q1, Q2, …, Qm are the row
vectors of the matrix A. Let r of these vectors, say Q1, Q2, …, Qr, be linearly independent, where
r < m.

Then, if the system of equations AX = B in (1) is consistent, the full system

Q1X = b1, Q2X = b2, …, QmX = bm

is equivalent to the reduced system

Q1X = b1, Q2X = b2, …, QrX = br.

Theorem: If A is an r × n matrix, r ≤ n, with linearly independent row vectors, then there is
at least one r × r submatrix of A which is non-singular.

Theorem: Ar is non-singular if and only if the column vectors P1, P2, …, Pr are linearly
independent.

Proof: Let Ar = (P1 P2 … Pr) be non-singular. If P1, P2, …, Pr were linearly dependent, it would be
possible to express one of them, say Pm, as a linear combination of the others,

i.e., Pm = λ1m P1 + λ2m P2 + … + λrm Pr

Then by some column transformations the mth column of Ar can be reduced to

(0, 0, …, 0)T, so that |Ar| = 0. This would contradict the assumption that Ar is non-singular.

∴ P1, P2, …, Pr are linearly independent.

Conversely,

let P1, P2, …, Pr be linearly independent. Then there do not exist values λ1, λ2, …, λr, not
all zero, such that λ1P1 + λ2P2 + … + λrPr = 0,

i.e., (P1 P2 … Pr)(λ1, λ2, …, λr)T = 0        (3)

It means the only possible solution of the above system (3) in the r unknowns λ1, λ2, …, λr is the
trivial solution λ1 = λ2 = … = λr = 0,

which implies Ar is non-singular (recall (*)).

The significance of this theorem is that, in order to select a non-singular submatrix of
A in the system of equations AX = B as in (1), one need only find a linearly independent
subset of m column vectors out of the n column vectors of A.

Such a subset is not unique: there is at least one set of m linearly independent vectors
out of the n, and the maximum possible number of such sets is nCm.

For each choice of m linearly independent column vectors P1, P2, …, Pm of A, there is
an infinity of solutions to the system of equations, corresponding to arbitrary values of
xm+1, xm+2, …, xn. In particular we may put xm+1 = xm+2 = … = xn = 0 and get the
corresponding solution.

Such a solution is called a BASIC SOLUTION.

The corresponding linearly independent column vectors are called the BASIC
VECTORS or BASIS.
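As an illustration of a basic solution, the short NumPy sketch below (all data hypothetical) picks the first two columns of a 2 × 3 system as the basis, sets the remaining variable to zero, and solves for the basic variables.

import numpy as np

# Hypothetical system AX = B with m = 2 equations and n = 3 unknowns
A = np.array([[1.0, 2.0, 1.0],
              [2.0, 1.0, 3.0]])
B = np.array([4.0, 5.0])

basis_cols = [0, 1]             # choose P1, P2 as the basic vectors
A_basis = A[:, basis_cols]      # must be non-singular

x_basic = np.linalg.solve(A_basis, B)   # values of the basic variables
X = np.zeros(3)
X[basis_cols] = x_basic         # non-basic variable x3 stays at zero

print(X)        # basic solution [2, 1, 0]
print(A @ X)    # reproduces B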

0.4 Convex Sets

The point X = (1 − λ)X1 + λX2, 0 ≤ λ ≤ 1, is called a convex linear

combination of X1 and X2. It is a point lying on the line segment joining X1 and X2.

A set K ⊆ En is said to be convex if the convex linear combination of any two points
in K belongs to K. In other words, K is convex if, for X1, X2 ∈ K,

X = (1 − λ)X1 + λX2 ∈ K for 0 ≤ λ ≤ 1.

Let Xi ∈ En and let λi be non-negative real numbers such that Σ(i=1 to m) λi = 1. Then

X = λ1X1 + λ2X2 + … + λmXm is called a convex linear combination of the points
Xi, i = 1, 2, …, m.

Theorem 1. For a set K to be convex it is necessary and sufficient that every convex linear
combination of points in K belongs to K.

Theorem 2. Intersection of two convex sets is a convex set.

Definition 1. The convex hull of a set S is the intersection of all convex sets of which S is a
subset. The convex hull of S is denoted by [S].

[Figure: a set S and its convex hull [S]]

Note: A convex set is its own convex hull.

Vertices or Extreme Points of a Convex Set

Definition: A point X of a convex set K is an extreme point or vertex of K if it is not possible
to find two points X1 and X2 in K such that X = (1 − λ)X1 + λX2, 0 < λ < 1.

If X = (1 − λ)X1 + λX2 for some λ, where 0 < λ < 1, then X is a point lying on the

line segment joining X1 and X2.

Convex Polyhedron
The set of all convex linear combinations of a finite number of points Xi, i = 1, 2, …, m is
the convex polyhedron spanned by these points.
The convex polyhedron is a convex set.
Hyperplanes, Half Spaces and Polytopes
Let X ∈ En, let C ≠ 0 be a constant row n-vector and let α ∈ R. Then we define

(i) a hyperplane as {X | CX = α}

(ii) a closed half space as {X | CX ≤ α}

(iii) an open half space as {X | CX < α}

The intersection of a finite number of closed half spaces is called a polytope.


The half spaces are unbounded convex sets; since the intersection of convex sets is again
convex, a polytope is a closed convex set.
Theorem: For a set K to be convex it is necessary and sufficient that every convex linear
combination of points in K belongs to K.
Theorem: If a set K is non-empty, closed convex set and bounded from below (or above), then
it has at least one vertex.
Theorem: Every point of a convex set K is a convex linear combination of its vertices.
0.5 Quadratic Forms
Let X ∈ En. A homogeneous expression of the second degree of the form

f(X) = c11x1² + c22x2² + … + cnnxn² + c12x1x2 + c13x1x3 + c23x2x3 + …,
where cij ∈ R, i, j = 1, 2, …, n,

is called a quadratic form in the n variables x1, x2, …, xn. If we substitute aii for cii and

aji = aij for cij, where

aij = aji = ½ cij, i ≠ j,
the quadratic form can be put as

f  x   a x  a 22 x  ...  a nn x  a11x1x 2  a 21x 2 x 2  ...
2
11 1
2
2
2
n

n n
   a ijx i x j  X 'AX
i 1 j 1

where X is the column vector  x1 x 2 x 3 ...x n  ' and A is the real symmetric matrix.

 a11 a12 a1n 


a a 22 a 2n 
A   21
 
 
 a n1 a n 2 a nn 

For eg: f(X) = x1² + 2x2² + 7x3² + 4x1x2 + 6x1x3 + 5x2x3

can be put as

(x1 x2 x3) [ 1    2    3
             2    2   5/2
             3   5/2   7  ] (x1, x2, x3)T
Definition: A quadratic form X'AX is said to be positive definite
if X'AX > 0 for all X ≠ 0.

It is said to be positive semi-definite if X'AX ≥ 0 for all X and there is at least

one non-zero vector for which X'AX = 0.

Negative definite and negative semi-definite forms are defined by reversing the
inequality signs in the above definitions.
Theorem: Let the eigenvalues of the real symmetric n × n matrix A be λ1, λ2, …, λn. Then
the quadratic form X'AX is
(i) positive definite ⟺ λj > 0 for all j

(ii) negative definite ⟺ λj < 0 for all j

(iii) positive semi-definite ⟺ λj ≥ 0 with equality holding for at least one j

(iv) negative semi-definite ⟺ λj ≤ 0 with equality holding for at least one j
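The eigenvalue test above is easy to apply numerically. A minimal NumPy sketch, using the symmetric matrix of the worked example earlier in this section, is given below; the classification logic is only illustrative.

import numpy as np

# Symmetric matrix of f = x1^2 + 2x2^2 + 7x3^2 + 4x1x2 + 6x1x3 + 5x2x3
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 2.0, 2.5],
              [3.0, 2.5, 7.0]])

eigvals = np.linalg.eigvalsh(A)   # eigenvalues of a symmetric matrix
print(eigvals)

if np.all(eigvals > 0):
    print("positive definite")
elif np.all(eigvals < 0):
    print("negative definite")
elif np.all(eigvals >= 0):
    print("positive semi-definite")
elif np.all(eigvals <= 0):
    print("negative semi-definite")
else:
    print("indefinite")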

*******************

CHAPTER 1
EXTREMA OF FUNCTIONS

1.1 Real-valued functions

Let X be a vector in S ⊆ En, where En is the n-dimensional Euclidean space.

Definition: 1.1

A real-valued function f(X) defined on S is said to be continuous at a point X0 in S if

for each real ε > 0 there exists a real number δ > 0 such that

|X – X0| < δ ⇒ |f(X) – f(X0)| < ε

One can easily see that f(X) is continuous at X0 if and only if X → X0 implies f(X) →
f(X0). If f(X) is continuous at every point in S, we say f(X) is continuous in S.

Let δXj = [0, 0, …, δxj, 0, …, 0]' be an n-vector in En with all its components zero
except the jth, which is δxj.

Definition: 1.2

The partial derivative of f(X) with respect to any component xj of X is defined as

lim(δxj → 0) [f(X + δXj) − f(X)] / δxj,

if it exists, and is denoted by ∂f/∂xj.

The vector ∇f = [∂f/∂x1, ∂f/∂x2, …, ∂f/∂xn]' is called the gradient of f(X). It is usually denoted by grad f.

Definition: 1.3

If f(X) has continuous partial derivatives with respect to each of its variables, it is said
to be differentiable.

For any point X = (x1, x2, …, xn)' in En, let X + ΔX be a neighbouring point of X such that

ΔX = (δx1, δx2, …, δxn)'.
The Taylor series of f(X) is defined as

f(X + ΔX) = f(X) + (ΔX)' ∇f(X) + ½ (ΔX)' H(X) (ΔX) + e(X, ΔX) |ΔX|²

where e(X, ΔX) → 0 as ΔX → 0, and H(X) is the matrix of second-order partial derivatives

H(X) = [ ∂²f/∂x1²     ∂²f/∂x1∂x2   …   ∂²f/∂x1∂xn
         ∂²f/∂x2∂x1   ∂²f/∂x2²     …   ∂²f/∂x2∂xn
         …            …            …   …
         ∂²f/∂xn∂x1   ∂²f/∂xn∂x2   …   ∂²f/∂xn²  ]
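The gradient and the matrix H(X) that appear in this expansion can be approximated by finite differences. The sketch below (the function and step sizes are illustrative choices, not from the text) does this with NumPy.

import numpy as np

def f(X):
    # Illustrative function: f(x1, x2) = x1**2 + 3*x1*x2 + 2*x2**2
    return X[0]**2 + 3*X[0]*X[1] + 2*X[1]**2

def gradient(f, X, h=1e-5):
    n = len(X)
    g = np.zeros(n)
    for j in range(n):
        e = np.zeros(n); e[j] = h
        g[j] = (f(X + e) - f(X - e)) / (2*h)   # central difference for df/dxj
    return g

def hessian(f, X, h=1e-4):
    n = len(X)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            # second-order central difference for d2f / dxi dxj
            H[i, j] = (f(X + ei + ej) - f(X + ei - ej)
                       - f(X - ei + ej) + f(X - ei - ej)) / (4*h*h)
    return H

X0 = np.array([1.0, 2.0])
print(gradient(f, X0))   # approximately [8, 11]
print(hessian(f, X0))    # approximately [[2, 3], [3, 4]]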

Definition: 1.4

Let f(X) be a differentiable function and Y be a unit vector in En. The directional
derivative of f(X) in the direction of Y is defined as

lim(t → 0) [f(X + tY) − f(X)] / t

Using the Taylor series,

f(X + tY) = f(X) + t Y' ∇f(X) + terms in t²

∴ lim(t → 0) [f(X + tY) − f(X)] / t = Y' ∇f(X)

Hence Y' ∇f(X) is the rate of change of f(X) in the Y direction.

The unit vector in the direction of the gradient vector is ∇f/|∇f|, and so the rate of change of
f(X) in the direction of the gradient vector is

(∇f/|∇f|)' ∇f = (∇f)'(∇f) / √((∇f)'(∇f)) = |∇f|.

By the Cauchy-Schwarz inequality, |Y' ∇f| ≤ |Y| |∇f| = |∇f|, as |Y| = 1,

i.e. |Y' ∇f| ≤ |∇f|. This means that the rate of change of f(X) in the direction of the gradient
vector is not less than the rate in any other direction. Thus f(X) is increasing fastest in the direction

of ∇f and decreasing fastest in the direction of −∇f. The directions ∇f and −∇f are
respectively called the directions of steepest ascent and steepest descent.
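A short sketch of how the steepest-descent direction is used in practice (the function, step size and stopping rule are arbitrary choices for illustration, not part of the text):

import numpy as np

def f(X):
    return (X[0] - 1)**2 + 2*(X[1] + 3)**2        # illustrative convex function

def grad_f(X):
    return np.array([2*(X[0] - 1), 4*(X[1] + 3)])  # its gradient

X = np.array([5.0, 5.0])
step = 0.1
for _ in range(200):
    g = grad_f(X)
    if np.linalg.norm(g) < 1e-8:                  # stop when the gradient vanishes
        break
    X = X - step * g                              # move along -grad f (steepest descent)

print(X)   # approaches the minimizer (1, -3)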

1.2 Extrema of Functions

Unconstrained extrema of differentiable functions

Definition: 1.5

f(X) is said to have a global minimum at X0 in S if f(X) ≥ f(X0) for all X ∈ S.

Similarly we can define a global maximum.

Definition: 1.6

f(X) is said to have a relative minimum at X0 in S if there exists a δ-neighbourhood

of X0 such that f(X) ≥ f(X0) for all X in the neighbourhood.

The word extremum is used to indicate either a maximum or a minimum. The mathematical
analysis involved in discussing the maximum of a function is the same as for a minimum. A
minimization problem can easily be converted into a maximization problem using the relation
Max f(X) = −Min [−f(X)].

Let X ∈ En and f(X) be a real-valued function with second-order partial derivatives.
Setting X = X0 and X0 + ΔX0 = X, we may write the Taylor series as

f(X) = f(X0) + (ΔX0)' ∇f0 + ½ (ΔX0)' H(X0) (ΔX0) + e(X0, ΔX0) |ΔX0|²        (1.1)

where ∇f0 = ∇f(X) at X0 and H(X0) = H(X) at X0.

In order that X0 may be a point of relative extremum of f(X), there should be a
δ-neighbourhood N of X0 such that f(X) – f(X0) has the same sign (non-negative for a
minimum and non-positive for a maximum) for every X ∈ N. But for every X in N, the sign of
f(X) – f(X0) can be made to depend upon the sign of (ΔX0)' ∇f0 by taking δ sufficiently
small. But (ΔX0)' ∇f0 will change sign with ΔX0, and ΔX0 can be of either sign because
if |ΔX0| < δ then also |−ΔX0| < δ, so both ΔX0 and −ΔX0 satisfy the condition that
X is in N. So we conclude that so long as (ΔX0)' ∇f0 can change sign, f(X) – f(X0) cannot
retain the same sign for all X ∈ N. Hence, if X0 is a point of relative extremum, it is necessary

that (ΔX0)' ∇f0 = 0. Since in the unconstrained case the components of ΔX0, namely
δx10, …, δxn0, are independent, we conclude that ∇f0 = 0.

This is the necessary condition that f(X0) may be an extremum. From (1.1), the sign of
f(X) – f(X0) for all X in N can then be made to depend upon the sign of the quadratic expression

(ΔX0)' H(X0) (ΔX0) by taking δ sufficiently small. Hence f(X) has a relative minimum or

maximum at X0 according as this quadratic expression is positive or negative definite.

Now we define the saddle point of a function f.

Definition: 1.7

The function f(Y, Z) is said to have a saddle point at (Y0, Z0) if f(Y0, Z) ≤ f(Y0, Z0)
≤ f(Y, Z0) for all (Y, Z) in the neighbourhood of (Y0, Z0).

Suppose in (1.1) the quadratic expression (ΔX0)' H0 (ΔX0) is indefinite, that is, the
quadratic expression can take both positive and negative values depending upon the choice of
ΔX0. Then by a suitable transformation of the form ΔX0 = B ΔU0, where B is a suitable n × n
matrix, the indefinite quadratic form (ΔX0)' H(X0) (ΔX0) can be written as

(ΔU0)' diag(C1, C2, …, Cm, −Cm+1, …, −Cn) (ΔU0)

where Cj > 0, j = 1, 2, …, n. Observe that, since the quadratic expression is indefinite, some of

the diagonal elements are positive and some are negative.

Write ΔU0 = (ΔY0, ΔZ0)',

where ΔY0 is an m-vector and ΔZ0 is an (n−m)-vector.


Writing ΔY0 = (δy10, …, δym0)' and ΔZ0 = (δz10, …, δz(n−m)0)', the above expression

becomes

c1(δy10)² + … + cm(δym0)² − cm+1(δz10)² − … − cn(δz(n−m)0)²

The sign of f(X) – f(X0) now depends upon the sign of the above expression.

Notice that for ΔY0 ≠ 0, ΔZ0 = 0, f(X) − f(X0) ≥ 0, and for ΔY0 = 0, ΔZ0 ≠ 0, f(X) – f(X0) ≤ 0.

Thus f(X) is said to have a saddle point at X0.

Example:- Consider the function

f(X) = x1² + 4x2² + 4x3² + 4x1x2 + 4x1x3 + 16x2x3

∂f/∂x1 = 0 ⇒ 2x1 + 4x2 + 4x3 = 0
∂f/∂x2 = 0 ⇒ 4x1 + 8x2 + 16x3 = 0
∂f/∂x3 = 0 ⇒ 4x1 + 16x2 + 8x3 = 0

Solving we get x1 = 0, x2 = 0, x3 = 0. Differentiating further we get the matrix of second-order
partial derivatives

H(X) = [∂²f/∂xi∂xj] = 2 [ 1  2  2
                          2  4  8
                          2  8  4 ]
The leading principal minors of H(X) are 2, 0 and −128. Hence H(X) is indefinite. Therefore, f(X) has a saddle
point at X = (0, 0, 0)'.
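A quick numerical confirmation of this conclusion (a sketch only; the eigenvalue test from Chapter 0 is applied to the Hessian of the example):

import numpy as np

# Hessian of f(X) = x1^2 + 4x2^2 + 4x3^2 + 4x1x2 + 4x1x3 + 16x2x3 at any point
H = np.array([[2.0, 4.0, 4.0],
              [4.0, 8.0, 16.0],
              [4.0, 16.0, 8.0]])

eigvals = np.linalg.eigvalsh(H)
print(eigvals)                                        # mixed signs: H is indefinite
print(np.any(eigvals > 0) and np.any(eigvals < 0))    # True, so the origin is a saddle point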

ii) Constrained extrema

Let X ∈ S ⊆ En and f(X) be a real-valued function. Also let

gi(X) = 0, i = 1, 2, …, m        (C1)

for all X in some nonempty subset S1 of S, where each gi(X) is also a real valued function.

Definition: 8

f(X) is said to have a relative extremum at X0 subject to the constraints (C1) if there
exists a δ-neighbourhood N of X0 such that f(X) ≤ f(X0) (maximum) or f(X) ≥ f(X0)
(minimum) for every X ∈ S1 ∩ N. In either case X0 is a point of constrained extremum of
f(X).

Constrained extremal problems are solved using the method of

Lagrangian multipliers. We will begin our discussion with the implicit function theorem.

Theorem : C1
Let X ∈ En and gi(X), i = 1, 2, …, m, be m real-valued functions in some
neighbourhood of X0 in S ⊆ En. Also let
gi(X0) = 0, i = 1, 2, …, m        (C2)

Also let the Jacobian ∂(g1, g2, …, gm)/∂(x1, x2, …, xm) ≠ 0 at X0.

Then there exists a neighbourhood N of (xm+1,0, xm+2,0, …, xn,0) and functions φ1, φ2, …, φm

differentiable in N such that

xk = φk(xm+1, xm+2, …, xn), k = 1, 2, …, m, is a solution of (C2) in N, giving xk,0 at
(xm+1,0, xm+2,0, …, xn,0).
Essentially, the theorem indicates the conditions under which m of the n variables can be
expressed explicitly in terms of the remaining n−m variables, if m equations involving implicit
functions of the n variables are given. Now we move to the method of Lagrangian multipliers.

Theorem: C2
Let X ∈ S ⊆ En and f(X) be a real-valued differentiable function. Also let
gi(X) = 0, i = 1, 2, …, m        (C3)
where each gi is a real differentiable function and for each X in S the m × n matrix
(∂gi/∂xj), i = 1, 2, …, m, j = 1, 2, …, n        (C4)
is of rank m. Further, let f(X) have a relative extremum at X0 subject to the constraints. Then
there exist real numbers λi, i = 1, 2, …, m, such that X0 is a stationary point of the function
F(X) = f(X) + Σ(i=1 to m) λi gi(X)        (C5)

Proof:-

Without loss of generality assume that

∂(g1, g2, …, gm)/∂(x1, x2, …, xm) ≠ 0        (C6)

By implicit function theorem define x1 , x2 , …, xm as functions of xm+1, xm+2 , …, xn near


(xm+1,0 ,xm+2,0 , …, xn,0). The variables xm+1, xm+2 , …, xn may be regarded as independent.

Since X0 is a point of relative extremum of f(X) , it is a stationary point, and so


(ΔX0)'(∇f0) = 0, or, at X0,

(∂f/∂x1) δx1 + (∂f/∂x2) δx2 + … + (∂f/∂xn) δxn = 0        (C7)

Differentiating (C3) we get

g i g i  gi
 x1 +  x 2+ … +  xn = 0, i = 1, 2, …, m (C8)
 x1  x2  xn

Multiplying equation (C8) by  1,  2, … ,  m respectively and adding to (C7) we get

n f  g1  gm 
   x  λ1
 xj
 ...  λ m   xj = 0
 x j 
(C9)
j 1  j

Now consider the equations

f g  gm
 λ1 1  ...  λ m = 0 , j = 1, 2, …, m (C10)
 xj  xj  xj

Operations Research 20
School of Distance Education, University of Calicut
 f 
 g1 g 2 g m   1   x 
 x . . .  1
x1 x1      f 
 1   2
 . . . . . .   x2 
 .   
or  . . . . . .    = -  . 
   .
 . . . . . .     . 
 g1 g 2 g m   .  . 
. . .  f 
 xm xm xm  m   
 xm 

Using (C6) the above sets of equations will have a unique solution  1,  2 , …,  m .
Therefore, (C9) reduces to

n f g  gm 
   λ1 1  ...  λ m   xj = 0
 x j 
j  m 1   xj  xj

Also xj, j = m+1, m+2, …, n are independent; this says that

∂f/∂xj + λ1 ∂g1/∂xj + … + λm ∂gm/∂xj = 0 for j = m+1, …, n        (C11)

Equations (C10) and (C11) together imply

∂f/∂xj + λ1 ∂g1/∂xj + … + λm ∂gm/∂xj = 0, j = 1, 2, …, n

i.e., ∂F(X)/∂xj = 0, j = 1, 2, …, n, at X0.

Thus X0 is a stationary point of F(X).

Remark:

The function F(X) is usually referred to as the Lagrangian function and the parameters
λ1, λ2, …, λm are called Lagrangian multipliers.

The solution (x1,0, x2,0, …, xn,0, λ1, λ2, …, λm) obtained by solving ∂F(X)/∂xj = 0,

j = 1, 2, …, n, and gi(X) = 0, i = 1, 2, …, m, can be a point of constrained extremum of f(X).

Example:

Maximize f(X) = |X|², X ∈ E3

Subject to g1(X) = x1²/4 + x2²/5 + x3²/25 − 1 = 0
g2(X) = x1 + x2 − x3 = 0
The associated Lagrangian function is

F(X) = x1² + x2² + x3² + λ1 (x1²/4 + x2²/5 + x3²/25 − 1) + λ2 (x1 + x2 − x3)

∂F/∂x1 = 0 ⇒ 2x1 + (λ1/2) x1 + λ2 = 0
∂F/∂x2 = 0 ⇒ 2x2 + (2λ1/5) x2 + λ2 = 0
∂F/∂x3 = 0 ⇒ 2x3 + (2λ1/25) x3 − λ2 = 0

Solving the last three equations we get

x1 = −2λ2/(λ1 + 4),  x2 = −5λ2/(2λ1 + 10),  x3 = 25λ2/(2λ1 + 50)

But g2(X) = x1 + x2 − x3 = 0

⇒ −λ2 [ 2/(λ1 + 4) + 5/(2λ1 + 10) + 25/(2λ1 + 50) ] = 0

g1(X) = 0 ⇒

4λ2²/[4(λ1 + 4)²] + 25λ2²/[5(2λ1 + 10)²] + 625λ2²/[25(2λ1 + 50)²] = 1

⇒ λ2 ≠ 0

⇒ 2/(λ1 + 4) + 5/(2λ1 + 10) + 25/(2λ1 + 50) = 0

Solving we get λ1 = −10, −75/17

At λ1 = −10, x1 = (1/3)λ2, x2 = (1/2)λ2, x3 = (5/6)λ2.

Hence, from g1(X) = 0, λ2 = ± 6√5/√19.

Thus the stationary points are ± (2√5/√19, 3√5/√19, 5√5/√19) and |X|² = 10.

At λ1 = −75/17 we get the stationary points as ± (40/√646, −35/√646, 5/√646), and so |X|² = 75/17.

Hence the required extreme values of f(X) are 10 and 75/17.
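The stationary values above can be cross-checked numerically. A minimal SciPy sketch (the solver choice and starting points are arbitrary assumptions, not part of the text):

import numpy as np
from scipy.optimize import minimize

cons = (
    {'type': 'eq', 'fun': lambda x: x[0]**2/4 + x[1]**2/5 + x[2]**2/25 - 1},
    {'type': 'eq', 'fun': lambda x: x[0] + x[1] - x[2]},
)

# Maximize |X|^2 by minimizing its negative
res_max = minimize(lambda x: -(x @ x), x0=[1.0, 1.0, 2.0],
                   constraints=cons, method='SLSQP')
# Minimize |X|^2 directly
res_min = minimize(lambda x: x @ x, x0=[1.0, -1.0, 0.1],
                   constraints=cons, method='SLSQP')

print(-res_max.fun)   # close to 10
print(res_min.fun)    # close to 75/17, about 4.412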
1.3 Convex functions

Definition: 9

A subset K ⊆ En is said to be convex if for any two points X1, X2 ∈ K, the point
X = λX1 + (1 − λ)X2 also belongs to K for every λ, 0 ≤ λ ≤ 1.

Definition: 10

Let X ∈ K ⊆ En where K is a convex set. A function f(X) is said to be convex if for

any two points X1 and X2 in K

f(X) ≤ (1 − λ) f(X1) + λ f(X2),  0 ≤ λ ≤ 1,

for every X = (1 − λ)X1 + λX2.

The function is concave if the inequality sign is reversed or if -f(X) is convex.

In the one-dimensional case a convex function can be well explained through the

following geometric interpretation.

[Figure: the chord AB joining A = (X1, f(X1)) and B = (X2, f(X2)) lies above the curve; P is the point of the curve above X, C the point of the chord above X, and M, R, N are the points X1, X, X2 on the axis.]

Notice that X1 and X2 are any two points and X is such that X = (1 − λ)X1 + λX2 for some λ,
0 ≤ λ ≤ 1.

From the figure, PR = f(X), AM = f(X1), BN = f(X2) and CR = (1 − λ) f(X1) + λ f(X2).

PR ≤ CR

⇒ f(X) ≤ (1 − λ) f(X1) + λ f(X2),

i.e., the curve APB lies completely below the chord AB.

Theorem: D1

Let X ∈ En and f(X) = X'AX be a quadratic form. If X'AX is positive semi-definite,
then it is a convex function.

Proof:-

Let X1, X2 be any two points in En and X = (1 − λ)X1 + λX2.

Notice that (1 − λ) f(X1) + λ f(X2) = (1 − λ)(X1'AX1) + λ(X2'AX2)

∴ (1 − λ) f(X1) + λ f(X2) − f(X)

= (1 − λ)(X1'AX1) + λ(X2'AX2) − X'AX

= (1 − λ) X1'AX1 + λ X2'AX2 − ((1 − λ)X1 + λX2)' A ((1 − λ)X1 + λX2)

= λ(1 − λ)(X1'AX1 + X2'AX2 − 2 X1'AX2)

= λ(1 − λ)(X1 − X2)' A (X1 − X2) ≥ 0

(because 0 ≤ λ ≤ 1, X1 − X2 is a vector in En, and X'AX is positive semi-definite)

∴ f(X) ≤ (1 − λ) f(X1) + λ f(X2), or f(X) is a convex function.

The aim of the next theorem is to establish the fact that if f(X) is a convex function on a convex set, then
any relative minimum point is also a global minimum.

Theorem: D2

Let K ⊆ En be a convex set and f(X) a convex function. If X0 is a point of relative

minimum, it is also a global minimum.

Also, if this minimum is attained at more than one point, then the minimum is attained at every
convex linear combination of all such points.

Proof:-

Let f(X) have a relative minimum at X0. Let X1 ∈ K. Then for any δ > 0 it is possible
to choose λ, 0 < λ < 1, such that X = λX0 + (1 − λ)X1 lies in the δ-neighbourhood
of X0. By the definition of relative minimum, with X in this neighbourhood,

f(X0) ≤ f(X)

i.e. f(X0) ≤ f(λX0 + (1 − λ)X1)

     ≤ λ f(X0) + (1 − λ) f(X1)

⇒ (1 − λ) f(X0) ≤ (1 − λ) f(X1)

⇒ f(X0) ≤ f(X1)

⇒ f(X0) is a global minimum.

To prove the second part, let Y0 be another point where the minimum is attained. Then

f(X0) = f(Y0)

Take X = λX0 + (1 − λ)Y0.

Then f(X0) ≤ f(λX0 + (1 − λ)Y0)

         ≤ λ f(X0) + (1 − λ) f(Y0)

         = λ f(X0) + (1 − λ) f(X0) = f(X0)

⇒ f(X0) = f(λX0 + (1 − λ)Y0), for all λ, 0 < λ < 1.

This means that the minimum is attained at all convex linear combinations of X0 and Y0. This
proves the theorem.

Theorem: D3

Let f(X) be defined on a convex domain K ⊆ En and be differentiable. Then f(X) is a

convex function if and only if

f(X2) − f(X1) ≥ (X2 − X1)' ∇f(X1) for all X1, X2 ∈ K.


Proof:-

Suppose for any X1, X2 in K

f(X2) − f(X1) ≥ (X2 − X1)' ∇f(X1)

Choose the point X3 in K such that

X1 = λX2 + (1 − λ)X3,  0 ≤ λ ≤ 1

Then f(X3) − f(X1) ≥ (X3 − X1)' ∇f(X1)

From the last two inequalities

λ [f(X2) − f(X1)] + (1 − λ)[f(X3) − f(X1)] ≥ [λ(X2 − X1)' + (1 − λ)(X3 − X1)'] ∇f(X1)

i.e. λ f(X2) + (1 − λ) f(X3) − f(X1) ≥ [λX2 + (1 − λ)X3 − X1]' ∇f(X1) = 0

[because X1 = λX2 + (1 − λ)X3]

∴ λ f(X2) + (1 − λ) f(X3) ≥ f(X1)

or f(X1) ≤ λ f(X2) + (1 − λ) f(X3)

This proves the first part.

Conversely suppose f(X) is a convex function.

Let X1, X2 ∈ K and 0 < λ < 1; then

(1 − λ) f(X1) + λ f(X2) ≥ f((1 − λ)X1 + λX2)

⇒ λ [f(X2) − f(X1)] ≥ f((1 − λ)X1 + λX2) − f(X1)

⇒ f(X2) − f(X1) ≥ [ f(X1 + λ(X2 − X1)) − f(X1) ] / λ

Taking the limit λ → 0 we get

f(X2) − f(X1) ≥ (X2 − X1)' ∇f(X1)    [by Definition 1.4]

Hence the theorem.
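The gradient inequality of Theorem D3 is easy to test numerically for a given function. A small sketch (the quadratic function below is an arbitrary positive definite example chosen for illustration, not from the text):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # positive definite, so X'AX is convex

def f(X):
    return X @ A @ X

def grad_f(X):
    return 2 * A @ X                # gradient of X'AX for symmetric A

rng = np.random.default_rng(0)
ok = True
for _ in range(1000):
    X1, X2 = rng.normal(size=2), rng.normal(size=2)
    # Theorem D3: f(X2) - f(X1) >= (X2 - X1)' grad f(X1) for a convex function
    if f(X2) - f(X1) < (X2 - X1) @ grad_f(X1) - 1e-9:
        ok = False
print(ok)   # True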

Theorem: D4

Let f(X) be a convex differentiable function defined on a convex domain K ⊆ En. Then
f(X0), X0 ∈ K, is a global minimum if and only if

(X − X0)' ∇f(X0) ≥ 0 for all X ∈ K.

Proof:-

Let f(X0) be a global minimum. Then for all X ∈ K,

f(X) ≥ f(X0)

Since for any X in K, λX + (1 − λ)X0 is also in K,

f(λX + (1 − λ)X0) ≥ f(X0)

⇒ [ f(X0 + λ(X − X0)) − f(X0) ] / λ ≥ 0

Taking the limit λ → 0 we get

(X − X0)' ∇f(X0) ≥ 0

To prove the converse, let, for every X ∈ K,

(X − X0)' ∇f(X0) ≥ 0

Since f(X) is convex,

f(X) − f(X0) ≥ (X − X0)' ∇f(X0) ≥ 0

⇒ f(X) ≥ f(X0)

Thus f(X0) is a global minimum.

1.4 General problem of mathematical programming

A general programming problem can be written in the following manner. Let f(X) be
a function whose optimum value is to be obtained under certain limitations. The limitations
are usually expressed through a set of inequalities called constraints. Typically constraints can
be written as

gi(X) ≤ 0, i = 1, 2, …, m.
Thus a programming problem can be written as

Maximize (or minimize) f(X)

Subject to gi(X) ≤ 0, i = 1, 2, …, m.

Theorem: E1

Let X ∈ En and let gi(X), i = 1, 2, …, m, be convex functions in En. Let S ⊆

En be the set of points satisfying gi(X) ≤ 0, i = 1, 2, …, m. Then S is a convex set.

Proof:-

Let X1, X2 be in S, and let X3 = λX1 + (1 − λ)X2, 0 ≤ λ ≤ 1. Since gi(X) is a

convex function and gi(X1) ≤ 0, gi(X2) ≤ 0,

gi(X3) = gi(λX1 + (1 − λ)X2)

       ≤ λ gi(X1) + (1 − λ) gi(X2)

       ≤ 0

Hence X3 is in S, and S is a convex set.

*******************

CHAPTER 2
LINEAR PROGRAMMING

2.1 Linear Programming

In the general mathematical programming problem of optimizing f(X) subject to the

constraints gi(X) ≤ 0, i = 1, 2, …, m, X ≥ 0, when the function f(X) and the constraints
gi(X), i = 1, 2, …, m, are all linear, the problem reduces to a linear programming problem
(LPP). So the problem of LPP is to find an optimal (minimum or maximum) value of a linear function
subject to linear constraints, as

Optimise f(X) = c1x1 + c2x2 + … + cnxn

Subject to a11x1 + a12x2 + … + a1nxn ≤ b1

           a21x1 + a22x2 + … + a2nxn ≤ b2

           …………………………………….        (1)

           am1x1 + am2x2 + … + amnxn ≤ bm,

where x1, x2, …, xn ≥ 0.

OR,

Optimise f(X) = c1x1 + c2x2 + … + cnxn

Subject to AX ≤ B, X ≥ 0, where

A = [ a11  a12  …  a1n
      a21  a22  …  a2n
      …    …    …   …
      am1  am2  …  amn ],   B = [b1 b2 … bm]T,

X = [x1 x2 … xn]T. Here the cj's are called the cost coefficients.


Slack and Surplus Variables

A linear constraint of the form Σ(j=1 to n) aij xj ≤ bi can be converted into an equality by

adding a new non-negative variable to the left-hand side of the inequality. Such a variable
carries a numerical value, which is the difference between the right and left-hand sides of the
inequality, and is known as a slack variable. In a similar way, the non-negative variable

subtracted from the LHS of an inequality of the form Σ(j=1 to n) aij xj ≥ bi for converting it into an

equality is called a surplus variable.

Using the slack variables xn+1, xn+2, …, xm+n, the optimization problem given in (1) can be
rewritten as:

Optimize

f(X) = c1x1 + c2x2 + … + cnxn + 0·xn+1 + 0·xn+2 + … + 0·xm+n

subject to

a11x1 + a12x2 + … + a1nxn + xn+1 = b1

a21x1 + a22x2 + … + a2nxn + xn+2 = b2
………………………………………
am1x1 + am2x2 + … + amnxn + xm+n = bm

xi ≥ 0, i = 1, 2, …, m+n

i.e., to optimize f(X) = CX

subject to AX = B        (2)

X ≥ 0

where C = [c1 c2 … cn 0 0 … 0],

A = [ a11  a12  …  a1n  1  0  0  …  0
      a21  a22  …  a2n  0  1  0  …  0
      …    …    …   …   …  …  …  …  …
      am1  am2  …  amn  0  0  0  …  1 ]

X = [x1 x2 … xn xn+1 … xm+n]T,

B = [b1 b2 … bm]T

Consider a system of m linearly independent equations AX = B, X ≥ 0, where

A = [ a11  a12  …  a1n
      a21  a22  …  a2n
      …    …    …   …
      am1  am2  …  amn ],   B = [b1 b2 … bm]T,

X = [x1 x2 … xn]T.

Here we are considering n variables in m (m < n) equations. A vector X satisfying AX

= B and X ≥ 0 is called a feasible solution of the system.

In this system of equations, if any n−m of the variables are given the value zero, the
remaining system of m equations in m unknowns may have a unique solution.

This solution, along with the assumed zeros, is a solution of AX = B. It is called a Basic
Solution. The m variables remaining in the system after n−m variables have been put equal
to zero are called the Basic Variables or simply a Basis. The rest of the variables may be
called non-basic. Since the unique solution of m equations in m variables may also contain
zeros, a basic solution contains at least n−m zeros.

If the number of zeros in the basic solution is exactly n−m, then the solution is called a non-
degenerate basic solution, and if it is greater than n−m, the solution is said to be
degenerate.

In a basic solution, if all values of the xj's are ≥ 0, then the solution is called a 'Basic
Feasible Solution'.

A Basic Feasible Solution which optimizes the objective function f(X) is called an
Optimum Solution. In order to find a set of m basic variables out of the n variables of the
system of linearly independent equations AX = B, it is required only to identify m linearly
independent column vectors of the matrix A.

2.2 Canonical form of equations

Let x1, x2, …, xm be a set of basic variables corresponding to a system of

equations AX = B        (3)

Where,

A = [ a11  a12  …  a1m  a1,m+1  …  a1n
      a21  a22  …  a2m  a2,m+1  …  a2n
      …    …    …   …    …      …   …
      am1  am2  …  amm  am,m+1  …  amn ]

B = [b1 b2 … bm]T and X = [x1 x2 … xn]T

AX = B can be written as

[ a11  a12  …  a1m ] [ x1 ]   [ b1 − a1,m+1 xm+1 − … − a1n xn ]
[ a21  a22  …  a2m ] [ x2 ]   [ b2 − a2,m+1 xm+1 − … − a2n xn ]
[  …    …   …   …  ] [ …  ] = [              …                ]
[ am1  am2  …  amm ] [ xm ]   [ bm − am,m+1 xm+1 − … − amn xn ]

The m × m matrix on the LHS is non-singular because the basic vectors, which are the
columns of this matrix, are linearly independent. Pre-multiplying both sides by its inverse, we
get

[ x1 ]   [ b̄1 − ā1,m+1 xm+1 − … − ā1,n xn ]
[ x2 ]   [ b̄2 − ā2,m+1 xm+1 − … − ā2,n xn ]
[ …  ] = [              …                 ]
[ xm ]   [ b̄m − ām,m+1 xm+1 − … − ām,n xn ]
or

x1 + ā1,m+1 xm+1 + … + ā1,n xn = b̄1
x2 + ā2,m+1 xm+1 + … + ā2,n xn = b̄2
……………………………………                    ….(4)
xm + ām,m+1 xm+1 + … + ām,n xn = b̄m

Equations (4), which are equivalent to (3), are called the canonical form of the equations,

provided b̄i ≥ 0, i = 1, 2, …, m. Corresponding to each feasible basis we can get a

canonical form, and vice versa. The advantage of putting the equations in a canonical form is
that the basis and the corresponding basic feasible solution can be immediately known. Since a
basic feasible solution should have zero values for the non-basic variables, putting

xm+1 = xm+2 = … = xn = 0 in (4)

we get the basic feasible solution as

(b̄1, b̄2, …, b̄m, 0, 0, …, 0)

In an LPP, if f(X) = c1x1 + c2x2 + … + cmxm + … + cnxn is the objective function associated
with constraints of the form (3),

then after eliminating the basic variables from the objective function using (4) we get

f(X) = Σ(i=1 to m) b̄i ci + Σ(j=m+1 to n) c̄j xj

where c̄j = cj − Σ(i=1 to m) ci āij, j = m+1, m+2, …, n.

In the system of equations AX = B of (2), the columns corresponding to the m variables xn+1,
xn+2, …, xn+m are clearly seen to be linearly independent, and these variables can be taken as the
basic variables for an initial basic feasible solution.
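The canonical-form quantities b̄i, āij and the reduced costs c̄j can be computed mechanically once a basis is chosen. A small NumPy sketch illustrating the computation (all data are hypothetical):

import numpy as np

# Hypothetical system AX = B with a chosen basis in the first two columns
A = np.array([[1.0, 0.0, 2.0, 1.0],
              [0.0, 1.0, 1.0, 3.0]])
B = np.array([4.0, 6.0])
c = np.array([3.0, 2.0, 5.0, 4.0])      # cost coefficients
basic = [0, 1]                           # indices of the basic variables
nonbasic = [2, 3]

A_B = A[:, basic]
A_B_inv = np.linalg.inv(A_B)

b_bar = A_B_inv @ B                      # right-hand side of the canonical form
A_bar = A_B_inv @ A[:, nonbasic]         # coefficients a_bar(i, j) of the non-basic variables

# reduced costs: c_bar_j = c_j - sum_i c_i * a_bar(i, j)
c_bar = c[nonbasic] - c[basic] @ A_bar

print(b_bar)   # values of the basic feasible solution
print(c_bar)   # reduced costs of the non-basic variables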

Theorem:

The set SF of feasible solutions, if not empty, is a closed convex set (polytope) bounded
from below and so has at least one vertex.

Proof:-

SF is the intersection of the hyperplanes gi(X) = 0, i = 1, 2, …, m, and the set
H = {X | X ≥ 0}. All these are closed convex sets bounded from below. Hence SF is a
closed convex set bounded from below, and so has at least one vertex (by the earlier theorem).
Alternatively,

Let X1 and X2 be two feasible solutions; then X1 ≥ 0, X2 ≥ 0 and AX1 = B, AX2 = B.

Consider any convex linear combination of X1 and X2,

i.e. X = (1 − λ)X1 + λX2; 0 ≤ λ ≤ 1

Then AX = (1 − λ)AX1 + λAX2

        = (1 − λ)B + λB = B

∴ X is also a feasible solution.

Thus the set of all feasible solutions is a convex set.

Theorem:

A Basic Feasible Solution of an LPP is a vertex of the convex set of feasible solutions.

OR, equivalently: if a set of linearly independent vectors P1, P2, …, Pm can be found
such that

α1P1 + α2P2 + … + αmPm = B, where αj ≥ 0, j = 1, 2, …, m,

then X̄ = [α1, α2, …, αm, 0, 0, …, 0], which is a Basic Feasible Solution, is an extreme

point (vertex) of SF.

Proof:-

Let SF be the set of feasible solutions.

Clearly X̄ = [α1, α2, …, αm, 0, 0, …, 0] ∈ SF.

Suppose that it is not an extreme point.

Then two points X1 and X2, different from X̄, exist in SF such that

X =  X1 + (1 -  )X2 , 0 <  < 1

ie  j =  xj1 + (1 -  )xj2 , j = 1, 2, … m and,

0 =  xj1 + (1 -  )xj2 , j = m + 1, m + 2, …n

Since X1, X2  SF , xj1 , xj2  0

Also 0 <  < 1 , Hence Xj1 = Xj2 = 0 , j = m + 1, m + 2, … n

So X1 = [x11, x21, … xm1, 0, 0 … 0]'

X2 = [x12, x22, … xm2, 0, 0 … 0]'

Since X1, X2 are solutions of AX = B,

x11P1 + x21P2 + … + xm1Pm = B (1)

x12P1 + x22P2 + … + xm2Pm = B (2)

Since X   SF, we have

 1P1 +  2P2 + …+  mPm = B (3)

(3) – (1) 

(  1 – x11)P1 + (  2 – x21)P2 + …… + (  m – xm1)Pm = 0 (4)

but P1, P2, …… ,Pm are by hypothesis, linearly independent

 (4) is possible only for (  1 – x11) = 0 , (  2 – x21) = 0 … (  m – xm1) = 0

  1 = x11 ,  2 = x21 ,  m = xm1 or X  = X1

Which is contradicting the assumption. Hence X  is an extreme point.

Conversely,

Theorem:

A vertex of SF is a basic feasible solution.

Proof:

Let X̄ = (α1, α2, …, αn)' be a vertex of SF.

Then, since X̄ ∈ SF, X̄ ≥ 0.

Let r of the αj's, j = 1, 2, …, n, be non-zero, where r ≤ n.

Since m < n, either r ≤ m or r > m.

If r ≤ m, then X̄ is obviously a basic feasible solution and so the theorem holds.

If r > m, then we may write X̄ as X̄ = [α1, α2, …, αr, 0, 0, …, 0]'

where αj > 0 for j = 1, 2, …, r.

Since X̄ is a solution of AX = B,

we have α1P1 + α2P2 + … + αrPr = B        (1)

As r > m, the vectors P1, P2, …, Pr are not linearly independent. So there exist λ1,
λ2, …, λr, not all zero, such that

λ1P1 + λ2P2 + … + λrPr = 0        (2)

Multiplying (2) by c,

cλ1P1 + cλ2P2 + … + cλrPr = 0        (3)

(1) + (3) ⇒ (α1 + cλ1)P1 + (α2 + cλ2)P2 + … + (αr + cλr)Pr = B

(1) – (3) ⇒ (α1 − cλ1)P1 + (α2 − cλ2)P2 + … + (αr − cλr)Pr = B

Choose c > 0 sufficiently small to make αj ± cλj > 0 for j = 1, 2, …, r.

Then we can conclude that

X1 = [α1 + cλ1, α2 + cλ2, …, αr + cλr, 0, 0, …, 0]'

and X2 = [α1 − cλ1, α2 − cλ2, …, αr − cλr, 0, 0, …, 0]'
are feasible solutions.

We have now three feasible solutions,

X̄, X1, X2, which are related through

X̄ = ½ X1 + ½ X2,

i.e., X̄ is a convex linear combination of X1 and X2. This means X̄ is not a vertex,
which contradicts our assumption.

Hence r cannot be greater than m, which means X̄ is a Basic Feasible Solution.

Theorem:

If SF is non-empty, the objective function f(X) has either an unbounded minimum or it


is minimum at a vertex of SF.

Proof:-

There are two cases of SF (i) SF is bounded (ii) SF is unbounded.

We are considering only the case where SF is bounded.

To prove f(X) is minimum at a vertex of SF.

Since SF is bounded SF has vertices and every point of SF is a convex linear


combination of its vertices.

Let X1, X2, ……Xp be the vertices of SF

There is a point X0 ∈ SF where f(X) is minimum. If X0 is a vertex, then there is nothing to prove.

Otherwise, if X0 is not a vertex,

X0 can be expressed as a convex linear combination of the Xr, r = 1, 2, …, p, and so

X0 = Σ(r=1 to p) λr Xr;  Σr λr = 1,  λr ≥ 0

Since f(X) is linear, f(X0) = f(Σr λr Xr) = Σr λr f(Xr) ≥ Σr λr f(Xk) = f(Xk)
where f(Xk) is the least of the values f(Xr) , r = 1, 2,…….. p,

But by hypothesis f(X0) ≤ f(Xk)

⇒ f(X0) = f(Xk),

which means that f(X) is minimum at Xk, which is a vertex of SF.

Theorem:

If f(X) is minimum at more than one of the vertices of SF, then it is minimum at all
those points which are the convex linear combinations of these vertices.

Proof:-

Let X1, X2, ………Xk be the vertices of SF where f(X) is minimum.

Then f(X1) = f(X2) = ………= f(Xk)

Let Y be any convex linear combination of these vertices.

Then, Y = Σ(r=1 to k) λr Xr,  Σr λr = 1,  λr ≥ 0

and since f(X) is linear

f(Y) = f(Σr λr Xr) = Σr λr f(Xr)

     = Σr λr f(X1) = f(X1), which means f(X) is minimum at Y also.

2.3 Methods for solving LPP

1) Graphical Approach:

A linear programming problem involving two decision variables can be solved

graphically. In the graphical method, the set of solutions satisfying the constraint
conditions is determined and the vertices of the solution set (SF) are checked for the optimal
solution.

Example:

Solve graphically, Maximize 5x1 + 3x2


Subject to 4x1 + 5x2 ≤ 10

           5x1 + 2x2 ≤ 10

           3x1 + 8x2 ≤ 12

           x1 ≥ 0, x2 ≥ 0

In a graph, the region satisfying each of the constraint inequalities is shaded and the
common region for all the constraint inequalities is found.

This is the solution region, which forms a convex set, and the objective function attains
its optimum at a vertex of this set of feasible solutions.

[Figure: the three constraint lines 4x1 + 5x2 = 10, 5x1 + 2x2 = 10 and 3x1 + 8x2 = 12 plotted in the first quadrant; the shaded feasible region is the polygon ABCDE with A at the origin.]

The set of feasible solutions is ABCDE, which is a convex set. The vertices are A(0, 0), B(2, 0)
and E(0, 1.5); C and D can be found by solving the equations of the lines intersecting at these points.

Solving the equations 5x1 + 2x2 = 10 and

4x1 + 5x2 = 10

∴ the vertex C = (30/17, 10/17)

solving 4x1 + 5x2 = 10 and

3x1 + 8x2 = 12

∴ the vertex D = (20/17, 18/17)

At A(0, 0) the value of the objective function

f(x) = 5x1 + 3x2 = 0

At B(2, 0), f(x) = 10

At E(0, 1.5), f(x) = 4.5

At C(30/17, 10/17), f(x) = 180/17

At D(20/17, 18/17), f(x) = 154/17

Among these, f(x) is maximum at C(30/17, 10/17).

∴ the maximum value of the function = 180/17,

attained at x1 = 30/17 and x2 = 10/17.

Unfortunately, the graphical method cannot be used with more than two, or possibly
three, variables. Hence the importance of the simplex method.
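The graphical solution above can be cross-checked with a library LP solver. A minimal sketch using scipy.optimize.linprog (which minimizes, so the objective is negated):

from scipy.optimize import linprog

# Maximize 5x1 + 3x2  <=>  minimize -5x1 - 3x2
c = [-5, -3]
A_ub = [[4, 5],
        [5, 2],
        [3, 8]]
b_ub = [10, 10, 12]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)      # approximately [30/17, 10/17]
print(-res.fun)   # approximately 180/17, about 10.588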

(ii) Simplex Method

The simplex method is an iterative procedure for solving a linear programming


problem in a finite number of steps. This method provides an algorithm which consists in
moving from one vertex of the set of feasible solution (SF) to another in a prescribed manner
such that the value of the objective function f(X) at the succeeding vertex is less than that of
the preceding vertex for a minimization problem.

The algebraic procedure of solving an LPP through Simplex Method is at first illustrated
through an example.

Maximize f(x) = 4x1 + 5x2
Subject to x1 − 2x2 ≤ 2
           2x1 + x2 ≤ 6
           x1 + 2x2 ≤ 5
           −x1 + x2 ≤ 2
           x1 ≥ 0, x2 ≥ 0

Step I

The problem is changed into a minimization problem: maximizing f(x) is equivalent to minimizing φ(x),
where φ(x) = −f(x).

Add slack variables to the constraint inequalities and state the problem in the general form as
follows.

Minimize φ(x) = −4x1 − 5x2

Subject to

x1 − 2x2 + x3 = 2
2x1 + x2 + x4 = 6
x1 + 2x2 + x5 = 5
−x1 + x2 + x6 = 2
xi ≥ 0, i = 1, 2, …, 6
where x3, x4, x5, x6 are the slack variables.

Step II

Since the problem has four constraint equations in six unknowns, to get an initial
basic feasible solution the variables x3, x4, x5 and x6, having linearly independent column
vectors, are taken as basic variables and the other two variables x1 and x2 are set equal to zero.

Then we get the initial basic feasible solution as x1 = 0, x2 = 0, x3 = 2, x4 = 6, x5 = 5, x6 = 2,

with φ(X) = 0.

First Iteration Table

Cj -4 -5 0 0 0 0

CB yB xB P1 P2 P3 P4 P5 P6 Ratio

0 x3 2 1 -2 1 0 0 0 2/-2

0 x4 6 2 1 0 1 0 0 6/1

0 x5 5 1 2 0 0 1 0 5/1

0 x6 2 -1 1 0 0 0 1 2/1 

=0 -4 -5 0 0 0 0  Cj - Zj

where Zj = PjT CB

Step III

From the iteration table, if all the Cj − Zj, j = 1, 2, …, 6, values are non-negative the current
solution is optimal. But here the Cj − Zj values corresponding to the variables x1 and x2,

i.e., C1 − Z1 and C2 − Z2, are negative. So the current solution is not optimal. Hence the
variable corresponding to the most negative value of Cj − Zj is taken as the incoming basic variable
and one of the current basic variables is to be removed. In our case the variable x2 enters the
basic variable set (basis). To determine the outgoing variable from the basis, find the ratios of the
xB values to the corresponding entries of the vector P2; the variable corresponding to the minimum
positive value of these ratios goes out from the basis. Here the ratios are (2/−2, 6/1, 5/1, 2/1).
Among these the minimum positive value 2/1 corresponds to the variable x6. So x6 goes out.
Step IV

Introducing the new variable x2 to the basis and eliminating x2 from the first three
equations with the help of the last, using elementary transformations, we get the second
iteration table as


Cj            -4   -5   0   0   0   0

CB   yB   xB   P1   P2   P3   P4   P5   P6   Ratio

0    x3   6    -1    0    1    0    0    2   6/-1

0    x4   4     3    0    0    1    0   -1   4/3

0    x5   1     3    0    0    0    1   -2   1/3

-5   x2   2    -1    1    0    0    0    1   2/-1

φ = -10        -9    0    0    0    0    5   ← Cj - Zj

Now the solution is x1 = 0, x2 = 2, x3 = 6, x4 = 4, x5 = 1, x6 = 0, and φ(x) = −10.

Since the Cj − Zj's for j = 1, 2, …, 6 are not all ≥ 0, the current solution is not optimal.

As in the method described for the first iteration, here x1 enters the basis and x5 goes out.

The iterations are continued until the optimal solution is attained.

Third iteration table

Cj            -4   -5   0   0    0     0
CB   yB   xB   P1   P2   P3   P4   P5    P6     Ratio
0    x3   19/3  0    0    1    0   1/3   4/3    (19/3)/(4/3)
0    x4   3     0    0    0    1   -1    1      3/1
-4   x1   1/3   1    0    0    0   1/3   -2/3   (1/3)/(-2/3)
-5   x2   7/3   0    1    0    0   1/3   1/3    (7/3)/(1/3)
φ = -13         0    0    0    0   3     -1     ← Cj - Zj

C6 − Z6 is negative. The current solution is not optimal; x6 enters the basis and x4 goes out.

Fourth iteration table

Cj            -4   -5   0    0     0     0
CB   yB   xB   P1   P2   P3   P4    P5    P6
0    x3   7/3   0    0    1   -4/3  5/3   0
0    x6   3     0    0    0    1    -1    1
-4   x1   7/3   1    0    0    2/3  -1/3  0
-5   x2   4/3   0    1    0   -1/3  2/3   0
φ = -16         0    0    0    1    2     0   ← Cj - Zj

Since all Cj − Zj values are ≥ 0, the current solution is optimal.

Hence the minimum value of φ(X) is −16,

where x1 = 7/3, x2 = 4/3, x3 = 7/3, x4 = 0, x5 = 0, x6 = 3.

But x3, x4, x5 and x6 are slack variables. Hence the solution of the problem is

Minimum φ(X) = −16, i.e., Maximum f(x) = 4x1 + 5x2 = 16,

where x1 = 7/3 and x2 = 4/3.

In short, the simplex algorithm can be stated as follows (a minimal code sketch of a single iteration is given after the list).

Steps:

1. Make the LPP of minimization type.

2. In the standard form of the LPP ensure the bi's are non-negative, or make them
non-negative.
3. Obtain the initial basic feasible solution.
4. Form the simplex table.
5. Compute the net evaluations Cj − Zj, where Zj = PjT CB. If all Cj − Zj's are ≥ 0, the solution
is optimal.
6. If not, improve the solution by choosing as the new basic variable the one with the
most negative net evaluation.
7. Determine the outgoing variable, which is the variable with the least positive ratio of its
xB value to the corresponding entry in the column vector of the incoming variable.
8. Form the simplex table with the revised basis.
9. Go to step 5.
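A minimal sketch of steps 5–8 in code (NumPy, dense computation, minimization form with the problem already written with equality constraints and a known starting basis; degeneracy and unboundedness are not handled):

import numpy as np

def simplex_min(c, A, b, basis):
    # Sketch of the simplex iterations for: minimize c'x subject to Ax = b, x >= 0.
    # 'basis' holds the column indices of an initial basic feasible solution.
    m, n = A.shape
    A = A.astype(float); b = b.astype(float); c = c.astype(float)
    while True:
        B_inv = np.linalg.inv(A[:, basis])
        x_B = B_inv @ b                          # current values of the basic variables
        cj_zj = c - c[basis] @ (B_inv @ A)       # net evaluations Cj - Zj (step 5)
        if np.all(cj_zj >= -1e-9):               # optimal when all Cj - Zj >= 0
            x = np.zeros(n)
            x[basis] = x_B
            return x, c @ x
        j = int(np.argmin(cj_zj))                # most negative Cj - Zj enters (step 6)
        col = B_inv @ A[:, j]
        ratios = np.full(m, np.inf)
        positive = col > 1e-9
        ratios[positive] = x_B[positive] / col[positive]
        p = int(np.argmin(ratios))               # least positive ratio leaves (step 7)
        basis[p] = j                             # revise the basis (step 8)

# The worked example above: minimize -4x1 - 5x2 with slack variables x3..x6
c = np.array([-4, -5, 0, 0, 0, 0])
A = np.array([[ 1, -2, 1, 0, 0, 0],
              [ 2,  1, 0, 1, 0, 0],
              [ 1,  2, 0, 0, 1, 0],
              [-1,  1, 0, 0, 0, 1]])
b = np.array([2, 6, 5, 2])
x, value = simplex_min(c, A, b, basis=[2, 3, 4, 5])
print(x[:2], value)   # approximately [7/3, 4/3] and -16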

2.4 Degeneracy

In simplex method, the least of a set of non-negative ratios decides the outgoing
variable at a particular iteration. It may happen that two or more ratios are equal and the least.
In that case a tie occurs as to which variable to drop. One can arbitrarily decide in favour of
one, but then it turns out that the variables which tied with it, and continue to remain in the
basis, also become zero. In other words, one or more of the basic variables too have zero
value. Such a case is called degenerate. In degeneracy the basis is, theoretically, changed, but
the value of the objective function remains the same. Geometrically, it may be interpreted as
the case of two coincident vertices. We change from one to the other vertex but substantially
remain where we are. In most cases we go ahead with our iterations and finally reach a basis
with improved value for the objective function, and finally get the optimal solution. But in
some cases after many iterations we arrive back at the degenerate basis from which we
started. Such cases are termed as cyclic degeneracy.

2.5 Simplex Method – Artificial Variable technique (Two phase Method)

While solving an LPP, when the constraints are of the '≤' type and all the constants on the
right-hand sides of the inequalities are non-negative, it is easy to reach an initial basic feasible
solution. But if there is a 'greater than' constraint with a non-negative right-hand side, or a 'less
than' constraint with a negative right-hand side, then a basic feasible solution cannot be
obtained right away.

To overcome this difficulty we first put the constraints so that the right-hand-side
constants are all non-negative. Then we introduce the necessary slack variables. To get a basic
feasible solution of this system we formulate an auxiliary LP problem, one basic
feasible solution of which can be obtained straight away with the help of the artificial variables introduced.
The optimal solution of this auxiliary problem gives a basic feasible solution to the original
problem.

While using this method, in the first phase the auxiliary problem is solved using the simplex
method, and then the original problem is solved in the second phase, with the initial basic solution
obtained from the optimal solution of the problem solved in the first phase.

The method is illustrated here through an example:


Minimize f(X) = 4x1 + 5x2
Subject to 2x1 + x2 ≤ 6
           x1 + 2x2 ≤ 5
           x1 + x2 ≥ 1
           x1 + 4x2 ≥ 2
           x1, x2 ≥ 0.

Introducing slack and surplus variables, the problem becomes
Minimize f(x) = 4x1 + 5x2

Subject to 2x1 + x2 + x3 = 6
x1 + 2x2 + x4 = 5
x1 + x2 -x5 = 1
x1 + 4x2 - x6 = 2
x1, x2, x3, x4, x5, x6 ≥ 0.

The solution x1 = 0 , x2 = 0 , x3 = 6 , x4 = 5 , x5 = -1 , x6 = -2 is not a basic feasible solution.

To get a starting basic feasible solution to the problem we are formulating an auxiliary
problem by introducing two artificial variables x7 and x8 to the third and fourth constraint
equations, the new problem is to,

Minimize g(X) = x7 + x8
Subject to 2x1 + x2 + x3 = 6
x1 + 2x2 + x4 = 5
x1 + x2 - x 5 + x7 = 1
x1 + 4x2 –x6 + x8 = 2
xi ≥ 0, i = 1, 2, …, 8

Here we have a problem with four equations and eight unknowns. Keeping x3, x4, x7 and x8 as
basic variables, we get an initial basic feasible solution to the problem as x1 = 0, x2 = 0, x3 =
6, x4 = 5, x5 = 0, x6 = 0, x7 = 1, x8 = 2, and g(X) = 3.

The simplex table is as follows:

Cj 0 0 0 0 0 0 1 1

CB yB xB P1 P2 P3 P4 P5 P6 P7 P8 ratio
0 x3 6 2 1 1 0 0 0 0 0 6/1
0 x4 5 1 2 0 1 0 0 0 0 5/2
1 x7 1 1 1 0 0 -1 0 1 0 1/1
1 x8 2 1 4 0 0 0 -1 0 1 2/4

g=3 -2 -5 0 0 1 1 0 0  Cj - Zj

Since the current solution is not optimal, the variable x2 enters and x8 leaves the basis;
the new simplex table is

Cj            0    0    0   0   0    0     1   1
CB   yB   xB   P1   P2   P3   P4   P5   P6    P7   P8    Ratio
0    x3   11/2 7/4   0    1    0    0   1/4    0   -1/4   (11/2)/(7/4)
0    x4   4    1/2   0    0    1    0   1/2    0   -1/2   4/(1/2)
1    x7   1/2  3/4   0    0    0   -1   1/4    1   -1/4   (1/2)/(3/4)
0    x2   1/2  1/4   1    0    0    0   -1/4   0    1/4   (1/2)/(1/4)
g = 1/2       -3/4   0    0    0    1   -1/4   0    5/4   ← Cj - Zj

x1 enters the basis and x7 leaves from the basis.

Cj 0 0 0 0 0 0 1 1
CB yB xB P1 P2 P3 P4 P5 P6 P7 P8
0 x3 13/3 0 0 1 0 7/3 -1/3 -7/3 1/3
0 x4 11/3 0 0 0 1 2/3 1/3 -2/3 -1/3
0 x1 2/3 1 0 0 0 -4/3 1/3 4/3 -1/3
0 x2 1/3 0 1 0 0 1/3 -1/3 -1/3 1/3
g = 0          0    0    0    0    0    0    1    1   ← Cj - Zj

Since all Cj − Zj's are ≥ 0, the solution is optimal.

From here the second phase of the problem starts.

In the simplex table we remove the columns of the artificial variables, i.e., P7 and P8,

give the original cost coefficients as in our original problem, accept the variables remaining
in the optimal solution as the basis for the original problem, and continue the simplex method.

Then the simplex table is as follows.

Cj            4    5    0   0    0     0
CB   yB   xB   P1   P2   P3   P4   P5    P6
0    x3   13/3  0    0    1    0   7/3   -1/3
0    x4   11/3  0    0    0    1   2/3    1/3
4    x1   2/3   1    0    0    0   -4/3   1/3
5    x2   1/3   0    1    0    0   1/3   -1/3
f = 13/3        0    0    0    0   11/3   1/3   ← Cj - Zj

Here all Cj − Zj's are non-negative, so the current solution is optimal for the original problem.
Hence the solution of our problem is

Min f(X) = 13/3

where x1 = 2/3, x2 = 1/3.
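The result of the two-phase computation can be checked with a library solver. A minimal sketch using scipy.optimize.linprog; the ≥ constraints are rewritten as ≤ by negation:

from scipy.optimize import linprog

# Minimize 4x1 + 5x2 subject to the constraints of the two-phase example
c = [4, 5]
A_ub = [[ 2,  1],     # 2x1 +  x2 <= 6
        [ 1,  2],     #  x1 + 2x2 <= 5
        [-1, -1],     #  x1 +  x2 >= 1  ->  -x1 -  x2 <= -1
        [-1, -4]]     #  x1 + 4x2 >= 2  ->  -x1 - 4x2 <= -2
b_ub = [6, 5, -1, -2]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)     # approximately [2/3, 1/3]
print(res.fun)   # approximately 13/3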

2.6 Big M-Method

It is also possible to solve an LPP in one phase even after introducing artificial
variables. One such method is called Big M-method. In this method the original objective
function f is replaced by

F = f + M Σ(i=1 to m) xn+i

where the xn+i are the artificial variables and M is an arbitrarily large number compared with the
coefficients in f. The modified objective function F is minimized subject to the constraints of
the original problem. It can be shown that, if in the optimal solution of the modified problem
all the artificial variables are zero, then that is also the optimal solution of the original
problem. But if in the optimal solution of the modified problem the artificial variables are
not all zero, the conclusion is that the original problem is not feasible. If the modified problem is
found to have an unbounded minimum, then the original problem too, if feasible, is unbounded.

2.7 Simplex Multipliers

Consider the objective function

f(X) = C1x1 + C2x2 + … + Cmxm + Cm+1xm+1 + … + Cnxn        (1)

Subject to the constraint equations,

a11x1 + a12x2 + … + a1mxm + a1,m+1xm+1 + … + a1nxn = b1
a21x1 + a22x2 + … + a2mxm + a2,m+1xm+1 + … + a2nxn = b2
…………………………………………………        (2)
am1x1 + am2x2 + … + ammxm + am,m+1xm+1 + … + amnxn = bm

xi ≥ 0, i = 1, 2, …, n

as shown in the canonical form of equations, if we assume x1, x2, … xm are the basic
variables, the objective function f(X) can be expressed in terms of non basic variables x m+1,
xm+2, … xn as follows

m n
f(X) =  bi Ci +
i 1
C
j  m 1
j xj

m
where C j = Cj - C a
i 1
i ij , j=m+1,m+2,…n

This type of expression for f(X) in terms of non-basic variable is necessary at every iteration
of the simplex method.

It is possible to get the expression for objective of function in terms of non-basic variables
directly from the original equations (2) as follows:

Let us multiply each of the equations in (2) by constants  1,  2, …..  m and add them to
(1)

(C1 + Σ_{i=1}^{m} ai1 πi) x1 + (C2 + Σ_{i=1}^{m} ai2 πi) x2 + … + (Cn + Σ_{i=1}^{m} ain πi) xn = f + Σ_{i=1}^{m} bi πi

Choose π1, π2, …, πm such that the coefficients of x1, x2, …, xm become zero. That is, choose the πi such that

Σ_{i=1}^{m} aij πi = −Cj ,  j = 1, 2, …, m        (3)

Then

(Cm+1 + Σ_{i=1}^{m} ai,m+1 πi) xm+1 + (Cm+2 + Σ_{i=1}^{m} ai,m+2 πi) xm+2 + … + (Cn + Σ_{i=1}^{m} ain πi) xn = f + Σ_{i=1}^{m} bi πi

⇒ f = Σ_{j=m+1}^{n} C̄j xj − Σ_{i=1}^{m} bi πi , where C̄j = Cj + Σ_{i=1}^{m} aij πi , j = m+1, m+2, …, n        (4)

The solution of the m equations in π1, π2, …, πm given by (3) gives the required values of the πi.

In matrix form (3) is written as

A0' π = −C0' ,

where A0 = [aij] , i, j = 1, 2, …, m , is the matrix of the columns of the basic variables,
π = [π1, π2, …, πm]' and C0' = [C1, C2, …, Cm]' . Hence

π = −[A0']⁻¹ C0'  ,  i.e.,  π' = −C0 A0⁻¹ .

The vector π is called the multiplier vector and its components are the simplex multipliers.

To calculate π at any stage, the inverse of A0, the matrix of the basic vectors at that stage, is
needed. Having calculated the πi for any basis, the value of the objective function for that
basis is given by

f = −Σ_{i=1}^{m} bi πi ,

because the remaining terms in equation (4) contain the non-basic variables xm+1, xm+2, …, xn,
which are zero.
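
A small numerical sketch of these formulas (using NumPy; not part of the original text), applied to the optimal basis {x3, x4, x1, x2} of the two-phase example solved above:

import numpy as np

A0 = np.array([[1., 0., 2., 1.],    # columns of x3, x4, x1, x2
               [0., 1., 1., 2.],
               [0., 0., 1., 1.],
               [0., 0., 1., 4.]])
C0 = np.array([0., 0., 4., 5.])     # costs of the basic variables
b  = np.array([6., 5., 1., 2.])

pi = np.linalg.solve(A0.T, -C0)     # simplex multipliers from A0' pi = -C0'
print(pi)                            # (0, 0, -11/3, -1/3)
print(-b @ pi)                       # f = -sum(bi * pi_i) = 13/3

# Reduced costs of the non-basic columns P5 = (0,0,-1,0)' and P6 = (0,0,0,-1)':
for col in ([0, 0, -1, 0], [0, 0, 0, -1]):
    print(0 + np.array(col) @ pi)    # Cj + sum(aij * pi_i): 11/3 and 1/3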

2.8 Revised simplex Method

The revised simplex method is a modification of the original simplex method which is
more economical on the computer: it uses less memory and avoids unnecessary calculations
of the original method, but it is not very convenient for hand computation.

The revised simplex method does not work out the entire simplex tableau; it calculates only
the values required in the following essential steps.

(i) For a feasible basis xj , j = 1, 2, …, m, the coefficients C̄j , j = m+1, …, n of the equation

f(X) = Σ_{i=1}^{m} bi Ci + Σ_{j=m+1}^{n} C̄j xj

are directly calculated using the simplex multipliers.

(ii) Let one of these coefficients, say C̄r , be negative. So we choose xr to be brought into
the basis. The numbers āir , i = 1, …, m, in the column of xr as in the canonical form
illustration are now directly computed.

(iii) Let b̄p / āpr = min over i { b̄i / āir : āir > 0 }.

So we replace xp by xr to get the new basis. The matrix of the new basis is obtained directly.
The steps are repeated till an optimal solution is attained.

The direct calculations in the steps are explained below.

(i) Let A0 be the matrix of a feasible basis. A0⁻¹ is computed. Then the multiplier vector π is
determined from π' = −C0 A0⁻¹. Using π, calculate C̄j , j = m+1, …, n, as

C̄j = Cj + Σ_{i=1}^{m} aij πi .

(ii) From the canonical form illustration we have

[ā1r , ā2r , …, āmr]' = A0⁻¹ [a1r , a2r , …, amr]'   and   [b̄1 , b̄2 , …, b̄m]' = A0⁻¹ [b1 , b2 , …, bm]' ,

which give the values of āir and b̄i .

(iii) The new basis is obtained by replacing xp by xr. The matrix A0 for this new basis is
obtained from the old A0 by replacing in the latter the column Pp by Pr. The old matrix is
denoted by A0p and the new one by A0r ; A0p has the columns of the old basic variables,
while in A0r the p-th column (a1p, …, amp)' is replaced by (a1r, …, amr)'.

Since A0p⁻¹ A0p = Im , the product A0p⁻¹ A0r is the identity matrix with its p-th column
replaced by

(α1p , α2p , …, αpp , …, αmp)' = A0p⁻¹ (a1r , a2r , …, amr)' = (ā1r , ā2r , …, āmr)' .

Let Ep be the identity matrix with its p-th column replaced by

(−ā1r/āpr , −ā2r/āpr , …, 1/āpr , …, −āmr/āpr)' .

Then Ep (A0p⁻¹ A0r) = I , and hence A0r⁻¹ = Ep A0p⁻¹ .

Knowing the āir , Ep is computed, and this gives A0r⁻¹ , the inverse of the matrix of the new
basis.
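
A small sketch (NumPy, with made-up illustrative numbers, not taken from the text) of this product-form update of the basis inverse:

import numpy as np

A0p = np.array([[2., 1., 0.],        # hypothetical 3 x 3 basis matrix
                [1., 3., 1.],
                [0., 1., 2.]])
a_r = np.array([1., 2., 1.])         # column of the entering variable x_r
p = 1                                # position (0-based) of the leaving variable x_p

A0p_inv = np.linalg.inv(A0p)
abar_r = A0p_inv @ a_r               # updated column (ā_1r, ..., ā_mr)'

m = len(a_r)
Ep = np.eye(m)                       # identity with the p-th column replaced as in the text
Ep[:, p] = -abar_r / abar_r[p]
Ep[p, p] = 1.0 / abar_r[p]

A0r_inv = Ep @ A0p_inv               # product-form update of the basis inverse

A0r = A0p.copy()                     # cross-check against inverting the new basis directly
A0r[:, p] = a_r
print(np.allclose(A0r_inv, np.linalg.inv(A0r)))   # True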

2.9 Duality in LP Problems

To every LPP there corresponds another LPP called its dual. The original problem is
called the primal.
Let a general LPP be stated as

Minimize f(X) = CX
Subject to AX ≥ B
X ≥ 0

where A = [aij] is the m × n matrix

A = [ a11 a12 … a1n ; a21 a22 … a2n ; … ; am1 am2 … amn ] ,

C = [c1 c2 … cn] , X = [x1 x2 … xn]' and B = [b1 b2 … bm]' .

When we call the above problem the primal LP Problem, its dual is defined as the following
LP Problem:

Maximise φ(Y) = B'Y
Subject to A'Y ≤ C'
Y ≥ 0

where Y = [y1, …, ym]' is a column m-vector.

The correspondence between the primal in the standard form and its dual:

Primal                                              Dual
n variables                                         n constraints
m constraints                                       m variables
cost coefficients cj , j = 1, 2, …, n               constraint constants
constraint constants bi , i = 1, 2, …, m            cost coefficients
variables: xj ≥ 0 , j = 1, …, n                     yi ≥ 0 , i = 1, …, m
constraints: Σ_{j=1}^{n} aij xj ≥ bi                Σ_{i=1}^{m} aij yi ≤ cj
objective: Minimise Σ_{j=1}^{n} cj xj               Maximise Σ_{i=1}^{m} bi yi

Let a primal LP Problem be stated as:

Minimize f = x1 – 3x2 – 2x3

Subject to 2x1 – 4x2 ≥ 12
3x1 – x2 + 2x3 ≤ 7
-4x1 + 3x2 + 8x3 = 10
x1 ≥ 0 , x2 ≥ 0 , x3 unrestricted.

To write the given primal problem in the standard form, we write the unrestricted variable x3
as x3 = x4 – x5 , where x4, x5 ≥ 0. We multiply both sides of 3x1 – x2 + 2x3 ≤ 7 by –1 to
reverse the inequality, and express the equation -4x1 + 3x2 + 8x3 = 10 as a combination of the
two inequalities

-4x1 + 3x2 + 8x3 ≥ 10 and -4x1 + 3x2 + 8x3 ≤ 10 ,

i.e., -4x1 + 3x2 + 8x3 ≥ 10 and 4x1 – 3x2 – 8x3 ≥ -10.

Finally the given primal LPP can be stated as,

Minimize f = x1 – 3x2 – 2(x4 – x5)

Subject to 2x1 – 4x2 ≥ 12
-3x1 + x2 – 2(x4 – x5) ≥ -7
4x1 – 3x2 – 8(x4 – x5) ≥ -10
-4x1 + 3x2 + 8(x4 – x5) ≥ 10
x1, x2, x4, x5 ≥ 0

i.e., Minimize f = x1 – 3x2 – 2x4 + 2x5

Subject to 2x1 – 4x2 ≥ 12
-3x1 + x2 – 2x4 + 2x5 ≥ -7
4x1 – 3x2 – 8x4 + 8x5 ≥ -10
-4x1 + 3x2 + 8x4 – 8x5 ≥ 10
x1, x2, x4, x5 ≥ 0

which is in the form

Minimize f = CX
Subject to AX ≥ B
X ≥ 0

where C = [1 -3 -2 2] ,

A = [  2  -4   0   0
      -3   1  -2   2
       4  -3  -8   8
      -4   3   8  -8 ]

and B = [12 -7 -10 10]' .

so the dual LP Problem is

Maximize φ = B'Y
Subject to A'Y ≤ C'
Y ≥ 0 , where Y = [y1 , y2 , y3 , y4]' ,

which can be stated as:

Maximize φ = 12y1 – 7y2 – 10y3 + 10y4
Subject to 2y1 – 3y2 + 4y3 – 4y4 ≤ 1
-4y1 + y2 – 3y3 + 3y4 ≤ -3
-2y2 – 8y3 + 8y4 ≤ -2
2y2 + 8y3 – 8y4 ≤ 2
y1 , y2 , y3 , y4 ≥ 0

Here we can put y5 = y3 – y4 , which is unrestricted in sign. Then the problem becomes

Maximize φ = 12y1 – 7y2 – 10y5

Subject to 2y1 – 3y2 + 4y5 ≤ 1
-4y1 + y2 – 3y5 ≤ -3
2y2 + 8y5 = 2
y1 , y2 ≥ 0 and y5 unrestricted.

Duality Theorems

Theorem:

The dual of a dual is primal.

Proof:-

For the primal LP Problem

Minimize f = CX
Subject to AX ≥ B
X ≥ 0

the dual LP Problem is

Maximize φ(Y) = B'Y
Subject to A'Y ≤ C'
Y ≥ 0

which can be written as,

Minimize -φ(Y) = -B'Y
Subject to -A'Y ≥ -C'
Y ≥ 0

Then the dual LP Problem of this problem can be written as,

Maximize ψ = -(C')'X
Subject to -(A')'X ≤ -(B')'
X ≥ 0

which is,

Maximize ψ = -CX
Subject to -AX ≤ -B
X ≥ 0

That is,

Minimize f = CX
Subject to AX ≥ B
X ≥ 0

which is the primal problem.
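
A compact sketch (NumPy; the data are those of the worked standard-form example above) of this involution: writing every LPP in the canonical form "minimize CX subject to AX ≥ B, X ≥ 0", its dual, rewritten as a minimization in the same canonical form, has data (−B, −A', −C), and applying the construction twice returns the original data.

import numpy as np

def dual_canonical(c, A, b):
    """Dual of (min c.x : A x >= b, x >= 0), itself written in canonical min form."""
    return -np.asarray(b), -np.asarray(A).T, -np.asarray(c)

c = np.array([1., -3., -2., 2.])
A = np.array([[ 2., -4.,  0.,  0.],
              [-3.,  1., -2.,  2.],
              [ 4., -3., -8.,  8.],
              [-4.,  3.,  8., -8.]])
b = np.array([12., -7., -10., 10.])

c2, A2, b2 = dual_canonical(*dual_canonical(c, A, b))
print(np.allclose(c2, c), np.allclose(A2, A), np.allclose(b2, b))   # True True True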

Theorem:
The value of the objective function f(X) for any feasible solution of the primal is not
less than the value of the objective function  (Y) for any feasible solution of the dual.
Proof:-

Consider the primal problem

Minimize f(X) = CX

Subject to AX  B

X 0

After introducing surplus variables, the problem becomes

Minimize f(X) = c1x1 + c2 x2 + … + cnxn

Subject to a11x1 + a12x2 + … + a1nxn – xn+1 = b1


a21x1 + a22x2 + … + a2nxn – xn+2 = b2
……………………………………………………………

……………………………………………………………

am1x1 + am2x2 + … + amnxn – xn+m = bm


x1, x2, … xn + m  0
In the dual of the problem,
Maximize  = B'Y
Subject to A'Y  C'
Y 0
After introducing slack variables, it becomes,
Maximize  (Y) = b1y1 + b2y2 + … + bmym
subject to a11y1 + a21y2 + … + am1ym + ym+1 = c1
a12y1 + a22y2 + … + am2ym + ym+2 = c2
………………………………………………………………….

…………………………………………………………………..

a1ny1 + a2ny2 + … + amnym + ym+n = cn and


y1, y2, … ym , ym+1 … y m+n  0
Let x1, x2, … xn+m and y1, y2, … , ym+n be any feasible solution of the primal and the dual
respectively.

Multiply the primal constraints by y1, y2, …, ym respectively, and the dual constraints by
x1, x2, …, xn respectively, and add. This gives the two equations

(a11x1y1 + a12x2y1 + … + a1nxny1 – xn+1y1) + (a21x1y2 + a22x2y2 + … + a2nxny2 – xn+2y2) +
… + (am1x1ym + am2x2ym + … + amnxnym – xn+mym)
= b1y1 + b2y2 + … + bmym = φ(Y)        (1)

and

(a11y1x1 + a21y2x1 + … + am1ymx1 + ym+1x1) + (a12y1x2 + a22y2x2 + … + am2ymx2 + ym+2x2) +
… + (a1ny1xn + a2ny2xn + … + amnymxn + ym+nxn)
= c1x1 + c2x2 + … + cnxn = f(X)        (2)
Subtracting (1) from (2),

x1ym+1 + x2ym+2 + … + xnym+n + y1xn+1 + y2xn+2 + … + ymxn+m = f(X) − φ(Y) .

Since (x1, x2, …, xn+m) and (y1, y2, …, ym+n) are feasible solutions, xi ≥ 0 and yi ≥ 0 ,
i = 1, 2, …, m+n, so the left-hand side is ≥ 0.

⇒ f(X) − φ(Y) ≥ 0 , i.e., f(X) ≥ φ(Y).

Therefore the inequality Min f(X) ≥ Max φ(Y) is also satisfied.

Theorem: (Fundamental theorem of L P P)

The optimum value of f(X) of the primal, if it exists, is equal to the optimum value of
 (Y) of the dual.

Proof:

In the primal problem,

Minimize f(X) = CX
Subject to AX  B
X 0

After introducing the surplus variables, the constraints become

Σ_{j=1}^{n} aij xj − xn+i = bi , i = 1, 2, …, m.

Let (x1, x2, … xn, xn+1, … xn+m ) be the optimal solution of the problem. Since it has
to be basic feasible solution, and there are only m constraint equations, at least n of the
numbers in the set of optimal solution are zero.

Let π1, π2, …, πm be the simplex multipliers for this solution.

Multiplying the constraint equations by π1, π2, …, πm respectively and adding them to the
objective function, we get

f(X) + Σ_{i=1}^{m} bi πi = Σ_{j=1}^{n} (Cj + Σ_{i=1}^{m} aij πi) xj − Σ_{i=1}^{m} πi xn+i        (1)

Since π1, π2, …, πm are the simplex multipliers corresponding to the optimal solution,

Min f(X) = −Σ_{i=1}^{m} bi πi

and all of the cost coefficients in (1) are non-negative:

(Cj + Σ_{i=1}^{m} aij πi) ≥ 0 , j = 1, 2, …, n   and   −πi ≥ 0 , i = 1, 2, …, m

⇒ −Σ_{i=1}^{m} aij πi ≤ Cj and −πi ≥ 0        (2)

(2) implies that (−π1, −π2, …, −πm) is a feasible solution of the dual LP Problem

Maximize φ(Y) = B'Y
Subject to A'Y ≤ C'
Y ≥ 0

Corresponding to this solution (−π1, −π2, …, −πm), the value of

φ(Y) = −Σ_{i=1}^{m} bi πi = Min f(X)

⇒ Min f(X) = φ(Y)

But we have Min f(X) ≥ Max φ(Y), so the equality Min f(X) = φ(Y) can hold only when
−Σ_{i=1}^{m} bi πi is the maximum value of φ(Y).

⇒ Min f(X) = Max φ(Y)

From the proof of the above theorem, we get,

The negatives of the simplex multipliers for the optimal solution of the primal are the
values of the variables for the optimal solution of the dual; similarly, the simplex
multipliers for the optimal solution of the dual give the values of the variables for the
optimal solution of the primal.

Theorem:

If the primal problem is feasible, then it has an unbounded optimum if and only if the
dual has no feasible solution, and vice versa.

Proof:-

Let the primal have an unbounded optimum. This means f(X) has no lower bound, i.e., for
any number we consider, we can find a feasible solution of the primal whose objective value
is less than that number. If possible, let the dual have a feasible solution; then the value of
the dual objective function φ corresponding to this feasible solution is some number. But we
have shown that, for any feasible solution of the primal, the objective function f(X) is greater
than or equal to the objective function φ(Y) of the dual for any of its feasible solutions, so
f(X) would be bounded below by this number.

Hence an unbounded optimum for the primal and the existence of a feasible solution for the
dual are contradictory. So the dual has no feasible solution when the primal has an
unbounded optimum.

Conversely, let the dual be infeasible while the primal is feasible; we have to prove that the
primal has an unbounded minimum.

Suppose f(X) had a bounded minimum over feasible X. By the previous theorem,
min f(X) = max φ(Y) over feasible values of Y. Thus, when a bounded minimum exists for
the primal, the dual should also have a feasible solution, which contradicts the assumption
that the dual is infeasible.

Hence the primal has an unbounded minimum.

Theorem: (Complementary Slackness conditions)

If in the optimal solutions of the primal and the dual

(i) a primal variable xj is positive, then the corresponding dual slack variable ym+j is zero;
and

(ii) if a primal slack xn+i is positive, then the corresponding dual variable yi is zero and
vice versa

Proof:-

Let (x1, x2, … xm, xm+1, … xn+m) and (y1, y2, … ym, … ym+n) are the set of feasible
solution of the primal and the dual respectively.

Then we have

x1ym+1 + x2 ym+2 + … + xnym+n + y1xn+1+ y2xn+2 + … +ymxn+m = f(X) -  (Y)


Again on optimal solution of Dual and Primal,
we have min f(X) = max  (Y)
Then if the given feasible solutions are optimal also, then,
x1ym+1 + x2 ym+2 + … + xnym+n + y1xn+1+ y2xn+2 + … +ymxn+m = 0 (1)
Since an optimal solution is feasible all xj  0, all yi  0. Hence to satisfy the equation (1),
each term on the LHS of (1) separately should be zero.

It follows that in a term like xjym+j , if xj > 0 then ym+j = 0 , and if ym+j > 0 , then xj = 0 Also
in term like yi xn+i , if xn+i > 0 then yi = 0 and if yi > 0 , then xn+i = 0

The complementary slackness condition can play well in solving some L P Problems.
For example;

Consider the problem,

Maximize f = 3x1 + 2x2 + x3 + 4x4


Subject to 2x1 + 2x2 + x3 + 3x4 ≤ 20
3x1 + x2 + 2x3 + 2x4 ≤ 20
x1, x2, x3, x4 ≥ 0

The dual LP Problem is

Minimize φ = 20y1 + 20y2

Subject to 2y1 + 3y2 ≥ 3
2y1 + y2 ≥ 2
y1 + 2y2 ≥ 1
3y1 + 2y2 ≥ 4
y1 , y2 ≥ 0

The dual problem contains two variables and hence by solving graphically the solutions of the
problem obtained as

Min  = 28 when y1 = 1.2 , y2 = 0.2

Introducing slack variables x5, x6 to the primal constraints and the surplus variables y3, y4, y5
and y6 to dual constraints we can get the following equations

2x1 + 2x2 + x3 + 3x4 + x5 = 20


3x1 + 1x2 + 2x3 + 2x4 + x6 = 20

and

2y1 + 3y2 – y3 = 3
2y1 + y2 – y4 = 2
y1 + 2y2 – y5 = 1
3y1 + 2y2 – y6 = 4

Since in the optimal solution of the dual, y1 = 1.2 and y2 = 0.2 then by using the constraint
equations of the dual LP Problem
We get, y3 = y6 = 0 , y4 > 0 , y5 > 0
By complementary slackness condition;
Since x1, x2, x3, x4, x5, x6 are primal variables and y1, y2, y3, y4, y5 and y6 are dual variables,
For optimal values of the variables, by complementary slackness property we have
x1y3 + x2y4 + x3y5 + x4y6 + y1x5 + y2x6 = 0
Therefore, since y1, y2, y4 and y5 of dual optimal are nonzero, the variables x2, x3, x5 and x6 of
primal optimal should be zero.
Then the primal constraint equations for the optimal solution become

2x1 + 3x4 = 20
and 3x1 + 2x4 = 20

which give x1 = 4 , x4 = 4.

Therefore the optimal solution of the primal is x1 = 4 , x4 = 4 , x2 = x3 = 0 , and
Maximum f(X) = 28.
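
A numerical cross-check of this example (a sketch assuming SciPy; linprog minimizes, so the primal maximum is obtained by negating the objective):

import numpy as np
from scipy.optimize import linprog

A = np.array([[2., 2., 1., 3.],
              [3., 1., 2., 2.]])
b = np.array([20., 20.])
c = np.array([3., 2., 1., 4.])

primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 4)    # max c.x
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 2)   # min b.y with A'y >= c

x, y = primal.x, dual.x
print("primal optimum:", -primal.fun, "at x =", x)   # expected 28 at (4, 0, 0, 4)
print("dual optimum:  ", dual.fun, "at y =", y)      # expected 28 at (1.2, 0.2)

dual_slack = A.T @ y - c       # slacks of the dual constraints
primal_slack = b - A @ x       # slacks of the primal constraints
print(np.allclose(x * dual_slack, 0), np.allclose(y * primal_slack, 0))   # both True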

Applications of duality

i) In an LP Problem numerical work increases more with the number of constraints than
with the number of variables. Since the two get interchanged in the dual problem, then
it is generally economical to solve the dual.

ii) It is possible under certain conditions to avoid the introduction of artificial variables to
obtain an initial basic feasible solution.

2.10 Dual Simplex Method

Consider the primal

Minimize f(X) = CX
Subject to AX ≥ B
X ≥ 0

then the dual is

Maximize φ(Y) = B'Y
Subject to A'Y ≤ C'
Y ≥ 0.

In the primal, if all Cj ≥ 0 and all bi ≤ 0, then the basis consisting of the basic variables xn+1,
xn+2, …, xn+m (the surplus variables) is feasible and also optimal. Similarly, the corresponding
basis of the dual is also feasible and optimal. If some or all of the bi's are positive and all
Cj ≥ 0, then the basis xn+1, xn+2, …, xn+m is not feasible for the primal, but the basis ym+1,
ym+2, …, ym+n is feasible for the dual. We call this situation "primal infeasible and dual
feasible". We can move through basic feasible solutions of the dual to get its optimal
solution. Then from the optimal solution of the dual, the optimal solution of the primal can
be attained. The modified method for solving an LP problem with a non-feasible basic
solution, but with non-negative cost coefficients, is called the dual simplex method.

Dual Simplex Method

Illustration:

Minimize f = 3x1 + 5x2 + 2x3


Subject to -x1 + 2x2 + 2x3 ≥ 3
x1 + 2x2 + x3 ≥ 2
-2x1 – x2 + 2x3 ≥ -4
x1 , x2 , x3 ≥ 0

The constraints are changed to

x1 - 2x2 - 2x3 ≤ -3
-x1 - 2x2 - x3 ≤ -2
2x1 + x2 - 2x3 ≤ 4

Adding slack variables x4, x5, and x6 to constraints,

 x1 - 2x2 - 2x3 + x4 = -3
-x1 - 2x2 - x3 + x5 = -2
2x1 + x2 - 2x3 + x6 = 4

 The initial solution is (0, 0, 0, -3, -2, 4).

It is not feasible, but all Cj‟s are positive.

The simplex table corresponding to this solution is

Cj 3 5 2 0 0 0
CB xB yB P1 P2 P3 P4 P5 P6
0 -3 x4 1 -2 -2 1 0 0
0 -2 x5 -1 -2 -1 0 1 0
0 4 x6 2 1 -2 0 0 1
0 3 5 2 0 0 0

To attain the feasibility to the solution, we proceed with the dual simplex method.

Here xB for x4 is the most negative, so x4 leaves the basis. To determine the incoming vector
we find the ratios of the Cj – Zj values to the corresponding entries aij of the x4 row, for those
entries which are negative. In this row a12 and a13 are negative, so the ratios are
5/(-2) = -5/2 and 2/(-2) = -1. The maximum (least negative) of these is -1, which corresponds
to the variable x3.

Then x3 enters the basis.

As in the simplex method, the second simplex table is

Cj                      3      5      2      0      0      0
CB   yB    xB           P1     P2     P3     P4     P5     P6
2    x3    x3 = 3/2     -1/2   1      1      -1/2   0      0
0    x5    x5 = -1/2    -3/2   -1     0      -1/2   1      0
0    x6    x6 = 7       1      3      0      -1     0      1
f = 3                   4      3      0      1      0      0

Here x5 is negative. The ratios of the Cj – Zj values to the negative aij entries of the x5 row
are 4/(-3/2) = -8/3, 3/(-1) = -3 and 1/(-1/2) = -2. The maximum (least negative) of these is
-2, which corresponds to x4. Thus x4 enters the basis.

The new iteration table is,

Cj 3 5 2 0 0 0
CB yB xB P1 P2 P3 P4 P5 P6
2 X3 x3 = 2 1 2 1 0 -1 0
0 X4 x4= 1 3 2 0 1 -2 0
0 X6 x6 = 8 4 5 0 0 -2 1
f = +4 1 1 0 0 2 0

Here the solution is feasible and optimal also.


So minimum f = + 4 , where x1 = x2 = 0 ; x3 = 2
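
A quick cross-check of this illustration (a sketch assuming SciPy), using the ≤ form of the constraints written above:

from scipy.optimize import linprog

c = [3, 5, 2]
A_ub = [[ 1, -2, -2],
        [-1, -2, -1],
        [ 2,  1, -2]]
b_ub = [-3, -2, 4]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(res.fun, res.x)   # expected: 4.0 at approximately (0, 0, 2)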

2.11 Sensitivity Analysis

In many practical situations we want to find not only an optimal solution, but also to
determine what happens to this optimal solution when certain changes are made in the system.
We can determine the effects of these changes without solving a new problem from the very
beginning. Sensitivity analysis is the name given to such investigation.

i) Effect of change in the parameters bi

Let X0 = [x1, x2, …, xm, 0, 0, …, 0]' , X0 ≥ 0 , be the optimal basic solution of the original
problem.

Then A0X0 = B , or X0 = A0⁻¹B , A0 being the m × m basis matrix.

Let B change to B + ΔB , i.e., (b1, b2, …, bm) to (b1 + Δb1, b2 + Δb2, …, bm + Δbm).

Then X0 + ΔX0 = A0⁻¹(B + ΔB)

gives the new values of the basic variables, and they remain basic variables if
A0⁻¹(B + ΔB) ≥ 0.

They would continue to be optimal also if the relative cost coefficients C̄j , j = 1, 2, …, n , in
the expression of the objective function in terms of the non-basic variables are ≥ 0. But the
change in B does not affect the C̄j values.

∴ The original basis continues to be an optimal basis for the changed problem, and the
optimum value of the objective function is

f(X) = −Σ_{i} (bi + Δbi) πi .

If ΔB is such that A0⁻¹(B + ΔB) is not ≥ 0, then the solution is not feasible. Hence we use the
artificial variable technique or the dual simplex method to solve the problem.
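
A small NumPy sketch of case (i), reusing the optimal basis of the two-phase example earlier in this chapter; the change ΔB = (0, 0, 0.3, 0) is chosen purely for illustration:

import numpy as np

A0 = np.array([[1., 0., 2., 1.],    # columns of x3, x4, x1, x2
               [0., 1., 1., 2.],
               [0., 0., 1., 1.],
               [0., 0., 1., 4.]])
C0 = np.array([0., 0., 4., 5.])     # costs of the basic variables
B  = np.array([6., 5., 1., 2.])
dB = np.array([0., 0., 0.3, 0.])

pi = np.linalg.solve(A0.T, -C0)       # simplex multipliers of the optimal basis
xB_new = np.linalg.solve(A0, B + dB)  # A0^{-1}(B + dB)
print(xB_new)                          # all components still >= 0, so the basis stays optimal
print(-(B + dB) @ pi)                  # new optimal value, about 5.433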

ii) Changes in Cj

Changes in the cost coefficients Cj to Ĉj produce a change in the πi values, say to π̂i. The
new simplex multipliers connected with the basis that gave the optimal solution of the original
problem satisfy

Σ_{i=1}^{m} aij π̂i = −Ĉj , j = 1, 2, …, m.

The relative cost coefficients of the non-basic variables are then
Ĉj' = Ĉj + Σ_{i=1}^{m} aij π̂i , j = m+1, …, n.

These may not all be non-negative. For some j, Ĉj' may be negative. This would mean that
the basic feasible solution which was optimal for the Cj is not optimal for the Ĉj. From this
point onwards, further iterations may be done with the new values Ĉj' to obtain the new
optimal solution.

If, however, the Ĉj are such that all the Ĉj' are non-negative, then the original optimal basis
still remains optimal, the values of the optimal basic variables also remain unchanged, and
optimum f(X) = −Σ_{i} bi π̂i .

iii) Introduction of new variables

Let the new variables be xn+k , k = 1, 2, 3, …, and let their coefficients be ai,n+k , i = 1, 2, …, m,
in the constraints and Cn+k in the cost function. Since the number of constraints remains the
same, the number of basic variables remains the same, and the original optimal solution along
with zero values of the new variables gives a basic feasible solution of the new problem. It
remains optimal if the newly introduced cost coefficients Cn+k are such that the relative cost
coefficients corresponding to them are non-negative, i.e.,

Cn+k' = Cn+k + Σ_{i=1}^{m} ai,n+k πi ≥ 0 for all k.

iv) Introduction of new constraints

If K is the set of feasible solutions of the original problem and K' the set of feasible
solutions of the modified problem obtained by introducing new constraints, then K' ⊆ K. If the
original optimal solution X0 satisfies the new constraints, then X0 is in K', and since f(X0) is
minimum in K, it is also minimum in K'. In such a case there is really no new problem, and
the original optimal solution continues to be optimal in the modified situation.

If some or all of the new constraints are violated by the original optimal solution, then the
problem has to be solved afresh taking the new constraints into account, using an appropriate
method.

v) Changes in aij

If the changes are in aik , where xk is a non-basic variable of the optimal solution, then the
modified value Ck' of Ck , namely Ck' = Ck + Σ_{i=1}^{m} aik πi , is obtained by using the
changed values of aik.

If Ck' ≥ 0, then the original optimal solution continues to be optimal. If not, further iterations
with the new values of Ck' and aik may be done.

If xk is a basic variable in the original optimal solution, then the procedure may be as
follows: introduce a new variable xn+1 into the system with coefficients ai,n+1 which are the
new values of aik , and put Cn+1 = Ck . In this new problem treat the original variable xk as an
artificial variable and solve a Phase I problem to eliminate it; then proceed with Phase II to
get the new optimum.

2.12 Parametric Linear Programming Problem:

In sensitivity analysis we consider the effect of changes in the values of the input data
like, Cj, aij, bj of an LP Problem on its original optimal solution. The changes considered were
discrete. Now consider the coefficients in the problem vary continuously as a function of
some parameter. The analysis of the effect of this parametric variation is called parametric
linear programming.

Consider an LPP in which the cost coefficients vary with a parameter θ, so that the objective
function is of the form f(θ) = (C + θC*)X,

where C is the original cost vector, C* the cost variation vector and θ a parameter which can
have any real value. First solve the LPP for θ = 0 and let (x1, x2, …, xm)' = X0 be the optimal
basis. Then the relative cost coefficients of the non-basic variables,
C̄j = Cj − Σ_{i=1}^{m} Ci aij , j = m+1, …, n, are all non-negative.

Now let θ be non-zero. Then, corresponding to the same basis, the relative cost coefficients of
the non-basic variables are

C̄j(θ) = (Cj + θCj*) − Σ_{i=1}^{m} (Ci + θCi*) aij
      = (Cj − Σ_{i=1}^{m} Ci aij) + θ(Cj* − Σ_{i=1}^{m} Ci* aij)
      = C̄j + θC̄j* ,  j = m+1, …, n.

Here the optimal basis X0, obtained for θ = 0, remains optimal as θ increases from zero
through positive values as long as the C̄j(θ), j = m+1, …, n, remain non-negative. Let θ1 be
the value of θ at which at least one of the C̄j(θ), say C̄K(θ), becomes negative. Then the
original optimal basis X0 loses its optimality. To obtain an optimal basis for θ > θ1, the
variable xK will have to be brought into the basis. This can be done by the usual simplex
procedure. The requirement C̄j(θ) ≥ 0 for all the non-basic variables can again be used to
determine the next value θ2 of θ such that the new basis is optimal for θ1 < θ ≤ θ2 and
non-optimal for θ > θ2. The process can be continued till a value of θ is reached beyond
which the optimal basis remains unchanged or the optimal solution does not exist. The same
procedure can be adopted to analyse the parametric behaviour of the problem for negative
values of θ, starting from θ = 0.

The parametric LP problem can be solved for the parametric variations in bj, the parametric
variations in aij, and for the simultaneous parametric variations of A, B and C.
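
A small NumPy sketch of this idea for variation of the cost vector, reusing the earlier two-phase example and assuming, purely for illustration, the cost-variation vector C* = (1, 0, 0, 0, 0, 0):

import numpy as np

A = np.array([[2., 1., 1., 0.,  0.,  0.],
              [1., 2., 0., 1.,  0.,  0.],
              [1., 1., 0., 0., -1.,  0.],
              [1., 4., 0., 0.,  0., -1.]])
C      = np.array([4., 5., 0., 0., 0., 0.])
C_star = np.array([1., 0., 0., 0., 0., 0.])
basis, nonbasis = [2, 3, 0, 1], [4, 5]           # x3, x4, x1, x2 are basic

canon = np.linalg.inv(A[:, basis]) @ A[:, nonbasis]     # canonical columns of x5, x6
cbar      = C[nonbasis]      - C[basis]      @ canon    # reduced costs at theta = 0
cbar_star = C_star[nonbasis] - C_star[basis] @ canon    # their rate of change in theta

# cbar + theta * cbar_star >= 0 gives the interval of theta for which the basis stays optimal
lo = max((-cbar[k] / cbar_star[k] for k in range(2) if cbar_star[k] > 0), default=-np.inf)
hi = min((-cbar[k] / cbar_star[k] for k in range(2) if cbar_star[k] < 0), default= np.inf)
print(lo, hi)    # about (-2.75, 1.0): optimal for -11/4 <= theta <= 1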

*******************

CHAPTER 3
TRANSPORTATION AND
ASSIGNMENT PROBLEMS

3.1 Transportation Problem

If there are m origins (sources) and n destinations (sinks) and we want to transport the
products from the sources to destinations with minimized cost, then the cost matrix is as given
below.

Destinations

             1     2     3     4    …    n     Availability
        1    C11   C12   C13   C14  …    C1n    a1
Source  2    C21   C22   C23   C24  …    C2n    a2
        ⋮
        m    Cm1   Cm2   Cm3   Cm4  …    Cmn    am
Requirement  b1    b2    b3    b4   …    bn     Σ ai = Σ bj  (for a balanced transportation problem)

where Cij = cost to transport one unit of product from ith source to jth destination.

The problem is to transport require units of products from m sources to the destinations by
minimizing the total cost of transportation.

3.2 Mathematical formulation of Transportation Problem (T. P.)

Mathematically T. P. can be stated as follows:

Minimize Z = Σ_{i=1}^{m} Σ_{j=1}^{n} Cij xij

where xij is the number of units transported from the ith source to the jth destination,

Subject to Σ_{j=1}^{n} xij = ai , i = 1, 2, …, m        (1)

Σ_{i=1}^{m} xij = bj , j = 1, 2, …, n        (2)

Σ_{i=1}^{m} ai = Σ_{j=1}^{n} bj        (3)

and xij ≥ 0 for all i = 1, 2, …, m ; j = 1, 2, …, n.

Here we have m + n equations from (1) and (2). But under (3), these m + n equations are not
all linearly independent; on the other hand, if (3) does not hold, the m + n equations are
inconsistent. On deleting any one equation from the set of m + n equations, the remaining
m + n – 1 equations are linearly independent. Hence the number of basic variables in these
equations is m + n – 1; that is, a basic solution will consist of at most m + n – 1 variables
having non-zero values.

Theorem:

The transportation problem has a triangular basis.

Proof:-

By triangular basis we mean that, there is a constraint equation in which only one
basic variable occurs, in another equation there is one more basic variable with the total
number of basic variables being not more than two, in third equation another basic variable
occurs with the total now being not more than three and so on.

There cannot be an equation in which no basic variable occurs, because then the
equation cannot be satisfied for ai  0 , bi  0.

If possible let every equation have at least two basic variables. When each row
equation having at least two basic variables, the total number of basic variables will be at least
2m. Similarly when each column equation having at least two basic variables, the total
number of basic variables will be at least 2n. If N denotes the total number of basic variables,
N ≥ 2m and N ≥ 2n.

Then,

(i) if m > n, N ≥ 2m = m + m > m + n ;
(ii) if m < n, N ≥ 2n = n + n > m + n ;
(iii) if m = n, N ≥ 2m = m + m = m + n.

In all cases N ≥ m + n, which contradicts the fact that the total number of basic variables is
N = m + n – 1.

So the assumption that there are at least two basic variables in each row and column is wrong.

Hence there exist at least one equation row or column, in which there is only one basic
variable.

Let the rth row equation be such and equation and let xrc , the variable in the rth row and
the cth column, be the only basic variable in it. Then xrc = ar. Eliminate this equation from the
system by deleting the rth row equation and putting xrc = ar in the cth column equation.
The rth row then stands cancelled, and bc is replaced by bc' = bc – ar.

The resulting system consists of m – 1 row equations and n column equations of which
m + n – 2 are linearly independent. So the number of basic variables in this system is m + n –
2. Repeating the earlier argument we conclude that there is an equation in this reduced system
which has only one basic variable. If this equation happens to be the cth column equation, in
the original system the cth column equation now contains two basic variables. So we conclude
that the original system has an equation which has at most two basic variables. Continuing
like this we next prove that there is an equation with at most three basic variables and so on.
Therefore the T. P. has a triangular basis.

Finding a basic feasible solution

Since T. P. has a triangular basis, let us arbitrarily take x rc as a basic variable which
occur alone in a row or column equation. Then we are assigning the minimum of ar or bc as
the value of this variable. Then one row or column get deleted. If the row is deleted, then bc is
adjusted as bc' = bc – ar, otherwise the value of ar is adjusted as ar' = ar – bc. After eliminating
the variable xrc by assigning a value, we proceed as done first with the reduced transportation
matrix, where one row or column less. The procedure is continued till m + n - 1 rows and
columns are crossed out and an equal number of variables evaluated. The solution so obtained
is a basic feasible solution. It is important not to delete more than one row or column at each
stage after choosing a basic variable. This may happen at any stage of the procedure when ar =
bc. Then we may delete only rth row or cth column, not both. If we choose rth row to delete,
then bc is replaced by bc' = bc – ar = 0, and cth column has still to be satisfied by choosing some
other variable in the cth column to be included in the basis. The value of the variable would be
zero.
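
A concrete way to make the arbitrary choices above is the north-west corner rule; the following sketch (not from the text) implements it and reproduces the initial solution used in the worked example below:

def north_west_corner(supply, demand):
    # Repeatedly take the top-left remaining cell, assign the minimum of the
    # remaining supply and demand, and cross out one row or column at a time.
    supply, demand = list(supply), list(demand)
    i = j = 0
    allocation = {}
    while i < len(supply) and j < len(demand):
        q = min(supply[i], demand[j])
        allocation[(i, j)] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0 and i < len(supply) - 1:
            i += 1          # row exhausted: move down
        else:
            j += 1          # column exhausted (or last row): move right
    return allocation

alloc = north_west_corner([250, 300, 400], [200, 225, 275, 250])
print(alloc)   # {(0,0):200, (0,1):50, (1,1):175, (1,2):125, (2,2):150, (2,3):250}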

Testing for optimality

For checking the optimality of the basic feasible solution, we express the objective
function in terms of the non-basic variables, and if all the cost coefficients in the new
expression are non-negative we say the solution is optimal. The objective function of a T. P.
is of the form f(X) = c11x11 + c12x12 + c13x13 + … + c1nx1n + … + cmnxmn. Let xij be in the
feasible basis; then xij is to be eliminated from the expression of f(X). Let αi and βj be the
simplex multipliers for the ith row equation and the jth column equation respectively.

The αi and βj values are selected so as to satisfy

Cij + αi + βj = 0        (1)

for every basic cell (i, j). Corresponding to all of the variables in the feasible basis we form
equations involving the corresponding simplex multipliers as in (1). Solving these equations,
we obtain all the simplex multipliers involved. To start solving the equations, we first put any
one of the simplex multipliers equal to zero.

Using the simplex multipliers αr and βs we calculate the relative cost coefficients of the
non-basic cells as Crs' = αr + βs + Crs , where the (r, s)th cell corresponds to a non-basic
variable.

This is calculated for all non-basic variables. If Crs' is negative for any (r, s), the present
basis is not optimal and the value of f can be improved by bringing the variable xrs into the
basis.

Loop in transportation array

The procedure for changing the basis rests on a notion in the transportation array called the
loop, which is explained as follows. A set of cells L in the transportation array is said to
constitute a loop if in every row and column of the array the number of cells belonging to
the set is either zero or two.

Changing the basis

In a transportation table, let xrs be a non-basic variable whose coefficient, when the objective
function is expressed in terms of the non-basic variables, is crs' with crs' negative. Then, for
an improved solution, it is decided to bring xrs into the basis. Starting from the (r, s)th cell,
we form a loop connecting basic cells. We assign a value to the variable xrs such that one
basic variable connected in the loop is driven to zero and leaves the basis, while the values of
all the other basic variables remain non-negative. With the new set of basic variables we
repeat the test of optimality. If required, the basis is changed again as described. When the
basis contains fewer than m + n – 1 variables, the T. P. is said to be degenerate.

3.3 Unbalanced problem

In a transportation problem, when the condition Σ_{i=1}^{m} ai = Σ_{j=1}^{n} bj fails, the
problem is said to be unbalanced, and the problem becomes infeasible. If Σi ai > Σj bj for a
T. P., then there would be surplus left at the sources after all the demands are met, and if
Σi ai < Σj bj there would be a deficit at the destinations after all the sources have exhausted
their capacities.

If, for a T. P., Σi ai > Σj bj , then the problem is solved by introducing an artificial destination
with demand bn+1 = Σi ai − Σj bj ; since this destination is purely artificial, the costs of
transportation to it are taken as zero and the new m × (n + 1) problem is solved for an
optimal solution.

On the other hand, if the T. P. is unbalanced because Σi ai < Σj bj , then an artificial source
with capacity Σj bj − Σi ai is introduced. The cost of transportation from this artificial source
is taken as zero and the problem is then solved for an optimal solution.
Ex: Solve the transportation problem,

Destination Availability
D E F G
A 11 13 17 14 250
Source B 16 18 14 10 300
C 21 24 13 10 400
Demand 200 225 275 250 950
Solution:

To start finding the initial basic feasible solution, first consider the (1, 1)th cell. Assign
200 to (1, 1)th cell that is the maximum demand for D and cross off column of D, then the
remaining 50 from the source is assigned to E in the cell (1, 2) and cross of row of A,
proceeding like this, the following assignments are obtained as the initial feasible solution.

200 50 x11 = 200


250
11 13 17 14 x12 = 50
175 125 x22 = 175
300
16 18 14 10 x23 = 125
150 250 x33 = 150
400
21 24 13 10 x34 = 250

200 225 275 250

Corresponding to this solution, the cost of transportation is

200 × 11 + 50 × 13 + 175 × 18 + 125 × 14 + 150 × 13 + 250 × 10 = 12,200

Moving towards optimal solution:

Considering the basic cells, the simplex multipliers αi and βj can be found from the following
equations:

α1 + β1 + C11 = 0  ⇒  α1 + β1 + 11 = 0
α1 + β2 + C12 = 0  ⇒  α1 + β2 + 13 = 0
α2 + β2 + C22 = 0  ⇒  α2 + β2 + 18 = 0
α2 + β3 + C23 = 0  ⇒  α2 + β3 + 14 = 0
α3 + β3 + C33 = 0  ⇒  α3 + β3 + 13 = 0
α3 + β4 + C34 = 0  ⇒  α3 + β4 + 10 = 0

Putting α1 = 0 gives β1 = -11 , β2 = -13 , α2 = -5 , β3 = -9 , α3 = -4 , β4 = -6.

The net evaluations for the non-basic cells are entered in brackets in the corresponding cells:

  200      50      (8)      (8)        α1 + β3 + C13 = 8
   11      13       17       14        α1 + β4 + C14 = 8
  (0)     175      125      (-1)       α2 + β1 + C21 = 0
   16      18       14       10        α2 + β4 + C24 = -1
  (6)     (7)      150      250        α3 + β1 + C31 = 6
   21      24       13       10        α3 + β2 + C32 = 7

Since one of the net evaluations for non basic cell is negative, the current solution is
not optimal. Negative net evaluation corresponds to the cell (2, 4).

Then the solution is to be improved by entering x 24 to basis. And to remove one of the
current basic variable also.

Put a value θ on x24 and draw a loop connecting the basic cells (3, 4), (3, 3) and (2, 3):

  200      50
   11      13        17         14
           175     125 − θ       θ
   16      18        14         10
                   150 + θ    250 − θ
   21      24        13         10

Keeping the solution feasible, the maximum value that can be assigned to θ is 125, and then
the solution becomes

x11 = 200 , x12 = 50 , x22 = 175 , x24 = 125 , x33 = 275 , x34 = 125

Again, to check optimality:

α1 + β1 + C11 = 0  ⇒  α1 + β1 + 11 = 0
α1 + β2 + C12 = 0  ⇒  α1 + β2 + 13 = 0
α2 + β2 + C22 = 0  ⇒  α2 + β2 + 18 = 0
α2 + β4 + C24 = 0  ⇒  α2 + β4 + 10 = 0
α3 + β3 + C33 = 0  ⇒  α3 + β3 + 13 = 0
α3 + β4 + C34 = 0  ⇒  α3 + β4 + 10 = 0

Taking α1 = 0 gives β1 = -11 , β2 = -13 , α2 = -5 , β4 = -5 , α3 = -5 , β3 = -8.

The net evaluations of the non-basic cells are now

  200      50      (9)      (9)
   11      13       17       14
  (0)     175      (1)      125
   16      18       14       10
  (5)     (6)      275      125
   21      24       13       10

Since all the net evaluations are non-negative, the current solution is optimal and the cost of
transportation

= 200 × 11 + 50 × 13 + 175 × 18 + 125 × 10 + 275 × 13 + 125 × 10

= 12075
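
A numerical cross-check of this example (a sketch assuming SciPy), solving the transportation problem as an ordinary LP with the row and column equations as equality constraints:

import numpy as np
from scipy.optimize import linprog

cost = np.array([[11, 13, 17, 14],
                 [16, 18, 14, 10],
                 [21, 24, 13, 10]], dtype=float)
supply = [250, 300, 400]
demand = [200, 225, 275, 250]
m, n = cost.shape

A_eq, b_eq = [], []
for i in range(m):                      # availability equations
    row = np.zeros((m, n)); row[i, :] = 1
    A_eq.append(row.ravel()); b_eq.append(supply[i])
for j in range(n):                      # requirement equations
    col = np.zeros((m, n)); col[:, j] = 1
    A_eq.append(col.ravel()); b_eq.append(demand[j])

res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (m * n))
print(res.fun)                          # expected 12075.0
print(res.x.reshape(m, n))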

3.4 Assignment Problem

A transportation problem reduces to a simpler form known as the assignment problem if
n = m and the objective function

f = Σ_{i=1}^{n} Σ_{j=1}^{n} Cij xij

is to be minimized subject to

Σ_{j=1}^{n} xij = 1 , Σ_{i=1}^{n} xij = 1 , xij = 0 or 1 , i = 1, 2, …, n , j = 1, 2, …, n.

Here the number of linearly independent equations is 2n – 1, so a basic feasible solution
contains 2n – 1 variables. The transportation algorithm may be used to solve this problem,
but because of the constraints there would be only one non-zero variable (whose value will
be 1) in each row and each column; hence the total number of non-zero entries would be n,
the remaining (n – 1) basic variables being zero. The problem is therefore degenerate and the
usual method is not very convenient. Therefore a special algorithm to solve the assignment
problem is introduced.

Theorem:

The optimum assignment schedule remains unaltered if we add or subtract a constant


to/from all the elements of the row or column of the assignment cost matrix.

Proof:-

Let the assignment problem have the cost matrix C = [Cij] , i, j = 1, 2, …, n.

Let Cij' = Cij ± ui ± vj , where the ui , vj are constants. Then for the new cost matrix
C' = [Cij'] the objective function is

Z' = Σ_{i=1}^{n} Σ_{j=1}^{n} xij Cij'
   = Σ_{i=1}^{n} Σ_{j=1}^{n} xij (Cij ± ui ± vj)
   = Σ_{i=1}^{n} Σ_{j=1}^{n} xij Cij ± Σ_{i=1}^{n} Σ_{j=1}^{n} xij ui ± Σ_{i=1}^{n} Σ_{j=1}^{n} xij vj
   = Z ± U ± V ,   where U = Σ_{i=1}^{n} ui and V = Σ_{j=1}^{n} vj .

Since U and V are constants, an assignment xij which minimizes Z also minimizes Z'.

If for an assignment problem all Cij ≥ 0, then an assignment schedule xij which satisfies
Σ_{i} Σ_{j} xij Cij = 0 must be optimal.

For an assignment cost matrix C = [Cij], by selecting suitable constants to be added to or
subtracted from the elements of the cost matrix, one can ensure Cij' ≥ 0 and can produce at
least one Cij' = 0 in each row and column, and can then try to make assignments from among
these 0 positions. The assignment schedule will be optimal if there is exactly one assignment
in each row and each column.
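
As a sketch of how such an assignment can be computed in practice (assuming SciPy; the 4 × 4 cost matrix below is hypothetical, not from the text), the Hungarian-type routine linear_sum_assignment returns exactly one cell per row and per column:

import numpy as np
from scipy.optimize import linear_sum_assignment

C = np.array([[ 9, 11, 14, 11],
              [ 6, 15, 13, 13],
              [12, 13,  6,  8],
              [11,  9, 10,  9]])

rows, cols = linear_sum_assignment(C)   # optimal assignment
print(list(zip(rows, cols)))            # one cell in each row and each column
print(C[rows, cols].sum())              # minimum total cost, 32 for this matrix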

3.5 Generalized transportation problem

The transportation problem is written mathematically as

Minimize f = Σ_{i=1}^{m} Σ_{j=1}^{n} Cij xij        (1)

Subject to Σ_{j=1}^{n} xij = ai , i = 1, 2, …, m

Σ_{i=1}^{m} xij = bj , j = 1, 2, …, n

Σ_{i=1}^{m} ai = Σ_{j=1}^{n} bj

and xij ≥ 0 for all i and j.

When we replace the constraints under (1) by the more general linear constraints

Σ_{j=1}^{n} aij xij = ai , i = 1, 2, …, m

Σ_{i=1}^{m} bij xij = bj , j = 1, 2, …, n

keeping everything else the same, the new problem is termed a generalized transportation
problem.

The problem, being an LP problem, can be solved by the simplex method.

3.6 Transportation with transshipment

A transportation problem in which available commodity frequently moves from one


source to another source or destination before reaching its actual destination is called a
transshipment problem.

The chief characteristics of the transshipment problem

1) The number of sources and destinations in the transition problem are m and n
respectively. But in the transshipment problem we have m + n sources and
destinations.

2) Commodities can move not only from sources to destinations, but also from
destination to sources.

3) The basic solution contain 2m + 2n – 1 basic variables, if we omit the variables


appearing in the m + n diagonal cells, we are left with m + n – 1 basic variables.

3.7 Caterer Problem

Consider an article which is used once and then sent for repair or servicing before it can be
used again. On a job a1, a2, …, an number of these articles are required at times T, 2T, …, nT
respectively. The job lasts till nT. The job starts at time T with a1 newly purchased articles. As
time being the requirement can be met partly by repaired articles and partly, if necessary,
through purchase of new ones. The minimum time of repair is rT and maximum is (r + s)T, r
and S being positive integers and r + s < n. The quicker the service, the higher the cost of
repair, but is in any case less than the price of a new article.

Hence the caterer problem is : How to organize purchase and repair of articles so that the job
is completed with minimum cost of the articles.

Let xij be the number of articles received back after repair which were sent for repair at time
iT to be returned at time jT. Let Cij be the cost of this repair per article.

Σ_{i=1}^{n} xij = total number of repaired articles available at time jT.

xij can be positive only if r ≤ j – i ≤ r + s ; when j ≤ i , xij is meaningless. For such
inadmissible cells the cost is taken as +∞, so that these xij never come into the basis, the
problem being one of minimization. Shortage of articles at time jT is met by purchase of new
articles.

xn+1,j denotes the number of new articles purchased to meet the shortage at time jT.

Then aj = Σ_{i=1}^{n} xij + xn+1,j , j = 1, 2, …, n

⇒ aj ≥ Σ_{i=1}^{n} xij , i.e., Σ_{i=1}^{n} xij ≤ aj .

All the articles used at time iT need not be sent for repair, as the job is to last only upto nT and
if they cannot be repaired before that time they may as well be left un repaired. The cost of
leaving an article un repaired may be taken as zero.

Let xi,n+1 = the number of articles used but not sent for repair at time iT.

Then ai = Σ_{j=1}^{n} xij + xi,n+1 , i = 1, 2, …, n

(Σ_{j=1}^{n} xij also denotes the number of articles sent out for repair at time iT)

and we have xij ≥ 0 , xi,n+1 ≥ 0 , xn+1,j ≥ 0.


Then we can formulate the objective function:

Minimize f = Σ_{j=1}^{n} Σ_{i=1}^{n} Cij xij + C Σ_{j=1}^{n} xn+1,j

where C is the price of a new article, subject to the conditions

Σ_{i=1}^{n+1} xij = aj , j = 1, 2, …, n+1

Σ_{j=1}^{n+1} xij = ai , i = 1, 2, …, n+1

xij ≥ 0 .

To put the problem in the standard transportation form, consider the objective function f as

f = Σ_{j=1}^{n+1} Σ_{i=1}^{n+1} Cij xij ,

where an+1 is taken as a sufficiently large number, usually the value Σ ai , the total number of
articles required on all the days.

xn+1,n+1 is interpreted as the number of new articles left without being used and so not
purchased at all, and the cost of this imaginary transaction may be taken as Cn+1,n+1 = 0.

Then the problem becomes a transportation problem:

Minimize f = Σ_{j=1}^{n+1} Σ_{i=1}^{n+1} Cij xij

Subject to Σ_{i=1}^{n+1} xij = aj , j = 1, 2, …, n+1

Σ_{j=1}^{n+1} xij = ai , i = 1, 2, …, n+1

xij ≥ 0 ,

which can be solved by the standard procedure.

*******************

CHAPTER 4
NETWORK ANALYSIS

The study of networks is very important in electrical theory, transportation and
communication systems, etc. A network, in its most generalized and abstract sense, is called a
graph. The literature contains a large volume of results in graph theory and they are skillfully
used to study and analyze networks.

4.1 Preliminary ideas

A Graph G(V, U) or simply G is defined as a set V of elements vj , j = 1, 2, … n ;


which can be represented as points, and a set U of pairs (vj , vk) , vj, vk  V , which can be
represented as arcs joining points of V. The elements of V are called vertices and elements of
U arcs. The elements of U are denoted either by ui, i = 1, 2, … m or by (vj, vk).

If (vj , vk) are ordered pairs, we represent them by directed arcs, that is, arcs carrying
arrow marks on them denoting the direction vj to vk.
( like vj vk or j k ).

A graph with directed arcs is called a directed graph.


The graph is said to be a finite graph when V and U are finite sets.
The following graph is used to explain all the other concepts.

[Figure: a directed graph on the vertices v1, …, v9 with arcs u1, …, u11; this graph is used to
illustrate the concepts that follow.]
An arc (directed or undirected) is said to be incident with a vertex if it joins that vertex to
some other vertex; that is, it connects the two vertices. The directed arc ui = (vj , vk) is said
to be incident from or going from vj and incident to or going to vk ; vj is called the initial
vertex and vk the terminal vertex of the arc (vj , vk).

A subgraph of G(V , U) is defined as a graph G(V1 , U1) with V1  V and U1


containing all those arcs of G which connects the vertices of G1. For example if we take
V1 = {v1 , v2 , v3} and U1 = {u1 , u2, u3} then G1(V1 , U1) is a subgraph of G. A partial graph
of G(V, U) is a graph G2(V, U2) which contains all the vertices of G and some of its arcs U2
 U . (The students may note the difference between a subgraph and a partial graph). In our
example, if we keep all the vertices and erase one or more arcs, say u1 and u2 , we are left with
a partial graph of the original graph.

Let V1 and V2 be two subsets of V such that V1  V2 =  . Let ui = (vj , vk) be an arc
such that vj  V1 and vk  V2. Then ui is said to be incident from (or going from) V1 and
incident to (or going to) V2. Generally it is said to incident with both V1 and V2 and is said to
connect them. For example if V1 = {v2 , v3} and V2 = {v6 , v9}, then u4 connects V1 and V2. It
goes from V1 and V2 and is incident with both.

Let Vk be a subset of V. Then we shall denote by Γ(Vk) the set of arcs of G(V, U)
incident with Vk , by Γ+(Vk) the set of arcs incident to Vk and by Γ−(Vk) the set of arcs
incident from Vk. For example,

if V1 = {v2 , v3}, then

Γ+(V1) = {u1 , u3}

Γ−(V1) = {u2 , u4 , u6} and

Γ(V1) = {u1 , u2 , u4 , u5 , u6}.

A sequence of arcs (u1, u2, …, uk-1, uk, uk+1, …, uq) of a graph such that every
intermediate arc uk has one vertex common with the arc uk-1 and another common with uk+1 is
called a chain. In our graph, the sequence (u2 , u3 , u4 , u7) is a chain, as u3 has one vertex
common with u2 and u4 etc. A chain may also be denoted by the vertices which it connects.
For example, (u2 , u3 , u4 , u7) may also be denoted by (v1 , v3 , v2 , v9 , v6).

A chain becomes a cycle if in the sequence of arcs no arc is used twice and the first arc
has a vertex common with the last arc, and this vertex is not common with any of the
intermediate arcs. For example, in the graph (u3, u5 , u7 , u4) is a cycle.

A path is a chain in which all the arcs are directed in the same sense such that the
terminal vertex of the preceding arc is the initial vertex of the succeeding arc. In our example
the sequence of arcs (u1 , u3 , u6 , u9) is a path. In either case also we may denote path in terms
of the vertices as (v1 , v2 , v3 , v4 , v5). Note that a path is a chain, but every chain is not a path.
(why?).

A circuit is a cycle in which all the arcs are directed in the same sense. In our graph
(u1 , u3, u2) and (u8, u9) are circuits.

A graph is said to be connected if for every pair of vertices there is a chain connecting
the two. The graph in our example is not connected because there is no chain connecting v6 to
v7 or v2 to v8.

But if we erase the two vertices v7 and v8 and the arc u11, the graph that is left is a connected graph.

If va is a vertex of a graph, then the set formed by va and all other vertices which are
connected to va by chains, and the set of arcs connecting them from a component of the graph.
Clearly a connected graph has only one component. If a graph is not connected, it has at least
two components. The graph in our example has two components, one consisting of vertices v7,
v8 and the arc u11, and the other the remaining portion.

A graph is strongly connected if there is a path connecting every pair of vertices in it.
For example telephones in a town are the vertices of a strongly connected graph. But Radio
receivers and transmitters form a connected graph but not strongly connected, because there is
a path from a transmitter to a receiver but not one from a receiver to a transmitter.

A tree is defined as a connected graph with at least two vertices and no cycles. As the
name indicates a natural tree is the best example of a graphical tree, the branches forming the
arcs and the extremities of the branches forming the vertices. Clearly a tree with n vertices has
(n-1) arcs, and that every pair of vertices is joined by one and only one chain. If we delete an
arc from a tree, the resulting graph is not connected, and if we add an arc, a cycle is formed.


A tree

A vertex which is connected to every other vertex of the graph by a path is called a center of
the graph. A graph may or may not have a centre, or may have many centers. In the above
diagram of a tree, there is no centre. Every vertex of a strongly connected graph is a center. A
tree can atmost have only one centre.

A tree with a centre is called an arborescence. Clearly in an arborescence all the other
arcs directed in the same sense.

(A tree with a centre. Centre is marked . )

4.2 Minimum path problem

Consider a graph G(V, U) where V is the set of vertices and U is the set of arcs. Let va
and vb be two vertices of the graph. Associated with each arc (vj , vk) of the graph let there be a
number xjk. There may be a number of paths from va to vb. For each path we define the length
of the path as Σ xjk , where the summation is over the sequence of arcs forming the path. The
minimum path problem is the problem of identifying the path having the smallest length.

The term length is used in a very general sense. Of course, a road map connecting
various towns is a graph and the distance between any two towns along a road is the length of
the path. But sometimes, the time or the cost involved in going from one town to another is
also a length in the generalized sense. Note that in some situations the length may not be even
non-negative. We take the number xjk as a real number, positive, zero or negative.

Case 1. All arc lengths are non-negative: Suppose we have to find the minimum path from
va to vb. Let fj denote the minimum path from va to vj. We have to find fb.

Clearly fa = 0.

Let Vp be a subset of V such that va ∈ Vp and vb ∉ Vp. First we find the minimum
distance fj for all vj in Vp. Let (vj , vk) be an arc going from Vp. Find fj + xjk. Do this for all vj
in Vp and all arcs (vj , vk) going from Vp.

Let fr + xrs = min (fj + xjk) , vr ∈ Vp , vs ∉ Vp.

Then the minimum path length from va to vs is

fs = fr + xrs .

Now form an enlarged subset Vp+1 of V defined by Vp+1 = Vp ∪ {vs}, and proceed as
before.

Suppose we start with p = 0 and V0 consisting of the single vertex va , with fa = 0.
Following the procedure described above, the sets V1 , V2 , …, Vp , Vp+1 , … are formed.
If for some p, Vp contains vb, we have found fb. If no such set can be found, it is inferred that
there is no path connecting va to vb.

Example:

Find the minimum path from v0 to v8 in the graph given below

[Figure: a directed graph on the vertices 0, 1, …, 8; its arc lengths are the values x listed in
the table below.]

Let us start with p = 0 and V0 = {0}. Clearly f0, the minimum distance from 0 to 0, is 0. There
are three arcs going from V0 = {0}, namely (0, 1), (0, 2) and (0, 3); the distance from 0 to 1 is
2, from 0 to 2 is 6 and from 0 to 3 is 8, and the minimum of {2, 6, 8} is 2, corresponding to
vertex 1. Form the set V1 = V0 ∪ {1} = {0, 1}. Then find all the arcs going out of V1 and the
minimum distances to the corresponding vertices. The arcs going from {0, 1} are (0, 2),
(0, 3), (1, 2), (1, 4) and (1, 5). Directly, the distance to 2 is 6 and to 3 is 8; through vertex 1,
the minimum distance from 0 to 2 is the sum of the distances from 0 to 1 and from 1 to 2,
which is 2 + 3 = 5. Similarly all the other calculations are made. They are shown
conveniently in the tabular form below.

p   Vp    f     Γ−(Vp)   x     f + x    fs    vs
0   0     0     (0, 1)   2     2        2     1
                (0, 2)   6     6
                (0, 3)   8     8
1   0     0     (0, 2)   6     6
    1     2     (0, 3)   8     8
                (1, 2)   3     5        5     2
                (1, 4)   10    12
                (1, 5)   8     10
2   0     0     (0, 3)   8     8
    1     2     (1, 4)   10    12
    2     5     (1, 5)   8     10
                (2, 3)   1     6        6     3
                (2, 5)   1     6        6     5
3   0     0     (1, 4)   10    12
    1     2     (3, 6)   4     10
    2     5     (5, 4)   1     7        7     4
    3     6     (5, 7)   5     11
    5     6
4   0     0
    1     2
    2     5
    3     6     (3, 6)   4     10       10    6
    4     7     (4, 7)   3     10       10    7
    5     6     (5, 7)   5     11
5   0     0
    1     2
    2     5
    3     6
    4     7
    5     6
    6     10    (6, 8)   7     17       17    8
    7     10    (7, 8)   10    20

The minimum path is 0 → 1 → 2 → 3 → 6 → 8 and the minimum length is 17.
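
The labelling procedure of Case 1 is essentially Dijkstra's algorithm; the following sketch (not part of the text) repeats the computation with a priority queue, using the arc lengths read off the table above:

import heapq

arcs = {(0, 1): 2, (0, 2): 6, (0, 3): 8, (1, 2): 3, (1, 4): 10, (1, 5): 8,
        (2, 3): 1, (2, 5): 1, (3, 6): 4, (5, 4): 1, (5, 7): 5, (4, 7): 3,
        (6, 8): 7, (7, 8): 10}

def shortest_path_length(arcs, source, target):
    out = {}
    for (u, v), w in arcs.items():
        out.setdefault(u, []).append((v, w))
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        if u == target:
            return d
        for v, w in out.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return float("inf")

print(shortest_path_length(arcs, 0, 8))   # 17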

Note: We can find the minimum path even if we are not given the graph. All the arc lengths
and vertices are known, the problem can be solved.

II Arc lengths unrestricted in sign.

Let Va, Vb be two vertices in the graph G(V, U) whose arc lengths are real numbers,
not necessarily positive. The problem is to find the minimum path from Va to Vb. Here we
assume that there are no circuits in the graph whose arc lengths add up to a negative number.
The reason is that if there is any such circuit we can go round and round it and decrease the
length of the path without any limit, getting an unbounded solution (i.e. −∞).

How to construct an arborescence? Mark out the arcs going from Va. From the vertices
so reached mark out the arcs going out to the other vertices. Note that we need not mark all of
them. No vertex should be reached by more than one arc. Clearly if there is a vertex to which
no arc is incident, it cannot be reached from Va and so is left out. No arc incident to Va should
be drawn.

Let fj denote the length of the path from va to any vertex vj in the arborescence. The
arborescence A1 determines fj uniquely for each vj, but fj need not be the minimum. Let
(vk, vj) be an arc in G but not in A1. Consider the length fk + xkj and compare it with fj. If
fj ≤ fk + xkj, make no change. If fj > fk + xkj, delete the arc incident to vj in A1 and include the
arc (vk, vj). Thus the arborescence A1 is modified to A2 and the value of fj is reduced to fk + xkj,
the reduction in the value of fj being fj – (fk + xkj). The lengths of the paths to the vertices
reached through vj are also reduced by the same amount. All the calculations are repeated and the
new values of fj for all vj are calculated.

In A2, see whether we can select arcs which give smaller paths to its vertices. If yes,
obtain A3 from A2 and find new values fj for each vj. Proceeding like this we can find an
arborescence Ar which cannot be further modified. From Ar we can determine the minimum
path to all of its vertices; fb is the minimum path length from va to vb.
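
A compact way of realising this repeated modification is to relax every arc over and over until no improvement is possible, which is essentially the Bellman–Ford idea. The sketch below is our own illustration, not the text's procedure verbatim; the names and the dictionary representation are assumptions, and it relies on the assumption stated above that the graph contains no circuit of negative total length.

def minimum_path_general(vertices, arcs, va):
    """Relaxation sketch for arc lengths unrestricted in sign.

    vertices : iterable of vertex names
    arcs     : dict mapping (vj, vk) -> real length xjk
    Returns (f, pred); f[v] is None for vertices not reachable from va.
    Assumes no circuit of negative total length.
    """
    vertices = list(vertices)
    INF = float("inf")
    f = {v: INF for v in vertices}
    pred = {v: None for v in vertices}
    f[va] = 0
    # After at most |V| - 1 sweeps no further improvement is possible,
    # exactly as the arborescence Ar cannot be modified any further.
    for _ in range(len(vertices) - 1):
        improved = False
        for (vj, vk), x in arcs.items():
            if f[vj] + x < f[vk]:          # the fj > fk + xkj criterion
                f[vk] = f[vj] + x
                pred[vk] = vj
                improved = True
        if not improved:
            break
    f = {v: (None if d == INF else d) for v, d in f.items()}
    return f, pred

On the example that follows, this sketch should reproduce the final values fj listed at the end of that example, in particular f7 = −12.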

Proof of the algorithm: Let (va, v1, v2, …, vb) be any path in G from va to vb. Its length is
xa1 + x12 + … + xpb. Since Ar contains all those vertices of G which can be reached from va,
the vertices in this path are in Ar also. Since Ar is an arborescence which cannot be further
changed, for every vertex Vj in Ar and for every arc (vk, vj) in G

fj ≤ fk + xkj or fj – fk ≤ xkj

Writing these inequalities for all vertices

f1 – fa ≤ xa1
f2 – f1 ≤ x12
……
fb – fp ≤ xpb.
Adding all of them we get

fb – fa ≤ xa1 + x12 + x23 + … + xpb.

Since fa = 0, fb ≤ xa1 + x12 + x23 + … + xpb.

Thus no path from va to vb in G can be smaller than fb. Since the path of length fb is also in G,
this path is the minimum.

Note: The path of maximum length can be found either by changing the signs of the lengths
of all arcs and then finding the minimum path, or by reversing the inequality fj > fk + xkj as
the criterion for changing an arc in the arborescence, so that at every stage a greater path is
selected against a smaller one.

Example: Find the minimum path from v0 to v7 in the following graph.

[Figure: directed graph on the vertices v0, …, v7 with arc lengths of arbitrary sign; the lengths are those used in the working below.]

Draw an arborescence A1 with centre v0 consisting of all those vertices of the graph which can
be reached from v0, and the necessary number of arcs.


[Figure: the initial arborescence A1 rooted at v0.]
The lengths fj of the paths from v0 to different vertices vj of A1 are as follows.
f0 = 0 , f1 = 1, f2 = -4 , f3 = 3 , f4 = 3 , f5 = 2 , f6 = 4 , f7 = 5 .
Consider the vertex v2. There is an arc (v1, v2) in G which is not in A1, such that
f2 = -4 < f1 + x12 = 1 + 2 = 3, so (v1, v2) is not included.
Then consider the vertex v3. There is an arc (v2 , v3) in G which is not in A1 such that
f3 = 3 > f2 + x23 = -4 – 1 = -5.
So we delete the arc (v1 , v3) which is incident to v3 in A1 and instead include the arc (v2 , v3).
This gives a new arborescence A2 with f3 = -5.

[Figure: the arborescence A2 obtained by replacing (v1, v3) with (v2, v3).]

Now consider vertex v4. There is an arc (v3, v4) such that
f4 = 3 > f3 + x34 = -5 + (-4) = -9. So delete the arc (v0, v4) and include the arc (v3, v4).

[Figure: the modified arborescence; at this stage f0 = 0, f1 = 1, f2 = -4, f3 = -5, f4 = -9, f5 = 2, f6 = 4, f7 = -1.]
Consider vertex v5. There is an arc (v4 , v5) such that
f5 = 2 > f4 + x45 = -9 + (-2) = -11. So delete (v0, v5) and include (v4, v5).

[Figure: the arborescence after replacing (v0, v5) by (v4, v5).]

Consider vertex v6. There is an arc (v4 , v6) such that

f6 = 4 > f4 + x46 = -9 + (-2) = -11.

So delete (v5, v6) and include (v4 , v6)

[Figure: the arborescence after replacing (v5, v6) by (v4, v6); now f0 = 0, f1 = 1, f2 = -4, f3 = -5, f4 = -9, f5 = -11, f6 = -11, f7 = -7.]

Now consider vertex v7. There is an arc (v6 , v7) such that
f7 = -7 > f6 + x67 = -11 + (-1) = -12.
Again f7 = -7 > f3 + x37 = -5 + (-5) = -10. Since -12 < -10 we include (v6, v7).

[Figure: the final arborescence Ar.]
This cannot be changed further. For this arborescence
f0 = 0 , f1 = 1 , f2 = -4 , f3 = -5 , f4 = -9 , f5 = -11 , f6 = -11, f7 = -12. The minimum path
from v0 to v7 is (v0 , v2 , v3 , v4 , v6 , v7) with length –12.
4.3 Spanning tree of minimum length

Let G(V, U) be a connected graph with undirected arcs, and let T(V, U') be a tree such
that U' ⊆ U. The set of vertices of T is the same as that of G, while all the arcs of T are arcs of
G also. Then T(V, U') is said to be a spanning tree of G(V, U), or T is said to span G.

The problem is to find the spanning tree of minimum length if the length of each arc of
G is given as a non-negative number.

Let xjk denote the length of the arc between vj and vk. We assume that the arcs are undirected,
so that xjk = xkj for all arcs, and that xjk ≥ 0 for all arcs.

Algorithm:

Assumptions: No two arcs of G are of the same length. If xjk = xpq = xrs we take
xpq = xjk + e, xrs = xjk + 2e, e > 0, keeping e small enough so as not to make xrs equal to or
greater than any other arc length which is greater than xjk.

Consider the set of vertices V of the graph G(V, U). It is given that G is connected. Hence
every vertex has one or more arcs incident with it. Also, since all the arcs of G are of unequal
lengths, for each vertex, among all the arcs incident with it, there is one which is of the
smallest length. We shall say that this arc connects the vertex to its nearest neighbour.

Consider each vertex of V one by one and connect it to its nearest neighbour. In this
way we shall get a partial graph G1(V, U1) of G consisting of all its vertices and some of its
arcs. G1 may, in general, be an unconnected graph with several components, each of which is
connected within itself.

Let each of these components be treated as if it were a single vertex. We shall thus have a
new set of vertices, each vertex being a component of the graph G1. Every one of these
vertices has arcs of the graph G connecting it with one or more of the other vertices; for, if it
were not so, G would not be connected. The arc of smallest length between two such vertices is
taken as the measure of the distance between them. We again consider each of these
vertices one by one and connect each vertex to its nearest neighbour, that is, the one which is
at the least distance from it. This operation results in another graph G2(V, U2) which may again
have several components. Again let each of these components be treated as a vertex and let each
vertex be connected to its nearest neighbour, yielding a graph G3(V, U3). The procedure is

repeated till a connected graph Gp(V, Up) is obtained. This graph is the required spanning tree.
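
The component-merging idea can be sketched as follows. This Python sketch is our own illustration (the names and the frozenset representation of undirected arcs are assumptions); it relies on the assumption stated above that all arc lengths are distinct.

def minimum_spanning_tree(vertices, edges):
    """Sketch of the procedure described above.

    edges : dict mapping frozenset({vj, vk}) -> length xjk (all distinct)
    Returns the set of arcs of the spanning tree of minimum length.
    """
    comp = {v: v for v in vertices}          # component label of each vertex

    def find(v):                             # root of the component containing v
        while comp[v] != v:
            comp[v] = comp[comp[v]]
            v = comp[v]
        return v

    chosen = set()
    while True:
        # nearest outgoing arc of each current component
        nearest = {}
        for e, x in edges.items():
            vj, vk = tuple(e)
            a, b = find(vj), find(vk)
            if a == b:
                continue
            for c in (a, b):
                if c not in nearest or x < edges[nearest[c]]:
                    nearest[c] = e
        if not nearest:                      # a single component: Gp is connected
            break
        for e in set(nearest.values()):      # connect each component to its
            vj, vk = tuple(e)                # nearest neighbour
            a, b = find(vj), find(vk)
            if a != b:
                chosen.add(e)
                comp[a] = b
    return chosen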
Example: Find the minimum spanning tree of the following graph. It is an undirected graph.

[Figure: undirected graph on the vertices v1, …, v8 with the arc lengths used in the working below.]

Consider the vertex v1. It is connected to v2, v3, v4 and v5 by arcs of lengths 6, 5, 14 and 18
respectively. Among these 5 is the minimum, and so v3 is the nearest neighbour of v1. Similarly
for v2 also v3 is the nearest neighbour. Find the nearest neighbours of all the vertices. The
following graph with three components A1, A2 and A3 is obtained.

[Figure: the partial graph G1 with three components A1, A2 and A3.]

Treat A1 , A2 and A3 as three vertices. The arcs of G connecting A1 to A2 are of lengths 14(1-
4), 18(3 – 4) , 8(2 – 5) , 16(3 – 5) and 11(1 – 5) and so the distance between A1 to A2 is taken
as the minimum one, that is 8, and (v2 , v5) is connected. We get the following graph. Note
that there are no arcs connecting A1 and A3.

[Figure: the graph with two components A4 and A5 obtained after joining (v2, v5).]

In the above graph there are two components A4 and A5. These two vertices are connected by
(v4 , v6) , (v4 , v8) , (v4 , v7) , (v5 , v7) and (v5 , v8) and the nearest neighbour is v8 and (v4 , v8)
is connected.

[Figure: the resulting connected graph, which is the required spanning tree.]

Thus we get a single connected graph which is the smallest spanning tree. The length of the
spanning tree is 38.

Proof of the algorithm

Clearly every vertex of G(V, U) has a unique nearest neighbour (G is connected and the
arc lengths are all different). By connecting every vertex to its nearest neighbour we
get the graph G1(V, U1) which has components. When these components are connected, each
to its nearest neighbour, we get another graph G2(V, U2) which has components. By repeating
this operation we finally get a graph Gp(V, Up) which has a single component, and thus we get a
connected graph.

Claim: The graph has no cycles

If possible, let it have a cycle (va, vb, vc, …, vd, va). Let us mark arrows on the arcs in
this cycle (originally there were no arrows on them) which indicate the nearest neighbour of
each vertex. For example, if the nearest neighbour of vb is vc we mark the arrow from vb to vc.
In this way every arc in the cycle will receive an arrow. Because if, for instance, (va , vb) does
not get an arrow, then neither vb is the nearest neighbour of va nor va is the nearest neighbour
of vb, and so the arc (va , vb) should not have been there at all.

After all the arcs in the cycle have been marked, one of the following two situations
arise. (i) All the arcs are marked in the same sense.

Without loss of generality let us suppose that they have been marked as va → vb,
vb → vc, …, vd → va. Now, since vc and not va is the nearest neighbour of vb, we have
xab > xbc. Similar is the case with the other vertices also. Hence

xab > xbc > … > xda.

But xab > xda means that the nearest neighbour of va is not vb. This is a contradiction.
Therefore all the arcs cannot have arrow marks in the same sense. (ii) Some arcs are marked
in one sense and some other arcs in the opposite sense. Then there must be a vertex, say vb, such
that the arrows on the arcs joining it to its neighbours va and vc are both directed away from vb. This in turn means
that va is the nearest neighbour of vb and vc is also the nearest neighbour of vb. Since the arcs are of different lengths, this is not
possible. So this also leads to a contradiction.

Thus the assumption of the existence of a cycle leads to a contradiction in every

case. Therefore, there are no cycles in the graph Gp(V, Up).

We thus prove that the graph we get by this algorithm is a connected graph without
cycles, that is , it is a tree. Since it includes all the vertices of G, it spans G.

Claim: It is the smallest spanning tree.

If possible, let us suppose that the smallest spanning tree T(V, U') is not the same as
Gp(V, Up). Then there must be an arc in Gp which is not in T, otherwise the two trees T and Gp
would not be different. Let this arc be (va, vb), such that vb is the nearest neighbour of va. Let us
introduce this arc in T. Since this is an additional arc introduced in the tree T, a cycle will be
formed. Let this cycle be (va, vb, vc, …, vd, va). Since vb is the nearest neighbour of va, the
length of the arc (vd, va) is greater than that of (va, vb). So if we delete the arc (vd, va) from T
and introduce (va, vb), the length of the resulting tree will decrease, which in turn means that T
is not of minimum length, contradicting the fact that T is a tree of minimum length.
So there is no arc of Gp which is not in T. But the total number of arcs in Gp is the same
as in T, because both are trees spanning the same graph and so have (n – 1) arcs, where n
is the number of vertices in G. Hence Gp is the same as T, i.e. Gp is the minimum spanning tree
of G.

4.4 Problem of potential difference.

Let G(V , U) be a graph, and let with each vertex vj be associated a real number fj
called its potential. If ui = (vj , vk) is an arc, then xjk = fk – fj is called the potential difference
in the arc ui. In this way with each arc of the graph is associated a potential difference. It is
obvious that xjk = -xkj. The potential difference in a chain through the vertices v1, v2, …, vp
is x12 + x23 + … + x(p-1)p = (f2 – f1) + (f3 – f2) + … + (fp – fp-1) = fp – f1.

Clearly the potential difference in a cycle is zero. The problem of maximum potential
difference: let va and vb be two vertices in a graph G(V, U). We have to find the maximum
potential difference xab between the vertices such that the potential difference in every arc (vj,
vk) of G is subject to the condition xjk ≤ cjk, where the cjk are given constants.

In terms of potentials the above constraint may be written as fk – fj ≤ cjk for all arcs (vj, vk),
and the problem then is to find the maximum value of fb – fa subject to these constraints. Since
we are interested in potential differences and not in the absolute values of the potentials, we may take
fa = 0 without any loss of generality. The problem is stated as follows.

Let fj be a real-valued function of the vertex vj in a graph G(V, U) and let fa = 0. We
have to find the maximum value of fb subject to

fk – fj ≤ cjk for all arcs (vj, vk) in G.

In this form the problem is identical with the minimum path problem where arc lengths are
unrestricted in sign. We solve the problem of the minimum path from va to vb with cjk as the
length of the arc (vj, vk).
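
Because of this reduction, the earlier relaxation sketch for arcs of unrestricted sign can be reused directly; the snippet below is our own illustration, using the data of the example that follows and the hypothetical routine minimum_path_general sketched above.

# Maximising f4 - f1 subject to fk - fj <= cjk is the minimum path problem
# from v1 to v4 with cjk as arc lengths (data of the example below).
vertices = [1, 2, 3, 4]
c = {(1, 2): 3, (1, 3): 2, (2, 3): -2, (2, 4): 1, (3, 4): 4, (1, 4): -1}

f, pred = minimum_path_general(vertices, c, va=1)
# f[4] should be -1, so the maximum potential difference x14 is -1.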

Example: Find the maximum potential difference x14 between v1 and v4 of the graph with the
following data subject to the condition that for each arc xjk ≤ cjk.

V : 1 2 3 4
U : (1, 2) (1, 3) (2, 3) (2, 4) (3, 4) (1, 4)
Cjk : 3 2 -2 1 4 -1

[Figure: the graph of the example, with the arc lengths cjk listed above.]

f1 = 0 , f2 = 3 , f4 = f2 + c24 = 3 + 1 = 4.
Then f4 is improved to -1 using the arc (1, 4), and f3 = f2 + c23 = 3 – 2 = 1.

The maximum potential difference x14 = -1 with optimal path (v1 , v4). See the following table
also

A    Path          f     Alternative path    f
1    (1, 2, 4)     4     (1, 4)             -1
     (1, 3)        2     (1, 2, 3)           1
2    (1, 4)       -1     (1, 2, 3, 4)        5
     (1, 2, 3)     1
3    (1, 4)       -1
     (1, 2, 3)     1

A more general problem of maximum potential difference in a network is presented if the
constraints are of the type bjk ≤ fk – fj ≤ cjk for all arcs (vj, vk).

Now bjk ≤ fk – fj ≤ cjk is equivalent to fk – fj ≤ cjk and fj – fk ≤ -bjk.

The method of solution remains the same.

Scheduling of sequential activities:

The problem of minimum path finds an important application in scheduling and


coordinating various activities in a project so as to complete it in minimum time at a given
cost. Also it is possible to estimate the least rise in cost or the maximum saving possible if
certain activities are speeded up or slowed down to finish the project within a prescribed
period.

A project involves a number of activities, operations or jobs which we identify as


vertices va, …, vj, … vk, …, vb of a graph, va represents the beginning and vb the end of the
project. Each job vj requires some time for its completion. It may not be possible to start on a
job unless some specified time has been spent on some other job or jobs. The problem is to
find the minimum time in which the project can be finished and the time schedule for each
job.

Let cjk be the time required on job vj before job vk can start. It is the time interval
between the starts of the two jobs, vj preceding vk. This information is indicated by drawing the
arc (vj, vk) and associating the length cjk with it. The time required to complete vj is
represented by the arc (vj, vb) of length cjb, as it would mean that the time cjb should be spent
on vj before the end vb can be reached. Also if vj can start only after some time has passed
from the beginning of the project, we may indicate it by caj. All arcs (vj, vk) with lengths cjk
will in this way denote a sequential relationship in terms of time among the various jobs.

Each sequence of jobs which must be done before work on vj can begin is represented
by a path connecting va to vj. The longest of these paths determines the earliest time vj can
start. In this way the longest path joining va to vb gives the minimum time of completion of

the project. The problem thus reduces to finding the maximum path with arc length cjk. This
path is called the critical path.
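
The critical (longest) path can be computed either by negating the lengths and finding a minimum path, or directly by the relaxation sketch below. This is our own illustration, not the text's algorithm verbatim; the names are assumptions, and it relies on the precedence graph having no circuits.

def critical_path(vertices, arcs, va):
    """Longest-path sketch for an acyclic precedence graph.

    arcs : dict mapping (vj, vk) -> time cjk that must elapse after vj
           starts before vk can start.
    Returns the earliest start times; the value at the finish vertex is
    the minimum completion time of the project.
    """
    vertices = list(vertices)
    earliest = {v: float("-inf") for v in vertices}
    earliest[va] = 0
    # |V| - 1 relaxation sweeps suffice because there are no circuits
    for _ in range(len(vertices) - 1):
        for (vj, vk), c in arcs.items():
            if earliest[vj] + c > earliest[vk]:
                earliest[vk] = earliest[vj] + c
    return earliest

With arc lengths built from the statements of the example that follows, critical_path should report 22 at the finish vertex vb.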

Example: A building activity has been analyzed as follows, vj stands for a job.

(i) v1 and v2 can start simultaneously, each one taking 10 days to finish.
(ii) v3 can start after 5 days and v4 after 4 days of starting v1.
(iii) v4 can start after 3 days of work on v3 and 6 days of work on v2.
(iv) v5 can start after v1 is finished and v2 is half done.
(v) v3, v4 and v5 take respectively 6, 8 and 12 days to finish. Find the critical path and
the minimum time for completion.

Based on the above information we draw a graph first. The vertices va and vb represent the
start and the finish of the project and the other vertices the various jobs to be done. The arc
lengths denote the time between the start of two jobs.

[Figure: the project graph, with start vertex va, finish vertex vb, the jobs v1, …, v5, and arc lengths equal to the times stated above.]

Now we find the arborescence giving the maximum path. That means identify paths to various
vertices giving maximum length.

[Figure: the arborescence of maximum-length paths from va.]

The critical path is (va , v1, v5, vb) and the minimum time for the completion of the job is 22
days.

Maximum flow problem:

Definition:

Let xi be a real number associated with every arc ui, i = 1, 2, …, m, of a graph G(V, U)
such that for every vertex vj,

Σ₁ xi = Σ₂ xi , j = 1, 2, …, n,

where the summation Σ₁ is on all arcs going to vj and the summation Σ₂ is on all arcs going
from vj. Then xi is said to be a flow in the arc ui, and the set {xi}, i = 1, 2, …, m, is said to be a
flow in the graph G.

Let G(V, U) be a graph with V as the set of n + 2 vertices va, v1, v2, v3, …, vn, vb and
U as the set of m + 1 arcs u0, u1, …, um. The vertex va is called the source and vb the sink,
and the arc u0 connects vb to va. Note that this is the only arc going from vb and also the only
arc going to va. With every arc ui, i = 1, 2, …, m, is associated a real number ci ≥ 0 called the
capacity of the arc. Note that no number is associated with u0.

Let {xi} be a flow in the graph G such that 0 ≤ xi ≤ ci, i = 1, 2, …, m. Here the flow
in the arc u0 (that is, x0) has no constraints.

Since x0 is the only flow out at vb and the only flow in at va,

total flow in at vb = total flow out at vb = x0 = total flow in at va = total flow out at va.

All that flows out at va flows in at vb; that is why va is called the source and vb the sink. The arc u0 is
called the return arc. It is used as a mathematical device.

Now the problem is to find the flow {xi} such that x0 is maximum

Subject to 0 ≤ xi ≤ ci ; i = 1, 2, …, m.

Algorithm:

Step 1: Start by assuming a feasible flow. It is always possible to assume xi = 0 for all i.

Step 2: Divide the set V of vertices into two subsets, W1 and W2, such that W1 ∪ W2 = V and W1 ∩ W2 = ∅.

For a start take W1 = {va}, all the other vertices being in W2.

Step 3: Vertices of W2 are transferred to W1 by the following procedure. Let vj ∈ W1, vk ∈ W2.

(a) If (vj, vk) is an arc ui and xi < ci, transfer vk to W1.

(b) If (vk, vj) is an arc ui and xi > 0, transfer vk to W1.

Otherwise do not transfer.

Transfer vertices from W2 to W1 like this. Finally if vb is also transferred the flow is not
optimal.
Step 4: If the flow is not optimal, increase xi in the arcs of category (a) in which xi < ci (flow <
        capacity) and decrease xi in the arcs of category (b) in which xi > 0 (flow > 0), so that the
        flow remains feasible and at least one arc gets capacity flow. Then go to step 2.
        After a number of steps a stage will come when vb cannot be transferred to W1 by step
        3. The flow at that stage is optimal.
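
The labeling steps 1–4 can be sketched in code as follows. This is our own illustration of the procedure, with assumed names and an assumed dictionary representation; it further assumes integer capacities so that the procedure terminates.

def maximum_flow(capacities, va, vb):
    """Labeling sketch of steps 1-4 above.

    capacities : dict mapping (vj, vk) -> capacity ci >= 0
    Returns (value, flow), flow mapping each arc to its flow xi.
    """
    flow = {a: 0 for a in capacities}            # step 1: start from zero flow
    total = 0                                    # the flow x0 in the return arc
    while True:
        # steps 2-3: transfer vertices from W2 to W1, remembering the arc used
        label = {va: None}
        stack = [va]
        while stack and vb not in label:
            vj = stack.pop()
            for (p, q), c in capacities.items():
                if p == vj and q not in label and flow[(p, q)] < c:
                    label[q] = ((p, q), +1)      # category (a): forward, xi < ci
                    stack.append(q)
                elif q == vj and p not in label and flow[(p, q)] > 0:
                    label[p] = ((p, q), -1)      # category (b): backward, xi > 0
                    stack.append(p)
        if vb not in label:                      # vb cannot be transferred:
            return total, flow                   # the current flow is optimal
        # step 4: trace the chain back to va and find the largest increase
        chain, v = [], vb
        while v != va:
            arc, sense = label[v]
            chain.append((arc, sense))
            v = arc[0] if sense == +1 else arc[1]
        eps = min(capacities[a] - flow[a] if s == +1 else flow[a]
                  for a, s in chain)
        for a, s in chain:
            flow[a] += eps if s == +1 else -eps
        total += eps

On the example that follows, this sketch should report a maximum flow of 6.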
Example: In the following graph find the maximum flow. The numbers along arcs are their
capacity.


[Figure: flow network with source va, sink vb, intermediate vertices v1, …, v6 and return arc u0; the arc capacities are those listed in the table below.]
Let us assume an initial flow of zero in all arcs. Let W1 = {va}. Now va ∈ W1 and v1 ∈ W2
and the flow 0 is less than the capacity 3 of (va, v1); therefore transfer v1 to W1. Again v1 ∈ W1 and v4 ∈
W2 with flow 0 < 1 (capacity), so transfer v4 also. Now v4 ∈ W1 and v3 ∈ W2 with
flow 0 < 2 (capacity); transfer v3 to W1. Again v3 ∈ W1 and v5 ∈ W2 with flow (0) < capacity
(1); transfer v5. Finally v5 ∈ W1 and vb ∈ W2 with flow < capacity (0 < 5); transfer vb to W1.
We have gone along the chain (va, v1, v4, v3, v5, vb). The least capacity in this chain is 1, so in
each arc of this chain and also in the return arc (vb, va) increase the flow to 1, keeping the
flows in the other arcs unchanged, and note that the modified flow is feasible.

The above steps are repeated with every modified feasible flow until it is not possible
to transfer vb to W1. In each feasible flow the numbers in ( ) indicate the chain along which it
is possible to proceed to transfer vb into W1. The asterisk indicates that the flow in the
corresponding arc is equal to its capacity and cannot be further increased.

Then we consider the chain (va, v1, v5, vb), along which the maximum permissible increase is 2
units. Then we consider (va, v2, v6, vb) with an increase of 1 unit. Next consider the chain (va, v2, v4, v3,
v6, vb) with a permissible increase of 1 unit. Now start again with W1 = {va}. Since the flow in (va, v3) is less
than the capacity, v3 is transferred to W1. Now (v3, v5) and (v3, v6) are both saturated. But there
is an arc (v4, v3) such that v3 ∈ W1 and v4 ∈ W2 and the flow in it is greater than zero; hence v4 is
transferred to W1. Again there is an arc (v1, v4) with v4 ∈ W1 and v1 ∈ W2 carrying a positive flow,
so v1 is also transferred to W1. Then, since the flow through the arc (v1, v5) is less than
its capacity, v5 is also transferred, and finally vb is transferred. Since vb is transferred the flow
is not optimal, and it is increased by one more unit along this chain. After that, no matter how we try, we cannot transfer
vb to W1, so the iterations stop. The maximum flow in the graph is 6.


Arcs      Capacity ci     Feasible flows
                          I      II     III    IV     V      VI
(a, 1)    3               (0)    (1)    3*     3*     3*     3*
(a, 2)    2               0      0      (0)    (1)    2*     2*
(a, 3)    1               0      0      0      0      (0)    1*
(1, 4)    1               (0)    1*     1*     1*     (1*)   0
(1, 5)    4               0      (0)    2      2      (2)    3
(1, 6)    2               0      0      0      0      0      0
(2, 4)    2               0      0      0      (0)    1      1
(2, 6)    1               0      0      (0)    1*     1*     1*
(3, 5)    1               (0)    1*     1*     1*     1*     1*
(3, 6)    1               0      0      0      (0)    1*     1*
(4, 3)    2               (0)    1      1      (1)    (2*)   1
(4, b)    0               0*     0*     0*     0*     0*     0*
(5, 2)    1               0      0      0      0      0      0
(5, b)    5               (0)    (1)    3      3      (3)    4
(6, b)    2               0      0      (0)    (1)    (2*)   2*
(b, a)    -               0      1      3      4      5      6

Definition:

If in the graph G(V, U) of the maximum flow problem, W2 is a subset of V such that
vb ∈ W2, va ∉ W2, then the set Ω⁺(W2) of arcs going into W2 (the arcs incident to W2 from outside) is said to be a cut. The

capacity of the cut is the sum of the capacities of the arcs contained in the cut.

Theorem:

For any feasible flow {xi}, i = 1, 2, …, m in the graph, the flow x0 in the return arc is
not greater than the capacity of any cut in the graph.

Proof:-

Let Ω⁺(W2) be any cut. Consider the flow in the arcs going to and going from W2.
We know that the flow in must equal the flow out. Therefore

Σ₁ xi = x0 + Σ₂ xi ,

where Σ₁ denotes the summation over the arcs going to W2 and Σ₂ denotes the summation over
the arcs (other than the return arc u0) going from W2.

Since xi ≥ 0 for all i, Σ₁ xi ≥ x0.

Also xi ≤ ci for all i. Therefore

Σ₁ ci ≥ x0 ,

where Σ₁ ci is the capacity of the cut Ω⁺(W2).

Theorem:

The algorithm described earlier solves the problem of the maximum flow.

Proof:-

Suppose that by the application of the algorithm a stage is reached when no vertex of W2
can be transferred to W1 by the procedure discussed in the algorithm, and vb ∈ W2. The set of
arcs Ω⁺(W2) is a cut. Let ui ∈ Ω⁺(W2). It means that ui is an arc (vp, vq) where vp ∈ W1,
vq ∈ W2. The flow in this arc must be saturated, that is xi = ci, because if xi < ci it would have
been possible to transfer vq from W2 to W1, which is contrary to hypothesis.

Again let uj be an arc of Ω⁻(W2), the set of arcs going from W2 (other than the return arc u0). It means that uj is an arc (vr, vs) where vr ∈ W2, vs ∈ W1. The
flow in this arc must be zero, because if xj > 0, it would have been possible to transfer vr
from W2, which again is contrary to hypothesis.

We conclude that the flow into W2 is Σ ci, the summation being over all ui ∈ Ω⁺(W2),
and the flow out of W2 is only in the return arc u0, because it is the only arc going from W2
carrying a non-zero flow. Let the flow in u0 be y0. Then, since the flow in is equal to the flow out,
we have

Σ ci = y0 ,

where Σ ci is the capacity of the cut obtained by the application of the algorithm. But for any flow x0 in u0,

x0 ≤ Σ ci , where Σ ci is the capacity of any cut.

It is therefore clear that y0 = max x0.

This in turn implies that the algorithm leads to finding the maximum flow.

Theorem:

The maximum flow in a graph is equal to the minimum of the capacities of all possible
cuts in it.

Proof:-

We have x0 ≤ Σ ci. Therefore

max x0 ≤ min Σ ci.

But in the previous theorem we have seen that there is a cut corresponding to which the flow
in u0 is equal to the cut capacity. Necessarily this flow must be the maximum and the corresponding
cut capacity must be the least of all cut capacities.

Remark: This theorem is known as max-flow min-cut theorem.

Duality in the maximum flow problem: The idea is explained using an example.

Consider a network with 5 vertices and 8 arcs

[Figure: a network with source a, sink b, three intermediate vertices, the arcs u1, …, u7 and the return arc u0.]

The problem of maximum flow through this network is stated as a LPP as follows.

Maximise f = x0

Subject to x0 – x1 – x2 = 0
x1 + x4 – x5 = 0
x2 – x3 – x7 = 0
x3 – x4 – x6 = 0
x1 ≤ c1
x2 ≤ c2
x3 ≤ c3
x4 ≤ c4
x5 ≤ c5
x6 ≤ c6
x7 ≤ c7
and xi ≥ 0 , i = 1, 2, …, 7.

Let y1, y2, y3, y4 be the dual variables associated with the first four flow equalities, and zi,
i = 1, 2, …, 7, the dual variables associated with the last seven inequalities.

Then the dual problem is written as

Minimize φ = Σ_{i=1}^{7} ci zi

Subject to y1 ≥ 1
-y1 + y2 + z1 ≥ 0
-y1 + y3 + z2 ≥ 0
-y3 + y4 + z3 ≥ 0
y2 – y4 + z4 ≥ 0
-y2 + z5 ≥ 0
-y4 + z6 ≥ 0
-y3 + z7 ≥ 0

zi ≥ 0 , yj unrestricted.

By the duality theorem, max f = min φ,

or max x0 = min Σ_{i=1}^{7} ci zi    (1)

But we know max x0 = min Σ ci    (2)

where Σ ci is the capacity of any cut. Comparing (1) and (2) we can see that for the optimal solution the
dual variables zi take the values zi = 1 for those arcs which are in the minimum cut and zi = 0
for all other arcs.

4.6 Generalized Problem of maximum flow.

The problem of maximum flow in its generalized form is stated as follows.

Let G(V, U) be a graph consisting of a source va , a sink vb , vertices vj , j = 1, 2, …, m


arcs ui, i = 1, 2, …, m and a return arc u0. The problem is to find a flow {xi} in G such that
x0 is maximum

Subject to bi ≤ xi ≤ ci. Here xi can vary between bi and ci (bi, ci ∈ R, not necessarily non-
negative). A negative flow xi in an arc is admissible and is interpreted as a flow – xi in
the reverse direction.

bi ≤ xi ≤ ci may be written as

0 ≤ xi – bi ≤ ci – bi, or equivalently

0 ≤ yi ≤ ci – bi (yi = xi – bi).

Now, in terms of the new variables yi the constraints are similar to those of the
problem already discussed in a previous section. The main difficulty arises from the fact that
if {xi} is a flow, {yi} is not necessarily a flow. We know that {xi} is a flow if Σ₁ xi = Σ₂ xi, or

if Σ₁ yi + Σ₁ bi = Σ₂ yi + Σ₂ bi

for every vertex vj, where Σ₁ and Σ₂ are summations on arcs going to and going from vj.

If {yi} is also to be a flow, then it must happen that

Σ₁ bi = Σ₂ bi ,

but there is no reason why, in general, this must be so.

The difficulty is overcome by the following artifice. Introduce a vertex v0 in G and


draw arcs according to the following rules. We call the vertex v0 the fictitious vertex and the
new arcs the fictitious arcs.

If ui = (vj , vk) is an arc for which bi > 0, draw the arcs (v0, vk) and (vj , v0) , thus forming a
cycle (v0 , vk , vj , v0). Assume a flow bi in each of the arcs (v0 , vk) , (vk , vj) , (vj , v0) thus
getting a flow in the cycle.

Do this for all arcs ui in G, thus getting a graph G1(V1, U1) with constant flows in
cycles so formed. These flows together form a flow in G1 as it is a linear combination of flows
in cycles. We shall refer to this flow as the fictitious flow in G1.

Let {xi} be any flow in G. It is also a flow in G1. Let this flow be superimposed on the
fictitious flow. The result is a flow because it is a linear combination of two flows. Denoting
this flow by {yi}, we have

yi = xi – bi , for ui in G
yi = the fictitious flow for all fictitious arcs.

Let us now determine the flow {yi} in G1 such that

y0 is maximum

subject to 0 ≤ yi ≤ ci – bi for ui ∈ G,

and yi = fictitious flow for fictitious arcs.

This involves keeping the flow constant in some arcs of G1 and varying in others till y0
is a maximum. This can be done by the algorithm discussed earlier. Having determined the
optimal flow {yi} , we put xi = yi + bi and we have required flow {xi} in G.
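
The variable change yi = xi – bi and the construction of the fictitious flows can be sketched as follows. This Python sketch is our own illustration; the names and the dictionary representation are assumptions, not part of the text.

def transform_lower_bounds(bounds, v0="v0"):
    """Sketch of the variable change yi = xi - bi described above.

    bounds : dict mapping arc (vj, vk) -> (bi, ci)
    Returns (cap, fixed): cap[(vj, vk)] = ci - bi is the capacity for the
    shifted flow yi, and fixed holds the constant flows bi placed on the
    fictitious arcs (v0, vk) and (vj, v0).
    """
    cap, fixed = {}, {}
    for (vj, vk), (b, c) in bounds.items():
        cap[(vj, vk)] = c - b              # 0 <= yi <= ci - bi
        if b > 0:                          # fictitious cycle through v0
            fixed[(v0, vk)] = fixed.get((v0, vk), 0) + b
            fixed[(vj, v0)] = fixed.get((vj, v0), 0) + b
    return cap, fixed

After maximising y0 with these data (holding the fictitious flows fixed), xi = yi + bi recovers the required flow in G.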

*******************



CHAPTER 5
INTEGER PROGRAMMING

5.1 Introduction

In many LP problems we are interested in integer values for the variables in the optimal
solution. For example, if the variables are the number of buses on different routes in a town or the
number of bank branches in different regions of a country, then our LP problem becomes an
integer linear programming problem (ILPP). If some of the variables are restricted to
be integers while others are real numbers, the problem is said to be a mixed integer
programming problem (MILPP).

The set of feasible solutions of the integer Linear Programming problem (ILPP) is not
convex, because it consists of some isolated integer valued points. But when we are forming
the convex hull of all these points, every vertex of this convex hull is a feasible solution of
the ILPP.

5.2 General ILP and MILP problems

The general ILP or MILP problem can be stated as


Minimize f(X) = CX
Subject to AX ≤ B , X ≥ 0,
X an integer or a mixed integer vector.
When we drop the integer constraint we obtain a related LP problem. Every feasible
solution of the ILPP is also a feasible solution of the related LPP. Thus, if TF denotes the set of feasible
solutions of the ILPP or MILPP and SF the set of feasible solutions of the related LPP, then

TF ⊆ SF.

Since SF is non-empty and convex and every point of TF is in SF, the convex linear
combinations of points in TF are also in SF. Hence the convex hull [TF] of the points of TF is a
subset of SF.
Thus TF ⊆ [TF] ⊆ SF.    (1)
Now the ILPP or MILPP can be stated as

Minimize f(X) = CX subject to X ∈ TF    (2)

and the related LPP as

Minimize f(X) = CX subject to X ∈ SF    (3)

Consider another LPP associated with these problems:

Minimize f(X) = CX subject to X ∈ [TF]    (4)
Results:
(i) If an optimal solution of (3) exists and TF is non-empty, then
optimal solutions of (2) and (4) exist. Also the optimal value of (3) is a lower bound for
the optimal values of (2) and (4).
Proof:
Let X0 be an optimal solution of (3). Then for all X in SF,

f(X0) ≤ f(X).

Let Y ∈ TF. Then from (1), Y ∈ SF, and so

f(X0) ≤ f(Y).

This means that f(Y), Y ∈ TF, has a lower bound, and so (2) has an optimal solution.
Similarly we prove that (4) has an optimal solution.
(ii) If an optimal solution of (3) is an integer or a mixed integer
vector as required by the integer constraints, then it is also an optimal solution of (2).
(iii) An optimal solution of (2) is an optimal solution of (4);
conversely, a basic optimal solution of (4) is an optimal solution of (2).
5.3 Methods for Solving ILPP and MILPP
Gomory's cutting plane method:
Consider a general ILPP,
Minimize f(X) = CX
Subject to AX ≤ B , X ≥ 0,
where X is an integer vector.
The related LP problem is

Minimize f(X) = CX
Subject to AX ≤ B , X ≥ 0.
Let a basic optimal solution of the related LP problem be (x1, x2, …, xm, 0, 0, …, 0), and
let the corresponding canonical form of the constraints be

x1 + a'1,m+1 xm+1 + … + a'1,n xn = b'1
x2 + a'2,m+1 xm+1 + … + a'2,n xn = b'2
……
xm + a'm,m+1 xm+1 + … + a'm,n xn = b'm

Since the solution is necessarily feasible,

xi = b'i ≥ 0 , i = 1, 2, …, m.

If all the b'i are integers, we already have the solution of the ILPP and nothing more is to be done.

Suppose that some b'i is not an integer. The corresponding equation in the canonical form is

xi + a'i,m+1 xm+1 + … + a'i,n xn = b'i    (5)

Let

b'i = [b'i] + βi and a'ij = [a'ij] + αij , j = m+1, m+2, …, n,

where [b'i] is the greatest integer ≤ b'i and [a'ij] is the greatest integer ≤ a'ij. Then

[b'i] ≥ 0 (since b'i ≥ 0), 0 < βi < 1 and 0 ≤ αij < 1.

Therefore (5) can be written as

xi + [a'i,m+1] xm+1 + … + [a'i,n] xn – [b'i] = βi – (αi,m+1 xm+1 + … + αi,n xn).

This equation, being one of the constraints, must be satisfied by every feasible solution
of the ILPP as well as of the related LPP.
But for an integer feasible solution the LHS is an integer, and so the RHS too must be
an integer.
Also, since 0 ≤ αij < 1 and the xj are feasible (xj ≥ 0),

αi,m+1 xm+1 + … + αi,n xn ≥ 0.

Hence, since 0 < βi < 1, for the RHS to be integer valued in an integer solution we must have

βi – (αi,m+1 xm+1 + … + αi,n xn) ≤ 0 (a zero or negative integer),

or

αi,m+1 xm+1 + … + αi,n xn ≥ βi.    (6)

But for the optimal solution of the related LPP with which we started, xj = 0 for
j = m+1, m+2, …, n, and so

βi – (αi,m+1 xm+1 + … + αi,n xn) = βi > 0.

Thus we have discovered a linear constraint (6) which is satisfied by the integer solutions
of the problem but which cuts off the optimal solution of the related LPP whenever that solution is non-integral.
The constraint – (αi,m+1 xm+1 + … + αi,n xn) ≤ – βi, written with an equality sign by adding a slack variable, is the
corresponding cutting plane.


We add this constraint to the set of constraints of the related LPP and solve the
modified problem.
If the optimal solution is integral, we stop; otherwise we again obtain a cutting plane
constraint, and continue until an integer feasible solution is obtained. To solve the successively modified LP
problems, the dual simplex method is used.
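
The cut (6) is built from the fractional parts of one simplex row, which can be sketched as below. The function name and the dictionary interface are our own assumptions; the printed call reproduces the cut derived in the worked example that follows.

from math import floor

def gomory_cut(row_coeffs, rhs):
    """Build the cutting-plane constraint (6) from one simplex row.

    row_coeffs : dict mapping non-basic variable name -> a'ij
    rhs        : b'i, the fractional value of the basic variable
    Returns (alphas, beta), meaning  sum_j alphas[j] * xj >= beta,
    equivalently  -sum_j alphas[j] * xj + slack = -beta.
    """
    beta = rhs - floor(rhs)                        # fractional part of b'i
    alphas = {j: a - floor(a) for j, a in row_coeffs.items()}
    return alphas, beta

# The row x1 + (1/2) x3 - x5 = 3/2 of the example below gives the cut
# (1/2) x3 >= 1/2, i.e. -(1/2) x3 + x6 = -1/2.
print(gomory_cut({"x3": 0.5, "x5": -1.0}, 1.5))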
Solve by the cutting plane method,
Minimize f = -2x1 – 3x2
Subject to 2x1 + 2x2 ≤ 7
0 ≤ x1 ≤ 2
0 ≤ x2 ≤ 2
x1, x2 are integers.

First we solve the related LPP


Minimize f(X) = -2x1 – 3x2
Subject to 2x1 + 2x2 ≤ 7
x1 ≤ 2
x2 ≤ 2
x1, x2 ≥ 0
Adding slack variables, the constraints become
2x1 + 2x2 + x3 =7
x1 + x4 =2
x2 + x5 = 2
Here we get a basic feasible solution to the problem as (0, 0, 7, 2, 2).
Then the first iteration table
        Cj      -2     -3     0      0      0
CB  yB  xB      P1     P2     P3     P4     P5     Ratio
0   x3  7       2      2      1      0      0      7/2
0   x4  2       1      0      0      1      0      -
0   x5  2       0      1      0      0      1      2
    f = 0       -2     -3     0      0      0

x2 enters and x5 leaves the basis.


Then, next iteration table is
        Cj      -2     -3     0      0      0
CB  yB  xB      P1     P2     P3     P4     P5
0   x3  3       2      0      1      0      -2
0   x4  2       1      0      0      1      0
-3  x2  2       0      1      0      0      1
    f = -6      -2     0      0      0      3

x1 enters and x3 leaves the basis.

The new iteration table is

        Cj      -2     -3     0      0      0
CB  yB  xB      P1     P2     P3     P4     P5
-2  x1  3/2     1      0      1/2    0      -1
0   x4  1/2     0      0      -1/2   1      1
-3  x2  2       0      1      0      0      1
    f = -9      0      0      1      0      1
Since all the (Cj – Zj) are non-negative, the current solution is optimal, but it is not integer valued.
To form the cutting plane constraint, we consider the equation from the table,

x1 + (1/2) x3 – x5 = 3/2,

i.e. x1 + (0 + 1/2) x3 – x5 = 1 + 1/2.    (a)

Using (a) the cutting plane constraint can be formed as

-(1/2) x3 – 0·x5 ≤ -1/2, i.e. -(1/2) x3 ≤ -1/2,

or, adding a slack variable, -(1/2) x3 + x6 = -1/2.
Adding this additional constraint, the iteration table is
        Cj      -2     -3     0      0      0      0
CB  yB  xB      P1     P2     P3     P4     P5     P6
-2  x1  3/2     1      0      1/2    0      -1     0
0   x4  1/2     0      0      -1/2   1      1      0
-3  x2  2       0      1      0      0      1      0
0   x6  -1/2    0      0      -1/2   0      0      1
    f = -9      0      0      1      0      1      0

Here the solution is not feasible. Applying the dual simplex method, the outgoing variable is
found to be x6 and the incoming variable x3.

Using the dual simplex method, the new iteration table is
        Cj      -2     -3     0      0      0      0
CB  yB  xB      P1     P2     P3     P4     P5     P6
-2  x1  1       1      0      0      0      -1     1
0   x4  1       0      0      0      1      1      -1
-3  x2  2       0      1      0      0      1      0
0   x3  1       0      0      1      0      0      -2
    f = -8      0      0      0      0      1      2

The table shows that the solution is optimal, feasible and also integer valued, so we can stop the
iteration. Hence the solution is
Minimum f = -8 at x1 = 1 , x2 = 2.

5.4 Branch and Bound method


In the branch and bound method for an ILPP or MILPP, we first solve the related LPP. If the
optimal solution contains non-integer values of the variables, we select one such variable.
Let the variable be xp, with non-integer value b, and let [b] be the largest integer less than b.
Since b, being feasible, is non-negative, [b] is also non-negative.
Then formulate two sub problems by imposing on the original LPP the additional constraints xp ≤ [b]
and xp ≥ [b] + 1 respectively. Next solve the two new LP problems, one with the
additional constraint xp ≤ [b] and the other with the additional constraint xp ≥ [b] + 1,
separately. This operation is called branching. In effect, the set of feasible solutions of the
ILPP or MILPP is partitioned into two subsets, and the optimal solution which we are seeking
is in one subset or the other, provided it exists. The two sub problems are further branched
when their solution contains non-integer values.
Branching terminates when any of the following conditions arise.
i) The sub problem yields an optimal solution which satisfies the integer constraints
on all the variables xj, j = 1, 2, …, r; the sub problem is then said to have been
'fathomed'.
ii) The optimum (minimum) value of the objective function in the sub problem is not
lower than the minimum value of the objective function in a sub problem which
has been fathomed.
iii) The sub problem turns out to be infeasible. In standard branch and bound
terminology such a situation is termed 'pruned'.
The reasons for terminating branching in the above three cases are as follows.
In case (i) the optimal solution with the required integer constraints, out of the subset of feasible
solutions of that sub problem, has been obtained, and no further branching in that sub problem
is necessary.
In case (ii), since an integer optimal solution whose value is lower than the optimal value of the
sub problem has been discovered in the set of feasible solutions of another sub problem, the
former sub problem needs no further branching, as it cannot contain a solution which
would make the objective function lower than what has already been discovered in the latter sub
problem.
In case (iii) the sub problem obviously cannot contain the required solution. Sub problems
falling under case (ii) are also said to be 'pruned'.
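
The branching logic described above can be sketched generically as follows. This Python skeleton is our own illustration; the three callables stand in for the relaxation solver and the branching rule, which the text leaves to the simplex method, and the names are assumptions.

def branch_and_bound(problem, solve_relaxation, branch, is_integral):
    """Skeleton of the scheme described above, for a minimisation problem.

    solve_relaxation(p) -> (value, solution), or None if p is infeasible
    branch(p, solution) -> the two sub problems obtained by adding
                           xp <= [b] and xp >= [b] + 1 for a fractional xp
    is_integral(sol)    -> True when the integer constraints are satisfied
    """
    best_value, best_solution = float("inf"), None
    active = [problem]
    while active:
        p = active.pop()
        result = solve_relaxation(p)
        if result is None:                 # case (iii): infeasible, pruned
            continue
        value, solution = result
        if value >= best_value:            # case (ii): cannot improve, pruned
            continue
        if is_integral(solution):          # case (i): fathomed
            best_value, best_solution = value, solution
        else:
            active.extend(branch(p, solution))
    return best_value, best_solution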

Consider the problem


Minimize f = 3x4 + 4x5 + 5x6
Subject to 2x1 + 2x4 – 4x5 + 2x6 = 3
2x2 + 4x4 + 2x5 – 2x6 = 5
x3 – x4 + x5 + x6 = 4
xi ≥ 0 , i = 1, 2, …, 6
x1, x2 are integers

First solve the problem without considering the integer constraints. The problem is termed as
problem 1. Its sub problems are termed as problem 11 and problem 12. And naming is
continued like this.
The successive branches of the problem and their solutions are shown below; the additional
constraint of each branch is indicated against it.


Problem 1 (related LPP, no integer constraints): x1 = 3/2 , x2 = 5/2 , f = 0

  Problem 11 (add x1 ≤ 1): x1 = 1 , x2 = 3/2 , f = 3/2
    Problem 111 (add x2 ≤ 1): x1 = 3/4 , x2 = 1 , f = 9/4    (pruned, reason (ii))
    Problem 112 (add x2 ≥ 2): x1 = 1 , x2 = 2 , f = 11/6     (fathomed, reason (i))

  Problem 12 (add x1 ≥ 2): x1 = 2 , x2 = 9/4 , f = 1
    Problem 121 (add x2 ≤ 2): x1 = 2 , x2 = 2 , f = 3/2      (fathomed, reason (i))
    Problem 122 (add x2 ≥ 3): x1 = 2 , x2 = 3 , f = 23/2     (fathomed, reason (i))

From the fathomed solutions, the minimum value of the objective function is 3/2, attained at
x1 = 2 , x2 = 2. Hence this is the optimal solution of the problem.

5.5 The 0 – 1 variable problems


Many problems in Operations Research can be formulated as mixed integer programmes
with some (or all) variables constrained to take the value 0 or 1. Such variables are called 0–1
variables.
The presence of such a variable is equivalent to the integer constraints 0 ≤ xj ≤ 1, xj
an integer, for the variable xj taking the values 0 or 1.
Suppose there is a problem with 'either-or' constraints:
either g1(X) ≤ 0 or g2(X) ≤ 0.
The problem can be formulated using 0–1 variables.
Let y1 = 0 or 1
and y2 = 0 or 1.

Let L1, L2 be sufficiently large but otherwise arbitrary positive numbers such that, within the
circumstances of the problem, g1(X) ≤ L1 and g2(X) ≤ L2.
Then the given 'either-or' constraints can be written, using 0–1 variables, as
g1(X) ≤ L1 y1 , g2(X) ≤ L2 y2 , y1 + y2 = 1.
There are many such situations which one can take care of by introducing 0–1
variables.
*******************



CHAPTER 6
QUADRATIC PROGRAMMING

6.1 Lagrangian functions : Saddle point

Consider the programming problem

Minimize f(X) , X ∈ En

Subject to gi(X) ≤ 0 , i = 1, 2, …, m

Using the vector Y = (y1, y2, …, ym)', let us define the function

F(X, Y) = f(X) + Σ_{i=1}^{m} yi gi(X) = f(X) + Y'G(X),

where G(X) = [g1(X), g2(X), …, gm(X)]'.

The function F so defined is called the Lagrangian function, and the components of the vector Y,
that is y1, y2, y3, …, ym, are called Lagrangian multipliers. A point (X0, Y0) is said to be a
saddle point of F(X, Y) if

F(X0, Y) ≤ F(X0, Y0) ≤ F(X, Y0)

in some neighbourhood of (X0, Y0). The saddle point of F(X, Y), if it exists, and the minimal point
of f(X) bear a theoretical relationship. We will unearth this relationship through a number of
theorems.

Theorem: A1

If F(X, Y) has a saddle point (X0, Y0) for every Y ≥ 0, then

G(X0) ≤ 0 , Y0'G(X0) = 0.

Proof:-

Let (X0, Y0) be a saddle point of F(X, Y); then F(X0, Y) ≤ F(X0, Y0) ≤ F(X, Y0), i.e.

f(X0) + Y'G(X0) ≤ f(X0) + Y0'G(X0) ≤ f(X) + Y0'G(X)    (A1)

⇒ Y'G(X0) ≤ Y0'G(X0)    (A2)
If possible, let gi(X0) > 0 for some i. For the given Y0 we can take yi, the ith component of Y,
sufficiently large so that Y'G(X0) is large enough to violate (A2).

Thus gi(X0) ≤ 0 for all i,

or G(X0) ≤ 0.    (A3)

Now, since Y0 ≥ 0 and G(X0) ≤ 0,
Y0'G(X0) ≤ 0.
Putting Y = 0 in (A2) we get
0 ≤ Y0'G(X0).
Thus Y0'G(X0) = 0.    (A4)
Theorem: A2

If F(X, Y) has a saddle point (X0, Y0) for every Y ≥ 0, then X0 is a minimal point of f(X)
subject to the constraints G(X) ≤ 0.

Proof:-

Since (X0, Y0) is a saddle point

f(X0) + Y0'G(X0) ≤ f(X) + Y0'G(X)

⇒ f(X0) ≤ f(X) + Y0'G(X), since Y0'G(X0) = 0.
Since Y0 ≥ 0 and G(X) ≤ 0, we have Y0'G(X) ≤ 0, and so f(X0) ≤ f(X) for all X
for which G(X) ≤ 0.

This shows that X0 is a minimal point.

The converse of the above theorem is not true in general. However, if f(X) and all the gi(X) are
convex functions, then the converse is also true. We now proceed to prove this theorem. First
we consider the following result.

Theorem: A3

Let X0 be a solution to the convex programming problem

Minimize f(X)
Subject to gi(X) ≤ 0 , X ≥ 0,
where f(X) and gi(X) are convex functions. Further, let the set of points X such that

G(X) < 0

be non-empty. Then there exists a vector Y0 ≥ 0 such that

f(X) + Y0'G(X) ≥ f(X0).

Now we have the final theorem of this section.

Theorem: A4

If X0 is a solution of the convex programming problem satisfying the conditions of the
above theorem, then there exists a vector Y0 in Em such that (X0, Y0) is a saddle point of
F(X, Y).

Proof:-

From the last theorem,

f(X) + Y0'G(X) ≥ f(X0)    (A5)

⇒ f(X0) + Y0'G(X0) ≥ f(X0), i.e. Y0'G(X0) ≥ 0.
But Y0 ≥ 0 and G(X0) ≤ 0, so Y0'G(X0) ≤ 0.
∴ f(X0) = f(X0) + Y0'G(X0) = F(X0, Y0).

Hence

f(X) + Y0'G(X) ≥ f(X0)
⇒ f(X) + Y0'G(X) ≥ f(X0) + Y0'G(X0),
i.e. F(X, Y0) ≥ F(X0, Y0).    (A6)

Again Y ≥ 0 and G(X0) ≤ 0
⇒ Y'G(X0) ≤ 0
⇒ f(X0) + Y'G(X0) ≤ f(X0) = F(X0, Y0),
or F(X0, Y) ≤ F(X0, Y0).    (A7)

From (A6) and (A7),

F(X0, Y) ≤ F(X0, Y0) ≤ F(X, Y0).

Hence (X0, Y0) is a saddle point of F(X, Y).

Remark: We have proved that a sufficient condition that X0 is a minimal point of f(X)
subject to the constraints G(X) ≤ 0 is that the function F(X, Y) has a saddle point (X0, Y0),
Y0 ≥ 0. It also becomes a necessary condition if f(X) and gi(X) are convex functions and there
exists an X such that G(X) < 0. This is known as the Kuhn–Tucker theorem.

6.2 Kuhn – Tucker Conditions

Suppose (X0, Y0) is a saddle point of the Lagrangian function F(X, Y)
defined above. Then

F(X0, Y) ≤ F(X0, Y0) ≤ F(X, Y0)    (B1)


Let X be a neighbouring point of X0. Since X is unrestricted, X0 is an interior point of the
neighbourhood under consideration. The right-side inequality of (B1) implies that X0 is a
local minimum point of F(X, Y0), and so

[∂F(X, Y0)/∂xj]X=X0 = 0 , j = 1, 2, …, n    (B2)

Let Y be a point in the neighbourhood of Y0. Since Y ≥ 0 for every Y, in general some of
the components of Y0 may be zero and some positive. Denoting by yi0 the components of Y0,
let

(i) yi0 > 0 , i = 1, 2, … k


(ii) yi0 = 0 , i = k + 1, … m

Let the neighbouring point Y differ from Y0 only in the ith component, the other
components in the two being equal. Then, using Taylor's series expansion,

F(X0, Y) – F(X0, Y0) = (yi – yi0) [∂F(X0, Y)/∂yi]Y=Y0 + …

By choosing yi – yi0 sufficiently small,

F(X0, Y) – F(X0, Y0) can be made to depend on the sign of the term on the right. Let yi0 > 0;
then yi – yi0 can be made positive or negative by a suitable choice of yi, which can be greater than
or less than yi0. Therefore F(X0, Y) – F(X0, Y0) could be made positive or negative by a proper
choice of Y. But by the left-side inequality of (B1) it is never positive. Therefore yi0 > 0

implies


[∂F(X0, Y)/∂yi]Y=Y0 = 0 , i = 1, 2, …, k    (B3)

Now suppose yi0 = 0; then yi – yi0 is always non-negative.

Therefore, for i = k+1, …, m,

[∂F(X0, Y)/∂yi]Y=Y0 ≤ 0    (B4)

Hence yi0 [∂F(X0, Y)/∂yi]Y=Y0 = 0    (B5)
If we write F(X, Y) = f(X) + Σ_{i=1}^{m} yi gi(X), then (B2) and (B5) together imply

∂f/∂xj + Σ_{i=1}^{m} yi (∂gi/∂xj) = 0 , j = 1, 2, …, n    (B6)

gi(X) ≤ 0
yi gi(X) = 0 , i = 1, 2, …, m    (B7)
yi ≥ 0

Suppose in the programming problem we impose the additional restriction X ≥ 0. Again
(X0, Y0) is a saddle point when F(X0, Y) ≤ F(X0, Y0) ≤ F(X, Y0), X ≥ 0 , Y ≥ 0.
By Taylor's series expansion we get

F(X, Y0) – F(X0, Y0) = (xi – xi0) [∂F(X, Y0)/∂xi]X=X0 + …

Repeating the arguments used above we conclude that

 F ( X , Y0 ) 
   0 and
 x j  X  0

 F ( X , Y0 ) 
xi0   =0 (B8)
 x j  X  X
0

Combining (B7) and (B8) we get

∂f/∂xj + Σ_{i=1}^{m} yi (∂gi/∂xj) ≥ 0
xj (∂f/∂xj + Σ_{i=1}^{m} yi (∂gi/∂xj)) = 0 , xj ≥ 0 , gi(X) ≤ 0    (B9)
yi gi(X) = 0 , i = 1, 2, …, m
yi ≥ 0

The sets of conditions (B6) and (B7), or (B9), are the Kuhn–Tucker conditions. The conditions
(B6) and (B7) are the necessary conditions which (X0, Y0) must satisfy if it is a saddle
point of F(X, Y), provided Y ≥ 0 and X is unrestricted. The conditions (B9) are the necessary
conditions for (X0, Y0) to be a saddle point provided X ≥ 0 , Y ≥ 0. Notice that the above
conditions are only necessary, not sufficient.

Example:-

Minimize f(X) = (x1 + 1)² + (x2 – 2)²

Subject to g1(X) = x1 – 2 ≤ 0
g2(X) = x2 – 1 ≤ 0
x1 ≥ 0 , x2 ≥ 0

The associated Lagrangian function is

F(X, Y) = (x1 + 1)² + (x2 – 2)² + y1(x1 – 2) + y2(x2 – 1).

The Kuhn–Tucker conditions (B9) are
2(x1 + 1) + y1 ≥ 0 , 2(x2 – 2) + y2 ≥ 0
x1 (2(x1 + 1) + y1) = 0 , x2 (2(x2 – 2) + y2) = 0
x1 – 2 ≤ 0 , x2 – 1 ≤ 0
y1(x1 – 2) = 0 , y2(x2 – 1) = 0
x1, x2, y1, y2 ≥ 0

The four equations among the above conditions give the following 9 solutions.

x1 x2 y1 y2
(1) 0 0 0 0
(2) 0 1 0 2
(3) 0 2 0 0
(4) 2 0 -6 0
(5) 2 1 -6 2
(6) 2 2 -6 0
(7) -1 2 0 0
(8) -1 0 0 0
(9) -1 1 0 2

Of these solutions, (2) is the only one which satisfies the other conditions. Hence the minimum of
f(X) is 2, attained at x1 = 0 , x2 = 1.
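
Checking a candidate against the Kuhn–Tucker conditions written out above can be automated; the sketch below is our own illustration for this example (the function name is an assumption). Candidate (2) of the table should give non-negative stationarity terms, zero complementarity products and feasible constraint values.

def kt_residuals(x1, x2, y1, y2):
    """Evaluate the Kuhn-Tucker conditions of the example for one candidate."""
    stationarity = (2 * (x1 + 1) + y1, 2 * (x2 - 2) + y2)        # must be >= 0
    complementarity = (x1 * stationarity[0], x2 * stationarity[1],
                       y1 * (x1 - 2), y2 * (x2 - 1))             # must be = 0
    feasibility = (x1 - 2, x2 - 1)                               # must be <= 0
    signs = (x1, x2, y1, y2)                                     # must be >= 0
    return stationarity, complementarity, feasibility, signs

# Candidate (2) of the table: x1 = 0, x2 = 1, y1 = 0, y2 = 2
print(kt_residuals(0, 1, 0, 2))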

6.3 Primal and dual problems

Kuhn-Tucker theory also leads to the primal-dual concept in optimization theory.

First of all notice that the definition of saddle point, namely

F(X0, Y) ≤ F(X0, Y0) ≤ F(X, Y0)

is equivalent to

F(X0, Y0) = min_X max_Y F(X, Y) = max_Y min_X F(X, Y).

The idea is that if F(X, Y) is first maximized with respect to Y for any fixed X and then
minimized with respect to X, and also if this order is changed and F is first minimized with
respect to X for any fixed Y and then maximized with respect to Y, and if the two
operations lead to the same value of F(X, Y), then and only then does F(X, Y) have a saddle point.

Notice that the Lagrangian function is

F(X, Y) = f(X) + Y'G(X), subject to G(X) ≤ 0 , Y ≥ 0.

Since G(X) ≤ 0 and Y ≥ 0,

max_Y F(X, Y) = f(X),

so min_X max_Y F(X, Y) = min_X f(X).

Also min_X F(X, Y) = min_X [f(X) + Y'G(X)] = φ(Y), say.

If the saddle point of F(X, Y) exists, then

min_X f(X) = max_Y φ(Y).

Since φ(Y) = min_X [f(X) + Y'G(X)], the necessary constraint on φ(Y) is ∇[f(X) + Y'G(X)] = 0.

Hence the dual of the problem

Min f(X) such that gi(X) ≤ 0 , i = 1, 2, …, m

is

Max φ(Y) = max [f(X) + Y'G(X)]

subject to ∂f(X)/∂xj + Σ_i yi (∂gi/∂xj) = 0 , j = 1, 2, …, n,

yi ≥ 0 , i = 1, 2, …, m.

6.4 Quadratic Programming

A programming problem in which the objective function is quadratic and constraints


are linear is called quadratic programming problem. More specifically a quadratic
programming problem has the form

Minimize f(X) = PX + X'CX = Σj pj xj + Σj Σk cjk xj xk    (D1)
subject to AX ≤ B ; X ≥ 0    (D2)

Here X ∈ En, C = (cjk) is an n×n matrix, P = (pj) a row n-vector, A = (aij) an m×n
matrix and B = (bi) a column m-vector, and X'CX is positive semi-definite. The Lagrangian
function associated with the programming problem is

F(X, Y) = PX + X'CX + Y'(AX – B)

= Σj pj xj + Σj Σk cjk xj xk + Σi yi (Σj aij xj – bi)

The Kuhn–Tucker conditions, using (B9), are

pj + 2 Σk cjk xk + Σi yi aij ≥ 0
xj (pj + 2 Σk cjk xk + Σi yi aij) = 0
xj ≥ 0 , j = 1, 2, …, n ;

and Σj aij xj – bi ≤ 0
yi (Σj aij xj – bi) = 0
yi ≥ 0 , i = 1, 2, …, m.

Introducing slack and surplus variables in the inequalities, the above become

pj + 2 Σk cjk xk + Σi yi aij – vj = 0
xj vj = 0
xj ≥ 0 , vj ≥ 0
Σj aij xj – bi + wi = 0
yi wi = 0
yi , wi ≥ 0 , i = 1, 2, …, m

This can be rearranged as

2 Σk cjk xk + Σi aij yi – vj = – pj    (D3)

Σj aij xj + wi = bi

subject to

xj vj = 0 , yi wi = 0 ; xj , yi , vj , wi ≥ 0.    (D4)

(D4) says that xj ≠ 0 ⇒ vj = 0
vj ≠ 0 ⇒ xj = 0
yi ≠ 0 ⇒ wi = 0
wi ≠ 0 ⇒ yi = 0
Thus out of the total number of variables 2m + 2n, at least m+n must be zero. It means we are
looking for a basic solution of (D3) subject to (D4). The simplex method can be used to
obtain a solution.

Example:-
Minimize f(X) = -x1 – x2 – x3 + ½(x1² + x2² + x3²)
Subject to g1(X) = x1 + x2 + x3 – 1 ≤ 0
g2(X) = 4x1 + 2x2 – 7/3 ≤ 0
x1 , x2 , x3 ≥ 0
The Lagrangian function for this problem is

F(X, Y) = -x1 – x2 – x3 + ½(x1² + x2² + x3²) + y1(x1 + x2 + x3 – 1) + y2(4x1 + 2x2 – 7/3),

and the Kuhn–Tucker conditions are
-1 + x1 + y1 + 4y2 ≥ 0
-1 + x2 + y1 + 2y2 ≥ 0
-1 + x3 + y1 ≥ 0
x1(-1 + x1 + y1 + 4y2) + x2(-1 + x2 + y1 + 2y2) + x3(-1 + x3 + y1) = 0
x1 + x2 + x3 – 1 ≤ 0
4x1 + 2x2 – 7/3 ≤ 0
y1(x1 + x2 + x3 – 1) + y2(4x1 + 2x2 – 7/3) = 0,
x1 , x2 , x3 , y1 , y2 ≥ 0

Introducing the necessary slack and surplus variables we get

-1 + x1 + y1 + 4y2 – v1 = 0
-1 + x2 + y1 + 2y2 – v2 = 0
-1 + x3 + y1 – v3 = 0
-1 + x1 + x2 + x3 + w1 = 0
-7/3 + 4x1 + 2x2 + w2 = 0
x1 v1 + x2 v2 + x3 v3 = 0
y1 w1 + y2 w2 = 0
x1, x2, x3, y1, y2, v1, v2, v3, w1, w2 ≥ 0

Thus there are 5 linear equations in 10 variables. The last three sets of conditions imply that
only five of them can be nonzero at a time. To solve the problem we introduce three artificial
variables S1, S2, S3 into the first three equations, and Phase I of the two-phase simplex method is
now applicable. Phase I ends when the artificial variables are removed from the basis, and the
solution so obtained is the solution to the problem.

Basis Value x1 x2 x3 y1 y2 V1 v2 v3 w1 w2 S1 S2 S3
S1 1 1 0 0 1 4 -1 0 0 0 0 1 0 0
S2 1 0 1 0 1 2 0 -1 0 0 0 0 1 0
S3 1 0 0 1 1 0 0 0 -1 0 0 0 0 1
W1 1 1 1 1 0 0 0 0 0 1 0 0 0 0
W2 7/3 4 2 0 0 0 0 0 0 0 1 0 0 0
f -3 -1 -1 -1 -3 -6 1 1 1
S1 1 1 0 0 1 4 -1 0 0 0 0 1 0 0
S2 1 0 1 0 1 2 0 -1 0 0 0 0 1 0
x3 1 0 0 1 1 0 0 0 -1 0 0 0 0 1
W1 1 1 1 1 0 0 0 0 0 1 0 0 0 0
W2 7/3 4 2 0 0 0 0 0 0 0 1 0 0 0
f -2 -1 -1 -2 -6 1 1 0 1
S1 1 1 0 0 1 4 -1 0 0 0 1 0
S2 1 -1 0 0 2 2 0 -1 -1 -1 1 1
x3 1 0 0 1 1 0 0 0 -1 0 1
x2 0 1 1 0 -1 0 0 0 1 1 -1
W2 7/3 2 0 0 2 0 0 0 -2 -2 1 2
f -2 0 -3 -6 1 1 1 1 0
S1 ½ 3/2 0 0 0 3 -1 ½ ½ ½ 1 -1/2 -1/2
y1 ½ -1/2 0 0 1 1 0 -1/2 -1/2 -1/2 ½ ½
x3 ½ ½ 0 1 0 -1 0 ½ -1/2 1/2 -1/2 1/2
x2 ½ ½ 1 0 0 1 0 -1/2 ½ ½ ½ -1/2
W2 4/3 3 0 0 0 -2 0 1 -1 -1 1 -1 1
f -1/2 -3/2 -3 1 -1/2 -1/2 -1/2 3/2 3/2
x1 1/3 1 0 0 0 2 -2/3 1/3 1/3 1/3 2/3 -1/3 -1/3
y1 2/3 0 0 0 1 2 -1/3 -1/3 -1/3 -1/3 1/3 1/3 1/3
x3 1/3 0 0 1 0 -2 1/3 1/3 -1/3 1/3 -1/3 -1/3 -1/3
x2 1/3 0 1 0 0 0 1/3 2/3 1/3 1/3 -1/3 2/3 -1/3
W2 1/3 0 0 0 0 -8 2 0 -2 -2 1 -2 0 2
f 0 0 0 0 0 0 1 1 1

In the first iteration the variable y2 with most negative net evaluation cannot enter the basis as
w2 cannot be removed from the basis immediately (y2w2 = 0). Similarly y1 cannot enter the

basis as w1 cannot be removed immediately. Subsequently x3 enters the basis and S3 is
removed from the basis. In the second iteration x2 enters and w1 is removed and in the third
iteration y1 enters and S2 is removed. Finally in the fourth iteration x1 enters and S1 goes out.

The solution of the problem is x1 = x2 = x3 = 1/3 and Min f = -5/6.
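
The solution read off the final table can be verified numerically. The sketch below is our own check, using exact fractions; the multipliers y1 = 2/3, y2 = 0 are taken from the final table.

from fractions import Fraction as F

x = [F(1, 3)] * 3                      # x1 = x2 = x3 = 1/3 from the final table
y1, y2 = F(2, 3), F(0)                 # multipliers read off the same table

f = -sum(x) + F(1, 2) * sum(xi * xi for xi in x)
print(f)                               # -5/6, the minimum value

# Stationarity: -1 + xi + y1 (+ 4*y2 or 2*y2) should vanish at the optimum
print(-1 + x[0] + y1 + 4 * y2, -1 + x[1] + y1 + 2 * y2, -1 + x[2] + y1)
# Constraints: x1 + x2 + x3 - 1 <= 0 (active) and 4*x1 + 2*x2 - 7/3 <= 0
print(sum(x) - 1, 4 * x[0] + 2 * x[1] - F(7, 3))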

6.5 Separable programming

This is another kind of non linear programming problem where a modified version of
the simplex method is applicable.

Definition:

A function f(x1, x2, …, xn) is said to be separable if it is possible to express it as a sum
of functions, each of which is a function of a single variable. In other words, f(x1, x2, …, xn) can be
written as f(x1, x2, …, xn) = f1(x1) + f2(x2) + … + fn(xn).

Definition:

The programming problem

Minimize f(x1, x2, …, xn)

Subject to gi(x1, x2, …, xn) = bi , i = 1, 2, …, m

xj ≥ 0 , j = 1, 2, …, n

is said to be separable if all the functions f, gi, i = 1, 2, …, m are separable.

The technique employed here is to approximate the non linear functions by piecewise
linear functions and then apply the simplex technique in some manner to get an optimal or
near optimal solution.

Let φ(t) be a non linear function of a single variable t and let t lie in a small interval [t1, t2].
Write φ1 = φ(t1), φ2 = φ(t2). Let ψ(t) be the equation of the line passing through (t1, φ1)
and (t2, φ2); then

(ψ(t) – φ1)/(t – t1) = (φ2 – φ1)/(t2 – t1)

or ψ(t) = φ1 (t2 – t)/(t2 – t1) + φ2 (t – t1)/(t2 – t1)
        = λ1 φ1 + λ2 φ2 , λ1 + λ2 = 1 ,
          λ1 , λ2 ≥ 0.

Now suppose we want to approximate the non-linear function φ(t), a ≤ t ≤ b, by a
piecewise linear function. We divide the interval [a, b] into n subintervals [tj, tj+1], j = 0, 1,
…, n-1. In each of these subintervals we may approximate φ(t) by a linear function, as
explained above.

In [t0, t1] :  φ(t) = λ0 φ0 + λ1 φ1 ,  λ0 + λ1 = 1 ;  λ0 ≥ 0 , λ1 ≥ 0

In [t1, t2] :  φ(t) = λ1 φ1 + λ2 φ2 ,  λ1 + λ2 = 1 ;  λ1 ≥ 0 , λ2 ≥ 0

……………………………………………………………….

In [tj, tj+1] :  φ(t) = λj φj + λj+1 φj+1 ,  λj + λj+1 = 1 ;  λj ≥ 0 , λj+1 ≥ 0

Combining these equations into a single expression we get

φ(t) = λ0 φ0 + λ1 φ1 + … + λn φn

subject to (i)  Σ (j = 0 to n) λj = 1

(ii)  λj ≥ 0

(iii) No more than two of the λ's can be non-zero at the same time, and if two
of them are non-zero, then they must be adjacent.

For example consider the function f(x) = x^3 , 0 ≤ x ≤ 4. We may divide the
interval [0, 4] into four equal parts with nodal points x = 0, 1, 2, 3, 4 and f0 = 0 , f1 = 1 , f2 =
8 , f3 = 27 , f4 = 64. Then

x^3 ≈ λ0 f0 + λ1 f1 + λ2 f2 + λ3 f3 + λ4 f4

    = λ1 + 8 λ2 + 27 λ3 + 64 λ4

Σ (j = 0 to 4) λj = 1 ,  λj ≥ 0 , j = 0, 1, 2, 3, 4, and not more than two of these variables are non-
zero, and if two are non-zero they are adjacent.
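This λ-representation is easy to reproduce numerically. The sketch below builds the breakpoint values f0, …, f4 for f(x) = x^3 on [0, 4] and, for any x, puts all the weight on the two adjacent λ's, exactly as condition (iii) requires.

    import numpy as np

    nodes = np.array([0, 1, 2, 3, 4], dtype=float)
    f_vals = nodes**3                       # f0..f4 = 0, 1, 8, 27, 64

    def lambda_weights(x, nodes):
        """Return lambda_0..lambda_n with at most two adjacent non-zero entries."""
        lam = np.zeros(len(nodes))
        j = np.searchsorted(nodes, x) - 1
        j = max(0, min(j, len(nodes) - 2))  # index of the left breakpoint
        t = (x - nodes[j]) / (nodes[j + 1] - nodes[j])
        lam[j], lam[j + 1] = 1 - t, t       # adjacent weights, summing to 1
        return lam

    for x in [0.5, 2.5, 3.2]:
        lam = lambda_weights(x, nodes)
        approx = lam @ f_vals               # lambda_0 f0 + ... + lambda_4 f4
        print(f"x = {x}:  approx = {approx:.2f},  exact x^3 = {x**3:.2f}")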

Example:

Maximize f(x1, x2) = 2x1 + 3x2^4 + 4

Subject to g1(x1, x2) = 4x1 + 2x2^2 ≤ 16

x1 ≥ 0 , x2 ≥ 0

Clearly both the objective function and the constraint are separable:

f(x1, x2) = f1(x1) + f2(x2) = (2x1 + 4) + 3x2^4
g1(x1, x2) = g11(x1) + g12(x2) = 4x1 + 2x2^2

of which f1(x1) and g11(x1) are already linear. From the constraints 4x1 + 2x2^2 ≤ 16, x1 ≥ 0,
x2 ≥ 0 it is clear that 0 ≤ x2 ≤ 3. Dividing [0, 3] into three equal parts we have

j     x2j     f2 = 3x2^4     g12 = 2x2^2
0      0           0              0
1      1           3              2
2      2          48              8
3      3         243             18
Hence f2 = 3 λ21 + 48 λ22 + 243 λ23

g12 = 2 λ21 + 8 λ22 + 18 λ23

Σ (j = 0 to 3) λ2j = 1 ,  λ2j ≥ 0 , j = 0, 1, 2, 3.

And if 0 < λ2j < 1, then only one of λ2,j+1 or λ2,j-1 may be non-zero. Thus the given problem can be
replaced by the approximate problem

Maximize f = 2x1 + 4 + 3 λ21 + 48 λ22 + 243 λ23

Subject to g1 = 4x1 + 2 λ21 + 8 λ22 + 18 λ23 ≤ 16

λ20 + λ21 + λ22 + λ23 = 1

x1 , λ20 , λ21 , λ22 , λ23 ≥ 0

with not more than two of the λ's, and those two necessarily adjacent, being non-zero. A
simplex method with a restricted basis can be used to solve the problem. First introduce a slack
variable to make the first constraint an equality. The rest of the calculations are summarized in
the table below.

Basis    Value     x1      λ20     λ21     λ22     λ23      S
S         16        4       0       2       8       18      1
λ20        1        0       1       1       1        1      0
-f         4       -2       0      -3     -48     -243      0

S          8        4      -8      -6       0       10      1
λ22        1        0       1       1       1        1      0
-f        52       -2      48      45       0     -195      0

λ23       4/5      2/5    -4/5    -3/5      0        1     1/10
λ22       1/5     -2/5     9/5     8/5      1        0    -1/10
-f        208      76    -108     -72       0        0    195/10
S and λ20 are the obvious choices for the initial basic variables. The relative cost coefficient of λ23 is the
most negative. But λ23 cannot enter the basis, as it would result in the removal of S, so that λ20
and λ23 would both be positive, which is not allowed. The next candidate λ22 can enter
the basis, as it results in the removal of λ20. In the very next iteration S is replaced by
λ23. The solution obtained in the third iteration is not optimal; λ20 or λ21 would have to replace
λ22 at the next stage. But this would end up with the inadmissible combination of either (λ23,
λ20) or (λ23, λ21) in the basis. Hence we cannot improve the solution any further. The
approximate optimal solution is x1 = λ20 = λ21 = 0 , λ22 = 1/5 , λ23 = 4/5 and f = 208.
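If SciPy is available, the approximate problem can also be handed to an ordinary LP solver, as in the sketch below. An unrestricted LP solver knows nothing about the adjacency condition on the λ's, so its answer must be checked against it; in this instance it puts weight on the non-adjacent pair λ20 and λ23, which is exactly why the restricted-basis rule above stops at f = 208 instead.

    from scipy.optimize import linprog

    # Variables: [x1, lam20, lam21, lam22, lam23]; linprog minimizes, so negate the objective.
    c = [-2, 0, -3, -48, -243]                   # maximize 2*x1 + 3*lam21 + 48*lam22 + 243*lam23 (+4)
    A_ub = [[4, 0, 2, 8, 18]]                    # 4*x1 + 2*lam21 + 8*lam22 + 18*lam23 <= 16
    b_ub = [16]
    A_eq = [[0, 1, 1, 1, 1]]                     # lam20 + lam21 + lam22 + lam23 = 1
    b_eq = [1]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 5)
    x1, lam = res.x[0], res.x[1:]
    print("unrestricted LP:  f =", 4 - res.fun, "  lambdas =", lam.round(3))

    # The LP optimum puts positive weight on non-adjacent lambdas (lam20 and lam23),
    # so it is not admissible for the separable approximation; the restricted-basis
    # simplex of the text stops instead at lam22 = 1/5, lam23 = 4/5, f = 208.
    print("admissible approximation from the text: f =", 4 + 48*(1/5) + 243*(4/5))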

Example: 2
Maximize f(x1, x2) = 2x1 + 3x2^4 + 4

Subject to g1(x1, x2) = 3x1^2 + 4x2^2 ≤ 36

x1 ≥ 0 , x2 ≥ 0

Notice that

f(x1, x2) = f1(x1) + f2(x2) = 2x1 + 4 + 3x2^4
g1(x1, x2) = g11(x1) + g12(x2) = 3x1^2 + 4x2^2

From the constraint it is immediate that 0 ≤ x1 ≤ 4 and 0 ≤ x2 ≤ 3. Dividing these
intervals into subintervals of unit length, we get the values of the functions at the nodal points
as follows.
j    x1j    f1(x1j) = 2x1j + 4    g11(x1j) = 3x1j^2    x2j    f2(x2j) = 3x2j^4    g12(x2j) = 4x2j^2
0     0            4                      0              0            0                  0
1     1            6                      3              1            3                  4
2     2            8                     12              2           48                 16
3     3           10                     27              3          243                 36
4     4           12                     48              -            -                  -

The piecewise linear approximations are

f1(x1) = 2x1 + 4 ≈ 4 λ10 + 6 λ11 + 8 λ12 + 10 λ13 + 12 λ14
g11(x1) = 3x1^2 ≈ 3 λ11 + 12 λ12 + 27 λ13 + 48 λ14
f2(x2) = 3x2^4 ≈ 3 λ21 + 48 λ22 + 243 λ23
g12(x2) = 4x2^2 ≈ 4 λ21 + 16 λ22 + 36 λ23

Thus the approximate problem becomes

Maximize f = 4 λ10 + 6 λ11 + 8 λ12 + 10 λ13 + 12 λ14 + 3 λ21 + 48 λ22 + 243 λ23

Subject to

3 λ11 + 12 λ12 + 27 λ13 + 48 λ14 + 4 λ21 + 16 λ22 + 36 λ23 ≤ 36

λ10 + λ11 + λ12 + λ13 + λ14 = 1

λ20 + λ21 + λ22 + λ23 = 1

λ10 , λ11 , …………, λ23 ≥ 0

with not more than two of the λ's, and those two necessarily adjacent, being non-zero.

The subsequent simplex procedure is summarized below.

Basis    Values    λ10    λ11    λ12    λ13    λ14    λ20    λ21     λ22     λ23     S
S          36       0      3     12     27     48      0      4      16      36     1
λ10         1       1      1      1      1      1      0      0       0       0     0
λ20         1       0      0      0      0      0      1      1       1       1     0
-f          4       0     -2     -4     -6     -8      0     -3     -48    -243     0

S           0       0      3     12     27     48    -36    -32     -20       0     1
λ10         1       1      1      1      1      1      0      0       0       0     0
λ23         1       0      0      0      0      0      1      1       1       1     0
-f        247       0     -2     -4     -6     -8    243    240     195       0     0

λ11         0       0      1      4      9     16    -12   -32/3   -20/3      0    1/3
λ10         1       1      0     -3     -8    -15     12    32/3    20/3      0   -1/3
λ23         1       0      0      0      0      0      1      1       1       1     0
-f        247       0      0      4     12     24    219   656/3   545/3      0    2/3

The optimal solution is λ10 = 1, λ23 = 1, with all other variables zero, and f = 247. In
terms of the original variables x1 = 0 , x2 = 3 , Max f = 247.
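As a cross-check, the original (unapproximated) problem can be solved directly with a numerical optimizer; the sketch below uses SciPy's SLSQP from an arbitrary interior starting point. It should land essentially at x = (0, 3) (more precisely at a boundary point with a very small positive x1), with an objective value just above 247, which shows how tight the unit grid already is for this example.

    import numpy as np
    from scipy.optimize import minimize

    # Maximize f(x1, x2) = 2*x1 + 3*x2**4 + 4 subject to 3*x1**2 + 4*x2**2 <= 36, x >= 0.
    obj = lambda z: -(2*z[0] + 3*z[1]**4 + 4)                      # minimize -f
    con = {"type": "ineq", "fun": lambda z: 36 - 3*z[0]**2 - 4*z[1]**2}

    res = minimize(obj, x0=[1.0, 1.0], bounds=[(0, None), (0, None)],
                   constraints=[con], method="SLSQP")

    print("x* ≈", np.round(res.x, 4), "   max f ≈", round(-res.fun, 3))
    print("piecewise-linear approximation from the text: f = 247 at (0, 3)")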

*******************



CHAPTER 7
GEOMETRIC PROGRAMMING

7.1 Introduction

Geometric programming is an optimization technique applicable to programming
problems involving functions of a special mathematical form called posynomials. A posynomial
is defined as

f(X) = Σ (i = 1 to m) ui(X)

where ui(X) = Ci x1^(ai1) x2^(ai2) … xn^(ain) , with Ci > 0.

The method is based on the well-known algebraic inequality that the arithmetic mean is greater
than or equal to the geometric mean.

Let vi , i = 1, 2, …, m be m positive numbers and Δi , i = 1, 2, …, m be m positive rational
numbers. Then

(1/λ) Σ (i = 1 to m) Δi vi  ≥  [ Π (i = 1 to m) vi^Δi ]^(1/λ)                        (A1)

where λ = Σ (i = 1 to m) Δi ; the Δi are called weights.

Putting Δi/λ = δi , we get

Σ (i = 1 to m) δi vi  ≥  Π (i = 1 to m) vi^δi

The δi are called normalized weights. Further, putting δi vi = ui , we get

Σ (i = 1 to m) ui  ≥  Π (i = 1 to m) (ui/δi)^δi ,   with Σ (i = 1 to m) δi = 1       (A2)

Now, writing δi = Δi/λ in (A2),

Σ (i = 1 to m) ui  ≥  Π (i = 1 to m) (ui/δi)^(Δi/λ)

or ( Σ (i = 1 to m) ui )^λ  ≥  [ Π (i = 1 to m) (ui/Δi)^Δi ] λ^λ ,   Σ (i = 1 to m) Δi = λ       (A3)

It may be noted that the inequality (A2) becomes an equality when the ratios ui/δi are all equal. Similarly
(A3) is an equality if the ui/Δi are all equal.
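Inequality (A2) is easy to test numerically: for positive numbers ui and normalized weights δi, the sum Σ ui should never fall below Π (ui/δi)^δi, with equality when all the ratios ui/δi coincide. The short random check below is only an illustrative sketch, not part of the derivation.

    import numpy as np

    rng = np.random.default_rng(0)

    for _ in range(5):
        u = rng.uniform(0.1, 5.0, size=4)            # positive numbers u_i
        d = rng.uniform(0.1, 1.0, size=4)
        d = d / d.sum()                              # normalized weights, summing to 1
        lhs = u.sum()
        rhs = np.prod((u / d) ** d)                  # right-hand side of (A2)
        print(f"sum = {lhs:8.4f}   weighted GM bound = {rhs:8.4f}   holds: {lhs >= rhs}")

    # Equality case: choose u_i proportional to d_i, so that all u_i/d_i are equal
    d = np.array([0.4, 0.3, 0.2, 0.1])
    u = 7.0 * d
    print(u.sum(), np.prod((u / d) ** d))            # both equal 7.0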

7.2 Illustrative Examples


Example: 1.1
Consider the problem: minimize

f(X) = C1/(x1 x2 x3) + C2 x2 x3 + C3 x1 x3 + C4 x1 x2

where Ci > 0 , xj > 0 , i = 1, 2, 3, 4 , j = 1, 2, 3.

Let u1 = C1/(x1 x2 x3) , u2 = C2 x2 x3 , u3 = C3 x1 x3 , u4 = C4 x1 x2

Using inequality (A2) we get

f(X) = Σ (i = 1 to 4) ui ≥ Π (i = 1 to 4) (ui/δi)^δi

ie., f(X) ≥ (C1/(x1 x2 x3 δ1))^δ1 (C2 x2 x3/δ2)^δ2 (C3 x1 x3/δ3)^δ3 (C4 x1 x2/δ4)^δ4

         = (C1/δ1)^δ1 (C2/δ2)^δ2 (C3/δ3)^δ3 (C4/δ4)^δ4 x1^(-δ1+δ3+δ4) x2^(-δ1+δ2+δ4) x3^(-δ1+δ2+δ3)      (A4)

The δi's are such that δi ≥ 0 , Σ δi = 1.
In order to make the RHS of the above expression independent of X we take

-δ1 + δ3 + δ4 = 0

-δ1 + δ2 + δ4 = 0

-δ1 + δ2 + δ3 = 0

These are called orthogonality conditions.

These equations, along with the condition δ1 + δ2 + δ3 + δ4 = 1, are solved to get

δ1 = 2/5 , δ2 = 1/5 , δ3 = 1/5 , δ4 = 1/5
With these values (A4) reduces to

f(X) ≥ v(δ)                                                            (A5)

where δ is the vector with components δ1 , δ2 , δ3 , δ4

and v(δ) = (C1/δ1)^δ1 (C2/δ2)^δ2 (C3/δ3)^δ3 (C4/δ4)^δ4

Since equality in (A5) is attainable,

Min (over X) f(X) = Max (over δ) v(δ)

Thus the original problem of minimizing f(X) has been converted into a new problem of
maximizing v(δ). In the present problem δ1 , δ2 , δ3 , δ4 are uniquely determined, and so v(δ)
is a constant.

Hence Min (over X) f(X) = v(δ),

which can be easily evaluated for given numerical values of C1 , C2 , C3 and C4.

To find the optimum values of the xj we now proceed as below. Min f(X) is attained when

u1/δ1 = u2/δ2 = u3/δ3 = u4/δ4 = M (say)

Then Σ ui = M Σ δi = M

Hence Min (over X) f(X) = f(X0) = M

so that ui = δi f(X0).
Define Zj = log xj , j = 1, 2, 3. Then

u1 = C1/(x1 x2 x3)

⇒ log (u1/C1) = -log x1 - log x2 - log x3

⇒ -Z1 - Z2 - Z3 = log (u1/C1)

Similarly  Z2 + Z3 = log (u2/C2)

           Z1 + Z3 = log (u3/C3)

           Z1 + Z2 = log (u4/C4)

This can be written as AZ = b

where A = [ -1 -1 -1 ]        Z = [ Z1 ]        b = [ log (u1/C1) ]
          [  0  1  1 ]            [ Z2 ]            [ log (u2/C2) ]
          [  1  0  1 ]            [ Z3 ]            [ log (u3/C3) ]
          [  1  1  0 ]                              [ log (u4/C4) ]

Any three of the above equations can be solved to obtain x1 , x2 and x3.
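The whole computation for this example fits in a few lines of code. The sketch below solves the orthogonality and normality conditions for δ, evaluates v(δ), and recovers x1, x2, x3 from three of the log-linear equations; the coefficient values C1 = C2 = C3 = C4 = 1 are purely illustrative (any positive values could be used).

    import numpy as np

    C = np.array([1.0, 1.0, 1.0, 1.0])               # illustrative C1..C4 > 0

    # Orthogonality conditions plus the normality condition delta1+...+delta4 = 1
    M = np.array([[-1, 0, 1, 1],
                  [-1, 1, 0, 1],
                  [-1, 1, 1, 0],
                  [ 1, 1, 1, 1]], dtype=float)
    rhs = np.array([0, 0, 0, 1], dtype=float)
    delta = np.linalg.solve(M, rhs)                  # -> [2/5, 1/5, 1/5, 1/5]

    v = np.prod((C / delta) ** delta)                # min f(X) = v(delta)
    u = delta * v                                    # optimal term values u_i = delta_i * f(X0)

    # Recover Z = (log x1, log x2, log x3) from three of the four log-linear equations
    A3 = np.array([[-1, -1, -1],                     # u1 = C1/(x1 x2 x3)
                   [ 0,  1,  1],                     # u2 = C2 x2 x3
                   [ 1,  0,  1]])                    # u3 = C3 x1 x3
    b3 = np.log(u[:3] / C[:3])
    x = np.exp(np.linalg.solve(A3, b3))

    print("delta =", delta)                          # [0.4, 0.2, 0.2, 0.2]
    print("min f =", round(v, 4), " at x =", np.round(x, 4))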
Example: 2
Minimize f(X) = C1/(x1 x2 x3) + C2 x2 x3

Subject to g1(X) = C3 x1 x3 + C4 x1 x2 = 1

Ci > 0 , xj > 0 , i = 1, 2, 3, 4 ; j = 1, 2, 3

Let u1 = C1/(x1 x2 x3) , u2 = C2 x2 x3 , u3 = C3 x1 x3 , u4 = C4 x1 x2

Then f(X) = u1 + u2 , g1(X) = u3 + u4
Introducing normalized weights δ1 and δ2 we get

f(X) = u1 + u2 ≥ (u1/δ1)^δ1 (u2/δ2)^δ2 ,   δ1 + δ2 = 1                    (A6)

Using non-normalized weights δ3 and δ4,

1 = (u3 + u4)^λ ≥ [ (u3/δ3)^δ3 (u4/δ4)^δ4 ] λ^λ                           (A7)

where λ = δ3 + δ4.

Using (A6) and (A7),

u1 + u2 ≥ (u1/δ1)^δ1 (u2/δ2)^δ2 (u3/δ3)^δ3 (u4/δ4)^δ4 λ^λ

Substituting for u1 , u2 , u3 , u4 we get

u1 + u2 ≥ (C1/δ1)^δ1 (C2/δ2)^δ2 (C3/δ3)^δ3 (C4/δ4)^δ4 λ^λ x1^(-δ1+δ3+δ4) x2^(-δ1+δ2+δ4) x3^(-δ1+δ2+δ3)      (A8)
Again, in order to make the RHS of the inequality independent of the xj, we put

-δ1 + δ3 + δ4 = 0
-δ1 + δ2 + δ4 = 0                                                          (A9)
-δ1 + δ2 + δ3 = 0

These are the orthogonality conditions. With these the inequality reduces to

f(X) ≥ v(δ) = (C1/δ1)^δ1 (C2/δ2)^δ2 (C3/δ3)^δ3 (C4/δ4)^δ4 (δ3 + δ4)^(δ3+δ4)

Since equality is possible in (A8) we have

Min f(X) = Max v(δ)

Solving (A9) along with δ1 + δ2 = 1

we get δ1 = 2/3 , δ2 = δ3 = δ4 = 1/3.

Thus v(δ) = 3 (C1^2 C2 C3 C4)^(1/3)

and Min f(X) = 3 (C1^2 C2 C3 C4)^(1/3)
For the minimal solution we require equality in (A8), which is possible if and only if both
(A6) and (A7) are equalities.

For (A6) to be an equality u1/δ1 = u2/δ2 , and for (A7) to be an equality u3/δ3 = u4/δ4.

Then v(δ) = u1 + u2 = δ1 (u1/δ1) + δ2 (u2/δ2)

                    = δ1 (u1/δ1) + δ2 (u1/δ1)

                    = (u1/δ1)(δ1 + δ2)

                    = u1/δ1

⇒ u1 = δ1 v(δ) = (2/3) v(δ)

and u2 = δ2 v(δ) = (1/3) v(δ)

Also u3 + u4 = 1

⇒ δ3 (u3/δ3) + δ4 (u4/δ4) = 1

⇒ (u3/δ3)(δ3 + δ4) = 1

⇒ u3 = δ3/(δ3 + δ4) = (1/3)/(2/3) = 1/2 = u4

Hence C1/(x1 x2 x3) = (2/3) v(δ) ,  C2 x2 x3 = (1/3) v(δ)

C3 x1 x3 = 1/2 ,  C4 x1 x2 = 1/2

Now put Zj = log xj , j = 1, 2, 3.

Then log C1 - Z1 - Z2 - Z3 = log ((2/3) v(δ))

log C2 + Z2 + Z3 = log ((1/3) v(δ))

log C3 + Z1 + Z3 = log (1/2)

log C4 + Z1 + Z2 = log (1/2)

Solving the above equations yields Z1 , Z2 , Z3 , which in turn give the values of x1 ,
x2 and x3.
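For a concrete illustration, take C1 = C2 = C3 = C4 = 1. The formula above then predicts Min f = 3, attained (from the relations for u1, …, u4) at x1 = 1/2, x2 = x3 = 1. The sketch below checks this by solving the constrained problem directly with SciPy; the starting point is arbitrary.

    import numpy as np
    from scipy.optimize import minimize

    C1 = C2 = C3 = C4 = 1.0

    f  = lambda x: C1/(x[0]*x[1]*x[2]) + C2*x[1]*x[2]
    g1 = lambda x: C3*x[0]*x[2] + C4*x[0]*x[1] - 1.0          # equality constraint g1(X) = 1

    res = minimize(f, x0=[0.7, 0.8, 0.8],
                   constraints=[{"type": "eq", "fun": g1}],
                   bounds=[(1e-6, None)] * 3, method="SLSQP")

    print("x* ≈", np.round(res.x, 4))                         # expected ≈ (0.5, 1.0, 1.0)
    print("min f ≈", round(res.fun, 4))                        # expected ≈ 3 = 3*(C1**2*C2*C3*C4)**(1/3)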
7.3 General Method
The general form of a GP problem is as follows.

Minimize f0(X) ,

Subject to fk(X) ≤ 1 , k = 1, 2, … p

where fk(X) , k = 0, 1, 2, … p are posynomials of the type

fk(X) = Σ (i = mk to nk) uik(X)

where uik(X) = Cik Π (j = 1 to n) xj^(ajik)                                (B1)

m0 = 1 , mk = nk-1 + 1 for k = 1, 2, … p , ajik ∈ R

Cik > 0 , X = [x1, x2, … xn]' > 0 , uik > 0 , fk > 0
Notice that, since each fk(X) ≤ 1 and each λk ≥ 0,

f0(X) ≥ f0(X) [f1(X)]^λ1 [f2(X)]^λ2 … [fp(X)]^λp

      = Π (k = 0 to p) [ Σ (i = mk to nk) uik ]^λk      (with λ0 = 1)

      ≥ Π (k = 0 to p) Π (i = mk to nk) [ (uik/δik)^δik (λk)^δik ]

where δik > 0 ,  Σ (i = 1 to n0) δi0 = 1 = λ0

and  Σ (i = mk to nk) δik = λk , k = 1, 2, … p.

Using (B1),

f0(X) ≥ Π (k = 0 to p) Π (i = mk to nk) [ (Cik/δik)^δik (λk)^δik ] · Π (j = 1 to n) xj^( Σ (k = 0 to p) Σ (i = mk to nk) δik ajik )

We now choose the δik such that

Σ (k = 0 to p) Σ (i = mk to nk) δik ajik = 0 ,  j = 1, 2, …, n              (B2)

These are called orthogonality conditions, for the obvious reason that they represent
orthogonality between the vector whose components are the δik and each of the vectors whose
components are the ajik.

Using (B2) we conclude that

f0(X) ≥ Π (k = 0 to p) Π (i = mk to nk) [ (Cik/δik)^δik (λk)^δik ] = v(δ)
The right hand side of the above inequality is independent of X.

We thus arrive at a dual of the original problem. The dual problem consists of finding the
maximum value of v(δ) subject to

δik > 0 ,  Σ (i = 1 to n0) δi0 = 1 = λ0

Σ (i = mk to nk) δik = λk , k = 1, 2, …, p

The theorem connecting the primal and the dual is the following.

Theorem:

If the primal GP problem is feasible and there is at least one vector X > 0 such that
fk(X) < 1 , k = 1, 2, …, p , then

(i) the corresponding dual problem is feasible;

(ii) there exist X0 and δ0 such that

f0(X0) = v(δ0)                                                             (B3)

which is the solution of the problem;

(iii) the optimal primal and optimal dual variables are related by

δi0 = ui0/f0 , i = 1, 2, …, n0                                             (B4)

δik = λk uik , i = mk, …, nk , k = 1, 2, … p                               (B5)

λk (1 - fk) = 0 , k = 1, 2, … p                                            (B6)

In the above equations all the variables and functions assume their optimal values.
The optimal value of the primal objective function is given by (B3). We then determine the
optimal values of the uik from (B4) and (B5). Taking the logarithm of each of the equations

uik(X) = Cik Π (j = 1 to n) xj^(ajik)

we get linear equations in log xj, which can be solved to give the optimal values of log xj and
finally the xj. This completes the solution of the problem.

*******************



CHAPTER 8
DYNAMIC PROGRAMMING

8.1 Introduction

The method of dynamic programming (DP) was developed in the 1950's through the work
of Richard Bellman. The essential feature of the method is that a multivariate optimization
problem is decomposed into a series of stages, optimization being done at each stage with
respect to one variable only. Both discrete and continuous problems can be handled by this
method, and deterministic as well as stochastic models can be tackled.

8.2 General Theoretical considerations

Suppose there is a system which is initially in a state described by a vector XN and which, as
a result of certain decisions denoted by the vector U, finally assumes the state X0.
Diagrammatically we may represent the system as a single box (the transformation TN) with
input XN and output X0, which functionally may be written as

X0 = TN(XN, U)                                                             (1)

We may regard XN as the input and X0 as the output. Associated with this transformation, suppose
there is a real valued function

φN(XN, U)                                                                  (2)

called the objective function or return function.

The problem is to determine U for a given input XN so that φN is minimum subject to the
constraint (1). Notice that with XN and X0 specified, the transformation (1) is actually a
constraint on U.

Let us suppose that in an abstract sense the problem can be decomposed into a number
of stages j , j = 1, 2, … N , with Xj representing the input to the jth stage. The system is
imagined to pass through a succession of intermediate states XN-1 , XN-2, … X2, X1 before it
reaches the final state X0 from the initial state XN. Each state Xj-1 is a function of the input
state Xj and the decision Uj, so that

Xj-1 = tj(Xj , Uj)                                                         (3)


Also assume that at the jth stage there is a stage return function

fj(Xj , Uj)                                                                (4)

and that the return function φN is some function of the stage returns, so that

φN = φN(fN , fN-1 , … f2 , f1)                                             (5)

It may be noticed that (5) does not contradict (2), because with the help of (3) and (4) it is
possible to write (5) in the form

φN = φN(XN , UN , UN-1 , … U1), which is the same as (2).

The situation may be represented as a serial multistage model: a chain of stages tN, …, tj, …, t1
with states XN, XN-1, …, Xj, Xj-1, …, X1, X0, decisions UN, …, Uj, …, U1 and stage returns
fN, …, fj, …, f1.

The dynamic programming method postulates that, under certain conditions, the problem of determining the
optimal U (= U*) which minimizes φN(XN , U) subject to X0 = TN(XN , U) can be
reduced to a serial multistage problem of determining sequentially the optimal decisions Uj* ,
j = 1, 2, … N, which minimize φN(fN , fN-1, … f1). If φN is of the form

φN = fN o fN-1 o fN-2 o … o f2 o f1                                        (6)

where o denotes a composition operator (usually addition or multiplication), then we may put

φN = fN o φN-1                                                             (7)

where φN-1 = fN-1 o fN-2 o … o f1                                          (8)
and then it may be possible to assert that

FN(XN) = min over UN, …, U2, U1 of φN(XN , UN , … U1)

        = min over UN of [fN o FN-1(XN-1)]                                 (9)

where FN-1(XN-1) = min over UN-1, …, U2, U1 of φN-1(XN-1 , UN-1 , … U1)    (10)

Property (7) of the return function (6) is called separability. Due to separability, on successive
stages we may postulate the recursive equation

Fj(Xj) = min over Uj of [fj o Fj-1(Xj-1)] , j = 2, 3, … N                  (11)

with F1(X1) = min over U1 of f1                                            (12)

subject to Xj-1 = tj(Xj , Uj) , j = 2, 3, … N                              (13)

which may enable us to solve the problem recursively.

Equations (11), (12) and (13) constitute the recursive procedure involving Bellman's
principle of optimality. The principle is: in a multistage system without feedback (that is, in
which subsequent decisions do not influence the outcomes of earlier decisions), whatever
the earlier states and decisions may be, the subsequent decisions must form an optimal policy with
respect to the current state arising from the earlier decisions.

Example:

Minimize u1^2 + u2^2 + u3^2

Subject to u1 + u2 + u3 ≥ 10 , u1, u2, u3 ≥ 0

The state variables can be taken as

x3 = u1 + u2 + u3 ≥ 10 ,  x2 = u1 + u2 = x3 - u3 ,  x1 = u1 = x2 - u2

and F3(x3) = min over u3 of [ u3^2 + F2(x2) ] ,   F2(x2) = min over u2 of [ u2^2 + F1(x1) ]

Now F1(x1) = u1^2 = (x2 - u2)^2

F2(x2) = min over u2 of [ u2^2 + (x2 - u2)^2 ]

But u2^2 + (x2 - u2)^2 is minimum if

2u2 - 2(x2 - u2) = 0  ⇒  u2 = x2/2

Hence F2(x2) = x2^2/2

Hence F3(x3) = min over u3 of [ u3^2 + F2(x2) ]

            = min over u3 of [ u3^2 + (x3 - u3)^2 / 2 ]

The above function is minimum when

2u3 - (x3 - u3) = 0  ⇒  u3 = x3/3

Hence F3(x3) = x3^2/3 , x3 ≥ 10

Obviously F3(x3) is least for x3 = 10.

The minimum value of u1^2 + u2^2 + u3^2 is 100/3,

with u3 = 10/3 ,  x2 = x3 - u3 = 10 - 10/3 = 20/3

u2 = x2/2 = (20/3)/2 = 10/3

Finally u1 = x2 - u2 = 20/3 - 10/3 = 10/3

Thus u1 = u2 = u3 = 10/3
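The same answer can be confirmed by a direct numerical minimization of the original three-variable problem, which is a useful sanity check on the recursion. A sketch using SciPy (the starting point is arbitrary):

    import numpy as np
    from scipy.optimize import minimize

    obj = lambda u: u[0]**2 + u[1]**2 + u[2]**2
    con = {"type": "ineq", "fun": lambda u: u[0] + u[1] + u[2] - 10}   # u1 + u2 + u3 >= 10

    res = minimize(obj, x0=[5.0, 3.0, 2.0], bounds=[(0, None)] * 3,
                   constraints=[con], method="SLSQP")

    print("u* ≈", np.round(res.x, 4))          # expected ≈ (10/3, 10/3, 10/3)
    print("min ≈", round(res.fun, 4))          # expected ≈ 100/3 ≈ 33.3333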

Example: 2

Determine max (u1^2 + u2^2 + u3^2) subject to

u1 u2 u3 ≤ 6 , where u1, u2, u3 are positive integers.

The state variables are

x3 = u1 u2 u3 ≤ 6 ,  x2 = x3/u3 = u1 u2 ,  x1 = x2/u2 = u1

The stage returns are fj(uj) = uj^2 , j = 1, 2, 3:

uj        1    2    3    4    5    6
fj(uj)    1    4    9   16   25   36

State transformations xj-1 = xj/uj , j = 2, 3:

      uj    1    2    3    4    5    6
xj
1           1    -    -    -    -    -
2           2    1    -    -    -    -
3           3    -    1    -    -    -
4           4    2    -    1    -    -
5           5    -    -    -    1    -
6           6    3    2    -    -    1

Now we move to the recursive operations.

x1        1    2    3    4    5    6
F1(x1)    1    4    9   16   25   36

F2(x2) = max over u2 of [ f2(u2) + F1(x2/u2) ]

       = max over u2 of [ u2^2 + F1(x2/u2) ]

              f2(u2)                          F1(x2/u2)                    F2(x2)
      u2    1    2    3    4    5    6     1    2    3    4    5    6
x2
1           1    -    -    -    -    -     1    -    -    -    -    -         2
2           1    4    -    -    -    -     4    1    -    -    -    -         5
3           1    -    9    -    -    -     9    -    1    -    -    -        10
4           1    4    -   16    -    -    16    4    -    1    -    -        17
5           1    -    -    -   25    -    25    -    -    -    1    -        26
6           1    4    9    -    -   36    36    9    4    -    -    1        37


F3(x3) = max over u3 of [ u3^2 + F2(x3/u3) ]

              f3(u3)                          F2(x3/u3)                    F3(x3)
      u3    1    2    3    4    5    6     1    2    3    4    5    6
x3
1           1    -    -    -    -    -     2    -    -    -    -    -         3
2           1    4    -    -    -    -     5    2    -    -    -    -         6
3           1    -    9    -    -    -    10    -    2    -    -    -        11
4           1    4    -   16    -    -    17    5    -    2    -    -        18
5           1    -    -    -   25    -    26    -    -    -    2    -        27
6           1    4    9    -    -   36    37   10    5    -    -    2        38

Answer is 38 with u3 = 1 , u2 = 1 , u1 = 6
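The tabular recursion translates directly into code. The sketch below enumerates, for each state value, the feasible integer decisions (the divisors of the state), exactly as in the tables, and then reads off the best total.

    def divisors(x):
        return [u for u in range(1, x + 1) if x % u == 0]

    # Stage 1: F1(x1) = x1^2
    F1 = {x: x**2 for x in range(1, 7)}

    # Stage 2: F2(x2) = max over u2 dividing x2 of [u2^2 + F1(x2/u2)]
    F2 = {x: max(u**2 + F1[x // u] for u in divisors(x)) for x in range(1, 7)}

    # Stage 3: F3(x3) = max over u3 dividing x3 of [u3^2 + F2(x3/u3)]
    F3 = {x: max(u**2 + F2[x // u] for u in divisors(x)) for x in range(1, 7)}

    best_x3 = max(F3, key=F3.get)
    print("F2 table:", F2)                 # {1: 2, 2: 5, 3: 10, 4: 17, 5: 26, 6: 37}
    print("F3 table:", F3)                 # {1: 3, 2: 6, 3: 11, 4: 18, 5: 27, 6: 38}
    print("maximum =", F3[best_x3], "attained for x3 =", best_x3)   # 38 at x3 = 6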
Example: 3

A student has to take examinations in three courses A, B and C. He has three days available
for study. He feels it would be best to devote a whole day to the study of one course, so that
he may study a course for one day, two days or three days, or not at all. His estimates of the
grades he may get by such study are as follows.

Study days      A    B    C
    0           0    1    0
    1           1    1    1
    2           1    3    3
    3           3    4    3

How should he study so that he maximizes the sum of his grades?

Let u1, u2, u3 be the number of days he studies the courses A, B, C respectively,
and let f1(u1), f2(u2), f3(u3) be the grades earned by such study. The problem is to maximize

f1(u1) + f2(u2) + f3(u3)

subject to u1 + u2 + u3 ≤ 3

u1, u2, u3 ≥ 0 and integers.

Let us introduce the state variables xj defined as follows:

x3 = u1 + u2 + u3 ≤ 3
x2 = u1 + u2 = x3 - u3
x1 = u1 = x2 - u2

The state transformation functions xj-1 = tj(xj , uj) , j = 2, 3 are thus defined.

The recursive formulae applicable to this problem are

Fj(xj) = max over uj of [ fj(uj) + Fj-1(xj-1) ] , j = 2, 3

F1(x1) = f1(u1)

where F3(x3) = max over u1, u2, u3 of [ f1(u1) + f2(u2) + f3(u3) ]

for any feasible x3. The required solution is max over x3 of F3(x3). The essential computational details
are given below.

Stage returns fj(uj):                         State transformations xj-1 = xj - uj:

      uj    0    1    2    3                        uj    0    1    2    3
j                                             xj
1           0    1    1    3                  0           0    -    -    -
2           1    1    3    4                  1           1    0    -    -
3           0    1    3    3                  2           2    1    0    -
                                              3           3    2    1    0

Recursive operations

              f2(u2)               F1(x2 - u2)          f2(u2) + F1(x2 - u2)      F2(x2)
      u2    0    1    2    3     0    1    2    3       0    1    2    3
x2
0           1    -    -    -     0    -    -    -       1    -    -    -            1
1           1    1    -    -     1    0    -    -       2    1    -    -           (2)
2           1    1    3    -     1    1    0    -       2    2    3    -            3
3           1    1    3    4     3    1    1    0       4    2    4    4            4

              f3(u3)               F2(x3 - u3)          f3(u3) + F2(x3 - u3)      F3(x3)
      u3    0    1    2    3     0    1    2    3       0    1    2    3
x3
0           0    -    -    -     1    -    -    -       1    -    -    -            1
1           0    1    -    -     2    1    -    -       2    2    -    -            2
2           0    1    3    -     3    2    1    -       3    3    4    -            4
3           0    1    3    3     4    3    2    1       4    4    5    4           (5)
The required maximum value is 5, and tracing the path backwards we get the optimal policy
u3 = 2 , u2 = 0 , u1 = 1.
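The computation in the two tables above can be reproduced in a few lines; the sketch below builds F1, F2, F3 exactly as in the recursion and then traces one optimal policy backwards.

    grades = {                       # grades[course][days studied]
        "A": [0, 1, 1, 3],
        "B": [1, 1, 3, 4],
        "C": [0, 1, 3, 3],
    }

    # F1(x1) = f1(x1);  F2(x2) = max over u2 <= x2 of [f2(u2) + F1(x2 - u2)];  F3 similarly
    F1 = {x: grades["A"][x] for x in range(4)}
    F2 = {x: max(grades["B"][u] + F1[x - u] for u in range(x + 1)) for x in range(4)}
    F3 = {x: max(grades["C"][u] + F2[x - u] for u in range(x + 1)) for x in range(4)}

    best = F3[3]
    u3 = max(range(4), key=lambda u: grades["C"][u] + F2[3 - u])
    u2 = max(range(4 - u3), key=lambda u: grades["B"][u] + F1[3 - u3 - u])
    u1 = 3 - u3 - u2
    print("maximum total grade:", best)                   # 5
    print("days on A, B, C:", (u1, u2, u3))               # (1, 0, 2)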

8.3 Backward and forward equations

So far we have been using a recursive procedure in which xj is regarded as the input and xj-1 as the
output of the jth stage, the stage returns are expressed as functions of the stage inputs, and the
recursive analysis proceeds from stage 1 to stage N. This procedure is called backward
recursion, because the state transformation function is of the type Xj-1 = tj(Xj , Uj).
Backward recursion is convenient when the problem involves optimization with respect to a
given input XN, because then the output X0 is quite naturally left out of consideration.

If, however, the problem is to optimize the system with respect to a given output X0, it
is convenient to reverse the direction: regard Xj as a function of Xj-1 and Uj, put
Xj = tj(Xj-1 , Uj) , j = 1, 2, … N, express the stage returns as functions of the stage outputs, and then
proceed from stage N to stage 1. This procedure is called forward recursion.

Example:

Determine max u1 u2 u3 subject to u1 + u2 + u3 = 5 , u1 , u2 , u3 ≥ 0


The state variables are

x3 = u1 + u2 + u3 ,  x2 = x3 - u3 = u1 + u2 ,  x1 = x2 - u2 = u1

Also F3(x3) = max over u3 of [ u3 F2(x2) ]

     F2(x2) = max over u2 of [ u2 F1(x1) ]

     F1(x1) = u1 = x2 - u2

⇒ F2(x2) = max over u2 of [ u2 (x2 - u2) ]

By calculus u2 = x2/2 ,  F2(x2) = (x2/2)^2

F3(x3) = max over u3 of [ u3 F2(x3 - u3) ]

       = max over u3 of [ u3 (x3 - u3)^2 / 4 ]

Again by calculus u3 = x3/3 = 5/3 ; then u2 = (x3 - u3)/2 = (10/3)/2 = 5/3 and u1 = x2 - u2 = 5/3.

Max u1 u2 u3 = 125/27
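A quick numerical check (a sketch with SciPy) confirms the value 125/27 ≈ 4.63:

    from scipy.optimize import minimize

    res = minimize(lambda u: -(u[0]*u[1]*u[2]),            # maximize the product
                   x0=[1.0, 2.0, 2.0],
                   constraints=[{"type": "eq", "fun": lambda u: u[0]+u[1]+u[2]-5}],
                   bounds=[(0, None)]*3, method="SLSQP")

    print("u* ≈", [round(v, 4) for v in res.x])            # ≈ 5/3 each
    print("max product ≈", round(-res.fun, 4))              # ≈ 125/27 ≈ 4.6296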

*******************



CHAPTER 9
THEORY OF GAMES

9.1 Introduction

A game is defined as an activity between two or more persons involving moves by each
person according to a set of rules, at the end of which each person receives some benefit or
satisfaction or suffers a loss. Many activities such as trade and commerce, battles and wars,
and various types of games present situations in which different parties compete to achieve their
own objective and prevent others from achieving theirs. Mathematical models of such
situations and their solutions form the subject matter of the theory of games.

9.2 Characteristics of game

(i) Chance or strategy: In a game if the moves are determined by chance, it is called a
game of chance, if they are determined by skill, it is called a game of strategy.

(ii) Number of persons: A game is called an n-person game if the number of persons
playing it is n. (A person here means an individual or a group aiming at a particular objective.)

(iii) Number of alternatives: The number of alternatives available to each person in a
particular move may be finite or infinite. A finite game has a finite number of moves, each
involving a finite number of alternatives; otherwise the game is called an infinite game.

(iv) Pay off: A quantitative measure of the satisfaction a person gets at the end of the play
is called a pay-off. It is a real valued function of the variables in the game. Let pi be the pay
off to person i, i = 1, 2, … n, in an n-person game. Then if Σ (i = 1 to n) pi = 0, the game is said to
be an n-person zero-sum game.

Strategy:

A strategy of a player may be loosely defined as a rule for decision making, fixed in
advance of all the plays, by which the player decides the activities he should adopt; i.e., a strategy for
a given player is a set of rules that specifies which of the available courses of action he
should take at each play. A strategy may be of two types.
(i) Pure strategy: If a player knows exactly what the other player is going to do, then the
pure strategy is a decision rule to select a particular course of action.

(ii) Mixed strategy: If a player has to guess which activity will be selected by the other
on any particular occasion, a probabilistic situation is obtained, and the objective is to
maximize the expected gain. Thus a mixed strategy is a selection among pure strategies with
fixed probabilities.

Two person zero-sum game

A game with only two players P1 and P2 is called a two-person zero-sum game if the
losses of one player are equivalent to the gains of the other, so that the sum of their net gains is
zero.

Matrix (or rectangular) games:

A matrix game is a zero –sum two person game which can be represented in a
rectangular matrix form.

Pay-off Matrix

Suppose that in a two-person zero-sum game the player P1 has 'm' activities and the
player P2 has 'n' activities to perform. Then an mxn matrix A = [aij] of pay offs to P1 can be
constructed by adopting the following rules:

(i) Row designations of the matrix are the activities available to P1;

(ii) Column designations of the matrix are the activities available to P2;

(iii) The cell entry aij in P1's pay off matrix is the payment to player P1
when P1 chooses activity i and P2 chooses activity j.

Consider the coin-matching game involving two players A and B only. Each player selects
either a head H or a tail T. If the outcomes match, A wins Re. 1 from B; otherwise B wins Re. 1
from A. Then the pay off matrix (of pay offs to A) is as follows:

                 Player B
                  H     T
   Player A  H    1    -1
             T   -1     1

9.3 Saddle Point:

A saddle point of the pay off matrix of player A is the position of an element in the
pay off matrix which is the minimum in its row and the maximum in its column. That is, if the pay off
matrix [aij] is such that

max over i of [ min over j of aij ] = min over j of [ max over i of aij ] = ars ,

then the matrix is said to have a saddle point ars, and ars is called the value of the game. The best
strategy for player A is r and that of B is s.
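The saddle-point test is easy to mechanize: compute the row minima and the column maxima and check whether the maximin and the minimax coincide. The sketch below applies it to the coin-matching matrix above (which has no saddle point) and to a small hypothetical matrix that does; the second matrix is not taken from the text.

    import numpy as np

    def saddle_point(A):
        """Return (r, s, value) if the matrix game A has a saddle point, else None."""
        A = np.asarray(A)
        maximin = A.min(axis=1).max()          # max over rows of the row minima
        minimax = A.max(axis=0).min()          # min over columns of the column maxima
        if maximin != minimax:
            return None
        for r in range(A.shape[0]):
            for s in range(A.shape[1]):
                if A[r, s] == A[r, :].min() == A[:, s].max():
                    return r, s, A[r, s]

    print(saddle_point([[1, -1], [-1, 1]]))        # coin matching: None (no saddle point)
    print(saddle_point([[4, 2, 3], [1, 0, 2]]))    # hypothetical matrix: (0, 1, 2)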

Some theorems:

Let f(X, Y) be a real valued function of two vectors X and Y , X ∈ En , Y ∈ Em. The
minimum value of f(X, Y) with respect to Y, keeping X fixed, is denoted by

φ(X) = min over Y of f(X, Y)

The maximum value of φ(X) over X is

max over X of φ(X) = max over X min over Y of f(X, Y)

Similarly we can define the expression min over Y max over X of f(X, Y).
Theorem: 1

Let f(X, Y) be such that both max over X min over Y f(X, Y) and min over Y max over X f(X, Y) exist. Then

max over X min over Y f(X, Y)  ≤  min over Y max over X f(X, Y)

Proof:-

Let X0 and Y0 be arbitrary points in En and Em respectively. Then

min over Y f(X0, Y) ≤ f(X0, Y0)

and max over X f(X, Y0) ≥ f(X0, Y0)

Hence min over Y f(X0, Y) ≤ max over X f(X, Y0)

But Y0 was arbitrarily chosen and could have been any point in Em, and for every one of them
the inequality should hold. Even if we had chosen Y0 to be the point for which max over X f(X, Y)
has its least value, the inequality would still be true. So

min over Y f(X0, Y)  ≤  min over Y max over X f(X, Y)

Also, since X0 is any point in En, the inequality will hold even if we choose that X0 which
makes min over Y f(X, Y) maximum.

Therefore  max over X min over Y f(X, Y)  ≤  min over Y max over X f(X, Y)

Theorem: 2

Let f(X, Y) be such that both max over X min over Y f(X, Y) and min over Y max over X f(X, Y) exist. Then the
necessary and sufficient condition for the existence of a saddle point (X0, Y0) of f(X, Y) is that

f(X0, Y0) = max over X min over Y f(X, Y) = min over Y max over X f(X, Y)

Proof:-

A point (X0, Y0) , X0 ∈ En , Y0 ∈ Em , is said to be a saddle point of f(X, Y) if

f(X, Y0) ≤ f(X0, Y0) ≤ f(X0, Y)   for all X ∈ En , Y ∈ Em.

Necessary part:

Let (X0, Y0) be a saddle point of f(X, Y).

Then f(X, Y0) ≤ f(X0, Y0) for all X ∈ En

⇒ max over X f(X, Y0) ≤ f(X0, Y0)

but min over Y [ max over X f(X, Y) ] ≤ max over X f(X, Y0)

therefore  min over Y max over X f(X, Y) ≤ f(X0, Y0)                        (1)

Again f(X0, Y0) ≤ f(X0, Y) for all Y ∈ Em

⇒ f(X0, Y0) ≤ min over Y f(X0, Y)

but min over Y f(X0, Y) ≤ max over X [ min over Y f(X, Y) ]

⇒ f(X0, Y0) ≤ max over X min over Y f(X, Y)                                 (2)

(1) & (2) ⇒ min over Y max over X f(X, Y) ≤ f(X0, Y0) ≤ max over X min over Y f(X, Y)

but from Theorem 1, max over X min over Y f(X, Y) ≤ min over Y max over X f(X, Y).

Combining these two we get

max over X min over Y f(X, Y) = min over Y max over X f(X, Y) = f(X0, Y0)

Sufficient part:

Let f(X0, Y0) = max over X min over Y f(X, Y) = min over Y max over X f(X, Y)

⇒ min over Y f(X0, Y) = max over X f(X, Y0)

By the definition of minimum,

min over Y f(X0, Y) ≤ f(X0, Y0)

Since min over Y f(X0, Y) = max over X f(X, Y0),

we get max over X f(X, Y0) ≤ f(X0, Y0),

which means f(X, Y0) ≤ f(X0, Y0) for all X.

Also, by the definition of maximum,

max over X f(X, Y0) ≥ f(X0, Y0).

As we have max over X f(X, Y0) = min over Y f(X0, Y),

we get min over Y f(X0, Y) ≥ f(X0, Y0),

that is, f(X0, Y) ≥ f(X0, Y0) for all Y.

Thus f(X, Y0) ≤ f(X0, Y0) ≤ f(X0, Y),

which means that (X0, Y0) is a saddle point of f(X, Y).
If the given matrix has no saddle point, the game has no optimal strategies in the
earlier sense. By associating probabilities with the choices and using the mathematical expectation of the pay
off, the concept of optimal strategy can be extended to apply to all matrices.
Definition:

If X = [xi] , xi ≥ 0 , Σ (i = 1 to m) xi = 1 and Y = [yj] , yj ≥ 0 , Σ (j = 1 to n) yj = 1 are the mixed
strategies of players P1 and P2 respectively, then the mathematical expectation of the pay off
function E(X, Y) in the game whose pay off matrix is A = [aij] is defined as

E(X, Y) = Σ (i = 1 to m) Σ (j = 1 to n) xi aij yj = X'AY ,

aij being the pay off to P1 when P1 plays the ith and P2 plays the jth pure strategy, with
probabilities xi and yj respectively.

In a game the objective of P1 is to choose an X which maximizes his least expectation, and
the objective of P2 is to choose a Y which minimizes P1's greatest expectation.

If max over X min over Y E(X, Y) = min over Y max over X E(X, Y) = E(X0, Y0), then (X0, Y0) is defined as the
strategic saddle point of the game, X0 and Y0 are defined as the optimal strategies, and
v = E(X0, Y0) is the value of the game.
Consider the pay off matrix of a player A given as follows.

                 H     T     Row min
      A     H    2    -1       -1
            T   -1     0       -1

  Column max     2     0

Here max over i min over j aij = -1  ≠  min over j max over i aij = 0

So there exists no saddle point, and hence no optimal pure strategies.

Let X = (x , 1 - x) , 0 ≤ x ≤ 1 be the mixed strategy of A, i.e., A selects strategy H with
probability x and T with probability (1 - x).

Then the expected gain of player A, when player B always plays H,

is E(A, H) = 2x + (-1)(1 - x) = 3x - 1

Similarly, the expected gain of player A when B plays T

is E(A, T) = -1·x + 0·(1 - x) = -x

The best strategy for A is such that

E(A, H) = E(A, T) = E(A)

⇒ 3x - 1 = -x  ⇒  x = 1/4

⇒ the best strategy of A is (1/4 , 3/4)

Then the value of the game = -1/4.

In a similar way the best strategy of B can also be found to be (1/4 , 3/4).
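For any 2x2 game without a saddle point, the equalizing calculation just carried out has a closed form. The sketch below implements it and reproduces the strategies (1/4, 3/4) for both players and the value -1/4 for the matrix above.

    def solve_2x2(a11, a12, a21, a22):
        """Mixed-strategy solution of a 2x2 zero-sum game with no saddle point."""
        D = a11 + a22 - a12 - a21                 # equalizing denominator (non-zero here)
        x = (a22 - a21) / D                       # P1 plays row 1 with probability x
        y = (a22 - a12) / D                       # P2 plays column 1 with probability y
        v = (a11 * a22 - a12 * a21) / D           # value of the game
        return (x, 1 - x), (y, 1 - y), v

    X, Y, v = solve_2x2(2, -1, -1, 0)
    print("strategy of A:", X)                    # (0.25, 0.75)
    print("strategy of B:", Y)                    # (0.25, 0.75)
    print("value of the game:", v)                # -0.25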

9.4 Theorems of matrix games

Theorem:

Let A be an mxn matrix, and let Pj and Qi , j = 1, 2, … n , i = 1, 2, … m , be its column
and row vectors respectively. Then either (i) there exists a Y in Sn such that Qi Y ≤ 0 for all i,
or (ii) there exists an X in Sm such that X'Pj > 0 for all j,

where Sm is the set of ordered m-tuples of non-negative numbers whose sum is unity.

Proof:-

Let εi ∈ Sm be the vector whose ith element is unity and all other components are
zero.

Consider the m + n points ε1 , ε2 , …, εm , P1 , P2 , …, Pn belonging to Em.

Let C be the convex hull of all these m + n points. Then the origin O of Em is either in C or not
in C. Consider the first case.

(i) Let O be in C.

Then O can be expressed as a convex linear combination of the m + n points which span C. Hence there exists

[λ1, λ2, … λm , μ1, μ2, … μn] ∈ Sm+n

such that Σ (i = 1 to m) λi εi + Σ (j = 1 to n) μj Pj = O ;  0 ≤ λi ≤ 1 , 0 ≤ μj ≤ 1

or λi + Σ (j = 1 to n) μj aij = 0 ; i = 1, 2, … m

Since λi ≥ 0 ,  Σ (j = 1 to n) μj aij ≤ 0 ; i = 1, 2, … m

Also Σ (j = 1 to n) μj > 0 :

Σ μj = 0 is not possible, for if it were so, each μj would be zero, and then for
λi + Σ μj aij = 0 to hold each λi would also have to be zero, which contradicts the fact that
the coefficients of a convex linear combination must sum to unity.

Then, dividing the inequality Σ (j = 1 to n) μj aij ≤ 0 , i = 1, 2, …, m , by Σ (j = 1 to n) μj, we get

Σ (j = 1 to n) yj aij ≤ 0 , where yj = μj / Σ μj ,  Σ yj = 1 , 0 ≤ yj ≤ 1

⇒ Qi Y ≤ 0 , i = 1, 2, …, m

where Qi = (ai1 , ai2 , … ain)

      Y = (y1 , y2 , … yn)T

Consider now the case (ii): O ∉ C.

Then, by the theorem on separating hyperplanes, there exists a hyperplane containing
O, say BZ = 0, such that C is contained in the half space BZ > 0. In particular, since εi ∈ C,

B εi > 0 ,  where B = [b1 b2 … bm] and εi = [0, 0, …, 1, …, 0]T ;  i = 1, 2, … m

⇒ bi > 0 , i = 1, 2, …, m

and therefore Σ (i = 1 to m) bi > 0,

where bi is the ith component of B. Also Pj ∈ C and so BPj > 0 , j = 1, 2, …, n

⇒ Σ (i = 1 to m) bi aij > 0

Dividing this inequality by Σ (i = 1 to m) bi and putting xi = bi / Σ (i = 1 to m) bi ,

Σ (i = 1 to m) xi aij > 0 ,  Σ xi = 1 , 0 < xi < 1

⇒ X' Pj > 0 , j = 1, 2, …, n

where X = (x1 , x2 , …, xm)

and Pj = (a1j , a2j , …, amj)T

9.5 Fundamental theorem of rectangular games

For an m x n matrix game, both max over X min over Y E(X, Y) and min over Y max over X E(X, Y) exist and are
equal.

Proof:-

E(X, Y) is a continuous linear function of X defined over the closed and bounded
subset Sm of Em for each Y in Sn. Therefore max over X E(X, Y) exists and is a continuous function
of Y. Since Sn is also closed and bounded, min over Y max over X E(X, Y) exists. Similarly the existence of
max over X min over Y E(X, Y) can also be proved.

For the pay off matrix A we have either (i) there exists a Y in Sn such that Qi Y ≤ 0 , i = 1, 2, … m, or
(ii) there exists an X in Sm such that X'Pj > 0 , j = 1, 2, …, n. Such X and Y satisfy the conditions of
mixed strategies also.

Let (ii) hold, i.e., Σ (i = 1 to m) xi aij > 0                              (1)

Multiplying (1) by the component yj of Y and summing over all j,

Σ (j = 1 to n) Σ (i = 1 to m) xi aij yj > 0  for all Y

i.e., E(X, Y) > 0 for all Y

Hence min over Y E(X, Y) > 0

and consequently max over X min over Y E(X, Y) > 0                          (2)

If (i) holds, then as above we get

Σ (i = 1 to m) Σ (j = 1 to n) xi aij yj ≤ 0  for all X

i.e., E(X, Y) ≤ 0 for all X

⇒ max over X E(X, Y) ≤ 0

⇒ min over Y max over X E(X, Y) ≤ 0                                          (3)

At least one of the inequalities (2) or (3) must be true. Hence

max over X min over Y E(X, Y) < 0 < min over Y max over X E(X, Y)  is not true.        (4)
Let Ak be the matrix [aij - k] formed by subtracting a constant k from each of the elements of A, and let
its expectation function be Ek(X, Y).

Then Ek(X, Y) = Σ (i = 1 to m) Σ (j = 1 to n) xi (aij - k) yj

             = Σ Σ xi aij yj - k Σ Σ xi yj

             = E(X, Y) - k        [since Σ xi = 1 and Σ yj = 1]              (5)

Since A is any matrix, what is true for A is true for Ak also.

Therefore (4) ⇒  max over X min over Y Ek(X, Y) < 0 < min over Y max over X Ek(X, Y)  is not true.

Hence, using (5),

max over X min over Y E(X, Y) < k < min over Y max over X E(X, Y)  is not true for any value of k.

So it can be concluded that  max over X min over Y E(X, Y) < min over Y max over X E(X, Y)  is false.

Hence max over X min over Y E(X, Y) ≥ min over Y max over X E(X, Y)

But we have max over X min over Y E(X, Y) ≤ min over Y max over X E(X, Y)

Hence max over X min over Y E(X, Y) = min over Y max over X E(X, Y)

Let max over X min over Y E(X, Y) = E(X0, Y0) = min over Y max over X E(X, Y).

Then (X0, Y0) is a strategic saddle point and

E(X0, Y0) is the value of the game.

When εi , i = 1, 2, … m and εj , j = 1, 2, … n are the pure strategies of P1 and P2 respectively,

max over i E(εi, Y0) = E(X0, Y0) = min over j E(X0, εj) ,

where E(εi, Y) = Σ (j = 1 to n) aij yj and E(X, εj) = Σ (i = 1 to m) xi aij.

9.6 Graphical Solution

The solution of a rectangular game can easily be obtained by graphical methods. We
proceed with an illustration of the graphical method for solving a 2xn or mx2 rectangular game.

Consider the game with the following pay off matrix (pay offs to P1, who chooses the rows,
while P2 chooses the columns):

                        P2
                  1    2    3    4
      P1     1   19   15   17   16
             2    0   20   15    5

For this game  max over i min over j aij ≠ min over j max over i aij ,

i.e., no saddle point exists.

Let (x, 1 - x) be the mixed strategy of player P1.

Then E(X, ε1) = 19x + 0(1 - x) = 19x                         (1)

     E(X, ε2) = 15x + 20(1 - x) = 20 - 5x                    (2)

     E(X, ε3) = 17x + 15(1 - x) = 15 + 2x                    (3)

     E(X, ε4) = 16x + 5(1 - x) = 5 + 11x                     (4)


Plot E against x in the domain 0 ≤ x ≤ 1 to obtain the figure below.

[Figure: the four lines E(x) = 19x, E(x) = 20 - 5x, E(x) = 15 + 2x and E(x) = 5 + 11x plotted against x on [0, 1]; their lower envelope is the piecewise linear curve O B C D.]

The four lines represent the four expected pay offs to P1, and the continuous piecewise linear
curve O B C D gives the least expectation for any value of x , 0 ≤ x ≤ 1.

P1 must choose x so as to maximize his least expectation. C is the point where the least
expectation is maximum; at C, solving the equations of the two intersecting lines (2) and (4),
20 - 5x = 5 + 11x, gives x = 15/16.

So the optimal mixed strategy of P1 is (15/16 , 1/16)

and hence the value of the game = 20 - 5(15/16) = 245/16.

To find the optimal strategy of P2, let Y = (y1 , y2 , y3 , y4) denote the mixed strategy of
P2.

We have E(X0, Y0) = Σ (i) Σ (j) xi aij yj = 245/16

⇒ (285/16) y1 + (245/16) y2 + (270/16) y3 + (245/16) y4 = 245/16

⇒ (285/16) y1 + (270/16) y3 + (245/16)(y2 + y4) = 245/16

⇒ (285/16) y1 + (270/16) y3 + (245/16)(1 - (y1 + y3)) = 245/16      (since y1 + y2 + y3 + y4 = 1)

⇒ (40/16) y1 + (25/16) y3 = 0

⇒ y1 = 0 , y3 = 0      (since y1 ≥ 0 , y3 ≥ 0)

Hence Y = (0 , y2 , 0 , y4) = (0 , y2 , 0 , 1 - y2)

Then E(ε1, Y) = 15 y2 + 16(1 - y2) = 16 - y2                          (5)

     E(ε2, Y) = 20 y2 + 5(1 - y2) = 5 + 15 y2                         (6)


Plot (5) and (6) against y2 in the domain 0 ≤ y2 ≤ 1.

[Figure: the lines E(Y) = 16 - y2 and E(Y) = 5 + 15 y2 plotted against y2 on [0, 1]; their upper envelope is the piecewise linear curve A B C.]

The curve A B C represents the maximum pay off to P1 for any value of y2. So P2 must choose
y2 so as to minimize this pay off. This occurs at the point B.

Solving the equations of the lines intersecting at B, 16 - y2 = 5 + 15 y2,

⇒ y2 = 11/16

So the optimal mixed strategy of P2 is (0 , 11/16 , 0 , 5/16).

Note:-

For an m x 2 game, the optimal strategy of P2 is found first, and then, using the value of the
game, the optimal strategy of P1 is also obtained.

9.7 Dominance

In some cases it is possible to reduce the size of the pay off matrix by using the
dominance property. This occurs when one or more of the pure strategies of either player can
be deleted because they are inferior to at least one of the remaining strategies and hence are
never used. In this case the deleted strategies are said to be dominated by superior ones.

Rule I: If each element in one row (say row r) of the pay off matrix is less than or equal to
the corresponding element in another row (say row s), then player P1 will never choose the rth
strategy; that is, if arj ≤ asj for all j = 1, 2, … n, then the probability of choosing the rth
strategy is zero, and the rth row of the matrix can be deleted.

Rule II: If each element in one column (say p) is greater than or equal to the
corresponding element in another column (say q), then player P2 will never choose the
strategy corresponding to column p. In this case column p is dominated by column q and
can be deleted.

Rule III: Dominance need not be based on the superiority of pure strategies only. A given
strategy can also be dominated if it is inferior to an average (convex combination) of two or more other pure strategies.

In general, if some convex linear combination of some rows dominates the ith row, then the
ith row may be deleted. If the ith row dominates a convex linear combination of some other
rows, then one of the rows involved in the combination may be deleted. Similar arguments
hold for columns also.
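Pure-strategy dominance (Rules I and II) is straightforward to apply mechanically; the sketch below repeatedly deletes dominated rows and columns of a payoff matrix. The example matrix is hypothetical, chosen only to show rows and columns being removed; the convex-combination extension of Rule III is not implemented.

    import numpy as np

    def reduce_by_dominance(A):
        """Repeatedly delete rows/columns dominated by another pure strategy (payoffs to P1)."""
        A = np.asarray(A, dtype=float)
        rows, cols = list(range(A.shape[0])), list(range(A.shape[1]))
        changed = True
        while changed:
            changed = False
            for r in list(rows):          # Rule I: row r is never used if some row s >= it elementwise
                if any(s != r and np.all(A[s, cols] >= A[r, cols]) for s in rows):
                    rows.remove(r); changed = True; break
            for p in list(cols):          # Rule II: column p is never used if some column q <= it elementwise
                if any(q != p and np.all(A[rows, q] <= A[rows, p]) for q in cols):
                    cols.remove(p); changed = True; break
        return A[np.ix_(rows, cols)], rows, cols

    A = [[3, 2, 4],
         [1, 1, 2],
         [4, 3, 5]]
    reduced, rows, cols = reduce_by_dominance(A)
    print(reduced, " kept rows:", rows, " kept columns:", cols)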

9.8 Rectangular game as an L P Problem

Solving a rectangular game is equivalent to solving an L P problem. Thus, in solving a
game, one can use the simplex method after converting the given rectangular game to an
equivalent L P problem. The method of conversion follows.

Let the mxn matrix A = [aij] be the pay off matrix of a game with value v, where v is
any real number.

It is convenient to keep all elements aij of the matrix A positive. For this, if necessary, add a
constant k (> 0) to every element of the matrix A so as to make all the aij positive. Then the
value of the game associated with the new matrix is also positive (it exceeds the original value by k).

Let us assume that, if necessary after this transformation, the matrix of the game is A =
[aij] where aij > 0 for all i and j, and the value of the game is v > 0.

Let X0 = (x1 , x2 , … xm) and Y0 = (y1 , y2 , … yn) be the optimal strategies of P1 and P2
respectively.

Then we have E(X0 , εj) ≥ E(X0 , Y0) = v , for all j

or Σ (i = 1 to m) aij xi ≥ v , j = 1, 2, … n                         (1)

together with Σ (i = 1 to m) xi = 1                                  (2)

and xi ≥ 0 , i = 1, 2, … m                                           (3)

Dividing (1), (2) and (3) by v, we get

Σ (i = 1 to m) aij xi' ≥ 1 ; j = 1, 2, … n

Σ (i = 1 to m) xi' = 1/v ;  xi' ≥ 0

where xi' = xi/v.

The objective of P1 is to maximize v. He therefore chooses the xi' such that

f = Σ (i = 1 to m) xi' = 1/v is minimum

subject to Σ (i = 1 to m) aij xi' ≥ 1 ; j = 1, 2, … n

xi' ≥ 0 ; i = 1, 2, … m

That is,

Min f = Σ (i = 1 to m) xi'

Subject to Σ (i = 1 to m) aij xi' ≥ 1 , j = 1, 2, … n

xi' ≥ 0 , i = 1, 2, …, m
This is an L P problem for player P1 in standard primal form.

The value of the game is v = 1 / fmin

and the optimal strategy of P1 is

xi = xi' v , i = 1, 2, …, m

where xi' is the optimal solution of the L P problem.

Similarly, when we start with the problem of P2, by taking the inequalities

E(εi , Y0) ≤ E(X0 , Y0) = v ,

the L P formulation of player P2 can be derived; it is the dual of the L P problem of player
P1.
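The conversion just described can be handed to any LP solver. The sketch below solves the 2x4 game of Section 9.6 with SciPy's linprog: a constant k = 1 is first added to make every entry positive, each player's LP is solved, and the value and mixed strategies are recovered. The results should agree with the graphical solution (value 245/16, X0 = (15/16, 1/16), Y0 = (0, 11/16, 0, 5/16)).

    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[19, 15, 17, 16],
                  [ 0, 20, 15,  5]], dtype=float)
    k = 1.0
    Ak = A + k                                   # make every a_ij positive; value shifts up by k
    m, n = Ak.shape

    # P1: minimize sum(x') subject to Ak' x' >= 1  (written as -Ak' x' <= -1)
    res1 = linprog(c=np.ones(m), A_ub=-Ak.T, b_ub=-np.ones(n), bounds=[(0, None)] * m)
    v = 1.0 / res1.x.sum()                       # value of the shifted game
    X = res1.x * v                               # optimal mixed strategy of P1

    # P2: maximize sum(y') subject to Ak y' <= 1  (linprog minimizes, so negate)
    res2 = linprog(c=-np.ones(n), A_ub=Ak, b_ub=np.ones(m), bounds=[(0, None)] * n)
    w = 1.0 / res2.x.sum()
    Y = res2.x * w                               # optimal mixed strategy of P2

    print("value of the game:", round(v - k, 4))          # 245/16 = 15.3125
    print("strategy of P1:", np.round(X, 4))               # [0.9375, 0.0625]
    print("strategy of P2:", np.round(Y, 4))               # [0, 0.6875, 0, 0.3125]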

*******************
