Operations Research-I
(Mathematics)

UNIT 01
CONVEX SETS AND FUNCTIONS
1.0 INTRODUCTION
The roots of operations research can be traced to the early attempts to use a
scientific approach in technical problems and in the management of organisations at the time of
World War II. Britain had very limited military resources, and there was therefore an urgent need
to allocate resources to the various military operations, and to the activities within each operation,
in an effective manner. The British military executives and managers therefore called upon a team
of scientists to apply a scientific method to study the technical problems related to the air and land
defence of the country. As the team was dealing with (military) operations, the work of this team
of scientists was named OR in Britain.
Their efforts were instrumental in winning the air battle of Britain, the battle of the North
Atlantic, and others.
The success of this team of scientists in Britain encouraged the United States, Canada and
France to start similar efforts. The work of such teams was given various names in the United
States, such as operational analysis, operations evaluation and operations research.
The apparent success of OR in the military attracted the attention of industrial
management to this new field. In this way OR began to spread into industry and many governmental
organisations.
After the war, many scientists were motivated to pursue research in this new
branch. The first technique in this field, called the simplex method for solving the linear programming
problem, was developed by the American mathematician George Dantzig in 1947. Since then
many techniques such as quadratic programming, dynamic programming, inventory theory,
queuing theory etc. have been developed. Thus the impact of OR can be experienced in almost all
walks of life.
Definition of OR
We give a few definitions of OR.
1) OR is the application of the theories of probability, linear programming, queuing
theory etc. to the problems of war, industry, agriculture and many other organisations.
2) OR is the art of winning a war without actually fighting it.
3) OR is the art of giving bad answers to problems where otherwise worse
answers are given. (T. L. Saaty, 1958)
Use of OR
In general we can say that wherever there is a problem, OR can help. In addition
to the military, operations research is widely used in many organisations. We now discuss the
scope of OR in various fields.
1) Defence : There is a necessity to formulate optimum strategies that may give
maximum benefit. OR helps the military executives to select the best course of
action to win the battle.
2) Industry : The company executives require the use of OR for the following :
1) Production department to minimize the cost of production.
2) Marketing department to maximize the amount sold and to minimize the
cost of sales.
3) Finance department to minimize the capital required to maintain any level
of business.
The various departments come into conflict with each other when the policy of one
department works against the policy of another. This difficulty is resolved by the
application of OR techniques; thus OR has great scope in industry. Nowadays
almost all big industries in India make use of OR techniques.
3) L. I. C. : OR techniques enable L. I. C. officers to decide the
premium rates of various policies in the best interest of the corporation.
4) Agriculture : With the increase of population and the resulting shortage of food,
there is a need to increase the agricultural output of a country. But the agriculture
department of a country faces many problems, e.g. (i) climatic conditions,
(ii) the optimal distribution of water from the available resources.
Thus there is a need to determine the best policies under the given restrictions,
for which OR techniques are useful.
5) Planning : Careful planning plays an important role in the economic development
of many organisations, and OR techniques are fruitful for such planning.
1.1 CONVEX SETS AND THEIR PROPERTIES
Definition 1.1 (Convex Set)
Let Rⁿ = { x = (x1, x2, ..., xn) : xi ∈ R, i = 1, 2, ..., n }.
A subset S ⊆ Rⁿ is said to be convex if for any two points x1, x2 in S the line segment
joining the points x1 and x2 is also contained in S, i.e.
x1, x2 ∈ S ⟹ λ x1 + (1 − λ) x2 ∈ S for all λ, 0 ≤ λ ≤ 1.
Example 1.1
Show that the set S = { (x1, x2) : 3 x1² + 2 x2² ≤ 6 } is convex.
Solution :
Let x, y ∈ S where x = (x1, x2) and y = (y1, y2).
Since x, y ∈ S, 3 x1² + 2 x2² ≤ 6 and 3 y1² + 2 y2² ≤ 6.
The line segment joining x and y is the set { u : u = λ x + (1 − λ) y, 0 ≤ λ ≤ 1 }.
For some λ, 0 ≤ λ ≤ 1, let u = (u1, u2) be a point of this set, so that
u1 = λ x1 + (1 − λ) y1, u2 = λ x2 + (1 − λ) y2.
Now,
3 u1² + 2 u2² = 3 [λ x1 + (1 − λ) y1]² + 2 [λ x2 + (1 − λ) y2]²
= λ² (3 x1² + 2 x2²) + (1 − λ)² (3 y1² + 2 y2²) + 2 λ (1 − λ) (3 x1 y1 + 2 x2 y2).
Since (x1 − y1)² ≥ 0, x1 y1 ≤ (1/2)(x1² + y1²); similarly x2 y2 ≤ (1/2)(x2² + y2²), and therefore
3 x1 y1 + 2 x2 y2 ≤ (1/2)(3 x1² + 2 x2²) + (1/2)(3 y1² + 2 y2²) ≤ (1/2)·6 + (1/2)·6 = 6.
Hence
3 u1² + 2 u2² ≤ 6 λ² + 6 (1 − λ)² + 2 λ (1 − λ)·6 = 6 [λ + (1 − λ)]² = 6.
Therefore u ∈ S for every λ, 0 ≤ λ ≤ 1, and S is a convex set.
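The convexity argument above can also be checked numerically: sampling random points of S and random λ, every point of the joining segment should again satisfy the defining inequality. A minimal Python sketch (the membership test, tolerance and sampling box are illustrative choices, not from the text):

```python
import random

def in_S(p):
    # membership test for S = {(x1, x2) : 3*x1**2 + 2*x2**2 <= 6}
    x1, x2 = p
    return 3 * x1**2 + 2 * x2**2 <= 6 + 1e-9  # small tolerance for rounding

def sample_S():
    # rejection-sample a point of S from a bounding box of the ellipse
    while True:
        p = (random.uniform(-2, 2), random.uniform(-2, 2))
        if in_S(p):
            return p

random.seed(0)
for _ in range(1000):
    x, y = sample_S(), sample_S()
    lam = random.random()
    u = (lam * x[0] + (1 - lam) * y[0], lam * x[1] + (1 - lam) * y[1])
    assert in_S(u)  # every point of the segment [x, y] lies in S
```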
Example 1.2
Show that in Rⁿ the set S1 = { x : |x| ≤ 1 }, where |x| = (x1² + x2² + ... + xn²)^(1/2), is convex.
Solution :
Take x1, x2 ∈ S1 and 0 ≤ λ ≤ 1. By the triangle inequality,
|λ x1 + (1 − λ) x2| ≤ λ |x1| + (1 − λ) |x2| ≤ λ + (1 − λ) = 1.
Hence λ x1 + (1 − λ) x2 ∈ S1, and S1 is a convex set.
Example 1.3
Show that C = { (x1, x2) : 2 x1 + 3 x2 ≤ 7 } ⊆ R² is a convex set.
Solution :
Let x = (x1, x2) and y = (y1, y2) ∈ C, and let 0 ≤ λ ≤ 1.
Let w = λ x + (1 − λ) y = (w1, w2), so that
w1 = λ x1 + (1 − λ) y1, w2 = λ x2 + (1 − λ) y2.
Then
2 w1 + 3 w2 = λ (2 x1 + 3 x2) + (1 − λ)(2 y1 + 3 y2).
Since x, y ∈ C, 2 x1 + 3 x2 ≤ 7 and 2 y1 + 3 y2 ≤ 7.
Hence 2 w1 + 3 w2 ≤ λ·7 + (1 − λ)·7 = 7.
Therefore w = (w1, w2) = λ x + (1 − λ) y ∈ C for all λ, 0 ≤ λ ≤ 1, and C is convex.
Example 1.4
Show that S = { (x1, x2, x3) : 2 x1 − x2 + x3 ≤ 4 } ⊆ R³ is a convex set.
Solution :
Let x = (x1, x2, x3) and y = (y1, y2, y3) be any two points in S. Then by hypothesis,
2 x1 − x2 + x3 ≤ 4, 2 y1 − y2 + y3 ≤ 4 .......... (i)
Let w = (w1, w2, w3) = λ x + (1 − λ) y, where 0 ≤ λ ≤ 1, so that
(w1, w2, w3) = λ (x1, x2, x3) + (1 − λ)(y1, y2, y3).
We have, using (i),
2 w1 − w2 + w3 = λ (2 x1 − x2 + x3) + (1 − λ)(2 y1 − y2 + y3) ≤ 4 λ + 4 (1 − λ) = 4.
Therefore w = λ x + (1 − λ) y ∈ S for all x, y ∈ S and for all λ such that 0 ≤ λ ≤ 1.
Hence S is a convex set.
Example 1.5
Show that in R³, S = { (x1, x2, x3) : |x|² = x1² + x2² + x3² ≤ 1 } is a convex set.
Solution :
Let x = (x1, x2, x3) and y = (y1, y2, y3) ∈ S.
Then |x|² = x1² + x2² + x3² ≤ 1 and |y|² = y1² + y2² + y3² ≤ 1 .......... (i)
Let 0 ≤ λ ≤ 1 and z = λ x + (1 − λ) y, where z = (z1, z2, z3). Then
z = (λ x1 + (1 − λ) y1, λ x2 + (1 − λ) y2, λ x3 + (1 − λ) y3)
and
|z|² = Σ [λ xi + (1 − λ) yi]² (i = 1, 2, 3)
= λ² (x1² + x2² + x3²) + (1 − λ)² (y1² + y2² + y3²) + 2 λ (1 − λ)(x1 y1 + x2 y2 + x3 y3) .......... (ii)
For i = 1, 2, 3, since (xi − yi)² ≥ 0, we have xi yi ≤ (1/2)(xi² + yi²), and therefore
x1 y1 + x2 y2 + x3 y3 ≤ (1/2)(x1² + x2² + x3²) + (1/2)(y1² + y2² + y3²) ≤ (1/2) + (1/2) = 1.
Hence from (ii),
|z|² ≤ λ² + (1 − λ)² + 2 λ (1 − λ) = [λ + (1 − λ)]² = 1.
Therefore z ∈ S, and S is a convex set.
Theorem 1.1
The intersection of any finite number of convex sets is a convex set.
Proof
Let S1, S2, ..., Sn be a finite number of convex sets, and let S = S1 ∩ S2 ∩ ... ∩ Sn.
Let x, y ∈ S and 0 ≤ λ ≤ 1. Then x, y ∈ Si for each i, and since each Si is convex,
λ x + (1 − λ) y ∈ Si for each i = 1, 2, ..., n.
Hence λ x + (1 − λ) y ∈ S1 ∩ S2 ∩ ... ∩ Sn = S.
Therefore S is a convex set.
Theorem 1.2
Let S and T be convex sets in Rⁿ. Then αS + βT is also convex for any α, β in R.
Proof
Let x, y ∈ αS + βT.
Then x = α u1 + β v1 and y = α u2 + β v2, where u1, u2 ∈ S and v1, v2 ∈ T.
For 0 ≤ λ ≤ 1,
λ x + (1 − λ) y = α [λ u1 + (1 − λ) u2] + β [λ v1 + (1 − λ) v2].
Since u1, u2 ∈ S and S is convex, λ u1 + (1 − λ) u2 ∈ S.
Similarly, λ v1 + (1 − λ) v2 ∈ T.
Therefore λ x + (1 − λ) y ∈ αS + βT.
Hence αS + βT is convex.
Definition 1.2
A convex combination of a finite number of points x1, x2, ..., xn is a point
x = λ1 x1 + λ2 x2 + ... + λn xn, where λi ≥ 0 for each i and λ1 + λ2 + ... + λn = 1.
Theorem 1.3
If K is a convex set, then every convex combination of points of K belongs to K; i.e. if
xi ∈ K, λi ≥ 0, i = 1, 2, ..., r, and λ1 + λ2 + ... + λr = 1, then Σ_{i=1}^{r} λi xi ∈ K.
Proof (by induction on r)
For r = 2 the statement is just the definition of convexity. Assume the result holds for any r
points of K, and consider a convex combination of r + 1 points,
Σ_{i=1}^{r+1} λi xi, xi ∈ K, Σ_{i=1}^{r+1} λi = 1, λi ≥ 0, i = 1, 2, ..., r + 1.
Two cases arise:
i) λ_{r+1} = 0
ii) λ_{r+1} ≠ 0
Case (I)
If λ_{r+1} = 0, then Σ_{i=1}^{r+1} λi xi = Σ_{i=1}^{r} λi xi ∈ K by the induction hypothesis.
Case (II)
If λ_{r+1} ≠ 0 (the case λ_{r+1} = 1 is trivial), write
Σ_{i=1}^{r+1} λi xi = Σ_{i=1}^{r} λi xi + λ_{r+1} x_{r+1} = (1 − λ_{r+1}) y + λ_{r+1} x_{r+1},
where y = Σ_{i=1}^{r} μi xi and μi = λi / (1 − λ_{r+1}).
Then μi ≥ 0 and
Σ_{i=1}^{r} μi = ( Σ_{i=1}^{r} λi ) / (1 − λ_{r+1}) = (1 − λ_{r+1}) / (1 − λ_{r+1}) = 1.
Thus μi ≥ 0 and Σ μi = 1, and therefore by the induction hypothesis y ∈ K.
Hence Σ_{i=1}^{r+1} λi xi = (1 − λ_{r+1}) y + λ_{r+1} x_{r+1} ∈ K, because the right hand side
is a convex linear combination of the two points y and x_{r+1} in K, which by hypothesis is convex.
This proves the theorem for r + 1 points. It is true for r = 2 by definition. Hence the theorem
is proved.
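The induction step above can be traced numerically: a convex combination of r + 1 points equals a pairwise convex combination of the last point with a convex combination y of the first r points, using the renormalised weights μi. A small Python sketch with hypothetical points and weights (an illustration, not data from the text):

```python
def convex_combination(lams, pts):
    # x = sum_i lam_i * x_i, computed coordinate-wise
    dim = len(pts[0])
    return tuple(sum(l * p[k] for l, p in zip(lams, pts)) for k in range(dim))

# hypothetical points in R^2 and weights summing to 1
pts = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (1.0, 1.0)]
lams = [0.1, 0.2, 0.3, 0.4]

x = convex_combination(lams, pts)

# induction step: split off the last point
lam_last = lams[-1]
mus = [l / (1 - lam_last) for l in lams[:-1]]   # renormalised weights, sum to 1
y = convex_combination(mus, pts[:-1])
x_pairwise = convex_combination([1 - lam_last, lam_last], [y, pts[-1]])

assert abs(sum(mus) - 1.0) < 1e-12
assert all(abs(a - b) < 1e-12 for a, b in zip(x, x_pairwise))
```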
Definition 1.3
The convex hull of a set S is the intersection of all convex sets containing S. We shall
denote by [S] the convex hull of S.
Remark
Every set has a convex hull, because Rⁿ is a convex set and so there is always at least
one convex set containing every given set. Also, a convex set is its own convex hull.
Theorem 1.4
The convex hull of S is the set of all finite convex combinations of points in S.
Proof
Let K be the set of all finite convex combinations of the points in S.
Then by Theorem 1.3, K is a convex set containing S.
Hence S ⊆ K. Now let K1 be any convex set which contains S. Then by Theorem 1.3, K1 contains
all convex combinations of points of K1, and hence all convex combinations of points of S.
Hence K ⊆ K1.
Thus K is contained in every convex set containing S, and K is itself a convex set containing S.
Hence K is the intersection of all convex sets containing S, i.e. K = [S]:
K is the convex hull of S.
Theorem 1.5
The set of all convex combinations of a finite number of points x1, x2 ,..., xm is a convex
set.
Proof
Let S = { x : x = Σ_{i=1}^{m} λi xi, λi ≥ 0, Σ_{i=1}^{m} λi = 1 }.
To show that S is a convex set, take x′ and x″ in S, so that
x′ = Σ_{i=1}^{m} λi′ xi, λi′ ≥ 0, Σ_{i=1}^{m} λi′ = 1,
x″ = Σ_{i=1}^{m} λi″ xi, λi″ ≥ 0, Σ_{i=1}^{m} λi″ = 1.
For 0 ≤ μ ≤ 1 consider the point
x = μ x′ + (1 − μ) x″ = Σ_{i=1}^{m} [μ λi′ + (1 − μ) λi″] xi.
We can write x = Σ_{i=1}^{m} γi xi, where γi = μ λi′ + (1 − μ) λi″ ≥ 0 and
Σ_{i=1}^{m} γi = μ Σ λi′ + (1 − μ) Σ λi″ = μ·1 + (1 − μ)·1 = 1.
Thus x ∈ S: for each pair of points x′ and x″ in S the line segment joining them is contained in
S. Hence S is a convex set.
Theorem 1.6
Every point of [S], where S ⊆ Rⁿ, can be expressed as a convex combination of at most (n + 1)
points of S.
Proof
By the definition of the convex hull and Theorem 1.1, [S] is a convex set. Let x ∈ [S], so that
x = Σ_{i=1}^{m} λi xi, Σ λi = 1, λi ≥ 0, xi ∈ S.
Let us suppose, if possible, that this representation requires m > n + 1 points. Since the space
Rⁿ is n-dimensional, not more than n vectors in Rⁿ can be linearly independent. Consider the
vectors x1 − xm, x2 − xm, ..., x_{m−1} − xm.
Since m − 1 > n, these (m − 1) vectors cannot be linearly independent; hence there exist scalars
αi, not all zero, such that
Σ_{i=1}^{m−1} αi (xi − xm) = 0
or Σ_{i=1}^{m−1} αi xi − ( Σ_{i=1}^{m−1} αi ) xm = 0
or Σ_{i=1}^{m} αi xi = 0, where αm = − Σ_{i=1}^{m−1} αi,
so that Σ_{i=1}^{m} αi = 0.
Since the αi are not all zero and sum to zero, αi > 0 for at least one i. Define
1/γ = max_i { αi / λi : αi > 0 },
so that γ > 0, and set λi′ = λi − γ αi (i = 1, 2, ..., m).
For αi ≤ 0 clearly λi′ ≥ 0; for αi > 0, αi / λi ≤ 1/γ gives λi − γ αi ≥ 0. Moreover λi′ = 0 for at
least the index attaining the maximum. Also
Σ_{i=1}^{m} λi′ = Σ λi − γ Σ αi = 1 − γ·0 = 1.
Now
Σ_{i=1}^{m} λi′ xi = Σ λi xi − γ Σ αi xi = x − γ·0 = x
(since Σ αi xi = 0).
Thus x has been expressed as a convex combination of at most (m − 1) points of S. Repeating
the argument as long as more than (n + 1) points remain, we finally express x as a convex
combination of at most (n + 1) points of S.
Definition 1.4
A point x of a convex set K is an extreme point or vertex of K if it is not possible to find two
distinct points x1, x2 in K such that
x = λ x1 + (1 − λ) x2, 0 < λ < 1.
Definition (Convex polyhedron)
The set of all convex combinations of a finite number of points x1, x2, ..., xm is called the
convex polyhedron spanned by these points.
Theorem 1.8
A convex polyhedron is a convex set.
Proof
Let y1 and y2 be points of the polyhedron spanned by x1, x2, ..., xm. Then by definition
y1 = Σ_{i=1}^{m} αi xi, Σ αi = 1, αi ≥ 0,
y2 = Σ_{i=1}^{m} βi xi, Σ βi = 1, βi ≥ 0.
Now let
y = λ y1 + (1 − λ) y2, 0 ≤ λ ≤ 1.
Then
y = λ Σ αi xi + (1 − λ) Σ βi xi = Σ_{i=1}^{m} γi xi, where γi = λ αi + (1 − λ) βi.
Each γi ≥ 0 and Σ γi = λ Σ αi + (1 − λ) Σ βi = λ + (1 − λ) = 1.
Hence y belongs to the polyhedron, which is therefore a
convex set.
Theorem 1.9
The set of vertices of a convex polyhedron is a subset of its spanning points.
Proof
Let W be the set of points spanning the convex polyhedron, and V be the set of its
vertices. If possible let y V but y W . Since y is in the poly hedron by definition it is a convex
linear combination of points of W all of which are other than y (by assumption). Hence by
definition y is not a vertex which is a contradiction. Therefore y W or V W .
Remark
It is obvious that there can be spanning points which are not vertices. For example
consider the points A, B, C, D in R2 such that D is in the triangle formed by the vertices A, B, C.
The four points span the triangle ABC but D is not a vertex.
Definition 1.5
Let x ∈ Rⁿ, let c ≠ 0 be a constant row n-vector and let z ∈ R. Then we define:
i) a hyperplane as the set { x : c x = z };
ii) a closed half-space as the set { x : c x ≤ z } or { x : c x ≥ z }.
Definition 1.6
The ε-neighbourhood of a point x0 ∈ Rⁿ is the set { x : |x − x0| < ε }, where
|x| = |(x1, x2, ..., xn)| = (x1² + x2² + ... + xn²)^(1/2).
Definition 1.7
A point x is an interior point of a set S ⊆ Rⁿ if some ε-neighbourhood of x is contained
entirely in S.
Definition 1.8
A point x ∈ Rⁿ is a boundary point of the set S if every ε-neighbourhood of x contains
some points which are in S and some points which are not in S.
For example, in R² consider
S1 = { x : |x| ≤ 1 }, S2 = { x : |x| < 1 }.
The points on the circumference of the circle x1² + x2² = 1 are the boundary points of both
S1 and S2; S1 contains all its boundary points while S2 contains none of them.
Definition 1.9
A set is said to be closed if it contains all its boundary points, and is said to be open if its
complement is closed.
Definition 1.10
A set S is said to be bounded from below if there exists y ∈ Rⁿ with each component
finite such that y ≤ x for every x ∈ S. (Note: y ≤ x means yj ≤ xj, j = 1, 2, ..., n.)
Definition 1.11
A set S is bounded if there exists a finite real number M > 0 such that |x| ≤ M for all x in S.
Corollary 1.10
A hyperplane is a closed set.
Proof
Let X = { x : c x = 0 } be a hyperplane.
Let x1 be a boundary point of the hyperplane, and suppose it is not a point of the hyperplane.
Then either c x1 > 0 or c x1 < 0; say c x1 = δ > 0 (the other case is similar).
For any x,
c x = c x1 + c (x − x1) ≥ c x1 − |c| |x − x1|.
Consider the ε-neighbourhood of x1, { x : |x − x1| < ε }, where ε is an arbitrary positive number.
Let ε = δ / (2 |c|). Then for every x in this neighbourhood,
c x ≥ δ − |c| ε = δ − δ/2 = δ/2 > 0.
This shows that every such x is in the open half-space c x > 0. Hence there exists a neighbourhood
of x1 which contains no point of the hyperplane c x = 0, so x1 is not a boundary point of the
hyperplane. This is a contradiction. Thus there is no boundary point of the hyperplane which is
not in the hyperplane. Hence the hyperplane is a closed set.
Definition 1.12
In Rⁿ, every hyperplane { x : c x = z } determines two open half-spaces
X1 = { x : c x > z } and X2 = { x : c x < z }
and two closed half-spaces
X3 = { x : c x ≥ z } and X4 = { x : c x ≤ z }.
Corollary 1.11
A hyperplane is a convex set.
Proof
Let X = { x : c x = z } be a hyperplane and let x1, x2 be any two points of this hyperplane.
Then c x1 = z and c x2 = z. Now if 0 ≤ λ ≤ 1, we have
c [λ x1 + (1 − λ) x2] = λ c x1 + (1 − λ) c x2 = λ z + (1 − λ) z = z.
Therefore the point λ x1 + (1 − λ) x2, for 0 ≤ λ ≤ 1, is in the hyperplane. Hence the hyperplane
is a convex set.
Corollary 1.12
The closed half-spaces H1 = { x : c x ≤ z } and H2 = { x : c x ≥ z } are convex sets.
Proof
Let x1, x2 be any two points of H1. Then c x1 ≤ z and c x2 ≤ z. If 0 ≤ λ ≤ 1,
c [λ x1 + (1 − λ) x2] = λ c x1 + (1 − λ) c x2 ≤ λ z + (1 − λ) z = z.
Hence λ x1 + (1 − λ) x2 ∈ H1, so H1 is convex. The proof for H2 is similar.
Corollary 1.13
The open half-spaces H1 = { x : c x < z } and H2 = { x : c x > z } are convex sets.
Proof
Let x1, x2 be any two points of H1.
Then c x1 < z and c x2 < z.
If 0 ≤ λ ≤ 1, we have
c [λ x1 + (1 − λ) x2] = λ c x1 + (1 − λ) c x2 < λ z + (1 − λ) z = z.
Hence H1 is convex; similarly for H2.
Definition 1.13 (Supporting hyperplane)
Let S ⊆ Rⁿ be a closed convex set and let w be a boundary point of S. Then a hyperplane
c x = z is called a supporting hyperplane of S at w if
i) c w = z, and
ii) S ⊆ H⁺ or S ⊆ H⁻,
where H⁺ = { x : c x ≥ z } and H⁻ = { x : c x ≤ z }.
Remarks
1) The supporting hyperplane need not be unique.
2) S may intersect the supporting hyperplane in more than one boundary point.
Theorem 1.14
Let S be a closed convex set. Then S has extreme points in every supporting hyperplane.
Proof
Let w be a boundary point of the closed convex set S, and let c x = z be a supporting
hyperplane at w, with S ⊆ { x : c x ≤ z } say. Let B = S ∩ { x : c x = z }.
Then B is a closed convex set and B ≠ ∅, for w ∈ B.
Let b be an extreme point of B, and let us assume to the contrary that b is not an extreme
point of S. Then there exist x1, x2 ∈ S, x1 ≠ x2, such that
b = λ x1 + (1 − λ) x2, 0 < λ < 1.
Therefore c b = λ c x1 + (1 − λ) c x2 .......... (i)
Since S lies in the half-space c x ≤ z, we have c x1 ≤ z and c x2 ≤ z. If c x1 < z or c x2 < z,
then (i) gives
c b < λ z + (1 − λ) z = z,
contradicting c b = z (because b ∈ B). Hence c x1 = z and c x2 = z, so that x1, x2 ∈ B; but then
b is a convex combination of two distinct points of B, contradicting the fact that b is an extreme
point of B. Therefore every extreme point of B is an extreme point of S, i.e. S has extreme
points in its supporting hyperplane.
Theorem 1.15 (Separating Hyperplane)
Let S ⊆ Rⁿ be a closed convex set. Then for any point y not in S, there is a hyperplane
containing y such that S is contained in one of the open half-spaces determined by the
hyperplane.
Proof
[Figure : the point y outside S, its nearest point w in S, and the separating hyperplane c x = z.]
Since S is closed and y ∉ S, there exists a point w ∈ S nearest to y, i.e.
|w − y| ≤ |x − y| for all x ∈ S .......... (i)
Let u ∈ S be arbitrary. Since S is convex,
λ u + (1 − λ) w ∈ S for 0 ≤ λ ≤ 1 .......... (ii)
By (i), |λ u + (1 − λ) w − y| ≥ |w − y|, i.e.
|(w − y) + λ (u − w)|² ≥ |w − y|²
|w − y|² + λ² |u − w|² + 2 λ (w − y)·(u − w) ≥ |w − y|²
λ² |u − w|² + 2 λ (w − y)·(u − w) ≥ 0.
Dividing by λ > 0,
λ |u − w|² + 2 (w − y)·(u − w) ≥ 0.
Letting λ → 0, and writing c = w − y, we get
(w − y)·(u − w) ≥ 0 or c (u − w) ≥ 0, i.e. c u ≥ c w,
or c u − c y ≥ c w − c y,
or c (u − y) ≥ c (w − y) = |c|² > 0.
Hence c u > c y. Putting c y = z, we get c u > z for every u ∈ S.
Thus the hyperplane c x = z contains y, while S lies entirely in the open half-space c x > z.
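The construction in the proof can be illustrated numerically. Taking S to be the closed unit ball (for which the nearest point w to an outside point y is y/|y|), the hyperplane c x = z with c = w − y and z = c y separates y from S. A Python sketch (the choice of S and y is an assumed example, not from the text):

```python
import math, random

def nearest_point_in_ball(y):
    # S is the closed unit ball; for y outside S the nearest point is y/|y|
    norm = math.sqrt(sum(t * t for t in y))
    return tuple(t / norm for t in y)

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

y = (3.0, 4.0)                               # a point outside the unit ball (|y| = 5)
w = nearest_point_in_ball(y)                 # nearest point of S to y
c = tuple(wi - yi for wi, yi in zip(w, y))   # normal of the separating hyperplane
z = dot(c, y)                                # the hyperplane c.x = z contains y

random.seed(1)
for _ in range(1000):
    # rejection-sample points u of the unit ball
    u = (random.uniform(-1, 1), random.uniform(-1, 1))
    if dot(u, u) <= 1.0:
        assert dot(c, u) > z                 # S lies in the open half-space c.x > z
```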
Definition 1.14 (Convex function)
Let S be a non-empty convex subset of Rⁿ. A function f(x) on S is said to be convex if
for any two vectors x1 and x2 in S,
f(λ x1 + (1 − λ) x2) ≤ λ f(x1) + (1 − λ) f(x2), 0 ≤ λ ≤ 1.
Definition 1.15 (Strictly convex function)
f(x) is said to be strictly convex on S if for any two distinct vectors x1, x2 in S,
f(λ x1 + (1 − λ) x2) < λ f(x1) + (1 − λ) f(x2), 0 < λ < 1.
[Fig. A : graph of a strictly convex function y = f(x).]
It follows from the above two definitions that every strictly convex function is also convex.
The graph of a strictly convex function has been illustrated in Fig. A.
Definition 1.16 [Concave (strictly concave) function]
A function f(x) on a non-empty convex subset S of Rⁿ is said to be concave (strictly concave)
if −f(x) is convex (strictly convex).
Clearly, every strictly concave function is also concave. The graph of a strictly concave
function has been illustrated in Fig. B.
[Fig. B : graph of a strictly concave function y = f(x).]
The following results are immediate consequences of the above definitions :
Definition 1.17 (Global minimum)
A global minimum of the function f(x) is said to be attained at x0 if f(x0) ≤ f(x) for all x
in the feasible region.
Definition 1.18 (Local minimum)
A local minimum of f(x) is said to be attained at x0 if there exists ε
positive such that f(x0) ≤ f(x) for all x in the feasible region which also satisfy the condition
|x0 − x| < ε.
Example :
The function f(x1) = x1² − x1³, subject to the constraint x1 ≥ 0, has a local minimum at x1 = 0.
Note that f(x1) has no global minimum at all.
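This example is easy to verify numerically: f is nowhere smaller than f(0) in a small neighbourhood of 0 within the feasible region, yet decreases without bound for large x1. A quick Python check (the grid of sample points is chosen only for illustration):

```python
def f(x1):
    # f(x1) = x1**2 - x1**3 on the feasible region x1 >= 0
    return x1 ** 2 - x1 ** 3

# x1 = 0 is a local minimum: f(0) = 0 and f > 0 just to the right of 0,
# since f(t) = t**2 * (1 - t) > 0 for 0 < t < 1
eps_points = [0.001 * k for k in range(1, 100)]   # points in (0, 0.1)
assert all(f(t) > f(0.0) for t in eps_points)

# but there is no global minimum: f decreases without bound as x1 grows
assert f(10.0) < f(0.0)
assert f(100.0) < f(10.0)
```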
Theorem 1.16
Let f(x) be a convex function on a convex set S. If f(x) has a local minimum on S, then
this local minimum is also a global minimum on S.
Proof :
Let f(x) have a local minimum f(x0) at x0 which is not a global minimum on S. Then
there exists at least one x1 in S (x1 ≠ x0) such that f(x1) < f(x0). Since f(x) is a convex function
on S, we have, for 0 < λ ≤ 1,
f(λ x1 + (1 − λ) x0) ≤ λ f(x1) + (1 − λ) f(x0) < λ f(x0) + (1 − λ) f(x0) = f(x0).
Thus f(λ x1 + (1 − λ) x0) < f(x0). Moreover
|λ x1 + (1 − λ) x0 − x0| = λ |x1 − x0| < ε whenever λ < ε / |x1 − x0|.
Thus λ x1 + (1 − λ) x0 gives a smaller value for f(x) within the ε-neighbourhood of x0
whenever λ < min { 1, ε / |x1 − x0| }. This contradicts the fact that f(x) takes on a local minimum
at x0. Hence x0 is a global minimum point.
Corollary 1.17
If a function f(x) has a local minimum on a convex set S on which it is strictly convex,
then this local minimum is also a global minimum on that set. This global minimum is attained at
a single point.
Theorem 1.18
Let f(x) be a convex function on a convex set S. Then the set of points in S at which
f(x) takes on its global minimum is a convex set.
Proof :
The result is obvious if the global minimum is attained at just a single point. Let us
assume that the global minimum is attained at two different points x1 and x2 of S. Then
f(x1) = f(x2).
Now, since f(x) is convex, for 0 ≤ λ ≤ 1,
f(λ x2 + (1 − λ) x1) ≤ λ f(x2) + (1 − λ) f(x1) = f(x1).
On the other hand, f(x1) is the global minimum, so
f(λ x2 + (1 − λ) x1) ≥ f(x1).
Hence f(λ x2 + (1 − λ) x1) = f(x1).
Thus every point x = λ x2 + (1 − λ) x1 corresponds to a global minimum. The set of all such
x is, obviously, a convex set.
Corollary 1.19
If the global minimum is attainable at two different points of S, then it is attainable at an
infinite number of points of S.
Theorem 1.20
Let f(x) be differentiable on its domain, an open convex set S ⊆ Rⁿ. If
f(x2) − f(x1) ≥ (x2 − x1)ᵀ ∇f(x1) for all x1, x2 ∈ S,
then f(x) is convex on S.
Proof
Since S is convex, x1, x2 ∈ S implies that x0 = λ x2 + (1 − λ) x1 ∈ S for 0 ≤ λ ≤ 1. By
hypothesis,
f(x1) − f(x0) ≥ (x1 − x0)ᵀ ∇f(x0) .......... (i)
f(x2) − f(x0) ≥ (x2 − x0)ᵀ ∇f(x0) .......... (ii)
Multiplying (ii) by λ and (i) by (1 − λ) and adding,
λ f(x2) + (1 − λ) f(x1) − f(x0) ≥ [λ x2 + (1 − λ) x1 − x0]ᵀ ∇f(x0) = [x0 − x0]ᵀ ∇f(x0) = 0.
Using the definition of x0, this yields
λ f(x2) + (1 − λ) f(x1) ≥ f(λ x2 + (1 − λ) x1),
which implies that f(x) is convex.
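The gradient inequality of the theorem can be spot-checked for a concrete convex function. The sketch below uses the convex quadratic f(x1, x2) = x1² + 2 x2², whose gradient is (2 x1, 4 x2); this function is an assumed example, not taken from the text:

```python
import random

def f(x):
    # a convex quadratic on R^2 (assumed example)
    x1, x2 = x
    return x1 ** 2 + 2 * x2 ** 2

def grad_f(x):
    x1, x2 = x
    return (2 * x1, 4 * x2)

random.seed(2)
for _ in range(1000):
    p = (random.uniform(-5, 5), random.uniform(-5, 5))
    q = (random.uniform(-5, 5), random.uniform(-5, 5))
    lhs = f(q) - f(p)
    # (q - p)^T grad f(p)
    rhs = sum((b - a) * g for a, b, g in zip(p, q, grad_f(p)))
    assert lhs >= rhs - 1e-9   # f(x2) - f(x1) >= (x2 - x1)^T grad f(x1)
```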
EXERCISES
3) a) Show that S = { (x1, x2, x3) : 2 x1 − x2 + x3 ≤ 4 } ⊆ R³ is a convex set.
b) Show that in R³ the closed ball x1² + x2² + x3² ≤ 1 is a convex set.
4) a) Show that the closed half-spaces H1 = { x : c x ≤ z } and H2 = { x : c x ≥ z } are
convex sets.
b) Show that the open half-spaces { x : c x < z } and { x : c x > z } are convex sets.
c) Show that the intersection of any finite number of convex sets is a convex set.
5) a) Show that S = { (x1, x2, x3) : 2 x1 − x2 + x3 ≤ 4, x1 + 2 x2 − x3 ≥ 1 } is a convex
set.
b) Let A be an m × n matrix and b be an m-vector. Then show that
{ x ∈ Rⁿ : A x ≤ b } is a convex set.
c) Let S and T be convex sets in Rⁿ. Then for any scalars α, β prove that
αS + βT is a convex set.
d) Prove that the set of all convex combinations of a finite number of points
x1, x2, ..., xm is a convex set.
6) a) If V is any finite subset of vectors in Rⁿ, then prove that the convex hull of
V is the set of all convex combinations of vectors in V.
b) If A = { x, y } ⊆ Rⁿ, then prove that [A] is the line segment
{ λ x + (1 − λ) y : 0 ≤ λ ≤ 1 } joining x and y.
c) Let S be a convex set with a non-empty interior. Then prove that cl S is
also a convex set.
8) a) Let S ⊆ Rⁿ be a closed convex set and y ∉ S. Then prove that there exists a
unique x0 ∈ S such that |y − x0| = min { |y − x| : x ∈ S }.
b) Let X ⊆ Rⁿ be a closed convex set. Then show that for any point y not in
X there exists a hyperplane containing y such that X is contained in one of the
open half-spaces determined by the hyperplane.
UNIT 02
LINEAR PROGRAMMING
2.0 INTRODUCTION
In 1947, George Dantzig and his associates, while working in the US Department of the Air
Force, observed that a large number of military planning problems could be formulated as
maximizing / minimizing a linear function (profit / cost) whose variables were restricted to values
satisfying a system of linear constraints (e.g. 2 x1 + 3 x2 ≤ 5). The term programming refers to
the process of determining a particular plan of action. Since the objective function (profit / cost)
and the constraints are linear, such problems are called linear programming problems.
The general Linear Programming Problem (L.P.P.)
The general linear programming problem is to find a vector (x1, x2, ..., xn) which minimizes
the linear form (i.e. objective function)
z = c1 x1 + c2 x2 + ... + cn xn .......... (2.1)
subject to the conditions
xj ≥ 0 (j = 1, 2, ..., n) .......... (2.2)
and the linear constraints
ai1 x1 + ai2 x2 + ... + ain xn = bi (i = 1, 2, ..., m) .......... (2.3)
where the aij, bi and cj (i = 1, 2, ..., m; j = 1, 2, ..., n) are given constants and m < n. We
shall assume that the equations (2.3) have been multiplied by (−1) where necessary to make all
bi ≥ 0. The function (2.1) is called the objective function, and the systems (2.2) and (2.3) are
called the constraints.
The general L. P. P. is also denoted by :
Minimize z = Σ_{j=1}^{n} cj xj
subject to Σ_{j=1}^{n} aij xj = bi (i = 1, 2, ..., m), xj ≥ 0 (j = 1, 2, ..., n).
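In practice, a problem given with inequality constraints is brought to the equality form (2.3) by adding a slack variable to each ≤ constraint and subtracting a surplus variable from each ≥ constraint, after making every bi non-negative. A Python sketch of this conversion (the function name and the list-of-rows representation are illustrative choices, not from the text):

```python
def to_standard_form(A, rel, b):
    """Convert constraints sum_j A[i][j]*x_j (rel[i]) b[i], rel[i] in {'<=', '>=', '='},
    into equality form by appending one slack/surplus column per inequality,
    after flipping signs so that every right-hand side is non-negative."""
    m = len(A)
    rows = [list(r) for r in A]
    rhs = list(b)
    rel = list(rel)
    flip = {'<=': '>=', '>=': '<=', '=': '='}
    # make all right-hand sides non-negative (multiply row by -1, flip relation)
    for i in range(m):
        if rhs[i] < 0:
            rows[i] = [-a for a in rows[i]]
            rhs[i] = -rhs[i]
            rel[i] = flip[rel[i]]
    # append a slack (+1) or surplus (-1) column for each inequality
    for i in range(m):
        if rel[i] == '=':
            continue
        col = [0.0] * m
        col[i] = 1.0 if rel[i] == '<=' else -1.0
        for k in range(m):
            rows[k].append(col[k])
    return rows, rhs

# 2*x1 + 3*x2 <= 5 becomes 2*x1 + 3*x2 + x3 = 5 with slack x3 >= 0
eq, rhs = to_standard_form([[2, 3]], ['<='], [5])
assert eq == [[2, 3, 1.0]] and rhs == [5]
```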
Definition 2.1
A feasible solution to the L. P. P. is a vector x = (x1, x2, ..., xn) which satisfies the conditions
(2.2) and (2.3).
Definition 2.2
A basic solution to (2.3) (or L. P. problem) is a solution obtained by setting any n - m
variables equal to zero and solving for the remaining m variables, provided that the determinant
of the coefficients of these m variables is non zero. The m variables are called the basic variables.
Definition 2.3
A basic feasible solution is a basic solution in which all the basic variables are non
negative.
Definition 2.4
A non degenerate basic feasible solution is a basic feasible solution in which all the
basic variables are positive.
Definition 2.5
A feasible solution which either maximizes or minimizes the objective function is called
an optimal feasible solution.
Theorem 2.1
The collection of all feasible solutions to the L. P. P. is a convex set.
Proof
Let F be the set of all feasible solutions. If F has only one point then obviously F is a convex
set. Assume that F has more than one point, and let x1, x2 ∈ F, so that
A x1 = b, x1 ≥ 0 and A x2 = b, x2 ≥ 0.
Let x0 = λ x1 + (1 − λ) x2, where 0 ≤ λ ≤ 1.
Then A x0 = A [λ x1 + (1 − λ) x2]
= λ A x1 + (1 − λ) A x2
= λ b + (1 − λ) b = b.
Also x0 ≥ 0, since λ ≥ 0, 1 − λ ≥ 0 and x1, x2 ≥ 0. Hence x0 ∈ F, and F is a convex set.
The empty set occurs when the constraints of the set cannot be satisfied simultaneously;
in this case the system yields no solution.
An unbounded set implies that the region of feasible solutions is not constrained in at least
one direction.
Finally, a closed set implies that the region of feasible solutions is a convex polyhedron,
since it is defined by the intersection of a finite number of linear constraints.
Note : We shall rewrite the definition of basic solution.
Theorem 2.2
Every basic feasible solution of A x = b is an extreme point of the convex set of feasible
solutions, and conversely every extreme point of the convex set of feasible solutions
is a basic feasible solution of A x = b.
Proof
Let x be a basic feasible solution of A x = b, written as
x = (xB, o), where o is an (n − m)-component null vector. Also assume that the columns of
the matrix A are so arranged that the first m column vectors correspond to xB; we denote
this submatrix of A by B (called the basis matrix) and we denote the remaining (n − m) column
vectors by R. Thus A = (B, R).
Accordingly the system A x = b becomes
(B, R)(xB, o) = b or B xB = b.
By the definition of a basic solution, B must be non-singular.
Hence xB = B⁻¹ b.
To prove that every basic feasible solution is an extreme point of the convex set of
feasible solutions :
If possible, assume that two distinct feasible solutions x1 and x2 exist such that
x = λ x1 + (1 − λ) x2, 0 < λ < 1 .......... (1)
Write x1 = (xB(1), u1) and x2 = (xB(2), u2) .......... (2)
where xB(1) and xB(2) are the first m components of x1 and x2 respectively, and u1, u2
denote the last (n − m) component vectors of x1 and x2 respectively.
From (1) and (2),
(xB, o) = λ (xB(1), u1) + (1 − λ)(xB(2), u2) .......... (3)
i.e. (xB, o) = (λ xB(1) + (1 − λ) xB(2), λ u1 + (1 − λ) u2).
Therefore λ u1 + (1 − λ) u2 = o .......... (4)
Since λ > 0, 1 − λ > 0 and u1 ≥ 0, u2 ≥ 0, this forces
u1 = u2 = o .......... (5)
Hence A x1 = b and A x2 = b reduce to B xB(1) = b and B xB(2) = b, so that
xB(1) = xB(2) = B⁻¹ b = xB.
This shows that x = x1 = x2, which contradicts the fact that x1 ≠ x2. Consequently x cannot
be expressed as a convex combination of any two distinct points in the set of feasible solutions,
and hence it must be an extreme point.
Conversely
Let x = (x1, x2, ..., xn) be an extreme point of the convex set of feasible solutions. We shall
show that x is a basic feasible solution of A x = b by showing that the column vectors associated
with the positive components of x are linearly independent.
Assume that k components of x are positive (the remaining are zeros). Arrange the
variables so that the first k components are positive. Then
Σ_{j=1}^{k} xj aj = b, xj > 0, j = 1, 2, ..., k .......... (6)
If possible, assume that the vectors a1, a2, ..., ak are not linearly independent. Then they
are linearly dependent, and hence there exist scalars αj, not all zero, such that
α1 a1 + α2 a2 + ... + αk ak = 0
or Σ_{j=1}^{k} αj aj = 0 .......... (7)
Multiplying (7) by a scalar δ > 0 and adding it to, and subtracting it from, (6), we obtain
Σ_{j=1}^{k} (xj + δ αj) aj = b and Σ_{j=1}^{k} (xj − δ αj) aj = b.
Consider the two solutions
x̄1 = (x1 + δ α1, x2 + δ α2, ..., xk + δ αk, 0, 0, ..., 0) .......... (8)
x̄2 = (x1 − δ α1, x2 − δ α2, ..., xk − δ αk, 0, 0, ..., 0) .......... (9)
each with (n − k) zero components.
Since xj > 0, select δ such that
0 < δ < min_j { xj / |αj| : αj ≠ 0 }.
Then the first k components of x̄1 and x̄2 will always be positive.
Since the remaining components of x̄1 and x̄2 are zeros, it follows that x̄1 and x̄2 are
feasible solutions different from x. Adding (8) and (9), we obtain
x̄1 + x̄2 = 2 (x1, x2, ..., xk, 0, 0, ..., 0) = 2 x,
so that x = (1/2) x̄1 + (1/2) x̄2.
Thus x can be expressed as a convex combination of two distinct points x̄1 and x̄2 by
selecting λ = 1/2, i.e. x = (1/2) x̄1 + (1 − 1/2) x̄2.
This contradicts the assumption that x is an extreme point of the convex set of feasible
solutions.
Hence a1, a2, ..., ak are linearly independent, and hence x is a basic feasible solution.
Theorem 2.3
If the convex set F of feasible solutions of an L. P. P. is a convex polyhedron with extreme
points x1, x2, ..., xk, then the objective function z = c x attains its maximum at an extreme
point of F; and if it attains this maximum at more than one extreme point, then it takes the same
value at every convex combination of those extreme points.
Proof
Suppose xm is the extreme point among x1, x2, ..., xk at which the value of the objective
function is maximum, say z*,
i.e. z* = max_{1 ≤ i ≤ k} c xi = c xm.
Let x0 ∈ F which is not an extreme point and let z0 be the corresponding value of the
objective function. Since every point of the polyhedron F is a convex combination of its extreme
points,
x0 = λ1 x1 + λ2 x2 + ... + λk xk, λi ≥ 0, λ1 + λ2 + ... + λk = 1.
Then z0 = c x0 = λ1 c x1 + λ2 c x2 + ... + λk c xk
≤ λ1 c xm + λ2 c xm + ... + λk c xm = (λ1 + ... + λk) c xm = c xm,
i.e. z0 ≤ z*.
This implies that the value of the objective function at any point in the set of feasible
solutions is less than or equal to the maximal value z* at an extreme point.
Now let x1, x2, ..., xr (r ≤ k) be the extreme points of the set F at which the objective function
assumes the same optimum value. This means
z* = c x1 = c x2 = ... = c xr.
Further let x = λ1 x1 + λ2 x2 + ... + λr xr, λj ≥ 0, Σ_{j=1}^{r} λj = 1, be a convex combination of these
extreme points.
Then c x = c (λ1 x1 + λ2 x2 + ... + λr xr)
= λ1 c x1 + λ2 c x2 + ... + λr c xr = λ1 z* + ... + λr z*
= (λ1 + λ2 + ... + λr) z* = z*.
Thus c x = z*. This proves the result.
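The result above suggests a brute-force solution method for small problems: since an optimum is attained at an extreme point, i.e. at a basic feasible solution, one can enumerate all bases and take the best objective value. A Python sketch for a system with two constraints, using exact rational arithmetic; the data below are an assumed example, not from the text:

```python
from itertools import combinations
from fractions import Fraction

def bfs_enumerate(A, b):
    """Enumerate basic feasible solutions of A x = b, x >= 0 for a 2 x n system
    by trying every pair of columns as a basis (Cramer's rule, exact Fractions)."""
    n = len(A[0])
    sols = []
    for i, j in combinations(range(n), 2):
        p, q = A[0][i], A[0][j]
        r, s = A[1][i], A[1][j]
        det = p * s - q * r
        if det == 0:
            continue        # columns i, j do not form a basis
        xi = Fraction(b[0] * s - q * b[1], det)
        xj = Fraction(p * b[1] - b[0] * r, det)
        if xi >= 0 and xj >= 0:
            x = [Fraction(0)] * n
            x[i], x[j] = xi, xj
            sols.append(x)
    return sols

# a small assumed system: maximize z = c x over A x = b, x >= 0
A = [[4, 2, 1, 1, 0],
     [1, 2, 3, 0, 1]]
b = [4, 8]
c = [1, 2, 3, 0, 0]

best = max(sum(cj * xj for cj, xj in zip(c, x)) for x in bfs_enumerate(A, b))
print(best)   # maximum of z over all basic feasible solutions; prints 8
```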
Note
Consider the general L. P. P. with
A = [ a11 a12 .......... a1n
      a21 a22 .......... a2n
      .....................
      am1 am2 .......... amn ]
c = (c1, c2, ..., cn), x = (x1, x2, ..., xn), b = (b1, b2, ..., bm),
where the rank of A is r(A) = m < n.
For convenience, column vectors will also be represented by row vectors without using
the transpose symbol (T). So there should be no confusion in understanding the scalar
multiplication of two vectors c and x.
Form an m × m non-singular submatrix B of A, called the basis matrix, whose columns
are linearly independent vectors of A. Let these column vectors be renamed β1, β2, ..., βm.
Then any column aj of A can be expressed as a linear combination of β1, β2, ..., βm:
aj = y1j β1 + y2j β2 + ... + ymj βm (j = 1, 2, ..., n),
i.e. aj = B yj, where yj = (y1j, y2j, ..., ymj) and the yij (i = 1, ..., m) are scalars,
i.e. yj = B⁻¹ aj.
The vector yj will change if the columns of A forming B change. Any basis matrix B will
yield a basic solution of A x = b. The solution may be denoted by the m-component vector
xB = (xB1, xB2, ..., xBm), where xB is determined from xB = B⁻¹ b .......... (4)
Note that xBi corresponds to the column i of the matrix B. The variables xB1, xB2, ..., xBm
are called basic variables, and the remaining (n − m) variables are non-basic variables.
Let cB = (cB1, cB2, ..., cBm),
where cBi is the coefficient of the basic variable xBi in the objective function. Since the non-basic
variables are zero, the value of the objective function at a basic solution is
z = cB1 xB1 + cB2 xB2 + ... + cBm xBm
= (cB1, ..., cBm) · (xB1, ..., xBm),
i.e. z = cB xB .......... (5)
Further, for each column aj define
zj = y1j cB1 + y2j cB2 + ... + ymj cBm = Σ_{i=1}^{m} cBi yij
= (cB1, ..., cBm) · (y1j, y2j, ..., ymj),
i.e. zj = cB yj.
Example 2.1
Illustrate the above definitions and notations for the following L. P. problem.
Maximize z = x1 + 2 x2 + 3 x3 + 0 x4 + 0 x5
subject to 4 x1 + 2 x2 + x3 + x4 = 4
x1 + 2 x2 + 3 x3 + x5 = 8
Solution :
The constraint equations in matrix form may be written as A x = b, where
A = [ 4 2 1 1 0
      1 2 3 0 1 ], x = (x1, x2, x3, x4, x5), b = (4, 8),
with columns a1 = (4, 1), a2 = (2, 2), a3 = (1, 3), a4 = (1, 0), a5 = (0, 1).
A basis matrix B = (β1, β2) is formed using the columns a3 and a1, where
β1 = a3 = (1, 3), β2 = a1 = (4, 1).
The rank of the matrix A is 2, and the column vectors a3, a1 are linearly independent and
thus form a basis for R². Thus the basis matrix is
B = (β1, β2) = [ 1 4
                 3 1 ]
Then the basic feasible solution is xB = B⁻¹ b = (1/|B|) adj(B) · b.
Here |B| = 1·1 − 4·3 = −11, so
B⁻¹ = (1/11) [ −1  4
                3 −1 ]
and
xB = (1/11) (−1·4 + 4·8, 3·4 − 1·8) = (1/11) (28, 4) = (28/11, 4/11).
Hence the basic solution is xB1 = x3 = 28/11, xB2 = x1 = 4/11, and the remaining non-basic
variables are (always) zero, i.e. x2 = x4 = x5 = 0.
cB1 = coeff. of xB1 = coeff. of x3 = c3 = 3,
cB2 = coeff. of xB2 = coeff. of x1 = c1 = 1.
z = cB xB = (3, 1) · (28/11, 4/11) = 84/11 + 4/11 = 88/11 = 8.
Also, any vector aj (j = 1, 2, 3, 4, 5) can be expressed as a linear combination of the
vectors βj (j = 1, 2):
aj = y1j β1 + y2j β2 = y1j a3 + y2j a1.
For j = 2 : (2, 2) = y12 (1, 3) + y22 (4, 1) gives y12 + 4 y22 = 2 and 3 y12 + y22 = 2.
Hence y12 = 6/11 and y22 = 4/11.
z2 = cB y2 = (3, 1) · (6/11, 4/11)
= 3·(6/11) + 1·(4/11) = 22/11 = 2.
Similarly z1, z3, z4 and z5 can also be obtained.
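The computations of Example 2.1 can be reproduced exactly with rational arithmetic. The sketch below recomputes B⁻¹, xB, z, y2 and z2 for the same basis (a3, a1), using Python's `fractions` module:

```python
from fractions import Fraction as F

# data of Example 2.1: columns of A, right-hand side, objective coefficients
a = {1: (4, 1), 2: (2, 2), 3: (1, 3), 4: (1, 0), 5: (0, 1)}
b = (4, 8)
c = {1: 1, 2: 2, 3: 3, 4: 0, 5: 0}

# basis B = (beta1, beta2) = (a3, a1)
B = [[F(1), F(4)],
     [F(3), F(1)]]
det = B[0][0] * B[1][1] - B[0][1] * B[1][0]          # = -11
Binv = [[ B[1][1] / det, -B[0][1] / det],            # 2x2 inverse via adjugate
        [-B[1][0] / det,  B[0][0] / det]]

def mul(M, v):
    # 2x2 matrix times 2-vector
    return tuple(M[i][0] * v[0] + M[i][1] * v[1] for i in range(2))

xB = mul(Binv, b)                     # (28/11, 4/11): x3 = 28/11, x1 = 4/11
cB = (c[3], c[1])                     # (3, 1)
z = cB[0] * xB[0] + cB[1] * xB[1]     # cB . xB = 88/11 = 8

y2 = mul(Binv, a[2])                  # B^-1 a2 = (6/11, 4/11)
z2 = cB[0] * y2[0] + cB[1] * y2[1]    # cB . y2 = 22/11 = 2

assert xB == (F(28, 11), F(4, 11))
assert z == 8 and y2 == (F(6, 11), F(4, 11)) and z2 == 2
```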
Theorem 2.4
If the system A x = b, x ≥ 0 has a feasible solution, then it has a basic feasible solution.
Proof
To prove this, assume that there exists a feasible solution to A x = b with p ≤ n positive
variables.
Number the variables so that the first p variables are positive. Then the feasible solution
can be written as
Σ_{j=1}^{p} xj aj = b .......... (1)
with xj > 0 (j = 1, 2, ..., p), xj = 0 (j = p + 1, p + 2, ..., n) .......... (2)
Case (i)
Suppose the vectors aj (j = 1, 2, ..., p) are linearly independent. Then p ≤ m, and the
given solution is a basic feasible solution.
Case (ii)
Suppose the vectors aj (j = 1, 2, ..., p) are linearly dependent. We shall show that under
these circumstances it is possible to reduce the number of positive variables step by step until
the columns associated with the positive variables are linearly independent.
When the aj (j = 1, 2, ..., p) are linearly dependent, there exist αj, not all zero, such that
Σ_{j=1}^{p} αj aj = 0 .......... (3)
and we have
Σ_{j=1}^{p} xj aj = b, xj > 0 (j = 1, 2, ..., p) .......... (4)
We now reduce one of the positive variables to zero. Suppose αr ≠ 0, so that the vector ar
in (3) can be expressed in terms of the remaining (p − 1) vectors:
ar = − Σ_{j ≠ r} (αj / αr) aj .......... (5)
Substituting (5) into (4),
Σ_{j=1, j ≠ r}^{p} [ xj − xr (αj / αr) ] aj = b .......... (6)
Here we have not more than (p − 1) variables. However, we are not sure that all these
variables are non-negative (in general, if we choose ar arbitrarily, some variables may be negative).
We wish to obtain
xj − (αj / αr) xr ≥ 0 (j = 1, 2, ..., p), j ≠ r .......... (7)
For any j for which αj = 0, (7) will be satisfied automatically. When αj ≠ 0 we have,
dividing (7) by αj,
xj / αj ≥ xr / αr if αj > 0 .......... (8)
xj / αj ≤ xr / αr if αj < 0 .......... (9)
Both (8) and (9) are satisfied if we choose r such that
xr / αr = min_j { xj / αj : αj > 0 } .......... (10)
(after multiplying (3) by −1 if necessary, we may assume that at least one αj > 0; then
xr / αr > 0 ≥ xj / αj whenever αj < 0).
Thus, with r chosen by (10),
Σ_{j=1, j ≠ r}^{p} [ xj − xr (αj / αr) ] aj = b
is a feasible solution with not more than (p − 1) positive variables.
If the columns associated with the positive variables are now linearly independent, by case (i)
we have a basic feasible solution. If the columns associated with the positive variables are still
linearly dependent, we can repeat the same procedure and reduce one more of the positive
variables to 0. Ultimately we shall arrive at a solution such that the columns corresponding to the
positive variables are linearly independent. (Note that a single non-zero vector is always linearly
independent.)
OR
Theorem 2.5
If a linear programming problem
max. z = c x  s. t.  A x = b,  x ≥ 0
has at least one feasible solution, then it has at least one basic feasible solution.
Proof
Let
x⁰ = ( x₁, x₂, ..., x_k, 0, 0, ..., 0 )
be a feasible solution to the L. P. P. with positive components x₁, x₂, ..., x_k.
Let a₁, a₂, ..., a_k be the first k columns of A (associated with the positive variables
x₁, x₂, ..., x_k respectively).
Then by hypothesis

Σ_{j=1}^{k} x_j a_j = b .......... (1)

Case (i)
Suppose a₁, a₂, ..., a_k are linearly independent. In this case x⁰ = ( x₁, x₂, ..., x_k, 0, ..., 0 ) is a
basic feasible solution.
Case (ii)
Suppose a₁, a₂, ..., a_k are linearly dependent. Then there exist α_j, not all zero, such that

α₁ a₁ + α₂ a₂ + ... + α_k a_k = 0, with at least one α_j > 0 .......... (2)

(multiplying through by −1 if necessary, we may assume some α_j > 0).

Let v = max_{1 ≤ j ≤ k} { α_j / x_j } > 0 (i.e. the max is attained at some j for which α_j > 0).

Multiply (2) by 1/v and then subtract from (1) to get

Σ_{j=1}^{k} x_j a_j − (1/v) Σ_{j=1}^{k} α_j a_j = b

Σ_{j=1}^{k} ( x_j − α_j/v ) a_j = b .......... (3)

Hence

x̂ = ( x₁ − α₁/v, x₂ − α₂/v, ..., x_k − α_k/v, 0, 0, ..., 0 )

is a new solution of A x = b.
We have v ≥ α_j/x_j, i.e. x_j − α_j/v ≥ 0 ( 1 ≤ j ≤ k ), so the new solution x̂ satisfies the
non-negativity restriction. Since x_j − α_j/v = 0 for at least one j (any j attaining the maximum
in the definition of v), x̂ is a feasible solution with at the most k − 1 positive variables; all
other variables are 0.

If the columns associated with the positive variables are still linearly dependent, repeat
the above procedure. Continuing in this way we get column vectors associated with the positive
variables which are linearly independent. Thus by case (i) we get a basic feasible solution.
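The step-by-step reduction used in both proofs can be sketched in code. The helper below is an illustrative sketch (the function names are not from the text): it finds a dependence relation among the columns of the positive variables by Gaussian elimination and drives one positive variable to zero per pass, using exact rational arithmetic. Depending on which variable is driven to zero, different basic feasible solutions of the same system may be reached.

```python
from fractions import Fraction

def null_vector(cols):
    """Return alpha (not all zero) with sum_j alpha[j]*cols[j] = 0,
    or None if the columns are linearly independent."""
    p, m = len(cols), len(cols[0])
    # matrix whose j-th column is cols[j], stored row by row
    M = [[Fraction(cols[j][i]) for j in range(p)] for i in range(m)]
    pivots, row = [], 0
    for col in range(p):
        piv = next((r for r in range(row, m) if M[r][col] != 0), None) if row < m else None
        if piv is None:
            # free column: express it in terms of the pivot columns
            alpha = [Fraction(0)] * p
            alpha[col] = Fraction(1)
            for r, c in pivots:
                alpha[c] = -M[r][col] / M[r][c]
            return alpha
        M[row], M[piv] = M[piv], M[row]
        for r in range(m):                      # full elimination above and below
            if r != row and M[r][col] != 0:
                f = M[r][col] / M[row][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[row])]
        pivots.append((row, col))
        row += 1
    return None

def reduce_to_bfs(A, x):
    """Drive a feasible solution x of A x = b (A given as a list of rows)
    to a basic feasible one, removing one positive variable per pass."""
    x = [Fraction(v) for v in x]
    while True:
        pos = [j for j in range(len(x)) if x[j] > 0]
        if not pos:
            return x
        alpha_pos = null_vector([[row[j] for row in A] for j in pos])
        if alpha_pos is None:                   # independent columns: basic
            return x
        alpha = [Fraction(0)] * len(x)
        for k, j in enumerate(pos):
            alpha[j] = alpha_pos[k]
        if all(a <= 0 for a in alpha):          # ensure some alpha_j > 0
            alpha = [-a for a in alpha]
        # minimum ratio: the variable attaining it becomes zero
        theta = min(x[j] / alpha[j] for j in pos if alpha[j] > 0)
        x = [x[j] - theta * alpha[j] for j in range(len(x))]
```

For the data of Example 2.2 below (columns (2, 3), (1, 1), (4, 5) and x = (2, 3, 1)), this sketch returns the basic feasible solution (3, 5, 0), while the rule used in the text drives x₂ to zero and reaches (1/2, 0, 5/2); both are valid basic feasible solutions of the same system.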
Example 2.2
If x₁ = 2, x₂ = 3, x₃ = 1 is a feasible solution of the L. P. P.
max. z = x₁ + 2x₂ + 4x₃
subject to 2x₁ + x₂ + 4x₃ = 11
3x₁ + x₂ + 5x₃ = 14
x₁, x₂, x₃ ≥ 0,
find a Basic Feasible Solution.
Solution :
Step (1)
We have A x = b, where

A = [ 2 1 4 ; 3 1 5 ],  x = ( x₁, x₂, x₃ )ᵀ,  b = ( 11, 14 )ᵀ

(a matrix written [ r₁ ; r₂ ] lists its rows r₁ and r₂), i.e.

a₁ = ( 2, 3 )ᵀ,  a₂ = ( 1, 1 )ᵀ,  a₃ = ( 4, 5 )ᵀ,  b = ( 11, 14 )ᵀ

Hence 2a₁ + 3a₂ + 1·a₃ = b.
Step (2)
The vectors a₁, a₂, a₃ associated with the positive variables x₁, x₂, x₃ are linearly
dependent, since the rank of the coefficient matrix A is 2, so the maximum number of linearly
independent columns is less than 3. Hence one of the vectors is a linear combination of the
remaining two.

Let a₃ = λ₁ a₁ + λ₂ a₂. Thus

( 4, 5 )ᵀ = λ₁ ( 2, 3 )ᵀ + λ₂ ( 1, 1 )ᵀ

2λ₁ + λ₂ = 4,  3λ₁ + λ₂ = 5

∴ λ₁ = 1, λ₂ = 2

∴ a₃ = a₁ + 2a₂, i.e. a₁ + 2a₂ − a₃ = 0,

i.e. Σ_j α_j a_j = 0, where α₁ = 1, α₂ = 2, α₃ = −1.
Step (3)
Now determine which of the variables x₁, x₂, x₃ should become 0. For this find

v = max_j { α_j / x_j } = max { 1/2, 2/3 } = 2/3 (since α₁ = 1 > 0 and α₂ = 2 > 0, while α₃ < 0)

Then x̂ = ( x₁ − α₁/v, x₂ − α₂/v, x₃ − α₃/v ) is the reduced solution, where

x̂₁ = 2 − 1/(2/3) = 2 − 3/2 = 1/2

x̂₂ = 3 − 2/(2/3) = 3 − 3 = 0

x̂₃ = 1 − (−1)/(2/3) = 1 + 3/2 = 5/2

∴ x̂ = ( 1/2, 0, 5/2 )
Step (4)
Now the solution x̂ = ( 1/2, 0, 5/2 ) is to be tested for basicness. The determinant of the
matrix formed by the columns a₁, a₃ associated with the positive variables is

det [ 2 4 ; 3 5 ] = 10 − 12 = −2 ≠ 0

Obviously a₁, a₃ are linearly independent.

Hence x̂ = ( 1/2, 0, 5/2 ) is a B. F. S.
Theorem 2.6
Let a L. P. P. have a B. F. S. If for any column aj in A but not in B b1, b2 ,..., bm (basicn s
m
vectors for columns in A) we have a j y
i 1
ij bi with at least one y i j 0 (i = 1, 2, ..., m) then we
Proof
Let x_B be a BFS of the LPP, where B = ( b₁, b₂, ..., b_m ) forms a basis for the columns of A.
For any column a_j in A ( a_j ∉ B ), we have

a_j = Σ_{i=1}^{m} y_{ij} b_i

Suppose some y_{rj} ≠ 0. Then

a_j = Σ_{i=1, i≠r}^{m} y_{ij} b_i + y_{rj} b_r

∴ b_r = (1/y_{rj}) a_j − Σ_{i=1, i≠r}^{m} ( y_{ij}/y_{rj} ) b_i

Also B x_B = b gives

Σ_{i=1}^{m} x_{Bi} b_i = b
Substituting for b_r,

b = Σ_{i=1, i≠r}^{m} x_{Bi} b_i + x_{Br} [ (1/y_{rj}) a_j − Σ_{i=1, i≠r}^{m} ( y_{ij}/y_{rj} ) b_i ]

b = Σ_{i=1, i≠r}^{m} ( x_{Bi} − x_{Br} y_{ij}/y_{rj} ) b_i + ( x_{Br}/y_{rj} ) a_j

The new solution x̂_B is also a basic solution, with the basic variables

x̂_{Bi} = x_{Bi} − x_{Br} ( y_{ij}/y_{rj} ),  i = 1, 2, ..., m,  i ≠ r

and x̂_{Br} = x_{Br}/y_{rj}
Case (1)
Let x_{Br} = 0.
In this case the new set of basic variables is obviously non-negative, since we have
assumed the existence of a BFS x_B.
Case (2)
Let x_{Br} > 0. We take y_{rj} > 0, so that x̂_{Br} = x_{Br}/y_{rj} > 0.
For the remaining y_{ij} ( i ≠ r ), either y_{ij} > 0, y_{ij} = 0 or y_{ij} < 0.
If y_{ij} < 0 for some i, then x̂_{Bi} = x_{Bi} − x_{Br} y_{ij}/y_{rj} ≥ x_{Bi} ≥ 0.
If y_{ij} = 0, still x̂_{Bi} = x_{Bi} ≥ 0.
Suppose y_{ij} > 0. We require

x̂_{Bi} = x_{Bi} − x_{Br} ( y_{ij}/y_{rj} ) ≥ 0,  i ≠ r

So we must have x_{Bi}/y_{ij} ≥ x_{Br}/y_{rj}, where y_{ij} > 0.

We select r in such a way that

x_{Br}/y_{rj} = min_i { x_{Bi}/y_{ij} | y_{ij} > 0 }

Then we have a B. F. S.
Example 2.3
Given a basic feasible solution x₃ = 4 and x₄ = 8 to the L. P. P.
max. z = x₁ + 2x₂ subject to
x₁ + 2x₂ + x₃ = 4
x₁ + 4x₂ + x₄ = 8,
obtain a new B. F. S.
Solution :
We have A x = b, where

A = [ 1 2 1 0 ; 1 4 0 1 ],  x = ( x₁, x₂, x₃, x₄ )ᵀ,  b = ( 4, 8 )ᵀ

We have B x_B = b, where B = [ 1 0 ; 0 1 ],

x_B = ( x_{B1}, x_{B2} ) = ( 4, 8 ),  x_{B1} = x₃ = 4,  x_{B2} = x₄ = 8,

b₁ = ( 1, 0 )ᵀ,  b₂ = ( 0, 1 )ᵀ

y₁ = B⁻¹ a₁ = [ 1 0 ; 0 1 ] ( 1, 1 )ᵀ = ( 1, 1 )ᵀ = ( y₁₁, y₂₁ )ᵀ

y₂ = B⁻¹ a₂ = [ 1 0 ; 0 1 ] ( 2, 4 )ᵀ = ( 2, 4 )ᵀ = ( y₁₂, y₂₂ )ᵀ

i.e. a_j = y_{1j} b₁ + y_{2j} b₂.
Since y₁₁ = 1 > 0 and y₂₁ = 1 > 0, we can insert a₁ in B. We now select the vector b_r for
replacement by a₁, where r is determined by the minimum ratio rule:

x_{Br}/y_{r1} = min_i { x_{Bi}/y_{i1} | y_{i1} > 0 }
            = min { x_{B1}/y₁₁, x_{B2}/y₂₁ }
            = min { 4/1, 8/1 } = 4 = x_{B1}/y₁₁

∴ r = 1

B̂ = ( a₁, b₂ ), i.e.

B̂ = [ 1 0 ; 1 1 ]

We can now find the new basic feasible solution x̂_B either by using the result x̂_B = B̂⁻¹ b or by
the transformation formulae

x̂_{Bi} = x_{Bi} − x_{Br} ( y_{ij}/y_{rj} ),  i = 1, ..., m,  i ≠ r

and x̂_{Br} = x_{Br}/y_{rj}.

Since b₁ is removed, x₃ will no longer be a basic variable. In its place x₁, corresponding
to a₁, becomes basic, with x₁ = x̂_{B1}.

x̂_{B1} = x_{B1}/y₁₁ = 4/1 = 4

x̂_{B2} = x_{B2} − x_{B1} ( y₂₁/y₁₁ ) = 8 − 4 (1/1) = 4

∴ x₁ = x̂_{B1} = 4, x₂ = 0, x₃ = 0, x₄ = x̂_{B2} = 4
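The computation x̂_B = B̂⁻¹ b for the new 2×2 basis can be sketched directly with Cramer's rule; the helper name below is illustrative, not from the text.

```python
from fractions import Fraction

def basic_solution_2x2(B, b):
    """x_B = B^(-1) b for a 2x2 basis matrix B (rows given as lists),
    computed exactly by Cramer's rule."""
    (a11, a12), (a21, a22) = B
    det = a11 * a22 - a12 * a21          # nonzero for a genuine basis
    return [Fraction(b[0] * a22 - a12 * b[1], det),
            Fraction(a11 * b[1] - b[0] * a21, det)]

# New basis of Example 2.3: columns a1 = (1, 1) and b2 = (0, 1)
print(basic_solution_2x2([[1, 0], [1, 1]], [4, 8]))  # [Fraction(4, 1), Fraction(4, 1)]
```

This agrees with the transformation formulae above: x₁ = 4 and x₄ = 4.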
Theorem 2.7
If a linear programming problem,
Max. z = c x,  s. t.  A x = b,  x ≥ 0,
has at least one optimal feasible solution, then at least one basic feasible solution must
be optimal.
Proof
Let
x⁰ = ( x₁, x₂, ..., x_k, 0, 0, ..., 0 )  (k positive components, n − k zeros)
be an optimal feasible solution to the given linear programming problem which yields
the optimum value

z⁰ = Σ_{j=1}^{k} c_j x_j.  Also  Σ_{j=1}^{k} x_j a_j = b .......... (1)
If a₁, a₂, ..., a_k are linearly independent then x⁰ is an optimal BFS. Otherwise a₁, a₂, ..., a_k
are linearly dependent and there exist α_j, not all 0, such that

Σ_{j=1}^{k} α_j a_j = 0, where at least one α_j > 0 .......... (2)

Let v = max_{1 ≤ j ≤ k} { α_j / x_j } > 0 .......... (3)

Subtracting (1/v) times (2) from (1),

Σ_{j=1}^{k} ( x_j − α_j/v ) a_j = b .......... (4)

so that

x̂ = ( x₁ − α₁/v, x₂ − α₂/v, ..., x_k − α_k/v, 0, 0, ..., 0 )

is a new solution of A x = b.

From (3), v ≥ α_j/x_j, i.e. x_j − α_j/v ≥ 0, j = 1, 2, ..., k.

Thus x̂ is a feasible solution, and since x_j − α_j/v = 0 for at least one j, x̂ contains at the
most k − 1 non-zero variables, the other variables being zero.

If the column vectors associated with the positive variables are still linearly dependent
we repeat the above process and finally get a solution which is a BFS. So, without loss of
generality, the solution x̂ will be assumed to be a basic feasible solution.
We have to prove that x̂ is also an optimal solution.
The value of the objective function corresponding to the solution x̂ is

ẑ = Σ_{j=1}^{k} c_j ( x_j − α_j/v ) = Σ_{j=1}^{k} c_j x_j − (1/v) Σ_{j=1}^{k} c_j α_j

or ẑ = z⁰ − (1/v) Σ_{j=1}^{k} c_j α_j .......... (5)

(since z⁰ = Σ_{j=1}^{k} c_j x_j)

But, for optimality, ẑ must be equal to z⁰. Hence x̂ will be an optimal solution if and only if
we prove that

Σ_{j=1}^{k} c_j α_j = 0 in equation (5).

If possible, suppose Σ_{j=1}^{k} c_j α_j ≠ 0.
Then there will be two possibilities :

1) Σ_{j=1}^{k} c_j α_j > 0

2) Σ_{j=1}^{k} c_j α_j < 0

Now, in either of these two cases we can find a real number, say r, such that

r Σ_{j=1}^{k} c_j α_j > 0

(in the first case r will be positive and in the second case r will be negative)

i.e. Σ_{j=1}^{k} c_j ( r α_j ) > 0 .......... (6)

Now, adding Σ_{j=1}^{k} c_j x_j to both sides of (6), we have

Σ_{j=1}^{k} c_j ( r α_j ) + Σ_{j=1}^{k} c_j x_j > Σ_{j=1}^{k} c_j x_j

or Σ_{j=1}^{k} c_j ( x_j + r α_j ) > z⁰ .......... (7)
Now, ( x₁ + r α₁, x₂ + r α₂, ..., x_k + r α_k, 0, 0, ..., 0 ) is also a solution of A x = b for
any value of r, since Σ_{j=1}^{k} ( x_j + r α_j ) a_j = b + r·0 = b. We now proceed to show that
for a suitable choice of r it satisfies the non-negativity restrictions also. To satisfy the
non-negativity restriction, we need

x_j + r α_j ≥ 0, j = 1, 2, ..., k

or r α_j ≥ − x_j
We have

r ≥ − x_j/α_j,  if α_j > 0

r ≤ − x_j/α_j,  if α_j < 0

r unrestricted,  if α_j = 0

Thus, we observe that if we select r satisfying the relationship

max_{α_j > 0} ( − x_j/α_j ) ≤ r ≤ min_{α_j < 0} ( − x_j/α_j ) .......... (8)

then x_j + r α_j ≥ 0 for j = 1, 2, ..., k. We note that if there is no j for which α_j > 0, then there
is no lower limit for r, and if there is no j for which α_j < 0, then there is no upper limit for r.

Furthermore,

max_{α_j > 0} ( − x_j/α_j ) < 0 and min_{α_j < 0} ( − x_j/α_j ) > 0

so the interval in (8) is non-empty. This proves that when r lies in the non-empty interval given
by (8), we obtain an infinite number of feasible solutions

( x₁ + r α₁, x₂ + r α₂, ..., x_k + r α_k, 0, 0, ..., 0 )
Now, from (7) we conclude that the left-hand side Σ_{j=1}^{k} c_j ( x_j + r α_j ) yields a value of the
objective function which is strictly greater than the greatest value z⁰ of the objective function. This
contradiction proves that Σ_{j=1}^{k} c_j α_j = 0, and hence x̂ is optimal.
Reduction of any feasible solution to a basic feasible solution
Example 2.4
If x₁ = 2, x₂ = 3, x₃ = 1 is a feasible solution of the linear programming problem :
Max. z = x₁ + 2x₂ + 4x₃,
subject to 2x₁ + x₂ + 4x₃ = 11,
3x₁ + x₂ + 5x₃ = 14,
x₁, x₂, x₃ ≥ 0,
then find a basic feasible solution.
Solution :
We express the above system as

x₁ a₁ + x₂ a₂ + x₃ a₃ = b

where a₁ = ( 2, 3 )ᵀ, a₂ = ( 1, 1 )ᵀ, a₃ = ( 4, 5 )ᵀ, b = ( 11, 14 )ᵀ
Since the vectors a₁, a₂, a₃ associated with the corresponding variables x₁, x₂, x₃ are
linearly dependent, one of the vectors can be expressed in terms of the remaining two. Thus

a₃ = λ₁ a₁ + λ₂ a₂, i.e. ( 4, 5 )ᵀ = λ₁ ( 2, 3 )ᵀ + λ₂ ( 1, 1 )ᵀ .......... (1)

which gives

2λ₁ + λ₂ = 4,
3λ₁ + λ₂ = 5

Solving these two equations we get λ₁ = 1, λ₂ = 2. Now substituting these values of λ₁
and λ₂ in (1), we get the linear combination

a₁ + 2a₂ − a₃ = 0, or Σ_{j=1}^{3} α_j a_j = 0 .......... (2)

where α₁ = 1, α₂ = 2, α₃ = −1.
Now we have to determine which one of the three variables x₁, x₂, x₃ should be zero.

v = max_{1 ≤ j ≤ 3} { α_j / x_j } = max { 1/2, 2/3, −1/1 } = 2/3

Let x̂ = ( x₁ − α₁/v, x₂ − α₂/v, x₃ − α₃/v ).

Then x̂₁ = 2 − 1/(2/3) = 2 − 3/2 = 1/2,

x̂₂ = 3 − 2/(2/3) = 3 − 3 = 0 (which was expected also),

x̂₃ = 1 − (−1)/(2/3) = 1 + 3/2 = 5/2
Now this solution x̂ = ( 1/2, 0, 5/2 ) will be a basic feasible solution if the vectors a₁ = ( 2, 3 )ᵀ
and a₃ = ( 4, 5 )ᵀ associated with the non-zero variables x₁ and x₃ are linearly independent,
which they are, since neither is a scalar multiple of the other.

∴ x₁ = 1/2, x₂ = 0, x₃ = 5/2

To verify, we have (1/2)( 2, 3 )ᵀ + (5/2)( 4, 5 )ᵀ = ( 1 + 10, 3/2 + 25/2 )ᵀ = ( 11, 14 )ᵀ = b.
Example 2.5
Show that the feasible solution x₁ = 1, x₂ = 0, x₃ = 1, z = 3 to the system
x₁ + x₂ + x₃ = 2
x₁ − x₂ + x₃ = 2
x₁, x₂, x₃ ≥ 0
is not basic.
Solution :
First, we express the given system of constraint equations in matrix form :

A = [ 1 1 1 ; 1 −1 1 ],  x = ( x₁, x₂, x₃ )ᵀ,  b = ( 2, 2 )ᵀ

The columns associated with the positive variables x₁ and x₃ are

a₁ = ( 1, 1 )ᵀ and a₃ = ( 1, 1 )ᵀ

Since a₁ = a₃, these columns are linearly dependent, and hence the given feasible solution
is not a basic solution.
Theorem 2.8
Let A = ( a₁, a₂, ..., a_n ) and let B = ( β₁, β₂, ..., β_m ) be a non-singular submatrix of A.
Assume that a non-degenerate basic feasible solution x_B = B⁻¹ b to A x = b yields a
value of the objective function z = c_B x_B. If for any column a_j in A but not in B we have
c_j − z_j > 0, and if at least one y_{ij} > 0 ( i = 1, 2, ..., m ), where a_j = Σ_{i=1}^{m} y_{ij} β_i, then
we can find a new basic feasible solution by replacing one of the columns of B by a_j.
Proof
We shall obtain a new basic feasible solution by inserting the vector a_j (in A but not in B)
in place of some vector β_r of B. Obviously,

a_j ≠ β_i ( i = 1, 2, ..., m ), and a_j = Σ_{i=1}^{m} y_{ij} β_i

Now, by using the replacement theorem, a_j can replace β_r and the resulting set of columns
still forms a basis, provided y_{rj} ≠ 0. In that case

β_r = (1/y_{rj}) a_j − Σ_{i=1, i≠r}^{m} ( y_{ij}/y_{rj} ) β_i .......... (3)

Also, we have B x_B = b,

or ( β₁, β₂, ..., β_m ) ( x_{B1}, x_{B2}, ..., x_{Br}, ..., x_{Bm} )ᵀ = b

or x_{B1} β₁ + x_{B2} β₂ + ... + x_{Br} β_r + ... + x_{Bm} β_m = b

or Σ_{i=1, i≠r}^{m} x_{Bi} β_i + x_{Br} β_r = b .......... (4)
Substituting the value of β_r from (3) in (4), we obtain

Σ_{i=1, i≠r}^{m} x_{Bi} β_i + x_{Br} [ (1/y_{rj}) a_j − Σ_{i=1, i≠r}^{m} ( y_{ij}/y_{rj} ) β_i ] = b

Σ_{i=1, i≠r}^{m} ( x_{Bi} − x_{Br} y_{ij}/y_{rj} ) β_i + ( x_{Br}/y_{rj} ) a_j = b .......... (5 a)

or Σ_{i=1, i≠r}^{m} x̂_{Bi} β_i + x̂_{Br} a_j = b .......... (5 b)

where x̂_{Bi} = x_{Bi} − x_{Br} ( y_{ij}/y_{rj} ),  i = 1, 2, ..., m ;  i ≠ r , .......... (6 a)

x̂_{Br} = x_{Br}/y_{rj}  (for i = r) .......... (6 b)

Thus the new basic solution is

x̂_B = ( x̂_{B1}, x̂_{B2}, ..., x̂_{Br}, ..., x̂_{Bm} )
    = ( x_{B1} − x_{Br} y_{1j}/y_{rj}, x_{B2} − x_{Br} y_{2j}/y_{rj}, ..., x_{Br}/y_{rj}, ..., x_{Bm} − x_{Br} y_{mj}/y_{rj} )

and the other, non-basic, components are zero.
For the new basic solution to be feasible, we require

x̂_{Bi} ≥ 0, i = 1, 2, ..., m

Hence x_{Bi} − x_{Br} ( y_{ij}/y_{rj} ) ≥ 0, i = 1, 2, ..., m, i ≠ r, and .......... (7 a)

x_{Br}/y_{rj} ≥ 0 .......... (7 b)
We see that (7 b) holds, as y_{rj} > 0 and, since we start with a non-degenerate basic
feasible solution, x_{Bi} > 0 ( i = 1, 2, ..., m ). If y_{rj} > 0 and y_{ij} ≤ 0 ( i ≠ r ), then (7 a) is
satisfied. If y_{rj} > 0 and y_{ij} > 0 ( i ≠ r ), then equation (7 a) is satisfied only when

x_{Bi}/y_{ij} − x_{Br}/y_{rj} ≥ 0 (dividing (7 a) by y_{ij} > 0)

or x_{Br}/y_{rj} ≤ x_{Bi}/y_{ij}

or x_{Br}/y_{rj} ≤ Min_i { x_{Bi}/y_{ij} }

Thus, if we select r such that

x_{Br}/y_{rj} = Min_i { x_{Bi}/y_{ij} | y_{ij} > 0 } .......... (8)

then column β_r can be removed from the basis matrix B and replaced by a_j so that the new
basic solution is feasible. This completes the proof.
Note
1) We denote the new non-singular matrix, obtained from B by replacing β_r with
a_j, by B̂ = ( β̂₁, β̂₂, ..., β̂_m ), where β̂_i = β_i, i ≠ r, and β̂_r = a_j.
2) If the minimum in (8) is not unique, the new basic solution will be degenerate.
In this case, the number of positive basic variables will be less than m.
The procedure in the above theorem can be explained by the following numerical
example.
Example 2.6
Given the non-degenerate basic feasible solution x₃ = 4 and x₄ = 8 to the following LP
problem
Max. z = x₁ + 2x₂, subject to
x₁ + 2x₂ + x₃ = 4
x₁ + 4x₂ + x₄ = 8
obtain the new basic feasible solution.
Solution :

x_B = ( x_{B1}, x_{B2} ) = ( 4, 8 ),  B = ( β₁, β₂ ) = [ 1 0 ; 0 1 ],  b = ( 4, 8 )ᵀ

A = [ 1 2 1 0 ; 1 4 0 1 ],  x = ( 0, 0, 4, 8 )ᵀ

The y_j's for every column a_j in A but not in B are

y₁ = B⁻¹ a₁ = [ 1 0 ; 0 1 ] ( 1, 1 )ᵀ = ( 1, 1 )ᵀ = ( y₁₁, y₂₁ )ᵀ

y₂ = B⁻¹ a₂ = [ 1 0 ; 0 1 ] ( 2, 4 )ᵀ = ( 2, 4 )ᵀ = ( y₁₂, y₂₂ )ᵀ
Since y₁₁ = 1, y₂₁ = 1 are > 0, we can insert a₁ in B. We now select β_r for replacement by
a₁, where the suffix r is determined by the minimum ratio rule :

x_{Br}/y_{r1} = Min_i { x_{Bi}/y_{i1} | y_{i1} > 0 }

Therefore

x_{Br}/y_{r1} = Min { x_{B1}/y₁₁, x_{B2}/y₂₁ } = Min { 4/1, 8/1 } = 4/1 = x_{B1}/y₁₁

∴ r = 1

Hence we remove β₁. The new basis matrix becomes

B̂ = ( β̂₁, β̂₂ ) = ( a₁, β₂ ) (because β₁ is replaced by a₁)

  = [ 1 0 ; 1 1 ]
Now we can find the new basic feasible solution x̂_B either by using the result x̂_B = B̂⁻¹ b
or by using the transformation formulae (6 a) and (6 b) of Theorem 2.8.
Hence the new basic feasible solution is :

x̂_{B1} = x_{B1}/y₁₁ = 4/1 = 4

x̂_{B2} = x_{B2} − x_{B1} ( y₂₁/y₁₁ ) = 8 − 4 (1/1) = 4

∴ x₁ = x̂_{B1} = 4, x₂ = 0, x₃ = 0, x₄ = x̂_{B2} = 4

We note that, if we had inserted a₂ instead of a₁, the new basic feasible solution would
have been degenerate. We have developed the procedure for obtaining a new basic feasible
solution. Now we determine the value of the objective function corresponding to this new basic
feasible solution and verify whether ẑ > z, where ẑ denotes the new value of the objective
function. For this, we prove the following theorem.
Theorem 2.9
If a_j is inserted into the basis as in Theorem 2.8 and c_j − z_j > 0, then the new basic
feasible solution yields an improved value ẑ > z of the objective function.
Proof
The value of the objective function for the original basic feasible solution is

z = c_B x_B = ( c_{B1}, c_{B2}, ..., c_{Bm} ) ( x_{B1}, x_{B2}, ..., x_{Bm} )ᵀ

or z = Σ_{i=1}^{m} c_{Bi} x_{Bi} .......... (A)

For the new solution the cost coefficients of the basic variables are ĉ_{Bi} = c_{Bi} ( i ≠ r ),
ĉ_{Br} = c_j, so that

ẑ = Σ_{i=1}^{m} ĉ_{Bi} x̂_{Bi} = Σ_{i=1, i≠r}^{m} c_{Bi} x̂_{Bi} + c_j x̂_{Br}

Substituting the values of the new variables x̂_{Bi} and x̂_{Br} from (6 a) and (6 b) of Theorem
2.8 into the last expression, we get

ẑ = Σ_{i=1, i≠r}^{m} c_{Bi} ( x_{Bi} − x_{Br} y_{ij}/y_{rj} ) + c_j ( x_{Br}/y_{rj} ) ........... (B)

Since the term for which i = r is c_{Br} ( x_{Br} − x_{Br} y_{rj}/y_{rj} ) = 0,
we can include it in the summation (B) without changing ẑ, so that

ẑ = Σ_{i=1}^{m} c_{Bi} ( x_{Bi} − x_{Br} y_{ij}/y_{rj} ) + c_j ( x_{Br}/y_{rj} )

  = Σ_{i=1}^{m} c_{Bi} x_{Bi} − ( x_{Br}/y_{rj} ) Σ_{i=1}^{m} c_{Bi} y_{ij} + ( x_{Br}/y_{rj} ) c_j

  = z − ( x_{Br}/y_{rj} ) z_j + ( x_{Br}/y_{rj} ) c_j  (where z_j = Σ_{i=1}^{m} c_{Bi} y_{ij} = c_B y_j)

∴ ẑ = z + ( c_j − z_j ) ( x_{Br}/y_{rj} )

ẑ = z + ( c_j − z_j ) θ, where θ = x_{Br}/y_{rj} .......... (C)

Now, from (C) we observe that the new value ẑ of the objective function is the original
value z plus the quantity ( c_j − z_j ) θ. Since θ > 0 and c_j − z_j > 0, the value of the
objective function is improved.
Example 2.7
In worked example (2.6), show that the new value of the objective function is improved.
Solution :
Since c₁ = 1, c₂ = 2, c₃ = 0, c₄ = 0, the original solution x₃ = 4, x₄ = 8, x₁ = x₂ = 0 gives

z = 1·0 + 2·0 + 0·4 + 0·8 = 0

Since z₁ = c_B y₁ = ( 0, 0 ) ( 1, 1 )ᵀ = 0

and since c₁ − z₁ = 1 − 0 = 1 > 0, ẑ should exceed z ( = 0 ). From (C) we get

ẑ = z + ( c₁ − z₁ ) x_{B1}/y₁₁ = 0 + (1)(4/1) = 4 > 0 = z
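The update formula (C) is a one-liner in code; the following sketch (helper name illustrative) applies it to the data of Example 2.7.

```python
from fractions import Fraction

def updated_objective(z, c_j, z_j, x_Br, y_rj):
    """z_hat = z + (c_j - z_j) * x_Br / y_rj, equation (C) of Theorem 2.9."""
    return z + (c_j - z_j) * Fraction(x_Br) / y_rj

# Example 2.7: z = 0, c1 - z1 = 1 - 0, x_B1 = 4, y11 = 1
print(updated_objective(0, 1, 0, 4, 1))  # 4
```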
Theorem 2.10
If we select the vector a_k to replace β_r in B, the suffix k can be selected by means of

c_k − z_k = Max_j { c_j − z_j | c_j − z_j > 0 },

so that the value of the objective function becomes

ẑ = z + ( c_j − z_j ) x_{Br}/y_{rj}

Thus, to give the maximum increase in ẑ we should select that value of j for which the term
( x_{Br}/y_{rj} ) ( c_j − z_j ) is maximum. For this we would have to compute x_{Br}/y_{rj} for each
a_j having c_j − z_j > 0 by the rule

x_{Br}/y_{rj} = Min_i { x_{Bi}/y_{ij} | y_{ij} > 0 }

But the change in the objective function depends on both x_{Br}/y_{rj} and c_j − z_j. Thus, to
avoid a large number of computations of x_{Br}/y_{rj}, we may neglect the value of x_{Br}/y_{rj}.
Hence the most convenient and time-saving rule for choosing the vector a_k to enter the
basis B consists of selecting the largest c_j − z_j. This is equivalent to choosing the vector a_k to
replace β_r by means of

c_k − z_k = Max_j { c_j − z_j }, for c_j − z_j > 0.
Note
The following are the advantages of using the above test.
1. The choice of the vector a_k to enter the basis B by using the above criterion usually gives a
large increase in ẑ at each step.
2. In practice, not many more than m iterations are typically needed to reach the optimal
basic feasible solution.
3. It saves time by giving the required solution in a small number of steps.
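The two selection rules just discussed, the largest c_j − z_j to enter and the minimum ratio to leave, can be sketched as follows (an illustrative sketch; the function names are not from the text, and exact Fractions stand in for the tableau entries):

```python
from fractions import Fraction

def entering_index(deltas):
    """Largest c_j - z_j rule; None means the current solution is optimal."""
    k = max(range(len(deltas)), key=lambda j: deltas[j])
    return k if deltas[k] > 0 else None

def leaving_index(x_B, y_k):
    """Minimum ratio rule: index minimizing x_B[i]/y_k[i] over y_k[i] > 0.
    None means no positive entry exists (an unbounded direction)."""
    ratios = [(Fraction(x_B[i]) / y_k[i], i)
              for i in range(len(y_k)) if y_k[i] > 0]
    return min(ratios)[1] if ratios else None

# Data of Example 2.6: c_j - z_j for a1, a2; then the ratio test for a1
print(entering_index([1, 2]))         # 1 (the second delta is largest)
print(leaving_index([4, 8], [1, 1]))  # 0 (the first basis vector leaves)
```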
Surplus Variable
If a constraint has a '≥' sign, then in order to make it an equality we have to subtract
something non-negative from the left-hand side of the inequality.
Definition
The non-negative variable which is subtracted from the left-hand side of a '≥' constraint to
convert it into an equation is called a surplus variable.
e.g. x₁ + x₂ ≥ 3  becomes  x₁ + x₂ − s = 3, s ≥ 0
Step 3
Make all variables non-negative : if a variable x is unrestricted in sign, write x = x′ − x″,
where x′, x″ ≥ 0.
Step 4
Convert the objective function to maximization form :
Min f(x) = − Max ( −f(x) )
Example
Express the following LPP in standard form.
Min z = x₁ + 2x₂ + x₃
Subject to
2x₁ + 3x₂ + 4x₃ ≤ 4
3x₁ + 5x₂ + 2x₃ ≥ 7
Step 1 (slack variable for the '≤' constraint)
2x₁ + 3x₂ + 4x₃ + x₄ = 4
Step 2 (surplus variable for the '≥' constraint)
3x₁ + 5x₂ + 2x₃ − x₅ = 7
Step 3
All the variables x₁, ..., x₅ are already non-negative.
Step 4
Max z′ = −z = −x₁ − 2x₂ − x₃
Subject to the two equations obtained above, with x₁, ..., x₅ ≥ 0.
Example 2.8
Solve the L. P. problem.
Max. z = 3x₁ + 5x₂ + 4x₃
subject to 2x₁ + 3x₂ ≤ 8
2x₂ + 5x₃ ≤ 10
3x₁ + 2x₂ + 4x₃ ≤ 15
and x₁, x₂, x₃ ≥ 0
Solution :
The inequalities are converted into equalities by the introduction of slack variables x₄, x₅
and x₆ as follows.

2x₁ + 3x₂ + 0·x₃ + x₄ = 8
0·x₁ + 2x₂ + 5x₃ + x₅ = 10
3x₁ + 2x₂ + 4x₃ + x₆ = 15

Take x₁ = 0, x₂ = 0, x₃ = 0, so that x₄ = 8, x₅ = 10, x₆ = 15 is the starting basic feasible
solution. Now we construct a starting simplex table. Here we compute Δ_j for all the zero
(non-basic) variables x_j, j = 1, 2, 3, by the formula

Δ_j = c_j − c_B Y_j

Δ₁ = c₁ − c_B Y₁ = 3 − ( 0, 0, 0 ) ( 2, 0, 3 )ᵀ = 3

Δ₂ = c₂ − c_B Y₂ = 5 − ( 0, 0, 0 ) ( 3, 2, 2 )ᵀ = 5

Δ₃ = c₃ − c_B Y₃ = 4 − ( 0, 0, 0 ) ( 0, 5, 4 )ᵀ = 4
Since not all Δ_j are less than or equal to zero, the solution is not optimal. So we
proceed to the next step.

To find the incoming vector :
Since Δ₂ = 5 is the maximum of Δ₁, Δ₂, Δ₃, the incoming vector is Y₂.

Starting simplex table (Table 1) :

 B    c_B   x_B    Y₁   Y₂   Y₃   Y₄   Y₅   Y₆    min ratio x_B/Y₂
 Y₄    0     8      2    3    0    1    0    0    8/3 (min)
 Y₅    0    10      0    2    5    0    1    0    10/2 = 5
 Y₆    0    15      3    2    4    0    0    1    15/2

 z = c_B x_B = 0    x_j : 0  0  0  8  10  15
                    c_j : 3  5  4  0   0   0
                    Δ_j : 3  5  4  x   x   x
To find the outgoing vector :

x_{Br}/Y_{r2} = min_i { x_{Bi}/Y_{i2} | Y_{i2} > 0 }
            = min { x_{B1}/Y₁₂, x_{B2}/Y₂₂, x_{B3}/Y₃₂ }
            = min { 8/3, 5, 15/2 } = 8/3 = x_{B1}/Y₁₂

Hence r = 1.

Since Y₂ is the incoming vector and β₁ ( = Y₄ ) is the outgoing vector, the key element is
y₁₂ = 3, as shown in Table 1.
 x_B    Y₁   Y₂   Y₃   Y₄   Y₅   Y₆
 Y₄ :  8     2    3    0    1    0    0
 Y₅ : 10     0    2    5    0    1    0
 Y₆ : 15     3    2    4    0    0    1

Divide the key row by 3 to get unity at this position, and then subtract 2 times the first
row (obtained after dividing by 3) from the second and third rows :

 x_B       Y₁     Y₂   Y₃   Y₄     Y₅   Y₆
 Y₂ :  8/3    2/3    1    0    1/3    0    0
 Y₅ : 14/3   −4/3    0    5   −2/3    1    0
 Y₆ : 29/3    5/3    0    4   −2/3    0    1

Now we construct the second simplex table, in which β₁ ( = Y₄ ) is replaced by Y₂.
Second simplex table (Table 2) :

 B    c_B   x_B       Y₁     Y₂   Y₃   Y₄     Y₅   Y₆    min ratio x_B/Y₃
 Y₂    5    8/3      2/3     1    0    1/3    0    0     no ratio (Y₁₃ = 0)
 Y₅    0   14/3     −4/3     0    5   −2/3    1    0     14/15 (min)
 Y₆    0   29/3      5/3     0    4   −2/3    0    1     29/12

 z = c_B x_B = 40/3    x_j : 0  8/3  0  0  14/3  29/3
                       c_j : 3  5  4  0  0  0
                       Δ_j : −1/3  x  4  −5/3  x  x
                       incoming vector : Y₃ ;  outgoing vector : Y₅
To test the optimality of the solution, compute Δ_j for the zero variables x₁, x₃ and x₄.

Δ₁ = c₁ − c_B Y₁ = 3 − ( 5, 0, 0 ) ( 2/3, −4/3, 5/3 )ᵀ = 3 − 10/3 = −1/3

Δ₃ = c₃ − c_B Y₃ = 4 − ( 5, 0, 0 ) ( 0, 5, 4 )ᵀ = 4 − 0 = 4

Δ₄ = c₄ − c_B Y₄ = 0 − ( 5, 0, 0 ) ( 1/3, −2/3, −2/3 )ᵀ = −5/3

Since not all Δ_j are less than or equal to zero, this solution is also not optimal.
Since Δ₃ = 4 is the maximum of the Δ_j's, Y₃ is the incoming vector.

x_{Br}/Y_{r3} = min_i { x_{Bi}/Y_{i3} | Y_{i3} > 0 }
            = min { x_{B2}/Y₂₃, x_{B3}/Y₃₃ } (since Y₁₃ = 0)
            = min { 14/15, 29/12 } = 14/15 = x_{B2}/Y₂₃

∴ r = 2. Therefore β₂ ( = Y₅ ) is the outgoing vector and y₂₃ = 5 is the key element.

In order to bring Y₃ in place of β₂ ( = Y₅ ), we make the following intermediate table.
 x_B       Y₁     Y₂   Y₃   Y₄     Y₅   Y₆
 Y₂ :  8/3    2/3    1    0    1/3    0    0
 Y₅ : 14/3   −4/3    0    5   −2/3    1    0
 Y₆ : 29/3    5/3    0    4   −2/3    0    1

Divide the key row by 5 to get 1 at this position, then subtract 4 times the second
row thus obtained from the third row :

 x_B         Y₁      Y₂   Y₃   Y₄      Y₅     Y₆
 Y₂ :  8/3      2/3     1    0    1/3     0      0
 Y₃ : 14/15    −4/15    0    1   −2/15    1/5    0
 Y₆ : 89/15    41/15    0    0   −2/15   −4/5    1

The third simplex table, in which β₂ ( = Y₅ ) is replaced by Y₃, is as follows.
Table 3

 B    c_B   x_B         Y₁      Y₂   Y₃   Y₄      Y₅     Y₆    min ratio x_B/Y₁
 Y₂    5    8/3        2/3      1    0    1/3     0      0     4
 Y₃    4   14/15      −4/15     0    1   −2/15    1/5    0     neg.
 Y₆    0   89/15      41/15     0    0   −2/15   −4/5    1     89/41 (min)

 x_j : 0  8/3  14/15  0  0  89/15
 c_j : 3  5  4  0  0  0
 Δ_j : 11/15  x  x  −17/15  −4/5  x
 incoming vector : Y₁ ;  outgoing vector : Y₆
To test the optimality of the solution, again compute Δ_j for the zero variables x₁, x₄ and x₅.

Δ₁ = c₁ − c_B Y₁ = 3 − ( 5, 4, 0 ) ( 2/3, −4/15, 41/15 )ᵀ = 3 − ( 10/3 − 16/15 ) = 3 − 34/15 = 11/15

Δ₄ = c₄ − c_B Y₄ = 0 − ( 5, 4, 0 ) ( 1/3, −2/15, −2/15 )ᵀ = − ( 5/3 − 8/15 ) = −17/15

Δ₅ = c₅ − c_B Y₅ = 0 − ( 5, 4, 0 ) ( 0, 1/5, −4/5 )ᵀ = −4/5

Since not all the Δ_j's are less than or equal to zero, the solution is not optimal.
Since Δ₁ is the only positive Δ_j, it follows that Y₁ is the incoming vector.

x_{Br}/Y_{r1} = min_i { x_{Bi}/Y_{i1} | Y_{i1} > 0 }
            = min { x_{B1}/Y₁₁, x_{B3}/Y₃₁ } ( Y₂₁ is negative )
            = min { 4, 89/41 } = 89/41

∴ r = 3, i.e. β₃ ( = Y₆ ) is the outgoing vector, and Y₃₁ = 41/15 is the key element.

Again, in order to bring Y₁ in place of β₃ ( = Y₆ ), we make the following intermediate
table.
 x_B         Y₁      Y₂   Y₃   Y₄      Y₅     Y₆
 Y₂ :  8/3      2/3     1    0    1/3     0      0
 Y₃ : 14/15    −4/15    0    1   −2/15    1/5    0
 Y₆ : 89/15    41/15    0    0   −2/15   −4/5    1

Divide the key row by 41/15 to get 1 at this position, then subtract 2/3 times the third
row thus obtained from the first row, and add 4/15 times the third row to the second row :

 x_B         Y₁   Y₂   Y₃   Y₄      Y₅       Y₆
 Y₂ : 50/41     0    1    0   15/41    8/41   −10/41
 Y₃ : 62/41     0    0    1   −6/41    5/41     4/41
 Y₁ : 89/41     1    0    0   −2/41  −12/41    15/41

The fourth simplex table, in which β₃ ( = Y₆ ) is replaced by Y₁, is as follows.
 B    c_B   x_B        Y₁   Y₂   Y₃   Y₄      Y₅       Y₆
 Y₂    5    50/41      0    1    0   15/41    8/41   −10/41
 Y₃    4    62/41      0    0    1   −6/41    5/41     4/41
 Y₁    3    89/41      1    0    0   −2/41  −12/41    15/41

 z = c_B x_B = 765/41    x_j : 89/41  50/41  62/41  0  0  0
                         c_j : 3  5  4  0  0  0
                         Δ_j : x  x  x  −45/41  −24/41  −11/41
To test the optimality of the solution, again compute Δ_j for the zero variables x₄, x₅ and x₆.

Δ₄ = c₄ − c_B Y₄ = 0 − ( 5, 4, 3 ) ( 15/41, −6/41, −2/41 )ᵀ = − ( 75 − 24 − 6 )/41 = −45/41

Δ₅ = c₅ − c_B Y₅ = 0 − ( 5, 4, 3 ) ( 8/41, 5/41, −12/41 )ᵀ = − ( 40 + 20 − 36 )/41 = −24/41

Δ₆ = c₆ − c_B Y₆ = 0 − ( 5, 4, 3 ) ( −10/41, 4/41, 15/41 )ᵀ = − ( −50 + 16 + 45 )/41 = −11/41

Since all the Δ_j's for the zero variables are negative, this solution is optimal.

Hence x₁ = 89/41, x₂ = 50/41, x₃ = 62/41

and max. z = 765/41.
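As a quick sanity check (not part of the text), exact rational arithmetic confirms that the solution just found satisfies all three constraints with equality (so all slacks are zero) and gives z = 765/41:

```python
from fractions import Fraction as F

# Optimal solution reported for Example 2.8
x1, x2, x3 = F(89, 41), F(50, 41), F(62, 41)

print(2*x1 + 3*x2)            # 8   (first constraint tight, x4 = 0)
print(2*x2 + 5*x3)            # 10  (second constraint tight, x5 = 0)
print(3*x1 + 2*x2 + 4*x3)     # 15  (third constraint tight, x6 = 0)
print(3*x1 + 5*x2 + 4*x3)     # 765/41
```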
Computational Procedure for Simplex Method
Example
Max z = 3x₁ + 2x₂
Subject to x₁ + x₂ ≤ 4
x₁ − x₂ ≤ 2 , x₁, x₂ ≥ 0
Answer
Step 1
Convert the given LPP into a standard form :
Max z = 3x₁ + 2x₂
Subject to x₁ + x₂ + x₃ = 4 , x₁ − x₂ + x₄ = 2 , x₁, x₂, x₃, x₄ ≥ 0
Step 2
Construct the starting simplex table. The variables which form the identity matrix in the
starting simplex table are the basic variables; c_B represents the cost of the basic variables.

            c_j :   3    2    0    0
 B.V.  c_B  x_B    x₁   x₂   x₃   x₄
 x₃     0    4      1    1    1    0
 x₄     0    2      1   −1    0    1

Step 3
Calculate Δ_j = c_B x_j − c_j, where x_j denotes the column below x_j. (Note that in this
procedure Δ_j is defined with the opposite sign to the Δ_j of Example 2.8, so here the
solution is optimal when all Δ_j ≥ 0.)

Δ₁ = c_B x₁ − c₁ = 0 − 3 = −3 ,  Δ₂ = c_B x₂ − c₂ = 0 − 2 = −2 ,  Δ₃ = Δ₄ = 0
Step 4 : Optimality Test
(i) If all Δ_j ≥ 0 the solution is optimal. An alternative optimal solution will exist if some Δ_j
corresponding to a non-basic x_j is also zero.
(ii) If at least one Δ_j < 0 then the solution is not optimal, and we therefore proceed to improve
the solution in the next step.
Step 5
Choose the incoming and outgoing variables.
Let Δ_k = Min_j { Δ_j | Δ_j < 0 } ; then x_k is the incoming variable.
If x_{Br}/x_{kr} = Min_i { x_{Bi}/x_{ki} | x_{ki} > 0 } ,
then x_{Br} is the outgoing variable from the set of basic variables x₃ and x₄.
Here Δ_k = Min { −3, −2 } = −3, so x₁ is the incoming variable. Since
Min_i { x_{Bi}/x_{1i} } = Min { 4/1, 2/1 } = 2 ,
x₄ is the outgoing variable.
            c_j :   3    2    0    0    Min ratio
 B.V.  c_B  x_B    x₁   x₂   x₃   x₄
 x₃     0    4      1    1    1    0    4/1 = 4
 x₄     0    2      1   −1    0    1    2/1 = 2 (min)
 Δ_j :             −3   −2    0    0
Step 6
In order to make x₁ a basic variable, perform elementary row operations to convert the
column corresponding to the variable x₁ into a unit vector. Here the operation R₁ − R₂ will make
the column corresponding to the variable x₁ a unit vector. The position of the 1 in the unit vector
depends upon the position of the incoming variable among the basic variables.

            c_j :   3    2    0    0    Min ratio
 B.V.  c_B  x_B    x₁   x₂   x₃   x₄
 x₃     0    2      0    2    1   −1    2/2 = 1 (min)
 x₁     3    2      1   −1    0    1    no ratio
 Δ_j :              0   −5    0    3
Step 4 again : Δ₂ = −5 < 0, so the solution is not yet optimal. Therefore x₂ is the incoming
variable. The component-wise ratio x_B/x₂ is formed only for the first row (2/2 = 1), since the
entry −1 in the second row is negative. The minimum ratio corresponds to x₃, so x₃ is the
outgoing variable. Now make the column corresponding to x₂ a unit vector.
            c_j :   3    2    0     0
 B.V.  c_B  x_B    x₁   x₂   x₃    x₄
 x₂     2    1      0    1   1/2  −1/2
 x₁     3    3      1    0   1/2   1/2
 Δ_j :              0    0   5/2   1/2

Since Δ_j ≥ 0 for all j, the solution is optimal :

x₁ = 3, x₂ = 1 and Max z = 3(3) + 2(1) = 11.
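The whole procedure of Steps 1 to 6 can be collected into one routine. The sketch below is illustrative, not from the text: a tableau simplex method for a maximization problem whose constraints are all of '≤' type with non-negative right-hand sides, using exact Fractions. Under those assumptions it reproduces the answer of the example just worked (x₁ = 3, x₂ = 1, z = 11) and also the optimum of Example 2.8.

```python
from fractions import Fraction

def simplex(c, A, b):
    """Tableau simplex for: max c.x  s.t.  A x <= b, x >= 0, all b_i >= 0.
    Slack variables supply the starting basis (illustrative sketch)."""
    m, n = len(A), len(c)
    # tableau rows: [A | I | b], all entries exact Fractions
    T = [[Fraction(A[i][j]) for j in range(n)]
         + [Fraction(1 if k == i else 0) for k in range(m)]
         + [Fraction(b[i])] for i in range(m)]
    cost = [Fraction(v) for v in c] + [Fraction(0)] * m
    basis = list(range(n, n + m))            # slacks are basic initially
    while True:
        # reduced costs c_j - z_j
        delta = [cost[j] - sum(cost[basis[i]] * T[i][j] for i in range(m))
                 for j in range(n + m)]
        k = max(range(n + m), key=lambda j: delta[j])
        if delta[k] <= 0:                    # all c_j - z_j <= 0: optimal
            break
        rows = [i for i in range(m) if T[i][k] > 0]
        if not rows:
            raise ValueError("unbounded problem")
        r = min(rows, key=lambda i: T[i][-1] / T[i][k])   # minimum ratio rule
        piv = T[r][k]
        T[r] = [v / piv for v in T[r]]
        for i in range(m):
            if i != r:
                f = T[i][k]
                T[i] = [a - f * v for a, v in zip(T[i], T[r])]
        basis[r] = k
    x = [Fraction(0)] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return x, sum(cost[j] * x[j] for j in range(n))

x, z = simplex([3, 2], [[1, 1], [1, -1]], [4, 2])
print(x == [3, 1], z == 11)  # True True
```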
Example
Minimize z = x₁ − 3x₂ + 2x₃
Subject to 3x₁ − x₂ + 2x₃ ≤ 7
−2x₁ + 4x₂ ≤ 12
−4x₁ + 3x₂ + 8x₃ ≤ 10
x₁, x₂, x₃ ≥ 0
Solution :
First we convert the problem of minimization to a maximization problem by taking the
objective function z′ = −z :
max. z′ = −z = −x₁ + 3x₂ − 2x₃
subject to
3x₁ − x₂ + 2x₃ + x₄ = 7
−2x₁ + 4x₂ + 0·x₃ + x₅ = 12
−4x₁ + 3x₂ + 8x₃ + x₆ = 10
Starting simplex table

 B    c_B   x_B    Y₁   Y₂   Y₃   Y₄   Y₅   Y₆    min ratio x_B/Y₂
 Y₄    0     7      3   −1    2    1    0    0    neg.
 Y₅    0    12     −2    4    0    0    1    0    12/4 = 3 (min)
 Y₆    0    10     −4    3    8    0    0    1    10/3

 z′ = c_B x_B = 0    x_j : 0  0  0  7  12  10
                     c_j : −1  3  −2  0  0  0
                     Δ_j : −1  3  −2  x  x  x

Δ₁ = c₁ − c_B y₁ = −1 − ( 0, 0, 0 ) ( 3, −2, −4 )ᵀ = −1
Δ₂ = c₂ − c_B y₂ = 3 − ( 0, 0, 0 ) ( −1, 4, 3 )ᵀ = 3
Δ₃ = c₃ − c_B y₃ = −2 − ( 0, 0, 0 ) ( 2, 0, 8 )ᵀ = −2

Since not all the Δ_j are less than or equal to zero, the solution is not optimal.
Δ₂ = 3 is the maximum. Hence the incoming vector is Y₂, and by the minimum ratio rule the
outgoing vector is β₂ ( = Y₅ ).

In order to bring Y₂ in place of β₂ ( = Y₅ ), the intermediate table is as follows.

 x_B    Y₁   Y₂   Y₃   Y₄   Y₅   Y₆
 Y₄ :  7     3   −1    2    1    0    0
 Y₅ : 12    −2    4    0    0    1    0
 Y₆ : 10    −4    3    8    0    0    1
 x_B      Y₁     Y₂   Y₃   Y₄   Y₅     Y₆
 Y₄ : 10    5/2    0    2    1   1/4    0
 Y₂ :  3   −1/2    1    0    0   1/4    0
 Y₆ :  1   −5/2    0    8    0  −3/4    1
Second simplex table

 B    c_B   x_B      Y₁     Y₂   Y₃   Y₄   Y₅     Y₆    min ratio x_B/Y₁
 Y₄    0    10      5/2     0    2    1   1/4     0     4 (min)
 Y₂    3     3     −1/2     1    0    0   1/4     0     −6 neg.
 Y₆    0     1     −5/2     0    8    0  −3/4     1     −2/5 neg.

 z′ = c_B x_B = 9    x_j : 0  3  0  10  0  1
                     c_j : −1  3  −2  0  0  0
                     Δ_j : 1/2  x  −2  x  −3/4  x

Δ₁ = c₁ − c_B y₁ = −1 − ( 0, 3, 0 ) ( 5/2, −1/2, −5/2 )ᵀ = −1 + 3/2 = 1/2
Δ₃ = c₃ − c_B y₃ = −2 − ( 0, 3, 0 ) ( 2, 0, 8 )ᵀ = −2
Δ₅ = c₅ − c_B y₅ = 0 − ( 0, 3, 0 ) ( 1/4, 1/4, −3/4 )ᵀ = −3/4

Since not all the Δ_j are less than or equal to zero, the solution is not optimal.
Here Δ₁ = 1/2 is the maximum (and only positive) Δ_j. Therefore Y₁ is the incoming
vector, and by the minimum ratio rule we find that β₁ ( = Y₄ ) is the outgoing vector.

Therefore the key element is y₁₁ = 5/2.

 x_B      Y₁     Y₂   Y₃   Y₄   Y₅     Y₆
 Y₄ : 10    5/2    0    2    1   1/4    0
 Y₂ :  3   −1/2    1    0    0   1/4    0
 Y₆ :  1   −5/2    0    8    0  −3/4    1

Dividing the key row by 5/2, then adding 1/2 times the resulting row to the second row
and 5/2 times the resulting row to the third row :

 x_B     Y₁   Y₂   Y₃    Y₄    Y₅     Y₆
 Y₁ :  4    1    0   4/5   2/5   1/10    0
 Y₂ :  5    0    1   2/5   1/5   3/10    0
 Y₆ : 11    0    0   10     1   −1/2     1
Third simplex table

 B    c_B   x_B    Y₁   Y₂   Y₃    Y₄    Y₅     Y₆
 Y₁   −1     4      1    0   4/5   2/5   1/10    0
 Y₂    3     5      0    1   2/5   1/5   3/10    0
 Y₆    0    11      0    0   10     1   −1/2     1

 z′ = c_B x_B = 11    x_j : 4  5  0  0  0  11
                      c_j : −1  3  −2  0  0  0
                      Δ_j : x  x  −12/5  −1/5  −4/5  x

Δ₃ = c₃ − c_B Y₃ = −2 − ( −1, 3, 0 ) ( 4/5, 2/5, 10 )ᵀ = −2 − 2/5 = −12/5
Δ₄ = c₄ − c_B Y₄ = 0 − ( −1, 3, 0 ) ( 2/5, 1/5, 1 )ᵀ = −1/5
Δ₅ = c₅ − c_B Y₅ = 0 − ( −1, 3, 0 ) ( 1/10, 3/10, −1/2 )ᵀ = −4/5

Since the Δ_j's for all the zero variables are negative, this solution is optimal.

The optimal solution is x₁ = 4, x₂ = 5, x₃ = 0, with max z′ = 11, i.e. min z = −11.
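A quick check (not from the text) of this optimum with exact arithmetic, against the original minimization problem:

```python
from fractions import Fraction as F

x1, x2, x3 = F(4), F(5), F(0)

print(3*x1 - x2 + 2*x3)     # 7   (tight, slack x4 = 0)
print(-2*x1 + 4*x2)         # 12  (tight, slack x5 = 0)
print(-4*x1 + 3*x2 + 8*x3)  # -1  (slack x6 = 11)
print(x1 - 3*x2 + 2*x3)     # -11, i.e. min z = -11 and max z' = 11
```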
Example
max. z = 2x₁ + 5x₂ + 7x₃
subject to 3x₁ + 2x₂ + 4x₃ ≤ 100
x₁ + 4x₂ + 2x₃ ≤ 100
x₁ + x₂ + 3x₃ ≤ 100
x₁, x₂, x₃ ≥ 0
Solution :
Introducing slack variables x₄, x₅, x₆,
3x₁ + 2x₂ + 4x₃ + x₄ = 100
x₁ + 4x₂ + 2x₃ + x₅ = 100
x₁ + x₂ + 3x₃ + x₆ = 100
Take x₁ = x₂ = x₃ = 0. Therefore the starting B. F. S. is x₄ = x₅ = x₆ = 100.

 B    c_B   x_B     Y₁   Y₂   Y₃   Y₄   Y₅   Y₆    min ratio x_B/Y₃
 Y₄    0    100      3    2    4    1    0    0    100/4 = 25 (min)
 Y₅    0    100      1    4    2    0    1    0    100/2 = 50
 Y₆    0    100      1    1    3    0    0    1    100/3

 z = 0    c_j : 2  5  7  0  0  0
          Δ_j : 2  5  7  x  x  x
          incoming vector : Y₃ ;  outgoing vector : Y₄

Δ₁ = c₁ − c_B y₁ = 2 − ( 0, 0, 0 ) ( 3, 1, 1 )ᵀ = 2 ,  Δ₂ = 5 ,  Δ₃ = 7 − 0 = 7

Since not all Δ_j are less than or equal to zero for the zero variables, the solution is not
optimal.
Since Δ₃ = 7 is the maximum, Y₃ is the incoming vector. By the minimum ratio rule

min_i { x_{Bi}/y_{i3} | y_{i3} > 0 } = min { 100/4, 100/2, 100/3 } = 25 , for i = 1,

so β₁ ( = Y₄ ) is the outgoing vector and y₁₃ = 4 is the key element.

 B    c_B   x_B     Y₁     Y₂     Y₃   Y₄     Y₅   Y₆    min ratio x_B/Y₂
 Y₃    7    25     3/4    1/2     1   1/4     0    0     25/(1/2) = 50
 Y₅    0    50    −1/2     3      0  −1/2     1    0     50/3 (min)
 Y₆    0    25    −5/4   −1/2     0  −3/4     0    1     −50 neg.

 x_j : 0  0  25  0  50  25
 c_j : 2  5  7  0  0  0
 Δ_j : −13/4  3/2  x  −7/4  x  x
 incoming vector : Y₂ ;  outgoing vector : Y₅

For the above simplex table

Δ₂ = c₂ − c_B y₂ = 5 − ( 7, 0, 0 ) ( 1/2, 3, −1/2 )ᵀ = 5 − 7/2 = 3/2
Δ₄ = c₄ − c_B y₄ = 0 − ( 7, 0, 0 ) ( 1/4, −1/2, −3/4 )ᵀ = −7/4
Δ₁ = c₁ − c_B y₁ = 2 − ( 7, 0, 0 ) ( 3/4, −1/2, −5/4 )ᵀ = 2 − 21/4 = −13/4

Since not all Δ_j are less than or equal to zero, the solution is not optimal.

Here Δ₂ = 3/2 is the maximum. Therefore Y₂ is the incoming vector, and by the minimum
ratio rule we find that β₂ ( = Y₅ ) is the outgoing vector. The key element is y₂₂ = 3. The
intermediate table is :
 x_B     Y₁     Y₂     Y₃   Y₄     Y₅   Y₆
 Y₃ : 25    3/4    1/2     1   1/4     0    0
 Y₅ : 50   −1/2     3      0  −1/2     1    0
 Y₆ : 25   −5/4   −1/2     0  −3/4     0    1

The third simplex table is as follows.
 B    c_B   x_B        Y₁     Y₂   Y₃   Y₄     Y₅     Y₆
 Y₃    7    50/3      5/6     0    1   1/3   −1/6     0
 Y₂    5    50/3     −1/6     1    0  −1/6    1/3     0
 Y₆    0   100/3     −4/3     0    0  −5/6    1/6     1

 z = c_B x_B = 200    x_j : 0  50/3  50/3  0  0  100/3
                      c_j : 2  5  7  0  0  0
                      Δ_j : −3  x  x  −3/2  −1/2  x

Δ₁ = C₁ − C_B Y₁ = 2 − ( 7, 5, 0 ) ( 5/6, −1/6, −4/3 )ᵀ = 2 − ( 35/6 − 5/6 ) = 2 − 5 = −3
Δ₄ = C₄ − C_B Y₄ = 0 − ( 7, 5, 0 ) ( 1/3, −1/6, −5/6 )ᵀ = − ( 7/3 − 5/6 ) = −3/2
Δ₅ = C₅ − C_B Y₅ = 0 − ( 7, 5, 0 ) ( −1/6, 1/3, 1/6 )ᵀ = − ( −7/6 + 5/3 ) = −1/2

Since all Δ_j for the zero variables are negative, this solution is optimal.

The optimal solution is x₁ = 0, x₂ = 50/3, x₃ = 50/3 and Max. z = 200.
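Again, a quick exact-arithmetic check (not from the text) of this optimum against the original constraints:

```python
from fractions import Fraction as F

x1, x2, x3 = F(0), F(50, 3), F(50, 3)

print(3*x1 + 2*x2 + 4*x3)  # 100   (tight, slack x4 = 0)
print(x1 + 4*x2 + 2*x3)    # 100   (tight, slack x5 = 0)
print(x1 + x2 + 3*x3)      # 200/3 (slack x6 = 100/3)
print(2*x1 + 5*x2 + 7*x3)  # 200
```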
The complete solution, with all computational steps, is conveniently represented in the
following example.
Example :
Solve Max z = 7x₁ + 5x₂
subject to x₁ + 2x₂ ≤ 6 , 4x₁ + 3x₂ ≤ 12 , x₁, x₂ ≥ 0.
Solution :
In standard form : Max z = 7x₁ + 5x₂ subject to
x₁ + 2x₂ + x₃ = 6 , 4x₁ + 3x₂ + x₄ = 12 , x₁, x₂, x₃, x₄ ≥ 0.

            c_j :   7    5    0    0    Min ratio x_B/x₁
 B.V.  c_B  x_B    x₁   x₂   x₃   x₄
 x₃     0    6      1    2    1    0    6
 x₄     0   12      4    3    0    1    3 (min)
 Δ_j :             −7   −5    0    0

x₁ enters and x₄ leaves; pivoting on the key element 4 :

            c_j :   7    5    0     0
 B.V.  c_B  x_B    x₁   x₂    x₃    x₄
 x₃     0    3      0   5/4    1   −1/4
 x₁     7    3      1   3/4    0    1/4
 Δ_j :              0   1/4    0    7/4

Since Δ_j ≥ 0 for all j, the solution is optimal.
Solution :
x₁ = 3, x₂ = 0 and Max z = 7(3) + 5(0) = 21.
Artificial Variable Technique
If the starting simplex table does not contain an identity matrix, we introduce a new type
of variable called an artificial variable. These variables are fictitious and do not have any
physical meaning. They are only a device to introduce an identity matrix in the starting simplex
table and so obtain a basic feasible solution, so that the simplex method may be adopted.
Artificial variables are eliminated from the simplex table as and when they become zero.
Two Phase Simplex Method
The process of eliminating the artificial variables is performed in Phase I, and Phase II is
used to get an optimal solution.
Computational Procedure of the Two Phase Simplex Method
Phase I
In this phase the simplex method is applied to the LPP with artificial variables, leading to
a final simplex table containing a basic feasible solution (BFS) to the original problem.
Step 1
Assign a cost −1 to each artificial variable and a cost 0 to all other variables, and maximize
the resulting auxiliary objective function z*.
Step 2
Solve by the simplex method until one of the three possibilities arises.
(i) If Max z* < 0, the given original problem does not have any feasible solution.
(ii) If Max z* = 0 and at least one artificial variable appears in the optimal basis (as a basic
variable in the last simplex table) at zero level, then proceed to Phase II.
(iii) If Max z* = 0 and no artificial variable appears in the optimal basis, proceed to Phase II.
Phase II
Assign the actual costs to the variables in the objective function and a zero cost to every
artificial variable that appears in the basis. This new objective function is now maximized by the
simplex method, with the last simplex table of Phase I as the starting simplex table, using the
actual cost values.
Example 1
Solve the following problem.
Min z = x1 + x2
Subject to 2x1 + x2 ≥ 4
x1 + 7x2 ≥ 7, x1, x2 ≥ 0
Solution :
Convert the given problem into a standard LPP:
Max z' = -x1 - x2
s.t. 2x1 + x2 - x3 = 4, x1 + 7x2 - x4 = 7
i.e.
[ 2  1  -1   0 ] [x1 x2 x3 x4]' = [4 7]'
[ 1  7   0  -1 ]
Since the coefficient matrix does not contain an identity matrix, we solve this problem by the two phase method, introducing artificial variables a1 and a2:
2x1 + x2 - x3 + a1 = 4
x1 + 7x2 - x4 + a2 = 7, x1, x2, x3, x4, a1, a2 ≥ 0
Phase I : Max z* = -a1 - a2

 cj               0     0     0     0    -1    -1
B.V.  cB   xB     x1    x2    x3    x4    a1    a2    Min ratio
a1    -1   4      2     1     -1    0     1     0     4
a2    -1   7      1     7     0     -1    0     1     1 <- min
Δj               -3    -8     1     1     0     0

x2 enters and a2 leaves the basis (key element 7).

B.V.  cB   xB     x1     x2    x3    x4     a1    Min ratio
a1    -1   3      13/7   0     -1    1/7    1     21/13 <- min
x2    0    1      1/7    1     0     -1/7   0     7
Δj               -13/7  0     1     -1/7   0

x1 enters and a1 leaves the basis (key element 13/7).

B.V.  cB   xB      x1    x2    x3     x4
x1    0    21/13   1     0     -7/13  1/13
x2    0    10/13   0     1     1/13   -2/13
Δj                0     0     0      0

Since Δj ≥ 0 for all j, an optimum basic feasible solution to the auxiliary LPP has been attained:
x1 = 21/13, x2 = 10/13, x3 = x4 = a1 = a2 = 0.
Here Max z* = 0 and no artificial variable appears in the basis, so by step 2 (iii) we proceed to Phase II.
Phase II
Remove the columns of a1 and a2 from the last simplex table. The starting simplex table is the last simplex table of Phase I, whereas the objective function is that of the original problem:
Max z' = -x1 - x2

 cj                -1    -1    0      0
B.V.  cB   xB      x1    x2    x3     x4
x1    -1   21/13   1     0     -7/13  1/13
x2    -1   10/13   0     1     1/13   -2/13
Δj                0     0     6/13   1/13

Since Δj ≥ 0 for all j, the solution is optimal:
x1 = 21/13, x2 = 10/13
Min z = x1 + x2 = 21/13 + 10/13 = 31/13
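The two-phase result for Example 1 above can be cross-checked numerically. The sketch below (assuming nothing beyond the constraint data 2x1 + x2 ≥ 4, x1 + 7x2 ≥ 7, x1, x2 ≥ 0) enumerates the intersection points of the constraint boundary lines and the axes, keeps the feasible ones, and takes the smallest value of z = x1 + x2:

```python
from itertools import combinations

# Boundary lines a*x1 + b*x2 = c: the two constraints and the two axes.
lines = [(2, 1, 4), (1, 7, 7), (1, 0, 0), (0, 1, 0)]

def intersect(l1, l2):
    """Intersection of two lines by Cramer's rule, or None if parallel."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

def feasible(p):
    x1, x2 = p
    return (x1 >= -1e-9 and x2 >= -1e-9
            and 2 * x1 + x2 >= 4 - 1e-9 and x1 + 7 * x2 >= 7 - 1e-9)

candidates = [p for l1, l2 in combinations(lines, 2)
              for p in [intersect(l1, l2)] if p and feasible(p)]
best = min(candidates, key=lambda p: p[0] + p[1])
# best is (21/13, 10/13) with z = 31/13, matching the two-phase result.
```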
Example 2
Max z = x1 + 2x2 + 3x3
Subject to -2x1 + x2 + 3x3 = 2
2x1 + 3x2 + 4x3 = 1, x1, x2, x3 ≥ 0
Solution :
Though the constraints are in the form of equations, the coefficient matrix does not contain an identity matrix, and therefore one has to introduce artificial variables and solve by the two phase simplex method.
Phase I
Max z* = -a1 - a2

 cj              0     0     0    -1    -1
B.V.  cB   xB    x1    x2    x3    a1    a2    Min ratio
a1    -1   2     -2    1     3     1     0     2/3
a2    -1   1     2     3     4     0     1     1/4 <- min
Δj              0     -4    -7    0     0

x3 enters and a2 leaves the basis (key element 4).

B.V.  cB   xB     x1     x2     x3    a1    a2
a1    -1   5/4    -7/2   -5/4   0     1     -3/4
x3    0    1/4    1/2    3/4    1     0     1/4
Δj               7/2    5/4    0     0     7/4

Since Δj ≥ 0 for all j, an optimum solution to the auxiliary problem has been attained. But
Max z* = -a1 - a2 = -5/4 < 0.
Therefore (by step 2 (i) of Phase I) the original problem does not possess any feasible solution.
Alternatively example 1 can be solved as follows.
Example 2.11
Solve the following L. P. problem
Min. z = x1 + x2
subject to 2x1 + x2 ≥ 4
x1 + 7x2 ≥ 7, x1, x2 ≥ 0
Solution :
First we convert the problem of minimization to maximization by taking the objective function z' = -z, i.e.
Max. z' = -z = -x1 - x2
Introducing surplus variables x3 and x4,
2x1 + x2 - x3 = 4
x1 + 7x2 - x4 = 7
Here we cannot get a starting B. F. S., so we introduce the (non-negative) artificial variables x5 and x6. The above equations may be written as
2x1 + x2 - x3 + x5 = 4
x1 + 7x2 - x4 + x6 = 7
The problem will be solved in two phases.
Phase I
This phase consists of the removal of the artificial variables.
Table 1
      xB    Y1    Y2    Y3    Y4    A1(β1)  A2(β2)
A1    4     2     1     -1    0     1       0
A2    7     1     7     0     -1    0       1
xj    0     0     0     0     4     7
First we shall remove the artificial variable vectors (columns) A1 and A2 from the basis matrix. In place of an artificial variable vector the entering vector should be so chosen that the revised solution is a non-negative (B. F.) solution.
We can remove A2 and introduce y2 in its place in the basis matrix. For this we divide the second row by 7 and then subtract it from the first row. Thus we get the following table.
It may be seen that if y1, y3 or y4 is entered in place of A2 then the revised solution is not non-negative, so we cannot enter any of them in place of A2. Since the artificial variable x6 becomes zero, we forget about A2 for ever and will not consider it in any other table.

      xB    Y1     Y2    Y3    Y4     A1
A1    3     13/7   0     -1    1/7    1
Y2    1     1/7    1     0     -1/7   0
xj    0     1      0     0     3      0

Now we proceed to remove A1 and introduce y1 in its place in the basis matrix. For this we multiply the first row by 7/13 and subtract 1/7 times of this new row from the second row. Thus we get the following table.
Table 2
      xB     Y1    Y2    Y3     Y4
y1    21/13  1     0     -7/13  1/13
y2    10/13  0     1     1/13   -2/13
xj    21/13  10/13 0     0

Since the artificial variable x5 becomes zero, we forget about A1 and will not consider it again.
Thus we get the following solution in phase I:
x1 = 21/13, x2 = 10/13, x3 = 0, x4 = 0
which is the B. F. S. with which we proceed to get the optimal solution by the simplex method.
Phase (II)
The starting simplex table:
B     cB    xB     Y1    Y2    Y3     Y4
Y1    -1    21/13  1     0     -7/13  1/13
Y2    -1    10/13  0     1     1/13   -2/13
z' = cB xB = -31/13       cj:  -1    -1    0     0

Δ3 = c3 - cB y3 = 0 - (-1, -1)(-7/13, 1/13) = -6/13
Δ4 = c4 - cB y4 = 0 - (-1, -1)(1/13, -2/13) = -1/13
Since the Δj's for all zero (non-basic) variables are negative, the solution is optimal:
x1 = 21/13, x2 = 10/13 and
Min. z = - max. z' = 31/13
Example 2.12
Solve the following L. P. Problem
Max. z = x1 + 2x2 + 3x3 - x4
Subject to x1 + 2x2 + 3x3 = 15
2x1 + x2 + 5x3 = 20
x1 + 2x2 + x3 + x4 = 10
x1, x2, x3, x4 ≥ 0
Solution :
In order to get an identity matrix we need two more columns of the unit matrix, as one column of the unit matrix (coeff. of x4) is present in the constraints. Thus we need only two artificial variables, in the first two constraints. Introducing the artificial variables x5 and x6 we have
x1 + 2x2 + 3x3 + 0·x4 + x5 = 15
2x1 + x2 + 5x3 + 0·x4 + x6 = 20
x1 + 2x2 + x3 + x4 = 10
Phase (1)
      xB    Y1    Y2    Y3    Y4    A1    A2
A1    15    1     2     3     0     1     0
A2    20    2     1     5     0     0     1
Y4    10    1     2     1     1     0     0
xj    0     0     0     10    15    20

First we remove the artificial variable vector A2 and introduce y3 in its place. For this we divide the second row by 5 and subtract 3 times and one times of it from the first and third rows respectively. Thus we get the following table.
Second Table
      xB    Y1     Y2    Y3    Y4    A1
A1    3     -1/5   7/5   0     0     1
Y3    4     2/5    1/5   1     0     0
Y4    6     3/5    9/5   0     1     0
xj    0     0      4     6     3     0

Now the artificial variable x6 = 0, so we shall not consider it again. Next we remove the artificial variable vector A1 and introduce y2 in its place. For this we multiply the first row by 5/7 and then subtract its 1/5 and 9/5 times from the second and third rows. Thus we get the following table.
      xB     Y1    Y2    Y3    Y4
Y2    15/7   -1/7  1     0     0
Y3    25/7   3/7   0     1     0
Y4    15/7   6/7   0     0     1
xj    0      15/7  25/7  15/7

Here the artificial variable x5 = 0. We shall not consider it in the next table.
Thus we get the following B. F. S. with which we can proceed, for the optimal solution, by the simplex method:
x1 = 0, x2 = 15/7, x3 = 25/7, x4 = 15/7
Phase (II)
The starting simplex table is as follows.
B     cB    xB     Y1    Y2    Y3    Y4    Min ratio xB/y1
Y2    2     15/7   -1/7  1     0     0     -15 (neg.)
Y3    3     25/7   3/7   0     1     0     25/3
Y4    -1    15/7   6/7   0     0     1     5/2 <- min
xj           0     15/7  25/7  15/7
cj           1     2     3     -1
Δj           6/7   x     x     x

Δ1 = c1 - cB Y1 = 1 - (2, 3, -1)(-1/7, 3/7, 6/7) = 1 - 1/7 = 6/7
Since all Δj are not less than or equal to zero, the solution is not optimal. Here y1 is the incoming vector and by the minimum ratio rule we find that y4 is the outgoing vector.
Therefore the key element is y31 = 6/7.
In order to bring y1 in place of y4, multiply the third row by 7/6 and then add its 1/7 times to the first row and subtract its 3/7 times from the second row.
The second simplex table is as follows.
B     cB    xB    Y1    Y2    Y3    Y4
Y2    2     5/2   0     1     0     1/6
Y3    3     5/2   0     0     1     -1/2
Y1    1     5/2   1     0     0     7/6
z = cB xB = 15    xj:   5/2   5/2   5/2   0
cj          1     2     3     -1
Δj          x     x     x     -1

Δ4 = c4 - cB y4 = -1 - (2, 3, 1)(1/6, -1/2, 7/6) = -1 - 0 = -1
Since Δ4, the only Δj for a zero variable, is negative, the solution is optimal. The optimal solution is
x1 = 5/2, x2 = 5/2, x3 = 5/2 and max. z = 15.
Example 2.13
Using the simplex algorithm solve the L. P. problem
Min. z = 4x1 + 8x2 + 3x3
Subject to x1 + x2 ≥ 2
2x1 + x3 ≥ 5
x1, x2, x3 ≥ 0
Solution :
First we convert the problem of minimization to maximization by taking z' = -z:
max. z' = -z = -4x1 - 8x2 - 3x3
Introducing the surplus variables x4, x5 the equations obtained are
x1 + x2 + 0·x3 - x4 = 2
2x1 + 0·x2 + x3 - x5 = 5
The columns of x2 and x3 form a unit matrix. Therefore there is no need to introduce artificial variables.
Taking x1 = 0, x4 = 0, x5 = 0 we have
x2 = 2, x3 = 5 as the starting B. F. S.
Starting simplex table
B     cB    xB    Y1    Y2    Y3    Y4    Y5    Min ratio xB/y1
Y2    -8    2     1     1     0     -1    0     2 <- min
Y3    -3    5     2     0     1     0     -1    5/2
xj          0     2     5     0     0
cj          -4    -8    -3    0     0
Δj          10    x     x     -8    -3

Δ1 = c1 - cB y1 = -4 - (-8, -3)(1, 2) = 10
Δ4 = c4 - cB y4 = 0 - (-8, -3)(-1, 0) = -8
Δ5 = c5 - cB y5 = 0 - (-8, -3)(0, -1) = -3
Since all Δj are not less than or equal to zero, the solution is not optimal.
Max. Δj = 10 = Δ1
The entering vector is y1 (β1) and by the minimum ratio rule we find that the outgoing vector is y2 (β1).
In order to bring y1 in place of y2 we subtract 2 times the first row from the second row.
The second simplex table is
B     cB    xB    Y1    Y2    Y3    Y4    Y5    Min ratio xB/y4
Y1    -4    2     1     1     0     -1    0     -2 (neg.)
Y3    -3    1     0     -2    1     2     -1    1/2 <- min
xj          2     0     1     0     0
cj          -4    -8    -3    0     0
Δj          x     -10   x     2     -3

Δ2 = c2 - cB y2 = -8 - (-4, -3)(1, -2) = -10
Δ4 = c4 - cB y4 = 0 - (-4, -3)(-1, 2) = 2
Δ5 = c5 - cB y5 = 0 - (-4, -3)(0, -1) = -3
Since all Δj are not less than or equal to zero, this solution is not optimal. Since Max Δj = 2 = Δ4, the incoming vector is y4 and by the minimum ratio rule we find that the outgoing vector is y3 (β2).
Key element = 2.
In order to bring y4 in place of y3 we divide the second row by 2 and then add it to the first row.
The third simplex table is
B     cB    xB     Y1    Y2    Y3     Y4    Y5
Y1    -4    5/2    1     0     1/2    0     -1/2
Y4    0     1/2    0     -1    1/2    1     -1/2
xj          5/2    0     0     1/2    0
cj          -4     -8    -3    0      0
Δj          x      -8    -1    x      -2

Δ2 = c2 - cB y2 = -8 - (-4, 0)(0, -1) = -8
Δ3 = c3 - cB y3 = -3 - (-4, 0)(1/2, 1/2) = -1
Δ5 = c5 - cB y5 = 0 - (-4, 0)(-1/2, -1/2) = -2
Since all Δj's are negative, this solution is optimal:
x1 = 5/2, x2 = 0, x3 = 0
and min z = - max. z' = 10.
EXERCISES
1) Solve the following L. P. Problem
Max. z = 3x1 + 5x2 + 4x3
Subject to 2x1 + 3x2 ≤ 8
2x2 + 5x3 ≤ 10
3x1 + 2x2 + 4x3 ≤ 15
and x1, x2, x3 ≥ 0
2) Solve by the simplex method the following L. P. Problem
Minimize z = x1 - 3x2 + 2x3
Subject to 3x1 - x2 + 2x3 ≤ 7
-2x1 + 4x2 ≤ 12
-4x1 + 3x2 + 8x3 ≤ 16
x1, x2, x3 ≥ 0
3) Solve the following L. P. Problem
Minimize z = x1 + x2
Subject to 2x1 + x2 ≥ 4
x1 + 7x2 ≥ 7
x1, x2 ≥ 0
4) Use the simplex method to solve the following L. P. Problem
Max. z = x1 + 2x2 + 3x3 - x4
Subject to x1 + 2x2 + 3x3 = 15
2x1 + x2 + 5x3 = 20
x1 + 2x2 + x3 + x4 = 10
x1, x2, x3, x4 ≥ 0
5) Using the simplex method solve the L. P. Problem
Min. z = 4x1 + 8x2 + 3x3
Subject to x1 + x2 ≥ 2
2x1 + x3 ≥ 5
x1, x2, x3 ≥ 0
6) Using the simplex method, solve the following.
Max. z = 2x1 + 5x2 + 7x3
Subject to 3x1 + 2x2 + 4x3 ≤ 100
x1 + 4x2 + 2x3 ≤ 100
x1 + x2 + 3x3 ≤ 100
x1, x2, x3 ≥ 0
7) Solve the following L. P. Problem
Max. z = (3/4)x1 - 150x2 + (1/50)x3 - 6x4
Subject to (1/4)x1 - 60x2 - (1/25)x3 + 4x4 ≤ 0
(1/2)x1 - 90x2 - (1/50)x3 + 3x4 ≤ 0
and x1, x2, x3, x4 ≥ 0
8) Use the simplex method to solve the following.
Max. z = 30x1 + 23x2 + 29x3
Subject to 6x1 + 5x2 + 3x3 ≤ 26
4x1 + 2x2 + 5x3 ≤ 7
and x1, x2, x3 ≥ 0
Also read the solution of the dual of the above problem from the final table.
9) Use the two phase simplex method to solve.
Maximize z = 3x1 + 2x2 + x3 + x4
Subject to 4x1 + 5x2 + x3 + 3x4 = 5
2x1 + 3x2 + 4x3 + 5x4 = 7
xj ≥ 0, j = 1, 2, 3, 4
10) Solve the following L. P. P.
Maximize z = 3x1 + 4x2,
Subject to x1 + 4x2 ≤ 8, x1 + 2x2 ≤ 4
x1, x2 ≥ 0
11) Solve the following L. P. P.
Maximize z = 2x1 + x2
Subject to 4x1 + 3x2 ≤ 12
4x1 + x2 ≤ 8
4x1 - x2 ≤ 8
x1, x2 ≥ 0
12) Solve the following L. P. P.
Max. z = 5x1 + 3x2
Subject to x1 + x2 ≤ 2,
5x1 + 2x2 ≤ 10,
3x1 + 8x2 ≤ 12
x1 ≥ 0, x2 ≥ 0
13) Solve the L. P. P.
Max. z = 22x1 + 30x2 + 25x3
subject to 2x1 + 2x2 ≤ 100
2x1 + x2 + x3 ≤ 100
x1 + 2x2 + 2x3 ≤ 100,
x1, x2, x3 ≥ 0
14) Solve the L. P. P.
Max. z = 5x1 - 2x2 + 3x3
subject to 2x1 + 2x2 - x3 ≥ 2,
3x1 - 4x2 ≤ 3,
x2 + 3x3 ≤ 5
x1, x2, x3 ≥ 0
15) Solve the L. P. P.
Max. z = x1 + 15x2 + 2x3 + 5x4
Subject to 3x1 + 2x2 + x3 + x4 ≤ 6
2x1 + x2 + x3 + 5x4 ≤ 4
2x1 + 6x2 - 8x3 + 4x4 ≤ 0
x1 + 3x2 - 4x3 + 3x4 ≤ 0
x1, x2, x3, x4 ≥ 0
UNIT 03 DEGENERACY, DUALITY AND REVISED SIMPLEX METHOD
3.0 INTRODUCTION
So far we have considered L. P. problems in which the minimum ratio rule yields only one vector to be deleted from the basis. But there are L. P. problems where we get more than one vector which may be deleted from the basis, i.e. the minimum occurs for more than one value of i. The problem then is to select the vector to be deleted from the basis. If we choose one vector, say βi (i is one of i1, i2, ..., is), and delete it from the basis, then the next solution may be a degenerate B. F. S. Such a problem is called a problem of degeneracy.
It is observed that when the simplex method is applied to a degenerate B. F. S. to get a new B. F. S., the value of the objective function may remain unchanged, i.e. the value of the objective function is not improved.
The procedure for such problems of degeneracy is as follows.
Let min_i { xBi / yik : yik > 0 } occur at i = i1, i2, ..., is.
Let I1 = {i1, i2, ..., is}.
1) Renumber the columns of the table starting with the columns in the basis. Let y1, y2, ... etc. be the new numbers of the columns. Let yt be the new number of the entering vector yk, i.e. yk = yt.
2) Calculate min_{i ∈ I1} { yi1 / yik }. If the minimum is unique, then delete the corresponding vector from the basis. If the minimum is not unique, then proceed to the next step.
3) Calculate min_{i ∈ I2} { yi2 / yik }, where I2 is the set of all those values of i ∈ I1 for which the minimum in step 2 is attained. In this case, if the minimum is unique, then the corresponding vector is deleted from the basis. If in this case also the minimum is not unique, proceed to the next step.
4) Compute min_{i ∈ I3} { yi3 / yik }, where I3 is the set of those values of i ∈ I2 for which the minimum in step 3 is attained.
Proceeding in this way we get a unique minimum value of i, i.e. the unique vector to be deleted from the basis.
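The tie-breaking procedure above can be sketched in Python (a sketch only; the function name `leaving_row` and the sample data are illustrative):

```python
def leaving_row(xB, Y, k, eps=1e-9):
    """Return the index of the unique row to leave the basis.

    xB[i] is the solution value in row i; Y[i][j] is the entry of the
    renumbered column y_(j+1) in row i; k is the entering column's index in Y.
    """
    tied = [i for i in range(len(xB)) if Y[i][k] > eps]
    ratios = {i: xB[i] / Y[i][k] for i in tied}
    m = min(ratios.values())
    tied = [i for i in tied if abs(ratios[i] - m) < eps]
    col = 0                      # columns y1, y2, ... in the renumbered order
    while len(tied) > 1:         # refine ties with y_i1/y_ik, y_i2/y_ik, ...
        ratios = {i: Y[i][col] / Y[i][k] for i in tied}
        m = min(ratios.values())
        tied = [i for i in tied if abs(ratios[i] - m) < eps]
        col += 1
    return tied[0]

# A degenerate tie like the one in Example 3.1 below: both rows give ratio 0
# for the entering column, and the first renumbered column (the identity part
# of the basis) breaks the tie in favour of the second row.
xB = [0, 0]
Y = [[1, 0, 0.25],
     [0, 1, 0.50]]              # last column is the entering column y_k
assert leaving_row(xB, Y, k=2) == 1
```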
Example 3.1
Solve the L. P. Problem
Max. z = (3/4)x1 - 150x2 + (1/50)x3 - 6x4
Subject to (1/4)x1 - 60x2 - (1/25)x3 + 9x4 ≤ 0
(1/2)x1 - 90x2 - (1/50)x3 + 3x4 ≤ 0
x3 ≤ 1
and x1, x2, x3, x4 ≥ 0
Solution :
Introducing the slack variables in the constraints we get the following equalities
(1/4)x1 - 60x2 - (1/25)x3 + 9x4 + x5 = 0
(1/2)x1 - 90x2 - (1/50)x3 + 3x4 + x6 = 0
x3 + x7 = 1
Taking x1 = 0, x2 = 0, x3 = 0, x4 = 0 we have
x5 = 0, x6 = 0, x7 = 1
which is the starting B. F. S.
Starting simplex table
B     cB    xB    y1    y2     y3      y4    y5    y6    y7    Min ratio xB/y1
y5    0     0     1/4   -60    -1/25   9     1     0     0     0
y6    0     0     1/2   -90    -1/50   3     0     1     0     0
y7    0     1     0     0      1       0     0     0     1     -
z = cB xB = 0     cj:   3/4    -150    1/50  -6    0     0     0
Δj          3/4   -150  1/50   -6      0     0     0

Δ1 = c1 - cB y1 = 3/4 - (0, 0, 0)(1/4, 1/2, 0) = 3/4
Δ2 = c2 - cB y2 = -150 - (0, 0, 0)(-60, -90, 0) = -150
Δ3 = c3 - cB y3 = 1/50 - (0, 0, 0)(-1/25, -1/50, 1) = 1/50
Δ4 = c4 - cB y4 = -6 - (0, 0, 0)(9, 3, 0) = -6
Since all Δj are not less than or equal to zero, the solution is not optimal, and max Δj = 3/4 = Δ1, so y1 is the incoming vector. By the minimum ratio rule,
min { xBi / yi1 : yi1 > 0 } = min { 0/(1/4), 0/(1/2) } = 0.
This minimum is 0 and occurs for i = 1 and i = 2.
This problem is a problem of degeneracy.
Therefore, to select the vector to be deleted from the basis, we proceed as follows.
1) First of all we renumber the columns of the above table: let the new y1, y2, y3 be the old y5, y6, y7, and the new y4, y5, y6, y7 be the old y1, y2, y3, y4.
2) Since the minimum ratio occurs for i = 1 and i = 2, it follows that I1 = {1, 2}. The incoming vector is the old y1, i.e. the new y4, so k = 4. For i = 1, 2,
min_{i ∈ I1} { yi1 / yi4 } = min { y11/y14, y21/y24 } = min { 1/(1/4), 0/(1/2) } = min {4, 0} = 0 = y21/y24.
This minimum is unique and occurs for i = 2. Therefore the vector to be deleted (i.e. the outgoing vector) from the basis is β2 (the old y6).
Therefore the key element is y21 = 1/2.
Therefore, in order to bring y1 in place of y6, we divide the second row by 1/2 and then subtract 1/4 times of this new row from the first row.
Second simplex table
B     cB    xB    y1    y2     y3      y4     y5    y6    y7    Min ratio xB/y3
y5    0     0     0     -15    -3/100  15/2   1     -1/2  0     -
y1    3/4   0     1     -180   -1/25   6      0     2     0     -
y7    0     1     0     0      1       0      0     0     1     1 <- min
xj          0     0     0      0       0      0     1
cj          3/4   -150  1/50   -6      0      0     0
Δj          0     -15   1/20   -21/2   0      -3/2  0

Δ2 = c2 - cB y2 = -150 - (0, 3/4, 0)(-15, -180, 0) = -15
Δ3 = c3 - cB y3 = 1/50 - (0, 3/4, 0)(-3/100, -1/25, 1) = 1/20
Δ4 = c4 - cB y4 = -6 - (0, 3/4, 0)(15/2, 6, 0) = -21/2
Δ6 = c6 - cB y6 = 0 - (0, 3/4, 0)(-1/2, 2, 0) = -3/2
Since all Δj are not less than or equal to zero, the solution is not optimal.
Max. Δj = 1/20 = Δ3
Therefore the incoming vector is y3 and by the minimum ratio rule we find that the outgoing vector is y7 (β3). (In applying the minimum ratio rule for y3 we need not consider the ratios xB1/y13 and xB2/y23, since y13 = -3/100 and y23 = -1/25 are both negative.)
In order to bring y3 in place of y7 (β3) we add 3/100 and 1/25 times of the third row to the first and second rows respectively.
The third simplex table
B     cB    xB    y1    y2     y3    y4     y5    y6    y7
y5    0     0     0     -15    0     15/2   1     -1/2  3/100
y1    3/4   1/25  1     -180   0     6      0     2     1/25
y3    1/50  1     0     0      1     0      0     0     1
xj          1/25  0     1      0     0      0     0
cj          3/4   -150  1/50   -6    0      0     0
Δj          0     -15   0      -21/2 0      -3/2  -1/20

Since all Δj ≤ 0, the solution is optimal:
x1 = 1/25, x2 = 0, x3 = 1, x4 = 0
and Max. z = (3/4)(1/25) + (1/50)(1) = 1/20.
DUALITY
Introduction
Every L. P. problem is associated with another L. P. problem called the dual of the problem. Consider the L. P. problem
Max. z = c1x1 + c2x2 + ... + cnxn
subject to
a11x1 + a12x2 + ... + a1nxn ≤ b1
a21x1 + a22x2 + ... + a2nxn ≤ b2
...............................................
am1x1 + am2x2 + ... + amnxn ≤ bm
and x1, x2, ..., xn ≥ 0 .......... (1)
Its dual is defined to be the L. P. problem
Min. z* = b1w1 + b2w2 + ... + bmwm
subject to
a11w1 + a21w2 + ... + am1wm ≥ c1
...................................................
a1nw1 + a2nw2 + ... + amnwm ≥ cn
and w1, w2, ..., wm ≥ 0 .......... (2)
Problem (1) is called the primal problem.
In matrix notation the L. P. problem is
Max. z = c x
Subject to A x ≤ b
and x ≥ 0 .......... (3)
and its dual is defined as
Min z* = b' w
Subject to A' w ≥ c'
and w ≥ 0
where w = [w1, w2, ..., wm]' and A', b', c' are the transposes of the matrices A, b and c respectively.
It is obvious from the definition that the dual of the dual is the primal itself.
It is important to note that we can write the dual of a problem only if all its constraints involve the sign ≤. If a constraint has the sign ≥, then we multiply both its sides by -1 to make the sign ≤. An equality constraint
Σ_{j=1}^{n} aij xj = bi .......... (4)
is equivalent to the pair of inequalities
Σ_{j=1}^{n} aij xj ≤ bi and Σ_{j=1}^{n} aij xj ≥ bi .......... (5)
and the second inequality of (5) may be written as
Σ_{j=1}^{n} (-aij) xj ≤ -bi.
Standard form of the primal
The L. P. Problem is in standard primal form if
1) it is a problem of maximization, and
2) all the constraints involve the sign ≤.
Relationship between two problems (Primal and dual)
The two problems (primal and the dual) are related to each other in the following manner.
1) If one is a maximization problem then the other is a minimization problem.
2) If one of them has a finite optimal solution then the other problem also has a finite
optimal solution.
3) From the final simplex table of one problem the solution of the other can be read from the Δj row below the columns of the slack and surplus variables, as follows. The Δj's (Δj = cj - zj = cj - cB yj), with the sign changed, for the slack vectors in the optimal (final) simplex table for the primal are the values of the corresponding optimal dual variables in the final simplex table for the dual problem.
4) The optimal values of the objective functions in both the problems are the same, that is Max. zx = Min. zw.
5) If one problem has an unbounded solution, then the other has no feasible solution.
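The mechanical part of forming a dual (transpose A, interchange b and c) can be sketched as a small helper; the function name `dual_of` is ours, and the data below are the standard primal form of Example 3.2 that follows (Max z' = -3x1 - x2, s.t. -2x1 - 3x2 ≤ -2, -x1 - x2 ≤ -1):

```python
def dual_of(A, b, c):
    """Dual of the standard primal Max c^T x s.t. A x <= b, x >= 0.

    Returns (A_T, rhs, obj): constraints A_T w >= rhs, objective Min obj^T w.
    """
    A_T = [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]
    return A_T, c, b

A_T, rhs, obj = dual_of(A=[[-2, -3], [-1, -1]], b=[-2, -1], c=[-3, -1])
# Dual: Min -2w1 - w2  s.t.  -2w1 - w2 >= -3,  -3w1 - w2 >= -1,  w >= 0.
```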
Example 3.2
Write the dual of the problem
Mini. z = 3x1 + x2
Subject to 2x1 + 3x2 ≥ 2
x1 + x2 ≥ 1
and x1, x2 ≥ 0
Solution :
First we write the problem in standard primal form as follows:
Max z' = -3x1 - x2
Such that -2x1 - 3x2 ≤ -2
and -x1 - x2 ≤ -1
and x1, x2 ≥ 0
which may be written as
Max z' = [-3, -1] [x1, x2]'
Such that
[ -2  -3 ] [x1]  ≤  [-2]
[ -1  -1 ] [x2]     [-1]
and x1, x2 ≥ 0
The dual of the given problem is given by
Mini. z* = [-2, -1] [w1, w2]'
such that
[ -2  -1 ] [w1]  ≥  [-3]
[ -3  -1 ] [w2]     [-1]
and w1, w2 ≥ 0,
or mini. z* = -2w1 - w2
such that 2w1 + w2 ≤ 3
3w1 + w2 ≤ 1
and w1, w2 ≥ 0.
Example 3.3
Write the dual of the problem
min. z = 2x2 + 5x3
such that x1 + x2 ≥ 2
2x1 + x2 + 6x3 ≤ 6
x1 - x2 + 3x3 = 4
and x1, x2, x3 ≥ 0.
Solution :
First we write the given problem in standard primal form as follows.
1) The objective function is changed from minimization to maximization:
Max. z' = -2x2 - 5x3.
2) The constraint x1 + x2 ≥ 2 is written as -x1 - x2 ≤ -2.
3) The equality constraint x1 - x2 + 3x3 = 4 is replaced by the pair
x1 - x2 + 3x3 ≤ 4
and x1 - x2 + 3x3 ≥ 4,
the second of which may be written as -x1 + x2 - 3x3 ≤ -4.
Thus the standard primal form is
Max. z' = 0x1 - 2x2 - 5x3
subject to -x1 - x2 ≤ -2
2x1 + x2 + 6x3 ≤ 6
x1 - x2 + 3x3 ≤ 4
-x1 + x2 - 3x3 ≤ -4
and x1, x2, x3 ≥ 0,
i.e. Max. z' = [0, -2, -5] [x1, x2, x3]'
such that
[ -1  -1   0 ]          [-2]
[  2   1   6 ] [x]  ≤   [ 6]
[  1  -1   3 ]          [ 4]
[ -1   1  -3 ]          [-4]
and x1, x2, x3 ≥ 0.
Therefore the dual of the given problem is given by
Mini z* = [-2, 6, 4, -4] [w1, w2, w3, w4]'
such that
[ -1  2   1  -1 ]          [ 0]
[ -1  1  -1   1 ] [w]  ≥   [-2]
[  0  6   3  -3 ]          [-5]
and w1, w2, w3, w4 ≥ 0,
or Min. z* = -2w1 + 6w2 + 4w3 - 4w4
such that -w1 + 2w2 + w3 - w4 ≥ 0
-w1 + w2 - w3 + w4 ≥ -2
0w1 + 6w2 + 3w3 - 3w4 ≥ -5
and w1, w2, w3, w4 ≥ 0.
Example 3.4
Apply the simplex method to solve the following.
Max. z = 30x1 + 23x2 + 29x3
s. t. 6x1 + 5x2 + 3x3 ≤ 26
4x1 + 2x2 + 5x3 ≤ 7
x1, x2, x3 ≥ 0
Also read the solution of the dual of the above problem from the final table.
Solution :
Introducing the slack variables x4 and x5,
6x1 + 5x2 + 3x3 + x4 = 26
4x1 + 2x2 + 5x3 + x5 = 7
Starting Simplex Table
B     cB    xB    y1    y2    y3    y4    y5    Min. Ratio xB/y1
y4    0     26    6     5     3     1     0     13/3
y5    0     7     4     2     5     0     1     7/4 <- min
z = cB xB = 0     xj:   0     0     0     26    7
cj          30    23    29    0     0
Δj          30    23    29    x     x

Δ1 = c1 - cB y1 = 30 - (0, 0)(6, 4) = 30; similarly Δ2 = 23, Δ3 = 29.
Since all Δj are not less than or equal to zero, the solution is not optimal.
Max. Δj = 30 = Δ1
Hence y1 (β1) is the incoming vector, and by the minimum ratio rule we find that y5 (β2) is the outgoing vector.
Second simplex table
B     cB    xB     y1    y2    y3     y4    y5     Min. Ratio xB/y2
y4    0     31/2   0     2     -9/2   1     -3/2   31/4
y1    30    7/4    1     1/2   5/4    0     1/4    7/2 <- min
z = cB xB = 105/2  xj:   7/4   0      0     31/2   0
cj          30     23    29    0      0
Δj          x      8     -17/2 x      -15/2

Δ2 = c2 - cB y2 = 23 - (0, 30)(2, 1/2) = 8
Δ3 = c3 - cB y3 = 29 - (0, 30)(-9/2, 5/4) = 29 - 37.5 = -17/2
Δ5 = c5 - cB y5 = 0 - (0, 30)(-3/2, 1/4) = -15/2
Since all Δj are not less than or equal to zero, the solution is not optimal. Here y2 is the incoming vector and y1 is the outgoing vector.
The key element is the y2-entry in the y1 row, namely 1/2.
Final simplex table
B     cB    xB    y1    y2    y3     y4    y5
y4    0     17/2  -4    0     -19/2  1     -5/2
y2    23    7/2   2     1     5/2    0     1/2
z = cB xB = 161/2 xj:   0     7/2    0     17/2  0
cj          30    23    29    0      0
Δj          -16   x     -57/2 x      -23/2

Δ1 = c1 - cB y1 = 30 - (0, 23)(-4, 2) = -16
Δ3 = c3 - cB y3 = 29 - (0, 23)(-19/2, 5/2) = -57/2
Δ5 = c5 - cB y5 = 0 - (0, 23)(-5/2, 1/2) = -23/2
Since all Δj ≤ 0, the solution is optimal:
x1 = 0, x2 = 7/2, x3 = 0 and max. z = 161/2.
To write the dual, express the primal in matrix form:
Max. z = [30, 23, 29] [x1, x2, x3]'
such that
[ 6  5  3 ] [x]  ≤  [26]
[ 4  2  5 ]         [ 7]
and x1, x2, x3 ≥ 0.
Therefore the dual of the given problem is given by
Mini z* = [26, 7] [w1, w2]'
such that
[ 6  4 ]          [30]
[ 5  2 ] [w]  ≥   [23]
[ 3  5 ]          [29]
where w1, w2 ≥ 0,
OR
Min. z* = 26w1 + 7w2
s. t. 6w1 + 4w2 ≥ 30
5w1 + 2w2 ≥ 23
3w1 + 5w2 ≥ 29 .......... (2)
where w1, w2 ≥ 0.
Introducing the surplus variables w3, w4, w5 and the artificial variables w6, w7, w8, the constraints become
6w1 + 4w2 - w3 + w6 = 30
5w1 + 2w2 - w4 + w7 = 23
3w1 + 5w2 - w5 + w8 = 29
and w1, w2, ..., w8 ≥ 0.
The final simplex table of the dual is
B     cB    xB     y1     y2    y3    y4     y5
y5    0     57/2   19/2   0     0     -5/2   1
y3    0     16     4      0     1     -2     0
y2    -7    23/2   5/2    1     0     -1/2   0
z' = cB xB = -161/2       wj:   0     23/2   16    0     57/2
cj          -26    -7     0     0     0
Δj          -17/2  x      x     -7/2  x

Therefore the solution is w1 = 0, w2 = 23/2, Min. z* = 161/2, in agreement with the values read from the Δj row of the primal's final table.
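As a numeric sanity check (a sketch using only the data of Example 3.4), the primal optimum x = (0, 7/2, 0) and the dual solution w = (0, 23/2) can be verified to be feasible with equal objective values:

```python
x = (0.0, 3.5, 0.0)    # primal optimum of Example 3.4
w = (0.0, 11.5)        # dual optimum (w1, w2) = (0, 23/2)

# Primal feasibility and value: Max z = 30x1 + 23x2 + 29x3.
assert 6*x[0] + 5*x[1] + 3*x[2] <= 26
assert 4*x[0] + 2*x[1] + 5*x[2] <= 7
z_primal = 30*x[0] + 23*x[1] + 29*x[2]

# Dual feasibility and value: Min z_w = 26w1 + 7w2.
assert 6*w[0] + 4*w[1] >= 30
assert 5*w[0] + 2*w[1] >= 23
assert 3*w[0] + 5*w[1] >= 29
z_dual = 26*w[0] + 7*w[1]

assert z_primal == z_dual == 80.5   # both equal 161/2
```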
DUALITY IN LINEAR PROGRAMMING
Definition : Primal Problem
Max z_x = Σ_{i=1}^{n} ci xi = c^T x
s.t. Ax ≤ b, x ≥ 0, A of order m × n.
Its dual is
Min z_w = b^T w
s.t. A^T w ≥ c, w ≥ 0
(x has n components, w has m components).
To write a problem in standard primal form:
Step 1 : Convert the objective function into max form (Min z is handled as Max z' with z' = -z).
Step 2 : If a constraint has '≥', then multiply the constraint by (-1).
Step 3 : If a constraint has '=', then replace this constraint by the two constraints '≤' and '≥', e.g.
x1 + x2 = 2 becomes x1 + x2 ≤ 2 and x1 + x2 ≥ 2.
Step 4 : Every unrestricted variable is replaced by the difference of two non-negative variables; e.g. if x1 is unrestricted, put x1 = x1' - x1'' with x1', x1'' ≥ 0.
To pass from the standard primal to its dual:
(i) A → A^T.
(ii) Interchange b and c.
(iii) Change the direction of the inequalities (≤ becomes ≥).
(iv) Minimize the objective function instead of maximizing it.
Example : Find the dual of
Max z = 3x1 + 2x2
s.t. x1 + 3x2 ≤ 5
x1 + x2 ≤ 7, x1, x2 ≥ 0.
This is already of the form Max z = c^T x, s.t. Ax ≤ b, x ≥ 0:
Primal : Max z = [3, 2] [x1, x2]' = c^T x
s.t. [ 1  3 ] [x1]  ≤  [5], x ≥ 0
     [ 1  1 ] [x2]     [7]
Dual : Min z_w = [5, 7] [w1, w2]' = 5w1 + 7w2
s.t. [ 1  1 ] [w1]  ≥  [3], w1, w2 ≥ 0
     [ 3  1 ] [w2]     [2]
Example : Find the dual of
Max z = 2x1 + 3x2 + x3
s.t. x1 + x2 + 3x3 ≤ 8
x1 + x2 + x3 ≤ 4, x1, x2, x3 ≥ 0
Answer : Max z = [2, 3, 1] [x1, x2, x3]'
s.t. [ 1  1  3 ] [x]  ≤  [8], x1, x2, x3 ≥ 0
     [ 1  1  1 ]         [4]
Dual : Min z_w = 8w1 + 4w2
s.t. [ 1  1 ]          [2]
     [ 1  1 ] [w]  ≥   [3], w1, w2 ≥ 0
     [ 3  1 ]          [1]
Primal : Max z = c^T x
s.t. Ax ≤ b, x ≥ 0
Dual : Min z_w = b^T w
s.t. A^T w ≥ c, w ≥ 0
Example : Find the dual of the following Primal.
Min z_x = 2x2 + 5x3
s.t. x1 + x2 ≥ 2
2x1 + x2 + 6x3 ≤ 6
x1 - x2 + 3x3 = 4, x1, x2, x3 ≥ 0
Answer : Max z'_x = -2x2 - 5x3 (where z'_x = -z_x). The first and third constraints are rewritten:
x1 + x2 ≥ 2 becomes -x1 - x2 ≤ -2,
x1 - x2 + 3x3 = 4 becomes x1 - x2 + 3x3 ≤ 4 and -x1 + x2 - 3x3 ≤ -4,
s.t. -x1 - x2 ≤ -2
2x1 + x2 + 6x3 ≤ 6
x1 - x2 + 3x3 ≤ 4
-x1 + x2 - 3x3 ≤ -4, x1, x2, x3 ≥ 0
Standard : Max z'_x = [0, -2, -5] [x1, x2, x3]'
s.t.
[ -1  -1   0 ]          [-2]
[  2   1   6 ] [x]  ≤   [ 6]
[  1  -1   3 ]          [ 4]
[ -1   1  -3 ]          [-4]
Dual :
[ -1  2   1  -1 ]          [ 0]
[ -1  1  -1   1 ] [w]  ≥   [-2],  w1, w2, w3, w4 ≥ 0
[  0  6   3  -3 ]          [-5]
i.e. Min z_w = -2w1 + 6w2 + 4w3 - 4w4
-w1 + 2w2 + w3 - w4 ≥ 0
-w1 + w2 - w3 + w4 ≥ -2
6w2 + 3w3 - 3w4 ≥ -5, w1, w2, w3, w4 ≥ 0
Putting w3' = w3 - w4 (a difference of non-negative variables, hence unrestricted in sign), this becomes
Min z_w = -2w1 + 6w2 + 4w3'
s.t. -w1 + 2w2 + w3' ≥ 0
-w1 + w2 - w3' ≥ -2
6w2 + 3w3' ≥ -5, w1, w2 ≥ 0,
w3' is unrestricted.
Observation : The third constraint in the primal is an equation; the third variable in its dual is unrestricted in sign.
Example : Find the dual of
Max z_x = 2x1 + 3x2 + 4x3
s.t. 2x1 + 3x2 + 5x3 ≥ 2, 3x1 + 4x2 + 6x3 ≤ 5
x1, x2 ≥ 0, x3 unrestricted.
First multiply the '≥' constraint by (-1):
-2x1 - 3x2 - 5x3 ≤ -2
3x1 + 4x2 + 6x3 ≤ 5, x1, x2 ≥ 0
and put x3 = x4 - x5 with x4, x5 ≥ 0, so that
-2x1 - 3x2 - 5x4 + 5x5 ≤ -2
Standard Primal : Max z'_x = [2, 3, 4, -4] [x1, x2, x4, x5]'
s.t.
[ -2  -3  -5   5 ] [x1 x2 x4 x5]'  ≤  [-2], x1, x2, x4, x5 ≥ 0
[  3   4   6  -6 ]                    [ 5]
Dual : Min z_w = -2w1 + 5w2
s.t.
[ -2   3 ]          [ 2]
[ -3   4 ] [w]  ≥   [ 3],  w1, w2 ≥ 0
[ -5   6 ]          [ 4]
[  5  -6 ]          [-4]
i.e. Min z_w = -2w1 + 5w2
-2w1 + 3w2 ≥ 2, -3w1 + 4w2 ≥ 3
-5w1 + 6w2 ≥ 4
and 5w1 - 6w2 ≥ -4, i.e. -5w1 + 6w2 ≤ 4,
so that -5w1 + 6w2 = 4.
Observation : The 3rd variable in the primal is unrestricted; the 3rd constraint in its dual is an equation.
The dual of the dual : consider
(I) Primal : Max z_x = c^T x
s.t. Ax ≤ b, x ≥ 0
(II) Dual : Min z_w = b^T w
s.t. A^T w ≥ c, w ≥ 0.
(III) The dual may be rewritten as
Max z'_w = -b^T w
s.t. -A^T w ≤ -c, w ≥ 0,
whose dual in turn is
Min z_u = -c^T u s.t. (-A^T)^T u ≥ -b, u ≥ 0, i.e. Max c^T u s.t. Au ≤ b, u ≥ 0,
which is the primal problem again.
Thus we have,
Theorem : If x is any FS (feasible solution) to the primal problem and w is any FS to the dual problem, then
c^T x ≤ b^T w,
i.e. Σ_{i=1}^{n} ci xi ≤ Σ_{i=1}^{m} bi wi.
Proof : Feasibility of x means
Σ_{j=1}^{n} aij xj ≤ bi, i = 1, 2, ..., m .... (1)
and feasibility of w means
Σ_{p=1}^{m} apk wp ≥ ck, k = 1, 2, ..., n .... (2)
Using (2) and then (1),
Σ_{i=1}^{n} ci xi ≤ Σ_{i=1}^{n} ( Σ_{p=1}^{m} api wp ) xi = Σ_{p=1}^{m} wp ( Σ_{i=1}^{n} api xi ) ≤ Σ_{p=1}^{m} wp bp.
Hence c^T x ≤ b^T w, where c^T x = [c1, c2, ..., cn][x1, x2, ..., xn]' = Σ_{i=1}^{n} ci xi.
Corollary : If x̄ is a FS to the primal and w̄ is a FS to its dual with c^T x̄ = b^T w̄, then x̄ and w̄ are optimal solutions to their respective problems.
Proof : We know that if x is a FS to the primal and w̄ is a FS to its dual, then c^T x ≤ b^T w̄. Thus for every feasible x,
c^T x ≤ b^T w̄ = c^T x̄,
so c^T x̄ is maximum, i.e. x̄ is optimal. Similarly, for every feasible w,
b^T w ≥ c^T x̄ = b^T w̄,
so b^T w̄ is minimum, i.e. w̄ is optimal.
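Weak duality can be checked numerically; the sketch below uses the small primal of the earlier worked example (Max 3x1 + 2x2, s.t. x1 + 3x2 ≤ 5, x1 + x2 ≤ 7) with arbitrarily chosen feasible points:

```python
A = [[1, 3], [1, 1]]
b = [5, 7]
c = [3, 2]

def is_primal_feasible(x):
    return all(v >= 0 for v in x) and all(
        sum(a * v for a, v in zip(row, x)) <= bi for row, bi in zip(A, b))

def is_dual_feasible(w):
    return all(v >= 0 for v in w) and all(
        sum(A[i][j] * w[i] for i in range(2)) >= c[j] for j in range(2))

x, w = [1, 1], [3, 0]            # arbitrary feasible points, not optima
assert is_primal_feasible(x) and is_dual_feasible(w)
cx = sum(ci * xi for ci, xi in zip(c, x))   # c^T x = 5
bw = sum(bi * wi for bi, wi in zip(b, w))   # b^T w = 15
assert cx <= bw                              # weak duality: c^T x <= b^T w
```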
Theorem : If x0 is an optimum solution to the primal, then there exists a feasible solution w0 to the dual such that c^T x0 = b^T w0 (so that, by the corollary, w0 is an optimum solution to the dual).
Proof : Let A be m × n and introduce slack variables so that the constraints become
[A  I] [x ; xs] = b,
where I is the m × m identity matrix (the augmented matrix has m rows and n + m columns). Let B be the basis matrix of the optimal solution, so that x0 consists of xB = B^{-1}b ∈ R^m together with zeros in the n - m non-basic positions.
For the structural columns aj,
Δj = cB^T B^{-1} aj - cj, j = 1, 2, ..., n,
and for the slack columns ej,
Δj = cB^T B^{-1} ej - 0 = cB^T B^{-1} ej, j = n + 1, ..., n + m.
Since x0 is optimal, Δj ≥ 0 for all j. Hence
cB^T B^{-1} aj ≥ cj, j = 1, 2, ..., n, i.e. cB^T B^{-1} A ≥ c^T,
and cB^T B^{-1} ej ≥ 0, j = n + 1, n + 2, ..., n + m.
Put cB^T B^{-1} = w0^T (say), w0 ∈ R^m.
Then w0^T A ≥ c^T, or A^T w0 ≥ c.
Since cB^T B^{-1} ej ≥ 0 for each j, cB^T B^{-1} ≥ 0, i.e. w0^T ≥ 0.
Thus A^T w0 ≥ c, w0 ≥ 0, so w0 is a feasible solution of the dual. Moreover
b^T w0 = w0^T b = cB^T B^{-1} b = cB^T xB = Max z_x.
Since b^T w0 = cB^T xB = c^T x0, the corollary shows that w0 is an optimal solution of the dual.
Theorem : If the kth constraint in the primal is an equation, then the kth variable of the dual is unrestricted in sign.
Proof (sketch) : Let the primal be
Max z_x = c^T x,
subject to the kth constraint holding as an equation, the remaining constraints with ≤, and x1, x2, ..., xn ≥ 0.
Replace the kth equation by a pair of inequalities and pass to the dual; the kth dual variable then enters as wk = wk' - wk'' with wk', wk'' ≥ 0, and the dual constraints read
a11w1 + a21w2 + ... + ak1(wk' - wk'') + ... + am1wm ≥ c1
a12w1 + a22w2 + ... + ak2(wk' - wk'') + ... + am2wm ≥ c2
..........
a1nw1 + a2nw2 + ... + akn(wk' - wk'') + ... + amnwm ≥ cn.
Thus we have
Min z_w = Σ_{i=1}^{m} bi wi
subject to the above constraints with wk = wk' - wk'', and
w1, w2, ..., wk-1, wk+1, ..., wm ≥ 0, wk unrestricted:
the kth variable in the dual is unrestricted in sign.
Theorem : If pth variable in primal is unrestricted in sign then pth constraint of the dual is an
equation.
Proof (sketch) : If xp is unrestricted, put xp = xp' - xp'' with xp', xp'' ≥ 0, so that
Max z_x = c1x1 + ... + cp(xp' - xp'') + ... + cnxn
s.t. a11x1 + a12x2 + ... + a1p(xp' - xp'') + ... + a1nxn ≤ b1
..........
am1x1 + am2x2 + ... + amp(xp' - xp'') + ... + amnxn ≤ bm.
The columns of xp' and xp'' are negatives of each other, so the dual contains the pair of constraints
a1pw1 + a2pw2 + ... + ampwm ≥ cp and -(a1pw1 + a2pw2 + ... + ampwm) ≥ -cp,
i.e. a1pw1 + a2pw2 + a3pw3 + ... + ampwm = cp :
the pth constraint of the dual is an equation.
For the revised simplex method the objective function is treated as if it were one of the constraint equations:
z - c1x1 - c2x2 - ... - cnxn - 0·xn+1 - 0·xn+2 - ... - 0·xn+m = 0
a11x1 + a12x2 + ... + a1nxn + xn+1 = b1
a21x1 + a22x2 + ... + a2nxn + xn+2 = b2
  :       :        :       :                    .......... (3.4)
am1x1 + am2x2 + ... + amnxn + xn+m = bm
Writing x0 for z and a0j = -cj (with a0,n+1 = ... = a0,n+m = 0), the system becomes
1·x0 + a01x1 + a02x2 + ... + a0nxn + a0,n+1xn+1 + ... + a0,n+mxn+m = 0
0·x0 + a11x1 + a12x2 + ... + a1nxn + 1·xn+1 + ... + 0·xn+m = b1
  :       :        :       :        :                .......... (3.5)
0·x0 + am1x1 + am2x2 + ... + amnxn + 0·xn+1 + ... + 1·xn+m = bm
Again, writing the system (3.5) in matrix form,
[ 1   a01  a02 ....  a0n   a0,n+1 ... a0,n+m ] [ x0  ]   [ 0 ]
[ 0   a11  a12 ...   a1n   1      ...  0     ] [ x1  ]   [ b1]
[ :    :    :         :                      ] [  :  ] = [ : ]  .......... (3.6)
[ 0   am1  am2 ...   amn   0      ...  1     ] [xn+m ]   [ bm]
i.e.
[ 1  a0 ] [x0]   [0]
[ 0  A  ] [x ] = [b]  .......... (3.7)
where a0 = (a01, a02, ..., a0,n+m) and the remaining symbols have their usual meanings.
Equation (3.7) is referred to as standard form I for the revised simplex method.
Since x0 (= z) is always kept in the basis, the basis matrix B1 for (3.7) is of order (m + 1) × (m + 1) with first column e1:
B1 = (e1, β1^(1), ..., βm^(1)).
If the basis matrix B for AX = b is represented by B = (βij), then
B1 = [ 1   -cB1  -cB2 ... -cBm ]
     [ 0   β11   β12  ...  β1m ]
     [ 0   β21   β22  ...  β2m ]  ......... (3.11)
     [ :    :     :         :  ]
     [ 0   βm1   βm2  ...  βmm ]
where cBi (i = 1, 2, ..., m) are the coefficients of xBi (i = 1, 2, ..., m) in the equation
z - c1x1 - c2x2 - ... - cnxn - 0·xn+1 - ... - 0·xn+m = 0
and cB = (cB1, cB2, ..., cBm).
Hence the basis matrix B1 [in equation (3.11)] can be represented in the partitioned form as
B1 = [ 1  -cB ]
     [ 0   B  ]  .......... (3.12)
Now the right side of (3.12) can be used to obtain the inverse of the basis matrix B1 in the revised simplex method for standard form I.
If
M = [ I  Q ]
    [ 0  R ]  .......... (3.13)
then
M^{-1} = [ I  -Q R^{-1} ]
         [ 0    R^{-1}  ]  .......... (3.14)
Now, to apply this rule to compute B1^{-1}, compare the matrices B1 (3.12) and M (3.13) to get
I = 1, Q = -cB and R = B.
Substituting these values of I, Q, R in the formula (3.14) for the matrix inverse, we get
B1^{-1} = [ 1  cB B^{-1} ]
          [ 0    B^{-1}  ]  .......... (3.15)
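The partitioned-inverse formula can be checked numerically; in this sketch B and cB are arbitrary illustrative data (not from the text), and the inverse of B is supplied by hand:

```python
def matmul(X, Y):
    """Plain-Python matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

cB = [3, 4]                                  # illustrative cost vector
B = [[2, 0], [1, 1]]                         # illustrative basis matrix
B_inv = [[0.5, 0.0], [-0.5, 1.0]]            # inverse of B, found by hand
cBBinv = [sum(cB[i] * B_inv[i][j] for i in range(2)) for j in range(2)]

# B1 = [[1, -cB], [0, B]] and the claimed inverse [[1, cB B^-1], [0, B^-1]]:
B1 = [[1, -cB[0], -cB[1]], [0] + B[0], [0] + B[1]]
B1_inv = [[1, cBBinv[0], cBBinv[1]], [0] + B_inv[0], [0] + B_inv[1]]

P = matmul(B1, B1_inv)                       # should be the 3x3 identity
assert all(abs(P[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(3) for j in range(3))
```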
Any aj^(1) (not in the basis matrix B1) can be expressed as a linear combination of the column vectors (β0^(1), β1^(1), ..., βm^(1)) with coefficients (y0j, y1j, ..., ymj), i.e.
Yj^(1) = B1^{-1} aj^(1)  (from (3.10))
which yields
Yj^(1) = [ 1  cB B^{-1} ] [ -cj ]  =  [ cB B^{-1} aj - cj ]
         [ 0    B^{-1}  ] [  aj ]     [     B^{-1} aj     ]
       = [ zj - cj ]  =  [ Δj ]
         [   Yj    ]     [ Yj ]  .......... (3.17)
Note : The advantage of treating the objective function as one of the constraints is that zj - cj (i.e. Δj) for any aj not in the basis can be easily computed by taking the product of the first row of B1^{-1} with aj^(1):
Δj = zj - cj = (first row of B1^{-1}) · (aj^(1) not in the basis).
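The note can be illustrated in Python with the data of Example 3.5 below (Max z = 2x1 + x2, s.t. 3x1 + 4x2 ≤ 6, 6x1 + x2 ≤ 3), where the starting B1^{-1} = I3 so its first row is (1, 0, 0):

```python
first_row = (1, 0, 0)   # first row of B1^{-1} (= I3 at the start)
a1 = (-2, 3, 6)         # column of x1: (-c1, a11, a21)
a2 = (-1, 4, 1)         # column of x2: (-c2, a12, a22)

def delta(row, col):
    """Dot product: Delta_j = (first row of B1^{-1}) . aj^(1)."""
    return sum(r * c for r, c in zip(row, col))

assert delta(first_row, a1) == -2   # Delta_1
assert delta(first_row, a2) == -1   # Delta_2
```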
Similarly, the solution column is
XB^(1) = B1^{-1} [ 0 ]  =  [ 1  cB B^{-1} ] [ 0 ]  =  [ cB (B^{-1} b) ]
                 [ b ]     [ 0    B^{-1}  ] [ b ]     [   B^{-1} b    ]
       = [ cB XB ]  =  [ z  ]   [ because XB = B^{-1} b and cB XB = z ]
         [   XB  ]     [ XB ]  .......... (3.19)
Also, as in (3.15),
B1^{-1} = [ 1  cB B^{-1} ]
          [ 0    B^{-1}  ]  .......... (3.20)
But the initial basis matrix B for the original problem is always the m × m identity matrix (Im). We note that Im always appears in (AX = b) (if it is not so, it can be made to appear in A by introducing artificial variables).
Since B = Im, B^{-1} = Im, and so
B1^{-1} = [ 1  cB Im ]  =  [ 1  cB ]
          [ 0    Im  ]     [ 0  Im ]
Furthermore, if after ensuring that all bi ≥ 0 only the slack variables are needed and the initial basis matrix B = Im appears, then
cB1 = cB2 = cB3 = ... = cBm = 0, i.e. cB = 0,
and
B1^{-1} = [ 1  0 ... 0 ]
          [ 0  1 ... 0 ]  =  [ 1  0  ]  =  Im+1
          [ :  :     : ]     [ 0  Im ]
          [ 0  0 ... 1 ]
Thus the inverse of the initial basis matrix will be B1^{-1} = (Im+1)^{-1} = Im+1, with which we start the revised simplex procedure. The initial basic solution is then read directly from the right-hand side column (0, b1, ..., bm).
After obtaining the initial basis matrix inverse and an initial basic feasible solution to start with, we have to construct the starting revised simplex table.
To Construct the Starting Table in Standard Form I.
Since x0 (= z) should always be in the basis, the first column β0^(1) = e1 of the initial basis matrix inverse B1^{-1} = Im+1 will not be removed at any subsequent iteration. The remaining columns change as the iterations proceed. The last column of the table is
Yk^(1) = [ zk - ck ]  =  [ Δk ]
         [    Yk   ]     [ Yk ]
where the entering column is chosen by
Δk = min Δj (for those j for which aj^(1) is not in B1).
Note : If there is a tie, we can use the smallest index j, which is an arbitrary rule but computationally useful.
The starting table thus shows the variables in the basis, the columns of B1^{-1}, the solution column and the column Yk^(1):
Variables   B1^{-1}            Solution   Yk^(1)
z           1   0  0 -- 0      0          zk - ck
xB1         0   1  0 -- 0      b1         y1k
:           :                  :          :
xBm         0   0  0 -- 1      bm         ymk
Example 3.5
Solve the following linear programming problem by revised simplex method.
Max z 2 x1 x2
subject to 3 x1 4 x2 6, 6 x1 x2 3, x1, x2 0 .
Solution :
Step : 1 Express the given problem in Standard Form - I
After ensuring that all bi 0 and transforming the objective function of originlal problem
for maximization of z (if necessary), introduce non - negative slack variables to convert the
inequalities to equations. It should be noted that the objective function is also treated as if it were
the first constaint equation.
134
Thus, the given problem is transformed to the following form,
z 2 x1 x2 0
3 x1 4 x2 x3 6 .......... (i)
6 x1 x2 x4 3
We, proceed to obtain the initial basis matrix B1 as an identity matrix and complete all
the columns of starting revised simplex table except the last column Yk(1) (which can be done in
Step 5)
Applying this step, the system (i) of constraint equations can be expressed in the following
matrix form:

    [ 1  -2  -1  0  0 ] [ z  ]   [ 0 ]
    [ 0   3   4  1  0 ] [ x1 ] = [ 6 ]
    [ 0   6   1  0  1 ] [ x2 ]   [ 3 ]
                        [ x3 ]
                        [ x4 ]

Here the columns β0(1), β1(1) and β2(1) form the basis matrix B1 (whose inverse is also B1,
because B1 = I3 here). Now the starting revised simplex table can be constructed as follows:
Table 1

    Variables in    β0(1)  β1(1)  β2(1)    XB(1)    a1(1)  a2(1)
    the basis
    z                 1      0      0        0       -2     -1
    x3 (xB1)          0      1      0        6        3      4
    x4 (xB2)          0      0      1        3        6      1
Step 3: Compute Δ1 and Δ2.

    Δ1 = (first row of B1⁻¹) · a1(1) = (1, 0, 0) · (-2, 3, 6) = -2
    Δ2 = (first row of B1⁻¹) · a2(1) = (1, 0, 0) · (-1, 4, 1) = -1

In combined form,

    {Δ1, Δ2} = (first row of B1⁻¹)(a1(1), a2(1))
             = (1, 0, 0) [ -2  -1 ]
                         [  3   4 ]
                         [  6   1 ]
             = {-2, -1},

which gives the values Δ1 = -2, Δ2 = -1 as obtained earlier.
Step 4:
Now apply the usual rule to test the starting solution (x1 = x2 = 0, x3 = 6, x4 = 3) for optimality.
Since Δ1 and Δ2 obtained in step 3 are both negative, the starting basic feasible solution
is not optimal. Hence we proceed to determine the entering vector ak(1).
Step 5:
Let Δk = min{Δj : aj(1) is not in the basis}. So we have

    Δk = min{Δ1, Δ2} = min{-2, -1} = -2 = Δ1.

Hence k = 1, so a1(1) enters the basis and the variable x1 will enter the solution.
Now, in order to find the leaving vector, we first compute Yk(1) for k = 1.
Step 6:

    Y1(1) = B1⁻¹ a1(1) = a1(1) = (-2, 3, 6).

Now complete the last column Yk(1) of starting Table 1 by writing Y1(1) = (-2, 3, 6) in
that column. So the starting table grows to the following form.

Table 3

    Variables in    β0(1)  β1(1)  β2(1)    XB(1)    Y1(1)
    the basis
    z                 1      0      0        0       -2
    x3                0      1      0        6        3
    x4                0      0      1        3        6
Step 7:
The vector βr(1) to be removed from the basis is determined by using the minimum ratio
rule (similar to that of the ordinary simplex method):

    xBr / yrk = min over i { xBi / yik : yik > 0 }.

Putting k = 1 (which was obtained in Step 5),

    xBr / yr1 = min { xB1/y11 , xB2/y21 } = min { 6/3 , 3/6 } = 3/6.

So xBr / yr1 = xB2 / y21 and r = 2.
Table 4

    Variables in    β0(1)  β1(1)  β2(1)    XB(1)    Y1(1)    min(XB/Y1)
    the basis
    z                 1      0      0        0       -2         --
    x3 (xB1)          0      1      0        6        3         6/3
    x4 (xB2)          0      0      1        3        6         3/6  ←

Leaving vector: β2(1); key element (6).
Remark: If the min over i { xBi/yik : yik > 0 } is attained for more than one value of i, the
resulting basic feasible solution will be degenerate. In that case, we use the usual techniques
to resolve the degeneracy.
Step 8:
In order to bring uniformity with the ordinary simplex method, adopt the simple matrix
transformation rules. Here the intermediate coefficient matrix is:

    R1:   0   0   0   -2
    R2:   1   0   6    3
    R3:   0   1   3    6  ← key row

The column e1 will never change, so there is no need to write the column e1 in the
intermediate coefficient matrix. Also, the outgoing vector β2(1) is going to be replaced by Y1(1).
Now, divide the row R3 by the key element 6. Then add twice the third row to the first, and
subtract 3 times the third row from the second. In this way, obtain the next matrix.
          β1(1)    β2(1)    XB(1)    Y1(1)
    R1:     0       1/3       1        0
    R2:     1      -1/2      9/2       0
    R3:     0       1/6      1/2       1
Table 5

    Variables in    e1   β1(1)   β2(1)   XB(1)   Y2(1) (k = 2)   min(XB/Y2)     a4(1)  a2(1)
    the basis
    z                1     0      1/3      1        -2/3             --           0     -1
    x3               0     1     -1/2     9/2        7/2        (9/2)/(7/2)  ←    0      4
    x1               0     0      1/6     1/2        1/6        (1/2)/(1/6)       1      1

(The matrix under e1, β1(1), β2(1) is B1⁻¹.)

The improved solution is read from this table as:
z = 1, x3 = 9/2, x1 = 1/2, x2 = x4 = 0.
Step 9:

    {Δ4, Δ2} = (first row of B1⁻¹)(a4(1), a2(1))
             = (1, 0, 1/3) [ 0  -1 ]
                           [ 0   4 ]
                           [ 1   1 ]
             = ( 1/3 , -2/3 )
Thus, we get Δ4 = 1/3 and Δ2 = -2/3.

Step 10:

    Δk = min{Δ4, Δ2} = min{1/3, -2/3} = -2/3 = Δ2.

Hence k = 2. So a2(1) should enter the solution, which means the variable x2 will enter the
basic solution.
Step 11: Determination of the leaving vector, given the entering vector a2(1).
By the minimum ratio rule applied in Table 5, min{(9/2)/(7/2), (1/2)/(1/6)} = 9/7 occurs in
the x3 row. So remove the vector β1(1) from the basis, and bring Y2(1) in its place by matrix
transformation.
Step 12: Determination of the new table for the improved solution.
For this, the intermediate coefficient matrix is:

    R1:   0    1/3     1     -2/3
    R2:   1   -1/2    9/2     7/2  ← key row
    R3:   0    1/6    1/2     1/6

Applying the operations

    R2 → (2/7)R2,  then  R1 → R1 + (2/3)R2  and  R3 → R3 - (1/6)R2  (with the new R2),

we get
          β1(1)    β2(1)    XB(1)    Y2(1)
    R1:    4/21     5/21    13/7       0
    R2:    2/7     -1/7      9/7       1
    R3:   -1/21     4/21     2/7       0
Now, the table for the improved solution is as follows:

    Variables in    e1    β1(1)    β2(1)    XB(1)
    the basis
    z                1     4/21     5/21    13/7
    x2 (xB1)         0     2/7     -1/7      9/7
    x1 (xB2)         0    -1/21     4/21     2/7
Third Iteration

Step 13:

    {Δ4, Δ3} = (first row of B1⁻¹)(a4(1), a3(1))
             = (1, 4/21, 5/21) [ 0  0 ]
                               [ 0  1 ]
                               [ 1  0 ]
             = ( 5/21 , 4/21 )

Therefore Δ4 = 5/21 and Δ3 = 4/21. Since all Δj ≥ 0, the solution is now optimal:

    z = 13/7, x2 = 9/7, x1 = 2/7.
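To make the bookkeeping of these tables concrete, here is a minimal sketch of the revised simplex method for problems of the form max c·x, Ax ≤ b, x ≥ 0 with b ≥ 0 (an illustrative implementation written for this text, assuming a bounded problem; the function name is ours). It maintains only the basis inverse, which is exactly the economy that distinguishes the revised method from the ordinary simplex tableau:

```python
import numpy as np

def revised_simplex(c, A, b):
    """Maximize c@x subject to A@x <= b, x >= 0 (with b >= 0), keeping
    only the basis inverse B_inv up to date, as in the tables above."""
    m, n = A.shape
    A_ext = np.hstack([A, np.eye(m)])          # append slack columns
    c_ext = np.concatenate([c, np.zeros(m)])
    basis = list(range(n, n + m))              # slacks form the first basis
    B_inv = np.eye(m)
    xB = b.astype(float).copy()
    while True:
        y = c_ext[basis] @ B_inv               # simplex multipliers
        deltas = y @ A_ext - c_ext             # Delta_j = z_j - c_j
        k = int(np.argmin(deltas))             # ties: smallest index, as in the text
        if deltas[k] >= -1e-9:                 # all Delta_j >= 0: optimal
            break
        Yk = B_inv @ A_ext[:, k]               # entering column in current coordinates
        ratios = [xB[i] / Yk[i] if Yk[i] > 1e-9 else np.inf for i in range(m)]
        r = int(np.argmin(ratios))             # minimum ratio rule
        B_inv[r, :] /= Yk[r]                   # pivot: update only B_inv and xB
        xB[r] /= Yk[r]
        for i in range(m):
            if i != r:
                B_inv[i, :] -= Yk[i] * B_inv[r, :]
                xB[i] -= Yk[i] * xB[r]
        basis[r] = k
    x = np.zeros(n + m)
    x[basis] = xB
    return x[:n], float(c @ x[:n])

# Example 3.5:  max 2x1 + x2,  3x1 + 4x2 <= 6,  6x1 + x2 <= 3
x, z = revised_simplex(np.array([2.0, 1.0]),
                       np.array([[3.0, 4.0], [6.0, 1.0]]),
                       np.array([6.0, 3.0]))
print(x, z)   # x = (2/7, 9/7), z = 13/7
```

Running it on Example 3.5 reproduces x1 = 2/7, x2 = 9/7 and max z = 13/7; the multiplier vector y = cB·B⁻¹ plays the same role as the "first row of B1⁻¹" device used in Steps 3, 9 and 13.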
Example 3.6
Solve the following problem by revised simplex method:
Max z = x1 + 2x2
subject to x1 + x2 ≤ 3, x1 + 2x2 ≤ 5, 3x1 + x2 ≤ 6, x1, x2 ≥ 0.
Solution:
First express the given problem in revised simplex form:

    z - x1 - 2x2 = 0
    x1 + x2 + x3 = 3
    x1 + 2x2 + x4 = 5
    3x1 + x2 + x5 = 6
Then express the system of constraint equations in the following matrix form:

    [ 1  -1  -2  0  0  0 ] [ z  ]   [ 0 ]
    [ 0   1   1  1  0  0 ] [ x1 ]   [ 3 ]
    [ 0   1   2  0  1  0 ] [ x2 ] = [ 5 ]
    [ 0   3   1  0  0  1 ] [ x3 ]   [ 6 ]
                           [ x4 ]
                           [ x5 ]
Now form the revised simplex table for the first iteration.

Table 1

    Variables in    β0(1)  β1(1)  β2(1)  β3(1)   XB(1)   Yk(1) (k = 2)   min(XB/Y2)    a1(1)  a2(1)
    the basis
    z                 1      0      0      0       0        -2              --          -1     -2
    x3 (xB1)          0      1      0      0       3         1             3/1           1      1
    x4 (xB2)          0      0      1      0       5         2             5/2  ←        1      2
    x5 (xB3)          0      0      0      1       6         1             6/1           3      1
Step 1:

    {Δ1, Δ2} = (first row of B1⁻¹)(a1(1), a2(1))
             = (1, 0, 0, 0) [ -1  -2 ]
                            [  1   1 ]
                            [  1   2 ]
                            [  3   1 ]
             = {-1, -2}

Hence Δ1 = -1 and Δ2 = -2, so Δk = min{-1, -2} = -2 = Δ2, i.e. k = 2.
So the vector a2(1) must enter the basis. This shows that x2 will enter the basic feasible
solution.
Next compute Y2(1) = B1⁻¹ a2(1) = (-2, 1, 2, 1).
By the minimum ratio rule, min{3/1, 5/2, 6/1} = 5/2, so (2) is the 'key element',
corresponding to which β2(1) must leave the basis matrix. Hence x4 will be the outgoing
variable.
Step 4: Determination of the improved solution.
The intermediate coefficient matrix is:

    R1:   0   0   0    0   -2
    R2:   1   0   0    3    1
    R3:   0   1   0    5    2  ← key row
    R4:   0   0   1    6    1

Apply the usual rules of transformation to obtain:

    0    1    0    5     0
    1   -1/2  0    1/2   0
    0    1/2  0    5/2   1
    0   -1/2  1    7/2   0
The table for the improved solution:

Table 2

    Variables in    e1   β1(1)   β2(1)   β3(1)   XB(1)   a1(1)  a4(1)
    the basis
    z                1     0       1       0       5      -1     0
    x3 (xB1)         0     1      -1/2     0      1/2      1     0
    x2 (xB2)         0     0       1/2     0      5/2      1     1
    x5 (xB3)         0     0      -1/2     1      7/2      3     0

The improved solution is z = 5, x3 = 1/2, x2 = 5/2, x5 = 7/2.

Step 5:

    {Δ1, Δ4} = (first row of B1⁻¹)(a1(1), a4(1))
             = (1, 0, 1, 0) [ -1  0 ]
                            [  1  0 ]
                            [  1  1 ]
                            [  3  0 ]
             = {0, 1}

Hence Δ1 = 0 and Δ4 = 1. Since Δ1 and Δ4 are both ≥ 0, the solution under test is optimal.
Furthermore, Δ1 = 0 shows that the problem has alternative optimum solutions. Thus,
the required optimal solution is x1 = 0, x2 = 5/2, max z = 5.
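Since Δ1 = 0 signals alternative optima, a numerical check should only pin down the optimal value, not the particular vertex. A sketch using SciPy's `linprog` (which minimizes, so the objective is negated):

```python
from scipy.optimize import linprog

# Example 3.6:  max x1 + 2x2,  x1 + x2 <= 3,  x1 + 2x2 <= 5,  3x1 + x2 <= 6
res = linprog(c=[-1, -2],
              A_ub=[[1, 1], [1, 2], [3, 1]],
              b_ub=[3, 5, 6],
              method="highs")
print(-res.fun)   # max z = 5; res.x may be any point of the optimal edge
```

The default variable bounds of `linprog` are x ≥ 0, matching the problem as stated.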
Example 3.7
Solve by revised simplex method:
Max z = 6x1 - 2x2 + 3x3
subject to 2x1 - x2 + 2x3 ≤ 2
x1 + 4x3 ≤ 4
x1, x2, x3 ≥ 0
Solution:
The problem in the revised simplex form may be expressed by introducing the slack
variables x4 and x5 as

    z - 6x1 + 2x2 - 3x3 = 0
    2x1 - x2 + 2x3 + x4 = 2
    x1 + 4x3 + x5 = 4
The system of constraint equations may be represented in the following matrix form:

    [ 1  -6   2  -3  0  0 ] [ z  ]   [ 0 ]
    [ 0   2  -1   2  1  0 ] [ x1 ] = [ 2 ]
    [ 0   1   0   4  0  1 ] [ x2 ]   [ 4 ]
                            [ x3 ]
                            [ x4 ]
                            [ x5 ]

Table 1

    Variables in    e1   β1(1)   β2(1)   XB(1)   Y1(1)   min(XB/Y1)    a1(1)  a2(1)  a3(1)
    the basis
    z                1     0       0       0      -6        --          -6     2     -3
    x4 (xB1)         0     1       0       2       2       2/2  ←        2    -1      2
    x5 (xB2)         0     0       1       4       1       4/1           1     0      4

The starting solution is: x1 = x2 = x3 = 0; x4 = 2, x5 = 4, z = 0.
Step 1:

    {Δ1, Δ2, Δ3} = (first row of B1⁻¹)(a1(1), a2(1), a3(1))
                 = (1, 0, 0) [ -6   2  -3 ]
                             [  2  -1   2 ]
                             [  1   0   4 ]
                 = {-6, 2, -3}

Hence Δ1 = -6, Δ2 = 2, Δ3 = -3.
Since Δ1 and Δ3 are negative, the solution under test can be further improved.
The entering vector ak(1) corresponds to the value of k obtained by the criterion

    Δk = min{Δ1, Δ2, Δ3} = min{-6, 2, -3} = -6 = Δ1.

Hence k = 1. So the entering vector is found to be a1(1). This also means that the variable
x1 will enter the basic solution.
First we need to compute the column Y1(1) corresponding to the entering vector a1(1):

    Y1(1) = B1⁻¹ a1(1) = (-6, 2, 1).

Now apply the min. ratio rule: min{2/2, 4/1} = 1, so (2) is the 'key element', corresponding
to which β1(1) must leave the basis matrix. Hence x4 will be the outgoing variable.

Table 2

    Variables in    e1   β1(1)   β2(1)   XB(1)   Y2(1) (k = 2)   min(XB/Y2)     a4(1)  a2(1)  a3(1)
    the basis
    z                1     3       0       6        -1              --            0     2     -3
    x1 (xB1)         0    1/2      0       1       -1/2             --            1    -1      2
    x5 (xB2)         0   -1/2      1       3        1/2         3/(1/2)  ←        0     0      4

The improved solution is: z = 6, x1 = 1, x2 = x3 = x4 = 0, x5 = 3.
Second Iteration

Step 5:

    {Δ4, Δ2, Δ3} = (first row of B1⁻¹)(a4(1), a2(1), a3(1))
                 = (1, 3, 0) [ 0   2  -3 ]
                             [ 1  -1   2 ]
                             [ 0   0   4 ]
                 = {3, -1, 3}

Hence Δ4 = 3, Δ2 = -1, Δ3 = 3.
Since Δ2 is still negative, the solution under test is not optimal. Hence further improvement
is possible. So we proceed to find the 'entering' and 'leaving' vectors in the next step.

    Δk = min{Δ4, Δ2, Δ3} = min{3, -1, 3} = -1 = Δ2.

Hence k = 2. Therefore a2(1) will enter the basis. The entering vector a2(1) indicates that the
variable x2 must enter the new solution.

    Y2(1) = B1⁻¹ a2(1) = [ 1    3    0 ] [  2 ]   [ -1   ]
                         [ 0   1/2   0 ] [ -1 ] = [ -1/2 ]
                         [ 0  -1/2   1 ] [  0 ]   [  1/2 ]

The 'min ratio rule' in the Y2(1) column of the preceding table indicates that 1/2 is the key
element, corresponding to which the vector β2(1) must leave the basis. Hence x5 will be the
outgoing variable.
Step 8: The next improved solution.
Transform the preceding table into Table 5, from which the next improved solution can
easily be read.

Table 5

    Variables in    e1   β1(1)   β2(1)   XB(1)   a4(1)  a5(1)  a3(1)
    the basis
    z                1     2       2      12       0      0     -3
    x1 (xB1)         0     0       1       4       1      0      2
    x2 (xB2)         0    -1       2       6       0      1      4

    z = 12, x1 = 4, x2 = 6, x3 = x4 = x5 = 0
Step 9:
Here we compute

    {Δ4, Δ5, Δ3} = (first row of B1⁻¹)(a4(1), a5(1), a3(1))
                 = (1, 2, 2) [ 0  0  -3 ]
                             [ 1  0   2 ]
                             [ 0  1   4 ]
                 = {2, 2, 9}

Hence Δ4 = 2, Δ5 = 2, Δ3 = 9.
The solution under test is optimal because Δ4, Δ5, Δ3 are all positive. Thus, the required
optimal solution is:

    x1 = 4, x2 = 6, x3 = 0, max z = 12
Example 3.8
Solve the following L.P.P. by revised simplex method.
Max z = 3x1 + x2 + 2x3 + 7x4
subject to
2x1 + 3x2 - x3 + 4x4 ≤ 40
-2x1 + 2x2 + 5x3 - x4 ≤ 35
x1 + x2 - 2x3 + 3x4 ≤ 100
and x1 ≥ 2, x2 ≥ 1, x3 ≥ 3, x4 ≥ 4
Solution:
Step 1:
In order to make the lower bounds of the variables zero, we substitute
x1 = y1 + 2, x2 = y2 + 1, x3 = y3 + 3, x4 = y4 + 4 in the given LPP to obtain:
Max z' = 3y1 + y2 + 2y3 + 7y4 (so that z = z' + 41)
s.t. 2y1 + 3y2 - y3 + 4y4 ≤ 20
-2y1 + 2y2 + 5y3 - y4 ≤ 26
y1 + y2 - 2y3 + 3y4 ≤ 91
and y1 ≥ 0, y2 ≥ 0, y3 ≥ 0, y4 ≥ 0.
Step 2: To express the LPP in revised simplex form.
Max z' = 3y1 + y2 + 2y3 + 7y4
s.t. z' - 3y1 - y2 - 2y3 - 7y4 = 0
2y1 + 3y2 - y3 + 4y4 + y5 = 20
-2y1 + 2y2 + 5y3 - y4 + y6 = 26
y1 + y2 - 2y3 + 3y4 + y7 = 91
yi ≥ 0 (i = 1, 2, ..., 7) and z' is unrestricted in sign.

Step 3: In matrix form,

    [ 1  -3  -1  -2  -7  0  0  0 ] [ z' ]   [  0 ]
    [ 0   2   3  -1   4  1  0  0 ] [ y1 ]   [ 20 ]
    [ 0  -2   2   5  -1  0  1  0 ] [ y2 ] = [ 26 ]
    [ 0   1   1  -2   3  0  0  1 ] [ y3 ]   [ 91 ]
                                   [ y4 ]
                                   [ y5 ]
                                   [ y6 ]
                                   [ y7 ]

Here XB(1) = (0, 20, 26, 91) is the initial BFS and the basis matrix B1 (= I4) is the identity.
Step 4: To construct the starting simplex table.

Table 1

    Variables in    B1⁻¹           XB(1)   Y4(1)   min(XB/Y4)
    the basis
    z'              1  0  0  0       0      -7        --
    y5              0  1  0  0      20       4       20/4  ← (min)
    y6              0  0  1  0      26      -1        --
    y7              0  0  0  1      91       3       91/3

Outgoing vector: β1(1). Incoming vector: a4(1).
Step 5: Test of optimality.

    {Δ1, Δ2, Δ3, Δ4} = (first row of B1⁻¹)(a1(1), a2(1), a3(1), a4(1))
                     = (1, 0, 0, 0) [ -3  -1  -2  -7 ]
                                    [  2   3  -1   4 ]
                                    [ -2   2   5  -1 ]
                                    [  1   1  -2   3 ]
                     = (-3, -1, -2, -7)

Since all Δj's are not ≥ 0, the solution is not optimal.
Step 6: To find the incoming and outgoing vectors.
Since Δ4 = -7 is the most negative, a4(1) is the vector entering the basis. The column
Y4(1) corresponding to a4(1) is given by

    Y4(1) = B1⁻¹ a4(1) = I4 (-7, 4, -1, 3) = (-7, 4, -1, 3).

Outgoing vector: Since xBr/yr4 = min{20/4, 91/3} = 20/4 = xB1/y14,
we have r = 1, and hence β1(1) (= a5(1)) is the outgoing vector.
Step 7: To find the improved solution.
We bring a4(1) in place of β1(1) (= a5(1)) in B1⁻¹, to get the revised simplex table:

Table 2

    Variables in    B1⁻¹                XB(1)   Y3(1)    min(XB/Y3)
    the basis
    z'              1   7/4   0  0      35     -15/4        --
    y4              0   1/4   0  0       5      -1/4        --
    y6              0   1/4   1  0      31      19/4      124/19  ←
    y7              0  -3/4   0  1      76      -5/4        --

Outgoing vector: β2(1). Incoming vector: a3(1).
Step 8:
We compute

    {Δ1, Δ2, Δ3, Δ5} = (first row of B1⁻¹)(a1(1), a2(1), a3(1), a5(1))
                     = (1, 7/4, 0, 0) [ -3  -1  -2  0 ]
                                      [  2   3  -1  1 ]
                                      [ -2   2   5  0 ]
                                      [  1   1  -2  0 ]
                     = ( 1/2 , 17/4 , -15/4 , 7/4 )

Since Δ3 = -15/4 is still negative, the solution under test is not optimal. So we proceed
to improve the solution in the next step.
Step 9: To find the entering and outgoing vectors.
As in step 6, we find the entering vector a3(1). The column vector Y3(1) corresponding to
a3(1) is given by

    Y3(1) = B1⁻¹ a3(1) = ( -15/4 , -1/4 , 19/4 , -5/4 ).

By the min. ratio rule, we find the outgoing vector β2(1) (= a6(1)). So the key element will be 19/4.
Step 10: To find the revised solution.
We bring a3(1) in place of β2(1) (= a6(1)) in the basis B1⁻¹ and obtain the next revised table.

Table 3

    Variables in    B1⁻¹                      XB(1)     Y1(1)    min(XB/Y1)
    the basis
    z'              1   37/19   15/19  0     1130/19   -13/19       --
    y4              0    5/19    1/19  0      126/19     8/19      63/4  ←
    y3              0    1/19    4/19  0      124/19    -6/19       --
    y7              0  -13/19    5/19  1     1599/19   -17/19       --

Outgoing vector: β1(1). Incoming vector: a1(1).
Step 11: To test the optimality.
We compute

    {Δ1, Δ2, Δ5, Δ6} = (first row of B1⁻¹)(a1(1), a2(1), a5(1), a6(1))
                     = (1, 37/19, 15/19, 0) [ -3  -1  0  0 ]
                                            [  2   3  1  0 ]
                                            [ -2   2  0  1 ]
                                            [  1   1  0  0 ]
                     = ( -13/19 , 122/19 , 37/19 , 15/19 )

Since Δ1 < 0, the solution under test is not optimal. So we proceed to revise the solution
in the next step.
Step 12: To find the entering and outgoing vectors.
As in step 6, we find the entering vector a1(1). The column vector Y1(1) corresponding to
a1(1) is given by

    Y1(1) = B1⁻¹ a1(1) = ( -13/19 , 8/19 , -6/19 , -17/19 ).

By the min. ratio rule, the outgoing vector is β1(1) (= a4(1)), with key element 8/19.
Step 13: To find the improved solution.
In order to bring a1(1) in place of β1(1) (= a4(1)), we divide the second row by 8/19, then add
13/19, 6/19 and 17/19 times of it to the first, third and fourth rows respectively, to obtain the
next improved solution.

Table 4

    Variables in    B1⁻¹               XB(1)
    the basis
    z'              1   19/8  7/8  0   281/4
    y1              0    5/8  1/8  0    63/4
    y3              0    1/4  1/4  0    23/2
    y7              0   -1/8  3/8  1   393/4
Step 14: We compute

    {Δ2, Δ4, Δ5, Δ6} = (first row of B1⁻¹)(a2(1), a4(1), a5(1), a6(1))
                     = (1, 19/8, 7/8, 0) [ -1  -7  0  0 ]
                                         [  3   4  1  0 ]
                                         [  2  -1  0  1 ]
                                         [  1   3  0  0 ]
                     = ( 63/8 , 13/8 , 19/8 , 7/8 )

Since all Δj ≥ 0, the solution under test is optimal. So the optimal solution of the modified
LPP gives

    x1 = y1 + 2 = 71/4, x2 = y2 + 1 = 1, x3 = y3 + 3 = 29/2, x4 = y4 + 4 = 4, and
    max z = max z' + 41 = 445/4.
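The lower-bound substitution of Step 1 can be cross-checked numerically, since an LP solver accepts bounds of the form xj ≥ lj directly. A sketch with SciPy's `linprog`:

```python
from scipy.optimize import linprog

# Example 3.8:  max 3x1 + x2 + 2x3 + 7x4  with  x1>=2, x2>=1, x3>=3, x4>=4
res = linprog(c=[-3, -1, -2, -7],                       # linprog minimizes
              A_ub=[[2, 3, -1, 4], [-2, 2, 5, -1], [1, 1, -2, 3]],
              b_ub=[40, 35, 100],
              bounds=[(2, None), (1, None), (3, None), (4, None)],
              method="highs")
print(res.x, -res.fun)   # (71/4, 1, 29/2, 4), max z = 445/4 = 111.25
```

Because all the Δj in Step 14 are strictly positive, the optimum is unique, so the solver must return exactly this point.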
EXERCISES

1) Use the revised simplex method to solve the following L. P. P.
Maximize z = 3x1 + 2x2 + 5x3
Subject to the constraints
x1 + 2x2 + x3 ≤ 430
3x1 + 2x3 ≤ 460
x1 + 4x2 ≤ 420,
and x1, x2, x3 ≥ 0
2) Use the revised simplex method to solve.
Maximize z = x1 + 2x2 + 3x3 + 4x4
Subject to the constraints
3x1 + 2x2 + 3x3 + x4 ≤ 25
2x1 + x2 + 2x3 + x4 ≤ 5
2x1 + x2 + x3 + x4 ≤ 20
x1, x2, x3, x4 ≥ 0
3) Use the revised simplex method to solve the L. P. P.
Max z = 2x1 + x2
Subject to the constraints
3x1 + 4x2 ≤ 6
6x1 + x2 ≤ 3
x1, x2 ≥ 0
4) Use the revised simplex method to solve the following L. P. P.
x1 4
x2 6
3 x1 2 x2 18
x1, x2 0
5) Use the revised simplex method to solve the L. P. P.
Maximize z x1 x2 3 x3
Subject to 3 x1 2 x2 x3 3 ,
2 x1 x2 2 x3 2
x1, x2 , x3 0
6) Use the revised simplex method to solve the L. P. P.
Maximize z = 6x1 - 2x2 + 3x3
Subject to 2x1 - x2 + 2x3 ≤ 2
x1 + 4x3 ≤ 4
x1, x2, x3 ≥ 0
7) Use the revised simplex method to solve the L. P. P.
4 x1 5 x2 10 ,
5 x1 2 x2 10
3 x1 8 x2 12 and
x1, x2 0
8) Use revised simplex method to solve the following L. P. P.
3 x1 2 x2 6
x1 6 x2 3
and x1 0, x2 0
9) Use revised simplex method to solve the following L. P. P.
x1 2 x2 7
4 x1 x2 6
x1, x2 0
10) Use revised simplex method to solve the following L. P. P.
2 x1 5 x2 6
x1 x2 2
x1 0, x2 0
11) Use the two-phase revised simplex method to solve the L. P. P.
x1 x 2 1
2 x1 3 x2 2
x1, x2 0
12) Use the two phase revised simplex method to solve the L. P. P.
2 x1 4 x2 5
2 x1 3 x2 x3 4
x1, x2 , x3 0
13) Solve the following L. P. P. by the revised simplex method.
Maximize z 2 x1 4 x2 6 x3 2 x4
Subject to the conditions
x1 2 x2 3 x3 15
2 x1 x2 5 x3 20
3 x1 6 x2 3 x3 3 x4 30 ,
x1, x2 , x3 0
14) Use the revised simplex method to solve the L. P. P.
Maximize z = x1 + 2x2 subject to
x1 + x2 ≤ 3
x1 + 2x2 ≤ 5,
3x1 + x2 ≤ 6
and x1, x2 ≥ 0
15) Use the revised simplex method to solve,
x2 x1 0,
x1 4
and x1, x2 0
16) Use the revised simplex method to solve the following L. P. P.
3 x1 x2 3 ,
4 x1 3 x2 6 ,
x1 2 x2 2, and x1, x2 0
UNIT INTEGER
04 PROGRAMMING
4.1 INTRODUCTION
There are certain decision problems where decision variables make sense only if they
have integer values in the solution. For example, it does not make sense saying 1.5 men working
on a project or 1.6 machines in a workshop. The integer solution to the problem can, however,
be obtained by rounding off the optimum value of the variables to the nearest integer value. This
approach can be easy in terms of economy of effort in time and cost that might be required to
derive an integer solution but this solution may not satisfy all the given constraints. Secondly, the
value of the objective function so obtained may not be optimal value. All such difficulties can be
avoided if the given problem, where an integer solution is required, is solved by integer
programming techniques.
4.1.1 Types of Integer Programming Problems
There are two types of integer programming problems.
i) Linear integer programming problems.
ii) Non - linear integer programming problems.
In this unit we are going to learn the methods of solving linear integer programming
problems. Linear integer programming problems can be classified into three categories :
i) Pure (all) integer programming problems in which all decision variables are
required to have integer values.
ii) Mixed integer programming problems in which some, but not all, of the decision
variables are required to have integer values.
iii) Zero - one integer programming problems in which all decision variables must
have integer values of 0 or 1.
The pure integer programming problem in its standard form can be stated as follows :

Maximize z = c1 x1 + c2 x2 + c3 x3 + .... + cn xn
Subject to the constraints
a11 x1 + a12 x2 + a13 x3 + .... + a1n xn ≤ b1
a21 x1 + a22 x2 + a23 x3 + .... + a2n xn ≤ b2
............................................................
am1 x1 + am2 x2 + am3 x3 + .... + amn xn ≤ bm
and x1, x2, ..., xn ≥ 0 and integers.
Suppose the basic variable xr has the largest fractional value among all basic variables.
Then the r-th constraint equation (row) from the simplex table can be rewritten as

    xBr (= br) = 1·xr + Σ_{j≠r} a_rj x_j        .......... (i)

where x_j (j = 1, 2, 3, ...) represents all the non-basic variables in the r-th constraint except
the variable xr, and br (= xBr) is the non-integer value of the variable xr. Let us decompose the
coefficients of x_j and xBr into integer and non-negative fractional parts in equation (i):

    [xBr] + fr = (1 + 0) xr + Σ_{j≠r} ( [a_rj] + f_rj ) x_j        .......... (ii)

where [xBr] and [a_rj] denote the largest integers obtained by truncating the fractional parts
from xBr and a_rj. Rearranging (ii) so that the integer parts are collected together gives

    fr = ( xr + Σ_{j≠r} [a_rj] x_j - [xBr] ) + Σ_{j≠r} f_rj x_j        .......... (iii)

where fr is a strictly positive fraction (0 < fr < 1) while 0 ≤ f_rj < 1. For any integer solution,
the bracketed term in (iii) is an integer and Σ f_rj x_j is non-negative, so we may write equation
(iii) in the form of the following inequality:

    fr ≤ Σ_{j≠r} f_rj x_j

i.e.    Σ_{j≠r} f_rj x_j - fr = sg    or    -fr = sg - Σ_{j≠r} f_rj x_j        .......... (iv)

where sg is a non-negative slack variable, called the Gomory slack variable.
Equation (iv) represents Gomory's cutting plane constraint. This constraint creates an
additional row along with a column for the new variable sg:

    -fr = sg - Σ_{j≠r} f_rj x_j,    where 0 ≤ f_rj < 1 and 0 < fr < 1.
If more than one variable has the same largest fraction, then choose the one that has
the smallest contribution to the objective in a maximization LP problem, or the largest cost in
a minimization LP problem.
Step - 4
Obtain the new solution : Add the cutting plane generated in step 3 to the bottom of the
optimal simplex table. Find a new optimal solution by using the dual simplex method, i.e.
choose the variable to enter the new solution having the smallest ratio

    { (cj - zj) / yij : yij < 0 },

and return to step 2.
[Flow chart: Start → solve the LP problem → "Do all basic variables with integer
requirements have integer values?" → Yes: the current solution is the required integer
solution; No: select the basic variable with the largest fractional value, generate the cutting
plane, re-solve, and test again.]

The process is repeated until all basic variables with integer requirements assume
non-negative integer values. The procedure for solving an ILP problem can be explained
through the flow chart given above.
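The decomposition in equations (ii) to (iv) is mechanical, so the cut can be generated by a few lines of code. A sketch (the helper name is ours), using the convention that the fractional part of a is a - floor(a), which is what makes frac(-2/3) = 1/3:

```python
import math

def gomory_cut(row_coeffs, rhs):
    """From the source row  x_r + sum(a_rj * x_j) = b_r  (b_r fractional),
    return the Gomory cut  sum(f_rj * x_j) >= f_r  as (coeffs, f_r)."""
    f_r = rhs - math.floor(rhs)                      # f_r = frac(b_r)
    f = {j: a - math.floor(a) for j, a in row_coeffs.items()}
    return f, f_r

# x1-source row of Example 1 below: x1 + (1/3)s1 - (2/3)s2 = 1/3
coeffs, f_r = gomory_cut({"s1": 1/3, "s2": -2/3}, 1/3)
print(coeffs, f_r)   # both coefficients 1/3, and f_r = 1/3
```

This reproduces the cut (1/3)s1 + (1/3)s2 ≥ 1/3 that is derived by hand in Example 1 of the next section.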
4.3 EXAMPLES
1) Solve the following integer programming problem using Gomory's cutting plane
algorithm.
Maximize z = x1 + x2
Subject to
3x1 + 2x2 ≤ 5
x2 ≤ 2
x1, x2 ≥ 0 and integers.
Answer :
Step 1:
Introducing the slack variables we get,
Maximize z = x1 + x2 + 0s1 + 0s2
Subject to
3x1 + 2x2 + s1 = 5
x2 + s2 = 2
x1, x2, s1, s2 ≥ 0.

Ignoring the integer restriction, the simplex iterations are:

            cj →        1     1     0     0
    Basis  cB   xB     x1    x2    s1    s2    Min ratio
    s1      0    5      3     2     1     0      5/2
    s2      0    2      0     1     0     1      2/1  ←
    z = 0       zj - cj:  -1    -1     0     0

    s1      0    1      3     0     1    -2      1/3  ←
    x2      1    2      0     1     0     1       --
    z = 2       zj - cj:  -1     0     0     1

    x1      1   1/3     1     0    1/3  -2/3
    x2      1    2      0     1     0     1
    z = 7/3     zj - cj:   0     0    1/3   1/3

The optimal solution is x1 = 1/3, x2 = 2 and max z = 7/3.
Step 2:
In the current optimal solution, not all the basic variables are integers, so the solution is
not acceptable. Since both decision variables x1 and x2 are required to take integer values,
a pure integer cut is developed under the assumption that all the variables are integers. We
go to the next step.
Step 3:
Since x1 is the only basic variable whose value is a non-negative fraction, we shall
consider the first row for generating the Gomory cut. Considering the x1-equation as the
source row we write

    x1 + 0·x2 + (1/3)s1 - (2/3)s2 = 1/3        (x1-source row)

Observe that each non-integer coefficient is factored into integer and fractional parts in
such a manner that the fractional part is strictly positive:

    (0 + 1/3) = (1 + 0)x1 + (0 + 1/3)s1 + (-1 + 1/3)s2

Rearranging the equation so that all of the integer coefficients appear on the left-hand side
gives

    (1/3) - (x1 - s2) = (1/3)s1 + (1/3)s2

Since x1 - s2 is an integer for any integer solution, and the right-hand side is non-negative,

    (1/3)s1 + (1/3)s2 ≥ 1/3.

Thus the complete Gomorian constraint can be written as

    (1/3)s1 + (1/3)s2 - g1 = 1/3    or    -(1/3) = g1 - (1/3)s1 - (1/3)s2

where g1 ≥ 0 is the Gomory slack variable.
By adding the Gomory cut at the bottom of the simplex table, the new table so obtained
is given below.

            cj →        1     1     0     0     0
    Basis  cB   xB     x1    x2    s1    s2    g1
    x1      1   1/3     1     0    1/3  -2/3    0
    x2      1    2      0     1     0     1     0
    g1      0  -1/3     0     0   -1/3  -1/3    1  ←
    z = 7/3     zj - cj:   0     0    1/3   1/3    0

Applying the dual simplex method (g1 leaves the basis and s1 enters), we get:

    x1      1    0      1     0     0    -1     1
    x2      1    2      0     1     0     1     0
    s1      0    1      0     0     1     1    -3
    z = 2       zj - cj:   0     0     0     0     1

All the variables now have integer values, so the optimal integer solution is x1 = 0, x2 = 2
with max z = 2.
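For a problem this small the result can also be verified by brute-force enumeration (note that (1, 1) is an alternative integer optimum with the same value):

```python
# Example 1:  max x1 + x2,  3x1 + 2x2 <= 5,  x2 <= 2,  x1, x2 >= 0 integers
points = [(x1, x2) for x1 in range(3) for x2 in range(3)
          if 3*x1 + 2*x2 <= 5 and x2 <= 2]
best = max(points, key=lambda p: p[0] + p[1])
print(best[0] + best[1])   # max z = 2
```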
2) Solve the following integer programming problem using Gomory's cutting plane
algorithm.
Maximize z = 2x1 + 20x2 - 10x3
Subject to 2x1 + 20x2 + 4x3 ≤ 15
6x1 + 20x2 + 4x3 = 20
x1, x2, x3 ≥ 0 and integers.
Also show that it is not possible to obtain a feasible integer solution by using the
method of simplex rounding off.
Answer :
Adding the slack variable s1 in the first constraint and an artificial variable A1 in the second
constraint, the problem is stated in the standard form as :
Maximize z = 2x1 + 20x2 - 10x3 + 0s1 - M A1
subject to
2x1 + 20x2 + 4x3 + s1 = 15
6x1 + 20x2 + 4x3 + A1 = 20
The optimal solution of the problem, ignoring the integer requirement, is obtained by the
simplex method (Big-M technique) in the following tables.

            cj →        2     20    -10     0    -M
    Basis  cB   xB     x1    x2     x3    s1    A1    Min ratio
    s1      0   15      2    20      4     1     0     15/20  ←
    A1     -M   20      6    20      4     0     1     20/20

    x2     20   3/4    1/10   1     1/5   1/20   0     15/2
    A1     -M    5      4     0      0    -1     1      5/4  ←
    z = 15 - 5M    zj - cj:  -4M     0     14    M+1     0

    x2     20   5/8     0     1     1/5   3/40  -1/40
    x1      2   5/4     1     0      0    -1/4   1/4
    z = 15      zj - cj:    0     0     14     1      M
The non-integer optimal solution is x1 = 5/4, x2 = 5/8, x3 = 0 and max z = 15. The
rounded-off solution would be x1 = 1, x2 = 0, x3 = 0 with max z = 2. This solution does not
satisfy the second constraint 6x1 + 20x2 + 4x3 = 20. Hence it is not possible to obtain an
integer optimal solution by simply rounding off the values of the variables.
To obtain the integer-valued solution, we proceed to construct Gomory's constraint
(fractional cut). Since the fractional part of the value of x2 (= 0 + 5/8) is more than the fractional
part of x1 (= 1 + 1/4), the x2-row is selected for constructing the fractional cut as given below.
The x2-source row is

    0·x1 + 1·x2 + (1/5)x3 + (3/40)s1 = 5/8

    (0 + 5/8) = (1 + 0)x2 + (0 + 1/5)x3 + (0 + 3/40)s1

so the required fractional cut is

    (1/5)x3 + (3/40)s1 ≥ 5/8,    i.e.    -(5/8) = g1 - (1/5)x3 - (3/40)s1        (Cut I)

Adding this additional constraint at the bottom of the optimal simplex table, we get:
            cj →        2     20    -10     0     0
    Basis  cB   xB     x1    x2     x3    s1    g1
    x2     20   5/8     0     1     1/5   3/40   0
    x1      2   5/4     1     0      0   -1/4    0
    g1      0  -5/8     0     0    -1/5  -3/40   1  ←
    z = 15      zj - cj:    0     0     14     1     0

Here the dual simplex ratio is

    max { 14/(-1/5) , 1/(-3/40) } = max { -70 , -40/3 } = -40/3,

therefore we must enter the variable s1. Thus s1 is the entering variable whereas g1 is the
outgoing variable (we are applying the dual simplex method):

            cj →        2     20    -10     0      0
    Basis  cB   xB     x1    x2     x3    s1     g1
    x2     20    0      0     1      0     0      1
    x1      2  10/3     1     0     2/3    0   -10/3
    s1      0  25/3     0     0     8/3    1   -40/3
    z = 20/3    zj - cj:    0     0    34/3    0    40/3
The solution is optimal but still non-integer, so one more fractional cut should be added.
Consider the x1-row for constructing the cut:

    (3 + 1/3) = (1 + 0)x1 + (0 + 2/3)x3 + (-4 + 2/3)g1

    -(1/3) = g2 - (2/3)x3 - (2/3)g1        (Cut II)
Adding this constraint to the optimal simplex table, the new table becomes:

            cj →        2     20    -10     0      0     0
    Basis  cB   xB     x1    x2     x3    s1     g1    g2
    x2     20    0      0     1      0     0      1     0
    x1      2  10/3     1     0     2/3    0   -10/3    0
    s1      0  25/3     0     0     8/3    1   -40/3    0
    g2      0  -1/3     0     0    -2/3    0    -2/3    1  ←
    z = 20/3    zj - cj:    0     0    34/3    0    40/3    0

    Ratio:   --    --   (34/3)/(-2/3)   --   (40/3)/(-2/3)   --
                        = -17                = -20

The maximum ratio is -17, so remove g2 from the basis and enter the variable x3 into the
basis by applying the dual simplex method.
            cj →        2     20    -10     0      0     0
    Basis  cB   xB     x1    x2     x3    s1     g1    g2
    x2     20    0      0     1      0     0      1     0
    x1      2    3      1     0      0     0     -4     1
    s1      0    7      0     0      0     1    -16     4
    x3    -10   1/2     0     0      1     0      1   -3/2

The above solution is still non-integer, because the variable x3 does not have an integer
value. Thus a third fractional cut has to be constructed with the help of the x3-row, and the
required Gomory fractional cut is

    -(1/2) = g3 - (1/2)g2        (Cut III)
Adding this cut to the bottom of the above table, we get a new table. Apply the dual
simplex method.

            cj →        2     20    -10     0      0     0     0
    Basis  cB   xB     x1    x2     x3    s1     g1    g2    g3
    x2     20    0      0     1      0     0      1     0     0
    x1      2    3      1     0      0     0     -4     1     0
    s1      0    7      0     0      0     1    -16     4     0
    x3    -10   1/2     0     0      1     0      1   -3/2    0
    g3      0  -1/2     0     0      0     0      0   -1/2    1  ←
    z = 1       zj - cj:    0     0     0     0     2    17     0

    Ratio over the g3-row: only the g2 column is negative, 17/(-1/2) = -34.

Therefore remove the variable g3 and enter the variable g2 into the basis. By applying the
dual simplex method, we get the new optimal solution as shown in the following table.
            cj →        2     20    -10     0      0     0     0
    Basis  cB   xB     x1    x2     x3    s1     g1    g2    g3
    x2     20    0      0     1      0     0      1     0     0
    x1      2    2      1     0      0     0     -4     0     2
    s1      0    3      0     0      0     1    -16     0     8
    x3    -10    2      0     0      1     0      1     0    -3
    g2      0    1      0     0      0     0      0     1    -2
    z = -16     zj - cj:    0     0     0     0     2     0    34

Since all the variables in the above table have assumed integer values and all zj - cj ≥ 0,
the solution is the integer optimal solution: x1 = 2, x2 = 0, x3 = 2 and max z = -16.
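The equality constraint is what makes rounding infeasible here, and it also makes the integer feasible set tiny, so the optimum found by the cuts can be confirmed by direct enumeration:

```python
# Example 2:  max 2x1 + 20x2 - 10x3
#             2x1 + 20x2 + 4x3 <= 15,  6x1 + 20x2 + 4x3 == 20, integers >= 0
feasible = [(x1, x2, x3)
            for x1 in range(8) for x2 in range(2) for x3 in range(6)
            if 2*x1 + 20*x2 + 4*x3 <= 15 and 6*x1 + 20*x2 + 4*x3 == 20]
best = max(feasible, key=lambda p: 2*p[0] + 20*p[1] - 10*p[2])
print(best, 2*best[0] + 20*best[1] - 10*best[2])   # (2, 0, 2)  -16
```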
3) The owner of a readymade garments store sells two types of shirts: Zee shirts
and Button-down shirts. He makes a profit of Rs. 3 and Rs. 12 per shirt on Zee
shirts and Button-down shirts, respectively. He has two tailors, A and B, at his
disposal to stitch the shirts. Tailors A and B can devote at the most 7 hours and
15 hours per day, respectively. Both these shirts are to be stitched by both the
tailors. Tailors A and B spend 2 hours and 5 hours, respectively, in stitching one
Zee shirt, and 4 hours and 3 hours, respectively, in stitching a Button-down shirt.
How many shirts of both types should be stitched in order to maximize daily
profit?
a) Formulate and solve this problem as an LP problem.
b) If the optimal solution is not integer valued, use Gomory technique to derive
the optimal integer solution.
Answer :
Let x1 and x2 be the numbers of Zee shirts and Button-down shirts to be stitched daily,
respectively. Then we have to maximize profit = 3x1 + 12x2 subject to the constraints:
i) Availability of time with tailor A: 2x1 + 4x2 ≤ 7
ii) Availability of time with tailor B: 5x1 + 3x2 ≤ 15
Thus the LP problem is:
Maximize z = 3x1 + 12x2
Subject to,
2x1 + 4x2 ≤ 7
5x1 + 3x2 ≤ 15
x1, x2 ≥ 0.
Adding slack variables s1 and s2, the given LP problem is stated in its standard form:
Maximize z = 3x1 + 12x2 + 0s1 + 0s2
Subject to,
2x1 + 4x2 + s1 = 7
5x1 + 3x2 + s2 = 15
            cj →        3     12     0     0
    Basis  cB   xB     x1    x2    s1    s2    Min ratio
    s1      0    7      2     4     1     0     7/4  ←
    s2      0   15      5     3     0     1    15/3
    z = 0       zj - cj:   -3   -12     0     0

    x2     12   7/4    1/2    1    1/4    0
    s2      0  39/4    7/2    0   -3/4    1
    z = 21      zj - cj:    3     0     3     0    (all Δj ≥ 0)

The optimal (non-integer) solution is x1 = 0, x2 = 7/4, max z = 21.
b)
To construct Gomory's fractional cut we use the x2-row:

    (1/2)x1 + x2 + (1/4)s1 = 7/4

The required fractional cut is

    (1/2)x1 + (1/4)s1 ≥ 3/4,    i.e.    -(3/4) = g1 - (1/2)x1 - (1/4)s1
Adding this additional constraint to the bottom of the optimal simplex table and applying
the dual simplex method, we get the following iterations.

            cj →        3     12     0     0     0
    Basis  cB   xB     x1    x2    s1    s2    g1
    x2     12   7/4    1/2    1    1/4    0     0
    s2      0  39/4    7/2    0   -3/4    1     0
    g1      0  -3/4   -1/2    0   -1/4    0     1  ←
    z = 21      zj - cj:    3     0     3     0     0

    Ratio:  3/(-1/2) = -6  ←    --    3/(-1/4) = -12    --    --

    x2     12    1      0     1     0     0     1
    s2      0   9/2     0     0   -5/2    1     7
    x1      3   3/2     1     0    1/2    0    -2
    z = 33/2    zj - cj:    0     0    3/2    0     6
The optimal solution is still non-integer. Therefore, adding one more fractional cut with
the help of the x1-row (x1 + (1/2)s1 - 2g1 = 3/2, giving the cut -(1/2) = g2 - (1/2)s1), we get
the following table and subsequent iterations by the dual simplex method.

            cj →        3     12     0     0     0     0
    Basis  cB   xB     x1    x2    s1    s2    g1    g2
    x2     12    1      0     1     0     0     1     0
    s2      0   9/2     0     0   -5/2    1     7     0
    x1      3   3/2     1     0    1/2    0    -2     0
    g2      0  -1/2     0     0   -1/2    0     0     1  ←
    z = 33/2    zj - cj:    0     0    3/2    0     6     0

    Ratio:  --   --   (3/2)/(-1/2) = -3    --    --    --

    x2     12    1      0     1     0     0     1     0
    s2      0    7      0     0     0     1     7    -5
    x1      3    1      1     0     0     0    -2     1
    s1      0    1      0     0     1     0     0    -2
    z = 15      zj - cj:    0     0     0     0     6     3
Since all the variables have assumed integer values and all zj - cj ≥ 0, the solution is an
integer optimal solution. Thus the store owner should stitch x1 = 1 Zee shirt and x2 = 1
Button-down shirt to yield the maximum daily profit z = Rs. 15.
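Modern solvers accept the integrality restriction directly, which gives an independent check on the Gomory iterations above. A sketch using `scipy.optimize.milp` (available from SciPy 1.9 onward; its default variable bounds are x ≥ 0):

```python
import numpy as np
from scipy.optimize import LinearConstraint, milp

# Tailor problem:  max 3x1 + 12x2,  2x1 + 4x2 <= 7,  5x1 + 3x2 <= 15, integers
res = milp(c=[-3, -12],                                  # milp minimizes
           constraints=LinearConstraint([[2, 4], [5, 3]], ub=[7, 15]),
           integrality=np.ones(2))                       # both x1, x2 integer
print(res.x, -res.fun)   # [1. 1.]  15.0
```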
4.4 GEOMETRICAL INTERPRETATION OF GOMORY'S CUTTING PLANE
METHOD
Let us consider the problem
Maximize z = x1 + x2
Subject to
2x1 + 5x2 ≤ 16
6x1 + 5x2 ≤ 30
x1, x2 ≥ 0.
The graphical solution of this problem is obtained in the figure, with the solution space
represented by the convex region OABC. The optimal solution occurs at the extreme point B,
i.e. x1 = 3.5, x2 = 1.8, max z = 5.3. But this solution is not integer valued. While solving this
problem by Gomory's method, we first introduce Gomory's constraint

    (3/10)x3 + (9/10)x4 ≥ 4/5.

In order to express this constraint in terms of x1 and x2, we use the constraints
2x1 + 5x2 + x3 = 16 and 6x1 + 5x2 + x4 = 30. Then Gomory's constraint becomes

    (3/10)(16 - 2x1 - 5x2) + (9/10)(30 - 6x1 - 5x2) ≥ 4/5,

i.e.    x1 + x2 ≤ 5 1/6.

This constraint cuts off part of the feasible region, so the feasible region is now reduced
to somewhat less than the previous one, and the procedure continues till an integer-valued
corner is found. Because of these cuttings of the feasible region, the method was named the
cutting plane method.
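A quick numerical check confirms what the geometry shows: the cut x1 + x2 ≤ 5 1/6 removes the fractional vertex B while keeping every feasible integer point:

```python
# Cut derived above: x1 + x2 <= 31/6
def survives_cut(x1, x2):
    return x1 + x2 <= 31/6 + 1e-12

print(survives_cut(3.5, 1.8))        # False: the vertex B is cut off
integer_points = [(a, b) for a in range(6) for b in range(4)
                  if 2*a + 5*b <= 16 and 6*a + 5*b <= 30]
print(all(survives_cut(a, b) for a, b in integer_points))   # True
```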
~ ~ ~ ~ ~ EXERCISE ~ ~ ~ ~ ~
Find the optimum integer solution of the following all integer programming problems.
1) Max z = x1 + x2
Subject to
3x1 - 2x2 ≤ 5
x2 ≤ 2
x1, x2 ≥ 0 and are integers.    (Ans.: x1 = 3, x2 = 2, max. z = 5)
2) Max. z = x1 - 2x2
Subject to
4x1 + 2x2 ≤ 15
x1, x2 ≥ 0 and integers.    (Ans.: x1 = 3, x2 = 0, max. z = 3)
3) Max. z = 3x2
Subject to,
3x1 + 2x2 ≤ 7
-x1 + x2 ≤ 2
x1, x2 ≥ 0 and integers.    (Ans.: x1 = 0, x2 = 2, max z = 6)
4) Max. z = x1 + 5x2
Subject to,
x1 + 10x2 ≤ 20
x1 ≤ 2
x1, x2 ≥ 0 and integers.    (Ans.: x1 = 2, x2 = 1, max z = 7)
5) Max. z = 3x1 + 4x2
Subject to,
3x1 + 2x2 ≤ 8
x1 + 4x2 ≥ 10
x1, x2 ≥ 0 and integers.    (Ans.: x1 = 0, x2 = 4, max z = 16)
6) Max. z = 11x1 + 4x2
Subject to,
-x1 + 2x2 ≤ 4
5x1 + 2x2 ≤ 16
2x1 - x2 ≤ 4
x1, x2 ≥ 0 and integers.    (Ans.: x1 = 2, x2 = 3, max z = 34)
7) Max. z = x1 + x2
Subject to,
x1 + 2x2 ≤ 4
6x1 + 2x2 ≤ 9
x1, x2 ≥ 0 and integers.    (Ans.: x1 = 1, x2 = 0, max z = 2)
8) Max. z = 3x1 + 2x2 + 5x3
Subject to,
5x1 + 2x2 + 7x3 ≤ 28
4x1 + 5x2 + 5x3 ≤ 30
x1, x2, x3 ≥ 0 and integers.    (Ans.: x1 = 0, x2 = 0, x3 = 4, max z = 20)
BRANCH AND BOUND METHOD
The branch and bound method was first developed by A. H. Land and A. G. Doig, and it
was further studied by J. D. C. Little et al. and other researchers. This method can be used to
solve all-integer, mixed-integer and zero-one linear problems. It is the most general technique
for the solution of an integer programming problem (I.P.P.) in which a few or all of the variables
are constrained by their upper or lower bounds.
4.5 STEPS OF BRANCH AND BOUND ALGORITHM
Step : 1
Initialization : Consider the following all integer programming problem.
Maximize z = c1 x1 + c2 x2 + ... + cn xn
Subject to the constraints
a11 x1 + a12 x2 + a13 x3 + ... + a1n xn ≤ b1
a21 x1 + a22 x2 + a23 x3 + ... + a2n xn ≤ b2        (LP - A)
...................................................
am1 x1 + am2 x2 + am3 x3 + ... + amn xn ≤ bm
xj ≥ 0 and integers.
Obtain the optimal solution of the given problem ignoring the integer restriction on the
variables.
If the solution to this LP problem (say LP - A) is infeasible or unbounded, the solution to
the given all-integer programming problem is also infeasible or unbounded, as the case may be.
Otherwise, examine the optimal feasible solution. If it satisfies the integer restrictions, the
optimal integer solution has been obtained. If one or more basic variables do not satisfy the
integer requirement, then go to step 2.
Step 2:
a) Let the optimal value of the objective function of LP - A be z1. This value provides an
initial upper bound on the objective function value for the integer LP problem; let it be
denoted by zU. A lower bound on the integer LP problem can be obtained by
truncating to integers all values of the variables. Let the lower bound be denoted by
zL.
b) Branching step: add the two mutually exclusive constraints

    xk ≤ [xk]    and    xk ≥ [xk] + 1

to the original LP problem. Here [xk] is the integer portion of the current non-integer
value of the variable xk. This is done to exclude the non-integer value of the variable
xk. The two new LP sub-problems are as follows:

    LP sub-problem B              LP sub-problem C
    Max z = Σ cj xj               Max z = Σ cj xj
    subject to                    subject to
    Σ aij xj ≤ bi                 Σ aij xj ≤ bi
    xk ≤ [xk]                     xk ≥ [xk] + 1
    and xj ≥ 0                    and xj ≥ 0
Step 3:
Bound step: Obtain the optimal solutions of sub-problems B and C. Let the optimal value
of the objective function of LP - B be z2 and that of LP - C be z3.
Step 4:
Examine the solutions of both LP - B and LP - C, either of which might contain the optimal point.
1) Exclude a sub-problem from further consideration if it has an infeasible solution.
2) If a sub-problem yields a solution that is feasible but not integer, then for this
sub-problem return to step 2.
3) If a sub-problem yields a feasible integer solution, examine the value of the objective
function. If this value is equal to the upper bound zU, an optimal solution has been
reached. But if it is not equal to the upper bound zU yet exceeds the lower bound zL,
this value is taken as the new lower bound and we return to step 2. Finally, if it is less
than the lower bound, terminate this branch.
Step 5:
The procedure of branching and bounding continues until no further sub-problem remains
to be examined. At this stage, the integer solution corresponding to the current lower bound is
the optimal solution of the all-integer programming problem.
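The five steps can be sketched as a small recursive routine, using SciPy's `linprog` for each relaxation (an illustrative depth-first version written for this text; the function name is ours):

```python
import math
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub, tol=1e-6):
    """Maximize c@x s.t. A_ub@x <= b_ub, x >= 0 and integer, by branching
    on the first fractional variable (a simplified depth-first sketch)."""
    best = {"z": -math.inf, "x": None}

    def solve(extra_A, extra_b):
        res = linprog([-ci for ci in c], A_ub=A_ub + extra_A,
                      b_ub=b_ub + extra_b, method="highs")
        if res.status != 0:                  # infeasible branch: prune
            return
        z = -res.fun
        if z <= best["z"]:                   # bound: cannot beat incumbent
            return
        for j, xj in enumerate(res.x):
            if abs(xj - round(xj)) > tol:    # fractional: branch on x_j
                lo, hi = [0.0] * len(c), [0.0] * len(c)
                lo[j], hi[j] = 1.0, -1.0
                solve(extra_A + [lo], extra_b + [math.floor(xj)])          # x_j <= [x_j]
                solve(extra_A + [hi], extra_b + [-(math.floor(xj) + 1)])   # x_j >= [x_j]+1
                return
        best["z"], best["x"] = z, [round(v) for v in res.x]   # all-integer solution

    solve([], [])
    return best["x"], best["z"]

# Example 1 below:  max 3x1 + 5x2,  2x1 + 4x2 <= 25,  x1 <= 8,  2x2 <= 10
x, z = branch_and_bound([3, 5], [[2, 4], [1, 0], [0, 2]], [25, 8, 10])
print(x, z)   # [8, 2]  34
```

On Example 1 of the next section the routine first branches on the fractional x2 = 2.25, exactly as the hand solution does, and returns x1 = 8, x2 = 2 with max z = 34.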
4.6 Examples
1) Solve the following all integer programming problem using the branch and bound
method.
Maximize z = 3x1 + 5x2
Subject to the constraints
    2x1 + 4x2 ≤ 25
    x1 ≤ 8
    2x2 ≤ 10
    x1, x2 ≥ 0 and integers.
Answer :
Relaxing the integer requirements, the optimal non-integer solution of the given integer
L. P. problem, obtained by the graphical method shown below, is x1 = 8, x2 = 2.25 and
z1 = 35.25.
[Figure : graphical solution of LP - A, showing the feasible region bounded by 2x1 + 4x2 = 25,
x1 = 8 and 2x2 = 10, with optimum at B (8, 2.25).]
The value of z1 represents the initial upper bound, zU = 35.25, on the value of the objective
function, i. e. the value of the objective function in the subsequent steps cannot exceed 35.25.
The lower bound zL is obtained by truncating the solution values to x1 = 8 and x2 = 2.
Thus zL = 3 (8) + 5 (2) = 34.
The variable x2 (= 2.25) is the only non-integer solution value and is therefore selected
for dividing the given problem into two sub-problems LP - B and LP - C. Two new constraints,
x2 ≤ 2 and x2 ≥ 3, are created. These two constraints are added to the given problem to get the
two sub-problems.
LP - B :                                  LP - C :
    Max z = 3x1 + 5x2                         Max z = 3x1 + 5x2
    subject to 2x1 + 4x2 ≤ 25                 subject to 2x1 + 4x2 ≤ 25
               x1 ≤ 8                                    x1 ≤ 8
               2x2 ≤ 10                                  2x2 ≤ 10
               x2 ≤ 2                                    x2 ≥ 3
[Figure : feasible regions of the two sub-problems. B) Feasible region for sub-problem B,
with optimum at (8, 2). C) Feasible region for sub-problem C, with optimum at (6.5, 3).]
The solution to sub-problem B is x1 = 8, x2 = 2, z2 = 34, and the solution to sub-problem C
is x1 = 6.5, x2 = 3, z3 = 34.5. Notice that both solutions yield a value of z lower than that of
the original LP problem; the value of z1 establishes an upper bound on the z2 and z3 values of
the sub-problems.
Since the solution of sub-problem B is all integer, we stop the search of this sub-problem,
i. e. no further branching is required from node B. The value z2 = 34 becomes the new lower
bound on the IP problem's optimal solution. The non-integer solution of sub-problem C,
together with z3 > z2 , indicates that further branching is necessary from node C. However, if
z3 ≤ z2 , then no further branching would have been required from node C. The upper bound now
takes the value zU = z3 = 34.5 instead of 35.25 at node A.
The sub-problem C is now branched into two new sub-problems D and E, obtained by
adding the constraints x1 ≤ 6 and x1 ≥ 7 (for problem C, x1 = 6.5).
LP - D :                                  LP - E :
    Max z = 3x1 + 5x2                         Max z = 3x1 + 5x2
    Subject to, 2x1 + 4x2 ≤ 25                Subject to, 2x1 + 4x2 ≤ 25
                x1 ≤ 8                                    x1 ≤ 8
                2x2 ≤ 10                                  2x2 ≤ 10
                x2 ≥ 3                                    x2 ≥ 3
                x1 ≤ 6                                    x1 ≥ 7
[Figure : feasible regions of sub-problems D and E. Region D lies between x2 = 3 and
2x1 + 4x2 = 25 with x1 ≤ 6; the constraint set of E (x1 ≥ 7 together with x2 ≥ 3) is empty.]
In problem D the solution x2 = 3.25 is not integer. Create new sub-problems F and G from
sub-problem D with the two new constraints x2 ≤ 3 and x2 ≥ 4.
LP - F :                                  LP - G :
    Max z = 3x1 + 5x2                         Max z = 3x1 + 5x2
    Subject to, 2x1 + 4x2 ≤ 25                Subject to, 2x1 + 4x2 ≤ 25
                x1 ≤ 8                                    x1 ≤ 8
                2x2 ≤ 10                                  2x2 ≤ 10
                x2 ≥ 3                                    x2 ≥ 3
                x1 ≤ 6                                    x1 ≤ 6
                x2 ≤ 3                                    x2 ≥ 4
and x1, x2 ≥ 0 and integers.              and x1, x2 ≥ 0 and integers.
The branching process is terminated when the new upper bound is less than or equal to the
lower bound of a previous solution, or when no further branching is possible.
Although the solution at node G is non-integer, no additional branching is required from
this node because z6 = 33.5 < zL = 34. The branch and bound algorithm is terminated and the
optimal integer solution is x1 = 8, x2 = 2 and z = 34, yielded at node B.
The branch and bound tree for the above problem is summarised below.
Node A : x1 = 8, x2 = 2.25, z1 = 35.25 (zU = 35.25, zL = 34)
    Branch x2 ≤ 2 → Node B : x1 = 8, x2 = 2, z2 = 34 (all integer : optimal solution)
    Branch x2 ≥ 3 → Node C : x1 = 6.5, x2 = 3, z3 = 34.5 (zU = 34.5, zL = 34)
        Branch x1 ≤ 6 → Node D : x1 = 6, x2 = 3.25, z4 = 34.25
            Branch x2 ≤ 3 → Node F : x1 = 6, x2 = 3, z5 = 33
            Branch x2 ≥ 4 → Node G : x1 = 4.5, x2 = 4, z6 = 33.5
        Branch x1 ≥ 7 → Node E : Infeasible
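As a cross-check, the optimum found at node B can be confirmed by brute-force enumeration of the (small) integer feasible region; this check is our own addition, not part of the text.

```python
# Enumerate all integer points satisfying 2x1 + 4x2 <= 25, x1 <= 8, 2x2 <= 10
best = max((3*x1 + 5*x2, x1, x2)
           for x1 in range(9) for x2 in range(6)
           if 2*x1 + 4*x2 <= 25)
print(best)  # (34, 8, 2): z = 34 at x1 = 8, x2 = 2
```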
2) Use branch and bound technique and solve the following integer programming
problem.
Max. z = 7x1 + 9x2
Subject to,
    −x1 + 3x2 ≤ 6
    7x1 + x2 ≤ 35
    0 ≤ x1, x2 ≤ 7, x1, x2 integers.
Answer
Relaxing the integer requirement, the optimal non-integer solution obtained by the graphical
method is as follows.
[Figure : graphical solution showing the feasible region bounded by −x1 + 3x2 = 6,
7x1 + x2 = 35, x1 = 7 and x2 = 7, with optimum at (9/2, 7/2).]
    x1 = 9/2, x2 = 7/2
and z1 = 7 (9/2) + 9 (7/2) = 63.
Thus zU = 63 and zL = 7 (4) + 9 (3) = 55.
Both x1 = 9/2 and x2 = 7/2 are non-integer solution values. Choose x1 for dividing the
given problem into two sub-problems LP - B and LP - C. The two new constraints x1 ≤ 4 and
x1 ≥ 5 are added to LP - B and LP - C respectively.
LP - B :                                  LP - C :
    Max. z = 7x1 + 9x2                        Max. z = 7x1 + 9x2
    Subject to, −x1 + 3x2 ≤ 6                 Subject to, −x1 + 3x2 ≤ 6
                7x1 + x2 ≤ 35                             7x1 + x2 ≤ 35
                0 ≤ x1, x2 ≤ 7                            0 ≤ x1, x2 ≤ 7
                x1 ≤ 4                                    x1 ≥ 5
[Figure : feasible regions of the two sub-problems. Region B (x1 ≤ 4) has its optimum at
(4, 10/3); region C (x1 ≥ 5) reduces to the single point (5, 0).]
The solution of sub-problem LP - B is x1 = 4, x2 = 10/3, z2 = 58. The feasible region of
sub-problem LP - C is {(5, 0)}; therefore the solution of sub-problem LP - C is x1 = 5, x2 = 0,
z3 = 35. Since all the variables have integer values, we stop the search for this sub-problem,
i. e. no further branching is required from node C. The value z3 = 35 becomes the new lower
bound on the IP problem's optimal solution. The non-integer solution of sub-problem B, together
with z2 > z3 , indicates that further branching is necessary from node B.
The sub-problem B is now branched into two new sub-problems D and E, obtained by
adding the constraints x2 ≤ 3 and x2 ≥ 4 (for problem B, x2 = 10/3).
LP - D :                                  LP - E :
    Max. z = 7x1 + 9x2                        Max. z = 7x1 + 9x2
    Subject to, −x1 + 3x2 ≤ 6                 Subject to, −x1 + 3x2 ≤ 6
                7x1 + x2 ≤ 35                             7x1 + x2 ≤ 35
                0 ≤ x1, x2 ≤ 7                            0 ≤ x1, x2 ≤ 7
                x1 ≤ 4                                    x1 ≤ 4
                x2 ≤ 3                                    x2 ≥ 4
The graphical solutions to LP - D and LP - E are as follows.
[Figure : feasible regions of sub-problems D and E. Region D (x1 ≤ 4, x2 ≤ 3) has its
optimum at (4, 3); region E (x1 ≤ 4, x2 ≥ 4) is empty because −x1 + 3x2 ≤ 6 forces
x2 ≤ 10/3 when x1 ≤ 4.]
The branch and bound tree is summarised below.
Start : x1 = 9/2, x2 = 7/2, z1 = 63
    Branch x1 ≤ 4 → x1 = 4, x2 = 10/3, z2 = 58
        Branch x2 ≤ 3 → x1 = 4, x2 = 3, z4 = 55 (all integer : optimal solution)
        Branch x2 ≥ 4 → No solution
    Branch x1 ≥ 5 → x1 = 5, x2 = 0, z3 = 35
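Again the answer can be confirmed by brute-force enumeration of the integer feasible region; this check is our own addition, not part of the text.

```python
# Enumerate integer points with -x1 + 3x2 <= 6, 7x1 + x2 <= 35, 0 <= x1, x2 <= 7
best = max((7*x1 + 9*x2, x1, x2)
           for x1 in range(8) for x2 in range(8)
           if -x1 + 3*x2 <= 6 and 7*x1 + x2 <= 35)
print(best)  # (55, 4, 3): z = 55 at x1 = 4, x2 = 3
```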
Remark
If the number of variables is more than 2, then exclude the redundant constraints and
solve the sub-problems by the simplex method, obtaining a solution corresponding to each
sub-problem.
~ ~ ~ ~ ~ EXERCISE ~ ~ ~ ~ ~
Use branch and bound technique and solve the following integer programming problems.
1)  Max. z = 3x1 + 3x2 + 13x3
    Subject to,
        −3x1 + 6x2 + 7x3 ≤ 8
        6x1 − 3x2 + 7x3 ≤ 8
        0 ≤ xj ≤ 5, xj integers.
2)  Max. z = 3x1 + x2
    Subject to,
        3x1 − x2 + x3 = 12
        3x1 + 11x2 + x4 = 66
        xj ≥ 0 and integers, j = 1, 2, 3, 4.
3)  Max. z = x1 + x2
    Subject to,
        4x1 + x2 ≤ 10
        2x1 + 5x2 ≤ 10
        x1, x2 ∈ {0, 1, 2, 3}
4)  Min. z = 3x1 + 2.5x2
    Subject to,
        x1 + 2x2 ≥ 20
        3x1 + 2x2 ≥ 50
        x1, x2 ≥ 0 and integers.
    (Ans. : x1 = 14, x2 = 4, z = 52)
5)  Max. z = 2x1 + 3x2
    Subject to,
        x1 + 3x2 ≤ 9
        3x1 + x2 ≤ 7
        x1 − x2 ≤ 1
        x1, x2 ≥ 0 and integers.
    (Ans. : x1 = 0, x2 = 3, z = 9)
6)  Max. z = 7x1 + 6x2
    Subject to,
        2x1 + 3x2 ≤ 12
        6x1 + 5x2 ≤ 30
        x1, x2 ≥ 0 and integers.
    (Ans. : x1 = 5, x2 = 0, z = 35)
7)  Max. z = 5x1 + 4x2
    Subject to,
        x1 + x2 ≥ 2
        5x1 + 3x2 ≤ 15
        3x1 + 5x2 ≤ 15
        x1, x2 ≥ 0 and integers.
    (Ans. : x1 = 3, x2 = 0, z = 15)
8)  Max. z = 3x1 + x2 + 3x3
    Subject to,
        −x1 + 2x2 + x3 ≤ 4
        2x2 ≤ 15
        x3 ≤ 1
        x1 − 3x2 + 2x3 ≤ 3
        x1, x2, x3 ≥ 0
    (Ans. : x1 = 0, x2 = 8/7, x3 = 1, z = 29/7)
9)  Max. z = x1 + x2
    Subject to,
        2x1 + 5x2 ≤ 16
        6x1 + 5x2 ≤ 30
        x1, x2 ≥ 0
    (Ans. : x1 = 4, x2 = 6/5, z = 26/5)
10) Max. z = 110x1 + 100x2
    Subject to,
        6x1 + 5x2 ≤ 29
        4x1 + 14x2 ≤ 48
        x1, x2 ≥ 0 and integers.
    (Ans. : x1 = 4, x2 = 1, z = 540)
UNIT DYNAMIC
05 PROGRAMMING
Dynamic programming is a quantitative technique for solving problems involving a
sequence of inter-related decisions, i. e. multi-stage decision problems. In this technique a
problem is divided into sub-problems (stages). The computations at the different stages are
linked through recursive computations in such a way that a feasible optimum solution of the
entire problem is obtained when the last stage is reached.
This technique was developed by Richard Bellman. Bellman's principle of optimality
states that : An optimal policy has the property that, whatever the initial state and initial
decisions are, the remaining decisions must constitute an optimal policy with regard to the
state resulting from the first decision.
Mathematically, this can be written as
    fN (x) = max over dn { r (dn) + fN−1 ( T (x, dn) ) }
where
    fN (x) = the optimal return from an N-stage process starting in state x,
    dn = the decision taken at the current stage,
    r (dn) = the immediate return due to the decision dn , and
    T (x, dn) = the transfer function, which gives the resulting state.
A problem which does not satisfy the principle of optimality cannot be solved by the
dynamic programming method.
Characteristics of Dynamic Programming
1) The problem can be divided into stages, with a policy decision required at each
stage.
2) Every stage consists of a number of states associated with it. The states are
different possible conditions in which the system may find itself at that stage of
the problem.
3) The decision at each stage converts the current state into a next state.
4) The state of the system at a stage is described by state variables.
5) Given the current state, an optimal policy for the remaining stages is independent
of the policy adopted in previous stages.
6) A recursive relation (functional equation) is formulated with n stages.
7) Using the recursive equation approach, the solution procedure moves backward
stage by stage, obtaining the optimal policy for each state at that particular stage,
till it attains the optimum policy beginning at the initial stage.
5.1 EXAMPLES
Example : 1
A positive quantity c is to be divided into n parts in such a way that the product of the n
parts is a maximum. Obtain the optimal subdivision.
Solution :
Step : 1
Mathematical formulation and development of the recurrence relation.
Let yi (i = 1, 2, ..., n) be the ith part of c. Each part yi may be regarded as a stage, and
yi may assume any non-negative value such that y1 + y2 + y3 + ... + yn = c. The problem is to
find y1, y2, ..., yn which maximise the product y1 y2 ... yn .
Hence the alternatives at each stage are infinite. It is a problem of a continuous system,
and hence the optimal decision at each stage is obtained by using the method of differential
calculus.
Let fn (c) denote the maximum value of the product when the quantity c is divided into n
parts; fn (c) is a function of the discrete variable n. Clearly
    f1 (c) = c                                                          .......... (1)
For n = 2, let y1 = z, so that y2 = c − z, 0 ≤ z ≤ c. Then
    f2 (c) = Max { y1 y2 } = Max { z (c − z) }
           = Max { z f1 (c − z) },  0 ≤ z ≤ c                           .......... (2)
(since f1 (c − z) = c − z from (1)).
For n = 3, if c is divided into three parts, let y1 = z; then y2 + y3 = c − z.
The part c − z is further divided into two parts y2, y3 whose maximum product is
f2 (c − z), by the definition of fn (c). Hence
    f3 (c) = Max { y1 y2 y3 } = Max { z f2 (c − z) },  0 ≤ z ≤ c        .......... (3)
In general,
    fm (c) = Max { z fm−1 (c − z) },  0 ≤ z ≤ c                         .......... (4)
Step 2
Solve the recursive relation for the optimal policy.
From (1), f1 (c) = c.
From (2),
    f2 (c) = Max { z f1 (c − z) } = Max { z (c − z) },  0 ≤ z ≤ c
We apply the method of differential calculus :
    d/dz [ z (c − z) ] = c − 2z = 0   gives   z = c/2,  c − z = c/2
    d²/dz² [ z (c − z) ] = −2 < 0  at  z = c/2
Hence z (c − z) is maximum at z = c/2, and
    f2 (c) = (c/2) (c/2) = (c/2)²                                       .......... (5)
The optimal policy for two parts is (c/2, c/2).
In other words, the optimal policy for two parts is the division of c into two equal parts.
From (3) and (5),
    f3 (c) = Max { z f2 (c − z) } = Max { z ((c − z)/2)² },  0 ≤ z ≤ c
    d/dz [ z ((c − z)/2)² ] = ((c − z)/2)² + z . 2 ((c − z)/2) (−1/2) = 0
which gives c = 3z, i. e. z = c/3.
The remaining part c − z = 2c/3 is then divided into two equal parts whose product is
maximum, so that
    f3 (c) = (c/3) { (2c/3) / 2 }² = (c/3)³
Hence the optimal policy for three parts is (c/3, c/3, c/3), i. e. c is divided into three
equal parts.
In general, for n parts (stages), the optimal policy is (c/n, c/n, ..., c/n) and
    fn (c) = (c/n)^n
We shall prove this result by induction on n.
The given result is true for n = 1. Assume it is true for n = m, i. e.
    fm (c) = (c/m)^m
From (4),
    fm+1 (c) = Max { z fm (c − z) } = Max { z ((c − z)/m)^m },  0 ≤ z ≤ c
We apply the method of differential calculus :
    d/dz [ z ((c − z)/m)^m ] = ((c − z)/m)^m + z m ((c − z)/m)^(m−1) (−1/m) = 0
which gives (c − z)/m = z, i. e. z = c/(m + 1), and
    d²/dz² [ z ((c − z)/m)^m ] < 0  for  z = c/(m + 1)
Hence
    fm+1 (c) = ( c/(m + 1) ) { ( c − c/(m + 1) ) / m }^m = ( c/(m + 1) )^(m+1)
and the optimal policy in this case is ( c/(m + 1), c/(m + 1), ..., c/(m + 1) ).
Hence, by induction, the required optimal policy is ( c/n, c/n, ..., c/n ) with
fn (c) = (c/n)^n.
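The closed form fn (c) = (c/n)^n can be checked numerically against a grid search over the first part; the grid sizes and tolerances below are our own illustration, not part of the text.

```python
c = 10.0
zs = [c * i / 1000 for i in range(1001)]          # candidate values of the first part

# two parts: max z (c - z) should equal (c/2)^2 = 25, attained at z = c/2
best2 = max(z * (c - z) for z in zs)

# three parts: max a*b*(c - a - b) should approach (c/3)^3 on a fine grid
best3 = max(a * b * (c - a - b) for a in zs for b in zs if a + b <= c)
```

Here best2 equals 25.0 exactly (z = 5 lies on the grid) and best3 agrees with (10/3)^3 ≈ 37.04 to within the grid resolution.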
Example : 2
Use dynamic programming to show that
    Σ pi log pi ,  subject to  Σ pi = 1,  pi ≥ 0,
is minimum when p1 = p2 = ... = pn = 1/n.
Solution :
Step 1
Formation of the functional equation. We consider the problem of minimising
    Σ pi log pi = p1 log p1 + p2 log p2 + ... + pn log pn .
For n = 1, p1 = 1 and
    f1 (1) = p1 log p1 = 1 . log 1 = 0                                  .......... (1)
For n = 2, let p1 = z, so that p2 = 1 − z, 0 ≤ z ≤ 1. Then
    f2 (1) = Min { p1 log p1 + p2 log p2 }
           = Min { z log z + (1 − z) log (1 − z) }                      .......... (2)
For n = 3, let p1 = z; then p2 + p3 = 1 − z. The part (1 − z) is divided into two parts
p2, p3 for which the minimum value of p2 log p2 + p3 log p3 is f2 (1 − z). Hence
    f3 (1) = Min { p1 log p1 + p2 log p2 + p3 log p3 }
           = Min { z log z + f2 (1 − z) },  0 ≤ z ≤ 1                   .......... (3)
By a similar procedure we get the functional equation for n = m :
    fm (1) = Min { z log z + fm−1 (1 − z) },  0 ≤ z ≤ 1                 .......... (4)
Step 2
Solve the functional equation.
    d/dz [ z log z + (1 − z) log (1 − z) ] = log z + 1 − log (1 − z) − 1 = 0
which gives z = 1/2, and
    d²/dz² [ z log z + (1 − z) log (1 − z) ] = 1/z + 1/(1 − z) = 4 > 0  at  z = 1/2
so z = 1/2 gives a minimum. Hence
    f2 (1) = (1/2) log (1/2) + (1/2) log (1/2) = log (1/2)              .......... (5)
Thus the optimal policy for two parts is p1 = p2 = 1/2.
Using (3) and (5) we have
    f3 (1) = Min { z log z + 2 ( (1 − z)/2 ) log ( (1 − z)/2 ) },  0 ≤ z ≤ 1
    d/dz [ z log z + (1 − z) log ( (1 − z)/2 ) ]
        = log z + 1 − log ( (1 − z)/2 ) − 1 = 0
which gives z = (1 − z)/2, i. e. z = 1/3, so that 1 − z = 2/3; the second derivative is
positive at z = 1/3, so this is a minimum.
The part 1 − z = 2/3 is then divided into two parts p2 and p3 such that
p2 log p2 + p3 log p3 is minimum, i. e. p2 = p3 = 1/3. Hence
    f3 (1) = (1/3) log (1/3) + 2 { (1/3) log (1/3) } = 3 { (1/3) log (1/3) }      .......... (6)
and the optimal policy for three parts is p1 = p2 = p3 = 1/3.
In general, for n parts the optimal policy is p1 = p2 = ... = pn = 1/n and
    fn (1) = n { (1/n) log (1/n) }                                      .......... (7)
We shall show that the result (7) also holds for n = m + 1. Assume
    fm (1) = m { (1/m) log (1/m) }
From (4),
    fm+1 (1) = Min { z log z + fm (1 − z) },  0 ≤ z ≤ 1
             = Min { z log z + m ( (1 − z)/m ) log ( (1 − z)/m ) }
             = Min { z log z + (1 − z) log ( (1 − z)/m ) }
Setting the derivative to zero,
    d/dz [ z log z + (1 − z) log ( (1 − z)/m ) ] = log z − log ( (1 − z)/m ) = 0
gives z = (1 − z)/m, i. e. z = 1/(m + 1); the second derivative is positive there, so this is
a minimum. Then 1 − z = m/(m + 1) and
    fm+1 (1) = (1/(m + 1)) log (1/(m + 1)) + m { (1/(m + 1)) log (1/(m + 1)) }
             = (m + 1) { (1/(m + 1)) log (1/(m + 1)) }
and the optimal policy is
    p1 = p2 = ... = pm+1 = 1/(m + 1).
Hence, by induction, for every n the optimal policy is p1 = p2 = ... = pn = 1/n.
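Numerically, Σ pi log pi over the probability simplex is indeed smallest at the uniform distribution; the grid check below for n = 3 is our own addition, not part of the text.

```python
import math

def S(ps):
    # sum of p log p, with the convention 0 log 0 = 0
    return sum(p * math.log(p) for p in ps if p > 0)

grid = [i / 100 for i in range(101)]
best = min(S((a, b, 1 - a - b)) for a in grid for b in grid if a + b <= 1)
# best should be close to 3 * (1/3) * log(1/3) = -log 3
```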
Example : 3
Find Min. z = x1 + x2 + ... + xn subject to x1 x2 ... xn = d, xi > 0, i. e. factorise the
positive quantity d into n parts so that their sum is a minimum.
Solution :
Let fn (d) denote the minimum value of the sum when d is factorised into n parts. For
n = 1,
    f1 (d) = d                                                          .......... (1)
For n = 2, let x1 = y, so that x2 = d/y. Then
    f2 (d) = Min z = Min { y + d/y } = Min { y + f1 (d/y) },  0 < y ≤ d      .......... (2)
For n = 3, d is factorised into three parts x1, x2, x3 . Let x1 = y; then x2 x3 = d/y,
i. e. the part d/y is further divided into two parts whose minimum sum is f2 (d/y). Hence
    f3 (d) = Min z = Min { x1 + x2 + x3 } = Min { y + f2 (d/y) },  0 < y ≤ d      .......... (3)
In general,
    fm (d) = Min { y + fm−1 (d/y) },  0 < y ≤ d                         .......... (4)
From (1), f1 (d) = d.
From (2),
    f2 (d) = Min { y + f1 (d/y) } = Min { y + d/y },  0 < y ≤ d
    d/dy [ y + d/y ] = 1 − d/y² = 0   gives   y = d^(1/2)
    d²/dy² [ y + d/y ] = 2d/y³ > 0  for  y = d^(1/2)
Hence y + d/y is minimum for y = d^(1/2), and
    f2 (d) = d^(1/2) + d / d^(1/2) = 2 d^(1/2)                          .......... (5)
From (3) and (5),
    f3 (d) = Min { y + f2 (d/y) } = Min { y + 2 (d/y)^(1/2) },  0 < y ≤ d
    d/dy [ y + 2 (d/y)^(1/2) ] = 1 − d^(1/2) / y^(3/2) = 0   gives   y^(3/2) = d^(1/2),
i. e. y = d^(1/3), and
    d²/dy² [ y + 2 (d/y)^(1/2) ] > 0  for  y = d^(1/3)
Hence y + 2 (d/y)^(1/2) is minimum for y = d^(1/3), and
    f3 (d) = d^(1/3) + 2 ( d / d^(1/3) )^(1/2) = 3 d^(1/3)
The optimal policy is ( d^(1/3), (d^(2/3))^(1/2), (d^(2/3))^(1/2) ), i. e.
( d^(1/3), d^(1/3), d^(1/3) ).
By a similar procedure we have
    fn (d) = n d^(1/n)
and the optimal policy is ( d^(1/n), d^(1/n), ..., d^(1/n) ).
The above result can be proved by induction.
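The formula fn (d) = n d^(1/n) can be checked for n = 2, 3 by a one-dimensional grid search on the first factor y; the grid and tolerances are our own illustration, not part of the text.

```python
d = 64.0
ys = [0.5 + i * 0.001 for i in range(100000)]     # y from 0.5 up to about 100.5

f2 = min(y + d / y for y in ys)                   # should be 2 * sqrt(64) = 16
f3 = min(y + 2 * (d / y) ** 0.5 for y in ys)      # should be 3 * 64**(1/3) = 12
```

The minimisers y = 8 and y = 4 both lie on the grid, so f2 and f3 agree with 16 and 12 to floating-point accuracy.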
Example : 4
Minimize z = y1² + y2² + y3²
subject to y1 + y2 + y3 = 15, y1, y2, y3 ≥ 0.
Solution :
In this problem y1, y2, y3 are decision variables; this is a three-stage problem.
The state variables s1, s2, s3 are defined as
    s3 = y1 + y2 + y3 = 15
    s2 = y1 + y2 = s3 − y3
    s1 = y1 = s2 − y2
The backward recursion is
    F3 (s3) = min over y3 { y3² + F2 (s2) }
    F2 (s2) = min over y2 { y2² + F1 (s1) }
    F1 (s1) = y1² = (s2 − y2)²
Thus
    F2 (s2) = min over y2 { y2² + (s2 − y2)² }
    d/dy2 [ y2² + (s2 − y2)² ] = 2y2 − 2 (s2 − y2) = 0   gives   y2 = s2 / 2
    d²/dy2² [ y2² + (s2 − y2)² ] = 4 > 0  at  y2 = s2 / 2
Hence F2 (s2) = s2² / 2.
Then
    F3 (s3) = min over y3 { y3² + F2 (s3 − y3) } = min over y3 { y3² + (s3 − y3)² / 2 }
By the method of differential calculus, 2y3 − (s3 − y3) = 0, so F3 is minimum at
y3 = s3 / 3.
Hence F3 (s3) = s3² / 3, and with s3 = 15 the minimum value of z is 225/3 = 75,
attained at y3 = 5, y2 = s2 / 2 = 5, y1 = 5.
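Because the objective is convex, the continuous optimum y1 = y2 = y3 = 5 is also the best integer point, which a quick exhaustive check confirms; this check is our own addition.

```python
# minimise y1^2 + y2^2 + y3^2 with y1 + y2 + y3 = 15, yi >= 0 integers
best = min((a*a + b*b + (15 - a - b)**2, a, b, 15 - a - b)
           for a in range(16) for b in range(16) if a + b <= 15)
print(best)  # (75, 5, 5, 5)
```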
UNIT APPLICATION TO LINEAR
06 PROGRAMMING
Solution of a Linear Programming Problem as a Dynamic Programming Problem
A general L. P. problem is
    Max. z = c1 x1 + c2 x2 + .......... + cn xn
subject to
    a11 x1 + a12 x2 + .......... + a1n xn ≤ b1
    a21 x1 + a22 x2 + .......... + a2n xn ≤ b2
    .....................................................
    am1 x1 + am2 x2 + .......... + amn xn ≤ bm
    xj ≥ 0, j = 1, 2, ..., n.
Let fn (b1, b2, ...., bm) be the maximum value of the objective function of the general
linear programming problem defined above for the n activities x1, x2, ...., xn and the m
resource states b1, b2, ..., bm .
We use the backward computational procedure :
    fn (b1, b2, ..., bm) = Max { cn xn + fn−1 ( b1 − a1n xn , b2 − a2n xn , ..., bm − amn xn ) }
where the maximum is taken over 0 ≤ xn ≤ b, and the maximum value b that xn can assume is
    b = Min { b1/a1n , b2/a2n , ...., bm/amn }.
EXAMPLES
1)  Solve the following L. P. P. by dynamic programming :
    Maximise z = 2x1 + 5x2
    Subject to 2x1 + x2 ≤ 43
               2x2 ≤ 46
               x1, x2 ≥ 0
Solution :
Since there are two resources, the states of the equivalent dynamic programming problem
can be described by two state variables (b1, b2).
For stage j = 2 we have
    f2 (b1, b2) = max { 5x2 } = 5 max { x2 } = 5 min { b1/1 , b2/2 }
                = 5 min { 43, 46/2 }                                    .......... (1)
Next we have
    f1 (b1, b2) = max { 2x1 + 5x2 }
                = max { 2x1 + f2 (43 − 2x1, 46) },  0 ≤ x1 ≤ 43/2
                = max { 2x1 + 5 min ( 43 − 2x1 , 46/2 ) },  0 ≤ x1 ≤ 43/2      .......... (2)
by using (1).
Consider
    min { 43 − 2x1 , 46/2 } = min { 43 − 2x1 , 23 }.
Now 43 − 2x1 ≤ 23 iff 20 ≤ 2x1 , i. e. iff x1 ≥ 10. Thus
    min { 43 − 2x1 , 23 } = 43 − 2x1   if 10 ≤ x1 ≤ 43/2
and
    min { 43 − 2x1 , 23 } = 23        if 0 ≤ x1 ≤ 10.
Hence
    f1 (b1, b2) = max { 2x1 + 5 (43 − 2x1),  10 ≤ x1 ≤ 43/2 ;
                        2x1 + 5 (23),        0 ≤ x1 ≤ 10 }
                = max { 215 − 8x1,  10 ≤ x1 ≤ 43/2 ;
                        2x1 + 115,   0 ≤ x1 ≤ 10 }
Now the maximum of 215 − 8x1 for 10 ≤ x1 ≤ 43/2 is at x1 = 10, and the maximum of
2x1 + 115 for 0 ≤ x1 ≤ 10 is also at x1 = 10. Hence x1 = 10 and the maximum value of z is
    zmax = z = 2x1 + 115 = 2 (10) + 115 = 135.
From z = 2x1 + 5x2 :
    135 = 2 (10) + 5x2
    115 = 5x2
    x2 = 23.
Hence the optimal solution is x1 = 10, x2 = 23, zmax = 135.
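The one-dimensional maximisation in (2) can be checked numerically; the grid below is our own illustration, not part of the text.

```python
def f1(x1):
    # 2 x1 + 5 min(43 - 2 x1, 23), the stage-1 return of example 1
    return 2 * x1 + 5 * min(43 - 2 * x1, 23)

xs = [i / 100 for i in range(2151)]               # x1 in [0, 21.5]
best_x1 = max(xs, key=f1)
# the maximum is at x1 = 10 with value 135
```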
2)  Maximise z = 8x1 + 7x2
    Subject to 2x1 + x2 ≤ 8
               5x1 + 2x2 ≤ 15
               x1, x2 ≥ 0
Solution :
Since there are two resources, the states of the equivalent dynamic programming problem
can be described by two state variables (b1, b2).
For stage j = 2 we have
    f2 (b1, b2) = max { 7x2 } = 7 max { x2 } = 7 min { b1/1 , b2/2 }
                = 7 min { 8, 15/2 }                                     .......... (1)
Next we have
    f1 (b1, b2) = max { 8x1 + 7x2 }
                = max { 8x1 + f2 (8 − 2x1, 15 − 5x1) },  0 ≤ x1 ≤ min (8/2, 15/5) = 3
                = max { 8x1 + 7 min ( 8 − 2x1 , (15 − 5x1)/2 ) },  0 ≤ x1 ≤ 3      .......... (2)
by using (1).
Consider
    min { 8 − 2x1 , (15 − 5x1)/2 }.
Now 8 − 2x1 ≤ (15 − 5x1)/2 iff 16 − 4x1 ≤ 15 − 5x1 , i. e. iff x1 ≤ −1, which is impossible
since x1 ≥ 0. Therefore
    min { 8 − 2x1 , (15 − 5x1)/2 } = (15 − 5x1)/2   for all x1 ≥ 0.
Hence
    f1 (b1, b2) = max { 8x1 + 7 ( (15 − 5x1)/2 ) },  x1 ≥ 0
                = max { −(19/2) x1 + 105/2 },  x1 ≥ 0
Now for x1 = 0, zmax = 105/2. Hence x1 = 0 and zmax = z = 105/2 = 52.5.
From z = 8x1 + 7x2 :
    105/2 = 8 (0) + 7x2
    52.5 = 7x2
    x2 = 52.5 / 7 = 7.5
Hence the optimal solution is x1 = 0, x2 = 7.5, zmax = 52.5.
3)  Maximise z = 4x1 + 14x2
    Subject to 2x1 + 7x2 ≤ 21
               7x1 + 2x2 ≤ 21
               x1, x2 ≥ 0
Solution :
Since there are two resources, the states of the equivalent dynamic programming problem
can be described by two state variables (b1, b2).
For stage j = 2, we have
    f2 (b1, b2) = max { 14x2 } = 14 max { x2 } = 14 min { b1/7 , b2/2 }
                = 14 min { 21/7 , 21/2 }                                .......... (1)
Next we have
    f1 (b1, b2) = max { 4x1 + 14x2 }
                = max { 4x1 + f2 (21 − 2x1, 21 − 7x1) },  0 ≤ x1 ≤ min (21/2, 21/7) = 3
                = max { 4x1 + 14 min ( (21 − 2x1)/7 , (21 − 7x1)/2 ) },  0 ≤ x1 ≤ 3
by using (1).
Consider
    (21 − 2x1)/7 ≤ (21 − 7x1)/2
iff 42 − 4x1 ≤ 147 − 49x1
iff 45x1 ≤ 105
iff x1 ≤ 105/45 = 7/3.
Thus
    min { (21 − 2x1)/7 , (21 − 7x1)/2 } = (21 − 2x1)/7   for 0 ≤ x1 ≤ 7/3
and
    min { (21 − 2x1)/7 , (21 − 7x1)/2 } = (21 − 7x1)/2   for 7/3 ≤ x1 ≤ 3.
Hence
    f1 (b1, b2) = max { 4x1 + 2 (21 − 2x1),   0 ≤ x1 ≤ 7/3 ;
                        4x1 + 7 (21 − 7x1),   7/3 ≤ x1 ≤ 3 }
                = max { 42,                   0 ≤ x1 ≤ 7/3 ;
                        147 − 45x1,           7/3 ≤ x1 ≤ 3 }
Now the maximum of 147 − 45x1 for 7/3 ≤ x1 ≤ 3 is at x1 = 7/3, where it equals 42.
Hence x1 = 7/3 and the maximum value of z is zmax = z = 42.
From z = 4x1 + 14x2 :
    42 = 4 (7/3) + 14x2
    14x2 = 42 − 28/3 = 98/3
    x2 = 98 / (14 × 3) = 7/3
Hence maximum z = 42 at x1 = x2 = 7/3.
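As a numerical check on this example, the stage-1 return can be evaluated on a grid of x1 values; the grid is our own illustration, not part of the text.

```python
def f1(x1):
    # 4 x1 + 14 min((21 - 2 x1)/7, (21 - 7 x1)/2), stage-1 return of example 3
    return 4 * x1 + 14 * min((21 - 2 * x1) / 7, (21 - 7 * x1) / 2)

best = max(f1(i / 300) for i in range(901))       # x1 in [0, 3]
# best equals 42 (up to rounding); it is attained all along 0 <= x1 <= 7/3
```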
Applications to Inventory
Example
Suppose that there are n machines which can perform 2 jobs. If x of them do the first job,
they produce goods worth g (x) = 3x, and if y of the machines perform the second job, they
produce goods worth h (y) = 2.5y. Machines are subject to depreciation, so that after
performing the first job only a (x) = x/3 machines remain available, and after performing the
second job b (y) = (2/3) y machines remain available at the beginning of the next year. The
process is repeated with the remaining machines. Obtain the maximum total return after 3
years and also find the optimal policy in each year.
Solution :
Here the first, second and third years are considered as periods 1, 2 and 3 respectively.
Let
    si = total number of machines in hand (available) at the beginning of the ith period,
    fn (s) = maximum possible return when there are n periods are left, with the initial
             number of available machines being s.
The problem is now worked out by using the backward recursion approach.
Consider the 3rd year.
Here s3 is the number of machines available at the beginning of the 3rd year. Thus
    f1 (s3) = max { 3x3 + 2.5y3 }
    subject to x3 + y3 = s3 , x3, y3 ≥ 0.
This is a linear programme whose feasible segment joins A (s3, 0) and B (0, s3); since the
coefficient of x3 is the larger, the maximum occurs at A (s3, 0), i. e.
    x3 = s3 , y3 = 0 and f1 (s3) = 3 s3 .
Consider the situation in the second year. The number of machines available at the
beginning of this year is s2 and we have
    f2 (s2) = max { 3x2 + 2.5y2 + f1 ( x2/3 + (2/3) y2 ) },  x2 + y2 = s2 ,
since x2 and y2 machines are used for the two jobs and x2/3 and (2/3) y2 machines will
remain available at the beginning of the next year. Hence
    f2 (s2) = max { 3x2 + 2.5y2 + 3 ( x2/3 + (2/3) y2 ) }
            = max { 4x2 + 4.5y2 }
    subject to x2 + y2 = s2 , x2, y2 ≥ 0.                               .......... (3)
It is clear from the graph of the L. P. P. given by equation (3) that the maximum occurs
at B (0, s2), i. e.
    x2 = 0, y2 = s2 and f2 (s2) = 4.5 s2 .
Now in the first year the total number of machines available at the beginning of the
period is s1 = n and we have
    f3 (s1) = max { 3x1 + 2.5y1 + f2 ( x1/3 + (2/3) y1 ) },  x1 + y1 = s1
            = max { 3x1 + 2.5y1 + 4.5 ( x1/3 + (2/3) y1 ) }
            = max { 3x1 + 2.5y1 + 1.5x1 + 3y1 }
            = max { 4.5x1 + 5.5y1 }
    subject to x1 + y1 = s1 , x1, y1 ≥ 0.
The maximum occurs at B (0, s1), i. e.
    x1 = 0, y1 = s1 = n
    f3 (s1) = 5.5 s1 = 5.5 n                                            .......... (6)
Thus
    Period 1 :  x1 = 0,        y1 = n
    Period 2 :  x2 = 0,        y2 = (2/3) n
    Period 3 :  x3 = (4/9) n,  y3 = 0                                   .......... (7)
Equation (7) gives the entire solution : during the first period all the n machines are used
for the second job. Then (2/3) n machines will be left for the second year. All these (2/3) n
machines are again used for the second job, so that (2/3) ( (2/3) n ) = (4/9) n machines will
be available for the third year. In the third year all these (4/9) n machines are used for the
first job. If this is done, the optimum possible return is 5.5 n.
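The backward recursion of this example can be simulated directly; the sketch below searches over the fraction of machines sent to job 1 in each year on a coarse grid, and is our own illustration, not part of the text.

```python
def best_return(s, years=3):
    # s machines available; try sending a fraction t of them to job 1
    # (worth 3 per machine, 1/3 survive) and the rest to job 2
    # (worth 2.5 per machine, 2/3 survive)
    if years == 0:
        return 0.0
    best = 0.0
    for k in range(51):                            # t = 0, 0.02, ..., 1
        t = k / 50
        x, y = t * s, (1 - t) * s
        best = max(best, 3*x + 2.5*y + best_return(x/3 + 2*y/3, years - 1))
    return best
```

For n = 9 machines this gives best_return(9) = 49.5 = 5.5 × 9, matching (6).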
Example
A man is engaged in buying and selling identical items. He operates from a warehouse
that can hold 500 items. Each month he can sell any quantity that he chooses, up to the stock
at the beginning of the month. Each month he can buy as much as he wishes for delivery at the
end of the month, so long as his stock does not exceed 500 items. For the next four months he
has the following error-free forecasts of costs and sale prices :
    Month i        1    2    3    4
    Cost ci       27   24   26   28
    Sale price pi 28   25   25   27
If he currently has a stock of 200 units, what quantities should he sell and buy in the next
four months ? Find the solution using dynamic programming.
Solution :
To solve the problem by dynamic programming we consider the months 1, 2, 3, 4 as
periods 1, 2, 3, 4 respectively.
Let
    xn , yn = the quantities sold and bought in the month with initial stock bn ,
    fn (bn) = the maximum possible return when there are n months left, with the initial
              stock level bn at the beginning of the month.
It is clear that
    b2 = b1 − x1 + y1
    b3 = b2 − x2 + y2
    b4 = b3 − x3 + y3
and in general
    bn+1 = bn − xn + yn
with the restrictions
    bn − xn + yn ≤ 500,
    0 ≤ yn ≤ 500 − bn + xn ,
and 0 ≤ xn ≤ bn .
The recursion equations are
    f1 (b4) = max { p4 x4 − c4 y4 }
    f2 (b3) = max { p3 x3 − c3 y3 + f1 (b4) }
    f3 (b2) = max { p2 x2 − c2 y2 + f2 (b3) }
    f4 (b1) = max { p1 x1 − c1 y1 + f3 (b2) }
Step - I
Let b4 be the stock level at the start of the fourth month. Then
    f1 (b4) = max { p4 x4 − c4 y4 } = max { 27x4 − 28y4 },
where 0 ≤ x4 ≤ b4 , 0 ≤ y4 ≤ 500 − b4 + x4 .
The maximum occurs at x4 = b4 and y4 = 0, so
    f1 (b4) = 27 b4                                                     .......... (1)
Step - II
In the third month, i. e. when 2 months are left, let b3 be the stock at the beginning of
the month. Since the stock b4 = b3 − x3 + y3 will be available at the beginning of the next
month,
    f2 (b3) = max { 25x3 − 26y3 + 27 b4 },  0 ≤ x3 ≤ b3 , 0 ≤ y3 ≤ 500 − b3 + x3
            = max { 25x3 − 26y3 + 27 (b3 − x3 + y3) }
            = max { −2x3 + y3 + 27 b3 }
It will be maximum when x3 = 0 and y3 = 500 − b3 + x3 = 500 − b3 . Hence
    f2 (b3) = (500 − b3) + 27 b3 = 500 + 26 b3                          .......... (2)
with x3 = 0 and y3 = 500 − b3 .
Step - III
In the second month let b2 be the initial stock at the beginning of the month. Then
    f3 (b2) = max { p2 x2 − c2 y2 + f2 (b3) }
            = max { 25x2 − 24y2 + 26 b3 + 500 }
            = max { 25x2 − 24y2 + 26 (b2 − x2 + y2) + 500 }
            = max { −x2 + 2y2 + 26 b2 + 500 }
Substituting the maximising value y2 = 500 − b2 + x2 ,
            = max { −x2 + 2 (500 − b2 + x2) + 26 b2 + 500 }
            = max { x2 + 24 b2 + 1500 }
It will be maximum at x2 = b2 . Hence
    f3 (b2) = b2 + 24 b2 + 1500 = 25 b2 + 1500                          .......... (3)
with x2 = b2 and y2 = 500 − b2 + x2 = 500.
Step - IV
In the first month let b1 be the initial stock at the beginning of the month. Then
    f4 (b1) = max { p1 x1 − c1 y1 + f3 (b2) }
            = max { 28x1 − 27y1 + 25 b2 + 1500 }
            = max { 28x1 − 27y1 + 25 (b1 − x1 + y1) + 1500 }
            = max { 3x1 − 2y1 + 25 b1 + 1500 }
Clearly the maximum occurs at y1 = 0 and x1 = b1 . Hence
    f4 (b1) = 3 b1 − 0 + 25 b1 + 1500 = 28 b1 + 1500                    .......... (4)
with x1 = b1 and y1 = 0.
Tracing the optimal decisions forward with b1 = 200 :
    x1 = b1 = 200,  y1 = 0,                 f4 (b1) = 28 (200) + 1500 = 7100
    b2 = b1 − x1 + y1 = 0 :    x2 = b2 = 0,  y2 = 500,   f3 (b2) = 1500
    b3 = b2 − x2 + y2 = 500 :  x3 = 0,  y3 = 500 − b3 = 0,   f2 (b3) = 500 + 26 b3 = 13500
    b4 = 500 :                 x4 = b4 = 500,  y4 = 0,   f1 (b4) = 13500
Thus the optimal policy is : sell 200 and buy nothing in month 1; buy 500 in month 2;
do nothing in month 3; sell 500 in month 4. The purchases yi are 0, 500, 0, 0 and the
maximum total return is f4 (200) = 7100.
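The derived policy can be confirmed by an exhaustive dynamic-programming search over stock levels counted in hundreds of items; this check is our own addition, not part of the text.

```python
from functools import lru_cache

# (sale price p_i, cost c_i) for months 1..4; quantities counted in hundreds
DATA = [(28, 27), (25, 24), (25, 26), (27, 28)]
CAP = 5                                            # warehouse capacity: 500 items

@lru_cache(maxsize=None)
def best(month, stock):
    if month == 4:
        return 0
    p, c = DATA[month]
    return max(p * sell - c * buy + best(month + 1, stock - sell + buy)
               for sell in range(stock + 1)                 # sell up to opening stock
               for buy in range(CAP - stock + sell + 1))    # closing stock <= CAP

print(100 * best(0, 2))  # 7100, matching f4(200) = 28(200) + 1500
```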
UNIT NON - LINEAR
07 PROGRAMMING
7.1 INTRODUCTION
The general non-linear programming problem (NLPP) can be stated as follows.
Optimize (Maximize or minimize)
    z = f (x1, x2, ..., xn)
Subject to
    gi (x1, x2, ..., xn) { ≤, =, ≥ } bi ,  i = 1, 2, ..., m
and xj ≥ 0, j = 1, 2, ..., n,
where f (x1, x2, ..., xn) and gi (x1, x2, ..., xn) are real-valued functions of the n decision
variables x1, x2, ..., xn and at least one of them is non-linear.
x0 = (x1, x2, ..., xn) is a maximum point if f (x0 + h) ≤ f (x0) for all h = (h1, h2, ..., hn)
such that |hj| is sufficiently small for all j.
Similarly, x0 is a minimum point if f (x0 + h) ≥ f (x0) for all h such that |hj| is
sufficiently small for all j.
Quadratic forms
Let x = (x1, x2, ..., xn) and A = (aij) be an n × n matrix. Then a function of n variables,
denoted by f (x1, x2, ..., xn) or Q (x), is called a quadratic form in n-space if
    Q (x) = xᵀ A x = Σi Σj aij xi xj .
The matrix A can always be assumed symmetric, since each element of every pair of
coefficients aij and aji (i ≠ j) can be replaced by (aij + aji)/2 without changing the value
of Q (x).
The quadratic form Q (x) is said to be :
1)  Positive definite if Q (x) > 0 for every x ≠ 0.
2)  Positive semidefinite if Q (x) ≥ 0 for every x and there exists x ≠ 0 such that
    Q (x) = 0.
3)  Negative definite if −Q (x) is positive definite.
4)  Negative semidefinite if −Q (x) is positive semidefinite.
The definiteness can be checked from the principal minor determinants of A :
1)  Q (x) is positive definite (semidefinite) if the values of the principal minor
    determinants of A are positive (non-negative). In this case A is said to be positive
    definite (semidefinite).
2)  Q (x) is negative definite if the value of the kth principal minor determinant of A
    has the sign of (−1)^k, k = 1, 2, ..., n. In this case A is called negative definite.
3)  Q (x) is negative semidefinite if the kth principal minor determinant of A is either
    zero or has the sign of (−1)^k, k = 1, 2, ..., n.
Theorem (necessary condition)
A necessary condition for X0 = (x1, x2, ..., xn) to be an extreme point of f (X) is that
∇f (X0) = 0.
Note : The above condition is also satisfied at inflection and saddle points. Hence this
condition is necessary but not sufficient for identifying extreme points, and the points
obtained from the solution of ∇f (X0) = 0 are called stationary points.
The following theorem gives the sufficiency conditions for X0 to be an extreme point.
THEOREM (sufficient condition)
A sufficient condition for a stationary point X0 to be an extreme point is that the Hessian
matrix H evaluated at X0 is positive definite (then X0 is a minimum point) or negative
definite (then X0 is a maximum point).
The Hessian matrix of f (X) is the n × n matrix of second-order partial derivatives
    H = ( ∂²f / ∂xi ∂xj ),  i, j = 1, 2, ..., n.
Example - 1
Find the extreme point of the function
    f (X) = x1² + x2² + x3² − 4x1 − 8x2 − 12x3 + 64
Solution :
Let X0 be an extreme point of f (X). The necessary condition for an extreme point is
∇f (X0) = 0, i. e.
    ∂f/∂x1 = 2x1 − 4 = 0    gives  x1 = 2
    ∂f/∂x2 = 2x2 − 8 = 0    gives  x2 = 4
    ∂f/∂x3 = 2x3 − 12 = 0   gives  x3 = 6
Hence X0 = (2, 4, 6) is a stationary point.
The Hessian matrix is
        | 2  0  0 |
    H = | 0  2  0 |
        | 0  0  2 |
The three principal minors are
    |2| = 2 > 0,   det [2 0; 0 2] = 4 > 0,   det H = 8 > 0,
so H is positive definite. Hence the point X0 = (2, 4, 6) is a minimum point of f (X) and
    fmin = f (2, 4, 6) = 2² + 4² + 6² − 4 (2) − 8 (4) − 12 (6) + 64 = 8.
Example - 2
Find the extreme points of the function
    f (X) = x1 + 2x3 + x2 x3 − x1² − x2² − x3²
Solution :
    ∂f/∂x1 = 1 − 2x1 = 0
    ∂f/∂x2 = x3 − 2x2 = 0
    ∂f/∂x3 = 2 + x2 − 2x3 = 0
Solving, we get X0 = (1/2, 2/3, 4/3) as the only stationary point.
The Hessian matrix is
        | −2   0   0 |
    H = |  0  −2   1 |
        |  0   1  −2 |
The principal minors are
    |−2| = −2,   det [−2 0; 0 −2] = 4,   det H = −6,
whose signs are (−1)¹, (−1)², (−1)³ respectively, so H is negative definite.
Hence the point X0 = (1/2, 2/3, 4/3) is a maximum point of f (X) and
    fmax = f (1/2, 2/3, 4/3) = 19/12.
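The stationary point and the maximum value of Example 2 can be verified numerically with central differences; the step size and tolerances are our own choices.

```python
def f(x1, x2, x3):
    return x1 + 2*x3 + x2*x3 - x1**2 - x2**2 - x3**2

x0 = (0.5, 2/3, 4/3)
h = 1e-6
for i in range(3):
    step = [0.0, 0.0, 0.0]
    step[i] = h
    plus  = f(*(a + s for a, s in zip(x0, step)))
    minus = f(*(a - s for a, s in zip(x0, step)))
    grad = (plus - minus) / (2 * h)        # central difference: ~0 at X0
    assert abs(grad) < 1e-6
assert abs(f(*x0) - 19/12) < 1e-12         # fmax = 19/12
```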
7.3 LAGRANGE'S METHOD OF UNDETERMINED MULTIPLIERS
This is a systematic way of generating the necessary conditions for a stationary point
when the constraints are equations.
Example - 1
Minimize z = f (x1, x2) = 3 e^(2x1 + 1) + 2 e^(x2 + 5)
subject to x1 + x2 = 7 and x1, x2 ≥ 0.
Solution :
In this problem the Lagrangian function is
    L (x1, x2, λ) = f (x1, x2) − λ (x1 + x2 − 7)
                  = 3 e^(2x1 + 1) + 2 e^(x2 + 5) − λ (x1 + x2 − 7),
where λ is a Lagrangian multiplier. The necessary conditions for the minimum of f (x1, x2)
are given by
    ∂L/∂x1 = 6 e^(2x1 + 1) − λ = 0    i. e.  λ = 6 e^(2x1 + 1)
    ∂L/∂x2 = 2 e^(x2 + 5) − λ = 0     i. e.  λ = 2 e^(x2 + 5)
    ∂L/∂λ = − (x1 + x2 − 7) = 0       i. e.  x1 + x2 = 7
Hence
    6 e^(2x1 + 1) = 2 e^(x2 + 5) = 2 e^(7 − x1 + 5)       (since x2 = 7 − x1)
    3 e^(2x1 + 1) = e^(12 − x1)
    e^(log 3) . e^(2x1 + 1) = e^(12 − x1)
so log 3 + 2x1 + 1 = 12 − x1, giving
    x1 = (1/3) (11 − log 3),   x2 = 7 − x1.
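The stationarity condition of Example 1 can be verified numerically: both expressions for the multiplier must agree at x1 = (11 − log 3)/3, and along the constraint the objective is convex, so this point is the minimum. This check is our own addition.

```python
import math

x1 = (11 - math.log(3)) / 3
x2 = 7 - x1

# both expressions for the multiplier lambda must agree at the stationary point
lam1 = 6 * math.exp(2 * x1 + 1)
lam2 = 2 * math.exp(x2 + 5)
assert abs(lam1 - lam2) < 1e-9 * lam1

# along the constraint x1 + x2 = 7 the objective is convex, so this is the minimum
def F(a):
    return 3 * math.exp(2*a + 1) + 2 * math.exp((7 - a) + 5)
assert F(x1) < F(x1 + 0.1) and F(x1) < F(x1 - 0.1)
```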
Example - 2
Maximize f (x1, x2, ..., xn) = x1 x2 x3 ... xn
subject to x1 + x2 + x3 + ... + xn = b, xj ≥ 0.
Solution :
In this problem the Lagrangian function is
    L (x1, x2, ..., xn, λ) = f (x1, x2, ..., xn) − λ (x1 + x2 + x3 + ... + xn − b)
                           = x1 x2 x3 ... xn − λ (x1 + x2 + x3 + ... + xn − b)
The necessary conditions for the maximum are
    ∂L/∂x1 = x2 x3 ... xn − λ = 0,   i. e.  f / x1 − λ = 0              .......... (1)
    ∂L/∂x2 = x1 x3 ... xn − λ = 0,   i. e.  f / x2 − λ = 0              .......... (2)
    ..............................
    ∂L/∂xn = x1 x2 ... xn−1 − λ = 0, i. e.  f / xn − λ = 0              .......... (3)
    ∂L/∂λ = − (x1 + x2 + ... + xn − b) = 0, i. e.  x1 + x2 + ... + xn = b
Multiplying the jth of the conditions (1) to (3) by xj gives f = λ xj for each j; summing
over j,
    n f = λ (x1 + x2 + ... + xn) = λ b,   so   λ = n f / b.
From the equation f / x1 − λ = 0,
    f / x1 = n f / b   gives   x1 = b / n.
Similarly x2 = b/n, ..., xn = b/n. Hence
    x1 = x2 = x3 = ... = xn = b / n
and f is maximum at x1 = x2 = ... = xn = b/n with
    fmax = (b/n)^n.
Note
In this problem the minimum value of f is zero. This value is achieved by taking any one
of x1, x2, ..., xn as zero.
Obtain the set of necessary conditions for the nonlinear programming problem:
Minimize z = x1² + 3x2² + 5x3²
subject to x1 + x2 + 3x3 = 2,  5x1 + 2x2 + x3 = 5  and  x1, x2, x3 ≥ 0.
Solution :
In this problem the Lagrangian function is
L(x1, x2, x3, λ1, λ2) = f − λ1(x1 + x2 + 3x3 − 2) − λ2(5x1 + 2x2 + x3 − 5)
= (x1² + 3x2² + 5x3²) − λ1(x1 + x2 + 3x3 − 2) − λ2(5x1 + 2x2 + x3 − 5)
The necessary conditions are
∂L/∂x1 = 2x1 − λ1 − 5λ2 = 0
∂L/∂x2 = 6x2 − λ1 − 2λ2 = 0
∂L/∂x3 = 10x3 − 3λ1 − λ2 = 0
∂L/∂λ1 = −(x1 + x2 + 3x3 − 2) = 0
∂L/∂λ2 = −(5x1 + 2x2 + x3 − 5) = 0
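Because the objective is quadratic and the constraints are linear, these necessary conditions form a linear system in (x1, x2, x3, λ1, λ2) that can be solved directly (an illustrative sketch with numpy, not part of the original text):

```python
import numpy as np

# Stationarity rows + constraint rows for
#   min z = x1^2 + 3*x2^2 + 5*x3^2
#   s.t. x1 + x2 + 3*x3 = 2,  5*x1 + 2*x2 + x3 = 5
# Unknowns ordered (x1, x2, x3, l1, l2).
A = np.array([[2.0, 0.0,  0.0, -1.0, -5.0],
              [0.0, 6.0,  0.0, -1.0, -2.0],
              [0.0, 0.0, 10.0, -3.0, -1.0],
              [1.0, 1.0,  3.0,  0.0,  0.0],
              [5.0, 2.0,  1.0,  0.0,  0.0]])
rhs = np.array([0.0, 0.0, 0.0, 2.0, 5.0])

x1, x2, x3, l1, l2 = np.linalg.solve(A, rhs)
print(x1, x2, x3)   # the unique stationary point of the Lagrangian
```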
7.4 NECESSARY AND SUFFICIENT CONDITIONS FOR OPTIMIZATION OF AN
OBJECTIVE FUNCTION
The general NLPP having n variables and equality type constraints (m < n) can be given as follows:
Optimize z = f(x),  x = (x1, x2, x3, ..., xn)
subject to gi(x) = bi,  i = 1, 2, 3, ..., m
x ≥ 0.
Write hi(x) = gi(x) − bi.
To find the necessary conditions for a maximum or minimum of f(x), a new function, the Lagrangian function L(X, λ), is formed by introducing m Lagrangian multipliers λ = (λ1, λ2, ..., λm). This function is defined as
L(X, λ) = f(X) − Σ_{i=1}^{m} λi hi(X)
        = f(x1, x2, ..., xn) − λ1 h1(X) − λ2 h2(X) − ... − λm hm(X)
Assuming that L, f and the hi are all partially differentiable w.r.t. x1, x2, ..., xn and λ1, λ2, ..., λm, the necessary conditions for the objective function to be maximum or minimum are given by
∂L/∂xj = ∂f/∂xj − Σ_{i=1}^{m} λi ∂hi(X)/∂xj = 0,  j = 1, 2, ..., n
and  ∂L/∂λi = −hi(X) = 0  ⟹  hi(X) = 0,  i = 1, 2, ..., m
i.e.
∂f/∂xj − Σ_{i=1}^{m} λi ∂hi(X)/∂xj = 0,  j = 1, 2, ..., n
hi(X) = 0,  i = 1, 2, 3, ..., m.
These are m + n necessary conditions.
These necessary conditions also become sufficient for a maximum (minimum) of the objective function if the objective function is concave (convex) and the constraints are equality ones.
The sufficient conditions for the Lagrangian method will be stated without proof.
Define the bordered Hessian matrix

H_B = [ O    P ]
      [ P^T  Q ]   of order (m + n) × (m + n),

where O is the m × m null matrix,

P = [ ∂h1/∂x1   ∂h1/∂x2   ...   ∂h1/∂xn ]
    [ ∂h2/∂x1   ∂h2/∂x2   ...   ∂h2/∂xn ]
    [   ...       ...     ...     ...    ]
    [ ∂hm/∂x1   ∂hm/∂x2   ...   ∂hm/∂xn ]   (m × n)

and

Q = [ ∂²L/∂x1²     ∂²L/∂x1∂x2   ...   ∂²L/∂x1∂xn ]
    [ ∂²L/∂x2∂x1   ∂²L/∂x2²     ...   ∂²L/∂x2∂xn ]
    [    ...          ...       ...      ...      ]
    [ ∂²L/∂xn∂x1   ∂²L/∂xn∂x2   ...   ∂²L/∂xn²   ]   (n × n)

P^T is the transpose of P.
If (X0, λ0) is a stationary point of L(x, λ) and H_B is the corresponding bordered Hessian matrix evaluated at (X0, λ0), then X0 is
1) A maximum point if, starting with the principal minor determinant of order 2m + 1, the last (n − m) principal minor determinants of H_B form an alternating sign pattern starting with (−1)^(m+1).
2) A minimum point if, starting with the principal minor determinant of order 2m + 1, the last (n − m) principal minor determinants of H_B all have the sign of (−1)^m.
The above conditions are sufficient for identifying an extreme point, but the conditions are not necessary. In other words, a stationary point may be an extreme point without satisfying the above conditions.
Example
Optimize z = f(x) = x1² + x2² + x3²
subject to x1 + x2 + 3x3 = 2
5x1 + 2x2 + x3 = 5.
Solution :
The Lagrangian function is
L(x, λ) = x1² + x2² + x3² − λ1(x1 + x2 + 3x3 − 2) − λ2(5x1 + 2x2 + x3 − 5)
∂L/∂x1 = 2x1 − λ1 − 5λ2 = 0 .......... (1)
∂L/∂x2 = 2x2 − λ1 − 2λ2 = 0 .......... (2)
∂L/∂x3 = 2x3 − 3λ1 − λ2 = 0 .......... (3)
∂L/∂λ1 = −(x1 + x2 + 3x3 − 2) = 0 .......... (4)
∂L/∂λ2 = −(5x1 + 2x2 + x3 − 5) = 0 .......... (5)
Subtracting equation (2) from equation (1),
2x1 − 2x2 − 3λ2 = 0 .......... (6)
Multiplying equation (2) by 3 and subtracting equation (3), we have
6x2 − 2x3 − 5λ2 = 0 .......... (7)
Eliminating λ2 from (6) and (7) (multiply (6) by 5, (7) by 3, and subtract):
10x1 − 10x2 − 18x2 + 6x3 = 0, i.e. 10x1 − 28x2 + 6x3 = 0.
The above equation can be written as
5x1 − 14x2 + 3x3 = 0 .......... (8)
Also x1 + x2 + 3x3 = 2 .......... (4)
5x1 + 2x2 + x3 = 5 .......... (5)
Solving (4), (5) and (8) we get
x1 = 37/46 ≈ 0.804,  x2 = 8/23 ≈ 0.348,  x3 = 13/46 ≈ 0.283.
The bordered Hessian matrix is
H_B = [ O    P ]
      [ P^T  Q ]
Here, from
∂L/∂x1 = 2x1 − λ1 − 5λ2,   h1(x) = x1 + x2 + 3x3 − 2
∂L/∂x2 = 2x2 − λ1 − 2λ2,   h2(x) = 5x1 + 2x2 + x3 − 5
∂L/∂x3 = 2x3 − 3λ1 − λ2,
we get

P = [ ∂h1/∂x1  ∂h1/∂x2  ∂h1/∂x3 ] = [ 1  1  3 ]
    [ ∂h2/∂x1  ∂h2/∂x2  ∂h2/∂x3 ]   [ 5  2  1 ]

and

Q = [ ∂²L/∂x1²    ∂²L/∂x1∂x2  ∂²L/∂x1∂x3 ]   [ 2  0  0 ]
    [ ∂²L/∂x2∂x1  ∂²L/∂x2²    ∂²L/∂x2∂x3 ] = [ 0  2  0 ]
    [ ∂²L/∂x3∂x1  ∂²L/∂x3∂x2  ∂²L/∂x3²   ]   [ 0  0  2 ]

Therefore

       [ 0  0  1  1  3 ]
       [ 0  0  5  2  1 ]
H_B =  [ 1  5  2  0  0 ]
       [ 1  2  0  2  0 ]
       [ 3  1  0  0  2 ]

Here n = 3, m = 2, so n − m = 1 and we have to check only the single principal minor determinant of order 2m + 1 = 5, i.e. |H_B| itself.
Expanding the determinant,
|H_B| = 460 > 0.
Here (−1)^m = (−1)² = 1, a positive sign.
∴ X0 = (x1, x2, x3) is a minimum point.
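The bordered-Hessian determinant above can be confirmed numerically (an illustrative sketch with numpy, not part of the original text):

```python
import numpy as np

# Bordered Hessian for z = x1^2 + x2^2 + x3^2 with the two linear constraints
P = np.array([[1.0, 1.0, 3.0],
              [5.0, 2.0, 1.0]])      # rows: gradients of h1, h2
Q = 2.0 * np.eye(3)                  # second partials of L w.r.t. x
HB = np.block([[np.zeros((2, 2)), P],
               [P.T, Q]])

det = np.linalg.det(HB)
print(round(det))                    # 460
# n - m = 1, so only this determinant is checked; its sign (+) equals
# (-1)^m with m = 2, identifying a minimum point.
```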
(A) The necessary conditions for maximization of f(x), x = (x1, x2, x3, ..., xn), at x = x0, subject to the constraints
gi(x) ≤ bi, i = 1, 2, 3, ..., m and x ≥ 0
are
1) ∂L(x, λ, s)/∂xj = 0,  j = 1, 2, 3, ..., n
2) λi [gi(x) − bi] = 0,  i = 1, 2, 3, ..., m
3) λi ≥ 0,  i = 1, 2, 3, ..., m
4) gi(x) ≤ bi,  i = 1, 2, 3, ..., m
(B) The necessary conditions for minimization of f(x), x = (x1, x2, ..., xn), at x = x0, subject to the conditions
gi(x) ≤ bi, i = 1, 2, ..., m and x ≥ 0 are
1) ∂L(x, λ, s)/∂xj = 0,  j = 1, 2, ..., n
2) λi [gi(x) − bi] = 0,  i = 1, 2, ..., m
3) λi ≤ 0,  i = 1, 2, ..., m
4) gi(x) ≤ bi,  i = 1, 2, ..., m
Proof (A)
Convert the inequality constraints into equations by introducing the slack variables s1, s2, ..., sm:
gi(x) + si² = bi,  i = 1, 2, 3, ..., m
i.e.  gi(x) + si² − bi = 0,  i = 1, 2, ..., m .......... (2)
The problem becomes:
Maximize f(x), X = (x1, x2, ..., xn),
such that Gi(x, si) = 0, i = 1, 2, ..., m .......... (4)
where Gi(x, si) = gi(x) + si² − bi.
Form the Lagrangian function
L(X, λ, s) = f(x) − Σ_{i=1}^{m} λi [gi(x) + si² − bi]
The necessary conditions for a stationary point are
∂L(x, λ, s)/∂xj = 0,  j = 1, 2, ..., n .......... (5)
∂L(x, λ, s)/∂λi = 0,  i = 1, 2, ..., m .......... (6)
∂L(x, λ, s)/∂si = 0,  i = 1, 2, ..., m .......... (7)
From (7) we get
−2 λi si = 0  ⟹  λi si = 0
Multiplying by si we have λi si² = 0, and since si² = bi − gi(x),
λi [bi − gi(x)] = 0,  i = 1, 2, ..., m
i.e.  λi [gi(x) − bi] = 0,  i = 1, 2, ..., m .......... (10)
Thus equations (5), (10) and the constraints (2) satisfied by the stationary point (x0, λ, s) prove the necessary conditions (1), (2) and (4) respectively.
Proof of (B)
The proofs of (1), (2) and (4) are as in case (A).
Proof of (3) for both the parts:
Consider the constraints gi(X) ≤ bi, i = 1, 2, ..., m. It can be shown that
λi = ∂f(x)/∂bi,  i = 1, 2, ..., m.
The necessary condition for a maximum is that λi ≥ 0, and for a minimum of f(X) that λi ≤ 0:
We see that as bi increases, the solution becomes less constrained, so for maximization f cannot decrease, i.e.
∂f(x)/∂bi ≥ 0,  i.e.  λi ≥ 0.
Similarly, for minimization of f(x), as bi increases f cannot increase, so
∂f(x)/∂bi ≤ 0,  i.e.  λi ≤ 0.
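The interpretation λ = ∂f/∂b can be seen on a small one-variable example (a hypothetical illustration chosen for this sketch, not taken from the text): maximize f(x) = 4x − x² subject to x ≤ b, with b = 1 so the constraint binds. The optimum is x* = b, hence f*(b) = 4b − b² and the multiplier from the stationarity condition 4 − 2x − λ = 0 at x = b is λ = 4 − 2b:

```python
# Hypothetical one-variable illustration of lambda = df*/db.
def f_star(b):
    return 4*b - b**2       # constrained optimum value of f for b <= 2

b = 1.0
lam = 4 - 2*b               # multiplier from 4 - 2x - lam = 0 at x = b

h = 1e-6                    # finite-difference estimate of df*/db
fd = (f_star(b + h) - f_star(b - h)) / (2*h)
assert abs(fd - lam) < 1e-6
assert lam >= 0             # maximization case: lambda >= 0, as stated above
```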
Example
Minimize f(X) = x1² + x2² + x3²
subject to 2x1 + x2 ≤ 5
x1 + x3 ≤ 2
x1 ≥ 1
x2 ≥ 2
x3 ≥ 0.
Solution
The problem in standard form is:
Minimize f(x) = x1² + x2² + x3²
subject to
h1 = 2x1 + x2 − 5 ≤ 0
h2 = x1 + x3 − 2 ≤ 0
h3 = 1 − x1 ≤ 0
h4 = 2 − x2 ≤ 0
h5 = −x3 ≤ 0
Here
L(X, λ, s) = f(X) − λ1 h1 − λ2 h2 − λ3 h3 − λ4 h4 − λ5 h5
The conditions are
∂L/∂xj = ∂f/∂xj − Σ_{i=1}^{5} λi ∂hi/∂xj = 0,  j = 1, 2, 3 .......... (1)
λi hi = 0,  i = 1, 2, 3, 4, 5 .......... (2)
hi ≤ 0,  i = 1, 2, 3, 4, 5 .......... (3)
λi ≤ 0,  i = 1, 2, 3, 4, 5 .......... (4)
With f(X) = x1² + x2² + x3² and
λ1 h1 = λ1(2x1 + x2 − 5)
λ2 h2 = λ2(x1 + x3 − 2)
λ3 h3 = λ3(1 − x1)
λ4 h4 = λ4(2 − x2)
λ5 h5 = λ5(−x3),
condition (1) gives
2x1 − 2λ1 − λ2 + λ3 = 0
2x2 − λ1 + λ4 = 0
2x3 − λ2 + λ5 = 0 .......... (5)
From (2)
λ1(2x1 + x2 − 5) = 0
λ2(x1 + x3 − 2) = 0
λ3(1 − x1) = 0
λ4(2 − x2) = 0
λ5(−x3) = 0 .......... (6)
From (3)
2x1 + x2 − 5 ≤ 0
x1 + x3 − 2 ≤ 0
1 − x1 ≤ 0
2 − x2 ≤ 0
−x3 ≤ 0 .......... (7)
From (4), λ1, λ2, λ3, λ4, λ5 ≤ 0.
Try x1 = 1, x2 = 2, x3 = 0. Then
2x1 + x2 − 5 = 2 + 2 − 5 = −1 ≤ 0   True
x1 + x3 − 2 = 1 + 0 − 2 = −1 ≤ 0   True
1 − x1 = 1 − 1 = 0 ≤ 0   True
2 − x2 = 2 − 2 = 0 ≤ 0   True
−x3 = 0 ≤ 0   True
From (6), since x1 + x3 − 2 = −1 ≠ 0,
λ2(x1 + x3 − 2) = 0  ⟹  λ2 = 0,
and since 2x1 + x2 − 5 = −1 ≠ 0,
λ1(2x1 + x2 − 5) = 0  ⟹  λ1 = 0.
From (5):
2x1 − 2λ1 − λ2 + λ3 = 0  ⟹  2(1) − 0 − 0 + λ3 = 0  ⟹  λ3 = −2
2x2 − λ1 + λ4 = 0  ⟹  2(2) − 0 + λ4 = 0  ⟹  λ4 = −4
2x3 − λ2 + λ5 = 0  ⟹  2(0) − 0 + λ5 = 0  ⟹  λ5 = 0
All the conditions (1)–(4) are satisfied, so X0 = (1, 2, 0) is the required minimum point, with
fmin = 1² + 2² + 0² = 5.
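The Kuhn-Tucker conditions at X0 = (1, 2, 0) with λ = (0, 0, −2, −4, 0) can be verified mechanically (an illustrative sketch, not part of the original text):

```python
# Verify the Kuhn-Tucker conditions at X0 = (1, 2, 0), lambda = (0,0,-2,-4,0)
# for min f = x1^2 + x2^2 + x3^2 with the five constraints in h_i(x) <= 0 form.
x = (1.0, 2.0, 0.0)
lam = (0.0, 0.0, -2.0, -4.0, 0.0)

h = [2*x[0] + x[1] - 5, x[0] + x[2] - 2, 1 - x[0], 2 - x[1], -x[2]]
grad_f = [2*x[0], 2*x[1], 2*x[2]]
grad_h = [[2, 1, 0], [1, 0, 1], [-1, 0, 0], [0, -1, 0], [0, 0, -1]]

assert all(hi <= 1e-12 for hi in h)                          # feasibility
assert all(abs(li * hi) < 1e-12 for li, hi in zip(lam, h))   # complementary slackness
assert all(li <= 0 for li in lam)                            # lambda_i <= 0 (minimization)
for j in range(3):                                           # stationarity of L
    g = grad_f[j] - sum(li * grad_h[i][j] for i, li in enumerate(lam))
    assert abs(g) < 1e-12
print(sum(xi**2 for xi in x))                                # f_min = 5.0
```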
UNIT 08  WOLFE'S AND BEALE'S METHODS
8.1 INTRODUCTION
The problem of optimizing a quadratic function subject to linear constraints is called a quadratic programming problem. Among the nonlinear programs, quadratic programming problems are computationally the least difficult to handle. For this reason, quadratic functions and programs are as widely used as linear functions and programs in modelling optimization problems. Quadratic programs are not only useful in the application of these models to real-life situations but also serve as subproblems in a number of algorithms for general nonlinear programs. Consequently, many algorithms have been developed for quadratic programs. In this unit, we shall describe Wolfe's and Beale's methods.
8.2 QUADRATIC PROGRAM
A quadratic program can be represented in the form
Maximize / Minimize f(x) = cᵀx + (1/2) xᵀ Q x
subject to the constraints
A x (≤, =, ≥) b  and  x ≥ 0.
Definition
The standard quadratic programming problem is:
Maximize z = f(x) = Σ_{j=1}^{n} cj xj + (1/2) Σ_{j=1}^{n} Σ_{k=1}^{n} cjk xj xk
subject to the constraints
Σ_{j=1}^{n} aij xj ≤ bi,  xj ≥ 0  (i = 1, 2, ..., m; j = 1, 2, 3, ..., n)
where cjk = ckj for all j, k, and bi ≥ 0 for all i = 1, 2, ..., m.
Converting the inequality constraints to equations with slack variables qi² and writing xj = rj², the Lagrangian function is
L(x, q, r, λ, μ) = f(x) − Σ_{i=1}^{m} λi [Σ_{j=1}^{n} aij xj − bi + qi²] − Σ_{j=1}^{n} μj (rj² − xj)
Introduce the non-negative artificial variables vj, j = 1, 2, 3, ..., n, into the Kuhn-Tucker conditions
cj + Σ_{k=1}^{n} cjk xk − Σ_{i=1}^{m} λi aij + μj = 0,  j = 1, 2, 3, ..., n.
Step : 4
Obtain the initial basic feasible solution to the following linear programming problem:
Minimize Zv = v1 + v2 + v3 + ... + vn
subject to the constraints
−Σ_{k=1}^{n} cjk xk + Σ_{i=1}^{m} λi aij − μj + vj = cj  (j = 1, 2, 3, ..., n)
Σ_{j=1}^{n} aij xj + si = bi  (i = 1, 2, 3, ..., m)
vj, λi, μj, xj, si ≥ 0  (i = 1, 2, ..., m; j = 1, 2, 3, ..., n)
satisfying the complementary slackness conditions
λi si = 0,  μj xj = 0  (i = 1, 2, 3, ..., m; j = 1, 2, 3, ..., n)
Step : 5
Apply two phase simplex method in the usual manner to find an optimum solution
to the linear programming problem constructed in step 4. Enter the variables
such that the above complementary slackness conditions are satisfied.
Step : 6
The optimum solution thus obtained in step 5 gives the optimum solution of the
given QPP also.
Remarks
1) If the quadratic programming problem is given in minimization form, then convert it into a maximization one by suitable modifications in f(x) and the '≤' constraints.
2) If λi is in the basic solution with a positive value, then si cannot be basic with a positive value. Similarly, μj and xj cannot be positive simultaneously.
Example 8.4.1
Use Wolfe's method to solve the quadratic programming problem:
Max. Zx = 4x1 + 6x2 − 2x1² − 2x1x2 − 2x2²
subject to x1 + 2x2 ≤ 2 and x1, x2 ≥ 0.
Solution :
Step : 1
First we convert the inequality constraints into equations:
x1 + 2x2 + q1² = 2
−x1 + r1² = 0,  −x2 + r2² = 0
Step : 2
The Lagrangian function is
L(x1, x2, q1, r1, r2, λ1, μ1, μ2)
= 4x1 + 6x2 − 2x1² − 2x1x2 − 2x2² − λ1(x1 + 2x2 + q1² − 2) − μ1(−x1 + r1²) − μ2(−x2 + r2²)
The necessary conditions give
∂L/∂x1 = 4 − 4x1 − 2x2 − λ1 + μ1 = 0,  ∂L/∂x2 = 6 − 2x1 − 4x2 − 2λ1 + μ2 = 0
Step : 3
Introduce the non-negative artificial variables v1, v2 and construct the linear programming problem:
Max. Zv = −v1 − v2
subject to the constraints
4x1 + 2x2 + λ1 − μ1 + v1 = 4
2x1 + 4x2 + 2λ1 − μ2 + v2 = 6
x1 + 2x2 + S1 = 2
with the complementary slackness conditions
μ1 x1 = 0,  μ2 x2 = 0,  λ1 S1 = 0
and all variables non-negative.
Step : 5
Now solve this problem by the two-phase simplex method.

            Cj       0     0     0     0     0    -1    -1     0
Basic   CB    xB    x1    x2    λ1    μ1    μ2    v1    v2    S1   Min. ratio
v1      -1     4     4     2     1    -1     0     1     0     0       1
v2      -1     6     2     4     2     0    -1     0     1     0       3
S1       0     2     1     2     0     0     0     0     0     1       2
Z = -10         Zj-Cj   -6    -6    -3     1     1     0     0     0

x1 is introduced as a basic variable, with v1 leaving (minimum ratio 1).

            Cj       0     0     0     0     0    -1    -1     0
Basic   CB    xB    x1    x2    λ1    μ1    μ2    v1    v2    S1   Min. ratio
x1       0     1     1    1/2   1/4  -1/4    0    1/4    0     0       2
v2      -1     4     0     3    3/2   1/2   -1   -1/2    1     0      4/3
S1       0     1     0    3/2  -1/4   1/4    0   -1/4    0     1      2/3
Z = -4          Zj-Cj    0    -3   -3/2  -1/2    1    3/2    0     0
The most negative of {−3, −3/2, −1/2} is −3, and the minimum ratio 2/3 corresponds to S1.
∴ x2 is the entering variable (possible because μ2 = 0) and S1 is the leaving variable.

            Cj       0     0     0     0     0    -1    -1     0
Basic   CB    xB    x1    x2    λ1    μ1    μ2    v1    v2    S1   Min. ratio
x1       0    2/3    1     0    1/3  -1/3    0    1/3    0   -1/3      2
v2      -1     2     0     0     2     0    -1     0     1    -2       1
x2       0    2/3    0     1   -1/6   1/6    0   -1/6    0    2/3     --
Z = -2          Zj-Cj    0     0    -2     0     1     1     0     2

Since −2 is the most negative, λ1 enters (possible as S1 = 0 has left the basis) and, as the minimum ratio is 1, v2 is the leaving variable.

            Cj       0     0     0     0     0    -1    -1     0
Basic   CB    xB    x1    x2    λ1    μ1    μ2    v1    v2    S1
x1       0    1/3    1     0     0   -1/3   1/6   1/3  -1/6    0
λ1       0     1     0     0     1     0   -1/2    0    1/2   -1
x2       0    5/6    0     1     0    1/6  -1/12 -1/6  1/12   1/2
Z = 0           Zj-Cj    0     0     0     0     0     1     1     0

Since all Δj = Zj − Cj ≥ 0, we get the optimal solution as x1 = 1/3 and x2 = 5/6.
Step : 6
The optimal value is
Zx = 4x1 + 6x2 − 2x1² − 2x1x2 − 2x2²
   = 4(1/3) + 6(5/6) − 2(1/3)² − 2(1/3)(5/6) − 2(5/6)²
   = 25/6.
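Since the objective is concave, the Wolfe solution can be sanity-checked by sampling the feasible region (an illustrative sketch, not part of the original text):

```python
import random

# Objective of the quadratic program solved above
def Z(x1, x2):
    return 4*x1 + 6*x2 - 2*x1**2 - 2*x1*x2 - 2*x2**2

x1s, x2s = 1/3, 5/6
Zs = Z(x1s, x2s)
assert abs(Zs - 25/6) < 1e-12
assert x1s + 2*x2s <= 2 + 1e-12          # feasible

# Z is concave, so no feasible sample should beat the Wolfe solution
random.seed(1)
for _ in range(2000):
    a = random.uniform(0, 2)
    c = random.uniform(0, (2 - a) / 2)   # keeps x1 + 2*x2 <= 2
    assert Z(a, c) <= Zs + 1e-9
```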
Example 8.4.2
Apply Wolfe's method to solve the quadratic programming problem:
Max. Zx = 2x1 + x2 − x1²
Subject to
2x1 + 3x2 ≤ 6,  2x1 + x2 ≤ 4  and  x1, x2 ≥ 0.
Solution :
Step : 1
First we convert the inequality constraints into equations:
2x1 + 3x2 + q1² = 6
2x1 + x2 + q2² = 4
−x1 + r1² = 0
−x2 + r2² = 0
Step : 2
The Lagrangian function is
L = 2x1 + x2 − x1² − λ1(2x1 + 3x2 + q1² − 6) − λ2(2x1 + x2 + q2² − 4) − μ1(−x1 + r1²) − μ2(−x2 + r2²)
The necessary and sufficient conditions are
∂L/∂x1 = 2 − 2x1 − 2λ1 − 2λ2 + μ1 = 0,  ∂L/∂x2 = 1 − 3λ1 − λ2 + μ2 = 0
∂L/∂λ1 = −(2x1 + 3x2 + q1² − 6) = 0,  ∂L/∂λ2 = −(2x1 + x2 + q2² − 4) = 0
Now define S1 = q1², S2 = q2²; then we have the complementary slackness conditions
λ1 S1 = 0,  λ2 S2 = 0,  μ1 x1 = 0,  μ2 x2 = 0
and x1, x2, S1, S2, λ1, λ2, μ1, μ2 ≥ 0.
Step : 3
Introduce the non-negative artificial variables v1, v2:
2x1 + 2λ1 + 2λ2 − μ1 + v1 = 2
3λ1 + λ2 − μ2 + v2 = 1
Step : 4
Construct the modified linear programming problem:
Max. Zv = −v1 − v2
Subject to
2x1 + 2λ1 + 2λ2 − μ1 + v1 = 2
3λ1 + λ2 − μ2 + v2 = 1
2x1 + 3x2 + S1 = 6
2x1 + x2 + S2 = 4
with all variables non-negative and the complementary slackness conditions above.
Step : 5
Now solve this program by the two-phase simplex method.
            Cj       0     0     0     0     0     0    -1    -1     0     0
Basic   CB    xB    x1    x2    λ1    λ2    μ1    μ2    v1    v2    S1    S2   Min. ratio
v1      -1     2     2     0     2     2    -1     0     1     0     0     0       1
v2      -1     1     0     0     3     1     0    -1     0     1     0     0      --
S1       0     6     2     3     0     0     0     0     0     0     1     0       3
S2       0     4     2     1     0     0     0     0     0     0     0     1       2
Z = -3          Zj-Cj   -2     0    -5    -3     1     1     0     0     0     0

Though the most negative value of Zj − Cj is −5, λ1 cannot be an entering variable, as λ1 S1 = 0 and S1 > 0. Similarly λ2 cannot be an entering variable as S2 > 0. Therefore x1 is the entering variable (possible because μ1 = 0).
With minimum ratio 1, v1 leaves the basis.

            Cj       0     0     0     0     0     0    -1    -1     0     0
Basic   CB    xB    x1    x2    λ1    λ2    μ1    μ2    v1    v2    S1    S2   Min. ratio
x1       0     1     1     0     1     1   -1/2    0    1/2    0     0     0      --
v2      -1     1     0     0     3     1     0    -1     0     1     0     0      --
S1       0     4     0     3    -2    -2     1     0    -1     0     1     0      4/3
S2       0     2     0     1    -2    -2     1     0    -1     0     0     1       2
Z = -1          Zj-Cj    0     0    -3    -1     0     1     1     0     0     0

Though −3 is the most negative value, the corresponding variable λ1 cannot be an entering variable as λ1 S1 = 0 and S1 > 0; so also λ2. Since μ2 = 0, x2 can be introduced (x1, v2, S1, S2 are already basic variables, and μ1 cannot be introduced as μ1 x1 = 0 with x1 > 0; thus x2 is the only possibility). Thus, introducing x2 as a basic variable with leaving variable S1 (minimum ratio 4/3), we get the following table.
            Cj       0     0     0     0     0     0    -1    -1     0     0
Basic   CB    xB    x1    x2    λ1    λ2    μ1    μ2    v1    v2    S1    S2   Min. ratio
x1       0     1     1     0     1     1   -1/2    0    1/2    0     0     0       1
v2      -1     1     0     0     3     1     0    -1     0     1     0     0      1/3
x2       0    4/3    0     1   -2/3  -2/3   1/3    0   -1/3    0    1/3    0      --
S2       0    2/3    0     0   -4/3  -4/3   2/3    0   -2/3    0   -1/3    1      --
Z = -1          Zj-Cj    0     0    -3    -1     0     1     1     0     0     0

Since the most negative Zj − Cj is −3, the corresponding variable λ1 is the entering variable (possible as S1 = 0 is now non-basic), and the leaving variable is v2, since the minimum ratio 1/3 corresponds to v2. We get the following table.
            Cj       0     0     0     0     0     0    -1    -1     0     0
Basic   CB    xB    x1    x2    λ1    λ2    μ1    μ2    v1    v2    S1    S2
x1       0    2/3    1     0     0    2/3  -1/2   1/3   1/2  -1/3    0     0
λ1       0    1/3    0     0     1    1/3    0   -1/3    0    1/3    0     0
x2       0   14/9    0     1     0   -4/9   1/3  -2/9  -1/3   2/9   1/3    0
S2       0   10/9    0     0     0   -8/9   2/3  -4/9  -2/3   4/9  -1/3    1
Z = 0           Zj-Cj    0     0     0     0     0     0     1     1     0     0

Since all Zj − Cj ≥ 0, the optimal solution is x1 = 2/3, x2 = 14/9, and
Max. Zx = 2x1 + x2 − x1² = 2(2/3) + 14/9 − (2/3)² = 22/9.
Example 8.4.3
Use Wolfe's method to solve the quadratic programming problem:
Max. Zx = 2x1 + 3x2 − 2x1²
Subject to
x1 + 4x2 ≤ 4,  x1 + 2x2 ≤ 2  and  x1, x2 ≥ 0.
Solution :
Step : 1
First we convert the inequality constraints into equations:
x1 + 4x2 + q1² = 4
x1 + 2x2 + q2² = 2
−x1 + r1² = 0
−x2 + r2² = 0
Step : 2
The Lagrangian function now becomes
L(x1, x2, q1, q2, r1, r2, λ1, λ2, μ1, μ2)
= 2x1 + 3x2 − 2x1² − λ1(x1 + 4x2 + q1² − 4) − λ2(x1 + 2x2 + q2² − 2) − μ1(−x1 + r1²) − μ2(−x2 + r2²)
The necessary conditions give (writing S1 = q1², S2 = q2²)
∂L/∂λ1 = −(x1 + 4x2 + S1 − 4) = 0,  ∂L/∂λ2 = −(x1 + 2x2 + S2 − 2) = 0
and the complementary slackness conditions
λ1 S1 = 0,  λ2 S2 = 0,  μ1 x1 = 0,  μ2 x2 = 0
with x1, x2, λ1, λ2, μ1, μ2, S1, S2 ≥ 0.
Step : 3
Introduce the non-negative artificial variables v1, v2:
4x1 + λ1 + λ2 − μ1 + v1 = 2
4λ1 + λ2 − μ2 + v2 = 3
Step : 4
Construct the modified linear programming problem:
Max. Zv = −v1 − v2
Subject to
4x1 + λ1 + λ2 − μ1 + v1 = 2
4λ1 + λ2 − μ2 + v2 = 3
x1 + 4x2 + S1 = 4
x1 + 2x2 + S2 = 2
with all variables non-negative and the complementary slackness conditions above.
Step : 5
Solve the problem constructed in Step 4 by the two-phase simplex method.

            Cj       0     0     0     0     0     0    -1    -1     0     0
Basic   CB    xB    x1    x2    λ1    λ2    μ1    μ2    v1    v2    S1    S2   Min. ratio
v1      -1     2     4     0     1     1    -1     0     1     0     0     0      1/2
v2      -1     3     0     0     4     1     0    -1     0     1     0     0      --
S1       0     4     1     4     0     0     0     0     0     0     1     0       4
S2       0     2     1     2     0     0     0     0     0     0     0     1       2
Z = -5          Zj-Cj   -4     0    -5    -2     1     1     0     0     0     0
The above table shows that any one of x1, λ1, λ2 could enter as a basic variable, but since λ1 S1 = 0 and λ2 S2 = 0 with S1 > 0 and S2 > 0, λ1 and λ2 cannot be introduced as basic variables. Therefore x1 enters the basis and, since the minimum ratio 1/2 corresponds to v1, the variable v1 leaves the basis (its column is then dropped from the table).

            Cj       0     0     0     0     0     0    -1     0     0
Basic   CB    xB    x1    x2    λ1    λ2    μ1    μ2    v2    S1    S2   Min. ratio
x1       0    1/2    1     0    1/4   1/4  -1/4    0     0     0     0      --
v2      -1     3     0     0     4     1     0    -1     1     0     0      --
S1       0    7/2    0     4   -1/4  -1/4   1/4    0     0     1     0      7/8
S2       0    3/2    0     2   -1/4  -1/4   1/4    0     0     0     1      3/4
Z = -3          Zj-Cj    0     0    -4    -1     0     1     0     0     0

The above table indicates that either λ1 or λ2 should enter the basis, but this is not possible because S1 > 0, S2 > 0 and λ1 S1 = 0, λ2 S2 = 0. x1, v2, S1, S2 are already basic elements. Since μ1 x1 = 0 and x1 > 0, μ1 cannot enter as a basic element. Thus the only variables left are x2 and μ2.
Enter x2 as a basic element (possible because μ2 = 0). The minimum ratio 3/4 corresponds to the variable S2, so S2 leaves the basis and we get the following table.
            Cj       0     0     0     0     0     0    -1     0     0
Basic   CB    xB    x1    x2    λ1    λ2    μ1    μ2    v2    S1    S2   Min. ratio
x1       0    1/2    1     0    1/4   1/4  -1/4    0     0     0     0       2
v2      -1     3     0     0     4     1     0    -1     1     0     0       3
S1       0    1/2    0     0    1/4   1/4  -1/4    0     0     1    -2       2
x2       0    3/4    0     1   -1/8  -1/8   1/8    0     0     0    1/2     --
Z = -3          Zj-Cj    0     0    -4    -1     0     1     0     0     0

Again λ1 cannot enter the basis, since S1 is in the basis and λ1 S1 = 0. The variable λ2 enters as the basic variable (possible as S2 = 0). The minimum ratio 2 corresponds to both x1 and S1, and any one of them can leave the basis. Suppose S1 leaves the basis. Thus we introduce λ2 into the basis and drop S1.
            Cj       0     0     0     0     0     0    -1     0     0
Basic   CB    xB    x1    x2    λ1    λ2    μ1    μ2    v2    S1    S2   Min. ratio
x1       0     0     1     0     0     0     0     0     0    -1     2       0
v2      -1     1     0     0     3     0     1    -1     1    -4     8      1/8
λ2       0     2     0     0     1     1    -1     0     0     4    -8      --
x2       0     1     0     1     0     0     0     0     0    1/2  -1/2     --
Z = -1          Zj-Cj    0     0    -3     0    -1     1     0     4    -8

Since −8 is the most negative Zj − Cj, S2 enters as a basic variable; the minimum ratio 0 corresponds to x1, so the variable x1 leaves the basis.
            Cj       0     0     0     0     0     0    -1     0     0
Basic   CB    xB    x1    x2    λ1    λ2    μ1    μ2    v2    S1    S2   Min. ratio
S2       0     0    1/2    0     0     0     0     0     0   -1/2    1      --
v2      -1     1    -4     0     3     0     1    -1     1     0     0      1/3
λ2       0     2     4     0     1     1    -1     0     0     0     0       2
x2       0     1    1/4    1     0     0     0     0     0    1/4    0      --
Z = -1          Zj-Cj    4     0    -3     0    -1     1     0     0     0

Now λ1 enters the basis (possible as S1 = 0); the minimum ratio 1/3 corresponds to v2, so v2 leaves the basis.

            Cj       0     0     0     0     0     0    -1     0     0
Basic   CB    xB    x1    x2    λ1    λ2    μ1    μ2    v2    S1    S2
S2       0     0    1/2    0     0     0     0     0     0   -1/2    1
λ1       0    1/3  -4/3    0     1     0    1/3  -1/3   1/3    0     0
λ2       0    5/3  16/3    0     0     1   -4/3   1/3  -1/3    0     0
x2       0     1    1/4    1     0     0     0     0     0    1/4    0
Z = 0           Zj-Cj    0     0     0     0     0     0     1     0     0

Since all Zj − Cj ≥ 0 and no artificial variable remains in the basis, the optimal solution is x1 = 0, x2 = 1.
Step : 6
Max. Zx = 2x1 + 3x2 − 2x1² = 2(0) + 3(1) − 2(0)² = 3.
~ ~ ~ ~ ~ EXERCISE ~ ~ ~ ~ ~
Use Wolfe's method to solve the following problems.
1) Min. Z = x1² + x2² + x3²
Subject to,
x1 + x2 + 3x3 = 2
5x1 + 2x2 + x3 = 5
x1, x2, x3 ≥ 0
(Ans.: x1 = 0.81, x2 = 0.35, x3 = 0.35, min. z = 0.857)
2) Min. Z = x1 + x2 + x3 − (1/2)(x1² + x2² + x3²)
Subject to
x1 + x2 + x3 ≥ 1
4x1 + 2x2 ≤ 7/2
x1, x2, x3 ≥ 0
(Ans.: x1 = x2 = x3 = 1/3, z = 15/18)
3) Max. Z = 2x1 + 3x2 − 2x1²
Subject to,
x1 + 4x2 ≤ 4
x1 + x2 ≤ 2
x1, x2 ≥ 0
(Ans.: x1 = 0, x2 = 1, λ1 = 1/3, λ2 = 5/3, Max. z = 3)
4) Min. Z = 6 − 6x1 + 2x1² − 2x1x2 + 2x2²
Subject to,
x1 + x2 ≤ 2
x1, x2 ≥ 0
(Ans.: x1 = 3/2, x2 = 1/2, λ1 = 0, λ2 = 1, min. z = 1/2)
8.5 BEALE'S METHOD
Another approach to solve a quadratic programming problem has been suggested by Beale. In this method the variables are partitioned into basic and non-basic variables and the results of classical calculus are used. At each iteration the objective function is expressed in terms of the non-basic variables only.
A general quadratic programming problem with linear constraints can be written as
Max. f(x) = C x + (1/2) xᵀ Q x
subject to the constraints
A x = b,  x ≥ 0,
where x = (x1, x2, x3, ..., x_{n+m})ᵀ, C is 1 × (n + m), A is m × (n + m) and Q is a symmetric matrix.
Partition A = (B, R) and x accordingly:
(B, R) [xB; xNB] = b,  i.e.  B xB + R xNB = b,
where xB and xNB denote the basic and non-basic variables respectively, and the matrix A is partitioned into the submatrices B and R corresponding to xB and xNB respectively.
Since B xB + R xNB = b, we have xB = B⁻¹(b − R xNB).
Step : 3
Express the objective function in terms of the non-basic variables only, using xB = B⁻¹(b − R xNB).
Step : 4
Evaluate the partial derivatives ∂f/∂xNB at the current point (xNB = 0). If some partial derivative is positive, then by increasing the value of the corresponding non-basic variable the value of the objective function can be improved.
Note that the constraints of the new problem become
B⁻¹ R xNB ≤ B⁻¹ b  (as xB ≥ 0).
Thus any component of xNB can be increased only until ∂f/∂xNB = 0, or until one or more components of xB are reduced to zero.
If we have more than m non-zero variables at any step of the iteration, define a new non-basic variable si = ∂f/∂xNB (the partial derivative that has become zero), which supplies one additional constraint.
Step : 5
Now we have m + 1 non-zero variables and m + 1 constraints; the solution gives a basic solution to the extended set of constraints.
Step : 6
Repeat the above procedure until no further improvement in the objective function may be obtained by increasing one of the non-basic variables.
This technique will give an optimal solution in a finite number of steps.
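The reduced-gradient computation behind Steps 3-4 can be sketched with numpy. The data below come from Example 8.6.1 of this unit (f = 4x1 + 6x2 − 2x1² − 2x1x2 − 2x2², constraint x1 + 2x2 + x3 = 2 after adding the slack x3); the code itself is an illustrative sketch, not part of the original text:

```python
import numpy as np

# Beale step: with x_B basic and x_NB non-basic, read off df/dx_NB at x_NB = 0.
# f(x) = c.x + 0.5*x'Qx for x = (x1, x2, x3).
c = np.array([4.0, 6.0, 0.0])
Q = np.array([[-4.0, -2.0, 0.0],
              [-2.0, -4.0, 0.0],
              [ 0.0,  0.0, 0.0]])
A = np.array([[1.0, 2.0, 1.0]])
b = np.array([2.0])

basic, nonbasic = [0], [1, 2]           # x_B = (x1), x_NB = (x2, x3)
B, R = A[:, basic], A[:, nonbasic]

x = np.zeros(3)
x[basic] = np.linalg.solve(B, b)        # x_NB = 0  =>  x = (2, 0, 0)

g = c + Q @ x                           # gradient of f at x
# chain rule: df/dx_NB = g_NB - (B^-1 R)' g_B
reduced = g[nonbasic] - np.linalg.solve(B, R).T @ g[basic]
print(reduced)                          # [10.  4.], as in the first iteration
```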
8.6 ILLUSTRATIVE EXAMPLES ON BEALE'S METHOD
Example 8.6.1
Use Beale's method for solving the quadratic programming problem:
Max. Zx = 4x1 + 6x2 − 2x1² − 2x1x2 − 2x2²
subject to x1 + 2x2 ≤ 2 and x1, x2 ≥ 0.
Solution :
Step : 1
Introducing the slack variable x3, the given problem becomes:
Maximize Zx subject to x1 + 2x2 + x3 = 2,  x1, x2, x3 ≥ 0.
Take xB = (x1) and xNB = (x2, x3)ᵀ, so that
x1 = 2 − 2x2 − x3.
Step : 2
Express the objective function in terms of x2 and x3:
f(x2, x3) = 4(2 − 2x2 − x3) + 6x2 − 2(2 − 2x2 − x3)² − 2(2 − 2x2 − x3) x2 − 2x2²
∂f(xNB)/∂x2 = −8 + 6 + 8(2 − 2x2 − x3) − 2(2 − 4x2 − x3) − 4x2
At x2 = x3 = 0:
∂f(xNB)/∂x2 = −8 + 6 + 16 − 4 = 10 > 0
This indicates that the objective function will increase if x2 is increased. Now we should observe whether the partial derivative with respect to x3 gives a more promising alternative:
∂f(xNB)/∂x3 = −4 + 4(2 − 2x2 − x3) + 2x2
At x2 = x3 = 0:
∂f(xNB)/∂x3 = −4 + 8 = 4 > 0
Since ∂f/∂x2 > ∂f/∂x3 > 0 at xNB = 0, an increase in x2 will give the better improvement.
Setting x3 = 0,
∂f(xNB)/∂x2 = −8 + 6 + 8(2 − 2x2) − 2(2 − 4x2) − 4x2 = 10 − 12x2
10 − 12x2 = 0, i.e. x2 = 5/6.
Also, x1 = 2 − 2x2 − x3 ≥ 0 allows x2 to be at most 1 (with x3 = 0). Hence
x2 = Min{5/6, 1} = 5/6.
Thus we find x2 = 5/6 and the new basic variable is x2. We now initiate a new iteration by solving for x2 in terms of x1 and x3.
Second Iteration
Step : 1
x2 = 1 − (1/2)(x1 + x3).  Here xB = (x2) and xNB = (x1, x3)ᵀ.
Step : 2
f(x1, x3) = 4x1 + 6[1 − (1/2)(x1 + x3)] − 2x1² − 2x1[1 − (1/2)(x1 + x3)] − 2[1 − (1/2)(x1 + x3)]²
∂f/∂x1 = 4 − 3 − 4x1 − 2[1 − (1/2)(x1 + x3)] + x1 + 2[1 − (1/2)(x1 + x3)] = 1 − 3x1
∂f/∂x3 = −3 + x1 + 2[1 − (1/2)(x1 + x3)] = −1 − x3 + ... ; at x1 = x3 = 0,
∂f/∂x1 = 1 > 0  and  ∂f/∂x3 = −1 < 0.
Step : 3
∂f/∂x1 = 1 − 3x1 = 0 gives x1 = 1/3.
Since x2 = 1 − (1/2)x1 − (1/2)x3 ≥ 0, at the most x1 = 2 (with x3 = 0). Hence
x1 = Min{1/3, 2} = 1/3.
Since ∂f/∂x3 < 0 at x1 = x3 = 0, x3 cannot become a basic variable, and therefore the optimal solution is attained at x1 = 1/3, x2 = 5/6, x3 = 0:
Max. Zx = 4(1/3) + 6(5/6) − 2(1/3)² − 2(1/3)(5/6) − 2(5/6)² = 25/6.
Observe that x1 + 2x2 = 1/3 + 2(5/6) = 2, with x1 ≥ 0, x2 ≥ 0.
Example 8.6.2
Use Beale's method to solve the quadratic programming problem:
Max. Zx = 10x1 + 25x2 − 10x1² − x2² − 4x1x2
Subject to,
x1 + 2x2 + x3 = 10
x1 + x2 + x4 = 9
and x1, x2, x3, x4 ≥ 0.
Solution :
Step : 1
Select x1, x2 as basic variables (since there are two constraints we choose 2 variables as basic variables):
x1 + 2x2 = 10 − x3
x1 + x2 = 9 − x4
⟹  x1 = 8 + x3 − 2x4  and  x2 = 1 − x3 + x4
Here xB = (x1, x2) and xNB = (x3, x4).
Step : 2
f(x3, x4) = 10(8 + x3 − 2x4) + 25(1 − x3 + x4) − 10(8 + x3 − 2x4)² − (1 − x3 + x4)² − 4(8 + x3 − 2x4)(1 − x3 + x4)
∂f/∂x3 = 10 − 25 − 20(8 + x3 − 2x4) + 2(1 − x3 + x4) − 4[(1 − x3 + x4) − (8 + x3 − 2x4)]
At x3 = x4 = 0:
∂f(xNB)/∂x3 = 10 − 25 − 160 + 2 − 4(1 − 8) = −145 < 0
∂f/∂x4 = −20 + 25 + 40(8 + x3 − 2x4) − 2(1 − x3 + x4) − 4[−2(1 − x3 + x4) + (8 + x3 − 2x4)]
At x3 = x4 = 0:
∂f(xNB)/∂x4 = −20 + 25 + 320 − 2 − 4(−2 + 8) = 299 > 0
Step : 3
We therefore increase x4. Since x1 = 8 + x3 − 2x4 and x1, x3, x4 ≥ 0, x4 can be at most 4 (with x3 = 0). Also, with x3 = 0,
∂f(xNB)/∂x4 = 299 − 66x4 = 0  ⟹  x4 = 299/66
x4 = Min{4, 299/66} = 4.
Since at x3 = 0, x4 = 4 we get x1 = 0, the variable x1 leaves the basis and x4 enters it.
Second Iteration
Step : 1
x2 = 5 − (1/2)x1 − (1/2)x3  and  x4 = 9 − x1 − x2 = 9 − x1 − [5 − (1/2)x1 − (1/2)x3] = 4 − (1/2)x1 + (1/2)x3
Thus xB = (x2, x4) and xNB = (x1, x3).
Step : 2
f(x1, x3) = 10x1 + 25[5 − (1/2)x1 − (1/2)x3] − 10x1² − [5 − (1/2)x1 − (1/2)x3]² − 4x1[5 − (1/2)x1 − (1/2)x3]
∂f/∂x1 = 10 − 25/2 − 20x1 + [5 − (1/2)x1 − (1/2)x3] − 4[5 − (1/2)x1 − (1/2)x3] + 2x1
At x1 = x3 = 0:
∂f/∂x1 = 10 − 25/2 + 5 − 20 = −35/2 < 0
∂f/∂x3 = −25/2 + [5 − (1/2)x1 − (1/2)x3] + 2x1
At x1 = x3 = 0:
∂f/∂x3 = −25/2 + 5 = −15/2 < 0
Since both the partial derivatives are negative, neither of the non-basic variables x1, x3 can be introduced to increase Zx, and thus the optimal solution has been attained. The solution is given by x1 = x3 = 0, x2 = 5 and x4 = 4, and the optimal value of Z is
Zmax = 10(0) + 25(5) − 10(0)² − (5)² − 4(0)(5) = 100.
Example 8.6.3
Use Beale's method to solve the quadratic programming problem:
Maximize Zx = 2x1 + 3x2 − 2x2²
subject to x1 + 4x2 ≤ 4
x1 + x2 ≤ 2
and x1, x2 ≥ 0.
Solution :
Step : 1
Introduce slack variables in the constraints to get equations:
x1 + 4x2 + x3 = 4
x1 + x2 + x4 = 2
and x1, x2, x3, x4 ≥ 0.
Solve the constraints for x1 and x2:
x1 + 4x2 = 4 − x3
x1 + x2 = 2 − x4
Solving the above equations simultaneously we get
x1 = (1/3)(4 + x3 − 4x4)  and  x2 = (1/3)(2 − x3 + x4)
Initially x1 = 4/3 and x2 = 2/3.
Thus xB = (x1, x2) and xNB = (x3, x4).
Step : 2
Express Z in terms of the non-basic variables:
Z = f(x3, x4) = (2/3)(4 + x3 − 4x4) + (2 − x3 + x4) − (2/9)(2 − x3 + x4)²
∂f/∂x3 = 2/3 − 1 + (4/9)(2 − x3 + x4);  at x3 = x4 = 0:
∂f/∂x3 = 2/3 − 1 + 8/9 = 5/9 > 0
∂f/∂x4 = −8/3 + 1 − (4/9)(2 − x3 + x4);  at x3 = x4 = 0:
∂f/∂x4 = −8/3 + 1 − 8/9 = −23/9 < 0
Since ∂f/∂x3 > 0, an increase in x3 will increase the objective function, whereas since ∂f/∂x4 < 0, an increase in x4 will decrease it. Therefore we increase x3.
Step : 3
Since x2 = (1/3)(2 − x3 + x4) ≥ 0, at the most x3 = 2 (with x4 = 0), and
∂f/∂x3 = 5/9 − (4/9)x3 = 0  ⟹  x3 = 5/4
x3 = Min{2, 5/4} = 5/4.
Thus we get three non-zero variables:
x3 = 5/4, therefore x1 = 7/4 and x2 = 1/4.
Thus we have three non-zero variables with 2 constraints. Therefore we introduce a new variable.
Step : 4
Since at x3 = 5/4 the derivative ∂f/∂x3 (with x4 = 0) vanishes, we introduce the new variable
x5 = ∂f/∂x3 |_{x4 = 0} = 5/9 − (4/9)x3
⟹  x3 = 5/4 − (9/4)x5.
Thus we have the following system of constraints:
x1 + 4x2 + x3 = 4
x1 + x2 + x4 = 2
x3 + (9/4)x5 = 5/4
i.e.
x1 = 7/4 − (4/3)x4 − (3/4)x5
x2 = 1/4 + (1/3)x4 + (3/4)x5
x3 = 5/4 − (9/4)x5
with xB = (x1, x2, x3) and xNB = (x4, x5).
Step : 5
Z = f(x4, x5) = 2[7/4 − (4/3)x4 − (3/4)x5] + 3[1/4 + (1/3)x4 + (3/4)x5] − 2[1/4 + (1/3)x4 + (3/4)x5]²
∂f/∂x4 = −8/3 + 1 − 4[1/4 + (1/3)x4 + (3/4)x5](1/3);  at x4 = x5 = 0:
∂f/∂x4 = −8/3 + 1 − 1/3 = −2 < 0
∂f/∂x5 = −3/2 + 9/4 − 4[1/4 + (1/3)x4 + (3/4)x5](3/4);  at x4 = x5 = 0:
∂f/∂x5 = −3/2 + 9/4 − 3/4 = 0
Since ∂f/∂x4 < 0 and ∂f/∂x5 = 0, no further improvement is possible and we get the optimal solution at
x1 = 7/4, x2 = 1/4, x3 = 5/4, x4 = x5 = 0,
and Z = 2x1 + 3x2 − 2x2²
      = 2(7/4) + 3(1/4) − 2(1/4)²
      = 14/4 + 3/4 − 1/8 = 33/8.
~ ~ ~ ~ ~ EXERCISE ~ ~ ~ ~ ~
Solve the following problems by Beale's method.
1) Max. Z = 2x1 + 3x2 − x1²
Subject to,
x1 + 2x2 ≤ 4,  x1, x2 ≥ 0
(Ans.: x1 = 1/4, x2 = 15/8, z = 97/16)
2) Max. Z = 2x1 + 2x2 − 2x2²
Subject to,
x1 + 4x2 ≤ 2,  x1 + x2 ≤ 2
x1, x2 ≥ 0
(Ans.: x1 = 0, x2 = 1, z = 3)
3) Max. Z = 6x1 + 3x2 − x1² + 4x1x2 − 4x2²
Subject to,
x1 + x2 ≤ 3,  4x1 + x2 ≤ 9
x1, x2 ≥ 0
(Ans.: x1 = 2, x2 = 1, z = 15)
4) Min. Z = 183 − 44x1 − 42x2 + 8x1² − 12x1x2 + 17x2²
Subject to,
2x1 + x2 ≤ 10,  x1, x2 ≥ 0
(Ans.: x1 = 3.8, x2 = 2.4, z = 19)
5) Max. Z = (1/4)(2x3 + x1) − (1/2)(x1² + x2² + x3²)
Subject to,
x1 + x2 + x3 = 1 and x1, x2, x3 ≥ 0
(Ans.: x1 = 1/8, x2 = 0, x3 = 7/8, z = 1/64)
6) Max. Z = 4x1² + 3x2²
Subject to,
x1 + 3x2 ≤ 5,  x1 + 4x2 ≤ 4,  x1, x2 ≥ 0
REFERENCES
18. Kanti Swarup, P.K. Gupta and Man Mohan, Operations Research, Sultan Chand, New Delhi, 1991.
19. L.V. Kantorovich, On the Translocation of Masses, Doklady Akad. Nauk SSSR, 37, (1942); translated in Management Science, 5, No. 1, 1958.
20. T.C. Koopmans, Optimum Utilization of the Transportation System, Econometrica, 17, 1949.
21. T.C. Koopmans, Activity Analysis of Production and Allocation, Cowles Commission Monograph 13, Wiley, New York, 1951.
22. K.V. Mittal, Optimization Methods in Operations Research and System Analysis, Wiley Eastern, New Delhi, 1983.
23. N.G. Nair, Operations Research, Dhanpat Rai and Sons, 1994.
24. S. Philipose, Operations Research, A Practical Approach, Tata McGraw-Hill, New Delhi, 1986.
25. S.S. Rao, Optimization, Theory and Application, Wiley Eastern, New Delhi, 1977.
26. S.D. Sharma, Operations Research, Kedarnath Ramnath, Meerut, 1994.
27. T.L. Saaty, Mathematical Methods of Operations Research, McGraw-Hill, New York, 1959.
28. H.A. Taha, Operations Research, An Introduction, Macmillan, New York, 1976.
29. S. Vajda, The Theory of Games and Linear Programming, John Wiley, New York, 1956.
30. H.M. Wagner, Principles of Operations Research, Prentice-Hall of India, New Delhi, 1994.
31. Joseph G. Ecker and Michael Kupferschmid, Introduction to Operations Research, John Wiley, New York, 1988.