
OPERATIONS RESEARCH

Chapter 10
Non-linear Programming

Prof. Bibhas C. Giri

Department of Mathematics
Jadavpur University
Kolkata, India
Email: bcgiri.jumath@gmail.com
MODULE - 3: Wolfe's Modified Simplex Method and Beale's Method

3.1 Wolfe’s Modified Simplex Method

Let the quadratic programming problem (QPP) be:



Maximize
$$z = f(x) = \sum_{j=1}^{n} c_j x_j + \frac{1}{2}\sum_{j=1}^{n}\sum_{k=1}^{n} x_j c_{jk} x_k$$
subject to
$$\sum_{j=1}^{n} a_{ij} x_j \le b_i, \qquad x_j \ge 0, \qquad i = 1, 2, \ldots, m; \; j = 1, 2, \ldots, n,$$

where $c_{jk} = c_{kj}$ for all $j$ and $k$, and $b_i \ge 0$ for all $i = 1, 2, \ldots, m$.

Also, we assume that the quadratic form $\sum_{j=1}^{n}\sum_{k=1}^{n} x_j c_{jk} x_k$ is negative semi-definite.
The above QPP can be solved by Wolfe's modified simplex method, which is outlined in the following steps:

Step 1. First, introduce a slack variable $q_i^2$ in the $i$th constraint ($i = 1, 2, \ldots, m$) and a slack variable $r_j^2$ in the $j$th non-negativity constraint ($j = 1, 2, \ldots, n$) to convert the constraints into equations.

Step 2. Then construct the Lagrangian function

$$L(x, q, r, \lambda, \mu) = f(x) - \sum_{i=1}^{m} \lambda_i \Big[\sum_{j=1}^{n} a_{ij} x_j - b_i + q_i^2\Big] - \sum_{j=1}^{n} \mu_j \big[-x_j + r_j^2\big]$$

where $x = (x_1, x_2, \ldots, x_n)$, $q = (q_1^2, q_2^2, \ldots, q_m^2)$, $r = (r_1^2, r_2^2, \ldots, r_n^2)$, $\lambda = (\lambda_1, \lambda_2, \ldots, \lambda_m)$, $\mu = (\mu_1, \mu_2, \ldots, \mu_n)$.

Differentiating the above function partially with respect to $x, q, r, \lambda, \mu$, and equating the first-order partial derivatives to zero, we derive the Kuhn-Tucker conditions from the resulting equations.

Step 3. Wolfe (1959) suggested introducing non-negative artificial variables $v_j$, $j = 1, 2, \ldots, n$, in the Kuhn-Tucker conditions

$$c_j + \sum_{k=1}^{n} c_{jk} x_k - \sum_{i=1}^{m} \lambda_i a_{ij} + \mu_j = 0, \quad j = 1, 2, \ldots, n,$$

and constructing the objective function

$$z_v = v_1 + v_2 + \cdots + v_n,$$

where $v_1, v_2, \ldots, v_n$ are the artificial variables.

Step 4. Obtain the initial basic feasible solution to the following linear programming problem:

$$\text{Min. } z_v = v_1 + v_2 + \cdots + v_n$$

subject to the constraints:

$$\sum_{k=1}^{n} c_{jk} x_k - \sum_{i=1}^{m} \lambda_i a_{ij} + \mu_j + v_j = -c_j, \quad j = 1, 2, \ldots, n$$

$$\sum_{j=1}^{n} a_{ij} x_j + q_i^2 = b_i, \quad i = 1, 2, \ldots, m$$

$$v_j, \; \lambda_i, \; \mu_j, \; x_j \ge 0, \quad i = 1, 2, \ldots, m; \; j = 1, 2, \ldots, n,$$

and satisfying the complementary slackness condition:

$$\sum_{j=1}^{n} \mu_j x_j + \sum_{i=1}^{m} \lambda_i s_i = 0 \quad (\text{where } s_i = q_i^2),$$

or, $\lambda_i s_i = 0$ and $\mu_j x_j = 0$, $i = 1, 2, \ldots, m$; $j = 1, 2, \ldots, n$.

Step 5. Now, apply the two-phase simplex method in the usual manner to find an optimum solution to the LP problem constructed in Step 4. The solution must satisfy the above complementary slackness condition.

Step 6. The optimum solution thus obtained in Step 5 also gives the optimum solution of the given QPP.

Note:
• If the QPP is given in minimization form, convert it into maximization form with ≤ type constraints.

• Modify the simplex algorithm to include the complementary slackness conditions.

• The solution is obtained by using Phase I of the simplex method. As our aim is only to obtain a feasible solution, we need not consider Phase II.

• Phase I ends with the sum of all the artificial variables equal to zero, provided that a feasible solution of the problem exists.
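
To make the construction of Steps 3 and 4 concrete, the following sketch assembles the coefficient matrix and right-hand side of the Phase-I system for a given QPP. This is a minimal sketch assuming Python with numpy; the helper name `wolfe_phase1_system` is ours, not from the notes. Rows with a negative right-hand side are multiplied by −1 before the artificial variables are attached, exactly as is done in Example 3.1 below.

```python
import numpy as np

def wolfe_phase1_system(c, C, A, b):
    """Assemble the Wolfe Phase-I constraint system for
         max  c^T x + (1/2) x^T C x,  subject to  A x <= b,  x >= 0.
    Kuhn-Tucker rows:  C x - A^T lam + mu + v = -c   (n rows)
    Primal rows:       A x + s = b                   (m rows)
    Variable order: x (n), lam (m), mu (n), s (m); the artificials v
    are attached later, after rows with negative RHS are negated."""
    c, b = np.asarray(c, float), np.asarray(b, float)
    C, A = np.asarray(C, float), np.asarray(A, float)
    m, n = A.shape
    top = np.hstack([C, -A.T, np.eye(n), np.zeros((n, m))])
    bot = np.hstack([A, np.zeros((m, m + n)), np.eye(m)])
    return np.vstack([top, bot]), np.concatenate([-c, b])

# Data of Example 3.1 below: c = (4, 6) and, writing the quadratic part
# as (1/2) x^T C x, the symmetric matrix C = [[-4, -2], [-2, -4]].
M, rhs = wolfe_phase1_system([4, 6], [[-4, -2], [-2, -4]], [[1, 2]], [2])
print(M)    # 3 x 6 coefficient matrix over (x1, x2, lam1, mu1, mu2, s1)
print(rhs)  # [-4. -6.  2.]
```

Negating the first two rows and attaching $v_1, v_2$ reproduces exactly the modified LP of Step 4 in Example 3.1.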

Example 3.1: Apply Wolfe's method to solve the quadratic programming problem:

$$\text{Max. } z_x = 4x_1 + 6x_2 - 2x_1^2 - 2x_1 x_2 - 2x_2^2$$

subject to $x_1 + 2x_2 \le 2$ and $x_1, x_2 \ge 0$.

Solution:

Step 1. First, we write all the constraint inequalities with the '≤' sign as

$$x_1 + 2x_2 \le 2, \quad -x_1 \le 0, \quad -x_2 \le 0.$$

Step 2. Now, we introduce the slack variables $q_1^2, r_1^2, r_2^2$. Then our problem becomes

$$\text{Max. } z_x = 4x_1 + 6x_2 - 2x_1^2 - 2x_1 x_2 - 2x_2^2$$

subject to
$$x_1 + 2x_2 + q_1^2 = 2, \quad -x_1 + r_1^2 = 0, \quad -x_2 + r_2^2 = 0.$$

Step 3. To obtain the Kuhn-Tucker conditions, we construct the Lagrangian function

$$L(x_1, x_2, q_1, r_1, r_2, \lambda_1, \mu_1, \mu_2) = (4x_1 + 6x_2 - 2x_1^2 - 2x_1 x_2 - 2x_2^2) - \lambda_1(x_1 + 2x_2 + q_1^2 - 2) - \mu_1(-x_1 + r_1^2) - \mu_2(-x_2 + r_2^2).$$

The necessary and sufficient conditions for optimality are:

$$\frac{\partial L}{\partial x_1} = 4 - 4x_1 - 2x_2 - \lambda_1 + \mu_1 = 0$$
$$\frac{\partial L}{\partial x_2} = 6 - 2x_1 - 4x_2 - 2\lambda_1 + \mu_2 = 0$$

Defining $s_1 = q_1^2$, we have $\lambda_1 s_1 = 0$, $\mu_1 x_1 = 0$, $\mu_2 x_2 = 0$. Also, $x_1 + 2x_2 + s_1 = 2$, and finally, $x_1, x_2, s_1, \lambda_1, \mu_1, \mu_2 \ge 0$.
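
As a quick symbolic cross-check (a sketch assuming sympy is available; it is not part of the original solution), the two partial derivatives above can be reproduced directly from this Lagrangian:

```python
import sympy as sp

# Reproduce the Kuhn-Tucker equations of Step 3 symbolically.
x1, x2, q1, r1, r2, lam1, mu1, mu2 = sp.symbols('x1 x2 q1 r1 r2 lam1 mu1 mu2')
L = (4*x1 + 6*x2 - 2*x1**2 - 2*x1*x2 - 2*x2**2
     - lam1*(x1 + 2*x2 + q1**2 - 2)
     - mu1*(-x1 + r1**2) - mu2*(-x2 + r2**2))
print(sp.diff(L, x1))  # equals 4 - 4*x1 - 2*x2 - lam1 + mu1
print(sp.diff(L, x2))  # equals 6 - 2*x1 - 4*x2 - 2*lam1 + mu2
```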
Step 4. Now, we introduce the artificial variables $v_1$ and $v_2$. Then the modified linear programming problem becomes:

$$\text{Max. } z_v = -v_1 - v_2$$
subject to
$$4x_1 + 2x_2 + \lambda_1 - \mu_1 + v_1 = 4$$
$$2x_1 + 4x_2 + 2\lambda_1 - \mu_2 + v_2 = 6$$
$$x_1 + 2x_2 + s_1 = 2$$

where all the variables are non-negative and $\mu_1 x_1 = 0$, $\mu_2 x_2 = 0$, $\lambda_1 s_1 = 0$.

Step 5. The initial table (Table 3.1) for Phase I is given below:

             cj →      0     0     0     0     0    -1    -1     0
  cB   Basis    b     x1    x2    λ1    µ1    µ2    v1    v2    s1
  -1   v1       4      4     2     1    -1     0     1     0     0
  -1   v2       6      2     4     2     0    -1     0     1     0
   0   s1       2      1     2     0     0     0     0     0     1
  zv = −10   zj − cj: -6    -6    -3     1     1     0     0     0

Table 3.1: Starting table

Step 6. Since µ1 = 0, x1 is introduced into the basic solution with v1 as the leaving variable. Notice that λ1 cannot enter the basis because s1 is a basic variable (otherwise the condition λ1 s1 = 0 would be violated). This gives the following transformed table (Table 3.2) by our usual rules of transformation.

             cj →      0     0     0     0     0    -1    -1     0
  cB   Basis    b     x1    x2    λ1    µ1    µ2    v1    v2    s1
   0   x1       1      1   1/2   1/4  -1/4     0   1/4     0     0
  -1   v2       4      0     3   3/2   1/2    -1  -1/2     1     0
   0   s1       1      0   3/2  -1/4   1/4     0  -1/4     0     1
  zv = −4    zj − cj:  0    -3  -3/2  -1/2     1   3/2     0     0

Table 3.2: First iteration table


             cj →      0     0     0     0     0    -1    -1     0
  cB   Basis    b     x1    x2    λ1    µ1    µ2    v1    v2    s1
   0   x1      2/3     1     0   1/3  -1/3     0   1/3     0  -1/3
  -1   v2       2      0     0     2     0    -1     0     1    -2
   0   x2      2/3     0     1  -1/6   1/6     0  -1/6     0   2/3
  zv = −2    zj − cj:  0     0    -2     0     1     1     0     2

Table 3.3: Second iteration table

Step 7. Since µ2 = 0, x2 is introduced into the basic solution with s1 as the leaving variable. We then get the next improved table (Table 3.3) shown above.

Step 8. Since s1 = 0, λ1 can now be introduced into the basic solution, with v2 as the leaving variable.

             cj →      0     0     0     0     0    -1    -1     0
  cB   Basis    b     x1    x2    λ1    µ1    µ2    v1    v2    s1
   0   x1      1/3     1     0     0  -1/3   1/6   1/3  -1/6     0
   0   λ1       1      0     0     1     0  -1/2     0   1/2    -1
   0   x2      5/6     0     1     0   1/6 -1/12  -1/6  1/12   1/2
  zv = 0     zj − cj:  0     0     0     0     0     1     1     0

Table 3.4: Third and final iteration table

Here $z_j - c_j \ge 0$ for all $j$. Hence this last table (Table 3.4) gives the optimal solution for Phase I. Since $z_v = 0$, the solution is also feasible. Thus the required optimal solution is $x_1^* = \frac{1}{3}$, $x_2^* = \frac{5}{6}$. The optimal value $z_x^*$ is computed from the original objective function as follows:

$$z_x^* = 4\Big(\frac{1}{3}\Big) + 6\Big(\frac{5}{6}\Big) - 2\Big(\frac{1}{3}\Big)^2 - 2\Big(\frac{1}{3}\Big)\Big(\frac{5}{6}\Big) - 2\Big(\frac{5}{6}\Big)^2 = \frac{25}{6}.$$
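
The same optimum can be cross-checked numerically. Below is a minimal sketch, assuming scipy is installed; the solver call is only a verification aid and is not part of Wolfe's method:

```python
from scipy.optimize import minimize

# Maximize z_x by minimizing its negative, subject to x1 + 2x2 <= 2, x >= 0.
z = lambda x: -(4*x[0] + 6*x[1] - 2*x[0]**2 - 2*x[0]*x[1] - 2*x[1]**2)
res = minimize(z, x0=[0.5, 0.5], bounds=[(0, None), (0, None)],
               constraints=[{'type': 'ineq', 'fun': lambda x: 2 - x[0] - 2*x[1]}])
print(res.x, -res.fun)  # approx. [0.3333, 0.8333] and 4.1667 = 25/6
```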

3.2 Beale’s Method

In 1959, E. M. L. Beale developed a technique for solving the quadratic programming problem that does not use the Kuhn-Tucker conditions to reach the optimum solution. His technique involves partitioning the variables into basic and non-basic ones and using classical calculus results. At each iteration, the objective function is expressed in terms of the non-basic variables only.

Let the QP problem be given in the form:

$$\text{Max. } f(x) = cx + \frac{1}{2} x^T Q x$$

subject to the constraints $Ax = b$, $x \ge 0$, where $x = (x_1, x_2, \ldots, x_{n+m})^T$, $c$ is a $1 \times (n+m)$ vector, $A$ is an $m \times (n+m)$ matrix, and $Q$ is a symmetric matrix. Without any loss of generality, every QPP with linear constraints can be written in this form.

3.2.1 Iterative procedure


Beale's iterative procedure for solving this type of QP problem can be outlined in the following steps:

Step 1. First express the given QP problem with linear constraints in the above form
by introducing slack and/or surplus variables.

Step 2. Now select arbitrarily $m$ variables as basic and the remaining $n$ variables as non-basic. With this partitioning, the constraint equation $Ax = b$ can be written as

$$(B, R)\begin{pmatrix} x_B \\ x_{NB} \end{pmatrix} = b, \quad \text{or} \quad B x_B + R x_{NB} = b,$$

where $x_B$ and $x_{NB}$ denote the basic and non-basic vectors, respectively. Also, the matrix $A$ is partitioned into the sub-matrices $B$ and $R$ corresponding to $x_B$ and $x_{NB}$, respectively. According to this partitioning, the above equation can be written as

$$x_B = B^{-1} b - B^{-1} R x_{NB}.$$

Step 3. Express the basic vector $x_B$ in terms of the non-basic vector $x_{NB}$ only, using the given and any additional constraint equations.

Step 4. Express the objective function $f(x)$ also in terms of $x_{NB}$ only, using the given and any additional constraints. Evaluating the partial derivatives $\partial f / \partial x_{NB}$ at $x_{NB} = 0$ then shows which, if any, of the non-basic variables can be increased to improve the value of the objective function.

It is also important to note here that the constraints of the new problem become

$$B^{-1} R x_{NB} \le B^{-1} b \quad (\text{since } x_B \ge 0).$$

Thus, any component of $x_{NB}$ can be increased only until $\partial f / \partial x_{NB}$ becomes zero, or until one or more components of $x_B$ are reduced to zero. This step is sketched computationally below.
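
The heart of each Beale iteration, expressing $x_B$ through $x_{NB}$ and evaluating the reduced gradient at $x_{NB} = 0$, can be sketched in a few lines. This is a minimal sketch assuming numpy; the helper name `beale_reduced_gradient` and the dense-matrix representation are our own choices, not from the notes:

```python
import numpy as np

def beale_reduced_gradient(c, Q, A, b, basic, nonbasic):
    """For max c^T x + (1/2) x^T Q x with A x = b, x >= 0:
    given index lists of basic/non-basic variables, return the gradient
    of the reduced objective f(x_NB) at x_NB = 0, plus the basic point."""
    B, R = A[:, basic], A[:, nonbasic]
    Binv_b = np.linalg.solve(B, b)    # x_B when x_NB = 0
    Binv_R = np.linalg.solve(B, R)    # x_B = Binv_b - Binv_R @ x_NB
    x0 = np.zeros(A.shape[1])
    x0[basic] = Binv_b
    g = c + Q @ x0                    # gradient of f at the basic point
    # Chain rule: df/dx_NB = g_NB - (B^{-1} R)^T g_B
    return g[nonbasic] - Binv_R.T @ g[basic], x0
```

A positive component of the returned gradient signals a non-basic variable whose increase improves $f$; the allowed increase is limited by the ratio test implied by $B^{-1}R\,x_{NB} \le B^{-1}b$ and by the zero of the derivative.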

Example 3.2: Solve the following quadratic programming problem by Beale's method.

$$\text{Max. } z = 10x_1 + 25x_2 - 10x_1^2 - x_2^2 - 4x_1 x_2$$

subject to $x_1 + 2x_2 + x_3 = 10$, $x_1 + x_2 + x_4 = 9$, $x_1, x_2, x_3, x_4 \ge 0$.

Solution:

First Iteration:

Step 1. Selecting $x_1$ and $x_2$ arbitrarily to be the basic variables, we obtain $x_1 = 8 + x_3 - 2x_4$, $x_2 = 1 - x_3 + x_4$, where $x_B = (x_1, x_2)$ and $x_{NB} = (x_3, x_4)$.

Step 2. Now, expressing $z$ in terms of $(x_3, x_4)$ gives

$$f(x_3, x_4) = 10(8 + x_3 - 2x_4) + 25(1 - x_3 + x_4) - 10(8 + x_3 - 2x_4)^2 - (1 - x_3 + x_4)^2 - 4(8 + x_3 - 2x_4)(1 - x_3 + x_4)$$

$$\frac{\partial f(x_{NB})}{\partial x_3} = 10 - 25 - 20(8 + x_3 - 2x_4) + 2(1 - x_3 + x_4) - 4(1 - x_3 + x_4) + 4(8 + x_3 - 2x_4)$$

Therefore, $\Big(\frac{\partial f}{\partial x_3}\Big)_{x_3 = 0,\, x_4 = 0} = -145.$

This indicates that the objective function will decrease if $x_3$ is increased, contrary to our desire to increase the objective function. The partial derivative with respect to $x_4$ gives a more suitable alternative:

$$\frac{\partial f(x_{NB})}{\partial x_4} = -20 + 25 - 20(-2)(8 + x_3 - 2x_4) - 2(1 - x_3 + x_4) + 8(1 - x_3 + x_4) - 4(8 + x_3 - 2x_4)$$

At the point $x_3 = 0$, $x_4 = 0$, we obtain $\frac{\partial f(x_{NB})}{\partial x_4} = 299$. This indicates that an increase in $x_4$ will certainly improve the objective function. So, we now proceed to decide how much $x_4$ should or may increase.

Step 3. If $x_4$ is increased to a value greater than 4, $x_1$ will become negative, since $x_1 = 8 + x_3 - 2x_4$ and $x_3 = 0$. The partial derivative becomes zero at $x_4 = 299/66$. Taking the minimum of $(4, 299/66)$, we find $x_4 = 4$, and the new basic variables are $x_4$ and $x_2$. We now start with the new iteration; a numerical check of this first iteration is given below.
Second Iteration:

Step 4. We start by solving for $x_2$ and $x_4$ in terms of $x_1$ and $x_3$. Thus

$$x_2 = 5 - \tfrac{1}{2}(x_1 + x_3), \quad x_4 = 4 + \tfrac{1}{2}(x_3 - x_1).$$

In this case, $x_B = (x_2, x_4)$ and $x_{NB} = (x_1, x_3)$.

Step 5. Expressing $z$ in terms of $(x_1, x_3)$ gives

$$f(x_1, x_3) = 10x_1 + 25\big[5 - \tfrac{1}{2}(x_1 + x_3)\big] - 10x_1^2 - \big[5 - \tfrac{1}{2}(x_1 + x_3)\big]^2 - 4x_1\big[5 - \tfrac{1}{2}(x_1 + x_3)\big]$$

$$\Big(\frac{\partial f}{\partial x_1}\Big)_{x_1 = 0,\, x_3 = 0} = -\frac{35}{2}, \qquad \Big(\frac{\partial f}{\partial x_3}\Big)_{x_1 = 0,\, x_3 = 0} = -\frac{15}{2}.$$

Since both partial derivatives are negative, neither $x_1$ nor $x_3$ can be introduced into the basic solution to increase the value of the objective function $z$, and thus the optimal solution has been reached. The optimal solution is given by $x_1 = x_3 = 0$, $x_2 = 5$, $x_4 = 4$, with maximum value $z^* = 25(5) - 5^2 = 100$.
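
As with Example 3.1, the result can be cross-checked with a general-purpose solver. A minimal sketch assuming scipy; this is a verification aid, not part of Beale's procedure:

```python
from scipy.optimize import minimize

# Check Example 3.2 in its original inequality form (x3, x4 are slacks).
z = lambda x: -(10*x[0] + 25*x[1] - 10*x[0]**2 - x[1]**2 - 4*x[0]*x[1])
cons = [{'type': 'ineq', 'fun': lambda x: 10 - x[0] - 2*x[1]},
        {'type': 'ineq', 'fun': lambda x: 9 - x[0] - x[1]}]
res = minimize(z, x0=[1.0, 1.0], bounds=[(0, None), (0, None)], constraints=cons)
print(res.x, -res.fun)  # approx. [0, 5] and 100
```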
