Chapter 10
Non-linear Programming
Department of Mathematics
Jadavpur University
Kolkata, India
Email: bcgiri.jumath@gmail.com
MODULE - 3: Wolfe’s Modified Simplex
Method and Beale’s Method
Step 1. First, introduce slack variables qi² in the ith constraint (i = 1, 2, ..., m) and slack
variables rj² in the jth non-negativity constraint (j = 1, 2, ..., n) to convert the
constraints into equations.
Step 2. Construct the Lagrangian function; equating its first-order partial derivatives
to zero, derive the Kuhn-Tucker conditions from the resulting equations.
Step 3. Introduce non-negative artificial variables vj (j = 1, 2, ..., n) in the Kuhn-Tucker
conditions and construct the objective function
zv = v1 + v2 + ... + vn .
Step 4. Obtain the initial basic feasible solution to the following linear programming
problem:
Min. zv = v1 + v2 + ... + vn
subject to the constraints:
∑_{k=1}^{n} c_{jk} x_k − ∑_{i=1}^{m} λ_i a_{ij} + µ_j + v_j = −c_j ,   j = 1, 2, ..., n
∑_{j=1}^{n} a_{ij} x_j + q_i² = b_i ,   i = 1, 2, ..., m
v_j , λ_i , µ_j , x_j ≥ 0,   i = 1, 2, ..., m;  j = 1, 2, ..., n
Step 5. Now, apply the two-phase simplex method in the usual manner to find an
optimum solution to the LP problem constructed in Step 4. The solution must
satisfy the above complementary slackness condition.
Step 6. The optimum solution thus obtained in Step 5 gives the optimum solution of
the given QPP as well.
Note:
• If the QPP is given in minimization form, convert it into maximization form with
≤ type constraints.
• Modify the simplex algorithm to include the complementary slackness conditions.
• The solution is obtained by using Phase I of the simplex method. As our aim is only
to obtain a feasible solution, we need not consider Phase II.
• Phase I ends with the sum of all artificial variables equal to zero, provided that a
feasible solution of the problem exists.
Example 3.1: Apply Wolfe’s method for solving the quadratic programming problem:
Max. zx = 4x1 + 6x2 − 2x1² − 2x1x2 − 2x2²
subject to x1 + 2x2 ≤ 2, x1, x2 ≥ 0.
Solution: Step 1. First, we write all the constraint inequalities with ‘≤’ sign as
Step 2. Now, we introduce the slack variables q12 , r12 , r22 . Then our problem becomes
∂L/∂x1 = 4 − 4x1 − 2x2 − λ1 + µ1 = 0
∂L/∂x2 = 6 − 2x1 − 4x2 − 2λ1 + µ2 = 0
Max. zv = −v1 − v2
subject to 4x1 + 2x2 + λ1 − µ1 + v1 = 4
2x1 + 4x2 + 2λ1 − µ2 + v2 = 6
x1 + 2x2 + s1 = 2
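As an aside (not part of the original text), the two stationarity rows of this system can be generated mechanically from the problem data; a NumPy sketch, with Qmat, c, A read off Example 3.1:

```python
import numpy as np

# Sketch: build the Step-4 stationarity rows  Q x - A^T lambda + mu = -c
# for Example 3.1, then orient each row so its right-hand side is >= 0
# (the form to which the artificial variables v_j are then attached).
Qmat = np.array([[-4.0, -2.0],
                 [-2.0, -4.0]])             # c_jk: quadratic coefficients
c = np.array([4.0, 6.0])                    # linear coefficients
A = np.array([[1.0, 2.0]])                  # constraint matrix

rows = np.hstack([Qmat, -A.T, np.eye(2)])   # columns: x1 x2 | λ1 | µ1 µ2
rhs = -c                                    # [-4, -6]
flip = rhs < 0
rows[flip] *= -1                            # multiply such rows by -1
rhs = np.where(flip, -rhs, rhs)
print(rows[0], rhs)                         # [4. 2. 1. -1. 0.] [4. 6.]
```

The oriented rows reproduce the coefficient pattern of the v1 and v2 rows in Table 3.1.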
Step 5. The initial table (Table 3.1) for Phase I is given below:

                 cj →     0     0     0     0     0    -1    -1     0
    cB   Basis     b     x1    x2    λ1    µ1    µ2    v1    v2    s1
    -1     v1      4      4     2     1    -1     0     1     0     0
    -1     v2      6      2     4     2     0    -1     0     1     0
     0     s1      2      1     2     0     0     0     0     0     1
  zv = −10     zj − cj   -6    -6    -3     1     1     0     0     0

(x1 enters the basis; v1 leaves.)
Step 6. Since µ1 = 0, x1 is introduced into the basic solution with v1 as the
leaving variable. Notice that λ1 cannot enter, because s1 is a basic variable
(restricted-basis rule).
This gives the following transformed table (Table 3.2) by our usual rules of
transformation.
                 cj →     0     0     0     0     0    -1    -1     0
    cB   Basis     b     x1    x2    λ1    µ1    µ2    v1    v2    s1
     0     x1      1      1   1/2   1/4  -1/4     0   1/4     0     0
    -1     v2      4      0     3   3/2   1/2    -1  -1/2     1     0
     0     s1      1      0   3/2  -1/4   1/4     0  -1/4     0     1
  zv = −4      zj − cj    0    -3  -3/2  -1/2     1   3/2     0     0

(x2 enters the basis; s1 leaves.)
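The “usual rules of transformation” can be replayed numerically; an illustrative NumPy pivot on Table 3.1 (pivot column x1, pivot row v1):

```python
import numpy as np

# Standard simplex pivot applied to the body of Table 3.1.
# Columns: b | x1 x2 λ1 µ1 µ2 v1 v2 s1
T = np.array([[4., 4., 2., 1., -1., 0., 1., 0., 0.],   # v1 row
              [6., 2., 4., 2., 0., -1., 0., 1., 0.],   # v2 row
              [2., 1., 2., 0., 0., 0., 0., 0., 1.]])   # s1 row

r, c = 0, 1                       # pivot on the x1 entry of the v1 row
T[r] /= T[r, c]                   # scale the pivot row to make the pivot 1
for i in range(len(T)):
    if i != r:
        T[i] -= T[i, c] * T[r]    # eliminate x1 from the other rows

print(T[0])   # [1. 1. 0.5 0.25 -0.25 0. 0.25 0. 0.] -- the x1 row of Table 3.2
```

The three resulting rows match the x1, v2, and s1 rows of Table 3.2.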
Step 7. Since µ2 = 0, x2 is introduced into the basic solution with s1 as the leaving
variable; a further pivot then brings λ1 into the basis with v2 leaving. We then
get the final improved table (Table 3.3) as given below.
                 cj →     0     0     0     0     0    -1    -1     0
    cB   Basis     b     x1    x2    λ1    µ1    µ2    v1    v2    s1
     0     x1     1/3     1     0     0  -1/3   1/6   1/3  -1/6     0
     0     λ1      1      0     0     1     0  -1/2     0   1/2    -1
     0     x2     5/6     0     1     0   1/6 -1/12  -1/6  1/12   1/2
  zv = 0       zj − cj    0     0     0     0     0     1     1     0
Here zj − cj ≥ 0 for all j. Hence this last table (Table 3.3) gives the
optimal solution for Phase I. Since zv = 0, the solution is also feasible.
Thus the required optimal solution is given by x1* = 1/3, x2* = 5/6. The optimal
value zx* can be computed from the original objective function as follows:
zx* = 4(1/3) + 6(5/6) − 2(1/3)² − 2(1/3)(5/6) − 2(5/6)² = 25/6 .
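As an illustrative check (not part of the original solution), the answer can be verified against the Kuhn-Tucker conditions directly:

```python
# Verify that x1 = 1/3, x2 = 5/6, lambda1 = 1, mu1 = mu2 = 0 satisfies the
# Kuhn-Tucker conditions of Example 3.1 and reproduces the optimal value 25/6.
x1, x2, lam, mu1, mu2 = 1/3, 5/6, 1.0, 0.0, 0.0

# Stationarity: the two partial-derivative equations of Step 3
assert abs(4 - 4*x1 - 2*x2 - lam + mu1) < 1e-12
assert abs(6 - 2*x1 - 4*x2 - 2*lam + mu2) < 1e-12

# Primal feasibility and complementary slackness
s1 = 2 - (x1 + 2*x2)
assert abs(lam * s1) < 1e-12 and abs(mu1 * x1) < 1e-12 and abs(mu2 * x2) < 1e-12

z = 4*x1 + 6*x2 - 2*x1**2 - 2*x1*x2 - 2*x2**2
print(z)   # 25/6 ≈ 4.1667
```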
In 1959, E.M.L. Beale developed a technique for solving the quadratic program-
ming problem that does not use the Kuhn-Tucker conditions to reach the optimum
solution. His technique involves partitioning the variables into basic and non-basic
ones and using classical calculus results. At each iteration, the objective function is
expressed in terms of the non-basic variables only.
Let the QP problem be given in the form:
Max. f(x) = cx + (1/2) xᵀQx,
subject to the constraints: Ax = b, x ≥ 0, where x = (x1, x2, ..., x_{n+m})ᵀ, c is a 1 × n
vector, A is an m × n matrix, and Q is a symmetric matrix.
Without any loss of generality, we can state that every QPP with linear constraints can
be written in this form.
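For concreteness, the objective in this form can be evaluated numerically; a sketch using assumed data borrowed from Example 3.1 of the Wolfe section (Q = [[−4, −2], [−2, −4]] reproduces the quadratic part −2x1² − 2x1x2 − 2x2²):

```python
import numpy as np

# Evaluate f(x) = c x + (1/2) x^T Q x at the optimum found in Example 3.1.
c = np.array([4.0, 6.0])
Q = np.array([[-4.0, -2.0],
              [-2.0, -4.0]])
x = np.array([1/3, 5/6])

f = c @ x + 0.5 * x @ Q @ x
print(f)   # 25/6 ≈ 4.1667, the same optimal value as before
```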
Step 1. First express the given QP problem with linear constraints in the above form
by introducing slack and/or surplus variables.
Step 2. Now select arbitrarily m variables as basic and the remaining n variables as
non-basic. With this partitioning, the constraint equation Ax = b can be writ-
ten as
(B, R) (xB, xNB)ᵀ = b,  or  B xB + R xNB = b,
where xB and xNB denote the basic and non-basic vectors, respectively. Also,
the matrix A is partitioned to sub-matrices B and R corresponding to xB and
xNB , respectively.
According to this partitioning, the above equation can be written as
xB = B⁻¹b − B⁻¹R xNB .
Step 3. Express the basic vector xB in terms of the non-basic vector xNB only, using
the given and additional constraint equations, if any.
Step 4. Express the objective function f(x) also in terms of xNB only, using the given
and additional constraints, if any.
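Steps 2–3 can be sketched in NumPy. The constraint rows below are back-derived from the substitutions x2 = 5 − ½(x1 + x3), x4 = 4 + ½(x3 − x1) appearing in Example 3.2, i.e., x1 + 2x2 + x3 = 10 and x1 − x3 + 2x4 = 8; the partition choice is for illustration:

```python
import numpy as np

# Partition A into (B, R): the columns of the basic variables form B.
A = np.array([[1., 2.,  1., 0.],
              [1., 0., -1., 2.]])
b = np.array([10., 8.])

basic, nonbasic = [1, 3], [0, 2]          # x2, x4 basic; x1, x3 non-basic
B, R = A[:, basic], A[:, nonbasic]

x_nb = np.zeros(2)                        # non-basic variables held at zero
x_b = np.linalg.solve(B, b - R @ x_nb)    # x_B = B^{-1}(b - R x_NB)
print(x_b)                                # [5. 4.] -> x2 = 5, x4 = 4
```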
Thus, we observe that by increasing the value of any of the non-basic variables
(xNB ), the value of the objective function can be improved.
It is also important to note here that the constraints on the new problem be-
come xB = B⁻¹b − B⁻¹R xNB ≥ 0.
Example 3.2: Solve the following quadratic programming problem by Beale’s method.
This indicates that the objective function will decrease if x3 is increased. This
is contrary to our desire to increase the objective function. The partial
derivative with respect to x4 gives a more suitable alternative:
∂f(xNB)/∂x4 = −20 + 25 − 20(−2)(8 + x3 − 2x4) − 2(1 − x3 + x4)
              + 8(1 − x3 + x4) − 4(8 + x3 − 2x4)

At the point x3 = 0, x4 = 0, we obtain ∂f(xNB)/∂x4 = 299.
This indicates that an increase in x4 will certainly improve the objective function.
So, we now proceed to decide how much x4 should or may be increased.
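As a quick numerical cross-check (illustrative only), the partial-derivative expression above can be evaluated directly:

```python
# Evaluate the given expression for the partial derivative of f(x_NB)
# with respect to x4 at the point x3 = x4 = 0.
def df_dx4(x3, x4):
    return (-20 + 25
            - 20 * (-2) * (8 + x3 - 2 * x4)
            - 2 * (1 - x3 + x4)
            + 8 * (1 - x3 + x4)
            - 4 * (8 + x3 - 2 * x4))

print(df_dx4(0.0, 0.0))   # 299.0, as stated in the text
```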
x2 = 5 − 1/2(x1 + x3 ), x4 = 4 + 1/2(x3 − x1 ).
Since both partial derivatives are negative, neither x1 nor x3 can be
introduced into the basic solution to increase the value of the objective function
z; thus the optimal solution has been obtained. The optimal solution is
given by x1 = x3 = 0, x2 = 5, x4 = 4.
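A final illustrative check of this answer against the substitutions given above:

```python
# With the non-basic variables x1 = x3 = 0, the substitutions from the
# example yield the optimal basic values directly.
x1 = x3 = 0.0
x2 = 5 - 0.5 * (x1 + x3)
x4 = 4 + 0.5 * (x3 - x1)
print(x2, x4)   # 5.0 4.0 -- the optimal solution x2 = 5, x4 = 4
```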