
NMIMS-MPSTE Mumbai

Institutional Elective: Optimization For Decision


Notes on Non-Linear Programming: Unit-01

The Basic Definition


The Non-Linear Programming Problem (NLPP):
Optimization problems in which the objective function, the constraints, or both are non-linear, together with non-negativity restrictions on the decision variables, are known as non-linear programming problems, or NLPP.

Objectives of Unit-01:

a. Unconstrained Optimization of a Non-Linear Problem (three variables, no constraints).

b. Optimization of NLPP with Equality Constraints (either 01 or 02 linear constraints with 3 variables).

c. Optimization of NLPP with Inequality Constraints using Karush-Kuhn-Tucker (KKT) Conditions (either 01 or 02 linear constraints with 2 variables).

d. Verification of KKT conditions for a given NLPP using Graphical Method (2 linear/non-linear constraints and 2 variables only).

Unconstrained NLPP
Definition :
An NLPP with a non-linear objective function whose only restrictions are the non-negativity of the decision variables is called an Unconstrained NLPP. Mathematically one has

Optimize f (x1 , x2 , x3 , · · ·, xn )

Subjected To,

x1 , x2 , x3 , · · ·, xn ≥ 0

We shall use here the familiar process for finding the maxima or minima of y = f(x), extended to n independent variables. For a single variable one solves the equation dy/dx = 0 to obtain the candidate point of maxima or minima. Since several independent variables are involved here, we instead solve the following n simultaneous equations built from the partial derivatives:

$$\frac{\partial f}{\partial x_i}=0;\quad \forall\, i=1(1)n$$

The solution of these equations yields a point

X0 = (a1 , a2 , · · ·, an )

In order to decide the nature of this point we shall define a matrix called the Hessian Matrix, denoted by H, of order n × n as shown below:

$$
H=\begin{bmatrix}
\dfrac{\partial^2 f}{\partial x_1^2} & \dfrac{\partial^2 f}{\partial x_1\,\partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_1\,\partial x_n}\\[4pt]
\dfrac{\partial^2 f}{\partial x_2\,\partial x_1} & \dfrac{\partial^2 f}{\partial x_2^2} & \cdots & \dfrac{\partial^2 f}{\partial x_2\,\partial x_n}\\[4pt]
\vdots & \vdots & \ddots & \vdots\\[4pt]
\dfrac{\partial^2 f}{\partial x_n\,\partial x_1} & \dfrac{\partial^2 f}{\partial x_n\,\partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_n^2}
\end{bmatrix}_{n\times n}
$$
Then one shall compute the leading principal minors of the Hessian Matrix H, denoted by:

$$
H_1=\frac{\partial^2 f}{\partial x_1^2},\qquad
H_2=\begin{vmatrix}
\frac{\partial^2 f}{\partial x_1^2} & \frac{\partial^2 f}{\partial x_1\,\partial x_2}\\[2pt]
\frac{\partial^2 f}{\partial x_1\,\partial x_2} & \frac{\partial^2 f}{\partial x_2^2}
\end{vmatrix},\qquad
H_3=\begin{vmatrix}
\frac{\partial^2 f}{\partial x_1^2} & \frac{\partial^2 f}{\partial x_1\,\partial x_2} & \frac{\partial^2 f}{\partial x_1\,\partial x_3}\\[2pt]
\frac{\partial^2 f}{\partial x_2\,\partial x_1} & \frac{\partial^2 f}{\partial x_2^2} & \frac{\partial^2 f}{\partial x_2\,\partial x_3}\\[2pt]
\frac{\partial^2 f}{\partial x_3\,\partial x_1} & \frac{\partial^2 f}{\partial x_3\,\partial x_2} & \frac{\partial^2 f}{\partial x_3^2}
\end{vmatrix},\ \cdots,\ H_n=|H|
$$
Decision:
The nature of X0 is decided by the signs of these minors, checked for all i = 1(1)n:

If Hi > 0 for every i: X0 is a minimum.

If (−1)^i Hi > 0 for every i (the signs alternate, starting negative): X0 is a maximum.

Otherwise: X0 is a saddle point, i.e. neither a maximum nor a minimum.

In particular, for n = 3, X0 is a minimum if H1 , H2 , H3 > 0 and a maximum if H1 < 0, H2 > 0, H3 < 0.

Finally, substitute X0 = (a1 , a2 , · · ·, an ) and compute f (a1 , a2 , · · ·, an ), but only if X0 is a maximum or a minimum.

Note: It is a well known fact that

$$\frac{\partial^2 f}{\partial x_i\,\partial x_j}=\frac{\partial^2 f}{\partial x_j\,\partial x_i};\quad \forall\, i,j$$

Therefore the Hessian matrix H will always be a symmetric matrix, and hence so will the corresponding principal minors.

Problems & Solutions

Optimize the following unconstrained NLPPs given by:

SN  Problem                                                  Solution

1.  Z = x1² + x2² + x3² − 4x1 − 8x2 − 12x3 + 100             ZMin = 44; X0 = (2, 4, 6)

2.  Z = x1² + x2² + x3² − 6x1 − 8x2 − 10x3                   ZMin = −50; X0 = (3, 4, 5)

3.  Z = −x1² − x2² − x3² + x1 + 2x2 + x2x3                   ZMax = 19/12; X0 = (1/2, 4/3, 2/3)

4.  Z = −x1² − 3x2² − 3x3² + 2x1 + x3 + 3x2x3                ZMax = 10/9; X0 = (1, 1/9, 2/9)
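The whole recipe above can be cross-checked mechanically. Here is a minimal sketch using SymPy (an assumed tool, not part of the notes) on problem 1 of the table: solve the first-order conditions, form the Hessian, and inspect its leading principal minors.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
xs = (x1, x2, x3)
f = x1**2 + x2**2 + x3**2 - 4*x1 - 8*x2 - 12*x3 + 100   # problem 1

# Stationary point: solve the n simultaneous first-order conditions
sol = sp.solve([sp.diff(f, v) for v in xs], xs, dict=True)[0]

# Hessian and its leading principal minors H1, H2, H3
H = sp.hessian(f, xs)
minors = [H[:k, :k].det() for k in (1, 2, 3)]

print(sol)           # {x1: 2, x2: 4, x3: 6}
print(minors)        # [2, 4, 8] -> all positive, so X0 is a minimum
print(f.subs(sol))   # 44
```

Since every minor is positive, the decision rule classifies X0 = (2, 4, 6) as a minimum with ZMin = 44, agreeing with the table.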

NLPP with Linear & Equality Constraints


In this section we shall learn the NLPP associated with n decision variables and m equality constraints.

a. We shall restrict to only second degree non-linear objective functions with n decision variables.

b. One may note that whenever one has m consistent linear equations in n variables, the following cases may arise:

1. Whenever n = m, the system has a unique solution. In that case there is no question of optimizing the objective function, since n − m = 0 leaves no freedom of choice.

2. Whenever n > m, the system has infinitely many solutions. Only in this case does the question of optimizing the objective function arise, since there are n − m > 0 degrees of freedom.

3. For computational simplicity we shall consider the following cases:

3a. n = 2 and m = 1.

3b. n = 3 and m = 1.

3c. n = 3 and m = 2.

4. We shall apply only the Lagrangian Multiplier Method to optimize the above cases of NLPP. This is an extension of the procedure for solving an NLPP without constraints, already discussed above. The main steps of the general procedure are as follows:

Consider a general NLPP given by

Optimize Z = f (x1 , x2 , · · ·, xn )

Subjected to

gi (x1 , x2 , · · ·, xn ) = bi
x1 , x2 , · · ·, xn ≥ 0

Where gi (x1 , x2 , · · ·, xn ) = bi is a linear equation in n decision variables and i = 1(1)m.

4a. We shall construct a single optimizing function called the Lagrangian, denoted by L, with the help of m Lagrangian Multipliers denoted by λi, as follows:

$$
L(x_1,x_2,\cdots,x_n,\lambda_1,\lambda_2,\cdots,\lambda_m)=f(x_1,x_2,\cdots,x_n)-\sum_{i=1}^{m}\lambda_i\,\big(g_i(x_1,x_2,\cdots,x_n)-b_i\big)
$$

This non-linear function is to be optimized subjected to the conditions

x1 , x2 , · · ·, xn ≥ 0, λi ∈ R.

4b. Apply the conditions ∂L/∂xi = 0 and ∂L/∂λi = 0, and solve the resulting system to obtain the point

X0 = (x1 , x2 , · · ·, xn , λ1 , λ2 , · · ·, λm )

4c. Construct the Bordered Hessian Matrix, denoted by H^B, defined as shown below:

$$
H^B=\begin{bmatrix} O & P\\ P^{T} & Q \end{bmatrix}_{(m+n)\times(m+n)}
$$

where O is the m × m zero matrix,

$$
P=\begin{bmatrix}
\frac{\partial g_1}{\partial x_1} & \frac{\partial g_1}{\partial x_2} & \cdots & \frac{\partial g_1}{\partial x_n}\\[2pt]
\frac{\partial g_2}{\partial x_1} & \frac{\partial g_2}{\partial x_2} & \cdots & \frac{\partial g_2}{\partial x_n}\\[2pt]
\vdots & \vdots & & \vdots\\[2pt]
\frac{\partial g_m}{\partial x_1} & \frac{\partial g_m}{\partial x_2} & \cdots & \frac{\partial g_m}{\partial x_n}
\end{bmatrix}_{m\times n},
\qquad
Q=\begin{bmatrix}
\frac{\partial^2 L}{\partial x_1^2} & \cdots & \frac{\partial^2 L}{\partial x_1\,\partial x_n}\\[2pt]
\vdots & \ddots & \vdots\\[2pt]
\frac{\partial^2 L}{\partial x_n\,\partial x_1} & \cdots & \frac{\partial^2 L}{\partial x_n^2}
\end{bmatrix}_{n\times n}
$$

4d-A. Starting with the principal minor of order 2m + 1, we check the signs of the last n − m leading principal minors of H^B. If these signs alternate, the minor of order 2m + 1 carrying the sign of (−1)^{m+1}, we shall claim the point X0 is a maximum.

4d-B. If instead all of these n − m principal minors carry the same sign, namely that of (−1)^m, we shall claim the point X0 is a minimum.

Important Note:

Case-a: n = 2, m = 1
If Z is a function of two decision variables x1 , x2 associated with one linear constraint, then we get only a third order Bordered Hessian H^B. The point X0 is a maximum or a minimum according as Δ3 is positive or negative respectively, where Δ3 = |H^B|.
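Case-a is easy to verify numerically. Taking problem 2 from the problems table later in this section (Z = 4x1 + 8x2 − x1² − x2² subject to x1 + x2 = 4), the Lagrangian's second partials are constant, so the bordered Hessian can be written down directly; a small NumPy sketch (NumPy is an assumed tool, not part of the notes):

```python
import numpy as np

# Z = 4x1 + 8x2 - x1^2 - x2^2, constraint h: x1 + x2 = 4 (n = 2, m = 1)
# L = Z - lam*(x1 + x2 - 4); here H^B has constant entries.
HB = np.array([[0.,  1.,  1.],    # [0,       dh/dx1,  dh/dx2]
               [1., -2.,  0.],    # [dh/dx1,  Lx1x1,   Lx1x2 ]
               [1.,  0., -2.]])   # [dh/dx2,  Lx2x1,   Lx2x2 ]

d3 = np.linalg.det(HB)
print(d3)   # ≈ 4 > 0, so the stationary point (x1, x2) = (1, 3) is a maximum
```

Δ3 > 0 classifies the stationary point as a maximum, consistent with the tabulated answer ZMax = 18 at (1, 3).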

Case-b: n = 3, m = 1
If Z is a function of three decision variables x1 , x2 , x3 associated with one linear constraint h(x1 , x2 , x3 ) = b, then we get a fourth order Bordered Hessian H^B. One shall compute the leading principal minors of orders 3 and 4, denoted by Δ3 , Δ4, shown below:

$$
\Delta_3=\begin{vmatrix}
0 & \frac{\partial h}{\partial x_1} & \frac{\partial h}{\partial x_2}\\[2pt]
\frac{\partial h}{\partial x_1} & \frac{\partial^2 L}{\partial x_1^2} & \frac{\partial^2 L}{\partial x_1\,\partial x_2}\\[2pt]
\frac{\partial h}{\partial x_2} & \frac{\partial^2 L}{\partial x_1\,\partial x_2} & \frac{\partial^2 L}{\partial x_2^2}
\end{vmatrix};
\qquad
\Delta_4=\begin{vmatrix}
0 & \frac{\partial h}{\partial x_1} & \frac{\partial h}{\partial x_2} & \frac{\partial h}{\partial x_3}\\[2pt]
\frac{\partial h}{\partial x_1} & \frac{\partial^2 L}{\partial x_1^2} & \frac{\partial^2 L}{\partial x_1\,\partial x_2} & \frac{\partial^2 L}{\partial x_1\,\partial x_3}\\[2pt]
\frac{\partial h}{\partial x_2} & \frac{\partial^2 L}{\partial x_1\,\partial x_2} & \frac{\partial^2 L}{\partial x_2^2} & \frac{\partial^2 L}{\partial x_2\,\partial x_3}\\[2pt]
\frac{\partial h}{\partial x_3} & \frac{\partial^2 L}{\partial x_3\,\partial x_1} & \frac{\partial^2 L}{\partial x_3\,\partial x_2} & \frac{\partial^2 L}{\partial x_3^2}
\end{vmatrix}
$$

The point X0 would be a maximum or a minimum as per the following rule:

If Δ3 < 0 and Δ4 < 0, then the point X0 is a Minimum.

If Δ3 > 0 and Δ4 < 0, then the point X0 is a Maximum.

Case-c: n = 3, m = 2
If Z is a function of three decision variables x1 , x2 , x3 associated with two linear constraints h1 = b1 and h2 = b2, then we get a fifth order Bordered Hessian H^B. One shall compute the single principal minor of order 2m + 1 = 2(2) + 1 = 5, denoted by Δ5 = |H^B_{5×5}|, shown below:

$$
|H^B_{5\times5}|=\Delta_5=\begin{vmatrix}
0 & 0 & \frac{\partial h_1}{\partial x_1} & \frac{\partial h_1}{\partial x_2} & \frac{\partial h_1}{\partial x_3}\\[2pt]
0 & 0 & \frac{\partial h_2}{\partial x_1} & \frac{\partial h_2}{\partial x_2} & \frac{\partial h_2}{\partial x_3}\\[2pt]
\frac{\partial h_1}{\partial x_1} & \frac{\partial h_2}{\partial x_1} & \frac{\partial^2 L}{\partial x_1^2} & \frac{\partial^2 L}{\partial x_1\,\partial x_2} & \frac{\partial^2 L}{\partial x_1\,\partial x_3}\\[2pt]
\frac{\partial h_1}{\partial x_2} & \frac{\partial h_2}{\partial x_2} & \frac{\partial^2 L}{\partial x_1\,\partial x_2} & \frac{\partial^2 L}{\partial x_2^2} & \frac{\partial^2 L}{\partial x_2\,\partial x_3}\\[2pt]
\frac{\partial h_1}{\partial x_3} & \frac{\partial h_2}{\partial x_3} & \frac{\partial^2 L}{\partial x_3\,\partial x_1} & \frac{\partial^2 L}{\partial x_3\,\partial x_2} & \frac{\partial^2 L}{\partial x_3^2}
\end{vmatrix}
$$

The point X0 would be a maximum or a minimum as per the following rule (here n − m = 1, so only Δ5 is checked; its sign must be that of (−1)^m = +1 for a minimum and of (−1)^n = −1 for a maximum):

If Δ5 > 0, then the point X0 is a Minimum.

If Δ5 < 0, then the point X0 is a Maximum.

To evaluate the determinant of a fifth order bordered Hessian matrix one may apply the Laplace expansion along the first two columns, as shown below:

$$
|H^B_{5\times5}|=\Delta_5=\begin{vmatrix}
0 & 0 & a_3 & a_4 & a_5\\
0 & 0 & b_3 & b_4 & b_5\\
a_3 & b_3 & c_3 & c_4 & c_5\\
a_4 & b_4 & d_3 & d_4 & d_5\\
a_5 & b_5 & e_3 & e_4 & e_5
\end{vmatrix}
$$

$$
=(-1)^{(1+2)+(3+4)}\begin{vmatrix} a_3 & b_3\\ a_4 & b_4\end{vmatrix}
\begin{vmatrix} a_3 & a_4 & a_5\\ b_3 & b_4 & b_5\\ e_3 & e_4 & e_5\end{vmatrix}
+(-1)^{(1+2)+(3+5)}\begin{vmatrix} a_3 & b_3\\ a_5 & b_5\end{vmatrix}
\begin{vmatrix} a_3 & a_4 & a_5\\ b_3 & b_4 & b_5\\ d_3 & d_4 & d_5\end{vmatrix}
+(-1)^{(1+2)+(4+5)}\begin{vmatrix} a_4 & b_4\\ a_5 & b_5\end{vmatrix}
\begin{vmatrix} a_3 & a_4 & a_5\\ b_3 & b_4 & b_5\\ c_3 & c_4 & c_5\end{vmatrix}
$$

Each term pairs a 2 × 2 minor taken from the first two columns (rows i < j chosen from {3, 4, 5}) with its complementary 3 × 3 minor, the sign being (−1)^{(1+2)+(i+j)}; row pairs involving rows 1 or 2 contribute nothing, since the corresponding 2 × 2 minors contain only zeros.
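The expansion can be sanity-checked numerically against a full determinant. A NumPy sketch with illustrative random integer entries (NumPy is an assumed tool, not part of the notes):

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.integers(-3, 4, size=3).astype(float)       # border row (a3, a4, a5)
b = rng.integers(-3, 4, size=3).astype(float)       # border row (b3, b4, b5)
Q = rng.integers(-3, 4, size=(3, 3)).astype(float)  # rows (c3..), (d3..), (e3..)

# Assemble the bordered form [[0, 0, a], [0, 0, b], [a^T, b^T, Q]]
H = np.zeros((5, 5))
H[0, 2:], H[1, 2:] = a, b
H[2:, 0], H[2:, 1] = a, b
H[2:, 2:] = Q

det = np.linalg.det
m2 = lambda i, j: det(np.array([[a[i], b[i]], [a[j], b[j]]]))  # 2x2 border minor
m3 = lambda r: det(np.vstack([a, b, Q[r]]))                    # complementary 3x3

# Laplace expansion along the first two columns, signs (+, -, +):
laplace = m2(0, 1) * m3(2) - m2(0, 2) * m3(1) + m2(1, 2) * m3(0)

print(np.isclose(laplace, det(H)))   # True
```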

Problems on Linear Equality Constraints

Problems & Solutions

Optimize the following NLPPs, with x1 , x2 , x3 ≥ 0, using the Lagrangian Multiplier Method:

SN  Objective Function                                The Constraints                      The Solution

1.  Z = 6x1² + 5x2²                                   x1 + 5x2 = 7                         x1 = 7/31, x2 = 42/31, ZMin = 294/31

2.  Z = 4x1 + 8x2 − x1² − x2²                         x1 + x2 = 4                          x1 = 1, x2 = 3, ZMax = 18

3.  Z = x1² + x2² + x3² − 10x1 − 6x2 − 4x3            x1 + x2 + x3 = 7                     x1 = 4, x2 = 2, x3 = 1, ZMin = −35

4.  Z = 2x1² + x2² + 3x3² + 10x1 + 8x2 + 6x3 − 100    x1 + x2 + x3 = 20                    x1 = 5, x2 = 11, x3 = 4, ZMin = 281

5.  Z = 12x1 + 8x2 + 6x3 − x1² − x2² − x3² − 23       x1 + x2 + x3 = 10                    x1 = 5, x2 = 3, x3 = 2, ZMax = 35

6.  Z = 4x1² + 2x2² + x3² − 4x1x2                     x1 + x2 + x3 = 15; 2x1 − x2 + 2x3 = 20   x1 = 11/3, x2 = 10/3, x3 = 8, ZMin = 820/9

7.  Z = x1² + x2² + x3²                               x1 + x2 + 3x3 = 2; 5x1 + 2x2 + x3 = 5    x1 = 37/46, x2 = 16/46, x3 = 13/46, ZMin = 39/46
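The Lagrangian step 4b can be carried out symbolically. A minimal SymPy sketch (an assumed tool, not part of the notes) for problem 3 of the table: differentiate L with respect to every variable and the multiplier, and solve the resulting system.

```python
import sympy as sp

x1, x2, x3, lam = sp.symbols('x1 x2 x3 lam', real=True)
f = x1**2 + x2**2 + x3**2 - 10*x1 - 6*x2 - 4*x3   # problem 3
g = x1 + x2 + x3 - 7                              # constraint g = 0

L = f - lam*g
eqs = [sp.diff(L, v) for v in (x1, x2, x3, lam)]  # stationarity in x and lam
sol = sp.solve(eqs, (x1, x2, x3, lam), dict=True)[0]

print(sol)           # {x1: 4, x2: 2, x3: 1, lam: -2}
print(f.subs(sol))   # -35
```

The recovered point (4, 2, 1) and value ZMin = −35 agree with the table; the multiplier λ = −2 is the rate of change of the optimum with respect to the right-hand side b = 7.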

NLPP with Inequality Constraints

Consider a general NLPP given by

Maximize Z = f (x1 , x2 , · · ·, xn )

Subjected to

gi (x1 , x2 , · · ·, xn ) ≤ bi
x1 , x2 , · · ·, xn ≥ 0

Where gi (x1 , x2 , · · ·, xn ) ≤ bi are inequalities in n decision variables and i = 1(1)m.

As per the Lagrangian procedure for equality constraints, let us first construct the generalized Lagrangian function with the help of m multipliers.

One can observe that in order to construct the Lagrangian it is mandatory to have equality constraints; therefore one shall convert the inequality constraints into equalities using m slack variables S1², S2², · · ·, Sm². The squares of the slack variables are used to ensure their non-negativity. Therefore one will have the following Lagrangian function:

$$
L(x_1,\cdots,x_n,\lambda_1,\cdots,\lambda_m,S_1,\cdots,S_m)=f(x_1,\cdots,x_n)-\sum_{i=1}^{m}\lambda_i\big(g_i(x_1,\cdots,x_n)-b_i+S_i^2\big)
$$

As per the Lagrangian Multiplier method, the necessary conditions for optimization are given below:

$$\frac{\partial L}{\partial x_i}=0;\ i=1(1)n\qquad
\frac{\partial L}{\partial \lambda_i}=0;\ i=1(1)m\qquad
\frac{\partial L}{\partial S_i}=0;\ i=1(1)m$$

Applying the above conditions one will have

$$\frac{\partial f}{\partial x_j}-\sum_{i=1}^{m}\lambda_i\frac{\partial g_i}{\partial x_j}=0;\quad j=1(1)n$$

$$\frac{\partial L}{\partial \lambda_i}=-\big(g_i(x_1,\cdots,x_n)-b_i+S_i^2\big)=0;\quad i=1(1)m$$

$$\frac{\partial L}{\partial S_i}=-2S_i\lambda_i=0\ \Rightarrow\ S_i\lambda_i=0;\quad i=1(1)m$$

One method of solving the above NLPP, derived by Karush, Kuhn and Tucker, requires the satisfaction of the conditions below, usually called the KKT conditions (or just KT conditions):

$$\frac{\partial f}{\partial x_j}-\sum_{i=1}^{m}\lambda_i\frac{\partial g_i}{\partial x_j}=0;\quad j=1(1)n$$

$$\lambda_i\big(g_i(x_1,\cdots,x_n)-b_i\big)=0;\quad i=1(1)m$$

$$g_i(x_1,\cdots,x_n)\le b_i$$

$$x_j\ge 0,\ \lambda_i\ge 0;\quad j=1(1)n,\ i=1(1)m$$

Note: In the case of a standard minimization problem

Minimize Z = f (x1 , x2 , · · ·, xn )

Subjected to

gi (x1 , x2 , · · ·, xn ) ≥ bi
x1 , x2 , · · ·, xn ≥ 0

Where gi (x1 , x2 , · · ·, xn ) ≥ bi are inequalities in n decision variables and i = 1(1)m.


The KT conditions are given below:

$$\frac{\partial f}{\partial x_j}-\sum_{i=1}^{m}\lambda_i\frac{\partial g_i}{\partial x_j}=0;\quad j=1(1)n$$

$$\lambda_i\big(g_i(x_1,\cdots,x_n)-b_i\big)=0;\quad i=1(1)m$$

$$g_i(x_1,\cdots,x_n)\ge b_i$$

$$x_j,\ \lambda_i\ge 0;\quad j=1(1)n,\ i=1(1)m$$

For computational ease we shall restrict ourselves to either an NLPP of two decision variables and one constraint, i.e. n = 2, m = 1, or an NLPP of two decision variables and two constraints, i.e. n = m = 2, ONLY.

Working Examples & Practice Problems

1. Considering the case of two decision variables and a single constraint, here is a problem of optimization to be solved through the KT conditions only:

Maximize Z = 10x1 + 4x2 − 2x21 − x22

Subjected to

2x1 + x2 ≤ 5
x1 , x2 ≥ 0

Solution:

S-01: Let us first convert the original problem into one with an equality constraint, as given below:

Maximize Z = 10x1 + 4x2 − 2x21 − x22

Subjected to

2x1 + x2 + S12 = 5
x1 , x2 ≥ 0

S-02: Write the Lagrangian function L given below:

L = 10x1 + 4x2 − 2x1² − x2² − λ(2x1 + x2 + S1² − 5)

S-03: Applying the KT conditions one shall have the following system:

∂L/∂x1 = 10 − 4x1 − 2λ = 0
∂L/∂x2 = 4 − 2x2 − λ = 0
λ(2x1 + x2 + S1² − 5) = 0
S1λ = 0
2x1 + x2 ≤ 5
x1 , x2 , λ ≥ 0

S-04: As per the third and fourth conditions stated above, the following two cases arise:

Case-A:
λ = 0 ⇒ S1² = 5 − 2x1 − x2 ≥ 0 ⇔ 2x1 + x2 ≤ 5 (constraint may be inactive)

Case-B:
λ > 0 ⇒ S1 = 0 ⇔ 2x1 + x2 = 5 (constraint active)

S-05: Solving both cases one gets:

Case A: λ = 0 gives 4x1 = 10 and 2x2 = 4, i.e. x1 = 5/2, x2 = 2. Since 2(5/2) + 2 = 7 ≰ 5, the solution is discarded.

Case B: λ > 0 gives 2x1 + x2 = 5; eliminating λ from the first two conditions gives 2x1 − 2x2 = 1. Solving the pair yields x1 = 11/6, x2 = 4/3, λ = 4/3, which satisfies all the KT conditions, with ZMax = 91/6.

Hence the solution of the above NLPP by the KT conditions is given by

x1 = 11/6, x2 = 4/3, λ = 4/3, ZMax = 91/6
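As a numerical cross-check, the same problem can be handed to a general-purpose constrained solver; a minimal sketch using SciPy (an assumed tool, not part of the notes), maximizing by minimizing −Z:

```python
from scipy.optimize import minimize

# Maximize Z = 10x1 + 4x2 - 2x1^2 - x2^2 by minimizing -Z
obj = lambda x: -(10*x[0] + 4*x[1] - 2*x[0]**2 - x[1]**2)
cons = [{'type': 'ineq', 'fun': lambda x: 5 - 2*x[0] - x[1]}]   # 2x1 + x2 <= 5
res = minimize(obj, x0=[0.0, 0.0], bounds=[(0, None), (0, None)],
               constraints=cons)

print(res.x, -res.fun)   # ≈ [1.8333, 1.3333] and 15.1667, i.e. (11/6, 4/3) and 91/6
```

The solver recovers the Case-B point x1 = 11/6, x2 = 4/3 with ZMax = 91/6 found above; the objective is concave here, so this KKT point is the global maximum.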

2. Considering the case of two decision variables and two constraints, here is a problem of optimization to be solved through the KT conditions only:

Maximize Z = 10x1 + 10x2 − x21 − x22

Subjected to

x1 + x2 ≤ 8
− x1 + x2 ≤ 5
x1 , x2 ≥ 0

Solution:
S-01: Let us first convert the original problem into one with equality constraints, as given below:

Maximize Z = 10x1 + 10x2 − x21 − x22

Subjected to

x1 + x2 + S12 = 8
− x1 + x2 + S22 = 5
x1 , x2 ≥ 0

S-02: Write the Lagrangian function L given below:

L = 10x1 + 10x2 − x21 − x22 − λ1 (x1 + x2 + S12 − 8) − λ2 (−x1 + x2 + S22 − 5)

S-03: Applying the KT conditions one shall have the following system:

∂L/∂x1 = 10 − 2x1 − λ1 + λ2 = 0
∂L/∂x2 = 10 − 2x2 − λ1 − λ2 = 0
λ1(x1 + x2 + S1² − 8) = 0
λ2(−x1 + x2 + S2² − 5) = 0
S1λ1 = 0
S2λ2 = 0
x1 + x2 ≤ 8
−x1 + x2 ≤ 5
x1 , x2 , λ1 , λ2 ≥ 0

S-04: As per the complementary slackness conditions above, the following four cases arise:
Case-A: λ1 = λ2 = 0 ⇒ S1 , S2 > 0 ⇔ x1 + x2 ≤ 8 & −x1 + x2 ≤ 5
Case-B: λ1 = 0 ⇒ S1 > 0 & λ2 ≠ 0 ⇒ S2 = 0 ⇔ x1 + x2 ≤ 8 & −x1 + x2 = 5
Case-C: λ1 ≠ 0 ⇒ S1 = 0 & λ2 = 0 ⇒ S2 > 0 ⇔ x1 + x2 = 8 & −x1 + x2 ≤ 5
Case-D: λ1 , λ2 ≠ 0 ⇒ S1 = S2 = 0 ⇔ x1 + x2 = 8 & −x1 + x2 = 5

S-05: Solving the four cases one gets:

Case A: λ1 = λ2 = 0 gives x1 = 5, x2 = 5. Since 5 + 5 = 10 ≰ 8, it is discarded.

Case B: λ1 = 0, λ2 ≠ 0 gives 10 − 2x1 + λ2 = 0 and 10 − 2x2 − λ2 = 0 together with −x1 + x2 = 5, so x1 = 5/2, x2 = 15/2. Since 5/2 + 15/2 = 10 ≰ 8, it is discarded.

Case C: λ1 ≠ 0, λ2 = 0 gives 10 − 2x1 − λ1 = 0 and 10 − 2x2 − λ1 = 0 together with x1 + x2 = 8, so x1 = x2 = 4 and λ1 = 2 ≥ 0. All the KT conditions are satisfied, with ZMax = 48.

Case D: x1 + x2 = 8 and −x1 + x2 = 5 give x1 = 3/2, x2 = 13/2, whence λ1 = 2 and λ2 = −5 < 0. Discarded.

Hence the solution of the NLPP by the KT conditions is x1 = 4, x2 = 4, λ1 = 2, λ2 = 0, ZMax = 48.
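The case enumeration of S-04/S-05 can itself be automated: each constraint is either inactive (λi = 0) or active (gi = bi), and only the case passing both the feasibility and the sign checks survives. A SymPy sketch (an assumed tool; constraints rewritten as gi ≤ 0):

```python
import sympy as sp
from itertools import product

x1, x2, l1, l2 = sp.symbols('x1 x2 lam1 lam2', real=True)
f = 10*x1 + 10*x2 - x1**2 - x2**2
g = [x1 + x2 - 8, -x1 + x2 - 5]            # constraints written as g_i <= 0

L = f - l1*g[0] - l2*g[1]
stationarity = [sp.diff(L, v) for v in (x1, x2)]

# Enumerate the four complementary-slackness cases, as in S-04 above:
kkt_points = []
for active in product((False, True), repeat=2):
    eqs = list(stationarity)
    for gi, li, on in zip(g, (l1, l2), active):
        eqs.append(gi if on else li)     # active: g_i = 0; inactive: lam_i = 0
    for s in sp.solve(eqs, (x1, x2, l1, l2), dict=True):
        feasible = all(gi.subs(s) <= 0 for gi in g)
        dual_ok = s[l1] >= 0 and s[l2] >= 0
        if feasible and dual_ok:
            kkt_points.append((s, f.subs(s)))

print(kkt_points)   # only Case C survives: x1 = x2 = 4, lam1 = 2, lam2 = 0, Z = 48
```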

Problems on Linear Inequality Constraints

Problems & Solutions

Optimize the following NLPPs, with x1 , x2 , x3 ≥ 0, using the KT conditions:

SN  Objective Function                                 The Constraints                  The Solution

1.  Max Z = 2x1² − 7x2² + 12x1x2                       2x1 + 5x2 ≤ 98                   x1 = 44, x2 = 2, λ = 100, ZMax = 4900

2.  Max Z = 8x1 + 10x2 − x1² − x2²                     3x1 + 2x2 ≤ 6                    x1 = 4/13, x2 = 33/13, λ = 32/13, ZMax = 277/13

3.  Max Z = 10x1 + 10x2 − x1² − x2²                    x1 + x2 ≤ 14; −x1 + x2 ≤ 6       x1 = 5, x2 = 5, λ1 = λ2 = 0, ZMax = 50

4.  Max Z = x1² + x2²                                  x1 + x2 ≤ 4; 2x1 + x2 ≤ 5        x1 = 2, x2 = 2, λ1 = 4, λ2 = 0, ZMax = 8

5.  Max Z = 2x1 + 3x2 − x1² − 2x2²                     x1 + 3x2 ≤ 6; 5x1 + 2x2 ≤ 10     x1 = 1, x2 = 3/4, λ1 = λ2 = 0, ZMax = 17/8

6.  Max Z = 4x1 + 6x2 − x1² − x2² − x3²                x1 + x2 ≤ 2; 2x1 + 3x2 ≤ 12      x1 = 1/2, x2 = 3/2, x3 = 0, λ1 = 3, λ2 = 0, ZMax = 17/2

7.  Max Z = −2x1² − 2x2² + 12x1 + 21x2 + 2x1x2         x1 + x2 ≤ 10; x2 ≤ 8             x1 = 17/4, x2 = 23/4, λ1 = 13/2, λ2 = 0, ZMax = 947/8

8.  Min Z = 7x1² + 5x2² − 6x1                          x1 + 2x2 ≤ 10; x1 + 3x2 ≤ 9      x1 = 3/7, x2 = 0, λ1 = λ2 = 0, ZMin = −9/7

Verification of KKT conditions for a given NLPP using Graphical
Method
1. Apply the graphical method to find the solution of following NLPP

Min Z = x21 + x22

Subjected to

x1 + x2 ≥ 4
2x1 + x2 ≥ 5
x1 , x 2 ≥ 0

Solution:

Observe that if (x1 , x2 ) is any point in the plane, then x1² + x2² represents the square of its distance from the origin, which needs to be minimized subject to the satisfaction of the given constraints.

Since x1 , x2 ≥ 0, the region defined by the constraints lies in the first quadrant only, so one shall draw the corresponding region as described below.

From basic geometry one knows that the minimum distance is attained at a point where a boundary line of the unbounded convex region is tangent to a circle centred at the origin (a level curve of the objective function in our case).

Since the slope of the tangent to a curve y = f(x) in the X–Y plane is dy/dx, in order to locate the point of minimum we shall differentiate the objective function as well as the constraints with respect to the variables x1 and x2, as follows.

Consider a dummy constant d > 0 such that x1² + x2² = d². Differentiating the objective function we get

$$2x_1\,dx_1+2x_2\,dx_2=0\ \Rightarrow\ \frac{dx_2}{dx_1}=-\frac{x_1}{x_2}\qquad(1)$$

Ignoring the inequality signs and differentiating, the constraints reduce to the following pair of equations:

$$2x_1+x_2=5\ \Rightarrow\ \frac{dx_2}{dx_1}=-2\qquad(2)$$

$$x_1+x_2=4\ \Rightarrow\ \frac{dx_2}{dx_1}=-1\qquad(3)$$
Solving the pair of equations (1) and (2), and then (1) and (3), one gets the points (2, 1) and (2, 2) respectively. At one of these points the distance d of the circle d² = x1² + x2² is minimum.

By definition the solution must lie in the shaded region; since the point (2, 1) lies outside the shaded convex region, the only point left is (2, 2), which is the required solution of the given NLPP.

The minimum value of the objective function at the point (2, 2) is Z = 8.

The KKT conditions for the above NLPP are given by

2x1 − λ1 − 2λ2 = 0
2x2 − λ1 − λ2 = 0
λ1(x1 + x2 − S1² − 4) = 0
λ2(2x1 + x2 − S2² − 5) = 0
x1 + x2 ≥ 4
2x1 + x2 ≥ 5
λ1S1 = 0
λ2S2 = 0
x1 , x2 ≥ 0
λ1 , λ2 ≥ 0

It can be seen that substituting (2, 2) into the above conditions gives λ1 = 4 and λ2 = 0.
Hence, the KKT conditions are satisfied by the graphically obtained optimal solution of this NLPP.
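The graphical answer can be cross-checked with a constrained solver; a minimal SciPy sketch (an assumed tool, not part of the notes):

```python
from scipy.optimize import minimize

obj = lambda x: x[0]**2 + x[1]**2                 # squared distance from origin
cons = [{'type': 'ineq', 'fun': lambda x: x[0] + x[1] - 4},      # x1 + x2 >= 4
        {'type': 'ineq', 'fun': lambda x: 2*x[0] + x[1] - 5}]    # 2x1 + x2 >= 5
res = minimize(obj, x0=[3.0, 3.0], bounds=[(0, None), (0, None)],
               constraints=cons)

print(res.x, res.fun)   # ≈ [2, 2] and 8
```

Being a convex problem (convex objective over a convex feasible set), the KKT point (2, 2) is the unique global minimum, and the solver reproduces it.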

2. Apply the graphical method to find the solution of the following NLPP:

Max Z = 2x1 + 3x2

Subjected to

x1 x2 ≤ 8
x1² + x2² ≤ 20
x1 , x2 ≥ 0

Solution:

Since x1 , x2 ≥ 0, the region defined by the constraints lies in the first quadrant only, so one shall draw the corresponding region as described below.

The curve x1x2 = 8 is a rectangular hyperbola having x1 = 0 and x2 = 0 as its asymptotes (the lines along which the curve approaches infinity). The next constraint, x1² + x2² ≤ 20, represents a disc centred at the origin of radius √20; the feasible region is their intersection in the first quadrant.

From basic geometry one knows that the objective function is a line, and in our case it is optimized at either the point C or the point B.

Ignoring the inequality signs and solving the two non-linear boundary equations, one gets the points of intersection in the first quadrant, namely C = (2, 4) and B = (4, 2).

It can be observed that the profit line of the objective function, 2x1 + 3x2 = K for fixed K, last touches the shaded region at the point C; thereafter the line leaves the shaded region, and the objective cannot be improved further.

By definition the solution must lie in the shaded region, and therefore the required solution of the given NLPP is C = (2, 4).

The optimal value of the objective function at the point (2, 4) is Z = 16.

The KKT conditions for the above NLPP are given by

λ1x2 + 2λ2x1 = 2
λ1x1 + 2λ2x2 = 3
λ1(x1x2 + S1² − 8) = 0
λ2(x1² + x2² + S2² − 20) = 0
x1x2 ≤ 8
x1² + x2² ≤ 20
λ1S1 = 0
λ2S2 = 0
x1 , x2 ≥ 0
λ1 , λ2 ≥ 0

It can be seen that substituting (2, 4) into the above conditions gives λ1 = 1/6 and λ2 = 1/3.
Hence, the KKT conditions are satisfied by the graphically obtained optimal solution of this NLPP.
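The multiplier values can be recovered by solving the two stationarity equations at the graphical point C = (2, 4); a minimal SymPy sketch (an assumed tool, not part of the notes):

```python
import sympy as sp

lam1, lam2 = sp.symbols('lam1 lam2', real=True)
x1, x2 = 2, 4     # the point C obtained graphically

# Stationarity of L = (2x1 + 3x2) - lam1*(x1*x2 - 8) - lam2*(x1^2 + x2^2 - 20)
eqs = [2 - lam1*x2 - 2*lam2*x1,
       3 - lam1*x1 - 2*lam2*x2]
sol = sp.solve(eqs, (lam1, lam2))

print(sol)   # {lam1: 1/6, lam2: 1/3} -> both multipliers non-negative
```

Both constraints are active at C (x1x2 = 8 and x1² + x2² = 20), so complementary slackness holds with S1 = S2 = 0, and the non-negative multipliers confirm the KKT conditions.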

...............................
