
NONLINEAR PROGRAMMING

INTRODUCTION TO NONLINEAR PROGRAMMING (NLP)
In LP, our goal was to maximize or minimize a linear
function subject to linear constraints:
Maximize profit P = 7X1 + 10X2

Subject to:

Fabrication Time: 3X1 + 2X2 <= 36
Assembly Time: 2X1 + 4X2 <= 40
IC Chips: 10X1 <= 100
Non-negativity: X1, X2 >= 0
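
A minimal numerical cross-check of this LP, assuming SciPy is available (linprog minimizes, so the profit coefficients are negated):

```python
# Sketch: solve the LP above with SciPy (assumes scipy is installed).
from scipy.optimize import linprog

res = linprog(
    c=[-7, -10],                     # linprog minimizes, so negate profit
    A_ub=[[3, 2], [2, 4], [10, 0]],  # fabrication, assembly, IC chips
    b_ub=[36, 40, 100],
    bounds=[(0, None), (0, None)],   # non-negativity
)
print(res.x, -res.fun)               # corner point [8, 6], profit P = 116
```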

Linear functions have the form of a “sumproduct”:

a1X1 + a2X2 + a3X3 + …

So linear functions do not involve exponents, logarithms, square roots, products of variables, and so on. Functions having these components are nonlinear.
INTRODUCTION TO NONLINEAR PROGRAMMING (NLP)
 If an LP problem is feasible and bounded then, at least in theory, it can always be solved because:
 We know the solution is a “corner point”: a point where lines or planes intersect. There are a finite number of possible solution points.
 The simplex algorithm will find that point.
 Also, a very informative sensitivity analysis is relatively easy to obtain for LP problems.
 But in many interesting, real-world problems, the objective function may not be a linear function, or some of the constraints may not be linear constraints.
INTRODUCTION TO NLP
 Optimization problems that involve nonlinearities are called nonlinear programming (NLP) problems.
 Many NLPs do not have any constraints; these are called unconstrained NLPs.
 Solutions to NLPs are found using search procedures, and they are more difficult to determine than LP solutions. One difficulty is distinguishing between a local and a global minimum or maximum point.

Example problem: Maximize f(x) = -x² + 9x + 4
(An unconstrained problem that can be solved without a search)

[Figure: plot of f(x) for 0 ≤ x ≤ 9, peaking at x = 4.5]

The solution process is straightforward using calculus:

f'(x) = -2x + 9. Set this equal to zero and obtain x = 4.5.

f''(x) = -2, which is negative at x = 4.5 (or at any other x-value), so we have indeed found a maximum rather than a minimum point.

So the function is maximized when x = 4.5, with a maximum value of -4.5² + 9(4.5) + 4 = 24.25.
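
A quick numerical cross-check of the calculus, assuming SciPy is available (minimize_scalar minimizes, so f is negated):

```python
# Sketch: maximize f by minimizing -f (assumes scipy is installed).
from scipy.optimize import minimize_scalar

f = lambda x: -x**2 + 9 * x + 4

res = minimize_scalar(lambda x: -f(x))
print(res.x, f(res.x))   # ~4.5 and ~24.25, matching the hand calculation
```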
Problem: Maximize f(x)

[Figure: plot of f(x) for 0 ≤ x ≤ 9, showing a local maximum and a higher global maximum]

This is trickier: a value x whose first derivative is zero and whose second derivative is negative is not necessarily the solution point! It could be a local maximum point rather than the desired global maximum point.
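
The plotted f(x) is not given, so the sketch below uses a made-up bimodal function g as a stand-in to show how a local search can stop at whichever peak is nearest its starting point (assumes SciPy):

```python
# Sketch: the answer a local search returns depends on where it starts.
from scipy.optimize import minimize

g = lambda x: -(x[0]**4 - 16 * x[0]**2 + 5 * x[0])  # two unequal peaks

for x0 in (3.0, -3.0):
    res = minimize(lambda x: -g(x), [x0])           # maximize g
    print(f"start {x0:+.0f} -> x* = {res.x[0]:+.3f}, g = {g(res.x):.2f}")
# One start finds only the local maximum; the other finds the global one.
```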
Constrained Problem: Maximize f(x) subject to: x ≥ 7

[Figure: plot of f(x) with the feasible region x ≥ 7 shaded and the solution point on its boundary]

In the case of this constrained optimization problem, basic calculus is of no value, as the derivative at the solution point is not equal to zero.
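
The plotted f(x) is not given either, so the sketch below uses a made-up concave stand-in whose unconstrained peak lies outside the feasible region x ≥ 7; the optimizer stops on the boundary, where the derivative is nonzero (assumes SciPy):

```python
# Sketch: with x >= 7 the optimum sits on the boundary, where f'(x) != 0.
from scipy.optimize import minimize_scalar

f = lambda x: 400 - 5 * (x - 4) ** 2   # made-up stand-in, peak at x = 4

res = minimize_scalar(lambda x: -f(x), bounds=(7, 9), method="bounded")
print(res.x)   # ~7: the boundary of the feasible region; f'(7) = -30
```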
NLP EXAMPLE: SEARCHES CAN FAIL!
Maximize f(x) = x³ - 30x² + 225x + 50

[Figure: plot of f(x) for 0 ≤ x ≤ 25, with a local maximum at x = 5, a local minimum at x = 15, and f(x) increasing without bound thereafter]

The correct answer is that the problem is unbounded. There is no solution point!
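
A sketch of how a naive search behaves here, in plain Python: a fixed-step gradient ascent started at x = 0 stalls at the local maximum x = 5, while the same search started at x = 16 climbs without bound, correctly hinting that no maximum exists:

```python
# Sketch: fixed-step gradient ascent on f(x) = x^3 - 30x^2 + 225x + 50.
f_prime = lambda x: 3 * x**2 - 60 * x + 225

for start in (0.0, 16.0):
    x = start
    for _ in range(10_000):
        x += 0.001 * f_prime(x)   # step uphill
        if x > 1e6:               # diverging: the problem is unbounded
            break
    print(f"start {start}: search reached x = {x:.6g}")
# start 0.0 stalls at x = 5 (a mere local maximum);
# start 16.0 blows past 1e6 because f increases without bound.
```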

Let’s try another solution technique…
NLP EXAMPLE: PRICING CHAIRS
The Hickory Cabinet and Furniture Company has
decided to concentrate on the production of chairs.
The fixed cost per month of making chairs is $7,500,
and the variable cost per chair is $40. Demand is
related to price according to the following linear
equation:
d = 400 − 1.2p,
where d is the demand and p is the price. Develop
the nonlinear profit function for this company and
determine the price that will maximize profit, the
optimal volume, and the maximum profit per month.
NLP EXAMPLE: PRICING CHAIRS

Profit = Revenue – Cost

Revenue = Units Sold (Demand) x Price = dp
= (400 – 1.2p)p
= 400p – 1.2p²

Cost = 7500 + 40d
= 7500 + 40(400 – 1.2p)
= 23,500 – 48p

Profit Z = (400p – 1.2p²) – (23,500 – 48p) = –1.2p² + 448p – 23,500

Setting dZ/dp = –2.4p + 448 = 0 gives p = $186.67.
Optimal volume: d = 400 – 1.2(186.67) ≈ 176 chairs per month.
Maximum profit: Z ≈ $18,313.33 per month.
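
A minimal symbolic cross-check of this result, assuming SymPy is available:

```python
# Sketch: rebuild Z(p) and maximize it symbolically (assumes sympy).
import sympy as sp

p = sp.symbols("p")
d = 400 - 1.2 * p                # demand as a function of price
Z = d * p - (7500 + 40 * d)      # profit = revenue - cost

p_star = sp.solve(sp.diff(Z, p), p)[0]
print(p_star, d.subs(p, p_star), Z.subs(p, p_star))
# ~186.67 (price), ~176 chairs, ~$18,313.33 profit per month
```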
Optimization using Calculus:
Kuhn-Tucker Conditions
Introduction

 Optimization with multiple decision variables and equality constraints: Lagrange multipliers.
 Optimization with multiple decision variables and inequality constraints: Kuhn-Tucker (KT) conditions.
 KT conditions: both necessary and sufficient if the objective function is concave and each constraint is linear or each constraint function is concave, i.e., the problem belongs to a class called convex programming problems. (For the minimization form used below, read “convex” in place of “concave”.)
Kuhn-Tucker Conditions: Optimization Model

Consider the following optimization problem:

Minimize f(X)
subject to
gj(X) ≤ 0 for j = 1, 2, …, m

where the decision variable vector is X = [x1, x2, …, xn].
Kuhn-Tucker Conditions

The Kuhn-Tucker conditions for X* = [x1*, x2*, …, xn*] to be a local minimum are:

∂f/∂xi + Σj=1..m λj ∂gj/∂xi = 0   for i = 1, 2, …, n   (stationarity)
λj gj = 0                         for j = 1, 2, …, m   (complementary slackness)
gj ≤ 0                            for j = 1, 2, …, m   (feasibility)
λj ≥ 0                            for j = 1, 2, …, m   (non-negativity)
Kuhn-Tucker Conditions …contd.

 For minimization problems, if the constraints are of the form gj(X) ≥ 0, then the λj have to be non-positive.
 On the other hand, if the problem is one of maximization with the constraints in the form gj(X) ≥ 0, then the λj have to be non-negative.
Example (1)

Minimize f = x1² + 2x2² + 3x3²

subject to

g1 = x1 − x2 − 2x3 ≤ 12
g2 = x1 + 2x2 − 3x3 ≤ 8
Example (1) …contd.

Kuhn-Tucker Conditions

∂f/∂xi + λ1 ∂g1/∂xi + λ2 ∂g2/∂xi = 0 gives:
2x1 + λ1 + λ2 = 0             (2)
4x2 − λ1 + 2λ2 = 0            (3)
6x3 − 2λ1 − 3λ2 = 0           (4)

λj gj = 0 gives:
λ1 (x1 − x2 − 2x3 − 12) = 0   (5)
λ2 (x1 + 2x2 − 3x3 − 8) = 0   (6)

gj ≤ 0 gives:
x1 − x2 − 2x3 − 12 ≤ 0        (7)
x1 + 2x2 − 3x3 − 8 ≤ 0        (8)

λj ≥ 0 gives:
λ1 ≥ 0                        (9)
λ2 ≥ 0                        (10)
Example (1) …contd.

From (5), either λ1 = 0 or x1 − x2 − 2x3 − 12 = 0.

Case 1: λ1 = 0
 From (2), (3) and (4) we have x1 = x2 = −λ2/2 and x3 = λ2/2.
 Using these in (6) we get 3λ2² + 8λ2 = 0, ∴ λ2 = 0 or −8/3.
 From (10), λ2 ≥ 0; therefore, λ2 = 0.
 Therefore, X* = [0, 0, 0].
This solution set satisfies all of the conditions (2) to (10).
Example (1) …contd.

Case 2: x1 − x2 − 2x3 − 12 = 0
 Using (2), (3) and (4), we have
−(λ1 + λ2)/2 − (λ1 − 2λ2)/4 − (2λ1 + 3λ2)/3 − 12 = 0
or 17λ1 + 12λ2 = −144
 But conditions (9) and (10) require λ1 ≥ 0 and λ2 ≥ 0 simultaneously, which is impossible with 17λ1 + 12λ2 = −144.

Hence the solution set for this optimization problem is X* = [0, 0, 0].
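
A numerical cross-check of Example (1), assuming SciPy is available (SciPy's inequality convention is g(x) ≥ 0, so each gj ≤ 0 constraint is negated):

```python
# Sketch: minimize f subject to the two constraints and compare with X*.
from scipy.optimize import minimize

f = lambda x: x[0]**2 + 2 * x[1]**2 + 3 * x[2]**2
cons = [
    {"type": "ineq", "fun": lambda x: -(x[0] - x[1] - 2 * x[2] - 12)},
    {"type": "ineq", "fun": lambda x: -(x[0] + 2 * x[1] - 3 * x[2] - 8)},
]
res = minimize(f, x0=[1.0, 1.0, 1.0], constraints=cons)
print(res.x)   # ~[0, 0, 0], matching the KT analysis
```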

Example (2)

Minimize f = x1² + x2² + 60x1

subject to

g1 = x1 − 80 ≥ 0
g2 = x1 + x2 − 120 ≥ 0
Example (2) …contd.

Kuhn-Tucker Conditions (a minimization with gj ≥ 0, so the λj are non-positive)

∂f/∂xi + λ1 ∂g1/∂xi + λ2 ∂g2/∂xi = 0 gives:
2x1 + 60 + λ1 + λ2 = 0     (11)
2x2 + λ2 = 0               (12)

λj gj = 0 gives:
λ1 (x1 − 80) = 0           (13)
λ2 (x1 + x2 − 120) = 0     (14)

gj ≥ 0 gives:
x1 − 80 ≥ 0                (15)
x1 + x2 − 120 ≥ 0          (16)

λj ≤ 0 gives:
λ1 ≤ 0                     (17)
λ2 ≤ 0                     (18)
Example (2) …contd.

From (13), either λ1 = 0 or x1 − 80 = 0.

Case 1: λ1 = 0
 From (11) and (12) we have x1 = −λ2/2 − 30 and x2 = −λ2/2.
 Using these in (14) we get λ2 (λ2 + 150) = 0, ∴ λ2 = 0 or −150.
 Considering λ2 = 0, X* = [−30, 0]. But this solution set violates (15) and (16).
 For λ2 = −150, X* = [45, 75]. But this solution set violates (15).
Example (2) …contd.

Case 2: x1 − 80 = 0
 Using x1 = 80 in (11) and (12), we have
λ2 = −2x2
λ1 = 2x2 − 220    (19)
 Substituting (19) in (14), we have −2x2 (x2 − 40) = 0.
 For this to be true, either x2 = 0 or x2 − 40 = 0.
Example (2) …contd.

 For x2 = 0, λ1 = −220 and λ2 = 0. This solution set violates (16), since 80 + 0 − 120 < 0.
 For x2 − 40 = 0, λ1 = −140 and λ2 = −80.
 This solution set satisfies all of the conditions (11) to (18) and is therefore the desired solution.
 Thus, the solution set for this optimization problem is X* = [80, 40].
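
A numerical cross-check of Example (2), assuming SciPy is available (here the constraints are already in SciPy's g(x) ≥ 0 form):

```python
# Sketch: minimize f subject to the two constraints and compare with X*.
from scipy.optimize import minimize

f = lambda x: x[0]**2 + x[1]**2 + 60 * x[0]
cons = [
    {"type": "ineq", "fun": lambda x: x[0] - 80},
    {"type": "ineq", "fun": lambda x: x[0] + x[1] - 120},
]
res = minimize(f, x0=[100.0, 100.0], constraints=cons)
print(res.x)   # ~[80, 40], matching the KT analysis
```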

Thank you

