
INDIAN INSTITUTE OF SCIENCE

Water Resources Systems: Modeling Techniques and Analysis
Lecture – 5
Course Instructor: Prof. P. P. MUJUMDAR
Department of Civil Engg., IISc.
Summary of the previous lecture
• Optimization of a function of a single variable

      Necessary condition:   f′(x) = 0
      Sufficiency condition: f″(x)|x = x0 > 0 → minimum
                             f″(x)|x = x0 < 0 → maximum

• Function of multiple variables

      Necessary condition:   ∂f/∂x1 = ∂f/∂x2 = … = ∂f/∂xn = 0
      Sufficiency condition: examine the Hessian matrix (shown here for two variables)

      H[f(X)] = [ ∂²f/∂x1²     ∂²f/∂x1∂x2 ]
                [ ∂²f/∂x2∂x1   ∂²f/∂x2²   ]

• H positive definite at X = X0 … minimum
• H negative definite at X = X0 … maximum
Example – 1
Examine the function for convexity/concavity and determine the values at extreme points:

      f(X) = −x1² − x2² − 4x1 + 8

The stationary point is obtained by solving

      ∂f/∂x1 = −2x1 − 4 = 0   and   ∂f/∂x2 = −2x2 = 0

      x1 = −2,  x2 = 0

      X* = (−2, 0)
Example – 1 (Contd.)
• The Hessian matrix is

      H[f(X)] = [ ∂²f/∂x1²     ∂²f/∂x1∂x2 ]
                [ ∂²f/∂x2∂x1   ∂²f/∂x2²   ]  evaluated at (−2, 0)

The Hessian matrix is evaluated at the stationary point (−2, 0).
Example – 1 (Contd.)
      f(X) = −x1² − x2² − 4x1 + 8

      ∂f/∂x1 = −2x1 − 4        ∂²f/∂x1∂x2 = 0

      ∂²f/∂x1² = −2

      ∂f/∂x2 = −2x2            ∂²f/∂x2∂x1 = 0

      ∂²f/∂x2² = −2
Example – 1 (Contd.)
The Hessian matrix is

      H[f(X)] = [ −2   0 ]
                [  0  −2 ]

Eigenvalues of the Hessian matrix:

      |λI − H[f(X)]| = 0

      λI − H = [ λ + 2    0   ]
               [  0     λ + 2 ]

      (λ + 2)² = 0

Eigenvalues are λ1 = −2, λ2 = −2
Example – 1 (Contd.)
• As both eigenvalues are negative, the matrix is negative definite.

Hence the function has a local maximum at X* = (−2, 0).

As the Hessian matrix does not depend on x1 and x2 and is negative definite everywhere, the function is strictly concave; therefore the local maximum is also the global maximum.
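The eigenvalue check above can be verified numerically; a minimal sketch with NumPy (my own illustration, not part of the lecture):

```python
import numpy as np

# Example 1 objective: f(X) = -x1^2 - x2^2 - 4*x1 + 8
f = lambda x1, x2: -x1**2 - x2**2 - 4*x1 + 8

# The Hessian is constant: diag(-2, -2)
H = np.array([[-2.0, 0.0],
              [0.0, -2.0]])

# Eigenvalues of the symmetric Hessian
eigvals = np.linalg.eigvalsh(H)
print(eigvals)     # [-2. -2.] -> negative definite -> maximum

# Function value at the stationary point X* = (-2, 0)
print(f(-2, 0))    # 12
```

Both eigenvalues come out negative for every X, confirming the strictly concave shape and the global maximum at (−2, 0).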
Example – 2
Determine the extreme values of the function

      f(X) = x1³ + x2³ − 3x1 − 12x2 + 20

The stationary points are obtained by solving

      ∂f/∂x1 = 3x1² − 3 = 0   →   x1 = ±1
and
      ∂f/∂x2 = 3x2² − 12 = 0  →   x2 = ±2

Four solutions:

      X* = (1, 2), (1, −2), (−1, 2) and (−1, −2)
Example – 2 (Contd.)
• The Hessian matrix is

      H[f(X)] = [ ∂²f/∂x1²     ∂²f/∂x1∂x2 ]
                [ ∂²f/∂x2∂x1   ∂²f/∂x2²   ]

      f(X) = x1³ + x2³ − 3x1 − 12x2 + 20

      ∂f/∂x1 = 3x1² − 3        ∂²f/∂x1∂x2 = 0

      ∂²f/∂x1² = 6x1
Example – 2 (Contd.)
      f(X) = x1³ + x2³ − 3x1 − 12x2 + 20

      ∂f/∂x2 = 3x2² − 12       ∂²f/∂x2∂x1 = 0

      ∂²f/∂x2² = 6x2

The Hessian matrix is

      H[f(X)] = [ 6x1   0  ]
                [  0   6x2 ]
Example – 2 (Contd.)
The Hessian matrix is

      H[f(X)] = [ 6x1   0  ]
                [  0   6x2 ]

Eigenvalues of the Hessian matrix:

      |λI − H[f(X)]| = 0

      λI − H = [ λ − 6x1     0    ]
               [   0     λ − 6x2  ]

      (λ − 6x1)(λ − 6x2) = 0

Eigenvalues are λ1 = 6x1, λ2 = 6x2
Example – 2 (Contd.)
Hessian matrix at (1, 2):

      H[f(X)] = [ 6x1   0  ]  =  [ 6    0 ]
                [  0   6x2 ]     [ 0   12 ]

Eigenvalues are λ1 = 6x1, λ2 = 6x2:

      λ1 = 6,  λ2 = 12

All the eigenvalues of the Hessian matrix are positive, hence the matrix is positive definite at X* = (1, 2).

Therefore the function has a local minimum at this point:

      f_min(X*) = 1³ + 2³ − 3(1) − 12(2) + 20 = 2
Example – 2 (Contd.)
Hessian matrix at (−1, −2):

      H[f(X)] = [ 6x1   0  ]  =  [ −6    0  ]
                [  0   6x2 ]     [  0   −12 ]

Eigenvalues are λ1 = 6x1, λ2 = 6x2:

      λ1 = −6,  λ2 = −12

All the eigenvalues of the Hessian matrix are negative, hence the matrix is negative definite at X* = (−1, −2).

Therefore the function has a local maximum at this point:

      f_max(X*) = (−1)³ + (−2)³ − 3(−1) − 12(−2) + 20 = 38
Example – 2 (Contd.)
Hessian matrix at (−1, 2) or (1, −2):

      H[f(X)] = [ 6x1   0  ]
                [  0   6x2 ]

Eigenvalues are −6 and 12 (or 6 and −12).

The H matrix is neither positive definite nor negative definite at these two points, so they are saddle points: neither minima nor maxima.
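All four stationary points can be classified in one pass from the diagonal Hessian; a short sketch (my own illustration, the `classify` helper is not from the lecture):

```python
import numpy as np

# Example 2 objective: f(X) = x1^3 + x2^3 - 3*x1 - 12*x2 + 20
f = lambda x1, x2: x1**3 + x2**3 - 3*x1 - 12*x2 + 20

def classify(x1, x2):
    """Classify a stationary point from the eigenvalues of H = diag(6*x1, 6*x2)."""
    ev = np.linalg.eigvalsh(np.diag([6.0 * x1, 6.0 * x2]))
    if (ev > 0).all():
        return "minimum"     # positive definite
    if (ev < 0).all():
        return "maximum"     # negative definite
    return "saddle"          # indefinite

for pt in [(1, 2), (1, -2), (-1, 2), (-1, -2)]:
    print(pt, classify(*pt), f(*pt))
```

The output reproduces the slide results: a minimum of 2 at (1, 2), a maximum of 38 at (−1, −2), and saddle points at the two mixed-sign combinations.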
Constrained Optimization

Optimization: Methods of Calculus
Constrained Optimization:
• f(X) is a function of n variables represented by the vector X = (x1, x2, x3, …, xn)

      Maximize or Minimize f(X)
      subject to (s.t.)  gj(X) ≤ 0,  j = 1, 2, …, m;  m < n

f(X) and the gj(X) may or may not be linear functions.

• If m > n the problem is over-defined and there will be no solution unless redundant constraints are present.
Optimization: Methods of Calculus
Constrained Optimization:
• Function with equality constraints
• Function with inequality constraints

Function with equality constraints:

      Maximize or Minimize f(X)
      s.t.  gj(X) = 0

Two methods discussed:
• Direct substitution
• Lagrange multipliers
Optimization: Methods of Calculus
Direct substitution:
• Reduce the problem to an unconstrained problem by expressing m variables in terms of the remaining (n − m) variables.

For example, with 3 variables x1, x2, x3 and 2 constraints, x2 and x3 may be expressed in terms of x1, rendering the problem an unconstrained problem in x1 alone.
Optimization: Methods of Calculus
Limitations:
• With a higher number of variables and constraints this method becomes quite cumbersome.
• Constraint equations are often non-linear, making them difficult to solve simultaneously.
Example – 3
Minimize the function

      f(X) = x1² + x2² + 4x1x2
s.t.
      x1 + x2 − 4 = 0

Solution: from the constraint, x1 = 4 − x2

The modified (unconstrained) function is

      f(X) = (4 − x2)² + x2² + 4(4 − x2)x2

           = 16 + 8x2 − 2x2²
Example – 3 (Contd.)
      df/dx2 = 8 − 4x2

Setting df/dx2 = 0:

      8 − 4x2 = 0   →   x2 = 2

      d²f/dx2² = −4 < 0

A global maximum occurs at x2 = 2 (and hence x1 = 4 − x2 = 2), since the reduced function is concave.
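The direct-substitution result can be checked numerically; a minimal sketch (my own illustration, `phi` is simply the reduced objective from the slide):

```python
# Reduced objective after substituting x1 = 4 - x2 (from the slides):
# phi(x2) = 16 + 8*x2 - 2*x2^2
phi = lambda x2: 16 + 8 * x2 - 2 * x2**2

# dphi/dx2 = 8 - 4*x2 = 0  ->  x2 = 2; d2phi/dx2^2 = -4 < 0 -> maximum
x2_star = 2.0
x1_star = 4.0 - x2_star   # recover x1 from the constraint

print(x1_star, x2_star, phi(x2_star))   # 2.0 2.0 24.0
```

Evaluating phi at nearby points (e.g. x2 = 1.9 and 2.1) gives smaller values, consistent with the second-derivative test.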
Optimization: Methods of Calculus
Lagrange multipliers:

      Maximize or Minimize f(X)
      s.t.  gj(X) = 0,  j = 1, 2, …, m

• Introduce one additional variable (multiplier) λj corresponding to each constraint.
• The Lagrange function L is written as

      L = f(X) + λj gj(X)

• When gj(X) = 0, optimizing L is the same as optimizing f(X).
• The problem is transformed into an unconstrained optimization problem.
Optimization: Methods of Calculus

      L = f(X) + λ1 g1(X) + λ2 g2(X) + … + λm gm(X)

• The problem of n variables with m constraints is changed to a single unconstrained problem of (n + m) variables.
Optimization: Methods of Calculus
Necessary condition: For a function f(X) subject to the constraints gj(X) = 0, j = 1, 2, …, m to have a relative optimum at a point X*, the first partial derivatives of the Lagrange function with respect to each of its arguments must be zero.

      L = f(X) + Σⱼ₌₁ᵐ λj gj(X)

      ∂L/∂xi = 0,   i = 1, 2, …, n

      ∂L/∂λj = 0,   j = 1, 2, …, m
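Applying these conditions to Example 3 gives a linear system in (x1, x2, λ); a sketch of solving it with NumPy (my own illustration: the multiplier setup and matrix form are not worked in the slides):

```python
import numpy as np

# L = x1^2 + x2^2 + 4*x1*x2 + lam*(x1 + x2 - 4)   (Example 3, one multiplier)
# dL/dx1  = 2*x1 + 4*x2 + lam = 0
# dL/dx2  = 4*x1 + 2*x2 + lam = 0
# dL/dlam = x1 + x2 - 4       = 0
A = np.array([[2.0, 4.0, 1.0],
              [4.0, 2.0, 1.0],
              [1.0, 1.0, 0.0]])
b = np.array([0.0, 0.0, 4.0])

x1, x2, lam = np.linalg.solve(A, b)
print(x1, x2, lam)   # x1 = x2 = 2, matching the direct-substitution result
```

The stationary point (2, 2) agrees with the direct-substitution solution, with multiplier λ = −12.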
Optimization using Calculus
The (n + m) simultaneous equations are solved to get a solution (X*, λ*).

Sufficiency condition:
The second partial derivatives are denoted by

      Lij = ∂²L/∂xi∂xj |(X*, λ*),   i, j = 1, 2, …, n

      gij = ∂gj(X)/∂xi |X*,   i = 1, 2, …, n;  j = 1, 2, …, m
Optimization using Calculus
Sufficiency condition:
Form the bordered determinant D, whose first n rows and columns carry the Lij terms and whose last m rows and columns carry the gij terms:

      D = | L11 − Z   L12      …   L1n       g11   g21  …  gm1 |
          | L21       L22 − Z  …   L2n       g12   g22  …  gm2 |
          |  ⋮          ⋮            ⋮          ⋮     ⋮        ⋮  |
          | Ln1       Ln2      …   Lnn − Z   g1n   g2n  …  gmn |
          | g11       g12      …   g1n        0     0   …   0  |
          | g21       g22      …   g2n        0     0   …   0  |
          |  ⋮          ⋮            ⋮          ⋮     ⋮        ⋮  |
          | gm1       gm2      …   gmn        0     0   …   0  |

      D = 0
Optimization using Calculus
Setting D = 0 leads to a polynomial in Z of order (n − m).

Solve for Z:
• If all Z values are positive … X* corresponds to a minimum.
• If all Z values are negative … X* corresponds to a maximum.
• If some values are positive and some are negative … X* is neither a minimum nor a maximum.
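This sufficiency test can be illustrated on Example 3 (my own application, not worked in the slides). With n = 2, m = 1, D(Z) is linear in Z, and its single root decides the character of X* = (2, 2); the Lij and gij values below are derived from L = f + λ(x1 + x2 − 4):

```python
import numpy as np

# For Example 3: L11 = L22 = 2, L12 = L21 = 4 (second partials of L),
# and g = x1 + x2 - 4 gives g11 = g12 = 1 (my own worked values).
def D(Z):
    """Bordered determinant D(Z) for n = 2 variables, m = 1 constraint."""
    M = np.array([[2.0 - Z, 4.0,     1.0],
                  [4.0,     2.0 - Z, 1.0],
                  [1.0,     1.0,     0.0]])
    return np.linalg.det(M)

# n - m = 1, so D(Z) is linear in Z: recover its root from two evaluations
d0, d1 = D(0.0), D(1.0)
z_root = -d0 / (d1 - d0)
print(z_root)   # negative -> X* = (2, 2) is a maximum
```

The single root is Z = −2 < 0, so X* = (2, 2) is a maximum, consistent with the direct-substitution result in Example 3.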
