
Optimal Design of Energy Systems (M2794.003400)

Chapter 16. Calculus Methods of Optimization

Min Soo KIM


Department of Mechanical and Aerospace Engineering
Seoul National University

16.1 Continued exploration of calculus methods

- A major portion of this chapter deals with the principles of calculus methods

- Substantiation for the Lagrange multiplier equations will be provided

- The physical interpretation of λ and tests for maxima and minima are also provided


- Lagrange multipliers (from Chap. 8): to optimize

$$y = y(x_1, x_2, \ldots, x_n)$$

subject to the constraints

$$\phi_1(x_1, x_2, \ldots, x_n) = 0, \quad \ldots, \quad \phi_m(x_1, x_2, \ldots, x_n) = 0$$

solve the n + m scalar equations

$$\nabla y - \sum_{i=1}^{m} \lambda_i \nabla \phi_i = 0, \qquad \phi_i(x_1, x_2, \ldots, x_n) = 0, \quad i = 1, \ldots, m$$

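A minimal sketch of setting up these n + m equations for a small case (assuming SymPy is available; the objective y = x1² + x2² and the single constraint φ1 = x1 + x2 − 2 = 0 are illustrative, not from the text):

```python
import sympy as sp

x1, x2, lam1 = sp.symbols('x1 x2 lambda1')
y    = x1**2 + x2**2          # assumed objective
phi1 = x1 + x2 - 2            # assumed constraint (m = 1)

# grad(y) - lambda1*grad(phi1) = 0 gives n equations; phi1 = 0 gives m more
eqs = [sp.diff(y, v) - lam1*sp.diff(phi1, v) for v in (x1, x2)] + [phi1]
print(sp.solve(eqs, (x1, x2, lam1)))   # x1 = 1, x2 = 1, lambda1 = 2
```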

16.2 The nature of the gradient vector (∇y)

1. Normal to the surface of constant y


For an arbitrary vector $d\vec{x} = dx_1\,\hat{i}_1 + dx_2\,\hat{i}_2 + dx_3\,\hat{i}_3$ to be tangent to the surface of constant y:

$$dy = \frac{\partial y}{\partial x_1}\,dx_1 + \frac{\partial y}{\partial x_2}\,dx_2 + \frac{\partial y}{\partial x_3}\,dx_3 = 0
\;\Rightarrow\;
dx_1 = -\,\frac{(\partial y/\partial x_2)\,dx_2 + (\partial y/\partial x_3)\,dx_3}{\partial y/\partial x_1}$$

Tangent vector:

$$\vec{T} = -\,\frac{(\partial y/\partial x_2)\,dx_2 + (\partial y/\partial x_3)\,dx_3}{\partial y/\partial x_1}\,\hat{i}_1 + dx_2\,\hat{i}_2 + dx_3\,\hat{i}_3$$

Gradient vector:

$$\nabla y = \frac{\partial y}{\partial x_1}\,\hat{i}_1 + \frac{\partial y}{\partial x_2}\,\hat{i}_2 + \frac{\partial y}{\partial x_3}\,\hat{i}_3$$

$$\vec{T} \cdot \nabla y = 0$$

→ the gradient vector is normal to all tangent vectors

→ the gradient vector is normal to the surface

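A quick numerical check of this orthogonality (assuming NumPy; the test function y = x1² + 2x2² + 3x3² is an assumed example, not from the text):

```python
import numpy as np

def grad_y(x):
    # gradient of the assumed test function y = x1**2 + 2*x2**2 + 3*x3**2
    return np.array([2*x[0], 4*x[1], 6*x[2]])

g = grad_y(np.array([1.0, 1.0, 1.0]))

# Pick dx2, dx3 freely and solve dy = 0 for dx1, as in the derivation above,
# so that the step stays on the surface of constant y
dx2, dx3 = 0.3, -0.7
dx1 = -(g[1]*dx2 + g[2]*dx3) / g[0]
T = np.array([dx1, dx2, dx3])

print(np.dot(T, g))   # 0.0 up to round-off: T is normal to grad y
```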

2. Indicates the direction of maximum rate of change of y with respect to x


y y y
to find maximum dy : dy  dx1  dx2   dxn
x1 x2 xn
constraints : ( dx1 ) 2  ( dx2 ) 2   ( dxn ) 2  r 2

The constraint indicates a circle of


radious for 2 dimensions, and a
sphere for 3 dimensions


Using Lagrange multipliers:

$$\frac{\partial y}{\partial x_i} - 2\lambda\,dx_i = 0 \;\Rightarrow\; dx_i = \frac{1}{2\lambda}\,\frac{\partial y}{\partial x_i}$$

In vector form:

$$dx_1\,\hat{i}_1 + dx_2\,\hat{i}_2 + \cdots + dx_n\,\hat{i}_n = \frac{1}{2\lambda}\left(\frac{\partial y}{\partial x_1}\,\hat{i}_1 + \frac{\partial y}{\partial x_2}\,\hat{i}_2 + \cdots + \frac{\partial y}{\partial x_n}\,\hat{i}_n\right) = \frac{1}{2\lambda}\,\nabla y$$

→ ∇y indicates the direction of maximum change of y for a given distance moved in the space

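A sketch of this claim checked by sampling (assuming NumPy; the gradient values are assumed): among many steps of the same length r, none produces a larger dy than the step taken along ∇y.

```python
import numpy as np

rng = np.random.default_rng(0)
g = np.array([2.0, 4.0, 6.0])   # grad y at some point (assumed values)
r = 1e-3                        # fixed step length (the constraint radius)

# dy for the step along grad y, scaled to length r
dy_grad = np.dot(g, r * g / np.linalg.norm(g))

# dy for 10,000 random steps of the same length
steps = rng.normal(size=(10000, 3))
steps = r * steps / np.linalg.norm(steps, axis=1, keepdims=True)
dy_rand = steps @ g

print(dy_grad >= dy_rand.max())   # True: no direction beats grad y
```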

3. Points in the direction of increasing y


y y
small move in the x1-x2 space : dy  dx1  dx2
x1 x2
If the move is made in the direction of y :
dxi
 c  dx1  c (y / x1 ), dx2  c (y / x2 )
y / xi
 y  2  y  2 
dy  c     0
 x1   x2  

→ dy is equal or greater than zero


→ y always increase in the direction of y

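This property is what makes gradient ascent work. A minimal sketch (assuming NumPy; the concave test function is assumed, not from the text):

```python
import numpy as np

def y(x):       return -(x[0] - 1)**2 - (x[1] + 2)**2     # assumed test function
def grad_y(x):  return np.array([-2*(x[0] - 1), -2*(x[1] + 2)])

x = np.array([5.0, 5.0])
c = 0.1                          # step factor c > 0
for _ in range(50):
    x_new = x + c * grad_y(x)    # dx_i = c * (dy/dx_i)
    assert y(x_new) >= y(x)      # dy >= 0 on every step
    x = x_new
print(x)                         # approaches the maximum at (1, -2)
```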

16.5 Two variables and one constraint (proof of the Lagrange multiplier method)

Optimize $y(x_1, x_2)$ subject to $\phi(x_1, x_2) = 0$.

Taylor expansion (first-degree terms):

$$\Delta y = \frac{\partial y}{\partial x_1}\,\Delta x_1 + \frac{\partial y}{\partial x_2}\,\Delta x_2$$

The constraint requires

$$\Delta \phi = \frac{\partial \phi}{\partial x_1}\,\Delta x_1 + \frac{\partial \phi}{\partial x_2}\,\Delta x_2 = 0
\;\Rightarrow\;
\Delta x_1 = -\,\frac{\partial \phi/\partial x_2}{\partial \phi/\partial x_1}\,\Delta x_2$$

Substituting the result for $\Delta x_1$:

$$\Delta y = \left(-\,\frac{\partial y}{\partial x_1}\,\frac{\partial \phi/\partial x_2}{\partial \phi/\partial x_1} + \frac{\partial y}{\partial x_2}\right)\Delta x_2$$


In order that no improvement in y be possible, $\Delta y$ must vanish for arbitrary $\Delta x_2$:

$$\Delta y = \left(-\,\frac{\partial y}{\partial x_1}\,\frac{\partial \phi/\partial x_2}{\partial \phi/\partial x_1} + \frac{\partial y}{\partial x_2}\right)\Delta x_2 = 0
\;\Rightarrow\;
-\,\frac{\partial y}{\partial x_1}\,\frac{\partial \phi/\partial x_2}{\partial \phi/\partial x_1} + \frac{\partial y}{\partial x_2} = 0$$

If λ is defined as

$$\lambda = \frac{\partial y/\partial x_1}{\partial \phi/\partial x_1}$$

then

$$\frac{\partial y}{\partial x_2} - \lambda\,\frac{\partial \phi}{\partial x_2} = 0, \qquad \frac{\partial y}{\partial x_1} - \lambda\,\frac{\partial \phi}{\partial x_1} = 0$$

which are the Lagrange multiplier equations.

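A concrete check of the two derived conditions (assuming SymPy; the problem y = x1·x2 with φ = x1 + 2x2 − 8 = 0 is an assumed example):

```python
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lambda')
y   = x1 * x2            # assumed objective
phi = x1 + 2*x2 - 8      # assumed constraint

# The two derived conditions plus the constraint itself
eqs = [sp.diff(y, x1) - lam*sp.diff(phi, x1),
       sp.diff(y, x2) - lam*sp.diff(phi, x2),
       phi]
print(sp.solve(eqs, (x1, x2, lam)))   # x1 = 4, x2 = 2, lambda = 2
```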

16.6 Three variables and one constraint

In the same manner as with two variables, $y(x_1, x_2, x_3)$ is to be optimized subject to the constraint $\phi(x_1, x_2, x_3) = 0$.

The first-degree terms in the Taylor series are

$$\Delta y = \frac{\partial y}{\partial x_1}\,\Delta x_1 + \frac{\partial y}{\partial x_2}\,\Delta x_2 + \frac{\partial y}{\partial x_3}\,\Delta x_3$$

$$\Delta \phi = \frac{\partial \phi}{\partial x_1}\,\Delta x_1 + \frac{\partial \phi}{\partial x_2}\,\Delta x_2 + \frac{\partial \phi}{\partial x_3}\,\Delta x_3 = 0$$

where Δφ must vanish for the constraint to remain satisfied.


Any two of the three variables can be moved independently, but the motion of the third variable must abide by the constraint. Arbitrarily choosing $x_2$ as the dependent one,

$$\Delta y = \frac{\partial y}{\partial x_1}\,\Delta x_1 - \frac{\partial y}{\partial x_2}\left(\frac{\partial \phi/\partial x_1}{\partial \phi/\partial x_2}\,\Delta x_1 + \frac{\partial \phi/\partial x_3}{\partial \phi/\partial x_2}\,\Delta x_3\right) + \frac{\partial y}{\partial x_3}\,\Delta x_3 = 0$$

In order for no improvement to be possible,

$$\frac{\partial y}{\partial x_1} - \frac{\partial y}{\partial x_2}\,\frac{\partial \phi/\partial x_1}{\partial \phi/\partial x_2} = 0, \qquad \frac{\partial y}{\partial x_3} - \frac{\partial y}{\partial x_2}\,\frac{\partial \phi/\partial x_3}{\partial \phi/\partial x_2} = 0$$


Define λ as

$$\lambda = \frac{\partial y/\partial x_1}{\partial \phi/\partial x_1}$$

Then

$$\frac{\partial y}{\partial x_1} - \lambda\,\frac{\partial \phi}{\partial x_1} = 0, \qquad \frac{\partial y}{\partial x_2} - \lambda\,\frac{\partial \phi}{\partial x_2} = 0, \qquad \frac{\partial y}{\partial x_3} - \lambda\,\frac{\partial \phi}{\partial x_3} = 0$$

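The same machinery extends directly to three variables (assuming SymPy; the problem of maximizing y = x1·x2·x3 subject to x1 + x2 + x3 = 6 is an assumed example):

```python
import sympy as sp

x1, x2, x3, lam = sp.symbols('x1 x2 x3 lambda', positive=True)
y   = x1 * x2 * x3           # assumed objective
phi = x1 + x2 + x3 - 6       # assumed constraint

# Three Lagrange conditions plus the constraint: four equations, four unknowns
eqs = [sp.diff(y, v) - lam*sp.diff(phi, v) for v in (x1, x2, x3)] + [phi]
print(sp.solve(eqs, (x1, x2, x3, lam)))   # x1 = x2 = x3 = 2, lambda = 4
```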

16.8 Alternate expression of the constrained optimization problem

Optimize $y(x_1, x_2)$ subject to $\phi(x_1, x_2) = b$.

Form the unconstrained function

$$L(x_1, x_2) = y(x_1, x_2) - \lambda\,[\phi(x_1, x_2) - b]$$

The optimum occurs where $\nabla L = 0$:

$$\frac{\partial L}{\partial x_1} = \frac{\partial y}{\partial x_1} - \lambda\,\frac{\partial \phi}{\partial x_1} = 0$$

$$\frac{\partial L}{\partial x_2} = \frac{\partial y}{\partial x_2} - \lambda\,\frac{\partial \phi}{\partial x_2} = 0$$

$$\phi(x_1, x_2) - b = 0$$

Solving these three equations locates the optimum point.

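A sketch of the unconstrained formulation (assuming SymPy; minimizing the assumed objective y = x1² + x2² subject to x1·x2 = b with b = 10):

```python
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lambda')
b   = 10
y   = x1**2 + x2**2      # assumed objective
phi = x1 * x2            # assumed constraint function, phi = b

# Unconstrained function L; dL/dlambda = 0 recovers the constraint phi - b = 0
L = y - lam*(phi - b)
eqs = [sp.diff(L, v) for v in (x1, x2, lam)]
print(sp.solve(eqs, (x1, x2, lam)))   # (x1, x2) = (±sqrt(10), ±sqrt(10)), lambda = 2
```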

16.9 Interpretation of λ as the sensitivity coefficient

The sensitivity coefficient (SC) is the rate of change of the optimal value $y^*$ with respect to the constraint constant $b$. Differentiating $y(x_1, x_2)$ and the constraint $\phi(x_1, x_2) = b$ with respect to $b$:

$$\mathrm{SC} = \frac{\partial y^*}{\partial b} = \frac{\partial y^*}{\partial x_1}\,\frac{\partial x_1}{\partial b} + \frac{\partial y^*}{\partial x_2}\,\frac{\partial x_2}{\partial b} \qquad (1)$$

$$\frac{\partial \phi}{\partial x_1}\,\frac{\partial x_1}{\partial b} + \frac{\partial \phi}{\partial x_2}\,\frac{\partial x_2}{\partial b} = 1 \qquad (2)$$

At the optimum,

$$\lambda = \frac{\partial y^*/\partial x_1^*}{\partial \phi/\partial x_1^*} = \frac{\partial y^*/\partial x_2^*}{\partial \phi/\partial x_2^*}$$

Multiplying (2) by λ and substituting $\partial y^*/\partial x_i = \lambda\,\partial \phi/\partial x_i$:

$$\frac{\partial y^*}{\partial x_1}\,\frac{\partial x_1}{\partial b} + \frac{\partial y^*}{\partial x_2}\,\frac{\partial x_2}{\partial b} - \lambda = 0$$

Comparing with (1): $\mathrm{SC} = \lambda$.

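A symbolic confirmation that SC = λ (assuming SymPy; reusing the assumed problem of Section 16.8, y = x1² + x2² with φ = x1·x2 = b): solve the optimum as a function of b and differentiate.

```python
import sympy as sp

x1, x2, lam, b = sp.symbols('x1 x2 lambda b', positive=True)
y   = x1**2 + x2**2      # assumed objective
phi = x1 * x2            # assumed constraint function, phi = b

eqs = [sp.diff(y, x1) - lam*sp.diff(phi, x1),
       sp.diff(y, x2) - lam*sp.diff(phi, x2),
       phi - b]
sol = sp.solve(eqs, (x1, x2, lam), dict=True)[0]

y_star = y.subs(sol)                 # optimal value y* = 2*b
print(sp.diff(y_star, b), sol[lam])  # dy*/db = 2 and lambda = 2: SC = lambda
```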