
EPE-821
Optimization and Economics of Integrated Power Systems
Dr. Muhammad Naeem
EPE 821 Lec 4


Optimality Condition
Suppose that F(x) is a continuously differentiable function of the scalar variable x, and that, when x = x*,

    dF/dx = 0   and   d²F/dx² > 0        (1)

These two conditions are known as the optimality conditions.

Conditions (1) imply that F(x*) is the smallest value of F in some region near x*. It may also be true that F(x*) ≤ F(x) for all x, but conditions (1) do not guarantee this.

Definition: If conditions (1) hold at x = x* and F(x*) < F(x) for all x ≠ x*, then x* is said to be the global minimum.


Necessary and Sufficient Conditions
Example: If a number is divisible by 4 (call this H), then it is divisible by 2 (call this C). H implies C, but C is not strong enough to imply H: for example, 6 is divisible by 2 but not by 4. Therefore C is necessary for H but not sufficient for H.

Example: In Euclidean geometry, a triangle has equal sides (call this H) if and only if the triangle has equal angles (call this C). This means:

H if C: H is necessary for C, since C implies H.
H only if C: C is necessary for H, since H implies C.
H if and only if C: H and C imply each other and are both necessary and sufficient conditions for each other.


In the context of smooth (differentiable) functions, the condition f′(x) = 0 is a necessary condition for a relative maximum or minimum. It is not sufficient, because zero slope can also occur at an inflection point.

Given f′(x) = 0, the sign of f″(x) supplies a sufficient condition: f″(x) > 0 gives a relative minimum, and f″(x) < 0 a relative maximum.
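The inflection-point caveat can be checked numerically. A minimal Python sketch (f(x) = x³ is a standard counterexample; it is not taken from the slides):

```python
# Necessary vs. sufficient: f(x) = x^3 has f'(0) = 0 (necessary condition
# satisfied) yet no extremum at x = 0 -- it is an inflection point.
def fprime(x):
    return 3 * x**2            # derivative of x^3

def fsecond(x):
    return 6 * x               # second derivative of x^3

print(fprime(0.0))               # 0.0: necessary condition f'(x) = 0 holds
print(fsecond(0.0))              # 0.0: second-derivative sign test is inconclusive
print((-0.1)**3 < 0.0 < 0.1**3)  # True: f takes values on both sides of f(0)
```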
Direct search and gradient methods
• Both are iterative methods.
• Direct search techniques are based on simple comparison of function values at trial points.
• Gradient methods use derivatives of the objective function and can be viewed as iterative algorithms for solving the nonlinear equation dF/dx = 0.
• Gradient methods tend to converge faster than direct search methods. They also have the advantage of permitting an obvious convergence test: stop the iterations when the gradient is near zero.
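The two families can be contrasted with a few lines of code. A Python sketch (the function x³ - 3x² and the interval [1.5, 3] are borrowed from the examples later in these slides):

```python
# Direct search vs. gradient methods on F(x) = x^3 - 3*x^2 over [1.5, 3].
def F(x):
    return x**3 - 3 * x**2

def dF(x):
    return 3 * x**2 - 6 * x

# Direct search: simple comparison of function values at trial points.
trials = [1.5 + i / 1000 for i in range(1501)]   # 1.500, 1.501, ..., 3.000
best = min(trials, key=F)
print(best)                    # 2.0, the minimizer

# Gradient-style convergence test: the gradient is (near) zero at the minimum.
print(abs(dF(best)) < 1e-6)    # True
```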


Example
Find the minimum and maximum of x³ - 3x² for x ∈ [-1, 4].

Solution:

F  x3  3x 2
F   3 x 2  6 x  0  at x  0 and x  2
F   6 x  6 x 3-3*x 2
25
Orig
20 1st Derv.
2nd Derv.

F (0)  0 Local Maximum 15

F (2)  0
10

Local Minimum 5

-5

-10

-15
-1 -0.5 0 0.5 1 1.5 2 2.5 3 3.5 4

EPE 821 Lec 4 7
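The second-derivative test in this example can be verified numerically. A small Python check (the slides use MATLAB; Python here is just for illustration):

```python
# Second-derivative test for F(x) = x^3 - 3*x^2 at its stationary points.
def dF(x):
    return 3 * x**2 - 6 * x    # F'(x)

def d2F(x):
    return 6 * x - 6           # F''(x)

for x in (0.0, 2.0):           # stationary points found analytically
    kind = "local minimum" if d2F(x) > 0 else "local maximum"
    print(f"x = {x}: F'(x) = {dF(x)}, F''(x) = {d2F(x)} -> {kind}")
```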


Bisection Method to Find the Root
Only suitable for functions that have a single root in the range [a, b].

Given a bracketed root, the method repeatedly halves the interval while continuing to bracket the root, and it will converge on the solution.


Bisection Method to Find the Root
1. Choose xl and xu as two guesses for the root such that f(xl) f(xu) < 0; in other words, f(x) changes sign between xl and xu.
2. Estimate the root xm of the equation f(x) = 0 as the midpoint between xl and xu:

       xm = (xl + xu) / 2

3. If f(xl) f(xm) < 0, the root lies between xl and xm; set xl = xl, xu = xm.
   Else if f(xl) f(xm) > 0, the root lies between xm and xu; set xl = xm, xu = xu.
   Else f(xl) f(xm) = 0, so the root is xm; stop the algorithm if this is true.
4. Get a new estimate: xm_new = (xl + xu) / 2.
5. Compute the absolute relative approximate error:

       εa = |(xm_new - xm_old) / xm_new| × 100

   where xm_old is the previous estimate of the root and xm_new is the current estimate.
6. Check whether the error is less than a pre-specified tolerance, or whether the maximum number of iterations has been reached; if so, stop, otherwise return to step 3.
Bisection Method
f(x) = x³ - 0.165x² + 3.993×10⁻⁴

Choose the bracket:
xl = 0.00, xu = 0.11
f(0.00) = 3.993×10⁻⁴
f(0.11) = -2.662×10⁻⁴


Bisection Method
Iteration #1

xl = 0, xu = 0.11
xm = (0 + 0.11) / 2 = 0.055

f(0) = 3.993×10⁻⁴
f(0.11) = -2.662×10⁻⁴
f(0.055) = 6.655×10⁻⁵

f(xl) f(xm) > 0, so the root lies in [0.055, 0.11]:
xl = 0.055
xu = 0.11


Bisection Method
Iteration #2

xl = 0.055, xu = 0.11
xm = (0.055 + 0.11) / 2 = 0.0825
εa = 33.33%

f(0.055) = 6.655×10⁻⁵
f(0.11) = -2.662×10⁻⁴
f(0.0825) = -1.62216×10⁻⁴

f(xl) f(xm) < 0, so: xl = 0.055, xu = 0.0825


Bisection Method
Iteration #3

xl = 0.055, xu = 0.0825
xm = (0.055 + 0.0825) / 2 = 0.06875
εa = 20%

f(0.055) = 6.655×10⁻⁵
f(0.0825) = -1.62216×10⁻⁴
f(0.06875) = -5.5632×10⁻⁵

f(xl) f(xm) < 0, so: xl = 0.055, xu = 0.06875


Bisection Method
Convergence
Table 1: Root of f(x) = 0 as a function of the number of iterations for the bisection method.

Iteration     xl         xu         xm        εa %       f(xm)
    1       0.00000    0.11       0.055     -------     6.655×10⁻⁵
    2       0.055      0.11       0.0825    33.33      -1.6222×10⁻⁴
    3       0.055      0.0825     0.06875   20.00      -5.5632×10⁻⁵
    4       0.055      0.06875    0.06188   11.11       4.4843×10⁻⁶
    5       0.06188    0.06875    0.06531    5.263     -2.5939×10⁻⁵
    6       0.06188    0.06531    0.06359    2.702     -1.0804×10⁻⁵
    7       0.06188    0.06359    0.06273    1.369     -3.1768×10⁻⁶
    8       0.06188    0.06273    0.0623     0.6896     6.4973×10⁻⁷
    9       0.0623     0.06273    0.06252    0.3436    -1.2646×10⁻⁶
   10       0.0623     0.06252    0.06241    0.1721    -3.0767×10⁻⁷
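The iterations of Table 1 can be reproduced with a short routine. A Python sketch of the algorithm (the slides' own examples use MATLAB; this is a stand-in for illustration):

```python
# Bisection on f(x) = x^3 - 0.165*x^2 + 3.993e-4 with bracket [0, 0.11],
# reproducing the iterations of Table 1.
def f(x):
    return x**3 - 0.165 * x**2 + 3.993e-4

xl, xu = 0.0, 0.11
assert f(xl) * f(xu) < 0                  # valid bracket: f changes sign
xm_old = None
for it in range(1, 11):
    xm = (xl + xu) / 2.0                  # midpoint estimate of the root
    ea = "-----" if xm_old is None else f"{abs((xm - xm_old) / xm) * 100:.4f}"
    print(f"{it:2d}  xm = {xm:.5f}  ea% = {ea}  f(xm) = {f(xm):.4e}")
    if f(xl) * f(xm) < 0:                 # root lies in [xl, xm]
        xu = xm
    elif f(xl) * f(xm) > 0:               # root lies in [xm, xu]
        xl = xm
    else:                                 # f(xm) == 0: exact root found
        break
    xm_old = xm
```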
Bisection Method
Advantages:
• Always convergent.
• The root bracket is guaranteed to be halved with each iteration.
Disadvantages:
• Slow convergence.
• If one of the initial guesses is close to the root, the convergence is slower.
• If a function f(x) just touches the x-axis, it is impossible to find lower and upper guesses that bracket the root, e.g. f(x) = x².

[Plot: y = x², which touches the x-axis at x = 0 without changing sign]
Bisection Method
• The function may change sign even though no root exists, e.g. f(x) = 1/x, which changes sign across x = 0 but has no root there.

[Plot: y = 1/x for -0.9 ≤ x ≤ 0.9, showing a sign change at x = 0 with no root]


Difference between roots and optima
[Figure: a root is a point where f(x) = 0; an optimum is a point where f′(x) = 0]


Bisection method for optimization
Find the minimum of x³ - 3x² for x ∈ [1.5, 3]. Also write MATLAB code for verification.

Can we apply the bisection method directly to this problem? No. Why? Root-finding algorithms find the point where the function is zero; here we want the point where the derivative is zero.

To get the minimum we need to apply the bisection method to the derivative of x³ - 3x²; otherwise we will get the wrong answer.

[Plot: x³ - 3x² together with its first and second derivatives for -1 ≤ x ≤ 4]
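The slides ask for MATLAB verification; as a sketch of the same idea in Python, bisection is applied to the derivative F′(x) = 3x² - 6x on [1.5, 3]:

```python
# Minimum of F(x) = x^3 - 3*x^2 on [1.5, 3] via bisection on the DERIVATIVE:
# the minimizer is the root of F'(x) = 3*x^2 - 6*x.
def dF(x):
    return 3 * x**2 - 6 * x

xl, xu = 1.5, 3.0
assert dF(xl) * dF(xu) < 0       # F' changes sign: a stationary point is bracketed
for _ in range(40):              # 40 halvings shrink the bracket below 1e-11
    xm = (xl + xu) / 2.0
    if dF(xl) * dF(xm) < 0:
        xu = xm                  # stationary point in [xl, xm]
    else:
        xl = xm                  # stationary point in [xm, xu]
print(xm)                        # ~2.0, the minimizer of x^3 - 3*x^2
```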


Secant method to find root
Equation of a line from 2 points. From the definition of slope, we can determine the slope in this case as

    m = (y2 - y1) / (x2 - x1)

Plugging this in for m in the slope-point formula:

    (y - y1) / (x - x1) = (y2 - y1) / (x2 - x1)

    y - y1 = [(y2 - y1) / (x2 - x1)] (x - x1)

This is called the two-point formula.


Secant method to find root
Derivation of the method: Setting y = 0 in the two-point formula through (x0, f(x0)) and (x1, f(x1)) and solving for x gives the new estimate

    x = x1 - f(x1) (x1 - x0) / (f(x1) - f(x0))

We then use this new value of x as x2 and repeat the process using x1 and x2 instead of x0 and x1. We continue this process, solving for x3, x4, etc., until we reach a sufficiently high level of precision (a sufficiently small difference between xn and xn-1).


Secant method to find root
[Figure: the first two iterations of the secant method. The red curve shows the function f and the blue lines are the secants. For this particular case, the secant method will not converge.]


Secant method to find root
Initialize x1, x2, ε = 10⁻⁵, i = 2
Repeat:
    x(i+1) = x(i) - f(x(i)) · (x(i) - x(i-1)) / (f(x(i)) - f(x(i-1)))
    If |f(x(i+1))| ≤ ε:  x_Root = x(i+1); end
    Else:  i = i + 1 and repeat
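The flowchart translates directly into code. A Python sketch, reusing the cubic from the bisection slides as the test function (that pairing is an assumption, not stated on this slide):

```python
# Secant iteration for f(x) = 0, following the flowchart:
# x_{i+1} = x_i - f(x_i) * (x_i - x_{i-1}) / (f(x_i) - f(x_{i-1}))
def f(x):
    return x**3 - 0.165 * x**2 + 3.993e-4   # cubic from the bisection example

def secant_root(f, x_prev, x_curr, eps=1e-5, max_iter=50):
    for _ in range(max_iter):
        x_next = x_curr - f(x_curr) * (x_curr - x_prev) / (f(x_curr) - f(x_prev))
        if abs(f(x_next)) <= eps:            # stopping test from the flowchart
            return x_next
        x_prev, x_curr = x_curr, x_next
    return x_next

root = secant_root(f, 0.02, 0.05)
print(root)                                  # a root near 0.062
```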


Secant method for Optimization
Initialize x1, x2, ε = 10⁻⁵, i = 2
Repeat:
    x(i+1) = x(i) - f′(x(i)) · (x(i) - x(i-1)) / (f′(x(i)) - f′(x(i-1)))
    If |f′(x(i+1))| ≤ ε:  x_Root = x(i+1); end
    Else:  i = i + 1 and repeat


Secant method for Optimization
Find the minimum of x³ - 3x² for x ∈ [1.5, 3]. Also write MATLAB code for verification.

F = x^3 - 3*x^2;   % function
dF = 3*x^2 - 6*x;  % derivative

x2 is 1.800000 and dF is -1.080000
x2 is 1.928571 and dF is -0.413265
x2 is 2.008264 and dF is 0.049792
x2 is 1.999695 and dF is -0.001828
x2 is 1.999999 and dF is -0.000008
x2 is 2.000000 and dF is 0.000000

[Plot: x³ - 3x² together with its first and second derivatives for -1 ≤ x ≤ 4]
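A Python equivalent of the MATLAB verification. The starting guesses x1 = 1.5 and x2 = 3.0 (the interval endpoints) are an assumption, chosen because they reproduce the printed iterates:

```python
# Secant method applied to dF(x) = 3*x^2 - 6*x to minimize F(x) = x^3 - 3*x^2.
# Starting guesses 1.5 and 3.0 (interval endpoints) are an assumption.
def dF(x):
    return 3 * x**2 - 6 * x

x_prev, x_curr = 1.5, 3.0
for _ in range(20):
    x_next = x_curr - dF(x_curr) * (x_curr - x_prev) / (dF(x_curr) - dF(x_prev))
    print(f"x2 is {x_next:.6f} and dF is {dF(x_next):.6f}")
    if abs(dF(x_next)) <= 1e-6:      # stop when the derivative is near zero
        break
    x_prev, x_curr = x_curr, x_next
```

The first line printed is `x2 is 1.800000 and dF is -1.080000`, matching the slide's first iterate.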


Secant method for Optimization
[Plot: the same example, with the secant iterates (Iter = 1 … 5) overlaid on x³ - 3x² and its first derivative]


Newton-Raphson or Newton method
Root: approximating the slope at xi by the tangent through (xi, f(xi)) that crosses the axis at xi+1,

    f′(xi) = (f(xi) - 0) / (xi - xi+1)

which can be rearranged as

    xi+1 = xi - f(xi) / f′(xi)

Repeat the process until |f(xi)| is sufficiently small.

Optimization: apply the same update to f′ instead of f,

    xi+1 = xi - f′(xi) / f″(xi)
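Applied to the running example F(x) = x³ - 3x², the optimization update can be sketched in Python (the starting guess x = 3 is an assumption):

```python
# Newton's method for optimization on F(x) = x^3 - 3*x^2:
# x_{i+1} = x_i - F'(x_i) / F''(x_i)
def dF(x):
    return 3 * x**2 - 6 * x    # F'(x)

def d2F(x):
    return 6 * x - 6           # F''(x)

x = 3.0                        # starting guess
for _ in range(20):
    if abs(dF(x)) < 1e-10:     # stop when the gradient is near zero
        break
    x = x - dF(x) / d2F(x)     # Newton update
print(x)                       # converges to the local minimum at x = 2
```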
