
Determining Lambda Using Binary Search

Methodology to solve the economic dispatch by searching over values of lambda:
adjust lambda up or down until the sum of the generator outputs meets the load to be supplied.
Lambda-Iteration Solution Method
– the method requires that there be a unique mapping from a value of lambda (marginal cost) to each generator's MW output
– for any choice of lambda (marginal cost), the generators collectively produce a total MW output
– the method then starts with values of lambda below and above the optimal value (corresponding to too little and too much total output), and then iteratively brackets the optimal value.
Lambda-Iteration Algorithm

Pick λ^L and λ^H such that
    Σ_{i=1..m} P_Gi(λ^L) − P_D < 0
    Σ_{i=1..m} P_Gi(λ^H) − P_D > 0

While λ^H − λ^L ≥ ε Do
    λ^M = (λ^H + λ^L) / 2
    If Σ_{i=1..m} P_Gi(λ^M) − P_D > 0 Then λ^H = λ^M
    Else λ^L = λ^M
End While
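Below is a minimal Python sketch of this bisection on lambda. The names lambda_bisection and total_gen are illustrative and not from the slides; total_gen(λ) stands for the sum of the P_Gi(λ) functions.

```python
def lambda_bisection(total_gen, p_load, lam_lo, lam_hi, tol=0.05):
    """Bisect on lambda until the bracketing interval is narrower than tol.

    total_gen(lam) must return the total MW output at marginal cost lam and
    must be non-decreasing in lam, so that [lam_lo, lam_hi] brackets the
    value of lambda at which total output equals the load p_load.
    """
    assert total_gen(lam_lo) - p_load < 0, "lam_lo must give too little output"
    assert total_gen(lam_hi) - p_load > 0, "lam_hi must give too much output"
    while lam_hi - lam_lo >= tol:
        lam_mid = (lam_hi + lam_lo) / 2
        if total_gen(lam_mid) - p_load > 0:
            lam_hi = lam_mid   # too much generation: tighten the upper bound
        else:
            lam_lo = lam_mid   # too little generation: raise the lower bound
    return (lam_hi + lam_lo) / 2
```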
Lambda-Iteration: Graphical View
In the graph shown below, for each value of lambda there is a unique P_Gi for each generator. This relationship is the P_Gi(λ) function.

Lambda-Iteration Example
Consider a three generator system with
    IC_1(P_G1) = 15 + 0.02 P_G1   $/MWh
    IC_2(P_G2) = 20 + 0.01 P_G2   $/MWh
    IC_3(P_G3) = 18 + 0.025 P_G3  $/MWh
and with constraint P_G1 + P_G2 + P_G3 = 1000 MW
Rewriting generation as a function of λ, P_Gi(λ), we have
    P_G1(λ) = (λ − 15) / 0.02
    P_G2(λ) = (λ − 20) / 0.01
    P_G3(λ) = (λ − 18) / 0.025
Lambda-Iteration Example, cont'd
Pick λ^L so that Σ_{i=1..m} P_Gi(λ^L) − 1000 < 0 and
     λ^H so that Σ_{i=1..m} P_Gi(λ^H) − 1000 > 0

Try λ^L = 20:  Σ_{i=1..m} P_Gi(20) − 1000
    = (20 − 15)/0.02 + (20 − 20)/0.01 + (20 − 18)/0.025 − 1000 = −670 MW

Try λ^H = 30:  Σ_{i=1..m} P_Gi(30) − 1000 = 1230 MW
Lambda-Iteration Example, cont'd
Pick convergence tolerance ε = 0.05 $/MWh
Then iterate, since λ^H − λ^L ≥ 0.05:
    λ^M = (λ^H + λ^L) / 2 = 25
Since Σ_{i=1..m} P_Gi(25) − 1000 = 280 > 0, we set λ^H = 25
Since 25 − 20 ≥ 0.05:
    λ^M = (25 + 20) / 2 = 22.5
Since Σ_{i=1..m} P_Gi(22.5) − 1000 = −195 < 0, we set λ^L = 22.5
Lambda-Iteration Example, cont'd
Continue iterating until λ^H − λ^L < 0.05
The solution value of λ, λ*, is 23.53 $/MWh
Once λ* is known we can calculate the P_Gi:
    P_G1(23.53) = (23.53 − 15) / 0.02  = 426 MW
    P_G2(23.53) = (23.53 − 20) / 0.01  = 353 MW
    P_G3(23.53) = (23.53 − 18) / 0.025 = 221 MW
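As a quick check, the three-generator example can be fed to the lambda_bisection sketch above; the data below simply restates the slide's cost coefficients.

```python
# Inverse of the incremental cost curves: P_Gi(lam) = (lam - a_i) / b_i
gens = [(15, 0.02), (20, 0.01), (18, 0.025)]   # (a_i, b_i) with IC_i = a_i + b_i * P_Gi

def total_gen(lam):
    return sum((lam - a) / b for a, b in gens)

lam_star = lambda_bisection(total_gen, p_load=1000, lam_lo=20, lam_hi=30, tol=0.05)
print(lam_star)                               # about 23.53 $/MWh
print([(lam_star - a) / b for a, b in gens])  # about [426, 353, 221] MW
```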
Lambda-Iteration with Gen Limits
In the lambda-iteration method the limits are taken into account when calculating P_Gi(λ):
    if the calculated production P_Gi > P_Gi,max then set P_Gi(λ) = P_Gi,max
    if the calculated production P_Gi < P_Gi,min then set P_Gi(λ) = P_Gi,min
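One straightforward way to fold the limits into the evaluation of P_Gi(λ) is to clamp each generator's output, as in the sketch below; the helper name gen_output_with_limits is illustrative.

```python
def gen_output_with_limits(lam, a, b, p_min, p_max):
    """P_Gi(lam) = (lam - a) / b, clamped to the generator's MW limits."""
    p = (lam - a) / b
    return min(max(p, p_min), p_max)
```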
Lambda-Iteration Gen Limit Example
In the previous three generator example assume the same cost characteristics but also with limits
    0 ≤ P_G1 ≤ 300 MW    100 ≤ P_G2 ≤ 500 MW    200 ≤ P_G3 ≤ 600 MW
With limits we get:
    Σ_{i=1..m} P_Gi(20) − 1000 = P_G1(20) + P_G2(20) + P_G3(20) − 1000
                               = 250 + 100 + 200 − 1000 = −450 MW   (compared to −670 MW)
    Σ_{i=1..m} P_Gi(30) − 1000 = 300 + 500 + 480 − 1000 = 280 MW
Lambda-Iteration Limit Example, cont'd
Again we continue iterating until the convergence condition is satisfied.
With limits, the final solution of λ is 24.43 $/MWh (compared to 23.53 $/MWh without limits).
Maximum limits will always cause λ to either increase or remain the same.
Final solution is:
    P_G1(24.43) = 300 MW (at maximum limit)
    P_G2(24.43) = 443 MW
    P_G3(24.43) = 257 MW
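Reusing the lambda_bisection and gen_output_with_limits sketches above, the with-limits case can be reproduced as follows (data restated from the slides).

```python
# (a_i, b_i, P_min, P_max) for the three generators
gens = [(15, 0.02, 0, 300), (20, 0.01, 100, 500), (18, 0.025, 200, 600)]

def total_gen_limited(lam):
    return sum(gen_output_with_limits(lam, a, b, p_min, p_max)
               for a, b, p_min, p_max in gens)

lam_star = lambda_bisection(total_gen_limited, p_load=1000,
                            lam_lo=20, lam_hi=30, tol=0.05)
print(lam_star)   # about 24.43 $/MWh, with dispatch of about 300, 443 and 257 MW
```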
Optimization of Linear Problems:
Linear Programming (LP)

Motivation
• Many optimization problems are linear
• Linear objective function
• All constraints are linear
• Non-linear problems can be linearized:
• Piecewise linear cost curves
• DC power flow
• Efficient and robust method to solve such problems

Piecewise linearization of a cost curve
[Figure: the cost curve C_A as a function of P_A, approximated by linear segments]
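A short sketch of one way to build such a piecewise-linear approximation from a smooth cost curve; the quadratic cost coefficients and the number of segments below are illustrative, not taken from the slides.

```python
def piecewise_linearize(cost, p_min, p_max, n_segments):
    """Return (start point, slope) pairs approximating cost(p) on [p_min, p_max]."""
    step = (p_max - p_min) / n_segments
    segments = []
    for k in range(n_segments):
        p0, p1 = p_min + k * step, p_min + (k + 1) * step
        slope = (cost(p1) - cost(p0)) / (p1 - p0)   # incremental cost of this segment
        segments.append((p0, slope))
    return segments

# Illustrative quadratic cost curve C_A(P_A)
print(piecewise_linearize(lambda p: 100 + 15 * p + 0.01 * p ** 2, 0, 400, 4))
```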
Mathematical formulation

minimize      Σ_j c_j x_j

subject to:   Σ_j a_ij x_j ≤ b_i     for each inequality constraint i
              Σ_j c_ij x_j = d_i     for each equality constraint i

c_j, a_ij, b_i, c_ij, d_i are constants
Example 1
Maximize x + y
Subject to:  x ≥ 0;  y ≥ 0
             x ≤ 3
             y ≤ 4
             x + 2y ≥ 2
[Figure: the feasible region in the (x, y) plane]
Example 1
Maximize x + y subject to:  x ≥ 0;  y ≥ 0;  x ≤ 3;  y ≤ 4;  x + 2y ≥ 2
[Figure: the feasible region with the objective line x + y = 0]
Example 1
Maximize x + y subject to:  x ≥ 0;  y ≥ 0;  x ≤ 3;  y ≤ 4;  x + 2y ≥ 2
[Figure: the objective line x + y = 1 passes through a feasible solution]
Example 1
Maximize x + y subject to:  x ≥ 0;  y ≥ 0;  x ≤ 3;  y ≤ 4;  x + 2y ≥ 2
[Figure: the objective line x + y = 2 passes through a feasible solution]
Example 1
Maximize x + y subject to:  x ≥ 0;  y ≥ 0;  x ≤ 3;  y ≤ 4;  x + 2y ≥ 2
[Figure: the objective line x + y = 3]
Example 1
Maximize x + y subject to:  x ≥ 0;  y ≥ 0;  x ≤ 3;  y ≤ 4;  x + 2y ≥ 2
[Figure: the optimal solution at the vertex x = 3, y = 4, where x + y = 7]
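For reference, Example 1 can also be handed to an off-the-shelf LP solver. The sketch below assumes SciPy is available and uses scipy.optimize.linprog, which minimizes, so the objective is negated and the ≥ constraint is rewritten as a ≤ constraint.

```python
from scipy.optimize import linprog

# Maximize x + y  <=>  minimize -(x + y)
# x <= 3, y <= 4, x + 2y >= 2 (rewritten as -x - 2y <= -2), x >= 0, y >= 0
res = linprog(c=[-1, -1],
              A_ub=[[1, 0], [0, 1], [-1, -2]],
              b_ub=[3, 4, -2],
              bounds=[(0, None), (0, None)],
              method="highs")
print(res.x, -res.fun)   # optimal point (3, 4) with objective value 7
```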
Solving a LP problem (1)
• Constraints define a polyhedron in n dimensions
• If a solution exists, it will be at an extreme point
(vertex) of this polyhedron
• Starting from any feasible solution, we can find the
optimal solution by following the edges of the
polyhedron
• Simplex algorithm determines which edge should be
followed next

Which direction?
Maximize x + y subject to:  x ≥ 0;  y ≥ 0;  x ≤ 3;  y ≤ 4;  x + 2y ≥ 2
[Figure: from a vertex of the feasible region, which edge should be followed toward the optimal solution at x + y = 7?]
Solving a LP problem (2)
• If a solution exists, the Simplex algorithm will find it
• But it could take a long time for a problem with many
variables!
• Interior point algorithms
• Equivalent to optimization with barrier functions

Interior point methods
[Figure: a polyhedral feasible region with its extreme points (vertices) and constraints (edges)]
Simplex: search from vertex to vertex along the edges
Interior-point methods: go through the inside of the feasible space
Sequential Linear Programming (SLP)
• Used if more accuracy is required
• Algorithm (sketched in code below):
  • Linearize
  • Find a solution using LP
  • Linearize again around the solution
  • Repeat until convergence
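A hedged outline of the SLP loop described above; linearize_around and solve_lp are placeholders for problem-specific routines, not functions defined in the slides.

```python
def sequential_lp(x0, linearize_around, solve_lp, tol=1e-4, max_iter=50):
    """Repeatedly linearize the non-linear problem and re-solve the resulting LP."""
    x = x0
    for _ in range(max_iter):
        lp_model = linearize_around(x)   # build a linear approximation at the current point
        x_new = solve_lp(lp_model)       # solve the linearized problem
        if max(abs(a - b) for a, b in zip(x_new, x)) < tol:
            return x_new                 # converged: successive solutions agree
        x = x_new
    return x
```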
Summary
• Main advantages of LP over NLP:
• Robustness
• If there is a solution, it will be found
• Unlike NLP, there is only one solution
• Speed
• Very efficient implementations of LP solution algorithms are
available in commercial solvers
• Many non-linear optimization problems are
linearized so they can be solved using LP

Non-Linear Programming

Motivation
• Method of Lagrange multipliers
• Very useful insight into solutions
• Analytical solution practical only for small problems
• Direct application not practical for real-life problems
because these problems are too large
• Difficulties when problem is non-convex
• Often need to search for the solution of practical
optimization problems using:
• Objective function only or
• Objective function and its first derivative or
• Objective function and its first and second derivatives

Naïve One-Dimensional Search
• Suppose:
• That we want to find the value of x that minimizes f(x)
• That the only thing that we can do is calculate the value
of f(x) for any value of x
• We could calculate f(x) for a range of values of x
and choose the one that minimizes f(x)

Naïve One-Dimensional Search
[Figure: f(x) evaluated over a range of values of x]
• Requires a considerable amount of computing time if the range is large and good accuracy is needed
One-Dimensional Search
[Figure: f(x) evaluated at a first point x0]
One-Dimensional Search
[Figure: f(x) evaluated at x0 and x1]
One-Dimensional Search
[Figure: f(x) evaluated at x0, x1 and x2, which define the current search range]
If the function is convex, we have bracketed the optimum
One-Dimensional Search
[Figure: f(x) evaluated at x0, x1, x3 and x2]
The optimum is between x1 and x2
We do not need to consider x0 anymore
One-Dimensional Search
[Figure: f(x) evaluated at x0, x1, x4, x3 and x2]
Repeat the process until the range is sufficiently small
One-Dimensional Search
[Figure: f(x) evaluated at x0, x1, x4, x3 and x2]
The procedure is valid only if the function is convex!
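For a convex f(x), the interval-shrinking search in the figures can be implemented, for example, with a golden-section scheme; the sketch below is one common variant and not necessarily the exact point-placement rule drawn in the slides.

```python
import math

def golden_section_minimize(f, x_lo, x_hi, tol=1e-5):
    """Shrink the bracket [x_lo, x_hi] around the minimum of a convex f."""
    inv_phi = (math.sqrt(5) - 1) / 2          # about 0.618
    a, b = x_lo, x_hi
    x1, x2 = b - inv_phi * (b - a), a + inv_phi * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:                           # the minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - inv_phi * (b - a)
            f1 = f(x1)
        else:                                 # the minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + inv_phi * (b - a)
            f2 = f(x2)
    return (a + b) / 2

print(golden_section_minimize(lambda x: (x - 2.0) ** 2, 0.0, 5.0))   # about 2.0
```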
Multi-Dimensional Search
• Unidirectional search not applicable
• Naïve search becomes totally impossible as
dimension of the problem increases
• If we can calculate the first derivatives of the
objective function, much more efficient searches
can be developed
• The gradient of a function gives the direction in
which it increases/decreases fastest

Steepest Ascent/Descent Algorithm
[Sequence of figures: contours of the objective function in the (x1, x2) plane; each iteration moves along the gradient direction and performs a unidirectional search along that direction, and the successive points approach the optimum]
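A small steepest-descent sketch in which each iteration performs a crude unidirectional search along the negative gradient; the quadratic test function and the list of candidate step sizes are illustrative only.

```python
def steepest_descent(f, grad, x, step_candidates, n_iter=20):
    """At each iteration, move against the gradient, choosing the step size
    along that direction with a simple unidirectional search."""
    for _ in range(n_iter):
        d = [-g for g in grad(x)]   # steepest-descent direction
        step = min(step_candidates,
                   key=lambda s: f([xi + s * di for xi, di in zip(x, d)]))
        x = [xi + step * di for xi, di in zip(x, d)]
    return x

# Illustrative objective and gradient (not from the slides)
f = lambda x: (x[0] - 1) ** 2 + 10 * (x[1] + 2) ** 2
grad_f = lambda x: [2 * (x[0] - 1), 20 * (x[1] + 2)]

print(steepest_descent(f, grad_f, [0.0, 0.0], [s / 100 for s in range(1, 101)]))
# the iterates approach the minimum at (1, -2)
```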
Choosing a Direction
• Direction of steepest ascent/descent is not always
the best choice
• Other techniques have been used with varying
degrees of success
• In particular, the direction chosen must be
consistent with the equality constraints

How far to go in that direction?
• Unidirectional searches can be time-consuming
• Second order techniques that use information about
the second derivative of the objective function can be
used to speed up the process
• Problem with the inequality constraints
• There may be a lot of inequality constraints
• Checking all of them every time we move in one direction
can take an enormous amount of computing time

Handling of inequality constraints
[Figure: move in the direction of the gradient in the (x1, x2) plane. How do I know that I have to stop here?]
Handling of inequality constraints
[Figure: the (x1, x2) plane. How do I know that I have to stop here?]
Non-Robustness
Different starting points may lead to different solutions if the problem is not convex
[Figure: contours in the (x1, x2) plane with local optima A, B, C and D, and starting points X and Y on the x1 axis]
Conclusions
• Very sophisticated non-linear programming methods
have been developed
• They can be difficult to use:
• Different starting points may lead to different solutions
• Some problems will require a lot of iterations
• They may require a lot of “tuning”

Local and Global Optima

Which one is the real maximum?
[Figure: f(x) with local maxima at x = A and x = D]
For x = A and x = D, we have:  df/dx = 0  and  d²f/dx² < 0
Which one is the real optimum?
[Figure: contours of f in the (x1, x2) plane with local optima at A, B, C and D]
A, B, C and D are all local maxima because at each of these points we have:
    ∂f/∂x1 = 0,  ∂f/∂x2 = 0  and  ∂²f/∂x1² < 0,  ∂²f/∂x2² < 0
Local and Global Optima
• The optimality conditions are local conditions
• They do not compare separate optima
• They do not tell us which one is the global optimum
• In general, to find the global optimum, we must
find and compare all the optima
• In large problems, this can require so much time that it is essentially an impossible task

Convexity
• If the feasible set is convex and the objective
function is convex, there is only one minimum and
it is thus the global minimum

Examples of Convex Feasible Sets
[Figures: convex regions in the (x1, x2) plane, and the interval x1min ≤ x1 ≤ x1max on the x1 axis]
Example of Non-Convex Feasible Sets
[Figures: non-convex regions in the (x1, x2) plane, and the points x1a, x1b, x1c, x1d on the x1 axis]
Example of Convex Feasible Sets
A set is convex if, for any two points belonging to the set, all the points on the straight line joining these two points belong to the set.
[Figures: convex regions in the (x1, x2) plane, and the interval x1min ≤ x1 ≤ x1max on the x1 axis]
Example of Non-Convex Feasible Sets
[Figures: non-convex regions in the (x1, x2) plane, and the points x1a, x1b, x1c, x1d on the x1 axis]
Example of Convex Function
[Figure: a convex function f(x)]
Example of Convex Function
[Figure: a convex function of x1 and x2]
Example of Non-Convex Function
[Figure: a non-convex function f(x)]
Example of Non-Convex Function
[Figure: contours of a non-convex function of x1 and x2 with multiple optima A, B, C and D]
Definition of a Convex Function
[Figure: a convex function f(x); the chord between xa and xb lies above f(y) for y between xa and xb]
A convex function is a function such that, for any two points xa and xb belonging to the feasible set and any k such that 0 ≤ k ≤ 1, we have:
    z = k f(xa) + (1 − k) f(xb) ≥ f(y) = f[k xa + (1 − k) xb]
Example of Non-Convex Function
[Figure: a non-convex function f(x)]
Importance of Convexity
• If we can prove that a minimization problem is convex:
  • Convex feasible set
  • Convex objective function
  then the problem has one and only one solution
• Proving convexity is often difficult
• Power system problems are usually not convex
  • There may be more than one solution to power system optimization problems
