
Appendix A

Solving Systems of Nonlinear Equations

Chapter 4 of this book describes and analyzes the power flow problem. In its ac
version, this problem is a system of nonlinear equations. This appendix describes
the most common method for solving a system of nonlinear equations, namely, the
Newton-Raphson method. This is an iterative method that starts from initial values
for the unknowns and, at each iteration, updates these values until they no longer
change appreciably between two consecutive iterations.
For the sake of clarity, we first describe the working of this method for the case of
just one nonlinear equation with one unknown. Then, the general case of n nonlinear
equations and n unknowns is considered.
We also explain how to directly solve systems of nonlinear equations using
appropriate software.

A.1 Newton-Raphson Algorithm

The Newton-Raphson algorithm is described in this section.

A.1.1 One Unknown

Consider a nonlinear function $f(x)\colon \mathbb{R} \rightarrow \mathbb{R}$. We aim at finding a value of $x$ so that:

$$f(x) = 0. \tag{A.1}$$

To do so, we first consider a given value of $x$, e.g., $x^{(0)}$. In general, we have that
$f\big(x^{(0)}\big) \neq 0$. Thus, it is necessary to find $\Delta x^{(0)}$ so that $f\big(x^{(0)} + \Delta x^{(0)}\big) = 0$.

© Springer International Publishing AG 2018
A.J. Conejo, L. Baringo, Power System Operations, Power Electronics and Power
Systems, https://doi.org/10.1007/978-3-319-69407-8

 
Using Taylor series, we can express $f\big(x^{(0)} + \Delta x^{(0)}\big)$ as:

$$f\big(x^{(0)} + \Delta x^{(0)}\big) = f\big(x^{(0)}\big) + \Delta x^{(0)} \left.\frac{\mathrm{d}f(x)}{\mathrm{d}x}\right|^{(0)} + \frac{\big(\Delta x^{(0)}\big)^2}{2} \left.\frac{\mathrm{d}^2 f(x)}{\mathrm{d}x^2}\right|^{(0)} + \dots \tag{A.2}$$
Considering only the first two terms in Eq. (A.2), and since we seek to find $\Delta x^{(0)}$
so that $f\big(x^{(0)} + \Delta x^{(0)}\big) = 0$, we can approximately compute $\Delta x^{(0)}$ as:

$$\Delta x^{(0)} \approx -\frac{f\big(x^{(0)}\big)}{\left.\dfrac{\mathrm{d}f(x)}{\mathrm{d}x}\right|^{(0)}}. \tag{A.3}$$

Next, we can update $x$ as:

$$x^{(1)} = x^{(0)} + \Delta x^{(0)}. \tag{A.4}$$

Then, we check if $f\big(x^{(1)}\big) = 0$. If so, we have found a value of $x$ that satisfies
$f(x) = 0$. If not, we repeat the above step to find $\Delta x^{(1)}$ so that $f\big(x^{(1)} + \Delta x^{(1)}\big) = 0$,
and so on.
In general, we can compute $x^{(\nu+1)}$ as:

$$x^{(\nu+1)} = x^{(\nu)} - \frac{f\big(x^{(\nu)}\big)}{\left.\dfrac{\mathrm{d}f(x)}{\mathrm{d}x}\right|^{(\nu)}}, \tag{A.5}$$

where $\nu$ is the iteration counter.


Considering the above, the Newton-Raphson method consists of the following
steps:
• Step 0: initialize the iteration counter ($\nu = 0$) and provide an initial value for $x$,
i.e., $x = x^{(\nu)} = x^{(0)}$.
• Step 1: compute $x^{(\nu+1)}$ using Eq. (A.5).
• Step 2: check if the difference between the values of $x$ in two consecutive
iterations is lower than a prespecified tolerance $\epsilon$, i.e., check if $\big|x^{(\nu+1)} - x^{(\nu)}\big| < \epsilon$.
If so, the algorithm has converged and the solution is $x^{(\nu+1)}$. If not, continue
at Step 3.
• Step 3: update the iteration counter ($\nu \leftarrow \nu + 1$) and continue at Step 1.
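The four steps above can be sketched as a short routine. The Python fragment below is our own illustrative translation (the function name `newton_scalar`, the default tolerance, and the iteration cap are choices of ours, not part of the original text):

```python
def newton_scalar(f, df, x0, tol=1e-4, max_iter=50):
    """Newton-Raphson for one unknown, following Steps 0-3 above.

    f   -- function whose root is sought
    df  -- its first derivative
    x0  -- initial value x^(0)
    tol -- tolerance epsilon on |x^(nu+1) - x^(nu)|
    """
    x = x0                                  # Step 0: initialize
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)            # Step 1: update (A.5)
        if abs(x_new - x) < tol:            # Step 2: convergence check
            return x_new
        x = x_new                           # Step 3: next iteration
    raise RuntimeError("no convergence: try another initial value")

# Example: a root of f(x) = x^2 - 2, i.e., the square root of 2
root = newton_scalar(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0)
```

A poor initial value, or a point where the derivative vanishes, makes the update (A.5) fail; this is the same caveat noted for the many-unknown case at the end of Illustrative Example A.2.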
Illustrative Example A.1 Newton-Raphson algorithm for a one-unknown problem

We consider the following quadratic function:

$$f(x) = x^2 - 3x + 2,$$

whose first derivative is:

$$\frac{\mathrm{d}f(x)}{\mathrm{d}x} = 2x - 3.$$
The Newton-Raphson algorithm proceeds as follows:
• Step 0: we initialize the iteration counter ($\nu = 0$) and provide an initial value for
$x$, e.g., $x^{(\nu)} = x^{(0)} = 0$.
• Step 1: we compute $x^{(1)}$ using the equation below:

$$x^{(1)} = x^{(0)} - \frac{\big(x^{(0)}\big)^2 - 3x^{(0)} + 2}{2x^{(0)} - 3} = 0 - \frac{0^2 - 3 \cdot 0 + 2}{2 \cdot 0 - 3} = 0.6667.$$

• Step 2: we compute the absolute value of the difference between $x^{(1)}$ and $x^{(0)}$, i.e.,
$|0.6667 - 0| = 0.6667$. Since this difference is not small enough, we continue at
Step 3.
• Step 3: we update the iteration counter ($\nu = 0 + 1 = 1$) and continue at Step 1.
• Step 1: we compute $x^{(2)}$ using the equation below:

$$x^{(2)} = x^{(1)} - \frac{\big(x^{(1)}\big)^2 - 3x^{(1)} + 2}{2x^{(1)} - 3} = 0.6667 - \frac{0.6667^2 - 3 \cdot 0.6667 + 2}{2 \cdot 0.6667 - 3} = 0.9333.$$

• Step 2: we compute the absolute value of the difference between $x^{(2)}$ and $x^{(1)}$,
i.e., $|0.9333 - 0.6667| = 0.2666$. Since this difference is not small enough, we
continue at Step 3.
• Step 3: we update the iteration counter ($\nu = 1 + 1 = 2$) and continue at Step 1.
This iterative algorithm is repeated until the difference between the values of $x$ in
two consecutive iterations is small enough. Table A.1 summarizes the results. The
algorithm converges in four iterations for a tolerance of $1 \times 10^{-4}$.
Note that the number of iterations needed for convergence by the Newton-
Raphson algorithm is small.


Table A.1 Illustrative Example A.1: results

Iteration   x
0           0
1           0.6667
2           0.9333
3           0.9961
4           1.0000
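As an illustrative check (ours, not part of the original text), the iterates of Table A.1 can be reproduced with a few lines of Python:

```python
f  = lambda x: x**2 - 3*x + 2   # function of Illustrative Example A.1
df = lambda x: 2*x - 3          # its first derivative

x, iterates = 0.0, [0.0]        # x^(0) = 0
for _ in range(4):
    x = x - f(x) / df(x)        # update (A.5)
    iterates.append(x)

# rounded to four decimals, iterates matches Table A.1:
# [0.0, 0.6667, 0.9333, 0.9961, 1.0]
```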

A.1.2 Many Unknowns

The Newton-Raphson method described in the previous section is extended in this
section to the general case of a system of $n$ nonlinear equations with $n$ unknowns,
such as the one described below:
$$\begin{cases} f_1(x_1, x_2, \dots, x_n) = 0, \\ f_2(x_1, x_2, \dots, x_n) = 0, \\ \quad \vdots \\ f_n(x_1, x_2, \dots, x_n) = 0, \end{cases} \tag{A.6}$$

where $f_i(x_1, x_2, \dots, x_n)\colon \mathbb{R}^n \rightarrow \mathbb{R}$, $i = 1, \dots, n$, are nonlinear functions.


The system of equations (A.6) can be rewritten in compact form as:

$$\mathbf{f}(\mathbf{x}) = \mathbf{0}, \tag{A.7}$$

where:
• $\mathbf{f}(\mathbf{x}) = [f_1(\mathbf{x})\; f_2(\mathbf{x})\; \dots\; f_n(\mathbf{x})]^\top\colon \mathbb{R}^n \rightarrow \mathbb{R}^n$,
• $\mathbf{x} = [x_1\; x_2\; \dots\; x_n]^\top$,
• $\mathbf{0} = [0\; 0\; \dots\; 0]^\top$, and
• $\top$ denotes the transpose operator.
 
Given an initial value for vector $\mathbf{x}$, i.e., $\mathbf{x}^{(0)}$, we have, in general, that $\mathbf{f}\big(\mathbf{x}^{(0)}\big) \neq \mathbf{0}$.
Thus, we need to find $\Delta\mathbf{x}^{(0)}$ so that $\mathbf{f}\big(\mathbf{x}^{(0)} + \Delta\mathbf{x}^{(0)}\big) = \mathbf{0}$. Using the first-order Taylor
series, $\mathbf{f}\big(\mathbf{x}^{(0)} + \Delta\mathbf{x}^{(0)}\big)$ can be approximately expressed as:

$$\mathbf{f}\big(\mathbf{x}^{(0)} + \Delta\mathbf{x}^{(0)}\big) \approx \mathbf{f}\big(\mathbf{x}^{(0)}\big) + \mathbf{J}^{(0)} \Delta\mathbf{x}^{(0)}, \tag{A.8}$$

where $\mathbf{J}$ is the $n \times n$ Jacobian:

$$\mathbf{J} = \begin{bmatrix} \dfrac{\partial f_1(\mathbf{x})}{\partial x_1} & \dfrac{\partial f_1(\mathbf{x})}{\partial x_2} & \cdots & \dfrac{\partial f_1(\mathbf{x})}{\partial x_n} \\[2ex] \dfrac{\partial f_2(\mathbf{x})}{\partial x_1} & \dfrac{\partial f_2(\mathbf{x})}{\partial x_2} & \cdots & \dfrac{\partial f_2(\mathbf{x})}{\partial x_n} \\[1ex] \vdots & \vdots & \ddots & \vdots \\[1ex] \dfrac{\partial f_n(\mathbf{x})}{\partial x_1} & \dfrac{\partial f_n(\mathbf{x})}{\partial x_2} & \cdots & \dfrac{\partial f_n(\mathbf{x})}{\partial x_n} \end{bmatrix}. \tag{A.9}$$

 
Since we seek $\mathbf{f}\big(\mathbf{x}^{(0)} + \Delta\mathbf{x}^{(0)}\big) = \mathbf{0}$, from Eq. (A.8) we can compute $\Delta\mathbf{x}^{(0)}$ as:

$$\Delta\mathbf{x}^{(0)} \approx -\big(\mathbf{J}^{(0)}\big)^{-1} \mathbf{f}\big(\mathbf{x}^{(0)}\big). \tag{A.10}$$

Then, we can update vector $\mathbf{x}$ as:

$$\mathbf{x}^{(1)} = \mathbf{x}^{(0)} + \Delta\mathbf{x}^{(0)}. \tag{A.11}$$

In general, we can update vector $\mathbf{x}$ as:

$$\mathbf{x}^{(\nu+1)} = \mathbf{x}^{(\nu)} - \big(\mathbf{J}^{(\nu)}\big)^{-1} \mathbf{f}\big(\mathbf{x}^{(\nu)}\big), \tag{A.12}$$

where $\nu$ is the iteration counter.
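When analytic partial derivatives are cumbersome to obtain, the Jacobian (A.9) can be approximated numerically by finite differences. The Python sketch below is our own illustration (the helper name `jacobian_fd` and the step size `h` are arbitrary choices); it builds the approximation column by column with central differences and checks it against the analytic Jacobian of the two-unknown system solved later in Illustrative Example A.2:

```python
def jacobian_fd(f, x, h=1e-6):
    """Approximate the n x n Jacobian of f at x by central differences."""
    n = len(x)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp, xm = list(x), list(x)      # perturb only component j
        xp[j] += h
        xm[j] -= h
        fp, fm = f(xp), f(xm)
        for i in range(n):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)   # dfi/dxj
    return J

# System of Illustrative Example A.2: f1 = x + x*y - 4, f2 = x + y - 3
f = lambda v: [v[0] + v[0] * v[1] - 4, v[0] + v[1] - 3]
J = jacobian_fd(f, [1.98, 1.02])       # analytic value: [[1 + y, x], [1, 1]]
```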


Considering the above, the Newton-Raphson algorithm consists of the following
steps:
• Step 0: initialize the iteration counter ($\nu = 0$) and provide an initial value for
vector $\mathbf{x}$, i.e., $\mathbf{x} = \mathbf{x}^{(\nu)} = \mathbf{x}^{(0)}$.
• Step 1: compute the Jacobian $\mathbf{J}$ using (A.9).
• Step 2: compute $\mathbf{x}^{(\nu+1)}$ using matrix equation (A.12).
• Step 3: check whether every element of the absolute value of the difference between
the values of vector $\mathbf{x}$ in two consecutive iterations is lower than a prespecified
tolerance $\epsilon$, i.e., check if $\big|\mathbf{x}^{(\nu+1)} - \mathbf{x}^{(\nu)}\big| < \epsilon$. If so, the algorithm has converged
and the solution is $\mathbf{x}^{(\nu+1)}$. If not, continue at Step 4.
• Step 4: update the iteration counter ($\nu \leftarrow \nu + 1$) and continue at Step 1.
For the sake of clarity, this iterative algorithm is schematically described through
the flowchart in Fig. A.1.
Illustrative Example A.2 Newton-Raphson algorithm for a two-unknown problem

We consider the following system of two equations and two unknowns:

$$\begin{cases} f_1(x, y) = x + xy - 4, \\ f_2(x, y) = x + y - 3. \end{cases}$$

We aim at finding the values of $x$ and $y$ so that $f_1(x, y) = 0$ and $f_2(x, y) = 0$. To
do so, we use the Newton-Raphson method.
First, we compute the partial derivatives:

$$\frac{\partial f_1(x, y)}{\partial x} = 1 + y, \quad \frac{\partial f_1(x, y)}{\partial y} = x, \quad \frac{\partial f_2(x, y)}{\partial x} = 1, \quad \frac{\partial f_2(x, y)}{\partial y} = 1.$$
[Fig. A.1 Algorithm flowchart for the Newton-Raphson method: initialize $\nu = 0$ and $\mathbf{x} = \mathbf{x}^{(0)}$; compute the Jacobian $\mathbf{J}^{(\nu)}$ using (A.9); compute $\mathbf{x}^{(\nu+1)}$ using (A.12); if $\big|\mathbf{x}^{(\nu+1)} - \mathbf{x}^{(\nu)}\big| < \epsilon$, stop; otherwise set $\nu \leftarrow \nu + 1$ and return to the Jacobian computation.]

Second, we build the Jacobian matrix:

$$\mathbf{J} = \begin{bmatrix} 1 + y & x \\ 1 & 1 \end{bmatrix}.$$

Then, we follow the iterative procedure described above:
• Step 0: we initialize the iteration counter ($\nu = 0$) and provide initial values for
variables $x$ and $y$, e.g., $x^{(\nu)} = x^{(0)} = 1.98$ and $y^{(\nu)} = y^{(0)} = 1.02$, respectively.
• Step 1: we compute the Jacobian matrix $\mathbf{J}$ at iteration $\nu = 0$:

$$\mathbf{J}^{(0)} = \begin{bmatrix} 1 + y^{(0)} & x^{(0)} \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} 1 + 1.02 & 1.98 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} 2.02 & 1.98 \\ 1 & 1 \end{bmatrix}.$$

Table A.2 Illustrative Example A.2: results

Iteration   x        y
0           1.9800   1.0200
1           1.9900   1.0100
2           1.9950   1.0050
3           1.9975   1.0025
4           1.9987   1.0013
5           1.9994   1.0006
6           1.9997   1.0003
7           1.9998   1.0002
8           1.9999   1.0001
9           2.0000   1.0000

• Step 2: we compute $x^{(1)}$ and $y^{(1)}$ using the matrix equation below:

$$\begin{bmatrix} x^{(1)} \\ y^{(1)} \end{bmatrix} = \begin{bmatrix} x^{(0)} \\ y^{(0)} \end{bmatrix} - \begin{bmatrix} 1 + y^{(0)} & x^{(0)} \\ 1 & 1 \end{bmatrix}^{-1} \begin{bmatrix} x^{(0)} + x^{(0)} y^{(0)} - 4 \\ x^{(0)} + y^{(0)} - 3 \end{bmatrix} = \begin{bmatrix} 1.98 \\ 1.02 \end{bmatrix} - \begin{bmatrix} 2.02 & 1.98 \\ 1 & 1 \end{bmatrix}^{-1} \begin{bmatrix} -4 \times 10^{-4} \\ 0 \end{bmatrix} = \begin{bmatrix} 1.9900 \\ 1.0100 \end{bmatrix}.$$

• Step 3: we compute the difference between $x^{(1)}$ and $x^{(0)}$, i.e., $|1.9900 - 1.98| =
0.01$, as well as the difference between $y^{(1)}$ and $y^{(0)}$, i.e., $|1.0100 - 1.02| = 0.01$.
Since these differences are not small enough, we continue with Step 4.
• Step 4: we update the iteration counter ($\nu = 0 + 1 = 1$) and continue with Step 1.
This iterative algorithm is repeated until the differences between the values
of $x$ and $y$ in two consecutive iterations are small enough. Table A.2 provides
the evolution of the values of these unknowns. The algorithm converges in nine
iterations for a tolerance of $1 \times 10^{-4}$.
Note that the number of iterations needed by the Newton-Raphson algorithm is
rather small.
Next, we consider a different initial solution. Table A.3 provides the results. In
this case, the algorithm converges in 11 iterations for a tolerance of $1 \times 10^{-4}$.
We conclude that the initial solution does not have an important impact on
the number of iterations required for convergence, provided that convergence is
attained. However, convergence is not necessarily guaranteed, and the Jacobian
may be singular at any iteration. Further details on convergence guarantee and on
convergence speed are available in [1].


Table A.3 Illustrative Example A.2: results considering a different initial solution

Iteration   x        y
0           2.1000   0.9000
1           2.0500   0.9500
2           2.0250   0.9745
3           2.0125   0.9875
4           2.0062   0.9938
5           2.0031   0.9969
6           2.0016   0.9984
7           2.0008   0.9992
8           2.0004   0.9996
9           2.0002   0.9998
10          2.0001   0.9999
11          2.0000   1.0000
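The iterations of Illustrative Example A.2 can be reproduced with the short Python sketch below (our own illustration; for two unknowns, the linear system in (A.10) is solved with the explicit inverse of a 2 × 2 matrix rather than a general solver):

```python
def newton_2x2(f1, f2, J, x, y, tol=1e-4, max_iter=50):
    """Newton-Raphson for two unknowns; J(x, y) returns [[a, b], [c, d]]."""
    for _ in range(max_iter):
        (a, b), (c, d) = J(x, y)
        det = a * d - b * c                 # fails if the Jacobian is singular
        r1, r2 = f1(x, y), f2(x, y)
        # Delta x = -J^{-1} f, with the explicit 2x2 inverse (A.10)
        dx = -( d * r1 - b * r2) / det
        dy = -(-c * r1 + a * r2) / det
        x, y = x + dx, y + dy               # update (A.11)
        if abs(dx) < tol and abs(dy) < tol:
            return x, y
    raise RuntimeError("no convergence")

# Illustrative Example A.2, started at x = 1.98, y = 1.02
x_sol, y_sol = newton_2x2(
    lambda x, y: x + x * y - 4,
    lambda x, y: x + y - 3,
    lambda x, y: [[1 + y, x], [1, 1]],
    1.98, 1.02)
```

Started at (1.98, 1.02), as in Table A.2, or at (2.10, 0.90), as in Table A.3, the routine converges to (2, 1).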

A.2 Direct Solution

Generally, the Newton-Raphson method does not need to be implemented from
scratch. An off-the-shelf routine (in GNU Octave [2] or MATLAB [3]) embodying the
Newton-Raphson algorithm can be used to solve systems of nonlinear equations.
Illustrative Examples A.1 and A.2 are solved below using GNU Octave routines.

A.2.1 One Unknown

The GNU Octave [2] routines below solve Illustrative Example A.1:

clc
fun = @NR1;
x0 = [0]; x = fsolve(fun,x0)

function F = NR1(x)
  % Function of Illustrative Example A.1 whose root is sought
  F(1) = x(1)*x(1) - 3*x(1) + 2;
end

The solution provided by GNU Octave is:

x = 1.00000

A.2.2 Many Unknowns

The GNU Octave routines below solve Illustrative Example A.2:

clc
fun = @NR2;
x0 = [1.98,1.02]; x = fsolve(fun,x0)

function F = NR2(x)
  % System of Illustrative Example A.2
  F(1) = x(1) + x(1)*x(2) - 4;
  F(2) = x(1) + x(2) - 3;
end

The solution provided by GNU Octave is:

x =
   1.9994   1.0006

A.3 Summary and Further Reading

This appendix describes the Newton-Raphson method, which is the most common
method for solving systems of nonlinear equations, such as those considered in Chap. 4
of this book. The Newton-Raphson method is based on an iterative procedure that
updates the values of the unknowns involved until the changes in their values in two
consecutive iterations are small enough.
Different illustrative examples are used to show the working of the Newton-
Raphson method. Additionally, this appendix explains how to directly solve a
system of nonlinear equations using appropriate software, such as GNU Octave [2].
Additional details can be found in the monograph by Chapra and Canale on
numerical methods in engineering [1].

References

1. Chapra, S.C., Canale, R.P.: Numerical Methods for Engineers, 6th edn. McGraw-Hill, New York (2010)
2. GNU Octave (2016). Available at www.gnu.org/software/octave
3. MATLAB (2016). Available at www.mathworks.com/products/matlab
Appendix B
Solving Optimization Problems

This appendix provides an overview of the general structure of some of the
optimization problems considered throughout the chapters of this book, namely, linear
programming, mixed-integer linear programming, and nonlinear programming
problems.

B.1 Linear Programming Problems

The simplest instance of an optimization problem is a linear programming (LP)
problem. All variables of an LP problem are continuous, and its objective function
and constraints are linear.

B.1.1 Formulation

The general formulation of an LP problem is as follows:

$$\min_{x_i, \forall i} \; \sum_i C_i x_i \tag{B.1a}$$

subject to

$$\sum_i A_{ij} x_i = B_j, \quad j = 1, \dots, m, \tag{B.1b}$$
$$\sum_i D_{ik} x_i \leq E_k, \quad k = 1, \dots, o, \tag{B.1c}$$
$$x_i \in \mathbb{R}, \quad i = 1, \dots, n, \tag{B.1d}$$


where:
• $\mathbb{R}$ is the set of real numbers,
• $C_i$, $\forall i$, are the cost coefficients of variables $x_i$, $\forall i$, in the objective function (B.1a),
• $A_{ij}$, $\forall i$, and $B_j$ are the coefficients that define equality constraints (B.1b), $\forall j$,
• $D_{ik}$, $\forall i$, and $E_k$ are the coefficients that define inequality constraints (B.1c), $\forall k$,
• $n$ is the number of continuous optimization variables,
• $m$ is the number of equality constraints, and
• $o$ is the number of inequality constraints.
In compact form, the LP problem (B.1) can be written as:

$$\min_{\mathbf{x}} \; \mathbf{C}^\top \mathbf{x} \tag{B.2a}$$

subject to

$$\mathbf{A}\mathbf{x} = \mathbf{B}, \tag{B.2b}$$
$$\mathbf{D}\mathbf{x} \leq \mathbf{E}, \tag{B.2c}$$
$$\mathbf{x} \in \mathbb{R}^{n \times 1}, \tag{B.2d}$$

where:
• superscript $\top$ denotes the transpose operator,
• $\mathbf{C} \in \mathbb{R}^{n \times 1}$ is the cost coefficient vector of the variable vector $\mathbf{x}$ in the objective
function (B.2a),
• $\mathbf{A} \in \mathbb{R}^{m \times n}$ and $\mathbf{B} \in \mathbb{R}^{m \times 1}$ are the matrix and the vector of coefficients that define
equality constraints (B.2b), and
• $\mathbf{D} \in \mathbb{R}^{o \times n}$ and $\mathbf{E} \in \mathbb{R}^{o \times 1}$ are the matrix and the vector of coefficients that define
inequality constraints (B.2c).
Some examples of LP problems are the dc optimal power flow problem analyzed
in Chap. 6 and the economic dispatch problem described in Chap. 7 of this book.

B.1.2 Solution

One of the most common and efficient methods for solving LP problems is the
simplex method [2]. A detailed description of this method can be found, for
instance, in [4].
LP problems can also be solved using one of the many commercially available
software tools. For example, in this book we use CPLEX [5] under GAMS [3].
Illustrative Example B.1 Linear programming
We consider a generating unit with a capacity of 10 MW and a variable cost of
$21/MWh. This generating unit has to decide its power output for the following 6 h,
knowing that the electric energy prices in these hours are $10/MWh, $15/MWh,
$22/MWh, $30/MWh, $24/MWh, and $20/MWh, respectively.
Considering these data, we formulate the following LP problem:

$$\max_{p_1, p_2, p_3, p_4, p_5, p_6} \; 10p_1 + 15p_2 + 22p_3 + 30p_4 + 24p_5 + 20p_6 - 21\left(p_1 + p_2 + p_3 + p_4 + p_5 + p_6\right)$$

subject to

$$0 \leq p_1 \leq 10, \quad 0 \leq p_2 \leq 10, \quad 0 \leq p_3 \leq 10,$$
$$0 \leq p_4 \leq 10, \quad 0 \leq p_5 \leq 10, \quad 0 \leq p_6 \leq 10.$$

The solution of this problem is (note that a superscript $*$ in the variables below
indicates optimal value):

$$p_1^* = 0, \quad p_2^* = 0, \quad p_3^* = 10 \text{ MW}, \quad p_4^* = 10 \text{ MW}, \quad p_5^* = 10 \text{ MW}, \quad p_6^* = 0.$$

This solution renders an objective function value of $130.
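Because the constraints of this LP are simple bounds that couple no two hours, the optimum can also be reasoned hour by hour: produce at capacity whenever the price exceeds the variable cost. The Python sketch below (our own closed-form check of this reasoning, not the simplex method) reproduces the solution:

```python
prices = [10, 15, 22, 30, 24, 20]   # $/MWh, hours 1..6
cost, capacity = 21, 10             # $/MWh and MW of the generating unit

# Hour by hour: the profit (price - cost) * p is linear in p, so the
# optimum is p = capacity if price > cost and p = 0 otherwise.
p = [capacity if price > cost else 0 for price in prices]
profit = sum((price - cost) * pt for price, pt in zip(prices, p))
# p = [0, 0, 10, 10, 10, 0], profit = 130
```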



A simple input GAMS [3] file to solve Illustrative Example B.1 is provided
below:

variables z, p1, p2, p3, p4, p5, p6;

equations fobj, eq1a, eq1b, eq2a, eq2b, eq3a, eq3b, eq4a, eq4b, eq5a, eq5b, eq6a, eq6b;

fobj.. z=e=10*p1+15*p2+22*p3+30*p4+24*p5+20*p6-21*(p1+p2+p3+p4+p5+p6);

eq1a.. 0=l=p1;
eq1b.. p1=l=10;
eq2a.. 0=l=p2;
eq2b.. p2=l=10;
eq3a.. 0=l=p3;
eq3b.. p3=l=10;
eq4a.. 0=l=p4;
eq4b.. p4=l=10;
eq5a.. 0=l=p5;
eq5b.. p5=l=10;
eq6a.. 0=l=p6;
eq6b.. p6=l=10;

model example_lp /all/;
solve example_lp using lp maximizing z;
display z.l, p1.l, p2.l, p3.l, p4.l, p5.l, p6.l;

The part of the GAMS output file that provides the optimal solution is given
below:

---- 23 variable z.l  = 130.000
        variable p1.l = 0.000
        variable p2.l = 0.000
        variable p3.l = 10.000
        variable p4.l = 10.000
        variable p5.l = 10.000
        variable p6.l = 0.000

B.2 Mixed-Integer Linear Programming Problems

A mixed-integer linear programming (MILP) problem is an LP problem in which


some of the optimization variables are not continuous but integer.

B.2.1 Formulation

The general formulation of a MILP problem is as follows:

$$\min_{x_i, \forall i; \; y_\ell, \forall \ell} \; \sum_i C_i x_i + \sum_\ell R_\ell y_\ell \tag{B.3a}$$

subject to

$$\sum_i A_{ij} x_i + \sum_\ell G_{\ell j} y_\ell = B_j, \quad \forall j, \tag{B.3b}$$
$$\sum_i D_{ik} x_i + \sum_\ell H_{\ell k} y_\ell \leq E_k, \quad \forall k, \tag{B.3c}$$
$$x_i \in \mathbb{R}, \quad \forall i, \tag{B.3d}$$
$$y_\ell \in \mathbb{I}, \quad \ell = 1, \dots, p, \tag{B.3e}$$

where:
• $\mathbb{I}$ is the set of integer numbers,
• $C_i$, $\forall i$, and $R_\ell$, $\forall \ell$, are the cost coefficients of variables $x_i$, $\forall i$, and $y_\ell$, $\forall \ell$,
respectively, in the objective function (B.3a),
• $A_{ij}$, $\forall i$; $G_{\ell j}$, $\forall \ell$; and $B_j$ are the coefficients that define equality constraints (B.3b),
$\forall j$,
• $D_{ik}$, $\forall i$; $H_{\ell k}$, $\forall \ell$; and $E_k$ are the coefficients that define inequality
constraints (B.3c), $\forall k$, and
• $p$ is the number of integer optimization variables.
In compact form, MILP problem (B.3) can be written as:

$$\min_{\mathbf{x}, \mathbf{y}} \; \mathbf{C}^\top \mathbf{x} + \mathbf{R}^\top \mathbf{y} \tag{B.4a}$$

subject to

$$\mathbf{A}\mathbf{x} + \mathbf{G}\mathbf{y} = \mathbf{B}, \tag{B.4b}$$
$$\mathbf{D}\mathbf{x} + \mathbf{H}\mathbf{y} \leq \mathbf{E}, \tag{B.4c}$$
$$\mathbf{x} \in \mathbb{R}^{n \times 1}, \tag{B.4d}$$
$$\mathbf{y} \in \mathbb{I}^{p \times 1}, \tag{B.4e}$$

where:
• $\mathbf{C} \in \mathbb{R}^{n \times 1}$ and $\mathbf{R} \in \mathbb{R}^{p \times 1}$ are the cost coefficient vectors of the variable vectors $\mathbf{x}$
and $\mathbf{y}$, respectively, in the objective function (B.4a),
• $\mathbf{A} \in \mathbb{R}^{m \times n}$, $\mathbf{G} \in \mathbb{R}^{m \times p}$, and $\mathbf{B} \in \mathbb{R}^{m \times 1}$ are the matrices and vector of coefficients
that define equality constraints (B.4b), and
• $\mathbf{D} \in \mathbb{R}^{o \times n}$, $\mathbf{H} \in \mathbb{R}^{o \times p}$, and $\mathbf{E} \in \mathbb{R}^{o \times 1}$ are the matrices and vector of coefficients
that define inequality constraints (B.4c).
Some examples of MILP problems are the unit commitment problem described
in Chap. 7 and the self-scheduling problem analyzed in Chap. 8 of this book.

B.2.2 Solution

MILP problems can be solved using branch-and-cut methods. A detailed description
of these methods can be found, for instance, in [4].
MILP problems can also be solved using one of the many commercially available
software tools. For example, in this book we use CPLEX [5] under GAMS [3].
Illustrative Example B.2 Mixed-integer linear programming
We consider again the data of Illustrative Example B.1. However, in this case, we
assume that the generating unit has a minimum power output of 2 MW and a fixed
cost of $25.
Considering these data, we formulate the following MILP problem:

$$\max_{p_1, \dots, p_6, u_1, \dots, u_6} \; 10p_1 + 15p_2 + 22p_3 + 30p_4 + 24p_5 + 20p_6 - 21\left(p_1 + p_2 + p_3 + p_4 + p_5 + p_6\right) - 25\left(u_1 + u_2 + u_3 + u_4 + u_5 + u_6\right)$$

subject to

$$2u_1 \leq p_1 \leq 10u_1, \quad 2u_2 \leq p_2 \leq 10u_2, \quad 2u_3 \leq p_3 \leq 10u_3,$$
$$2u_4 \leq p_4 \leq 10u_4, \quad 2u_5 \leq p_5 \leq 10u_5, \quad 2u_6 \leq p_6 \leq 10u_6,$$
$$u_1, u_2, u_3, u_4, u_5, u_6 \in \{0, 1\}.$$

In this example it is necessary to include binary variables to represent the on/off
status of the generating unit at each time period.
The solution of this problem is (note that a superscript $*$ in the variables below
indicates optimal value):

$$p_1^* = 0, \quad p_2^* = 0, \quad p_3^* = 0, \quad p_4^* = 10 \text{ MW}, \quad p_5^* = 10 \text{ MW}, \quad p_6^* = 0,$$
$$u_1^* = 0, \quad u_2^* = 0, \quad u_3^* = 0, \quad u_4^* = 1, \quad u_5^* = 1, \quad u_6^* = 0.$$

Contrary to the solution of Illustrative Example B.1, in this example it is not
optimal to turn on the generating unit in the third time period, as a result of its
fixed cost.
This solution renders an objective function value of $70.
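The hours again decouple, so the commitment decision can be checked by enumeration: committing the unit in a given hour pays off only if its best operating profit in that hour exceeds the $25 fixed cost. The Python sketch below is our own illustration of this reasoning, not the branch-and-cut method used by the solver:

```python
prices = [10, 15, 22, 30, 24, 20]       # $/MWh, hours 1..6
var_cost, fixed_cost = 21, 25           # $/MWh and $ per committed hour
p_min, p_max = 2, 10                    # MW

u, p = [], []
for price in prices:
    # If committed, the linear profit (price - var_cost) * p is maximized
    # at p_max when price > var_cost and at p_min otherwise.
    best_p = p_max if price > var_cost else p_min
    on_profit = (price - var_cost) * best_p - fixed_cost
    on = on_profit > 0                  # commit only if profitable
    u.append(1 if on else 0)
    p.append(best_p if on else 0)

profit = sum((pr - var_cost) * pt for pr, pt in zip(prices, p)) - fixed_cost * sum(u)
# u = [0, 0, 0, 1, 1, 0], p = [0, 0, 0, 10, 10, 0], profit = 70
```

In particular, hour 3 yields an operating profit of (22 − 21) × 10 = $10, below the $25 fixed cost, which is why the unit stays off there.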

A simple input GAMS [3] file to solve Illustrative Example B.2 is provided
below:

variables z, p1, p2, p3, p4, p5, p6;
binary variables u1, u2, u3, u4, u5, u6;

equations fobj, eq1a, eq1b, eq2a, eq2b, eq3a, eq3b, eq4a, eq4b, eq5a, eq5b, eq6a, eq6b;

fobj.. z=e=10*p1+15*p2+22*p3+30*p4+24*p5+20*p6-21*(p1+p2+p3+p4+p5+p6)-25*(u1+u2+u3+u4+u5+u6);

eq1a.. 2*u1=l=p1;
eq1b.. p1=l=10*u1;
eq2a.. 2*u2=l=p2;
eq2b.. p2=l=10*u2;
eq3a.. 2*u3=l=p3;
eq3b.. p3=l=10*u3;
eq4a.. 2*u4=l=p4;
eq4b.. p4=l=10*u4;
eq5a.. 2*u5=l=p5;
eq5b.. p5=l=10*u5;
eq6a.. 2*u6=l=p6;
eq6b.. p6=l=10*u6;

model example_milp /all/;
solve example_milp using mip maximizing z;
display z.l, p1.l, p2.l, p3.l, p4.l, p5.l, p6.l, u1.l, u2.l, u3.l, u4.l, u5.l, u6.l;

The part of the GAMS output file that provides the optimal solution is given
below:

---- 25 variable z.l  = 70.000
        variable p1.l = 0.000
        variable p2.l = 0.000
        variable p3.l = 0.000
        variable p4.l = 10.000
        variable p5.l = 10.000
        variable p6.l = 0.000
        variable u1.l = 0.000
        variable u2.l = 0.000
        variable u3.l = 0.000
        variable u4.l = 1.000
        variable u5.l = 1.000
        variable u6.l = 0.000

B.3 Nonlinear Programming Problems

A nonlinear programming (NLP) problem is an optimization problem in which the
objective function and/or some of the constraints are nonlinear.

B.3.1 Formulation

The general formulation of an NLP problem is as follows:

$$\min_{x_i, \forall i} \; f(x_1, \dots, x_n) \tag{B.5a}$$

subject to

$$A_j(x_1, \dots, x_n) = 0, \quad \forall j, \tag{B.5b}$$
$$D_k(x_1, \dots, x_n) \leq 0, \quad \forall k, \tag{B.5c}$$
$$x_i \in \mathbb{R}, \quad \forall i, \tag{B.5d}$$

where:
• $f(x_1, \dots, x_n)\colon \mathbb{R}^n \rightarrow \mathbb{R}$ is the nonlinear objective function (B.5a),
• $A_j(x_1, \dots, x_n)\colon \mathbb{R}^n \rightarrow \mathbb{R}$ are the nonlinear functions that define equality
constraints (B.5b), $\forall j$, and
• $D_k(x_1, \dots, x_n)\colon \mathbb{R}^n \rightarrow \mathbb{R}$ are the nonlinear functions that define inequality
constraints (B.5c), $\forall k$.

Problem (B.5) can be rewritten in compact form as:

$$\min_{\mathbf{x}} \; f(\mathbf{x}) \tag{B.6a}$$

subject to

$$\mathbf{A}(\mathbf{x}) = \mathbf{0}, \tag{B.6b}$$
$$\mathbf{D}(\mathbf{x}) \leq \mathbf{0}, \tag{B.6c}$$
$$\mathbf{x} \in \mathbb{R}^n, \tag{B.6d}$$

where:
• $f(\mathbf{x})\colon \mathbb{R}^n \rightarrow \mathbb{R}$ is the nonlinear objective function (B.6a),
• $\mathbf{A}(\mathbf{x})\colon \mathbb{R}^n \rightarrow \mathbb{R}^m$ is the nonlinear function that defines equality constraint (B.6b),
and
• $\mathbf{D}(\mathbf{x})\colon \mathbb{R}^n \rightarrow \mathbb{R}^o$ is the nonlinear function that defines inequality
constraint (B.6c).
Some examples of NLP problems are the state estimation problem described in
Chap. 5 or the ac optimal power flow problem analyzed in Chap. 6 of this book.

B.3.2 Solution

Solving NLP problems is generally more complicated than solving LP or MILP
problems.
NLP problems can be solved using one of the many commercially available
software tools. For example, in this book we use CONOPT [1] under GAMS [3].
Further information about NLP problems can be found, for instance, in [4].
Illustrative Example B.3 Nonlinear programming
We consider again the data of Illustrative Example B.1. However, in this case, we
assume that the generating unit has a quadratic cost function, so that its cost is:

$$c_t = 15p_t + 2p_t^2, \quad t = 1, \dots, 6.$$

Considering these data, we formulate the following NLP problem:

$$\max_{p_1, p_2, p_3, p_4, p_5, p_6} \; 10p_1 + 15p_2 + 22p_3 + 30p_4 + 24p_5 + 20p_6 - 15\left(p_1 + p_2 + p_3 + p_4 + p_5 + p_6\right) - 2\left(p_1^2 + p_2^2 + p_3^2 + p_4^2 + p_5^2 + p_6^2\right)$$

subject to

$$0 \leq p_1 \leq 10, \quad 0 \leq p_2 \leq 10, \quad 0 \leq p_3 \leq 10,$$
$$0 \leq p_4 \leq 10, \quad 0 \leq p_5 \leq 10, \quad 0 \leq p_6 \leq 10.$$

The solution of this problem is (note that a superscript $*$ in the variables below
indicates optimal value):

$$p_1^* = 0, \quad p_2^* = 0, \quad p_3^* = 1.75 \text{ MW}, \quad p_4^* = 3.75 \text{ MW}, \quad p_5^* = 2.25 \text{ MW}, \quad p_6^* = 1.25 \text{ MW}.$$

This solution renders an objective function value of $47.5.
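With the quadratic cost the hours still decouple, and each hourly profit $(\lambda_t - 15)p_t - 2p_t^2$ (with $\lambda_t$ the price) is concave, so setting its derivative to zero and clipping to the bounds gives the optimum directly. The Python sketch below (our own closed-form check, not the NLP solver used by GAMS) reproduces the solution:

```python
prices = [10, 15, 22, 30, 24, 20]   # $/MWh, hours 1..6

# d/dp [(price - 15) p - 2 p^2] = price - 15 - 4p = 0  =>  p = (price - 15) / 4,
# clipped to the bounds 0 <= p <= 10.
p = [min(max((price - 15) / 4, 0.0), 10.0) for price in prices]
profit = sum((price - 15) * pt - 2 * pt**2 for price, pt in zip(prices, p))
# p = [0, 0, 1.75, 3.75, 2.25, 1.25], profit = 47.5
```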



A simple input GAMS [3] file to solve Illustrative Example B.3 is provided
below:

variables z, p1, p2, p3, p4, p5, p6;

equations fobj, eq1a, eq1b, eq2a, eq2b, eq3a, eq3b, eq4a, eq4b, eq5a, eq5b, eq6a, eq6b;

fobj.. z=e=10*p1+15*p2+22*p3+30*p4+24*p5+20*p6-15*(p1+p2+p3+p4+p5+p6)-2*(p1*p1+p2*p2+p3*p3+p4*p4+p5*p5+p6*p6);

eq1a.. 0=l=p1;
eq1b.. p1=l=10;
eq2a.. 0=l=p2;
eq2b.. p2=l=10;
eq3a.. 0=l=p3;
eq3b.. p3=l=10;
eq4a.. 0=l=p4;
eq4b.. p4=l=10;
eq5a.. 0=l=p5;
eq5b.. p5=l=10;
eq6a.. 0=l=p6;
eq6b.. p6=l=10;

model example_nlp /all/;
solve example_nlp using nlp maximizing z;
display z.l, p1.l, p2.l, p3.l, p4.l, p5.l, p6.l;

The part of the GAMS output file that provides the optimal solution is given
below:

---- 23 variable z.l  = 47.500
        variable p1.l = 0.000
        variable p2.l = 0.000
        variable p3.l = 1.750
        variable p4.l = 3.750
        variable p5.l = 2.250
        variable p6.l = 1.250

B.4 Summary and Further Reading

This appendix provides brief formal descriptions of the three types of optimization
problems considered in this book, namely, LP problems, MILP problems, and NLP
problems. A detailed description of these problems can be found in the monograph
by Sioshansi and Conejo [4].

References

1. CONOPT (2016). Available at www.conopt.com/
2. Dantzig, G.B.: Linear Programming and Extensions. Princeton University Press, Princeton, NJ (1963)
3. GAMS (2016). Available at www.gams.com/
4. Sioshansi, R., Conejo, A.J.: Optimization in Engineering: Models and Algorithms. Springer, New York (2017)
5. The ILOG CPLEX (2016). Available at www.ilog.com/products/cplex/
Index

A Reactive power, 44
ac source Voltages, 21
Angular frequency, 18
Ordinary frequency, 18 E
Period, 18 Economic dispatch, 197, 209
Phasorial representation, 18 Capacity limits of transmission lines, 212
Root mean square, 18 Cost function, 213
Sinusoidal representation, 18 Description, 211
Active and reactive power decoupling, Example, 210, 214
81 Example: impact of transmission capacity
Admittance matrix, 101 limits, 215
Alternating current (ac), 17 Example: locational marginal prices, 216
Example: marginal prices, 211
Formulation, 213
B GAMS code, 225
Balanced three-phase circuits, 17 Locational marginal prices, 216
Active power, 43 Marginal prices, 211
Apparent power, 44 Power balance, 213
Balanced three-phase sequence, 18 Power bounds, 212
Common star connection, 38 Power flows through transmission lines,
Currents, 23 211
Delta currents, 25 Reference node, 212
Equivalence wye-delta, 28 Electrical line, 71
Exercises, 52 Capacity, 79
How to measure power?, 44 Efficiency, 80
Instantaneous power, 43 Geometric mean radius, 78
Line currents, 23 Inductance, 76
Line voltages, 22 Model, 71
Magnitudes, 21 Parameters, 75
Negative sequence, 20 Reactance, 79
Phase voltages, 22 Regulation, 80
Positive sequence, 19 Resistance, 75
Power, 42 Resistivity, 75


G Formulation, 217
Generator and motor, 56 GAMS code, 226
Efficiency, 58 Newton-Raphson method, 111
Three-phase generator, 56
Three-phase motor, 57
O
Optimal power flow, 165
I Active power limits, 168
Introduction, 1 Concluding remarks, 185
dc example, 178
dc formulation, 177
K dc GAMS code, 189
Kirchhoff’s laws, 99 Description, 166
Example, 171
Formulation, 170
L GAMS code, 185
Load, 68 Introduction, 165
Constant impedance, 69 Objective function, 169
Constant power, 71 Power balance, 166
Constant voltage, 71 Power flows through transmission lines,
Induction motor, 69 167
Induction motor efficiency, 70 Reactive power limits, 168
Model, 69–71 Security, 179
Solution, 171
Transmission line limits, 168
M Voltage angle limits, 169
Magnetic constant, 76 Voltage magnitude limits, 169
Market clearing auction, 242 Optimization problems, 281
Bids, 233 Linear programming, 281
Consumer surplus, 244 Linear programming: example, 282
Consumption bid curve, 244 Linear programming: formulation, 281
Example, 248, 252, 257 Linear programming: GAMS code, 283
Formulation, 246 Linear programming: simplex method,
Formulation: multi period, 251 282
Formulation: single period, 247 Linear programming: solution, 282
Formulation: transmission-constrained Mixed-integer linear programming, 284
multi-period, 256 Mixed-integer linear programming:
GAMS code, 264 branch-and-bound methods, 286
Introduction, 234 Mixed-integer linear programming:
Locational marginal prices, 257 example, 286
Market clearing price, 250 Mixed-integer linear programming:
Market operator, 234 formulation, 284
Offers, 233 Mixed-integer linear programming: GAMS
Participants, 242 code, 287
Producer surplus, 244 Mixed-integer linear programming:
Production offer curve, 243 solution, 286
Profit of generating units, 250 Non-linear programming, 288
Social welfare, 244 Non-linear programming: example, 289
Non-linear programming: formulation,
288
N Non-linear programming: GAMS code,
Network-constrained unit commitment, 197 290
Example, 218 Non-linear programming: solution, 289

P Power transformer, 58
Per-unit system, 46 Connections, 61
Base value, 48 Denomination, 66
Definition, 46 Model, 67
Example, 47, 50 Per-unit analysis, 66
Procedure, 51 Transformation ratio, 60
Permittivity, 79
Phasor, 18
Power, 42 S
Active power, 43 Scope of the book, 12
Apparent power, 44 What we do, 13
How to measure power?, 44 What we do not do, 13
Instantaneous power, 43 Security-constrained optimal power flow, 179
Reactive power, 44 n  1 security, 180
Power flow, 97 n  k security, 180
Applications, 97 Concluding remarks, 185
Concluding remarks, 130 Corrective approach, 180
dc formulation, 121 Description, 180
Decoupled, 119 Example, 182
Distributed slack, 120 Formulation, 180
Equations, 104 GAMS code, 190
Example, 117 Introduction, 179
Example in Octave, 130 Preventive approach, 180
Exercises, 132 Self-scheduling, 234
Introduction, 97 Description, 234
Nodal equations, 98 Example, 237, 240
Outcome, 114 Formulation, 236
Slack, PV, and PQ nodes, 109 GAMS code, 263
Solution, 110 Introduction, 233
Power markets, 10 Self-scheduling and market clearing auction,
Day-ahead market, 12 233
Futures market, 11 Final remarks, 262
Intra-day markets, 12 Introduction, 233
Pool, 11 Sinusoidal ac source, 18
Real-time market, 12 State estimation, 137
Power system Cumulative distribution function, 155
Fundamentals, 17 Erroneous measurement detection, 152
Model components, 55 Erroneous measurement detection: 2 test,
Power system components 152
Examples, 83 Erroneous measurement detection:
Power system operations, 9 Example, 153
Day-ahead operation, 9 Erroneous measurement identification, 154
Hours before power delivery, 10 Erroneous measurement identification:
Minutes before power delivery, 10 Example, 155
Power system structure, 1 Erroneous measurement identification:
Centralized operation, 7 Normalized residual test, 154
Distribution, 4 Estimation, 140
Economic layer, 5 Estimation: Example, 143
Generation, 2 Exercises, 160
Market operation, 8 Measurements, 138
Physical layer, 1 Non-observable, 148
Regulatory layer, 7 Observability, 145
Supply, 4 Observability: Example, 148
Transmission, 3 Observable, 148

State estimation (cont.) Costs of generating units: Start-up costs,


Residuals, 154 200
System state, 137 Costs of generating units: Variable costs,
Systems of nonlinear equations, 110, 271 200
Direct solution, 278 Example, 205
Direct solution: many unknowns, 279 Formulation, 204
Direct solution: one unknown, 278 GAMS code, 223
Jacobian, 274 Generating units, 199
Newton-Raphson algorithm, 271 Logical expressions, 201
Newton-Raphson algorithm: example, 272, Logical expressions: Example, 201
275 Network-constrained unit commitment,
Newton-Raphson algorithm: many 216
unknowns, 274 Planning horizon, 199
Newton-Raphson algorithm: one unknown, Power balance, 204
271 Power bounds, 202
Taylor series, 272, 274 Ramping limits, 202
Security constraints, 204
U Unit commitment and economic dispatch,
Unit commitment, 197, 198 197
Costs of generating units, 199 Exercises, 228
Costs of generating units: Fixed costs, 199 Final remarks, 222
Costs of generating units: Shut-down costs, GAMS codes, 222
200 Introduction, 197
