
Contents

1 Mathematical Preliminaries
1.1 Introduction
1.2 Unconstrained Optimization
1.2.1 Single Variable Optimization
1.2.2 Multi-Variable Optimization
1.3 Constrained Optimization
1.3.1 Multi-variable optimization with equality constraints
1.3.2 Inclusion of inequality constraints in multi-variable optimization
1.4 Search for the Optimal Solution
1.5 Gradient Search Method
1.6 Linear Programming
1.7 Dynamic Programming
1.7.1 Formulation of the Problem
1.7.2 Solution Procedure
1.8 Optimization by Computational Intelligence
1.8.1 Particle Swarm Optimization (PSO)


Chapter 1
Mathematical Preliminaries
1.1 Introduction

This chapter presents a brief introduction to some of the mathematical techniques required for a complete understanding of the course material. The economic operation of electric power systems depends heavily on the use of optimization techniques for the efficient allocation of available resources. For example, to meet a certain load demand, one has to judiciously allocate the outputs of the participating generators so that the total cost of production is minimized. Coordinating various control actions to ensure secure operation of the system is another example where optimization tools are needed. Dedicated programs such as optimal power flows, incorporating production-grade optimization tools, are integral parts of a modern-day energy management system.

1.2 Unconstrained Optimization

An optimization problem without any constraints on the independent variables is known as an unconstrained optimization problem. The following subsections discuss the unconstrained optimization problem for single and multiple (independent) variables.

1.2.1 Single Variable Optimization

The following is the necessary condition that a function f(x) satisfies if it has a minimum or maximum at some point x = x* in some interval a < x < b.

Necessary condition for minimum of a function

If the derivative f'(x) exists at x = x*, then

f'(x*) = 0.   (1.1)

Note: Condition (1.1) does not guarantee that f(x) has a minimum at x = x*. The point x = x* can also be a point of maximum or a point of inflection.

Sufficient condition

If f'(x*), f''(x*), ..., f^(n)(x*) exist, and f'(x*) = f''(x*) = ... = f^(n-1)(x*) = 0 but f^(n)(x*) ≠ 0, then the following holds:

If f^(n)(x*) > 0 and n is even, then x* is a minimum.
If f^(n)(x*) < 0 and n is even, then x* is a maximum.
If n is odd, then x* is a point of inflection.
Proof: Using the Taylor series expansion of f(x) at x = x* + h, where h is a small number,

f(x* + h) = f(x*) + h f'(x*) + (h^2/2!) f''(x*) + ... + (h^(n-1)/(n-1)!) f^(n-1)(x*) + (h^n/n!) f^(n)(x* + θh),   (1.2)

where 0 < θ < 1.

Since f'(x*) = f''(x*) = ... = f^(n-1)(x*) = 0, one gets,

f(x* + h) − f(x*) = (h^n/n!) f^(n)(x* + θh).   (1.3)

Since f^(n)(x*) ≠ 0, there is an interval a < x < b, including x* + θh, in which the sign of f^(n)(x* + θh) is the same as that of f^(n)(x*). If n is even, h^n is positive, so f(x* + h) − f(x*) will be positive if f^(n)(x*) > 0, and therefore x* will be a minimum. On the other hand, if n is even and f^(n)(x*) < 0, x* will be a maximum. If n is odd, h^n changes sign with h, so f(x* + h) − f(x*) changes sign with h. Hence, x* is a point of inflection in this case.
Example 1.1:
Examine the extrema of the following function:

f(x) = x^5 − 5x^4 + 5x^3 + 5.

Solution:
The first derivative of the function is given by,

f'(x) = 5x^4 − 20x^3 + 15x^2
      = 5x^2 (x − 1)(x − 3).   (1.4)

The above equation implies that f'(x) = 0 at x = 0, x = 1, and x = 3.

Now, f''(x) = 20x^3 − 60x^2 + 30x.

At x = 0, f''(x) = 0. Therefore, to determine the nature of the point, we have to find f'''(x):

f'''(x) = 60x^2 − 120x + 30,

so f'''(0) = 30 ≠ 0. Since the first non-zero derivative at x = 0 is of odd order (n = 3), x = 0 is a point of inflection.

At x = 1, f''(1) = 20 − 60 + 30 = −10 < 0. Hence, x = 1 is a relative maximum point.
At x = 3, f''(3) = 540 − 540 + 90 = 90 > 0. Hence, x = 3 is a relative minimum point.
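The sign checks of Example 1.1 can be reproduced numerically; the sketch below simply codes the three derivative expressions from the solution and evaluates them at the critical points:

```python
# Check Example 1.1 numerically: classify the critical points of
# f(x) = x^5 - 5x^4 + 5x^3 + 5 by the signs of its derivatives.

def f1(x):
    """f'(x) = 5x^4 - 20x^3 + 15x^2"""
    return 5 * x**4 - 20 * x**3 + 15 * x**2

def f2(x):
    """f''(x) = 20x^3 - 60x^2 + 30x"""
    return 20 * x**3 - 60 * x**2 + 30 * x

def f3(x):
    """f'''(x) = 60x^2 - 120x + 30"""
    return 60 * x**2 - 120 * x + 30

for x in (0, 1, 3):
    print(x, f1(x), f2(x), f3(x))
# x = 0: f' = 0, f'' = 0, f''' = 30 -> inflection (first non-zero derivative has odd order)
# x = 1: f' = 0, f'' = -10          -> relative maximum
# x = 3: f' = 0, f'' = 90           -> relative minimum
```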

1.2.2 Multi-Variable Optimization

In this section we discuss the multi-variable optimization problem with no constraints. Let the function f(x) be a single-valued function of n variables denoted by x = [x_1, x_2, ..., x_n]ᵀ. Let the optimal solution be denoted by x* = [x_1*, x_2*, ..., x_n*]ᵀ. The Taylor series expansion of f(x) around x* is given by,

f(x) = f(x*) + df(x*) + (1/2!) d²f(x*) + ... + (1/N!) d^N f(x*) + R_N(x*, h),   (1.5)

where the last term in the above equation is the remainder, given by,

R_N(x*, h) = (1/(N+1)!) d^(N+1) f(x* + θh),   (1.6)

where 0 < θ < 1 and h = x − x*.

The differentials d^r in (1.5) are defined as,

d^r f(x*) = Σ_{i=1..n} Σ_{j=1..n} ... Σ_{k=1..n} h_i h_j ... h_k ∂^r f(x*)/(∂x_i ∂x_j ... ∂x_k),   (1.7)

with r summations in total.

For example, when r = 2 and n = 3, we have,

d²f(x*) = Σ_{i=1..3} Σ_{j=1..3} h_i h_j ∂²f(x*)/(∂x_i ∂x_j)
        = h_1² ∂²f(x*)/∂x_1² + h_2² ∂²f(x*)/∂x_2² + h_3² ∂²f(x*)/∂x_3²
          + 2 h_1 h_2 ∂²f(x*)/(∂x_1 ∂x_2) + 2 h_2 h_3 ∂²f(x*)/(∂x_2 ∂x_3) + 2 h_1 h_3 ∂²f(x*)/(∂x_1 ∂x_3).   (1.8)

Necessary condition for minimum or maximum

The necessary condition that f(x) has an extremum (maximum or minimum) at x = x* is given by,

∂f(x*)/∂x_1 = ∂f(x*)/∂x_2 = ... = ∂f(x*)/∂x_n = 0.   (1.9)

Sufficient condition for minimum or maximum

Assume all the partial derivatives of f(x) up to order k exist and are continuous in the neighborhood of a stationary point x*, and

d^r f(x*) = 0, r = 1, ..., k − 1,   (1.10)

but

d^k f(x*) ≠ 0.   (1.11)

Then the following hold:

1. If k is even, x* is a relative minimum if d^k f(x*) is positive.
2. If k is even, x* is a relative maximum if d^k f(x*) is negative.
3. If d^k f(x*) vanishes for some h, no general conclusion can be made.
4. If k is odd, x* is not an extremum.
5. If d^k f(x*) takes both positive and negative values, then x* is a saddle point.
Cases with non-zero second order differential

Referring to the above discussion, when a multi-variable function has a non-zero second order differential, the nature of the point under consideration depends on the sign of d²f(x*), which, from (1.7), is given by,

d²f(x*) = Σ_{i=1..n} Σ_{j=1..n} h_i h_j ∂²f(x*)/(∂x_i ∂x_j).   (1.12)

The double summation can be written as,

Q = Σ_{i=1..n} Σ_{j=1..n} h_i h_j ∂²f(x*)/(∂x_i ∂x_j) = hᵀ H h,   (1.13)

where H is called the Hessian matrix. The Hessian matrix of f(x) at x = x* is given by,

H(f) = [ ∂²f/∂x_1²        ∂²f/(∂x_1 ∂x_2)  ...  ∂²f/(∂x_1 ∂x_n) ]
       [ ∂²f/(∂x_2 ∂x_1)  ∂²f/∂x_2²        ...  ∂²f/(∂x_2 ∂x_n) ]
       [ ...              ...              ...  ...             ]
       [ ∂²f/(∂x_n ∂x_1)  ∂²f/(∂x_n ∂x_2)  ...  ∂²f/∂x_n²       ]

evaluated at x = x*.   (1.14)

Depending on the nature of the Hessian matrix H (the matrix of second partial derivatives) of f(x), a sufficient condition for x = x* to be an extremum is the following:

1. x* is a minimum point if H is positive definite.
2. x* is a maximum point if H is negative definite.

Note: A matrix A is called positive definite if zᵀAz > 0 for all non-zero vectors z with real entries. Similarly, A is called negative definite if zᵀAz < 0.
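The definiteness test is easy to carry out numerically, since a symmetric matrix is positive (negative) definite exactly when all its eigenvalues are positive (negative). A minimal sketch on an assumed example function, f(x_1, x_2) = x_1² + 3x_2² (not from the text), whose Hessian is constant:

```python
# Classify a stationary point via the Hessian test of (1.14):
# all eigenvalues positive  -> positive definite -> minimum,
# all eigenvalues negative  -> negative definite -> maximum.
import numpy as np

H = np.array([[2.0, 0.0],
              [0.0, 6.0]])   # Hessian of f(x1, x2) = x1^2 + 3*x2^2

eigvals = np.linalg.eigvalsh(H)   # eigenvalues of the symmetric matrix H
if np.all(eigvals > 0):
    verdict = "minimum"
elif np.all(eigvals < 0):
    verdict = "maximum"
else:
    verdict = "no conclusion from definiteness"

print(verdict)  # minimum
```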

1.3 Constrained Optimization

The preceding sections describe optimization techniques for problems where there were no
constraints on the independent variables. In this section, we will discuss the optimization
problem for multiple variables, including constraints on some or all of the variables.

1.3.1 Multi-variable optimization with equality constraints

The general formulation of the multi-variable optimization problem with equality constraints is as follows:

Minimize f(x)   (1.15)

Subject to g_i(x) = 0, i = 1, ..., m,   (1.16)

where x = [x_1, x_2, ..., x_n]ᵀ is the vector of decision variables, and g_i, i = 1, ..., m are the m equality constraints that need to be satisfied in the process of minimizing f(x).

For the time being it is assumed that m ≤ n. There are methods of solving over-determined systems, for which m > n; those will be discussed later. In this section we discuss the method of Lagrange multipliers for solving equality-constrained multi-variable optimization problems.

Method of Lagrange Multipliers

Let us first concentrate on the problem of minimizing a function of two variables with one equality constraint. We will then generalize the method to any number of variables. For two variables, the optimization problem becomes,

Minimize f(x_1, x_2)   (1.17)

Subject to g(x_1, x_2) = 0.   (1.18)

A necessary condition for (x_1*, x_2*) to be an extremum is that the first differential of f vanish there:

df |(x_1*, x_2*) = 0.   (1.19)

Hence, the first differential of f(x_1, x_2) at (x_1*, x_2*) is,

df = { (∂f/∂x_1) dx_1 + (∂f/∂x_2) dx_2 } |(x_1*, x_2*) = 0.   (1.20)

The constraint g(x_1, x_2) = 0 is always maintained. Hence,

dg = { (∂g/∂x_1) dx_1 + (∂g/∂x_2) dx_2 } |(x_1*, x_2*) = 0.   (1.21)

Assuming ∂g/∂x_2 ≠ 0, (1.21) gives dx_2 = −(∂g/∂x_1 / ∂g/∂x_2) dx_1, and (1.20) can be expressed as,

{ ∂f/∂x_1 − (∂g/∂x_1 / ∂g/∂x_2) ∂f/∂x_2 } dx_1 |(x_1*, x_2*) = 0.   (1.22)

Since dx_1 can be chosen arbitrarily, one gets the following after rearranging (1.22):

{ ∂f/∂x_1 − (∂f/∂x_2 / ∂g/∂x_2) ∂g/∂x_1 } |(x_1*, x_2*) = 0.   (1.23)

A quantity λ is now defined as,

λ = − (∂f/∂x_2 / ∂g/∂x_2) |(x_1*, x_2*).   (1.24)

Using the above in (1.23),

( ∂f/∂x_1 + λ ∂g/∂x_1 ) |(x_1*, x_2*) = 0.   (1.25)

Again, (1.24) can be written as,

( ∂f/∂x_2 + λ ∂g/∂x_2 ) |(x_1*, x_2*) = 0.   (1.26)

Also,

g(x_1*, x_2*) = 0.   (1.27)

Equations (1.25) to (1.27) represent the necessary conditions for (x_1*, x_2*) to be an extremum. The quantity λ is called the Lagrange multiplier. The following function is called the Lagrange function:

L(x_1, x_2, λ) = f(x_1, x_2) + λ g(x_1, x_2).   (1.28)
Necessary Conditions for any Number of Variables

The general minimization problem involving equality constraints can be stated as,

Minimize f(x)   (1.29)

Subject to g_j(x) = 0, j = 1, ..., m,   (1.30)

where x = [x_1, x_2, ..., x_n]ᵀ is the vector of decision variables, and g_j(x), j = 1, ..., m are the m equality constraints that need to be satisfied in the optimization process.

The Lagrange function in this case is constructed as follows:

L(x_1, ..., x_n, λ_1, ..., λ_m) = f(x) + λ_1 g_1(x) + λ_2 g_2(x) + ... + λ_m g_m(x).   (1.31)

Treating L as a function of the n + m variables x_1, x_2, ..., x_n, λ_1, λ_2, ..., λ_m, the necessary conditions for an extremum of L at a point x* are given by,

∂L/∂x_i = { ∂f/∂x_i + Σ_{j=1..m} λ_j ∂g_j/∂x_i } |x* = 0;  i = 1, ..., n,   (1.32)

∂L/∂λ_j = g_j(x*) = 0;  j = 1, ..., m.   (1.33)

Sufficient Condition

Assuming the second differential of the Lagrange function to be non-zero, the sufficient condition for x* to be a minimum of f(x) is that the quadratic form Q, given by the following, be positive definite:

Q = Σ_{i=1..n} Σ_{j=1..n} ( ∂²L/(∂x_i ∂x_j) ) |x* dx_i dx_j.   (1.34)
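When the objective is quadratic and the constraint is linear, the stationarity conditions (1.32)-(1.33) form a linear system that can be solved directly. A minimal sketch on an assumed example (not from the text): minimize f = x_1² + x_2² subject to g = x_1 + x_2 − 1 = 0.

```python
# Stationarity of L = x1^2 + x2^2 + lam*(x1 + x2 - 1), per (1.32)-(1.33):
#   dL/dx1 :  2*x1 + lam = 0
#   dL/dx2 :  2*x2 + lam = 0
#   dL/dlam:  x1 + x2 - 1 = 0
import numpy as np

A = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0]])
b = np.array([0.0, 0.0, 1.0])

x1, x2, lam = np.linalg.solve(A, b)   # unknowns: x1, x2, lam
print(x1, x2, lam)  # 0.5 0.5 -1.0
```

The solution x_1* = x_2* = 0.5 with λ = −1 satisfies all three conditions, and the Hessian of L with respect to x is 2I (positive definite), so the point is a minimum.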

1.3.2 Inclusion of inequality constraints in multi-variable optimization

The general multi-variable optimization problem including both equality and inequality constraints can be described as follows:

Minimize f(x)   (1.35)

Subject to g_j(x) ≤ 0, j = 1, ..., m,   (1.36)

where g_j(x), j = 1, ..., m are the m inequality constraints that need to be satisfied in the process of optimization.

The inequalities in (1.36) can be transformed into equality constraints by adding squares of slack variables (to ensure that positive quantities are being added), as shown below:

g_j(x) + y_j² = 0;  j = 1, ..., m.   (1.37)

In the above equation, the y_j corresponding to any equality constraints included in (1.36) are zero. Now, including the slack variables, (1.37) can be written as,

G_j(x, y_j) = g_j(x) + y_j² = 0;  j = 1, ..., m.   (1.38)

The Lagrange function for the optimization problem can now be written as,

L(x, λ, y) = f(x) + Σ_{j=1..m} λ_j G_j(x, y_j),   (1.39)

where λ = [λ_1, λ_2, ..., λ_m]ᵀ and y = [y_1, y_2, ..., y_m]ᵀ.


Similar to (1.32) and (1.33), the necessary conditions for f(x) to have a stationary point at (x*, λ*, y*) are given by,

∂L/∂x_i = { ∂f/∂x_i + Σ_{j=1..m} λ_j ∂g_j/∂x_i } |(x*, λ*, y*) = 0;  i = 1, ..., n,   (1.40)

∂L/∂λ_j = G_j(x*, y_j*) = 0;  j = 1, ..., m,   (1.41)

∂L/∂y_j = λ_j ∂G_j/∂y_j = 2 λ_j y_j = 0;  j = 1, ..., m.   (1.42)

The last equation can be written as,

λ_j y_j = 0;  j = 1, ..., m.   (1.43)

Equation (1.43) implies that if y_j ≠ 0, then λ_j = 0. Now, if λ_j = 0, the jth constraint does not affect the solution; the corresponding constraint is therefore called inactive. On the other hand, a constraint g_k(x) is called active at x* if it satisfies the condition g_k(x*) = 0. In the feasible region of the decision variables, both active and inactive constraints satisfy g_k(x) ≤ 0. Let A be the set of indices of the active constraints from (1.36). Equation (1.40) can then be written as,

∂f/∂x_i = − Σ_{j∈A} λ_j ∂g_j/∂x_i |(x*, λ*);  i = 1, ..., n.   (1.44)

The equations above can be collectively represented as,

∇f(x*) = − Σ_{j∈A} λ_j ∇g_j(x*).   (1.45)

For a small change s = [dx_1, dx_2, ..., dx_n]ᵀ in x* within its feasible region, the following holds for the equality and active inequality constraints:

g_j(x* + s) ≈ g_j(x*) + sᵀ ∇g_j(x*) ≤ 0;  j ∈ A.   (1.46)

Since g_j(x*) = 0, j ∈ A, the above equation implies:

sᵀ ∇g_j(x*) ≤ 0;  j ∈ A.   (1.47)

Multiplying both sides of (1.45) by sᵀ,

sᵀ ∇f(x*) = − sᵀ Σ_{j∈A} λ_j ∇g_j(x*) = − Σ_{j∈A} λ_j sᵀ ∇g_j(x*).   (1.48)

For x* to be a minimum point, any change in f, represented by sᵀ ∇f(x*), has to be positive or zero for any feasible s. Using (1.47) and (1.48), it is clear that such a change in f is possible only if the following holds:

λ_j ≥ 0,  j ∈ A.   (1.49)

Kuhn-Tucker conditions

To summarize the discussion in the preceding section, the necessary conditions for a minimum of the optimization problem described in (1.35) and (1.36) are as follows:

∂f/∂x_i + Σ_{j∈A} λ_j ∂g_j/∂x_i |x* = 0;  i = 1, ..., n,   (1.50)

λ_j ≥ 0,  j ∈ A.   (1.51)

The above equations are called the Kuhn-Tucker conditions, after the mathematicians who proposed them.
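The conditions (1.50)-(1.51) can be verified numerically at a candidate point. A minimal sketch on an assumed example (not from the text): minimize f = x_1² + x_2² subject to g = 1 − x_1 − x_2 ≤ 0, with the candidate x* = (0.5, 0.5), where g is active:

```python
# Verify the Kuhn-Tucker conditions at x* = (0.5, 0.5):
# the single constraint is active, so A = {1} and we need
# grad f(x*) + lam * grad g(x*) = 0 with lam >= 0.
import numpy as np

x_star = np.array([0.5, 0.5])
grad_f = 2.0 * x_star             # gradient of f = x1^2 + x2^2 at x*
grad_g = np.array([-1.0, -1.0])   # gradient of g = 1 - x1 - x2 (constant)

# One unknown multiplier: solve grad_g * lam = -grad_f in the least-squares sense.
lam = np.linalg.lstsq(grad_g.reshape(-1, 1), -grad_f, rcond=None)[0][0]
residual = grad_f + lam * grad_g  # left-hand side of (1.50)

print(lam, residual)  # lam is non-negative and the residual vanishes, so (1.50)-(1.51) hold
```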

1.4 Search for the Optimal Solution

The mathematical techniques discussed so far are good for determining whether a given point is stationary (a maximum or minimum) or not. They do not help in reaching the optimal solution starting from some other (non-optimal) point. There are basically two approaches to searching for the optimal solution from an arbitrary starting location: direct search methods and derivative-based methods. Linear programming, the interior point algorithm, and computational-intelligence-based methods such as the genetic algorithm and particle swarm optimization are examples of direct search methods, where only function values are required at each iteration. The gradient search method is an example of a derivative-based method of searching for the optimal solution.

1.5 Gradient Search Method

A function f(x) has its maximum rate of increase along its gradient, ∇f(x). The basic idea of the gradient search method is that the minimum of a function f(x) can be found by taking a series of steps in the direction of steepest descent, i.e., along −∇f(x). Given a starting point x_0 and a positive step size α, the new value of x is obtained as,

x_1 = x_0 − α ∇f(x) |x_0.   (1.52)

For the kth iteration,

x_k = x_(k−1) − α ∇f(x) |x_(k−1).   (1.53)

The iteration is continued until the following is satisfied:

‖∇f(x)‖ ≤ ε,   (1.54)

where ε is a small predefined tolerance.
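The iteration (1.52)-(1.54) can be sketched in a few lines; the test function f(x) = (x_1 − 1)² + (x_2 + 2)² and the step size are assumptions for illustration, not from the text:

```python
# Gradient search (steepest descent) with a fixed step size alpha,
# stopping when the gradient norm falls below the tolerance eps.
import numpy as np

def grad_f(x):
    """Gradient of f(x) = (x1 - 1)^2 + (x2 + 2)^2."""
    return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] + 2.0)])

x = np.array([5.0, 5.0])    # starting point x0
alpha, eps = 0.1, 1e-8      # step size and stopping tolerance

while np.linalg.norm(grad_f(x)) > eps:    # condition (1.54)
    x = x - alpha * grad_f(x)             # update (1.53)

print(x)  # converges to the minimizer (1, -2)
```

A fixed α works here because the function is well conditioned; in practice the step size is often chosen by a line search at each iteration.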

1.6 Linear Programming

Linear programming is a method of solving optimization problems in which the objective and the constraints are linear functions of the decision variables. All of the constraints can be expressed as equality constraints, as seen in Section 1.3.2, by adding non-negative slack variables. A wide range of non-linear optimization problems can also be solved by linear programming, after suitably linearizing the problem around the operating point of interest. The following is the normal form of the linear programming problem:

Minimize

f(x) = c_1 x_1 + c_2 x_2 + ... + c_n x_n   (1.55)

Subject to

a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
...
a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m

x_i ≥ 0;  i = 1, ..., n,
b_j ≥ 0;  j = 1, ..., m.   (1.56)

In the above equations, x_1, ..., x_n may also include slack variables, as mentioned before, to convert inequality constraints into equality constraints. The equality constraints in (1.56) are assumed to be linearly independent. Then, using the equality constraints, one can solve for m of the variables; the values of the remaining n − m variables have to be chosen.

An n-tuple x = [x_1, ..., x_n]ᵀ that satisfies all the constraints in (1.56) is called a feasible solution. A feasible solution is called an optimal solution if it minimizes f(x). A solution for which at least n − m decision variables are zero is called a basic feasible solution. A fundamental theorem in linear programming is that there is at least one basic feasible solution which is optimal. This allows us to search in the space of basic feasible solutions only. However, there are C(n, n−m) = n! / ((n−m)! m!) ways of setting n − m variables to zero from a set of n variables. This number can easily become very large, and one needs a systematic way to look for the optimal solution. The Simplex method, proposed by Dantzig in 1948, provides a systematic way of moving from one basic feasible solution to another such that the value of f(x) always reduces. Please see the references for details of the Simplex method.
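In practice one calls a production-grade solver rather than coding the Simplex method by hand. A minimal sketch using SciPy's linprog on an assumed small example (not from the text): minimize f = −3x_1 − 5x_2 subject to x_1 ≤ 4, 2x_2 ≤ 12, 3x_1 + 2x_2 ≤ 18, and x ≥ 0.

```python
# Solve a small LP with scipy.optimize.linprog; the solver adds slack
# variables internally to reach the equality (normal) form of Section 1.6.
from scipy.optimize import linprog

c = [-3.0, -5.0]               # coefficients of the objective (1.55)
A_ub = [[1.0, 0.0],
        [0.0, 2.0],
        [3.0, 2.0]]
b_ub = [4.0, 12.0, 18.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)  # optimal x = [2, 6] with f = -36
```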

1.7 Dynamic Programming

The problem of selecting a combination of values of a set of variables in order to minimize or maximize a certain objective is known as a combinatorial optimization problem. Dynamic programming, developed by Richard Bellman in the 1950s, finds frequent application in solving combinatorial optimization problems. We will learn the basics of dynamic programming while solving the problem of finding the shortest route among existing alternative routes between a starting point and a destination. In Figure 1.1, the circles represent cities, and the arrows (not to scale) represent the distances between two connected cities. The problem is to find the shortest distance between the cities A and J.

Before we discuss the details of the dynamic programming technique, let us familiarize ourselves with the following definitions of frequently used terms.

Stages: The problem can be divided into a number of stages. Each stage consists of a number of available policies or alternative actions. For the current problem, I to IV are the stages.

[Figure omitted: circles represent the cities A to J, arranged in columns (the states) by stage I to IV; labelled arrows between the circles give the inter-city distances (the policies).]

Figure 1.1: Problem of finding the shortest path between A and J


States: Each stage consists of a number of states to begin with. One such state belongs to the optimal path. For example, stage II in the current problem consists of the states B, C, and D.

Policy: A policy is chosen at each stage to move from one state to another. In the present case, the policies are the available paths at each stage.

1.7.1 Formulation of the Problem

Let x_n, n = 1, 2, ..., be the immediate destination at stage n. The route selected in going from A to J is then given by A → x_1 → x_2 → x_3 → x_4, where x_4 = J. Let f_n(s, x_n) be the total cost of the overall policy for the remaining stages (not only for that particular stage). Here s is the current state, stage n is about to be started, and x_n is the immediate destination. Let x_n* minimize f_n(s, x_n), with the corresponding minimum value designated by,

f_n*(s) = min over x_n of f_n(s, x_n) = f_n(s, x_n*).   (1.57)

Now, the total cost of the overall policy for the remaining stages can be decomposed as,

f_n(s, x_n) = immediate (transition) cost at stage n + minimum future cost (stage n + 1 onwards)
            = c_(s,x_n) + f_(n+1)*(x_n),   (1.58)

where c_(s,x_n) is the cost of moving from state s to x_n.


Note that the second part of the right-hand side of (1.58) is the minimum cost for stages n + 1 onwards. This is the main principle of dynamic programming. The formulation described here is backward dynamic programming: one starts by finding the optimal path for the last stage and moves backwards. The information regarding the best policy in the remaining stages is conveyed backwards, which saves the computational effort of re-finding the optimal path for the remaining stages.

Principle of optimality

Given the current state, an optimal policy for the remaining stages is independent of the policy decisions adopted in the previous stages. Therefore, an optimal immediate decision depends only on the current state, and not on how one got there.

1.7.2 Solution Procedure

The use of backward dynamic programming for solving the minimal-path problem described in Figure 1.1 is illustrated in the following.

Consider the case n = 4, where the traveler has one more stage to go, i.e., x_4 = J; s can be either H or I. The policies are described in the following table:

Table 1.1: n = 4

s | f_4(s, x_4) = c_(s,J) | f_4*(s) | x_4*
H | 3                     | 3       | J
I | 4                     | 4       | J

For n = 3, when the traveler has two more stages to go, the following table is valid:

Table 1.2: n = 3

s | f_3(s, x_3) = c_(s,x_3) + f_4*(x_3)  | f_3*(s) | x_3*
  | x_3 = H      | x_3 = I               |         |
E | 1 + 3 = 4    | 4 + 4 = 8             | 4       | H
F | 6 + 3 = 9    | 3 + 4 = 7             | 7       | I
G | 3 + 3 = 6    | 3 + 4 = 7             | 6       | H

The following table can be constructed for n = 2:

Table 1.3: n = 2

s | f_2(s, x_2) = c_(s,x_2) + f_3*(x_2)                 | f_2*(s) | x_2*
  | x_2 = E      | x_2 = F      | x_2 = G               |         |
B | 7 + 4 = 11   | 4 + 7 = 11   | 6 + 6 = 12            | 11      | E or F
C | 3 + 4 = 7    | 2 + 7 = 9    | 4 + 6 = 10            | 7       | E
D | 4 + 4 = 8    | 1 + 7 = 8    | 5 + 6 = 11            | 8       | E or F

For n = 1, i.e., the first stage of travel (the last to be computed), the following table is valid:

Table 1.4: n = 1

s | f_1(s, x_1) = c_(s,x_1) + f_2*(x_1)                 | f_1*(s) | x_1*
  | x_1 = B      | x_1 = C      | x_1 = D               |         |
A | 2 + 11 = 13  | 4 + 7 = 11   | 3 + 8 = 11            | 11      | C or D

Hence the optimal route is any of the following: A → C → E → H → J, A → D → E → H → J, or A → D → F → I → J, each with a total distance of 11.
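The backward sweep of Tables 1.1-1.4 can be sketched in a few lines of code, using the stage costs that appear in the tables:

```python
# Backward dynamic programming for the shortest-path problem of Figure 1.1.
# cost[s][x] is the distance from state s to immediate destination x.
cost = {
    'A': {'B': 2, 'C': 4, 'D': 3},
    'B': {'E': 7, 'F': 4, 'G': 6},
    'C': {'E': 3, 'F': 2, 'G': 4},
    'D': {'E': 4, 'F': 1, 'G': 5},
    'E': {'H': 1, 'I': 4},
    'F': {'H': 6, 'I': 3},
    'G': {'H': 3, 'I': 3},
    'H': {'J': 3},
    'I': {'J': 4},
}

f = {'J': 0}   # f*(s): minimum cost from state s to the destination J
best = {}      # best[s]: optimal immediate destination(s) x* from s

# Sweep the stages backwards: n = 4, 3, 2, 1.
for stage in (['H', 'I'], ['E', 'F', 'G'], ['B', 'C', 'D'], ['A']):
    for s in stage:
        totals = {x: cost[s][x] + f[x] for x in cost[s]}   # decomposition (1.58)
        f[s] = min(totals.values())                        # minimization (1.57)
        best[s] = sorted(x for x, v in totals.items() if v == f[s])

print(f['A'], best['A'])  # 11 ['C', 'D']
```

Each state's value is computed exactly once, which is precisely the saving over enumerating all complete routes.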

1.8 Optimization by Computational Intelligence

Computational intelligence (CI) is rational or intelligent behavior emerging as a result of mathematical operations on individual entities or groups of entities. Artificial neural networks (ANN), fuzzy logic, and evolutionary algorithms come under the paradigm of computational intelligence. The abilities to learn from experience and make decisions, to generalize a known problem, and to adapt to changing operating conditions are some of the common properties of CI. The genetic algorithm (GA), particle swarm optimization (PSO), and ant colony optimization (ACO) are some of the frequently used CI-based optimization techniques. PSO has recently been used in a number of power system applications. We will discuss the basics of PSO in the next section.

1.8.1 Particle Swarm Optimization (PSO)

The basic principles of PSO are taken from the collective movement of a flock of birds, a school of fish, or a swarm of bees. A number of agents, or particles, are employed in finding the optimal solution for the problem under consideration. The movement of the particles towards the optimal solution is guided by both the individual and the social knowledge of the particles. As shown below, the position of a particle at any instant is determined by its velocity at that instant and its position at the previous instant:
x_i(t) = x_i(t − 1) + v_i(t),   (1.59)

where x_i(t − 1) and x_i(t) are the position vectors of the ith particle at the instants t − 1 and t respectively, and v_i(t) is the velocity vector of the particle at t.

The velocity vector is updated by using the experience of the individual particle, as well as the knowledge of the performance of the other particles in its neighborhood. The velocity update rule for a basic PSO is,

v_i(t) = v_i(t − 1) + φ_1 r_1 (pbest_i − x_i(t − 1)) + φ_2 r_2 (gbest − x_i(t − 1)),   (1.60)

where φ_1 and φ_2 are adjustable parameters called the individual and social acceleration constants respectively; r_1 and r_2 are random numbers in the range [0, 1]; pbest_i is the best position vector found by the ith particle; and gbest is the best among the position vectors found by all the particles.

The vectors pbest_i and gbest are evaluated using a suitably defined fitness function. φ_1 and φ_2 are usually defined such that φ_1 + φ_2 = 4, with φ_1 = φ_2 = 2. The components of the velocity are limited by the following constraint to avoid large oscillations around the solution:

v_ij = { v_max if v_ij > v_max;  −v_max if v_ij < −v_max }.   (1.61)
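A minimal sketch of the basic PSO described by (1.59)-(1.61); the fitness function f(x) = Σ x², the swarm size, the velocity limit, and the iteration count are assumptions for illustration, not from the text:

```python
# Basic PSO per (1.59)-(1.61), minimizing the assumed fitness f(x) = sum(x^2),
# with phi1 = phi2 = 2 as in the text.
import random

random.seed(1)  # for repeatability of this sketch

def fitness(x):
    return sum(xi * xi for xi in x)

DIM, N, V_MAX, ITERS = 2, 20, 0.5, 200
phi1 = phi2 = 2.0

pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N)]
vel = [[0.0] * DIM for _ in range(N)]
pbest = [p[:] for p in pos]                 # each particle's best position so far
gbest = min(pbest, key=fitness)[:]          # best position found by the swarm

for _ in range(ITERS):
    for i in range(N):
        for j in range(DIM):
            r1, r2 = random.random(), random.random()
            vel[i][j] += (phi1 * r1 * (pbest[i][j] - pos[i][j])
                          + phi2 * r2 * (gbest[j] - pos[i][j]))   # update (1.60)
            vel[i][j] = max(-V_MAX, min(V_MAX, vel[i][j]))        # clamp (1.61)
            pos[i][j] += vel[i][j]                                # update (1.59)
        if fitness(pos[i]) < fitness(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=fitness)[:]

print(fitness(gbest))  # typically close to 0, the global minimum
```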


Bibliography

[1] R. Fletcher, Practical Methods of Optimization. New York: John Wiley & Sons, Inc., second ed., 1987.
[2] S. S. Rao, Engineering Optimization: Theory and Practice. New York: John Wiley & Sons, Inc., third ed., 1996.
[3] A. Ravindran, K. M. Ragsdell, and G. V. Reklaitis, Engineering Optimization: Methods and Applications. Wiley India, second ed., 2006.
[4] F. S. Hillier and G. J. Lieberman, Introduction to Operations Research. New Delhi: Tata McGraw-Hill Publishing Company Limited, seventh ed., 2001.
[5] Y. del Valle, G. K. Venayagamoorthy, S. Mohagheghi, J. C. Hernandez, and R. G. Harley, "Particle swarm optimization: basic concepts, variants and applications in power systems," IEEE Trans. Evolutionary Computation, vol. 12, no. 2, pp. 171-195, Apr. 2008.
[6] S. Chakrabarti, G. K. Venayagamoorthy, and E. Kyriakides, "PMU placement for power system observability using binary particle swarm optimization," Australasian Universities Power Engineering Conference (AUPEC 2008), Sydney, Australia, December 2008.
