

REAL POWER OPTIMIZATION WITH LOAD FLOW USING ADAPTIVE HOPFIELD NEURAL NETWORK

Kwang Y. Lee, Fatih M. Nuroglu, and Arthit Sode-Yome

Department of Electrical Engineering

The Pennsylvania State University

University Park, PA 16802

Abstract: This paper presents real power optimization with load flow using an adaptive Hopfield neural network. In order to speed up the convergence of the Hopfield neural network, two adaptive methods, slope adjustment and bias adjustment, were used with adaptive learning rates. Algorithms of economic load dispatch for piecewise quadratic cost functions using the Hopfield neural network have been developed for the two approaches. Instead of using the typical B-coefficient method, this paper uses an actual load flow to compute the transmission loss accurately. The methods have been tested on the IEEE 30-bus system to demonstrate their effectiveness. The performance of the proposed approaches is evaluated by comparing the results of the slope adjustment and bias adjustment methods with those of the conventional Hopfield network, and a further improvement is demonstrated by the use of momentum in the adaptive learning approaches.

Keywords: Optimal power flow, economic load dispatch, Hopfield neural networks, adaptive Hopfield neural networks

1. INTRODUCTION

In a power system, the operating cost at each time needs to be minimized via economic load dispatch (ELD) [1]. Since their introduction by Hopfield [2,3], Hopfield neural networks have been used in many different applications. An important property of the Hopfield neural network is that its energy decreases by a finite amount whenever there is any change in the inputs [4]. Thus, the Hopfield neural network can be used for optimization. Tank and Hopfield [5] described how several optimization problems can be rapidly solved by highly interconnected networks of simple analog processors, an implementation of the Hopfield neural network. Park and others [6] presented economic load dispatch for piecewise quadratic cost functions using the Hopfield neural network; the results of this method compared very well with those of the numerical method in a hierarchical approach [7]. King and others [8] applied the Hopfield neural network to the economic and environmental dispatching of electric power systems.

These applications, however, involved a large number of iterations and often showed oscillations during transients. Recently, in order to speed up the convergence of Hopfield neural networks, Lee and others [9] developed adaptive Hopfield neural network methods, the slope adjustment and bias adjustment methods, with adaptive learning rates. In these studies, however, the transmission loss was either calculated by the traditional B-coefficient method or ignored altogether. This paper incorporates both the adaptive Hopfield network for faster convergence and the load flow for accurate calculation of the transmission loss.

2. ECONOMIC LOAD DISPATCH

The cost function of the ELD problem is defined as follows:

C = \sum_i C_i(P_i) ,    (1)

C_i(P_i) = \begin{cases}
a_{i1} + b_{i1} P_i + c_{i1} P_i^2, & \text{fuel 1}, & \underline{P}_i \le P_i \le P_{i1} \\
a_{i2} + b_{i2} P_i + c_{i2} P_i^2, & \text{fuel 2}, & P_{i1} \le P_i \le P_{i2} \\
\vdots & & \\
a_{ik} + b_{ik} P_i + c_{ik} P_i^2, & \text{fuel } k, & P_{i,k-1} \le P_i \le \overline{P}_i
\end{cases}    (2)

where

C_i(P_i): cost of the i-th generator,
P_i: power output of generator i,
a_{ik}, b_{ik}, c_{ik}: cost coefficients of the i-th generator for fuel type k.

In minimizing the total cost, the constraints of power balance and power limits should be satisfied:

Engineering Intelligent Systems, Vol.8, No. 1, pp. 53-58, March 2000


a) Power balance

The total generating power has to be equal to the sum of the load demand and the transmission-line loss:

D + L - \sum_i P_i = 0 ,    (3)

where D is the total load and L is the transmission loss. The transmission loss is obtained by load flow. Traditionally, the transmission loss was calculated by the B-coefficient method [1] and, for simplification, it was often assumed to be zero.

b) Maximum and minimum limits of power

The generated power of each generator is limited, which can be expressed as

\underline{P}_i \le P_i \le \overline{P}_i ,    (4)

where \underline{P}_i and \overline{P}_i are respectively the minimum and maximum power generation limits.
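The piecewise cost (2) and the limits (4) can be checked numerically. The sketch below uses the unit-1 coefficients and fuel breakpoints from Table I; the function name and data layout are illustrative, not from the paper.

```python
# Piecewise quadratic fuel cost C_i(P_i) of Eq. (2), using the unit-1
# coefficients from Table I (a, b, c per fuel type, with breakpoints
# Pmin = 50, P1 = 100, P2 = 190, Pmax = 200 MW).

UNIT1_SEGMENTS = [
    # (lower bound, upper bound, a, b, c) for each fuel type
    (50.0, 100.0, 0.000, 1.900, 0.00355),   # fuel 1
    (100.0, 190.0, 0.000, 2.000, 0.00376),  # fuel 2
    (190.0, 200.0, 0.000, 2.200, 0.00415),  # fuel 3
]

def unit_cost(p):
    """Evaluate C_i(P_i) = a + b*P + c*P^2 on the segment containing p."""
    lo, hi = UNIT1_SEGMENTS[0][0], UNIT1_SEGMENTS[-1][1]
    if not lo <= p <= hi:
        # the limit constraint (4) is violated
        raise ValueError(f"P={p} violates limits (4): {lo} <= P <= {hi}")
    for low, high, a, b, c in UNIT1_SEGMENTS:
        if p <= high:
            return a + b * p + c * p * p
    raise AssertionError("unreachable")

print(unit_cost(80.0))   # evaluated on the fuel-1 segment
```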

3. HOPFIELD NETWORKS FOR ELD

The energy function of the continuous-time Hopfield network [3] is defined as

E = -\frac{1}{2} \sum_i \sum_j T_{ij} V_i V_j - \sum_i I_i V_i + \sum_i \theta_i V_i ,    (5)

where V_i, I_i, and \theta_i are respectively the output, external input, and threshold bias of neuron i.

The dynamics of the neurons is defined by

\frac{dU_i}{dt} = \sum_j T_{ij} V_j + I_i ,    (6)

where U_i is the total input to neuron i, and the sigmoidal function can be defined as

V_i = g_i(\lambda U_i) = g_i\!\left(\frac{U_i}{U_0}\right) = \frac{1}{1 + \exp\!\left(-\dfrac{U_i + \theta_i}{U_0}\right)} .    (7)

Stability is known to be guaranteed since the energy function is bounded and its increment is found to be nonpositive. The time evolution of the system is a motion in state space that seeks out minima in E and comes to a stop at such points.

In order to solve the ELD problem, the following energy function is defined by augmenting the objective function (1) with the power balance constraint (3):

E = \frac{1}{2} A \left( D + L - \sum_i P_i \right)^2 + \frac{1}{2} B \sum_i \left( a_i + b_i P_i + c_i P_i^2 \right) ,    (8)

where a_{ik}, b_{ik}, and c_{ik} are the cost coefficients defined as discrete functions of P_i in (2).

By comparing (8) with (5), whose threshold is assumed to be zero, the weight parameters and external input of neuron i in the network [6] are given by

T_{ii} = -A - B c_i , \qquad T_{ij} = -A , \qquad I_i = A(D + L) - \frac{B b_i}{2} .    (9)

From (6), the differential synchronous transition mode used in the computation of the Hopfield neural network is

U_i(k) - U_i(k-1) = \sum_j T_{ij} V_j(k) + I_i .    (10)

The sigmoidal function (7) can be modified [6] to meet the power limit constraint as follows:

V_i(k+1) = \left( \overline{P}_i - \underline{P}_i \right) \frac{1}{1 + \exp\!\left(-\dfrac{U_i(k) + \theta_i}{U_0}\right)} + \underline{P}_i .    (11)
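A minimal sketch of the mapping (9)-(11) follows, iterating the synchronous update (10) for a hypothetical two-generator case. A, B, U_0, the cost coefficients, and the demand-plus-loss term are toy values, not the paper's settings.

```python
import math

# One synchronous Hopfield iteration for ELD, following Eqs. (9)-(11).
# All numeric values here are illustrative assumptions.
A, B, U0 = 1.0, 1.0, 100.0
b = [2.0, 1.8]            # linear cost coefficients b_i
c = [0.004, 0.006]        # quadratic cost coefficients c_i
Pmin, Pmax = [50.0, 20.0], [200.0, 80.0]
D_plus_L = 150.0          # D + L (the loss would come from a load flow)

# Weights and external inputs, Eq. (9)
T = [[-A - B * c[i] if i == j else -A for j in range(2)] for i in range(2)]
I = [A * D_plus_L - B * b[i] / 2.0 for i in range(2)]

U = [0.0, 0.0]            # neuron inputs
theta = 0.0               # bias (zero here; Section 4.2 adapts it)

def output(i):
    """Limit-respecting sigmoid, Eq. (11)."""
    s = 1.0 / (1.0 + math.exp(-(U[i] + theta) / U0))
    return Pmin[i] + (Pmax[i] - Pmin[i]) * s

for _ in range(3):        # a few synchronous steps of Eq. (10)
    V = [output(i) for i in range(2)]
    dU = [sum(T[i][j] * V[j] for j in range(2)) + I[i] for i in range(2)]
    U = [U[i] + dU[i] for i in range(2)]

P = [output(i) for i in range(2)]
print(P)                  # outputs always stay inside the limits (4)
```

Because (11) maps any input into the open interval between the limits, the constraint (4) is satisfied by construction at every step.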

4. ADAPTIVE HOPFIELD NETWORKS

The traditional approach to solving the economic load dispatch (ELD) problem using the Hopfield neural network requires a large number of iterations and often oscillates during the transient [6,9]. In order to speed up convergence, two adaptive adjustment methods are developed in this paper: the slope adjustment and bias adjustment methods [9].

[Figure 1 plots the neuron output P_i versus the input U_i for gain parameters U_0 = 50, 100, 150, and 200.]

Figure 1. Sigmoidal threshold function with different values of the gain parameter.

4.1 Slope Adjustment Method

In the transient state, the neuron input oscillates around the threshold value, zero in Fig. 1. Some neuron inputs oscillate away from the threshold value. If the gain parameter is set too high, the oscillation will occur in the saturation region. If the slope in this region is too low, the neurons cannot reach the stable state, which causes instability.

Since the energy is to be minimized and its convergence depends on the gain parameter U_0, the gradient-descent method can be applied to adjust the gain parameter as

U_0(k+1) = U_0(k) - \eta_s \frac{\partial E}{\partial U_0} ,    (12)

where \eta_s is a learning rate.

From (8) and (11), the gradient of the energy with respect to the gain parameter can be computed as

\frac{\partial E}{\partial U_0} = \sum_{i=1}^{n} \frac{\partial E}{\partial P_i} \frac{\partial P_i}{\partial U_0} .    (13)

The update rule (12) needs a suitable choice of the learning rate \eta_s. For speed and convergence, a method to compute adaptive learning rates is developed following the procedure in Ku and Lee [10]. It can be shown [9,10] that the optimal learning rate is

\eta_s^{*} = \frac{1}{2\, g_{s,\max}} ,    (14)

where g_{s,\max} := \max_k |g_s(k)| and g_s(k) = \partial E(k) / \partial U_0.
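The update (12) with the adaptive rate (14) can be sketched as follows. The gradient samples are hypothetical stand-ins for the dE/dU_0 values that (13) would supply; the running maximum implements g_{s,max}.

```python
# Adaptive learning rate for the gain update, Eqs. (12)-(14):
# eta_s* = 1 / (2 * max_k |dE/dU0|).
grads = [4.0, -2.5, 1.0, -0.5]   # hypothetical dE/dU0 samples
U0 = 100.0
g_max = 0.0
for g in grads:
    g_max = max(g_max, abs(g))   # g_{s,max} = max_k |g_s(k)|
    eta = 1.0 / (2.0 * g_max)    # optimal learning rate, Eq. (14)
    U0 = U0 - eta * g            # gradient-descent step, Eq. (12)
print(round(U0, 4))
```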

4.2 Bias Adjustment Method

There is a limitation in the slope adjustment method in that the slopes are small near the saturation region of the sigmoidal function (Fig. 1). If every input can use the same maximum possible slope, convergence will be much faster. This can be achieved by changing the bias to shift the input near the center of the sigmoidal function. The bias can be changed following a gradient-descent method similar to that used in the slope adjustment method:

\theta_i(k+1) = \theta_i(k) - \eta_b \frac{\partial E}{\partial \theta_i} ,    (15)

where \eta_b is a learning rate.

The bias can be applied to every neuron as in (7); therefore, from (8) and (11), the derivative of the energy with respect to each bias can be computed individually as

\frac{\partial E}{\partial \theta_i} = \frac{\partial E}{\partial P_i} \frac{\partial P_i}{\partial \theta_i} .    (16)

The adaptive learning rate is also developed following a similar procedure [10]. It can be shown that the optimal learning rate is

\eta_b^{*} = -\frac{1}{g_b(k)} ,    (17)

where

g_b(k) = \sum_i \sum_j T_{ij} \frac{\partial V_i}{\partial \theta} \frac{\partial V_j}{\partial \theta} .    (18)

Again, any learning rate larger than \eta_b^{*} does not guarantee faster convergence.
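A sketch of the bias update (15) with the adaptive rate (17). The dE/dtheta_i and g_b(k) samples below are hypothetical stand-ins for (16) and (18); with the negative weights of (9), g_b(k) is negative, so eta_b* comes out positive.

```python
# Bias update with the adaptive rate of Eqs. (15) and (17):
# eta_b* = -1 / g_b(k).
theta = 0.0
grad_E_theta = [3.0, 1.2, 0.4]   # hypothetical dE/dtheta samples, Eq. (16)
g_b = [-2.0, -1.6, -1.25]        # hypothetical g_b(k) values, Eq. (18)
for dE, g in zip(grad_E_theta, g_b):
    eta = -1.0 / g               # optimal learning rate, Eq. (17)
    theta -= eta * dE            # gradient-descent bias step, Eq. (15)
print(round(theta, 4))
```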

4.3 Momentum

The speed of convergence can be accelerated by adding momentum to the update processes. The momentum can be applied when updating the input in (10), the gain parameter in (12), and the bias in (15):

U_i(k) - U_i(k-1) = \sum_j T_{ij} V_j(k) + I_i + \alpha_u \, \Delta U_i(k-1) ,    (19)

U_0(k) = U_0(k-1) - \eta_s \frac{\partial E}{\partial U_0} + \alpha_s \, \Delta U_0(k-1) ,    (20)

\theta(k) = \theta(k-1) - \eta_b \frac{\partial E}{\partial \theta} + \alpha_b \, \Delta \theta(k-1) ,    (21)

where the \alpha's are momentum factors.
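As a sketch, the gain update with momentum (20) behaves as below. The gradient samples are hypothetical; alpha_s = 0.97 follows the value applied to the gain parameter in the simulations.

```python
# Momentum-augmented gain update, Eq. (20):
# U0(k) = U0(k-1) - eta_s * dE/dU0 + alpha_s * dU0(k-1).
eta_s, alpha_s = 0.1, 0.97       # eta_s is an illustrative fixed rate
U0, dU0_prev = 100.0, 0.0
for g in [2.0, 1.5, 1.0]:        # hypothetical dE/dU0 samples
    dU0 = -eta_s * g + alpha_s * dU0_prev
    U0 += dU0
    dU0_prev = dU0
print(round(U0, 6))
```

The momentum term reuses the previous step, so successive steps in the same direction compound and the parameter moves faster than with plain gradient descent.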

5. SIMULATION RESULTS

The results of the slope adjustment and bias adjustment methods are compared at different load demands. Then the results of the slope and bias adjustment methods with adaptive learning rates and momentum applied are compared with each other. Three load levels are tested: 220 MW, 283.4 MW, and 380 MW. The 220 MW and 380 MW load levels were used to force one or more units to hit their lower and upper limits. The IEEE 30-bus, 6-generator system is used for the load flow; the line data is given in [13]. All the graphs are based on the nominal load, 283.4 MW, to demonstrate the convergence. A Compaq 90 MHz Pentium computer was used for the simulation.

For real power optimization with load flow, first the load flow is run to obtain the initial generator powers and the system loss. Second, the adaptive Hopfield network updates the slope and bias parameters and the generator powers. Then, with the new generator powers, the load flow is run again to obtain the new system loss and generator powers. If the system has converged, the program stops; otherwise the process is repeated. This procedure is shown in Fig. 2.
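The alternation just described can be sketched as a loop. Here `run_load_flow` and `hopfield_step` are stubs standing in for the actual load-flow solver and the adaptive Hopfield update; the 2% loss model and the equal-share dispatch rule are purely illustrative.

```python
# Skeleton of the Fig. 2 procedure: alternate a load-flow solve (for the
# transmission loss) with dispatch updates until the powers stop changing.

def run_load_flow(P):
    """Stub: pretend the transmission loss is 2% of total generation."""
    return 0.02 * sum(P)

def hopfield_step(P, loss, demand):
    """Stub: nudge each unit toward an equal share of demand + loss."""
    target = (demand + loss) / len(P)
    return [p + 0.5 * (target - p) for p in P]

def optimize(P, demand, tol=1e-6, max_iter=1000):
    loss = run_load_flow(P)
    for _ in range(max_iter):
        loss = run_load_flow(P)                  # new system loss
        P_new = hopfield_step(P, loss, demand)   # dispatch update
        if max(abs(a - b) for a, b in zip(P_new, P)) < tol:
            return P_new, loss                   # converged: stop
        P = P_new                                # otherwise repeat
    return P, loss

P, loss = optimize([100.0, 100.0, 100.0], demand=283.4)
print(round(sum(P), 2), round(loss, 3))
```

At convergence the power balance (3) holds: total generation equals the demand plus the computed loss.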

TABLE I
COST COEFFICIENTS FOR PIECEWISE QUADRATIC COST FUNCTIONS

U  Min  P1   P2   Max  Fuel  a      b      c
1  50   100  190  200  1     0.000  1.900  0.00355
                       2     0.000  2.000  0.00376
                       3     0.000  2.200  0.00415
2  20   35   50   80   1     0.000  1.700  0.01700
                       2     0.000  1.750  0.01750
                       3     0.000  2.050  0.02350
3  10   25   -    35   1     0.000  1.000  0.06250
                       2     0.000  1.200  0.08250
4  10   20   -    30   1     0.000  3.250  0.00834
                       2     0.000  3.650  0.01234
5  15   30   -    50   1     0.000  3.000  0.02500
                       2     0.000  3.300  0.03500
6  12   25   -    40   1     0.000  3.000  0.02500
                       2     0.000  3.300  0.03500

The slope adjustment method with load flow was run with fixed and adaptive learning rates at the three load demands. The results are shown in Table II for comparison. For 283.4 MW, the generation cost with the fixed learning rate is plotted in Fig. 3, where the effectiveness of the adaptive Hopfield neural networks is clearly shown.

[Flowchart: initial load flow → Hopfield neural network → load flow → converged? If no, repeat; if yes, output the optimal cost and stop.]

Figure 2: Flowchart of the optimization using load flow.

The bias adjustment method with load flow was run with fixed and adaptive learning rates at the three load demands. The results are shown in Table II for comparison. For 283.4 MW, the generation cost is plotted for the fixed learning rate in Fig. 3 and for the adaptive learning rate in Fig. 4.

Fig. 3: Cost using the conventional Hopfield network (dash-dotted line), the slope adjustment method with fixed learning rate (solid line), and the bias adjustment method with fixed learning rate (dashed line).

TABLE II
RESULTS FOR THE CONVENTIONAL HOPFIELD NETWORK (A), THE SLOPE ADJUSTMENT METHOD WITH FIXED LEARNING RATE η = 1.0 (B), AND THE BIAS ADJUSTMENT METHOD WITH FIXED LEARNING RATE η = 1.0 (C)

          220 MW                  283.4 MW                380 MW
U       A       B       C       A       B       C       A       B        C
1       134.80  135.11  133.93  180.05  179.12  180.86  200     200      200
2       37.602  37.299  37.352  49.246  49.05   49.19   78.821  78.937   78.935
3       16.495  16.340  16.492  19.853  19.75   19.83   25.837  25.802   25.822
4       10.704  10.774  11.009  15.961  16.31   15.12   29.210  29.886   29.890
5       15.369  15.451  16.027  16.916  17.51   17.18   31.826  31.976   31.758
6       12.083  12.093  12.165  12.953  13.13   12.86   29.995  29.784   29.993
Tot. P. 227.05  227.06  226.97  294.99  294.88  295.45  396.39  396.385  396.39
Cost    589.63  589.73  590.06  810.06  810.08  810.24  1391    1390.89  1390.9
Loss    7.053   7.062   6.974   11.582  11.62   11.645  16.389  16.385   16.398
Iter.   48000   31000   25000   50000   30000   30000   49000   31700    31500
U0      100     100     100     100     100     100     100     100      100
Theta   0       0       50      0       0       50      0       0        50
Time    NA      3m 45s  3m 4s   NA      3m 37s  3m 37s  NA      3m 51s   3m 53s
η       0       1       1       0       1.0     1.0     0       1        1

Figure 4: Cost using the slope adjustment method with load flow and adaptive learning rate (solid line) and the bias adjustment method with load flow and adaptive learning rate (dash-dotted line).


Momentum is applied to the input of each method, and to the gain parameter for the slope adjustment method, to speed up the convergence of the system. When momentum is applied, the number of iterations is significantly decreased. In Fig. 5, momentum factors of 0.9 and 0.97 are applied to the input and gain parameter of the slope adjustment method, and the number of iterations is reduced to about 15 percent of that of the slope adjustment method without momentum. The system bus information (after convergence) is shown in Table IV.

As seen in Table III, for the 220 MW load, Units 4, 5, and 6 generate 10.606, 15.357, and 12.074 MW, respectively, all very close to their lower limits. For the 380 MW load, Units 1, 2, and 4 generate 199.999, 79.024, and 29.809 MW, respectively, close to their 200, 80, and 30 MW upper limits.

TABLE III
RESULTS FOR THE ADAPTIVE SLOPE ADJUSTMENT METHOD WITH MOMENTUM 0.9 APPLIED TO INPUT AND GAIN PARAMETER, WITH LOAD FLOW, AND THE ADAPTIVE BIAS ADJUSTMENT METHOD WITH MOMENTUM 0.9 APPLIED TO INPUT, WITH LOAD FLOW

             220 MW              283.4 MW            380 MW
U            Slope     Bias      Slope     Bias      Slope     Bias
1            135.466   133.977   182.984   180.173   199.999   199.986
2            37.178    37.474    48.691    49.188    79.024    79.448
3            16.405    16.518    19.61     19.897    25.76     25.687
4            10.606    10.971    15.385    15.334    29.809    29.435
5            15.357    15.891    15.951    17.443    32.16     32.507
6            12.074    12.15     12.629    12.939    29.628    29.322
Total Power  227.086   226.981   295.252   294.974   396.324   389.385
Cost         589.647   589.965   810.278   810.203   1390.997  391.16
Loss         7.0861    9.981     11.852    11.574    16.324    9.385
Iterations   3200      2700      4000      3000      3000      2500
U0           100       100       100       100       100       100
Theta        0         50        0         50        0         50
Time         25s       21s       30s       23s       25s       20s
η            1.00E-04  1         1.00E-04  1         1.00E-04  1

In Fig. 6, momentum 0.9 is applied to the input of the bias adjustment method with adaptive learning rate using load flow. The reduction in the number of iterations for the bias adjustment method is similar to that for the slope adjustment method.

Figure 5: Cost using the slope adjustment method with load flow and adaptive learning rate, without momentum (solid line) and with momentum 0.9 applied at the input and 0.97 at the gain parameter (dash-dotted line).

Figure 6: Cost using the bias adjustment method with load flow and adaptive learning rate, without momentum (solid line) and with momentum 0.9 applied at the input (dash-dotted line).

TABLE IV
BUS INFORMATION FOR THE 283.4 MW LOAD WITH THE ADAPTIVE SLOPE ADJUSTMENT METHOD WITH MOMENTUM

Bus #  V (p.u.)  P (MW)    Q (MVAr)    Bus #  V (p.u.)  P (MW)    Q (MVAr)
1      1         182.984   -41.067     16     0.934     -3.495    -2.414
2      1         27        24.362      17     0.919     -8.999    -5.899
3      1         -10.381   28.257      18     0.912     -3.199    -0.965
4      1         15.387    28.366      19     0.906     -9.499    -3.482
5      1         -78.241   39.272      20     0.909     -2.198    -0.59
6      1         12.63     30.189      21     0.907     -17.496   -11.741
7      0.984     -22.795   -12.687     22     0.908     0.007     0.705
8      0.989     -2.399    -0.825      23     0.908     -3.197    -1.798
9      0.942     0         2.139       24     0.892     -8.693    -7.841
10     0.922     -5.792    1.698       25     0.893     0.01      0.485
11     0.987     -7.603    -1.413      26     0.87      -3.494    -2.842
12     0.958     -11.176   -2.238      27     0.904     0.004     2.26
13     0.99      0.007     -0.586      28     0.984     0.005     -2.409
14     0.938     -6.193    -0.331      29     0.882     -2.399    -0.952
15     0.927     -8.174    -5.318      30     0.87      -10.599   -1.296


For both figures (Fig. 5 and Fig. 6), the 283.4 MW load was used. The results for both methods are shown in Table III.

6. CONCLUSIONS

In this paper, the slope and bias adjustment methods are applied to real power optimization with load flow. For all three load levels, the slope and bias adjustment methods gave very good responses. Especially with momentum applied, the iteration counts are very low and the computation time is around 20-25 seconds. Near the lower and upper generator limits, the system converged very smoothly.

ACKNOWLEDGEMENTS

This work was supported in part by the National Science Foundation under grants INT-9605028 and ECS-9705105.

REFERENCES

1. A. J. Wood and B. F. Wollenberg, Power Generation, Operation, and Control, Second Edition, John Wiley & Sons, New York, NY, 1996.
2. J. J. Hopfield, 'Neural networks and physical systems with emergent collective computational abilities', Proceedings of the National Academy of Sciences, USA, Vol. 79, pp. 2554-2558, April 1982.
3. J. J. Hopfield, 'Neurons with graded response have collective computational properties like those of two-state neurons', Proceedings of the National Academy of Sciences, USA, Vol. 81, pp. 3088-3092, May 1984.
4. J. A. Freeman and D. M. Skapura, Neural Networks: Algorithms, Applications, and Programming Techniques, Addison-Wesley, July 1992.
5. D. W. Tank and J. J. Hopfield, 'Simple "neural" optimization networks: an A/D converter, signal decision circuit, and a linear programming circuit', IEEE Transactions on Circuits and Systems, Vol. CAS-33, No. 5, pp. 533-541, May 1986.
6. J. H. Park, Y. S. Kim, I. K. Eom, and K. Y. Lee, 'Economic load dispatch for piecewise quadratic cost function using Hopfield neural network', IEEE Transactions on Power Systems, Vol. 8, No. 3, pp. 1030-1038, August 1993.
7. C. E. Lin and G. L. Viviani, 'Hierarchical economic dispatch for piecewise quadratic cost functions', IEEE Transactions on Power Apparatus and Systems, Vol. PAS-103, No. 6, June 1984.
8. T. D. King, M. E. El-Hawary, and F. El-Hawary, 'Optimal environmental dispatching of electric power systems via an improved Hopfield neural network model', IEEE Transactions on Power Systems, Vol. 10, No. 3, pp. 1559-1565, August 1995.
9. K. Y. Lee, A. Sode-Yome, and J. H. Park, 'Adaptive Hopfield neural networks for economic load dispatch', IEEE Transactions on Power Systems, Vol. 13, No. 2, pp. 519-526, May 1998.
10. C. C. Ku and K. Y. Lee, 'Diagonal recurrent neural networks for dynamic systems control', IEEE Transactions on Neural Networks, Vol. 6, No. 1, pp. 144-156, January 1995.
11. M. A. El-Sharkawi and D. Niebur (eds.), Application of Artificial Neural Networks to Power Systems, IEEE Power Engineering Society, 96 TP 112-0, IEEE, 1996.
12. J. J. Hopfield and W. D. Tank, 'Neural computation of decisions in optimization problems', Biological Cybernetics, Vol. 52, pp. 141-152, 1985.
13. K. Y. Lee, Y. M. Park, and J. L. Ortiz, 'United approach to optimal real and reactive power dispatch', IEEE Transactions on Power Apparatus and Systems, Vol. PAS-104, pp. 1147-1153, 1985.

In order to speed up convergence. Traditionally. bik. zero in Fig. The dynamics of the neurons is defined by dU i dt = ∑ j T ij V j + Ii . the transmission loss was calculated by the B-coefficient method [1] and. external input and threshold bias of neuron i. Ii. Some neuron inputs . the neuron input oscillates around the threshold value. and θi are respectively the output. 1. it was often assumed zero. Sigmoidal threshold function with different values of the gain parameter.1) = ∑ TijV j ( k ) + I i . for simplification. ADAPTIVE HOPFIELD NETWORKS (5) where Vi. (4) (9) Bbi . j (10) where P and P are respectively the minimum and maximum power generation limits. 2 Ii = A( D + L ) − From (6).P i ) 1 + Pi (11) Ui(k) + θi 1 + exp − U0 3. the differential synchronous transition mode used in computation of the Hopfield neural network is U i ( k ) − U i ( k . (3) where D is total load and L is transmission loss The transmission loss is obtained by load flow. b) Maximum and minimum limits of power The generation power of each generator has some limits and it can be expressed as Pi ≤ Pi ≤ Pi . Vol. By comparing (8) with (5) whose threshold is assumed to be zero. 53-58.1 Slope Adjustment Method In transient state.(8) 2 i 2 i 100 00 -6 -400 -200 0 Ui 200 400 600 Figure 1. cik are the cost coefficients as discrete functions of Pi defined in (1). 4.8. i i 2 i j 4. i where aik. In order to solve the ELD problem. two adaptive adjustment methods are developed in this paper: slope adjustment and bias adjustment methods [9]. Tij = − A . 1. The time evolution of the system is a motion in state-space that seeks out minima in E and comes to a stop at such points. The sigmoidal function (7) can be modified [6] to meet the power limit constraint as follows: Vi ( k + 1) = ( P i .Engineering Intelligent Systems. HOPFIELD NETWORKS FOR ELD The energy function of the continues-time Hopfield network [3] is defined as E=− 1 ∑ ∑ Tij ViV j − ∑ I i Vi + ∑ θ i Vi . pp. 
March 2000 54 a) Power balance The total generating power has to be equal to the sum of load demand and transmission-line loss: D + L − ∑ Pi = 0 . No. (7) Pi 150 Stability is known to be guaranteed since the energy function is bounded and its increment is found to be nonpositive. 250 U0=50100 150 200 200 where Ui is the total input to neuron i and the sigmoidal function can be defined as U Vi = g i ( λ U i ) = g i ( i ) = U0 1 U +θi 1 + exp − i U0 . the following energy function is defined by augmenting the objective function (1) with the constraint (2): E= 1 1 A( D + L − ∑ Pi )2 + B ∑ (ai + bi Pi + ci Pi 2 ) . (6) The traditional approach in solving the economic load dispatch (ELD) problem using the Hopfield neural network requires a large number of iterations and often oscillates during the transient [6] and [9]. the weight parameters and external input of neuron i in the network [6] are given by Tii = − A − Bci .

Fig 1. If every input can use the same maximum possible slope. All the graphs are based on the nominal load. It can be shown that the optimal learning rate is η∗ = − b where gb ( k ) = ∑ ∑ Tij ∂ Vi ∂ V j . therefore. Second. ∂θ i ∂ Pi ∂θ i (16) The results of slope adjustment and bias adjustment methods are compared at different load demands. the oscillation will occur at the saturation region. March 2000 55 oscillate away from the threshold value. max The speed of convergence can be accelerated by adding momentum in the update processes. SIMULATION RESULTS where ηb is a learning rate. 283. a method to compute adaptive learning rates is developed following the procedure in Ku and Lee [10]. and the bias in (15): Ui (k ) − U i ( k − 1) = ∑ TijV j ( k ) + α u ∆U i ( k − 1) . ∂θ ∂θ (18) 1 . the gradient-descent method can be applied to adjust the gain parameter as U 0 ( k + 1) = U 0 ( k ) − η s ∂E . g b (k ) (17) (12) where ηs is a learning rate. with the new generator power. (14) U 0 (k ) = U 0 ( k − 1) − η s θ ( k ) = θ ( k − 1) − η b ∂E + α s ∆U 0 ( k − 1) . The line data is given in [13]. Then the results of the slope and bias adjustment methods with adaptive learning rates and momentum applied are compared with each other. If the slope in this region is too low. The IEEE 30-bus. ∂θ i (15) 5. This can be achieved by changing the bias to shift the input near the center of the sigmoidal function. For speed and convergence. Since energy is to be minimized and its convergence depends on the gain parameter U0. ∂θ 4. There is a limitation in the slope adjustment method in that slopes are small near the saturation region of the sigmoidal function. the adaptive Hopfield networks update the slope and bias parameters and generator power. the gain parameter in (12).2 Bias Adjustment Method where α’s are momentum factors. g s (k ) = ∂ E (k ) / ∂ U 0 .Engineering Intelligent Systems. i=1 i ∂U 0 n (13) 4. Three load levels are tested: 220 MW. 
∂ U0 The adaptive learning rate is also developed following the similar procedure [10].4 MW and 380 MW. Then.3 Momentum The update rule of (12) needs a suitable choice of the learning rate ηs. For real power optimization with loadflow. any other learning rate larger than η ∗ does b not guarantee a faster convergence. ∂E + α b ∆θ ( k − 1) .4 MW. 6 generator system is used for loadflow. from (8) and (11). The 220 MW and 380 MW load levels were used to force one or more units to hit lower and upper limits. a derivative of energy with respect to a bias can be individually computed as ∂ E ∂ E ∂ Pi = . 283. The momentum can be applied when updating the input in (10). 53-58. The bias can be applied to every neuron as in (7). the gradient of energy with respect to the gain parameter can be computed as ∂E = ∂U 0 Again. ∂U0 (20) (21) where g s. No. (19) j . ∑∂P ∂ E ∂ Pi . A Compaq 90 MHz Pentium computer was used for simulation. 1. . If the gain parameter is set too high. first the load flow is run to obtain initial generator power and the system loss. pp. The bias can be changed following the similar gradient-descent method used in the slope adjustment method: θ i (k + 1) = θ i ( k ) − η b ∂E . It can be shown [9.max : = max g s ( k ) . convergence will be much faster. From (8) and (11). Vol. to demonstrate the convergence.10] that the optimal learning rate is η∗ s = 1 2 g s.8. the neurons can not go to the stable state and will cause instability.

0 (B) AND THE BIAS ADJUSTMENT METHOD WITH FIXED LEARNING RATE η=1. THE SLOPE ADJUSTMENT METHOD WITH FIXED LEARNING RATE η=1.009 16.995 29.19 19.299 16.73 7.000 0. U0 Theta Time η A 134.8.3: Cost of the using Hopfield network (dashed-dotted line).12 16.582 11.645 50000 30000 30000 100 100 100 0 0 50 NA 3m 37s 3m 37s 0 1.05 49.000 0.12 180.200 3.750 2.886 29.00355 .39 1391 1390.000 0.602 16.13 12.774 15. in Fig 4.700 1.4 MW.39 396.02500 .993 396.953 13.01700 .000 0.05 589.86 49.9 16.062 31000 100 0 3m 45s 1 283.352 16.837 25. This procedure is shown in Fig.784 29.45 810. pp.51 17. 53-58. and adaptive.027 12.000 3.961 16.935 25. Vol.822 29.853 19.250 3. learning rates.00415 .93 37.31 15.492 11. the generation cost was plotted for fixed learning rate in Fig 3.000 0.389 16. No.000 1.0 (C) 220 M B 135.000 0. the effectiveness of the adaptive Hopfield neural networks is clearly shown. Then.000 0.97 590.495 10.451 12. For 283. N.900 2.05 179.053 48000 100 0 NA 0 C 133. TABLE I COST COEFFICIENTS FOR PIECEWISE QUADRATIC COST FUNCTIONS GENERATIONS P1 P2 Max F F1 F2 F3 50 100 190 200 1 1 2 3 2 3 20 35 50 80 1 1 2 3 2 3 10 25 35 1 1 2 2 10 20 30 1 1 2 2 15 30 50 1 1 2 2 12 25 40 1 1 2 2 COST a 0.300 3.000 0.11 37.000 3.18 12.99 294. P. 1.02500 . 0 1 2 Iterations 3 4 x 10 5 4 Figure 4: Cost of the using slope adjustment method with load flow for adaptive learning rate (solid line) and slope adjustment method with load flow for adaptive learning rate (dashed-dotted line).083 227.802 25.650 3.210 29.000 0. Cost 850 bias 800 slope 750 Bias adjustment method with loadflow was run for fixed and adaptive learning rates at three load demands.000 COEFFICIENTS b c 1.00376 . U 1 2 3 4 5 6 Tot.369 12.80 37.826 31.200 1.050 1.0 1. In Fig 3.093 227.06 589.02350 . 3 4 5 6 TABLE II RESULTS FOR CONVENTIONAL HOPFIELD NETWORK (A). 2.75 19.385 396. March 2000 56 1 0 0 0 load flow is run to obtain the new system loss and generator power.Engineering Intelligent Systems.01750 . 
if the system converges.06 6.03500 .890 31.89 1390.03500 Cost 950 900 850 bias Slope conventional 800 750 0 1 2 Iterations 3 4 x 10 5 4 U Min 1 2 Fig.165 226.340 10.398 49000 31700 31500 100 100 100 0 0 50 NA 3m 51s 3m 53s 0 1 1 Slope adjustment method with load flow was run for fixed and adaptive learning rates at three load demands.in Fig 3.000 0. The results are shown in Table II for comparison.63 7.976 31.08 810.000 0. slope adjustment method with fixed learning rate (solid line) and bias adjustment method with fixed learning rate (dashed line).62 11.000 2.821 78. the generation cost was plotted for fixed .24 11.385 16.0 380 MW A B C 200 200 200 78.000 0. otherwise the process will be repeated.916 17.83 15. Initial Loadflow Hopfield N. the program will stop.06250 .0825 . For 283.000 0.4 MW. Cost Lost Iter.88 295. The results are shown in Table II for comparison.704 15.937 78.246 49.06 810.01234 .300 ..758 29.974 25000 100 50 3m 4s 1 Loadflow No Converged? 1000 Yes Optimal Cost 950 Stop 900 Figure 2 : Flowchart for the optimization with using load flow.00834 .86 294. .4 MW A B C 180.

841 0.809 MW. Units 1.628 29.482 -0.139 24 1.934 0.693 0.909 0.907 0. March 2000 57 1000 Momentum is applied to the input of each method and gain parameter for slope adjustment method to speed up the convergence of the system. 9 5 0 9 0 0 Cost Without Momentum 8 5 0 With Momentum 8 0 0 7 5 0 0 0.278 810.385 15. momentum 0. 80. respectively.174 Q(MVAr) Bus # -41.15 Total Power 227.977 2 37.606.413 26 -2. 1 0 0 0 9 5 0 TABLE III RESULTS FOR ADAPTIVE SLOPE ADJUSTMENT METHOD WITH MOMENTUM 0.16 32.9 is applied to input of bias adjustment method with adaptive learning rate using load flow.518 4 10.984 0. respectively. Also for 380 MW load.87 P (MW) -3.00E-04 1 283. 15.198 -17.074 MW.199 -9.067 16 24. 6.599 Q(MVAr) -2.005 -2.257 18 28. 5 .927 P (MW) 182.939 29.919 0.272 20 30.443 32.97 is applied to input and gain parameter of slope adjustment methods.9 applied at input (dashed-dotted line).908 0.474 3 16. .606 10.99 0.01 -3. The system bus information (after convergence) is shown in Table IV.366 19 39.629 12.647 589.825 23 2. pp.952 -1.435 15.495 -8.997 391.892 0.891 6 12.965 -3.024 79. respectively. for 220 MW load.9 APPLIED TO INPUT WITH LOAD FLOW.912 0.241 12. No.485 -2.984 27 -10.0861 9.007 -3.16 11.5 2 x 10 4 Figure 6: Cost of the using bias adjustment method with load flow for adaptive learning rate without momentum (solid line) and with momentum 0.603 -11.687 22 -0.938 0. and 12.882 0.296 In Fig. which are all very close to lower limits.024. Reduction of the iteration number for the bias adjustment method is similar to the case for slope adjustment method.999 -3.899 -0.63 -22.999 199.984 180.984 0.586 28 -0. and 29.986 48.324 389.189 21 -12.26 -2. 1.5 1 1. 
the number of iterations is significantly decreased.87 0.9 and 0.331 29 -5.893 0.u) 1 1 1 1 1 1 0.942 0.Engineering Intelligent Systems.252 294.322 295.004 0.188 79.965 Lost 7.5 x 10 3 4 Figure 5: Cost of the using slope adjustment method with load flow for adaptive learning rate without momentum (solid line) and with momentum 0. and 4 have 199.499 -2.324 9.5 Iterations 2 2.59 -11.8.809 29.318 30 V (p.238 27 -0.987 0.074 12.193 -8. In Fig.405 16. Units 4.9 applied at input and momentum 0.466 133.448 19.399 -10.792 -7.496 0.9 APPLIED TO INPUT AND GAIN PARAMETER WITH LOAD FLOW AND ADAPTIVE BIAS ADJUSTMENT METHOD WITH MOMENTUM 0.798 -7.385 810.197 -8.897 25.173 199.574 16.906 0.698 25 -1.922 0.334 29. As seen in Table III. the momentum factor of 0.086 226.795 -2.842 2.852 11.61 19.414 -5.687 15.5 1 Iterations 1.357 15. 220 MW U Slope Bias 1 135.691 49.989 0.999.007 -6.5.2.971 5 15.507 12.and 6 have 10.974 396.178 37.705 -1.u) 0.904 0. and the number of iterations is reduced to about 15 percent of that without momentum of slope adjustment method.981 Cost 589.981 Iterations 3200 2700 U0 100 100 Theta 0 50 Time 25s 21s n 1.951 17.381 15. 53-58. and 30 MW upper limits .741 0.908 0.357.399 0 -5.176 0.203 1390.00E-04 1 1. When the momentum is applied to the system.97 at gain parameter (dashed-dotted line). TABLE IV BUS INFORMATION OF 283.4 MW 380 MW Slope Bias Slope Bias 182.00E-04 1 Without Momentum 9 0 0 Cost With Momentum 8 5 0 8 0 0 7 5 0 0 0.385 4000 3000 3000 2500 100 100 100 100 0 50 0 50 30s 23s 25s 20s 1.958 0. Vol. which are close to 200. 79.4 MW LOAD WITH ADAPTIVE SLOPE ADJUSTMENT METHOD WITH MOMENTUM Bus # 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 V (p.362 17 28.409 -0.494 0.387 -78.76 25.

For both figures (Figs. 5 and 6), the iteration numbers are very low and the computation time is around 20-25 seconds. Especially with momentum applied, the system converged very smoothly.

CONCLUSIONS

In this paper, the slope and bias adjustment methods are applied to real power optimization with load flow. For all three load levels, the slope and bias adjustment methods gave very good responses, including around the lower and upper limits of the generators. The results of the cases for both methods are shown in Table III. With momentum applied, the system converged very smoothly and the number of iterations was significantly reduced.

ACKNOWLEDGEMENTS

This work was supported in part by the National Science Foundation under grants INT-9605028 and ECS-9705105.

REFERENCES

1. A. J. Wood and B. F. Wollenberg, Power Generation, Operation, and Control, Second Edition, John Wiley & Sons, Inc., New York, NY, 1996.
2. J. J. Hopfield, 'Neural networks and physical systems with emergent collective computational abilities', Proceedings of the National Academy of Sciences, USA, Vol. 79, pp. 2554-2558, April 1982.
3. J. J. Hopfield, 'Neurons with graded response have collective computational properties like those of two-state neurons', Proceedings of the National Academy of Sciences, USA, Vol. 81, pp. 3088-3092, May 1984.
4. J. A. Freeman and D. M. Skapura, Neural Networks: Algorithms, Applications, and Programming Techniques, Addison-Wesley Pub. Co., 1992.
5. D. W. Tank and J. J. Hopfield, 'Simple neural optimization networks: an A/D converter, signal decision circuit, and a linear programming circuit', IEEE Transactions on Circuits and Systems, Vol. CAS-33, No. 5, pp. 533-541, May 1986.
6. J. J. Hopfield and D. W. Tank, 'Neural computation of decisions in optimization problems', Biological Cybernetics, Vol. 52, pp. 141-152, 1985.
7. J. H. Park, Y. S. Kim, I. K. Eom, and K. Y. Lee, 'Economic load dispatch for piecewise quadratic cost function using Hopfield neural networks', IEEE Transactions on Power Systems, Vol. 8, No. 3, pp. 1030-1038, August 1993.
8. K. Y. Lee, A. Sode-Yome, and J. H. Park, 'Adaptive Hopfield Neural Networks for Economic Load Dispatch', IEEE Transactions on Power Systems, Vol. 13, No. 2, pp. 519-526, May 1998.
9. T. D. King, M. E. El-Hawary, and F. El-Hawary, 'Optimal environmental dispatching of electric power systems via an improved Hopfield neural network model', IEEE Transactions on Power Systems, Vol. 10, No. 3, pp. 1559-1565, August 1995.
10. C. C. Ku and K. Y. Lee, 'Diagonal recurrent neural networks for dynamic systems control', IEEE Transactions on Neural Networks, Vol. 6, No. 1, pp. 144-156, January 1995.
11. K. Y. Lee, Y. M. Park, and J. L. Ortiz, 'United Approach to Optimal Real and Reactive Power Dispatch', IEEE Transactions on Power Apparatus and Systems, Vol. PAS-104, pp. 1147-1153, 1985.
12. C. E. Lin and G. L. Viviani, 'Hierarchical economic dispatch for piecewise quadratic cost functions', IEEE Transactions on Power Apparatus and Systems, Vol. PAS-103, No. 6, June 1984.
13. M. A. El-Sharkawi and D. Niebur (eds.), Application of Artificial Neural Networks to Power Systems, IEEE Power Engineering Society, 96 TP 112-0, 1996.
