Published by: ijcsis on Jun 12, 2010
Copyright: Attribution Non-commercial
PSS Design Based on RNN and the MFA/FEP Control Strategy
Rebiha Metidji and Boubekeur Mendil
Electronic Engineering Department
 
University of A. Mira, Targua Ouzemour, Bejaia, 06000, Algeria. Zeblah80@yahoo.fr
Abstract – The conventional design of PSS (power system stabilizers) was carried out using a linearized model around the nominal operating point of the plant, which is naturally nonlinear. This limits the PSS performance and robustness. In this paper, we propose a new design using RNN (recurrent neural networks) and the model-free approach (MFA) based on the FEP (feed-forward error propagation) training algorithm [15]. The results show the effectiveness of the proposed approach: the system response is less oscillatory, with a shorter transient time. The study was extended to faulty power plants.
Keywords – Power Network; Synchronous Generator; Neural Network; Power System Stabilizer; MFA/FEP control.
I. INTRODUCTION

Power system stabilizers are used to generate supplementary control signals for the excitation system, in order to damp the low-frequency power system oscillations. Conventional power system stabilizers are widely used in existing power systems and have contributed to enhancing power system dynamic stability [1]. The parameters of a classical PSS (CPSS) are determined from a linearized model of the power system around its nominal operating point. Since power systems are highly nonlinear, with time-varying configurations and parameters, the CPSS design cannot guarantee good performance in many practical situations.

To improve the performance of CPSSs, numerous techniques have been proposed for their design, such as intelligent optimization methods [2], fuzzy logic [3,4,5], neural networks [7,8,9], and many other nonlinear control techniques [11,12]. Fuzzy reasoning relies on qualitative data and empirical information, which makes the fuzzy PSS (FPSS) less optimized than the neural PSS (NPSS). This is our motivation.

The main problem in control systems is to design a controller that provides the appropriate control signal needed to meet the specifications set for the control action. Often, these specifications are expressed in terms of speed, accuracy, and stability. In the case of neural control, the problem is to find a good way to adjust the weights of the network. The main difficulty is how to use the system output error to update the controller parameters, since the physical plant is interposed between the controller output and the measured output.

Several learning strategies have been proposed to overcome this problem, such as supervised learning, generalized inverse modeling, direct modeling based on specialized learning, and so on [14]. In this work, we used our MFA/FEP approach because of its simplicity and efficiency [15]. The aim is to ensure good damping of the power network transport oscillations. This can be done by providing an adequate control signal that acts on the reference input of the AVR (automatic voltage regulator). The stabilization signal is developed from the rotor speed or electric power variations.

Section II presents the power plant model. The design of the neural PSS is described in Section III. Some simulation results are provided in Section IV.

II. THE POWER PLANT MODEL

The standard model of a power plant consists of a synchronous generator, a turbine, a governor, an excitation system, and a transmission line connected to an infinite network (Fig. 1). The model is built in the MATLAB/SIMULINK environment using the Power System Blockset. In Fig. 1, P_REF is the mechanical power reference, P_SR is the feedback through the governor, T_M is the turbine output torque, V_inf is the infinite bus voltage, V_TREF is the terminal voltage reference, V_T is the terminal voltage, V_A is the voltage regulator output, E_fd is the field voltage, V_E is the excitation system stabilizing signal, Δw is the speed deviation, V_PSS is the PSS output signal, P is the active power, and Q is the reactive power at the generator terminal.

The switch S is used to carry out tests on the power system with NPSS, with CPSS, and without PSS (switch S in position 1, 2, and 3, respectively). The synchronous generator is described by a seventh-order set of d-q axis equations, with the machine current, speed, and rotor angle as the state variables. The turbine drives the generator, and the governor controls the speed and the real power. The excitation system for the generator is shown in Fig. 2 [4].

The CPSS consists of two phase-lead compensation blocks, a signal washout block, and a gain block. Its input signal is the rotor speed deviation Δω [16]. The block diagram of the CPSS is shown in Fig. 3.

Figure 1. The control system configuration.
Figure 2. Block diagram of the excitation system.
Figure 3. Block diagram of CPSS.
Figure 4. Block diagram of DTRNN.
Figure 5. Illustration of the state feedback.

(IJCSIS) International Journal of Computer Science and Information Security, Vol. 8, No. 2, May 2010, p. 217. http://sites.google.com/site/ijcsis/ ISSN 1947-5500
III. THE NPSS DESIGN

In this work, we used a neural architecture employed primarily in the field of modeling and dynamic system control, namely the DTRNN (Discrete-Time Recurrent Neural Network). It is analogous to the Hopfield network (Fig. 4), where:

W: global synaptic weight vector.
S: weighted sum, called the potential.
U: output, or neuron response.
f: activation function.
υ: external input.

This is a neural network with state feedback. It is a single-layer network in which the output of each neuron is fed back to all the other neurons. Each neuron may also receive an external input [14], as shown in Fig. 5.

A. DTRNN Network Equations

In their original form, the equations governing this network type are:

U(k+1) = f( W U(k) + ν(k) )    (1)

If we consider that the inputs ν are weighted as in Fig. 4, (1) can be rewritten as:

U(k+1) = f( W Ũ(k) )    (2)

with

Ũ_j(k) = 1 for j = 0;  Ũ_j(k) = U_j(k) for j = 1, …, n;  Ũ_j(k) = ν_{j-n}(k) for j = n+1, …, n+m    (3)

where:
U_i(k): the i-th network state variable (i = 1, …, n).
ν_j(k): the j-th external input (j = 1, …, m).
n: the number of neurons in the network.
m: the dimension of the input vector, V.
Ũ_0(k) = 1: the inner threshold.

The outputs are chosen among the state variables. Learning is done using dynamic training algorithms that adjust the synaptic coefficients based on presented examples (corresponding desired inputs/outputs), taking into account the feedback information flow [17].

In this work, we used the MFA training structure of Fig. 6. The idea is to bring the output error (i.e., the gap between the system output and its desired value) back to the controller input and to propagate it directly from input to output, so as to obtain the hidden-layer and output-layer errors. This is realized by the feed-forward error propagation (FEP) algorithm, crafted specifically for this purpose. It allows the direct and fast calculation of the errors of consecutive layers, required for the adjustment of the controller parameters.
B. Training the DTRNN Controller Based on the FEP Algorithm

The MFA/FEP approach is an effective alternative for training controllers. Direct injection of the input error can provide the error vector components required for the network weight update. The FEP formulation is given in the following paragraphs.

Let us consider a DTRNN given by (2) and (3). The global input vector, X(k), of the network consists of the threshold input, d = 1, the external inputs, x_i(k), and the network state feedback variables, U_i(k):

X(k) = [1, x_1(k), …, x_m(k), u_1(k-1), …, u_n(k-1)]^T = [X_0(k), X_1(k), …, X_{n+m}(k)]^T

and

U(k) = [U_1(k), U_2(k), …, U_n(k)]^T

Then, we can write:

U_i(k+1) = f( S_i(k) ) = f( W_i X(k) )    (4)

The outputs are chosen among the state variables. For simplicity of notation, let us consider the first r state variables as the outputs:

Y(k) = [y_1(k), y_2(k), …, y_r(k)]^T = [U_1(k), U_2(k), …, U_r(k)]^T    (5)

Figure 6. MFA/FEP control structure.
 
The standard quadratic error over the p-th training sequence is given by:

J_p(W) = (1/2) Σ_{k=1}^{N_p} Σ_{j=1}^{r} [ y_j^d(k) - y_j(k) ]^2    (6)

where:
N_p: the length of the p-th training sequence,
r: the number of network outputs,
y_j(k): the j-th network output,
y_j^d(k): the desired value corresponding to y_j(k).
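The sequence cost (6) translates directly into code. A short sketch (the sample values below are arbitrary, chosen only to exercise the formula):

```python
def sequence_cost(Y_seq, Yd_seq):
    """Eq. (6): J_p(W) = 1/2 * sum over k and j of (y_j^d(k) - y_j(k))^2,
    accumulated over the N_p samples of the p-th training sequence."""
    return 0.5 * sum((yd - y) ** 2
                     for Yk, Ydk in zip(Y_seq, Yd_seq)   # loop over time steps k
                     for y, yd in zip(Yk, Ydk))          # loop over outputs j

# r = 1 output, N_p = 2 samples:
J = sequence_cost([[0.2], [0.4]], [[0.0], [1.0]])  # 0.5 * (0.04 + 0.36) = 0.2
```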
The weights are updated iteratively according to:

W_ij(k+1) = W_ij(k) - μ ∂J_p(W)/∂W_ij(k)    (7)

where μ is the learning rate. The gradient in (7) is calculated using the chain rule:

∂J_p(W)/∂W_ij(k) = [∂J_p(W)/∂U_i(k)] [∂U_i(k)/∂W_ij(k)]    (8)

where

∂U_i(k)/∂W_ij(k) = [∂f(s_i(k))/∂s_i(k)] [∂s_i(k)/∂W_ij(k)] = f'(s_i(k)) x_j(k)    (9)
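A single weight update following (7)-(9) can be sketched as below. The tanh activation (so f'(s) = 1 - tanh(s)^2) and the numeric values are illustrative assumptions:

```python
import math

def update_weight(W, i, j, eps_i, s_i, x_j, mu):
    """Eqs. (7)-(9): W_ij(k+1) = W_ij(k) - mu * eps_i * f'(s_i) * x_j,
    where eps_i = dJp/dU_i is the sensitivity of the cost to node i."""
    dfds = 1.0 - math.tanh(s_i) ** 2     # f'(s_i) for f = tanh (assumed)
    W[i][j] -= mu * eps_i * dfds * x_j
    return W

W = [[0.5]]
update_weight(W, 0, 0, eps_i=0.2, s_i=0.0, x_j=1.0, mu=0.1)
# W[0][0] becomes 0.5 - 0.1 * 0.2 * 1.0 * 1.0 = 0.48
```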
 
The term ∂J_p(W)/∂U_i(k) in (8) is the sensitivity of J_p(W) to the node output, U_i(k). Let the error, calculated between the network output and the desired output, be:

E(k) = Y(k) - Y^d(k) = ΔY(k)    (10)

As in the back-propagation algorithm, the term ∂J_p(W)/∂U_i(k) can be interpreted as the equivalent error, ε_i(k), with i = 1, 2, …, n. Hence, we write:

∂J_p(W)/∂U_i(k) = ε_i(k) = [∂y_i(k)/∂U_i(k)] Δy_i(k)    (11)
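Since the outputs are chosen as in (5), y_i = U_i for i <= r, so (10)-(11) reduce to ε_i = Δy_i on the r output nodes and 0 on the remaining nodes. A minimal sketch under that assumption (the sizes and values are illustrative):

```python
def equivalent_errors(Y, Yd, n):
    """Eqs. (10)-(11): eps_i = y_i(k) - y_i^d(k) for the r output nodes
    (where dy_i/dU_i = 1), and 0 for the remaining n - r state nodes."""
    r = len(Y)
    return [Y[i] - Yd[i] for i in range(r)] + [0.0] * (n - r)

eps = equivalent_errors([0.4], [1.0], n=3)  # one output node, three neurons
```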
 
The error vector, E_0(k), can be calculated at the input, since the network receives the output vector, Y(k), as input through the state feedback. The
