
BUS VOLTAGES CALCULATION BY USING ANN

Document By
SANTOSH BHARADWAJ REDDY
Email: help@matlabcodes.com
Engineeringpapers.blogspot.com
More Papers and Presentations available on above site

ABSTRACT

An attempt has been made in this work to apply the concepts of Artificial Neural Networks (ANNs) to load-flow analysis, which is basic to any type of study on a power system.

To apply an ANN to a load-flow study, the network has to be trained to obtain its weights and biases. For this purpose, a load-flow study on a typical IEEE 14-bus system has been conducted using a Newton-Raphson program developed for this work. Various load patterns are selected from this analysis, and the resulting voltages and angles are used to train the network. Using the code developed for this purpose, the network is trained to obtain the values of the weights and biases.

Using this information, the voltages and angles of the network for any different load pattern have been determined successfully. The correctness of this method is verified against the conventional load-flow method. In other words, by applying ANN concepts, load-flow studies can be conducted without using conventional methods. ANN application is very useful in the real-time operation of a power system.

1.0 Introduction:

The load flow solution is a solution of the network under steady-state conditions, subject to certain inequality constraints under which the system operates. The load flow solution gives the nodal voltages and phase angles, and hence the power injections at all the buses and the power flows through the interconnecting channels. A load flow solution is essential for designing a new power system and for planning the extension of an existing one for increased load demand.

1.1 Development of load flow equations:

The load flow equations are nonlinear, and they can be solved by an iterative method.

1.2 Newton-Raphson Method:

This method is used to solve the load flow equations given above. The use of the polar Newton-Raphson method simplifies the calculation and results in a smaller computation time. After linearization, these equations can be rewritten in matrix form as:

[ΔP]   [J1  J2] [Δδ]
[ΔQ] = [J3  J4] [ΔV]
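The linearized update above can be sketched in code. The following is a minimal Python sketch, not the load-flow program developed in this work: it performs one Newton-Raphson correction by solving the Jacobian system for the state increments. The numbers are hypothetical placeholders, since a real Jacobian is rebuilt from the bus admittance matrix at each iteration.

```python
import numpy as np

def newton_raphson_step(x, mismatch, jacobian):
    """One Newton-Raphson correction: solve J * dx = mismatch, then update x.

    x        : current state vector [delta; |V|] (phase angles and magnitudes)
    mismatch : power mismatch vector [dP; dQ] evaluated at x
    jacobian : the Jacobian [[J1, J2], [J3, J4]] evaluated at x
    """
    dx = np.linalg.solve(jacobian, mismatch)
    return x + dx

# Hypothetical numbers for illustration only; a real Jacobian is built
# from the bus admittance matrix at the current operating point.
J = np.array([[10.0, 0.5],
              [0.4, 8.0]])          # [[J1, J2], [J3, J4]]
mismatch = np.array([0.10, -0.05])  # [dP, dQ]
x = np.array([0.0, 1.0])            # [delta, |V|], flat start
x_new = newton_raphson_step(x, mismatch, J)
```

In the full method this step is repeated, re-evaluating the mismatches and the Jacobian each time, until the mismatches fall below a chosen tolerance.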
The authors are with the Department of Electrical and Electronic Engineering, Padmasri Dr. B. V. Raju Institute of Technology, Narsapur, Medak, Andhra Pradesh, India (e-mail: bhoopal_veni@yahoo.co.in).

It is well known that a small change in phase angle changes the flow of active power but does not much affect the flow of reactive power. Similarly, a small change in nodal voltage affects the flow of reactive power, whereas the active power practically does not change. Keeping these facts in mind, the set of linear load flow equations given in eqn (1.12) can be written as follows:

[ΔP]   [J1   0] [Δδ]
[ΔQ] = [0   J4] [ΔV]

Here J1 corresponds to the elements ∂P/∂δ, which are significant and are retained.

J2 corresponds to the elements ∂P/∂|V|, which are negligible and are therefore set to zero.

J3 corresponds to the elements ∂Q/∂δ, which are negligible and are therefore set to zero.

J4 corresponds to the elements ∂Q/∂|V|, which are significant and are retained.

NEURAL NETWORKS

2.1 BACKGROUND:

The development of artificial neural networks (ANNs) began approximately 50 years ago (McCulloch and Pitts, 1943), inspired by a desire to understand the human brain and emulate its functioning. Within the last decade the field has experienced a huge resurgence due to the development of more sophisticated algorithms and the emergence of powerful computational tools. Extensive research has been devoted to investigating the potential of artificial neural networks as computational tools that acquire, represent and compute mappings from one multivariate input space to another (Wasserman, 1989). The ability to identify a relationship from given patterns makes it possible for ANNs to solve large-scale complex problems such as pattern recognition, nonlinear modeling, classification, association and control.

Although the idea of ANNs was proposed by McCulloch and Pitts (1943) over fifty years ago, the development of ANN techniques experienced a renaissance only in the last decade, due to Hopfield's work (Hopfield, 1982) on iterative auto-associative neural networks. A tremendous growth of interest in this computational mechanism has occurred since Rumelhart et al. (1986) rediscovered a mathematically rigorous theoretical framework for neural networks, namely the back-propagation algorithm. Consequently, ANNs have found application in such diverse areas as neurophysiology, physics, biomedical engineering, electrical engineering, computer science, acoustics, cybernetics, robotics, image processing, finance and others.

2.2 Introduction to Artificial Neural Networks:

An ANN is a massively parallel distributed information processing system that has certain performance characteristics resembling the biological neural networks of the human brain (Haykin, 1994). ANNs have been developed as a generalization of mathematical models of human cognition or neural biology. Their development is based on the rules that:

1) Information processing occurs at many simple elements called nodes, also referred to as units, cells or neurons.

2) Signals are passed between nodes through connection links.

3) Each connection link has an associated weight that represents its connection strength.
4) Each node typically applies a nonlinear transformation, called an activation function, to its net input to determine its output signal.

A neural network is characterized by its architecture, which represents the pattern of connections between nodes, its method of determining the connection weights, and its activation function. A typical ANN consists of a number of nodes organized according to a particular arrangement. One way of classifying neural networks is by the number of layers: single, bi-layer and multi-layer (most back-propagation networks).

ANNs can also be categorized based on the direction of information flow and processing. In a feed-forward network, the nodes are generally arranged in layers, starting from a first input layer and ending at the final output layer. There can be several hidden layers, with each layer having one or more nodes. Information passes from the input side to the output side. The nodes in one layer are connected to those in the next layer, but not to those in the same layer. Thus, the output of a node in a layer depends only on the inputs it receives from previous layers and the corresponding weights. On the other hand, in a recurrent ANN, information flows through the nodes in both directions, from the input to the output side and vice versa. This is generally achieved by recycling previous network outputs as current inputs, thus allowing for feedback. Sometimes lateral connections are used, where nodes within a layer are also connected.

In most networks, the input layer receives the input variables for the problem at hand. These consist of all quantities that can influence the output. The input layer is thus transparent and is a means of providing information to the network. The output layer consists of the values predicted by the network and thus represents the model output. The number of hidden layers and the number of nodes in each hidden layer are usually determined by a trial-and-error procedure. The nodes within neighboring layers of the network are fully connected by links. Fig 2.1 shows the configuration of a feed-forward three-layer ANN. These kinds of ANNs can be used in a wide variety of problems, such as storing and recalling data, classifying patterns, performing general mappings from input patterns to output patterns, grouping similar patterns, or finding solutions to constrained optimization problems. In this figure, X is the system input vector, composed of a number of causal variables that influence system behavior, and Y is the system output vector, composed of a number of resulting variables that represent the system behavior.

Fig 2.1: The configuration of a feed-forward three-layer ANN (input layer, hidden layer and output layer; input vector X, output vector Y).

2.3 Mathematical Aspects:

A schematic diagram of a typical jth node is shown in fig 2.2. The inputs to such a node may come from system causal variables or from the outputs of other nodes, depending on the layer in which the node is located. These inputs form an input vector X = (x1, …, xi, …, xn). The sequence of weights leading to the node forms a weight vector Wj = (w1j, …, wij, …, wnj), where wij represents the connection weight from the ith node in the preceding layer to the jth node.

The output of node j, yj, is obtained by computing the value of the function f with respect to the inner product of the vectors X and Wj, minus bj, where bj is the threshold value, also called the bias, associated with this node. In an ANN, the bias bj of a node must be exceeded before the node can be activated. The following equation defines the operation:

yj = f(X · Wj − bj) --------------(2.1)

Fig 2.2: A schematic diagram of node j (inputs x1, …, xn, weights w1j, …, wnj, bias bj, output yj).

The function f is called an activation function. Its functional form determines the response of a node to the total input signal it receives. The most commonly used form of f(·) in eqn (2.1) is the sigmoid function, given as:

f(net) = 1 / (1 + e^(−net)) ------------(2.2)

The sigmoid function is a bounded, monotonic, non-decreasing function that provides a graded, nonlinear response. This function enables a network to map any nonlinear process. The popularity of the sigmoid function is partially attributed to the simplicity of its derivative, which is used during the training process. Some researchers also employ the bipolar sigmoid and the hyperbolic tangent as activation functions, both of which are transformations of the sigmoid function. A number of such nodes are organized to form an artificial neural network.

2.4 Network Training:

In order for an ANN to generate an output vector Y = (y1, y2, …, yp) that is as close as possible to the target vector T = (t1, t2, …, tp), a training process, also called learning, is employed to find optimal weight matrices W and bias vectors V that minimize a predetermined error function, which usually has the form:

E = Σ Σ (yi − ti)² ------------(2.3)

Here ti is a component of the desired output T, yi is the corresponding ANN output, p is the number of output nodes, and P is the number of training patterns; the sums run over all output nodes and all training patterns. Training is a process by which the connection weights of an ANN are adapted through a continuous process of stimulation by the environment in which the network is embedded. There are primarily two types of training: supervised and unsupervised. A supervised training algorithm requires an external teacher to guide the training process. This typically implies that a large number of examples, or patterns, of inputs and outputs are required for training. The inputs are the causal variables of a system and the outputs are the effect variables. This training procedure involves the iterative adjustment and optimization of the connection weights and threshold values for each node.

The primary goal of training is to minimize the error function by searching for a set of connection strengths and threshold values that cause the ANN to produce outputs that are equal or close to the targets. After training has been accomplished, it is hoped that the ANN is capable of generating reasonable results for new inputs. In contrast, an unsupervised training algorithm does not involve a teacher. During training, only an input data set is provided to the ANN, which automatically adapts its connection weights to cluster the input patterns into classes with similar properties.
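The node operation of eqn (2.1), the sigmoid of eqn (2.2) and the error measure of eqn (2.3) can be sketched as follows. This is an illustrative Python sketch with made-up numbers, not the training code used in this work.

```python
import math

def sigmoid(net):
    # Eqn (2.2): bounded, graded, nonlinear response in (0, 1)
    return 1.0 / (1.0 + math.exp(-net))

def node_output(x, w, b):
    # Eqn (2.1): y_j = f(X . W_j - b_j)
    net = sum(xi * wi for xi, wi in zip(x, w)) - b
    return sigmoid(net)

def error(outputs, targets):
    # Eqn (2.3): sum of squared differences over the output nodes
    return sum((o - t) ** 2 for o, t in zip(outputs, targets))

# Hypothetical numbers for illustration only
y = node_output([0.5, -1.0], [0.8, 0.2], b=0.1)
e = error([y], [1.0])
```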
2.5 Back-Propagation:

Back-propagation is perhaps the most popular algorithm for training ANNs. It is essentially a gradient descent technique that minimizes the network error function of eqn (2.3). Each input pattern of the training data set is passed through the network from the input layer to the output layer. The network output is compared to the desired target output, and an error is computed based on eqn (2.3). This error is propagated backward through the network to each node, and correspondingly the connection weights are adjusted based on the equation:

ΔWij(n) = −ε ∂E/∂Wij + α ΔWij(n−1) -------(2.4)

where ΔWij(n) and ΔWij(n−1) are the weight increments between nodes i and j during the nth and (n−1)th epochs. A similar equation is written for the correction of the bias values. In eqn (2.4), ε and α are called the learning rate and the momentum, respectively. The momentum factor can speed up training in very flat regions of the error surface and helps prevent oscillations in the weights. The learning rate is used to increase the chance of the training process avoiding being trapped in local minima instead of reaching the global minimum. The back-propagation algorithm involves two steps. The first step is a forward pass, in which the effect of the input is passed forward through the network to reach the output layer. After the error is computed, a second step starts backward through the network. The errors at the output layer are propagated back towards the input layer, with the weights being modified according to eqn (2.4).

Back-propagation is a first-order method based on steepest gradient descent, with the direction vector set equal to the negative of the gradient vector. Consequently, the solution often follows a zigzag path while trying to reach a minimum error position, which may slow down the training process. It is also possible for the training process to be trapped in a local minimum despite the use of a learning rate.

3. Artificial Neural Networks

3.1 Background:

Artificial Neural Networks represent a promising new generation of information processing networks. Advances have been made in applying such systems to problems that have been found to be difficult for traditional computation. Artificial Neural Networks have been widely used in electric power engineering. For energy management, Artificial Neural Networks have been applied to the load-flow and optimal power flow problems.

However, most existing Artificial Neural Networks for electric power applications have been designed using real numbers. In power engineering, applications such as load-flow analysis, phasor evaluation, signal processing and image processing mainly involve complex numbers. Although conventional Artificial Neural Networks are able to deal with complex numbers by treating the real and imaginary parts independently, it will be shown in this paper that their behavior is not satisfactory.

A new approach is introduced in this paper: a computational Artificial Neural Network particularly designed for the manipulation of complex numbers in electric power systems. It will be shown that this new 'complex' Artificial Neural Network has superior performance on operations and computations with complex numbers compared with its conventional 'real' counterpart. The complex Artificial Neural Network is implemented to estimate bus voltages in a load-flow problem.

3.2 Conventional ANN for real numbers:

Fig 3.1 shows a typical Artificial Neural Network for real numbers, with n input nodes, m hidden nodes and one output node: 3 layers in total. Of course, this network is freely extensible to any number of layers. All values of x and w in the network are real numbers, and all outputs o are real numbers within the interval [0, 1]. The pre-subscript of each w indicates the layer to which that w belongs.
A set of desirable outputs dk, k = 1, 2, …, l, corresponding to a set of inputs xj, j = 1, 2, …, n, is used as the training set. The standard sigmoid function is employed and the following equations hold:

ok = 1 / (1 + exp(−Σ_{i=1..m} wki hi)),   k = 1, 2, …, l

hi = 1 / (1 + exp(−Σ_{j=1..n} wij xj)),   i = 1, 2, …, m
---------(3.1)

The following energy function E is minimized:

E = (1/2) Σ_{k=1..l} (ok − dk)² ----------(3.2)
3.3 New Artificial Neural Networks for Complex Numbers:

Fig 3.2a shows the basic elements of the newly designed Artificial Neural Network for complex numbers. For the operation of a basic function, say z = wx, where x is the input complex number, w is the weighting and z is the output complex number:

zr + jzi = (wr + jwi)(xr + jxi) = (wr xr − wi xi) + j(wi xr + wr xi) ---------(3.3)

where j = √−1.

For the addition of two complex numbers x1 and x2, the operation is clearly shown. These basic elements form the foundation of the newly designed ANN for complex numbers developed in this paper.
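The basic multiplication element of eqn (3.3) can be sketched as a pair of real-valued operations. This is a Python illustration only; the check against Python's native complex type is included to make the identity concrete.

```python
def complex_multiply(wr, wi, xr, xi):
    """Eqn (3.3): (wr + j*wi)(xr + j*xi) computed with real arithmetic.

    Returns (zr, zi), the real and imaginary parts of z = w * x.
    """
    zr = wr * xr - wi * xi
    zi = wi * xr + wr * xi
    return zr, zi

# Check against Python's native complex arithmetic (illustrative numbers)
zr, zi = complex_multiply(1.0, 2.0, 3.0, -1.0)
z = (1.0 + 2.0j) * (3.0 - 1.0j)
```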
4.1 LOAD FLOW SOLUTION:

4.1.1 GENERATION OF TRAINING PATTERNS:

The power at the load buses is varied over a range of 0.05 to 0.10 while keeping the power at the other buses constant. The training patterns are generated by running the load flow program for each combination of loads. The inputs to the ANN are the real and reactive powers of the load buses.

Each training bus is taken to have 2 inputs, which are the voltage magnitude and phase angle, and similarly for the outputs. The training patterns for this case are taken as 57 pairs, the desired error is 0.1, and a learning rate of 0.02 is selected.

4.1.2 TRAINING THE NEURAL NETWORK:

There is no fixed method of choosing the number of hidden nodes. The number of hidden nodes is chosen randomly, and various values are tried until satisfactory results are obtained. Two types of training have been employed; the first is done using the real ANN.

In this paper (IEEE 14-bus system) the trained buses are 4, 9, 11, 12 and 13, and the tested buses are 5, 10 and 14. The number of inputs is 10, the number of hidden nodes is taken as 2, and the number of outputs is 10.

The algorithm for training the neural network is as follows:

1. Apply the input vector to the input units.
2. Calculate the net input values to the hidden layer units: net_j^h = Σ_i w_ji^h x_i + θ_j^h for i = 1, 2, …, n, where w_ji^h is the connection weight and θ_j^h is the bias value.
3. Calculate the outputs from the hidden layer: i_j = f_j^h(net_j^h).
4. Move to the output layer and calculate the net input values to each unit: net_k^o = Σ_j w_kj^o i_j + θ_k^o.
5. Calculate the outputs: o_k = f_k^o(net_k^o).
6. Calculate the error terms for the output units: δ_k^o = (y_k − o_k) f_k^o′(net_k^o).
7. Calculate the error terms for the hidden units: δ_j^h = f_j^h′(net_j^h) Σ_k δ_k^o w_kj^o. Notice that the error terms for the hidden units are calculated before the connection weights to the output layer units have been updated.
8. Update the weights on the output layer: w_kj^o(t+1) = w_kj^o(t) + η δ_k^o i_j.
9. Update the weights on the hidden layer: w_ji^h(t+1) = w_ji^h(t) + η δ_j^h x_i.
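The training steps listed above can be sketched as a single function. This is a minimal Python sketch, assuming sigmoid activations (so f′(net) = f(net)(1 − f(net))) and omitting the bias updates for brevity; it is not the code used in this work, and the tiny network and numbers below are hypothetical.

```python
import math

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

def train_step(x, y, W_h, th_h, W_o, th_o, eta):
    """One pass of the listed algorithm: forward pass, error terms, updates.

    W_h[j][i], th_h[j] : hidden-layer weights and biases
    W_o[k][j], th_o[k] : output-layer weights and biases
    x, y : input vector and desired output vector; eta : learning rate
    """
    # Steps 1-3: net inputs and outputs of the hidden layer
    net_h = [sum(W_h[j][i] * x[i] for i in range(len(x))) + th_h[j]
             for j in range(len(W_h))]
    i_h = [sigmoid(n) for n in net_h]
    # Steps 4-5: net inputs and outputs of the output layer
    net_o = [sum(W_o[k][j] * i_h[j] for j in range(len(i_h))) + th_o[k]
             for k in range(len(W_o))]
    o = [sigmoid(n) for n in net_o]
    # Step 6: output error terms; for the sigmoid, f'(net) = o * (1 - o)
    d_o = [(y[k] - o[k]) * o[k] * (1.0 - o[k]) for k in range(len(o))]
    # Step 7: hidden error terms, computed before the output weights change
    d_h = [i_h[j] * (1.0 - i_h[j]) *
           sum(d_o[k] * W_o[k][j] for k in range(len(d_o)))
           for j in range(len(i_h))]
    # Steps 8-9: update output-layer weights, then hidden-layer weights
    for k in range(len(W_o)):
        for j in range(len(i_h)):
            W_o[k][j] += eta * d_o[k] * i_h[j]
    for j in range(len(W_h)):
        for i in range(len(x)):
            W_h[j][i] += eta * d_h[j] * x[i]
    return o

# Hypothetical tiny network: 2 inputs, 2 hidden nodes, 1 output
W_h, th_h = [[0.5, -0.3], [0.1, 0.4]], [0.0, 0.0]
W_o, th_o = [[0.2, -0.1]], [0.0]
x, y = [1.0, 0.5], [0.9]
first = train_step(x, y, W_h, th_h, W_o, th_o, eta=0.5)[0]
for _ in range(500):
    last = train_step(x, y, W_h, th_h, W_o, th_o, eta=0.5)[0]
```

Repeating the step on the same pattern drives the output toward the target, which is the behavior the listed algorithm relies on.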
Techniques by James. A. Freeman
Comparison of computed voltages for ANN and NR:

S.No   Bus   NR voltage          ANN voltage         Error
1      V5    1.0060 - j0.1557    1.0000 - j0.1881     3.8400e-004
2      V10   1.0114 - j0.2747    1.0000 - j0.2883     2.5000e-004
3      V14   0.9924 - j0.2867    1.0000 - j0.2814    -1.2900e-004

For the evaluation of the computation time, the MATLAB commands tic and toc are used.
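For readers reproducing the timing comparison outside MATLAB, the tic/toc measurement can be sketched in Python as follows; the workload line is a stand-in, not the actual load-flow computation.

```python
import time

start = time.perf_counter()                   # tic
workload = sum(k * k for k in range(10000))   # stand-in for the run under test
elapsed = time.perf_counter() - start         # toc: elapsed seconds
```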
CONCLUSION

In this work a load-flow study is conducted on a typical IEEE 14-bus system and different load patterns are obtained. This load-flow study is made using a load flow program developed in this work using the Newton-Raphson method. Using these patterns, the voltage at a particular busbar is estimated using two trained ANNs. From this it is concluded that, compared with the conventional real ANN, the complex ANN is better in the following aspects. It seems that there is an improved ability to evaluate cases not falling within the training zone. Finally, the sigmoid function employed in our ANN automatically handles the whole complex space with an absolute magnitude smaller than or equal to one.

The time taken for conducting a load-flow study using the ANN is less than that required by the Newton-Raphson method, the accuracy of the result being almost the same. Hence it can be concluded that a load-flow study using an ANN is particularly suitable for real-time applications.

REFERENCES

1. Neural Networks: Algorithms, Applications and Programming Techniques by James A. Freeman and David M. Skapura.
2. Nguyen, T.T.: Neural Networks Load-flow, IEE Proc., Gener. Transm. Distrib., 1995.
3. W.L. Chan, A.T.P. So and L.L. Lai: Initial Application of Complex Artificial Neural Networks to Load-flow Analysis, IEE Proc., Gener. Transm. Distrib., Nov 2000.
4. Power System Control and Stability by P.M. Anderson and A.A. Fouad.
5. Computer Methods in Power System Analysis by Stagg and El-Abiad.
6. Modern Power System Analysis by I.J. Nagrath and D.P. Kothari.
7. Electrical Power Systems by C.L. Wadhwa.
8. MATLAB Version 5.1, Copyright 1984-97, The MathWorks.
9. Neural Networks Toolbox Version 2.0, 02-Jan-97.
