
2014 IEEE Conference on Technologies for Sustainability (SusTech)

Short-term Load Forecasting Algorithm and Optimization in Smart Grid Operations and Planning
Siriya Skolthanarat Udom Lewlomphaisarl Kanokvate Tungpimolrut
Advanced Automation and Electronics Research Unit
National Electronics and Computer Technology Center
Pathumthani, Thailand
siriya.skolthanarat@nectec.or.th udom.lewlomphaisarl@nectec.or.th kanokvate.tungpimolrut@nectec.or.th

Abstract--- Electrical load forecasting is one of the important parts of a smart grid system. Reliable prediction of the load demand contributes to efficient and economical operations and planning. The artificial neural network is used extensively in load demand forecasting: the nonlinear nature of the electrical load demand conforms to the ability of the artificial neural network to calculate the nonlinear relationship between inputs and outputs. Among many models of neural networks, radial basis neural networks yield superior performance in small error and fast simulation time. However, it is challenging to design radial basis neural networks. An excessive number of hidden neurons leads to a lack of generalization, the so-called overfitting problem. This paper proposes an approach to design radial basis neural networks that use as few hidden neurons as possible. The error criterion is optimized by a modified genetic algorithm as the number of hidden neurons is incrementally increased. Simulation results of short-term load forecasting are calculated in MATLAB and compared to the orthogonal least square error method. The proposed approach gives better results with the same number of hidden neurons.

Index Terms—Artificial neural network, hidden neuron, genetic algorithm, radial basis function

I. INTRODUCTION

The smart grid is an upgraded electrical network that operates in a more efficient, reliable, and secure manner with the contribution of communication, information, control, and management technologies. Electrical load forecasting provides the intelligence for smart grid operations and planning. The load prediction helps dispatchers make decisions on generating electric power, load switching, and fuel allocation [1]-[3]. Load forecasting is basically categorized into three types based on the period of prediction. LTLF (Long-term Load Forecast) involves a forecasting period of one year to several years. MTLF (Medium-term Load Forecast) relates to a forecasting period of a few weeks to several months. STLF (Short-term Load Forecast), which is of interest in this paper, refers to a forecasting period of hours to one week.

Traditional approaches to load forecasting use statistical models, which involve mathematical equations to predict future values of loads. The statistical models include exponential smoothing [4], regression methods [5], Kalman filtering [6], [7], and time-series methods such as ARMA (Auto Regressive Moving Average) [8], ARMAX (Auto Regressive Moving Average with exogenous inputs) [9], and Box-Jenkins [10]. The statistical models have the disadvantage that they cannot adjust rapidly to abrupt changes of loads.

Another method uses an ANN (Artificial Neural Network). The ANN was first introduced in 1943 by McCulloch and Pitts as a simple model of the human brain. The ANN architecture is comprised of brain cells, or so-called neurons, connected together to form a network as shown in Fig. 1. The ANN computes the nonlinear relationship between inputs and outputs by adjusting the weights and biases of the connections. The ANN was developed more than five decades ago, and has been used in many applications such as pattern recognition [11], prediction [12], controls [13], and signal processing [14].

The artificial neural network has also been used widely in the electric power system, since its ability to compute a nonlinear relationship between inputs and outputs conforms to the nonlinear characteristics of the electric power system. It helps strengthen the power system infrastructure by monitoring and assessing stability and security [15]-[21]. It is used to analyze and enhance power flow control [22]-[24], and to estimate system states [25]. It is used to screen, identify, and analyze possible faults during abnormal situations [26]-[29]. Short-term load forecasting with ANNs has mostly used feedforward models [30]-[32].

MATLAB is a computing software tool that is used widely in many applications. The user-friendly neural network toolbox in MATLAB provides functions, applications, and user guides to design and model the ANN. There are multiple models of the ANN in MATLAB. Each model has different characteristics and yields different results. A comparative study is essential to select the model properly. Previous works focused on the comparison between two models: [33] compares the feedforward model with the radial basis neural network, and [34] compares the feedforward model with the recurrent neural network. This is not sufficient to recognize

978-1-4799-5238-0/14/$31.00 ©2014 IEEE



Fig. 1 Artificial neural network architecture (input layer, hidden layers, and output layer; each neuron computes f(W, P, b))

the effects of the model architecture. This paper provides comparative details of short-term load forecasting with the artificial neural networks in MATLAB. The information for model selection is given by comparing the results of the load forecasting based on the error and the simulation time.

Among the models, radial basis neural networks show better results. It is challenging to design radial basis neural networks. An excessive number of hidden neurons leads to over-parameterization [35], since each hidden neuron brings a certain number of free parameters into the model. The free parameters increase the probability of the overfitting problem. The significant factor that causes the generalization, or overfitting, problem is the model structure. This problem occurs when the networks fit the training data well and give very small errors, but produce large errors when presented with a new set of data. This is because the networks memorize the pattern of the training data and cannot adjust to the new data.

It is important that the neural network achieves generalization. This paper presents an approach and algorithm to obtain as few hidden neurons as possible. An incremental growing method combined with a modified genetic algorithm optimization is used to design the weights and biases of the networks. The proposed approach is simulated and compared with the OLS (Orthogonal Least Square) error method, which is used by a built-in function in MATLAB. [36] also uses the orthogonal least square error combined with k-means clustering to design the neural networks and reduce the number of hidden neurons. The proposed approach gives better results, i.e., smaller errors, with the same number of hidden neurons.

II. COMPARATIVE STUDY

Among the models in MATLAB, six models are selected to perform the comparison based on their corresponding results. The neural network models in MATLAB are used to predict the load demand of two case studies. The load demand is collected in such a manner that there are 24-hour data available for each category of weekdays, weekends, and peak day in a month. To verify the performance of the models, the data is split into two sets, a training set and a test set. Each set has 24-hour data points. The training data is used to train the neural networks: the networks adjust the weights and biases according to the given inputs and the desired outputs. Then, the networks are simulated with the test data without the desired outputs.

The neural network architecture is designed such that the forecasting method is based on iterative forecasting, which predicts the load demand in a series manner [37]. This method is different from the multi-model forecasting method that forecasts the load in a parallel manner using 24 models, one for each hour of the day. In contrast, the iterative forecasting method predicts one hourly load at a time. This load is appended to the series, which is then used to predict the next hourly load. Therefore, the forecast of an hourly load depends on the previous hourly loads.

The model structure illustrated in Fig. 2 consists of m input nodes corresponding to m input data points and one output node that predicts the load demand at hour i. This forecasting method has the advantage that it provides better results than a multi-output network, since the input data are updated during the forecasting.

Fig. 2 Neural network structure

The network performance is measured by the MSE (Mean Squared Error) [38], given in (1), where $\hat{x}$ and $x$ are the actual outputs and desired outputs, respectively, and $n$ is the number of data points.

$$\mathrm{mse} = \frac{1}{n}\sum_{i=1}^{n}\left(\hat{x}_i - x_i\right)^{2} \qquad (1)$$

Fig. 3 depicts the simulation results of case study 1, which is an average-workday load demand of medium-sized businesses in winter. The load demand is high from 8 A.M. to 11:15 A.M., drops until 12:45 P.M., rises again until 5:15 P.M., and slowly decreases until midnight. The prediction results are summarized in TABLE I by averaging the performances of 5 simulation runs. The feedforward neural network gives poor results. The radial basis, recurrent, FFTD (Focused Time-Delay), and recurrent radial basis neural networks yield small errors. However, the recurrent neural network takes a longer simulation time than the radial basis neural network. The radial basis neural network is a one-time training network, therefore resulting in a fast response.

Case study 2 is the peak-workday load demand of nonprofit businesses in summer. Although the load shape is similar to that of case study 1, the magnitude is lower since there are fewer nonprofit businesses. Fig. 4 illustrates the load forecasting of case study 2. The simulation results are presented in TABLE II by averaging the performances of 5 simulation runs. The feedforward network also shows poor


results for both error and time. TABLE III summarizes the performance ranking, including the simulation times, for case studies 1 and 2. The radial basis neural network and the recurrent radial basis neural network give better results than the other networks.

Fig. 3 Simulation results of case study 1

Fig. 4 Simulation results of case study 2

TABLE I SIMULATION RESULTS OF CASE STUDY 1

  Models                  MSE        Time (s)
  Feedforward             0.0046     1.5812
  Radial basis            7.4731e-5  0.078
  Linear                  0.0020     0.4128
  Focused time-delay      1.0235e-4  0.5092
  Recurrent               9.559e-5   1.2544
  Recurrent radial basis  2.5933e-4  0.094

TABLE II SIMULATION RESULTS OF CASE STUDY 2

  Models                  MSE        Time (s)
  Feedforward             0.0056     1.5904
  Radial basis            7.2003e-4  0.1186
  Linear                  0.0013     0.3910
  Focused time-delay      9.8249e-4  0.5124
  Recurrent               7.2450e-4  0.9936
  Recurrent radial basis  6.2539e-4  0.093

TABLE III PERFORMANCE RANKING

  Ranking  Case study 1 (MSE / Time)      Case study 2 (MSE / Time)
  1        Radial basis / Radial basis    RRBN / RRBN
  2        Recurrent / RRBN               Radial basis / Radial basis
  3        FFTD / Linear                  Recurrent / Linear

There are two types of training styles. In off-line training, the networks adjust the weights and biases after all inputs are presented. In on-line training, the weights and biases are adjusted when each input is presented. Based on the performance ranking, the FFTD and recurrent networks are simulated in the on-line training mode with the test data of case study 2 for 200 iterations, or passes, as shown in Fig. 5. With sufficient iterations, the networks perform excellently compared to the off-line training. Although the error is very small, the simulation time grows as the number of iterations increases. Fig. 6 depicts the MSE and the time used by the recurrent network in the on-line training mode. The MSE decreases as the number of passes increases; nonetheless, the simulation time also increases. The radial basis neural network, which is a nonrepetitive training network, can be trained on-line by repeatedly training the network with a group of training examples and updating the weights and biases until the error is below the criterion [39].

Fig. 5 On-line training of the FFTD and recurrent networks

Fig. 6 The MSE and simulation time in on-line training mode

III. PROPOSED APPROACH

As seen in the previous section, the radial basis and recurrent radial basis neural networks show better results for both simulation time and error. The proposed approach focuses on the design of the hidden layers of both networks. The genetic algorithm is modified to minimize the number of hidden neurons.

A. Genetic Algorithm

The genetic algorithm is an evolutionary optimization that uses strings of genes, or chromosomes, to search for an optimized solution. Each chromosome represents a member of the search population, where each gene is independent of the others. The population evolves toward an optimum of the cost function, or fitness function. The goal is to find the chromosome that yields the maximum or minimum fitness value. In case the contents of the chromosomes are numbers, those numbers are transformed to binary


numbers. As a result, each gene carries 0 or 1, as shown in Fig. 7.

Fig. 7 Chromosome in the search population of the genetic algorithm

The genetic algorithm procedure is described as follows [40]:
- Initialize the population randomly. The size of the population is set arbitrarily. A large population guarantees the convergence of the optimization; nonetheless, it comes at a price: the simulation times are prolonged substantially.
- At each generation, find the fitness value of all members.
- The members that give the lower fitness values are passed to the next generation.
- The rest of the population is selected to form parents.
- The children of the next generation are produced by crossover and mutation of the parents. Crossover is the process of combining two or more parents to form children. Mutation is the process of changing an individual parent.
- Then, replace the current population with the children. One generation is complete.

The process is iterated until it reaches the stopping criterion, which is the predetermined maximum number of generations. The iteration ends when the generation count exceeds the specified value.

B. Proposed Algorithm

The genetic algorithm optimization that uses strings of genes, or chromosomes, is analogous to the format of the input weights of radial basis function neural networks. Instead of the number 0 or 1 of the binary system, a chromosome contains elements that are a subset of the input data when there is one hidden neuron. If more hidden neurons are necessary, the size of the input weights is increased to k chromosomes, or vectors, for k hidden neurons. Each vector has n genes according to the row number of the input matrix. The proposed approach uses incremental growing of the hidden neurons combined with the modified genetic algorithm to design the radial basis neural networks. The number of hidden neurons is kept as small as possible while still achieving the desired error. The proposed algorithm is depicted in Fig. 8 and described as follows:

a) In the beginning, the number of hidden neurons is set to 1.
b) The next step is to create the initial population. Typically, the initial population of the genetic algorithm is created by randomization of real numbers. In this case, the variables of interest are the input weights of the radial basis neural network, which are a subset of the input data. Starting with one hidden neuron, each member of the search population is a vector of elements of the input data obtained by randomization. Each vector has length n for an n × m input matrix.
c) Find the fitness value of each member. The fitness value is the MSE of the outputs and the desired outputs. The outputs of the radial basis neural network are given by (2); the outputs of the recurrent radial basis neural network are given by (3). P, Y, LW, IW, b1, and b2 are the input, output, layer weight, input weight, and the biases of the hidden layer and the output layer, respectively.

$$Y = LW\,\exp\!\left(-\left(\left\lVert IW - P \right\rVert b_1\right)^{2}\right) + b_2 \qquad (2)$$

$$Y(t) = LW\,\exp\!\left(-\left(\left\lVert IW - P \right\rVert b_1 + LW'\,Y(t-1)\right)^{2}\right) + b_2 \qquad (3)$$

d) The members with the lower fitness values are passed to the next generation.
e) Select some members to be parents based on their fitness values. The selection method is the stochastic uniform method [41].
f) The children are produced by crossover and mutation of the parents. As shown in Fig. 9, one of each pair of parents is flipped from gene p1 to gene n-p1, and gene 1 to gene p1 and gene n-p1 to gene n are switched in position, where p1 is a randomized number. The mutation process is the exchange of two randomized genes p1 and p2, as shown in Fig. 10.
g) Replace the current population with the children.
h) Increase the generation count by one.
i) Check whether the generation count exceeds the predetermined value. If not, go back to step c); if so, proceed to step j).
j) Check whether the MSE is still larger than the desired error. If not, the procedure ends; if so, proceed to step k).
k) Add one hidden neuron.
l) Go back to step b). The population is initialized again; each member of the initial population now contains 2 rows of vectors, corresponding to the number of hidden neurons.

With the proposed approach, the network is built such that the number of hidden neurons is incrementally increased until the error between the outputs and the desired outputs meets the criterion.
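Steps a)-l) above can be sketched in code. This is a simplified reading, not the paper's implementation, with several loudly flagged assumptions: the hidden layer follows the Gaussian radial basis form of (2); the output weights LW are refit by least squares inside the fitness evaluation (the paper evolves only the input weights); elitist truncation stands in for the stochastic uniform selection of step e); and crossover is omitted, so children come only from the swap mutation of step f).

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_design(centers, P, b1=1.0):
    """Hidden-layer outputs per (2): exp(-(||IW - P|| * b1)^2) for each
    center (row of `centers`) against each input column of P."""
    dist = np.linalg.norm(centers[:, :, None] - P[None, :, :], axis=1)  # (k, N)
    return np.exp(-(dist * b1) ** 2)

def fitness(member, P, target):
    """Step c): fitness = MSE between network outputs and desired outputs.
    Refitting LW by least squares is an assumption of this sketch."""
    A = rbf_design(member, P)
    LW, *_ = np.linalg.lstsq(A.T, target, rcond=None)
    return float(np.mean((LW @ A - target) ** 2))

def mutate(member):
    """Step f) mutation: exchange two randomly chosen gene positions."""
    m = member.copy()
    i, j = rng.choice(m.shape[1], size=2, replace=False)
    m[:, [i, j]] = m[:, [j, i]]
    return m

def ga_centers(P, target, k=2, pop=20, gens=30):
    """Steps b)-i): members are k input-weight vectors drawn from the
    input data; the better half survives, the rest become mutated children."""
    n, N = P.shape
    members = [P[:, rng.choice(N, size=k, replace=False)].T for _ in range(pop)]
    for _ in range(gens):
        members.sort(key=lambda m: fitness(m, P, target))    # step c)
        elite = members[: pop // 2]                          # step d): keep better half
        children = [mutate(elite[rng.integers(len(elite))])  # steps e)-f), simplified
                    for _ in range(pop - len(elite))]
        members = elite + children                           # steps g)-h)
    return min(members, key=lambda m: fitness(m, P, target))

P = rng.standard_normal((4, 12))     # illustrative n=4 inputs, N=12 samples
target = rng.standard_normal(12)
best = ga_centers(P, target)         # best set of k=2 hidden-neuron input weights
```

Because the elite half is carried over unchanged, the best fitness in the population is non-increasing across generations, which mirrors step d) of the procedure.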


Fig. 8 Proposed algorithm (flowchart of steps a)-l))

Fig. 9 Crossover method

Fig. 10 Mutation method

IV. SIMULATION RESULTS

The RBN and RRBN with the proposed algorithm are used to forecast the load demand of case study 1, which is the average-workday load demand of medium-sized businesses in winter. The networks are simulated by separating the input data into training and test sets. There are 24-hour data for the training and test sets. 4-hour data is required as historical data to predict the load demand in the later hours; therefore, there are 20-hour predictions for both the training and test data. The input variable is the load demand, which is sufficient to verify the proposed algorithm.

The proposed approach is simulated and compared to the OLS (Orthogonal Least Square) error method, which is used by the function "newrb" in MATLAB and by [36]. This function designs the weights and biases by orthogonal least square error combined with incremental growing of hidden neurons. TABLE IV depicts the simulation results of the proposed approach compared with the function "newrb" using the training data. To show the results simply, the data are reduced to 10 data points, so the maximum number of hidden neurons is 10. The proposed approach yields smaller errors for the same number of hidden neurons; in other words, the proposed approach uses fewer hidden neurons for the same error. For example, when the desired error is 1e-6, the proposed approach uses 6 hidden neurons while the OLS method uses 9 hidden neurons.

Fig. 11 illustrates the simulation results for the test data. The radial basis neural network with the proposed approach predicts the load demand very well; the MSE is 2.4661e-6. TABLE V summarizes the comparative results of the proposed approach and the OLS method with the recurrent radial basis neural network. Fig. 12 illustrates the simulation results for the test data; the MSE is 4.519e-5. The errors of both the RBN and RRBN networks are lower than the results shown in TABLE I. This is because the number of data points is increased from 16 to 40. The results confirm one of the properties of neural networks: the more data points are given, the more accuracy is obtained.

TABLE IV MEAN SQUARED ERRORS OF RBN NETWORKS

  Numbers of       The proposed  Orthogonal
  hidden neurons   approach      Least Square
  1                0.0012        0.0034
  2                2.3447e-4     0.0029
  3                3.2659e-4     0.0020
  4                5.1468e-5     0.0012
  5                3.2527e-6     6.8513e-4
  6                9.2695e-7     6.2697e-4
  7                1.0502e-8     2.1013e-4
  8                5.2518e-12    1.1743e-4
  9                0             4.9304e-32

TABLE V MEAN SQUARED ERRORS OF RRBN NETWORKS

  Numbers of       The proposed  Orthogonal
  hidden neurons   approach      Least Square
  1                0.2266e-3     0.0037
  2                0.0175e-3     0.0032
  3                0.0219e-3     0.0020
  4                0.0052e-3     0.0013
  5                0.0003e-3     7.9012e-4
  6                2.0664e-8     7.8934e-4
  7                6.5128e-9     6.3322e-4
  8                3.6095e-13    3.2801e-4
  9                4.9304e-33    6.1630e-33


Fig. 11 Simulation results of the RBN network

Fig. 12 Simulation results of the RRBN network

V. CONCLUSION

Load forecasting is advantageous in the smart grid since it contributes to operational planning. An artificial neural network is one of the preferred approaches since it is applicable to the nonlinear characteristics of the load forecast. There are several models of neural networks that have different characteristics and give various results depending on their weight functions, input functions, and transfer functions. To select the models properly, it is essential to study the models comparatively. Each model is compared based on the MSE performance and the simulation time.

Among many models of neural networks, radial basis neural networks perform fast and give small errors. To avoid the generalization, or overfitting, phenomenon, the number of hidden neurons should be as small as possible while still providing the required error. The proposed approach uses modified genetic algorithm optimization combined with incremental growing of hidden neurons to guarantee the fewest hidden neurons. The radial basis neural networks with the proposed algorithm show excellent predictions of the short-term load demand.

ACKNOWLEDGEMENT

The authors gratefully acknowledge the PEA (Provincial Electricity Authority) for kindly providing the load demand data.

REFERENCES

[1] T. Senjyu, H. Takara, K. Uezato, "One-hour-ahead load forecasting using neural network," IEEE Trans. Power Systems, vol. 17, pp. 113-118, 2002.
[2] V. H. Ferreira, A. P. Alves da Silva, "Toward estimating autonomous neural network-based electric load forecasters," IEEE Trans. Power Systems, vol. 22, pp. 1554-1562, 2007.
[3] H. S. Hippert, C. E. Pedreira, "Estimating temperature profiles for short-term load forecasting," in Proc. 2004 IEE Generation, Transmission and Distribution Conf., pp. 543-547.
[4] W. R. Christiaanse, "Short-term load forecasting using general exponential smoothing," IEEE Trans. Power Apparatus and Systems, vol. PAS-90, pp. 900-911, 1971.
[5] R. F. Engle, C. Mustafa, J. Rice, "Modeling peak electricity demand," Journal of Forecasting, vol. 11, pp. 241-251, Apr. 1992.
[6] J. H. Park, Y. M. Park, K. Y. Lee, "Composite modeling for adaptive short-term load forecasting," IEEE Trans. Power Systems, vol. 6, pp. 450-457, 1991.
[7] D. G. Infield, D. C. Hill, "Optimal smoothing for trend removal in short-term electricity demand forecasting," IEEE Trans. Power Systems, vol. 13, pp. 1115-1120, 1998.
[8] S. J. Huang, K. R. Shih, "Short-term load forecasting via ARMA model including non-Gaussian process considerations," IEEE Trans. Power Systems, vol. 18, pp. 673-679, 2003.
[9] H. T. Yang, C. M. Huang, "A new short-term load forecasting approach using self-organizing fuzzy ARMAX models," IEEE Trans. Power Systems, vol. 13, pp. 217-225, 1998.
[10] G. P. Box, G. M. Jenkins, G. C. Reinsel, Time Series Analysis: Forecasting and Control, 4th ed., Wiley, 2008.
[11] K. Fukushima, "A neural network for visual pattern recognition," in Proc. Third Int. Conf. on Artificial Neural Networks, 1993, pp. 11-15.
[12] M. C. Alexander, P. S. Dakopoulos, H. S. Sahsamanoglou, "Wind speed and power forecasting based on spatial correlation models," IEEE Trans. Energy Conversion, vol. 14, pp. 836-842, 1999.
[13] P. J. Antsaklis, "Neural networks for control systems," IEEE Trans. Neural Networks, vol. 1, pp. 242-244, 1990.
[14] A. S. Gevins, N. H. Morgan, "Applications of neural network (NN) signal processing in brain research," IEEE Trans. Acoustics, Speech, and Signal Processing, vol. 36, pp. 1152-1161, 1988.
[15] C. W. Liu, C. S. Chang, M. C. Su, "Neuro-fuzzy networks for voltage security monitoring based on synchronized phasor measurement," IEEE Trans. Power Systems, vol. 13, pp. 326-332, 1998.
[16] R. K. Hartana, G. C. Richards, "Harmonic source modeling and identification using neural networks," IEEE Trans. Power Systems, vol. 5, pp. 1098-1104, Nov. 1990.
[17] D. Q. Zhou, U. D. Annakkage, A. D. Rajapakse, "Online monitoring of voltage stability margin using an artificial neural network," IEEE Trans. Power Systems, vol. 25, pp. 1566-1574, 2010.
[18] G. Cardoso, J. G. Rolim, H. H. Zurn, "Identifying the primary fault section after contingencies in bulk power systems," IEEE Trans. Power Delivery, vol. 23, pp. 1335-1342, 2008.
[19] G. Cardoso, J. G. Rolim, H. H. Zurn, "Application of neural-network modules to electric power system," IEEE Trans. Power Delivery, vol. 19, pp. 1034-1041, 2004.
[20] A. A. El-Keib, X. Ma, "Application of artificial neural networks in voltage stability assessment," IEEE Trans. Power Systems, vol. 10, pp. 1890-1896, 1995.
[21] E. N. de Oliveira, A. Padilha, C. R. Minussi, "Use of transient stability indices to dynamic security assessment," IEEE Trans. Latin America, vol. 1, pp. 27-33, 2003.
[22] W. L. Chan, A. T. P. So, L. L. Lai, "Initial applications of complex artificial neural networks to load flow analysis," in Proc. 2000 IEE Generation, Transmission and Distribution Conf., pp. 361-366.
[23] P. Siano, C. Cecati, H. Yu, J. Kolbusz, "Real time operation of smart grids via FCN networks and optimal power flow," IEEE Trans. Industrial Informatics, vol. 8, pp. 944-952, 2012.
[24] P. K. Dash, S. Mishra, G. Panda, "A radial basis function neural network controller for UPFC," IEEE Trans. Power Systems, vol. 15, pp. 1293-1299, 2000.


[25] M. Biserica, Y. Besanger, R. Caire, O. Chilard, P. Deschamps, "Neural networks to improve distribution state estimation - Volt Var control performance," IEEE Trans. Smart Grid, vol. 3, pp. 1137-1144, 2012.
[26] H. T. Yang, W. Y. Chang, C. L. Huang, "A new neural networks approach to on-line fault section estimation using information of protective relays and circuit breakers," IEEE Trans. Power Delivery, vol. 9, pp. 220-230, 1994.
[27] T. Jain, L. Srivastava, S. N. Singh, "Fast voltage contingency screening using radial basis function neural network," IEEE Trans. Power Systems, vol. 18, pp. 1359-1366, 2003.
[28] K. W. Chan, A. R. Edwards, R. W. Dunn, A. R. Daniels, "On-line dynamic security contingency screening using artificial neural networks," in Proc. 2000 IEE Generation, Transmission and Distribution Conf., pp. 367-372.
[29] T. S. Sidhu, L. Cui, "Contingency screening for steady-state security analysis by using FFT and artificial neural networks," IEEE Trans. Power Systems, vol. 15, pp. 421-426, 2000.
[30] D. C. Park, M. A. El-Sharkawi, R. J. Marks II, L. E. Atlas, M. J. Damborg, "Electric load forecasting using an artificial neural network," IEEE Trans. Power Systems, vol. 6, pp. 442-449, 1991.
[31] Y. Y. Hsu, C. C. Yang, "Design of artificial neural network for short-term load forecasting. II. Multilayer feedforward networks for peak load and valley load," in Proc. 1991 IEEE Generation, Transmission, and Distribution, pp. 414-418.
[32] K. L. Ho, Y. Y. Hsu, C. C. Yang, "Short term load forecasting using a multilayer neural network with an adaptive learning algorithm," IEEE Trans. Power Systems, vol. 7, pp. 141-149, 1992.
[33] O. A. Abdalla, M. N. Zakaria, S. Sulaiman, W. F. W. Ahmad, "A comparison of feed-forward back-propagation and radial basis artificial neural networks: A Monte Carlo study," in Proc. 2010 International Symposium on Information Technology (ITSim), pp. 994-998.
[34] D. Brezak, T. Bacek, D. Majetic, J. Kasac, B. Novakovic, "A comparison of feed-forward and recurrent neural networks in time series forecasting," in Proc. 2012 IEEE Conf. on Computational Intelligence for Financial Engineering & Economics (CIFEr), pp. 1-6.
[35] P. Murto, "Neural network models for short-term load forecasting," Master's thesis, Department of Physics and Mathematics, Helsinki University of Technology.
[36] Z. Meng, Y. H. Pao, "Automatic neural-net model generation and maintenance," U.S. Patent 7483868, Jan. 27, 2009.
[37] H. S. Hippert, C. E. Pedreira, R. C. Souza, "Neural networks for short-term load forecasting: A review and evaluation," IEEE Trans. Power Systems, vol. 16, pp. 44-55, 2001.
[38] K. Liano, "Robust error measure for supervised neural network learning with outliers," IEEE Trans. Neural Networks, vol. 7, pp. 246-250, 1996.
[39] L. Jun, T. Duckett, "Some practical aspects on incremental training of RBF network for robust," in Proc. 2008 7th World Congress on Intelligent Control and Automation, pp. 2001-2006.
[40] J. A. Momoh, Electric Power System Applications of Optimization, New York: CRC Press, Taylor & Francis Group, 2005, p. 443.
[41] MATLAB and Simulink for Technical Computing. [Online]. Available: http://www.mathworks.com
