Comparative Study of Short-Term Wind Speed Forecasting Techniques Using Artificial Neural Networks
Abstract—This paper focuses on the importance of wind forecasting and compares two forecasting schemes based on artificial neural networks: feed-forward network models trained with the standard back-propagation technique, and recurrent neural network models with inherent memory for the given data. The study shows how local memory and relevant inputs make recurrent neural networks more suitable for time-series prediction than ordinary feed-forward networks. Accurate forecasting and better energy trading also require fine-tuning of present techniques; therefore, LSTM models, which belong to the family of recurrent neural networks, are implemented. Finally, the results are measured in terms of mean-squared error, an error function that computes the difference between the actual and model outputs. It was found that LSTM models are more suitable than the plain RNN model for both short- and long-term time-series forecasting.

Index Terms—fully recurrent, standard back-propagation, time-series data, future forecast models, RNN, time stamp.

I. INTRODUCTION

In recent decades, there has been a shift from statistical models to artificial neural network models for forecasting, in order to cope with the complexities observed in improving wind-energy generation. Randomness in the wind speed prevents the supply of a predefined amount of power to the grid, resulting in a mismatch between the power that was promised and the power received at the other end. Therefore, to reduce unreliability in the electricity supply, forecast models are being developed for accurate prediction of wind speed and power generation [1][2].

Recurrent neural networks are used when patterns in the data change over time. This deep learning model has a simple structure with a built-in feedback loop, allowing it to act as a forecasting engine. Structurally, recurrent networks differ considerably from other neural networks: in feed-forward networks, signals flow in one direction only, from input to output, one layer at a time, whereas in a recurrent neural network (RNN) the output of a layer is added to the next input and fed back into the same layer, which is typically the only layer in the entire network. This process can be termed passage through time. Unlike feed-forward networks, an RNN can receive a sequence of values as input and produce a sequence of values as output. The ability of these models to work with sequences of data opens up a wide variety of applications, of which forecasting is one.

II. RECURRENT NEURAL NETWORK MODEL

The recurrent neural network is a deep learning model designed to recognise patterns in a sequence of data. These networks can learn local as well as long-term dependencies in the data and can also accommodate sequences of variable length. Recurrent neural networks are therefore a general paradigm for handling variably sized data, allowing many different kinds of use cases to be captured.

A. RNN Architecture

In feed-forward neural networks, the outputs of one layer are fed as inputs into the subsequent layer, and each unit performs a relatively simple computation. The first layer takes the inputs x_j, multiplies them by a weight matrix w_ij, sums the products, and passes the result through an activation function g to yield the output y_i, i.e.,

y_i = g\left( \sum_{j=0}^{n} w_{ij} x_j + b_i \right)   (1)

To train these models, we use a cost function and take its derivative with respect to the weights, using this derivative to descend through the nested layers of computation until it equals zero [6].

Fig. 1: Traditional feed-forward network

The major assumptions that can be drawn from feed-forward networks are fixed input length and independence. From Fig. 1, it can be observed that the outputs are independent of each other, i.e., the output at time t is independent of the output at t − 1. This assumption of independence does not hold for sequences such as time-series data, which contain short- as well as long-term dependencies.
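Equation (1) above maps directly to a few lines of code. The following is a minimal NumPy sketch of one feed-forward layer; the layer sizes, random weights, and the choice of a sigmoid for g are illustrative assumptions, not values from the paper:

```python
import numpy as np

def sigmoid(z):
    # activation function g
    return 1.0 / (1.0 + np.exp(-z))

def feedforward_layer(x, W, b):
    # Eq. (1): y_i = g(sum_j w_ij * x_j + b_i)
    return sigmoid(W @ x + b)

rng = np.random.default_rng(0)
x = rng.normal(size=4)          # input vector x_j
W = rng.normal(size=(3, 4))     # weight matrix w_ij
b = np.zeros(3)                 # bias b_i
y = feedforward_layer(x, W, b)  # output y_i, shape (3,)
```

Note that the output depends only on the current input x; no information from earlier inputs is retained, which is exactly the independence assumption discussed above.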
Fig. 3: Structure of RNN model

D. Algorithm

This is the proposed algorithm used to verify the network model.
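The feedback loop in the structure of Fig. 3 can be expressed as a hidden state that is updated at every time step: the previous output is fed back into the same layer together with the new input. A minimal NumPy sketch of this recurrence (the dimensions, tanh activation, and random weights are illustrative assumptions) is:

```python
import numpy as np

def rnn_step(x_t, h_prev, W_x, W_h, b):
    # one pass through time: the previous state h_prev is fed back
    # into the same layer together with the new input x_t
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

rng = np.random.default_rng(1)
T, n_in, n_hidden = 5, 1, 8            # sequence length, input and state sizes
W_x = rng.normal(size=(n_hidden, n_in))
W_h = rng.normal(size=(n_hidden, n_hidden))
b = np.zeros(n_hidden)

h = np.zeros(n_hidden)                 # initial (empty) memory
seq = rng.normal(size=(T, n_in))       # e.g. a short wind-speed sequence
states = []
for x_t in seq:                        # the network unrolled through time
    h = rnn_step(x_t, h, W_x, W_h, b)
    states.append(h)
```

Because h carries information forward, the state after step t depends on all inputs up to t, unlike the feed-forward layer of Eq. (1).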
Proceeding of 2018 IEEE International Conference on Current Trends toward Converging Technologies, Coimbatore, India
E. Implementation

The neural network models were implemented using the deep learning libraries Keras 2.0.6 and TensorFlow 1.2.1 [4]. The other Python packages used include:
• pandas 0.20.3
• pyflux 0.4.15
• numpy 1.13.1
• matplotlib 2.0.2
• plotly 2.0.15
• scikit-learn 0.18.2
• scipy 0.19.1
• statsmodels 0.8.0
• seaborn 0.8

Referring to Fig. 7, the model performance is evaluated in terms of the mean-squared error (MSE), which is minimised by gradient-descent optimisation. The MSE is reduced to 12.36.
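As context for the MSE evaluation above, the following is a minimal sketch of how a mean-squared-error cost is driven down by gradient descent. It uses a toy linear model rather than the paper's network, and all data and hyperparameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))              # toy features
true_w = np.array([0.5, -1.0, 2.0])
y = X @ true_w + 0.1 * rng.normal(size=100)

w = np.zeros(3)                            # initial weights
lr = 0.1                                   # learning rate
mse_history = []
for _ in range(200):
    pred = X @ w
    err = pred - y
    mse_history.append(np.mean(err ** 2))  # MSE between model and actual outputs
    grad = 2.0 * X.T @ err / len(y)        # derivative of the MSE w.r.t. the weights
    w -= lr * grad                         # gradient-descent update
```

Each iteration moves the weights opposite to the gradient of the cost, so the recorded MSE decreases until it bottoms out near the noise level; Keras performs the same loop internally when a model is compiled with an MSE loss.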
where x_t is the new input and b_f is the bias term.
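The notation above (x_t as the new input, b_f as the bias) matches the symbols of the standard LSTM gate equations; for reference, the textbook formulation in which these symbols appear is:

```latex
f_t &= \sigma\left(W_f \cdot [h_{t-1}, x_t] + b_f\right) && \text{(forget gate)}\\
i_t &= \sigma\left(W_i \cdot [h_{t-1}, x_t] + b_i\right) && \text{(input gate)}\\
\tilde{C}_t &= \tanh\left(W_C \cdot [h_{t-1}, x_t] + b_C\right) && \text{(candidate cell state)}\\
C_t &= f_t \odot C_{t-1} + i_t \odot \tilde{C}_t && \text{(cell state update)}\\
o_t &= \sigma\left(W_o \cdot [h_{t-1}, x_t] + b_o\right) && \text{(output gate)}\\
h_t &= o_t \odot \tanh(C_t) && \text{(hidden state)}
```

Here σ is the logistic sigmoid, ⊙ denotes element-wise multiplication, and h_{t−1} and C_{t−1} are the previous hidden and cell states; the cell state C_t is the "inherent memory" that lets the LSTM retain long-term dependencies.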
A. Training Algorithm
B. Implementation

This forecasting model was developed using the Keras deep learning library with TensorFlow [4] as the back-end.

The following graph shows the data recorded from a wind farm near Managuli village in the southern part of Karnataka, which includes the date-time, the zonal and meridional components of the wind, the wind direction, and the wind speed. The complete list of features [5] available in the raw data is as follows:
1. No: Row number
2. Year: 2010-2014
3. Month: 1-12
4. Day: 1-30/31
5. Hour: 1-24
6. v: Zonal component of wind in degrees
7. u: Meridional component of wind in degrees
8. ws: Wind speed in m/s
9. wd: Wind direction in azimuth degrees

C. Results

Referring to Fig. 12, it can be seen that the model achieves an RMSE of 3.061 and a loss of 0.078, which is quite acceptable for a multivariate forecast model.

V. CONCLUSION

The literature reports mixed results regarding the performance of neural networks compared with other forecasting methods. Neural network models prove very useful for larger, higher-frequency data compared with other statistical models. Feed-forward networks are generally not used for implementing forecast models because the current output is independent of the previous output, whereas RNNs are well suited to time-series forecasting. Evaluating the two network models considered above, the results show that LSTMs perform better and faster when the given sequence has long-term dependencies, whereas RNNs perform better with short-term dependencies. The time taken by LSTM networks for time-series prediction is much smaller than that of RNN models on the same data. With multiple data inputs, LSTMs can easily adapt to changes in the patterns of the data, learn at a greater speed, and take less time to converge.

REFERENCES

[1] Douglas C. Montgomery, Cheryl L. Jennings, Murat Kulahci, Introduction to Time Series Analysis and Forecasting, New York: Wiley Publications, 2015.
[2] Mohammad Shahidehpour, Hatim Yamin, Zuyi Li, Market Operations in Electric Power Systems, New York: A John Wiley and Sons, Inc., Publication, 2002.
[3] Trevor Hastie, Robert Tibshirani, Jerome Friedman, The Elements of Statistical Learning, New York City: Springer Publications, 2001.
[4] Sam Abrahams, Danijar Hafner, Erik Erwitt, Ariel Scarpinelli, TensorFlow for Machine Intelligence, Bleeding Edge Press, 2016.
[5] Song Li, Peng Wang, Lalit Goel, "Wind Power Forecasting Using Neural Network Ensembles With Feature Selection," IEEE Transactions on Sustainable Energy, vol. 6, no. 4, October 2015.
[6] Hao Quan, Dipti Srinivasan, Abbas Khosravi, "Short-Term Load and Wind Power Forecasting Using Neural Network-Based Prediction Intervals," IEEE Transactions on Neural Networks and Learning Systems, vol. 25, no. 2, February 2014.
[7] Qianyao Xu, Dawei He, Ning Zhang, Chongqing Kang, Qing Xia, Jianhua Bai, Junhui Huang, "A Short-Term Wind Power Forecasting Approach With Adjustment of Numerical Weather Prediction Input by Data Mining," IEEE Transactions on Sustainable Energy, vol. 6, no. 4, October 2015.
[8] Thanasis G. Barbounis, John B. Theocharis, Minas C. Alexiadis, Petros S. Dokopoulos, "Long-Term Wind Speed and Power Forecasting Using Local Recurrent Neural Network Models," IEEE Transactions on Energy Conversion, vol. 21, no. 1, March 2006.