
Multi-time-horizon Solar Forecasting Using

Recurrent Neural Network


Sakshi Mishra
Transmission Planning Engineer, West Transmission Planning
American Electric Power
Tulsa, USA
sakshi.m@outlook.com

Praveen Palanisamy
Alumni, Robotics Institute
Carnegie Mellon University
Pittsburgh, USA
praveen.palanisamy@outlook.com

Abstract— The non-stationary characteristic of solar power renders traditional point forecasting methods less useful due to large prediction errors. This results in increased uncertainty in grid operation, thereby negatively affecting reliability and increasing the cost of operation. This research paper proposes a unified architecture for multi-time-horizon predictions for short- and long-term solar forecasting using Recurrent Neural Networks (RNN). The paper describes an end-to-end pipeline to implement the architecture, along with methods to test and validate the performance of the prediction model. The results demonstrate that the proposed method based on the unified architecture is effective for multi-horizon solar forecasting and achieves a lower root-mean-squared prediction error than the previous best-performing methods, which use one model for each time horizon. The proposed method enables multi-horizon forecasts with real-time inputs, which have a high potential for practical applications in the evolving smart grid.

Keywords— Artificial Neural Network, Forecasting, Predictive Analysis, Recurrent Neural Network, Renewable Energy, Solar Power, Multi-time-horizon Solar Forecasting, Smart Grid

I. INTRODUCTION

Today's power grid has become dynamic in nature, mainly because of three changes in the modern grid: 1) a higher penetration level of renewables, 2) the introduction and rapidly increasing deployment of storage devices, and 3) loads becoming active by participating in demand response. This dynamic modern grid faces the challenge of strong fluctuations due to uncertainty. There is a critical need to gain real-time observability and control, and to improve renewable generation forecast accuracy, in order to enhance resiliency and keep operational costs sustainable. Independent system operators (ISOs) already face challenges with higher renewable penetration on the grid due to the uncertainties resulting from short-term forecasting errors. In 2016, the California ISO doubled its frequency regulation service requirements, causing a sharp rise in their cost, to manage the recurring short-term forecasting errors in renewable generation [1]. The Western Electricity Coordinating Council (WECC) could save $5 billion per year by integrating wind and solar forecasts into unit commitment, according to the study conducted by Lew et al. [2]. Thus, it is clear that the increased grid penetration level of solar, with its inherent variability caused by a combination of intermittence, high frequency, and non-stationarity, poses problems for grid reliability and increases grid operation costs at various time-scales. For example, day-ahead solar forecast accuracy plays a significant role in the effectiveness of Unit Commitment (UC), while very-short-term solar forecast errors due to fluctuations caused by passing clouds lead to sudden changes in PV plant output that can strain the grid by inducing voltage flicker and real-time balancing issues. Thus, solar power generation forecasting is an area of paramount research, as the need for robust forecasts at all timescales (weekly, day-ahead, hourly and intra-hour) is critical for effectively incorporating the increasing amount of solar energy resources at a global level and contributing to the evolution of the smart grid. Moreover, improving the accuracy of solar forecasting is one of the low-cost methods for efficiently integrating solar energy into the grid.

The rest of the paper is organized as follows. The literature is reviewed and the significant shortcomings of current forecasting approaches are identified in Section II. Section II further introduces the capabilities of the proposed unified architecture and the novel algorithm that fills the gap between the need to improve forecasting techniques and the existing approaches. Section III introduces the proposed unified architecture based on RNNs and the training procedure used for implementing the neural network. Exploratory data analysis, the evaluation metric, the structure of the input data, and the proposed algorithm are presented in Section IV. Section V discusses the results and their interpretation. The paper is concluded with Section VI, which also identifies future avenues of research in this method of solar forecasting.

II. LITERATURE REVIEW AND PROBLEM IDENTIFICATION

Forecasting methods that have been used for renewable generation and electric load forecasting can mainly be classified into five categories: 1) regressive methods, such as autoregressive (AR), AR integrated moving average (ARIMA), and exponential smoothing (ES) models [3], [4], [5], and nonlinear stationary models; 2) Artificial Intelligence (AI) techniques, such as Artificial Neural Networks (ANN) [6]-[10], k-nearest neighbors [11]-[14], and fuzzy logic systems (FLSs) [15]-[17]; 3) Numerical Weather Prediction (NWP) [18]; 4) sensing (remote and local) [19]; and 5) hybrid models, such as neuro-fuzzy systems [20]-[21] and ANN with satellite-derived cloud indices [22], to name a few.

Numerical Weather Prediction (NWP) models are based on the physical laws of motion and thermodynamics that govern the weather.



For places where ground data are not available, NWP models are powerful tools to forecast solar radiation. However, they pose significant limitations in predicting the precise position and extent of cloud fields due to their relatively coarse spatial resolution. Their inability to resolve the micro-scale physics associated with cloud formation leaves them with relatively large errors in terms of cloud prediction accuracy. To mitigate this limitation, NWP models are simulated at the regional level (so-called regional NWP models) and downscaled to derive improved site-specific forecasts. NWP has another limitation: temporal resolution. The timescale of the output variables of NWP models ranges from 3-6 hours for the Global Forecast System (GFS) to 1 hour for the mesoscale model, which is not useful for predicting ramp rates and very-short-term output fluctuations.

For areas where prior ground-based measurements are not available, satellite-based irradiance measurement has proved to be a useful tool [22]. The images from the satellite are used to analyze the time evolution of air mass by superimposing images of the same area. The radiometer installed in the satellite records the radiance, and the state of the atmosphere (clear sky to overcast) impacts the radiance. Satellite sensing has the main limitation of determining an accurate set point for the radiance value under clear sky conditions and under dense cloudiness conditions from every pixel in every image. Another limitation of solar irradiance forecasting using remote sensing with satellites lies in the algorithms, which are classified as empirical or statistical [23]-[24]. These algorithms are based on simple statistical regression between surface measurements and satellite information and do not need accurate information about the parameters that model the attenuation of solar radiation through the atmosphere. Hence, ground-based solar data is required for these satellite statistical algorithms anyway.

The aforementioned limitations of NWP and sensing models have steered short-term solar forecasting research towards time-series analysis using statistical models and, more recently, AI techniques. Statistical techniques can mainly be classified as [25]: 1) linear stationary models (autoregressive models, moving average models, mixed autoregressive moving average models, and mixed autoregressive moving average models with exogenous variables); 2) nonlinear stationary models; and 3) linear non-stationary models (autoregressive integrated moving average models and autoregressive integrated moving average models with exogenous variables). Though these conventional statistical techniques provide a number of advantages over NWP and sensing methods, they are often limited by strict assumptions of normality, linearity, and variable independence.

Artificial Neural Networks (ANN) are able to represent complex non-linear behaviors in higher-dimensional settings. When exogenous variables like humidity, temperature, and pressure are considered in the process of solar forecasting, ANNs act as universal function approximators to model the complex non-linear relationships between these variables and their relationship with the Global Horizontal Irradiance (GHI). An ANN with multiple hidden layers can be called a Deep Neural Network (DNN). With the advancements in computational capabilities, DNNs have proven to be effective and efficient in solving complex problems in many fields, including image recognition, automatic speech recognition, and natural language processing [26]. Although feed-forward neural network models have been used for the solar forecasting problem, the use of Recurrent Neural Network (RNN) models has not been explored yet, to the best of the authors' knowledge. RNN is a class of ANN that captures the dynamics of sequences using directed cyclic feedback connections [27]. Feedforward neural networks rely on the assumption of independence among the data points or samples: the entire state of the network is lost after processing each data point (sample). Unlike vanilla feedforward neural networks, recurrent neural networks (RNNs) exhibit dynamic temporal behavior by using their internal memory to process arbitrary sequences of inputs, which can be harnessed to predict the irradiance for the next time step by considering the input from many previous time steps. Recent advances in parallelism, network architectures, optimization techniques, and graphics processing units (GPUs) have enabled successful large-scale learning with RNNs, overcoming their traditional limitation of being difficult to train due to having millions of parameters.

Several methods have been proposed for solar forecasting in the past, but most of them were modeled for a particular time horizon and no single model performed well compared to others for multi-time-horizon forecasting/prediction. In addition, the state-of-the-art methods used for solar forecasting primarily focus on averaged rather than instantaneous forecasts. This paper proposes two approaches using RNNs: i) a single system that is capable of being trained to output a solar forecast for a 1-hour, 2-hour, 3-hour, or 4-hour time horizon; and ii) a unified architecture that can predict/forecast the solar irradiance for multiple time horizons, i.e., the trained model can predict/forecast the solar irradiance values for the 1-hour, 2-hour, 3-hour and 4-hour time horizons. Our proposed method is capable of taking time-series data as the input and provides predictions with a forward inference time on the order of milliseconds, enabling real-time forecasts based on live measured data. This offers great value for industrial applications that require real-time multi-time-horizon forecasting to overcome the current operational challenges with high penetration of renewable sources of energy.

III. ARCHITECTURE AND ALGORITHM

The RNN resembles a feedforward neural network except for additional directed edges. These edges span adjacent time steps, introducing the notion of a temporal component to the model. Theoretically, the RNN architecture enables the network to make use of past information in sequential data.

A. Recurrent Neural Network – Unified Architecture

The input to an RNN is a sequence, and its target can be a sequence or a single value. An input sequence is denoted by (x(1), x(2), ..., x(T)), where each sample/data-point x(t) is a real-valued vector.

The target sequence is denoted by (y(1), y(2), ..., y(T)) and the predicted sequence is denoted by (ŷ(1), ŷ(2), ..., ŷ(T)). There are three dimensions to the input of the RNN (shown in Figure 1): 1) mini-batch size; 2) number of columns in the vector per time-step; and 3) number of time-steps. Mini-batch size is the sample length (data-points in the time-series). The number of columns is the number of input features in the input vector. The number of time-steps is the differentiating factor of the RNN, which unfolds the input vector over time.

In a typical multilayer feedforward neural network, the input vector is fed to the neurons at the input layer and passed through the activation function to produce the intermediate output of the neuron; this output then becomes the input to the neurons in the next layer. The net input (denoted by input_sum_i) to a neuron in the next layer is the weight on the connections (W) multiplied by the previous neuron's output, plus the bias term, as shown in Equation 1. An activation function (denoted by g) is then applied to input_sum_i to produce the output of the neuron, as shown in Equations 2 and 3.

input\_sum_i = W_i \cdot x_i + b   (1)

a_i = g(input\_sum_i)   (2)

a_i = g(W_i \cdot x_i + b)   (3)

Figure 1 Input representation for the Recurrent Neural Network

For an RNN, at time t, neurons with recurrent edges receive input from the current sample x(t) and also from the hidden node values h(t-1) in the network's previous state (Equation 4). Given the hidden node values h(t) at time t, the output ŷ(t) at each time t is calculated (Equation 5).

h^{(t)} = \sigma(W_{hx} x^{(t)} + W_{hh} h^{(t-1)} + b_h)   (4)

\hat{y}^{(t)} = g(W_{yh} h^{(t)} + b_y)   (5)

where W_hx is the conventional weight matrix between the input and the hidden layer, W_hh is the recurrent weight matrix between the hidden layer and itself at adjacent time steps, and b_h and b_y are bias parameters. The proposed architecture uses Rectified Linear Units (ReLU) as the activation function. The network unfolds the given input at time t as shown in Figure 1.

B. Algorithm description

The unfolded network is trained across the time steps using an algorithm called backpropagation through time (BPTT) [28]. The loss function used for this regression problem is the Mean Squared Error (MSE). The loss function measures the error between the target output and the predicted output from the network. Gradients are computed using backpropagation through time [28], and the stochastic gradient descent optimizer is used to update the weights so as to minimize the loss. The RMSE is calculated for benchmarking purposes.

The motivation to use an RNN is to identify and learn the complex relationship between sequences of various exogenous variables and their combined impact on the solar irradiance. This, in the authors' view, enables the algorithm to recognize non-linear contributing factors, for example the atmospheric conditions which may lead to cloud formation in a nearby time horizon. This is one of the reasons the prediction RMSE is lower in the proposed approach compared to other reported approaches.

C. Solar Forecasting – input features and predictions

The algorithm and the unified architecture developed in this paper were trained and tested on data from NOAA's SURFRAD [31] sites, similar to previous works in the literature [29][32]. The input features are: downwelling global solar (W/m^2), upwelling global solar (W/m^2), direct-normal solar (W/m^2), downwelling diffuse solar (W/m^2), downwelling thermal infrared (W/m^2), downwelling IR case temperature (K), downwelling IR dome temperature (K), upwelling thermal infrared (W/m^2), upwelling IR case temperature (K), upwelling IR dome temperature (K), global UVB (mW/m^2), photosynthetically active radiation (W/m^2), net solar (dw_solar - uw_solar) (W/m^2), net infrared (dw_ir - uw_ir) (W/m^2), net radiation (netsolar + netir) (W/m^2), 10-meter air temperature (C), relative humidity (%), wind speed (m/s), wind direction (degrees, clockwise from north), and station pressure (mb).

According to Dobbs [29], global downwelling solar measurements best represent the Global Horizontal Irradiance (GHI) at the SURFRAD sites, which was validated in this paper through exploratory data analysis. Figure 2 shows the daily averages of clear sky GHI and global downwelling solar at a SURFRAD site for a year; both variables follow the same trend. Figure 3 shows that these two variables are positively correlated.

The Bird model is used to calculate clear sky GHI [30]. At time t, clear sky GHI is denoted by GHI_clear^t, representing the theoretical GHI at time t assuming zero cloud coverage. At time t, the ratio between the instantaneously observed GHI^t and the theoretical maximum GHI_clear^t is called the clear sky index, denoted by Kt_i^(t); this parameter was introduced in [29].
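As an illustration of how this quantity can be constructed, the following is a minimal sketch (not the authors' released code) that computes the instantaneous clear sky index and its trailing hourly average, the quantity formalized in Equation 6 in the next section. The array names, the synthetic data, and the division guard are illustrative assumptions; the series are assumed to be aligned 1-minute observed and Bird-model GHI samples.

```python
import numpy as np

def clear_sky_index(ghi_obs, ghi_clear, eps=1e-6):
    """Instantaneous clear sky index Kt = observed GHI / clear sky GHI.

    `eps` avoids division by zero at night; this guard is an assumption,
    not something specified in the paper.
    """
    return ghi_obs / np.maximum(ghi_clear, eps)

def hourly_clear_sky_index(kt, window=60):
    """Mean of Kt over a sliding 60-sample window (Equation 6), assuming
    1-minute samples so that 60 samples span one hour."""
    kernel = np.ones(window) / window
    return np.convolve(kt, kernel, mode="valid")

# Illustrative usage with one day of synthetic 1-minute data
ghi_clear = np.clip(1000 * np.sin(np.linspace(0, np.pi, 1440)), 0, None)
ghi_obs = ghi_clear * np.random.uniform(0.2, 1.0, size=1440)  # cloud effects
kt = clear_sky_index(ghi_obs, ghi_clear)
kt_hourly = hourly_clear_sky_index(kt)
```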

Kt_i^(t) is used as the dependent variable for training and testing the model. For hourly predictions, Kt_i^(t) is averaged over the forecasting horizon. The averaged hourly clear sky index ending at time f.h. is denoted by Kt_a^(f.h.) and calculated as shown in Equation 6.

Kt_a^{(f.h.)} = \frac{1}{60} \sum_{s=f.h.-60}^{f.h.} Kt_i^{(s)}   (6)

IV. EXPLORATORY DATA ANALYSIS AND PROPOSED ALGORITHM

A. Exploratory Data Analysis

Figure 2 Clear Sky and Observed Irradiance

Figure 3 Correlation between Observed and Clear Sky GHI

Figure 2 shows the variation of the observed global downwelling solar and the clear sky global horizontal irradiance for the year 2010 at Boulder, CO. Figure 3 shows the correlation between the two for the same year and the same site. There is a positive and strong correlation between the two quantities, as shown by the regression line plotted on the scatter plot.

B. Evaluation Metric

The algorithm uses the Mean Squared Error (MSE) as the measure of the difference between the target and the output that the neural network produces during the training process, as shown in Equation 7. Later in the process, for the purpose of benchmarking, the Root Mean Squared Error (RMSE) is calculated by taking the square root of the MSE values.

MSE = \frac{1}{n} \sum_{i=1}^{n} (Y_i - \hat{Y}_i)^2   (7)

where Y is a vector of target values and \hat{Y} is a vector of n predicted values.

C. Algorithm

Figure 4 Flow chart of the proposed method

The flowchart of the proposed Unified Recurrent Neural Network Architecture based method is shown in Figure 4. The overall algorithm can be divided into three main blocks:

1) Preprocessing

The site-specific data is imported and the clear sky global horizontal irradiance values for that site are obtained from the Bird model. The two are merged. The dataset is split into training and testing sets. The clear sky index parameter is created as the ratio of the observed global downwelling solar (W/m^2) and the clear sky GHI (W/m^2); Kt is a dimensionless parameter. Missing values in the data are replaced by the mean and/or the neighborhood values. Exploratory data analysis is conducted to identify and eliminate extreme outliers in order to normalize the data.

2) RNN training and testing

A Recurrent Neural Network based model architecture is instantiated by specifying the architectural parameters: input dimension (number of nodes at the input layer; 22), hidden dimension (number of nodes in the hidden layer; 15), layer dimension (number of hidden layers; 1) and output dimension (number of nodes in the output layer; 4). The sequence length, which unfolds as time-steps, is also defined here along with the batch size. The model is trained and tested by iterating through the whole dataset for a preset number of epochs. We used a batch size of 100 and 1000 epochs to obtain the results discussed in this paper.

3) Post-processing

Once the training and testing are over, the stored MSE is first de-normalized and then used to calculate the RMSE. If the RMSE is not satisfactory, the hyperparameters (learning rate and number of epochs) are tuned and the model is trained again. When a satisfactory (or expected) RMSE is achieved, the training process of the algorithm terminates.

V. RESULTS AND DISCUSSION

The algorithm is trained using the data for the years 2010 and 2011 from the SURFRAD observation sites in Boulder, CO; Desert Rock, NV; Fort Peck, MT; Sioux Falls, SD; Bondville, IL; Goodwin Creek, MS; and Penn State, PA. The test year for each respective site was chosen to be 2009 for the purpose of benchmarking against [29] and other previously reported results in the literature. Results from the two methods proposed in this paper are presented in the following two sub-sections.

Figure 5 Test Mean Squared Error Plot

A. Fixed Time Horizon Predictions

The first method uses the proposed RNN architecture and algorithm to predict for the 1-hour, 2-hour, 3-hour and 4-hour time horizons independently. In other words, four independent models are developed for 1-hour, 2-hour, 3-hour and 4-hour predictions for each of the seven SURFRAD sites. Figure 5 shows the test loss over 1000 epochs (with a batch size of 100 and a test set of 3300 samples from the test year 2009) for all seven sites. The RMSE values in Table 1 show that the proposed architecture and algorithm achieve lower RMSE values for all four forecasting horizons and all seven sites, compared to the best RMSE values reported in [29] from a suite of other machine learning algorithms (Random Forests, Support Vector Machine, Gradient Boosting and vanilla Feed-Forward).

B. Multi-time-horizon prediction

In this method, the architecture predicts for all four forecasting time horizons (1-hour, 2-hour, 3-hour, and 4-hour) in parallel; i.e., one model per SURFRAD site is developed which makes predictions for all four time horizons. This method is the multi-time-horizon implementation of the proposed architecture. None of the methods discussed in the literature have been shown to be capable of producing multi-time-horizon predictions. Table 2 lists the RMSE values obtained for the test years 2009, 2015, 2016 and 2017. To quantify the overall performance of the predictive model in terms of its combined forecasting accuracy for all four forecasting horizons, the mean of the RMSE values is calculated. Although even the best RMSE values reported in the literature (for example in [29][32]) were for a single time-horizon forecast at a time, the proposed method achieves a significantly lower RMSE in predictions across all the short-term (1 hour to 4 hours) forecasting time-horizons, as seen in Table 2.

The capability to predict for multiple time horizons makes the proposed method very relevant for industry applications. Real-time data can be fed to the RNN and, due to its low forward inference time, predictions can be made for multiple time horizons. The proposed method is implemented using PyTorch, and the code and additional information can be found on this site: http://sakshi-mishra.github.io/.
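For readers who want a feel for what such an implementation looks like, the following is a minimal PyTorch sketch of the unified architecture. It is illustrative only and is not the authors' released code: the dimensions follow Section IV.C (22 input features, 15 hidden units, 1 recurrent layer, 4 output horizons), the ReLU recurrent activation follows Section III.A, and the MSE loss with stochastic gradient descent follows Section III.B. The learning rate, sequence length, and dummy tensors are assumptions made only for illustration.

```python
import torch
import torch.nn as nn

class UnifiedSolarRNN(nn.Module):
    """Sketch of a single model that outputs all four forecast horizons."""
    def __init__(self, input_dim=22, hidden_dim=15, layer_dim=1, output_dim=4):
        super().__init__()
        # Elman-style RNN with ReLU activation, mirroring Equations (4)-(5)
        self.rnn = nn.RNN(input_dim, hidden_dim, num_layers=layer_dim,
                          nonlinearity="relu", batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        # x: (batch, seq_len, input_dim)
        out, _ = self.rnn(x)
        # the last time step's hidden state predicts all four horizons at once
        return self.fc(out[:, -1, :])

model = UnifiedSolarRNN()
criterion = nn.MSELoss()                                   # Equation (7)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)   # lr is illustrative

# One training step on a dummy mini-batch (batch size 100, as in Section IV.C;
# a sequence length of 3 is chosen here only for illustration). Calling
# backward() on the loss realizes backpropagation through time via autograd.
features = torch.randn(100, 3, 22)
targets = torch.randn(100, 4)
optimizer.zero_grad()
loss = criterion(model(features), targets)
loss.backward()
optimizer.step()
```

At inference time, a single forward call on the most recent feature sequence returns predictions for all four horizons simultaneously, which is consistent with the millisecond-scale forward inference time noted above.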
VI. CONCLUSION AND FUTURE WORK

Short-term solar forecasting is of great importance for optimizing the operational efficiencies of smart grids, as the uncertainties in power systems are ever-increasing, spanning from the generation arena to the demand-side domain. A number of methods and applications have been developed for solar forecasting, with some level of predictive success. The main limitation of the approaches developed so far is their specificity to a given temporal and/or spatial resolution. For predictive analysis problems, the field of AI has become promising with the recent advances in optimization techniques, parallelism, and GPUs. AI (especially deep neural networks) thrives on data, and with the decreasing cost of sensor and measurement equipment, a plethora of solar data is becoming available.

Data availability is only going to keep increasing in the coming years. The proposed novel Unified Recurrent Neural Network Architecture harnesses the power of AI to form a high-fidelity solar forecasting engine. This architecture has the potential to be implemented as a complete forecasting system, spanning the entire spectrum of spatial and temporal horizons, with the capability to take real-time data as input to produce multi-time-scale (intra-hour, hourly and day-ahead) predictions. In addition, the proposed algorithm outperforms traditional machine learning methods in terms of the quality of the forecast, and its low forward inference time makes it a robust real-time solar forecasting engine.

Although a deeper neural network would have more capacity, we experimentally observed that it leads to higher variance in the model and therefore reduced generalization power for the particular problem dealt with in this paper. The performance of the proposed method can be further improved in several ways, including hyper-parameter tuning and architectural changes such as the activation functions used or the type of layers. Extending the proposed architecture with LSTM cells and to intra-hour forecasting horizons are potential future research avenues in this domain.
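As a purely illustrative sketch of the LSTM extension mentioned above (it was not part of the reported experiments), the recurrent layer in the earlier PyTorch sketch could be swapped for an LSTM cell while keeping the rest of the pipeline unchanged:

```python
import torch
import torch.nn as nn

# Hypothetical LSTM variant of the earlier sketch; same dimensions, not
# evaluated in this paper.
class UnifiedSolarLSTM(nn.Module):
    def __init__(self, input_dim=22, hidden_dim=15, layer_dim=1, output_dim=4):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, num_layers=layer_dim,
                            batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        out, _ = self.lstm(x)  # gated memory cells replace the ReLU RNN cell
        return self.fc(out[:, -1, :])
```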
Table 1
Method 1 (Fixed Time Horizon Predictions) Results, Test Year 2009

F.H.      | Bondville    | Boulder      | Desert Rock  | Fort Peck    | Goodwin Creek | Penn State   | Sioux Falls
          | RNN    ML    | RNN    ML    | RNN    ML    | RNN    ML    | RNN    ML     | RNN    ML    | RNN    ML
1-hour    | 16.8   62    | 17     74    | 41.7   52    | 21.2   56    | 24.8   71     | 8.64   67    | 27.2   52
2-hour    | 20.73  98    | 20.7   108   | 57.23  72    | 29.7   81    | 25.2   103    | 10.5   97    | 32.1   81
3-hour    | 18.78  116   | 21.2   123   | 60.54  83    | 25.5   94    | 26.9   125    | 11.8   114   | 30.6   96
4-hour    | 17.98  121   | 22.9   125   | 49.71  82    | 29.4   93    | 22     120    | 10.7   117   | 35.3   103
Mean RMSE | 18.57  99.25 | 20.45  107.5 | 52.29  72.25 | 26.45  81    | 24.73  104.8  | 10.41  98.75 | 31.3   83

Table 2
Method 2 (Multi-time-horizon prediction) Results

Year | Forecast Horizon | Bondville | Boulder | Desert Rock | Fort Peck | Goodwin Creek | Penn State | Sioux Falls
2009 | 1-hour           | 0.706     | 0.593   | 0.469       | 0.709     | 0.647         | 0.679      | 0.686
     | 2-hour           | 1.119     | 1.055   | 0.984       | 1.352     | 1.173         | 1.134      | 1.247
     | 3-hour           | 4.524     | 4.010   | 4.527       | 6.311     | 4.586         | 3.161      | 5.061
     | 4-hour           | 52.130    | 49.453  | 153.270     | 69.987    | 58.984        | 28.396     | 73.409
     | Mean RMSE        | 14.619    | 13.779  | 39.812      | 19.589    | 16.347        | 8.342      | 20.100
2015 | 1-hour           | 0.717     | 0.559   | 0.437       | 0.689     | 0.643         | 0.734      | 0.680
     | 2-hour           | 1.166     | 1.026   | 0.952       | 1.302     | 1.190         | 1.180      | 1.252
     | 3-hour           | 4.719     | 4.006   | 4.268       | 5.845     | 4.764         | 3.118      | 5.216
     | 4-hour           | 57.839    | 49.349  | 85.952      | 57.687    | 35.814        | 25.796     | 50.823
     | Mean RMSE        | 16.110    | 13.735  | 22.902      | 16.381    | 10.603        | 7.707      | 14.493
2016 | 1-hour           | 0.731     | 0.593   | 0.456       | 0.726     | 0.669         | 0.738      | 0.723
     | 2-hour           | 1.184     | 1.079   | 0.979       | 1.372     | 1.204         | 1.187      | 1.331
     | 3-hour           | 4.639     | 4.227   | 4.685       | 6.406     | 4.760         | 3.174      | 5.781
     | 4-hour           | 69.837    | 89.990  | 90.553      | 73.523    | 45.026        | 21.994     | 56.113
     | Mean RMSE        | 19.098    | 23.973  | 24.169      | 20.507    | 12.915        | 6.773      | 15.987
2017 | 1-hour           | 0.744     | 0.593   | 0.444       | 0.726     | 0.649         | 0.711      | 0.721
     | 2-hour           | 1.190     | 1.059   | 0.949       | 1.368     | 1.179         | 1.149      | 1.323
     | 3-hour           | 4.577     | 4.010   | 4.319       | 6.245     | 4.593         | 3.211      | 5.715
     | 4-hour           | 81.434    | 49.453  | 44.589      | 114.072   | 32.431        | 29.307     | 59.761
     | Mean RMSE        | 21.986    | 13.779  | 12.575      | 30.603    | 9.713         | 8.594      | 16.879

REFERENCES

1. (2016) The RTO Insider website. [Online]. Available: https://www.rtoinsider.com/caiso-regulation-requirements-29299/
2. D. Lew, D. Piwko, N. Miller, G. Jordan, K. Clark, and L. Freeman, "How do high levels of wind and solar impact the grid? The western wind and solar integration study," National Renewable Energy Laboratory, CO, 303, 2010.
3. J. Taylor and P. McSharry, "Short-term load forecasting methods: An evaluation based on European data," IEEE Trans. Power Syst., vol. 22, no. 4, pp. 2213–2219, Nov. 2007.
4. A. Conejo, M. Plazas, R. Espinola, and A. Molina, "Day-ahead electricity price forecasting using the wavelet transform and ARIMA models," IEEE Trans. Power Syst., vol. 20, no. 2, pp. 1035–1042, May 2005.
5. J. Deng and P. Jirutitijaroen, "Short-term load forecasting using time series analysis: A case study for Singapore," in Proc. IEEE Conf. CIS, Jun. 2010, pp. 231–236.
6. H. Hippert, C. Pedreira, and R. Souza, "Neural networks for short-term load forecasting: A review and evaluation," IEEE Trans. Power Syst., vol. 16, no. 1, pp. 44–55, Feb. 2001.
7. A. da Silva and L. Moulin, "Confidence intervals for neural network based short-term load forecasting," IEEE Trans. Power Syst., vol. 15, no. 4, pp. 1191–1196, Nov. 2000.
8. R. Sood, I. Koprinska, and V. Agelidis, "Electricity load forecasting based on autocorrelation analysis," in Proc. IJCNN, Jul. 2010, pp. 1–8.
9. G. Capizzi, C. Napoli, and F. Bonanno, "Innovative second-generation wavelets construction with recurrent neural networks for solar radiation forecasting," IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 11, pp. 1805–1815, Nov. 2012.
10. Q. Yu, H. Tang, K. Tan, and H. Li, "Rapid feedforward computation by temporal encoding and learning with spiking neurons," IEEE Trans. Neural Netw. Learn. Syst., 2013, doi: 10.1109/TNNLS.2013.2245677.
11. C. Paoli, C. Voyant, M. Muselli, and M.-L. Nivet, "Forecasting of preprocessed daily solar radiation time series using neural networks," Solar Energy, vol. 84, no. 12, pp. 2146–2160, 2010.
12. A. Mellit and A. M. Pavan, "A 24-h forecast of solar irradiance using artificial neural network: Application for performance prediction of a grid-connected PV plant at Trieste, Italy," Solar Energy, vol. 84, no. 5, pp. 807–821, 2010.
13. C. Chen, S. Duan, T. Cai, and B. Liu, "Online 24-h solar power forecasting based on weather type classification using artificial neural network," Solar Energy, vol. 85, no. 11, pp. 2856–2870, 2011.
14. H. T. C. Pedro and C. F. M. Coimbra, "Assessment of forecasting techniques for solar power production with no exogenous inputs," Solar Energy, vol. 86, no. 7, pp. 2017–2028, 2012.
15. A. Khosravi, S. Nahavandi, D. Creighton, and D. Srinivasan, "Interval type-2 fuzzy logic systems for load forecasting: A comparative study," IEEE Trans. Power Syst., vol. 27, no. 3, pp. 1274–1282, Aug. 2012.
16. J. R. Castro, O. Castillo, P. Melin, O. Mendoza, and A. Rodríguez-Díaz, "An interval type-2 fuzzy neural network for chaotic time series prediction with cross-validation and Akaike test," in Soft Computing for Intelligent Control and Mobile Robotics. New York, NY, USA: Springer-Verlag, 2011, pp. 269–285.
17. D. Hidalgo, P. Melin, and O. Castillo, "An optimization method for designing type-2 fuzzy inference systems based on the footprint of uncertainty using genetic algorithms," Expert Syst. Appl., vol. 39, no. 4, pp. 4590–4598, 2012.
18. P. Mathiesen and J. Kleissl, "Evaluation of numerical weather prediction for intra-day solar forecasting in the continental United States," Solar Energy, vol. 85, pp. 967–977, May 2011.
19. J. Polo, L. Zarzalejo, L. Ramírez, and V. Badescu, "Solar radiation derived from satellite images," in Modeling Solar Radiation at the Earth's Surface, vol. 18. Berlin: Springer-Verlag, 2008.
20. P. Melin, J. Soto, O. Castillo, and J. Soria, "A new approach for time series prediction using ensembles of ANFIS models," Expert Syst. Appl., vol. 39, no. 3, pp. 3494–3506, 2012.
21. C. Potter and M. Negnevitsky, "Very short-term wind forecasting for Tasmanian power generation," IEEE Trans. Power Syst., vol. 21, no. 2, pp. 965–972, May 2006.
22. L. F. Zarzalejo, L. Ramirez, and J. Polo, "Artificial intelligence techniques applied to hourly global irradiance estimation from satellite-derived cloud index," Energy, vol. 30, no. 9, pp. 1685–1697, 2005.
23. M. Noia, C. F. Ratto, and R. Festa, "Solar irradiance estimation from geostationary satellite data: I. Statistical models," Solar Energy, vol. 51, pp. 449–456, Dec. 1993.
24. M. Noia, C. F. Ratto, and R. Festa, "Solar irradiance estimation from geostationary satellite data: II. Physical models," Solar Energy, vol. 51, pp. 457–465, Dec. 1993.
25. R. H. Inman, H. T. C. Pedro, and C. F. M. Coimbra, "Solar forecasting methods for renewable energy integration," Progress in Energy and Combustion Science, vol. 39, pp. 535–576, Dec. 2013.
26. Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, pp. 436–444, May 2015.
27. A. Graves, Supervised Sequence Labelling with Recurrent Neural Networks, Studies in Computational Intelligence, vol. 385. Berlin Heidelberg: Springer-Verlag, 2012.
28. P. J. Werbos, "Backpropagation through time: What it does and how to do it," Proceedings of the IEEE, vol. 78, pp. 1550–1560, 1990.
29. A. Dobbs, "Short-term solar forecasting performance of popular machine learning algorithms," National Renewable Energy Laboratory, Golden, CO, Aug. 2017.
30. R. E. Bird and R. L. Hulstrom, "Simplified clear sky model for direct and diffuse insolation on horizontal surfaces," Solar Energy Research Inst., Golden, CO, 2013.
31. NOAA Surface Radiation Network (SURFRAD). [Online]. Available: https://www.esrl.noaa.gov/gmd/grad/surfrad/
32. R. Perez, S. Kivalov, J. Schlemmer, K. Hemker Jr., D. Renne, and T. E. Hoff, "Validation of short and medium term operational solar radiation forecasts in the US," Solar Energy, vol. 84, no. 12, pp. 2161–2172, 2010.
