Hydrologic Applications of MRAN Algorithm
Geremew G. Amenu¹; Momcilo Markus, M.ASCE²; Praveen Kumar, M.ASCE³; and Misganaw Demissie, M.ASCE⁴

Abstract: Applications of artificial neural networks in simulation and forecasting of hydrologic systems have a long record and generally promising results. Most of the earlier applications were based on the back-propagation (BP) feed-forward method, which uses a trial-and-error procedure to determine the final network parameters. The minimal resource allocation network (MRAN) is an on-line adaptive method that automatically configures the number of hidden nodes based on the input–output patterns presented to the network. Numerous MRAN applications in various fields, such as system identification and signal processing, have demonstrated the flexibility of the MRAN approach and higher or similar accuracy with more compact networks compared to other learning algorithms. This research introduces MRAN and assesses its performance in hydrologic applications. The technique was applied to an agricultural watershed in central Illinois to predict daily runoff and nitrate–nitrogen concentration, and the predictions were more accurate than those of the BP model.

DOI: 10.1061/(ASCE)1084-0699(2007)12:1(124)

CE Database subject headings: Hydrologic aspects; Algorithms; Neural networks; Illinois; Watershed management.

Introduction

Since the early 1990s, many hydrologic modeling applications have used artificial neural networks (ANNs) for simulation, forecasting, and classification. A comprehensive review and discussion of the hydrologic applications of ANNs can be found in the reports of the ASCE Task Committee on Applications of ANNs in Hydrology (ASCE 2000a,b). Most hydrologic applications of ANNs used multilayer feed-forward neural networks with sigmoidal activation functions. A widely used algorithm for training such networks is the back-propagation (BP) algorithm (Rumelhart et al. 1986; Salas et al. 2000). This technique requires a priori knowledge of the number of hidden layers and neurons in each layer, i.e., the complete architecture of the network must be defined in advance. The usual strategy to determine the number of hidden layers and neurons is based solely on a trial-and-error procedure, which can be very time-consuming and requires significant user experience.

Another class of artificial neural networks used in hydrologic applications is the radial basis function (RBF) neural network (Govindaraju and Zhang 2000). In terms of network topology, RBF networks can be viewed as a special case of multilayer feed-forward networks with a single hidden layer. In terms of activation functions and training algorithms, however, RBF networks are very different from multilayer feed-forward networks and can be considered a distinct ANN class. RBF networks use radially symmetrical basis activation functions (usually Gaussian) as opposed to the sigmoidal functions of BP networks. Many training algorithms developed for RBF networks improve their speed of convergence compared with the BP algorithm. However, as with BP networks, most algorithms traditionally used for training RBF neural networks require the number of hidden units to be fixed a priori by a trial-and-error procedure.

Researchers in different disciplines have tried to develop alternative techniques that overcome the drawbacks of the commonly used classical algorithms for training ANNs. The main focus has been on avoiding the costly trial-and-error procedure of assigning the number of hidden neurons while obtaining a compact network structure. The minimal resource allocation network (MRAN) algorithm (Yingwei et al. 1998) is one such technique developed for training RBF neural networks. The MRAN algorithm is a sequential learning algorithm for Gaussian RBF networks that assigns the number of hidden units automatically based on growth criteria, while at the same time achieving a minimal network structure by pruning inactive hidden neurons.

The MRAN algorithm has been applied in many areas, such as signal processing (Jianping et al. 2000), nonlinear system identification (Yan et al. 2000), function approximation (Yingwei et al. 1998), and pattern recognition (Yingwei et al. 1998). Those applications indicated that the MRAN algorithm is simpler, more accurate, and less time-consuming than traditional network training algorithms. This research introduces the MRAN algorithm to the hydrology and water resources community with the following specific objectives: (1) investigate the ways and tools for improved real-time hydrologic forecasting; (2) test MRAN accuracy for real-time hydrologic forecasting; and (3) provide guidelines for other similar applications.

¹Graduate Research Assistant, Environmental Hydrology and Hydraulic Engineering, Dept. of Civil and Environmental Engineering, Univ. of Illinois at Urbana-Champaign, Urbana, IL 61801.
²Hydrologist, Watershed Science Section, Illinois State Water Survey, Champaign, IL 61820.
³Professor, Environmental Hydrology and Hydraulic Engineering, Dept. of Civil and Environmental Engineering, Univ. of Illinois at Urbana-Champaign, Urbana, IL 61801.
⁴Section Head, Watershed Science Section, Illinois State Water Survey, Champaign, IL 61820.
Note. Discussion open until June 1, 2007. Separate discussions must be submitted for individual papers. To extend the closing date by one month, a written request must be filed with the ASCE Managing Editor. The manuscript for this technical note was submitted for review and possible publication on May 5, 2004; approved on May 19, 2006. This technical note is part of the Journal of Hydrologic Engineering, Vol. 12, No. 1, January 1, 2007. ©ASCE, ISSN 1084-0699/2007/1-124–129/$25.00.

124 / JOURNAL OF HYDROLOGIC ENGINEERING © ASCE / JANUARY/FEBRUARY 2007

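The trial-and-error sizing of BP networks criticized above can at least be made systematic by scanning candidate hidden-layer sizes against a validation set. The sketch below is illustrative only; `train_bp` is a hypothetical callable standing in for any BP training routine, not something defined in this note:

```python
import numpy as np

def select_hidden_size(train_bp, candidates, X_tr, y_tr, X_val, y_val):
    """Naive architecture search for a BP network: train one model per
    candidate hidden-layer size and keep the lowest validation RMSE.
    `train_bp(X, y, hidden=h)` is assumed to return a predict(X) callable."""
    best_h, best_err, best_fn = None, np.inf, None
    for h in candidates:
        predict = train_bp(X_tr, y_tr, hidden=h)
        err = float(np.sqrt(np.mean((predict(X_val) - y_val) ** 2)))
        if err < best_err:
            best_h, best_err, best_fn = h, err, predict
    return best_h, best_err, best_fn
```

Even automated this way, each candidate requires a full training run, which is the cost MRAN avoids by growing the network on-line.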

The MRAN Algorithm

A typical Gaussian RBF network structure with a multiple input–single output (MISO) system is shown in Fig. 1. There are five main types of parameters in the network: the number of hidden neurons (h), the center positions (μ) of all hidden neurons, the widths (σ) of the Gaussian functions, the connection weights (α) between the hidden and output layers, and the bias (α₀). These parameters are determined using the MRAN learning algorithm. The MRAN algorithm described in the following is for a MISO system as shown in Fig. 1, and can be modified accordingly for multiple-output systems.

Fig. 1. Structure of Gaussian RBF neural networks

The activation function of the hidden units is radially symmetrical in the input space, and the output from each hidden unit depends only on the radial distance between the input vector x and the center parameter μ of that hidden unit. The response of each hidden unit is scaled by its connection weight α to the output units. The overall output f(x) from the network is a linear combination of the outputs from its hidden units, and is expressed as

  f(x) = \alpha_0 + \sum_{k=1}^{h} \alpha_k \phi_k(x)   (1)

  \phi_k(x) = \exp\left( -\frac{\lVert x - \mu_k \rVert^2}{\sigma_k^2} \right)   (2)

where φ_k(x) = response (activation function) of the kth hidden unit; α_k = connection weight between the kth hidden unit and the output unit; α₀ = bias term; μ_k = center vector of the kth hidden unit; σ_k = width of the Gaussian function of the kth hidden unit; and ‖·‖ = Euclidean norm.

The MRAN learning process involves allocating new hidden neurons and adjusting the network parameters (Yingwei et al. 1998). The network initially has no hidden units, and, as new observations are received, the network grows based on selected growth criteria. The following three criteria are used to determine whether a newly received input–output observation (x_i, y_i) at the ith instant should give rise to a new hidden unit:

  \lVert x_i - \mu_{nr} \rVert > \varepsilon_1   (3)

  \lvert e_i \rvert = \lvert y_i - f(x_i) \rvert > \varepsilon_2   (4)

  e_{rms} = \sqrt{ \frac{ \sum_{j=i-(s_w-1)}^{i} e_j^2 }{ s_w } } > \varepsilon_3   (5)

where ε₁, ε₂, and ε₃ = thresholds to be selected appropriately; μ_nr = center vector of the hidden unit closest to the input vector x_i; and s_w = user-selected sliding data window. The first criterion compares the distance between the new observation and all the existing nodes, and the second criterion determines if the existing nodes are insufficient to produce a reasonable network output. The third criterion checks whether the network meets the required sum-squared-error specification for the past s_w network outputs, and is used to control the effect of noise from overfitting the hidden neurons. The algorithm begins with ε₁ = ε₁max, the largest scale of interest in the input space, and decays until it reaches ε₁min according to ε₁ = max{ε₁max γⁱ, ε₁min}, where 0 < γ < 1 = decay constant.

When the above three criteria are satisfied, a new hidden unit is added to the network with associated parameters α_{h+1} = e_i, μ_{h+1} = x_i, and σ_{h+1} = κ‖x_i − μ_nr‖, where κ = factor that determines the overlap of the responses of the hidden units in the input space. When the observation (x_i, y_i) does not meet the three criteria for adding a new hidden unit, the network parameters are adjusted using the extended Kalman filter (EKF) to fit that observation. The total network parameters at the ith instant are represented by the vector w_i = [α₀, α₁, μ₁ᵀ, σ₁, …, α_h, μ_hᵀ, σ_h]ᵀ, where μ represents column vectors. The EKF approach obtains the posterior estimate w_i from its prior estimate w_{i−1} and prior error covariance estimate P_{i−1} as follows:

  w_i = w_{i-1} + K_i e_i   (6)

  K_{i (z \times 1)} = P_{i-1} B_i \left[ R_i + B_i^T P_{i-1} B_i \right]^{-1}   (7)

  P_{i (z \times z)} = \left( I_{(z \times z)} - K_i B_i^T \right) P_{i-1} + q_0 I_{(z \times z)}   (8)

where K_i = Kalman gain vector; R_i = variance of the measurement noise; B_i = gradient of the function f(x_i) with respect to the parameter vector w_i evaluated at w_{i−1}; I_(z×z) = identity matrix; q₀ = scalar coefficient; and z = total number of network parameters to be adjusted. When a new hidden unit is allocated, the dimensions of P_i increase with new rows and columns, in which case P_i becomes

  P_i = \begin{bmatrix} P_{i-1} & 0 \\ 0 & p_0 I_{(z_1 \times z_1)} \end{bmatrix}   (9)

where p₀ = estimate of the uncertainty in the initial values assigned to the parameters and z₁ = number of new parameters introduced by the addition of the new hidden unit. For the structure in Fig. 1, B_i becomes

  B_{i (z \times 1)} = \left[ 1, \; \phi_1(x_i), \; \phi_1(x_i) \frac{2\alpha_1}{\sigma_1^2} (x_i - \mu_1)^T, \; \phi_1(x_i) \frac{2\alpha_1}{\sigma_1^3} \lVert x_i - \mu_1 \rVert^2, \; \ldots, \; \phi_h(x_i), \; \phi_h(x_i) \frac{2\alpha_h}{\sigma_h^2} (x_i - \mu_h)^T, \; \phi_h(x_i) \frac{2\alpha_h}{\sigma_h^3} \lVert x_i - \mu_h \rVert^2 \right]^T   (10)

If the network is allowed to grow only according to the above-described growth criteria, some hidden neurons, although active initially, may subsequently end up contributing little to the network output.
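The growth test and EKF refinement of Eqs. (1)–(10) can be collected into a short sequential-learning sketch. This is an illustrative reading of the algorithm, not the authors' implementation: the measurement-noise variance R, the width given to the very first unit (which has no nearest neighbor), and all default parameter values are assumptions, and the pruning step described in the following is omitted for brevity.

```python
import numpy as np

class MRANSketch:
    """Minimal MRAN-style sequential learner for a MISO Gaussian RBF network."""

    def __init__(self, dim, eps1_max=4.0, eps1_min=3.0, gamma=0.9,
                 eps2=0.05, eps3=0.2, kappa=0.98, q0=0.01, p0=1.0, sw=50):
        self.dim = dim
        self.eps1_max, self.eps1_min, self.gamma = eps1_max, eps1_min, gamma
        self.eps2, self.eps3, self.kappa = eps2, eps3, kappa
        self.q0, self.p0, self.sw = q0, p0, sw
        self.alpha0 = 0.0                       # bias term
        self.alpha, self.mu, self.sigma = [], [], []
        self.P = np.array([[p0]])               # EKF covariance (bias only)
        self.errors = []                        # sliding error window, Eq. (5)
        self.i = 0                              # observation counter

    def phi(self, x):                           # Gaussian responses, Eq. (2)
        return np.array([np.exp(-np.sum((x - m) ** 2) / s ** 2)
                         for m, s in zip(self.mu, self.sigma)])

    def predict(self, x):                       # linear combination, Eq. (1)
        if not self.alpha:
            return self.alpha0
        return self.alpha0 + float(self.phi(x) @ np.array(self.alpha))

    def observe(self, x, y):
        """Grow the network (Eqs. 3-5) or refine parameters by EKF (Eqs. 6-8)."""
        self.i += 1
        x = np.asarray(x, float)
        e = y - self.predict(x)
        self.errors.append(e)
        erms = float(np.sqrt(np.mean(np.square(self.errors[-self.sw:]))))
        eps1 = max(self.eps1_max * self.gamma ** self.i, self.eps1_min)
        dist = (min(np.linalg.norm(x - m) for m in self.mu)
                if self.mu else np.inf)
        if dist > eps1 and abs(e) > self.eps2 and erms > self.eps3:
            self.alpha.append(e)                # alpha_{h+1} = e_i
            self.mu.append(x.copy())            # mu_{h+1} = x_i
            self.sigma.append(self.kappa * dist if np.isfinite(dist)
                              else 1.0)         # first-unit width: assumption
            z1, z = 2 + self.dim, self.P.shape[0]
            self.P = np.block([[self.P, np.zeros((z, z1))],
                               [np.zeros((z1, z)),
                                self.p0 * np.eye(z1)]])   # Eq. (9)
        else:
            self._ekf_update(x, e)
        return e

    def _ekf_update(self, x, e, R=1.0):
        B = [1.0]                               # gradient vector B_i, Eq. (10)
        for a, m, s in zip(self.alpha, self.mu, self.sigma):
            ph = np.exp(-np.sum((x - m) ** 2) / s ** 2)
            B.append(ph)
            B.extend(ph * (2 * a / s ** 2) * (x - m))
            B.append(ph * (2 * a / s ** 3) * np.sum((x - m) ** 2))
        B = np.array(B).reshape(-1, 1)
        K = self.P @ B / (R + float(B.T @ self.P @ B))    # Eq. (7)
        self._unpack(self._pack() + (K * e).ravel())      # Eq. (6)
        z = self.P.shape[0]
        self.P = (np.eye(z) - K @ B.T) @ self.P + self.q0 * np.eye(z)  # Eq. (8)

    def _pack(self):
        w = [self.alpha0]
        for a, m, s in zip(self.alpha, self.mu, self.sigma):
            w += [a, *m, s]
        return np.array(w)

    def _unpack(self, w):
        self.alpha0 = float(w[0])
        j = 1
        for k in range(len(self.alpha)):
            self.alpha[k] = float(w[j]); j += 1
            self.mu[k] = w[j:j + self.dim]; j += self.dim
            self.sigma[k] = float(w[j]); j += 1
```

Feeding observations one at a time, the network adds a unit for each novel, poorly fit input and otherwise nudges the existing parameters, which is the adaptive behavior exploited in the hydrologic experiments below.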


A more compact network topology can be realized by removing inactive hidden units as learning progresses. This is achieved in the MRAN algorithm by incorporating a pruning strategy, which checks the weight of each hidden unit and removes those units with weights below a certain threshold. In this approach, first, the outputs from each hidden unit are normalized with respect to the maximum value of the outputs of all the hidden units according to

  r_k = \left\lvert \frac{O_k}{O_{max}} \right\rvert = \frac{ \lvert O_k \rvert }{ \max\{ \lvert O_1 \rvert, \lvert O_2 \rvert, \ldots, \lvert O_k \rvert, \ldots, \lvert O_h \rvert \} }   (11)

  O_k = \alpha_k \exp\left( -\frac{1}{\sigma_k^2} \lVert x_i - \mu_k \rVert^2 \right)   (12)

where O_k = output of the kth hidden unit and r_k = its normalized output value. Then, the normalized value of each hidden neuron is compared with a user-defined threshold δ. If the normalized value falls below this threshold for s_w consecutive observations, the kth hidden neuron makes an insignificant contribution to the network output and is removed from the network. Consequently, the EKF dimensionality is updated to suit the reduced network.

The MRAN learning algorithm is summarized by the flowchart given in Fig. 2.

Fig. 2. Flowchart of MRAN algorithm

Model Application

Study Area Description and Data Preprocessing

Accurate prediction of streamflow and water quality is of great value for agricultural watersheds in north–central Illinois (Markus et al. 2003; Suen and Eheart 2003; Yu et al. 2004). In this study, the performance of the MRAN algorithm is evaluated using hydrometeorological and water quality data sets from an agriculture-dominated watershed in central Illinois, the Upper Sangamon River Basin, which has a drainage area of about 2,400 km². Daily data on nitrate concentration, discharge, precipitation, and temperature were available for the basin for 6 years (1993–1999). These variables have different orders of magnitude, and to reduce the effect of magnitude on the system input, the data of each variable are standardized by subtracting the mean from each value and dividing the result by the standard deviation of the data. Here, we apply MRAN to predict runoff and nitrate-N concentration at the outlet of the Upper Sangamon River Basin. Both the runoff and nitrate-N data are highly nonnormal and nonstationary, and traditional autoregressive moving-average (ARMA) models would not be appropriate.

Because ANNs are data-driven models, the choice of input data for the network is very important to obtain acceptable output in a reasonable time. Unlike physically based models, the ANN input variables that influence system outputs are not well known a priori (ASCE 2000b; Sudheer et al. 2002). In this study, the input variables for each model and the associated time lags for each input variable were determined through correlation analysis of the available data sets, such that variables with good correlation with the output variable were chosen as the network inputs. The time lags corresponding to maximum correlations were adopted for each chosen input variable. The correlations of temperature with both runoff and nitrate-N were insignificant; thus, temperature was not used in any of the models. Similarly, the correlation between precipitation and nitrate-N is negligible, and precipitation was excluded from the inputs to the nitrate prediction model. Fig. 3 shows the correlation curves between runoff and precipitation and between nitrate and runoff data sets.

Fig. 3. Correlation curves between runoff (Q) and precipitation (P) and between nitrate (N) and runoff

Runoff Forecasting

Streamflow forecasting is a key component of water resources management, and yet accurate prediction is a difficult task for hydrologists due to (1) the complex physical processes involved; (2) the nonlinearity of the system; (3) the spatial and temporal variability of the variables involved; and (4) the difficulty of measuring all the variables influencing the process. In some applications, ANN-based forecasting could potentially be more accurate than physically based models (ASCE 2000b; Tokar and Markus 2000). Here, the MRAN algorithm is applied to predict runoff for different lead times based on antecedent precipitation and runoff observations.

A qualitative observation of the runoff–precipitation correlation curve (Fig. 3) shows higher correlation at time lags of 1–3 days. Accordingly, precipitation data at times (t−1), (t−2), and (t−3) are chosen to predict the discharge at time t.

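The standardization and lag selection just described amount to a small preprocessing step. A sketch for the runoff model's input layout follows; the function names and the assumption that `Q` and `P` are aligned daily arrays are illustrative:

```python
import numpy as np

def standardize(x):
    """Subtract the mean and divide by the standard deviation (the paper's scheme)."""
    x = np.asarray(x, float)
    return (x - x.mean()) / x.std()

def runoff_inputs(Q, P):
    """Assemble the five-dimensional input (Q_{t-1}, Q_{t-2}, P_{t-1},
    P_{t-2}, P_{t-3}) and the target Q_t from standardized daily series."""
    Q, P = standardize(Q), standardize(P)
    t = np.arange(3, len(Q))            # first day with all three lags available
    X = np.column_stack([Q[t - 1], Q[t - 2], P[t - 1], P[t - 2], P[t - 3]])
    return X, Q[t]
```

The nitrate model's input matrix would be built the same way, with lags of 1–3 days on nitrate-N and 7–9 days on discharge.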
The discharges at times (t−1) and (t−2) are also included in the input for predicting runoff at time t. Thus, the model has a five-dimensional input (Q_{t−1}, Q_{t−2}, P_{t−1}, P_{t−2}, and P_{t−3}) and a one-dimensional output (Q_t). The modeling is performed for the whole period of available data, which is 6 years. Table 1 (column 2) gives the values of the user-defined parameters chosen for the model.

Table 1. Values of MRAN User-Defined Parameters

  Parameter   Runoff prediction   Nitrate-N prediction   Range of values
  ε₁max       4.00                3.50                   2.00–4.50
  ε₁min       3.00                3.00                   0.50–3.00
  γ           0.80                0.90                   0.80–1.00
  ε₂          0.02                0.05                   0.01–0.50
  ε₃          0.20                0.06                   0.02–0.50
  κ           0.99                0.98                   0.50–1.00
  q₀          0.005               0.50                   0.001–0.50
  p₀          0.95                0.90                   0.85–1.00
  δ           0.05                0.03                   0.001–0.50
  s_w         100                 50                     20–100

Fig. 4(a) (top) shows the evolution of the number of hidden neurons over time during the training of the runoff model for different forecast periods. Changes in the number of hidden neurons occur due to the adding and pruning capabilities of the MRAN algorithm. Whenever the number of available hidden neurons is insufficient to produce a reasonable output, i.e., when the discrepancy between computed and observed runoff exceeds the acceptable value [Eqs. (3)–(5)], the number of hidden neurons increases to reduce the error in the output. Similarly, hidden neurons that are inactive (or make an insignificant contribution to the output) are removed from the network to minimize the network architecture and, hence, reduce computational time. Generally, it is found that the network has a higher number of hidden neurons during high-flow seasons compared to low-flow seasons, indicating seasonality in the number of parameters.

Fig. 4. Evolution of the number of hidden neurons over time for different forecast periods for (a) the runoff and (b) the nitrate-N prediction models

Figs. 5(a–c) (left) show graphical comparisons between model-predicted and field-observed runoff values. The plots are shown only for 1 year (1998) out of the 6-year simulation period. The root-mean-square error (RMSE) and R² for the 1-day forecast are 13.0 m³/s and 0.93, respectively. The discrepancy between computed and observed runoff is generally lower during low-flow seasons compared to high-flow seasons. The corresponding BP model resulted in an RMSE of 14.0 m³/s and an R² of 0.93. It is difficult to predict both low- and high-flow magnitudes accurately with traditional modeling techniques that use the same number of parameters for both low and high flows. MRAN resulted in about 7% smaller RMSE than BP, which could be explained by the dynamic ability of the MRAN algorithm to change the network structure (and hence the number of model parameters) depending on the complexity of the relationship.

Nitrate Forecasting

Nonpoint source pollution, mainly from agricultural activities, can impair the quality of drinking water. In central Illinois, the nitrate-N concentration in rivers during high-flow seasons often exceeds the maximum limit of 10 mg/L set by the U.S. Environmental Protection Agency (USEPA 1991). As a result, several utilities are evaluating alternative solutions for reducing nitrate-N concentration during critical periods and for adopting a water supply planning strategy that uses predictions of nitrate-N concentrations.

The current study evaluates the performance of the MRAN algorithm for predicting nitrate-N concentration for the Sangamon River. The model uses past discharge and past nitrate-N concentration observations for predicting current nitrate-N. Nitrate-N attains maximum correlation with runoff at around 7- to 10-day time lags (Fig. 3). Thus, to predict the nitrate-N concentration (N_t) at time t, the model uses three inputs from past discharges (Q_{t−7}, Q_{t−8}, and Q_{t−9}). In addition, the model uses three inputs from past nitrate-N concentrations (N_{t−1}, N_{t−2}, and N_{t−3}). Overall, the model has six inputs (N_{t−1}, N_{t−2}, N_{t−3}, Q_{t−7}, Q_{t−8}, and Q_{t−9}) and one output (N_t). The optimal values of the user-defined parameters for this model are given in column 3 of Table 1.

Fig. 4(b) (bottom) shows the evolution of the number of hidden neurons for the nitrate-N case. In this case, the variation in the number of hidden neurons from one season to another is more evident than for the runoff prediction model discussed earlier. Further, the total number of hidden neurons in each season is smaller than that of the runoff prediction model. The smaller day-to-day fluctuation in nitrate-N compared to runoff may explain the resulting differences in the number of hidden neurons.

The graphical comparisons of the predicted and observed nitrate-N concentrations are given in Figs. 5(d–f) (shown for 1998 out of 6 years). The RMSE and R² values for the 1-day forecast are 0.57 mg/L and 0.98, respectively. The corresponding BP model resulted in an RMSE of 0.63 mg/L and an R² of 0.97. MRAN produced about 9% smaller RMSE than BP.
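The RMSE and R² figures quoted above are standard goodness-of-fit measures. For reference, one common way to compute them (here R² is taken as the coefficient of determination; the paper does not state its exact formula, so this is an assumption):

```python
import numpy as np

def rmse(obs, sim):
    """Root-mean-square error between observed and simulated series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def r2(obs, sim):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    ss_res = np.sum((obs - sim) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)
```

Because both series are dominated by high-flow events, RMSE weighs those events heavily, which is consistent with the larger discrepancies reported for high-flow seasons.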


Fig. 5. Typical 1-year plots of observed and 1-day (top) and 3-day (middle) MRAN-forecasted runoff (left) and nitrate-N (right) for the Upper Sangamon River. Shown at the bottom are the corresponding RMSE and correlation (R²) between observed and simulated runoff (left) and nitrate-N (right) for different forecast lengths.

User-Defined Parameters

The MRAN algorithm has two types of parameters: network (dynamic) parameters and user-defined (static) parameters. The network parameters are calculated by the algorithm, and they include the number of hidden units (h), the centers (μ) and widths (σ) of the hidden units, the weights (α), and the biases (α₀). These are dynamic in that their magnitudes and their total number are not constant, but rather change at each iteration. On the other hand, the user-defined parameters need to be defined a priori to run the model, and they include the thresholds for adding hidden neurons (ε₁, ε₂, and ε₃), the threshold for pruning hidden neurons (δ), various coefficients (γ, κ, p₀, and q₀), and the size of the sliding data window (s_w). User-defined parameters are static in that they remain constant throughout the iteration period.

One shortcoming of the MRAN algorithm is that there are no established guidelines for assigning values to the user-defined parameters. Generally, these parameters are specified through trial-and-error, and it is often difficult to estimate the initial values, which could be discouraging for less experienced users. Therefore, having guidelines for specifying these parameters is very important. In this study, preliminary guidelines are established based on the authors' experience with the algorithm in hydrologic applications.

To create guidelines for the user-defined parameters, a simple sensitivity analysis was performed for each parameter. Overall, the algorithm was found to be very sensitive to the thresholds (ε₁, ε₂, ε₃, and δ) and the EKF parameter q₀.


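The recommended ranges in column 4 of Table 1 translate directly into a calibration starting point. The dictionary below transcribes the table; the midpoint rule is one arbitrary way to pick initial values, not a recommendation from the paper:

```python
# Recommended ranges for the MRAN user-defined parameters (Table 1, column 4).
MRAN_PARAM_RANGES = {
    "eps1_max": (2.00, 4.50),   # largest scale of interest, epsilon_1max
    "eps1_min": (0.50, 3.00),   # smallest scale, epsilon_1min
    "gamma":    (0.80, 1.00),   # decay constant
    "eps2":     (0.01, 0.50),   # instantaneous-error threshold
    "eps3":     (0.02, 0.50),   # windowed RMS-error threshold
    "kappa":    (0.50, 1.00),   # overlap factor for new hidden units
    "q0":       (0.001, 0.50),  # EKF process-noise coefficient
    "p0":       (0.85, 1.00),   # initial uncertainty of new parameters
    "delta":    (0.001, 0.50),  # pruning threshold
    "sw":       (20, 100),      # sliding-window length
}

def initial_guess(ranges=MRAN_PARAM_RANGES):
    """Midpoint of each recommended range as a first guess before calibration."""
    return {name: (lo + hi) / 2 for name, (lo, hi) in ranges.items()}
```

From such a starting point, each parameter can then be tuned one at a time within its range, mirroring the sensitivity analysis described above.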
Based on the results of the sensitivity analysis, the authors' experience, and values adopted in previous studies (Sundararajan et al. 1999), the authors recommend the ranges of parameter values given in column 4 of Table 1. These values could be used as initial values for calibration in similar applications of the MRAN model.

Conclusions

This study introduced the minimal resource allocation network (MRAN) algorithm, which demonstrated the potential for overcoming shortcomings of the traditional back-propagation algorithm. The performance of the MRAN algorithm was evaluated for hydrologic applications through case studies. The algorithm was applied to the Sangamon River watershed in central Illinois for predicting river discharge and nitrate-N concentration. Compared with the back-propagation method, MRAN reduced the RMSE by 7% in forecasting daily discharge and by 9% in 1-day nitrate-N forecasting. The advantage of MRAN lies in its flexibility to adjust the model parameters online.

Guidelines were established for the user-defined parameters. Although the user-defined parameters depend on the type of data and the complexity of the relationship, these guidelines could be useful in similar applications.

The applications of the MRAN algorithm also demonstrated that this technique can automatically detect changes in the complexity of relationships over time. For more complex relationships, the algorithm allocates more hidden nodes, and vice versa. Fig. 4 shows that the method allocates more hidden nodes during high-flow and high-nitrate-N seasons, when the system is more complex.

Further applications of the algorithm to different geographical regions with different watershed sizes will validate the above-mentioned conclusions.

Acknowledgments

The writers appreciate the cooperation of Ms. Laura Keefer, Hydrologist at the Illinois State Water Survey, for providing the data. Partial support for the research was provided by NSF Grant No. EAR 02-08009, NOAA Grant No. NA03OAR4310070, and an NCSA Faculty Fellowship. The writers gratefully acknowledge the help and valuable suggestions of Dr. N. Sundararajan and Dr. P. Saratchandran, Nanyang Technological University, Singapore.

References

ASCE Task Committee. (2000a). "Artificial neural networks in hydrology. I: Preliminary concepts." J. Hydrol. Eng., 5(2), 115–123.
ASCE Task Committee. (2000b). "Artificial neural networks in hydrology. II: Hydrologic applications." J. Hydrol. Eng., 5(2), 124–137.
Govindaraju, R. S., and Zhang, B. (2000). "Radial basis function networks." Artificial neural networks in hydrology, R. S. Govindaraju and A. R. Rao, eds., Kluwer Academic, Dordrecht, The Netherlands, 93–109.
Jianping, D., Sundararajan, N., and Saratchandran, P. (2000). "Complex-valued minimal resource allocation network for non-linear signal processing." Int. J. Neural Syst., 10(2), 95–106.
Markus, M., Tsai, C. W.-S., and Demissie, M. (2003). "Uncertainty of weekly nitrate-nitrogen forecasts using artificial neural networks." J. Environ. Eng., 129(3), 267–274.
Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986). "Learning representations by back-propagating errors." Nature, 323(6088), 533–536.
Salas, J. D., Markus, M., and Tokar, A. S. (2000). "Streamflow forecasting based on artificial neural networks." Artificial neural networks in hydrology, R. S. Govindaraju and A. R. Rao, eds., Kluwer Academic, Dordrecht, The Netherlands, 23–51.
Sudheer, K. P., Gosain, A. K., and Ramasastri, K. S. (2002). "A data-driven algorithm for constructing artificial neural network rainfall-runoff models." Hydrolog. Process., 16(6), 1325–1330.
Suen, J.-P., and Eheart, J. W. (2003). "Evaluation of neural networks for modeling nitrate concentrations in rivers." J. Water Resour. Plann. Manage., 129(6), 505–510.
Sundararajan, N., Saratchandran, P., and Yingwei, L. (1999). Radial basis function neural networks with sequential learning, MRAN and its applications, World Scientific, Singapore, 1–115.
Tokar, A. S., and Markus, M. (2000). "Precipitation-runoff modeling using artificial neural networks and conceptual models." J. Hydrol. Eng., 5(2), 156–161.
U.S. Environmental Protection Agency (USEPA). (1991). "National primary drinking water regulations; final rule. 40 CFR Parts 141, 142, and 143." Federal Register, 56(20), 3526–3597.
Yan, L., Sundararajan, N., and Saratchandran, P. (2000). "Nonlinear system identification using Lyapunov based fully tuned dynamic RBF networks." Neural Processing Letters, 12(3), 291–303.
Yingwei, L., Sundararajan, N., and Saratchandran, P. (1998). "Performance evaluation of a sequential minimal radial basis function (RBF) neural network learning algorithm." IEEE Trans. Neural Netw., 9(2), 308–318.
Yu, C., Northcott, W. J., and McIsaac, G. F. (2004). "Development of an artificial neural network for hydrologic and water quality modeling of agricultural watersheds." Trans. ASAE, 47(1), 285–290.

JOURNAL OF HYDROLOGIC ENGINEERING © ASCE / JANUARY/FEBRUARY 2007 / 129

View publication stats Downloaded 30 Nov 2008 to 134.71.59.187. Redistribution subject to ASCE license or copyright; see http://pubs.asce.org/copyright

You might also like