
CHAPTER 4

METHODOLOGY

4.1 ANALYTICAL METHOD

Over the years, empirical and analytical methods have been used to
calculate sediment deposition in reservoirs. One of the best methods
available in the literature, proposed by Garde et al (1978), is used in
this study to estimate the sediment deposition in the Vaigai reservoir.

This method is based on sedimentation studies of the nine
reservoirs listed below.

1. Bhakra (Punjab),

2. Panchet hill (Bihar),

3. Matatila (Uttar Pradesh),

4. Hirakud (Orissa),

5. Maithon (Bihar),

6. Mayurakshi (West Bengal),

7. Nizamsagar (Andhra Pradesh),

8. Wuchieh (Taiwan) and

9. Guernsey (USA).

This method is more practical (Garde et al 1987) and is an extremely useful technique, since it represents a procedure relying on the integration of the complex erosion, sediment transport and deposition processes in reservoirs. It makes use of the following data, which are normally available for any reservoir:

1. Initial capacity of the reservoir,

2. Annual rainfall,

3. Annual inflows into the reservoir,

4. Average initial bed slope, and

5. Width of the reservoir at FRL and other hydraulic details of the reservoir.

Using the above details, the inflow into the Vaigai reservoir and the sediment deposition due to this inflow are calculated as described below.

4.1.1 Inflow Generation

The available annual inflows (1959-1999) are used to generate the flows for subsequent years using Brittan's method by means of a Markov-chain model (Ven Te Chow 1964):

I_{t+1} = R I_t + (1 − R) Ī + σ (1 − R²)^{1/2} Z        (4.1)

where

I_{t+1} = generated water inflow in the (t+1)th year
I_t = annual water inflow in the tth year
Ī = mean annual inflow of the historic data
R = first order serial correlation coefficient of the historic inflow data
σ = standard deviation of the historic inflow data
Z = random normal deviate having zero mean and unit standard deviation

The first order serial correlation coefficient (R), the standard deviation (σ) and the random normal deviate (Z) are calculated for the inflow generation as follows.

(a) The first order serial correlation coefficient R is given by

R = [ (1/(N−1)) Σ X_i X_{i+1} − (1/(N−1))² (Σ X_i)(Σ X_{i+1}) ] /
    { [ (1/(N−1)) Σ X_i² − ((1/(N−1)) Σ X_i)² ]^{1/2} × [ (1/(N−1)) Σ X_{i+1}² − ((1/(N−1)) Σ X_{i+1})² ]^{1/2} }        (4.2)

where all sums run over i = 1 to N−1, and

X_i = historic annual inflow in the ith year
X_{i+1} = historic annual inflow in the (i+1)th year
N = number of years

(b) The standard deviation of the historic inflow data (σ) is

σ = { [ (1/(N−1)) Σ X_i² − ((1/(N−1)) Σ X_i)² ]^{1/2} × [ (1/(N−1)) Σ X_{i+1}² − ((1/(N−1)) Σ X_{i+1})² ]^{1/2} }^{1/2}        (4.3)

(c) The random normal deviate (Z) is found by the Box-Muller method from rectangularly distributed random numbers, using the equations

Z₁ = [−2 ln(1 − U₁)]^{1/2} cos(2π U₂)        (4.4)

Z₂ = [−2 ln(1 − U₁)]^{1/2} sin(2π U₂)        (4.5)

where U₁ and U₂ are uniformly distributed random numbers in the range (0, 1). Values of U₁ and U₂ are given in Table A 2.1, and the normal random deviates (Z) are also furnished in Table A 2.1.

When the flow is assumed to be normally distributed (Jayarami Reddy 1987), it is possible that negative flows may be generated during some periods. In such cases it is recommended that the negative flow be used in the generation of the subsequent flow, and that all negative flows later be made equal to zero.

To eliminate the effect of the seed of the generation, the sequence is generated for a longer period than required and the initial segment which is in excess is dropped from further use. This is known as "warming up". For example, if the required length of the generated sequence is, say, 150 years, the flow is generated for about 200 years and the first 50 years of the generated record are dropped.
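The generation scheme of equations (4.1)-(4.5) can be summarised in code. The following is a minimal sketch in Python, assuming the historic annual inflows are available as a plain list of numbers; the function and argument names (generate_inflows, n_keep, n_extra) are illustrative, not from the thesis.

```python
import math
import random

def serial_correlation(x):
    """First order serial correlation coefficient R, equation (4.2)."""
    m = len(x) - 1
    a, b = x[:-1], x[1:]                    # X_i and X_{i+1}, i = 1 .. N-1
    sab = sum(ai * bi for ai, bi in zip(a, b)) / m
    ma, mb = sum(a) / m, sum(b) / m
    va = sum(ai * ai for ai in a) / m - ma * ma
    vb = sum(bi * bi for bi in b) / m - mb * mb
    return (sab - ma * mb) / math.sqrt(va * vb)

def sigma_hist(x):
    """Standard deviation of the historic data as in equation (4.3):
    the geometric mean of the deviations of the two lagged sub-series."""
    m = len(x) - 1
    a, b = x[:-1], x[1:]
    va = sum(v * v for v in a) / m - (sum(a) / m) ** 2
    vb = sum(v * v for v in b) / m - (sum(b) / m) ** 2
    return math.sqrt(math.sqrt(va) * math.sqrt(vb))

def box_muller():
    """Random normal deviate Z (cosine form), equation (4.4)."""
    u1, u2 = random.random(), random.random()
    return math.sqrt(-2.0 * math.log(1.0 - u1)) * math.cos(2.0 * math.pi * u2)

def generate_inflows(historic, n_keep=150, n_extra=50):
    """Markov-chain generation by equation (4.1), with warming up."""
    r = serial_correlation(historic)
    i_bar = sum(historic) / len(historic)
    s = sigma_hist(historic)
    flows, it = [], historic[-1]            # seed with the last historic year
    for _ in range(n_keep + n_extra):       # generate a longer sequence
        it = r * it + (1.0 - r) * i_bar + s * math.sqrt(1.0 - r * r) * box_muller()
        flows.append(it)
    # negative flows are used in the recursion above but zeroed afterwards;
    # the first n_extra years are dropped ("warming up")
    return [max(f, 0.0) for f in flows[n_extra:]]
```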

4.1.2 Volume of Sediment Deposited

The cumulative volume of sediment deposited is calculated from the cumulative volume of water inflow into the reservoir, which is used for the generation of the future rate of sedimentation (Garde et al 1987):

V_S = V_ac × B / 10⁶        (4.6)

V_ac = 1.84 (V_w)^{0.16} (S_O)^{0.94}        (4.7)

V_w = I_c / B        (4.8)

where

V_S = volume of sediment in Mm³
V_ac = cumulative volume of sediment in m³/m width of reservoir in t years (the coefficients used in equation (4.7) are derived from Vaigai reservoir data)
V_w = cumulative volume of water inflow in m³/m width in t years
S_O = average initial longitudinal bed slope along the deepest course of the reservoir (1 in 450)
I_c = cumulative annual volume of water inflow in Mm³
B = average width of the reservoir at F.R.L. in m (1946.43 m)

Using equation (4.1), the inflow into the reservoir for future years, say 150 years, was generated. Flow was generated from 2000 to 2150 and is given in Table 5.2 of Chapter 5. The volume of sediment deposited in the Vaigai reservoir for these future years was also calculated, using equation (4.6), and is given in Table 5.3 of Chapter 5. The volume of sediment deposition was calculated up to the year 2108, because in that year the Vaigai reservoir will have lost 70% of its capacity. It is to be noted that any reservoir will lose its intended purpose once 70% of its capacity has been lost (Santosh Kumar Garg 2009).
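The year-by-year capacity-loss forecast implied by equations (4.6)-(4.8) can be sketched as follows. This is a minimal illustration assuming generated annual inflows in Mm³ (for example from generate_inflows above) and a reservoir capacity supplied by the caller; the function name and loop structure are illustrative, while the slope, width and 70% cutoff are the values quoted in the text.

```python
S_O = 1.0 / 450.0      # average initial bed slope (1 in 450)
B = 1946.43            # average width of the reservoir at F.R.L. in m

def forecast_capacity_loss(annual_inflows_mm3, capacity_mm3, start_year=2000):
    """Return the first year in which cumulative sediment reaches 70% of
    capacity, applying equations (4.6)-(4.8) to the cumulative inflow."""
    ic = 0.0                                       # cumulative inflow, Mm3
    vs = 0.0
    for year, inflow in enumerate(annual_inflows_mm3, start=start_year):
        ic += inflow
        vw = ic / B                                # equation (4.8)
        vac = 1.84 * vw ** 0.16 * S_O ** 0.94      # equation (4.7)
        vs = vac * B / 1e6                         # equation (4.6), Mm3
        if vs >= 0.70 * capacity_mm3:              # reservoir loses its purpose
            return year, vs
    return None, vs                                # threshold not reached
```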

4.2 ARTIFICIAL NEURAL NETWORKS

4.2.1 Overview

This subsection provides a brief introduction to artificial neural network models. Traditional computing techniques take advantage of the computer's architecture to solve problems that are well understood but not easily solved by human calculation. On the other hand, some tasks, such as perception, pattern recognition, motor control and language understanding, which are not well understood, are easily handled by the brain and nervous system yet elude traditional computer procedures. The function of the brain is due to a very complicated biochemical reaction which is yet to be analysed satisfactorily. It has now been established that the high performance of the human brain in natural information processing tasks is due to the massively parallel processing of its basic unit, the neuron. The brain consists of billions of densely interconnected neurons, and artificial neural networks are mathematical models derived from this structure. They attempt to address these poorly understood problems by employing a mathematical model of the brain's structure (Fausett 1994). The premise behind artificial neural network models is that mimicking the brain's structure of many highly connected processing elements will enable computers to tackle tasks they have not as yet performed well. Though biological plausibility is sometimes attributed to artificial neural network models, they are not intended to model the actual workings inside the brain or nervous system.

4.2.2 Neuron Model

A model of a neuron has three basic parts: input weights, a summer, and an output function. The input weights scale the values used as inputs to the neuron, the summer adds all the scaled values together, and the output function produces the final output of the neuron. Often one additional input, known as the bias, is added to the system. If a bias is used, it can be represented by a weight with a constant input of 1. This description is laid out visually in Figure 4.1.

Figure 4.1 General Neuron Model

In the figure, I₁, I₂ and I₃ are the inputs, W₁, W₂ and W₃ are the weights, B is the bias, x is an intermediate output and a is the final output. The equation for a is given by

a = f(W₁I₁ + W₂I₂ + W₃I₃ + B)        (4.9)

where f could be any function. Most often, f is the sign of the argument (i.e. 1 if the argument is positive and −1 if the argument is negative), linear (i.e. the output is simply the input times some constant factor), or some complex curve used in function matching.
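As a concrete illustration, the neuron of Figure 4.1 and equation (4.9) can be written directly in code. This is a minimal sketch; the example weights, bias and inputs are arbitrary values, not data from this study.

```python
def sign(x):
    """Output function used most often: 1 for a positive argument, -1 otherwise."""
    return 1 if x > 0 else -1

def neuron(inputs, weights, bias, f=sign):
    """Equation (4.9): scale the inputs, sum them with the bias, apply f."""
    x = sum(w * i for w, i in zip(weights, inputs)) + bias   # intermediate output
    return f(x)                                              # final output a

# Example with three inputs, as in Figure 4.1.
a = neuron(inputs=[0.5, -1.0, 2.0], weights=[0.1, 0.4, 0.3], bias=-0.2)
```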

When artificial neurons are implemented, vectors are commonly used to represent the inputs and the weights, so a brief review of linear algebra is appropriate here. The dot product of two vectors x = (x₁, x₂, …, xₙ) and y = (y₁, y₂, …, yₙ) is given by x·y = x₁y₁ + x₂y₂ + … + xₙyₙ. Using this notation, the output of a neuron is simplified to a = f(W·I + B), where all the inputs are contained in I and all the weights are contained in W.

4.2.3 Neuron Layer

In a neuron layer, each input is tied to every neuron and each neuron produces its own output. This can be represented mathematically by the following series of equations:

a₁ = f₁(W₁·I + B₁)        (4.10)

a₂ = f₂(W₂·I + B₂)        (4.11)

a₃ = f₃(W₃·I + B₃)        (4.12)

Now append the weights so that each row of a matrix represents the weights of one neuron. Keeping matrix multiplication in mind, and representing the input vector and the biases as column matrices, the above equations simplify to

a = f(W·I + B)        (4.13)

which is the final form of the mathematical representation of one layer of artificial neurons.
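Equation (4.13) is exactly one matrix operation, as the following minimal NumPy sketch shows; the weight and input values are arbitrary placeholders.

```python
import numpy as np

def layer(W, I, B, f=np.tanh):
    """Equation (4.13): a = f(W*I + B) for one layer of neurons.
    Each row of W holds the weights of one neuron."""
    return f(W @ I + B)

W = np.array([[0.2, -0.5, 0.1],      # neuron 1
              [0.7, 0.3, -0.4],      # neuron 2
              [-0.6, 0.9, 0.5]])     # neuron 3
a = layer(W, I=np.array([1.0, 0.5, -1.0]), B=np.zeros(3))
```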

4.2.4 Neural Network Model

In general, a neural network model consists of neurons or processing elements, each of which is connected to other elements according to some schema by connection weights. The basic structure of an ANN usually consists of three layers: (1) the input layer, where the data are introduced to the network; (2) the hidden layer(s), where the data are processed; and (3) the output layer, where the results for a given input are produced. Accordingly, the processing elements are classified as input units, output units or hidden units. Model input is supplied through the input units and model output is shown on the output units. The hidden elements are necessary to enable the system to learn relationships which are not linearly separable. This type of network, where the data flow in one direction, is known as a feed-forward network. Figure 4.2 illustrates a typical neural network model.

Neural networks are trained by example; they are not usually programmed with a priori knowledge (Mirchandani and Cao 1989). The connection weights between processing elements contain the knowledge stored in the artificial neural network model, and the model learns by adjusting these weights in response to the input-output pairs presented to it during training. During training, the strength of the interconnections is adjusted using an error convergence technique so that a desired output is produced for a known input pattern. This process is called training of the network.

Figure 4.2 Neural Network Model Example

The learning process, or training, forms the interconnections between neurons, giving the network the capacity to capture and represent relationships between patterns in a given data sample (Eberhart and Dobbins 1990). The strength of these interconnections is adjusted using an error convergence technique so that a desired output will be produced for a known input pattern. Many training procedures are discussed in the literature; error back propagation is one of the most commonly used. Fernando and Jayawardena (1998) have used radial basis function networks for training, and concluded that ANNs trained using either radial basis functions or error back propagation provided comparable estimations.

4.2.5 Back Propagation Training

Error back propagation is applied to a feed forward neural network in which the processing units are arranged in layers. The method is generally an iterative nonlinear optimisation approach using a gradient descent search. It involves two phases: a feed forward phase, in which the external input information at the input nodes is propagated forward to compute the output information signal at the output units, and a backward phase, in which modifications to the connection strengths are made based on the differences between the computed and observed information signals at the output units (Eberhart and Dobbins 1990).
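The two phases can be made concrete with a small sketch. The code below trains a single-hidden-layer feed forward network by gradient descent, assuming the inputs X (samples × features) and targets Y (samples × outputs) are 2-D NumPy arrays; the logistic activation, learning rate and epoch count are illustrative choices, and biases are omitted for brevity.

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(X, Y, hidden=3, lr=0.1, epochs=1000, seed=0):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.1, size=(X.shape[1], hidden))  # small random start
    W2 = rng.normal(scale=0.1, size=(hidden, Y.shape[1]))
    for _ in range(epochs):
        # feed forward phase: propagate the inputs to the output units
        H = logistic(X @ W1)
        out = logistic(H @ W2)
        # backward phase: adjust weights from the computed-observed difference
        err = out - Y
        d2 = err * out * (1.0 - out)          # logistic derivative at output
        d1 = (d2 @ W2.T) * H * (1.0 - H)      # error propagated to hidden layer
        W2 -= lr * (H.T @ d2)                 # gradient descent steps
        W1 -= lr * (X.T @ d1)
    return W1, W2
```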

At the beginning of a training process, the connection strengths are assigned random values. The learning algorithm then modifies the strengths in each iteration until the successful completion of the training (Nagy et al 2002). When the iterative process has converged, the collection of connection strengths captures and stores the knowledge and the information present in the examples used in the training process. When presented with a new input pattern, a feed forward network computation then results in an output pattern which is the result of the generalisation and synthesis of what the ANN has learned and stored in its connection strengths. The back propagation algorithm is given in Appendix 4.

It is very difficult to know in advance which training algorithm will be the fastest for a given problem. It depends on many factors, including the complexity of the problem, the number of data points in the training set, the number of weights and biases in the network, the error goal, and whether the network is being used for pattern recognition (discriminant analysis) or function approximation (regression).

4.2.6 Selection of Network Type

There is a multitude of network types available for ANN applications, and the choice depends on the nature of the problem and the data. ANNs can be categorized in terms of topology, such as single and multi-layer feed forward networks (FFNN), feedback networks (FBNN), recurrent networks (RNN) and self-organized networks.

In addition, they can be further categorized in terms of application, connection type and learning method.

The most commonly used type of network in the field of modeling and prediction is the FFNN. In this topology, the connections between network neurons do not form cycles; the term feed forward describes the way in which the output of the FFNN is calculated from its input layer-by-layer throughout the network. Usually the network is composed of one input layer, one output layer and a minimum of one hidden layer. No matter how complex the network is, its building block is a simple structure called the neuron, which performs a weighted sum of its inputs and calculates an output using a predefined activation function. Activation functions for the hidden units are needed to introduce nonlinearity into the network. The sigmoidal functions, such as the logistic and tanh functions, and the Gaussian function are the most common choices for the activation functions.

Selecting the network structure is a crucial step in the overall design of NNs (Suribabu et al 2005). The structure must be optimized to reduce computer processing, achieve good performance and avoid overfitting. The selection of the best number of hidden units depends on many factors: the size of the training set, the amount of noise in the targets, the complexity of the function to be modeled and the type of activation functions used, together with the training algorithm, all have interacting effects on the sizes of the hidden layers. Improperly trained neural networks may suffer from either underfitting or overfitting. If there are too few hidden units, high training error and high generalization error may occur due to underfitting; on the other hand, if too many hidden units are used, low training error can be achieved at the expense of network generalization, which degrades through overfitting. There is no way to determine the best number of hidden units without training several networks and estimating the generalization error of each.
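For reference, the activation functions named above reduce to one-line definitions; the sketch below is purely illustrative.

```python
import numpy as np

def logistic(x):
    """Sigmoidal activation with outputs in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    """Sigmoidal activation with outputs in (-1, 1)."""
    return np.tanh(x)

def gaussian(x):
    """Bell-shaped activation, as used for example in RBF networks."""
    return np.exp(-x * x)
```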

4.2.7 Method Description

The backpropagation algorithm is the training technique usually used for this purpose. It refers to the method of computing the gradient of the case-wise error function with respect to the weights of a feed forward network. The neural system architecture is defined by the number of neurons and the way in which the neurons are interconnected. The network is fed with a set of input-output pairs and trained to reproduce the outputs; the training is done by adjusting the neuron weights using an optimization algorithm so as to minimize the quadratic error between the observed data and the computed outputs. Input-target training data are usually pretreated, as explained above, in order to improve the numerical condition of the optimization problem and for better behavior of the training process. This algorithm appears to be the fastest method for training moderate-sized feed forward neural networks (up to several hundred weights). It also has a very efficient MATLAB implementation, because the solution of the matrix equation is a built-in function, so its attributes become even more pronounced in a MATLAB setting.

4.2.8 Merits and Limitations of ANN

There are certain practical advantages and limitations of ANNs in forecasting applications. The merits of ANNs include:

1. They can model complex non-linear input-output time series relationships in a wide variety of fields,

2. They perform more accurately and rapidly than other techniques, such as statistical classifiers and statistical hydrological models, particularly when the feature space is complex, when the source data have different statistical distributions, or when the underlying processes are not well understood or cannot be handled adequately,

3. They can incorporate a priori knowledge and realistic physical constraints into the analysis,

4. They can incorporate different types of data on diverse attributes into the analysis, thus facilitating synergistic studies,

5. Three-layer feed forward ANNs using a sigmoid transfer function are found to be sufficient to implement any continuous and bounded multivariate function mapping, and thus are the most widely accepted and applied in hydrologic modelling, and

6. Depending on the input-output relationship, an appropriate structure of ANN can be chosen reasonably easily.

The limitations include:

1. They need a representative and adequate number of input and output data sets for training the chosen network,

2. If the physical processes are not clearly understood, formulating the ANN structure may be difficult,

3. A large number of iterations is involved in training, and training becomes computationally intensive if the number of hidden layers and the number of nodes in each layer are wrongly chosen, and

4. Model results depend on the input and output nodes selected for a structure, and finding the optimal number of nodes in the hidden layer is uncertain.

The number of nodes in the hidden layer is an important but indeterminate parameter with respect to the computational efficiency and accuracy of an ANN model. The number of nodes in the hidden layer can range (Millar et al 1995) from (2n + 1) to (2n^0.5 + m), where n is the number of input nodes and m is the number of output nodes. Though this forms a guideline, the best results are generally obtained by trial and error; a worked example of the guideline is sketched after Figure 4.3.

4.3 RAINFALL-RUNOFF MODELLING

In this study, a multilayer perceptron (MLP) trained with a backpropagation algorithm is used to predict the runoff (Riad et al 2004). The MLP consists of an input layer with nodes representing the various input variables, a hidden layer consisting of many hidden nodes, and an output layer consisting of the output variable(s). Each neuron in a given layer is connected to every neuron in the next layer by a link having an appropriate and adjustable connection weight; connection weights are the interconnecting links between the neurons in successive layers. The input nodes pass the input signal values on to the nodes in the hidden layer unprocessed, and the values are distributed to all the nodes in the hidden layer depending on the connection weights w_ij and w_jk. The architecture of the neural network used in this study and the schematic representation of a neuron are shown in Figure 4.3, in which P1, P2, …, P7 are the annual rainfall inputs to the model, the output is the runoff (R), and H1, H2, … are the hidden nodes of the model.

Figure 4.3 ANN Model for Rainfall-Runoff
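As a worked example of the Millar et al (1995) guideline, the rainfall-runoff model of this section has n = 7 rainfall inputs and m = 1 runoff output; the short sketch below simply evaluates the two guideline values that bracket the trial-and-error search.

```python
n, m = 7, 1                      # rainfall input nodes and runoff output node
g1 = 2 * n + 1                   # (2n + 1) = 15
g2 = 2 * n ** 0.5 + m            # (2*sqrt(n) + m), about 6.3
print(f"guideline hidden-node range: {min(g1, g2):.1f} to {max(g1, g2)}")
```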

4.4 INFLOW-SEDIMENT YIELD MODELLING

There are no fixed rules for developing an ANN model, even though a general framework can be followed based on previous successful applications in engineering. Using the available data of the study area, multilayer perceptron (MLP) ANN model architectures to estimate the volume of sediment retained in the reservoir were developed as shown in Figure 4.4 (Jothiprakash et al 2009), in which H1, H2, …, H5 are the hidden nodes. The number of input parameters in the ANN was determined on the basis of the parameters causing and affecting the underlying process which are also easily measurable at the reservoir site. The number of nodes in the hidden layer plays a significant role in ANN model performance; hence a trial and error approach was employed in the present analysis to select the appropriate ANN architecture (a sketch of this search follows Figure 4.4). The number of hidden layers and the number of nodes in each hidden layer were also determined by a trial-and-error procedure. In the present study, the sigmoid and hyperbolic tangent (tanh) transfer functions, each driving a single sediment yield output, were used to select the best ANN architecture.

Figure 4.4 Neural Network Model Used for Sediment Yield Prediction (inputs I1 = annual rainfall, I2 = annual inflow, I3 = annual capacity; hidden nodes H1-H5; output O1 = volume of sediment)
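The trial-and-error selection can be expressed as a simple search loop. The sketch below assumes the train() function from the back propagation sketch in Section 4.2.5 and scores each candidate hidden layer size by mean squared error on held-out data; the candidate range is illustrative.

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict(W1, W2, X):
    """Forward pass of the single-hidden-layer network sketched earlier."""
    return logistic(logistic(X @ W1) @ W2)

def select_hidden_size(X_tr, Y_tr, X_te, Y_te, candidates=range(2, 11)):
    best_h, best_mse = None, float("inf")
    for h in candidates:                       # trial and error over sizes
        W1, W2 = train(X_tr, Y_tr, hidden=h)   # train() as in Section 4.2.5
        mse = float(np.mean((predict(W1, W2, X_te) - Y_te) ** 2))
        if mse < best_mse:
            best_h, best_mse = h, mse
    return best_h, best_mse
```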

4.4.1 Training and Validation of Networks

In an MLP ANN, connections exist between nodes of different layers, while no such connections exist between nodes within the same layer. The inputs are presented to the network at the input layer and are acted upon by transformations to produce an output (Jain and Indurthy 2003). The neural network learns by adjusting the weights and biases of such connections. The split-sample approach was applied, in which part of the available data from a site is used to develop a predictive relationship which is then tested with the remaining data. Before training, the initial network biases and weights were assigned small random values. For training purposes, the back propagation (BP) training algorithm was used. The learning process was terminated when optimum prediction statistics (MSE = 0.01 threshold) with respect to epoch size were achieved, and validation results were obtained. Once the training process was satisfactorily completed, the network was saved. The test and validation data sets were then recalled, and the values predicted by the model were compared with the observed values.

If the prediction error statistics for these data sets were acceptable, the neural network structure was considered to perform well for predicting sediment yield with different sets of data. The performance of the models was tested through statistical indicators such as the coefficient of determination (R²), the root-mean-square error (RMSE) and the mean absolute error (MAE) (Srinivasulu and Jain 2006).

4.5 FORECASTING SEDIMENT DEPOSITION

The volume of sediment deposited and the capacity of the Vaigai reservoir are estimated using the analytical method described above. The annual sediment deposition values of the reservoir are then forecasted based on the previous values; in addition to the previous annual sediment deposition values, the annual rainfall and annual inflow are also included in the input layer. The artificial neural networks approach was applied to forecast and estimate the future values of the given data using computer program codes written in the MATLAB programming language. Here a feed forward back propagation neural network is used to train the artificial neural networks. The procedure was composed of two phases. The first phase was the training of the neural networks, in which the networks were trained with the various available input and output parameters; the ANN model was then trained and tested to predict the runoff and the sediment deposition in the reservoir. In the training procedure, one of the most significant parts is the determination of the number of hidden layers and of the nodes in the input and hidden layers (Cigizoglu 2002a,b). In general, one hidden layer is found to be adequate for neural networks. The input layer and hidden layer node numbers are adjusted by checking the training and testing stage performances of the networks. The determination coefficient and the mean square error are the performance criteria for the testing stage.
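The performance indicators named above reduce to a few lines of code; a minimal sketch, assuming the observed and predicted values are NumPy arrays of equal length:

```python
import numpy as np

def performance(obs, pred):
    """Coefficient of determination (R^2), RMSE and MAE."""
    err = pred - obs
    rmse = float(np.sqrt(np.mean(err ** 2)))
    mae = float(np.mean(np.abs(err)))
    ss_res = float(np.sum(err ** 2))
    ss_tot = float(np.sum((obs - obs.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot
    return r2, rmse, mae
```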