INTRODUCTION
Estimation of design flood is one of the important components of planning, design and operation
of water resources projects. Information on flood magnitudes and their frequencies is needed for
design of hydraulic structures such as dams, spillways, road and railway bridges, culverts and urban
drainage systems, as well as for flood plain zoning and economic evaluation of flood protection projects.
Methods of flood estimation may be broadly divided into five categories viz. (i) flood formulae
and envelope curves, (ii) rational formula, (iii) flood frequency analysis, (iv) unit hydrograph
techniques and (v) watershed models. The generally adopted methods of flood estimation are
based on two types of approaches viz. (i) deterministic approach, and (ii) statistical approach.
The deterministic approach is based on the hydro-meteorological technique, which requires
design storm and the unit hydrograph for a catchment. The statistical approach is based on the
flood frequency analysis using the observed annual maximum peak flood data. The choice of
method depends on the design criteria applicable to the structure and availability of data.
Hydrologic processes may be thought of as stochastic processes; annual maximum daily rainfall is one
example of a stochastic hydrologic process, and such processes are continuous in time.
Determination of the probability distribution of yearly maximum and minimum
discharges is of fundamental importance in many water resource design problems. In flood
frequency analysis, the objective is to establish a flow magnitude (Q) corresponding to any
required return period (T) of occurrence. That is, a past record is fit with a statistical distribution
function, which is then used to make inferences about future events. Identification of the true
statistical distributions for the various hydrologic and meteorological data sets (annual flood
peaks and annual maximum daily rainfall) continues to be a major question facing engineers and
scientists. An even greater problem facing hydrologists and meteorologists is the identification of
the distribution form for regional data.
In hydrology, sufficient information is seldom available at a site to adequately determine the
frequency of rare events using frequency analysis. This is certainly the case for the extremely
rare events, which are of interest in dam safety risk assessment. One substitutes space for time by
using hydrologic information at different locations in a region to compensate for short records at
a single site.
Three approaches (Cudworth, 1989) have been considered for regional flood frequency analysis:
(1) average parameter approach; (2) index flood approach; and (3) specific frequency approach.
With the average parameter approach, some parameters are assigned average values based upon
regional analyses, such as the log-space skew or standard deviation. Other parameters are
estimated using at-site data, or by regression on physiographic basin characteristics, for example the
real-space or log-space means. The index flood method is a special case of the average parameter approach.
It is a simple regionalization technique. The specific frequency approach employs regression
relationships between drainage basin characteristics and particular quantiles of a flood frequency
distribution. Regional analysis can be used to derive equations to predict the values of various
hydrologic statistics (including means, standard deviations, quantiles, and normalized regional
flood quantiles) as a function of physiographic characteristics and other parameters.
Flood frequency analysis, as commonly practiced, focuses on the estimation of return periods
associated with annual maximum flood peaks of various magnitudes. Based on an assumed
distribution, it is possible to make probability statements of future flows of various magnitudes.
The expected value of the random variable can also be estimated for a given probability. For design
purposes, the T-year design flood (T = 100, 50, 25, 10 or any desired return period) is often required
to be calculated from the best-fit distribution. The choice of probability distribution therefore plays
a vital role in the design of structures and proper management of resources.
Popularly used continuous distributions are: Normal, Lognormal, Gamma, Pearson, LogPearson,
Generalized Extreme Value, Weibull, and Gumbel distribution. Commonly used parameter
estimation procedures are the method of moments (MOM) and the method of maximum
likelihood (MLE). Probability weighted moments (PWM) and the method of maximum entropy
(MME) are of more recent interest. Of late, a new method called L-moments has been introduced,
which has gained immense popularity.
L-moments of a random variable were first introduced by Hosking (1990). They are analogous to
conventional moments, but are estimated as linear combinations of order statistics. Hosking
(1990) defined L-moments as linear combinations of the Probability Weighted Moments. In a
wide range of hydrologic applications, L-moments provide simple and reasonably efficient
estimators of characteristics of hydrologic data and of a distribution's parameters (Stedinger et
al., 1992).
Frequency-based design flood estimation primarily requires proper selection of the distribution to be
fitted to the data of previous years in order to calculate the T-year return period flood. Selection of the
distribution can be done by goodness-of-fit tests such as the Chi-Square test or the Kolmogorov-Smirnov test.
Apart from the aforementioned tests, the recently introduced L-moment ratio diagram, based on the
approximations given by Hosking (1990), and the goodness-of-fit measure for a frequency distribution
given by the statistic Z^dist described below, are also used to identify the suitable frequency
distribution. These have proved to be very efficient in selecting the distribution to be used.
In India, a number of studies have been carried out for estimation of design floods for various
structures by different organizations. Prominent among these are the studies carried out
jointly by Central Water Commission (CWC), Research Designs and Standards Organization
(RDSO) and India Meteorological Department (IMD) using the method based on synthetic unit
hydrograph and design rainfall considering physiographic and meteorological characteristics for
estimation of design floods, and regional flood frequency studies carried out by RDSO using the
USGS and pooled curve methods for some of the hydro-meteorological sub-zones of India.
Besides these, regional flood frequency studies have also been carried out at some of the
academic and research institutions. But, most of the regional flood frequency analysis studies
carried out in India are based on the conventional approaches.
(b) To screen the data using discordancy measure (Di), outliers test and regional
homogeneity test.
(d) To carry out comparative regional flood frequency analysis employing some of the
commonly adopted frequency distributions using the L-moment ratio diagram.
(e) To develop a regional flood frequency relationship for the selected distribution of
the Mahanadi river basin.
CHAPTER II
REVIEW OF LITERATURE
For hydraulic structure design, estimation of the design flood requires selection of a return period and
a probability distribution. Generally, about the last 35 years of data are needed to analyze and
compute the design flood. For a given distribution there may be more than one method of parameter
estimation, so it is necessary to decide (a) which distribution function is to be used and (b) which
estimation method should be followed. The main modelling problem is the selection of the probability
distribution for the flood magnitudes coupled with the choice of estimation procedure. As such, there
are essentially two types of models adopted in the flood frequency analysis literature: (i) annual flood
series (AFS) models and (ii) partial duration series (PDS) models. Far more effort has been devoted
to modelling the annual flood series than the partial duration series; in the majority of research
projects attention has been confined to AFS models.
In flood frequency analysis, the data collected at various sites should be truly representative of the
measured annual maximum peak floods and must be drawn from the same frequency distribution.
The first step in flood frequency analysis is to verify that the data are appropriate for the analysis.
They should be checked for randomness in time and space domain and presence of outliers etc.
Tests for outliers and trends are well established in the statistical literature (e.g., Barnett and
Lewis, 1994; W.R.C., 1981; Kendall, 1975).
Hosking and Wallis (1997) mention that, in the context of regional frequency analysis using L-moments,
useful information can be obtained by comparing the sample L-moment ratios for different sites;
incorrect data values, outliers, trends and shifts in the mean of a sample can all be reflected in the
L-moments of the sample. A convenient amalgamation of the L-moment ratios into a single statistic,
a measure of discordancy between the L-moment ratios of a site and the average L-moment ratios of
a group of similar sites, has been termed the discordancy measure, Di.
Hosking and Wallis (1997) mention that of all the stages in regional frequency analysis involving
many sites, the identification of homogeneous regions is usually most difficult and requires the
greatest amount of subjective judgement. The aim is to form groups of sites that approximately
satisfy the homogeneity condition, that the sites’ frequency distributions are identical apart from
a site-specific scaling factor. Several authors have proposed methods for forming groups of
similar sites for use in regional frequency analysis. A summary of these procedures, and some
examples of their application in regional frequency analysis as described by the authors, is given below.
The methods for forming homogeneous groups of sites can be categorized as geographical
convenience, subjective partitioning, objective partitioning, cluster analysis and other
multivariate analysis methods.
Under the procedure of geographical convenience, regions are often chosen to be sets of
contiguous sites based on administrative areas or major physical groupings of sites. In subjective
partitioning, regions are defined by inspection of site characteristics such as annual maximum peak
flood data; Schaefer (1990), analyzing annual maximum peak flood data for sites in Washington State,
formed regions by grouping together sites with similar values of mean annual precipitation.
objective partitioning methods, regions are formed by assigning sites to one of the two groups
depending on whether a chosen site characteristic does or does not exceed some threshold value.
The threshold is chosen to minimize a within-group heterogeneity criterion.
Cluster analysis is a standard method of statistical multivariate analysis for dividing a data set
into groups and has been successfully used to form regions for regional frequency analysis. A
data vector is associated with each site, and sites are partitioned or aggregated into groups
according to the similarity of their data vectors. Hosking and Wallis (1997) regard cluster
analysis of site characteristics as the most practical method of forming regions from large data
sets.
For testing homogeneity, the discordancy measure and the heterogeneity measure are commonly
used, since they give good results. Hosking and Wallis recommended a homogeneity test based on
L-moment ratios. The importance of regional homogeneity has been demonstrated by Hosking (1985),
Wiltshire (1986) and Lettenmaier (1987). There are several tests available to examine regional
homogeneity in terms of the hydrologic response of stations in a region (Zrinji, 1996). In order to
ensure that the resulting regions are internally consistent to a given level of tolerance, homogeneity
and heterogeneity tests need to be performed.
The method of moments (MOM) is the most commonly used method for parameter estimation. The other
methods are the method of maximum likelihood (MML), the method of probability-weighted moments
(PWM) and the method of maximum entropy (MME). For a distribution with m parameters, the method
of moments equates the first m moments of the distribution to the first m sample moments, giving m
equations that can be solved for the m unknown parameters. Moments about the origin, the mean, or
any other point can be used. The method of L-moments is a more recent technique that has proved to
be very effective.
The method of moments can be applied in two ways to estimate the parameters of the Log-Pearson type
III distribution. The method of W.R.C. is the method of moments applied to the logarithms of the flood
flows (referred to as the indirect method). Bobee (1975) proposed that the method of moments be
applied directly to the observed data (direct method); this is known as the Method of Bobee (MOB),
in which the first three moments about zero are used to estimate the parameters. Phien and Hira
(1983) provided four additional methods for estimating the parameters of the Log-Pearson type III
distribution, denoted MM1, MM2, MM3 and MM4 (MM: mixed moments, as moments of both x and
y (= loge x) are used), and investigated the reliability of these four methods, WRC and MOB. They
showed that WRC, MM3 and MM4 produce unreliable estimates of the parameters, and that MOB did
not perform well. Among the remaining three methods, MM2, which is based on the first two moments
of x and the variance of y, is the best, but it is only slightly better than MML, which in turn is slightly
better than MM1, which is based on the first two moments of x and the mean of y.
Singh et al. (1986) employed the principle of maximum entropy to develop a procedure for deriving a
number of frequency distributions used in hydrology, and found that the principle of maximum entropy
leads to a unique technique for parameter estimation, named the method of maximum entropy.
There are several methods for regional selection of distribution such as moment ratio diagram,
product moment ratio diagram and L-moment ratio diagrams. An L-moment diagram compares
sample estimates of the L-moment ratios L-cv, L-skew, and L-kurtosis with their population
counterparts for a range of assumed distributions.
Haktanir (1992) made a comparison of various flood frequency distributions using annual flood
peaks data of rivers in Anatolia. He applied the 2-parameter lognormal, 3-parameter lognormal,
SMEMAX, two-step power, log-Boughton, Gumbel, general extreme value, Pearson type 3, log-
Pearson type 3, log-logistic and Wakeby distribution to the annual flood peaks series of 45
unregulated streams in Anatolia. He found that the lognormal (3-parameter and 2-parameter) and
Gumbel distributions predicted better than the other distribution functions for return periods of
100, 1000 and 10000 years.
Vogel and Wilson (1996) carried out an extensive study using data from 1455 sites to select probability
distributions of annual maximum, mean, and minimum stream flows in the US with
the help of L-moment diagrams. They concluded that the Log- Normal (3-parameter), Log
Pearson type 3, and Generalized Extreme Value models are all acceptable models in the
continental US, whereas other three-parameter alternatives such as the Pearson type 3 and
Extreme Value type 3 and two-parameter alternatives such as Normal, Gamma, Extreme Value
type 1 (maximum), and Log- Normal (2-parameter) are not acceptable for the entire continent.
Their study revealed that annual minimum stream flows in the US are best approximated by the
Pearson type 3 distribution, yet the Log Pearson type 3 or Extreme Value type 3 distribution
would suffice, whereas Pearson type 3, Log- Normal (3-parameter) and Log Pearson type 3
distributions provide better fit for annual average stream flows.
Parmesraran et al. (1999) developed a flood-estimating model for individual catchments and for the
region as a whole using the data of fifteen gauging sites of the Upper Godavari Basin of Maharashtra.
Seven probability distributions were used in the study. Based on the goodness-of-fit tests, the
lognormal distribution is reported to be the best-fit distribution. A regional relationship between mean
annual peak flood and catchment area was developed for estimation of the mean annual peak flood for
ungauged catchments, along with a regional relationship for the maximum discharge of a known
recurrence interval for ungauged catchments.
Kumar et al. (2003) carried out flood frequency analysis using L-moment diagrams for the Middle
Ganga Plains subzone (which covers parts of Uttar Pradesh, Bihar, Jharkhand and West Bengal) and
identified the GEV distribution as the robust distribution for their study area.
CHAPTER III
THEORY
The following aspects of the methodology used for development of the L-moment based regional flood
frequency relationship for gauged catchments, as well as development of a regional flood formula for
estimation of floods of various return periods for ungauged catchments, are discussed below.
L-moments of a random variable were first introduced by Hosking (1990). Hosking and Wallis
(1997) state that L-moments are an alternative system of describing the shapes of probability
distributions. Historically they arose as modifications of the probability weighted moments
(PWMs) of Greenwood et al. (1979).
The conventional moments or "product moments" involve higher powers of the quantile function x(F),
whereas PWMs involve successively higher powers of the non-exceedance probability F or the
exceedance probability (1-F), and may be regarded as integrals of x(F) weighted by the polynomials
F^r or (1-F)^r. As the quantile function x(F) is weighted by the probability F or (1-F), these are
named probability weighted moments. The PWMs have been used for estimation of parameters of
probability distributions.
The first PWM estimator b_0 is the sample mean µ_x. The higher PWMs can be calculated from the
order statistics X_n ≤ … ≤ X_1. A simple estimator of b_r for r ≥ 1 is

$$b_r = \frac{1}{n}\sum_{j=1}^{n} X_j \left[1 - \frac{j - 0.35}{n}\right]^{r} \qquad (3.1)$$

$$b_0 = \mu_x \qquad (3.2)$$

$$b_1 = \sum_{j=1}^{n-1} \frac{(n-j)\,X_j}{n(n-1)} \qquad (3.3)$$

$$b_2 = \sum_{j=1}^{n-2} \frac{(n-j)(n-j-1)\,X_j}{n(n-1)(n-2)} \qquad (3.4)$$
However, PWMs are difficult to interpret as measures of the scale and shape of a probability
distribution. This information is carried in certain linear combinations of the PWMs. These linear
combinations arise naturally from integrals of x(F) weighted not by the polynomials F^r or (1-F)^r but
by a set of orthogonal polynomials (Hosking and Wallis, 1997).
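As a simple illustration of equations (3.1)-(3.4), the sketch below (Python, assuming NumPy is available; the flow values are illustrative and not taken from the study) computes the plotting-position and direct sample PWM estimators from a flood series arranged in descending order.

```python
import numpy as np

def sample_pwms_plotting_position(x, rmax=3):
    """Plotting-position PWM estimator b_r (eq. 3.1), with X_1 >= ... >= X_n."""
    x = np.sort(np.asarray(x, dtype=float))[::-1]    # descending order
    n = len(x)
    j = np.arange(1, n + 1)
    p = 1.0 - (j - 0.35) / n                         # plotting position for F
    return np.array([np.mean(x * p**r) for r in range(rmax + 1)])

def sample_pwms_direct(x):
    """Direct estimators b0, b1, b2 (eqs. 3.2-3.4), with X_1 >= ... >= X_n."""
    x = np.sort(np.asarray(x, dtype=float))[::-1]
    n = len(x)
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((n - j)[:n-1] * x[:n-1]) / (n * (n - 1))
    b2 = np.sum(((n - j) * (n - j - 1))[:n-2] * x[:n-2]) / (n * (n - 1) * (n - 2))
    return b0, b1, b2

if __name__ == "__main__":
    flows = [2120, 1850, 3400, 980, 1500, 2750, 1230, 1990]   # illustrative only
    print(sample_pwms_plotting_position(flows))
    print(sample_pwms_direct(flows))
```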
3.1.2 L-moments
• Although moment ratios can be arbitrarily large, sample L-moment ratios have algebraic
bounds
• L-moments provide better identification of the parent distribution that generated a
particular data sample.
• L-moments are less sensitive to outlying data values
$$\lambda_1 = \alpha_0 = \beta_0 \qquad (3.5)$$

$$\lambda_2 = \alpha_0 - 2\alpha_1 = 2\beta_1 - \beta_0 \qquad (3.6)$$

$$\lambda_3 = \alpha_0 - 6\alpha_1 + 6\alpha_2 = 6\beta_2 - 6\beta_1 + \beta_0 \qquad (3.7)$$

$$\lambda_4 = \alpha_0 - 12\alpha_1 + 30\alpha_2 - 20\alpha_3 = 20\beta_3 - 30\beta_2 + 12\beta_1 - \beta_0 \qquad (3.8)$$
The procedures based on PWMs and on L-moments are equivalent. However, L-moments are more
convenient, as they are directly interpretable as measures of the scale and shape of probability
distributions. Clearly λ1, the mean, is a measure of location, and λ2 is a measure of the scale or
dispersion of the random variable. It is often convenient to standardise the higher moments so that
they are independent of the units of measurement:
$$\tau_r = \frac{\lambda_r}{\lambda_2}, \qquad r = 3, 4 \qquad (3.9)$$
Analogous to the conventional moment ratio of skewness, τ3 is the L-skewness and reflects the degree
of symmetry of a sample; similarly, τ4 is a measure of peakedness and is referred to as L-kurtosis.
Symmetric distributions have τ3 = 0, and its values lie between -1 and +1. Although the theory and
application of L-moments is parallel to that of conventional moments, L-moments have several
important advantages. Since sample estimators of L-moments are always linear combinations of the
ranked observations, they are subject to less bias than ordinary product moments, because ordinary
product moments require squaring, cubing and so on of the observations, which gives greater weight
to observations far from the mean and results in substantial bias and variance.
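The sketch below (Python with NumPy; illustrative data only) shows how the first four sample L-moments and the ratios of equation (3.9) follow from the sample PWMs, using the linear combinations of equations (3.5)-(3.8).

```python
import numpy as np

def l_moments(x):
    """First four sample L-moments and the ratios L-CV, L-skewness and L-kurtosis,
    computed from the PWMs b0..b3 (ascending-order formulation)."""
    x = np.sort(np.asarray(x, dtype=float))          # ascending order
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) * x) / (n * (n - 1))
    b2 = np.sum((i - 1) * (i - 2) * x) / (n * (n - 1) * (n - 2))
    b3 = np.sum((i - 1) * (i - 2) * (i - 3) * x) / (n * (n - 1) * (n - 2) * (n - 3))
    # L-moments as linear combinations of PWMs (eqs. 3.5-3.8)
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    # ratios: L-CV (t), L-skewness (t3), L-kurtosis (t4), as in eq. 3.9
    return l1, l2, l3, l4, l2 / l1, l3 / l2, l4 / l2

if __name__ == "__main__":
    flows = [2120, 1850, 3400, 980, 1500, 2750, 1230, 1990]   # illustrative only
    l1, l2, l3, l4, t, t3, t4 = l_moments(flows)
    print(f"l1={l1:.1f} l2={l2:.1f} t={t:.3f} t3={t3:.3f} t4={t4:.3f}")
```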
In flood frequency analysis, the data collected at various sites should be truly representative of the
measured annual maximum peak floods and must be drawn from the same frequency distribution.
The first step in flood frequency analysis is to verify that the data are appropriate for the analysis.
The preliminary screening of the data must be carried out to ensure that the above requirements
are satisfied. Statistical analysis of hydrologic data often assumes certain conditions of the data,
which have to be tested before proceeding with the analysis. One must verify that the flood data
is random based on statistical tests of independence in space and time.
When a sequence of observations is uncorrelated the auto correlation function for all lags other
than zero is theoretically equal to zero. However, when sampling from an uncorrelated series, the
estimated autocorrelation function rk is not equal to zero, but it has a sampling distribution,
which depends on the sample size N. This sampling distribution may be used to test hypothesis
that rk is not significantly different from zero. If the hypothesis is accepted, then the series is
uncorrelated, otherwise it is correlated. Thus the test of independence of time is based on the test
of correlogram rk. In addition to correlogram test, three other tests of independence are presented.
It has been shown that when the sample size N is large, the distribution of r_k is normal with mean
zero and variance 1/N. Therefore, the hypothesis is tested using the two-sided tolerance limits given by

$$\left(\frac{-1 - u_{1-\alpha/2}\sqrt{N-k-1}}{N-k}, \;\; \frac{-1 + u_{1-\alpha/2}\sqrt{N-k-1}}{N-k}\right)$$
The 95% confidence limits for the correlogram are mostly used for hydrologic data and that
implies that any given rk has a 5% chance of being outside the confidence limits.
$$r_k = \frac{\sum_{i=1}^{N-k} (x_i - \bar{x})(x_{i+k} - \bar{x})}{\sum_{i=1}^{N} (x_i - \bar{x})^2} \qquad (3.10)$$
where $\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i$ is the overall mean.
If r_k falls between these limits, the data can be considered independent in time.
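A minimal sketch of the correlogram test (Python, assuming NumPy and SciPy are available; the synthetic series is illustrative) implementing equation (3.10) and the tolerance limits above:

```python
import numpy as np
from scipy.stats import norm

def anderson_correlogram_test(x, k=1, alpha=0.05):
    """Lag-k serial correlation (eq. 3.10) and Anderson's tolerance limits.
    Returns (r_k, lower limit, upper limit, independent?)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    xbar = x.mean()
    rk = np.sum((x[:N-k] - xbar) * (x[k:] - xbar)) / np.sum((x - xbar) ** 2)
    u = norm.ppf(1 - alpha / 2)                      # e.g. 1.96 for alpha = 0.05
    lower = (-1 - u * np.sqrt(N - k - 1)) / (N - k)
    upper = (-1 + u * np.sqrt(N - k - 1)) / (N - k)
    return rk, lower, upper, lower <= rk <= upper

if __name__ == "__main__":
    series = np.random.default_rng(1).gamma(2.0, 1500.0, size=33)  # synthetic series
    print(anderson_correlogram_test(series))
```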
The basic principle behind the runs test is to look for runs of 0s and 1s in a sequence; the test
examines whether the oscillations between zeros and ones are too fast or too slow. Consider a series
of observations y_i, i = 1, 2, …, N, with N the sample size and ȳ the sample mean. The sequence of
ones and zeros is formed as

W_i = 1 if y_i > ȳ
W_i = 0 if y_i ≤ ȳ
The runs test is based on the result that, if the series is independent, the total number of runs U
(runs of zeros and ones) is approximately normally distributed with mean E(U) and variance Var(U):

$$E(U) = \frac{2N_1 N_2}{N_1 + N_2} + 1 \qquad (3.11)$$

$$Var(U) = \frac{2N_1 N_2 (2N_1 N_2 - N_1 - N_2)}{(N_1 + N_2)^2 (N_1 + N_2 - 1)} \qquad (3.12)$$

where N_1 is the number of ones in the series W_i and N_2 is the number of zeros. The test statistic T
is computed as

$$T = \frac{U - E(U)}{\{Var(U)\}^{1/2}} \qquad (3.13)$$
The hypothesis of independence is accepted at the γ = (1 - α/2) probability level if

$$|T| \leq u_{1-\alpha/2} \qquad (3.14)$$
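A short sketch of the runs test of equations (3.11)-(3.14) (Python with NumPy and SciPy; the series is synthetic and illustrative):

```python
import numpy as np
from scipy.stats import norm

def runs_test(y, alpha=0.05):
    """Runs (above/below the mean) test of independence, eqs. 3.11-3.14."""
    y = np.asarray(y, dtype=float)
    w = (y > y.mean()).astype(int)                  # 1 above mean, 0 otherwise
    n1, n2 = w.sum(), len(w) - w.sum()
    runs = 1 + np.count_nonzero(np.diff(w))         # total number of runs U
    e_u = 2.0 * n1 * n2 / (n1 + n2) + 1.0
    var_u = (2.0 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / ((n1 + n2) ** 2 * (n1 + n2 - 1))
    t = (runs - e_u) / np.sqrt(var_u)
    return t, abs(t) <= norm.ppf(1 - alpha / 2)

if __name__ == "__main__":
    series = np.random.default_rng(2).gamma(2.0, 1500.0, size=30)  # synthetic series
    print(runs_test(series))
```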
Spearman’s Rank Correlation Test is used to test the strength of a correlation. When applied to
any two sets of results, the Spearman Test produces a Spearman Correlation Coefficient, r. This r
can take values between -1 and + 1. When r = -1, we have two sets of numbers that have a
perfect negative correlation. That is, without exception, as the value of one quantity in our
sample becomes larger, the value of the second quantity gets smaller. Similarly an r = +1
indicates a perfect positive correlation. Consider a sample series y_i, i = 1, 2, …, N, where N is the
sample size, and let w_i be the rank of y_i when the series of observations is arranged in ascending
order. The Spearman test is based on the rank correlation coefficient R between the pairs (i, w_i) for
i = 1, 2, …, N. This coefficient is computed as
$$R = 1 - \frac{6\sum_{i=1}^{N} (i - w_i)^2}{N(N^2 - 1)} \qquad (3.15)$$
If the sample series is independent, the Spearman rank correlation coefficient R is approximately
normally distributed, and the ratio

$$T = \frac{R\sqrt{N-2}}{\sqrt{1 - R^2}} \qquad (3.16)$$

follows the Student t-distribution with N-2 degrees of freedom. The hypothesis of independence is
accepted when
$$|T| \leq t_{1-\alpha/2,\,N-2} \qquad (3.17)$$

where t_{1-α/2, N-2} is the (1-α/2) quantile of the Student t-distribution with N-2 degrees of freedom.
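A short sketch of the Spearman rank correlation test against time, equations (3.15)-(3.17) (Python with NumPy and SciPy; ties in the ranks are ignored in this simplified version and the data are synthetic):

```python
import numpy as np
from scipy.stats import t as t_dist

def spearman_trend_test(y, alpha=0.05):
    """Spearman rank-correlation test of the series against time, eqs. 3.15-3.17."""
    y = np.asarray(y, dtype=float)
    N = len(y)
    i = np.arange(1, N + 1)
    w = np.argsort(np.argsort(y)) + 1            # rank of each y_i in ascending order
    R = 1.0 - 6.0 * np.sum((i - w) ** 2) / (N * (N ** 2 - 1))
    T = R * np.sqrt(N - 2) / np.sqrt(1.0 - R ** 2)
    return R, T, abs(T) <= t_dist.ppf(1 - alpha / 2, N - 2)

if __name__ == "__main__":
    series = np.random.default_rng(3).gamma(2.0, 1500.0, size=30)  # synthetic series
    print(spearman_trend_test(series))
```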
The turning point test counts the number of turning points (peaks and troughs) in a sequence. Let M
be the total number of peaks and troughs. The number of observations needs to be large enough for M
to be assumed approximately normally distributed, with mean

$$E(M) = \frac{2}{3}(N - 2) \qquad (3.18)$$

and variance

$$Var(M) = \frac{16N - 29}{90} \qquad (3.19)$$

The test statistic T is then calculated as

$$T = \frac{M - E(M)}{\{Var(M)\}^{1/2}} \qquad (3.20)$$

and the hypothesis of independence is accepted if

$$|T| \leq u_{1-\alpha/2} \qquad (3.21)$$
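A brief sketch of the turning point test, equations (3.18)-(3.21) (Python with NumPy and SciPy; synthetic data):

```python
import numpy as np
from scipy.stats import norm

def turning_point_test(y, alpha=0.05):
    """Turning-point test of independence, eqs. 3.18-3.21."""
    y = np.asarray(y, dtype=float)
    N = len(y)
    interior = y[1:-1]
    peaks = (interior > y[:-2]) & (interior > y[2:])
    troughs = (interior < y[:-2]) & (interior < y[2:])
    M = int(np.sum(peaks | troughs))              # number of turning points
    e_m = 2.0 * (N - 2) / 3.0
    var_m = (16.0 * N - 29.0) / 90.0
    T = (M - e_m) / np.sqrt(var_m)
    return T, abs(T) <= norm.ppf(1 - alpha / 2)

if __name__ == "__main__":
    series = np.random.default_rng(4).gamma(2.0, 1500.0, size=30)  # synthetic series
    print(turning_point_test(series))
```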
3.2.2 Discordancy and Homogeneity
Once a set of physically plausible regions has been identified, it is desirable to assess whether the
region is meaningful and may be accepted as homogeneous. Two tests are used for this purpose: the
discordancy measure and the heterogeneity measure. A convenient amalgamation of the L-moment
ratios into a single statistic, a measure of discordancy between the L-moment ratios of a site and the
average L-moment ratios of a group of similar sites, has been termed the discordancy measure, Di.
The aim of the discordancy measure is to identify those sites from a group of given sites that are
grossly discordant with the group as a whole. Discordancy is measured in terms of the L-
moments of the data of the various sites as defined below (Hosking and Wallis, 1997). Suppose
that there are N sites in the group. Let $u_i = [t^{(i)} \; t_3^{(i)} \; t_4^{(i)}]^T$ be a vector containing the t, t3 and t4
values for site i, where T denotes transposition of a vector or matrix. Let
$$\bar{u} = \frac{1}{N}\sum_{i=1}^{N} u_i \qquad (3.22)$$
be the (unweighted) group average. The matrix of sums of squares and cross products is defined as

$$A = \sum_{i=1}^{N} (u_i - \bar{u})(u_i - \bar{u})^T \qquad (3.23)$$

and the discordancy measure for site i is

$$D_i = \frac{1}{3}\, N\, (u_i - \bar{u})^T A^{-1} (u_i - \bar{u}) \qquad (3.24)$$
The site i is declared to be discordant, if the discordancy measure is larger than the critical value
of the discordancy statistic Di given in Table 3.1.
For a discordancy test at significance level α, an approximate critical value of max_i D_i is
(N-1)Z/(N-4+3Z), where Z is the upper 100α/N percentage point of an F distribution with 3 and N-4
degrees of freedom. This critical value is a function of α and N; here α = 0.10 is used. D_i is likely
to be useful only for regions with N ≥ 7.
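A minimal sketch of the discordancy computation of equations (3.22)-(3.24), together with the approximate critical value described above (Python with NumPy and SciPy; the L-moment ratios used in the example are synthetic):

```python
import numpy as np
from scipy.stats import f as f_dist

def discordancy(lmr):
    """Discordancy measure D_i (eqs. 3.22-3.24).
    lmr: (N, 3) array of [t, t3, t4] for each of the N sites."""
    u = np.asarray(lmr, dtype=float)
    ubar = u.mean(axis=0)                                    # eq. 3.22
    d = u - ubar
    A = d.T @ d                                              # eq. 3.23
    Ainv = np.linalg.inv(A)
    N = len(u)
    return np.array([N * di @ Ainv @ di / 3.0 for di in d])  # eq. 3.24

def critical_di(N, alpha=0.10):
    """Approximate critical value of max D_i from the F-distribution bound."""
    Z = f_dist.ppf(1 - alpha / N, 3, N - 4)
    return (N - 1) * Z / (N - 4 + 3 * Z)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    ratios = rng.normal([0.35, 0.18, 0.15], [0.08, 0.10, 0.08], size=(15, 3))  # synthetic
    print(discordancy(ratios))
    print(critical_di(15))          # close to 3.0 for a 15-site region
```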
Outliers are the data points which depart significantly from the trend of the remaining data. The
retention, modification, deletion of these outliers can significantly affect the statistical
parameters computed from the data. All the procedures for treating outliers require judgment
involving hydrological and mathematical considerations. The basic criterion adopted here is that the
skewness of the logarithms of the data should be less than 0.4; if the skewness exceeds 0.4, the test
statistics are recomputed after adjusting the values of the mean and coefficient of variation.
The equation used to detect high outliers is

$$X_H = \bar{X} + K_N \cdot S \qquad (3.25)$$
where
X_H = high outlier threshold in log units
X̄ = mean of the logarithms of the systematic peaks, excluding zero flood events and outliers previously detected
S = standard deviation of the logarithms of the peaks
K_N = outlier test statistic, which depends on the sample size N (W.R.C., 1981)
If any of the logarithms of the peaks in the sample are greater than X_H, they are considered high outliers.
The equation used to detect low outliers is

$$X_L = \bar{X} - K_N \cdot S \qquad (3.26)$$

where X_L is the low outlier threshold in log units and the other terms are as defined above.
All data above the high outlier threshold or below the low outlier threshold are removed. For the sites
identified as discordant, this outlier test is carried out to identify and remove the outliers; the
L-moments and L-moment ratios are then recalculated.
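A small sketch of the outlier screening step (Python with NumPy). It assumes base-10 logarithms and takes the outlier test statistic K_N as an input obtained from published tables (e.g. W.R.C., 1981); both the data and the K_N value shown are illustrative.

```python
import numpy as np

def outlier_thresholds(peaks, k_n):
    """High/low outlier thresholds (eqs. 3.25-3.26) on the log10-transformed peaks.
    k_n is the outlier test statistic for the given sample size, taken from tables."""
    logs = np.log10(np.asarray(peaks, dtype=float))
    mean, s = logs.mean(), logs.std(ddof=1)
    xh = mean + k_n * s                   # high outlier threshold (log units)
    xl = mean - k_n * s                   # low outlier threshold (log units)
    keep = (logs <= xh) & (logs >= xl)    # data retained after outlier removal
    return 10 ** xh, 10 ** xl, np.asarray(peaks)[keep]

if __name__ == "__main__":
    flows = [2120, 1850, 3400, 980, 1500, 27500, 1230, 1990]   # illustrative only
    print(outlier_thresholds(flows, k_n=2.5))                  # k_n value illustrative
```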
The test based on the heterogeneity measure H takes into consideration that in a homogeneous region
all sites have the same population L-moment ratios, but their sample L-moment ratios may differ from
site to site owing to sampling variability. The inter-site variation of the L-moment ratios is measured
as the standard deviation of the at-site L-coefficients of variation, weighted proportionally to the
record length at each site. To establish the expected inter-site variation of L-moment ratios for a
homogeneous region, 500 simulations were carried out using the Kappa distribution for computing the
heterogeneity measure H.
The region is classified according to the value of H as follows:
H ≥ 2: the region is heterogeneous
1 ≤ H < 2: the region is possibly homogeneous
H < 1: the region is acceptably homogeneous
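A sketch of the heterogeneity measure H (Python with NumPy). The kappa-distribution simulation itself is not reproduced here; the simulated V values are assumed to be supplied, e.g. from 500 simulations of a homogeneous region, and the numbers in the example are purely illustrative.

```python
import numpy as np

def heterogeneity_H(t_site, n_site, v_sim):
    """Heterogeneity measure H: compares the observed record-length-weighted
    dispersion V of the at-site L-CVs with its mean and standard deviation from
    simulations of a homogeneous region (v_sim)."""
    t_site = np.asarray(t_site, dtype=float)     # at-site L-CV values
    n_site = np.asarray(n_site, dtype=float)     # record lengths
    t_reg = np.sum(n_site * t_site) / np.sum(n_site)
    V = np.sqrt(np.sum(n_site * (t_site - t_reg) ** 2) / np.sum(n_site))
    v_sim = np.asarray(v_sim, dtype=float)
    return (V - v_sim.mean()) / v_sim.std(ddof=1)

if __name__ == "__main__":
    # illustrative values only; in practice v_sim comes from 500 kappa simulations
    H = heterogeneity_H([0.36, 0.45, 0.30, 0.48], [29, 25, 33, 32],
                        v_sim=np.random.default_rng(6).normal(0.05, 0.01, 500))
    print(H)
```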
The various distributions (Rao and Hamed, 2000) that are considered in the analysis are given below.
The Normal distribution has the probability density function

$$p_X(x) = \frac{1}{\sigma\sqrt{2\pi}}\exp\left[-\frac{(x-\mu)^2}{2\sigma^2}\right], \qquad -\infty < x < +\infty;\; \sigma > 0 \qquad (3.28)$$

Integrating the density gives the cumulative distribution function P(X).
Parameter estimation:
where l1 and l2 are the sample L-moments.
Parameter estimation
3.3.4 Pearson Type III distribution
$$f(x) = \frac{1}{a\,\Gamma(b)}\left(\frac{x-c}{a}\right)^{b-1}\exp\left(-\frac{x-c}{a}\right) \qquad (3.37)$$

Parameter estimation: if t_3 ≥ 1/3, then with

$$t_m = 1 - t_3 \qquad (3.38)$$

$$b = \frac{0.36067\,t_m - 0.5967\,t_m^2 + 0.25361\,t_m^3}{1 - 2.78861\,t_m + 2.56096\,t_m^2 - 0.77045\,t_m^3} \qquad (3.39)$$

If t_3 < 1/3, then with

$$t_m = 3\pi t_3^2 \qquad (3.40)$$

$$b = \frac{1 + 0.2906\,t_m}{t_m + 0.1882\,t_m^2 + 0.0442\,t_m^3} \qquad (3.41)$$

The scale and location parameters are then

$$a = \mathrm{sign}(t_3)\,\sqrt{\pi}\,l_2\,\frac{\Gamma(b)}{\Gamma(b+0.5)} \qquad (3.42)$$

$$c = l_1 - ab \qquad (3.43)$$
The Generalized Extreme Value distribution has the distribution function

$$F(x) = \exp\left[-\left\{1 - \frac{k(x-\varepsilon)}{\alpha}\right\}^{1/k}\right], \qquad k \neq 0 \qquad (3.44)$$
Parameter estimation: the shape parameter k and the scale parameter α are obtained from l_2 and t_3
using the approximation given by Hosking (1990) (equations 3.45-3.48), and the location parameter is

$$\varepsilon = l_1 + \frac{\alpha\,\{\Gamma(1+k) - 1\}}{k} \qquad (3.49)$$
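A sketch of GEV parameter estimation from L-moments (Python with NumPy and SciPy). The shape-parameter approximation shown is the standard one given by Hosking (1990); it is assumed to correspond to the thesis's equations 3.45-3.48, which are not reproduced in the text.

```python
import numpy as np
from scipy.special import gamma as gamma_fn

def gev_params_from_lmoments(l1, l2, t3):
    """GEV location, scale and shape from L-moments, using Hosking's standard
    approximation for the shape parameter k (a sketch, not the thesis's exact text)."""
    c = 2.0 / (3.0 + t3) - np.log(2.0) / np.log(3.0)
    k = 7.8590 * c + 2.9554 * c ** 2                              # shape
    alpha = l2 * k / ((1.0 - 2.0 ** (-k)) * gamma_fn(1.0 + k))    # scale
    eps = l1 + alpha * (gamma_fn(1.0 + k) - 1.0) / k              # location (eq. 3.49)
    return eps, alpha, k

if __name__ == "__main__":
    # regional average L-moments of the normalized data reported in Chapter V
    print(gev_params_from_lmoments(l1=1.000003, l2=0.355577, t3=0.186653))
```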
The Generalized Pareto distribution has the distribution function

$$F(x) = 1 - \left\{1 - \frac{k(x - \varepsilon)}{\alpha}\right\}^{1/k} \qquad (3.50)$$

Parameter estimation:

$$k = \frac{1 - 3t_3}{1 + t_3} \qquad (3.51)$$

$$\alpha = l_2\,(1 + k)(2 + k) \qquad (3.52)$$

$$\varepsilon = l_1 - l_2\,(2 + k) \qquad (3.53)$$
t3 is the L-skewness ratio.
Where, u, α and k are location, scale and shape parameters respectively. Logistic distribution is
the special case of the Generalized Logistic distribution, when k = 0.
The three-parameter Lognormal distribution has the probability density function

$$p_X(x) = \frac{1}{(x-c)\,\sigma_y\sqrt{2\pi}}\exp\left[-\frac{\{\ln(x-c) - \mu_y\}^2}{2\sigma_y^2}\right] \qquad (3.56)$$
Parameter estimation:
This is the same as the estimation of parameters for the Pearson distribution, except that t3 is
calculated from y_i (= log_e X_i). If t3 ≥ 1/3, then

$$t_m = 1 - t_3 \qquad (3.57)$$
$$b = \frac{0.36067\,t_m - 0.5967\,t_m^2 + 0.25361\,t_m^3}{1 - 2.78861\,t_m + 2.56096\,t_m^2 - 0.77045\,t_m^3} \qquad (3.58)$$
(3.60)
c = l1 –ab (3.61)
3.4 Identification of Regional Frequency Distribution
Fig 3.1 Theoretical L-moment ratio diagram
The data point of the regional average is highlighted, and the distribution whose theoretical curve lies
closest to this point is taken as the regional average distribution. This distribution is subjectively
selected as the regional distribution, and further analysis is carried out using it.
The L-moment ratio diagram for the region is drawn using the theoretical relationships and
approximations provided by Hosking, which can be used to identify the suitable regional flood
frequency distribution. Hosking has provided relationships between L-skewness and L-kurtosis for all
the candidate distributions.
Approximations are given for the Generalized Pareto, Generalized Logistic, Lognormal and other
candidate distributions, expressing t4 as a function of t3, where t3 and t4 are the L-skewness and
L-kurtosis for the site.
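As a numerical counterpart to the L-moment ratio diagram, the sketch below (Python with NumPy) uses the exact τ3-τ4 relationship of the GEV distribution, rather than Hosking's polynomial approximations used in the text, to check how close a regional average point lies to the GEV curve; the regional values are those reported in Chapter V.

```python
import numpy as np

def gev_tau3_tau4(k):
    """Exact (tau3, tau4) of the GEV distribution as a function of its shape k."""
    g = lambda r: 1.0 - r ** (-k)
    tau3 = 2.0 * g(3) / g(2) - 3.0
    tau4 = (5.0 * g(4) - 10.0 * g(3) + 6.0 * g(2)) / g(2)
    return tau3, tau4

if __name__ == "__main__":
    # distance of the regional average point from the theoretical GEV curve
    t3_reg, t4_reg = 0.186653, 0.136473          # regional averages from Chapter V
    ks = np.linspace(-0.4, 0.4, 801)
    ks = ks[np.abs(ks) > 1e-6]                   # k = 0 (Gumbel limit) skipped here
    curve = np.array([gev_tau3_tau4(k) for k in ks])
    d = np.hypot(curve[:, 0] - t3_reg, curve[:, 1] - t4_reg)
    print("closest GEV point:", curve[d.argmin()], "distance:", d.min())
```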
The regional distribution is identified and the parameters for the whole region are to be
calculated. This is done using the regional mean method in which we consider the data of the
whole region as a single site and divide the site data by the site mean. Then the L-moments are
calculated for the whole region and the regional parameters are estimated for the selected
distribution. Substituting the values of these regional parameters into the regional flood frequency
relationship, floods of various return periods are estimated for the catchments of the Mahanadi river
system.
CHAPTER IV
The present study has been carried out for the region comprising of the catchment area of the
Mahanadi river basin. A brief description of the Mahanadi river basin and some of the tributaries
of the river Mahanadi whose data have been used in carrying out the study is given below.
4.1 Mahanadi river basin
Mahanadi basin extends over an area of 1,41,589 sq. km which is nearly 4.3% of the total
geographical area of India. It is bounded on the north by the Central India hills of Bundelkhand,
on the south and east by the Eastern Ghats and in the west by the Maikala range. The basin lies
in the states of Chhattisgarh, Orissa, Bihar and Maharashtra. The Mahanadi rises in the Raipur district
of Chhattisgarh and flows for about 851 km before it outfalls into the Bay of Bengal. Its main
tributaries are the Seonath, the Jonk, the Mand, the Ib, the Ong and the Tel.
The 15 sites are Baronda, Basantpur, Ghatora, Jhondhra, Kantamal, Kesinga, Kotni, Kurubhata,
Rajim, Ramnidhi, Rampur, Salebhata, Sigma, Sundargarh, and Tikarapada.
The river system is shown in Fig. 4.1. The state-wise distribution of the Mahanadi river basin is given
in Table 4.1.
Table 4.1 State-wise drainage area of Mahanadi basin
State Drainage area (sq. km)
Chhattisgarh 75136
Orissa 65580
Bihar 635
Maharashtra 238
Fig. 4.1: Index map of river system of Mahanadi river basin
CHAPTER V
The annual maximum peak flood data of the 15 gauging sites are available for carrying out the study.
The following aspects of the analysis and discussion of results are described in this chapter:
1) Calculation of L moments and L moment ratios for the data available for each
sample
2) Testing the data for independence in time and space using the following tests
- Anderson’s correlogram test
- Runs test
- Spearman’s rank correlation coefficient test
- Turning point test
3) Screening the data of the sites with Discordancy test and removing the outliers in
the discordant sites and testing the Homogeneity of the region.
4) Estimation of parameters for the sites for each distribution used.
5) Identifying the regional frequency distribution and estimation of the quantiles for
several return periods.
The sample statistics of all the sites are given in Table 5.1.1.
Table 5.1.1 Statistics of the various sites in the Mahanadi river basin
S. No.  Name  Sample Size (Years)  Mean Annual Peak Flood (m3/s)  Standard Deviation (m3/s)  Variance [(m3/s)2]
1 Baronda 26 2119.107 235.686 5563509
2 Basantpur 33 13161.12 6718.263 45135052
3 Ghatora 24 715.52 464.5778 215832.5
4 Jhondhra 25 5024 2224.912 4760766
5 Kantamal 29 8152.24 5185.042 26884659
6 Kesinga 25 6540.83 5246.126 27521839
7 Kotni 25 1888.74 1216.11 1478923
8 Kurubhata 26 1463.99 455.2309 207235.2
9 Rajim 30 3466.157 2983.992 8904207
10 Ramnidhi 33 3831.076 2408.109 5798989
11 Rampur 33 1658.969 1021.466 1043392
12 Salebata 32 2372.672 2696.87 7273108
13 Sigma 33 4066.924 2183.834 4769133
14 Sundargarh 26 2709.873 2113.604 4467321
15 Tikarpada 31 18466.14 7156.092 51209648
The probability weighted moments are calculated and then the L-moments are calculated from them.
The L-moments and the L-moment ratios (L-coefficient of variation, L-skewness and L-kurtosis) for
the different sites are given in Table 5.1.2.
Table 5.1.2 L-moments and L-moment ratios of the sites in the Mahanadi river basin
S. No.  Name  l1  l2  l3  l4  L-CV  L-skewness  L-kurtosis
5 Kantamal 8152.24 2996.834 104.8853 -291.51 0.367609 0.034999 -0.09727
6 Kesinga 6540.83 2927.29 726.8339 163.139 0.447541 0.248296 0.05573
7 Kotni 1918.532 656.7888 153.2407 124.4524 0.342339 0.233318 0.189486
8 Kurubhata 1457.8 263.1805 -17.4602 -11.2457 0.180533 -0.06634 -0.04273
9 Rajim 3466.157 1673.526 347.8738 -124.524 0.482819 0.207869 -0.07441
10 Ramnidhi 3831.076 1360.212 137.5156 64.5708 0.355047 0.101099 0.047471
11 Rampur 1658.969 593.4039 31.22839 19.68567 0.357694 0.052626 0.033174
12 Salebata 2372.672 1156.504 522.5823 415.6877 0.487427 0.451864 0.359435
13 Sigma 4066.924 1225.851 60.02222 153.032 0.30142 0.048964 0.124837
14 Sundargarh 2709.873 993.614 449.1975 318.5188 0.366664 0.452085 0.320566
15 Tikarpada 18466.14 4174.721 -53.4453 192.4355 0.226074 -0.0128 0.046095
The selected data are tested for independence in time and space. For this, four tests have been
performed.
Table 5.2.1 Anderson's correlogram test: lag-one serial correlation coefficient and tolerance limits
S. No.  Name  r1  Lower limit  Upper limit
13 Sigma 0.0718 -0.37228 0.309776
14 Sundargarh -0.225 -0.42408 0.34408
15 Tikarpada 0.219 -0.38516 0.318497
The table shows that for all the sites the lag-one serial correlation coefficient is within the tolerance
limits. Hence, the data are independent in time and can be considered random.
5.2.2 Runs test

In the runs test, the hypothesis of independence is accepted if the absolute value of the test statistic T
is less than the quantile of the standard normal distribution, which is 1.96 at the 5% significance level.
The results of the runs test are given in Table 5.2.2. As the test statistic is observed to be within the
limits (-1.96, +1.96), the runs test shows all the data to be independent.
5.2.3 Spearman’s rank correlation coefficient test
The Spearman test is based on the rank correlation coefficient R. If the sample series is independent,
the Spearman rank correlation coefficient R is normally distributed. The hypothesis is accepted when
the ratio T is within the desired limits. The results of the test are given in Table 5.2.3. As the test
statistic T is within the specified limits of the quantile of the Student t-distribution, the hypothesis of
independence can be accepted.
5.2.4 Turning point test

The turning point test is used to test independence; the test statistic T is calculated and checked
against the quantile of the standard normal distribution, which is 1.96 at the 5% significance level.
The results of the turning point test are given in Table 5.2.4. As all the calculated T values are within
the specified limits (-1.96, +1.96), the data can be considered random.
The objective of the discordancy measure (Di) test is to identify those sites from a group of given sites
that are grossly discordant with the group as a whole. Values of the discordancy measure have been
computed in terms of the L-moments for all the 15 gauging sites of the Mahanadi river basin and are
given in Table 5.3.1.
Table 5.3.1 Discordancy measure for the 15 sites in Mahanadi river basin
S. No.  Name  Sample size (Years)  Di
1 Baronda 26 1.368668
2 Basantpur 33 0.383616
3 Ghatora 24 0.762493
4 Jhondhra 25 0.692982
5 Kantamal 29 0.830140
6 Kesinga 25 0.411843
7 Kotni 25 0.113334
8 Kurubhata 26 1.558653
9 Rajim 30 1.585622
10 Ramnidhi 33 0.173700
11 Rampur 33 0.704656
12 Salebata 32 1.764878
13 Sigma 33 1.087157
14 Sundargarh 26 2.005929
15 Tikarpada 31 0.556330
As the number of sites is 15, the critical value of Di is 3.0. Sites with a value of Di greater than 3.0 can
be considered discordant. It is observed from Table 5.3.1 that the Di values for all the sites are less
than the critical value of 3.0. Hence, as per the discordancy measure test, the data of all the sites can
be used for the analysis.
Outliers are data points which deviate significantly from the remaining data. The outlier test is
performed to remove these extreme data values so that non-discordant sites can be used in the flood
frequency analysis. In the outlier test, the high outlier threshold and the low outlier threshold are
calculated, and the data outside these thresholds are considered outliers and removed. Since the
skewness values of the log-transformed data of all the sites are less than 0.4, the threshold values can
be calculated using equations 3.25 and 3.26. The threshold values are given in Table 5.3.2.
Table 5.3.2 Outlier test results
S. No.  Name  Skewness of log values  High outlier threshold (log units)  Low outlier threshold (log units)  No. of outliers
2 Basantpur -0.08464 4.717746 3.394231 Nil
3 Ghatora 0.02416 3.4325 2.04521 Nil
4 Jhondhra 0.2564 4.2198 3.1256 Nil
5 Kantamal 0.0051 4.3105 2.9152 Nil
6 Kesinga 0.00215 4.45621 2.8952 Nil
7 Kotni -0.00312 3.8541 2.6874 Nil
8 Kurubhata -0.3812 3.8756 2.8451 2
9 Rajim 0.0023 3.9451 2.7456 Nil
10 Ramnidhi 0.1456 4.1258 2.8923 Nil
11 Rampur 0.2549 3.6412 2.1954 Nil
12 Salebata 0.0045 4.2174 2.5621 1
13 Sigma 0.0984 4.3567 2.689 Nil
14 Sundargarh 0.2478 4.1247 3.0114 1
15 Tikarpada 0.0098 4.5129 3.7489 Nil
The data outside the high and low outlier thresholds are removed, and the remaining data are
considered for further analysis.
The test based on the heterogeneity measure H takes into consideration that in a homogeneous
region, all sites have same population L-moment ratios. To establish what would be the expected
inter-site variation of L-moment ratios for a homogeneous region, 500 simulations were carried
out using the Kappa distribution for computing the heterogeneity measure, H.
Since the regional heterogeneity measure H is more than 2, the region is declared heterogeneous.
Based on the statistical properties (L-moment ratios), two sites were excluded from the region one by
one until an H value less than 1.0 was obtained.
The site with L-moment ratios near the theoretical limits is removed first; the site Baronda is identified
as having such L-moment ratios and is removed, and the heterogeneity measure H is recalculated. This
shows that the region is still not completely homogeneous, so the site Kurubhata is also removed and
the heterogeneity measure H is calculated again. The region is declared homogeneous after the
removal of the two sites Baronda and Kurubhata.
The parameters are estimated from the L moments calculated after removing the outliers.
For each distribution the parameters are estimated for all the sites and the values are shown in the
tables below.
S. No.  Name  Parameter 1  Parameter 2
1 Basantpur 2119.107 377.833
2 Ghatora 13161.12 -1069.2
3 Jhondhra 5024 -563.84
4 Kantamal 8152.24 -1113.58
5 Kesinga 1888.74 -103.24
6 Kotni 1463.99 75.09
7 Rajim 3466.157 -197.426
8 Ramnidhi 3831.076 -547.844
9 Rampur 1658.969 -143.335
10 Salebata 2372.672 28.03418
11 Sigma 4066.924 -58.822
12 Sundargarh 2709.873 187.7129
13 Tikarpada 18466.14 -1528
The exponential distribution is also a two-parameter distribution, with scale and location parameters;
the location parameter corresponds to the lower endpoint of the distribution. The parameters are given
in Table 5.4.2.
13 Tikarpada 18466.14 -1528
3 Jhondhra 5204.192 -1119.77 0.634611
4 Kantamal 8899.843 -1899.14 0.221893
5 Kesinga 2000.659 6.827214 -1.08763
6 Kotni 1415.525 135.5867 0.348667
7 Rajim 3402.287 -356.506 1.413514
8 Ramnidhi 3899.201 -1106.03 0.855534
9 Rampur 1811.089 -41.883 -0.75805
10 Salebata 2358.699 52.80088 0.430737
11 Sigma 4081.273 -5.82743 -2.15349
12 Sundargarh 2879.634 140.0148 2.72744
13 Tikarpada 19941.5 -1682.29 -0.23534
The three parameters of the Generalized Pareto distribution are location, scale and shape. The
estimated parameters are given in Table 5.4.5.
The three parameters of the Generalized Logistic distribution are location, scale and shape. The
estimated parameters are given in Table 5.4.6.
Table 5.4.6 Parameter of Generalized logistic distribution
S. No.  Name  Location parameter  Scale parameter  Shape parameter
1 Basantpur 1782.044 148.4513 -0.68053
2 Ghatora 12196.89 -399.379 0.694669
3 Jhondhra 4858.692 -534 0.180512
4 Kantamal 8217.497 -1111.26 -0.03553
5 Kesinga 1990.281 -15.2519 -0.86652
6 Kotni 1471.224 74.66708 0.058463
7 Rajim 3322.223 -125.446 0.50009
8 Ramnidhi 3586.455 -479.022 0.28134
9 Rampur 1794.464 -39.1811 -0.7687
10 Salebata 2346.041 7.312231 -0.77813
11 Sigma 4124.129 -11.2321 -0.83193
12 Sundargarh 2894.983 25.53706 0.876106
13 Tikarpada 19254.36 -1268.01 -0.32984
The three parameters of the Pearson Type III distribution are location, scale and shape. The estimated
parameters are given in Table 5.4.7.
9 Rampur 1658.969 -489.287 5.94009
10 Salebata 2372.672 50.0616 -0.48235
11 Sigma 4066.924 2563.8 2.2489
12 Sundargarh 2709.873 880.997 -8.7679
13 Tikarpada 18466.14 -3049.44 1.97941
Sigma 0.048964 0.012072 0.16866 0.114466 0.137151 0.124693
Sundargarh 0.452085 0.270353 0.33698 0.32414 0.293998 0.291631
Tikarpada -0.0128 -0.0024 0.1668 0.105729 0.118545 0.122947
The L-moment ratio diagram is plotted using these L-moment ratios, with L-skewness on the X-axis
and L-kurtosis on the Y-axis. The regional average is shown as a point in Fig 5.1.
Fig 5.1 L-moment ratio diagram for the Mahanadi river basin

As shown in Fig 5.1, the GEV distribution lies closest to the point defined by the regional average
values of L-skewness, τ3 = 0.180769, and L-kurtosis, τ4 = 0.147863, and is therefore identified as the
regional distribution.
5.5.2 Estimation of the regional Parameters and Quantiles
The parameters for the whole region are estimated using the regional mean method in which we
consider the data of the whole region as a single site and divide the site data by the site mean.
The mean of each site is given in Table 5.5.2
Table 5.5.2 Mean annual peak flood of each site (m3/s)
S. No.  Name  Mean annual peak flood (m3/s)
1 Basantpur 13161.1
2 Ghatora 715.52
3 Jhondhra 5024
4 Kantamal 8152.24
5 Kesinga 6540.83
6 Kotni 1918.532
7 Rajim 3466.157
8 Ramnidhi 3831.076
9 Rampur 1658.969
10 Salebata 2372.672
11 Sigma 4066.924
12 Sundargarh 2709.873
13 Tikarpada 18466.14
The new data are obtained by dividing each site's data by the site mean; they are shown in Appendix B.
The L-moments are then calculated for the whole region and the regional parameters are estimated for
the selected distribution.
The total number of data samples for the whole region = 379
Mean annual peak flood (dimensionless, after scaling) = 1.000003
The probability weighted moments are computed, and the first four L-moments are
L1 = 1.000003, L2 = 0.355577, L3 = 0.066369, L4 = 0.048527
The L-moment ratios are
L-coefficient of variation (L-CV), (τ ) = 0.355576
L-coefficient of skewness, L-skewness (τ 3) = 0.186653
L-coefficient of kurtosis, L-kurtosis (τ 4) = 0.136473
The regional parameters for the GEV distribution are calculated from these regional average L-moments.
The GEV distribution has been identified as the robust distribution for the study area. The form of the
regional frequency relationship (growth curve) for the GEV distribution is expressed as:

$$y_T = u + \frac{\alpha}{K}\left[1 - \{-\ln(1 - 1/T)\}^{K}\right] \qquad (5.2)$$

where K is the regional shape parameter and u and α are the regional location and scale parameters.
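A minimal sketch of how growth factors y_T can be computed from equation (5.2) (Python with NumPy). The regional parameter values shown are placeholders consistent in rough magnitude with the regional L-moments, not the values estimated in the thesis.

```python
import numpy as np

def gev_growth_factor(T, u, alpha, k):
    """Growth factor y_T from the regional GEV parameters (eq. 5.2 form)."""
    return u + (alpha / k) * (1.0 - (-np.log(1.0 - 1.0 / T)) ** k)

if __name__ == "__main__":
    # placeholder regional parameters (the thesis values are not reproduced here);
    # multiplying y_T by a site's mean annual peak flood gives its T-year flood
    u, alpha, k = 0.71, 0.50, -0.026
    for T in [2, 5, 10, 25, 50, 100, 200, 500, 1000]:
        print(T, round(gev_growth_factor(T, u, alpha, k), 3))
```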
Substituting the values of the regional parameters in the above equation, the regional flood frequency
relationship for estimation of floods of various return periods for the catchments of the Mahanadi river
system is evaluated, and the results are shown in Table 5.5.3.
Table 5.5.3 Values of the peak flood for various return periods
Return period, T (years)  Growth factor  Flood estimate for T-year return period (m3/s)
2     0.881387   5030.604
5     1.448740   8268.828
10    1.824377   10412.81
25    2.184698   12469.38
50    2.298996   13121.75
100   2.651096   15131.4
200   3.000596   17126.2
500   3.348820   19113.73
1000  3.808242   21735.92
1) Regional flood frequency analysis has been carried out based on L-moments approach,
considering the annual maximum peak flood data of 15 locations of the Mahanadi river
basin. The data collected were tested for independence in space and time. Four tests were
performed to test the hypothesis of independence and the data was found to be
independent and random.
2) The discordancy measure (Di) test was carried out and it was found that the data of the sites are
not discordant. The outlier test was performed to remove the outliers existing in the data; the
outliers so identified were removed and the remaining data were used for further analysis.
3) The L-moment based homogeneity test, namely the heterogeneity measure, shows that the
region is heterogeneous. Hence two of the sites, Baronda and Kurubhata, are removed based
on the statistical properties (L-moment ratios), and the remaining sites are found to be
homogeneous.
4) For estimation of floods of various return periods for the catchments of the study area the
developed regional flood frequency relationship is used and the flood estimates for
various return periods are calculated.
CHAPTER VI
Conclusions
• The annual maximum discharge data collected from the Office of the Central Water
Commission has been tested for independence and made free from extreme values.
• The flood frequency analysis is carried out using L-moments, and the L-moment ratio
diagram is helpful in easy identification of the frequency distribution.
• The GEV distribution has been identified as the robust distribution for the study area and is
used to derive the regional flood frequency formula. The peak floods for various return
periods are calculated.
• The advantages of using this method have been clearly demonstrated in this study. The
method can be easily employed for both at-site and regional flood frequency analysis.
REFERENCES
1. Bobee, B., (1975). “The log Pearson type 3 distribution and its applications in hydrology”.
Water Resour. Res., 11(5): 681-689.
2. Haktanir, T., (1992). “Comparison of various flood frequency distributions using annual
flood peaks data of rivers in Anatolia”. J. Hydrol., 136: 1-31.
3. Hosking, J.R.M., (1990). "L-moments: analysis and estimation of distributions using linear
combinations of order statistics". J. Royal Stat. Soc. B, 52: 105-124.
4. Hosking, J.R.M. (1985). The theory of probability weighted moments. Res. Rep. RC12210,
IBM Res., Yorktown Heights, N.Y.
5. Hosking, J.R.M., and Wallis, J.R., (1997). Regional Frequency Analysis: An Approach Based on
L-moments. Cambridge University Press, Cambridge, U.K.
6. Kumar, R., Chatterjee, C., Kumar, S., Chani, A. K. L., and Singh, R. D., (2003).
“Development of regional flood frequency relationships using L-moments for middle
Ganga Plains subzone 1(f) of India”. Water Resources Management.,17:243-257.
8. Rao, A. R., and Hamed, K. H., (2000). Flood Frequency Analysis. CRC Press, Boca Raton,
Florida.
9. Singh, V. P., Rajagopal, A. K. and Singh, K., (1986). “Derivation of some frequency
distributions using the principle of maximum entropy (POME)”. Adv. Water Resources.,
9: 91-106.
10. Stedinger, J.R., Vogel, R.M., and Foufoula-Georgiou, E., (1992). "Frequency analysis of extreme
events". In: Handbook of Hydrology, (ed.) D.R. Maidment, McGraw-Hill, New York.
11. Subramanya, K., (1999). Engineering Hydrology. Tata McGraw-Hill Publishing Company
Limited, New Delhi, India.
12. Wiltshire, S.E., (1986a). Regional flood frequency analysis I: Homogeneity Statistics.
Hydrological Sciences Journal, 31, 321-333
13. Vogel, R. M., and Wilson, I., (1996). “Probability distribution of annual maximum, mean,
and minimum stream flows in the United States”. J. Hydrologic Engg., 1(2) :69-76.
APPENDICES
1982 275 1990 453.6 1998 1259
1983 529.5 1991 1033 1999 658.2
1984 588 1992 579.8 2000 220.8
1985 729 1993 1315 2001 749.5
1986 478 1994 2276 2002 383.2
1987 680 1995 436.5 2003 449
1983 1655 1992 17569 2001 13200
1984 4142 1993 2753 2002 1988
1985 8655 1994 11734 2003 10814
1986 3707 1995 6638
1987 2484 1996 2943
1981 1681 1991 6122 2001 713.3
1982 1210 1992 5926 2002 838
1983 2680 1993 1577 2003 8449
Table A12 Yearly maximum discharge data of Salebhata
year  maximum discharge (m3/sec)  year  maximum discharge (m3/sec)  year  maximum discharge (m3/sec)
1972 764.8 1983 3864 1994 2577
1973 787.0 1984 1701 1995 4764
1974 3423 1985 2645 1996 398.3
1975 800.0 1986 3771 1997 2428
1976 2696 1987 939.0 1998 581.7
1977 1356 1988 310.0 1999 1935
1978 2352 1989 533.7 2000 114.0
1979 1547 1990 1514 2001 3225
1980 2317 1991 1560 2002 1545.0
1981 1087 1992 924 2003 7616.0
1982 14509 1993 1341
1982 2152 1991 3556 2000 2000
1983 2070 1992 1896 2001 3200
1984 3409 1993 1208 2002 2069
1985 3965 1994 2587 2003 1338
1986 1710 1995 850.3
Appendix B: Yearly maximum discharge data of 13 sites of Mahanadi river basin after dividing
each one of them by the site mean.
1980 1.665514 1991 1.068224 2002 0.258489
1981 0.935332 1992 1.245489 2003 2.514076
1983 0.252569 1993 0.729861 2003 0.46797
1984 0.637371 1994 1.430159
Table B8 Yearly maximum discharge data of Ramnidhi
year  maximum discharge  year  maximum discharge  year  maximum discharge
1971 1.503496 1982 1.199143 1993 0.328629
1972 1.223679 1983 0.311923 1994 1.670813
1973 1.194183 1984 1.879893 1995 0.795078
1974 0.822225 1985 1.289457 1996 1.176434
1975 2.842287 1986 0.906535 1997 0.212552
1976 1.421535 1987 1.737896 1998 1.092906
1977 1.514459 1988 0.403543 1999 0.4487
1978 0.706591 1989 0.239098 2000 0.242961
1979 0.613145 1990 0.344029 2001 0.792729
1980 1.601902 1991 1.38447 2002 0.188564
1981 0.95613 1992 0.198117 2003 1.75695
1977 0.571508 1988 0.130654 1999 0.815537
1978 0.991288 1989 0.224936 2000 0.048047
1979 0.652008 1990 0.6381 2001 1.359228
1980 0.976537 1991 0.657487 2002 0.651165
1981 0.458134 1992 0.389435 2003 3.209886
1982 6.115052 1993 0.565186
Table B13 Yearly maximum discharge data of Tikarapada
year  maximum discharge  year  maximum discharge  year  maximum discharge
1972 0.537655 1983 1.626873 1994 1.195055
1973 1.116912 1984 1.339048 1995 0.857301
1974 0.91362 1985 1.295401 1996 0.646049
1975 1.230092 1986 1.429159 1997 1.003785
1976 1.425531 1987 0.528374 1998 0.824538
1977 1.55382 1988 0.518518 1999 0.515756
1978 1.671333 1989 0.354108 2000 0.250838
1979 0.770439 1990 1.078625 2001 1.445893
1980 1.425531 1991 0.860117 2002 0.66641
1981 0.810187 1992 1.087669
1982 1.057505 1993 0.963928