
GLUE Celebratory Workshop

GLUE: 20 years on
27th June - 28th June 2012
Lancaster Environment Centre,
Lancaster University

2012 marks the 20th anniversary of the first GLUE (Generalised Likelihood
Uncertainty Estimation) paper, published by Beven and Binley in 1992, which has just
passed 1,000 citations on the Web of Science. The GLUE methodology has been
controversial; viewed by some as simply wrong, by others as an earlier version of
Approximate Bayesian Computation, and by others as a useful way of trying to reflect
the impacts of epistemic errors on complex error structures in environmental
modelling. This workshop will review the way in which the GLUE controversy has
illuminated the debate about how to assess uncertainty in environmental models, the
philosophy that underlies the GLUE methodology and examples of using GLUE in
practice.

Session 1 - Wednesday 27th June

13.30-14.00 Registration and refreshments


Training Rooms 1 & 2, Gordon Manley Building

14.00 Looking back on 20 years of GLUE controversy


Keith Beven, Lancaster University, UK

14.30 The 1992 Gwy case study revisited


Andy Binley, Lancaster University, UK

14.50 On the use of innovative post-event data for reducing uncertainty in calibrating
flood propagation models
Angela Candela, University of Palermo, Italy

15.10-15.30 Refreshments

15.30 A flow-duration curve strategy for gauging ungauged catchments


Ida Westerberg, Uppsala University, Sweden

15.50 Limits of acceptability and complex error reconstruction in a rainfall-runoff simulation


Phil Younger, University of Wisconsin, USA
16.10 What have 20 years of GLUE brought to flood frequency estimates?
Sarka Blazkova, T G Masaryk Water Research Institute, Prague, Czech Republic

16.40 Parameter estimation with non-concomitant time series


Bettina Schaefli, Delft University of Technology, The Netherlands

17.00 Close

19.30 Celebratory Meal


The Sultan, Brock Street, Lancaster

Session 2 – Thursday 28th June

09.30 Ensemble modelling and data uncertainty within GLUE


Tobi Krueger, University of East Anglia, UK

09.50 Does increased hydrochemical model complexity decrease robustness?


Chiara Medici, Universidad Politecnica de Valencia, Spain

10.10 Key insights for understanding and reducing uncertainty in hydrological modelling
Georges-Marie Saulnier, Université de Savoie, France

10.30 Coffee and posters

11.00 Informal likelihoods, consistently wrong models and prediction uncertainty


Paul Smith, Lancaster University, UK

11.20 Hydrological models as web services: the consequences for model development
and uncertainty analysis
Wouter Buytaert, Imperial College, London, UK

11.40 Understanding the main uncertainties in hydrological ensembles of RCM predictions for large catchments in the UK
Jim Freer, University of Bristol, UK

12.00-12.30 Final discussion: Does GLUE have a future?

12.30-13.30 Lunch and close


Abstracts
Session 1

Looking back on 20 years of GLUE controversy


Keith Beven
Lancaster University, UK

Although the first GLUE paper was published in 1992, my work on exploring the results of
Monte Carlo realizations of models actually goes back to 1980, when I was working at the
University of Virginia. George Hornberger, who had helped to develop the Generalised
Sensitivity Analysis based on behavioural/non-behavioural splits, was a colleague there.
Applying this to hydrological models revealed issues of equifinality and complex parameter
interactions. It was then but a short step to using an ensemble of behavioural models in
prediction, weighted by some likelihood measure. The question – as now – was what likelihood
measure to use… In 1992 we suggested a few alternatives. We also tried formal statistical
methods (long before being criticized for not using formal likelihoods!) but it was clear that
these led to over-conditioning of the likelihood surface given what we now refer to as epistemic
sources of error. So the likelihood measure and the behavioural threshold were user choices (but
hopefully conforming to common sense!), which led to the criticism of GLUE being too subjective
by Mantovan and Todini, Stedinger and Vogel, and others. Some of these criticisms were
misguided for the type of non-ideal problem GLUE is intended to deal with, as explained in the
2006 Manifesto for the Equifinality Thesis paper and the 2008 ‘So why would a modeller choose
to be incoherent?’ paper. More recently we have tried to make GLUE more objective by defining
reasoned limits of acceptability independent of any model run. This has also led to more careful
consideration of epistemic uncertainties in the data and the identification of disinformation.
Many issues about the real information content in calibration data remain, however, and these
will be discussed.
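The steps described above – Monte Carlo sampling of the parameter space, a behavioural/non-behavioural split at a user-chosen likelihood threshold, and likelihood-weighted prediction bounds from the behavioural ensemble – can be sketched with a toy model. The exponential "model", the prior ranges and the Nash–Sutcliffe threshold below are illustrative assumptions, not the setup of the original 1992 study:

```python
import numpy as np

rng = np.random.default_rng(42)

def model(params, n_steps=100):
    # Toy stand-in for a rainfall-runoff model: a decaying recession curve.
    # params[0] is an amplitude, params[1] a recession rate.
    t = np.arange(n_steps)
    return params[0] * np.exp(-params[1] * t / n_steps)

# Synthetic "observations" from known parameters plus noise
obs = model(np.array([2.0, 3.0])) + rng.normal(0.0, 0.05, size=100)

# 1. Monte Carlo sampling of the parameter space from uniform priors
samples = rng.uniform([0.5, 1.0], [4.0, 5.0], size=(5000, 2))

# 2. An informal likelihood measure for each realisation (here the
#    Nash-Sutcliffe efficiency; other measures are possible)
def nse(sim, obs):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

scores = np.array([nse(model(p), obs) for p in samples])

# 3. Behavioural / non-behavioural split at a subjective threshold
threshold = 0.7
behavioural = scores > threshold
weights = scores[behavioural] - threshold
weights /= weights.sum()

# 4. Likelihood-weighted prediction quantiles from the behavioural ensemble
sims = np.array([model(p) for p in samples[behavioural]])

def weighted_quantile(values, w, q):
    order = np.argsort(values)
    cum = np.cumsum(w[order])
    return values[order][min(np.searchsorted(cum, q), len(values) - 1)]

lower = np.array([weighted_quantile(sims[:, t], weights, 0.05) for t in range(100)])
upper = np.array([weighted_quantile(sims[:, t], weights, 0.95) for t in range(100)])
```

The subjective choices the abstract refers to appear explicitly here: the likelihood measure in step 2 and the behavioural threshold in step 3 are both user decisions.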

The 1992 Gwy case study revisited


Andy Binley
Lancaster University, UK

The original Beven and Binley 1992 paper illustrated the GLUE approach through a series of
rainfall-runoff simulations for the Gwy sub-catchment of the River Wye
catchment in central Wales. The simulations were based on the Institute of Hydrology (now
Centre for Ecology and Hydrology) distributed model, IHDM. At the time, carrying out Monte
Carlo simulations with IHDM was significantly constrained by computational resources, forcing
us to use parallel computing facilities (a relatively new technology in the environmental sciences
in the early 1990s). Given these constraints, we limited the exploration of likelihood functions
and Monte Carlo sampling. Here we revisit the case study and examine the validity of the
relatively small number of realisations in the original study: 500 realisations 20 years ago
demanded significant computing power; now we can compute 500,000 realisations on a desktop
PC. We also compare a number of likelihood functions and examine potentially more realistic
measures that account for uncertainty in observations (rainfall and discharge).
On the use of innovative post-event data for reducing uncertainty in
calibrating flood propagation models
Angela Candela¹, Giuseppe T. Aronica² and Susanna Naso²
¹University of Palermo, Italy
²University of Messina, Italy

Hydraulic models for describing flood propagation are an essential tool in many fields and are
used, for example, for flood hazard and risk assessments, evaluation of flood control measures,
etc. However, the calibration of these models is still underdeveloped in contrast to that of other
models, such as hydrological models, essentially for lack of specific data: extreme flood events
occur rarely and are very rarely monitored. Very often the calibration data, when available,
consist of water depths measured at a few scattered points.

For an inundation event that occurred in November 2011 in Sicily, new sources of data were
available in the form of many videos recorded by ‘ordinary’ people using new
technologies. These videos allowed flow velocities to be derived and flow discharges to be
estimated in some parts of the inundated area. These pieces of information have been used
together with the measured water depths to improve the GLUE calibration of a two-dimensional
finite element flood propagation model and to reduce equifinality in its predictions.

A flow-duration curve strategy for gauging ungauged catchments


Ida K. Westerberg¹, Keith J. Beven² and Jan Seibert³
¹Uppsala University, Sweden
²Lancaster University, UK
³Stockholm University, Sweden

A few discharge measurements may contain a lot of the information that is needed to calibrate a
hydrological model. Taking a few measurements could therefore be a good strategy for reducing
model predictive uncertainties in ungauged catchments. Guidance is needed regarding how
many water level and discharge measurements are required, and how uncertainties in these
measured data can be accounted for in model calibration.

In this study we investigate how far the installation of a water-level recorder together with a few
discharge measurements can reduce the simulated uncertainty at an ungauged site. We used
observed water-level and discharge measurements from the Brue catchment in England
together with simulated discharge at an hourly time step from TOPMODEL. The observed
discharge and water-level data were used to estimate the flow-duration curve for the period of
water-level record, which was then used to calibrate the model in GLUE accounting for
observational uncertainties in discharge. We investigated how the simulated uncertainty was
constrained depending on both the number of discharge measurements within different flow
intervals and the length of the water-level record.
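The flow-duration curve used in this strategy summarises a discharge record as exceedance quantiles, so a model can be evaluated against a handful of FDC points rather than a full concurrent hydrograph. A minimal sketch of the idea follows; the lognormal flows, the three evaluation probabilities and the nominal ±30% acceptability bounds are illustrative assumptions, not the Brue/TOPMODEL setup of the study:

```python
import numpy as np

def flow_duration_curve(q, exceedance_probs):
    # Discharge exceeded with the given probability: the p exceedance
    # flow is the (1 - p) quantile of the record.
    return np.quantile(q, 1.0 - exceedance_probs)

rng = np.random.default_rng(1)

# Hypothetical hourly "observed" discharge for one year
q_obs = rng.lognormal(mean=0.0, sigma=1.0, size=8760)

# Evaluate the FDC at a few points, e.g. where discharge was gauged
probs = np.array([0.05, 0.50, 0.95])   # high, median and low flows
fdc_obs = flow_duration_curve(q_obs, probs)

# A candidate model run is kept as behavioural if its FDC falls within
# uncertainty bounds around the observed points (a nominal +/-30% here)
q_sim = rng.lognormal(mean=0.05, sigma=1.0, size=8760)
fdc_sim = flow_duration_curve(q_sim, probs)
behavioural = np.all(np.abs(fdc_sim - fdc_obs) <= 0.3 * fdc_obs)
```

In the study itself the bounds come from observational uncertainty in the discharge measurements rather than a fixed percentage.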
Limits of acceptability and complex error reconstruction in a rainfall-runoff simulation
Philip M. Younger¹, Keith J. Beven² and Jim E. Freer³
¹University of Wisconsin, USA
²Lancaster University, UK
³University of Bristol, UK

We explore the extended Generalised Likelihood Uncertainty Estimation (GLUE) methodology,
in which ‘limits of acceptability’ are specified for each time step of a model prediction. Any
model whose predictions fall within these limits is considered to be ‘behavioural’. The
standardisation method is used to determine which models are ‘behavioural’, and its
genesis and application are shown. This method also provides an
opportunity to use the information contained in the scaled deviations of the behavioural models
to reconstruct predicted flows that are closer to the observed than the original model outputs
(in a way analogous to a statistical model inadequacy function, but one which allows for non-
stationary structure in the residuals). In this first application of the reconstruction methodology,
it is shown how nonstationarities and nonlinearities in the deviations can be taken into account
by means of a classification of the hydrograph characteristics.
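The standardisation referred to above can be sketched as scaling each deviation by the width of its acceptability limit, so that -1 and +1 always correspond to the lower and upper limits regardless of how wide or asymmetric they are. The observations and the -20%/+30% limits below are invented for illustration:

```python
import numpy as np

# Hypothetical observed flows with asymmetric limits of acceptability
# around each time step (in practice these might come from rating-curve
# or other observational uncertainty)
obs = np.array([1.2, 3.5, 7.9, 5.1, 2.4])
lower = 0.8 * obs
upper = 1.3 * obs

def standardised_score(sim):
    # Scale deviations so that -1 and +1 map onto the lower and upper
    # limits, whatever their (possibly asymmetric) widths
    return np.where(sim >= obs,
                    (sim - obs) / (upper - obs),
                    (sim - obs) / (obs - lower))

def is_behavioural(sim):
    # A model is behavioural only if every time step lies inside its limits
    return bool(np.all(np.abs(standardised_score(sim)) <= 1.0))

sim_inside = 1.1 * obs    # within the +30% upper limits everywhere
sim_outside = 1.5 * obs   # breaches the upper limits everywhere
```

The scaled scores of the behavioural models are exactly the quantities the reconstruction step then exploits.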

What have 20 years of GLUE brought to flood frequency estimates?


Sarka Blazkova¹ and Keith Beven²
¹T G Masaryk Water Research Institute, Prague, Czech Republic
²Lancaster University, UK

…obviously more and more problems. As we look at the estimation process from the other side,
i.e. “how much it cannot be” instead of the traditional “how much it is”, the rate at which models
are rejected grows just a little faster than our computational and measurement power.

Flood frequency analysis requires the estimation of N-year floods, i.e. the main criteria are those
on the flood exceedance curve. Nevertheless, we would also like to have the flow duration
correct, the snow criteria correct, etc. We do not want our model to be wrong at all. And we have
good reasons for that: global change, for example.

Fortunately, at least some of us would also like our models to be useful. Everything is a question
of measure.

The contribution will give a history of continuous simulation within the GLUE methodology,
starting with running through Lancaster University in the evening hours to start the flood
frequency version of TOPMODEL on every idle computer, up to running series 100,000 years in
length at an hourly time step on a cluster of several dozen PCs (which takes weeks or months,
anyway).

Parameter estimation with non-concomitant time series


Bettina Schaefli
Ecole Polytechnique Fédérale de Lausanne, Switzerland

Rainfall-runoff models should not be run with input time series that are significantly different
from those used to estimate the model parameters. This well-known problem
affects the reliability of hydrologic predictions in many classical simulation set-ups,
e.g. for climate change impact studies, real-time flood forecasting using climate model
outputs, or design flood estimation with weather generators. A potential way to overcome
this limitation is the calibration of hydrological models on reference time series (e.g. discharge)
that are non-concomitant with the input time series (precipitation, temperature). This
contribution summarizes existing attempts (e.g. calibration on signatures or spectral
calibration) and discusses open questions.
Abstracts
Session 2

Ensemble modelling and data uncertainty within GLUE


Tobias Krueger¹, Jim Freer² and John Quinton³
¹University of East Anglia, UK
²University of Bristol, UK
³Lancaster University, UK

The consideration of multiple model structural hypotheses and the explicit incorporation of data
uncertainty estimates in performance metrics have always been inherent in the GLUE
methodology. However, these aspects have only recently begun to be realised, as computing
power became available to run an ensemble of structures in parallel and field experiments
were conducted to quantify data uncertainty. In this paper we review the first developments in
these areas which include significant contributions of our own such as the first realisation of
ensemble modelling within GLUE. Parallels to and distinctions from other uncertainty
methodologies are discussed.

Our studies are concerned with rainfall-runoff, soil erosion, sediment and phosphorus transport
modelling (Krueger et al., 2009; 2010a; 2010b; 2012; Quinton et al., 2011) and provide
examples of: ensemble modelling via flexible model structures; model comparison; input
uncertainty propagation; model evaluation against fuzzy data uncertainty estimates. Our focus
of discussion is the rigour of model hypothesis testing when all sources of uncertainty are
considered.

Does increased hydrochemical model complexity decrease robustness?
C. Medici¹, F. Francés¹ and A. J. Wade²
¹Universitat Politècnica de València, Spain
²University of Reading, UK

The aim of this study was to analyse whether additional model complexity gives a better
capability to model the hydrology and nitrogen dynamics of a small Mediterranean forested
catchment, or whether the additional parameters cause over-fitting. Three nitrogen models of
varying hydrological complexity (LU4-N, LU4-R-N and SD4-R-N) were considered. For each
model, general sensitivity analysis (GSA) and Generalized Likelihood Uncertainty Estimation
(GLUE) were applied, each based on 100,000 Monte Carlo simulations. The results show that as
the complexity of a hydrological model increases, over-parameterisation occurs, but the
converse is true for a water
quality model where additional process representation leads to additional acceptable model
simulations. Water quality data help constrain the hydrological representation in process-based
models. Increased complexity was justifiable for modelling river-system hydrochemistry.
Key insights for understanding and reducing uncertainty in
hydrological modelling
W. Castaings and G-M. Saulnier
Université de Savoie, France

Models are essential in order to understand, reproduce, forecast and control the behaviour of
hydrological systems. However, since our imperfect models are forced and constrained by
indirect and uncertain observation data, sensitivity analysis is essential for multiple motivations
and stages characterising the development and use of models. Sensitivity analysis aims at
instructing the modeller as to the relative importance of the uncertain inputs in determining the
variable(s) of interest. It is a valuable and impartial step, carried out to assess and improve
parameter identifiability, understand and corroborate the model structure, and guide future
monitoring and modelling efforts. However, no single sensitivity analysis method is best under
all circumstances and the space of uncertain inputs to be explored should be carefully specified.
The problems listed above will be illustrated through the application of variational methods (i.e.
the adjoint technique) and sampling-based approaches to hydrological models adopting a
mechanistic approach or based on the concept of hydrological similarity.

Informal likelihoods, consistently wrong models and prediction uncertainty


Paul Smith
Lancaster University, UK

Many of the informal likelihoods used within GLUE have been based on a sum-of-squared-errors
criterion, possibly applied to transformed data. Such criteria trade off maximising the linear
correlation and matching the first two moments of the model simulation and the observed data.
For a given model, the conditioned parameter space resulting from a GLUE analysis can be
thought of as a subjective probability distribution representing the modeller’s belief about the
relative probability of various parameter combinations. However, the interpretation of the
predictive distribution given by the simulated model output for future events weighted by these
posterior odds, and its validation against observed data, is not clear.

The method presented addresses the interpretation of the predictive distribution. It attempts to
balance the desire to fit the observed data against the risk of over-fitting. The method is
motivated by the idea that in hydrologically similar situations the model residuals should be in
some sense consistent. Assessment of this consistency is clouded by error sources, since
hydrological models are only approximations of real-world systems which are typically driven
by, and assessed against, data that may not be representative at the model scale and are subject
to observation error. By using clustering techniques to relate the model states (which indicate
hydrological similarity in terms of the model), the observed data and the residuals of the model
fit, it is shown that useful measures of model performance can be derived. The use of these
measures to formulate an informal likelihood and to derive predictive distributions of future
observations is demonstrated and critiqued.
Hydrological models as web services: the consequences for model
development and uncertainty analysis
Wouter Buytaert and Claudia Vitolo
Imperial College, London, UK

We are entering the era of "big data science". The internet opens up a collection of data sets that
are so large and heterogeneous that they become awkward to work with. New algorithms,
methods and models are needed to filter these data to find trends, test hypotheses and make
predictions. This provides exciting challenges for environmental sciences, and hydrology in
particular. Web-enabled models will make it possible to process continuous streams of satellite
images, online sensors and observations, and to provide tailored products for a variety of end-
users. However, in order to do so we need to reconsider the ways that models are built and
results are presented. This paper will give a broad-brush view of the consequences of this
evolution for model development and uncertainty analysis. It is rooted in recent evolutions
studied in the UK Virtual Observatory pilot and related projects.

Understanding the main uncertainties in hydrological ensembles of RCM predictions for large catchments in the UK
Freer, J.E.¹, Clark, M.², Odoni, N.¹, Coxon, G.¹, McMillan, H.³, Souvignet, M.¹, Cloke, H.⁴,⁵, Wetterhall, F.⁵, Pappenberger, F.⁵, Bloomfield, J.⁶, Greene, S.⁷, Johnes, P.⁷, MacLeod, C.⁸ and Reaney, S.⁹
¹University of Bristol, UK
²National Center for Atmospheric Research, USA
³National Institute of Water and Atmospheric Research, New Zealand
⁴Kings College London, UK
⁵European Centre for Medium-Range Weather Forecasts, UK
⁶British Geological Survey, UK
⁷University of Reading, UK
⁸The James Hutton Institute, UK
⁹Durham University, UK

In this paper we explore the potential for an ensemble of hydrological model structures coupled
to Regional Climate Models to predict current and future changes in river flows. We develop this
approach for a large number of catchments in the UK using the FUSE (Framework for
Understanding Structural Errors) modelling framework. This ensemble of models is driven by
observed data and Regional Climate Model (RCM) predictions to understand differences in
predictions of hydrological responses. We include uncertainties by allowing for multiple model
structures and parameters in FUSE and by using an ensemble of RCM outputs driven by ERA40
re-analysis boundary conditions. We discuss whether such a modelling system can make useful
predictions regionally, and how much the predictions overlap when such uncertainties in
climate predictions are included.
Posters

A decade of flood extent uncertainty assessment in the GLUE framework in Luxembourg - Lessons learned and where to go from here
Giustarini L., Hostache R., Matgen P. and Pfister L.
Centre de Recherche Public - Gabriel Lippmann, Luxembourg

Flood inundation models play a central role in both real-time flood forecasting and in floodplain
mapping. A full understanding of the model and the uncertainty in the model strategy is
therefore fundamental for many operational applications. The GLUE methodology can be used
in these cases as a tool for model calibration and uncertainty analyses.

In this poster, a review of the most significant applications of the GLUE methodology to the
hydraulic model of the Alzette River in Luxembourg over the last decade is reported.
The paper by Pappenberger et al. (2007) focused on the subjectivity in the calibration of flood
inundation models. They suggested the use of performance measures directly related to the
vulnerability of specific locations. To partially mitigate the issue of over-fitting and obtain
better results at the locations of interest, the calibration was performed in the framework of the
GLUE methodology. The main outcome of their work was that, to support flood hazard
management decisions with imperfect models, the calibration can be weighted in favour of the
vulnerability of the targeted structures, incorporating it directly into the calibration process.

In the study of Pappenberger et al. (2006), the GLUE methodology was applied to
establish, for the Alzette catchment, the sensitivity of a flood inundation model to uncertainty in
the upstream boundary condition and in the bridges within the modelled region. Different
likelihood measures were used within the GLUE framework: the choice of the appropriate
measure was linked to the available dataset (i.e. radar satellite images, photographs, high water
marks, water level hydrograph, travel time); however, all datasets were used in a fuzzy
membership framework.

The study of Schumann et al. (2008a) investigated the utility of uncertain remotely sensed
water stages for evaluating uncertain flood inundation predictions. A SAR image of the Alzette
2003 flood enabled hydraulic analyses with spatially distributed water stage data. Applying the
concept of the extended GLUE methodology, behavioural models were required to fall within
the uncertainty range of the remotely sensed water stages. It was shown that, in order to
constrain model parameter uncertainty and, at the same time, increase parameter identifiability
as much as possible, models needed to satisfy the behavioural criterion at all locations. It
follows that it is necessary not only to evaluate models at a large number of locations using
observational error ranges, but also to examine where the model would require additional
degrees of freedom to achieve low model uncertainty at every location.

In follow-up work, Schumann et al. (2008b) argued that uncertainty in measurement should
be viewed as central to remote sensing. In their work, the uncertainty associated with water
stages derived from a single SAR image of the Alzette 2003 flood was assessed using a stepped
GLUE procedure. The main uncertain input factors to the SAR processing chain used to estimate
water stages included geolocation accuracy, spatial filter window size, image thresholding
value, DEM vertical precision and the number of river cross sections at which water stages are
estimated. For the GLUE analysis, a Nash-like efficiency criterion adapted to spatial data was
proposed, whereby acceptable SAR model simulations were required to outperform a simpler
regression model based on the field-surveyed average river bed gradient.
Weighted CDFs for all factors based on the proposed efficiency criterion allowed the generation
of reliable uncertainty quantile ranges and 2D maps showing the uncertainty associated with
SAR-derived water stages. The stepped GLUE procedure demonstrated that not all field data
collected are necessary to achieve maximum constraining. A possible efficient way to decide on
relevant locations at which to sample in the field was also proposed. It was also concluded that
the resulting uncertainty ranges and flood extent or depth maps may be used to evaluate 1D or
2D flood inundation models in terms of water stages, depths or extents. For this, the extended
GLUE approach, which copes with the presence of uncertainty in the observed data, may be
adopted.

The GLUE approach has been successfully applied to the hydraulic model of the Alzette River in
Luxembourg, proving to be a helpful tool in supporting flood hazard management decisions,
assessing parameter uncertainty, calibrating the model against remotely sensed and spatially
distributed datasets, and deriving uncertainty flood maps.

REFERENCES

F. Pappenberger, K. Beven, K. Frodsham, R. Romanowicz and P. Matgen, Grasping the unavoidable
subjectivity in calibration of flood inundation models: A vulnerability weighted approach, Journal of
Hydrology, (333), pp. 275-287, 2007.

F. Pappenberger, P. Matgen, K. J. Beven, G.-B. Henry, L. Pfister and P. de Fraipont, Influence of
uncertain boundary conditions and model structure on flood inundation predictions, Advances in
Water Resources, (29), pp. 1430-1449, 2006.

G. Schumann, M. Cutler, A. Black, P. Matgen, L. Pfister and L. Hoffmann, Evaluating uncertain flood
inundation predictions with uncertain remotely sensed water stages, Int. J. River Basin Management,
vol. 6, no. 3, pp. 187-199, 2008a.

G. Schumann, F. Pappenberger and P. Matgen, Estimating uncertainty associated with water stages
from a single SAR image, Advances in Water Resources, (31), pp. 1038-1047, 2008b.
