Estimating uncertainty in future assessments of climate change

Peter Challenor

2006

Tyndall Centre for Climate Change Research

Technical Report 50

FINAL PROJECT REPORT FOR T2.13:

Estimating uncertainty in future assessments of climate change.
Overview of project work and outcomes

Non-technical summary
Project T2.13 has studied the role of uncertainty in predictions of climate change using a variety of statistical techniques, including methods developed for the analysis of the outputs of complex computer programs and techniques developed under T2.13 specifically for use in climate science. We have also challenged the view that probability can represent all climate uncertainties, and have demonstrated alternative approaches. We have produced a suite of publicly available software for uncertainty analysis, published numerous peer-reviewed papers, and set out two novel frameworks for the assessment of uncertainty in fields such as climate science. We have used this software to look at the probability of an abrupt or rapid climate change in the climate model used in the Tyndall Integrated Assessment Model. We have also analysed uncertainties in econometric predictions.

Objectives
Project T2.13 incorporates several sub-objectives:

• To take recently developed Bayesian statistical methods for the analysis of complex computer models, develop them further, apply them to climate models (in particular to components of the Integrated Assessment Model), and infer probabilistic statements from them

• To establish whether extreme outcomes can be estimated using Bayesian techniques

• To demonstrate methods of handling uncertainty in the less quantitative areas of integrated assessment, including the use of imprecise probabilities and of fuzzy sets to represent the linguistic information associated with socio-economic scenarios

• To provide users of the Tyndall integrated assessment framework with tools that allow them to assess uncertainties in their predictions, and to provide statistical advice on an ad hoc basis

Work undertaken
We used our newly developed methods to generate more reliable estimates of the uncertainty present in current climate predictions, and produced a suite of software that makes these calculations straightforward. These methods have been applied to estimating the probability of an abrupt or rapid climate change caused by the collapse of the thermohaline circulation. We developed new methods that can handle imprecise probabilities (such as those provided by linguistic descriptions of socio-economic scenarios) in a mathematically rigorous manner, and challenged the majority view of the climate research community that probabilistic predictions of climate change represent the totality of uncertainty in the field. We have developed a new modelling framework in which climate models can be assessed in a statistically rigorous manner. We have applied our theoretical work to case studies in the field of climate science, including investigation of climate model uncertainty and econometric projections.

Results
The Bayesian component of T2.13 has been successful: we have developed a suite of software that applies the Bayesian methods for analysing uncertainty in computer models developed by the Sheffield partners. A peer-reviewed journal article describing the code is currently at revision stage (minor comments). We have applied this software to a range of problems in climate science. Two papers describing the application of these methods to estimating the probability of rapid climate change are in the process of being published. The probability of such a change depends upon the emission scenario chosen, but for all scenarios it is much higher than was expected, at around 0.3. The software includes quantile-based methods that assess distributions of extremes using 95th or 99th percentiles as a surrogate for "worst credible case", although we have not yet applied these quantile methods to climate science.

The modelling framework component of T2.13, carried out at Durham, has met the objective of decomposing the statistical connection between a climate model and the true climate system into more manageable components. A number of papers have been published or are in review.

Project T2.13 has met the objective of demonstrating new methods for uncertainty representation: imprecise probability concepts have been developed in which fuzzy linguistic scenarios can be incorporated into uncertainty analysis. We have demonstrated how conflicting probability distributions for climate sensitivity can be used in assessments of climate change within the framework of imprecise probabilities. In the process we have developed new theory in the field of uncertainty representation. This work is also in press. We have also assessed uncertainty in econometric models using the above-mentioned tools, as part of the fourth objective above.

Relevance to Tyndall Centre research strategy and overall Centre objectives
The IPCC’s Third Assessment Report (TAR) clearly and explicitly flags uncertainty as being of major importance in climate science, both from a technical and political perspective. Tyndall project T2.13 directly addresses this issue: we have developed new conceptual and computational tools for quantifying and reducing uncertainty, and have applied these techniques successfully to problems of climate science.

Potential for further work
All strands of T2.13 have the potential for extensive further work.

The Bayesian analysis strand, having written and tested an extensive suite of software tools, has the potential for much further work. Now that the software is mature, we expect it to be used in a wide variety of climatological contexts including uncertainty analysis, the incorporation of observational data into computer models, and identification of 'tipping points' such as MOC collapse.

The modelling framework concepts also need further work before their full potential is realized in a climate science context; the issues raised are clearly of great importance. Some problems remain unsolved: for example, we can at present only produce uncertainty distributions on univariate outputs, which has prevented us from devising a scheme that handles modular models in an efficiently modular way.

The use of imprecise methods for representation of climate uncertainties is in its infancy. Directions for further work include:

• Application of uncertainty to econometric modelling in a rigorous way
• Application to climate decisions (both mitigation and adaptation)
• Conditioning of imprecise probabilities with climate observations
• Aggregation of uncertainties
• Implementation of the imprecise probability methods in the Tyndall Community Integrated Assessment System

The broader, fourth component of T2.13, in which we provide statistical advice to Tyndall researchers, has also met with some success: two econometric models (one for long-term global energy markets and one for theoretical investigation of Kondratiev waves) have been developed, using methods drawn from the numerical solution of ordinary differential equations. We can now begin to use this software to conduct a more formal assessment of the large uncertainties inherent in medium-scale econometric forecasting.

Technical report
1 Introduction: overview
Project T2.13 splits into four themes: Bayesian statistical techniques; imprecise probability theory (fuzzy membership); development of a modelling framework; and econometric modelling. We summarize the findings of each part separately.

2 Bayesian statistical techniques
Realistic models of the Earth’s climate, including climate prediction models proposed for the Tyndall IAM, such as C-GOLDSTEIN (Marsh 2002), take many hours, or even weeks, to execute. This type of model can have tens to hundreds of free parameters, each of which is only approximately known. Consider a scenario in which a particular model has been run in the past with different sets of input parameters. The code output (here a single scalar value) is desired at a new set of parameters, at which the code has not been run; applications would include Monte Carlo simulations.

Under the Bayesian view, the true value of the code output is a random variable, drawn from a distribution that is conditioned by our prior knowledge (and in this case by the previous code runs); the computer code is thus viewed as a random function. The methodology we are using gives statistical inferences about this random function, and may be used to furnish a quick, and computationally cheap, statistical approximation to the actual computer code.

Although the code is deterministic, in the sense that running the model twice with identical inputs gives identical outputs, the Bayesian paradigm treats its output as a random variable. Formally, the code output y is represented as a function η(x, θ) of the input parameter vector x and the parameter vector θ; where no confusion can arise, we write y = η(x). Although η is known in principle, in practice it is not: C-GOLDSTEIN, for example, comprises over 10,000 lines of computer code, and the fact that the output is knowable in principle is unhelpful in practice.

It is perhaps easier to understand output from a deterministic code as a random variable if one imagines oneself at a computer terminal, waiting for a run to finish. The fact that both the code itself and its input parameters are perfectly prescribed (so that the output is in principle predetermined) does not reduce one's subjective uncertainty in the output; the Bayesian paradigm treats this as indistinguishable from the uncertainty engendered by a conventional random variable. Of course, if the code has been run before at a slightly different point in parameter space, one may be able to assert with some confidence that this particular run's output should not be too different from the last run's (and if the code is run at exactly the same point in parameter space, we can be certain that the output will be identical).

Such considerations are formalized in the Gaussian process model. A case often encountered in practice is that the values of one or more components of x are uncertain, typically because of practical difficulties of measurement (cloud albedo is a commonly-cited example). It is therefore appropriate to consider the true input configuration to be a random variable with a particular, but unknown, distribution G(x). Thus the output Y=η(X) is also a random variable and it is the distribution of Y---the `uncertainty distribution'---that is of interest.

In the case of C-GOLDSTEIN, direct Monte Carlo simulation of Y is so computationally intensive as to be impractical: a single run typically takes 24 hours, and the parameter space is 27-dimensional. The methods discussed here allow one to emulate the code: the unknown function η(.) is assumed to be a Gaussian process, and previous code runs constitute observations of it. We show here that emulation can be orders of magnitude faster than running the code itself and that, at least in the examples considered here, the emulator provides an estimate that is reasonably close to the value the code itself would produce.
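
To make the idea concrete, the following minimal sketch (in R, the language of the project software) conditions a Gaussian process on a handful of previous runs and predicts the output at an untried input. The toy simulator, the squared-exponential covariance and all numerical values are illustrative assumptions only; the project's actual implementation is the BACCO/emulator package described below, whose interface differs.

## Minimal Gaussian-process emulator sketch (illustrative only; not the
## interface of the project's BACCO/emulator package).
set.seed(1)

## Toy stand-in for an expensive simulator: scalar output of a 2-parameter code.
slow_code <- function(x) sin(3 * x[1]) + x[2]^2

## Design: n previous code runs at points X (rows), with outputs y.
n <- 20
X <- matrix(runif(2 * n), ncol = 2)
y <- apply(X, 1, slow_code)

## Squared-exponential covariance between two sets of points (rows).
cov_se <- function(A, B, len = 0.3, sig2 = 1) {
  d2 <- outer(rowSums(A^2), rowSums(B^2), "+") - 2 * A %*% t(B)
  sig2 * exp(-d2 / (2 * len^2))
}

## Emulator: posterior mean and variance of the code output at new points Xnew,
## conditional on the previous runs (zero prior mean, small jitter for stability).
emulate <- function(Xnew, X, y, nugget = 1e-8) {
  K    <- cov_se(X, X) + diag(nugget, nrow(X))
  Ks   <- cov_se(Xnew, X)
  Kinv <- solve(K)
  m    <- Ks %*% Kinv %*% y
  v    <- diag(cov_se(Xnew, Xnew) - Ks %*% Kinv %*% t(Ks))
  list(mean = as.vector(m), var = pmax(v, 0))
}

## Prediction at an untried input: effectively instantaneous, whereas the real
## code would take many hours per run.
xnew <- matrix(c(0.4, 0.7), nrow = 1)
fit  <- emulate(xnew, X, y)
c(emulator = fit$mean, truth = slow_code(xnew[1, ]), sd = sqrt(fit$var))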

In the emulator, the object of inference is the random function that is evaluated by the computer code. One may question whether this object is actually of interest, given that the model is unlikely to predict reality correctly. We note, however, that even a good model may be rendered ineffective by uncertainty in its inputs, and that quantification of the uncertainty distribution is the first step towards reducing uncertainty in the predictions.

Notwithstanding that computer models are interesting and important entities in their own right, the ultimate object of inference would nevertheless be reality. Incorporating observational data into the current Bayesian framework is discussed below.

2.1 Bayesian Calibration

As discussed above, many computer models, including climate prediction models such as C-GOLDSTEIN, take weeks or months to execute; this type of model can have tens to hundreds of free parameters, each of which is only approximately known. Notwithstanding that computer models are interesting and important entities in their own right, the ultimate object of inference is reality, not the model. Here we discuss and use a statistically rigorous method of incorporating observational data into uncertainty analyses, that of Kennedy and O'Hagan (2001) and Kennedy and O'Hagan (2001a), hereafter KOH2001 and KOH2001S respectively.

When preparing a computer model for making predictions, workers often calibrate the model by using observational data. In rough terms, the idea is to alter the model's parameters to make it fit the data.

A statistically unsound (Currin 1991) methodology would be to minimize, over the allowable parameter space, some badness-of-fit measure over the physical domain of applicability of the code.

To fix ideas, consider C-GOLDSTEIN predicting sea surface temperature (SST) as a function of latitude and longitude. The naive approach discussed above would be to minimize, over parameter space P, the squared discrepancies between observations and model predictions summed over observational space.
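
In symbols (the notation z(x_i) for the SST observation at location x_i is ours), the naive approach seeks

    θ̂ = argmin_{θ ∈ P} Σ_{i=1}^{n} [ z(x_i) − η(x_i, θ) ]²,

that is, a single "best" parameter set chosen by least squares.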

From a computational perspective, minimizing the objective function above necessitates optimization over a high-dimensional parameter space; optimization techniques over such spaces are notorious for requiring large numbers of function evaluations, which is not feasible in the current context. Also note that the method requires code evaluations at the same points as the observations, which may not be available.

Other problems with this approach include the implicit assumption that observation errors (and model evaluations) are independent of one another. This assumption is likely to be misleading for closely separated observation points and for pairs of code evaluations that use parameter sets that differ by only a small amount.

Bayesian methods are particularly suitable for cases in which the above problems are present, such as the climate model predictions considered here. KOH2001 present a methodology by which physical observations of the system are used to learn about the unknown parameters of a computer model using the Bayesian paradigm.

2.1.1 Calibration data

Computer climate models generally require two distinct groups of inputs. One group is the (unknown) parameter set about which we wish to make inferences; these are the calibration inputs. It is assumed that there is a true but unknown parameter set θ with the property that, if the model were run with these values, it would reproduce the observations up to a model inadequacy term and noise. For any single model run, it is convenient to denote the calibration parameters by θT.

The other group comprises all the other model inputs whose value might change when the calibration set is fixed, conventionally denoted by x. In the current context, this group is the latitude and longitude at which mean surface temperature is desired.

A model run is essentially an evaluation---or statistical observation---of the unknown random function η at a specified point in parameter and location space; a field observation is a statistical observation of another unknown random function ξ. This system has the notational advantage of distinguishing the true but unknown value of the calibration inputs from the values that were set as inputs when running the model.

The calibration data comprise the field observations (subject to error) and the outputs from N runs of the computer code evaluated at the points of a design matrix.

2.1.2 Model

KOH2001 suggest that ξ(x) = ρ η(x, θ) + δ(x) is an adequate representation of the relationship between the true system (of which the field data are noisy observations) and the computer model output. Here δ is a model inadequacy term that is independent of the model output, and ρ is an unknown scaling factor. It is then assumed that the observations have normal error with standard deviation λ. The true values of λ and ρ are not known and have to be inferred from the observations and the calibration dataset.
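
Written out in full (the observation-error notation e_i is ours), the statistical model for the i-th field observation z_i at location x_i is

    z_i = ξ(x_i) + e_i = ρ η(x_i, θ) + δ(x_i) + e_i,    e_i ~ N(0, λ²),

so that the data are explained jointly by the (scaled) code output, the model inadequacy δ, and observation error.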

This statistical model suggests a non-linear regression in which the computer code itself defines the regression function. KOH2001 consider the meaning of model fitting in this context and conclude that "the true value for θ has the same meaning and validity as the true values of regression parameters [in a standard regression model]".

2.1.3 Prior knowledge

The Bayesian paradigm allows the incorporation of a priori knowledge (beliefs) about the parameters of any model being used. Consider a model in which climate sensitivity λ is a free parameter. In principle, we are free to tune it to give the best fit to observational data. However, most workers would consider 0.4-1.2 (SI units) to be a reasonable range for this physical constant, in the sense that finding an optimal value outside this range would be unacceptable.

It is possible to incorporate such 'prior knowledge' into the analysis by ascribing a prior distribution to λ. One might interpret the upper and lower ends of the range as the 5th and 95th percentiles of a normal, or lognormal, distribution.
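
As a concrete sketch (in R; the 0.4-1.2 range is the one quoted above, and the lognormal is one of the two options just mentioned), the elicited range can be converted into prior hyperparameters as follows.

## Interpret a plausible range for climate sensitivity (0.4-1.2, SI units) as
## the 5th and 95th percentiles of a lognormal prior distribution.
lo <- 0.4; hi <- 1.2
z  <- qnorm(0.95)                        # ~1.645

mu    <- (log(hi) + log(lo)) / 2         # mean of log(lambda)
sigma <- (log(hi) - log(lo)) / (2 * z)   # sd of log(lambda)

## Check that the elicited range is recovered, then draw a sample for use in,
## e.g., Monte Carlo propagation through an emulator.
qlnorm(c(0.05, 0.95), meanlog = mu, sdlog = sigma)   # 0.4 1.2
prior_draws <- rlnorm(10000, meanlog = mu, sdlog = sigma)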

2.2 Design of package ‘calibrator’
The calibrator package is an integrated suite of software that is to be used alongside the emulator package. This package is considerably more conceptually involved than the emulator package, and the code includes perhaps an order of magnitude more lines.

The overarching design philosophy of the package is to be a direct and transparent implementation of KOH2001 and KOH2001S. Each equation appearing in the papers has a corresponding R function, and the notation is changed as little as possible. One reason for this approach is debugging: with such a structured programming style, it is possible to identify differences between alternative implementations. Speed penalties of this approach appear to be slight, but are noted where appropriate.

2.3 Illustrations of the use of the methods in a climate science context
The methods above are now used in four separate case studies that arise in statistical analysis of climate modelling. The first case study is a direct application of the emulator technique in which an emulator for C-GOLDSTEIN is built and used.

The second case study shows a slightly more sophisticated application in which a pair of emulators for pre-industrial and post-industrial climate (the difference being atmospheric CO2) is built and used to estimate the probability of thermohaline circulation collapse.

The third case study uses the calibrator package to show how observed data can affect the PDFs predicted by C-GOLDSTEIN, and the fourth uses Bayesian methods to assess sea level changes.

2.3.1 Illustration 1: emulation of C-GOLDSTEIN
Here we consider a dataset obtained from C-GOLDSTEIN (Edwards 2004, 2005), an Earth systems model of intermediate complexity. C-GOLDSTEIN requires 16 parameters, each of which is uncertain. Each line of the dataset consists of a (randomly chosen) set of parameters, together with the global average surface air temperature (SAT) predicted by C-GOLDSTEIN when using that set of parameters.

An emulator is built for this dataset and used to predict SATs.

Figure 1 shows observed versus predicted SATs. The emulator was "trained" on the first 27 runs, so 27 of the points are perfect predictions. The other 77 points show a small amount of error, of about 0.1 °C. Such an error is acceptable in many contexts.

Figure 1: global mean temperature as predicted by the model (x axis) and the emulator (y axis). The diagonal line indicates perfect agreement.

One interpretation of the accurate prediction of SAT would be that this output variable is well described by the Gaussian process model.

However, variables which exhibit violently nonlinear dynamics might be less well modelled: examples would include diagnostics of the thermohaline circulation (THC), which is often considered to be a bistable system, exhibiting catastrophic breakdown near some critical hypersurface in parameter space. Such behaviour is not easily incorporated into the Gaussian process paradigm, and one would expect emulator techniques to be less effective in such cases. However, our experiments with emulating analytical functions that include such step changes have shown that in general the emulator does an excellent job.

C-GOLDSTEIN takes 12 to 24 hours to complete a single run, depending on the parameters. The emulator software presented here takes about 0.1 seconds to calculate the expected value and estimated error---a saving of over five orders of magnitude.

Viewing a deterministic computer code as a Gaussian process with unknown coefficients and applying Bayesian analysis to estimate them appears to be a useful and powerful technique in the field of climate science, as it furnishes a statistical approximation to the computer model that runs in a fraction of the time.

2.3.2 Illustration 2: estimating the probability of MOC collapse
The emulator technique is now applied to provide an estimate of the probability of thermohaline circulation collapse. This work was done jointly with the NERC Rapid programme. The thermohaline circulation keeps the North Atlantic anomalously warm by carrying warm water northwards. One possible result of anthropogenic greenhouse gases is the shutdown of this circulation. We have two papers submitted on this topic. The first (Challenor and Marsh, 2006) details the methods we are using and is in review in Ocean Modelling. The second (Challenor, Hankin and Marsh, 2005) is in press in the proceedings of the 'Avoiding Dangerous Climate Change' conference held in Exeter in 2005.

Our procedure is as follows:

1. For each of the parameters of the model, specify an uncertainty distribution (a 'prior') by expert elicitation, and thereby define a prior pdf for the parameter space of the model.
2. Generate a set of parameter values that spans the parameter space of these prior pdfs, and run the climate model at each of these points to provide a calibration dataset of predicted strength of the meridional overturning circulation.
3. Estimate the parameters of the emulator from the calibration dataset, using the methods described above.
4. Sample a large number (thousands) of points from the prior pdf.
5. Evaluate the emulator at each of these points. The emulator output then gives an estimate of the pdf of the variable being emulated, from which we can calculate statistics such as the probability of falling below a specified value.

To generate our emulator in this way we need an ensemble of model runs to act as our 'training set'. We use an ensemble of 100 members in a Latin hypercube design. We first "spin up" the climate model for 4000 years to the present day (the year 2000, henceforth "present day") in an ensemble of 100 members that coarsely samples from a range of values for fifteen key model parameters (see Table 1); the remaining two parameters are only used for simulations beyond the present day. Following 3800 years of spin-up under pre-industrial CO2 concentration, the overturning reaches a near-equilibrium state in all ensemble members (see Figure 2). For the last 200 years of the spin-up, we specify historical CO2 concentrations, leading to slight (up to 5%) weakening of the overturning circulation. After the complete 4000-year spin-up we have 100 simulations of the current climate and the thermohaline circulation.

Figure 2 shows fields of mean and standard deviation in surface temperature. The mean temperature field is similar to the ensemble-mean obtained by Hargreaves et al. (2005). The standard deviations reveal the highest sensitivity to model parameters at high latitudes, especially in the northern hemisphere, principally due to differences (between ensemble members) in Arctic sea ice extent.

We obtain an ensemble of present day overturning states, with ψmax in the plausible range 12-23 Sv for 91 of the ensemble members (see Figure 2). The overturning circulation collapsed in the remaining nine members after the first 1000 years. Since we know that the overturning is not currently collapsed, we remove these from further analysis. This is a controversial point that we will return to in the discussion. We then specify future anthropogenic CO2 emissions according to each of the six illustrative "SRES" scenarios (Nakicenovic et al., 2000) (A1, A2, B1, B2, A1FI, A1T), to extend those simulations with a plausible overturning to the year 2100.

Table 1. Mean value and standard deviation for each model parameter
Parameter*                                                        Mean                          St. dev.
Windstress scaling factor                                         1.734                         0.1080
Ocean horizontal diffusivity (m^2 s^-1)                           4342                          437.9
Ocean vertical diffusivity (m^2 s^-1)                             5.811e-05                     1.428e-06
Ocean drag coefficient (10^-5 s^-1)                               3.625                         0.3841
Atmospheric heat diffusivity (m^2 s^-1)                           3.898e+06                     2.705e+05
Atmospheric moisture diffusivity (m^2 s^-1)                       1.631e+06                     7.904e+04
"Width" of atmospheric heat diffusivity profile (radians)         1.347                         0.1086
Slope (south-to-north) of atmospheric heat diffusivity profile    0.2178                        0.04215
Zonal heat advection factor                                       0.1594                        0.02254
Zonal moisture advection factor                                   0.1594                        0.02254
Sea ice diffusivity (m^2 s^-1)                                    6786                          831.6
Scaling factor for Atlantic-Pacific moisture flux (x 0.32 Sv)     0.9208                        0.05056
Threshold humidity for precipitation (%)                          0.8511                        0.01342
"Climate sensitivity" (CO2 radiative forcing, W m^-2)†            6.000                         5.000
Solar constant (W m^-2)                                           1368                          3.755
Carbon removal e-folding time (years)                             111.4                         15.10
Greenland melt rate due to global warming (Sv/°C)‡                0.01 (Low) / 0.03617 (High)   0.005793

* The first 15 parameters control the background model state. The first 12 of these have been objectively tuned in a previous study, while the last three (threshold humidity, climate sensitivity and solar output) are specified according to expert elicitation. The last two parameters control transient forcing (CO2 concentration and ice sheet melting; see Methods for details). Italics show the parameters which exert particular control on the strength of the overturning and which we varied in our experiment. For these parameters, the standard deviation was doubled in the cases with high uncertainty.

† The climate sensitivity parameter, ΔF2x, determines an additional component in the outgoing planetary long-wave radiation according to ΔF2x ln(C/350), where C is the atmospheric concentration of CO2 (units ppmv). Values for ΔF2x of 1, 6 and 11 W m^-2 yield "orthodox" climate sensitivities of global-mean temperature rise under doubled CO2 of around 0.5, 3.0 and 5.5 K, respectively.

‡ We used two mean values of the Greenland melt rate parameter (see text).

In extending the simulations over 2000-2100, we specify the SRES CO2 emissions scenarios and introduce two further parameters (the last two in Table 1) that relate to future melting of the Greenland ice sheet and the rate at which natural processes remove anthropogenic CO2 from the atmosphere. The rate of CO2 uptake is parameterised according to an e-folding timescale that represents the background absorption of excess CO2 into marine and terrestrial reservoirs. This timescale can be roughly equated with a fractional annual uptake of emissions: timescales of 50, 100 and 300 years equate to fractional uptakes of around 50%, 30% and 10% respectively, spanning the range of uncertainty in present and future uptake. For each emissions scenario, a wide range of CO2 rise is obtained, according to the uptake timescale. This in turn leads to a wide range of global-mean temperature rise, which is further broadened by the uncertainty in climate sensitivity. The freshwater flux due to melting of the Greenland ice sheet is taken to be linearly proportional to the air temperature anomaly relative to 2000 (Rahmstorf and Ganopolski, 1999). This is consistent with evidence that the Greenland mass balance has only recently started changing (Bøggild et al., 2004).

Using the model results for each SRES scenario at 2100, we build a statistical model (emulator) of ψmax as a function of the model parameters. A separate emulator is built for each emissions scenario. We then use these six emulators, coupled with probability densities of parameter uncertainty, to calculate the probability that ψmax falls below 5 Sv by 2100 using Monte Carlo methods. We use a sample size of 20000 for all our Monte Carlo calculations. An initial, one-at-a-time, sensitivity analysis shows that the four most important parameters are: (1) the sensitivity to global warming of the Greenland Ice Sheet melt rate, which provides a freshwater influx to the mid-latitude North Atlantic that tends to suppress the overturning; (2) the rate at which anthropogenic CO2 is removed from the atmosphere; (3) the climate sensitivity (i.e. the global warming per unit CO2 forcing); and (4) a specified Atlantic-to-Pacific net moisture flux, which increases Atlantic surface salinity and helps to support strong overturning. We perform a number of experiments calculating the probability of substantial slow-down of the overturning under variations in the values of these parameters and their uncertainties.

For each SRES scenario, we show in Table 2 the probability of substantial reduction in Atlantic overturning for five uncertainty cases. Each case is split into low and high mean Greenland melt rate, as this has been previously identified as a particularly crucial factor in the thermohaline circulation response to CO2 forcing (Rahmstorf and Ganopolski, 1999). The probabilities in Table 2 are much higher than expected: substantial weakening of the overturning circulation is generally assumed to be a "low probability, high impact" event, although "low probability" tends not to be defined in numerical terms. Our results show that the probability is in the range 0.30 to 0.46 (depending on the SRES scenario adopted and the uncertainty case): this could not reasonably be described as "low". Even with the relatively benign B2 scenario we obtain probabilities of order 0.30, while with the fossil-fuel-intensive A1FI we obtain even higher probabilities, up to a maximum of 0.46.
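
A minimal sketch of the Monte Carlo step (steps 4-5 of the procedure above) is given below, in R. The placeholder emulator, the assumption of independent normal priors and the small subset of Table 1 parameters shown are illustrative only; the 20000-member sample size is the one used in the report.

## Sample parameter sets from the prior, evaluate the (already fitted) emulator
## for one SRES scenario, and estimate P(psi_max < 5 Sv) by 2100.
emulator_psi_max <- function(theta) {      # placeholder for the fitted emulator
  20 - 0.5 * rowSums(scale(theta))         # illustrative response only
}

## Illustrative subset of the Table 1 priors (independent normals assumed).
prior_mean <- c(windstress = 1.734, clim_sens = 6.000, co2_efold = 111.4)
prior_sd   <- c(windstress = 0.108, clim_sens = 5.000, co2_efold = 15.10)

n <- 20000                                 # Monte Carlo sample size used here
theta <- sapply(seq_along(prior_mean),
                function(j) rnorm(n, prior_mean[j], prior_sd[j]))
colnames(theta) <- names(prior_mean)

psi <- emulator_psi_max(theta)             # emulated overturning strength (Sv)
mean(psi < 5)                              # estimated probability of collapse
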
Figure 2. Spin-up of the Atlantic MOC, including CO2 forcing from 1800.

Table 2. Probability of Atlantic overturning falling below 5 Sv by 2100
Uncertainty case                                           A1     A2     B1     B2     A1FI   A1T

Default uncertainty
  Case 1a                                                  0.37   0.38   0.31   0.32   0.43   0.32
  Case 1b                                                  0.38   0.40   0.30   0.31   0.46   0.31
Doubled uncertainty in climate sensitivity
  Case 2a                                                  0.37   0.38   0.33   0.33   0.43   0.33
  Case 2b                                                  0.39   0.40   0.31   0.32   0.46   0.32
Doubled uncertainty in Atlantic-Pacific moisture flux
  Case 3a                                                  0.37   0.38   0.32   0.33   0.43   0.33
  Case 3b                                                  0.40   0.40   0.30   0.30   0.46   0.32
Doubled uncertainty in CO2 uptake
  Case 4a                                                  0.38   0.38   0.31   0.32   0.44   0.33
  Case 4b                                                  0.38   0.39   0.31   0.31   0.44   0.32
Doubled uncertainty in Greenland melt rate
  Case 5a                                                  0.37   0.38   0.31   0.32   0.43   0.32
  Case 5b                                                  0.38   0.39   0.30   0.32   0.45   0.32

In Case 1, “default uncertainty” refers to the standard deviations for all 17 parameters in Table 1. In Cases 2-5, “doubled uncertainty” refers to twice the standard deviation on an individual parameter (italic in Table 1). In each case, “a” (“b”) indicates low (high) mean Greenland melt rate.

2.3.3 Illustration 3: incorporation of observational data
The implementation is now applied to a problem from climate science: the output from the C-GOLDSTEIN computer model. The techniques presented here are complementary to more conventional assimilation schemes such as the Ensemble Kalman filter (Annan 2005). Here, observational data are taken from the SGEN dataset.

The procedure is as follows:

1. Pick a few representative points on the Earth's surface whose temperature is of interest. This set might include, for example, the North and South Poles, and a point in Western Europe. Seven points appears to be a good compromise between coverage and acceptable runtime.
2. Choose a reasonable prior distribution for the 16 adjustable parameters, using expert judgement.
3. Generate an ensemble of calibration points in parameter space by sampling from the prior PDF. For each member of this ensemble, run C-GOLDSTEIN at that point in parameter space and record the predicted temperature at each of the seven representative points on the Earth's surface. For this application, a calibration ensemble of about 100 code runs appears to represent a reasonable compromise between coverage and acceptable runtime.
4. Estimate the hyperparameters using numerical optimization techniques.
5. From the prior PDF, sample an ensemble of, say, 10000 points and, using the hyperparameters estimated in step 4, use the emulator package to estimate the European temperature conditional on the field observations and code runs.
6. Estimate the probability of each of the 10000 points in parameter space.
7. Construct a weighted CDF for the temperature predictions (a minimal sketch of steps 6-7 is given below).
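
The weighting in steps 6-7 can be sketched as follows (in R). The emulator stand-in, the placeholder observations and the assumed Gaussian observation error are illustrative only; the actual analysis uses the calibrator package and the KOH2001 posterior rather than the simple likelihood weighting shown here.

## Weight each sampled parameter set by how well its emulated temperatures
## match the observations, then form a weighted CDF of the Northern Europe
## temperature.
emulate_temps <- function(theta) {                  # placeholder emulator:
  matrix(rnorm(nrow(theta) * 7, 10, 3), ncol = 7)   # n x 7 temperatures
}

n     <- 10000
theta <- matrix(rnorm(n * 16), ncol = 16)           # sample from the prior (step 5)
temps <- emulate_temps(theta)

obs    <- c(-10, 2, 8, 12, 15, 20, 25)              # placeholder field observations
obs_sd <- 1.5                                       # assumed observation error (K)

## Gaussian likelihood of the observations for each parameter set, normalised
## to give weights (step 6).
loglik <- rowSums(dnorm(sweep(temps, 2, obs), sd = obs_sd, log = TRUE))
w      <- exp(loglik - max(loglik)); w <- w / sum(w)

## Weighted CDF of the Northern Europe temperature, here taken to be the third
## of the seven representative points (step 7).
T_ne <- temps[, 3]
ord  <- order(T_ne)
cdf  <- approxfun(T_ne[ord], cumsum(w[ord]), method = "constant",
                  yleft = 0, yright = 1)
cdf(10)   # posterior probability that the temperature is below 10 degrees C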

Such a procedure will specify a CDF that incorporates observed temperature data from the NCEP dataset. In essence, parameter sets that give predicted temperature distributions closely matching observed data are assigned a high probability; if the match is poor, a low probability is assigned.

Although it might be possible to maximize the posterior PDF over parameter space (and thus determine a point estimate for the "true" parameters), such an approach would preclude techniques that require averaging over parameter space.

Results are now presented which show distributions for temperatures in Northern Europe, conditional on the observed data and the optimized values of the hyperparameters. For the purposes of this report, "temperature in Northern Europe" means the annual average temperature, as predicted by C-GOLDSTEIN, at latitude 60°N, longitude 0°. Figure 3 shows how the distribution function changes on incorporation of observed temperatures.

Figure 3: distribution function for temperature in Northern Europe. The solid line shows the posterior curve; the dotted line shows the prior.

Note how the median temperature, for example, falls by about one degree centigrade when the observational dataset is included.

The primary conclusion from the above analysis is that it is possible to apply the methods of KOH2001 to a real problem in climate dynamics, in a computationally efficient manner.

This is, as far as we are aware, the first time that a posterior PDF has been rigorously derived for the parameter space of a climate model.

The fact that the estimated PDF changes significantly on incorporation of observational data suggests that the observations are informative relative to the prior. Such conclusions can be useful in climate science, by suggesting where new observational data can be most effective.

2.3.4 Illustration 4: sealevel rise on millennial timescales
The emulator technique is now applied to the problem of sealevel changes on millennial timescales. This work was undertaken for the Environment Agency using techniques and software developed under Tyndall T2.13.

Sea level is almost certain to rise over the next thousand years due to global climate change. The emulator technique was used to assess uncertainties in predictions of sea level rise in 3000 AD by emulating the intermediate complexity model GENIE-1 over the credible range of the input parameter space.

Figure 4: uncertainties in sea level rise over the next thousand years. For details, see text.

Figure 4 shows the salient results of the emulator: the vertical axis shows sea level rise for each of five emissions scenarios (A-F, detailing carbon emissions over the next few hundred years and incorporating features such as economic and political uncertainty as well as geological uncertainty in respect of fossil fuel reserves). Each scenario has two uncertainty distributions: one for the 'prior' estimate (that is, estimates made in the absence of observational data) and one for the 'posterior' estimate (made with the benefit of the data). Note how the posterior distributions are tighter than the priors, indicating an informative dataset. Depending on the emissions scenario, we can expect sea level rises of up to about 3 metres by 3000 AD.

3 Imprecise probability theory (fuzzy membership)
3.1 Introduction: constructing fuzzy emissions scenarios

Models of future climate change require, amongst other inputs, time series of greenhouse gas emissions. However, the socio-economic scenarios from which these time series are derived are far from being precise constructs (IPCC 2000): they are based upon linguistic narratives of potential socio-economic futures. We therefore treat scenarios as linguistic constructs, which are inherently fuzzy in the sense of Zadeh (1965) and Williamson (1994). We used the published families of emissions scenarios to construct fuzzy membership functions through time.

3.2 Imprecise probability distributions of climate sensitivity

Imprecise probability distributions were constructed to represent climate model uncertainties in terms of the published probability distributions of climate sensitivity. This is justified on the basis that probabilistic estimates of climate sensitivity are highly contested and there is little prospect of a unique probability distribution being collectively agreed upon in the near future. We used the so-called p-box method to construct a random set consistent with the published probability distributions of climate sensitivity.
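
The envelope construction can be sketched as follows (in R). The three candidate distributions are placeholders standing in for the contested published distributions of climate sensitivity; the p-box is simply their pointwise lower and upper envelope.

## p-box for climate sensitivity: lower and upper envelopes of several
## competing CDFs (the candidate distributions below are illustrative).
s <- seq(0, 10, by = 0.01)                        # climate sensitivity (K)

cdfs <- cbind(plnorm(s, meanlog = log(3.0), sdlog = 0.4),
              plnorm(s, meanlog = log(2.5), sdlog = 0.6),
              pgamma(s, shape = 6, rate = 2))

F_lower <- apply(cdfs, 1, min)    # lower bounding CDF
F_upper <- apply(cdfs, 1, max)    # upper bounding CDF

## Bounds on P(sensitivity > 4.5 K) implied by the p-box.
i <- which.min(abs(s - 4.5))
c(lower = 1 - F_upper[i], upper = 1 - F_lower[i])

matplot(s, cbind(F_lower, F_upper), type = "l", lty = 1,
        xlab = "Climate sensitivity (K)", ylab = "Cumulative probability")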

3.3 Propagating random sets through MAGICC

Both fuzzy sets and p-boxes can be thought of as instances of random sets. We demonstrated how these two types of uncertainty can be propagated through the simple climate model MAGICC. The monotonicity of MAGICC, which we demonstrated, meant that this uncertainty propagation was relatively straightforward.
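
Monotonicity is what makes the propagation cheap: a focal interval [a, b] of a random set maps to [f(a), f(b)], so only the interval endpoints need to be evaluated. A minimal sketch follows (in R), with a monotone toy stand-in for MAGICC; the stand-in function and the focal elements are illustrative assumptions.

## Propagate random-set focal intervals through a monotone model by evaluating
## only the endpoints.  'toy_magicc' is a monotone stand-in for MAGICC.
toy_magicc <- function(sens, forcing = 4.0) sens * forcing / 3.7   # warming (K)

propagate_interval <- function(lo, hi, f) c(lo = f(lo), hi = f(hi))

## A random set: focal intervals of climate sensitivity with their masses.
focal <- data.frame(lo = c(1.5, 2.0, 3.0), hi = c(4.5, 6.0, 8.0),
                    mass = c(0.5, 0.3, 0.2))

out <- t(mapply(propagate_interval, focal$lo, focal$hi,
                MoreArgs = list(f = toy_magicc)))
cbind(focal, warming_lo = out[, "lo"], warming_hi = out[, "hi"])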

3.4 Aggregation of fuzzy scenarios

We demonstrated how the Choquet integral may be used to combine the random sets resulting from the analysis outlined above. Rather than attempting to distribute an additive probability measure across scenarios, a weaker assumption is adopted: a monotonic (but non-additive) measure. We argue that this approach can capture some of the semantics of socio-economic scenarios that defy conventional probabilistic representation.
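
For reference, a minimal implementation of the discrete Choquet integral with respect to a monotone, non-additive measure is sketched below (in R); the scenario values and the cardinality-based measure are illustrative only, not those of the published analysis.

## Discrete Choquet integral of scenario-wise values f with respect to a
## monotone (but non-additive) set measure mu, supplied as a function of an
## index set.
choquet <- function(f, mu) {
  ord <- order(f)                      # sort values ascending
  f_sorted <- c(0, f[ord])
  total <- 0
  for (i in seq_along(ord)) {
    A <- ord[i:length(ord)]            # indices whose value is >= f_(i)
    total <- total + (f_sorted[i + 1] - f_sorted[i]) * mu(A)
  }
  total
}

## Example: a quantity of interest under four emissions scenarios, aggregated
## with a measure that depends only on how many scenarios agree (monotone,
## not additive).
f  <- c(A1 = 2.9, A2 = 3.4, B1 = 1.8, B2 = 2.2)
mu <- function(A) (length(A) / length(f))^0.5

choquet(f, mu)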

3.5 A granular semantics for fuzzy measures

Our fuzzy treatment of emissions scenarios led us to develop a new semantics for imprecise events on a granular space. The approach has some attractions in formalising the notion of a scenario defined on a space of imprecise and abstract variables. It provides a clear justification for the use of non-additive measures in the context of scenarios.

3.6 Imprecise probabilities of sea level rise in the Thames Estuary

The uncertainty analysis of climate sensitivity outlined above has also been applied to sea level rise, again using MAGICC, in order to provide insights into uncertainties in sea level rise projections in the Thames estuary. A preliminary analysis of the potential effects of these uncertainties on decisions surrounding the upgrading or replacement of the Thames tidal defences was conducted.

3.7 Eliciting imprecise conditional probabilities of abrupt climate change

We have collaborated with PIK and made use of extension funding to develop a questionnaire for the elicitation of imprecise probabilities of abrupt climate change. This questionnaire will be used in the workshop on Tipping Points in the Earth System, to be held in Berlin on 5-6 October.

4 Development of modelling framework
One very challenging aspect of using computer-based approaches to climate prediction is accounting for the uncertainty introduced by model inadequacy, sometimes referred to as "structural error". Formally speaking, quantifying this discrepancy is a necessary pre-condition of calibrating computer models with climate system data. The fact that this is seldom done, or, where it is done, is done in a very rudimentary fashion (even though it is crucial), testifies to the challenge of quantifying model inadequacy. As a result, climate-model-based predictions are at best suggestive for the underlying climate system, and calibrating models without paying due attention to their inadequacy may make their predictions worse, not better.

One aspect of project T2.13 has been to develop greater insight into this source of uncertainty; this work has been carried out primarily at Durham, in conjunction with the NERC-RAPID project "Estimating the Probability of Rapid Climate Change". In particular, we have re-examined the standard statistical approach to linking computer models and the systems they purport to represent (as found in, say, Kennedy and O'Hagan, 2001; Craig et al., 2001; Higdon et al., 2005). In a series of papers we have uncovered limitations in this approach and developed a generalization that not only allows us better to link an individual model with the system, but also to unify several different but related models into a single system prediction. These papers have appeared or are appearing in the statistical literature, but they have been developed and illustrated with climate applications in mind. Detailed case studies developed jointly with climate scientists, currently work in progress, will appear in the climate literature.

Goldstein and Rougier (2005a) is a broadly theoretical paper that outlines a strategy for linking a model and its underlying system that goes far beyond the simple “model + noise”, investigating the particular challenges of non-measurable inputs and exploring the minimal statistical modelling required to link several models together. In another guise, this might be seen as the minimal modelling required to “future-proof” an existing framework, since it is a given in the field of climate science that a better model will be available next year. A framework that accounts for multiple models allows us to profit directly from the evaluations of last year's model; it also allows us to design ensembles that range across several models, for example to trade off accuracy and evaluation time. The next step for this approach is to design a sequential approach to selecting model evaluations on expensive climate simulators, informed by evaluations of a simpler version, see, e.g., Goldstein and Rougier (2005b, revised and resubmitted).

Goldstein and Rougier (2005c, in revision) is a discussion paper that explores in more detail the practicalities of linking models together. It is based around a recent model of the Atlantic (Zickfeld et al, 2004), and considers ways in which that model may be linked to observable quantities in the Atlantic itself. Rather than taking a single huge step from the model (which is rather simple) to the Atlantic, a series of smaller steps are taken, first to a generalization with additional physics, then to a broader generalization with uncertain physics, and then finally to the Atlantic itself.

This type of analysis has direct consequences for Tyndall projects such as the CIAS. Such tools require some quantification of uncertainty, much of which arises from imperfections in the models. A key tenet of subjective probability theory is that we should make probabilistic quantifications of observable quantities, and one of the challenges is to understand the way in which we can describe model inadequacy in terms of such quantities. While we do not believe that we have yet achieved this goal, we do think that in breaking down the relationship between a given model and the system into a series of better-understood smaller steps we have made the assessment of a complicated quantity a bit easier. In our on-going case studies (for example, using HadAM3 data from both the QUMP and climateprediction.net ensembles; linking C-GOLDSTEIN with IGCM-GOLDSTEIN and FAMOUS) we are developing practical strategies for taking these steps, directly involving both the statistics and the climate communities.


5 Energy Technology: uptake of alternative energy sources
5.1 Background
Long-term climate models require predictions of anthropogenic carbon emissions to determine future CO2 concentrations. This requires an assessment of oil consumption. However, the global energy market includes many non-carbon energy sources such as wind power, nuclear power, and solar power. The Energy Technology Model (ETM) is a dynamic econometric model that represents technological change. ETM models, in a simplified way, the switch from carbon to non-carbon energy sources over time. It was designed to account for the fact that a large array of non-carbon options is emerging, though their costs are generally high relative to those of fossil fuels; these costs are, however, declining with innovation, investment and learning-by-doing. The process of substitution is also argued to be highly non-linear, involving threshold effects. The ETM models the process of substitution, allowing non-carbon energy sources to meet a larger part of global energy demand as the price of these sources decreases with investment, learning-by-doing, and innovation. The innovative component of the ETM is the learning curve:1 technology costs do not simply decline as a function of time, but decrease as experience is gained by using a particular technology.2 The learning curve in the ETM has the standard form
C_t = C_0 (X_t / X_0)^{-b}                  (1)

where C_t is the unit cost at time t, C_0 the initial cost, X_t the cumulative investment in the technology by time t from the time of its first introduction (taken as an indicator of experience), and b the 'learning-curve parameter'. This relationship is highly non-linear, especially in the early phases when X_t is small and experience accumulates rapidly. Figure 5 below shows the effect, in which market share is taken as a proxy for X_t.3

1 A. McDonald and L. Schrattenholzer (2001).
2 A. Manne and R. Richels (2004).
3 The term learning rate in Figure 5 refers to the 'per unit' decline in costs with each doubling of cumulative investment, which is equal to 1 - 2^{-b}.
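
A small numerical illustration of equation (1) and of the footnoted learning rate follows (in R; the parameter values are illustrative, not ETM values).

## Equation (1): unit costs fall as cumulative investment grows,
## C_t = C_0 (X_t / X_0)^(-b).  The learning rate is the per-unit cost decline
## for each doubling of cumulative investment, 1 - 2^(-b).
unit_cost <- function(X, C0, X0, b) C0 * (X / X0)^(-b)

b  <- 0.3
C0 <- 5            # illustrative initial cost ratio relative to the marker technology
X0 <- 1            # cumulative investment at first introduction

1 - 2^(-b)                              # learning rate (about 19%)
X <- c(1, 2, 4, 8, 16, 32)              # successive doublings of investment
round(unit_cost(X, C0, X0, b), 2)       # cost falls by the learning rate per doubling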

Figure 5: cost ratios as a function of market share: costs decrease with increasing investment. (The figure, "Threshold Effects in Technology Development", plots the cost ratio relative to existing technologies against market share, for learning rates of 10%, 20% and 30%; initial costs are high when markets are small.)

As investment is made in ‘new’ technologies, learning takes place and the cost of the new technology lowers so that it becomes competitive with the ‘old’ technologies. For each type of energy demanded there is usually a technology or fuel ‘of choice’—what might be termed a ‘marker’ technology—against which the alternatives will have to compete. In the ETM, the total capital and operating costs of using this fuel per unit output will be used as a basis for expressing the relative costs of the alternatives.

Even though a marker technology may comprise the majority of the market, there are always niche markets and opportunities where the non-carbon technology is cheaper than the marker technology. Such applications may be limited, but the point is that, while it may generally be true that the costs of a marker technology are lower or higher than those of its substitute, this is not true in every case. Even with hydrogen, there have been small niche markets for a century, for instance in oil refineries and for the chemical industry; hydrogen surpluses have been used for co-generation for several years. Thus, once a particular energy technology has a foothold in the market (typically a specialized niche), it may become established as its costs fall through the learning-by-doing effect given in equation (1).

5.2 Uncertainty analysis of ETM

In order to perform uncertainty analysis (Tyndall T2.13), the ETM model was rewritten in parallelized Fortran code and passed through an emulator, using parameter ranges that were obtained by expert elicitation. This approach revealed a number of findings:

• Many aspects of the model appeared to exhibit chaotic-type behaviour
• Uncertainty (in predictions of the percentage of carbon-intensive sources in the global energy market) was overwhelming
• Uncertainty in the model input parameters was large
• The model had limited predictive power, especially about the medium-term and long-term future
• One set of parameters to which predictions were extremely sensitive was the size and, in particular, the duration, of tax breaks and research funding for alternative energy sources
• The research highlighted the difficulty of applying standard ODE techniques to economic forecasting and suggested that the long-term state of the global energy market is very sensitive to political decisions being made at the present time.

6 Kondratiev waves
6.1 Background and Introduction
As a part of the ETech and ETech+ projects (Technology policy and technical change, a dynamic global and UK approach; ID code T2.12), a model of Kondratiev waves has been developed. The work, reported in Köhler (2005), suggests a quantitative theory of long-term technical change that can reproduce the features of a notional Kondratiev wave, and will form part of a global macroeconometric model. There is no suitable and generally accepted theory of long-term technical change for incorporation in a macroeconomic modelling structure. However, there is now a good descriptive theory, Freeman and Louçã (2001), which provides an economic-history perspective on long-term change. They argue that Kondratiev waves involve a process of dynamic interaction between five subsystems: science, technology, economy, politics and culture. For our purpose of developing a quantitative model, it is only realistic to try to model technology and the economy; the impacts and feedbacks through the other subsystems will be reflected qualitatively in the macroeconomic model structure and through scenarios. The objective of our model is to interpret this descriptive theory in quantitative terms, as far as is plausible, in the context of the macroeconomic analysis.

6.2 Uncertainty analysis of Kondratiev waves
The T2.13 part of this study was an investigation of uncertainty in Kondratiev waves. To this end, the work of Köhler (2005) was expressed as a system of linked ordinary differential equations (ODEs); this is in contrast to the difference-equation approach adopted by Köhler, in which the system state vector is computed at one-year time intervals. A suite of software in the R programming language was written that solves these ODEs and tabulates the output in a form suitable for econometric analysis.
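
The mechanics of this approach can be sketched as follows (in R, here using the deSolve package). The two-variable oscillator below is a stylised stand-in chosen only to show how a system is expressed as dy/dt = f(t, y, parms) and handed to an existing, well-tested solver; it is not Köhler's model.

## Express the system as a set of ODEs and integrate it with deSolve::ode.
library(deSolve)

derivs <- function(t, y, parms) {
  with(as.list(c(y, parms)), {
    dK <- a * K * (1 - K) - g * K * Q    # stylised 'technology' stock
    dQ <- b * K * Q - d * Q              # stylised 'output' driven by technology
    list(c(dK, dQ))
  })
}

parms <- c(a = 0.12, b = 0.10, g = 0.15, d = 0.05)   # illustrative values
state <- c(K = 0.1, Q = 0.1)
times <- seq(0, 200, by = 1)             # yearly output, convenient for econometric use

out <- ode(y = state, times = times, func = derivs, parms = parms)
head(as.data.frame(out))                 # columns: time, K, Q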

The findings from this study were as follows:

The ODE formalism is an appropriate mathematical framework to describe Kondratiev waves: ODEs are simple and computationally cheap to solve with high accuracy, and coding can use existing, highly optimized, ODE solvers. This approach also has the advantage of ensuring that results are dependent only on the structure of the equations used (and the parameters), and not on the details of the numerical scheme.

The transition from difference equation to ODE highlighted inconsistencies in earlier formulations. Resolving these inconsistencies in the context of an ODE system required structural reformulation of the original (difference equation) system.

The resulting ODE system admitted realistic solutions for only a tiny part of parameter space; most of parameter space gave simulations that rapidly became unrealistic. The interpretation of this finding is not clear at present.

This model displays properties of instability and extreme sensitivity to initial conditions, characteristic of non-linear dynamic systems. However, it should be noted that this chaotic-like behaviour is unambiguously attributable to the equation structure, rather than the numerical scheme used.

The model exhibited large uncertainty in its medium- to long-term predictions. At present, it is unclear how to reduce this uncertainty.

7 Results from T2.13 and conclusions
The objectives of the original proposal have largely been achieved, and we have also generated some additional, unanticipated, findings. We present both here in condensed bullet-point form:

• It is possible to produce high-quality, peer-reviewed, documented, open-source, transparent, structured software to analyse uncertainties in climate science. We have produced such software and documentation in the form of peer-reviewed journal articles (currently under revision; minor referees' comments) for use by the wider community. We applied this statistical software to problems in climate science. Our four case studies were:
  o First, the creation of an emulator of the computer climate model C-GOLDSTEIN, leading to a statistically rigorous uncertainty analysis
  o Second, the incorporation of field data into C-GOLDSTEIN in a statistically rigorous manner
  o Third, the estimation of the probability of MOC collapse using an emulator of C-GOLDSTEIN
  o Fourth, the assessment of sea level changes on millennial timescales
  We also produced software that can analyse extrema, by way of 95th and 99th percentiles as a surrogate for 'worst credible case', which could be applied to climate science problems.

• We have developed a framework that uses imprecise probability measures to demonstrate how emissions scenarios can be propagated through a simple climate model, MAGICC.
  o We showed how imprecise probability distributions of climate sensitivity can be propagated through MAGICC. Emissions scenario uncertainties and imprecise probabilistic representation of model uncertainties were combined to generate lower and upper cumulative probability distributions for Global Mean Temperature (GMT).
  o We demonstrated imprecise probability analysis in the context of decision-making for the tidal flood defences in the Thames estuary.

• We have developed a new modelling framework for the theoretical understanding of model performance in a statistically robust context. This type of analysis has direct consequences for Tyndall projects such as the CIAS. One outcome of our research is a renewed emphasis on breaking down the relationship between a given model and the system into a series of better-understood smaller steps. In our case studies (for example, using HadAM3 data from both the QUMP and climateprediction.net ensembles, and linking C-GOLDSTEIN with IGCM-GOLDSTEIN and FAMOUS) we are developing practical strategies for taking these steps, directly involving both the statistics and the climate communities.

• We performed an uncertainty analysis of a model for medium- to long-term econometric prediction using parallelized Fortran code and an emulator. This revealed that uncertainty (in predictions of the percentage of carbon-intensive sources in the global energy market) was overwhelming and that the model had limited predictive power. This research highlighted the difficulty of applying standard ODE techniques to economic forecasting and suggested that the long-term state of the global energy market is very sensitive to political decisions being made at the present time.

• We found that the ODE formalism is an appropriate mathematical framework for describing Kondratiev waves and wrote software to implement it. The interpretation of our numerical findings is not clear at present.

8 Communication highlights
Project T2.13 has successfully communicated many of its findings to the wider scientific and political community. Publications are listed below. Notable examples of other communication highlights include an oral presentation at “Avoiding Dangerous Climate Change”, Exeter, February 2005 (and associated media interest).

8.1 Publications in reporting period

Challenor, P. G., Hankin, R. K. S. and Marsh, R. “Towards the probability of Rapid Climate Change”, accepted for publication in proceedings of “Avoiding Dangerous Climate Change”, Exeter, February 2005

Challenor, P. G. and Marsh, R. "First steps towards the estimation of the probability of thermohaline collapse", at revision stage for Ocean Modelling.

Edwards NR, Marsh R, 2005, “Uncertainties due to transport-parameter sensitivity in an efficient 3-D ocean-climate model”. Climate Dynamics, 24, 415-433.

Goldstein, M and Rougier, J. C. (2005a), Probabilistic formulations for transferring inferences from mathematical models to physical systems, SIAM Journal on Scientific Computing, 26, 467-487.

Goldstein, M. and Rougier, J. C. (2005b), Bayes Linear Calibrated Prediction for Complex Systems, revised and resubmitted to the Journal of the American Statistical Association.

Goldstein, M and Rougier, J. C. (2005c), Reified Bayesian Modelling and Inference for Physical Systems; being revised as a discussion paper for the Journal of Statistical Planning and Inference.

Hall, J.W., Fu, G. and Lawry, J. Imprecise probabilities of climate change: aggregation of fuzzy scenarios and model uncertainties, Climatic Change, in review.

Hall, J.W., Reeder, T., Fu, G., Nicholls, R.J., Wicks, J., Lawry, J., Dawson, R.J., and Parker, D. Tidal flood risk in London under stabilisation scenarios, Int. Symposium on Stabilisation of Greenhouse Gasses, Met Office, Exeter, February 1-3, 2005, book of abstracts.

Hankin, R. K. S. “Introducing BACCO, an R package for Bayesian Analysis of Computer Code Output”, accepted for publication in the Journal of Statistical Software

Lawry, J., Hall, J.W. and Fu, G., A granular semantics for fuzzy measures and its application to climate change scenarios, ISIPTA '05, Proc. 4th Int. Symp. on Imprecise Probabilities and Their Applications, Carnegie Mellon University, July 20-22, 2005.

Marsh R, Edwards NR, Shepherd JG (2002). “Development of a fast climate model (C-GOLDSTEIN) for Earth System Science.” Southampton Oceanography Centre, European Way, Southampton SO14 3ZH.

9 References

Anderson, D. and Winne, S. (2003). “Innovation and threshold effects in technology responses to climate change”, Tyndall Centre for Climate Change Research, Working Paper 43.

Bøggild, C. E., C. Mayer, S. Podlech, A. Taurisano and S. Nielsen. (2004) Towards an assessment of the balance state of the Greenland Ice Sheet. Geological Survey of Denmark and Greenland Bulletin 4, 81–84

Challenor, P. G., Hankin, R. K. S. and Marsh, R. “Towards the probability of Rapid Climate Change”, accepted for publication in proceedings of “Avoiding Dangerous Climate Change”, Exeter, February 2005

Challenor, P. G. and Marsh, R. “First steps towards the estimation of the probability of thermohaline collapse”, at revision stage for Ocean Modelling.

Craig, P. S., Goldstein, M., Rougier, J. C. and Seheult, A. H. (2001), Bayesian Forecasting for Complex Systems Using Computer Simulators, Journal of the American Statistical Association, 96, 717-729.

Currin C, Mitchell T, Morris M, Ylvisaker D., 1991. “Bayesian Prediction of Deterministic Functions, with Application to the Design and Analysis of Computer Experiments”. Journal of the American Statistical Association 86 (416), 953-963.

Edwards NR, Marsh R, 2005, “Uncertainties due to transport-parameter sensitivity in an efficient 3-D ocean-climate model”. Climate Dynamics, 24, 415-433.

Goldstein, M and Rougier, J. C. (2005a), Probabilistic formulations for transferring inferences from mathematical models to physical systems, SIAM Journal on Scientific Computing, 26, 467-487.

Goldstein, M. and Rougier, J. C. (2005b), Bayes Linear Calibrated Prediction for Complex Systems, revised and resubmitted to the Journal of the American Statistical Association.

Goldstein, M and Rougier, J. C. (2005c), Reified Bayesian Modelling and Inference for Physical Systems, being revised as a discussion paper for the Journal of Statistical Planning and Inference.

Gordon C, et al. 2000. “The simulation of Sea Surface Temperature, sea ice extents and ocean heat transports in a version of the Hadley Centre coupled model without flux adjustments”. Climate Dynamics, 16, 147-168.

Hall, J.W., Fu, G. and Lawry, J. Imprecise probabilities of climate change: aggregation of fuzzy scenarios and model uncertainties, Climatic Change, in review.

Hall, J.W., Reeder, T., Fu, G., Nicholls, R.J., Wicks, J., Lawry, J., Dawson, R.J., and Parker, D. Tidal flood risk in London under stabilisation scenarios, Int. Symposium on Stabilisation of Greenhouse Gasses, Met Office, Exeter, February 1-3, 2005, book of abstracts.

Higdon, D., Kennedy, M., Cavendish, J., Cafeo, J. and Ryne, R. D. (2005), Combining Field Data and Computer Simulations for Calibration and Prediction, SIAM Journal on Scientific Computing, 26, 448-466.

Hothorn T, Bretz F, Genz A, 2001. “On Multivariate t and Gauss Probabilities in R”. R News, 1(2), 27-29. (http://CRAN.R-project.org/doc/Rnews/).

Kennedy M.C., O'Hagan A (2000). “Predicting the output from a complex computer code when fast approximations are available”. Biometrika, 87(1), 1-13.

Kennedy M. C., O'Hagan A, 2001. “Bayesian calibration of computer models”. Journal of the Royal Statistical Society B, 63(3), 425-464.

Lawry, J., Hall, J.W. and Fu, G., A granular semantics for fuzzy measures and its application to climate change scenarios, ISIPTA '05, Proc. 4th Int. Symp. on Imprecise Probabilities and Their Applications, Carnegie Mellon University, July 20-22, 2005.

Manne, A. and Richels, R. (2004), “The impact of learning-by-doing on the timing and costs of CO2 abatement”, Energy Economics 26: 603-619.

Marsh R, Edwards NR, Shepherd JG (2002). “Development of a fast climate model (C-GOLDSTEIN) for Earth System Science.” Southampton Oceanography Centre, European Way, Southampton SO14 3ZH.

McDonald, A. and Schrattenholzer, L. (2001), “Learning rates for energy technologies”. Energy Policy 29: 255-261.

Nakicenovic, N. and R. Swart (eds.) (2000) IPCC Special Report on Emissions Scenarios. Cambridge University Press, Cambridge, United Kingdom, 612 pp.

Oakley J, O'Hagan A, 2002. “Bayesian inference for the uncertainty distribution of computer model outputs”. Biometrika, 89(4), 769-784.

Rahmstorf, S. and A. Ganopolski (1999) Long-term global warming scenarios computed with an efficient coupled climate model. Clim. Change, 43, 353-367.

Vellinga M, Wood RA, 2002. “Global climatic impacts of a collapse of the Atlantic thermohaline circulation”. Climatic Change, 54(3), 251-267.

Zickfeld, K., Slawig, T. and Rahmstorf, S. (2004), A Low-Order Model for the Response of the Atlantic Thermohaline Circulation to Climate Change, Ocean Dynamics, 54, 8-26.

The inter-disciplinary Tyndall Centre for Climate Change Research undertakes integrated research into the long-term consequences of climate change for society and into the development of sustainable responses that governments, business-leaders and decision-makers can evaluate and implement. Achieving these objectives brings together UK climate scientists, social scientists, engineers and economists in a unique collaborative research effort. The Tyndall Centre is named after the 19th century UK scientist John Tyndall, who was the first to prove the Earth’s natural greenhouse effect and suggested that slight changes in atmospheric composition could bring about climate variations. In addition, he was committed to improving the quality of science education and knowledge.

The Tyndall Centre is a partnership of the following institutions:
University of East Anglia
University of Manchester
University of Southampton
University of Sussex
University of Oxford
University of Newcastle

The Centre is core funded by the following organisations:
Natural Environmental Research Council (NERC)
Economic and Social Research Council (ESRC)
Engineering and Physical Sciences Research Council (EPSRC)

For more information, visit the Tyndall Centre Web site (www.tyndall.ac.uk) or contact:
Communications Manager
Tyndall Centre for Climate Change Research
University of East Anglia, Norwich NR4 7TJ, UK
Phone: +44 (0) 1603 59 3906; Fax: +44 (0) 1603 59 3901
Email: tyndall@uea.ac.uk

Recent Tyndall Centre Technical Reports

Tyndall Centre Technical Reports are available online at http://www.tyndall.ac.uk/publications/tech_reports/tech_reports.shtml

O'Riordan T., Watkinson A., Milligan J. (2006) Living with a changing coastline: Exploring new forms of governance for sustainable coastal futures, Tyndall Centre Technical Report 49

Anderson K., Bows A., Mander S., Shackley S., Agnolucci P., Ekins P. (2006) Decarbonising Modern Societies: Integrated Scenarios Process and Workshops, Tyndall Centre Technical Report 48

Gough C., Shackley S. (2005) An integrated Assessment of Carbon Dioxide Capture and Storage in the UK, Tyndall Centre Technical Report 47

Nicholls R., Hanson S., Balson P., Brown I., French J., Spencer T. (2005) Capturing Geomorphological Change in the Coastal Simulator, Tyndall Centre Technical Report 46

Weatherhead K., Knox J., Ramsden S., Gibbons J., Arnell N. W., Odoni N., Hiscock K., Sandhu C., Saich A., Conway D., Warwick C., Bharwani S. (2006) Sustainable water resources: A framework for assessing adaptation options in the rural sector, Tyndall Centre Technical Report 45

Weatherhead K., Knox J., Ramsden S., Gibbons J., Arnell N. W., Odoni N., Hiscock K., Sandhu C., Saich A., Conway D., Warwick C., Bharwani S. (2006) Sustainable water resources: A framework for assessing adaptation options in the rural sector, Tyndall Centre Technical Report 44

Lowe, T. (2006) Vicarious experience vs. scientific information in climate change risk perception and behaviour: a case study of undergraduate students in Norwich, UK, Tyndall Centre Technical Report 43

Atkinson, P. (2006) Towards an integrated coastal simulator of the impact of sea level rise in East Anglia: Part B3 - Coastal simulator and biodiversity - Modelling the change in wintering Twite Carduelis flavirostris populations in relation to changing saltmarsh area, Tyndall Centre Technical Report 42B3

Gill, J., Watkinson, A. and Sutherland, W. (2006) Towards an integrated coastal simulator of the impact of sea level rise in East Anglia: Part B2 - Coastal simulator and biodiversity - Models of biodiversity responses to environmental change, Tyndall Centre Technical Report 42B2

Ridley, J., Gill, J., Watkinson, A. and Sutherland, W. (2006) Towards an integrated coastal simulator of the impact of sea level rise in East Anglia: Part B1 - Coastal simulator and biodiversity - Design and structure of the coastal simulator, Tyndall Centre Technical Report 42B1

Stansby, P., Launder B., Laurence, D., Kuang, C. and Zhou, J. (2006) Towards an integrated coastal simulator of the impact of sea level rise in East Anglia: Part A - Coastal wave climate prediction and sandbanks for coastal protection, Tyndall Centre Technical Report 42A

Lenton, T. M., Loutre, M. F., Williamson, M. S., Warren, R., Goodess, C., Swann, M., Cameron, D. R., Hankin, R., Marsh, R. and Shepherd, J. G. (2006) Climate change on the millennial timescale, Tyndall Centre Technical Report 41

Bows, A., Anderson, K. and Upham, P. (2006) Contraction & Convergence: UK carbon emissions and the implications for UK air traffic, Tyndall Centre Technical Report 40

Starkey R., Anderson K. (2005) Domestic Tradeable Quotas: A policy instrument for reducing greenhouse gas emissions from energy use, Tyndall Centre Technical Report 39

Pearson, S., Rees, J., Poulton, C., Dickson, M., Walkden, M., Hall, J., Nicholls, R., Mokrech, M., Koukoulas, S. and Spencer, T. (2005) Towards an integrated coastal sediment dynamics and shoreline response simulator, Tyndall Centre Technical Report 38

Sorrell, S. (2005) The contribution of energy service contracting to a low carbon economy, Tyndall Centre Technical Report 37

Tratalos, J. A., Gill, J. A., Jones, A., Showler, D., Bateman, A., Watkinson, A., Sugden, R. and Sutherland, W. (2005) Interactions between tourism, breeding birds and climate change across a regional scale, Tyndall Centre Technical Report 36

Thomas, D., Osbahr, H., Twyman, C., Adger, W. N. and Hewitson, B. (2005) ADAPTIVE: Adaptations to climate change amongst natural resource-dependant societies in the developing world: across the Southern African climate gradient, Tyndall Centre Technical Report 35

Arnell, N. W., Tompkins, E. L., Adger, W. N. and Delany, K. (2005) Vulnerability to abrupt climate change in Europe, Tyndall Centre Technical Report 34

Shackley, S. and Anderson, K. et al. (2005) Decarbonising the UK: Energy for a climate conscious future, Tyndall Centre Technical Report 33

Halliday, J., Ruddell, A., Powell, J. and Peters, M. (2005) Fuel cells: Providing heat and power in the urban environment, Tyndall Centre Technical Report 32

Haxeltine, A., Turnpenny, J., O’Riordan, T. and Warren, R. (2005) The creation of a pilot phase Interactive Integrated Assessment Process for managing climate futures, Tyndall Centre Technical Report 31

Nedic, D. P., Shakoor, A. A., Strbac, G., Black, M., Watson, J. and Mitchell, C. (2005) Security assessment of futures electricity scenarios, Tyndall Centre Technical Report 30

Shepherd, J., Challenor, P., Marsh, B., Williamson, M., Yool, W., Lenton, T., Huntingford, C., Ridgwell, A. and Raper, S. (2005) Planning and Prototyping a Climate Module for the Tyndall Integrated Assessment Model, Tyndall Centre Technical Report 29

Lorenzoni, I., Lowe, T. and Pidgeon, N. (2005) A strategic assessment of scientific and behavioural perspectives on ‘dangerous’ climate change, Tyndall Centre Technical Report 28

Boardman, B., Killip, G., Darby, S. and Sinden, G. (2005) Lower Carbon Futures: the 40% House Project, Tyndall Centre Technical Report 27

Dearing, J.A., Plater, A.J., Richmond, N., Prandle, D. and Wolf, J. (2005) Towards a high resolution cellular model for coastal simulation (CEMCOS), Tyndall Centre Technical Report 26

Timms, P., Kelly, C. and Hodgson, F. (2005) World transport scenarios project, Tyndall Centre Technical Report 25

Brown, K., Few, R., Tompkins, E. L., Tsimplis, M. and Sortti (2005) Responding to climate change: inclusive and integrated coastal analysis, Tyndall Centre Technical Report 24

Anderson, D., Barker, T., Ekins, P., Green, K., Köhler, J., Warren, R., Agnolucci, P., Dewick, P., Foxon, T., Pan, H. and Winne, S. (2005) ETech+: Technology policy and technical change, a dynamic global and UK approach, Tyndall Centre Technical Report 23

Abu-Sharkh, S., Li, R., Markvart, T., Ross, N., Wilson, P., Yao, R., Steemers, K., Kohler, J. and Arnold, R. (2005) Microgrids: distributed on-site generation, Tyndall Centre Technical Report 22

Shepherd, D., Jickells, T., Andrews, J., Cave, R., Ledoux, L., Turner, R., Watkinson, A., Aldridge, J., Malcolm, S., Parker, R., Young, E., Nedwell, D. (2005) Integrated modelling of an estuarine environment: an assessment of managed realignment options, Tyndall Centre Technical Report 21

Dlugolecki, A. and Mansley, M. (2005) Asset management and climate change, Tyndall Centre Technical Report 20

Shackley, S., Bray, D. and Bleda, M. (2005) Developing discourse coalitions to incorporate stakeholder perceptions and responses within the Tyndall Integrated Assessment, Tyndall Centre Technical Report 19

Dutton, A. G., Bristow, A. L., Page, M. W., Kelly, C. E., Watson, J. and Tetteh, A. (2005) The Hydrogen energy economy: its long term role in greenhouse gas reduction, Tyndall Centre Technical Report 18

Few, R. (2005) Health and flood risk: A strategic assessment of adaptation processes and policies, Tyndall Centre Technical Report 17

Brown, K., Boyd, E., Corbera-Elizalde, E., Adger, W. N. and Shackley, S. (2004) How do CDM projects contribute to sustainable development? Tyndall Centre Technical Report 16

Levermore, G., Chow, D., Jones, P. and Lister, D. (2004) Accuracy of modelled extremes of temperature and climate change and its implications for the built environment in the UK, Tyndall Centre Technical Report 14

Jenkins, N., Strbac, G. and Watson, J. (2004) Connecting new and renewable energy sources to the UK electricity system, Tyndall Centre Technical Report 13

Palutikof, J. and Hanson, C. (2004) Integrated assessment of the potential for change in storm activity over Europe: Implications for insurance and forestry, Tyndall Centre Technical Report 12

Berkhout, F., Hertin, J. and Arnell, N. (2004) Business and Climate Change: Measuring and Enhancing Adaptive Capacity, Tyndall Centre Technical Report 11

Tsimplis, S. et al. (2004) Towards a vulnerability assessment for the UK coastline, Tyndall Centre Technical Report 10

Gill, J., Watkinson, A. and Côté, I. (2004) Linking sea level rise, coastal biodiversity and economic activity in Caribbean island states: towards the development of a coastal island simulator, Tyndall Centre Technical Report 9

Skinner, I., Fergusson, M., Kröger, K., Kelly, C. and Bristow, A. (2004) Critical Issues in Decarbonising Transport, Tyndall Centre Technical Report 8

Adger, W. N., Brooks, N., Kelly, M., Bentham, S. and Eriksen, S. (2004) New indicators of vulnerability and adaptive capacity, Tyndall Centre Technical Report 7

Macmillan, S. and Köhler, J.H. (2004) Modelling energy use in the global building stock: a pilot survey to identify available data, Tyndall Centre Technical Report 6

Steemers, K. (2003) Establishing research directions in sustainable building design, Tyndall Centre Technical Report 5

Goodess, C.M., Osborn, T. J. and Hulme, M. (2003) The identification and evaluation of suitable scenario development methods for the estimation of future probabilities of extreme weather events, Tyndall Centre Technical Report 4

Köhler, J.H. (2002) Modelling technological change, Tyndall Centre Technical Report 3

Gough, C., Shackley, S. and Cannell, M.G.R. (2002) Evaluating the options for carbon sequestration, Tyndall Centre Technical Report 2

Warren, R. (2002) A blueprint for integrated assessment of climate change, Tyndall Centre Technical Report 1