Beyond probability: new methods for representing uncertainty in projections of future climate

Guangtao Fu, Jim Hall and Jonathan Lawry June 2005

Tyndall Centre for Climate Change Research

Working Paper 75

Guangtao Fu (1), Jim Hall (2) and Jonathan Lawry (1)

(1) Faculty of Engineering, University of Bristol, BS8 1TR
(2) School of Civil Engineering and Geosciences, University of Newcastle upon Tyne, NE1 7RU

Emails: guangtao.fu@bristol.ac.uk, jim.hall@ncl.ac.uk, j.lawry@bristol.ac.uk

Please note that Tyndall working papers are "work in progress". Whilst they are commented on by Tyndall researchers, they have not been subject to a full peer review. The accuracy of this work and the conclusions reached are the responsibility of the author(s) alone and not the Tyndall Centre.


Summary

Whilst the majority of the climate research community is now set upon the objective of generating probabilistic predictions of climate change, disconcerting reservations persist. Attempts to construct probability distributions over socio-economic scenarios are doggedly resisted. Variation between published probability distributions of climate sensitivity attests to incomplete knowledge of the prior distributions of critical parameters in climate models. In this paper we address these concerns by adopting an imprecise probability approach. The uncertainties considered in our analysis come from two sources: emissions scenarios and climate model uncertainties. For the former, we argue that emissions scenarios are based on different views of future social, economic and technical development that are expressed in terms of fuzzy linguistic narratives, and that any precise emissions trajectory can therefore be thought of as having a degree of membership between 0 and 1 in a given scenario. We demonstrate how these scenarios can be propagated through a simple climate model, MAGICC. Imprecise probability distributions are constructed to represent climate model uncertainties in terms of the published probability distributions of climate sensitivity. This is justified on the basis that probabilistic estimates of climate sensitivity are highly contested and there is little prospect of a unique probability distribution being collectively agreed upon in the near future. We then demonstrate how imprecise probability distributions of climate sensitivity can be propagated through MAGICC. Emissions scenario uncertainties and the imprecise probabilistic representation of model uncertainties are combined to generate lower and upper cumulative probability distributions for Global Mean Temperature (GMT).


1 Introduction

The issue of uncertainty in climate projections was considered by the Intergovernmental Panel on Climate Change (IPCC) in its Third Assessment Report (TAR) through the attachment of likelihoods to many parameters, variables and outcomes, although no explicit probability distributions were assigned to some crucial outcomes, for example its 1.4–5.8 K range for projected warming over the twenty-first century. The lack of such a probabilistic estimate has stimulated the research community, and many more attempts have since been made to provide probabilistic climate projections. Most uncertainty studies performed so far are in fact sensitivity studies, in which the sensitivity of model projections to different parameters is explored by repeating a numerical simulation with the parameter in question set to a nominal, a low, and a high value, with all other parameters fixed. Multiple parameters may also be perturbed at the same time to evaluate their joint impact and to obtain an extreme value of a model response. For example, the Second Assessment Report of the IPCC used the sensitivity analysis approach to characterize the uncertainty in future climate change (IPCC, 1996). Simulations were performed with an energy balance-upwelling diffusion model by varying the increase in future greenhouse gases over 6 different scenarios (IS92a–f), varying the climate sensitivity across the values 1.5ºC, 2.5ºC, and 4.5ºC, and varying the assumption about aerosols (Webster and Sokolov, 2000). Sensitivity analysis usually gives only ranges of possible outcomes, so authors in all three working groups of the TAR were encouraged to identify the most important uncertainties and characterize the distribution of values of key parameters, variables, or outcomes, where possible using formal probabilistic methods (Moss and Schneider, 2000; Reilly et al., 2001).
Uncertainty was indeed quantified for some aspects of climate change in the TAR; however, uncertainty in key results, such as the increase in global mean surface temperature through 2100, was given only as a range without probabilities (Houghton et al., 2001). Schneider (2001, 2002) has expressed deep concern about the ambiguity and potential dangers resulting from the absence of probabilities in climate projections. Since then, several studies have contributed probabilistic forecasts of future climate. Stott and Kettleborough (2002) presented probabilistic forecasts of global-mean temperatures for four representative SRES marker scenarios (A1FI, A2, B1 and B2) for future emissions, obtained with a comprehensive climate model. Wigley and Raper (2001) provided probability distributions for the 1990 to 2100 warming range given in the IPCC Second Assessment Report, based on the assumptions that all 35 emissions scenarios in SRES are equally likely and that there is only a 10% chance that the climate sensitivity lies outside the range 1.5ºC to 4.5ºC. Allen et al. (2000) obtained a 5-95% confidence interval of 1-2.5 K above pre-industrial values by the 2040s under the IS92a scenario, which is consistent with the observed near-surface temperature record as well as with the overall patterns of response predicted by several general circulation models. Knutti et al. (2002) applied a Monte Carlo approach to produce probabilistic climate projections under two illustrative emissions scenarios, B1 and A2, using a climate model of reduced complexity; the uncertainties in the input parameters and in the model itself were taken into account, and past observations of oceanic and atmospheric warming were used to constrain the range of realistic model responses. Webster et al. (2003) assessed a probability distribution for global mean temperature change generated by propagating probabilities in both emissions forecasts and uncertain climate


parameters. In addition, they estimated uncertainty under a policy constraint as well as under a no-policy case, to show how much uncertainty remains even after a relatively certain cap on emissions is put in place. These previous attempts to describe uncertainty have, however, been limited to the use of probability theory as the only tool of uncertainty modelling. In fact, the concept of uncertainty is too broad to be captured by classical probability measures alone (Klir, 1999). As is well known, a number of alternative uncertainty theories have been proposed since at least the 1970s, including fuzzy measures, possibility measures, and upper and lower probabilities, all of which can be expressed in the framework of imprecise probabilities. Imprecise probabilities provide a basis for partial ordering of preferences, which may be sufficient to enable policy-related decision-making whilst better reflecting the very significant uncertainties. The approach is also attractive in that, through the theory of random sets, both probabilistic and fuzzy information (provided a fuzzy set can be interpreted as a consonant random set) can be combined to generate lower and upper probabilities on the quantity of interest. The challenge of probabilistic, or risk-based, climate forecasting is to begin to assess what changes can be ruled out as unlikely, rather than simply ruled in as possible (Allen, 2003). Our objective is, through applying imprecise probabilities to climate projections, to provide projections of climate-related quantities of interest (e.g. Global Mean Temperature (GMT) change through the 21st century) in a format that accounts for uncertainties yet forms a basis for decision-making. The approach should locate middle ground between two hitherto implacable positions:

• Projections presented as a set of scenarios, with no information whatsoever about the relative likelihood of those scenarios, provide a limited (insufficient, it is argued) basis for decision-making.

• Constructing a unique probability distribution over the quantity of interest is desirable in that it forms a rational basis for decision-making. However, there may not be sufficient information to justifiably construct a unique probability distribution, owing to lack of knowledge about key physical and socio-economic processes.

2 Uncertainty in Emissions Scenarios

Future emissions of greenhouse gases (GHG) into the atmosphere are subject to large uncertainties, which are among the major sources of uncertainty in projections of global warming, particularly over the latter half of the 21st century. Webster et al. (2002) produced probability density functions (pdfs) of GHG emissions for each time period using a Monte Carlo analysis of the MIT EPPA model, a computable general equilibrium model of the world economy with sectoral and regional detail; the results were then used to estimate the probability distribution of global mean temperature change (Webster et al., 2002). Wigley and Raper (2001) assumed that all of the 35 emissions scenarios in the IPCC SRES are equally likely. Other studies (Allen et al., 2000; Knutti et al., 2002; Stott and Kettleborough, 2002) have used specific representative scenarios to calculate future uncertainty. As such, these studies analyzed the uncertainty only in the climate-system response, without characterizing socio-economic uncertainty except through individual IPCC emissions scenarios.


Uncertainties in emissions scenarios arise mainly from the modelling of future economic activities and technological changes. Fuzzy set theory, as an alternative tool of uncertainty modelling, has an advantage over classical probability theory in modelling these uncertainties, because much of the information on which human decisions are based is possibilistic rather than probabilistic in nature. Since its emergence (Zadeh, 1965), fuzzy set theory has been used in a wide variety of applications, especially in fields such as decision theory, artificial intelligence and the social sciences. In this section, we begin by constructing a fuzzy description of the data for each emissions scenario, then propagate each fuzzy scenario through the Model for the Assessment of Greenhouse-gas Induced Climate Change (MAGICC) to generate fuzzy climate projections (such as fuzzy global mean temperature changes). Finally, we aggregate the fuzzy global mean temperature changes obtained from the IPCC emissions scenarios.

2.1 Constructing fuzzy scenarios

Models of future climate change require, amongst other inputs, time series of greenhouse gas emissions. However, the socio-economic scenarios from which these time series are derived are far from being precise constructs (IPCC, 2000). They are based upon linguistic narratives of potential socio-economic futures, and the axes of the abstract space upon which scenarios are defined are not absolute. Whilst the SRES scenarios are commonly characterised in terms of dimensions of "globalization" and "sustainability," it is clear that "they occupy a multidimensional space and no simple metric can be used to classify them" (IPCC, 2000), so alternative abstract dimensions might equally well be defined. Nonetheless, in general we think of scenarios as linguistic constructs and therefore inherently fuzzy in the sense of Zadeh (1965) and Williamson (1994).
There is an approximate correspondence between a given scenario, S, and a quantity of interest, X, for example annual fossil CO2 emissions in 2100. In other words, it is not always possible to say definitely whether a measurement x of X corresponds to a scenario S, merely that it has some degree of correspondence with the scenario, which may be represented by the membership µS(x) in the set S. The fuzzy membership function µS has the form µS: X → [0, 1], where [0, 1] denotes the interval of real numbers from 0 to 1 inclusive. If there exists some x for which µS(x) = 1, the fuzzy set is said to be 'normalised'. Some quantities associated with future scenarios are more precisely known than others, in which case they can be represented by more narrowly defined membership functions. For example, future global population is held to be quite well estimated (World Bank, 1991; UN, 1998), whilst fossil fuel consumption is more contested, as evidenced by the rather different fossil CO2 emissions trajectories associated with the SRES A1 scenario.

A fuzzy number is a fuzzy subset of the real line. It is characterized by a possibility distribution and can, it is argued, model imprecisely known quantities more faithfully than single-valued numbers. Here trapezoidal fuzzy numbers are used to represent the values of emissions scenarios in the future. The fuzzy set need not be trapezoidal: sensitivity analysis indicates that any membership function consistent with the general form of the scenario family will generate similar results. Moreover, functions of fuzzy numbers will not necessarily be trapezoidal, as subsequent examples will demonstrate. A fuzzy number Ã in R is a trapezoidal fuzzy number, denoted Ã = (a1, a2, a3, a4), if its membership function µÃ: R → [0, 1] satisfies

µÃ(x) = (x − a1)/(a2 − a1),   a1 ≤ x ≤ a2
µÃ(x) = 1,                    a2 ≤ x ≤ a3
µÃ(x) = (x − a4)/(a3 − a4),   a3 ≤ x ≤ a4
µÃ(x) = 0,                    otherwise

where a1, a2, a3 and a4 are crisp numbers and a1 < a2 ≤ a3 < a4. The scenario A1B-AIM gives a precise forecast for the fossil CO2 value in year 2100, i.e., 13.096 Gt C. It is more reasonable to describe this value by a trapezoidal fuzzy number: for example, we suppose that the possible values span from the minimum to the maximum value in the A1 family with a positive fuzzy membership degree, and that the interval of maximum possibility covers 25 per cent of the full range. In this way, we obtain the trapezoidal fuzzy number (4.31, 10.9, 19.03, 36.84), shown in Figure 1. Here the outer envelope of the set of scenarios in the SRES A1 family is used to construct the support of the fuzzy number, shown in Figure 2.
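As a minimal illustration (a Python sketch, not part of the original analysis), the piecewise membership function above can be evaluated directly; the numbers are those of the A1 fuzzy number derived in the text:

```python
def trapezoidal_membership(x, a1, a2, a3, a4):
    """Membership degree of x in the trapezoidal fuzzy number (a1, a2, a3, a4)."""
    if a1 <= x <= a2:
        return (x - a1) / (a2 - a1)   # rising edge
    if a2 <= x <= a3:
        return 1.0                    # interval of maximum possibility
    if a3 <= x <= a4:
        return (x - a4) / (a3 - a4)   # falling edge
    return 0.0                        # outside the support

# Fossil CO2 in 2100 for the A1 family (Gt C), as in Figure 1
mu = lambda x: trapezoidal_membership(x, 4.31, 10.9, 19.03, 36.84)
```

For instance, the precise A1B-AIM forecast of 13.096 Gt C falls in the interval of maximum possibility and so has membership 1.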
Figure 1: The trapezoidal fuzzy number for Fossil CO2 in year 2100
Figure 2: The fuzzy description for Fossil CO2 in Scenario A1B-AIM

Figures 3-6 illustrate the trajectories of fuzzy CO2 emissions constructed for the four scenario families A1, A2, B1 and B2 in SRES, respectively. For each SRES scenario family, we construct a fuzzy emissions scenario in which the marker scenario is regarded as the central line and the minimum and maximum values of the scenarios in the family as the envelope of the fuzzy sets. In Figures 3-6, marker scenarios are shown with thick black lines and the others with thin lines; the dotted lines show the envelope of the fuzzy sets.


Figure 3: The CO2 trajectories for all the A1 family scenarios
Figure 4: The CO2 trajectories for all the A2 family scenarios


Figure 5: The CO2 trajectories for all the B1 family scenarios
Figure 6: The CO2 trajectories for all the B2 family scenarios

2.2 Propagating fuzzy scenarios

Zadeh (1975) introduced the extension principle as a method for extending point-to-point mappings to mappings between fuzzy sets. The extension principle plays a significant role in the development of fuzzy arithmetic by providing a means for the extension of fuzzy sets through a functional relation.

Let f be a mapping X1 × … × Xn → Y, which maps a point x = (x1, …, xn) in X1 × … × Xn to a point y in Y such that y = f(x1, …, xn). Let A = A1 × … × An be a fuzzy subset of X1 × … × Xn; then the image B of A under f is a fuzzy subset of Y defined by

µB(y) = sup{x: y = f(x)} {µA1(x1) ∧ … ∧ µAn(xn)}

where µAi and µB are the fuzzy memberships in the sets Ai and B respectively, and ∧ is the minimum operator. The difficulty lies in calculating the fuzzy subset B, i.e., finding the image of A under f. In our case, the aim is to obtain the GMT changes in fuzzy terms by propagating the fuzzy emissions scenarios through the climate model. The sensitivity and monotonicity analysis in Appendix I demonstrates that the image of a fuzzy emissions scenario can be computed by consideration of the direction of increase of the climate model alone. Figures 7 and 8 show the fuzzy GMT changes in 2050 and 2100 respectively, on the basis of the four SRES emissions scenarios.
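For a monotonically increasing model response, the sup-min extension principle reduces to mapping the endpoints of each α-cut, which is what the Appendix I monotonicity result licenses. A hedged Python sketch, with a hypothetical power-law stand-in for the climate-model response (MAGICC itself is not reproduced here):

```python
def propagate_fuzzy(f, cuts):
    """Image of a fuzzy number under a monotonically increasing function f,
    computed cut by cut: the image of the alpha-cut [lo, hi] is [f(lo), f(hi)],
    which realises the sup-min extension principle for monotone f."""
    return {alpha: (f(lo), f(hi)) for alpha, (lo, hi) in cuts.items()}

# Hypothetical monotone stand-in for the climate-model response
response = lambda e: 0.1 * e ** 0.8

# Alpha-cuts of a fuzzy fossil CO2 emissions value (Gt C); the 0.5 cut is illustrative
cuts = {1.0: (10.9, 19.03), 0.5: (7.6, 27.9), 0.0: (4.31, 36.84)}
image = propagate_fuzzy(response, cuts)  # alpha-cuts of the fuzzy model output
```

The image inherits the nested structure of the input cuts, so it is again a well-formed fuzzy number.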
Figure 7: The fuzzy description of GMT changes in year 2050
Figure 8: The fuzzy description of GMT changes in year 2100

2.3 Aggregating fuzzy GMT changes

Contained within MAGICC are 19 pre-defined emissions scenarios, and there are 40 scenarios provided in SRES. Amongst these, four marker scenarios, i.e., A1B-AIM, A2-ASF, B1-IMA and B2-MES, are designated to be characteristic of the four storylines related to different assumptions of socio-economic development (Nakicenovic and Swart, 2000).


In the following, we use the Choquet integral (as described in Appendix II) to aggregate the fuzzy GMT changes from the four SRES emissions scenarios (i.e., the results in Figures 7 and 8). We use three different assumptions for the fuzzy measure on the power set of the four SRES emissions scenarios, shown in Tables 1-3.

Table 1: Assumption one: fuzzy measures on the power set (additive)
  A1: 0.5    A2: 0.2    B1: 0.25   B2: 0.05
  A1A2: 0.7    A1B1: 0.75   A1B2: 0.55   A2B1: 0.45   A2B2: 0.25   B1B2: 0.3
  A1A2B1: 0.95   A1A2B2: 0.75   A1B1B2: 0.8   A2B1B2: 0.5
  A1A2B1B2: 1


Table 2: Assumption two: fuzzy measures on the power set (sub-additive)
  A1: 0.7    A2: 0.3    B1: 0.35   B2: 0.1
  A1A2: 0.8    A1B1: 0.85   A1B2: 0.6    A2B1: 0.5    A2B2: 0.3    B1B2: 0.35
  A1A2B1: 1    A1A2B2: 0.8    A1B1B2: 0.8    A2B1B2: 0.5
  A1A2B1B2: 1

Table 3: Assumption three: fuzzy measures on the power set (super-additive)
  A1: 0.3    A2: 0.1    B1: 0.15   B2: 0.05
  A1A2: 0.5    A1B1: 0.55   A1B2: 0.5    A2B1: 0.3    A2B2: 0.2    B1B2: 0.2
  A1A2B1: 0.85   A1A2B2: 0.65   A1B1B2: 0.7    A2B1B2: 0.45
  A1A2B1B2: 1
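The discrete Choquet integral used for aggregation can be sketched as follows (a Python sketch; the membership degrees of the single GMT value are hypothetical, while the fuzzy measure is that of Table 1). For an additive measure the Choquet integral reduces to a weighted sum, which provides a useful check:

```python
def choquet(values, measure):
    """Discrete Choquet integral of `values` (element -> degree in [0, 1])
    with respect to the fuzzy measure `measure` (frozenset -> weight)."""
    items = sorted(values.items(), key=lambda kv: kv[1], reverse=True)
    coalition, total = set(), 0.0
    for i, (elem, v) in enumerate(items):
        coalition.add(elem)
        v_next = items[i + 1][1] if i + 1 < len(items) else 0.0
        total += (v - v_next) * measure[frozenset(coalition)]
    return total

# Fuzzy measure of Table 1 (additive)
g = {frozenset(s.split("+")): w for s, w in {
    "A1": 0.5, "A2": 0.2, "B1": 0.25, "B2": 0.05,
    "A1+A2": 0.7, "A1+B1": 0.75, "A1+B2": 0.55,
    "A2+B1": 0.45, "A2+B2": 0.25, "B1+B2": 0.3,
    "A1+A2+B1": 0.95, "A1+A2+B2": 0.75, "A1+B1+B2": 0.8,
    "A2+B1+B2": 0.5, "A1+A2+B1+B2": 1.0}.items()}

# Hypothetical membership degrees of one GMT value under each scenario family
memberships = {"A1": 0.8, "A2": 0.6, "B1": 0.4, "B2": 0.2}
aggregated = choquet(memberships, g)
```

With the additive measure this equals the weighted sum 0.8(0.5) + 0.6(0.2) + 0.4(0.25) + 0.2(0.05) = 0.63; with the measures of Tables 2 and 3 the two expressions differ, which is precisely the interaction effect the Choquet integral captures.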

Figures 9 and 10 show the aggregated GMT changes in years 2050 and 2100 respectively, under the three fuzzy measure assumptions used in the Choquet integral.
Figure 9: The aggregation results of GMT change in year 2050
Figure 10: The aggregation results of GMT change in year 2100

3 Uncertainty in Model Parameters

3.1 Constructing lower and upper cumulative probabilities

Figure 11 illustrates probability distributions for climate sensitivity that are reported in the literature. To reduce present knowledge about climate sensitivity to a single probability distribution would clearly misrepresent the scientific disagreement. It therefore seems more appropriate to deal with a set of probability distributions that includes the reported distributions, i.e., we wish to construct an imprecise probability distribution of climate sensitivity.
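Constructing the lower and upper bounding curves is a pointwise envelope operation over the reported cumulative distributions. A brief Python sketch with two illustrative stand-in CDFs (the actual published climate-sensitivity curves of Figure 11 are not reproduced here):

```python
import math

def envelope(cdfs, xs):
    """Pointwise lower and upper envelopes of a collection of CDFs on the grid xs."""
    lower = [min(F(x) for F in cdfs) for x in xs]
    upper = [max(F(x) for F in cdfs) for x in xs]
    return lower, upper

# Illustrative stand-ins for two reported climate-sensitivity CDFs
cdf_uniform = lambda x: min(1.0, max(0.0, (x - 1.5) / 3.0))       # uniform on [1.5, 4.5] degC
cdf_shifted = lambda x: 1.0 - math.exp(-max(0.0, x - 1.0) / 2.0)  # shifted exponential

xs = [0.5 * i for i in range(21)]  # 0 to 10 degC
low, up = envelope([cdf_uniform, cdf_shifted], xs)
```

Any probability distribution whose CDF lies between `low` and `up` everywhere is a member of the resulting imprecise probability model.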
Figure 11: Cumulative probability distributions for climate sensitivity, together with the lower and upper envelope and the discrete bounding functions. The distributions shown are the uniform-prior and expert-prior distributions of Forest et al. (2002), Andronova and Schlesinger (2001), Knutti et al. (2002), Wigley and Raper (2001), the weighted and unweighted distributions of Murphy et al. (2004), Gregory et al. (2002) and Tol and de Vos (1998).

An outer approximation to the lower and upper bounding cumulative probabilities F_*(x) and F^*(x) is constructed by drawing n+1 cumulative probability levels on the vertical axis, P0, P1, …, Pn, where Pi = i/n. An interval Ai, i = 1, …, n, is defined by each discrete probability interval Pi-1 to Pi, with its lower endpoint located where the upper cumulative probability F^* reaches Pi-1 and its upper endpoint located where the lower cumulative probability F_* reaches Pi. These intervals are labelled as focal sets, and a probability mass m(Ai) = 1/n is assigned to each interval. The bounds on the lower and upper cumulative probabilities F_*(x) and F^*(x) are exact at P0, P1, …, Pn, in that the approximation touches the lower and upper cumulative probability curves at these probability levels.

4 Combining Uncertainties of a Single Scenario and Climate Sensitivity

In the above, the uncertainty in emissions scenarios is described in fuzzy terms, and the uncertainty in model parameters is described in imprecise probabilistic terms. These descriptions differ not only in concepts and knowledge sources but also in formalism. A natural way of combining the two types of uncertainty directly is to provide a formalism which can function as a bridge between the fuzzy and the probabilistic terms. Random sets provide such a formalism, in which both types of uncertainty can be represented. In the following, we introduce methods for constructing random sets from fuzzy numbers and from lower and upper cumulative probability bounds, respectively.

4.1 Random set from fuzzy emissions scenario

The α-cut of a fuzzy number Ã is defined as

Ãα = {x ∈ X : µÃ(x) ≥ α}

where µÃ(x) is the membership of x in Ã and α ∈ [0, 1]. Each Ãα is a non-empty closed interval, and a fuzzy number can be represented in terms of its level-cuts, i.e.,

Ã = ∪α∈[0,1] αÃα

Supposing n levels α1 > α2 > … > αn, we obtain a nested family of sets {A1 ⊆ A2 ⊆ … ⊆ An}. Letting mi = αi − αi+1, with αn+1 = 0 by convention, the fuzzy membership function can be rewritten as

µÃ(x) = Σ{i: x ∈ Ai} mi

According to the definition of a fuzzy number, ∃x, µÃ(x) = 1, so Σi mi = 1. The mass mi can be viewed as the probability that Ai stands as a crisp (non-fuzzy) representative of the fuzzy number Ã (Dubois and Prade, 1989). The pairs {(Ai, mi), i = 1, 2, …, n} form a nested family of sets and are therefore referred to as a consonant random set.

For example, we use 6 level-cuts to represent the trapezoidal fuzzy number shown in Figure 12, with α1, α2, …, α6 taking the values 1.0, 0.8, 0.6, 0.4, 0.2 and 0, respectively, which gives A1 = [10.9, 19.03], A2 = [9.58, 22.59], A3 = [8.27, 26.15], A4 = [6.95, 29.71], A5 = [5.63, 33.27] and A6 = [4.31, 36.84]. With mi = αi − αi+1 and α7 = 0 by convention, we have m1 = m2 = m3 = m4 = m5 = 0.2 and m6 = 0. In this way, we construct a consonant random set with five focal elements, {(Ai, mi), i = 1, 2, …, 5}, from a trapezoidal fuzzy number. In our analysis, 11 level-cuts are actually used for climate projection.
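The level-cut discretisation just described can be sketched in a few lines of Python; with the A1 trapezoidal fuzzy number (4.31, 10.9, 19.03, 36.84) and 6 levels it reproduces the focal intervals listed above:

```python
def consonant_random_set(a1, a2, a3, a4, n_levels=6):
    """Discretise the trapezoidal fuzzy number (a1, a2, a3, a4) into a
    consonant random set: alpha-cuts at equally spaced levels from 1.0 down
    to 0.0, each focal interval A_i carrying mass m_i = alpha_i - alpha_{i+1}."""
    alphas = [1 - i / (n_levels - 1) for i in range(n_levels)]  # 1.0, ..., 0.0
    focal = []
    for i, a in enumerate(alphas):
        lo = a1 + a * (a2 - a1)                 # left endpoint of the alpha-cut
        hi = a4 - a * (a4 - a3)                 # right endpoint of the alpha-cut
        mass = a - (alphas[i + 1] if i + 1 < len(alphas) else 0.0)
        focal.append(((lo, hi), mass))
    return focal

# The A1 fossil CO2 fuzzy number from Figure 1
rs = consonant_random_set(4.31, 10.9, 19.03, 36.84)
```

The last cut (alpha = 0) carries zero mass, leaving the five focal elements used in the text.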


Figure 12: Random sets from a trapezoidal fuzzy number

4.2 Random set from lower and upper probabilities

One approach to constructing an imprecise probability distribution from the set of probability distributions shown in Figure 11 would be to use the set of all linear combinations of the reported probability distributions. This would include all of the reported distributions, but would exclude many plausible variants. As well as arguably being too precise, sets of continuous distributions can in practice be computationally expensive to propagate, and discrete approaches may be more efficient. A well-established approach (Williamson and Downs, 1990; Ferson, 2004) is to construct a discrete outer envelope on the cumulative probability distributions. In the following analysis we have made use of this more general set of probability distributions, which contains all of the reported distributions.

In the p-box method, an approximation to the lower and upper cumulative probabilities F_*(x) and F^*(x) is constructed by drawing n+1 cumulative probability levels on the vertical axis, P0, P1, …, Pn, where Pi = i/n. A focal set Ai, i = 1, …, n, is defined by each discrete probability interval Pi-1 to Pi, with its lower endpoint located where the upper cumulative probability F^* reaches Pi-1 and its upper endpoint located where the lower cumulative probability F_* reaches Pi. A random set is then constructed by assigning m(Ai) = 1/n. The bounds on the lower and upper cumulative probabilities are exact at P0, P1, …, Pn, in that the random set approximation touches the lower and upper cumulative probability curves at these probability levels. The approach is attractive because it provides a simple rule for partitioning the x-axis and, perhaps more significantly, the resulting mass assignment is defined on only n closed intervals of real numbers (Hall and Lawry, 2004).

Using the above p-box approach, we can obtain the random set approximation to the lower and upper cumulative probabilities of climate sensitivity. Taking an example with 6 levels, we draw the cumulative probability levels P0, P1, …, P5, which take the values 0, 0.2, 0.4, 0.6, 0.8 and 1.0, respectively, shown in Figure 13. The intervals defined by these probability levels are A1 = [0, 3.278], A2 = [1.377, 4.477], A3 = [1.940, 5.864], A4 = [2.471, 8.031] and A5 = [3.148, 20.470]; a random set is then constructed by assigning m(Ai) = 0.2, i = 1, …, 5, to these intervals. In the following, a 21-level random set approximation is adopted for climate projection, see Figure 13.
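A sketch of the p-box discretisation in Python, using numerical inversion of the bounding CDFs; the two uniform bounding distributions are illustrative assumptions, not the actual climate-sensitivity envelope of Figure 13:

```python
def invert(F, p, lo=0.0, hi=50.0, tol=1e-6):
    """Invert a non-decreasing CDF F by bisection on [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if F(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def pbox_random_set(F_lower, F_upper, n):
    """Outer discretisation of a p-box: focal set A_i runs from the point
    where the upper CDF reaches P_{i-1} to the point where the lower CDF
    reaches P_i; each A_i carries probability mass 1/n."""
    P = [i / n for i in range(n + 1)]
    return [((invert(F_upper, P[i]), invert(F_lower, P[i + 1])), 1.0 / n)
            for i in range(n)]

# Illustrative bounding CDFs (uniform on [0, 4] and on [2, 8])
F_up = lambda x: min(1.0, x / 4.0)
F_low = lambda x: min(1.0, max(0.0, (x - 2.0) / 6.0))
rs = pbox_random_set(F_low, F_up, 5)
```

Reconstructing the lower and upper CDFs from the resulting focal intervals recovers a staircase that touches the original bounds exactly at the levels P0, …, Pn.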


Figure 13: Random sets approximation for lower and upper cumulative probabilities

4.3 Propagation of random sets

Let g be a mapping X1 × … × Xn → Y, which maps a point x = (x1, …, xn) in X1 × … × Xn to a point y in Y such that y = g(x1, …, xn). The incomplete knowledge about x = (x1, …, xn), including the dependencies among its components, can be expressed as a random relation, which is a random set (ℑ, m) on the Cartesian product X1 × … × Xn. The random set (ℜ, ρ) that is the image of (ℑ, m) through g is given by (Tonon et al., 2000):

ℜ = {Rj = g(Ai) : Ai ∈ ℑ}, where g(Ai) = {g(x) : x ∈ Ai}

ρ(Rj) = Σ{Ai: Rj = g(Ai)} m(Ai)

The summation in the above equation accounts for the fact that more than one focal element Ai may yield the same image set Rj in Y. Dubois and Prade (1991) addressed some special cases of the above extension principle, including (i) set-valued variables, (ii) consonant random Cartesian products, (iii) stochastically decomposable Cartesian products and (iv) joint probability distributions (Hall and Lawry, 2004).

Once we obtain the random sets of GMT changes by propagating the uncertainty in emissions scenarios and model parameters, we can visualise them by plotting the lower and upper cumulative probability distributions. Suppose a random set is defined on an interval [x1, xs+1] partitioned into disjoint sub-intervals [x1, x2], (x2, x3], …, (xs-1, xs], (xs, xs+1], labelled A1, A2, …, As, respectively. A set of contiguous intervals spanning from xi to xj is labelled Ai,j, i.e., according to its extreme lower and upper limits. The lower and upper cumulative probability distribution functions, F_*(x) and F^*(x) respectively, at some point x in [x1, xs+1] can be obtained as follows:

F_*(x) = Σ{Ai,j: x ≥ xj} m(Ai,j)

F^*(x) = Σ{Ai,j: x ≥ xi} m(Ai,j)
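The two summations translate directly into code: the lower CDF counts only focal intervals lying wholly below x, the upper CDF every focal interval whose lower limit is below x. A small Python sketch with a hypothetical two-interval random set:

```python
def cumulative_bounds(random_set, x):
    """Lower and upper cumulative probability of a random set at x.
    `random_set` is a list of ((lo, hi), mass) pairs."""
    lower = sum(m for (lo, hi), m in random_set if hi <= x)   # wholly below x
    upper = sum(m for (lo, hi), m in random_set if lo <= x)   # starts below x
    return lower, upper

# A hypothetical random set of GMT change (degC)
rs = [((1.0, 2.0), 0.5), ((1.5, 3.0), 0.5)]
```

For example, at x = 2.0 the first interval is certainly below x while the second may or may not be, giving the probability interval [0.5, 1].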

In the above, we constructed random sets from the fuzzy emissions scenarios and from the upper and lower probabilities of climate sensitivity. Supposing that the input variables and climate sensitivity are independent, we can combine the two types of uncertainty through the climate model to generate GMT changes with lower and upper bounds. For comparison, we propagate the uncertainties from two different sources through the climate model: (1) climate sensitivity only; and (2) climate sensitivity and Fossil CO2. The results for years 2050 and 2100 on the basis of scenario A1 are shown in Figures 14 and 15, respectively.
Figure 14: The lower and upper cumulative probability for GMT changes in year 2050

Figure 15: The lower and upper cumulative probability for GMT changes in year 2100

The results can be compared with the TAR estimate [1.4ºC, 5.8ºC] for the GMT increase in year 2100 relative to 1990. When considering the uncertainty from climate sensitivity and Fossil


CO2, the probability for ∆T <1.4ºC lies in the interval [0, 0.21], for ∆T∈[1.4ºC, 5.8ºC] in [0.79, 1], and for ∆T >5.8ºC in [0, 0.38].

5 Propagating the Four Marker Scenarios Including Climate Sensitivity

In the previous section, we combined the uncertainties from a single emissions scenario and climate sensitivity to obtain the lower and upper probability distributions of GMT changes. However, the results still differ between emissions scenarios; in other words, they reflect the uncertainties under one specific emissions scenario only. In this section, we combine the influence of the four marker scenarios in the SRES with that of climate sensitivity to generate imprecise probabilities of global mean temperature changes. Different approaches are demonstrated for the case of aggregation then propagation and the case of propagation then aggregation.

(1) Aggregation then propagation (Approach I): first aggregate the four fuzzy sets of the marker scenarios to generate an aggregated fuzzy emissions scenario, and then propagate the aggregated fuzzy scenario together with climate sensitivity through MAGICC to obtain the upper and lower cumulative probabilities of GMT changes.

(2) Propagation then aggregation (Approaches II and III): first propagate each of the fuzzy emissions scenarios together with climate sensitivity to generate the upper and lower cumulative probabilities of GMT changes (the process in Section 4), and then aggregate the four sets of upper and lower cumulative probabilities to obtain the aggregated bounds. In this case, two different approaches are employed in the aggregation process.

5.1 The case of aggregation then propagation - Approach I

5.1.1 Aggregating the fuzzy emissions scenarios

In this test we consider only Fossil CO2 amongst the 7 gases in the aggregation process. As in the previous tests, the Choquet integral is used for aggregation, with the fuzzy measures over the four SRES marker scenarios A1, A2, B1 and B2 given in Tables 1 to 3. The fuzzy sets of the four Fossil CO2 scenarios for 2100 and their aggregated fuzzy sets are shown in Figure 16.


Figure 16: The four fuzzy sets of Fossil CO2 in year 2100 and their aggregated sets

The aggregated fuzzy sets are normalized by dividing by the maximum fuzzy membership degree before being transformed into the corresponding random sets. Figure 17 shows the results under the assumption of additive fuzzy measures. The random sets of emissions can be elicited by the method introduced above.
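The normalization and random-set extraction step can be sketched on a small discrete fuzzy set: focal elements are the alpha-cuts of the normalized set, and each carries the mass between consecutive membership levels. The membership values below are illustrative, not the aggregated Fossil CO2 set of Figure 16, and `fuzzy_to_random_set` is a hypothetical helper.

```python
# Normalizing a discrete fuzzy set and extracting its random set:
# focal elements are the alpha-cuts, and the mass of each is the gap
# between consecutive membership levels. The membership values below
# are illustrative, not the aggregated Fossil CO2 set of Figure 16.

def fuzzy_to_random_set(membership):
    """membership: dict value -> membership degree (possibly sub-normal)."""
    peak = max(membership.values())
    mu = {x: m / peak for x, m in membership.items()}  # normalize to max 1
    levels = sorted(set(mu.values()))                  # distinct alpha levels
    random_set, prev = [], 0.0
    for alpha in levels:
        focal = frozenset(x for x, m in mu.items() if m >= alpha)
        random_set.append((focal, alpha - prev))       # mass of this cut
        prev = alpha
    return random_set

# Sub-normal fuzzy set over three emissions levels (Gt C)
rs = fuzzy_to_random_set({10: 0.4, 20: 0.8, 30: 0.4})
# the widest focal set carries the base mass; the peak {20} the rest
```

Because the set is normalized first, the masses of the focal elements always sum to 1, as a random set requires.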
Figure 17: Normalized aggregation results of fossil CO2 for the additive measures

5.1.2 Propagating through MAGICC

Together with the random set of climate sensitivity obtained in previous tests, the random set of the aggregated fuzzy scenario is propagated to obtain the upper and lower cumulative probabilities of GMT changes. For comparison, we also provide the results obtained using the method in Section 4, i.e., those considering climate sensitivity and a single emissions scenario (A1, A2, B1 and B2, respectively). The results for 2050 and 2100 are shown in Figures 18 and 19. As far as the emissions scenario is concerned, all the above results consider only the variation of Fossil CO2, with the other gases fixed at their Scenario A1 values; in other words, the aggregated fuzzy scenario consists of the aggregated Fossil CO2 together with the other gases taken directly from emissions scenario A1.
Figure 18: The lower and upper cumulative probability for GMT changes in year 2050

Figure 19: The lower and upper cumulative probability for GMT changes in year 2100

From the above results, we can see that the probability for ∆T < 1.4ºC lies in the interval [0, 0.26], for ∆T ∈ [1.4ºC, 5.8ºC] in [0.74, 1], and for ∆T > 5.8ºC in [0, 0.34].


Figure 20: The GMT changes in years 2025, 2050, 2075 and 2100

Figure 20 shows the variation of uncertainties in the GMT changes in four different years: 2025, 2050, 2075 and 2100. These results are from the propagation of uncertainties in the aggregated emissions scenario (Fossil CO2) and in climate sensitivity. The uncertainty range increases with time, as expected. We also demonstrate the different projections of GMT changes under the different measure assumptions used in the Choquet integral. Figures 21 and 22 show the lower and upper cumulative probabilities aggregated from the three sets of measures in Tables 1 to 3, for years 2050 and 2100 respectively.
Figure 21: The lower and upper cumulative probability of GMT changes for the three measure assumptions in year 2050


Figure 22: The lower and upper cumulative probability of GMT changes for the three measure assumptions in year 2100

5.2 The case of propagation then aggregation - Approach II

In Section 4, we obtained the upper and lower cumulative probabilities of GMT changes by propagating the uncertainty in Fossil CO2 and climate sensitivity. However, the results differ between emissions scenarios. Figure 23 shows the results for year 2100 based on the four SRES marker scenarios A1, A2, B1 and B2 respectively. We now study how to combine the upper and lower cumulative probabilities of GMT changes in Figure 23.
Figure 23: GMT changes in year 2100 based on different scenarios (the horizontal axis is partitioned into sub-intervals labelled A to J)

5.2.1 Aggregating the random sets

A random set can be elicited from a pair of upper and lower cumulative probabilities. To combine the random sets from different sources, a common discretization is required for constructing the reference random set. We discretize GMT increase on an interval [x1, xs+1] into disjoint sub-intervals [x1, x2], (x2, x3], …, (xs-1, xs], (xs, xs+1], labelled A1, A2, …, As respectively (not to be confused with the scenario names). To illustrate the method, in Figure 23 we partition [0, 10] into 10 sub-intervals A, B, …, J, giving a reference power set. We then convert the upper and lower cumulative probabilities of GMT changes in Figure 23 into their random sets, shown in Table 4.

Table 4: Four mass assignments from the upper and lower cumulative probabilities in Figure 23

Scenario A1       Scenario A2       Scenario B1       Scenario B2
Focal    mass     Focal    mass     Focal    mass     Focal    mass
A-C      0.025    A-D      0.06     A-B      0.035    A-C      0.1
A-D      0.09     B-D      0.09     A-C      0.195    B-D      0.245
B-D      0.06     B-E      0.10     B-C      0.055    B-E      0.125
B-E      0.23     C-E      0.13     B-D      0.35     C-E      0.17
B-F      0.065    C-F      0.23     B-E      0.2      C-F      0.215
C-F      0.185    D-F      0.04     C-E      0.04     C-G      0.045
C-G      0.18     D-G      0.20     C-F      0.09     D-G      0.05
C-H      0.055    D-H      0.08     C-G      0.035    D-H      0.05
D-H      0.045    E-H      0.02
D-I      0.04     E-I      0.05
D-J      0.025

Note: A-D denotes the focal element {A,B,C,D}.
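One common way to elicit a finite random set from a pair of cumulative curves is to slice the probability axis into equal strata and bound each stratum by the quantiles of the upper and lower CDFs. The sketch below follows that construction; it is a simplification under stated assumptions, not necessarily the exact elicitation procedure of Hall and Lawry (2004), and the step CDFs are illustrative.

```python
# Eliciting a finite random set from a p-box: slice the probability
# axis into m equal strata; each stratum of mass 1/m gets the focal
# interval bounded by the quantiles of the upper and lower CDFs.
# This is a sketch, not necessarily the procedure of Hall and Lawry
# (2004); the step CDFs below are illustrative.

def quantile(cdf_points, p):
    """Smallest x with CDF(x) >= p, for a step CDF given as a
    sorted list of (x, cumulative probability) pairs."""
    for x, cp in cdf_points:
        if cp >= p - 1e-12:
            return x
    return cdf_points[-1][0]

def pbox_to_random_set(F_lower, F_upper, m=4):
    focal = []
    for i in range(m):
        p = (i + 0.5) / m              # midpoint of stratum i
        lo = quantile(F_upper, p)      # upper CDF gives the left endpoint
        hi = quantile(F_lower, p)      # lower CDF gives the right endpoint
        focal.append(((lo, hi), 1.0 / m))
    return focal

# Illustrative step CDFs on temperatures 1..5 degC
F_up = [(1, 0.25), (2, 0.5), (3, 0.75), (4, 1.0), (5, 1.0)]
F_lo = [(1, 0.0), (2, 0.25), (3, 0.5), (4, 0.75), (5, 1.0)]
rs = pbox_to_random_set(F_lo, F_up, m=4)
# each focal interval [lo, hi] has lo <= hi and carries mass 1/4
```

Because the upper CDF lies to the left of the lower CDF, each stratum yields a genuine interval, and the collection of intervals with their masses is the random set used in the aggregation step.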

Now we can use the Choquet integral to combine the four mass assignments, using the same fuzzy measures over the four emissions scenarios as assumed before (Tables 1 to 3). Two examples illustrate the calculation.

(1) For the focal element {B,C,D,E}: from Table 4 we get the mass assignments mA1({B,C,D,E}) = 0.23, mA2({B,C,D,E}) = 0.1, mB1({B,C,D,E}) = 0.2 and mB2({B,C,D,E}) = 0.125, giving the ordering mA2 < mB2 < mB1 < mA1. Applying the Choquet integral to aggregate the masses,

mv({B,C,D,E}) = mA2[v(N) - v({B2,B1,A1})] + mB2[v({B2,B1,A1}) - v({B1,A1})] + mB1[v({B1,A1}) - v({A1})] + mA1 v({A1})
= 0.1×[1-0.8] + 0.125×[0.8-0.75] + 0.2×[0.75-0.5] + 0.23×0.5 = 0.1912

(2) For the focal element {D,E,F,G,H,I,J}: from Table 4 we get mA1({D,E,F,G,H,I,J}) = 0.025 and mA2 = mB1 = mB2 = 0, giving the ordering mB1 ≤ mB2 ≤ mA2 < mA1. Applying the Choquet integral,

mv({D,E,F,G,H,I,J}) = mB1[v(N) - v({B2,A2,A1})] + mB2[v({B2,A2,A1}) - v({A2,A1})] + mA2[v({A2,A1}) - v({A1})] + mA1 v({A1})
= 0×[1-0.75] + 0×[0.75-0.7] + 0×[0.7-0.5] + 0.025×0.5 = 0.0125
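The calculation can be reproduced with a short routine for the discrete Choquet integral. The additive singleton weights below are inferred from the measure values quoted in the worked examples (they reproduce v({A1}) = 0.5, v({B1,A1}) = 0.75, v({B2,B1,A1}) = 0.8); treat them as an assumption rather than a definitive reading of Tables 1 to 3.

```python
# Discrete Choquet integral of four source values with respect to a
# fuzzy measure v over the scenario set {A1, A2, B1, B2}. The additive
# weights are inferred from the worked examples and are an assumption.

WEIGHT = {"A1": 0.5, "A2": 0.2, "B1": 0.25, "B2": 0.05}

def v(subset):
    """Additive fuzzy measure: v(S) is the sum of singleton weights."""
    return sum(WEIGHT[s] for s in subset)

def choquet(values, measure=v):
    """values: dict source -> number.  Ascending form of the integral:
    C = sum_i (a_(i) - a_(i-1)) * measure({sources with a >= a_(i)})."""
    items = sorted(values.items(), key=lambda kv: kv[1])
    total, prev = 0.0, 0.0
    for i, (_, a) in enumerate(items):
        coalition = [s for s, _ in items[i:]]  # sources with value >= a
        total += (a - prev) * measure(coalition)
        prev = a
    return total

# Worked example (1): the four masses of focal element {B,C,D,E}
m1 = choquet({"A1": 0.23, "A2": 0.10, "B1": 0.20, "B2": 0.125})  # 0.19125
# Worked example (2): the four masses of focal element {D,...,J}
m2 = choquet({"A1": 0.025, "A2": 0.0, "B1": 0.0, "B2": 0.0})     # 0.0125
```

The ascending form used here is algebraically equivalent to the descending decrements written out in the text, which is why it recovers 0.1912 (0.19125 before rounding) and 0.0125.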


Similarly, we can compute the aggregated masses for all the focal elements; the results are shown in Table 5. If their sum is not equal to 1, the aggregated masses are normalized before being transformed back into the form of imprecise probabilities.

Table 5: The aggregated mass assignment

Focal elements   Mass
A-B              0.0088
A-C              0.0663
A-D              0.057
B-C              0.0138
B-D              0.1477
B-E              0.1912
B-F              0.0325
C-E              0.0445
C-F              0.1717
C-G              0.1010
C-H              0.0275
D-F              0.008
D-G              0.0425
D-H              0.041
D-I              0.02
D-J              0.0125
E-H              0.004
E-I              0.01

5.2.2 Converting the aggregated random set to the upper and lower cumulative probabilities

The final step is to convert the aggregated random set back into upper and lower cumulative probabilities. A finer partition of 100 sub-intervals is used in this process because 10 sub-intervals are not enough to generate smooth cumulative probability curves. The aggregated upper and lower cumulative probabilities for years 2050 and 2100 are shown in Figures 24 and 25 respectively, together with the upper and lower cumulative probabilities from each single emissions scenario for comparison. Figure 26 shows the variation of uncertainties in the GMT changes in four different years: 2025, 2050, 2075 and 2100. From Figure 25, we can see that the probability for ∆T < 1.4ºC lies in the interval [0, 0.26], for ∆T ∈ [1.4ºC, 5.8ºC] in [0.74, 1], and for ∆T > 5.8ºC in [0, 0.3].
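The conversion back from a random set of temperature intervals to a pair of cumulative curves is the standard belief/plausibility calculation, sketched below; the intervals and masses are illustrative, not the aggregated assignment of Table 5.

```python
# Converting a random set of intervals back into lower and upper
# cumulative probabilities at a threshold t: the lower CDF is the
# total mass of focal intervals lying entirely below t (belief of
# (-inf, t]), the upper CDF is the total mass of intervals that reach
# below t (plausibility). Intervals and masses here are illustrative.

def random_set_to_cdfs(random_set, t):
    lower = sum(m for (lo, hi), m in random_set if hi <= t)   # belief
    upper = sum(m for (lo, hi), m in random_set if lo <= t)   # plausibility
    return lower, upper

rs = [((0.0, 2.0), 0.2), ((1.0, 3.0), 0.5), ((2.0, 5.0), 0.3)]
low, up = random_set_to_cdfs(rs, 2.0)
# at t = 2.0 only [0, 2] lies entirely below, but all three reach below
```

Evaluating these two sums over a fine grid of thresholds (the 100 sub-intervals mentioned above) traces out the smooth lower and upper cumulative curves.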


Figure 24: The aggregated upper and lower probabilities in year 2050
Figure 25: The aggregated upper and lower probabilities in year 2100


Figure 26: The GMT changes in years 2025, 2050, 2075 and 2100

As with Approach I, the influences of the three sets of measures in Tables 1 to 3 on the lower and upper cumulative probabilities of GMT changes are shown in Figures 27 and 28, for years 2050 and 2100 respectively. In 2050, the lower and upper cumulative probabilities from the three measure assumptions are identical, and even in 2100 they are only slightly different, which indicates that in this approach the GMT changes are insensitive to the fuzzy measure assumptions in the aggregation process.
Figure 27: The lower and upper cumulative probability of GMT changes for the three measure assumptions in year 2050


Figure 28: The lower and upper cumulative probability of GMT changes for the three measure assumptions in year 2100

5.3 The case of propagation then aggregation - Approach III

We now use the Choquet integral to directly aggregate the upper and lower cumulative probabilities of GMT changes generated from a single fuzzy emissions scenario and climate sensitivity. In this approach, upper and lower cumulative probabilities are aggregated separately. For each point in the universe of discourse of GMT changes, we take the corresponding values from the four upper (lower) cumulative probability curves and aggregate them by the Choquet integral.

Take the point 4.0ºC as an example. From the lower probability curves in Figure 23, the four corresponding values are PA1 = 0.175, PA2 = 0.15, PB1 = 0.635 and PB2 = 0.345, with the increasing order PA2 < PA1 < PB2 < PB1. We now apply the Choquet integral to aggregate the four values.

(1) For the set of additive fuzzy measures in Table 1, the aggregated value is
P = PA2[v(N) - v({A1,B2,B1})] + PA1[v({A1,B2,B1}) - v({B2,B1})] + PB2[v({B2,B1}) - v({B1})] + PB1 v({B1})
= 0.15×[1-0.8] + 0.175×[0.8-0.3] + 0.345×[0.3-0.25] + 0.635×0.25 = 0.2935

(2) For the set of sub-additive fuzzy measures in Table 2, the aggregated value is
P = 0.15×[1-0.8] + 0.175×[0.8-0.35] + 0.345×[0.35-0.35] + 0.635×0.35 = 0.331

(3) For the set of super-additive fuzzy measures in Table 3, the aggregated value is
P = 0.15×[1-0.7] + 0.175×[0.7-0.2] + 0.345×[0.2-0.15] + 0.635×0.15 = 0.245
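The pointwise aggregation at ∆T = 4.0ºC can be checked with a short discrete Choquet routine; as before, the additive weights are inferred from the measure values quoted in the worked example (v({B1}) = 0.25, v({B2,B1}) = 0.3, v({A1,B2,B1}) = 0.8) and are an assumption.

```python
# Pointwise Choquet aggregation of the four lower-CDF values at
# delta-T = 4.0 degC. The additive weights are inferred from the
# worked example and are an assumption, not a reading of Table 1.

WEIGHT = {"A1": 0.5, "A2": 0.2, "B1": 0.25, "B2": 0.05}

def choquet(values):
    """Ascending-form discrete Choquet integral w.r.t. the additive measure."""
    items = sorted(values.items(), key=lambda kv: kv[1])
    total, prev = 0.0, 0.0
    for i, (_, a) in enumerate(items):
        total += (a - prev) * sum(WEIGHT[s] for s, _ in items[i:])
        prev = a
    return total

# Lower cumulative probabilities at 4.0 degC read from Figure 23
p = choquet({"A1": 0.175, "A2": 0.15, "B1": 0.635, "B2": 0.345})
# reproduces the additive-measure value 0.2935 computed in the text
```

Repeating this call over a grid of temperature points, once for the four lower curves and once for the four upper curves, yields the aggregated pair of curves used in this approach.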


The aggregated upper and lower cumulative probabilities of GMT changes for years 2050 and 2100 are shown in Figures 29 and 30 respectively, in comparison with the upper and lower cumulative probabilities from each single emissions scenario. Figure 31 shows the variation of uncertainties in the GMT changes in four different years: 2025, 2050, 2075 and 2100.
Figure 29: The aggregated upper and lower probabilities in year 2050
Figure 30: The aggregated upper and lower probabilities in year 2100


Figure 31: The GMT changes in years 2025, 2050, 2075 and 2100

The influences of the three sets of measures in Tables 1 to 3 on the lower and upper cumulative probabilities of GMT changes are shown in Figures 32 and 33, for years 2050 and 2100 respectively.
Figure 32: The lower and upper cumulative probability of GMT changes for the three measure assumptions in year 2050


Figure 33: The lower and upper cumulative probability of GMT changes for the three measure assumptions in year 2100

5.4 Comparison of the approaches

We can now compare the results from the three approaches. Figures 34 and 35 show the three pairs of aggregated upper and lower cumulative probabilities of GMT changes for years 2025 and 2100, respectively.

Figure 34: Comparison of the results of the three approaches in year 2025


Figure 35: Comparison of the results of the three approaches in year 2100

6 Conclusions

We have argued that socio-economic scenarios, which determine future greenhouse gas emissions, are fuzzy linguistic constructs. Any precise emissions trajectory (which is required for climate modelling) can be thought of as having a degree of membership between 0 and 1 in a fuzzy scenario. We have demonstrated how fuzzy scenarios can be propagated through a simple climate model, MAGICC. The computational expense is much reduced if the number of variables in the analysis can be reduced through sensitivity analysis and if the model output (in this case global mean temperature) can be shown to be monotonic with respect to the influential variables.

We have constructed imprecise probability distributions to represent climate model uncertainties. This is justified on the basis that probabilistic estimates of climate sensitivity are highly contested and there is little prospect of a unique probability distribution being collectively agreed upon in the near future. We have demonstrated how imprecise probability distributions of climate sensitivity can be propagated through MAGICC using the so-called p-box approximation.

Fuzzy scenario uncertainties and the imprecise probabilistic representation of model uncertainties have been combined using random set theory to generate lower and upper cumulative probability distributions for GMT. Aggregation of scenarios has been demonstrated using the Choquet integral, which we argue provides a more flexible approach to aggregation than simple weighted average methods.

Acknowledgements

This report is part of the Tyndall Centre project 'Estimating uncertainty in future assessments of climate change' (T2.13). The advice of Rachel Warren, Frans Berkhout and Suraje Dessai is gratefully acknowledged. This report was reviewed by Peter Challenor.


References

Allen, M. (2003) Climate forecasting: possible or probable? Nature 425, 242.
Allen, M., S. Raper and J. Mitchell (2001) Uncertainty in the IPCC's assessment report. Science 293, 430-433.
Andronova, N. G. and M. E. Schlesinger (2001) Objective estimation of the probability density function for climate sensitivity. Journal of Geophysical Research-Atmospheres 106, 22605-22611.
Chateauneuf, A. and J. Y. Jaffray (1989) Some characterizations of lower probabilities and other monotone capacities through the use of Möbius inversion. Mathematical Social Sciences 17, 263-283.
Dubois, D. and H. Prade (1986) Weighted minimum and maximum operations in fuzzy set theory. Information Sciences 39, 205-210.
Dubois, D. and H. Prade (1989) Fuzzy sets, possibility and measurement. European Journal of Operational Research 40, 135-154.
Dubois, D. and H. Prade (1991) Random sets and fuzzy interval analysis. Fuzzy Sets and Systems 42, 87-101.
Dubois, D. and H. Prade (2004) On the use of aggregation operations in information fusion processes. Fuzzy Sets and Systems 142, 143-161.
Dubois, D., M. Grabisch, F. Modave and H. Prade (2000) Relating decision under uncertainty and multicriteria decision making models. International Journal of Intelligent Systems 15, 967-979.
Dubois, D., J.-L. Marichal, H. Prade, M. Roubens and R. Sabbadin (2001) The use of the discrete Sugeno integral in decision making: a survey. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 9(5), 539-561.
Forest, C. E., P. H. Stone, A. P. Sokolov, M. R. Allen and M. D. Webster (2002) Quantifying uncertainties in climate system properties with the use of recent climate observations. Science 295, 113-117.
Grabisch, M. (1995) Fuzzy integral in multicriteria decision making. Fuzzy Sets and Systems 69(3), 279-298.
Grabisch, M. (1996) The application of fuzzy integrals in multicriteria decision making. European Journal of Operational Research 89, 445-456.
Gregory, J. M., R. J. Stouffer, S. C. B. Raper, P. A. Stott and N. A. Rayner (2002) An observationally based estimate of the climate sensitivity. Journal of Climate 15, 3117-3121.
Hall, J. W. and J. Lawry (2004) Generation, combination and extension of random set approximations to coherent lower and upper probabilities. Reliability Engineering and System Safety 85, 89-101.
Houghton, J. T. et al. (eds.) (2001) Climate Change 2001: The Scientific Basis. Cambridge University Press, Cambridge.
Intergovernmental Panel on Climate Change (1996) Climate Change 1995 - The Science of Climate Change, Contribution of Working Group I to the Second Assessment Report of the Intergovernmental Panel on Climate Change, Houghton, J. T., Meira Filho, L. G., Callander, B. A., Harris, N., Kattenberg, A. and Maskell, K. (eds.), Cambridge University Press, Cambridge and New York.
Klir, G. J. and T. A. Folger (1988) Fuzzy Sets, Uncertainty and Information. Prentice Hall, Englewood Cliffs, NJ.
Klir, G. J. (1999) Uncertainty and information measure for imprecise probabilities: an overview. In: 1st International Symposium on Imprecise Probabilities and Their Applications, Ghent, Belgium.
Knutti, R., T. F. Stocker, F. Joos and G. K. Plattner (2002) Constraints on radiative forcing and future climate change from observations and climate model ensembles. Nature 416, 719-723.


Kojadinovic, I. (2004) Estimation of the weights of interacting criteria from the set of profiles by means of information-theoretic functionals. European Journal of Operational Research 155(3), 741-751.
Kriegler, E. and H. Held (2003) Global mean temperature projections for the 21st century using random sets. ISIPTA03: Proc. 3rd Int. Symp. on Imprecise Probabilities and their Applications, Carlton Scientific, 345-359.
Labreuche, C. and M. Grabisch (2003) The Choquet integral for the aggregation of interval scales in multicriteria decision making. Fuzzy Sets and Systems 137(1), 11-26.
Marichal, J.-L. (2000) An axiomatic approach of the discrete Choquet integral as a tool to aggregate interacting criteria. IEEE Transactions on Fuzzy Systems 8(6), 800-807.
Marichal, J.-L. (2000) On Sugeno integral as an aggregation function. Fuzzy Sets and Systems 114(3), 347-365.
Marichal, J.-L. (2001) An axiomatic approach of the discrete Sugeno integral as a tool to aggregate interacting criteria in a qualitative framework. IEEE Transactions on Fuzzy Systems 9(1), 164-172.
Marichal, J.-L. and M. Roubens (2000) Determination of weights of interacting criteria from a reference set. European Journal of Operational Research 124, 641-650.
Modave, F. and V. Kreinovich (2002) Fuzzy measures and integrals as aggregation operators: solving the commensurability problem. http://citeseer.nj.nec.com/cachedpage/509905/1
Moss, R. H. and S. H. Schneider (2000) Uncertainties in the IPCC TAR: Recommendations to lead authors for more consistent assessment and reporting. In: Guidance Papers on the Cross Cutting Issues of the Third Assessment Report, R. Pachauri, T. Taniguchi and K. Tanaka (eds.), World Meteorological Organization, Geneva.
Nakicenovic, N. and R. Swart (eds.) (2000) Special Report on Emissions Scenarios. Cambridge University Press, Cambridge, United Kingdom.
Reilly, J., P. H. Stone, C. E. Forest, M. D. Webster, H. D. Jacoby and R. G. Prinn (2001) Uncertainty and climate change assessments. Science 293, 430-433.
Ruspini, E., P. Bonissone and W. Pedrycz (eds.) (1998) Handbook of Fuzzy Computation. IOP Publishing Ltd, Philadelphia.
Schneider, S. H. (2001) What is "dangerous" climate change? Nature 411, 17-19.
Schneider, S. H. (2002) Can we estimate the likelihood of climatic changes at 2100? Climatic Change 52, 441-451.
Shafer, G. (1976) A Mathematical Theory of Evidence. Princeton University Press, Princeton.
Stott, P. A. and J. A. Kettleborough (2002) Origins and estimates of uncertainty in predictions of twenty-first century temperature rise. Nature 416, 723-726.
Sugeno, M. (1974) Theory of fuzzy integrals and its applications. Ph.D. Thesis, Tokyo Institute of Technology, Tokyo.
Tol, R. S. J. and A. F. de Vos (1998) A Bayesian statistical analysis of the enhanced greenhouse effect. Climatic Change 38, 87-112.
Tonon, F., A. Bernardini and A. Mammino (2000) Reliability analysis of rock mass response by means of random set theory. Reliability Engineering and System Safety 70(3), 263-282.
Visser, H., R. J. M. Folkert, J. Hoekstra and J. J. DeWolff (2000) Identifying key sources of uncertainty in climate change projections. Climatic Change 45(3), 421-457.
Wang, Z. Y. and G. J. Klir (1992) Fuzzy Measure Theory. Plenum Press, New York.
Wang, Z. Y. and G. J. Klir (1997) Choquet integrals and natural extensions of lower probabilities. International Journal of Approximate Reasoning 16(2), 137-147.
Webster, M. D. and A. Sokolov (2000) A methodology for quantifying uncertainty in climate projections. Climatic Change 46, 417-446.
Webster, M. D., M. Babiker, M. Mayer, J. M. Reilly, J. Harnisch, R. Hyman, M. C. Sarofim and C. Wang (2002) Uncertainty in emissions projections for climate models. Atmospheric Environment 36, 3659-3670.


Webster, M. D., C. Forest, J. Reilly, M. Babiker, D. Kicklighter, M. Mayer, R. Prinn, M. Sarofim, A. Sokolov, P. Stone and C. Wang (2003) Uncertainty analysis of climate change and policy response. Climatic Change 61, 295-320.
Wigley, T. M. L. and S. C. B. Raper (2001) Interpretation of high projections for global-mean warming. Science 293, 451-454.
Williamson, R. C. and T. Downs (1990) Probabilistic arithmetic I: numerical methods for calculating convolutions and dependency bounds. International Journal of Approximate Reasoning 4(2), 89-158.
Yager, R. R. (1988) On ordered weighted averaging aggregation operators in multi-criteria decision making. IEEE Transactions on Systems, Man and Cybernetics 18, 183-190.
Yager, R. R. and D. P. Filev (1994) Essentials of Fuzzy Modeling and Control. Wiley, New York and Chichester.
Zadeh, L. A. (1965) Fuzzy sets. Information and Control 8, 338-353.
Zadeh, L. A. (1975) The concept of a linguistic variable and its application to approximate reasoning. Information Sciences 8, 199-249, 301-357; 9, 43-80.


Appendix I: Sensitivity and monotonicity analysis of MAGICC

Sensitivity to greenhouse gases

In MAGICC, emissions scenarios are specified as precise time-dependent trajectories for the 7 emissions gases (variables). CO2 emissions are divided into those from fossil energy sources (Fos CO2) and those from net land-use change (Def CO2). SO2 emissions are partitioned into three regional components: Europe and North America (SO21), Asia (SO22) and the rest of the world (SO23). The regional disaggregation of global SO2 emissions has negligible influence on the results generated in MAGICC, and only becomes relevant when describing regional climate changes in SCENGEN. In addition, the emissions of CH4 and N2O are also considered in MAGICC. Each of the variables has a different influence on the future climate projection. A sensitivity analysis is carried out to identify the most influential variables so that the number of variables can be reduced in the subsequent analysis. To test the sensitivity of the model to a single variable with the other 6 variables fixed, a variation interval of GMT changes is obtained by running the model with the variable's values over a test range, assumed to be 0 to 2 times its original values in the emissions scenarios. The four SRES illustrative emissions scenarios are used in this test. The variations of GMT changes for each variable are shown in Figures 36-39 and the lengths of the variations are given in Table 5.
Figure 36: The GMT change variations due to each variable based on the A1 marker scenario


Figure 37: The GMT change variations due to each variable based on the A2 marker scenario
Figure 38: The GMT change variations due to each variable based on the B1 marker scenario
Figure 39: The GMT change variations due to each variable based on the B2 marker scenario


Table 5: The variations of GMT change in 2100 due to the variations of a single gas

Scenario   Fos CO2   Def CO2   CH4      N2O      SO21     SO22     SO23
A1         3.0701    0.0345    0.0965   0.015    0.5625   0.2721   0.0577
A2         3.3493    0.0212    0.7456   0.1875   0.3101   0.0446   0.1645
B1         2.107     0.0334    0.1727   0.0572   0.5706   0.2759   0.0944
B2         2.546     0.0101    0.3667   0.0324   0.4848   0.001    0.096
Average    2.7681    0.0248    0.3454   0.0730   0.4820   0.1484   0.1032

According to the average variations in Table 5, the ranking of the gases in decreasing order of influence on GMT change is Fossil CO2, SO21, CH4, SO22, SO23, N2O and Deforestation CO2. Tests of the monotonicity of the model to a single variable, with all the other variables fixed, showed that the model is monotonic with respect to each of the following five gases: Fossil CO2, CH4, SO21, SO22 and SO23. Therefore, to reduce the computational complexity of estimating upper and lower probabilities, the following analysis considers only a subset of these five gases.
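The one-at-a-time procedure behind Table 5 can be sketched as follows, with a hypothetical `toy_model` standing in for MAGICC; the coefficients and the resulting ranking are illustrative only, not the values in the table.

```python
# One-at-a-time sensitivity sketch: scale one emissions variable over
# 0 to 2 times its scenario value, hold the others fixed, and record
# the spread of the model output. `toy_model` is a hypothetical
# stand-in for MAGICC with made-up coefficients.

def toy_model(e):
    # warming rises with CO2 and CH4 and falls with sulphate aerosols
    return 1.0 + 0.8 * e["FosCO2"] + 0.1 * e["CH4"] - 0.3 * e["SO2"]

def oat_ranges(base, model, factors=(0.0, 0.5, 1.0, 1.5, 2.0)):
    ranges = {}
    for var in base:
        outputs = []
        for f in factors:
            e = dict(base)
            e[var] = f * base[var]          # vary one variable only
            outputs.append(model(e))
        ranges[var] = max(outputs) - min(outputs)
    return ranges

r = oat_ranges({"FosCO2": 1.0, "CH4": 1.0, "SO2": 1.0}, toy_model)
ranking = sorted(r, key=r.get, reverse=True)  # most influential first
```

The length of each variation interval plays the same role as the entries of Table 5: variables with small ranges can be frozen in the subsequent uncertainty analysis.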

Sensitivity to model parameters

MAGICC has four model parameters that may be changed by the user: (1) a carbon cycle model parameter; (2) two aerosol forcing parameters; (3) climate sensitivity. In the carbon cycle model, the Dn80s parameter is the 1980s-mean value of net land-use change CO2 emissions and gives an indication of the assumed CO2 fertilisation effect. Its uncertainty range given by the IPCC is 0.4 to 1.8 Gt C yr-1, with a central estimate of 1.1 Gt C yr-1. In MAGICC, three components of aerosol forcing are used: direct, indirect and biospheric. The values of the direct and indirect aerosol forcing can be changed by the user to reflect different estimates of the strength of the aerosol forcing. The 1990 direct and indirect forcing values (S90DIR and S90IND) used in the IPCC SAR are -0.3 Wm-2 and -0.8 Wm-2 respectively. The climate sensitivity (DT2X) defines the equilibrium response of the global-mean surface air temperature to a doubling of the atmospheric CO2 concentration. The IPCC has used the range 1.5ºC to 4.5ºC, with a mid-range estimate of 2.5ºC; in the IPCC TAR, the range was re-evaluated as 1.7ºC to 4.2ºC.

In the following, we test the sensitivity of climate projections to these model parameters. The value of GMT change is obtained by running the model with a given parameter value, and a range of GMT change is thereby derived from parameter values across the test range. The chosen test ranges for DT2X, Dn80s, S90DIR and S90IND are obtained by plus and minus 100% (90%) of their default values, i.e., [0.26, 5.2], [0.11, 2.2], [-0.8, 0] and [-1.6, 0] respectively. These parameters were varied one at a time and in combination. Using the four SRES illustrative scenarios A1B-AIM, A2-ASF, B1-IMA and B2-MES to run the model, we obtain the corresponding GMT changes shown in Figures 40-43 and the variations shown in Table 6.


Figure 40: The variations of GMT change in 2100 (scenario: A1B-AIM)
Figure 41: The variations of GMT change in 2100 (scenario: A2-ASF)
Figure 42: The variations of GMT change in 2100 (scenario: B1-IMA)


Figure 43: The variations of GMT change in 2100 (scenario: B2-MES)

Table 6: The variations of GMT change in 2100 due to the variations of a single parameter

Scenario   DT2X     DN80S    S90DIR   S90IND
A1B-AIM    4.0453   0.3813   0.2218   0.3247
A2-ASF     4.8578   0.3546   0.0334   0.0755
B1-IMA     2.8633   0.5119   0.2298   0.3386
B2-MES     3.6235   0.5018   0.0959   0.1001
Average    3.8475   0.4374   0.1452   0.2097

From these results we can see that climate sensitivity is the dominant parameter amongst the four model parameters; this is consistent with previous studies (for example, IPCC, 1996; Visser et al., 2000; Wigley and Raper, 2001; Houghton et al., 2001). In the following, we deal only with the uncertainty from climate sensitivity.

Monotonicity analysis

The next step is to test the monotonicity of the climate model with respect to the climate sensitivity parameter. A wide range, (0, 10], is used for the test, excluding the point zero. The GMT changes in years 2050 and 2100 under the four SRES emissions scenarios are shown in Figure 44. The results show that the climate model is monotonically increasing in the climate sensitivity parameter, regardless of the other three parameters.
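A monotonicity test of this kind reduces to sweeping the parameter over a grid and checking that the output never decreases; `toy_response` below is a hypothetical monotone stand-in for the MAGICC GMT output, not the real model.

```python
# Monotonicity check sketch: sweep climate sensitivity over (0, 10]
# on a grid and confirm that the response never decreases.
# `toy_response` is a hypothetical stand-in for the MAGICC GMT output.
import math

def toy_response(dt2x):
    return 4.0 * (1.0 - math.exp(-0.4 * dt2x))  # saturating warming curve

grid = [0.5 * k for k in range(1, 21)]          # 0.5, 1.0, ..., 10.0
outputs = [toy_response(s) for s in grid]
is_monotone = all(a <= b for a, b in zip(outputs, outputs[1:]))
```

Establishing monotonicity is what allows the propagation through MAGICC to evaluate only the endpoints of each focal interval rather than its interior.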


Figure 44: The GMT changes with the increase of climate sensitivity


Appendix II: Aggregation operators on fuzzy sets

Aggregation operations on fuzzy sets are operations by which several fuzzy sets are combined in a desirable way to produce a single fuzzy set. An aggregation operator on n fuzzy sets is defined by a function h : [0,1]^n → [0,1]. When applied to fuzzy sets A1, A2, ..., An defined on X, the function h produces an aggregate fuzzy set A by operating on the membership degrees of these sets for each x ∈ X, i.e.,

A(x) = h(A1(x), A2(x), ..., An(x))

In order to qualify as an intuitively meaningful aggregation function, h must satisfy at least the following three axiomatic requirements (Ruspini et al., 1998):
(1) Monotonicity: for any pair of n-tuples (a1, ..., an) and (b1, ..., bn) with ai, bi ∈ [0,1], if ai ≤ bi for all i, then h(a1, ..., an) ≤ h(b1, ..., bn).
(2) Boundary conditions: h(0, ..., 0) = 0 and h(1, ..., 1) = 1.
(3) Continuity: h is a continuous function.

A wide variety of aggregation operators satisfying these properties has been proposed. Here we provide a brief review, including operators developed to aggregate preference profiles in the fields of multi-agent fusion, multicriteria decision making and decision under uncertainty (Yager and Filev, 1994; Klir and Folger, 1988; Dubois and Prade, 2004).

The intersection and union operations, exemplified by the widely used min and max operations, can be viewed in a more general setting as aggregations of fuzzy subsets. Though they are defined for only two arguments, their associativity provides a mechanism for extending their definitions to any number of arguments. T-norms and T-conorms generalise the intersection and union operations, respectively: T-norms comprise the minimum and all those functions bounded from above by the minimum operation, and T-conorms comprise the maximum and all those functions bounded from below by the maximum operation. Klir and Folger (1988) give a listing of T-norms and T-conorms.

In many cases, the type of aggregation required is neither the pure conjunction of a T-norm, with its complete lack of compensation, nor the pure disjunction of a T-conorm, with its full compensation by any single well-satisfied argument; what is wanted is an operator lying somewhere between these two extremes. Aggregation operations covering the entire interval between the min and max operations are usually called averaging operations, such as the mean aggregation operators.

Another class of aggregation operations associates a weight with each aggregated fuzzy subset. The operators above, which take no account of weights, in effect treat every aggregated fuzzy subset as equally important.
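The operator families just reviewed are simple to state in code. The Python sketch below is ours and purely illustrative: it gives one T-norm, one T-conorm and one averaging operator, applied to the membership degrees of three fuzzy subsets at a single point x.

```python
def t_norm_min(degrees):
    """Standard intersection: the pure conjunction."""
    return min(degrees)

def t_norm_product(degrees):
    """The product T-norm; like every T-norm it is bounded above by min."""
    p = 1.0
    for a in degrees:
        p *= a
    return p

def t_conorm_max(degrees):
    """Standard union: the pure disjunction; T-conorms are bounded below by max."""
    return max(degrees)

def arithmetic_mean(degrees):
    """An averaging operator: always lies between min and max."""
    return sum(degrees) / len(degrees)

# Membership degrees of three fuzzy subsets at one point x.
degrees = [0.2, 0.7, 0.5]
```

The defining inequalities (T-norm ≤ min ≤ averaging ≤ max ≤ T-conorm) can be checked directly on any such list of degrees.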
The intersection and union operations can be extended to weighted intersections and unions by transforming the original fuzzy subsets and their associated weights into modified fuzzy subsets. The weighted minimum and maximum operations proposed by Dubois and Prade (1986) in the framework of possibility theory are a generalisation of min and max. They are defined as follows:

w min_{w_1,…,w_n}(a_1, …, a_n) = ∧_{i=1}^{n} ((1 − w_i) ∨ a_i)

w max_{w_1,…,w_n}(a_1, …, a_n) = ∨_{i=1}^{n} (w_i ∧ a_i)

where the weights are normalised so that ∨_{i=1}^{n} w_i = 1, and ∧ and ∨ denote min and max respectively.
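The weighted minimum and maximum are one-liners in code. The Python sketch below is ours, with illustrative weights and degrees; it assumes the weights are already normalised so that the largest weight is 1, as the definition requires.

```python
def weighted_min(weights, degrees):
    """Dubois-Prade weighted minimum: min over i of max(1 - w_i, a_i)."""
    return min(max(1.0 - w, a) for w, a in zip(weights, degrees))

def weighted_max(weights, degrees):
    """Dubois-Prade weighted maximum: max over i of min(w_i, a_i)."""
    return max(min(w, a) for w, a in zip(weights, degrees))

# Illustrative inputs; weights normalised so that the largest weight is 1.
weights = [1.0, 0.5]
degrees = [0.3, 0.8]
wmin = weighted_min(weights, degrees)
wmax = weighted_max(weights, degrees)
```

With all weights equal to 1 the operators reduce to the ordinary min and max, which is the sense in which they generalise them.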

Similarly, the mean aggregation operators have weighted counterparts. One of the most widely used, especially in multicriteria decision making (the simple additive weighting method), is the weighted arithmetic mean, which has the form

h(a_1, …, a_n) = ∑_{i=1}^{n} w_i a_i

where the weight vector w = (w_1, …, w_n) satisfies ∑_{i=1}^{n} w_i = 1 and w_i ≥ 0 for all i ∈ {1, …, n}.

Yager (1988) proposed the ordered weighted averaging (OWA) operators by adding a reordering step to the weighted arithmetic mean. The OWA aggregation function is

h(a_1, …, a_n) = ∑_{j=1}^{n} w_j b_j

where b_j is the jth largest element of the collection of aggregated objects a_1, …, a_n. The reordering step gives the OWA operator the property that a weight w_i is associated not with a particular argument a_i but with a particular ordered position i of the arguments.

The weighted aggregations above, however, can be used only where the aggregated fuzzy subsets are independent, because these operators cannot model interaction among the aggregated subsets. To handle such interaction, a class of fuzzy aggregation operators based on the concept of a fuzzy measure and the related idea of a fuzzy integral (Sugeno, 1974) is now introduced.
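The weighted arithmetic mean and the OWA operator can be sketched side by side. The Python code below is ours, with illustrative values; it shows how extreme OWA weight vectors recover the max and min, while equal weights recover the plain mean.

```python
def weighted_mean(weights, values):
    """Weighted arithmetic mean (simple additive weighting); weights sum to 1."""
    return sum(w * a for w, a in zip(weights, values))

def owa(weights, values):
    """Ordered weighted averaging: weight w_j is attached to the j-th largest value."""
    ordered = sorted(values, reverse=True)
    return sum(w * b for w, b in zip(weights, ordered))

# Illustrative membership degrees to aggregate.
vals = [0.3, 0.9, 0.6]
# w = (1, 0, 0) yields the maximum, w = (0, 0, 1) the minimum,
# and equal weights yield the arithmetic mean.
```

The contrast with the weighted mean is visible in the reordering: weighted_mean([0.5, 0.3, 0.2], vals) ties each weight to a particular argument, whereas owa ties each weight to a rank.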

Definition. Assume N is a finite set of elements. A set function v: 2^N → [0,1], mapping the crisp subsets of N into the unit interval, is a fuzzy measure on N if it has the properties:
(1) v(∅) = 0;
(2) v(N) = 1;
(3) v(A) ≤ v(B) whenever A ⊆ B ⊆ N (monotonicity).
We denote by F_N the set of all fuzzy measures on N.

Definition. The discrete Sugeno integral of a function x: N → [0,1] with respect to v ∈ F_N is defined by

S_v(x) = ∨_{i=1}^{n} [x_i ∧ v(A_i)]

where x_i (i = 1, …, n) denotes the values of x reordered so that x_1 ≤ … ≤ x_n, and A_i = {x_i, …, x_n}.

The Sugeno integral can also be written in the following form, which does not need the reordering of the variables:

S_v(x) = ∨_{T⊆N} [(∧_{i∈T} x_i) ∧ v(T)]

Definition. The discrete Choquet integral of a function x: N → [0,1] with respect to v ∈ F_N is defined by

C_v(x) = ∑_{i=1}^{n} x_i [v(A_i) − v(A_{i+1})]

where x_i (i = 1, …, n) denotes the values of x reordered so that x_1 ≤ … ≤ x_n, A_i = {x_i, …, x_n} and A_{n+1} = ∅. The Choquet integral can also be written as

C_v(x) = ∑_{i=1}^{n} (x_i − x_{i−1}) v(A_i)

with the same notation as above and x_0 = 0.

In the Choquet integral, the interaction among criteria is modelled by the fuzzy measure v. When the criteria are exclusive, so that v is additive, the Choquet integral collapses into the weighted arithmetic mean. It is worth introducing one further form of the Choquet integral, which extends the weighted arithmetic mean in an intuitive way. Any fuzzy measure v ∈ F_N can be expressed in a unique way as
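Both expressions of the Choquet integral can likewise be checked numerically. The Python sketch below is ours; the small measure on {a, b} and the values of x are illustrative numbers chosen only for the check.

```python
def choquet_increments(x, v):
    """First form: C_v(x) = sum_i x_(i) [v(A_i) - v(A_(i+1))], values ascending,
    with A_i the elements carrying the i-th smallest value onwards."""
    items = sorted(x.items(), key=lambda kv: kv[1])
    total = 0.0
    for i in range(len(items)):
        a_i = frozenset(e for e, _ in items[i:])
        a_next = frozenset(e for e, _ in items[i + 1:])
        total += items[i][1] * (v[a_i] - v[a_next])
    return total

def choquet_differences(x, v):
    """Second form: C_v(x) = sum_i (x_(i) - x_(i-1)) v(A_i), with x_(0) = 0."""
    items = sorted(x.items(), key=lambda kv: kv[1])
    total, prev = 0.0, 0.0
    for i in range(len(items)):
        a_i = frozenset(e for e, _ in items[i:])
        total += (items[i][1] - prev) * v[a_i]
        prev = items[i][1]
    return total

# Illustrative measure on N = {a, b} and function values (ours, for the check only).
x = {'a': 0.4, 'b': 0.8}
v = {frozenset(): 0.0, frozenset(['a']): 0.3,
     frozenset(['b']): 0.6, frozenset(['a', 'b']): 1.0}
```

The second form avoids the lookup of A_{n+1} = ∅ and is the more convenient of the two to implement.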

v(S) = ∑_{T⊆S} u(T),  S ⊆ N

where u is a set function on the power set P(N), if and only if

u(S) = ∑_{T⊆S} (−1)^{|S\T|} v(T),  S ⊆ N

where |A| denotes the cardinality of the set A. The set function u is referred to as the Möbius inversion of v (see Shafer, 1976). In terms of the Möbius representation, the Choquet integral can be written as (Chateauneuf and Jaffray, 1989)

C_v(x) = ∑_{T⊆N} u(T) (∧_{i∈T} x_i)

where ∧ is the minimum operation.

Although fuzzy measures and fuzzy integrals were originally intended as a new uncertainty measure, they have been used from an early stage for subjective multicriteria evaluation, mainly in Japan. Grabisch (1995, 1996) provided a synthesis of the application of fuzzy integrals as an innovative tool for criteria aggregation in decision problems; the main point is that fuzzy integrals are able to model interaction between criteria in a flexible way. The Sugeno integral has been justified by an axiomatic approach as an aggregation function for multicriteria decision-making problems, and can be written in the form of a weighted max-min function (Marichal, 2000b and 2001). Dubois et al. (2001) review the use of the discrete Sugeno integral as either an aggregation tool or a preference functional, both in multicriteria decision making and in decision making under uncertainty. The discrete Choquet integral has been axiomatised for application to multicriteria decision-making problems as an adequate aggregation operator that extends the weighted arithmetic mean by taking into consideration the interaction among criteria (Marichal, 2000a). The Choquet integral is the unique possible aggregation operator under a set of a priori assumptions together with the conditions induced by information concerning the preferences of the decision maker over each attribute and over the aggregation of (interacting) criteria (Labreuche and Grabisch, 2003).

Because the Choquet integral is used for aggregation of the fuzzy sets in this study, we now briefly introduce some of its properties (for more details see Grabisch, 1995 and 1996; Marichal, 2000a).

Theorem. The Choquet integral is monotonically non-decreasing, i.e., for all x_i, x_i' ∈ [0,1] such that x_i ≤ x_i' for all i, we have C_v(x_1, …, x_n) ≤ C_v(x_1', …, x_n').

Theorem. The Choquet integral is continuous, i.e., for every convergent sequence (x^n) we have lim_{n→∞} C_v(x^n) = C_v(lim_{n→∞} x^n).

Theorem. The Choquet integral is idempotent, i.e., for every x ∈ [0,1], we have C_v(x, …, x) = x.

Theorem. The Choquet integral is stable under positive linear transformations, i.e., for every x_i ∈ [0,1], r > 0 and s ∈ [0,1], we have C_v(rx_1 + s, …, rx_n + s) = r C_v(x_1, …, x_n) + s.

Theorem. A Choquet integral with respect to an ordinary (i.e., additive) measure v coincides with the weighted arithmetic mean whose weights are w_i = v({x_i}).

Theorem. Any OWA operator with weights w = (w_1, …, w_n) is a Choquet integral whose fuzzy measure v is defined by

v(A_i) = ∑_{j=0}^{i−1} w_{n−j}

where A_i is any subset of N with |A_i| = i, and |A_i| denotes the cardinality of A_i. Reciprocally, any commutative Choquet integral is an OWA operator whose weights are

w_i = v(A_{n−i+1}) − v(A_{n−i}),  i = 2, …, n,  and  w_1 = 1 − ∑_{i=2}^{n} w_i

Theorem. The Choquet integral contains all order statistics, in particular min, max and the median, i.e.,

∧_{i=1}^{n} x_i ≤ C_v(x_1, …, x_n) ≤ ∨_{i=1}^{n} x_i

Definition. The Choquet integral is linear with respect to the fuzzy measure v: there exist 2^n functions f_T: [0,1]^n → [0,1] (T ⊆ N) such that

C_v(x) = ∑_{T⊆N} v(T) f_T(x),  for all v ∈ F_N

Definition. The Choquet integral is properly weighted by the fuzzy measure v: C_v(e(S)) = v(S) for all S ⊆ N, where e(S) is the characteristic vector of S, i.e., the vector in {0,1}^n whose ith component is 1 if i ∈ S.
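Several of these properties lend themselves to numerical spot-checks. The Python sketch below is ours: using an illustrative additive measure (numbers chosen only for the check), it verifies that the Choquet integral reduces to the weighted arithmetic mean, agrees with its Möbius representation, and respects the order-statistic bounds.

```python
from itertools import combinations

def subsets(elements):
    """All subsets of a tuple of elements, as frozensets."""
    for r in range(len(elements) + 1):
        for T in combinations(elements, r):
            yield frozenset(T)

def choquet(x, v):
    """Choquet integral, differences form: sum_i (x_(i) - x_(i-1)) v(A_i)."""
    items = sorted(x.items(), key=lambda kv: kv[1])
    total, prev = 0.0, 0.0
    for i in range(len(items)):
        a_i = frozenset(e for e, _ in items[i:])
        total += (items[i][1] - prev) * v[a_i]
        prev = items[i][1]
    return total

def mobius(v, elements):
    """Moebius inversion: u(S) = sum over T subset of S of (-1)^|S\\T| v(T)."""
    return {S: sum((-1) ** len(S - T) * v[T] for T in subsets(elements) if T <= S)
            for S in subsets(elements)}

def choquet_mobius(x, u):
    """Chateauneuf-Jaffray form: sum over nonempty T of u(T) * min_{i in T} x_i."""
    return sum(uT * min(x[e] for e in T) for T, uT in u.items() if T)

# Illustrative additive measure built from weights (ours, for the check only).
weights = {'a': 0.5, 'b': 0.3, 'c': 0.2}
elements = tuple(weights)
v = {T: sum(weights[e] for e in T) for T in subsets(elements)}

x = {'a': 0.3, 'b': 0.9, 'c': 0.6}
result = choquet(x, v)
wam = sum(weights[e] * x[e] for e in elements)   # weighted arithmetic mean
mob = choquet_mobius(x, mobius(v, elements))
```

For an additive measure the Möbius inversion concentrates on singletons, which is exactly why the Möbius form collapses to the weighted arithmetic mean in this case.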


In practice, super-additivity and sub-additivity can both be applied to a fuzzy measure on N, as shown in the example below. Some specific classes of measure can also be used, such as belief and plausibility measures. A fuzzy measure v is a belief measure if it is completely monotone, i.e., if it satisfies

v(∪_{i=1}^{n} E_i) ≥ ∑_{I⊆{1,…,n}, I≠∅} (−1)^{|I|+1} v(∩_{i∈I} E_i)

for n ≥ 2, where |I| is the cardinality of the set I and {E_1, …, E_n} is any finite subclass of the power set of N (Wang and Klir, 1997).

We now use an example given in Grabisch (1995, 1996) to illustrate how the Choquet integral is computed. Consider the evaluation of students in a high school with respect to three subjects: mathematics (M), physics (P) and literature (L). Usually this is done by a simple additive weighting method, whose weights are the coefficients of importance of the different subjects. Suppose that the school is more scientifically than literarily oriented, so that the weights could be, for example, 3, 3 and 2 respectively. The simple additive weighting (SAW) method then gives the following results for three students A, B and C (marks are given on a scale from 0 to 20):

Table 7: Three students' marks and their aggregation results

Student   Mathematics   Physics   Literature   SAW     Choquet integral
A         18            16        10           15.25   13.9
B         10            12        18           12.75   13.6
C         14            15        15           14.62   14.9

If the school wants to favour students with a broad education, the above SAW ranking is not fully satisfactory: student A has a severe weakness in literature but is ranked above student C, who has no weak point. The reason is that too much importance is given to mathematics and physics, which are in a sense redundant since, usually, students good at mathematics are also good at physics (and vice versa), so that the evaluation is overestimated (resp. underestimated) for students good (resp. bad) at the scientific subjects. To solve this problem, we use a suitable fuzzy measure v to express importance over the three subjects, i.e., over the element set N = {M, P, L}:
(i) Since scientific subjects are more important than literature, we put the following weights on subjects taken individually: v({M}) = v({P}) = 0.45 and v({L}) = 0.3. Note that the initial ratio of the weights (3, 3, 2) is kept unchanged.
(ii) Since mathematics and physics require similar skills, the weight attributed to the set {P, M} should be less than the sum of the individual weights, i.e., sub-additive: v({P, M}) = 0.5 < 0.45 + 0.45.
(iii) Since we must favour students equally good at scientific subjects and at literature (which is rather uncommon), the weight attributed to the set {L, M} or {L, P} should be greater than the sum of the individual weights, i.e., super-additive: v({L, M}) = 0.9 > 0.45 + 0.3 and v({L, P}) = 0.9 > 0.45 + 0.3.
(iv) v(N) = 1 by definition.


According to the above fuzzy measure, we apply the Choquet integral to evaluate the students. For student A, x(L) < x(P) < x(M), so we have

Cv(A) = x(L)[v(N) − v({P, M})] + x(P)[v({P, M}) − v({M})] + x(M) v({M})
      = 10 × [1 − 0.5] + 16 × [0.5 − 0.45] + 18 × 0.45 = 13.9

Similarly, we can compute the results for students B and C, shown in Table 7.

The example shows how easy it is to translate the requirements of the decision maker into coefficients of the fuzzy measure: super-additivity of the fuzzy measure implies synergy between criteria, and sub-additivity implies redundancy. The main problem in using the Choquet integral as an aggregation operator in a multicriteria decision problem is identifying the coefficients of a fuzzy measure on N. Unlike for weighted arithmetic means, it is rather unrealistic to assume that the 2^n − 2 free coefficients of a fuzzy measure can be provided by the decision maker, especially if the number of criteria is large. Many approaches have been proposed to cope with this exponentially increasing complexity. Examples include the method of Marichal and Roubens (2000), who use knowledge in the form of a partial ranking over a reference set of alternatives, a partial ranking over the set of criteria, and a partial ranking over the set of interactions between pairs of criteria; and the unsupervised identification method of Kojadinovic (2004), which estimates the weights of interacting criteria from a set of profiles by means of information-theoretic functionals. Finally, we note that decision under uncertainty and multicriteria decision making are strikingly similar (Dubois et al., 2000), and are formally equivalent when the set of states of the world is finite and the sets of attitudes are order-separable (Modave and Kreinovich, 2002).
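The worked example translates directly into code. The Python sketch below is ours; it takes the fuzzy measure defined in steps (i) to (iv) and the marks of Table 7 (both from the source) and reproduces the Choquet-integral column. The marks are used on their 0 to 20 scale, which is legitimate because the Choquet integral is stable under positive linear transformations.

```python
def choquet(scores, v):
    """Choquet integral of criterion scores with respect to fuzzy measure v,
    using the differences form with scores sorted in ascending order."""
    items = sorted(scores.items(), key=lambda kv: kv[1])
    total, prev = 0.0, 0.0
    for i in range(len(items)):
        a_i = frozenset(c for c, _ in items[i:])
        total += (items[i][1] - prev) * v[a_i]
        prev = items[i][1]
    return total

# The fuzzy measure of the example: {M, P} sub-additive (redundant subjects),
# {L, M} and {L, P} super-additive (synergy between science and literature).
v = {frozenset('M'): 0.45, frozenset('P'): 0.45, frozenset('L'): 0.3,
     frozenset('MP'): 0.5, frozenset('ML'): 0.9, frozenset('PL'): 0.9,
     frozenset('MPL'): 1.0}

# The marks of Table 7.
students = {'A': {'M': 18, 'P': 16, 'L': 10},
            'B': {'M': 10, 'P': 12, 'L': 18},
            'C': {'M': 14, 'P': 15, 'L': 15}}
results = {s: choquet(marks, v) for s, marks in students.items()}
# Student C, with no weak subject, now ranks above student A.
```

Note that frozenset('MP') builds the set {'M', 'P'} from the characters of the string, a compact way to key the measure by subsets of criteria.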


The trans-disciplinary Tyndall Centre for Climate Change Research undertakes integrated research into the long-term consequences of climate change for society and into the development of sustainable responses that governments, business-leaders and decision-makers can evaluate and implement. Achieving these objectives brings together UK climate scientists, social scientists, engineers and economists in a unique collaborative research effort. Research at the Tyndall Centre is organised into four research themes that collectively contribute to all aspects of the climate change issue: Integrating Frameworks; Decarbonising Modern Societies; Adapting to Climate Change; and Sustaining the Coastal Zone. All thematic fields address a clear problem posed to society by climate change, and will generate results to guide the strategic development of climate change mitigation and adaptation policies at local, national and global scales. The Tyndall Centre is named after the 19th century UK scientist John Tyndall, who was the first to prove the Earth’s natural greenhouse effect and suggested that slight changes in atmospheric composition could bring about climate variations. In addition, he was committed to improving the quality of science education and knowledge. 
The Tyndall Centre is a partnership of the following institutions:
University of East Anglia
UMIST
Southampton Oceanography Centre
University of Southampton
University of Cambridge
Centre for Ecology and Hydrology
SPRU – Science and Technology Policy Research (University of Sussex)
Institute for Transport Studies (University of Leeds)
Complex Systems Management Centre (Cranfield University)
Energy Research Unit (CLRC Rutherford Appleton Laboratory)

The Centre is core funded by the following organisations:
Natural Environmental Research Council (NERC)
Economic and Social Research Council (ESRC)
Engineering and Physical Sciences Research Council (EPSRC)
UK Government Department of Trade and Industry (DTI)

For more information, visit the Tyndall Centre Web site (www.tyndall.ac.uk) or contact:
External Communications Manager
Tyndall Centre for Climate Change Research
University of East Anglia, Norwich NR4 7TJ, UK
Phone: +44 (0) 1603 59 3906; Fax: +44 (0) 1603 59 3901
Email: tyndall@uea.ac.uk
