
The Use of Integrated Assessment: An Institutional Analysis Perspective

Simon Shackley and Clair Gough
Tyndall Centre for Climate Change Research, Manchester School of Management, UMIST, PO Box 88, Manchester M60 1QD, UK

Tyndall Centre Working Paper No. 14 April 2002

Presented at the Conference on 'Futures Thinking and Sustainability' organised by the French Institute for Forestry, Agriculture and Environmental Engineering (ENGREF), October 2001

Abstract
In this paper we examine Integrated Assessment and Modelling (IA/M) from an institutional and policy perspective. The application and use of IA/Ms is our main topic of interest, and we draw upon existing experiences of using models in policy making across a range of issue domains. Since most of the examples come from the UK, however, the analysis may not be as pertinent to other national political cultures. Our focus is more upon the use of formal computer-based modelling tools than upon the qualitative dimensions of assessment: this seems to us to reflect the main impetus and originality behind current Integrated Assessment efforts.

A Mismatch between the Supply of and Demand for IA and IAMs?
Integrated Assessment is currently seen as the method of choice for bringing large-scale scientific analysis into policy frameworks. Typically, this has come to involve the use or construction of Integrated Assessment Models (IAMs) as a major component of the assessment activity (although we would argue that numerical modelling is not necessarily essential to the IA process). Here we consider the institutional context for this type of model application and its role in environmental governance. The conventional view of the role of formal models in decision-making is depicted in Figure One. In this perspective, models may have influence at two distinct stages: firstly, in identifying a problem for society; secondly, in assessing response options to an established problem. In these two roles, models serve as tools to help scan for and identify potential issues, and then to provide a way of more rigorously assessing solutions. It is assumed that, for complex issues, computer models will be indispensable additions to the decision-makers' tool-kit. In the conventional view, there is a clear separation between the outputs of models ('facts') and the political process of identifying values and objectives, and trading them off, which is required in the assessment of those facts. In the view represented in Figure One it is also assumed that the decision-maker (the user) is fully familiar with the benefits (and limitations) of utilising models.

In reality, Integrated Assessment & Modelling (IA/M) appears to demonstrate a stronger supply-side than demand-side character. This is perhaps to be expected, because its origins lie in an innovative research activity. However, if assessment is intended to produce 'usable knowledge' (Lindblom & Cohen 1979) for a non-research purpose, then how can the suppliers of IA/M know whether what they provide is usable or not without a reasonably strong indication from its potential users? In other words, unlike conventional science, IA/Ms perhaps need to be routinely subject to some 'extended peer review' from decision-makers and stakeholders to guide their utility (Funtowicz & Ravetz, 1993). How else can we ensure that IAMs are constructed as a result of an explicit or perceived demand for such an approach by their users (decision-makers, policy makers, stakeholders, etc.), and not because the approach is currently 'in vogue' amongst sections of the research community?

Figure One: Schematic of the Conventional View of Models in Environmental Policy Making. [The figure shows a flow: a scientific problem is identified; models and analytical tools are used to describe the problem; policy makers are informed of the problem; policy makers either do not accept the problem, or accept it as 'real' and express societal preferences; models and analytical tools are then used to assess solutions to the problem; an 'optimising' solution is provided to policy makers; policy makers accept the solution and implement it; and scientists monitor the effects of policy implementation.]

This problem of the need for extended review is similar to that faced by many commercial innovators, who appear to overcome it by obtaining early indications from the market place, together with shaping the potential users' 'demands'. As the literature on the social shaping of technology has indicated, an important feature of successful innovation is to devise a technological option which is sufficiently flexible that it can satisfy the divergent needs and interests of the range of actors who are important in the adoption or sanctioning of a new technology (Bijker et al. 1992; Bijker & Law 1997). This indicates that the use of a novel decision-analysis tool will depend not just upon whether the model is 'good' or 'bad' for a given purpose, but also upon how well the tool accords with or accommodates the frequently divergent set of interests represented in the decision-making context.

Institutional Requirements for Accountability in Decision-Making
Yaron Ezrahi (1990) explores eloquently the argument that in modern liberal democracies, technical instrumental criteria have become the key means by which the accountability of public policy is sought. Only such technical instrumentalism has, he argues, been capable of acting as a 'lowest common denominator' in holding public officials and public policy to account. A classic case is the use of formal cost-benefit analysis in the appraisal of different options. Porter (1995) has developed Ezrahi's general argument by exploring trust in numerical knowledge in a range of institutional contexts. When organisations are trusted, he argues, there is little need for them to resort to numbers to justify decisions. When trust is low, however, as in instances of political contestation, numbers are required to try to justify the policy being advocated. This is tantamount to seeing numbers as the 'lowest common denominator' by which to represent reality and with which to try to organise consensus and agreement around a particular analysis and policy trajectory. From such historical analyses arises a quite different view of quantitative decision-making tools: as accountability devices which assist in the production of political consensus around policy. Accountability for political purposes is, from the perspectives of Ezrahi and Porter, far from being a purely rational or scientific process. In addition to the analytical issue of whether the model is the 'right tool for the job', the question emerges of whether the model is seen to be an appropriate accountability device.

Robert Merton's (1957) analytical distinction between 'manifest' and 'latent' functions is useful for discriminating the two roles identified above. A 'manifest' function is the overt rationale for an action (i.e. use of a model because it is the best validated and most effective way of addressing a particular question). A 'latent' function is the unarticulated rationale for an action, which is usually not known to the actors themselves (or at least not openly expressed and acknowledged). The latent function of a model emerges, we argue, when its use satisfies the need for accountability in decision-making because of its (apparent) objectivity, complexity, flexibility, etc.

Figure Two illustrates that in an idealised decision-making context of clear objectives, priorities and options, the manifest and latent roles of models can be assessed relatively straightforwardly. Does the model (appear to) provide robust answers to the questions arising from assessment (manifest role)? Does the model (appear to) provide consensus and agreement on the legitimacy of a course of action (latent role)? Examples of the decision-making depicted in Figure Two are perhaps more common within commerce than government, because commercial objectives are frequently more clearly articulated and limited in scope than those of public policy. Furthermore, there are reasonably good ways of assessing the success or otherwise of business activities (profit margins, dividends, rate of return on capital invested, etc.). Quantitative models in business are used in the short term in support of very specific decisions (Tayler 1998). Providers of models for business applications (mostly consultants) must adapt their tools and methods very rapidly and pragmatically to the customers' requirements. They must produce tools which can be readily assessed by the intended users. Models must manifestly demonstrate their utility in the cut-and-thrust of business – they have to provide clear answers to the questions posed. Model validation and improvement are regarded as an ill-afforded luxury. The models used must, furthermore, provide answers that enough people want to hear – they must legitimate the preferred course of decision-making. However, it is unlikely that the latent functions alone can sustain use of the models for very long: the model has to prove its worth in terms of manifest functions if it is to be regarded as 'robust' by commercial decision-makers.

Figure Two: Influences Upon the Use of Models in Clear-Cut Applications. [The figure shows a flow: clear-cut objectives and purposes; models and analytical tools used for specific purposes; accountability of actions is relatively straightforward; the models' fitness for (manifest) purpose and fitness for (latent) purpose are readily assessed; models are kept if useful and rejected if not.]

The idealised decision-making context described above is a long way from much of public policy making, which has to reconcile frequently conflicting perspectives and values and tensions between long-term and short-term goals, and which must attempt to manage multiple actors, many with their own sources of institutional legitimacy. The function and construction of models change quite strikingly within governmental contexts. As we move into areas where users, values, objectives and specific tasks are heterogeneous and less clearly defined, and where quantitative indicators are correspondingly less readily available, modelling per se tends to become much more central to the accountability of decision-making. The expectations placed on the models' outputs result in increased complexity in model formulation, in the hope that greater detail and sophistication will improve the adequacy and utility of the models. In the extreme case, developing the model becomes the undeclared end in itself, rather than the means to a policy-prescribed end, as shown in Figure Three.

Figure Three: Influences Upon the Use of Models in Muddled Applications. [The figure shows a flow: a lack of clear-cut objectives and purposes; accountability of actions is unclear and ambiguous; models and analytical tools are looked to as a source of coherency and clarity; models become more complex as they attempt to accommodate different purposes, objectives and social actors; the models' fitness for purpose is difficult to assess, and manifest and latent functions are difficult to disentangle; the model itself becomes (unofficially) the objective; models are kept if they seem to provide some hope of consensus, clarity or legitimation, and dispatched if the socio-political context cannot cope with ambiguity and legitimation is not required.]

In muddled applications, there is no clear distinction between manifest and latent functions, and this affords greater flexibility to the use of models in complex social settings. A feedback loop then occurs between model complexity and the social function of models in generating hoped-for consensus and greater clarity, and in holding a policy arena together. In these circumstances, the model itself becomes an objective of policy, not just a tool for examining which actions are most suitable for achieving independently established objectives. Certain socio-political domains will not sustain complex models, however: systemic macro-level changes, such as a breakdown in political consensus, may render the flexibility provided by complex models redundant, or new political entrants may stretch the limits of the model's flexibility. Figures Two and Three are 'extreme' cases at opposite ends of a spectrum from 'clear-cut' to 'muddled', and most real-world examples will contain elements of both.

Examples of Model Use in Public Policy Contexts
Good examples of clear and muddled decision-making come from the work of Greenberger et al. (1976), who analysed the use of models by different sections of the administration of New York City (which had employed the Rand Institute for this purpose). They found that models were taken up most readily in the hierarchically-organised Fire Department, but much less so in the more organisationally-fragmented Health Department. Greenberger et al. identify two factors of importance in explaining the difference. Firstly, a hierarchical organisation could make use of models more effectively than one in which there was disagreement over policy objectives and priorities (and in which one group's use of models was perceived as partisan). Secondly, the work of the Fire Department lent itself to quantification more readily than that of the Health Department. In this example, the Fire Department represented a more consensual and less political policy arena than the Health Department; its tasks and indicators of performance (hence forms of accountability) found a place for models as useful tools.

A further example comes from the use of macro-economic models by the UK government (probably the most complex type of model routinely used in current policy making). Smith (1998) argues that such governmental econometric models do not rate highly according to the adequacy criteria of the economic research community. He also challenges the view that model forecasts are used directly in setting financial measures such as interest rates or inflation rate targets. That, he points out, is not the prime role of models within the UK Treasury. They serve instead a variety of social and institutional roles which are critical to understanding their continued employment and success. The econometric model serves as an encyclopedia of current understanding, data and beliefs within the Treasury, and as a 'boundary object' (Star & Griesemer 1989) between different groups and individuals within that organisation. These models are part of the tried-and-tested (hence trusted) 'ways and means' by which economic conditions are assessed within the Treasury and by which the implications of different future policy interventions are assessed. The models not only help to collate wide-ranging and disparate information; they also facilitate standardisation across a range of governmental departments and allow exploratory testing and scenario analysis using a tool which is understood and reasonably well trusted.

A model used within governmental decision-making operates in a very different setting from one used by the academic community, with correspondingly different implicit roles and standards of evaluation. From an analysis of the use of modelling in different institutional contexts (Shackley 1998), it can be suggested that this disparity between academe and government becomes most significant when the policy domain becomes politicised. Where there is overt political conflict, the evaluative criteria for decision-making tools and aids move away from the 'internal' implicit and latent functions of models in policy institutions, and turn instead towards the (supposedly) more external and objective criteria attached to the manifest function (such as providing the correct numbers, or at least generating robust insights). This politicisation corresponds to the transition from Figure Three to Figure Two: clear political objectives emerge within one or more parties which were previously either not part of the socio-political consensus, or which depart from that prior consensus position. In other words, a muddled situation becomes clear-cut, not through some change in reality, but in the minds of one or more influential stakeholders. In the case of macro-economic forecasting, the policy domain has traditionally been relatively secure from explicit political challenge. Business interests are clearly engaged and mobilise opinion, but they are already closely aligned to, and heard by, financial decision-makers in government: they do not have to 'upset the apple cart' to be noticed. By contrast, policy domains such as transport have become intensely political in the last decade or so in the UK, and the past use of transport models to justify transport planning decisions has correspondingly been heavily scrutinised, found wanting because of the variables omitted from the models and the other assumptions made, and roundly criticised. Models of open systems are susceptible to endless sceptical questioning, so straightforward, generally agreed-upon assessments of adequacy are unlikely: if political conditions favour sceptical probing, the model is very unlikely to be able to settle differences of opinion.

The Implications for Integrated Assessment and Modelling
How does this apply to IA modelling? From Ezrahi's and Porter's analyses, one might have supposed that policymaking would be well disposed towards the development and uptake of IA modelling, albeit through greater internal adoption and development of the models rather than through reliance on external academic communities. The quantification promised by IAMs should be welcomed by policy makers looking for acceptable forms of justification. An interesting case is the RAINS model, an IAM used to inform policy in Europe on sulphur emission reductions. Gough et al. describe how "the entire process [of the RAINS model development] was geared towards arriving at numerical 'answers'" (1998:27). Gough et al. challenge, however, the conclusion promoted by the developers of RAINS that the quantified outputs actually created political consensus. They show how what was considered politically acceptable in terms of emission-reduction responses shaped the scientific knowledge that was input into the IA, and that such inputs, furthermore, lay outside the strict bounds of validity of that particular knowledge. The high political consensus in the sulphur policy domain resulted in acceptance of the manifest and latent functions of the RAINS model. If there had been more political opposition, then the manifest and latent functions would have been cleaved apart, and the RAINS model would (we speculate) have been found wanting on scientific grounds. Depending upon the political influence of the opponents, the latent functions might then have come under attack and the RAINS model would, according to the hypotheses set out above, have been rejected in this policy domain. Much of the future viability of IAMs will depend upon whether key stakeholders can be persuaded to remain within consensually-based coalitions.

Ezrahi is sceptical on this point, arguing that there is a contemporary breakdown in the ability of technical instrumentalism to fulfil the historical role he outlines, a phenomenon more widely commented upon (e.g. in Funtowicz & Ravetz's (1993) notion of 'post-normal science'). This breakdown seems to be reflected in the experience of transport and air pollution modelling in the UK, as politicisation forces more external evaluation of models which are indeterminate and uncertain. As the limits of the models that everyone 'in the know' had tacitly come to accept are revealed more publicly, a positive feedback further politicises the policy domain, encouraging yet further scepticism towards the devices and instruments used to sustain the 'old orthodoxy'. Hence, contemporary political and institutional conditions are not as favourable to the introduction of IAMs as they might have been several decades ago (though there are also important national differences which influence the uptake of IAMs).

In the case of climate change, General Circulation Models (GCMs), rather than IAMs, have been the principal means by which consensus on the issue has been cultivated and maintained between stakeholders at the international scale (and within many individual countries). GCMs are rather effective models in their manifest and latent roles: because of their arcane and highly technical character, they are far from easy for sceptical civil servants, industrialists or scientists to attack. Sceptics are largely dependent upon what GCM modellers themselves reveal about the shortcomings of their models. GCMs are also quite good as 'boundary objects', i.e. they allow an assortment of stakeholders from government, business, NGOs and international agencies to come together in support of their findings. This is not only because of their technical sophistication and extensive validation (partly rooted in weather forecasting), but also because GCMs only provide information about changing climatic patterns such as temperature and precipitation. They provide no indication of the impacts of climate change, of what measures should be undertaken to ameliorate it, or of the effectiveness of such measures. This provides GCMs with significant flexibility in their latent role of sustaining a policy coalition: those who prefer to downplay the impacts of climate change can co-exist with those who think the impacts will be catastrophic; those who wish to reduce carbon emissions by market instruments alone can co-exist with those who prefer strong national regulation; and so on.

Herein lie a few of the challenges facing IAMs in an application such as climate change. IAMs go much further than GCMs, in the sense that they provide information on the impacts of climate change and their costs. They also go 'full circle' by including energy modelling, so permitting the implications of carbon reduction measures to be fed through to physical climate change and its socio-economic, land-use and environmental impacts. From an intellectual standpoint, such an integrated approach is appealing, though the above analysis suggests that several significant weak points will emerge. In terms of the manifest function, the inclusion of downstream impacts, energy and land-use modelling, and so forth, serves to increase uncertainty and indeterminacy. The inclusion of socio-economic systems within IAMs means that many of the component models are based not upon physical laws, but upon the more conditional, changeable and contested understanding of socio-economic systems.
It is not possible when developing IAMs to avoid the value-laden and subjective character of at least some of the choices made during the modelling process. Achieving consensus amongst disparate stakeholders becomes much more difficult because of the underlying differences in those stakeholders' perceptions of socio-economic processes. For a reasonably wide range of stakeholders, this reduces the meaningfulness of the quantitative outputs, and hence the ability to use the model to account for specific policy decisions. None of this is to deride the intellectual challenge and ambition behind Integrated Assessment Modelling, but in terms of institutional and political accountability, one questions whether IAMs are actually required, in the climate change policy field at least, given the successful manifest and latent functions of GCMs.

It is conventionally assumed that a complex problem such as global climate change requires a complex tool for its analysis. Both GCMs and IAMs are examples of complex analytical tools, but there is an important difference between them. The outputs from GCMs are relatively easy for non-experts to comprehend, unlike the outputs from IAMs, which demand far more engagement from the stakeholder for their comprehension at a conceptual level. In the case of GCMs, maps and visual images of changing temperatures and rainfall effectively convey the key outputs to non-experts. A range of scenarios can be employed, from 'high' to 'low' levels of climate change, and a number of uncertainties can be bundled up within those simple descriptions (i.e. high to low future emissions, high to low climate sensitivity, and inter-model differences). IAMs, on the other hand, require more understanding at the input stage (e.g. scenarios, assumptions, what systems and feedbacks are included, what policy interventions are included, etc.) as well as in interpreting the outputs (which are not readily conveyed in a set of visual images). Developers of IAMs have resorted to metaphors as a solution to their communication problem, as discussed in Box One below. However, such metaphors are not immediately obvious in their intent and do not appear, as yet, to have resolved the dilemma of complexity: namely, that whilst the model itself frequently needs to be complex to fulfil its manifest and latent functions, its inputs and outputs need to be relatively simple. Box One explores the problem in more detail.

BOX ONE - The Dilemma of Complexity
One of the dilemmas for IA is that, by including more interactions, feedback loops and drivers, it is frequently perceived as making the existing analysis of problems more complex. There are few if any mechanisms currently available for cutting through the complexity depicted or revealed by IA/Ms; even the metaphors of the 'safe landing analysis' derived from the IMAGE2 model developed by RIVM in the Netherlands, and the conceptually similar 'tolerable windows approach' from PIK in Germany, do not convey a simple and clear message. One of the successes of the Limits to Growth and related Systems Dynamics modelling arose from its presentation of fairly simple and understandable storylines: exponential growth resulting in overshoot and collapse (Edwards 1996). Hence, it seems quite likely that simpler ways of presenting the key and distinctive messages of IAMs are a prerequisite to their wider use in policy. Simplicity per se is not sufficient, of course: the simple message also needs to connect with a political and institutional context and need. An example of this may be the IAM study by Schlesinger and Lempert (2000), which shows very eloquently, and in a mathematically rigorous way which can nevertheless be understood in simpler qualitative terms, that given uncertainty the precise value of an emissions reduction target now is not that significant in climate change policy: rather, the key feature of a policy response should be to create a learning system which reduces uncertainty in the climate sensitivity, the costs of impacts and the costs of mitigation. Whilst in analytical terms this piece of work is stimulating, it fails to connect with the political reality of the Kyoto Protocol negotiations, in which the setting of differential targets was clearly the key political priority.
A further example is the 'safe landings approach', which was conceptually simple and elegant but faltered on the basic problem that it foreclosed on a critical part of the discussion that policymakers wished to engage in: namely, what is an acceptable level of climate change, and what are the acceptable costs of mitigation? (I.e. we do not know where the ground for the landing of the fictional aeroplane actually is!)

It also failed to take account of national differences, which are clearly crucial to understanding the international negotiations. By contrast, the 'Contraction and Convergence' idea developed by the Global Commons Institute has been rather widely adopted (Meyer 2000). It connects well with the more explicitly political formulation of the climate change issue in the equity terms of the North-South divide, and allows national differences to be acknowledged in the short to medium term. Its lack of integration (e.g. through not including analysis of the economic costs of mitigation) may be an advantage in its acceptability to policymakers. Interestingly, the contraction and convergence concept has engendered significant political support, as well as attracting support from assessment organisations (e.g. the influential Royal Commission on Environmental Pollution in the UK (2000)), without recourse to a complex numerical model.

The often talked-about problem of scientific uncertainty may to some extent relate more to knowledge complexity. Uncertain science tends to be more complex to comprehend - it requires a larger investment of time and energy. By contrast, certain science promises to tell a straightforward story. Uncertain knowledge is confusing and irritating when different messages emerge from different, apparently equally authoritative, texts and experts. The climate change sceptics manage to 'muddy the waters' for many stakeholders (not all of whom are inclined to discredit anthropogenic climate change) by exploiting uncertainty and complexity. There seems to be a reluctance amongst many non-scientists to engage in scientific discourse - and perhaps even a certain disdain for the 'uncertainty / complexity business'. Or perhaps the disdain is for a profession whose 'bread and butter' appears to be making things more complex and less certain in a world where, as Schon (1982) pointed out, the ethos of business (and, we might add, increasingly of government) demands simplicity?

The extent to which scientific certainty and simplicity is prized by users is evidenced in the stakeholder-led studies on the regional impacts of climate change in the UK (Shackley et al. 2001; McKenzie-Hedger et al. 2000). There was a dislike and distrust of scientific uncertainty amongst intelligent stakeholders, and this extended even to the governmental programme office level. The complexity / uncertainty problem may well relate to the political and institutional need for boundary objects around which consensus can emerge, especially in the case of new policy issues which introduce potential change into existing decision-making processes. Stakeholders seemed to feel reasonably comfortable with climate change 'scenarios' provided that a range pointing in the same direction could be given. Hence, they were content with suggestions that summer temperatures would increase by 1-3°C by 2050 and that winter rainfall would increase by 10-20%. A range of values pointing in the same direction still permits a straightforward statement: 'it's going to get hotter and drier in summer, warmer and wetter in winter'. Many stakeholders were much more hostile to scenarios which indicated that the trends might go in different directions.
When we presented a rainfall scenario (obtained using a statistical downscaling method) indicating a decrease in winter rainfall (and a significant net decrease in annual rainfall), for example, the response was negative and even hostile, not because of its content, but because of its disparity with the UK government-sanctioned climate change scenarios.

A representative of a business forum during a focus group discussion even argued that scientists (i.e. we) were being irresponsible in making claims about climate change whilst not being able to state exactly what sorts of change would occur. Others in that group, who were highly sympathetic to the climate change issue becoming more widely accepted and integrated into policy making, suggested that the messages from science had to be more straightforward and delivered without dwelling on uncertainty. This commonly heard viewpoint suggests that certainty is perceived as being necessary to advance a response to climate change in policy circles. It is in principle compatible with Michael's (1996) suggestion that scientific uncertainty is used by stakeholders to delegitimise scientific knowledge (e.g. to avoid having to take action), since those sceptical of climate change could use uncertainty as a weapon against its proponents. Such use of uncertainty by sceptics forces proponents into demands for certainty.

Social studies of science and technology indicate that complexity and uncertainty are often dealt with by 'black-boxing' (Latour 1987) or by turning tacit knowledge into explicit knowledge (Nonaka & Takeuchi 1995). An effective visual presentation of model output, or an effective metaphor, is precisely such a black-boxing or externalisation of tacit knowledge. The user then does not even need to be aware of the uncertainty. Developers of IAMs have yet to produce effective devices for black-boxing. Whether it is desirable to encourage black-boxing for the sake of communicative efficiency, or better instead to bring uncertainty to the foreground to increase transparency and knowledge, is a complex debate which we do not aim to engage with here.

Challenges for Integration within Public Policymaking
So far we have examined Integrated Assessment & Modelling as a development activity drawing upon research, analysis and consultancy. A somewhat different interpretation emerges when we turn our attention to the practical application of integrated analysis in policy making, in particular with respect to: a) the rationale and character of knowledge for policy; and b) the organisational formulation and delivery of policy within government.

Bureaucratic and Fiducial Knowledge
Majone has described bureaucratic knowledge used in policy making as a form of craft knowledge, tied closely to the "production of useful objects: careful attention to the quality of the product; and a sense of responsibility both to the ends of the client and to the values of the guild" (1989:21-22). The skills of such knowledge production "are not algorithmical but argumentative: the ability to probe assumptions critically, to produce and evaluate evidence, to keep many threads in hand, to draw for an argument from many disparate sources, to communicate effectively" (ibid.). Many of these skills describe a semi-private form of integrated assessment, not subject to the prying eyes of external parties unless this is explicitly desired, and done ultimately for internal reasons amongst a trusted community of colleagues.

The space between traditional academic knowledge and bureaucratic knowledge has been variously termed 'regulatory science', 'trans-science', 'fiducial science' and 'mandated science'. Such fiducial science (our preferred term) is produced as a service for users and is policy-driven. Much of its credibility derives not from formal peer review, but rather from the authority of its authors and from the demonstration of its use (often gauged in a proxy way by the reaction of relevant stakeholders from government, business, NGOs and trusted think tanks) (Hunt & Shackley 1999).

Much effort has been devoted to creating closure in such uncertain and contested arenas of fiducial science, and methods and heuristics for producing greater apparent certainty are a key product (in terms of our earlier discussion, good fiducial knowledge is that which combines its manifest and latent functions in a robust fashion). One such example in the climate change field is the use of a discrete set of climate change scenarios, which are claimed to represent the most credible set of possibilities. The uncertainty of climate change science is thereby neatly limited and essentially backgrounded, allowing impacts and responses to be evaluated.

Clearly, IA practitioners are aiming to operate in the domain of fiducial science, at least in part. Part of the task of a fiducial science enterprise is to continue to market the message of its value and purpose (the need for IA/M, the need for analysis of climate change impacts and responses, etc.) to policymakers. This might not be work at the frontier of science, but it is nonetheless likely to be necessary for the effective emergence of policy institutions and policymakers who have an intelligent understanding of what IA/M is and what it can do.

Hence, effective assessment of climate change for regions (sub-national) and sectors (health, construction, etc.) in the UK context has, over the past several years of operation of the UK Climate Impacts Programme (UKCIP), meant a compilation (almost a checklist) of potential impacts, state-of-the-art knowledge, and expert and stakeholder judgements. The integration has consisted of matching up existing stakeholder concerns in a particular locality to potential climate changes as indicated by the UKCIP scenarios (which are derived from GCMs and provide a range of potential future climate changes (temperature, rainfall, extremes, etc.) for the UK from 'low' to 'high'). The resonance between stakeholder preoccupations and climate change impacts across different areas can then be identified. Hence, nature conservation experts have expressed concerns about upland fires, or about the drying-out of wetland habitats: the argument can readily be fashioned that these effects will become worse under climate change. Chemical industry environmental managers have expressed concerns over the capacity of on-site wastewater management systems, and again there is a resonance here with what the UKCIP scenarios tell us about changes in rainfall patterns. Exploiting the lived-through experience of resource managers and stakeholders has been critical to the success of this form of integration, which is perhaps a common feature of fiducial science and of the extended peer review to which it is usually beholden. Furthermore, limiting the ambitions of integration has perhaps also been important in its uptake. In a sense, the stakeholders perform their own form of integration, deciding, on the basis of the information provided about climate change scenarios, just how important the issues are for their own area of responsibility. A more formal approach to integration, e.g. through IA modelling, would not, in our view, have been successful at this relatively early stage of awareness-raising and issue-formulation, given the lack of available formal tools and / or the need for ownership of issues by a wide range of stakeholders to emerge.
In summary, knowledge for policy is a quite distinctive form of knowledge production and evaluation, and is a co-production of scientists and stakeholders. To be useful and used as intended by scientists, Integrated Assessment Models need to be seen as a form of fiducial knowledge with manifest and latent functions in specific institutional contexts.

Fragmented Policy Contexts and Integrated Assessment
The context of public policy making is frequently one of fragmented decision-making structures and processes within government. The trend in public administration towards efficiency drives, the 'hollowing-out' of bureaucracy, privatisation and deregulation, as well as the move towards measurable indicators of performance and outcomes, has often acted against an integrated, cross-cutting approach. For instance, the privatisation of electricity and water supply in the UK has resulted in frequent conflicts between the economic and environmental aspects of government policy. The regulator, the Office of Gas and Electricity Markets (OFGEM), has promoted competition between electricity suppliers and distributors to bring down prices, but in so doing it has created a framework which seriously disadvantages supply from renewable energy sources. In the mid-1990s, the gas regulator refused to sanction a small levy on gas customers to finance energy efficiency improvements, on the basis that the regulator's role was short-term cost reduction for consumers. On such occasions, conflicts have emerged between different departments or agencies, or indeed between the contradictory objectives of policy administered by a single agency, indicating a rather profound lack of integration within government (O'Riordan & Rowbotham, 1996).

Despite the popularity in public rhetoric of 'joined-up thinking', there may be reasons to do with political culture for suspecting that disintegration will persist. For example, distinct functions are frequently separated within Cabinet-style governments as part of a deliberate strategy of allowing different perspectives to be championed by reasonably independent factions before a consensus view is reached within Cabinet. This was part of the rationale for establishing distinct Environment Ministries within governments back in the 1970s. It was felt that the pro-environment arguments within Cabinet would thereby be strengthened, and not diluted or discounted by the need for compromise at the individual ministerial level. That is not to assume that the arguments of the Environment Ministry would take precedence within Cabinet, but at least there would be greater visibility at the highest decision-making level of government. We are left with the legacy of powerful ministries which are anxious to maintain influence and authority, and which will fight hard for it. In such a political system, integrated analysis may not actually be welcomed by the different perspectives unless it is seen as advancing their own cause. IAMs will probably have to align themselves with a particular cause within government to be effective, and then try to move out from that power base. In the UK, this is what happened with GCMs in the case of climate change policy: they had a power base and support in the Department of the Environment and, fortuitously, the Ministry of Defence (the home department of the UK Meteorological Office, under whose auspices climate modelling is conducted). The Department of the Environment championed 'their' model within government as one of only a handful of internationally recognised state-of-the-art models, selling the climate change 'story' to other departments and ministers.

Another instance of disintegration and territorial fighting occurs between different layers of government.
This is a particularly acute problem in relationships between central and local government or implementing agencies. Research supported by central government does not always dovetail with the research needs of regional agencies with delivery responsibilities. Sponsors of research-for-policy appear reluctant to give too much legitimacy to a project which they do not directly control and which is not 'theirs': in part because of resource constraints, possibly also because they do not wish to become beholden to its findings, and perhaps also because they wish to be seen to be proactive in responding to climate change in their own right.

The system of research support by government departments and agencies relies upon agency and departmental 'champions', who primarily look after their own projects, striving to make them a success: that is how individuals are acknowledged and advance within the civil service. Such an individualistic system of sponsorship within government research does little to encourage genuine integrated assessment, or a channelling of resources into a systematic research strategy. Yet, from government's perspective, the quasi-competition which emerges may deliver a useful range of methodologies and findings, and may help to build up distinct sources of expertise within departments and amongst academics and consultants (as opposed to government coming to rely too heavily on just one source).

For an assessment to be directly useful to policy making, it must also relate effectively to the existing set of policy instruments and frameworks: in the case of climate change, for example, local and regional plans for water resources, coastal protection and biodiversity (Shackley & Deanwood, 2002). These frameworks, multi-stakeholder processes and documents provide the hooks on which assessments of climate change impacts and responses can be hung. Whether the hooks provided by climate change are exploited at all, however, seems to depend in part on the current perceived interests of the relevant officials and influential stakeholders at the appropriate scale. Hence some water companies and their regulators have found it useful to include climate change scenarios in assessments of regional water resources (e.g. because the scenarios show a consistent decrease in the resource, and hence the need for new supplies), whilst other water companies have chosen not to include climate change because different scenarios present conflicting information on the future water supply-demand balance.

Integrated assessment is more effectively undertaken where there are suitable geographical, ecological and political scales. Integrated systems for water provision lend themselves more readily to IA/Ms, for instance, than biodiversity or flood protection, which are more locally specific and tend to be managed 'on the ground', albeit in the context of generic higher-level guidance. Water is collected from across large areas and is therefore more effectively regulated at the regional and national scales than are site-specific systems (Environment Agency North West 2001). A very localised integrated assessment could be conducted for site-specific systems, but it would require a large investment in modelling, data collection and analysis to address a very small part of the whole picture. Hence, an appropriate form of integration in such cases may be to provide general guidance and to continue to rely on the informal judgement of those 'on the ground' to perform the necessary integration. Conceivably, cheaper ways of modelling systems could be sought and tested (such as Bayesian Belief Network approaches), and data and modelling costs may well reduce with time.

Conclusions
In this paper we have argued that the structures of accountability by which decision-making is assessed are particularly important in understanding the use of formal analytical tools such as Integrated Assessment Models (IAMs). Computer models and formal analytical tools are most effectively employed where there is consensus over values and objectives (or powerful hierarchical control of decision-making) and clear, quantitative and transparent indicators. Where there is confusion or disagreement over objectives and values, the model can become a last resort in the search for consensus. The model in this situation becomes more complex and elaborate as it attempts to accommodate the disparate objectives and ambitions of stakeholders who cannot agree. In the extreme case, the model then becomes the surrogate 'end' of the policy-making process rather than the 'means to the (predefined) ends'.

IAMs are appealing to policymakers because they promise to provide hard numbers against which competing policy options can be evaluated. Yet the same policymakers and stakeholders know all too well, or discover rapidly, that such models need to be robust in terms of:
• inputs and outputs, which need to be relatively straightforward and easily understood by non-experts;
• model structure and operations, which need to be sufficiently complex and elaborate that expert critics do not reject the model as non-credible.

IAMs are, in our view, at a fairly early stage in meeting the challenge of complexity, namely that of being at the same time sufficiently complex in form and simple in application and use. Such elaboration requires further application, and IAMs are perhaps waiting for a window of opportunity, biding their time while policy makers begin to take notice of them and learn what they can and cannot do. If so, then past uses of models in assessment suggest that their use will be opportunistic and fragmented, differing between (and even within) government departments and agencies, and differing between national governments and lower levels at the regional and local scales.

That past experience also suggests that, to be used effectively, IAMs will need to be able to translate complex inputs, model workings, findings and insights into reasonably simple formulations. Belief in a reasonably straightforward and explainable scientific knowledge claim is an important part of the glue holding together coalitions around issues such as climate change. In formulating simple messages out of complexity, however, it is vital that the underlying science is seen to be robust and not readily attacked by stakeholders or by scientists enrolled for the purpose.

A message from past experience is that the symbolic use of externally-derived assessments is a very different matter from the more operational, hands-on use of assessments in specific policy-making decisions. IAMs may be useful and used for the former, but policymakers will be much more reluctant to rely upon an IAM to make a specific decision (though they might well use an IAM as one input among others). Governments and firms, as large bureaucracies, are more comfortable with their own applications of bureaucratic knowledge, which they can control and manage, and which usually includes proactively engaging with scientists and consultants in the production of fiducial knowledge.

Acknowledgements
We would like to thank Laurent Mermet (ENGREF, Paris) and Steve Rayner (Said Business School, Oxford) for providing the opportunity to present the ideas in this paper at meetings in New York (May 2001) and Lalonde (October 2001).

References
Bijker, W., Hughes, T. & Pinch, T. (eds.) (1992), The Social Construction of Technological Systems: New Directions in the Sociology of Technology, MIT Press, Cambridge, Mass.
Bijker, W. & Law, J. (eds.) (1997), Shaping Technology/Building Society: Studies in Socio-Technical Change, MIT Press, Cambridge, Mass.
Edwards, P. (1996), 'Global Comprehensive Models in Politics and Policy Making', Climatic Change 32: 149-161.
Environment Agency North West (2001), Water Resources for the Future: A Strategy for North West Region, EA North West, Warrington.
Ezrahi, Y. (1990), The Descent of Icarus, Harvard University Press, Cambridge, Mass.
Funtowicz, S. & Ravetz, J. (1993), 'Science for the Post-Normal Age', Futures 25(7): 739-755.
Gough, C., Castells, N. & Funtowicz, S. (1998), 'Integrated Assessment: An Emerging Methodology for Complex Issues', Environmental Modeling and Assessment 3: 19-29.
Greenberger, M., Crenson, M. & Crissey, B. (1976), Models in the Policy Process, Russell Sage Foundation, New York.
Hunt, J. & Shackley, S. (1999), 'Reconceiving Science and Policy: Academic, Fiducial and Bureaucratic Knowledge', Minerva XXXVII(2): 141-166.
Latour, B. (1987), Science in Action, Harvard University Press, Cambridge, Mass.
Lindblom, C. & Cohen, D. (1979), Usable Knowledge, Yale University Press, New Haven, Conn.
Majone, G. (1989), Evidence, Argument and Persuasion in the Policy Process, Yale University Press, New Haven.
McKenzie-Hedger, M., Gawith, M., Brown, I., Connell, R. & Downing, T. (eds.) (2000), Climate Change: Assessing the Impacts - Identifying Responses. The First Three Years of the UK Climate Impacts Programme, UKCIP Technical Report, UKCIP & DETR, Oxford.
Merton, R. (1957), 'Manifest and Latent Functions', in Social Theory and Social Structure, Free Press, Glencoe, Ill.
Meyer, A. (2000), Contraction and Convergence: The Global Solution to Climate Change, Green Books, Totnes.
Michael, M. (1996), 'Ignoring Science: Discourses of Ignorance in the Public Understanding of Science', in Irwin, A. & Wynne, B. (eds.), Misunderstanding Science? The Public Reconstruction of Science and Technology, Cambridge University Press, Cambridge.
Nonaka, I. & Takeuchi, H. (1995), The Knowledge-Creating Company, Oxford University Press, Oxford.
O'Riordan, T. & Rowbotham, E. (1996), 'Struggling for Credibility: The United Kingdom's Response', in O'Riordan, T. & Jäger, J. (eds.), Politics of Climate Change: A European Perspective, Routledge, London: 228-267.
Porter, T. (1995), Trust in Numbers, Princeton University Press, Princeton, NJ.
Royal Commission on Environmental Pollution (2000), Energy: The Future Climate, HMSO, London.
Schlesinger, M. & Lempert, R. {TO FIND REF}
Schon, D. (1982), 'The Fear of Innovation', in Barnes, B. & Edge, D. (eds.), Science in Context, Open University Press, Milton Keynes: 290-302.
Shackley, S. (1998), 'Introduction to Special Section on the Use of Models in Appraisal and Policy-Making', Impact Assessment and Project Appraisal 16(2): 81-89.
Shackley, S., Kersey, J., Wilby, R. & Fleming, P. (2001), Changing by Degrees: The Potential Impacts of Climate Change in the East Midlands, Ashgate, Aldershot.
Shackley, S. & Deanwood, R. (in press), 'Stakeholder Perceptions of Climate Change Impacts at the Regional Scale: Implications for the Effectiveness of Regional and Local Responses', Journal of Environmental Planning and Management.
Smith, R. (1998), 'Use of Quantitative Models in UK Economic Appraisal and Policy-Making', Impact Assessment and Project Appraisal 16(2): 105-114.
Star, S. & Griesemer, J. (1989), 'Institutional Ecology, "Translations" and Boundary Objects', Social Studies of Science 19: 387-420.
Tayler, P. (1998), 'The Business of Modelling: Some Anecdotes on Modelling in Business and Story-Telling', Impact Assessment and Project Appraisal 16(2): 133-138.

The inter-disciplinary Tyndall Centre for Climate Change Research undertakes integrated research into the long-term consequences of climate change for society and into the development of sustainable responses that governments, business leaders and decision-makers can evaluate and implement. Achieving these objectives brings together UK climate scientists, social scientists, engineers and economists in a unique collaborative research effort.

Research at the Tyndall Centre is organised into four research themes that collectively contribute to all aspects of the climate change issue: Integrating Frameworks; Decarbonising Modern Societies; Adapting to Climate Change; and Sustaining the Coastal Zone. All thematic fields address a clear problem posed to society by climate change, and will generate results to guide the strategic development of climate change mitigation and adaptation policies at local, national and global scales.

The Tyndall Centre is named after the 19th century UK scientist John Tyndall, who was the first to prove the Earth's natural greenhouse effect and suggested that slight changes in atmospheric composition could bring about climate variations. In addition, he was committed to improving the quality of science education and knowledge.

The Tyndall Centre is a partnership of the following institutions:
University of East Anglia
UMIST
Southampton Oceanography Centre
University of Southampton
University of Cambridge
Centre for Ecology and Hydrology
SPRU - Science and Technology Policy Research (University of Sussex)
Institute for Transport Studies (University of Leeds)
Complex Systems Management Centre (Cranfield University)
Energy Research Unit (CLRC Rutherford Appleton Laboratory)

The Centre is core funded by the following organisations:
Natural Environmental Research Council (NERC)
Economic and Social Research Council (ESRC)
Engineering and Physical Sciences Research Council (EPSRC)
UK Government Department of Trade and Industry (DTI)

For more information, visit the Tyndall Centre Web site (www.tyndall.ac.uk) or contact:
External Communications Manager
Tyndall Centre for Climate Change Research
University of East Anglia, Norwich NR4 7TJ, UK
Phone: +44 (0) 1603 59 3906; Fax: +44 (0) 1603 59 3901
Email: tyndall@uea.ac.uk

Other titles in the Tyndall Working Paper series include:

1. A country-by-country analysis of past and future warming rates, November 2000
2. Integrated Assessment Models, March 2001
3. Socio-economic futures in climate change impact assessment: using scenarios as 'learning machines', July 2001
4. How high are the costs of Kyoto for the US economy?, July 2001
5. The issue of 'Adverse Effects and the Impacts of Response Measures' in the UNFCCC, July 2001
6. The identification and evaluation of suitable scenario development methods for the estimation of future probabilities of extreme weather events, July 2001
7. Security and Climate Change, October 2001
8. Social Capital and Climate Change, October 2001
9. Climate Dangers and Atoll Countries, October 2001
10. Burying Carbon under the Sea: An Initial Exploration of Public Opinions, December 2001
11. Representing the Integrated Assessment of Climate Change, Adaptation and Mitigation, December 2001
12. The climate regime from The Hague to Marrakech: Saving or sinking the Kyoto Protocol?, December 2001
13. Technological Change, Industry Structure and the Environment, January 2002
14. The Use of Integrated Assessment: An Institutional Analysis Perspective, April 2002
15. Long run technical change in an energy-environment-economy (E3) model for an IA system: A model of Kondratiev waves, April 2002
16. Adaptation to climate change: Setting the Agenda for Development Policy and Research, April 2002

The Tyndall Working Papers are available online at:
http://www.tyndall.ac.uk/publications/working_papers/working_papers.shtml