
foresight
Emerald Article: Dynamic foresight evaluation
Ian Miles

Article information:
To cite this document: Ian Miles, (2012),"Dynamic foresight evaluation", foresight, Vol. 14 Iss: 1 pp. 69 - 81
Permanent link to this document:
http://dx.doi.org/10.1108/14636681211210378
Dynamic foresight evaluation
Ian Miles

Ian Miles is Head of the Laboratory for Economics of Innovation, National Research University – Higher School of Economics, Moscow, Russia.

Abstract

Purpose – This paper aims to depict foresight programmes as extended service encounters between foresight practitioners, sponsors, and other stakeholders. The implications of this perspective for evaluating the outcomes of such programmes are to be explored.
Design/methodology/approach – The range of activities comprising foresight is reviewed, along with the various objectives that may underpin these activities. The more substantial foresight programmes are seen in terms of a series of steps, in each of which various partners can be involved in generating service outcomes and later steps of the process. The arguments are illustrated with insights drawn from
various cases.
Findings – A foresight programme is likely to feed into more than one policy process, so that the
foresight activities can be linked to various stages of the policy cycles, as well as engaging participants
with different degrees of influence on the policies in question. The outcomes of the foresight activity are
also heavily shaped by the degree of involvement of various stakeholders, not least the sponsoring
agency and any other groups it seeks to mobilise. Seeing foresight as a service activity brings to the fore
the notion of co-production, and the importance of the design of the service encounters involved.
Research limitations/implications – The task of evaluating foresight is a challenging one, and
comparison of foresight activities needs to bear in mind the different scale, scope, and ambitions of
different programmes. Simple static comparison of formal inputs and outputs will miss much of the value
and value-added of the activity.
Practical implications – A dynamic approach to evaluation stresses the learning of lessons about the
roles of multiple stakeholders – and the responsibilities of sponsors as well as practitioners.
Originality/value – Foresight programmes are frequently commissioned, and often have significant
influence on decision-making. Attempts to systematically evaluate these efforts have begun, and this
essay stresses the need to be aware of the complex interactive nature of foresight, highlighted by
viewing it in service terms.
Keywords Foresight, Foresight programmes, Evaluation, Services, Stakeholders, Values, Value added,
Forward planning
Paper type Conceptual paper

Albert Einstein remarked that ''Everything that can be counted does not necessarily count; everything that counts cannot necessarily be counted''. Less elegantly, we can say that the things that are easiest to measure are not necessarily the things we really need to assess. In the Foresight field, the distinction between product outputs and process benefits of exercises is often related to this point.
Process benefits will include things like deepening knowledge – including know-who – and building networks, whereas product outputs will mainly involve artefacts and activities that may be counted up, for example as the number of reports produced or the number of people trained. Many commentators have pointed out that process benefits are often at least as important as the formal products of the exercises. One significant point that is also often made, furthermore, is that the impact that can be achieved from these formal outputs will often be very much a function of the extent of engagement of key stakeholders in the process of the exercise. For example, it is rather plausibly claimed that the impact of taking part in a scenario workshop is liable to be far more than that achieved as a result of being handed the report of that workshop.

This study was implemented in the framework of the Programme of Fundamental Studies of the Higher School of Economics in 2011.

DOI 10.1108/14636681211210378, foresight, VOL. 14 NO. 1 2012, pp. 69-81, © Emerald Group Publishing Limited, ISSN 1463-6689
Sometimes evaluation is taken to mean simply auditing the activity in question – producing
an assessment of whether funds have been spent efficiently (or even effectively). But this is a
very limited understanding of the task, with similarly limited scope for learning from the
evaluation. Miles and Cunningham (2006) discussed the evaluation of innovation programmes in this light, arguing that we need to understand how, how far, and why programmes have succeeded or failed to realise the desired effects. Given the complexity of innovation systems, the evaluation of such programmes is needed to help us understand better the interaction between policies and innovation processes, and to identify how programmes and broader policies might be designed so as to have more of the desired effects. In other words, evaluation can help us better understand the system we are trying to
influence, by telling us what can be learned from our efforts to exert influence. This means
that we are interested in the impact, the outcome of the work – and in what processes have
led to this, and why. This point applies equally to foresight practice – evaluation should
enable us to understand the object of foresight, and the ways in which foresight can be used
more generally in our social and political contexts. We term this ‘‘dynamic foresight
evaluation’’.

Intensivity and extensivity


This discussion leads us toward one of the major challenges in assessing foresight work –
and indeed in assessing many of the activities of the knowledge based economy. The
involvement of the user in creating the product is a critical issue, and the success of an
exercise is very dependent on how the user is engaged. The service supplier has some
scope for influencing this, but much depends on the willingness of the user to be involved, in
one way or another. For example, it is much remarked by researchers into service productivity and quality that these features are very heavily, if not completely, dependent on the use of these services. When one acquires a good, then it is possible to say whether it
works or not as specified, whether it is fit for purpose, and the like. But when services are
involved, then it may be a good deal harder to make this assessment. If one has not been
fully forthcoming to the doctor or consultant, it is no wonder that their diagnoses and
prescriptions may be imperfect. If one has not arrived at the railway station or theatre on
time, then the fact that one has missed the service altogether may be grounds for complaint
about schedules – but the failure to derive the desired service is quite likely to be completely
in line with the contract involved when the ticket was purchased. If one is too tired to enjoy a
concert or a holiday, or too frightened to take a theme park ride or psychiatric consultation,
then the service supplier has only a limited part to play in creating the impact of the service.
In these, and many other instances, the word ''co-production'' becomes applicable. (Thus a challenge for productivity measurement is that the usual denominator – the labour inputs of the producer – becomes problematic: should we also include the labour inputs from users that have gone into creating the final product?)
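The co-production point can be made concrete with a simple illustrative ratio (my own sketch; the notation is not drawn from the service-productivity literature the text alludes to):

```latex
% Conventional labour productivity of a service supplier:
% output Q divided by the supplier's labour input L_s.
% Under co-production the user also contributes labour L_u,
% which suggests a broader denominator:
\[
  P_{\text{conventional}} = \frac{Q}{L_s},
  \qquad
  P_{\text{co-produced}} = \frac{Q}{L_s + L_u}.
\]
```

The two measures can diverge sharply for highly interactive services such as foresight, where the time invested by participants may rival or exceed that of the practitioners.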
But the challenge is deeper still, and service research again can help point the way. The
production and consumption of services is typically spread out over time – over years in the
case of some medical treatments and educational experiences, for example. There may be
multiple encounters between the user (or members of the user organisation) and the service
supplier (or members of the supplier organisation). The jargon among service researchers
refers to these encounters as ‘‘touchpoints’’, and the sites at which they take place as
‘‘servicescapes’’[1]. The design of each can be important in shaping engagement and
service quality. When we are considering different members of the two organisations, it is
quite possible for different sets of people on both sides to be brought into relationship at
different points in time. The extensivity over time of service relationships leads commentators
to talk about ‘‘service journeys’’ and ‘‘service pathways’’, and it has led the new profession of
service design to apply tools like storyboarding, roadmaps and flow charts to the creation
of service blueprints and designs.

It can be helpful to see foresight as a service that a supplier is providing to (or facilitating
within) a user, and as such that it can be intensive in terms of user-supplier interaction, and
can be extensive over time. This is very different from, for example, providing a one-off
forecast to an interested party.

Objectives and orientations


The case of the forecaster mentioned above implies that we need to consider just what we
mean by ‘‘foresight studies’’, for the purpose of this essay. This is necessary because of the
confusion and obfuscation associated with the term ‘‘foresight’’, which was rarely used to
describe long-term futures work before the 1990s. In that decade, the prominence of
large-scale national Technology Foresight Programmes (TFPs) led many people active in
futures work to adopt this term, whether or not they had been involved in these TFPs. All sorts
of activities became labelled as foresight, and elsewhere I have sought to disentangle the
various strands of activity here.
The basic conclusions, emerging from literature search and discussion with a range of
futurists, can be summarised quite briefly. The term ‘‘foresight’’ was in use sporadically to
describe futures activities, perhaps most notably in H.G. Wells’ (1932) call for ‘‘Professors of
Foresight’’. Wells applied this term at a time before ‘‘futures studies’’ and the like had entered
the lexicon, but relatively few people took up his lead. It was used in the USA in the 1970s
and 1980s to describe one stream of work for government bodies; in the wake of this Joseph
Coates (1985) wrote an article whose definition of foresight in government has often been
cited by those looking for authoritative and foundational formulations. It was also applied in a
relatively unknown Canadian TFP-like programme on emerging science and technologies in
the mid and late 1980s. But it was Irvine and Martin’s use of the term that really gave it
traction (Irvine and Martin, 1984; Martin and Irvine, 1989). Their analysis of exercises
designed to inform Science, Technology and Innovation (STI) decision-making around the
world was extremely influential on the TFPs of the 1990s. As argued with supporting
evidence from bibliographic analyses, it was the emergence of these large-scale exercises
from the mid-1990s on that lent the term its kudos, and led to its popularity as a way of
describing all sorts of futures activities.
The foresight processes adopted in the TFPs drew very much on Irvine and Martin’s
understanding of the STI system, informed by their grounding in innovation studies. The STI
system was seen as highly complex, with knowledge widely dispersed across many areas of
expertise, where many stakeholders have viewpoints and influence of various kinds, and
where decisions typically require coordination among multiple players (and often across
multiple parts of the policy system). The result is that what we have dubbed ‘‘fully-fledged
foresight’’ came to characterise many of the TFPs, especially those that were to be seen as
the most successful ones. ‘‘Fully-fledged foresight’’ combined prospective analysis (futures
studies’ insistence on the importance of relating present choices to awareness of long-term
future prospects, and of the need to pay due regard to agency, uncertainty, and the
associated scope for alternative futures), with a participatory orientation (paying due regard
to the dispersion of knowledge and agency across multiple stakeholders, whose insights
and engagement need to be mobilised), and a practical relevance (being closely related to
actual decision making and strategy formation actions, such as STI policies in European
countries). This underpinned large-scale TFP exercises, engaging many experts and
stakeholders, and informing national (and sometimes regional or transnational)
policy-making in STI or other areas of concern.
The term ‘‘foresight’’ now became widely deployed to cover activities much narrower
than those described above as ‘‘fully-fledged foresight’’. This makes the task of discussing
the evaluation of foresight activities much more difficult, since we could be looking at
exercises of vastly varying scope and scale, with quite different objectives.
Some idea of this variety can be gleaned from the results of the European Foresight Monitoring Network (EFMN), undertaken a few years ago and now replaced by the European Foresight Platform[2] (EFMN, 2009). The hundreds of exercises captured in the
EFMN database feature many cases with relatively few participants – more than half of them feature fewer than 50 participants, and are quite possibly simply desk exercises run by relatively few people. This is not to discount the value of such activities – but they are quite different in nature from the large-scale TFPs. (Given the way the EFMN database is constructed, it is possible that some of the activities described as individual cases are actually part of a single large exercise with several specific projects within it.) Most of the
exercises captured had quite long time horizons, but around a quarter were focused on the
relatively short-term future (less than ten years). Longer-term time horizons mean that the
definitive evidence about impact of a TFP (and about how well it went about its task of
appraising future prospects)[3] may not be available for decades. It will be necessary to
assess the exercise in terms of more immediate outcomes, which will need to be
persuasively linked to the longer-term objectives.
Not all were linked to policy or decision-making processes in an obvious way, and a range of
objectives were encountered. Most of the exercises had three to four specific objectives,
spread across two or three of nine families of objectives, ranked here in order of prevalence
across some 200 cases:
1. Orienting policy formulation and decisions [33 cases].
2. Supporting STI strategy- and priority-setting [30 cases].
3. Fostering STI cooperation and networking [29 cases].
4. Generating visions and images of the future [24 cases].
5. Triggering actions and promoting public debate [21 cases].
6. Recognising key barriers and drivers of STI [20 cases].
7. Identifying research/investment opportunities [16 cases].
8. Encouraging strategic and futures thinking [15 cases].
9. Helping to cope with Grand Challenges [12 cases].
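A quick tally makes the relative prevalence of these families concrete (a throwaway sketch; the counts are copied from the list above, and the source does not spell out whether the total corresponds to distinct exercises or to objective assignments):

```python
# Objective-family counts as listed in the EFMN-based ranking above.
family_counts = {
    "Orienting policy formulation and decisions": 33,
    "Supporting STI strategy- and priority-setting": 30,
    "Fostering STI cooperation and networking": 29,
    "Generating visions and images of the future": 24,
    "Triggering actions and promoting public debate": 21,
    "Recognising key barriers and drivers of STI": 20,
    "Identifying research/investment opportunities": 16,
    "Encouraging strategic and futures thinking": 15,
    "Helping to cope with Grand Challenges": 12,
}

# The listed counts happen to sum to 200, in line with the
# "some 200 cases" the text mentions.
total = sum(family_counts.values())
print(total)  # 200

# Share of the most prevalent family among all listed counts:
top_share = family_counts["Orienting policy formulation and decisions"] / total
print(f"{top_share:.1%}")  # 16.5%
```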
Though the families are here defined in rather broad and potentially overlapping ways, the
diversity of objectives is apparent. This also poses a challenge for evaluation – or at least, for
efforts to benchmark exercises – they are not all trying to achieve the same things.
This is even the case if we consider the national TFPs outlined in Georghiou et al. (2008).
Here we find cases whose main objectives are informing particular decisions about critical
technologies, priority setting across fields of research, enhancing awareness of future issues
and the opportunities for applying technology to major problems, helping to build consensus
on desirable targets and aligning various stakeholders behind these, and promoting networking in the STI system (the famous ‘‘wiring up’’ of the national innovation system; see Martin and Johnston, 1999).
Strikingly, we can even point to a substantial reorientation of objectives in the UK foresight
programme over its 16 year history. Originally its aims were to aid research priority setting
and help ‘‘wire up’’ the innovation system; it attempted to map out opportunities across a
very wide range of areas of technology development and application. Its current mission is
seen much more as that of promoting better interaction and networking across different parts of the policy system, bringing together different parts of government that need to have
an understanding of each other’s actions if policies to deal with long-term challenges (that
have a strong technology dimension) are to be coordinated. The criteria for evaluation will be
required to change to take into account this evolution of objectives. The current objectives
also highlight the point that policy impact may involve the interaction across different fields of
policy, not just the adoption or shaping of specific policy measures.
More generally, while a foresight exercise may often be linked to specific decisions and
planning processes, there will probably be other objectives that are intended, often explicitly
(and we would suspect that when these other objectives are not spelled out explicitly, they
will nonetheless be implicit in the design of the exercise and narratives of the programme managers). Evaluation will thus need to be done in the context of, or against, multiple criteria,
and understanding how far different sorts of objectives have been realised may call for
rather different research methods - surveys of stakeholders and participants, analysis of
official documents, interviews with the primary ‘‘users’’ of the work, and so on.
If we focus on the large-scale TFPs, then, we still face a wide range of activities, even if we
have excluded the small-scale, short time horizon activities. The TFPs typically take from one
to three years, though sometimes they last longer. But these exercises are often not just
one-offs; it is common to find a national programme becoming a more or less continuous
activity, a series of interlinked activities, often divided into distinct cycles, waves, or stages.
Sometimes the connection between stages is rather loose – a new exercise is launched that
draws to a greater or lesser extent on lessons learned and personnel active in the earlier
programme. Sometimes there is tight coupling. The current UK Foresight, for instance, is a
rolling programme, with a stream of projects that typically last for about two years – at any
one time there will be some projects that are fairly new ones and others nearing completion
or into follow-up stages[4].

Survival of the foresightful


The fact that many countries have not only ‘‘discovered’’ foresight, but are continuing with
second, third and later cycles of activity, suggests that national governments, at least, are
evaluating their TFPs in positive terms. The diffusion of TFPs as a policy innovation across
countries could also be seen as a matter of positive assessment of the experience of others.
‘‘Me too’’ is not a convincing explanation of such a phenomenon. But it must be conceded
that some of the diffusion of TFP practices is a result of their being promoted assiduously, by
the European Commission and some other international bodies. Thus a wave of foresight
programmes swept across Central and Eastern Europe when countries in that region were preparing to join the European Union; the European Commission encouraged them to undertake TFPs as part of the transition of their STI systems from those established under the
Warsaw Pact. Quite possibly some TFPs have been instituted more as a result of such
pressure than because of endogenous requirements – and it is quite possible that the
quality of the exercise and the impact it achieves will reflect this limited articulation with local
interests. It should be noted that some Central and Eastern European TFPs are widely
regarded as rather good examples of the activity, however, while I can testify from first-hand
experience that, in at least one Western European case, outside observers present at the
launch event were staggered by the national team’s poor understanding of TFP methods and
objectives.
In the context of policy learning across countries, there are many cases of less extreme, but
still disabling, failure to fully engage with the philosophy of fully-fledged foresight. There are
well-known cases of efforts to simply transplant tools – such as Delphi questions –
developed in the context of the Japanese STI system into Western Europe, of course, but
there has also often been a fixation on these tools without adequate recognition of the need
to configure foresight to the wider STI system. The way Delphi results are used in Japan is
not identical to that in, say, the UK; the ways in which stakeholders engage in workshops and
other events, and employ the knowledge and network links they have gained, also vary a
great deal. TFP design needs to take such specificities into account, rather than simply copy
a menu of activities that has been used elsewhere, as many experienced practitioners
stress. Evaluation that seeks to link methods to outcomes without considering the
complexities of the STI system is thus liable to result in misleading conclusions.
The persistence and diffusion of TFPs is documented in The Handbook of Technology
Foresight (Georghiou et al., 2008), and is ongoing – though whether the current economic
crisis eventually leads to a downturn in foresight, too, remains to be seen. (At the time of
writing, the UK Programme, at least, has weathered a round of cuts in government spending
that put an end to many other policy activities, impacting higher education rather seriously,
for example.) In many cases TFPs have survived changes of government, confirming that governments of a broad range of political persuasions have found these programmes valuable for informing their decisions and/or mobilising the STI system. This may imply some agility on
the part of programme managers, but could not have been achieved without demonstrable
achievements – most likely the approval of key stakeholders, as well as any more formal
evaluations.
An interesting sidelight on the social and political embedding of foresight, and of TFPs in
particular, can be gleaned from analysis of the use of terminology in publications. An earlier
application of Harzing’s Publish or Perish data analysis system (Harzing, 2007) was to
demonstrate the surge of publications using ‘‘Technology Foresight’’ in their titles, dating
from the mid-1990s, so that it eclipsed the similar use of ‘‘Technology Forecasting’’ and
‘‘Technological Forecasting’’. (The data are drawn from Google Scholar, and thus emphasise
journal articles and academic books, though some policy and business texts and some
more popular literature also slip in; the data are very ‘‘noisy’’, but are generally valuable for
looking at broad trends.) What happens when we consider use of the term ‘‘Foresight
Programme’’? Figure 1 displays the count of publications using the term in their title, Figure 2
those using it in their text. The results are intriguing.
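The sort of tallying behind Figures 1 and 2 can be sketched as follows (a hypothetical reconstruction: Publish or Perish can export hit lists with title and year information, but the field names and the records here are illustrative, not the actual data):

```python
from collections import Counter

def counts_by_year(records, phrase, field="title"):
    """Count records per year whose given field contains the phrase
    (case-insensitive). `records` is a list of dicts with at least
    the given field and a "year" key, e.g. rows of an exported hit list."""
    phrase = phrase.lower()
    years = Counter()
    for rec in records:
        text = (rec.get(field) or "").lower()
        year = rec.get("year")
        if year and phrase in text:
            years[year] += 1
    return dict(sorted(years.items()))

# Illustrative records (invented, not real bibliometric data):
sample = [
    {"title": "The UK Foresight Programme revisited", "year": 1996},
    {"title": "Technology forecasting methods", "year": 1996},
    {"title": "Evaluating a national foresight programme", "year": 2001},
]

print(counts_by_year(sample, "foresight programme"))
# {1996: 1, 2001: 1}
```

Counting the phrase in titles versus full text would simply mean pointing `field` at whichever column the export provides; as the text notes, such Google Scholar-derived data are noisy, so only broad trends should be read from the resulting series.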
Figure 1 ‘‘Foresight programme’’ used in title of publication

Figure 2 ‘‘Foresight programme’’ used in text of publication

In the cases of the use of the term in titles, we can expect much of this to refer to discussion of TFP as a policy innovation, about what this actually was, why it was happening, how to create and manage the programme, and the like. Uses rapidly peaked around the time of the take-off of foresight programmes in the 1990s and have declined subsequently. The TFP as an
object of intellectual curiosity seems to have declined in salience. This could be because
TFPs have become less important in the policy mix – which does not accord with our
discussion above. Perhaps, then, they have become less interesting for this sort of
discourse because the novelty has worn off, and only become foregrounded when there is
something really new happening in TFPs.
In contrast, the number of publications that refer to, but are not centrally about, the TFPs
broadly increases over time. There are many times more publications of this sort, and the
trend is clearly upward, though uneven. This level and trend could well be an indicator of the
impact of TFPs on intellectual discussion, that this has been growing alongside the policy
influence of the programmes. Thus TFPs remain of intellectual interest, but now because of their results (and perhaps the way they are used).

Extensivity and touchpoints


We mentioned that the UK Foresight programme has shifted in its objectives over
successive cycles – and we could note that methods have also changed, with no repetition
of the major Delphi exercise of the first cycle. Even within a cycle, there can well be several
components with their own objectives. In the current British case, for example, the rolling
programme of projects is quite traditional in the sense that the projects provide very deep
scrutiny of specific issues, mainly defined in terms of specific technological opportunities
(Intelligent infrastructure systems, Exploiting the electromagnetic spectrum, etc.), specific
problem areas with technological dimensions (Flood and coastal defence, Cyber trust and
Crime prevention, etc.), or areas that span the two (e.g. Brain science, addiction and drugs).
These are tackled with state of the science reviews, stakeholder workshops, scenario
development, and so on. Meanwhile the programme also runs an ongoing cross-cutting
horizon scanning activity, and generates some more generic analysis of technology futures,
and scenarios to help inform STI policy.
These individual foresight activities typically go through several stages. Various
stakeholders can be engaged in the activities in various ways, across these stages.
Impacts may be generated throughout the project, then. They do not emerge only after the
project has delivered its report. Impacts may vary across stages, in terms of whom and what
has been influenced. We may thus require some analysis of different stages in order to
assess and understand these impacts.
There are various accounts of the different phases of Foresight exercises. Figure 3 displays
one of these, based on the account Miles (2002) developed (in order to highlight the point
that Foresight methods involve more than just the ‘‘futures’’ methods that feature in the
scanning and appraising steps of the Foresight activity – they also include project
management methods, stakeholder analysis, networking, decision tools, and so on). As with
most ‘‘stages’’ accounts, the model is not to be taken to imply a rigid sequence of steps – in
practice the various activities often overlap, some elements may need to be returned to and
revised, and so on. But the broad picture is that we determine the scope of the exercise
(e.g. specific focus, geographical and time scales), access key sources of knowledge and
develop information on major drivers, trends and actors, appraise alternative futures and
associated opportunities and challenges, assess courses of action, communicate key
messages, draw lessons about the project and identify future requirements for foresight.
But this cycle does not take place in a vacuum. Foresight exercises unfold within a policy
context – and of course they are designed to feed into policy processes, indeed this is a
central feature of fully-fledged foresight. There are two ways in which this influences the
evaluation problematique. First, policy processes have their own cycles and stages. Second, individual policies are themselves situated within a broader policy context.

Figure 3 The foresight cycle

Policy processes have their own cycles and stages


As with foresight, it is common to view the policy process in terms of a series of stages, and indeed ‘‘the policy cycle’’ is familiar terminology (for example, May and Wildavsky, 1978). Here we follow Knill and Tosun (2008), who distinguish five stages (again, with caveats about not taking these too rigidly):
1. agenda setting;
2. policy formulation;
3. policy adoption;
4. implementation; and
5. evaluation.
These are quite similar to the stages in the Foresight cycle, for example ‘‘agenda setting’’
clearly overlaps with ‘‘scoping’’, ‘‘evaluation’’ with ‘‘renewal’’ (Figure 4).
In the agenda setting stage there is a matter of registration and definition of a problem for
policy to address; courses of action are explored in policy formulation, and chosen among in
policy adoption; the policy is implemented and then evaluated in the final stages[5].
This is not a double helix, with the foresight and policy cycles uncoiling together in
synchrony. Often, of course, foresight is deliberately designed to feed in to the first three
steps of the policy cycle, being triggered by the agenda setting stage of the latter, but also
quite possibly helping to further define the agenda. The various steps in foresight should be
contributing to further understanding of the problem, formulation and assessment of policy
alternatives, and selection of preferred policies. The foresight activity could thus be seen as
simply providing knowledge for the early stages of the policy process.
But even if there were only one policy cycle underway, life is liable to be more complicated.
The full foresight activity may require more time than the pressing policy requirement allows,
and thus have to contribute initial results well before the entire foresight cycle has been
completed; foresight will thus continue while the policy cycle moves on to its later stages.

Figure 4 Knill and Tosun: five stages of the policy cycle

This can be important in that policy implementation may be dependent on, or much affected
by, the engagement of stakeholders in foresight. The ‘‘Advising’’ and ‘‘Renewing’’ stages of
the Foresight cycle are liable to be contributing to policy implementation in other ways, too,
as the policy plans are translated into new routines and concrete actions for civil servants
and other officials and stakeholders; and ‘‘Renewing’’ and evaluation can coincide implicitly
or explicitly – or in some cases, we might say that they collide. For example, we know of one
case where Foresight recommendations were rapidly translated into a strategy statement. In
a striking example of the uptake of a product into policy, the report of a roadmapping
exercise and scenario workshop looked set to become the statement of official policy for the
UK nanotechnology sector. But implementation was then captured by a different agenda; a
full assessment of the impacts of the foresight and the ensuing action would be bound to
foreground the tensions and interests that led to this[6].

In reality, of course, a policy is not formulated in a vacuum. At any one time, there may be
several policy cycles underway, and several more in the offing. Even in the STI field, there
may be initiatives underway on IPR, training, tax relief, small firm support, regional
strategies, cluster strategies, research funding, and so on. Policies in some other fields may
also be related to the topic – migration policy, health policy, or trade policy, for example.
Different policies could well be at different stages of development, and thus a
technology foresight exercise is liable to feed into (and draw from) them in very different
ways. This will, of course, include links achieved through engagement with some of the
stakeholders involved with the other policies. It is not uncommon for stakeholders to have
interests in multiple policy areas – indeed, one persistent problem with policy networks is
that often the ‘‘usual suspects’’ are hauled in as relevant stakeholders and experts across a
wide range of topics. Foresight programmes should use systematic stakeholder analysis to
ensure that they are not simply drawing on this small but loquacious pool of people who have
been drawn into the exercise because they are involved in some way with the main policy
process to which Foresight has been designed to contribute.

With these caveats in mind, we can briefly refer to a helpful account of the role of foresight in
innovation policymaking (Havas et al., 2010), which deploys the idea of the policy cycle and
suggests three major forms of influence. Rephrasing their account, the three roles are:
1. Policy-informing – where formal products (codified information, results and reports) are
supplied to policymakers as inputs into policy design. Havas et al. note that the inclusion
of a wide range of stakeholders in the process is important here.
2. Policy advising / strategic policy counselling – which relates the information generated in
the foresight process to the strategies, instruments and policy options of particular
policymaking bodies.
3. Policy-facilitating – using foresight for collective learning, cognitive alignment of
appraisals of risks and opportunities, and building networks and infrastructures to
support this.
For each of these roles Havas et al. (2010) list a number of concrete outcomes that might be
expected, and that can in principle be the basis of an evaluation effort. Some are much
easier to assess than others, of course. One general message is that evaluation will be
much more thorough if it can be built into the process from the outset, so that relevant
information is not lost in the pressure of implementing the exercise.
The policy cycle of course influences the foresight process – how it is set up, how
stakeholders (especially civil servants) participate, how results are used, and so on. The
options and choices considered, the stakeholders deemed legitimate, and the scope for
challenging an established framework rather than just working within it are all important
conditions: foresight practitioners may be able to influence outcomes here, but may find
themselves constrained by the policy context. (An interesting case of a foresight programme
that was mandated to challenge the institutional framework of an innovation system was the
French FUTURIS exercise; see Barré, 2008.)

Individual policies are themselves situated within a broader policy context


As noted above, a policy is not formulated in a vacuum – and neither is the decision to
undertake a foresight exercise.
Foresight is initiated in the wake of recognition that a problem or opportunity of some kind
exists; in the case of technology foresight this is usually a problem connected with the STI
system, though it may be a problem of some other nature to which S&T is thought capable of
providing some of the solution.
The decision to use foresight as a way of addressing this problem – through helping to forge
policy, or through actively constituting new policy networks and stakeholder constituencies
– is typically not the only response to a perceived problem. Policy makers are sometimes
content to let time pass while a problem is being appraised, but often they will want to be
seen to be acting decisively – and rapidly.
The result is that if foresight is intended to raise awareness, build networks, or help promote
a more innovative culture – just about any ambition beyond that of providing a list of key
technologies or R&D priorities – then typically there will be other policy efforts intended to
address the same problem, introduced at more or less the same time. Disentangling
the effect of foresight from the effects of these other policies can be problematic. It is not
helped by the likelihood that actors in the system will want to gain credit for positive
outcomes, and we know of cases where initiatives that flowed directly from
recommendations of foresight exercises have been presented as emerging from quite
different sources (such as wise policymakers). There is equally the likelihood that actors will
seek to legitimize activities they had already decided on by claiming them to be results of the
foresight activity, if the latter has been positioned as the main source of guidance in the
particular policy arena.
Additionally, in pluralistic societies, other actors in the system may be introducing initiatives
of their own. It is not unknown for a foresight initiative from one part of government to provoke

responses from other parts of the system – a Ministry of Research and a Ministry of Industry
may have their own agendas to pursue, a research funding agency with a particular sectoral
or S&T focus may be concerned that a national exercise will not place sufficient emphasis on
its concerns – which may lead to competing exercises (which may be labelled as ‘‘key
technologies’’, ‘‘roadmapping’’, or ‘‘horizon scanning’’, while having very much a foresight
agenda)[7]. This may create confusion or even some delegitimation of one or both activities
– but even if this does not happen, then disentangling the consequences of the distinct
activities can be problematic. Expert informants may be able to clarify affairs, though they
may also be aligned with one or another side! In addition to various parts of government,
there may be other stakeholder organizations that are also active on the foresight front –
industry and employee bodies, third sector organizations, independent foundations funding
S&T or investigation into the implications of various technology developments.
In such instances, it could be hailed as a success for foresight that it has helped establish
a broader foresight culture – by triggering activities in other institutions – even if the way in
which this has happened was unintended; in the worst cases it may undermine some of the
other goals of the programme.

Beyond impacts
This highlights the inadequacy of the term ‘‘impact’’ for thinking about foresight
interventions. The term is best used for describing physical collisions between inanimate
objects (a ball of putty is impacted when it hits the floor, but it has not learned anything).
People may be impacted by cars, in terms of physical trauma. But people can learn, and
experience and learning are crucially involved in using foresight results (which is one reason
why the results are generally more meaningful to those who have been involved in the
process), and of course in active engagement in foresight activities. The consequences of
foresight activities will very much depend on the orientations of the ‘‘users’’ of the activity –
their existing appraisal of the topic, the effort they are prepared to put into understanding
alternative perspectives, the extent to which they can think beyond existing policy
perspectives. We began this essay by discussing the role of users in coproducing services,
and if we want to continue to talk of impacts, we should note that the impactee is a
coproducer (to a greater or lesser extent) of the impact (which may also be a greater or lesser
impact). Given that the impactee may also have played a role in setting up the foresight
activity in the first place, any unidirectional language will be too one-dimensional to capture
the effectivity of foresight.
Foresight practitioners may seek to engage users in the most productive way, but –
especially when dealing with large bureaucracies (possibly especially international
organizations) – they are institutionally limited in what they can accomplish[8]. Creating
communities of practice in Foresight, where professionals can be mutually supportive of
efforts to maintain the quality of activities in the face of restrictions that users and sponsors
may want to put into place, and be constructively critical of actual practice, is liable to help
matters. Evaluation can also play a role – if users and sponsors are prepared to learn the
lessons it tells them about the ways in which the framing of foresight can limit its ultimate
value. Thus, evaluation needs to focus less on simple ‘‘impacts’’, and much more on how the
outcomes of foresight have been coproduced by the various actors engaged (or that should
have been engaged) in the process. This will be what makes it dynamic foresight evaluation.

Notes
1. A wide-ranging guide to current service research is Maglio et al. (2010).
2. EFMN web site still operational at www.efmn.info/; EFP available at www.foresight-platform.eu/
(accessed 22 November 2011).
3. Which is definitely not the same thing as forecasting accuracy, though the accuracy of specific
forecasts may be part of this appraisal.

4. It should be noted that the UK Programme is often officially described as having involved three
‘‘cycles’’; the terminology is not precisely the same as that used here. See Miles (2005).
5. An interesting account of the use of expertise at various stages of the policy cycle is provided by
Barkenbus (1998).

6. The ‘‘Taylor Report’’ (Advisory Group on Nanotechnology, 2002) – heavily based on the report of a
Foresight process (documented as Miles and Jarvis, 2001) – was a strategy for nanotechnology in
the UK. However, the strategy was watered down in a policy entitled the Micro and Nano Technology
Manufacturing Initiative. (The ‘‘Micro’’ element of this title is, perhaps, suggestive of the industrial
interests involved.) The initiative was almost immediately criticised by a Parliamentary Committee as ‘‘an
inadequate response to the Taylor Report. The £90 million over six years on offer from DTI would
have been better spent on the establishment of one or two nanofabrication facilities, along the lines
of those recommended by the Taylor Report . . . at the cost of a clearly focussed strategy, based on
areas of existing UK strength . . . the pressures for short term financial returns and for a wide
regional distribution of funding will result in a disparate range of microtechnology facilities being
supported instead of the few world class nanotechnology centres necessary to raise the UK’s
nanotechnology profile . . . The education and training strategy called for by the Taylor Report has
still to be developed’’ (Science and Technology Committee, 2004, p. 3).

7. This element of the politics of foresight exercises is not well-documented, to my knowledge, though
it is often discussed among practitioners. The phrase ‘‘politics of foresight’’ returns startlingly few
Google hits, most to the same two essays (search performed 22 November 2011). In contrast, the
‘‘politics of expertise’’ is the focus of too vast a body of literature to attempt to summarise here.
8. See Loveridge (2009) for further reflections on what he calls institutional foresight.

References
Advisory Group on Nanotechnology (2002), New Dimensions for Manufacturing: A UK Strategy for
Nanotechnology, Department of Trade and Industry, London, DTI Pub 6182 2k/06/02/NP, URN 02/1034,
originally published online at www.dti.gov.uk/innovation/nanotechnologyreport.pdf (but removed);
now available at: www.innovateuk.org/_assets/pdf/taylor%20report.pdf
Barkenbus, J. (1998), Expertise and the Policy Cycle, Energy, Environment, and Resources Center
(EERC), The University of Tennessee, Knoxville, TN, available at:
www.oocities.org/hgheleji/en/publicpolicy/policy_cycle.pdf (accessed 22 November 2011).
Barré, R. (2008), ‘‘Foresight in France’’, in Georghiou, L., Cassingena, J., Keenan, M., Miles, I. and
Popper, R. (Eds), The Handbook of Technology Foresight, Edward Elgar, Cheltenham and
Northampton, MA.
Coates, J.F. (1985), ‘‘Foresight in federal government policymaking’’, Futures Research Quarterly,
Vol. 2, pp. 29-53.
EFMN (2009), Mapping Foresight: Revealing how Europe and Other World Regions Navigate
into the Future, Directorate-General for Research, European Commission, Brussels, EUR 24041 EN,
available at: ftp://ftp.cordis.europa.eu/pub/fp7/ssh/docs/efmn-mapping-foresight.pdf (accessed
22 November 2011).

Georghiou, L., Cassingena, J., Keenan, M., Miles, I. and Popper, R. (Eds) (2008), The Handbook of
Technology Foresight, Edward Elgar, Cheltenham and Northampton, MA.
Harzing, A.W. (2007), Publish or Perish, available at: www.harzing.com/pop.htm (accessed
22 November 2011).

Havas, A., Schartinger, D. and Weber, M. (2010), ‘‘The impact of foresight on innovation policy-making:
recent experiences and future perspectives’’, Research Evaluation, Vol. 19 No. 2, pp. 91-104.
Irvine, J. and Martin, B. (1984), Foresight in Science, Edward Elgar, Aldershot.
Knill, C. and Tosun, J. (2008), ‘‘Policy making’’, in Caramani, D. (Ed.), Comparative Politics,
Oxford University Press, Oxford.

Loveridge, D. (2009), Foresight, Routledge, New York, NY, and London.


Maglio, P.P., Kieliszewski, C.A. and Spohrer, J.C. (Eds) (2010), The Handbook of Service Science,
Springer, New York, NY.

Martin, B. and Irvine, J. (1989), Research Foresight, Edward Elgar, Aldershot.

Martin, B. and Johnston, R. (1999), ‘‘Technology foresight for wiring up the national innovation system.
Experiences in Britain, Australia, and New Zealand’’, Technological Forecasting and Social Change,
Vol. 60 No. 1, pp. 37-54.

May, J.V. and Wildavsky, A. (Eds) (1978), The Policy Cycle, Sage, London.

Miles, I. (2002), Appraisal of Alternative Methods and Procedures for Producing Regional Foresight,
report for the European Commission’s DG Research funded STRATA – ETAN Expert Group Action, CRIC,
Manchester, available at:
http://ec.europa.eu/research/social-sciences/pdf/appraisalof-alternative-methods_en.pdf

Miles, I. (2005), ‘‘UK foresight: three cycles on a highway’’, International Journal of Foresight and
Innovation Policy, Vol. 2 No. 1, pp. 1-34.

Miles, I. and Cunningham, P. (2006), Smart Innovation: A Practical Guide to Evaluating Innovation
Programmes, European Commission, Luxembourg, ISBN 92079-01697-0, available at:
ftp://ftp.cordis.lu/pub/innovation/docs/sar1_smartinnovation_master2.pdf

Miles, I. and Jarvis, D. (2001), Nanotechnology – A Scenario for Success in 2006, HMSO, Teddington,
National Physical Laboratory Report Number CBTLM 16, available at:
http://libsvr.npl.co.uk/npl_web/pdf/cbtlm16.pdf

Science and Technology Committee (House of Commons) (2004), Too Little Too Late? Government
Investment in Nanotechnology – Fifth Report of Session 2003-04, Report Volume I, The Stationery Office
Limited, London.

Wells, H.G. (1932), ‘‘Wanted professors of foresight!’’, BBC radio programme broadcast
19 November 1932, published in Futures Research Quarterly, Vol. 3 No. 1 (1987), pp. 89-91.

Further reading
Miles, I. (2010), ‘‘The development of technology foresight: a review’’, Technological Forecasting and
Social Change, Vol. 77 No. 9, pp. 1448-56.

About the author


Ian Miles is Head of the Laboratory for Economics of Innovation at the Research University –
Higher School of Economics in Moscow, and Professor at Manchester Institute of Innovation
Research, and at the Centre for Service Research, University of Manchester. Ian Miles can
be contacted at: ian.miles@mbs.ac.uk or imiles@hse.ru

