

Policy and Society 29 (2010) 77–94



Editorial
Reconsidering evidence-based policy: Key issues and challenges

Abstract
The evidence-based policy (EBP) movement has sought to promote rigorous analysis of service programs and policy options in
order to improve the quality of decision-making. Rigorous research findings are seen as useful and necessary inputs for policy-
makers in their ongoing consideration of policy development and program review. This article provides a critical overview of the
research literature on evidence-based policy in the context of government policy-making and program improvement. Particular
attention is given to the rational expectation that improved policy analysis will flow from a better evidence base, with consequent
improvements in the service delivery and problem-solving capacities of government agencies. This expectation is contrasted with
the practical limitations on rational processes typical in the real world of political decision-making, which is characterised by
bargaining, entrenched commitments, and the interplay of diverse stakeholder values and interests. Key issues for consideration
include the forms of evidence that are of greatest relevance or utility for decision-makers, and the most productive forms of
interaction between the producers and the users of research and evaluation findings.
© 2010 Policy and Society Associates (APSS). Elsevier Ltd. All rights reserved.

1. Introduction

This symposium brings together a range of contemporary analyses which illustrate the scope of recent work
exploring and assessing ‘evidence-based policy’ (EBP). The EBP movement represents both an important set of
professional practices and aspirations, and a political rhetoric seeking to legitimate forms of decision-making that
are alternatives to ideological or faith-based policy-making. A summary of the papers included in this special issue is
provided in the final section of this article. The authors in this symposium consider a range of examples of EBP-in-
action, drawn from OECD countries—the USA, the United Kingdom, Ireland, Canada, Australia, and beyond.
Although the range of examples is broad, and the wider research literature is growing rapidly, it is not yet possible to
make systematic comparisons and generalisations about the role of EBP across different kinds of political systems and
across different policy domains.
The symposium does not attempt to cover all areas of policy-relevant research, where the tools of rigorous analysis
and evaluation are sometimes brought to bear on controversial or innovative policy development, e.g. debates about
the evidence base for national security and foreign policy, bio-technology policy issues, and regional economic
innovation policy. The strength of the EBP movement has been evident mainly in the human services policy domains
(such as health, social services, education and criminal justice). The adoption of EBP approaches has proceeded more
strongly in those advanced democratic nations which have invested in policy-relevant research and in performance
information systems. It is possible that the EBP movement will remain largely confined for some time to these
countries, but it is noteworthy that some of the analytical techniques typical of EBP are already being applied in
technical and managerial aspects of policy-making in several of the rapidly developing nations.
This article comprises the following sections. Firstly, we consider the origins and purposes underlying the rise of the
evidence-based policy movement. Secondly, we consider the conditions and capabilities required for EBP to flourish
in a practical political context. Thirdly, we consider the debates over forms of knowledge and evidence, and note the
methodological issues concerning what counts as reliable and relevant evidence. Fourthly, we consider several sources
of challenge and critique concerning the ambitions and efficacy of EBP, the limitations of knowledge for decision-
making and political learning, and limitations arising in policy design and program implementation. Fifthly, we
consider the research literature on how to increase the positive impacts of research through promoting better
interaction and knowledge transfer between the research and policy communities. Finally, we briefly outline the papers
included in this symposium and note their significance for these debates.

2. The evolution and purpose of ‘evidence-based’ policy

The general proposition that reliable knowledge is a powerful instrument for advising decision-makers and for
achieving political success is a very old doctrine, linked to the exercise of effective statecraft and efficient governance
in early modern Europe. In later periods, the distinctively modern foundation of the knowledge/power relationship
has been closely associated with the rise of the empirical social sciences – sociology, economics, political science,
social psychology, etc. – which emerged in the 19th century and expanded very rapidly in the 20th century.
Historically, the underlying sense of purpose in the social sciences – their values base – has always involved the
enlightenment ethos of human improvement arising from greater understanding and therefore greater control
(Abrams, 1968; Head, 1982; McDonald, 1993; Heilbron, Magnusson, & Wittrock, 1998). In advanced Western
nations the social sciences developed rapidly in the postwar era of the 1940s and 1950s, exemplified by Keynesian
economics and by welfare-oriented social and educational planning. In the USA the ‘policy sciences’ championed by
Lasswell and his colleagues were clearly linked to the value bases of democracy and social well-being (e.g. Lerner &
Lasswell, 1951). Social scientists became deeply involved in the grand programs of social welfare, educational
reform and urban renewal in North America, Europe and Australia over several decades through to the 1960s and
1970s (e.g. Aaron, 1978; Innes, 2002; Meltsner, 1976; Nathan, 1988; Wilson, 1981; Wagner, Weiss, Wittrock, &
Wollmann, 1991; Solesbury, 2002).
However, the results were often disappointing, and some of the blame fell on the inadequacies of social research, as
well as the poor implementation and coordination capacities of government agencies (e.g. Pressman & Wildavsky,
1973). For example, in the field of prisons and corrections policy, Martinson (1974) concluded that ‘nothing works’
and that the research base was too thin to suggest reliable approaches. How could the social sciences become more
precise and reliable? By the 1960s a doctrine championing rigorous behavioural methodologies had emerged, insisting
that the social sciences should raise their ‘scientific’ standards. The use of quantitative data and experimental methods
was strongly advocated as a means of providing more precise and reliable evidence for decision-makers (e.g.
Campbell, 1969). Thus in these decades there emerged a focus on the potential benefits of greater rigour and greater
use of analytical methods. Many of the recent debates within the social sciences about the prospects for EBP stem from
the controversy about the centrality of rigorous quantitative methodologies, as noted in a later section below.
In considering the evolution of the evidence-based policy movement, it is useful to note the operation of both
demand and supply factors. The demand for rigorous social and economic research stems largely from government
agencies and legislative bodies which may be seeking information to report on performance and meet the needs of
decision-makers. Government-funded research has become, directly and indirectly, the most important source of
social science input to government. The perceived preferences of government bodies for certain types of research have
a large impact on how research is conducted. On the supply side, social and economic researchers have developed
research capacities that enable them to provide research findings on topics of interest to government. The topics and
formats are usually influenced by funders’ priorities. These research capacities have been consolidated over time in
substantial research centres. There are several organizational types in the research sector including universities,
consultancy firms, private sector think-tanks (Stone & Denham, 2004), and not-for-profit social welfare bodies (e.g.
Joseph Rowntree Foundation, 2000). Government agencies draw on these diverse external bodies for information and
advice; but they also maintain substantial units inside the public sector for gathering and processing social and
economic information as inputs to the policy process. In federal countries (such as Canada, Germany, Australia and the
USA), these public sector capabilities for information analysis and policy advice are replicated at national and sub-
national levels, leading to some challenging problems of coordination and even competition in some policy areas.
Evidence-based policy has become a catch-cry in recent decades in those countries which take seriously the quality
of policy analysis and the evaluation of program effectiveness. The EBP approach under discussion arguably has two
fundamental foundations in those countries. Firstly, a favourable political culture can allow substantial elements of
transparency and rationality in the policy process. This in turn may facilitate a preference by decision-makers for
increased utilization of policy-relevant knowledge. Secondly, the associated research culture will encourage and foster
an analytical commitment to rigorous methodologies for generating a range of policy-relevant evidence. Evidence-
based policy has been a growth area in policy research because of these demand and supply factors: the ‘needs’ of
government decision-makers for certain types of information about problems, programs and the effectiveness of
options; and the increased array of tools and techniques for analysis and evaluation of policy options that emerged over
recent years. EBP is attractive to professionals concerned with building robust information bases and improving the
techniques for analysis and evaluation.
The UK central government, under incoming Prime Minister Blair from 1997, attempted to develop a more
coherent approach to policy development, championing evidence-based policy as a major element in developing fresh
thinking and increased policy capability as required by a reformist government (UK Cabinet Office, 1999a, 1999b).
This approach became institutionalised through new public service units and cross-departmental teams working on
complex issues. It also was reflected in new skills and training initiatives. Importantly, the UK Economic and Social
Research Council committed substantial funds to broad, long-term research programs on evidence-based policy and
practice, thereby linking the spheres of policy-making, evaluation research, and improved professional practice in
health and social care. Issues targeted by the ESRC included the barriers and facilitators to research utilisation, and
programs to build capacities for evidence-based policy and professional practice. The diverse findings of these
research programs have played a crucial role in legitimating and institutionalising EBP approaches to applied research
(see Bochel & Duncan, 2007; Nutley, Walter, & Davies, 2007). To their credit, the researchers have managed to avoid
creating new orthodoxies, by carefully exploring the nuanced complexities of EBP and maintaining the need to
incorporate a wide range of methodologies and forms of knowledge.
Thus the debate has moved along. It began many years ago in relation to the uptake of more rigorous approaches to
the systematic gathering of evidence and its rigorous use as a foundation for decision-making in the public sector. The
debate has now broadened as a result of a large body of case-study material which allows greater sophistication in
teasing out the contextual factors that help to explain under what conditions, and for which clients, programs are
effective. These case studies also draw attention to some of the factors that may facilitate or hinder research
utilization by policy-makers and program professionals. Even more ambitiously, it is possible this rich material will
ultimately throw more light on the conditions for institutional learning.
The concept of ‘evidence-based management’ has also developed a considerable literature. This is based on the
common-sense notion that business strategies and directions that are underpinned by a solid information base will be
superior to navigating without reliable charts and compass. Business leaders are dependent on accurate evidence about
performance, standards and market conditions; successful businesses depend on reliable information and expert
dialogue rather than precedent, power and personal intuition (Argyris, 2000; Pfeffer & Sutton, 2006; Rousseau, 2006).
Even more notably, the concept of ‘evidence-based practice’ has become very widely adopted in the health and social
care sectors (Barnardo’s, 2000; Barnardo’s, 2006; Dopson & Fitzgerald, 2005; Roberts & Yeager, 2006; Simons, 2004;
Walshe & Rundall, 2001). For example, one major challenge has been to ‘translate’ the findings and lessons of
healthcare research about effective treatment into clinical guidelines used by healthcare professionals. However, the
challenge of extracting ‘lessons’ from research findings and adopting them successfully in professional practice entails
complex issues of education, relationships and collaboration. The ‘implementation’ of research findings in
organisational and professional settings is not a simple linear process:
. . .adopting and utilizing an evidence-based innovation in clinical practice fundamentally depends on a set of
social processes such as sensing and interpreting new evidence; integrating it with existing evidence, including
tacit evidence; its reinforcement or marginalization by professional networks and communities of practice;
relating the new evidence to the needs of the local context; discussing and debating the evidence with local
stakeholders; taking joint decisions about its enactment; and changing practice. Successful ‘adoption’ of an
evidence-based practice depends on all these supportive social processes operating in the background. (Ferlie,
2005:183)
The reputation of clinical medicine in pioneering ‘evidence-based’ approaches remains so powerful that much of
the broader ‘evidence-based’ social evaluation literature begins by asking whether these rigorous health-based
approaches are applicable to other areas of policy, such as social welfare, schooling, juvenile justice, community-
health and natural resource management. It is argued that the uptake of EBP approaches is facilitated by several
factors, outlined in the section below, but that significant variations across policy domains remain important in relation
to the dynamics of policy change and program implementation.

3. Capabilities and conditions required for EBP

One of the leading strategic documents promoting EBP makes a strong case for the primacy of gathering and
analysing good information from several sources:
This Government’s declaration that ‘what counts is what works’ is the basis for the present heightened interest in
the part played by evidence in policy making . . . policy decisions should be based on sound evidence. The raw
ingredient of evidence is information. Good quality policy making depends on high-quality information, derived
from a variety of sources—expert knowledge; existing domestic and international research; existing statistics;
stakeholder consultation; evaluation of previous policies; new research, if appropriate; or secondary sources,
including the internet. (UK Cabinet Office, 1999b: ch. 7)
Good information is one of the foundations for good policy and review processes (Shaxson, 2005).
However, the dynamics of policy-making are deeply affected by institutional, professional and cultural factors, which
will differ across policy domains and issues. Evidence-based interventions associated with the EBP approach have
been most carefully pursued in areas such as education (e.g. Mosteller & Boruch, 2002; Zigler & Styfco, 2004), social
welfare (e.g. Cannon & Kilburn, 2003; Roberts, 2005), criminology (e.g. France & Homel, 2007; Sherman,
Farrington, Welsh, & MacKenzie, 2006; Welsh, Farrington, & Sherman, 2001), healthcare (e.g. Lemieux-Charles &
Champagne, 2004; Lin & Gibson, 2003; Cochrane Collaboration, 2010), and environment/natural resource
management.
The evidence-based policy (EBP) movement has sought to promote rigorous analysis of service programs and
policy options, thereby providing useful inputs for policy-makers in their ongoing consideration of policy
development and program improvement. However, it is now clear that the early hopes of large and rapid improvements
in policies and programs, as a result of closer integration with rigorous research, have not materialised as readily as
anticipated. The new ‘realism’ emerging from recent research suggests that while evidence-based improvements are
both desirable and possible, we cannot expect to construct a policy system that is fuelled primarily by objective
research findings. Creating more systematic linkages between rigorous research and the processes of policy-making
would require long and complex effort at many levels. This reconsideration concludes that varieties of evidence can
inform policy rather than constitute a systematic foundation for the policy process.
There are several reasons for this. Firstly, a strong research base, with rigorous research findings on key issues, is
simply not yet available in many areas for informing policy and program managers. Moreover, there is a growing
recognition of the paradox that the more we discover about social issues, the more we are likely to become aware of the
gaps and limitations of our knowledge. Thus the repeated call by social advocacy groups to move from a focus on
‘knowledge’ (understanding the problem) to ‘action’ (doing what needs to be done) may be premature in some areas.
Secondly, policy managers and political leaders are often motivated and influenced by many factors besides
research evidence, as discussed in the following sections. The mere availability of reliable research does not ensure its
subsequent influence and impact. Political leaders are often preoccupied with maintaining support among allies,
responding to media commentary, polishing leadership credentials, and managing risks. Policy managers are as much
motivated by perceptions about external support (stakeholders and partner organisations) as by the depth of their
systematic evidence base for decision-making.
Thirdly, even where reliable evidence has been documented, there is often a poor ‘fit’ between how this information
has been assembled by researchers (e.g. scientific reports) and the practical needs of policy and program managers.
Researchers themselves are not often adept at packaging and communicating their findings, and many prefer to remain
distant from direct engagement with public debates around key issues.
Fourthly, the value of professional knowledge, as distinct from experimental research-based knowledge, is
increasingly being recognised as crucial. This is especially so in social care domains where robust and wide-ranging
experimental knowledge is unlikely to emerge. How should policy-makers and service providers weigh the relative
merits of relying on the precise findings of systematic/scientific research and relying on the practical experience of
professional service providers? The latter are able to gain valuable insights through grappling with complex problems
in field situations; and they are well positioned to understand local nuances, including the practical need for adjusting
general objectives and procedures for local conditions.
Fifthly, EBP seems to have less standing and relevance in those areas where the issues are turbulent or subject to
rapid change. EBP seems to ‘work’ best where it is possible to identify an agreed and persuasive body of research
evidence and to position it at the centre of decision-making. However, evidence is deployed under different conditions
in turbulent policy fields, marked by value-conflict, rapid change and high risk. Here, evidence-based arguments are
likely to become politicized, and evidence will be used for partisan purposes. In complex value-laden areas – such as
bio-technology applications in healthcare (e.g. Mintrom & Bollard, 2009), or policy responses to climate change –
rational and reasonable deliberative processes can become side-tracked by heated controversy. To the extent that
research findings are widely used as weapons in strongly emotive debates, it may be only a short step to accusations
that most research on these matters is biased and lacks objectivity.
Taking these concerns into account, it appears there are four crucial enabling factors which both underpin modern
conceptions of evidence-based policy and which need to be promoted as a basis for constructing a more robust
evidence-based (or at least evidence-informed) system of decision-making:

(1) High-quality information bases on relevant topic areas.
Governments have been the main investors in the collection and dissemination of systematic data on major
economic, social and environmental trends and issues. For example, among the more advanced nations in the
OECD, a commitment to providing good information and undertaking rigorous analysis of trends and issues has
been seen as inherent in good governance and democratic accountability. Moreover the endorsement of
international agreements is often predicated on undertakings to comply with sophisticated reporting on
performance. In advanced nations, the principle of transparency reinforces the need for investing in high-quality
and accessible information. But building information bases and investing in capability can be expensive.
(2) Cohorts of professionals with skills in data analysis and policy evaluation.
Data needs to be converted into useful information about causal patterns, trends over time, and the likely effects
of various policy instruments. Within the government sector, this requirement raises serious
challenges concerning the recruitment, training and retention of skilled analytical staff (Meltsner, 1976). There
have been internal debates within government agencies about which skills are most vital for policy work, and about
the appropriate balance between ‘in-house’ skills and reliance on external advice from policy-oriented consultancy
firms and independent think-tanks outside the public sector, whose proliferation has sharpened the ongoing debates
about the quality and timeliness of advice. There has been surprisingly little research about how policy staff
actually do their jobs (but see the recent work of Colebatch, 2006b; Dobuzinskis, Howlett, & Laycock, 2007;
Hoppe & Jeliazkova, 2006; Page & Jenkins, 2005; Radin, 2000). The very nature of policy work is disputed by
some researchers, some of whom argue that policy analysis is a professional craft with a broad skills base (e.g.
Parsons, 2002, 2004), while others give precedence to statistical analytical tools such as cost-benefit analysis (e.g.
Argyrous, 2009) or the use of systematic reviews of evidence (e.g. Petticrew & Roberts, 2005).
(3) Political and organisational incentives for utilising evidence-based analysis and advice.
Governmental decision-making processes are not automatically geared to generating and using evidence-based
analysis; this requires attention to the organisational climate and culture surrounding the production of analysis
and advice. Some political leaders are populist and anti-intellectual, placing major obstacles in the way of
promoting EBP. On the other hand, some leaders of reformist governments offer the prospect of more transparent
and rigorous analysis to underpin open debate on policy improvement. Thus, echoing Tony Blair in the UK in
1997, the incoming Prime Minister of Australia in 2008 offered a political commitment to
a robust, evidence-based policy making process. Policy design and policy evaluation should be driven by
analysis of all the available options, and not by ideology. . . . In fostering a culture of policy innovation, we
should trial new approaches and policy options through small-scale pilot studies. . . . Policy innovation and
evidence-based policy making is at the heart of being a reformist government. (Rudd, 2008)
The operational implications of such general statements are seldom clear and remain to be tested. One such test
might be the extent to which government organisations invest in research and evaluation functions over a period of
time. This would provide an indication of whether organisational culture supports the systematic evaluation of
initiatives and interventions. Incentives and protections are perhaps also necessary to counter the risk-averse
behaviour of civil servants and other advisors, who may be wary that the balance of evidence will be critical of
cherished policies; the messages arising from EBP will therefore sometimes be unwelcome. Rigorous evaluation of
program initiatives can be politically risky. Governments do not relish being exposed to strong public criticism
when program outcomes are disappointing or when pilot schemes produce very weak results. Building a culture of
evaluation is crucial, especially if this is understood as a culture of learning; and therefore ideally should be
institutionalised as good practice with bipartisan support across the political divide.
(4) Substantial mutual understanding between the roles of policy professionals, researchers and decision-makers.
Research about EBP considers both the ‘supply side’ (the work of researchers or analysts) and the ‘demand
side’ (the behaviour of policy and program managers in using information). All these groupings need to adapt their
activities and priorities if research utilisation is to be improved. Researchers need to work on important issues, ask
the right questions, and provide relevant findings in a well written and accessible way. Additionally, where
possible, the implications of the research for policy and practice might be noted. On the other side of the ledger,
policy managers need to become more aware of the value of relevant research, become more adept at accessing and
using such research, understand both the strengths and limitations of the evidence base, and know how to balance
the competing perspectives of research, politics and management (Nutley et al., 2007: ch. 8).

4. Forms of knowledge and evidence

The quest for evidence-based policy is premised on the use of rigorous research and the incorporation of these
findings into governmental processes for policy and program review and more broadly into public policy debates.
Seeking rigorous and reliable knowledge, and promoting its utilisation within the policy process, are core features of
the EBP approach. The primary goal is to improve the reliability of advice concerning the efficiency and effectiveness
of policy settings and possible alternatives. The quest for rigour is central, and there is a large literature providing
guidance on methodological questions of data validity, reliability and objectivity. Specialists in the design of
information collection and statistical analysis have a special interest in data quality for use in applied research on
specific problems or programs. But other voices in the EBP debates, noted below, argue that the circle of relevant
evidence needs to be broadened to reflect other important types of knowledge and information.
One key issue is the significance of ‘qualitative’ evidence, e.g. evidence concerning the values, attitudes and
perceptions of stakeholders and decision-makers. The importance assigned to qualitative evidence differs across the
disciplinary traditions of the social sciences. Some disciplines (e.g. social anthropology, history) generally centre upon
the ‘experience’ of participants – meanings, motives, contexts – rather than seeking behavioural generalisations
(which are more typical of quantitative approaches relying on economic and social statistics). Program evaluation
professionals tend to use mixed methods as appropriate, and bridges are being built across the qualitative/quantitative
divide. Mixed methods are increasingly championed by analysts and evaluators who are attempting to explain
complex problems and assess complex interventions (e.g. Woolcock, 2009).
The most fiercely debated notion in EBP is the claim, drawn from the medical research literature, that there is an
‘evidence-hierarchy’, i.e. that forms of evidence should be ranked in terms of the methodological rigour underlying project
design, data collection and causal analysis. Scientific experts may reasonably disagree about methods and instruments.
The fundamental question is the degree of trust in the reliability of research findings. Whereas standards of proof and
rules on the admissibility of testimony in courts of law are often linked to standards of fairness, the standards for EBP
are derived from scientific methods for establishing data validity and testing explanatory rigour. Fifty years ago, debate
on how to use ‘empiricist’ methods centred on whether the big issues were being either ignored or trivialised through a
narrow focus on ‘researchable’ issues (e.g. Mills (1959) pointed to the dangers of fragmented and abstracted
empiricism). In more recent decades the argument for rigour was strongly advanced by those who claim that
randomised controlled trials (RCTs), first pioneered in medical research to appraise clinical interventions, can and
should be applied in the social sciences (Boruch & Rui, 2008; Campbell, 1969; Leigh, 2009; see also Coalition for
Evidence-Based Policy, 2006, and the Campbell Collaboration, 2010). A softer variant is the claim that single-study
case findings are misleading, and that a better understanding of causes and consequences emerges from ‘systematic
reviews’ of all available research, after taking into account the rigour of the methods followed in each study (Petticrew,
2007; Petticrew & Roberts, 2005). Government agencies have generally been wary of prescribing a single preferred
methodology, and UK government agencies have softened their apparent initial preference for randomised controlled
trials. However, the largest concentrations of professional expertise in program assessment and policy analysis are in
the USA, where generations of analysts have refined the methodologies for evidence-based assessment of programs
and policy options. The mandating of particular forms of program appraisal as a condition of program funding has
proceeded further in the USA than most other nations (Boruch & Rui, 2008; Haskins, 2006), although the practical
conduct of evaluations remains diverse and somewhat fragmented across agencies and levels of government.
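As a purely illustrative aside, the following minimal sketch (in Python, with entirely invented data and effect sizes, not drawn from any of the studies cited here) shows the logic behind the two techniques just referred to: estimating a treatment effect from a single randomised trial, and pooling several such estimates with inverse-variance weights in the manner of a simple fixed-effect systematic review.

import math
import random

random.seed(1)

def simulate_trial(n_per_arm, control_mean, treatment_effect, sd):
    # Draw outcomes for the control and treatment arms of one hypothetical RCT.
    control = [random.gauss(control_mean, sd) for _ in range(n_per_arm)]
    treated = [random.gauss(control_mean + treatment_effect, sd) for _ in range(n_per_arm)]
    mean_c = sum(control) / n_per_arm
    mean_t = sum(treated) / n_per_arm
    var_c = sum((x - mean_c) ** 2 for x in control) / (n_per_arm - 1)
    var_t = sum((x - mean_t) ** 2 for x in treated) / (n_per_arm - 1)
    # Difference in means and its standard error.
    return mean_t - mean_c, math.sqrt(var_c / n_per_arm + var_t / n_per_arm)

# Three hypothetical trials of the same intervention in different sites (all numbers invented).
studies = [simulate_trial(200, 50.0, 2.0, 10.0) for _ in range(3)]

# Fixed-effect (inverse-variance) pooling: the simplest systematic-review estimator.
weights = [1.0 / se ** 2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

for i, (est, se) in enumerate(studies, 1):
    print("Trial %d: effect = %.2f (SE %.2f)" % (i, est, se))
print("Pooled effect = %.2f, 95%% CI %.2f to %.2f"
      % (pooled, pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se))

Real systematic reviews involve much more than this (appraisal of study quality, heterogeneity, publication bias), but the sketch conveys why advocates place pooled experimental estimates at the top of the evidence hierarchy.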
The counter-arguments against the primacy given to RCTs are ethical, practical and methodological. They turn on
(1) the difficulty of implementing RCTs in sensitive areas of social policy; (2) the difficulty of transplanting quasi-
experimental results to the complex interactive world of large-scale program applications (Deaton, 2009); and (3) the
tendency for RCTs to downplay the experience of professionals with field experience who have contextual knowledge
of what works under what conditions (Pawson, Boaz, Grayson, Long, & Barnes, 2003; Pawson, 2006; Schorr, 2003;
Schorr & Auspos, 2003). It is also probable that politicians, policy managers, scientists, and service users have very
different perspectives on what kinds of evidence are regarded as most trustworthy (e.g. Glasby & Beresford, 2006). A
key challenge is therefore how best to utilise the practical knowledge of professional service delivery managers and the
experience of service users.
Policy decisions in the real world are not deduced from empirical-analytical models, but from politics and practical
judgement. There is an interplay of facts, norms and preferred courses of action. In the real world of policy-making,
what counts as ‘evidence’ is diverse and contestable. The policy-making process in democratic countries uses the
rhetoric of rational problem-solving and managerial effectiveness, but the policy process itself is fuzzy, political and
conflictual. As Lindblom noted many years ago:
Instead of reaching ‘solutions’ that can be judged by standards of rationality, policy making reaches settlements,
reconciliations, adjustments and agreements that one can evaluate only inconclusively by such standards as
fairness, acceptability, openness to reconsideration and responsiveness to a variety of interests. And analysis in
large part is transformed from an evaluative technique into a method of exerting influence, control and power
which we call partisan analysis. (Lindblom, 1980: 122)
Lindblom perhaps goes too far towards discounting the EBP vision of science seeking to inform (if not shape) the
exercise of political power. But Lindblom is certainly correct in asserting that the policy process comprises many
activities in which scientific rigour rubs up against power, interests and values. In many advanced countries, systematic
research (scientific knowledge) does provide an important contribution to policy-making. Such research and analysis
is undertaken in many institutional contexts—including academia, think-tanks, consultancy firms, and large units
within the government agencies themselves. In particular, the role of rigorous evaluation has gradually become more
substantial in the policy and review process of many countries, in conjunction with the increased capability of
evaluation practitioners to assess the effectiveness of complex interventions.
But rigorous scientific analysis in this sense constitutes only one set of voices in the larger world of policy and program
debate. Several other types of knowledge and expertise have legitimate voices in a democratic society. It has been argued
that these other ‘lenses’ (Head, 2008a) or knowledge cultures (Shonkoff, 2000) may include the following:

 The political knowledge, strategies, tactics and agenda-setting of political leaders and their organisations set the ‘big
picture’ of priorities and approaches. The logic of political debate is often seen as inimical to the reasoned use of
objective policy-relevant evidence. Ideas, arguments and values are instead mobilised to support political objectives
and to build coalitions of support; ‘spin’ may become more important than objective tests of effectiveness and
accountability.
 The professional knowledge of service delivery practitioners and the technical knowledge of program managers and
coordinators are vital for advising on feasibility and effectiveness. They have crucial experience in service delivery
roles, and field experience in implementing and monitoring client services across social care, education, healthcare,
environmental standards, etc. They wrestle with everyday problems of effectiveness and implementation, and
develop practical understandings of what works (and under what conditions), and sometimes improvise to meet local
challenges (Lipsky, 1980; Mashaw, 1983; Schon, 1983). The practical details of how programs are implemented in a
variety of specific settings are essential components for a realistic understanding of policy effectiveness (Durlak &
DuPre, 2008; Hill & Hupe, 2002; Nutley & Homel, 2006).
 In addition to these institutional sources of expertise, the experiential knowledge of service users and stakeholders is
vital for ‘client-focused’ service delivery. Ordinary citizens may have different perspectives from those of service
providers and program designers; their views are increasingly seen as important for program evaluation in order to
ensure that services are appropriately responsive to clients’ needs and choices or that standards are set realistically
(Coote, Allen, & Woodhead, 2004).

Thus, rigorous analysis can often make an important contribution but is only part of the story. A related issue is
where in the policy development and evaluation process ‘EBP rigour’ can be most influential. The initial
assumption might be that policy-relevant research on ‘what works under what conditions’ would
be linked into the evaluation phase of the policy cycle (e.g. Roberts, 2005). However, the notion of a rational and
cyclical process of policy development, implementation and review does not correspond closely with political realities
(Colebatch, 2006a). A more realistic model would allow for reiteration of steps with continual processes of
consultation, gathering evidence, and assessing options. Case studies of policy change and review, including
international studies, do not provide general answers but illustrate the diverse contexts for policy analysis both inside
and outside the public bureaucracies (e.g. Edwards, 2001).
Rigorous evidence could therefore be relevant at several points in the development and review processes. But not all
matters are genuinely open to rethinking by decision-makers at a given point in time. Some areas of policy are tightly
defined by government priorities, electoral promises, and ideological preferences. Here, the scope for revised
approaches may be strictly limited even when there is major new evidence about ‘what works’. On the other hand,
some program areas achieve a more settled character over a period of time (Mulgan, 2005). Evidence-based arguments
about ‘fine-tuning’, based on careful research about effectiveness, might be more likely to gain traction in those areas
that are away from the political heat. By contrast, on matters of deep controversy, research findings are more likely to
be mobilised as arrows in the battle of ideas, and sometimes in ways that the original authors may find distasteful.

5. Criticisms, reservations and challenges

The rationalist and optimistic view of EBP aims at achieving cumulative knowledge, as a basis for iterative policy
learning in a democratic policy-making process which takes close account of research and evaluation findings (Lerner
& Lasswell, 1951). However, there are few contemporary analysts who now adopt such a rationalist position. The
recent trajectory of EBP in the United Kingdom illustrates some of the current ambiguities. With the demise of a long-
serving conservative government in 1997 and the launching of reformist policy frameworks foreshadowing evidence-
based policy-making (UK Cabinet Office, 1999a, 1999b), the increased respect for research and evaluation was
generally welcomed by policy researchers (e.g. Davies, Nutley, & Smith, 2000; Walker, 2001). But some
commentators came to see the new approach as ‘technocratic’ (Clarence, 2002), owing to its implicit preference for
quantitative precision and technical expertise over other forms of professional knowledge, and its tacit opposition to
inclusive deliberation as a basis for reform (e.g. Parsons, 2002, 2004). Moreover, the EBP approach was seen as
operating within a particular paradigm (e.g. new public management notions of efficiency) as defined by government
rather than encouraging new thinking and operating through broad and open debate (Marston & Watts, 2003).
Within the policy bureaucracies the early champions of a rigorous quantitative approach to EBP quickly found it
necessary to broaden their view to accommodate the plural forms of knowledge that are relevant to policy analysis (e.g.
Davies, 2004), and there has been a degree of rapprochement on methodological issues. Thus the UK central agencies
have recognised the importance of qualitative studies, if conducted with appropriate methodological rigour (UK
Cabinet Office, 2003, 2008; UK Treasury, 2007). Large-N qualitative studies are increasingly seen as open to the
rigorous techniques of quantitative analysis. Longitudinal panel studies have also become an increasingly rich source
of several types of evidence. Understanding complex issues and associated interventions will require mixed methods
that go beyond computer-generated data models.
A second area of concern has centred on the importance of long-term or durable governmental commitments both to
program implementation and to evaluation processes. Ideally, EBP is not about quick fixes for simple tactical issues,
but centres on more complex issues where research and evaluation findings can enhance understanding of patterns,
trends and causes. A recent report found that up to 60% of the research budget of UK government departments is
devoted to ‘short-term’ projects to meet current political and administrative demands (British Academy, 2008: 26).
Political leaders often insist that measurable results be available in a short time-frame, leading to greater focus on
visible activities rather than building the foundations for sustainable benefits. Moreover, governments have a propensity to
change a program before outcomes have been assessed, so that any evaluation ends up measuring moving targets
with variable criteria of success.
In a complex policy intervention, clear program outcomes may not emerge for several years. As Brooks-Gunn
(2003) reminds us, there are no silver bullets or magical solutions in the hard business of social policy, and persistence
over time using a wide range of expertise is usually required. Fortunately there are recent examples of solid
commitment to program stability and to long-term evaluation. In the UK, for example, a complex program targeting
early childhood development in disadvantaged communities underwent structural changes to improve coordination
but the associated evaluation project has managed to provide an impressive series of assessments over several years
(Percy-Smith, 2005; Belsky, Barnes, & Melhuish, 2007; Melhuish, Belsky, & Barnes, 2009). One of the positive trends
in some countries has been more comprehensive investment in policy-relevant research and stronger commitment to
evaluation, including making better use of the products of rigorous independent evaluations. When these evaluations
have an educative and learning purpose (Edelenbos & Van Buuren, 2005; Head, 2008b), rather than simply a
performance auditing role, the contribution to EBP approaches is likely to be much enhanced.
The third challenge is how EBP can deal with large and complex problems. In relatively simple issues where all the
variables can be specified and controlled, methodological rigour is likely to be tight, with some confidence that causal
factors can be clarified. But in programs with multiple objectives, or where the clients/stakeholders are subjected to
many sources of influence beyond the scope of the program, the challenge of accurate understanding is compounded.
Thus, the way in which a problem is framed has implications both for scientific validity and for political management.
The scales or units of analysis (e.g. individuals, households, organisations and suburbs) and the degree of issue-
complexity (one problem or a nested series of related issues) strongly influence how policy problems are framed,
debated and researched. This poses diverse challenges for evidence, analysis and recommendation. The political
requirement for solutions will sometimes encourage rapid generic responses, rather than the hard road of detailed field
research with careful evaluation of program trials and possible larger-scale implementation in due course.
Some of these large problems have been termed ‘wicked’ owing to their resistance to clear and agreed solutions.
These systemic and complex problems are marked by value divergence, knowledge gaps and uncertainties, and
involve complex relationships to other problems (APSC, 2007; Head, 2008c). Traditional bureaucratic structures, even
when energised by the sophisticated managerial approaches of outcome-focused governments, cannot successfully
tackle these intractable problems through standard performance management approaches. This is partly because
complex problems are unlikely to be ‘solvable’ through a single policy instrument or ‘magic bullet’. Policy-makers
might be wise to adopt multi-layered approaches which may look ‘clumsy’ rather than elegant (Verweij & Thompson,
2006). Moreover, a key feature of complex social problems is that there are underlying clashes of values, which are
often not well identified and need to be addressed (Schon & Rein, 1994). Policy analysts confronted by complex
problems have tended to drift into one of three camps, centred on what we can describe as incentives, conflict
resolution, and increased coordination.
The first approach to complexity relies on identifying levers or instruments (APSC, 2009a) which can influence
behaviour at arm’s length, through changing the patterns of incentives and constraints. Examples include the so-called
‘new paternalism’ or conditional welfare (e.g. Mead, 1997), whereby disadvantaged clients are required to take
available job and training opportunities as a condition of social security payments. Behavioural analysis based on
incentive theory (e.g. Thaler & Sunstein, 2008) argues that judicious ‘nudging’ of citizens through incentives and
penalties can potentially produce positive outcomes, with less need for intensive long-term case-management or other
expensive oversight and compliance mechanisms. Behavioural change arises indirectly from the ‘choice architecture’
embedded in the program design.
The second approach, by contrast, is to see complexity as an opportunity to address underlying sources of
disagreement. Value conflicts are identified as matters for dialogue, mediation and conflict reduction
(Hemmati, 2002; Lewicki, Gray, & Elliott, 2003). A relational conception of the policy process, as dialogue-based
deliberation and debate, is at the core of this viewpoint (Fischer, 2003; Hajer & Wagenaar, 2003). The third approach
to tackling complexity is to upgrade the capacity for coordination and integration across diverse organisations and
problem areas. However, there is a vast research literature demonstrating that coordination and integration are widely
invoked but very difficult to implement effectively (e.g. Bardach, 1998; Bogdanor, 2005; Bruner, Kunesh, & Knuth,
1992; Pressman & Wildavsky, 1973). The managerial challenges and the knowledge challenges across the various
organisations are massive (Weber & Khademian, 2008; APSC, 2009b). Policy problems that appear to require more
effective program integration cannot be resolved just by better information. For example, in a study of integrated land
management in Canada, Rayner and Howlett (2009) emphasise that there may be structural challenges and policy
legacies that require urgent attention. They conclude that:
Responsive policy-making for large-scale complex policy issues such as ILM requires both sophisticated policy
analysis as well as an institutional structure which allows problems to be addressed on a multilevel and multi-
sectoral basis.
Thus, each of these three approaches to tackling complexity may pose major challenges for understanding causal
patterns and for assessing related program interventions, which are themselves the fundamental basis for EBP.

6. Enhanced impact through knowledge transfer and interaction

Evidence-based policy requires that research and evaluation findings are effectively communicated and available
to be utilised by policy and program managers (Huberman, 1994; Radaelli, 1995). There is an extensive literature
suggesting that even where the quality of research is high, the apparent overall level of impact or utilisation by
decision-makers is problematic or disappointing (Amara, Ouimet, & Landry, 2004; Commission on the Social
Sciences, 2003; Landry, Amara, & Lamari, 2001; Landry, Lamari, & Amara, 2003; Nelson, Roberts, Maederer,
Wertheimer, & Johnson, 1987; Nutley et al., 2007; Rich & Oh, 2000; Shulock, 1999). To test the significance of
these claims we need to clarify the conception of ‘influence’ that is being deployed, and whether the anticipated
‘flow’ of influence from research into policy is realistic. It is convenient to follow the typology suggested by Weiss
(1978, 1979, 1980), who distinguished between direct (instrumental) uses or impacts of research, which may
involve the direct adoption/utilisation of findings (especially when research has been tailored to address a specific
issue identified by decision-makers); and indirect forms of influence via research contributions toward enhanced
understanding of social processes or through new frameworks for re-conceptualising processes, problems or
solutions. Weiss also drew attention to the political or symbolic uses of research findings, when they are taken up as
weapons in partisan debate. Most impacts are indirect in the above sense. Recent literature reviews (Boaz,
Fitzpatrick, & Shaw, 2008; Walter, Nutley, & Davies, 2003; Walter, Nutley, & Davies, 2005;) have confirmed this
general pattern, and added useful detail about practical strategies for improving the interaction between the
research, policy and professional practice sectors.
A report commissioned by the British Academy collected evidence on perceptions about the research/policy
interface. Based on interviews with researchers, policy-makers and others, the following lengthy list of possible
contributions by researchers to social policy was noted:

 acting as special government advisers;
 leading or contributing to major national enquiries, or to the work of various standing commissions;
 raising public awareness of key problems and issues;
 providing the answers to specific questions through, for example, modelling or evaluations;
 providing objective analysis of what works and what does not;
 monitoring and analysing social trends;
 providing independent scrutiny of government initiatives and developments;
 offering solutions to help improve and refine current policy initiatives;
 enhancing the effective delivery of public services;
 challenging current paradigms, helping to identify new approaches, concepts and principles. (British Academy,
2008: 20).
Researchers who seek engagement with the policy process generally desire substantive influence, and this can
be attempted in a variety of roles as experts, consultants and advisors (Jackson, 2007; Pollitt, 2006). But many
academics are disappointed to find that policy-makers are highly constrained and that they do not appreciate
research which is contentious or inconsistent with current orthodoxies (British Academy, 2008: 3). For those
researchers who engage closely with policy bureaucrats, perhaps through some form of contract research, there
are possible dangers of dependency and policy ‘capture’, i.e. tacit politicisation in the conduct and scope of
research projects. However, given the cultural context of academic independence, many researchers prefer to
avoid co-option and close engagement. These researchers include both radical critics of government policy who
cherish their freedom to question fundamentals (Horowitz, 1975), as well as conservatives and sceptics who
query the capacity and wisdom of governments to ‘solve’ social problems. According to Smith (2009), in an
ideal world academics would be well funded by public agencies to conduct independent research, subject to
rigorous peer review, concerning ‘the success of existing programs’. But, he notes, in the real world this is very
unlikely.
The value-proposition for EBP is that policy settings can be improved on the basis of high-quality evidence. But
how does reliable knowledge actually flow between producers and users? How strong are the channels and
relationships that improve these flows? Considerable research on such matters has been undertaken across a range of
applied social policy areas (e.g., see France & Homel, 2007; Ferlie, 2005; Jones & Seelig, 2005; Lin & Gibson, 2003;
Mosteller & Boruch, 2002; Saunders & Walter, 2005). The issues involved in creating useful forms of mutual
interaction and influence between the policy and research sectors have been well surveyed by Nutley et al. (2007: ch. 7)
and Davies, Nutley, and Walter (2008). Detailed studies suggest that the metaphor of knowledge ‘transfer’ or
knowledge ‘transmission’ is too linear and simplistic. The challenge is to consider how to construct better ‘learning’
processes, together with establishing new channels and forums for mutual understanding and influence (Meagher,
Lyall, & Nutley, 2008; Tsui, 2006).
However, the most useful ways to enhance interaction need to be carefully considered in the light of the known
systemic obstacles to the wider impact of EBP approaches. These include:

(1) The politicised context of governmental commitments and decision-making.
As noted above, political leaders and program managers are unlikely to fund, or even notice, research projects
whose findings ‘encourage policy-makers to jettison dearly held but failing policies in the face of overwhelming
evidence’. The realities of research funding and the realities of political institutions point towards a preference for
research that ‘reinforces desired policies’ (Smith, 2009: 96–7). One of the criteria implicitly used by policy
analysts is the ‘political feasibility’ of specific options as well as their technical and economic feasibility
(Meltsner, 1972). Thus, in many cases where rigorous research can be undertaken on contentious issues, the
‘lessons’ will be seen as unwelcome in the eyes of some political leaders. For example, political leaders
campaigning on law-and-order issues have chosen to ignore systematic reviews (Petrosino, Petrosino, & Buehler,
2003) which demonstrate that young at-risk offenders are not ‘scared straight’ by being required to visit
corrections facilities.
(2) Public officials often have a low awareness of research and evaluation findings.
It is clear that key personnel have to be aware of research findings if the EBP process is to be robust and
energised. It is important to recognise the substantial role of ‘in-house’ research and evaluation activities inside
public agencies, as well as highlighting the findings of ‘external’ researchers in academic and independent bodies.
A number of ‘bridging’ strategies have been developed in recent years to ensure that both the researchers and
policy-makers learn more about each other’s business as a basis for better utilisation of policy-relevant research by
government agencies (see Boaz, Fitzpatrick, et al., 2008; Bochel & Duncan, 2007; Bowen & Zwi, 2005; Lavis
et al., 2002; Lavis, Robertson, Woodside, McLeod, & Abelson, 2003; Lomas, 1990; Lomas, 2000; Nutley et al.,
2007).
(3) The research sector lacks an appreciation of decision-makers’ needs for well targeted and well communicated
findings.
In the 1970s, Scott and Shore (1979) concluded that a great deal of social analysis undertaken by applied
researchers tended to exhibit two basic weaknesses. Either the research was fragmented and lacked ‘discernible
policy implications’; or alternatively, where recommendations were suggested, these were not well calibrated to
the practical and political needs of decision-makers (1979: ch. 1). These problems are not new and the possible
solutions are still not clear. A report by the British Academy (2008) cited another report listing reasons given by
policy-makers as to why they avoided or ignored certain types of external research. These included the perception
that:
 research was not highly valued or well communicated within their organisation;
 internally conducted or commissioned research was more likely to be seen as relevant;
 research was not timely or relevant to users’ needs;
 research was less likely to be used when findings were controversial or upset the status quo;
 policy-makers might prefer to rely on other sources of information. (British Academy, 2008: 27).

Thus the channels through which rigorous evidence might influence policy-making are somewhat fragile, and are
readily disrupted by political and organisational pressures. Hence, the knowledge/communication channels need
specific care and attention both to understand their characteristics and to improve their outcomes (Lavis et al.,
2003; Lewig, Arney, & Scott, 2006; Nutley et al., 2007; Ouimet, Landry, Ziam, & Bedard, 2009). Supply-side
provision of good research about ‘what works’ is not enough. Potential users of research findings will pay close
attention only if they are more familiar with these potential inputs, understand the advantages and limits of the
information, and are in a position to make use of the findings either directly or indirectly (Edwards, 2004; Nutley
et al., 2007). The social and institutional foundations for EBP do not emerge spontaneously from the policy process
or from the information systems. Capabilities and understandings need to be built within and across organisations.
If the research community and the policy community are mutually ignorant or indifferent, there is little prospect of
evidence-based policy becoming more robust. This situation arises not primarily from a deficiency in technical
knowledge, but more from deficiencies in relationships, communication and mutual interest.

7. Conclusions: EBP in transition

Thus, after some decades of experience, both the supply of policy-relevant research and the policy analysis
capacities of governmental and other organisations have increased in many countries. There has been a gradual
incorporation of rigorous research evidence into public policy debates and internal public sector processes for policy
evaluation and program improvement (Adams, 2004; Banks, 2009; Boaz, Grayson, Levitt, & Solebury, 2008;
Maynard, 2006). This trend has been supported by the research efforts of international organisations such as the
OECD. Arguably there have been numerous improvements in policy and practice flowing from closer integration with
rigorous research. If the primary goal of the EBP movement is to improve the reliability of advice concerning the
efficiency and effectiveness of policy settings and possible alternatives, this approach can be very attractive to
pragmatic leaders and decision-makers, who want to know what works under what conditions. But despite wide
support for the goals of better information and better analysis, the substantial uptake of research findings remains
problematic.
Some decades of academic research about policy change, reinforced by media commentary, strongly suggest that
policy and program changes are seldom generated primarily by research. Political judgements are generally decisive.
Nevertheless, decision-makers are often reliant on advice from officials who may be well informed about research
findings, evaluation reports, and their implications. There are potential roles for research findings in helping to clarify
trends and opportunities; moreover, opportunities for new thinking and policy adjustment can emerge in unexpected
ways in response to incidents, crises and conflicts. But overall, the policy process is best understood as a patchwork
quilt of arguments and persuasion (Fischer, 2003; Majone, 1989) in which various forms of evidence are deployed as
part of political debate. The professional crafts of policy and program development will continue to require ‘weaving’
together the implications of case studies with the big picture, and reconciling the strands of scientific information with
the underlying value-driven approaches of the political system (Head, 2008a; Parsons, 2004; Sabatier, 2007).
The relationship between policy processes and research production has become more closely institutionalised in
some countries and in some policy domains (Jones & Seelig, 2005; Nutley, Walter, & Davies, 2009; Saunders &
Walter, 2005), thus providing positive opportunities for fruitful influence and interaction. The conditions for further
embedding EBP approaches in different countries will be widely divergent, and different models are likely to prevail.
Among these differences, issues will arise in four key areas: investment in data and research-based evaluation;
professional skills; political and organisational support for independent analysis; and constructive working
relations across the divide between policy, research and practice networks.
Further research is needed to ‘connect the dots’ between policy, research and evaluation processes, and to document
the transfer of research insights into the practice of service professionals (evidence-based practice). The role of
research and policy analysis within government agencies (e.g. Hoppe & Jeliazkova, 2006; Page & Jenkins, 2005),
and the role of agencies in commissioning external research, seem to be crucial. It is quite possible that the research-informed
analysis provided by policy bureaucrats will be found to be more influential in policy-making than the numerous
academic studies undertaken for professional/scientific audiences. These institutional patterns are likely to be country-
specific and policy-specific, but insufficient research has yet been undertaken to allow systematic comparisons. Some
useful comparative studies have been initiated on themes such as policy capacity (e.g. Bakvis, 2000; Painter & Pierre,
2005; Peters, 1996) and in areas of health research (e.g. Council on Health Research, 2000). Some detailed single-
country studies are beginning to emerge (e.g. Dobuzinskis et al., 2007, on Canada) which may become an important
foundation for comparative analyses.
The politics of decision-making inherently involves a mixing of science, value preferences, and practical
judgements about the feasibility and legitimacy of policy choices. Outside the scientific community the realms of
knowledge and evidence are diverse and contested. Competing sets of evidence and testimony inform and influence
policy. Rigorous and systematic policy research seeks a voice in a competitive struggle for clarity and attention, jostled
by many powerful players in the wider context of public debate and media commentary. To the extent that rigour is
valued, it needs to be protected by strong institutions and robust professional practices. It remains to be seen whether
the stated commitment to EBP in some countries will lead to measurably greater investment in policy research and
evaluation over the coming years. In many countries the overall level of investment in policy-relevant research,
program evaluation and policy skills training remains disappointing. EBP will continue to inspire efforts for better
performance in the policy sciences, but will always be constrained by the realities of community values, political
authority and stakeholder interests.

8. Papers in this symposium

This symposium considers a range of examples of EBP-in-action, drawing primarily on the experience of OECD
countries—the United Kingdom, Ireland, Canada, the USA, Australia, and beyond. The papers consider the roles and
knowledge not only of government agency managers – whose contributions to EBP involve developing, implementing
or evaluating public policy programs – but also of research professionals and independent evaluators who provide
policy-relevant analysis and review. The symposium thus makes a significant contribution to ongoing debates
concerning the contributions and limitations of the evidence-based policy movement.
Whereas the present overview article has surveyed a number of the major issues concerning EBP, the following
eight papers provide case studies analysing several of the key challenges, and exploring the roles of knowledge
producers, stakeholders, analysts and decision-makers across a range of policy issues and institutional contexts. These
eight papers were originally prepared for the Panel on Evidence-based Policy at the International Research
Symposium on Public Management held in Copenhagen in April 2009. The first three of these papers tackle various
aspects of the supply and demand for policy-relevant knowledge including how knowledge is actually used in specific
institutional settings.
Iestyn Williams and Jon Glasby provide an analysis of how different forms of knowledge are produced and used for
UK health and social care decision-making. Decision-makers in UK health and social care are routinely expected to
draw on evidence of ‘what works’ when designing services and changing practices. However, this paper argues that too
much focus has been placed on a narrow definition of what constitutes valid ‘evidence’ (and one that privileges
particular approaches and voices over others). As a result, policy and practice have too often been dominated by
medical and quantitative ways of ‘knowing the world’. In contrast, Williams and Glasby call for a more inclusive
notion of ‘knowledge-based practice’, which draws on different types of research, including the tacit knowledge of
front-line practitioners and the lived experience of people using the services. This approach, they argue, will lead to a
more nuanced approach to ‘the evidence’ and – ultimately – to better and more integrated decisions. Against this
background, the paper outlines a suggested research agenda for those seeking to develop decision-making within these
areas.
In a related paper drawing on examples from the UK health and social care sector, Kieran Walshe and Huw Davies
examine how research findings have made a difference in the behaviour of healthcare organisations. New
organisations have been created to bring research evidence to bear directly on healthcare management decision-
making, and to encourage both the production of well-focussed research and the better communication and
mobilisation of knowledge. Walshe and Davies discuss attempts to promote linkage and exchange between the
research and practitioner communities and to build awareness, receptiveness and capacity in research use among
healthcare managers. However, the barriers or constraints on research use are still deeply embedded in cultures,
incentives, structures and organisational arrangements. They therefore propose new approaches for commissioning
research which would give greater weight to the potential impact and influence of research, with more attention to
implementation issues. Such new priorities might help to change the longer term and more deep-seated characteristics
of both the academic and healthcare systems.
Looking beyond the context of health and social care, Peter Carroll asks whether the evidence that is regularly
produced for improving the quality of regulatory regimes – regulatory impact assessment – is being translated into
better policy outcomes. Regulatory impact assessment systems are supposed to generate evidence relevant to
improving the quality or efficacy of new or modified regulations. Such review systems have become mandatory in
many OECD countries. Regulatory proposals put forward by government agencies thus should have a firm evidence
base that clearly supports the new or modified regulations, but Carroll suggests that regulatory proposals continue to
offer little in the way of a rigorous and convincing evidence base. Carroll explores the reasons for this poor
performance including the varying levels of ministerial and executive commitment, poor integration of impact
assessment with policy development processes, variable capacity for rigorous policy analysis in departments, and a
lack of data on which evidence-based policy can be developed. These factors would clearly have resonance in a wide
range of policy examples.
Turning from cases focusing mainly on how evidence is generated to how information and research
knowledge are actually used by government analysts and decision-makers, the remaining five papers in the
symposium address various aspects of this question. Two papers examine what types of information and skills are
deployed by policy and program staff within government agencies. Among federal countries, most of the previous
research has focused on the national level, but much of the policy and program activity remains at the sub-national
level (states or provinces). Michael Howlett and Joshua Newman report on the first thorough survey of the work of
policy analysts at the sub-national level in Canada. Their paper presents the findings of a 2008–2009 survey aimed
specifically at examining the background and training of provincial policy analysts in Canada, the types of techniques
they employ in their jobs, and what they do in their work on a day-to-day basis. The resulting profile of sub-national
policy analysts reveals several substantial differences between analysts working for national government and their
sub-national counterparts. This may have important implications for policy training and practice, and for the ability of
Canada and other countries to improve their policy advice systems in order to better accomplish their long-term policy
goals.
Jeremy Hall and Edward Jennings also consider the sub-national level of evidence-based policy, reporting results
from a 2008 survey of U.S. State agency administrators across all 50 states and across 12 different agency types.
Agency managers were asked to report the extent to which they relied on diverse information
sources and to weight the value of information from each source. Hall and Jennings are especially interested in
ascertaining the importance of formal scientific evidence, and its influence on agency decisions relative to other
potential sources of information. Hall and Jennings present their findings by agency type, noting significant differences
across substantive policy areas.
Mary Lee Rhodes and Simon Brooke analyse the role of evaluation studies in contributing to the development of
services for the homeless in Ireland. They show how the processes of evaluation interact with the complex issues of
policy and implementation. There are many complexities in determining ‘what works’ in the cycle of policy
development, implementation and evaluation. This paper examines how these three activities unfolded over the two
decades since 1988, when 'homelessness' was first defined in Irish legislation. They explain the complex and
significant changes in policy, changes in implementation approaches, and the positive impact of a series of evaluations
over this period.
Tracey Wond and Michael Macaulay investigate the evaluation of a UK local economic development program
which raises key questions about the disparity between central guidance and local implementation, and the use of
evaluation data as an evidence base for public policy. Based on observational and interview data, they highlight a
number of issues. Firstly, uncertainty and ambiguity of policy direction can create barriers to establishing clear
evaluation goals. Secondly, differences emerge between the generic strategies developed by central policy-makers in
response to broad problems (which they term ‘problem-inspired’ policy), and the local ‘problem-solving’
interpretations by implementers responding to local challenges. This discontinuity can be compounded where central
decision-makers overlook the significance of local variations in terms of cultures, geographies or historical contexts.
In responding to these problems, Wond and Macaulay argue that regardless of where policy control and decision-
making occurs, the experiences of implementers at a local level are crucial for success because they better understand
the nuances of context and capability.
Finally, Brian Head presents a case study in water policy in order to explore the implications for evidence-based
policy of the distinction between relatively stable and relatively turbulent policy issues. In stable policy domains it is
more likely that rigorous analysis will be able to enhance policy-makers’ consideration of improvement options.
However, EBP has taken different turns in policy fields marked by value-conflict, rapid change, high risk or radical
uncertainty. One such area in recent years has been water policy, in the context of water scarcity. There have been
urgent new challenges for water policy, planning and delivery in many cities and regions around the world. This paper
examines an Australian case study, the urban water crisis in Southeast Queensland (SEQ). The sub-national
government became increasingly alarmed by evidence of a deteriorating water-supply outlook, and undertook a
number of policy changes including substantial re-structuring of urban water governance. The paper raises issues
about the evidence base for decision-making, and for policy learning, where policy is shaped under conditions of
uncertainty and crisis.

Acknowledgements

Work for this project on EBP was partly funded by Australian Research Council grant LP100100380. The author
also thanks numerous colleagues who have helped to shape these ideas over many years including Meredith Edwards,
Sandra Nutley, and several of the contributors to this symposium.

References

Aaron, H. J. (1978). Politics and the professors: The great society in perspective. Washington: Brookings.
Abrams, P. (1968). The origins of British sociology 1834–1914. Chicago: University of Chicago Press.
Adams, D. (2004). Usable knowledge in public policy. Australian Journal of Public Administration, 63(1), 29–42.
Amara, N., Ouimet, M., & Landry, R. (2004). New evidence on instrumental, conceptual, and symbolic utilization of university research in
government agencies. Science Communication, 26(1), 75–106.
APSC [Australian Public Service Commission]. (2007). Tackling wicked problems: A public policy perspective. Canberra: APSC.
APSC. (2009a). Smarter policy: Choosing policy instruments and working with others to influence behaviour. Canberra: APSC.
APSC. (2009b). Policy development through devolved government. Canberra: APSC.
Argyris, C. (2000). Flawed advice and the management trap. Oxford: Oxford University Press.
Argyrous, G. (Ed.). (2009). Evidence for policy and decision-making. Sydney: UNSW Press.
Bakvis, H. (2000). Rebuilding policy capacity in the era of the fiscal dividend: A report from Canada. Governance, 13(1), 71–103.
Banks, G. (2009). Evidence-based policy-making: What is it? How do we get it? ANZSOG Public Lecture, 4 February. http://www.pc.gov.au/speeches/cs20090204.
Bardach, E. (1998). Getting government agencies to work together. Washington: Brookings.
Barnardo's. (2000). What works? Making connections: Linking research and practice. Ilford: Barnardo's.
Barnardo’s. (2006). The evidence guide (5 Vols.). Ilford: Barnardo’s.
Belsky, J., Barnes, J., & Melhuish, E. (Eds.). (2007). The national evaluation of Sure Start. Bristol: Policy Press.
Boaz, A., Grayson, L., Levitt, R., & Solebury, W. (2008). Does evidence-based policy work? Learning from the UK experience. Evidence & Policy,
4(2), 233–253.
Boaz, A., Fitzpatrick, S., & Shaw, B. (2008). Assessing the impact of research on policy: A review of the literature. London: King’s College & Policy
Studies Institute.
Bochel, H., & Duncan, S. (Eds.). (2007). Making policy in theory and practice. Bristol: Policy Press.
Bogdanor, V. (Ed.). (2005). Joined-up government. Oxford: Oxford University Press.
Boruch, R., & Rui, N. (2008). From randomized controlled trials to evidence grading schemes: Current state of evidence-based practice in social
sciences. Journal of Evidence-Based Medicine, 1(1), 41–49.
Bowen, S., & Zwi, A. B. (2005). Pathways to ‘‘evidence-informed’’ policy and practice: A framework for action. PLoS Medicine, 2(7), 100–105
e166.
British Academy. (2008). Punching our weight: The humanities and social sciences in public policy making. London: British Academy. http://www.britac.ac.uk/reports/wilson/.
Brooks-Gunn, J. (2003). Do you believe in magic? What we can expect from early childhood intervention programs. Social Policy Report, 17(1),
1–14.
Bruner, C., Kunesh, L. G., & Knuth, R.A. (1992). What does research say about interagency collaboration? http://www.ncrel.org.
Campbell, D. T. (1969). Reforms as experiments. American Psychologist, 24(4), 409–429.
Campbell Collaboration. (2010). About the Campbell Collaboration. http://www.campbellcollaboration.org/.
Cannon, J. S., & Kilburn, M. R. (2003). Meeting decision makers’ needs for evidence-based information on child and family policy. Journal of Policy
Analysis and Management, 22(4), 665–668.
Clarence, E. (2002). Technocracy reinvented: The new evidence-based policy movement. Public Policy and Administration, 17(3), 1–11.
Coalition for Evidence-Based Policy. (2006). What works and what doesn’t work in social policy? Findings from well-designed randomized
controlled trials. http://www.evidencebasedprograms.org/.
Cochrane Collaboration. (2010). The Cochrane Collaboration: The reliable source of evidence in health care. http://www.cochrane.org/.
Colebatch, H. K. (Ed.). (2006). Beyond the policy cycle. Sydney: Allen & Unwin.
Colebatch, H. K. (Ed.). (2006). The work of policy: An international survey. New York: Rowman and Littlefield.
Commission on the Social Sciences. (2003). Great expectations: The social sciences in Britain. London: Academy of Learned Societies for the
Social Sciences.
Coote, A., Allen, J., & Woodhead, D. (2004). Finding out what works: Building knowledge about complex community-based initiatives. London:
King’s Fund.
Council on Health Research for Development. (2000). Lessons in research to action and policy: Case studies from seven countries. Geneva: CHRD.
Davies, P. (2004). Is Evidence-based Policy Possible? The Jerry Lee Lecture, Campbell Collaboration Colloquium, Washington, 18 February.
Davies, H. T., Nutley, S., & Smith, P. C. (Eds.). (2000). What works? Evidence-based policy and practice in public services. Bristol: Policy Press.
Davies, H. T., Nutley, S., & Walter, I. (2008). Why ‘knowledge transfer’ is misconceived for applied social research. Journal of Health Services
Research & Policy, 13(3), 188–190.
Deaton, A. S. (2009). Instruments of development: Randomization in the tropics and the search for the elusive keys to economic development.
Working Paper 14690. National Bureau of Economic Research, Cambridge MA. http://www.nber.org/papers/w14690.
Dobuzinskis, L., Howlett, M., & Laycock, D. (Eds.). (2007). Policy analysis in Canada: The state of the art. Toronto: University of Toronto Press.
Dopson, S., & Fitzgerald, L. (Eds.). (2005). Knowledge to action? Evidence-based health care in context. Oxford: Oxford University Press.
Durlak, J. A., & DuPre, E. P. (2008). Implementation matters: A review of research on the influence of implementation on program outcomes and the
factors affecting implementation. American Journal of Community Psychology, 41(3–4), 327–350.
Edelenbos, J., & Van Buuren, A. (2005). The learning evaluation: A theoretical and empirical exploration. Evaluation Review, 29(6), 591–612.
Edwards, M. (2001). Social policy, public policy: From problem to practice. Sydney: Allen & Unwin.
Edwards, M. (2004). Social science research and public policy: Narrowing the divide. Policy Paper 2. Canberra: Academy of Social Sciences in
Australia.
Ferlie, E. (2005). Conclusion: From evidence to actionable knowledge? In S. Dopson & L. Fitzgerald (Eds.), Knowledge to action? Evidence-based
health care in context (pp. 182–197). Oxford: Oxford University Press.
Fischer, F. (2003). Reframing public policy: Discursive politics and deliberative practices. Oxford: Oxford University Press.
France, A., & Homel, R. (Eds.). (2007). Pathways and Crime Prevention: Theory, Policy and Practice. Cullompton, Devon: Willan Publishing.
Glasby, J., & Beresford, P. (2006). Who knows best? Evidence-based practice and the service user contribution. Critical Social Policy, 26(1), 268–
284.
Hajer, M. A., & Wagenaar, H. (Eds.). (2003). Deliberative policy analysis: Understanding governance in the network society. Cambridge:
Cambridge University Press.
Haskins, R. (2006). Testimony on the welfare reform law, 19 July 2006. Committee on ways and means. Washington: US House of Representatives.
Head, B. W. (1982). The origins of ‘la science sociale’ in France 1770–1800. Australian Journal of French Studies, 19(2), 115–132.
Head, B. W. (2008a). Three lenses of evidence based policy. Australian Journal of Public Administration, 67(1), 1–11.
Head, B. W. (2008b). Assessing network-based collaborations: Effectiveness for whom? Public Management Review, 10(6), 733–749.
Head, B. W. (2008c). Wicked problems in public policy. Public Policy, 3(2), 101–118.
Heilbron, J., Magnusson, L., & Wittrock, B. (Eds.). (1998). The rise of the social sciences and the formation of modernity. Dordrecht: Kluwer.
Hemmati, M. (2002). Multi-stakeholder processes for governance and sustainability. London: Earthscan.
Hill, M., & Hupe, P. (2002). Implementing public policy. London: Sage.
Hoppe, R., & Jeliazkova, M. (2006). How policy workers define their job: A Netherlands case study. In H. K. Colebatch (Ed.), The work of policy: An
international survey (pp. 35–60). New York: Rowman and Littlefield.
Horowitz, I. L. (1975). Conflict and consensus between social scientists and policy-makers. In I. L. Horowitz (Ed.), The use and abuse of social
science (2nd ed., pp. 110–135). New Brunswick, NJ: Transaction Books.
Huberman, M. (1994). Research utilization: The state of the art. Knowledge and Policy: The International Journal of Knowledge Transfer and
Utilization, 7(4), 13–33.
Innes, J. E. (2002). Improving policy making with information. Planning Theory & Practice, 3(1), 102–104.
Jackson, P. M. (2007). Making sense of policy advice. Public Money and Management, 27(4), 257–264.
Jones, A., & Seelig, T. (2005). Enhancing research-policy linkages in Australian housing, Final Report 79. Australian Housing & Urban Research
Institute. http://www.ahuri.edu.au/publications.
Joseph Rowntree Foundation. (2000). Linking research and practice. York: Joseph Rowntree Foundation. http://www.jrf.org.uk.
Landry, R., Amara, N., & Lamari, M. (2001). Utilization of social science research knowledge in Canada. Research Policy, 30(2), 333–349.
Landry, R., Lamari, M., & Amara, N. (2003). The extent and determinants of the utilization of university research in government agencies. Public
Administration Review, 63(2), 192–205.
Lavis, J. N., Ross, S., Hurley, J., Hohenadel, J., Stoddart, G., Woodward, C., et al. (2002). Examining the role of health services research in public
policymaking. Milbank Quarterly, 80(1), 125–154.
Lavis, J. N., Robertson, D., Woodside, J., McLeod, C., & Abelson, J. (2003). How can research organizations more effectively transfer research
knowledge to decision makers? Milbank Quarterly, 81(2), 221–248.
Leigh, A. (2009). What evidence should social policymakers use? Economic Roundup, 2009(1), 27–43.
Lemieux-Charles, L., & Champagne, F. (Eds.). (2004). Using knowledge and evidence in health care. Toronto: University of Toronto Press.
Lerner, D., & Lasswell, H. D. (Eds.). (1951). The policy sciences. Stanford: Stanford University Press.
Lewicki, R. J., Gray, B., & Elliott, M. (Eds.). (2003). Making sense of intractable environmental conflicts. Washington: Island Press.
Lewig, K., Arney, F., & Scott, D. (2006). Closing the research-policy and research-practice gaps: Ideas for child and family services. Family Matters,
74, 12–19.
Lin, V., & Gibson, B. (Eds.). (2003). Evidence-based health policy: Problems and possibilities. Oxford: Oxford University Press.
Lindblom, C. E. (1980). The policy-making process (2nd ed.). Englewood Cliffs: Prentice-Hall.
Lipsky, M. (1980). Street level bureaucracy. New York: Russell Sage Foundation.
Lomas, J. (1990). Finding audiences, changing beliefs: The structure of research use in Canadian health policy. Journal of Health Politics, Policy and
Law, 15(3), 525–542.
Lomas, J. (2000). Using ‘‘linkage and exchange’’ to move research into policy at a Canadian foundation: Encouraging partnerships between
researchers and policymakers is the goal of a promising new Canadian initiative. Health Affairs, 19, 236–240.
Majone, G. (1989). Evidence, argument, and persuasion in the policy process. New Haven: Yale University Press.
Marston, G., & Watts, R. (2003). Tampering with the evidence: A critical appraisal of evidence-based policy. Australian Review of Public Affairs,
3(3), 143–163.
Martinson, R. (1974). What works? Questions and answers about prison reform. The Public Interest, 10, 22–54.
Mashaw, J. L. (1983). Bureaucratic justice. New Haven: Yale University Press.
Maynard, R. A. (2006). Presidential address: Evidence-based decision making: What will it take for decision makers to decide. Journal of Policy
Analysis and Management, 25(1), 249–266.
McDonald, L. (1993). The early origins of the social sciences. Montreal & Kingston: McGill-Queen’s University Press.
Mead, L. M. (Ed.). (1997). The new paternalism. Washington: Brookings Institution.
Meagher, L., Lyall, C., & Nutley, S. (2008). Flows of knowledge, expertise and influence: A method for assessing policy and practice impacts from
social science research. Research Evaluation, 17(3), 163–173.
Melhuish, E., Belsky, J., & Barnes, J. (2009). Evaluation and value in Sure Start. Archives of Disease in Childhood. doi:10.1136/adc.2009.161018.
Meltsner, A. J. (1972). Political feasibility and policy analysis. Public Administration Review, 32(6), 859–867.
Meltsner, A. J. (1976). Policy analysts in the bureaucracy. Berkeley: University of California Press.
Mills, C. W. (1959). The sociological imagination. New York: Oxford University Press.
Mintrom, M., & Bollard, R. (2009). Governing controversial science: Lessons from stem cell research. Policy and Society, 28(4),
301–314.
Mosteller, F., & Boruch, R. (Eds.). (2002). Evidence matters: Randomized trials in education research. Washington: Brookings.
Mulgan, G. (2005). The academic and the policy-maker. Presentation to the Public Policy Unit, Oxford University, 18 November.
Nathan, R. P. (1988). Social science in government. New York: Basic Books.
Nelson, C. E., Roberts, J., Maederer, C., Wertheimer, B., & Johnson, B. (1987). The utilization of social science information by policymakers.
American Behavioral Scientist, 30(6), 569–577.
Nutley, S., Walter, I., & Davies, H. (2007). Using evidence: How research can inform public services. Bristol: Policy Press.
Nutley, S., Walter, I., & Davies, H. (2009). Past, present and possible futures for evidence-based policy. In G. Argyrous (Ed.), Evidence for policy and
decision-making (pp. 1–23). Sydney: UNSW Press.
Nutley, S., & Homel, P. (2006). Delivering evidence-based policy and practice: Lessons from the implementation of the UK crime reduction
program. Evidence and Policy, 2(1), 5–26.
Ouimet, M., Landry, R., Ziam, S., & Bedard, P. O. (2009). The absorption of research knowledge by public civil servants. Evidence & Policy, 5(4),
331–350.
Page, E. C., & Jenkins, B. (2005). Policy bureaucracy: Governing with a cast of thousands. Oxford: Oxford University Press.
Painter, M., & Pierre, J. (Eds.). (2005). Challenges to state policy capacity: Global trends and comparative perspectives. London: Palgrave
Macmillan.
Parsons, W. (2002). From muddling through to muddling up—evidence based policy making and the modernisation of British government. Public
Policy & Administration, 17(3), 43–60.
Parsons, W. (2004). Not just steering but weaving: Relevant knowledge and the craft of building policy capacity and coherence. Australian Journal of
Public Administration, 63(1), 43–57.
Pawson, R., Boaz, A., Grayson, L., Long, A., & Barnes, C. (2003). Types and quality of knowledge in social care knowledge review #3. London:
Social Care Institute for Excellence.
Pawson, R. (2006). Evidence-based policy: A realist perspective. London: Sage.
Percy-Smith, J. (2005). What works in strategic partnerships for children? Ilford: Barnardo’s.
Peters, B. G. (1996). The policy capacity of government. Ottawa: Canadian Centre for Management Development.
Petrosino, A., Petrosino, C. T., & Buehler, J. (2003). ‘Scared straight’ and other juvenile awareness programs for preventing juvenile delinquency.
(Updated C2 Review). Campbell Collaboration website.
Petticrew, M. (2007). Making high quality research relevant and accessible to policy makers and social care practitioners. Presentation to Campbell
Collaboration Colloquium, 16 May.
Petticrew, M., & Roberts, H. (2005). Systematic reviews in the social sciences. Oxford: Blackwells.
Pfeffer, J., & Sutton, R. (2006). Hard facts, dangerous half-truths and total nonsense: Profiting from evidence-based management. Boston: Harvard
Business School Publishing.
Pollitt, C. (2006). Academic advice to practitioners—what is its nature, place and value within academia? Public Money & Management, 26(4),
257–264.
Pressman, J. L., & Wildavsky, A. (1973). Implementation: How great expectations in Washington are dashed in Oakland. Berkeley: University of
California Press.
Radaelli, C. (1995). The role of knowledge in the policy process. Journal of European Public Policy, 2(2), 159–183.
Radin, B. A. (2000). Beyond machiavelli: Policy analysis comes of age. Washington: Georgetown University Press.
Rayner, J., & Howlett, M. (2009). Conclusions: Governance arrangements and policy capacity for policy integration. Policy and Society, 28(2),
165–172.
Rich, R. F., & Oh, C. H. (2000). Rationality and the use of information in policy decisions. Science Communication, 22(2), 173–211.
Roberts, A. R., & Yeager, K. R. (Eds.). (2006). Foundations of evidence-based social work practice. New York: Oxford University Press.
Roberts, H. (2005). What works? Social Policy Journal of New Zealand, 24, 34–54.
Rousseau, D. (2006). Is there such a thing as ‘‘evidence-based management’’? Academy of Management Review, 31(2), 256–269.
Rudd, K. (2008). Prime Minister: Address to Heads of Agencies and Members of Senior Executive Service, 30 April. http://www.pm.gov.au/node/5817.
Sabatier, P. A. (Ed.). (2007). Theories of the policy process (2nd ed.). Boulder, CO: Westview Press.
Saunders, P., & Walter, J. (Eds.). (2005). Ideas and influence: Social science and public policy in Australia. Sydney: UNSW Press.
Shaxson, L. (2005). Is your evidence robust enough? Questions for policy makers and practitioners. Evidence and Policy, 1(1), 101–111.
Schon, D. A. (1983). The reflective practitioner: How professionals think in action. New York: Basic Books.
Schon, D. A., & Rein, M. (1994). Frame reflection: Toward the resolution of intractable policy controversies. New York: Basic Books.
Schorr, L. B. (2003). Determining ‘what works’ in social programs and social policies: Towards a more inclusive knowledge base. Harvard
University.
Schorr, L. B., & Auspos, P. (2003). Usable information about what works: Building a broader and deeper knowledge base. Journal of Policy Analysis
and Management, 22(4), 669–676.
Scott, R. A., & Shore, A. R. (1979). Why sociology does not apply. New York: Elsevier.
Sherman, L. W., Farrington, D. P., Welsh, B. C., & MacKenzie, D. L. (Eds.). (2006). Evidence-based crime prevention. New York: Routledge.
Shonkoff, J. P. (2000). Science, policy and practice: Three cultures in search of a shared mission. Child Development, 71(1), 181–187.
Shulock, N. (1999). The paradox of policy analysis: If it is not used, why do we produce so much of it? Journal of Policy Analysis and Management,
18(2), 226–244.
Simons, H. (2004). Utilizing evaluation evidence to enhance professional practice. Evaluation, 10(4), 410–429.
Smith, C. (2009). Review of British Academy report 'Punching our weight'. Economic Affairs, 29(1), 95–97.
Solesbury, W. (2002). The ascendancy of evidence. Planning Theory & Practice, 3(1), 90–96.
Stone, D., & Denham, A. (Eds.). (2004). Think tank traditions: Policy research and the politics of ideas. Manchester: Manchester University Press.
Thaler, R., & Sunstein, C. (2008). Nudge: Improving decisions about health, wealth and happiness. New Haven: Yale University Press.
Tsui, L. (2006). A handbook for knowledge sharing. Alberta: Community-University Partnership for Study of Children Youth and Families.
UK Cabinet Office. (1999a). Modernising government. London: Cabinet Office.
UK Cabinet Office. (1999b). Professional policy making for the twenty first century. London: Cabinet Office.
UK Cabinet Office. (2003). Quality in qualitative evaluation: A framework for assessing research evidence. London: Cabinet Office.
UK Cabinet Office. (2008). Think research: Using research evidence to inform service development for vulnerable groups. London: Social Exclusion
Taskforce, Cabinet Office.
UK Treasury. (2007). Analysis for policy: Evidence-based policy in practice. London: Government Social Research Unit, Treasury.
Verweij, M., & Thompson, M. (Eds.). (2006). Clumsy solutions for a complex world. London: Palgrave Macmillan.
Wagner, P., Weiss, C. H., Wittrock, B., & Wollmann, H. (Eds.). (1991). Social sciences and modern states: National experiences and theoretical
crossroads. Cambridge: Cambridge University Press.
Walker, R. (2001). Great expectations: Can social science evaluate New Labour’s policies? Evaluation, 7(3), 305–330.
Walshe, K., & Rundall, T. (2001). Evidence-based management: From theory to practice in health care. Milbank Quarterly, 79(3), 429–457.
Walter, I., Nutley, S., & Davies, H. (2003). Research impact: A cross-sector review. Literature review. Research Unit for Research Utilisation,
University of St Andrews.
Walter, I., Nutley, S., & Davies, H. (2005). What works to promote evidence-based practice? A cross-sector review. Evidence and Policy, 1(3),
335–364.
Weber, E. P., & Khademian, A. M. (2008). Wicked problems, knowledge challenges, and collaborative capacity builders in network settings. Public
Administration Review, 68(2), 334–349.
Weiss, C. H. (1978). Improving the linkage between social science and public policy. In L. E. Lynn (Ed.), Knowledge and policy: The uncertain
connection (pp. 23–81). Washington: National Academy of Sciences.
Weiss, C. H. (1979). The many meanings of research utilization. Public Administration Review, 39(5), 426–431.
Weiss, C. H. (with M.J. Bucuvalas). (1980). Social science research and decision-making. New York: Columbia University Press.
Welsh, B., Farrington, D., & Sherman, L. (Eds.). (2001). Costs and benefits of preventing crime. Boulder: Westview Press.
Wilson, J. Q. (1981). Policy intellectuals and public policy. The Public Interest, 64, 31–46.
Woolcock, M. (2009). Toward a plurality of methods in project evaluation. Journal of Development Effectiveness, 1(1), 1–14.
Zigler, E., & Styfco, S. (Eds.). (2004). The head start debates. Baltimore: Brookes Publishing.

Brian W. Head
Institute for Social Science Research,
University of Queensland, Australia
E-mail address: brian.head@uq.edu.au
