
Evaluation and Program Planning 48 (2015) 137–148


"They Just Know": The epistemological politics of "evidence-based" non-formal education

Thomas Archibald a,b,*

a Cornell University, 3M14 Martha Van Rensselaer Hall, Ithaca, NY 14853, USA
b Virginia Tech, Agricultural & Extension Education (0343), 276 Litton-Reaves Hall, 175 West Campus Drive, Blacksburg, VA 24061, USA

* Correspondence to: Virginia Tech, Agricultural & Extension Education (0343), 276 Litton-Reaves Hall, 175 West Campus Drive, Blacksburg, VA 24061, USA. Tel.: +1 540 231 6192. E-mail address: tgarch@vt.edu

http://dx.doi.org/10.1016/j.evalprogplan.2014.08.001

A R T I C L E  I N F O

Article history:
Available online 17 August 2014

Keywords:
Evidence-based programs
Evidence-based practice
Epistemology
Epistemological politics
Non-formal education
Adolescent sexual health
Randomized controlled trials

A B S T R A C T

Community education and outreach programs should be evidence-based. This dictum seems at once warranted, welcome, and slightly platitudinous. However, the "evidence-based" movement's more narrow definition of evidence—privileging randomized controlled trials as the "gold standard"—has fomented much debate. Such debate, though insightful, often lacks grounding in actual practice. To address that lack, the purpose of the study presented in this paper was to examine what actually happens, in practice, when people support the implementation of evidence-based programs (EBPs) or engage in related efforts to make non-formal education more "evidence-based." Focusing on three cases—two adolescent sexual health projects (one in the United States and one in Kenya) and one more general youth development organization—I used qualitative methods to address the questions: (1) How is evidence-based program and evidence-based practice work actually practiced? (2) What perspectives and assumptions about what non-formal education is are manifested through that work? and (3) What conflicts and tensions emerge through that work related to those perspectives and assumptions? Informed by theoretical perspectives on the intersection of science, expertise, and democracy, I conclude that the current dominant approach to making non-formal education more evidence-based by way of EBPs is seriously flawed.

© 2014 Elsevier Ltd. All rights reserved.

1. Introduction

Community education and outreach programs should be based on evidence. On the face of it, this dictum seems at once warranted, welcome, and slightly platitudinous. Few people would argue that an educational initiative should, or even could, be planned and implemented in a way that is somehow devoid of evidence. Yet, as the contributions to this volume suggest, what counts as credible evidence in evaluation and social research is far from self-evident. Relatedly, how such evidence is expected and understood to guide professional practice remains unclear, posing a daunting challenge to policy-makers, researchers, evaluators, and educators alike. This challenge is encapsulated in the "preferred terminology guidelines" put forward by UNAIDS (the Joint United Nations Program on HIV and AIDS) to guide the discourse of global HIV/AIDS prevention and treatment work, the domain in which one of this paper's focal cases is active:

   In the context of research, treatment, and prevention, evidence usually refers to qualitative and/or quantitative results that have been published in a peer-reviewed journal. The term 'evidence-informed' is preferred to 'evidence-based' in recognition of the fact that several elements may play a role in decision-making, only one of which may be scientific evidence. Other elements may include cultural appropriateness, concerns about equity and human rights, feasibility, opportunity costs, etc. (UNAIDS, 2011, pp. 10–11)

These terminology guidelines make explicit the perspective that "scientific evidence," though important, is just one of many factors to consider when creating or choosing between various educational programs or practices. The guidelines presuppose that there exists no single, universally accepted definition of "evidence," a presupposition of central importance to this paper. What's more, the statement glosses over decades of acerbic debates about the relative merits of quantitative versus qualitative evidence, a noteworthy point because most dominant approaches to "evidence-based" education value only quantitative evidence gathered through "rigorous" research or evaluation designs. Also, the statement equates "evidence" with that which has been published in a peer-reviewed journal. While many—especially scientists—might take that aspect of the definition of evidence to be self-evidently correct, many others—including scientists—work, pragmatically, with much broader definitions of "evidence."

However, the "evidence-based" education movement—one of many related attempts to "bridge the research-practice gap" that has gained prevalence in recent decades—is predicated on more formal and conscribed definitions of evidence, whereby certain research and evaluation approaches are valued more highly than others. In the current era of accountability, some policy-makers, funding agencies, and scholars position "scientific" evidence derived from randomized controlled trials (RCTs) as the "gold standard" for establishing proof of which programs "work" and which do not (Coalition for Evidence-Based Policy, 2003; Mosteller & Boruch, 2002; U.S. Department of Education, 2003). According to Trochim:

   The gold standard debate is one of the most important controversies in contemporary evaluation and applied social sciences. It's at the heart of how we go about trying to understand the world around us. It is integrally related to what we think science is and how it relates to practice. There is a lot at stake. (W. Trochim, unpublished speech transcript, September 10, 2007)

In his critique of the RCT design, Scriven reiterates the point that much is at stake, claiming, "This issue is not a mere academic dispute, and should be treated as one involving the welfare of very many people, not just the egos of a few" (2008, p. 24).

With so much at stake, the evidence-based movement has fomented significant debate in recent years. The positions espoused by participants in those debates tend to fall into two general categories, suggesting that how questions about evidence-based education are posed is at least as important as how they are answered: Some discussions treat the problem primarily on a technical-rationalistic¹ level, focused on improving the fidelity of implementation of evidence-based interventions (e.g., Galbraith et al., 2009; Wandersman et al., 2008); others foreground the normative and axiological nature of the problem, offering theoretical critiques of the assumptions that undergird the very notion of the research-practice gap (e.g., Biesta, 2007; St Pierre, 2002). Each of these approaches to posing and answering questions about how to make education more evidence-based is helpful, yet each is also limiting. The first leaves too much unproblematized and risks reifying hegemonic relations of knowledge and power in society; the second lacks grounding in practical contexts and risks dissolving into polemical verbalism. The theoretical critiques characterizing this second approach are necessary and helpful, but must be supplemented by empirical studies rooted in the particular work processes of individuals and organizations.

¹ "Technical-rationality" is defined as a positivist epistemology of practice whereby "professional activity consists in instrumental problem solving made rigorous by the application of scientific theory and technique"; it is "the view of professional knowledge which has most powerfully shaped both our thinking about the professions and the institutional relations of research, education, and practice" (Schön, 1983, p. 21). See also Usher et al. (1997).

1.1. Purpose and research questions

To that end, the purpose of the study presented in this paper was to better understand what actually happens, in practice, when people try to make non-formal education more "evidence-based." Like Timmermans and Berg (2003) in their analysis of standardization in medical practice, instead of debating the advantages and disadvantages of evidence-based approaches and getting stuck on a rhetorical level of analysis, I offer a study of the politics of evidence in practice. By concentrating on the details of practice in three cases in which people's work involves evidence-based programs and practices, I elucidate some of the tensions and gaps inherent in that work, calling the apparently self-evident superiority of evidence-based education into question. Specifically, I address the following three research questions: (1) How is evidence-based program and evidence-based practice work actually practiced? (2) What perspectives and assumptions about what non-formal education is are manifested through that work? and (3) What conflicts emerge through that work related to those perspectives and assumptions?

In the remainder of this paper, I first provide a brief background discussion on the "evidence-based" movement, focusing especially on the critical debates that have accompanied it. Then, I introduce this paper's three focal cases and present the methodological approach I employed to explicate their politics of evidence. Third, informed by theoretical approaches that explore the intersection of science, expertise, and democracy, I present data and interpretations regarding the practices of the "evidence-based" education movement. I focus specifically on some of the divergent and conflicting perspectives and assumptions about what non-formal education is that are brought to light through my data and interpretations. In doing so, my goal is to explore the ways in which the everyday work, the social relations, the textual mediations that make up evidence-based educational programs and practices are inherently and inevitably political in that—through their configurations of knowledge and power—they are performative and productive; they make educational realities (the program, the educator, the learner, etc.) to be this rather than that way.

2. Background: decades of "evidence-based" debates

2.1. Debates in evidence-based medicine

The increased push to use only programs and practices "that work" (meaning those for which evidence of effectiveness has been shown by way of at least one RCT) has catalyzed the proliferation of evidence clearinghouses, which establish criteria for what counts as credible evidence and then rank programs or other interventions accordingly. The approach originated in biomedicine as part of the evidence-based medicine (EBM) movement. One leading group in that field is the Cochrane Collaboration (www.cochrane.org), which curates a database containing systematic reviews and meta-analyses of published RCT studies on a wide variety of health and medical topics. EBM is "the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients, . . . [integrating] individual clinical expertise with the best available external clinical evidence from systematic research" (Sackett, Rosenberg, Gray, Haynes, & Richardson, 1996, p. 71). Although EBM is strongly linked to the RCT design, Sackett and colleagues have admitted the need for methodological plurality: "Evidence based medicine is not restricted to randomised trials and meta-analyses. It involves tracking down the best external evidence with which to answer our clinical questions" (Sackett et al., 1996, p. 71).

Regardless of Sackett et al.'s intentions, EBM quickly became synonymous with RCTs and, simultaneously, became a polemic in the health sciences. Critics railed against it as a needless neologism, an act of intellectual poverty, a governmental management strategy to cut health care costs, and a scientistic oversimplification of professional health practice (Couto, 1998; Shahar, 1998). Highlighting the fact that many other (non-randomized) sources of evidence are obviously important in health care, a facetious article appeared in the British Medical Journal entitled "Parachute use to prevent death and major trauma related to gravitational challenge: Systematic review of randomised controlled trials" (Smith & Pell, 2003).
One of EBM's most scathing critiques comes from Holmes, Murray, Perron, and Rail (2006), who presented the EBM movement as "outrageously exclusionary and dangerously normative with regards to scientific knowledge . . . a good example of microfascism at play in the contemporary scientific arena" (p. 181). They described EBM as a "regime of truth" (as defined by Foucault) which produces subjugated knowledge and "does not allow pluralism, unless that pluralism is engineered by the Cochrane hierarchy itself" (p. 181).

A salient and troubling aspect of the responses to Holmes et al.'s postmodern deconstructions of EBM's philosophical foundations is the way in which such responses ignored the substantive thrust of the critical arguments, dwelling instead on the difficult-to-understand nature of the text (Goldacre, 2006). Ironically justifying claims about the exclusionary discourses of EBM, "Goldacre readily admits to having formed his assessment of their article on the basis of 'looking at the title'; he 'just knows' he can dismiss them because they disagree with EBM, a position he characterizes in such platitudinous terms that anyone opposing it must have misunderstood" (Loughlin, 2007, p. 517). The trope of "just knowing," and its underlying conceptual significance, is an idea to which I return below.

2.2. Debates in evidence-based education

In an example of "borrowed knowledge" (Kellert, 2008; Longino, 1993), the discourses and practices of the evidence-based movement were translated from biomedicine to other domains. The field of education features prominently among the plethora of fields which have received or developed their own appropriation of evidence-based practice—which include evidence-based management, decision making, coaching, social services, policing, conservation, dentistry, policy, occupational therapy, prevention science, dermatology, gambling treatment, sex education, criminal justice, needle exchange programs, and prices—all part of "an evidence-based global society" (Donaldson, 2009, p. 4). Akin to the Cochrane Collaboration in healthcare, groups such as the Campbell Collaboration (http://www.campbellcollaboration.org/) and the What Works Clearinghouse (http://ies.ed.gov/ncee/wwc/) of the Institute of Education Sciences were established to rate the evidence and conduct systematic reviews and meta-analyses on educational programs.

In the field of education (especially formal education, initially), debates about evidence-based education and RCTs were largely catalyzed by the passage of the No Child Left Behind (NCLB) Act of 2001 (Public Law 107–110), with its explicit discursive positioning of "scientifically based research," effectively mandating RCT designs for the research and evaluation of federally funded education programs (Carlson & Levin, 2005; Feuer, Towne, & Shavelson, 2002). The passage of that act was followed by two reports commissioned by the U.S. Department of Education, written by the National Research Council of the National Academy of Sciences (National Research Council [NRC], 2002) and by the Coalition for Evidence-Based Policy (Coalition for Evidence-Based Policy [CEBP], 2003), both of which were intended to guide educational researchers toward more "scientific" work. Feuer et al. (2002), three of the NRC report's authors, described the thinking represented in the report alongside a handful of invited responses. Among those responses, some authors agreed generally with Feuer et al., yet took issue with the NRC committee's apparently uncritical acceptance of the charge "to define the scientific in educational research," which led the committee to produce "a statement that risks being read as endorsing both the possibility and the desirability of taking an evidence-based social engineering approach to educational improvement nationwide" (Erickson & Gutierrez, 2002, p. 21). Still others took a decidedly more critical approach to their response, seeing the NRC report as narrowly defining "science as positivism and methodology as quantitative" (St Pierre, 2002, p. 25). Like Holmes et al. (2006), St. Pierre's postmodern analysis illustrates "the danger of the report's normalizing and totalizing discourse" (St Pierre, 2002, p. 25). Again evoking Holmes et al., she remarked that "it is often the case that those who work within one theoretical framework find others unintelligible" (St Pierre, 2002, p. 25).

The CEBP report, unlike the NRC report, is more straightforward in its advocacy of RCTs: "Well-designed and implemented randomized controlled trials are considered the 'gold standard' for evaluating an intervention's effectiveness, in fields such as medicine, welfare and employment policy, and psychology. . . . Such trials should play a similar role in education" (CEBP, 2003, p. 1). This discourse makes explicit the envy that the educational researchers involved display for (their perception of) medical science:

   In the field of medicine, public policies based on scientifically-rigorous evidence have produced extraordinary advances in health over the past 50 years. By contrast, in most areas of social policy . . . government programs often are implemented with little regard to evidence, costing billions of dollars yet failing to address critical social problems. (CEBP, 2012)

Whether and how social scientific endeavors should—or could—answer the CEBP's call to emulate the evidence-based medical model, which itself is not without detractors, are questions of central importance.

Helpful insights regarding those questions are offered in Donaldson, Christie, and Mark (2009), What Counts as Credible Evidence in Applied Research and Evaluation Practice. The text is presented in two parts, one containing chapters from authors who favor experimental approaches as the route to credible evidence and the other with contributions from authors who favor "nonexperimental" approaches.² For instance, Henry presented his rationale for supporting use of RCTs when "getting it right matters," claiming that "the methods-based approach is deeply rooted in the theory of representative democracy and the value that evaluation can bring to society" (2009, p. 33). In the second half of the book, Rallis (2009) considered the ethical premises for credible evidence, adding the criterion of probity (integrity and uprightness) to balance criteria of rigor as they pertain to reasoning. Similarly, Greene (2009) premised her chapter "on the understanding that evaluation is both influenced by the political, organizational, and sociocultural context in which it is conducted and, in turn, serves to reshape and reconstitute that context in particular ways" (p. 153).

² The definition of the latter half of the book by way of negation (grouping approaches by what they are not rather than what they are) problematically "others" those approaches, and indexes the difficulty of engaging equally with the plurality of approaches discussed.

RCTs are one practice through which evaluation and research reconstitute contexts (and other realities). The strengths, limitations, and philosophical groundings of RCTs in social research have been explicated on many occasions (Donaldson et al., 2009; Morrison, 2009; Scriven, 2008; Walters, Lareau, & Ranis, 2009). Scriven (2008) provided a summative evaluation of the RCT methodology. He pointed out the "effort at persuasive redefinition" of RCTs as "true experiments" and related that discursive strategy to "the urge to create a new specialty, an esoteric cabal in which the originators have a privileged position as keeper of the keys to knowledge" (p. 12). Among his summative propositions, Scriven first claimed that true RCTs have "essentially zero practical application to the field of human affairs" (2008, p. 12), especially because it is almost impossible to have a truly single-blind (let alone double-blind) study in social research; RCTs in education are "zero-blind," and thus should be called "pseudo-RCTs." It is worth noting that a few years earlier, Scriven, in his Claremont debate with Lipsey, expressed a considerably more moderate view of the RCT's utility: "This is a very powerful tool, and sometimes much the best tool, but it has the same value as the torque wrench in a good mechanic's toolbox. For certain tasks, you can't beat it" (Donaldson & Christie, 2007, p. 75). But no mechanic, and no governing body of professional mechanics, would ever deem the torque wrench the "gold standard" of tools.

Scriven (2008) pointed out that "the real 'gold standard' for causal claims is the same ultimate standard as for all scientific claims; it is critical observation" (p. 18). This perspective is closely aligned with the widely held belief within the field of evaluation that the true gold standard is methodological appropriateness. Interestingly—as Scriven discussed in a post to the EvalTalk listserv of the American Evaluation Association (Scriven, 2013, May 29)—Cook, Shadish, and other leading theorists behind the push to establish the primacy of the RCT have recently explored non-randomized designs capable of supporting causal claims (Pohl, Steiner, Eisermann, Soellner, & Cook, 2009; Shadish, 2011; Shadish, Rindskopf, & Hedges, 2008). Yet despite such nuanced methodological debates in some quarters of academia, the "RCT as gold standard" perspective persists in a number of practical policy settings. According to Schwandt, in those settings—as a result of the evidence-based movement—policy discussions on methodology are driven by "a new epistemological politics in which some knowledge and some knowers are privileged over others in making decisions about practices and policy . . . not [by] the relative merits of experimental and quasi-experimental designs" (Greene, Lipsey, Schwandt, Smith, & Tharp, 2007, p. 116). It is from those settings that I drew the example cases for the study presented in this paper.

2.3. Assumptions at the intersection of science, expertise, and democracy

Before presenting this study's three focal cases and the qualitative methods I used to elucidate their politics of evidence, I first introduce some theoretical perspectives that helped frame my approach to data collection, analysis, and interpretation. As my review of the "evidence-based" debates presented above suggests, the work to make non-formal education more "evidence-based" or "scientific" contains and relies on a number of philosophical assumptions—on the nature of knowledge, of practice, and even of reality—which lie at the intersection of science, expertise, and democracy. Addressing my research questions, and identifying and understanding the divergent perspectives on non-formal education that are manifest through the "evidence-based" movement, requires theoretical lenses that focus on uncovering and problematizing tacit philosophical assumptions.

One such lens is offered by Biesta (2010), who explicated the epistemological, ontological, and praxeological assumptions that inhere in evidence-based education. All three types of assumptions are evident in the data on this study's three focal cases presented below. In the epistemological domain, evidence-based education apparently relies on a representational epistemology, "in which true knowledge is seen as an accurate representation of how 'things' are in 'the world'" (Biesta, 2010, p. 494). Paraphrasing Dewey's critique of "the spectator theory of knowledge," Biesta argued that a more accurate view of the nature of knowledge (especially knowledge generated through experimentation) is offered by a transactional epistemology, in which "knowledge is not a depiction of a static world 'out there' . . . It rather is knowledge about the world in function of our interventions" (2010, p. 495, emphasis in the original). On this problem (or the version of it in physics) Heisenberg wrote, "What we observe is not nature itself, but nature exposed to our method of questioning" (quoted in Law & Urry, 2004, p. 395). In this epistemological domain, Biesta described a "knowledge deficit" that relates to our actual inability to know, through experimentation, that "what works" now, in one context, will work in the future and in other contexts.

In the ontological domain, "talk about 'what works' . . . operates on the assumption of a mechanistic ontology that is actually the exception, not the norm in the domain of human interaction" (Biesta, 2010, p. 497). Social systems, such as non-formal education, can be characterized as open, semiotic, recursive systems. Yet when proponents of evidence-based education express their desire to replicate the impact that RCTs had on agriculture and medicine—like Duflo and Kremer (2003), who stated optimistically, "Just as randomized trials for pharmaceuticals revolutionized medicine in the 20th Century, randomized evaluations have the potential to revolutionize social policy during the 21st" (p. 32; see also CEBP, 2012)—they apparently overlook the ways in which "the dynamics of education are fundamentally different from the dynamics of, say, potato growing or chemistry" (Biesta, 2010, p. 497). This is what Biesta referred to as the "efficacy deficit" of the evidence-based movement, "indicating that in the social domain interventions do not generate effects in a mechanistic or deterministic way, but through processes that . . . are open so that the connections between intervention and effect are non-linear" (Biesta, 2010, p. 497). In complex, dynamic, non-linear social systems, efficacy is made possible through "complexity reduction," which "has to do with the reduction of the number of available options for action for the elements of a system" (Biesta, 2010, p. 497). Complexity reduction should be seen as a social construction, not as a naturally occurring phenomenon; and "since any attempt to reduce the number of available options for action for the 'elements' within a system is about the exertion of power, complexity reduction should therefore be understood as a political act" (Biesta, 2010, p. 498).

Finally, in the praxeological domain—which has to do with the ways in which practice, especially professional practice, can or should be understood in terms of the application of scientific knowledge—Biesta built on Latour's discussions of how technoscience succeeds and moves by rendering the world to be more like the laboratory from which it originated (Latour, 1983). Biesta highlighted that

   to think of the impact of modern science on society in terms of the application of scientific knowledge—which is central to the notion of evidence-based and evidence-informed practice—at least misses important aspects of what makes the application of such knowledge possible (particularly the work that is needed to transform the outside world so that knowledge becomes applicable) and perhaps even serves as an ideology that makes the incorporation of practices into particular networks invisible. (Biesta, 2010, p. 499)

This "application deficit" points to the ways in which the application of scientific knowledge is performative and transformative, what Leach, Scoones, and Wynne (2005) called "the tacit provisional performance of human ontologies in the making" (p. 13). Just as the complexity reduction that characterizes the ontological domain of evidence-based education is a political act, so too is the ideological shaping of society effected through the application of scientific knowledge in the praxeological domain. Taken together, these views of the evidence-based education movement raise questions about whether RCTs and evidence-based approaches are "rooted in democratic theory," as Henry (2009, p. 39) claimed, or whether they pose "a threat to democracy itself" (Biesta, 2007, p. 21). Perhaps Henry and Biesta arrived at such disparate conclusions because each was working with different democratic theories; Henry's analysis was based on theories of representative democracy, while Biesta's was based on Deweyan theories of direct deliberative democracy. From that latter perspective, the effects of RCTs and EBPs on democracy can also be understood at the level of "micropolitics," based on the circulation of power and knowledge in the everyday workings of society (Foucault, 1980; Kulynych, 1997).

Biesta concluded his analysis of the epistemological, ontological, and praxeological assumptions of the evidence-based movement by emphasizing the role of power and normativity and by calling instead for "value-based education." Given the teleological character of education, normative questions of purpose must be primary—"because if evidence were the only base for educational practice, educational practice would be entirely without direction" (Biesta, 2010, p. 500). On this topic, Putnam (quoted in Bernstein, 1983) wrote, "A view of knowledge that acknowledges that the sphere of knowledge is wider than the sphere of 'science' seems to me to be a cultural necessity if we are to arrive at a sane and human view of ourselves or of science" (p. 1). This relationship between scientific knowledge, values, and praxis is encapsulated in Aristotle's classification of three types of knowledge: episteme, techne, and phronesis. As described by Flyvbjerg (2001), episteme "concerns universals and the production of knowledge which is invariable in time and space" (p. 55), techne is "craft and art, and as an activity it is concrete, variable, and context-dependent" (p. 56), and phronesis is "prudence" or "practical common sense" and involves ethics and "deliberation about values with reference to praxis. [It is] pragmatic, variable, context-dependent" (p. 57). Equipped with Biesta's theoretical perspectives, and sensitive to the issues of power and normativity that inevitably lie at the intersection of science, expertise, and democracy, I turn now to the empirical analysis of three cases in which people's work involves efforts to make non-formal education more "evidence-based."

3. Methods

The methodology of the study presented in this paper is informed by the approach taken by Timmermans and Berg (2003) in their book, The Gold Standard: The Challenge of Evidence-based Medicine and Standardization in Health Care: "Instead of debating the advantages and disadvantages of standardization and getting stuck on a rhetorical level of analysis, [they] propose a study of the politics of standardization in practice" (2003, p. 21). They drew from the interactionist sociology of work, science studies, and ethnomethodology to examine "how evidence-based medicine changes the practice of medicine on both a micro and a macro level" (Timmermans & Berg, 2003, p. 21). Below, I first present this study's three focal cases and then describe the qualitative methods used to analyze the politics of evidence at play in each of those cases. In an effort to protect the confidentiality of the participants of this study, I refrain from referring to the projects, organizations, and programs by name.

3.1. Sample: three focal cases

The study presented in this paper focused on three cases, all of which involve non-formal youth development education. Two of the cases are projects focused specifically on adolescent sexual health, while the third involves a more broadly focused positive youth development organization. Of the two adolescent sexual health projects, one takes place in the United States (Case A), while the other takes place in Kenya (Case B). The project staff from both Case A and Case B support the implementation of evidence-based programs (EBPs)—well-defined, tightly scripted curricula that have been evaluated with at least one RCT—designed to prevent teenage pregnancy and sexually transmitted diseases such as HIV. They do not implement the EBPs themselves. Rather, both projects are positioned between a funder and the community-based educational organizations that implement specific EBPs supported (or mandated) by that funder. Both projects offer training and technical assistance, provide guidance on evaluation, seek to improve the fidelity of implementation of programs, and address other needs expressed by both the funders and the community-based educators. The third, more general organization (Case C) takes place in the United States. All three cases have some relation to a university. Cases A and C are both based in a university center focused on translational research in the social and behavioral sciences. As such, they are very much at the interface of research and practice. Case C is also part of the Cooperative Extension System (CES)—in the United States, there is a system of "land grant" colleges and universities established 150 years ago that for over 100 years have interacted with the communities of their individual states via the CES (i.e., disseminating research-based knowledge, engaging with local communities to identify and solve local issues, etc.).³ Case B is a project that is a partnership of a U.S. university and a Kenyan faith-based organization.

³ For a review of the multiple and changing missions of the CES and the land grant colleges and universities, including a review of the conflicting perspectives and narratives on what CES is, see Peters (2006, 2007) and Peters et al. (2008).

I selected these three cases for inclusion in this study because they are data rich—at the time the study was conducted, each of them was experiencing increased pressure from various stakeholders to be more "evidence-based." In other words, my approach to sampling (at the level of organizations, projects, programs, and people) was purposive, directed at cases in which this study's phenomena of interest were particularly salient. More detail on each of the three focal cases is included in the presentation of data in Section 4. First, I describe my approach to data collection and analysis.

3.2. Data collection

I collected qualitative data via semi-structured interviews, observation, and document analysis. I interviewed thirty individuals (roughly ten per case). Interviews lasted on average 1 h. I digitally recorded each interview and later transcribed them verbatim. Interview questions were designed to elicit both people's stories of their work practices and their conceptualizations and perceptions of key constructs such as evidence, the research-practice gap, and non-formal education. The coming together of people's stories of their work and their more abstract conceptualizations of that work ideally opens up space for tensions and gaps to emerge, highlighting moments and places where seemingly universal constructs are actually enacted in a multitude of disparate and even contradictory ways.

I observed over 100 h of discrete events such as meetings and public lectures, plus many additional hours of everyday work activities. Observation of the Kenyan project was limited due to feasibility constraints, though I did observe a week-long meeting in Nairobi that focused explicitly on the need for the project's work to be more "evidence-based," which was a very data-rich meeting. To record observational data, I took notes by hand and later transcribed them or scanned the hand-written pages.

In addition to interviews and observation, I also collected textual data such as a series of memos about evidence-based practice and EBPs written by an administrator of one of the projects; press releases and newspaper articles; research briefs and outreach materials; project websites; funding proposals and requests for funding related to the projects; textual tools (e.g., surveys, implementation checklists, communication vehicles) that were central to the daily functioning of project staff; and more.
Language and other aspects of discourse are extremely important given the focus of this study (Wilson, 2009). As such, my inclusion of textual data offered an additional level of insights and perspectives about the phenomena of interest.

3.3. Analysis

The analysis of all three types of data was supported by a computer-assisted qualitative data analysis software program (ATLAS.ti 7.0). I coded the data using a blend of a priori codes (based on my research questions) and "emergent" codes. Methodologically, I was guided by institutional ethnography (Smith, 2005, 2006) and by science and technology studies (STS; e.g., Mol & Berg, 1998; Timmermans & Berg, 2003). From STS, I drew guidance from approaches to data collection and analysis that recognize the need for empirical inquiry aimed at determining "the processes by which certain forms of knowledge achieve a moral legitimacy and appear to be part of the natural order" (Lindenbaum & Lock, 1993, p. xiii). For instance, Shapin and Schaffer (1985) examined the "historical circumstances in which experiment as a systematic means of generating natural knowledge arose, in which experimental practices became institutionalized, and in which experimentally produced matters of fact were made into the foundations of what counted as proper scientific knowledge" (p. 3). They treated truth, adequacy, and objectivity as "accomplishments, as historical products, as actors' judgments and categories" that are "topics for our inquiry, not resources unreflectively to be used in that inquiry" (p. 14). In similar fashion, instead of departing from pre-established truths or norms, STS scholars "try to dissect [them] from the inside, and by empirical means. At that point, both truth and norms start to look like parts of ongoing practices, mobilized in specific situations by some participants and not others" (Mol & Berg, 1998, p. 6). How RCTs and EBPs come to be, through various practices and discourses, the dominant truth and norm for legitimate knowledge about non-formal education is of central importance to this paper. These methodological and theoretical perspectives guided my approach to explicating the epistemological politics at play in the three focal non-formal education cases presented below.

4. Results and discussion

In this section, I first present each of the three focal cases in greater detail, providing evidence of the tensions, negotiations, and divergent assumptions and perspectives involved in the work of making each of them more "evidence-based." Then, I synthesize and interpret that evidence to draw conclusions regarding the ways in which those conflicting perspectives and assumptions affect non-formal educational praxis.

4.1. Case A

The project that comprises Case A originated in 2000 with federal funding from the Department of Health and Human Services that is administered at the state level by the state Department of Health. The project is active across the entire state in which it is based. According to project documents, the project "promotes positive youth outcomes and aims to reduce risky sexual behavior among youth by advancing the principles of positive youth development and supporting the implementation and evaluation of evidence-based programming." The project "serves as an intermediary organization that provides technical assistance, training, and evaluation for comprehensive adolescent pregnancy prevention programs" across the state. "In this capacity, project staff advance research-based practices, while also learning directly from practitioners about the realities of implementation in the field." In the project's early years, it "worked with community partnerships doing youth development work with community partnerships, broadly trying to change how communities perceived and dealt with teenagers." During those years, the project was "trying to change environments, trying to change adult attitudes, trying to sort of saturate the community with opportunities and supports and services; and trying to really shift this approach to a PYD [positive youth development] framework." Between 2008 and 2010, the federal and state funding agencies associated with Case A dramatically shifted their funding priorities, triggering significant changes to the project's work. The funding agencies defunded the community PYD partnerships and began funding the Case A project as an intermediary support agency for community "providers," who themselves were funded to implement adolescent sexual health EBPs in their respective communities. The Case A project then began to work with 58 providers around the state (e.g., community organizations, Planned Parenthood offices, etc.), who could choose from a list of 28 EBPs. The list of EBPs was established following a review of the evidence conducted by Mathematica Policy Research and Child Trends, two research firms that were contracted by the federal government. All programs on the U.S. Department of Health and Human Services (HHS) approved list have been shown to have evidence of positive outcomes through experimental or quasi-experimental research designs. According to a report authored by those two firms, the review "identifies, assesses, and rates the rigor of program impact studies and describes the strength of evidence supporting different program models" (Mathematica Policy Research and Child Trends, 2010, p. 1). In the review, "The highest study quality rating is reserved for randomized controlled trials (RCTs) and similar studies that randomly assigned subjects to the study's research groups," whereas "[q]uasi-experimental designs with an external comparison group are eligible for at best a moderate rating" (Mathematica Policy Research and Child Trends, 2010, p. 4).

Generally, EBPs on the list are curricula that have been developed and tested by university-based researchers, referred to as "developers." Many of the curricula were originally developed in the late 1990s yet have been updated in subsequent years. After the curricula are developed and proven to be effective, they are then published and packaged by "purveyors," who sell them to community-based educators along with accompanying services, such as trainings on the curricula. The chain of development, testing, and production of these curricula is often referred to as "an industry." Relatedly, many people call the EBPs "education in a box," referring figuratively to the fact that the curricula are highly scripted, formalized lesson plans, and literally to the fact that they are sometimes sold in cardboard boxes (as a kit containing the facilitator's guide, a DVD, materials for youth participants, etc.). As an intermediary organization, Case A project staff offer training, technical assistance, and program evaluation support to the providers.

Adaptation is a crucially important issue in the world of EBPs (Durlak & DuPre, 2008; Firpo-Triplet & Fuller, 2012; Wandersman et al., 2008). Case A staff must address what they call "fidelity tension": program developers and prevention researchers are concerned that changes in the implementation of EBPs will dilute effectiveness, while community leaders and practitioners are concerned that "one size does not fit all." Case A staff are charged with surveilling the providers in an effort to reduce "unapproved" adaptations of the EBPs as they are implemented in the field. The evaluation packets which the providers complete and return to the Case A project include an implementation checklist in which the providers ideally report any unapproved adaptations they have made to the curriculum.
Often, the providers will report that the EBP was implemented with perfect fidelity, yet then informally disclose that the contingencies involved in implementing the curricula in community contexts required significant deviation from the script. Some informal unapproved adaptations appear to be made unreflectively by community educators (e.g., simply omitting some activities to make the module fit into the time allocation of school health classes, or responding to difficult questions about sex with incorrect information), while others are intentional, based on the expertise and agency that is expected of skilled, professional educators (e.g., an educator who works with gang members knows that the activities in the EBP will not retain the attention of participants, so changes are made to render the curriculum more accessible to the learners). In either case, staff from the Case A project are engaged with the providers to manage adaptations and increase the quality of implementation.

However, when one popular program on the list of approved programs was found to focus on prevention of sexually transmitted infections, and not on pregnancy prevention (which is the purpose of the initiative), the program developer apparently "tacked on" a curricular unit on pregnancy prevention to the existing EBP and "sent it back out." But, since the original EBP was included on the official, sanctioned list because it had undergone a "rigorous" evaluation (that is, with an RCT design), it should have been—but apparently was not—reevaluated after such a significant change; this irony was not lost on project staff and some of the educators with whom they partner. When asked why educators cannot adapt EBPs while developers can, one study participant replied sarcastically that the developers "just know." This instance provides a stark example of the ways knowledge hierarchies are established and reinforced through EBP work. It echoes Goldacre "just knowing" that Holmes et al.'s postmodern critique of EBM is nonsense.

This instance reveals the complex, contingent nature of EBP work. Behind the veneer of "science-based education," it appears that who can and cannot adapt an EBP is not about evidence derived from an RCT, but rather about who is in a privileged position as the expert "knower." In a similar instance that calls the "scientific" veneer of the EBP enterprise into question, uncertainty exists regarding what degree of youth attendance in EBP sessions is sufficient to constitute participation. It is a question about "dosage." In many cases, in the controlled context in which the EBP was studied via an RCT, youth attendance was 100%. Often, this was because the youth were given a financial incentive to participate as part of the study. Obviously, the contexts in which the EBPs are implemented in actual community education settings differ a great deal from those of the original study. One difference is that youth participation is much lower. People from Case A realized that this was an issue and sought to find out what percentage of participation would count as a sufficient dosage. Eventually, administrators representing one of the funding agencies involved relayed to Case A that 75% attendance constituted sufficient dosage. However, it is unclear what rigorous trials were conducted to arrive at the 75% standard.

4.2. Case B

The partnership that comprises the Kenyan project started unofficially in December 2004 and officially in February 2006. The Kenyan faith-based organization had been providing a variety of educational programs and services across Kenya for many years. In the early 2000s, they developed a series of five modules, or lesson plans, for HIV/AIDS prevention education. They requested funding from the U.S. President's Emergency Plan for AIDS Relief (PEPFAR). The Centers for Disease Control and Prevention (CDC), through their office in Kenya, is responsible for implementing PEPFAR in Kenya. The agency overseeing the funds that support the Case B project is the Global HIV/AIDS Program of the U.S. Health Resources and Services Administration (HRSA), part of HHS (the same agency funding Case A). CDC Kenya oversees PEPFAR funding in Kenya because HRSA does not have a presence in that country.

When the Kenyan faith-based organization contacted CDC Kenya to receive funding, CDC then approached a research group at a U.S. university to see if they would partner with the Kenyan organization—such a partnership was required in order for the Kenyan group to receive PEPFAR funding. The U.S. research group, which was looking for potential partners for international HIV/AIDS prevention education work, involves a small group of faculty, staff, and graduate students. The partnership that comprises Case B is focused on promoting youth abstinence and behavior change. The main curriculum implemented by the Case B project is a faith-based abstinence and behavior change curriculum targeting youth ages 11–14. The program is extra-curricular; teachers volunteer to participate in trainings and then implement the program with their school children (e.g., before school, after school, on Saturdays, etc.). Early in the partnership, the U.S.-based team worked closely with the Kenyan team to refine and develop the existing curriculum. In subsequent years, the U.S. team has helped to design and deliver training, capacity building, and technical assistance to national teams of teachers and administrators.

The curriculum, which is referred to as "homegrown" because it was developed by the educators of Case B instead of emerging from a research study, includes various characteristics of evidence-based interventions and is rooted in health behavior change theories, including the Social Cognitive Theory and the Theory of Reasoned Action. Furthermore, according to Case B documents, "the learning modules and accompanying resources incorporate Christian and traditional Kenyan and African values to more effectively reinforce and sustain the messages." In the early years, the individuals involved in the Case B project had a fair amount of control over how they developed and evaluated their homegrown curriculum. But starting in the years between 2008 and 2010, the CDC in Kenya and elsewhere in Africa increased the emphasis on implementing evidence-based (or "evidence-informed") interventions. Case B project staff became acutely aware that they had not generated sufficiently "rigorous" evidence (meaning evidence derived from an RCT) of their program's effectiveness.

One of the initiatives undertaken by the Case B project since 2011 involves a "targeted evaluation," using a pre/post design to assess change over time on some participant self-reported behavior outcomes with a stratified random sample of schools across Kenya. The importance of this targeted evaluation cannot be overstated: the evaluation is explicitly linked to the homegrown curriculum's continued existence. CDC, in the United States and abroad, has a number of initiatives related to their Diffusion of Effective Behavioral Interventions (DEBI) project, which began in 1999 after they published the first CDC compendium of evidence-based behavioral interventions, listing programs that have sufficient evidence of effectiveness based on "rigorous" evaluation and research (with experimental or quasi-experimental designs). Many CDC scientists and administrators related to HIV/AIDS EBP efforts in Africa have published on this topic, focusing especially on the adaptation of U.S.-based EBPs to African contexts (e.g., Fitzgerald et al., 1999; McKleroy et al., 2006; Parker et al., 2012; Poulsen et al., 2010).

Starting in 2008, the CDC and the Kenya National AIDS and STI Control Program (NASCOP) Evidence-Informed Behavioral Intervention Technical Working Group began addressing the need to ensure that HIV/AIDS education across the country was evidence-based. Together they developed the Kenya HIV Prevention Intervention Assessment Tool (KHPIAT), a document designed to "guide the systematic assessment of homegrown HIV prevention behavioral interventions that have not been rigorously evaluated . . . [and help] ensure that behavioral interventions implemented in the country are of high quality and include characteristics of effective interventions."
At the time this study was conducted, the Case B project was awaiting the release of the final draft of the KHPIAT, hoping that their curriculum would be rated as having enough characteristics of an evidence-based intervention to maintain its funding. Simultaneously, the Case B project was working to complete their targeted outcome evaluation, which they hoped would further bolster the evidence base for their program. According to one administrator with the CDC:

   At different times, when we were really making a movement away from things without evidence to EBIs [evidence-based interventions], we considered at different times telling [the project] they need to move towards one of the EBIs, but because they have this targeted evaluation in place and because they've based it on characteristics of effective programs and they kept such a strong model with the religious tone, too, which is so important here and culturally relevant in Kenya, we've continued to let it go and try to improve the quality.

At the time my study concluded, the Case B project was still permitted to implement their homegrown program, yet they also were required to implement an EBP that had been adapted from a U.S.-based program, and were still involved in collecting and analyzing outcome data from their own randomized trial (which was referred to sarcastically by one Case B staff person as "our great positivist model"). Both of these additional sets of activities were very taxing to the limited capacity of the project staff, yet they were pleased that they could continue to implement their homegrown curriculum. Additionally, they appreciated that the quality of its implementation was increasing as a result of the heightened emphasis on evidence-based interventions.

As with Case A, adaptation is an important issue facing EBP work in Kenya. As mentioned above, the CDC is involved in adapting U.S.-derived EBPs for implementation in Kenya, a process they refer to as "Kenyanizing" the curricula. This raises two related questions. First, there is a question of external validity: if a program worked with African American youth in Baltimore, should it be expected to work with African youth in Kisumu? Second, how much adaptation is necessary and sufficient? Are there some aspects of the curriculum—such as the philosophical assumptions implicit in the underlying theories (e.g., Social Cognitive Theory and the Theory of Reasoned Action)—that are essentially universal, and others—such as the names and verbal mannerisms of the youth in the curriculum's examples and vignettes—that are culturally specific? How do we know which elements of the curriculum are universal and which require adaptation? The CDC attempts to be systematic about the process of Kenyanization, but has experienced some difficulties. For instance, a CDC administrator described observing the implementation of a U.S.-derived EBP that had supposedly been adapted to the Kenyan context, yet which still contained American mannerisms, concluding that "if you start with the Kenyan process, it is so much more Kenyan than taking an American product and trying to make it Kenyan." Adapted U.S.-derived EBPs "still don't come across with the same Kenyan sound as probably [the partnership's homegrown curriculum] does."

Another difficulty in adaptation arose when a U.S.-derived EBP was adapted and split into two versions, one for use with primary-school age Kenyans (which promoted abstinence and behavior change) and another for use with older youth (which included promotion of condom use). Community educators were trained on both versions together, and then some educators mistakenly went into a Catholic elementary school and implemented the version [that included condom-use promotion; the reaction from] the parents was significant. Again, this highlights the nuances involved in "Kenyanizing" imported EBPs and shows where locally developed curricula may have an advantage.

4.3. Case C

Case C is focused on an organization that does not work with adolescent sexual health EBPs, but rather takes a broader approach to working with youth. As such, by comparing it to Cases A and B, divergent and conflicting perspectives and assumptions about what non-formal education is and about how it should be informed by evidence become apparent. As mentioned above, the organization comprising Case C is the youth development initiative of one state's Cooperative Extension system. As such, it is active across the entire state in which it is located, and is also affiliated with the national Extension system. In one sense, as part of the land-grant mission, the organization has been connecting research and practice for one hundred years. Yet especially because of shrinking financial resources, and because in 2011 the Case C organization became affiliated with a center for translational research in the social and behavioral sciences, the organization was experiencing an increased emphasis on being evidence-based. One university-based administrator expressed an interest in only allowing the organization's community-based educators to implement EBPs that were connected to university faculty's research. In effect, this would have meant eliminating the Case C organization, since there are practically no EBPs that fit that criterion.

The Case C organization appears to be incommensurable with the educational framework that EBPs represent. For example, the organization emphasizes research-based practices—e.g., the practices associated with positive youth development, or PYD (Damon & Gregory, 2002; Delgado, 2002; Hamilton, Hamilton, & Pittman, 2004; Jelicic, Bobek, Phelps, Lerner, & Lerner, 2007; Lerner & Benson, 2003; Pittman, Irby, & Ferber, 2000; Roth & Brooks-Gunn, 2003)—rather than research-based programs. Relatedly, while some specific outcomes such as reduced teen pregnancy or increased interest in science are important to the Case C organization, it tends to focus more on providing a positive environment for youth so as to foster broader PYD outcomes such as a youth's sense of belonging, mastery, independence, and generosity. This raises the question: Are some educational initiatives and some social phenomena intrinsically more amenable to an RCT design and to being packaged as an EBP?

Educators involved in the Case C organization—like some people involved with Cases A and B as well—question the theory of change that is manifest in an EBP. They do not think that if a youth attends 75% of eight 2-h modules, then that youth will make positive changes to her or his behavior. Summarizing this point, one administrator involved in Case A stated:

   I have a healthy skepticism about behavior change to begin with, and behavior change among young people and the actual ultimate effectiveness of a curriculum that's going to change behaviors, such basic behavior as sexual activity . . . based on some intervention provided for four weeks or six weeks. . . . I think the impetus, especially at the federal level is they wanted to be able to document to Congress that the money that they were spending on this wasn't just a total waste, so . . . they created a whole industry of [EBPs] that comes with the training and buying the curriculum; people are experts. . . . I know the intention is good, I just think the execution is shoddy and the theory behind it is tenuous at best about that whole, providing education is going to lead to behavior change, especially long term behavior change.

People such as this administrator and the educators involved in
including condoms. The backlash from the local school officials and Case C perceive the complex and dynamic nature of youths’ lives
Importantly, during the time this study was conducted, the educators of Case C had the agency to create their own homegrown programs and did not need to choose a curriculum from a list of EBPs. According to many Case C staff, this allows them to be more responsive to community needs. One Case C administrator, speaking about Cooperative Extension in general, describes their overarching approach to community education this way: (1) people in the community identify the priorities that matter to them; (2) they then think in terms of how they want to address that issue—the solutions that they might come up with are theirs; (3) they put their own resources on the table; (4) if they bring in other resources from the outside, they are brought in on local people's terms; and (5) there is an institutional underpinning for the work that gets done. It is important to note, however, that Cooperative Extension is often also perceived more as a "technology-transfer" system, and in that way is seen to be more similar to EBP-implementing organizations. It is a debate that continues within the Cooperative Extension system.

4.4. Divergent perspectives and assumptions about non-formal education

Applying Biesta's theoretical analysis to the data presented above can help me analyze the epistemological and ontological politics at play in these cases, particularly in regard to the divergent philosophical perspectives and assumptions that underlie (and are reproduced by) those practices. For example, data about the funder-mandated EBPs with which the projects in Cases A and B work suggest that the creation and dissemination of those programs manifest and tacitly assume a representational epistemology. Those specific EBPs were created and evaluated in one specific context under explicitly manipulated conditions, producing knowledge of the programs' outcomes that is then assumed to hold true over time and space, as long as the EBP is implemented with fidelity. Yet the representational epistemology that is tacitly assumed ignores the multitude of ways in which the RCTs which earned those EBPs their "evidence-based" status were actually interventions. Those experiments were highly transactional. To think that the knowledge created about an EBP's outcomes through the transactional processes of an RCT can somehow be generalized into future implementations of that same curriculum, under vastly different contexts, requires a rather large leap of epistemic logic. It is a problem of external validity and generalizability.

In the ontological and praxeological domains, the data indicate that the developers, purveyors, and funders involved in implementing the EBP work in Cases A and B tacitly rely on mechanistic notions of behavior change that are at the heart of the broader "evidence-based" movement. Interestingly, when participants in my study spoke explicitly to these assumptions about the nature of human behavior change, it was to critique the EBP endeavors in which they were involved. For example, the staff of the project in Case A spoke longingly about their years spent working on holistic community-wide PYD efforts. They indicated that such work was much more in tune with their actual espoused theories of behavior change. As quoted previously, one administrator expressed "healthy skepticism" of the theories that frame the entire EBP endeavor, calling them "tenuous at best." One staff member put it this way:

When you think about social behavior or community based behavior there are so many factors that influence that. And when you take a rigorous research method, you're trying to eliminate all those factors and you focus on the one or two. And so that's not that realistic. There's no community, there's no social behavior that's just influenced by one thing.

In similar fashion, both the Kenyan and American staff involved in Case B, based on their professional expertise in non-formal education in Kenya, perceived that the EBP model (especially with its privileging of U.S.-derived EBPs) ignored or effaced many cultural and contextual complexities that are fundamentally important considerations in their praxis. There are stark, unanswered ontological (and ethical) questions that surround the process of "Kenyanizing" an EBP. How much adaptation is necessary and sufficient? How do we know which elements of the curriculum are universal and which require adaptation? Some could (and do) treat these as technical, empirical questions. Yet values, norms, and the ontological status of culture must somehow be admitted into such conversations as well.

Evidence of the ways in which the application deficit, in the praxeological domain, changes educational realities through EBP work was plentiful. For example, the project in Case A fundamentally shifted its mode of working and its pedagogical theory-in-action when it converted to the EBP model. The community educators with whom the project staff work did so as well. The identity and subjectivity of what it means to do adolescent sexual health education work across the entire state took on completely different ontological and pedagogical forms. Encapsulating this point, one university researcher not directly associated with the project, yet who was previously a community-based adolescent sexual health educator, learned that the current EBP model requires educators to literally read from the curriculum's script and asked (incredulously and rhetorically), "Well, who would ever want to be an educator then?!" It is important to note that some community educators like being able to "hang their hats" on the EBP, yet they simultaneously realize that they have become less able to respond to community needs and to exercise their professional judgment. Concerning the project in Case B, the homegrown program would have been cut if they did not engage in the EBP movement's model, one stark way in which the application of scientific knowledge remakes the world. One less obvious way it does so pertains to how the identities and subjectivities of Kenyan youth might be remade depending on which type of sexual health education is offered to them: are the "Kenyanized" EBPs the project implements overly "Americanizing" the Kenyan youth? Concerning the organization in Case C, if the administrator who stated that they should only implement EBPs derived from university research had implemented that plan, the organization would have been eliminated. Corroborating Latour's and Biesta's claims, these examples show that the application of scientific knowledge in society is ontologically performative and transformative. Reflecting on these examples, I agree with Biesta that the application deficit involves "ideology that makes the incorporation of practices into particular networks invisible" (Biesta, 2010, p. 499).

In light of the analyses presented above, in order to synthesize and succinctly articulate my comparison of these three cases, I parsed the themes that I identified about divergent perspectives and assumptions on what non-formal education is into two fundamentally different modes of education: Mode 1 and Mode 2 (see Table 1). The primary axis of difference, which relates to Biesta's discussion of the inevitably teleological character of education, is whether non-formal education is seen and done as an infrastructure for the dissemination of scientific information or as a site of grassroots knowledge sharing. This first axis is closely related to a second one regarding different perspectives about what the essential unit of educational interaction is: a program (meaning a tightly bounded and scripted curriculum) or a set of practices. As I present in the table, these two axes of difference relate to other variable ways of seeing and doing non-formal education.

It is important to note that, in practice, the two modes are not (usually) so starkly dichotomized as this table suggests. In the day-to-day contingencies of people's work processes, the two tend to be blended together, with one or the other being more or less emphasized depending on the particular situation. However, the table offers a simplified heuristic overview of these divergent perspectives and assumptions that constitute a key finding of my study.
Table 1
Divergent perspectives and assumptions about what non-formal education is.

Non-formal education is...
  Mode 1: An infrastructure for the dissemination of scientific information
  Mode 2: A site of grassroots knowledge sharing
Program planning and evaluation decisions are...
  Mode 1: Campus- or scientist-driven
  Mode 2: Community-driven
The essential unit of educational interaction is...
  Mode 1: A program (meaning a tightly bounded and scripted curriculum)
  Mode 2: A set of practices and processes
Behavior change is assumed to be...
  Mode 1: Simple, or complicated; linear
  Mode 2: Complex; non-linear
The focus is on...
  Mode 1: Content delivery and specific outcomes
  Mode 2: Process facilitation and general outcomes
Both modes value evidence and knowledge. Both are motivated by democratic and ethical concerns. Yet each mode engages with its democratic and ethical project differently. For instance, EBPs and Mode 1 education value episteme over techne and phronesis. Mode 1 education seeks to advance democracy by directing and justifying action with scientific knowledge and technique, an example of the processes "through which science and technology have been harnessed to the development of modern liberal-democratic concepts of action, authority, and accountability" (Ezrahi, 1990, p. 1). Mode 2 education, on the other hand, values both techne and phronesis over episteme. It engages with its democratic project by questioning abstract hierarchies of knowledge and power. Mode 2 education is concerned with cognitive, or epistemic justice, defined as "the constitutional right of different systems of knowledge to exist as part of dialogue and debate" (Visvanathan, 2005, p. 92). It "has to do with the coexistence of many knowledges in the world and the relation between the abstract hierarchies which constitute them and the unequal economic and political power relations which produce and reproduce increasingly more severe social injustice" (Toulmin, 2007, p. xv). According to Visvanathan, "One has to realize that epistemology is not a remote, exotic term. It determines life chances" (2005, p. 84), which evokes Scriven's claim regarding what is at stake in the "gold standard" debates: "This issue is not a mere academic dispute, and should be treated as one involving the welfare of very many people, not just the egos of a few" (2008, p. 24). As such, what Table 1 helps put into sharp relief is that, while the conflicts and tensions between these divergent perspectives may appear academic or irrelevant, they actually have stark implications for what non-formal education is and can be in society.

5. Conclusions: lessons learned

The study described here suggests that the current dominant approach to making non-formal education more evidence-based by way of EBPs is seriously flawed. The self-evident superiority of evidence-based programs must be revisited. In this study, I found that the practices and processes that comprise EBP work are far from "scientific," in the honorific sense of the term: university-based researchers develop and test EBPs in contexts that differ vastly from those where the EBPs are eventually implemented, seriously stretching the philosophical limits of external validity and generalizability; the list of approved EBPs for a pregnancy prevention initiative includes a curriculum that makes no mention of pregnancy; unapproved accidental adaptations as well as unapproved purposeful adaptations are commonplace in the settings where community educators implement the EBPs; tacitly (and paradoxically) approved adaptations are made by the developers, apparently based on their more privileged position as expert knowers, while community educators feel limits placed on their agency to respond and adapt to the needs and realities in their local communities; and project administrators admit that the EBP model's espoused theories of behavior change, processes of "Kenyanization" and adaptation, and overall efficacy are all highly questionable.

At least two ironies emerge related to my conclusions about the contingent and messy implementation of "science-based education." One is that, because the EBPs have already been proven to be effective (in some cases, by way of one RCT fifteen or twenty years ago), the curricula are generally implemented in communities with no outcome evaluation whatsoever. Another irony is that the "evidence-based education" movement itself is non-experimental, since it has not undergone an RCT and has not been proven to be better than alternative approaches; thus it "reflects the introduction of ideology into the heart of the empirical paradigm" (Chelimsky, 2007, p. 32; see also Scriven, 2008). If the dominant approach to making non-formal education more evidence-based by way of EBPs is so flawed, what alternative approaches are available to program planners, administrators, researchers, and evaluators?

I propose instead a refocusing of the field's efforts on a more methodologically pluralistic approach involving support for evidence-based practices. Doing so will require more nuanced conceptualizations of grades of evidence (Chatterji, 2007), varieties of validity (Chen, 2010; Cronbach, 1982), and definitions of rigor (Urban, Hargraves, & Trochim, 2014). This will perhaps come more naturally to (or already be the norm for) program planners, educators, administrators, researchers, and evaluators who are primarily engaged in Mode 2 educational contexts. Yet there are ways to engender methodological plurality and practice-focused evaluation in Mode 1 contexts, too. One suggestion for how to do this was articulated by the president of the William T. Grant Foundation in their 2009 annual report. The foundation supports the goal of evidence-based youth development education, but they "have come to question the predominant approach to achieving that goal. That approach assumes that widespread improvement in child and youth outcomes will occur through 'scaling-up' brand-name programs, models, and organizations . . . and its track record is modest at best" (Granger, 2010, p. 5). According to Granger:

In today's vernacular, we need more research attention paid to why things work as the missing ingredient in the 'what works' agenda. . . . Despite the research community's ability to identify promising programs, our Foundation has become increasingly convinced that it is not possible to take such programs to scale in a way that maintains their effectiveness. (2010, pp. 5–6, emphasis in the original)

The shift in focus to why things work "led the Foundation to examine how policies and programs shape the daily experiences that are the pathways to individual-level change" (Granger, 2010, p. 8), phenomena which are not amenable to study via an RCT. Their new research agenda focuses on how resources, such as "human, social, economic, cultural, and physical capital, . . . seem to condition the social interactions in a setting in a way that creates and reinforces setting-level phenomena—such as norms and routines—that lead to differences in youth outcomes" (Granger, 2010, p. 8).
To do so, they encourage their grantees "to integrate qualitative and quantitative methods to better theorize the processes and mechanisms that lead to improved youth outcomes . . . [and] support work on some of the methodological issues inherent in assessing the precision of these setting-level measures" (Granger, 2010, p. 9). In this area of program evaluation, the W. T. Grant Foundation offers concrete alternatives in response to the flaws of the dominant EBP methods.

Another suggestion for how to respond to the shortcomings of the dominant EBP model comes from Donald Campbell. Ironically—based on Campbell's later work—there is reason to believe that he might disparage the evidence-based education approaches favored by the clearinghouse that bears his name. Campbell took issue with approaches that employ designated operational definitions for theoretical terms. He saw such definitional operationalism as a "dogma" which "discouraged support for exploring multiple approaches to measuring 'the same thing' . . ." (Campbell, 1984, p. 18). In other words, such approaches are epistemically unjust. Campbell called definitional operationalism an "unmitigated disaster," imported from logical positivism, "which persists long after the substantial revision or rejection of positivism within the philosophy of science. It persists most perniciously in social science, in the accountability movement, or in managerial control efforts employing single explicit quantitative criteria" (Campbell, 1984, p. 18). While the Campbell Collaboration is not based solely on definitional operationalism, its methodological and evidentiary hierarchies—with their over-emphasis on RCTs—do lead to the same sort of conceptual narrowing that Campbell warns against.

As an alternative, Campbell suggested that we need a "cross-validational model of program dissemination and validation," with "large numbers of decentralized local innovators and independent adopters, independently making the many ad hoc decisions about implementation and measurement" (1984, p. 19). We should support "a heterogeneity of programs, each evaluating themselves until they feel they have a package worth others borrowing, and support those who borrow to cross-validate the efficacy" (Campbell, 1984, p. 19). In this truly experimenting society, evaluative thinking and reasoning would trump methodological dogmatism. Instead of the epistemological and pedagogical monoculture represented by the EBP movement, heeding Campbell's call would foster methodological and epistemological plurality and would leverage evaluation's democratic potential. The question remains: How could the field of evaluation best operationalize Campbell's cross-validational model? Answering that question is beyond the scope of this paper.

In conclusion, if the RCT is no longer considered the gold standard, and if we realize that the dominant approach to making education more evidence-based by way of EBPs is seriously flawed, then more phenomena (especially phenomena that involve open, recursive, dynamic, non-linear, values-explicit practices and processes) could be legitimated as realities that are worthy to be researched and evaluated, and thus valued as worthy to exist. I hope that the theoretical and empirical analysis of the three cases presented above helps open up dialog on questions about what education is, what research is, what science is, what practice is, and what evaluative perspectives should be brought to bear on all of those questions. Such discussions are crucial if we are to seek a more effective and ethical praxis of evaluation and program planning which embraces rather than effaces the intricacies and nuances that characterize social human action.

Acknowledgements

This study is drawn from the author's doctoral dissertation. The author would like to thank the members of the thesis committee for their guidance on this project: Arthur Wilson, William Trochim, and Stacey Langwick. The author is grateful to Dr. Trochim and to four anonymous reviewers for their very helpful comments on an earlier version of this article. The author would also like to thank all those who participated in this study. The study was supported in part by a Graduate Research Assistantship funded through National Science Foundation Award # 0814364 and by the Institute for African Development at The Mario Einaudi Center for International Studies.

References

ATLAS.ti 7.0 (2014). Berlin: ATLAS.ti Scientific Software Development GmbH.
Bernstein, R. J. (1983). Beyond objectivism and relativism: Science, hermeneutics, and praxis. Philadelphia: University of Pennsylvania Press.
Biesta, G. J. J. (2007). Why "what works" won't work: Evidence-based practice and the democratic deficit in educational research. Educational Theory, 57(1), 1–22.
Biesta, G. J. J. (2010). Why "what works" still won't work: From evidence-based education to value-based education. Studies in Philosophy and Education, 29(5), 491–503.
Campbell, D. T. (1984). Science policy from a naturalistic sociological epistemology. PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association (pp. 14–29). Philosophy of Science Association.
Carlson, J. S., & Levin, J. R. (2005). The No Child Left Behind legislation: Educational research and federal funding. Greenwich, CT: Information Age Publishing.
Chatterji, M. (2007). Grades of evidence: Variability in quality of findings in effectiveness studies of complex field interventions. American Journal of Evaluation, 28(3), 239–255.
Chelimsky, E. (2007). Factors influencing the choice of methods in federal evaluation practice. In G. Julnes & D. Rog (Eds.), Informing federal policies on evaluation methodology: Building the evidence base for method choice in government sponsored evaluation. New Directions for Evaluation (Vol. 113, pp. 13–33).
Chen, H. T. (2010). The bottom-up approach to integrative validity: A new perspective for program evaluation. Evaluation and Program Planning, 33(3), 205–214.
Coalition for Evidence-Based Policy (CEBP) (2003). Identifying and implementing educational practices supported by rigorous evidence: A user friendly guide. Washington, DC: US Dept of Education Institute of Education Sciences National Center for Education Evaluation and Regional Assistance.
Coalition for Evidence-Based Policy (CEBP) (2012). Our mission. Retrieved from http://coalition4evidence.org/
Couto, J. S. (1998). Evidence-based medicine: A Kuhnian perspective of a transvestite non-theory. Journal of Evaluation in Clinical Practice, 4(4), 267–275.
Cronbach, L. J. (1982). Designing evaluations of educational and social programs. San Francisco: Jossey-Bass Publishers.
Damon, W., & Gregory, A. (2002). Bringing in a new era in the field of youth development. In R. Lerner, F. Jacobs, & D. Wertlieb (Eds.), Handbook of applied developmental science (Vol. 1, pp. 407–420). New York: Wiley.
Delgado, M. (2002). New frontiers for youth development in the twenty-first century. New York: Columbia University Press.
Donaldson, S. I. (2009). In search of the blueprint for an evidence-based global society. In S. I. Donaldson, C. A. Christie, & M. M. Mark (Eds.), What counts as credible evidence in applied research and evaluation practice? (pp. 2–18). Los Angeles: Sage.
Donaldson, S. I., & Christie, C. A. (2007). The 2004 Claremont debate: Lipsey vs. Scriven. Journal of Multidisciplinary Evaluation, 2(3), 60–77.
Donaldson, S. I., Christie, C. A., & Mark, M. M. (Eds.). (2009). What counts as credible evidence in applied research and evaluation practice? Los Angeles: Sage.
Duflo, E., & Kremer, M. (2003). Use of randomization in the evaluation of development effectiveness. Paper prepared for the World Bank Operations Evaluation Department (OED) conference on evaluation and development effectiveness, Washington, D.C., 15–16 July. Retrieved from http://economics.mit.edu/files/765
Durlak, J. A., & DuPre, E. P. (2008). Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology, 41(3–4), 327–350.
Erickson, F., & Gutierrez, K. (2002). Comment: Culture, rigor, and science in educational research. Educational Researcher, 31(8), 21–24.
Ezrahi, Y. (1990). The descent of Icarus: Science and the transformation of contemporary democracy. Cambridge, MA: Harvard University Press.
Feuer, M. J., Towne, L., & Shavelson, R. J. (2002). Scientific culture and educational research. Educational Researcher, 31(8), 4–14.
Firpo-Triplet, R., & Fuller, T. (2012). General adaptation guidance: A guide to adapting evidence-based sexual health curricula. ETR Associates and the Centers for Disease Control and Prevention Division of Reproductive Health.
Fitzgerald, A. M., Stanton, B. F., Terreri, N., Shipena, H., Li, X., Kahihuata, J., et al. (1999). Use of Western-based HIV risk-reduction interventions targeting adolescents in an African setting. Journal of Adolescent Health: Official Publication of the Society for Adolescent Medicine, 25(1), 52–61.
Flyvbjerg, B. (2001). Making social science matter: Why social inquiry fails and how it can succeed again. Oxford, UK: Cambridge University Press.
Foucault, M. (1977). Discipline and punish: The birth of the prison. New York: Pantheon Books.
Foucault, M. (1980). Power/knowledge. Brighton: Harvester.
Galbraith, J. S., Stanton, B., Boekeloo, B., King, W., Desmond, S., Howard, D., et al. (2009). Exploring implementation and fidelity of evidence-based behavioral interventions for HIV prevention: Lessons learned from the Focus on Kids diffusion case study. Health Education & Behavior, 36(3), 532–549.
Goldacre, B. (2006, August). Objectionable 'objectives'. The Guardian. Retrieved from http://www.guardian.co.uk/science/2006/aug/19/badscience.uknews
Granger, R. C. (2010). Improving practice at scale. In William T. Grant Foundation 2009 annual report. New York, NY: William T. Grant Foundation. Retrieved from http://wtgrantfoundation.org/about_us/annual_reports
Greene, J. C. (2009). Evidence as "proof" and evidence as "inkling". In S. I. Donaldson, C. A. Christie, & M. M. Mark (Eds.), What counts as credible evidence in applied research and evaluation practice? (pp. 153–167). Los Angeles: Sage.
Greene, J. C., Lipsey, M. W., Schwandt, T. A., Smith, N. L., & Tharp, R. G. (2007). Method choice: Five discussant commentaries. In G. Julnes & D. Rog (Eds.), Informing federal policies on evaluation methodology: Building the evidence base for method choice in government sponsored evaluation. New Directions for Evaluation (Vol. 113, pp. 111–127).
Hamilton, S., Hamilton, M., & Pittman, K. (2004). Principles for youth development. In S. F. Hamilton & M. A. Hamilton (Eds.), The handbook of youth development: Coming of age in American communities (pp. 3–22). Thousand Oaks, CA: Sage.
Henry, G. T. (2009). When getting it right matters: The case for high-quality policy and program impact evaluations. In S. I. Donaldson, C. A. Christie, & M. M. Mark (Eds.), What counts as credible evidence in applied research and evaluation practice? (pp. 32–50). Los Angeles: Sage.
Holmes, D., Murray, S. J., Perron, A., & Rail, G. (2006). Deconstructing the evidence-based discourse in health sciences: Truth, power and fascism. International Journal of Evidence-Based Healthcare, 4(3), 180–186.
Jelicic, H., Bobek, D., Phelps, E., Lerner, R., & Lerner, J. (2007). Using positive youth development to predict contribution and risk behaviors in early adolescence: Findings from the first two waves of the 4-H Study of Positive Youth Development. International Journal of Behavioral Development, 31(3), 263–273.
Kellert, S. H. (2008). Borrowed knowledge: Chaos theory and the challenge of learning across disciplines. Chicago: University of Chicago Press.
Kulynych, J. J. (1997). Performing politics: Foucault, Habermas, and postmodern participation. Polity, 30(2), 315–346.
Latour, B. (1983). Give me a laboratory and I will raise the world. In K. D. Knorr & M. Mulkay (Eds.), Science observed (pp. 141–170). London: Sage.
Law, J., & Urry, J. (2004). Enacting the social. Economy and Society, 33(3), 390–410.
Leach, M., Scoones, I., & Wynne, B. (2005). Introduction: Science, citizenship, and globalization. In M. Leach, I. Scoones, & B. Wynne (Eds.), Science and citizens: Globalization and the challenge of engagement (pp. 1–14). London: Zed Books.
Lerner, R. M., & Benson, P. L. (2003). Developmental assets and asset-building communities: Implications for research, policy, and practice. New York: Kluwer Academic/Plenum.
Lindenbaum, S., & Lock, M. M. (1993). Knowledge, power, and practice: The anthropology of medicine and everyday life. Berkeley: University of California Press.
Longino, H. (1993). Subject, power, and knowledge: Description and prescription in feminist philosophies of science. In L. Alcoff & E. Potter (Eds.), Feminist epistemologies (pp. 101–120). New York: Routledge.
Loughlin, M. (2007). Style, substance, Newspeak 'and all that': A commentary on Murray et al. (2007) and an open challenge to Goldacre and other 'offended' apologists for EBM. Journal of Evaluation in Clinical Practice, 13(4), 517–521.
Mathematica Policy Research and Child Trends (2010). Identifying programs that impact teen pregnancy, sexually transmitted infections, and associated sexual risk behaviors. Review protocol, Version 1.0. Retrieved from http://www.hhs.gov/ash/oah/oah-initiatives/teen_pregnancy/db/eb-programs-review-v1.pdf
McKleroy, V. S., Galbraith, J. S., Cummings, B., Jones, P., Harshbarger, C., Collins, C., et al. (2006). Adapting evidence-based behavioral interventions for new settings and target populations. AIDS Education & Prevention, 18(Suppl.), 59–73.
Mol, A., & Berg, M. (1998). Differences in medicine: An introduction. In M. Berg & A. Mol (Eds.), Differences in medicine: Unraveling practices, techniques, and bodies (pp. 1–12). Durham, NC: Duke University Press.
Morrison, K. (2009). Causation in educational research. London: Routledge.
Mosteller, F., & Boruch, R. (Eds.). (2002). Evidence matters: Randomized trials in education research. Brookings Institution.
National Research Council (NRC) (2002). Scientific research in education. Washington, DC: National Academy Press.
Parker, L., Maman, S., Pettifor, A., Chalachala, J. L., Edmonds, A., Golin, C. E., et al. (2012). Adaptation of a US evidence-based positive prevention intervention for youth living with HIV/AIDS in Kinshasa, Democratic Republic of the Congo. Evaluation and Program Planning, 36(1), 124–135.
Peters, S. J. (2006). 'Every farmer should be awakened': Liberty Hyde Bailey's vision of agricultural extension work. Agricultural History, 80(2), 190–219.
Peters, S. J. (2007). Changing the story about higher education's public purposes and work: Land-grants, liberty, and the Little Country Theater. Foreseeable Futures Position Paper #6, published by Imagining America: Artists and Scholars in Public Life. Retrieved from http://imaginingamerica.org/fg-item/changing-the-story-about-higher-education
Peters, S. J., Alter, T. R., & Schwartzbach, N. (2008). Unsettling a settled discourse: Faculty views of the meaning and significance of the land-grant mission. Journal of Higher Education Outreach and Engagement, 12(2), 33–66.
Pittman, K., Irby, M., & Ferber, T. (2000). Unfinished business: Further reflections on a decade of promoting youth development. In N. Jaffe (Ed.), Youth development: Issues, challenges, and directions (pp. 17–64). Philadelphia, PA: Public/Private Ventures.
Pohl, S., Steiner, P., Eisermann, J., Soellner, R. R., & Cook, T. D. (2009). Unbiased causal inference from an observational study: Results of a within-study comparison. Educational Evaluation and Policy Analysis, 31(4), 463–479.
Poulsen, M. N., Vandenhoudt, H., Wyckoff, S. C., Obong'o, C. O., Ochura, J., Njika, G., et al. (2010). Cultural adaptation of a US evidence-based parenting intervention for rural Western Kenya: From Parents Matter! to Families Matter! AIDS Education and Prevention, 22(4), 273–285.
Public Law 107-110 (2014). No Child Left Behind Act of 2001.
Rallis, S. (2009). Reasoning with rigor and probity: Ethical premises for credible evidence. In S. I. Donaldson, C. A. Christie, & M. M. Mark (Eds.), What counts as credible evidence in applied research and evaluation practice? (pp. 168–180). Los Angeles: Sage.
Roth, J. L., & Brooks-Gunn, J. (2003). What exactly is a youth development program? Answers from research and practice. Applied Developmental Science, 7(2), 94–111.
Sackett, D. L., Rosenberg, W. M. C., Gray, J. A. M., & Haynes, R. B. (1996). Evidence based medicine: What it is and what it isn't. British Medical Journal, 312(7023), 71.
Schön, D. (1983). The reflective practitioner: How professionals think in action. London: Temple Smith.
Scriven, M. (2008). A summative evaluation of RCT methodology: An alternative approach to causal research. Journal of Multidisciplinary Evaluation, 5(9), 11–24.
Scriven, M. (2013, May). Re: Federal evaluation policy for selected grant programs and the definition of "evidence." Email message to the EvalTalk listserv.
Shadish, W. R. (2011). Randomized controlled studies and alternative designs in outcome studies: Challenges and opportunities. Research on Social Work Practice, 21(6), 636–643.
Shadish, W. R., Rindskopf, D. M., & Hedges, L. V. (2008). The state of the science in the meta-analysis of single-case experimental designs. Evidence-Based Communication Assessment and Intervention, 2(3), 188–196.
Shahar, E. (1998). Evidence-based medicine: A new paradigm or the Emperor's new clothes? Journal of Evaluation in Clinical Practice, 4(4), 277–282.
Shapin, S., & Schaffer, S. (1985). Leviathan and the air-pump: Hobbes, Boyle, and the experimental life. Princeton, NJ: Princeton University Press.
Smith, D. E. (2005). Institutional ethnography: A sociology for people. Walnut Creek, CA: AltaMira Press.
Smith, D. E. (Ed.). (2006). Institutional ethnography as practice. Lanham, MD: Rowman & Littlefield.
Smith, G. C., & Pell, J. P. (2003). Parachute use to prevent death and major trauma related to gravitational challenge: Systematic review of randomised controlled trials. BMJ: British Medical Journal, 327(7429), 1459.
St Pierre, E. A. (2002). Comment: "Science" rejects postmodernism. Educational Researcher, 31(8), 25–27.
Timmermans, S., & Berg, M. (2003). The gold standard: The challenge of evidence-based medicine and standardization in health care. Philadelphia, PA: Temple University Press.
Toulmin, S. (2007). Preface: How reason lost its balance. In B. de Sousa Santos (Ed.), Cognitive justice in a global world: Prudent knowledges for a decent life (pp. ix–xv). Lanham: Lexington Books.
U.S. Department of Education (2003). Notice of proposed priority: Scientifically based evaluation methods (RIN 1890-ZA00). Federal Register, 68(213), 62445–62447.
UNAIDS (2011). Terminology guidelines – Revised version, October 2011. Retrieved from http://www.unaids.org/en/media/unaids/contentassets/documents/unaidspublication/2011/JC2118_terminology-guidelines_en.pdf
Urban, J. B., Hargraves, M., & Trochim, W. M. (2014). Evolutionary evaluation: Implications for evaluators, researchers, practitioners, funders and the evidence-based program mandate. Evaluation and Program Planning. http://dx.doi.org/10.1016/j.evalprogplan.2014.03.011
Usher, R., Bryant, I., & Johnston, R. (1997). Adult education and the postmodern challenge: Learning beyond the limits. London: Routledge.
Visvanathan, S. (2005). Knowledge, justice, and democracy. In M. Leach, I. Scoones, & B. Wynne (Eds.), Science and citizens: Globalization and the challenge of engagement (pp. 83–94). London: Zed Books.
Walters, P. B., Lareau, A., & Ranis, S. H. (2009). Education research on trial: Policy reform and the call for scientific rigor. Milton Park, Oxon: Routledge.
Wandersman, A., Duffy, J., Flaspohler, P., Noonan, R., Lubell, K., Stillman, L., et al. (2008). Bridging the gap between prevention research and practice: The interactive systems framework for dissemination and implementation. American Journal of Community Psychology, 41(3–4), 171–181.
Wilson, A. L. (2009). Learning to read: Discourse analysis and the study and practice of adult education. Studies in Continuing Education, 31(1), 1–12.

Thomas Archibald is an Assistant Professor and Extension Specialist in the Agricultural and Extension Education department at Virginia Polytechnic Institute and State University (Virginia Tech), where he focuses on program evaluation, evaluation capacity building, and research-practice integration. He received his doctoral degree in Education from Cornell in 2013. From 2008 to 2013, he was a Graduate Research Assistant with the Cornell Office for Research on Evaluation under the direction of Professor Bill Trochim. Previously, he was a Youth Development Program Manager with Cornell Cooperative Extension of Tompkins County and an Environmental Education Volunteer with the Peace Corps in Gabon.