6
EVIDENCE OR ORTHODOXY?
JULIA H. LITTELL
Believe those who are seeking the truth; doubt those who find it.
—André Gide
The hope that clinical practice will be informed by the results of empirical
research is not new, but this ideal has been difficult to attain. Reformers
have tried to use scientific knowledge to improve the human condition since
the late 19th century. Early reformers embraced "a rather sweeping trust in sci-
ence to guide the hand of the practitioner" (Zimbalist, 1977, p. 32), but their
use of science was rhetorical. Scientific therapeutics have come and gone, yet
the desire to build empirical foundations for the helping professions remains.
A new breed of scientist-practitioner emerged in psychology and social work
in the 1960s and 1970s. A product of academia, the scientist-practitioners'
designs were not sustained in practice, thwarted by a lack of agency support
(Reid, 1994).
Much has been written about the tensions that divide clinical work and
research. Clinicians and social scientists have distinct imperatives and sensi-
bilities. Therapy requires action and faith in the process, whereas science
demands observation and skepticism. Most scientific knowledge is tentative
and nomothetic, not directly applicable to individual cases. Experts have
stepped into this breach by packaging empirical evidence for use in practice.
Sometimes this is little more than a ruse to promote favorite theories and ther-
apies. Yet, wrapped in scientific rhetoric, some authoritative pronouncements
have become orthodoxy.
Now evidence-based practice (EBP) is in vogue in the helping professions.
Are its scientific claims genuine? Can its methods be seamlessly integrated into
practice? Or is EBP just a new orthodoxy?
Two fundamentally different approaches to EBP and their influence on
the marketing and regulation of clinical practice are described in this chapter.
Because both approaches rely on summaries of empirical evidence, methods of
research synthesis are examined, using case examples to illustrate the promise
and problems of these methods. This leads to a discussion of the current
state of the science, the premature closure of inquiry about what works, and the
requirements for firm empirical foundations. Recommendations are offered to
enhance critical evaluation of research so that EBP is not interpreted in a way
that unfairly restricts treatments.
EVIDENCE-BASED TREATMENTS
Arguments have been made for and against EBP and EBTs (e.g., Gambrill,
2006; Gibbs & Gambrill, 2002; Hubble, Duncan, & Miller, 1999; Wampold,
2001; Westen, Novotny, & Thompson-Brenner, 2004). EBP reflects the
understanding that scientific evidence is tentative, whereas EBTs depict
Systematic Reviews
The term systematic review refers to "a process involving measures to con-
trol biases in research synthesis" (Chalmers, Hedges, & Cooper, 2002, p. 16).
Systematic reviews follow basic steps in the research process, using explicit and
replicable procedures to minimize bias at each step (Higgins & Green, 2008;
Littell, Corcoran, & Pillai, 2008).
Search Strategies
Reviewers use a systematic approach and a variety of sources to try to
locate all potentially relevant studies. In collaboration with information
retrieval specialists, they identify electronic databases and develop appro-
priate keyword strings to use in each database. Hand searching of the
contents of journals is often needed to find eligible studies that are
not properly indexed (Hopewell, Clarke, Lefebvre, & Scherer, 2006).
Reviewers must make vigorous efforts to locate the gray literature (i.e.,
unpublished and hard-to-find studies) to avoid the file drawer problem
(Hopewell, McDonald, Clarke, & Egger, 2006; Rothstein et al., 2005).
This involves personal contacts with experts, along with scanning confer-
ence abstracts and reference lists. The search process and its results are
carefully documented.
Synthesis of Results
Transparent methods are used to combine results across studies. Quanti-
tative methods lend themselves to this purpose. Meta-analysis is a set of
statistical techniques used to estimate combined effect sizes, account for vari-
ations in the precision of effect size estimates drawn from different samples,
explore potential moderators of effects, and examine potential effects of pub-
lication bias (Lipsey & Wilson, 2001; Littell et al., 2008). It is noteworthy that
some meta-analyses are not embedded in systematic reviews. For example,
a meta-analysis of a convenience sample of published studies is not a sys-
tematic review. Systematic reviews do not always include meta-analysis.
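The inverse-variance weighting at the heart of the simplest (fixed-effect) meta-analytic model described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not a full meta-analytic toolkit: the function name and the three studies' effect sizes and variances are hypothetical.

```python
import math

def fixed_effect_meta(effects, variances):
    """Pool study effect sizes with inverse-variance weights
    (the basic fixed-effect model; cf. Lipsey & Wilson, 2001).
    More precise studies (smaller variance) receive larger weights."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)  # 95% confidence interval
    return pooled, se, ci

# Hypothetical standardized mean differences (d) and their variances
# from three studies; the most precise study (variance 0.01) dominates.
d = [0.40, 0.10, 0.25]
v = [0.04, 0.09, 0.01]
pooled, se, ci = fixed_effect_meta(d, v)
```

A random-effects model would additionally estimate between-study heterogeneity before weighting, and moderator analyses and publication-bias diagnostics (e.g., funnel plots) build on these same weighted estimates.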
Reporting of Results
Moher et al. (1999) developed the Quality of Reporting of Meta-
analyses (QUOROM) statement to improve reports on systematic reviews and
meta-analyses. The QUOROM statement includes a checklist of items that
should be reported and a flow diagram for authors to use to describe how studies
were identified, screened, and selected.
Updating
To remain current and germane for policy and practice, systematic
reviews need to be updated regularly. Systematic approaches to reviewing
research are not new, nor did they originate in the biomedical sciences
(Chalmers et al., 2002; Petticrew & Roberts, 2006). Two international, inter-
disciplinary collaborations of scholars, policy makers, clinicians, and con-
sumers have emerged to bridge the science and practice of research synthesis.
The Cochrane Collaboration produces systematic reviews of studies on effects
of interventions in health care (see http://www.cochrane.org). The Campbell
Collaboration synthesizes results of interventions in social care (education,
social welfare, mental health, and crime and justice; http://www.campbell
collaboration.org). These groups produce guidelines for research synthesis that
are based on methodological research where that research is available.
What criteria and methods have reviewers used to locate, analyze, and
synthesize evidence for EBTs? How systematic are these reviews? To find out,
I conducted a case study of published reviews of the effects of a model program,
Multisystemic Therapy (MST). This case was prompted by discrepancies
between the results of a Cochrane and Campbell systematic review (Littell, Popa,
& Forsythe, 2005) and the conclusions of prior reviews (Littell, 2005).
MST is a short-term, home- and community-based program that
addresses complex psychosocial problems. It also provides alternatives to out-
of-home placement for youth with social, emotional, and behavioral prob-
lems (Henggeler, Schoenwald, Borduin, Rowland, & Cunningham, 1998;
Henggeler, Schoenwald, Rowland, & Cunningham, 2002). It is one of the
model programs identified by SAMHSA (2007) and by the Blueprints for Vio-
lence Prevention (Henggeler, Mihalic, Rone, Thomas, & Timmons-Mitchell,
1998). MST has been cited as an effective treatment by the National Institute
on Drug Abuse (1999, 2003), the National Institute of Mental Health (2001), and
the Surgeon General's office (U.S. DHHS, 1999, 2001). Licensed MST
programs exist in more than 30 states in the United States and in Canada,
A Systematic Review
Published Reviews
[Table: published reviews of MST effects; table body not legible in this copy]
a". . . reduced overall stress . . . both groups demonstrated decreases in the severity
of the identified problems . . . [MST] improved [parent-child] interactions, implying
a decreased risk for maltreatment of children in the MST condition" (p. 293). b"Both
approaches acted to reduce psychiatric symptoms in parents and parental stress, as well
as to alleviate individual and family problems . . . [MST] was more effective in improving
parent-child interactions, helping physically abusive parents manage child behavior, and
assisting neglectful parents in responding more appropriately to their child's needs. Sur-
prisingly, [PT] was more advantageous for improving parents' social lives. The hypoth-
esis is that the group setting for parent training reduced isolation and improved parents'
support system." (p. 568). c"MST has consistently produced improvements in family func-
tioning across outcome studies with juvenile offenders and maltreating families. Several
of these studies used observational methods to demonstrate increased positive family
interactions and decreased negative interactions" (p. 209). d"[MST] outcome studies
have extended to . . . parents who engage in physical abuse or neglect . . . Thus, the
model of providing treatment may have broad applicability across problem domains
among seriously disturbed children" (p. 79). e"MST was more effective than [PT] for
improving parent-child interactions associated with maltreatment. Abusive parents
showed greater progress in controlling their child's behavior, maltreated children exhib-
ited less passive noncompliance, and neglecting parents became more responsive to
their child's behavior. [PT] was superior to MST [in] decreasing social problems (i.e.,
social support network)" (pp. 75-76).
Questions about the scientific bases of EBTs have been met with appeals
to consensus and authority, resistance to criticism, outright hostility, and ad
hominem responses (Gorman, 2003a, 2003b, 2005c; Littell, 2006). Gorman
(2005b) argued that some proponents of EBTs are "abandoning the critical
underpinnings of science in favor of verification and affirmation . . . [which]
leads to intellectual stagnation and dogma." These conditions are antithetical
to serious empirical inquiry, practice-based research, and EBP.
Researchers must distinguish empirical evidence from expert opinion.
That is difficult to do when experts claim their opinions are based on sound
scientific evidence and when their conclusions are often repeated and
endorsed by others. Sackett (2000) said that too many experts impede the
healthy advancement of science. It is interesting that he stopped writing
about EBM after he discovered that the term Sackettisation had been coined to
connote "the artificial linkage of a publication to the evidence based medi-
cine movement in order to improve sales" (Sackett, 2000, p. 1283). Equally
important, researchers must critically appraise evidentiary claims and be
wary of experts, junk science, and pseudoscience. Pseudoscience sells. It
appeals to the desire for quick fixes and certainty, whereas science leaves
room for probability and doubt (Tavris, 2003).
It is inevitable that the dialectic between EBP and EBTs involves power
and influence. Each approach enhances the power, influence, and wealth of
some individuals and groups while reducing that of others. EBP "directly challenges
the authority of expert opinion" and is "a threat to the power of consultants"
(Lipman, 2000, p. 557). The drive to implement EBTs provides new funding
opportunities for promoters of EBTs (e.g., in the form of research grants to study
knowledge transfer, transportability, and implementation). The linkage of
It is not clear whether publication bias and confirmation bias are serious threats to the validity of syn-
theses of research on the common factors of therapy, but there is reason for concern here. Meta-analyses
of studies of therapeutic relationship variables have relied on published journal articles.
IMPLICATIONS
EBP models can provide useful guidance about how to ask and answer
clinically relevant questions, how to identify and critically appraise relevant
research and research reviews, and how to incorporate empirical evidence along
with other considerations (such as client values) to make good practice deci-
sions. Gibbs (2003) and others (e.g., Straus, Richardson, Glasziou, & Haynes,
2005) have provided detailed suggestions in this regard. Some general princi-
ples for clinicians are as follows.
• Evidence from multiple studies is always preferred to the results of
a single study. Systematic reviews of research are preferable to
traditional narrative reviews. Thus, clinicians should look for
systematic reviews, mindful of the fact that these reviews vary
in quality.
• The Cochrane and Campbell Collaborations are good sources of
high-quality systematic reviews. Clinicians can and should assess
potential sources of bias in any review. The characteristics of
systematic reviews described in this chapter can serve as a
yardstick for judging how well specific reviews measure up. The
QUOROM statement (Moher et al., 1999) provides guidance
about what to look for in reports on systematic reviews, as does
a recent report by Shea et al. (2007).
• When relevant reviews are not available, out of date, or poten-
tially biased, clinicians can identify individual studies and assess
the credibility of those studies, using one of the many tools developed
for this purpose (e.g., Gibbs, 2003). It would be ideal if clinicians
were able to rely on others to produce valid research syntheses.
• Above all, clinicians should remember that critical thinking is
crucial to understanding and using evidence. Authorities, expert
opinion, and lists of ESTs provide insufficient evidence for sound
clinical practice. Further, clinicians must determine how credible
evidence relates to the particular needs, values, preferences,
circumstances, and, ultimately, the responses of their clients.
Clinicians and researchers also need to have an effect on policy so that
EBP is not interpreted in a way that unfairly restricts treatments.
REFERENCES