Evidence-Based Practice
The concept of evidence-based practice dates back to the 19th century but has risen to
prominence in policy and debate since the early 1990s. Reasons for this include growth in the
perceived need for greater effectiveness and efficiency during an era of increased public
accountability and managerialism, increased capacity for systematic electronic data collection
(and monitoring of performance), and developments in communications technology that
facilitate rapid dissemination of research findings.
Knowledge derived from research has consistently been recognized as a central (and often the
central) component of evidence. However, not all research methods have been equally
esteemed. Hierarchies of evidence were developed to categorize studies into levels of strength.
These hierarchies frequently positioned expert opinion as the least trustworthy source and
randomized controlled trials and/or systematic reviews as the strongest, most reliable forms of
evidence. The most trustworthy research, when synthesized into systematic reviews or practice
guidelines, could then be disseminated to practitioners in a parsimonious and accessible form
purportedly ripe for application to practice.
The case for evidence-based practice has been promoted through political, empirical, ethical,
practical, educational, and ideological means. With such powerful forces at play, it was difficult
to argue that practice should be anything other than evidence based. From the early 1990s,
numerous influential organizations, commentators, and researchers have championed the
ethical and social need for greater reliance on evidence to improve outcomes and make
decision making more transparent and effective. Professionals have been urged by
government, the scientific community, and regulatory bodies alike that it is not only desirable
but also ethically essential for them to practice in accordance with “the evidence.” Practice
guidelines proliferated. These guidelines were often developed by professional bodies and/or
experts who had a priori screened and appraised studies and reviews in an existing area. These
guidelines were replete with the findings of meta-analyses, randomized trials, and larger scale
observational studies because they held higher status in the methodological hierarchies.
Universities responded by creating new curricula around the need to practice in accordance
with the evidence.
Those espousing the need for evidence-based practice have been successful in framing debate
over the past decade. However, the changes that have actually been made to practice are
much less marked. Despite the prominence of the evidence-based practice movement and
attendant guidelines, much practice remains inconsistent with the best available evidence. This is
testament not only to the complexities of practice but also to continuing contentions over what
counts as evidence and how best to support professionals to practice in accordance with
evidence.
After the initial enthusiasm for evidence-based practice subsided, a debate emerged as to why
substantial improvements in rates of evidence-based practice had not occurred. Some argued
that hierarchies of evidence were too methodologically restrictive and overly reliant on
randomized trials. Research participants often did not represent the broader population. This
was most apparent in randomized controlled clinical trials, in which restrictive inclusion criteria
led to the underrepresentation of women, older adults, people with comorbidities, and diverse
ethnic groups.
Furthermore, many of the decisions confronting professionals in their practice are not
necessarily about effectiveness. However, most of the hierarchies of evidence-based practice
focus on research questions pertaining to effectiveness. For example, although meta-analyses
and randomized trials may have relevance to some aspects of health care (e.g., prescribing
medication) in which the principal issue is efficacy, this does little to guide the professional on
how to create a positive therapeutic relationship with the patient, how to empower the individual
to use the prescribed regimen, or how best to engage informal caregivers. For this, the
professional needs considerable clinical and social skills and insight into the patient's milieu as
well as an environment that provides adequate time and resources.
Prior to 2000, qualitative research had, at most, a peripheral role in debate around evidence-
based practice. Hierarchies favored methods that allowed for manipulation and intervention in
unnaturally closed systems. Evidence hierarchies, although widely adopted, ascribed far less
esteem to methods that collected data in natural settings (whether based on quantitative or
qualitative data) and often made no reference to qualitative research whatsoever. Was this to be
a reenactment of a paradigm debate that once more led to the incommensurability of qualitative
research with a dominant view?
However, more critical comment cautioning against placing methods into any hierarchy has
arisen across methodological divides. These critics maintain that what matters most in
terms of the validity and strength of a method is its applicability to the research
question. Many great advances in the natural and human sciences have occurred despite a
lack of evidence from randomized trials. In making decisions, professionals must rely on
findings from different methods and knowledge bases. This acknowledgment provided an early
opening for a broader conception of evidence within the evidence-based practice debate.
There was also a growing recognition that research evidence must capture the personal, social,
and contextual complexities that are central to professional practice. Combined with the view
that the world was not as ordered or predictable as proponents of evidence-based practice had
envisaged, arguments for a more nuanced evidence-based practice emerged within
mainstream debate.
Ray Pawson and Nick Tilley captured this well in their plea for research into health and social
interventions to examine “what works for whom, when, and why.” Trials were undertaken in
artificially controlled closed systems, but findings were then generalized to natural open
systems. The moderating effect on outcomes of other factors in the natural world (both
contextual and individual) lessened the effectiveness of the intervention. Hence, generalizability
of benefit was not achieved. Rather, qualitative research was needed to understand how
interventions led to different outcomes for different people.
The continued relative lack of use of evidence in practice also drove governments and
disciplines to consider why this might be the case. New areas of study around knowledge
translation and use emerged. Almost inevitably, these areas needed to acknowledge and
explore the complex nature of practice settings and organizations. Disciplines historically more
peripheral in the evidence-based practice movement, such as organizational studies, nursing,
and the social sciences, were mobilized. Significantly, these were disciplines in which the
contributions of qualitative research were accepted.
How will qualitative research build its influence in the sphere of evidence-based practice in the
future? Although critical comment is essential for the continued evolution of evidence-based
practice, these developments are unlikely to result from bemoaning or undermining the merits
of making practice more evidence based. The movement is far from perfect, but it has sufficient
professional, public, and political momentum to continue to frame debate during the coming
years. However, a number of opportunities that show considerable promise for qualitative
research have emerged during recent years.
Policymakers and practitioners continue to face challenging decisions in which reliance on trials
and meta-analyses fails to provide sufficiently nuanced and context-sensitive answers.
Randomized trials and systematic reviews still remain focused on the global effectiveness of
interventions. Qualitative research is well suited to understanding the complexities of lay
understanding and experience, understanding the influence of context on outcomes, and
explaining behavior. Continued exploration of the factors influencing implementation of
evidence in practice is likely to occur. Research funding bodies have become increasingly
attuned to the need for knowledge translation in studies. Qualitative research will continue to
elucidate the complexities of how and why research should shape practice.
The recognition that knowledge beyond that related to questions of effectiveness is important
for practice justifies the applicability of qualitative research for policy and practice. One of the
most promising areas of recent progress has been the advent of qualitative systematic reviews:
rigorous syntheses of qualitative findings that can distill the wider body of qualitative evidence
in a set area. These reviews often draw on tools for the methodological appraisal of qualitative
research and have developed different methods to synthesize study findings. Findings can be
used to guide practice in relation to issues such as how interventions work and what different
subpopulations value.
Notably, even in areas where systematic reviews suggest a type of intervention, the trials on
which those reviews are based often report markedly different levels of effectiveness. Qualitative
research can explicate why interventions work or do not work and the influence that contextual
factors have on outcome.
Alexander M. Clark
http://dx.doi.org/10.4135/9781412963909.n159
See also
Critical Realism
Evidence
Meta-Synthesis
Further Readings
Byrne, D. (1998). Complexity theory and the social sciences: An introduction. London:
Routledge.
Pawson, R., & Tilley, N. (1997). Realistic evaluation. London: Sage.