
Physiotherapy Theory and Practice (2001) 17, 201–211

© 2001 Taylor & Francis

Evidence-based practice — imperfect but necessary

Robert D. Herbert, Catherine Sherrington, Christopher Maher, and Anne M. Moseley

Evidence-based practice implies the systematic use of best evidence, usually in the form of high quality clinical research, to solve clinical problems. This article considers a series of objections to evidence-based physiotherapy including that (1) it is too time-consuming, (2) there is not enough evidence, (3) the evidence is not good enough, (4) readers of clinical research cannot distinguish between high and low quality studies, (5) clinical research does not provide certainty when it is most needed, (6) findings of clinical research cannot be applied to individual patients, (7) clinical research does not tell us about patients' true experiences, and (8) evidence-based practice removes responsibility for decision making from individual physiotherapists. We argue that, while there is some truth in each of these objections, they need to be weighed against the potential benefits of evidence-based practice. The overwhelming strength of the evidence-based approach to clinical practice is that it takes full advantage of the only potentially unbiased estimates of effects of therapy — those which are derived from carefully conducted clinical research. The evidence-based practice model may be imperfect, but it may be the best model of clinical practice that is currently available.

INTRODUCTION

This article addresses some theoretical and practical issues with the implementation of evidence-based practice. It begins with a brief overview of what is implied by evidence-based practice and discusses how this differs from traditional clinical practice. It then considers some frequently raised objections to the evidence-based practice model.

Robert D. Herbert, Centre for Evidence-Based Physiotherapy and School of Physiotherapy, University of Sydney. Address correspondence to School of Physiotherapy, University of Sydney, P.O. Box 170, Lidcombe NSW 1825, Australia. E-mail: R.Herbert@cchs.usyd.edu.au
Catherine Sherrington, Research Manager, Prince of Wales Medical Research Institute
Christopher Maher, Senior Lecturer, School of Physiotherapy, University of Sydney
Anne Moseley, Lecturer, Rehabilitation Studies Unit, University of Sydney
Accepted for publication April 2001.

WHAT IS EVIDENCE-BASED PRACTICE?

The term "evidence-based practice" is used in a variety of ways. We use the term as it is used by Sackett and colleagues in their influential book on evidence-based medicine (Sackett et al, 2000). These authors conceive of evidence-based practice as consisting of a five-step process that is carried out routinely in clinical encounters. The five-step process involves (1) asking
answerable clinical questions, (2) finding the best evidence with which to answer these questions, (3) critically appraising the evidence (this involves deciding if the evidence is believable and, if so, what it means), (4) applying the evidence to clinical problems, and (5) evaluating the effects of the intervention on individuals (Sackett et al, 2000).

These five steps allude to some of the most important distinctions between evidence-based practice and clinical practice as it is traditionally conducted. First, the process of evidence-based practice begins with an acknowledgment of uncertainty. That is, the evidence-based practitioner strives to explicitly identify knowledge gaps. This contrasts with some traditional models of clinical practice in which uncertainty is seen as a failing and good clinicians are thought to be those who always know what to do, not those who question what they do. In many clinical environments there is an attitude that physiotherapists learn what to do in clinical practice during their formal physiotherapy training (Turner and Whitfield, 1997, 1999). An attitude of uncertainty is likely to better equip health professionals to deal with rapidly changing evidence.

A second distinction is that the process of gathering and synthesising evidence is systematic and critical (Sackett et al, 2000). It involves recording clinical questions that arise in clinical practice, ranking them in order of importance, and then tackling them in an optimal way. Evidence is chosen on the basis of its probable validity. There is an emphasis on deciding if the intervention will produce the desired outcomes without unreasonable risks and at a reasonable cost. This differs from traditional models of practice in which there may be priority given to clinical experience as a form of evidence (Carr et al, 1994; Nilsson and Nordholm, 1992), where clinical research evidence is often happened upon rather than strategically sought out, and where appraisal of the quality of clinical research is superficial or does not occur at all. A systematic approach to the use of evidence from clinical trials helps avoid the temptation to attend only to that evidence which supports preconceived ideas of which therapies are effective.

An implicit assumption in this model of evidence-based practice is that well-conducted clinical research often provides the best information about what interventions are effective and ineffective, how useful a diagnostic test is, or a patient's likely prognosis. That is, where good quality, relevant clinical research is available, it usually takes precedence over theory or personal experience, even the theories or experiences of experts (National Health and Medical Research Council, 2000; but see Greenhalgh, 1999). The role of clinical experience, clinical wisdom, and intuition is primarily in making best use of good evidence to meet individual patients' needs and preferences.

The requirement of good evidence necessarily restricts the focus in evidence-based practice to optimally designed studies. The optimal study design will depend on the type of clinical question. For example, the best evidence about the effects of therapy is provided by randomised trials or systematic reviews of randomised trials (National Health and Medical Research Council, 2000). On theoretical grounds these sorts of evidence are expected to provide relatively unbiased estimates of the effects of therapy. There is some empirical evidence that other sorts of studies, particularly uncontrolled studies or studies with historical controls, tend to produce inflated estimates of the size of treatment effects (Chalmers et al, 1983; Colditz, Miller, and Mosteller, 1989; Linde et al, 1999; Miller, Colditz, and Mosteller, 1989; Sacks, Chalmers, and Smith, 1982; but see also Benson and Hartz, 2000; Concato, Shah, and Horwitz, 2000 and the ensuing letters). Questions about diagnostic tests are usually best answered by studies in which there is independent (blind) comparison of the test with a gold standard test (see Sackett et al, 2000 and paper by Stratford in this issue). There is some empirical evidence that studies that include nonrepresentative patients, lack blinding, or do not use a single gold standard for all subjects tend to overestimate the diagnostic accuracy of a test (Lijmer et al, 1999). Questions about prognosis
are best answered by studies that prospectively monitor well-defined cohorts from an early and uniform point in the course of their condition (see Sackett et al, 2000 and paper by de Bie in this issue). The most difficult questions, those about patients' beliefs and the meanings they attach to their experiences, may be best explored with carefully conducted qualitative research (see Ritchie, 1999 and paper by Ritchie in this issue).

Evidence-based practice does not imply that clinical decisions should be made on the basis of clinical research alone. Key proponents of evidence-based healthcare have emphasised that the evidence provided by clinical research must complement other sorts of information, such as information about individual patients' specific needs and preferences (Sackett et al, 2000; Greenhalgh, 1999). Good clinicians are able to discern these needs and preferences. In the best models of evidence-based practice, evidence about the effects of therapy (or accuracy of diagnostic tests or prognoses) informs, but does not dominate clinical decision-making. The physiotherapist draws on past clinical experience to apply the results of research to the care of individual patients. The best decisions are made with the patient, not found in journals and books.

OBJECTIONS TO EVIDENCE-BASED PRACTICE

The preceding section has described a model of clinical practice that probably differs significantly from what happens in even the most evidence-based clinical settings. Real world evidence-based practice faces significant practical difficulties. In addition, legitimate philosophical and theoretical objections have been raised against models of evidence-based practice (see, for example, Feinstein and Horwitz, 1997; DiFabio, 1999).

In this section we attempt to confront some of the objections to evidence-based practice. The emphasis will be on objections to the use of systematic reviews and randomised controlled trials in making decisions about therapy, as issues concerning diagnosis and prognosis are covered in other papers in this issue. Our conclusions will be that there are, indeed, some serious practical, theoretical, and philosophical problems with evidence-based practice. Nonetheless, evidence-based practice offers at least one profound advantage over alternative models of clinical practice in that optimal use is made of the least-biased evidence from clinical research. Thus evidence-based practice may be imperfect, but it may be the best model of clinical practice that is currently available.

Evidence-based practice is too time-consuming to be practical

Even with practice and optimal resources, the process of finding and critically appraising the best evidence pertaining to a single clinical question usually takes considerable time. As a consequence, it is not practical to use the best evidence to deal with every uncertainty that arises in every clinical encounter, and even if there was good quality evidence to answer all clinical questions, not all practice could be evidence-based. Any realistic model of evidence-based practice must involve deciding what are the most important clinical questions and finding answers to those questions first. Given this reality, evidence must be used strategically. Time should be devoted to answering questions that are commonly seen in practice, have important consequences, have potential for either beneficial or harmful treatment, or incur considerable cost (Evidence-Based Care Resource Group, 1994). In this issue, Walker-Dilks discusses the issue of secondary sources of information (such as the ACP Journal Club, Evidence-Based Medicine and the Australian Journal of Physiotherapy Critically Appraised Papers). These sources distill the key findings of high-quality papers, usually in one page or less, so they potentially provide a significant time-saving mechanism for busy practitioners.

How much time is and should be spent seeking out and appraising the evidence? Most physiotherapists spend little time reading clinical research (Turner and Whitfield, 1997) and, because few physiotherapists have training in clinical appraisal, reading time may be
spent suboptimally. Rational determination of the amount of time that should be spent seeking out and appraising evidence requires information about both the effectiveness of current clinical practices and about how much of an improvement in effectiveness could be accrued in a given amount of time by searching for and appraising papers. Unfortunately, data on these issues are elusive. Our view is that much of clinical practice is far from optimally effective and that potentially even modest amounts of time spent in the judicious application of evidence to clinical decision making could substantially improve clinical outcomes. As just one example, exercise is prescribed with equal frequency for acute and chronic low back pain (van der Valk, Dekker, and van Baar, 1995), but systematic reviews indicate there is strong evidence that exercise therapy is effective for chronic, but not acute, low back pain (van Tulder, Koes, and Bouter, 1997; Maher, Latimer, and Refshauge, 1999). This suggests that changes in exercise prescription practices could significantly improve outcomes in patients with low back pain. We expect that many practices would converge rapidly on this outcome if scarce time was used to answer key clinical questions.

Most clinicians are busy. Where can they find time to seek and critically appraise the evidence from clinical research? There are numerous possibilities. Time spent in formal continuing education activities (staff seminars, for example) may be better spent by individuals or small groups of physiotherapists answering their own clinical questions. Depending on the clinical setting, case conferences could also be restructured so that they create learning experiences for staff as well as deal with patients' problems. These and other suggestions have been made by Sackett et al (2000). Time spent busily applying ineffective or harmful therapies would be better spent seeking out and critically appraising best evidence.

There is not enough evidence

Ideally, at least from a purely professional point of view, there would be good clinical research answering all important clinical questions. Of course, that is not the case. It has been claimed that there is not enough evidence to practice evidence-based physiotherapy (Bithell, 2000). How much clinical research exists and how much can it assist clinical decision making?

It is difficult to quantify the volume of clinical research in physiotherapy. However it is possible to estimate, at least roughly, the number of relevant randomised trials and systematic reviews. The Centre for Evidence-Based Physiotherapy, with assistance from, among others, the Rehabilitation and Related Therapies Field of the Cochrane Collaboration, has attempted to identify all randomised controlled trials and systematic reviews in physiotherapy and collate these on the Physiotherapy Evidence Database (PEDro; http://ptwww.cchs.usyd.edu.au/pedro). At the time of writing 2,229 randomised or quasi-randomised trials and 297 systematic reviews had been identified (Moseley AM et al, in press; see also Sherrington et al, 2000; Moseley et al, 2001).

There are more than 200 randomised trials and systematic reviews on PEDro pertaining to each of the following subdisciplines of physiotherapy: cardiothoracics, continence and women's health, gerontology, musculoskeletal, neurology, orthopaedics, and sports (Moseley AM et al, in press). This is enough to tackle many fundamental clinical questions, though there are not yet enough trials in most areas of physiotherapy to provide convincing replication on every permutation of therapy in every setting for every patient group. In some areas of physiotherapy, the volume of trials and reviews is not sufficient to have any real impact on clinical practice. However, given the exponential rate of publication of clinical trials and systematic reviews in physiotherapy (Moseley AM et al, in press) this will almost certainly change in the near future.

It is likely that most clinicians have not read all of the high quality evidence that pertains to their own clinical questions. In this sense at least, there is an abundance of evidence. It
is probably reasonable to expect all practising therapists to be aware of key trials and reviews in their area of practice.

The evidence is not good enough

Certain features of clinical trials (such as concealment of randomisation, blinding of subjects and assessors, and adequacy of follow-up) tend to be associated with smaller effect sizes, suggesting that trials that have these features tend to be less biased (Moher et al, 1999). Other trials lack these features, and so we should expect that, on average, they will be biased. In physiotherapy, the typical randomised trial lacks concealment of allocation and has unblinded patients, assessors, and therapists, but does have adequate follow-up (Moseley AM et al, in press). There must be real concern about the capacity of the typical trial to provide an unbiased picture of the effects of therapy. Fortunately, the quality of clinical trials appears to be improving slowly. The median PEDro score for randomised trials in physiotherapy has crept up from 3 in the 1960s to its current value of 5. (If this rate was to continue, most trials would return perfect scores by the turn of the next century.)

Systematic reviews (such as those conducted by the Cochrane Collaboration) synthesise the findings of clinical trials. Ideally, systematic reviews would objectively assess trial quality and then pool the findings of high quality studies to provide less biased and more precise estimates of the effects of therapy. There are some real difficulties that arise, however, when an attempt is made to systematically review clinical trials in all areas of health care. Three such problems are discussed below. The first two issues also are relevant to readers of individual clinical trials.

1. Publication bias. This is the bias that arises because trials with positive findings are more likely to be published than trials with negative findings. Consequently positive studies are more likely to be reviewed, and reviews are likely to contain inflated estimates of treatment effects (Stern and Simes, 1997). Although it is often assumed that exhaustive searching reduces the potential for publication bias, it is possible that this actually increases the potential for publication bias. There are currently no completely satisfactory solutions to the problem of publication bias (Thornton and Lee, 2000).

2. Scoring of study quality. Systematic reviews must take into account the quality of the study if they are to produce unbiased estimates of the effects of treatment. However, the methods for assessing trial quality have not yet been fully validated (Moher et al, 1999), so we cannot yet be sure that mechanisms for rating study quality are truly able to discriminate between trials that are and are not likely to be biased. To further complicate this issue there are a wide variety of quality scales currently available. The number of items in each scale ranges from as few as 3 to as many as 34, with no consensus on the weighting applied to central items such as randomisation, blinding, and withdrawals (Juni et al, 1999). The choice of quality scale may influence the conclusions of a systematic review by influencing the eligibility of particular trials for inclusion in the review or weighting of the trial's findings in the review synthesis.

A practical question for readers of clinical trials is how potentially biased does a study have to be before it should no longer be used for clinical decision-making? The answer should depend on the degree of confidence that is held in other information that pertains to the clinical question at hand. As a working principle, the threshold of quality should be that the study must be able to provide more certainty than the reader already has. Our opinion is that, in practice, there will usually be little point in reading clinical trials that do not meet basic criteria (true randomisation, acceptable follow-up, and blinding where possible).

3. Synthesis of findings. Ideally, systematic reviews are accompanied by meta-analyses
that provide pooled estimates of treatment effects. However, this is only advisable when the individual studies are of sufficient quality and when there is sufficient homogeneity of interventions, outcomes, and findings across studies. When heterogeneity precludes meta-analysis, some authors conduct best-evidence syntheses in which the quality of evidence supporting a conclusion is rated according to a predetermined scale of study quality and consistency of findings. Unfortunately the findings of best-evidence syntheses may depend heavily on the rating system used, and may be unduly sensitive to the findings of individual studies.

The sensitivity of conclusions in systematic reviews to methods of best evidence synthesis is illustrated clearly with a recent review of ultrasound (van der Windt et al, 1999). The review concluded, on the basis of seven randomised trials, that "ultrasound is not effective in the treatment of shoulder disorders (pg. 263)." When the more recent trial by Ebenbichler and colleagues (1999) is added to the review, the review's best evidence synthesis methods support the conclusion that there is weak evidence for ultrasound therapy for shoulder disorders. In contrast, use of van Tulder et al's (1999) method of synthesis would lead to the conclusion that there is no evidence of effectiveness, and van Poppel et al's (1997) method would lead to the decision that there is strong evidence that ultrasound is ineffective.

The problem with these methods of qualitative synthesis is that while they use similar descriptors such as "strong," "moderate," or "limited" to describe the level of evidence, the definitions for each descriptor vary. With each method the addition of a single trial of similar quality and precision to the existing trials can change the review conclusion to an extent that seems unjustified. For example, with the van Poppel et al (1997) system the findings of one trial can change the conclusion from "no evidence" to "strong evidence." We recommend that great caution be used by readers of systematic reviews that employ "best evidence" methods of synthesis.
Many readers are unable to discriminate between studies that are probably valid and those that are probably not

Almost all methodological surveys and most systematic reviews in physiotherapy have decried the quality of published research (e.g., Green et al, 2000). Many physiotherapists do not have sufficient training in research methodology to confidently distinguish between studies of high and low quality. Consequently, there is a risk of many readers being misled by potentially biased studies or excluding well-conducted trials.

The eventual solution must be that physiotherapists will develop the skills to critically appraise clinical research. Most undergraduate curricula now teach research methods and increasingly more explicitly teach critical appraisal of clinical research. In the near future we may be able to expect new graduates to have basic critical appraisal skills. Graduate physiotherapists will have to seek out training in skills of critical appraisal. It is to be hoped that they do so with the same enthusiasm that most physiotherapists apply to the development of new clinical skills.

Some simple strategies may enhance physiotherapists' abilities to identify high quality trials. These include using methodological filters (Guyatt, Sackett, and Cook, 1993; Sackett et al, 2000) or methodological ratings from the PEDro database to screen out low quality research. Secondary sources of publication, such as those referred to earlier, can perform much of the work of critical appraisal for clinicians who lack critical appraisal skills. Some of these (such as Cochrane Systematic Reviews) are quite uniformly of high quality, and can generally be considered to provide an unbiased synthesis of the literature.
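
As a minimal sketch of this kind of screening, the fragment below filters a small set of trial records by a PEDro-style methodological score. The records, the field names, and the cut-off of 5 are illustrative assumptions only; they are not part of the PEDro database interface.

    # Hypothetical records and threshold, for illustration only.
    trials = [
        {"title": "Trial A", "pedro_score": 8},
        {"title": "Trial B", "pedro_score": 3},
        {"title": "Trial C", "pedro_score": 6},
    ]

    MIN_SCORE = 5  # assumed cut-off for adequate methodological quality

    for trial in trials:
        if trial["pedro_score"] >= MIN_SCORE:
            print(trial["title"], "- worth appraising in full")
        else:
            print(trial["title"], "- screened out on methodological grounds")

The same idea, applied by hand rather than in code, is how a busy reader might use PEDro ratings or published methodological filters to decide what to read first.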
When there is clinical uncertainty, randomised controlled trials and systematic reviews often cannot provide certainty

Some therapies appear so unlikely to have useful therapeutic effects that they are of little interest to most therapists. Other therapies have such positive effects that their efficacy is obvious to all (for example, strapping to prevent pain and further injury in acute skier's thumb). There is relatively little benefit in subjecting these therapies to rigorous clinical experimentation. The role of clinical trials and systematic reviews is to provide information about the size of treatment effects where there is reasonable doubt that the treatment has an effect that is large enough to be worthwhile. The value of clinical trials and systematic reviews is that they provide estimates of the size of treatment effects that can be compared to the smallest clinically worthwhile effect (Herbert, 2000a, 2000b). If the effect observed in the trial is clearly larger than the smallest clinically worthwhile effect, the therapy may be clinically useful.

Unfortunately, because trials always involve a finite sample of patients, they cannot tell us with absolute certainty the size of the treatment effect. Instead, they provide us with an estimate of the average treatment effect. The uncertainty associated with this estimate can be described with confidence intervals (commonly the 95% confidence interval). The width of the confidence interval defines the range of values within which the true average effect of treatment probably lies. If all of the confidence intervals fall to one side or other of the smallest clinically worthwhile effect, it is possible to be confident that, on average, the therapy has (or does not have) a clinically worthwhile effect (Herbert, 2000a, 2000b). Studies with large numbers of subjects tend, all else being equal, to provide more precise estimates of the size of treatment effects (estimates with narrower confidence intervals) than small studies with few subjects.
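
A small worked example, with invented numbers, makes the comparison concrete. Suppose a trial reports a mean between-group difference of 12 points on some outcome scale with a standard error of 4 points, and suppose the smallest clinically worthwhile effect is judged to be 5 points; an approximate 95% confidence interval can then be laid alongside that threshold. All of the figures below are assumptions made purely for illustration.

    # Invented figures for illustration only.
    mean_difference = 12.0       # between-group difference reported by the trial
    standard_error = 4.0
    smallest_worthwhile = 5.0    # smallest clinically worthwhile effect (a clinical judgement)

    lower = mean_difference - 1.96 * standard_error  # approximate 95% confidence interval
    upper = mean_difference + 1.96 * standard_error
    print(f"95% CI: {lower:.1f} to {upper:.1f}")

    if lower > smallest_worthwhile:
        print("Even the lower bound exceeds the smallest worthwhile effect.")
    elif upper < smallest_worthwhile:
        print("Even the upper bound falls below the smallest worthwhile effect.")
    else:
        print("The interval spans the smallest worthwhile effect; the trial cannot settle the question.")

With these particular numbers the interval (about 4 to 20 points) spans the threshold, which is exactly the situation discussed next.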
The problem is that we most need clinical trials when there is most uncertainty. We are likely to be most uncertain when the true size of the treatment effect is close to the smallest clinically worthwhile effect. Yet when the true effect of treatment is close to the smallest clinically worthwhile effect the confidence intervals are likely to span the smallest clinically worthwhile effect, regardless of whether the treatment is clinically worthwhile (Herbert, 2000a). In these circumstances, we cannot know if the treatment effect is large enough to be clinically worthwhile.

Meta-analysis is one solution to this problem. The advantage of meta-analysis is that it can provide estimates of effect size based on large numbers of subjects from several or many trials. Potentially, then, meta-analysis can provide the precision needed to decide if a treatment produces clinically worthwhile effects even if the true value is quite close to the smallest clinically worthwhile effect.
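
To show how pooling can narrow the interval, here is a minimal fixed-effect (inverse-variance) sketch using invented results from three small trials. The numbers, and the choice of a fixed-effect model rather than a random-effects model, are assumptions made only for illustration.

    import math

    # Hypothetical trial results: (mean difference, standard error).
    trial_results = [(10.0, 6.0), (14.0, 5.0), (9.0, 7.0)]

    weights = [1 / se ** 2 for _, se in trial_results]  # inverse-variance weights
    pooled = sum(w * d for (d, _), w in zip(trial_results, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))

    print(f"Pooled difference: {pooled:.1f}")
    print(f"95% CI: {pooled - 1.96 * pooled_se:.1f} to {pooled + 1.96 * pooled_se:.1f}")

The pooled standard error (about 3.4 here) is smaller than that of any single trial, so the pooled interval is narrower, which is what may allow a decision relative to the smallest clinically worthwhile effect.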
It is not possible to use the findings of a clinical trial performed on a particular sample to make inferences about the effects of treatment on an individual patient who is not from that sample

There are three subproblems here. These are dealt with in more detail in two recent papers (Herbert, 2000a, 2000b):

First, trials usually only give us reliable information about the average response to therapy, yet obviously some patients will do much better than average and some will do much worse. Thus, some argue, clinical trials cannot tell us about the responses of individuals.

It is true that clinical trials cannot predict how each individual will respond to treatment, but then neither can any other sort of information. Nonetheless, the information provided by trials about the average (or most likely) outcome of therapy is valuable for clinical decision-making because the average response is the response that we should expect in the absence of any other information. It
makes sense to make decisions on the basis of expected outcomes, even though we know that the expected outcome will probably not occur.

Second, the average subject in a trial might differ in important ways from the people we are contemplating treating. In that case it may no longer be true that the average response of the subjects in the trial is the expected response when the therapy is applied. Many clinicians feel uncomfortable about the fact that trials never contain quite the sorts of patients they are interested in, and the unease may be fuelled by a feeling that they can pick, at least roughly, who is and who is not likely to respond well to therapy on the basis of their clinical experience.

Clearly there are two important sources of information about the likely size of the treatment effect that can be brought to bear on clinical decisions. On the one hand, clinical trials and systematic reviews can provide relatively unbiased information about the effects of therapy on the average patient in the trial or review. On the other hand, clinical experience and intuition may be capable of discriminating between patients who are and are not likely to respond to therapy. This suggests a sensible compromise. We can use clinical trials to provide unbiased estimates of the average effect of therapy on the average patient in the trial. Then, when applying the trial findings to a particular patient, the estimate of the effect of therapy can be adjusted up or down based on what clinical intuition says about how more or less likely the particular patient is to respond to therapy (Herbert, 2000a; see also Glasziou and Irwig, 1995).
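
One simple way to formalise this kind of adjustment for a dichotomous outcome, along the lines suggested by Glasziou and Irwig (1995), is to apply the relative effect reported by a trial or review to an individual patient's estimated baseline risk. The figures below are invented for illustration only.

    # Invented figures for illustration only.
    relative_risk_reduction = 0.30   # relative effect taken from a trial or systematic review
    baseline_risk_trial = 0.20       # average risk of a poor outcome among trial participants
    baseline_risk_patient = 0.40     # clinician's estimate of this particular patient's risk

    average_benefit = relative_risk_reduction * baseline_risk_trial
    individual_benefit = relative_risk_reduction * baseline_risk_patient

    print(f"Expected absolute benefit, average trial patient: {average_benefit:.2f}")
    print(f"Expected absolute benefit, this patient:          {individual_benefit:.2f}")

A patient judged to be at higher risk (or more responsive) is expected to gain more in absolute terms from the same relative effect, which is one way of adjusting the trial estimate up or down without discarding it.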
Third, there is a (similar) problem with the diversity of ways in which a therapy can be applied. Differences in patient characteristics, equipment availability, staffing levels, staff training and philosophies, and health care settings mean that the therapy is often not applied in trials exactly as we could or would choose to apply it. Therefore, trials might provide estimates of treatment effects that are unduly pessimistic (if we feel the therapy was applied suboptimally in the trial) or optimistic (if we feel the therapy was applied better than we would be able to apply it). Again, the unbiased estimate of treatment effects provided by clinical trials can be combined with clinical intuition. Estimates of the sizes of treatment effects provided by trials can be adjusted upwards or downwards on the basis of how much more or less effectively we feel we could apply the therapy.

The alternative approach is more nihilistic. Some clinicians have fixed ideas about how a therapy should be administered, and consider that unless a trial is conducted in which the therapy is administered exactly as they would choose to administer it, the trial is not useful. There is an irony here: Diversity of practice arises when there is uncertainty among clinicians about how therapies should be applied. Yet, when there is diversity of practice some practitioners are less likely to be satisfied with the findings of clinical trials because they believe the therapy should be administered as they administer it in their clinical practices. When there is diversity of clinical practice, a more rational way to use clinical trials is to be tolerant about exactly how therapies are delivered in clinical trials. If diversity of clinical practice reflects uncertainty about how a therapy should be administered, we should be satisfied when a therapy is tested as other clinicians feel it would best be administered.

Only patient-centred research can really tell us about people's experiences

We want clinical trials to tell us how a therapy affects a patient in terms that matter to patients. A problem with clinical trials is that they only measure outcomes that the experimenter perceives as important, and they do not permit complete expression of what patients feel when given a particular therapy (Greenhalgh, 1999; Higgs and Titchen, 1998; Ritchie, 1999).

At one level many trials do measure the effects of therapy in terms that patients themselves deem important. Many trials now measure outcomes such as "global perceived effect" or "preference for treatment" because it is thought that measurement of these outcomes gives patients the opportunity to
assign appropriate weighting to their feelings of their responses to therapy. Nonetheless, these single-dimensional outcomes provide little opportunity for patients to express the breadth of their feelings about the effects of therapies. The need for patient-centred outcomes in clinical trials suggests one important way (but not the only way) in which qualitative and quantitative research can complement each other in evidence-based practice. Qualitative research can inform the designers of clinical trials about what consumers see as the important issues when choosing therapies (see paper by Ritchie, this issue). Such considerations probably should be, but rarely are, paramount.

Evidence-based practice removes the clinical decision-making role from clinicians and gives it to managers

There is a view that evidence-based practice takes clinical decision-making out of clinicians' hands. In our view, this is not intrinsically wrong: There is no intrinsic right of therapists to be solely responsible for clinical decision-making. Instead, the justification for clinician-as-decision-maker lies in the reasonable expectation that this provides the best possible care and outcomes.

Nonetheless, Sackett et al (1996) have argued that evidence-based practice does not subjugate responsibility for clinical decision-making. It is true that, in evidence-based practice, good clinical research provides an external measure of effectiveness, and this sort of evidence should take priority over clinical experience alone. That is, good clinical research acts as an external arbiter of effective clinical practice that constrains clinicians' choices. In a more important sense, however, evidence-based practice does not constrain decision-making. Instead, it emphasises the role of clinicians in using evidence to answer their own clinical problems, and removes the constraint of tradition from clinical practice. In evidence-based practice the responsibility for clinical decisions is taken away from how-to textbooks and devolved to individual practitioners and their patients.

SUMMARY AND CONCLUSIONS

We conclude that there are, indeed, some reasonable objections to the practice of evidence-based physiotherapy, although in our opinions, this model of clinical practice has tremendous advantages as well. Evidence-based practice is time-consuming, and the time involved in answering clinical questions does not fit easily into conventional models of clinical practice. Nonetheless, the time spent answering important clinical questions may prove worthwhile in the medium term. There is, unfortunately, not enough evidence to answer all clinical questions well, but there is much that is worthwhile and underutilised. The available evidence is often not of sufficient quality to guide clinical decision-making, and many therapists may have difficulty distinguishing between valid and potentially invalid research. Thus it is important for clinicians to develop skills or strategies that enable discrimination between potentially valid and probably invalid studies. A particularly difficult aspect of evidence-based practice is using trials to make inferences about individual patients. We argue that this is best done by combining unbiased estimates of the effects of treatment provided by clinical trials and systematic reviews with clinical intuition about how well a particular patient will respond to therapy. Unfortunately clinical trials usually measure outcomes of interest to investigators, but currently we do not usually know if these outcomes are of interest to the consumers themselves. Evidence-based practice devolves responsibility for clinical decision-making to therapists and their patients.

In choosing between models of clinical practice we must discern which is best in some sense. Here "best" should mean something like "the model that produces the outcomes most desired by recipients of physiotherapy services." We have argued that there are real problems with current models of evidence-based practice, but we point out that many
of the problems of evidence-based practice are common to other ways of doing therapy as well. For example, clinical practice that is based on clinical experiences suffers from the problem that therapists must use their clinical experience to make predictions about individual future patients, just as they must when using good clinical research in evidence-based practice. The overwhelming strength of the evidence-based approach to clinical practice is that it takes full advantage of the only potentially unbiased estimates of effects of therapy — those which are derived from carefully conducted clinical research. There is a theoretical and professional imperative to use this "best evidence." The evidence is combined with, but does not dominate, other information that practitioners glean by communicating well with their patients. Evidence-based practice is, in our view, the best of a number of imperfect models of clinical practice in the sense that it is likely to produce the best outcomes for patients with available resources. Evidence-based practice is imperfect, but necessary.

References

Benson K, Hartz AJ 2000 A comparison of observational studies and randomized, controlled trials. New England Journal of Medicine 342: 1878–1886
Bithell C 2000 Evidence-based physiotherapy: some thoughts on 'best evidence'. Physiotherapy 86: 58–60
Carr JH, Mungovan SF, Shepherd RB, Dean CM, Nordholm LA 1994 Physiotherapy in stroke rehabilitation: bases for Australian physiotherapists' choice of treatment. Physiotherapy Theory and Practice 10: 201–209
Chalmers TC, Celano P, Sacks HS, Smith H 1983 Bias in treatment assignment in controlled clinical trials. New England Journal of Medicine 309: 1358–1361
Colditz GA, Miller JN, Mosteller F 1989 How study design affects outcomes in comparisons of therapy. I: medical. Statistics in Medicine 8: 441–454
Concato J, Shah N, Horwitz RI 2000 Randomized, controlled trials, observational studies, and the hierarchy of research designs. New England Journal of Medicine 342: 1887–1892
DiFabio R 1999 Myth of evidence-based practice. Journal of Orthopaedic and Sports Physical Therapy 29: 632–634
Ebenbichler GR, Erdogmus CB, Resch KL, Funovics MA, Kainberger F, Barisani G, Aringer M, Nicolakis P, Wiesinger GF, Baghestanian M, Preisinger E, Fialka-Moser V, Weinstabl R 1999 Ultrasound therapy for calcific tendinitis of the shoulder. New England Journal of Medicine 340: 1533–1538
Evidence-Based Care Resource Group 1994 Evidence-based care: 1. Setting priorities: how important is the problem? CMAJ 150: 1249–1254
Feinstein AR, Horwitz RI 1997 Problems in the "evidence" of "evidence-based medicine". American Journal of Medicine 103: 529–535
Glasziou PP, Irwig LM 1995 An evidence based approach to individualising treatment. BMJ 311: 1356–1359
Green S, Buchbinder R, Glazier R, Forbes A 2000 Interventions for shoulder pain (Cochrane Review). In: The Cochrane Library, Issue 4. Oxford: Update Software
Greenhalgh T 1999 Narrative based medicine: narrative based medicine in an evidence based world. BMJ 318: 323–325
Guyatt GH, Sackett DL, Cook DJ 1993 User's guide to the medical literature: II. How to use an article about therapy or prevention: A. Are the results of the study valid? Journal of the American Medical Association 270: 2598–2601
Herbert RD 2000a Critical appraisal of clinical trials. I: estimating the magnitude of treatment effects when outcomes are measured on a continuous scale. Australian Journal of Physiotherapy 46: 229–235
Herbert RD 2000b Critical appraisal of clinical trials. II: estimating the magnitude of treatment effects when outcomes are measured on a dichotomous scale. Australian Journal of Physiotherapy 46: 309–313
Higgs J, Titchen A 1998 Research and knowledge. Physiotherapy 84: 72–80
Juni P, Witschi A, Bloch R, Egger M 1999 The hazards of scoring the quality of clinical trials for meta-analysis. Journal of the American Medical Association 282: 1054–1060
Lijmer J, Mol B, Heisterkamp S, Bonsel G, Prins M, van der Meulen J, Bossuyt P 1999 Empirical evidence of design-related bias in studies of diagnostic tests. Journal of the American Medical Association 282: 1061–1066
Linde K, Scholz M, Ramirez G, Clausius N, Melchart D, Jonas WB 1999 Impact of study quality on outcome in placebo-controlled trials of homeopathy. Journal of Clinical Epidemiology 52: 631–636
Maher C, Latimer J, Refshauge K 1999 Prescription of activity for low back pain: what works? Australian Journal of Physiotherapy 45: 121–132
Miller JN, Colditz GA, Mosteller F 1989 How study design affects outcomes in comparisons of therapy. II: surgical. Statistics in Medicine 8: 455–466
Moher D, Cook DJ, Jadad AR, Tugwell P, Moher M, Jones A, Pham B, Klassen TP 1999 Assessing the quality of reports of randomised trials: implications for the conduct of meta-analyses. Health Technology Assessment 3: 1–98
Moseley AM, Herbert RD, Sherrington C, Maher CG Evidence for physiotherapy practice: a survey of the Physiotherapy Evidence Database (PEDro). Australian Journal of Physiotherapy, in press
Moseley AM, Sherrington C, Herbert RD, Maher CG 2001 The extent and quality of evidence in neurological physiotherapy: an analysis of the Physiotherapy Evidence Database (PEDro). Brain Impairment 1: 130–140
National Health and Medical Research Council 2000 How to Use the Evidence: Assessment and Application of Scientific Evidence. Canberra, Biotext
Nilsson LM, Nordholm LA 1992 Physical therapy in stroke rehabilitation: bases for Swedish physiotherapists' choice of treatment. Physiotherapy Theory & Practice 8: 49–55
Ritchie J 1999 Using qualitative research to enhance the evidence-based practice of health care providers. Australian Journal of Physiotherapy 45: 251–256
Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS 1996 Evidence based medicine: what it is and what it isn't. BMJ 312: 71–72
Sackett DL, Straus SE, Richardson WS, Rosenberg W, Haynes RB 2000 Evidence-Based Medicine: How to Practice and Teach EBM (2nd ed.). Edinburgh, Scotland: Churchill Livingstone
Sacks H, Chalmers TC, Smith H 1982 Randomized versus historical controls for clinical trials. American Journal of Medicine 72: 233–240
Sherrington C, Herbert RD, Maher CG, Moseley AM 2000 PEDro. A database of randomized trials and systematic reviews in physiotherapy. Manual Therapy 5: 223–226
Stern JM, Simes RJ 1997 Publication bias: evidence of delayed publication in a cohort study of clinical research projects. BMJ 315: 640–645
Thornton A, Lee P 2000 Publication bias in meta-analysis: its causes and consequences. Journal of Clinical Epidemiology 53: 207–216
Turner P, Whitfield TWA 1997 Physiotherapists' use of evidence based practice: a cross-national study. Physiotherapy Research International 2: 17–29
Turner PA, Whitfield TWA 1999 Physiotherapists' reasons for selection of treatment techniques: a cross-national survey. Physiotherapy Theory & Practice 15: 235–246
Van der Valk R, Dekker J, van Baar M 1995 Physical therapy for patients with back pain. Physiotherapy 81: 345–351
Van der Windt D, van der Heijden G, van den Berg S, ter Riet G, de Winter A, Bouter L 1999 Ultrasound therapy for musculoskeletal disorders: a systematic review. Pain 81: 257–271
Van Poppel MNM, Koes BW, Smid T, Bouter LM 1997 A systematic review of controlled clinical trials on the prevention of back pain in industry. Occupational and Environmental Medicine 54: 841–847
Van Tulder MW, Koes BW, Bouter LM 1997 Conservative treatment of acute and chronic nonspecific low back pain. A systematic review of randomized controlled trials of the most common interventions. Spine 22: 2128–2156
Van Tulder MW, Cherkin DC, Berman B, Lao L, Koes B 1999 The effectiveness of acupuncture in the management of acute and chronic low back pain. A systematic review within the framework of the Cochrane Collaboration Back Review Group. Spine 24: 1113–1123
