
EVIDENCE BASED PRACTICE


CONTENTS

 INTRODUCTION
 STEPS TO IMPLEMENT EBP
 GENERALIZABILITY OF RESEARCH FINDINGS
 LEVELS OF EVIDENCE
 BARRIERS TO EVIDENCE BASED PRACTICE
 ARTICLE
 REFERENCES


INTRODUCTION

Evidence-based practice (EBP) is a perspective on clinical decision-making that originated in evidence-based medicine and has been defined as “… the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients … [by] integrating individual clinical expertise with the best available external clinical evidence from systematic research” (Sackett, Rosenberg, Gray, Haynes, & Richardson, 1996, p. 71). Recent discussions of EBP (e.g., Guyatt et al., 2000; Sackett, Straus, Richardson, Rosenberg, & Haynes, 2000) have emphasized the need to integrate patient values and preferences, along with best current research evidence and clinical expertise, in making clinical decisions.

As noted by Sackett, Haynes, Guyatt, and Tugwell (1991), the history of medicine includes a number of cases in
which the recommendations of respected authorities
have turned out to be wrong or harmful when subjected
to scientific investigation. These cases range from
William Osler's 19th century recommendation that
opium be used to treat diabetes (Sackett, Haynes, Guyatt,
& Tugwell, 1991) to the 1940s-era “best practice” of
oxygenating premature infants to prevent retrolental
fibroplasia, a condition that careful research eventually
showed to be caused, not cured, by this treatment
(Meehl, 1997). More recent examples are easy to find
(e.g., Barrett-Connor, 2002). At the time they were
made, all of these recommendations were consistent with
current clinical thinking; only when they were evaluated by rigorous scientific tests were they discounted (Sackett et al., 1991). For this reason, the EBP
orientation accords greater weight to evidence from
high-quality studies than to the beliefs and opinions of
experts.

SUBMITTED BY: NUHA HALEEMA
FACULTY: DR. SANTOSH M

In the EBP framework, explicit criteria are used to evaluate the quality of evidence available to support clinical decisions. Some of these criteria are common to all scientific investigations, but others are specific to studies of clinical activities. Many systems for ranking the credibility of evidence have been proposed; in some cases, evidence “grades” are then assigned to clinical recommendations according to the strength of their supporting evidence. The criteria for evaluating evidence differ somewhat according to whether the evidence concerns screening, prevention, diagnosis, therapy, prognosis, or healthcare economics.

Evidence-based practice (EBP) is the integration of

 Clinical expertise/expert opinion
o The knowledge, judgment, and critical reasoning acquired through your training and professional experiences
 Evidence (external and internal)
o The best available information gathered from the scientific literature (external evidence) and from data and observations collected on your individual client (internal evidence)
 Client/patient/caregiver perspectives
o The unique set of personal and cultural circumstances, values, priorities, and
expectations identified by your client and their caregivers

When all three components of EBP are considered together, clinicians can make informed,
evidence-based decisions and provide high-quality services reflecting the interests, values,
needs, and choices of individuals with communication disorders. So, you can be confident that
you're providing the best possible care no matter what clinical questions may arise.


STEPS TO IMPLEMENT EBP

Follow these steps to initiate and implement EBP into your clinical practice.

Step 1: Frame Your Clinical Question

The first step in the evidence-based practice (EBP) process is to identify the clinical problem
or question for which you are seeking evidence. Asking a focused and relevant question about
your client's situation will inform your search. One widely used approach to frame a clinical
question is known as PICO, which stands for Population, Intervention, Comparison, and Outcome.

The PICO elements are as follows:

 Population: What are the characteristics and/or condition of the group? This may
include specific diagnoses, ages, or severity levels (e.g., autism spectrum disorder, mild
hearing loss).
 Intervention: What is the screening, assessment, treatment, or service delivery model
that you are considering (e.g., instrumental swallowing assessment, high-intensity
treatment, hearing aids)?
 Comparison: What is the main alternative to the intervention, assessment, or screening
approach (e.g., placebo, different technique, different amount of treatment)? Note: In
some situations, you may not have a specific comparison in your PICO question.
 Outcome: What do you want to accomplish, measure, or improve (e.g., upgraded diet
level, more intelligible speech, better hearing in background noise)?

Once you've identified the population, intervention, comparison, and outcome for your
situation, you can establish your PICO question.

There is no one "correct" way to construct a PICO question. Your clinical question
should include elements specific to each client's unique circumstances and values.


Population: Children with severe to profound hearing loss
Intervention: Cochlear implants
Comparison: Hearing aids
Outcome: Speech and language development
Example PICO Question: For children with severe to profound hearing loss, what is the effect of cochlear implants compared with hearing aids on speech and language development?

Population: Young adult with a stroke
Intervention: Cognitive rehab
Comparison: Not applicable
Outcome: Return to work
Example PICO Question: What is the effect of cognitive rehabilitation on vocational outcomes in individuals who experience a stroke?

Sometimes, you have a clinical situation that may have more than one PICO question. Write
them all down to tackle one search at a time. Your clinical question(s) should be specific enough to guide your search, but not so specific that you are unable to find any evidence.

Step 2: Gather Evidence

Now that you’ve formulated your PICO question, the next step is to gather evidence that
addresses your question. There are two types of evidence to consider: internal evidence and
external evidence.

 Internal evidence refers to the data that you systematically collect directly from your
clients to ensure that they’re making progress. This data may include subjective
observations of your client as well as objective performance data compiled across time.
Use your clinical expertise to determine what information is most important to track for
your client’s specific situation and needs. Armed with internal evidence unique to your
client, you are better prepared to find targeted external evidence that will help you make
a clinical decision.

 External evidence refers to evidence from the scientific literature, particularly the results, data, statistical analysis, and conclusions of a study. This evidence helps you determine whether an approach or a service delivery model might be effective at producing change in individuals like your client.

A well-planned search increases the likelihood that you will find relevant external evidence
that answers your PICO question.


Searching for evidence can be a daunting task. Knowing where to search can save you
valuable time in the EBP process.

 ASHAWire
 speechBITE™
 ERIC (Education Resources Information Center)
 ASHA's Evidence Maps
 The Cochrane Library
 Campbell Collaboration
 What Works Clearinghouse
 PubMed (MEDLINE)
 PsycNet
 JSTOR

Tips to Translate a PICO Question into Search Terms

Example Clinical Question: Does cognitive rehabilitation improve cognitive skills in adults with traumatic brain injury?

PICO | PICO Element | Keywords
P | Adult, traumatic brain injury | Adult AND (Traumatic brain injury OR TBI OR Brain injury)
I | Cognitive rehabilitation | Cognitive rehabilitation OR Cognitive training OR Cognitive treatment
C | Not applicable |
O | Improved cognition | Cognition OR Cognitive skills OR Memory OR Problem solving OR Executive function
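The keyword mapping above follows a simple rule: OR together the synonyms within each concept, then AND the concepts together. As a rough sketch of that rule (the function name and grouping are illustrative, not any database's API):

```python
# Illustrative sketch: OR synonyms within each PICO concept,
# then AND the concepts together. Not tied to any database's syntax.

def build_search_string(keyword_groups):
    """keyword_groups: list of synonym lists, one per PICO concept."""
    clauses = []
    for group in keyword_groups:
        if len(group) == 1:
            clauses.append(group[0])
        else:
            clauses.append("(" + " OR ".join(group) + ")")
    return " AND ".join(clauses)

# Groups mirroring the example table (C is "not applicable" and omitted).
pico_keywords = [
    ["Adult"],
    ["Traumatic brain injury", "TBI", "Brain injury"],
    ["Cognitive rehabilitation", "Cognitive training", "Cognitive treatment"],
    ["Cognition", "Cognitive skills", "Memory", "Problem solving", "Executive function"],
]

print(build_search_string(pico_keywords))
```

Running this prints a single Boolean string that can be adapted to a given database's advanced-search conventions.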


Step 3: Assess the Evidence

Internal evidence, the data and observations collected on an individual client, serves both to document the accountability of your sessions and to track a client's performance. When assessing the internal evidence, you are determining whether an intervention has impacted your client. You may analyze your data to address the following questions (adapted from Higginbotham & Satchidanand, 2019):

 Is your client demonstrating a response to the intervention?
 Is that response significant, especially for the client?
 How much longer should you continue the intervention?
 Is it time to change the therapy target, intervention approach, or service delivery model?
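To make the first of these questions concrete, internal evidence can be as simple as per-session scores tagged by phase, and comparing phase averages gives a first look at whether the client is responding. A minimal sketch (the session data, scores, and helper name are hypothetical):

```python
# Hypothetical internal-evidence log: (phase, accuracy) per session.
# Comparing phase means is a first-pass check, not a substitute for
# clinical judgment about whether the change is meaningful.

from statistics import mean

def phase_means(sessions):
    """Average score per phase from (phase, score) records."""
    by_phase = {}
    for phase, score in sessions:
        by_phase.setdefault(phase, []).append(score)
    return {phase: mean(scores) for phase, scores in by_phase.items()}

sessions = [
    ("baseline", 0.30), ("baseline", 0.35), ("baseline", 0.32),
    ("intervention", 0.50), ("intervention", 0.62), ("intervention", 0.71),
]

means = phase_means(sessions)
print(f"baseline {means['baseline']:.2f} -> intervention {means['intervention']:.2f}")
```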

External evidence, found in the scientific research literature, answers clinical questions such as whether an assessment measures what it is intended to measure or whether a treatment approach is effective in causing change in individuals. Because the quality of external evidence is variable, this step of assessing the evidence is crucial and includes determining the reliability, importance, and applicability of the relevant scientific research to your client’s condition and needs.
Critically appraising the external evidence can help you determine if the conclusions from one
or more studies can help guide your clinical decision. To assess the external evidence, you
should

 determine the relevance to your question,
 appraise the validity and trustworthiness, and
 review the results and conclusions.

Step 4: Make Your Clinical Decision

The final step of the EBP process requires you to make a clinical decision. To make an
evidence-based decision, clinicians must consider evidence (both internal and external), assess
the appropriateness of their clinical experience for the situation, and review the individual
client’s perspectives and priorities—the three components of EBP.

Although this complex and nuanced process may seem difficult, the D.E.C.I.D.E. framework
can help you easily remember and implement all four steps of the EBP process and can help
guide you to a clinical decision.


Define
Define your clinical question, gather external and internal evidence, and determine the validity and trustworthiness of the results.
Extrapolate
Extrapolate clinically applicable information from the external evidence. Although some
results may directly align with your client and setting, often, you will need to determine
whether the overall results are compelling and meaningful enough to apply to your clinical
situation.
Consider
Consider your clinical expertise and the expertise of others. Use your training, knowledge, and
clinical experience to collect and analyze internal evidence and to interpret and apply external
evidence when making a clinical decision.
Incorporate
Incorporate the needs and perspectives of your client, their caregiver, and/or their family into
your assessment and intervention decisions. These needs and perspectives can provide insight
into their priorities, values, and expectations.
Develop
Develop an assessment or treatment plan by bringing together the three components of EBP.
In some clinical situations, you may need to prioritize one of the EBP components (e.g., external scientific evidence reporting harm or a client’s preference/refusal); however, you should consider all three components.
Evaluate
Evaluate your clinical decision. Use a trial period, collect internal evidence, and analyze all of
the clinical information to (a) ensure that the intervention is appropriate or (b) adjust your
treatment plan as needed.

EBP is a dynamic process and requires ongoing evaluation. If you don’t see progress, if your
client’s needs or circumstances have changed, or if you need to re-prioritize the goals, you
should cycle through the EBP process again to find another option that can better serve your
client.

GENERALIZABILITY OF RESEARCH FINDINGS

Although studies reporting definitive outcomes are ideal, sometimes the results from individual
studies or the body of external evidence are inconclusive. In other cases, there may be very
little to no scientific evidence available. In these instances, it may be valuable to consider
research evidence from similar populations or interventions and to determine whether the
results are generalizable to your client or clinical situation. In this circumstance, it is even more
critical to collect and consider data taken from your client’s performance to determine whether
the approach you are taking is having the intended effect.

LEVELS OF EVIDENCE

Levels of evidence for studies of treatment efficacy are ranked according to quality and credibility, from highest/most credible (Ia) to lowest/least credible (IV) (adapted from the Scottish Intercollegiate Guideline Network, www.sign.ac.uk).

Level Ia: Well-designed meta-analysis of more than one randomized controlled trial
Level Ib: Well-designed randomized controlled study
Level IIa: Well-designed controlled study without randomization
Level IIb: Well-designed quasi-experimental study
Level III: Well-designed nonexperimental studies, i.e., correlational and case studies
Level IV: Expert committee report, consensus conference, clinical experience of respected authorities

Level 1: Systematic reviews and meta-analyses of randomized clinical trials and other well-designed studies
Level 2: Double-blinded, prospective, randomized, controlled clinical trials
Level 3: Nonrandomized intervention studies
Level 4: Nonintervention studies:
● Cohort studies
● Case-control studies
● Cross-sectional surveys
Level 5: Case reports
Level 6: Expert opinion of respected authorities

In general, well-designed, synthesized evidence (e.g., systematic reviews, meta-analyses) is at the top of the hierarchy because of the methodological quality control characteristics included in those designs. Expert opinion and uncontrolled case series are often at the bottom of the hierarchy because these designs do not include strong methodological steps to protect against bias or systematic error. Ideally, when deciding whether evidence is strong and trustworthy, you should consider both the study’s design AND the appraised methodological quality.
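One way to see how a hierarchy like this is used in practice is to treat it as a lookup table and compare study designs by rank. The labels and helper below are illustrative only, and, as noted above, design rank alone does not settle trustworthiness:

```python
# Illustrative lookup table based on the hierarchy above; a lower
# number means higher on the hierarchy. Design rank is only half the
# picture — appraised methodological quality matters too.

EVIDENCE_RANK = {
    "systematic review/meta-analysis": 1,
    "randomized controlled trial": 2,
    "nonrandomized intervention study": 3,
    "cohort study": 4,
    "case-control study": 4,
    "cross-sectional survey": 4,
    "case report": 5,
    "expert opinion": 6,
}

def stronger_design(a, b):
    """Return whichever design sits higher on the hierarchy."""
    return min(a, b, key=EVIDENCE_RANK.__getitem__)

print(stronger_design("case report", "randomized controlled trial"))
```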

Brief explanations of a few research designs:

Primary Research: Experimental Study Designs

o Randomized controlled trial (RCT)
Participants are randomly assigned to either the control group or an experimental group. Researchers compare outcomes from each group to determine whether the intervention caused any change.

o Controlled trial
A study involving non-randomized groups (i.e., experimental, comparison/control), which helps determine the effects of the intervention.

 Single-subject designs – Also known as single-case experimental designs, this type of experimental design allows researchers to closely examine specific changes in each participant. Each participant serves as their own control (i.e., compared to themselves), and researchers measure the outcome or dependent variable repeatedly across phases (e.g., baseline phase, intervention phase, withdrawal phase). There are many variations of a single-subject design study.
 Cross-over trial – This is a study in which participants first receive one type of
treatment and then researchers switch them to a different type of treatment.

Primary Research: Observational/Non-Experimental Study Designs


o Cohort Study
A clinical research study in which people who presently have a certain condition or
receive a particular treatment are followed over time and compared with another
group of people who are not affected by the condition.
o Case Control Study
The observational epidemiologic study of persons with the disease (or other outcome
variable) of interest and a suitable control (comparison, reference) group of persons without
the disease. The relationship of an attribute to the disease is examined by comparing the
diseased and nondiseased with regard to how frequently the attribute is present or, if
quantitative, the levels of the attribute, in each of the groups.
o Case Series
A group or series of case reports involving patients who were given similar treatment.
Reports of case series usually contain detailed information about the individual patients. This
includes demographic information (for example, age, gender, ethnic origin) and information
on diagnosis, treatment, response to treatment, and follow-up after treatment.
o Case Study
An investigation of a single subject or a single unit, which could be a small number of individuals who seem to be representative of a larger group or very different from it.
o Editorial
Work consisting of a statement of the opinions, beliefs, and policy of the editor or publisher of
a journal, usually on current matters of medical or scientific significance to the medical
community or society at large. The editorials published by editors of journals representing the
official organ of a society or organization are generally substantive. (PubMed)
o Opinion
A belief or conclusion held with confidence but not substantiated by positive knowledge or
proof. (The Free Dictionary)
o Animal Research
A laboratory experiment using animals to study the development and progression of diseases. Animal studies also test how safe and effective new treatments are before they are tested in people. (NCI Dictionary of Cancer Terms)

Secondary Research

Secondary research, also called synthesized research, combines the findings from primary
research studies and provides conclusions about that body of evidence.

o Systematic Reviews
Systematic reviews use systematic methods to search for and compile a body of evidence to
answer a research or clinical question about the efficacy/effectiveness of an assessment or
treatment approach. Typically, studies included in a systematic review have met
predetermined eligibility and quality criteria (e.g., studies must be experimental designs). The
systematic review then provides qualitative conclusions based on the included studies.
o Meta-Analyses
Meta-analyses use systematic and statistical methods to answer a research or clinical question
about a specific assessment or treatment approach. Like systematic reviews, included primary
studies must meet predetermined eligibility and quality criteria. The meta-analyses provide
quantitative conclusions (e.g., pooled effect size, confidence interval) to determine the overall
treatment effect or effect size across studies. The additional statistical measures can provide a
better picture of the clinical significance.
Not all systematic reviews include meta-analysis, but all meta-analyses are found in
systematic reviews.
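The “pooled effect size” a meta-analysis reports is, in its simplest (fixed-effect) form, an inverse-variance weighted average of the individual study effects. A sketch of that arithmetic, using made-up numbers:

```python
# Fixed-effect pooling: weight each study's effect size by the inverse
# of its variance, so more precise studies count for more.
# The three studies below are invented for illustration.

def pooled_effect(effects, variances):
    """Inverse-variance weighted mean of study effect sizes."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

effects = [0.40, 0.55, 0.20]     # per-study effect sizes
variances = [0.04, 0.09, 0.02]   # per-study variances

print(round(pooled_effect(effects, variances), 3))
```

This is why a large, precise trial can dominate a pooled estimate even when several small studies point the other way.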
o Clinical Practice Guidelines
Clinical practice guidelines are systematically developed statements created by a group of
subject matter experts to provide a comprehensive overview of a disorder, detail the benefits
and harms of specific assessment and treatment approaches, and optimize delivery of
services.


BARRIERS TO EVIDENCE BASED PRACTICE

There is now overwhelming agreement that EBP sets a verifiable standard for practice and recognition that clinical outcomes are enhanced when current best evidence is considered. For audiology and speech-language pathology, the growing maturity of these professions is reflected by the fact that current best practice is increasingly “rooted in data” and “replicable”, and provides “an expectation of accuracy (in the case of diagnostic procedures), positive results (in the case of therapeutic procedures) and benefit to either the consumer in particular or society in general”. The laudable goal of EBP “is to promote the uptake of innovations that have been shown to be effective, to delay the spread of those that have not yet been shown to be effective, and to prevent the uptake of ineffective innovations”. However, despite growing support for EBP, there remains a bevy of opinions on how clinicians can best translate research results into more effective and efficient clinical services.

In a review of the literature from journals spanning multiple health care professions, Zipoli and
Kennedy (2005) noted “mixed attitudes” toward EBP that ranged from enthusiastic support to
grave concern. Although relatively few denied the importance of research for professional
practice, the sceptical minority most typically viewed EBP as a threat to “traditional practice.”
Rarely do speech-language pathologists indicate that research is of little value for the clinic or
suggest that they are unwilling to try new approaches or otherwise modify their existing
practice. Surveying physical therapists, Jette and her colleagues (2003) reported that more
recently trained professionals tended to be more familiar with (and have greater confidence in)
literature search strategies, database use, and the critical evaluation of research articles than
their more experienced colleagues. Certainly, continuing professional education can help to
allay such concerns, but there are several obstacles to practice that, nonetheless, must be
addressed. In particular, there are at least four fundamental issues that hinder the clinician’s ability to reference external evidence efficiently, effectively, and consistently in routine clinical practice.

Lack of Treatment Outcomes Research

The first issue centres on the unfortunate fact that, despite the breadth of the literature in communication sciences and disorders, relatively few empirical treatment studies have been conducted. Thus, even though there seems to be widespread acclaim for EBP, for many practices within audiology and speech-language pathology there remains little or no evidence base to support (or refute) their use. Approaches backed by a great deal of high-level research evidence are rare. Consequently, clinicians have little recourse but to resort to trial-and-error problem solving in their practice.

Employing Hierarchies of Evidence

The second issue concerns the need for clinicians to understand the “levels of evidence” classification that is used to describe the scientific quality and rigor of quantitative research studies. Clearly, the ability to evaluate research critically is vital for determining “the strength or weakness of the scientific support for a specific intervention or diagnostic technique”, but many practitioners continue to report difficulty judging the adequacy of the statistical analysis or research design employed in clinical outcome studies. One might speculate, therefore, that clinicians tend to “accept research reports as reliable based on reputation rather than substantive review.” Yet all hierarchies of evidence are tied not so much to the experimental variables or research question as to the research design and analysis used.
Thus, lingering confusion about the nature of evidence hierarchies and their implementation
within EBP remains an obstacle. Relying on only one type of evidence, regardless
of its scientific rigor, is likely to result in an incomplete, incorrect, and biased base of
knowledge from which to judge clinical decisions. Higher-level evidence, such as RCTs, does
not necessarily supplant lower-level evidence, represented by case reports, pragmatic trials,

13
L

and the like. Although EBP calls for reference to current best evidence, “best” should be
understood as a fair and comprehensive “portfolio
of research evidence” that consists of studies representing a variety of complementary
designs that promote both the internal and external validity of findings.

Role of Qualitative Research


Looking at what is occurring within health professions, it seems clear that there remains a great
need for an expanded and comprehensive “contextual framework” that will help guide the
development and implementation of evidence-based audiology and speech-language
pathology.

Clinician Time Constraints


The fourth issue concerns the practical need to locate relevant sources of evidence quickly and effectively. Few clinicians have the time to perform a comprehensive search of journals and textbooks for clinical evidence. Even when they do, it would seem unreasonable to expect them to track down, read, evaluate, and synthesize a large number of individual research studies for every clinical question that arises. It is thus not surprising that, virtually without exception, clinicians across many health professions regard constraints on their time as the most substantial barrier to implementing research evidence in their practice.

ARTICLE
Greenwell, T., & Walsh, B. (2021). Evidence-based practice in speech-language pathology: Where are we now? American Journal of Speech-Language Pathology, 30(1), 186–198.
ABSTRACT
Purpose: In 2004, the American Speech-Language-Hearing Association established its position
statement on evidence-based practice (EBP). Since 2008, the Council on Academic
Accreditation has required accredited graduate education programs in speech-language
pathology to incorporate research methodology and EBP principles into their curricula and
clinical practicums. Over the past 15 years, access to EBP resources and employer-led EBP
training opportunities have increased. The purpose of this study is to provide an update of
how increased exposure to EBP principles affects reported use of EBP and perceived barriers
to providing EBP in clinical decision making.
Method: Three hundred seventeen speech-language pathologists completed an online questionnaire querying their perceptions about EBP, use of EBP in clinical practice, and perceived barriers to incorporating EBP. Participants' responses were analyzed using descriptive and inferential statistics. We used multiple linear regression to examine whether years of practice, degree, EBP exposure during graduate program and clinical fellowship (CF), EBP career training, and average barrier score predicted EBP use.


Results: Exposure to EBP in graduate school and during the CF, perception of barriers, and EBP career training significantly predicted the use of EBP in clinical practice. Speech-language pathologists identified the three major components of EBP (client preferences, external evidence, and clinical experience) as the sources of EBP they most frequently turned to.
Inadequate time for research and workload/caseload size remain the most significant barriers
to EBP implementation. Respondents who indicated time was a barrier were more likely to
cite other barriers to implementing EBP. An increase in EBP career training was associated
with a decrease in the perception of time as a barrier.
Conclusions: These findings suggest that explicit training in graduate school and during the
CF lays a foundation for EBP principles that is shaped through continued learning
opportunities. We documented positive attitudes toward EBP and consistent application of
the three components of EBP in clinical practice. Nevertheless, long-standing barriers remain.
We suggest that accessible, time-saving resources, a consistent process for posing and answering clinical questions, and on-the-job support and guidance from employers/organizations are essential to implementing evidence-based clinical practices. The implications of our findings and suggestions for future research to bridge the
research-to-practice gap are discussed.

REFERENCES
1. Haynes, W. O., & Johnson, C. (2009). Understanding research and evidence-based practice in communication disorders. Boston: Pearson.
2. https://www.asha.org/research/ebp/
