
EVIDENCE BASED PRACTICE

Faculty: Dr. Mahesh BVM


Presenters: Harshinee Krupanand, Moksh

Outline of the document


- Definition of Evidence Based Practice
- Levels of Evidence
- Steps to initiate and implement Evidence Based Practice in Clinical Practice
- Five Themes in Evidence Ratings
- Barriers to Evidence Based Practice
- References

DEFINITION OF EVIDENCE BASED PRACTICE


Evidence Based Practice (EBP) is defined as the conscientious, explicit, and judicious use of
current best evidence in making decisions about the care of individual patients by integrating
individual clinical expertise with the best available external clinical evidence from systematic
research. - (Sackett, Rosenberg, Gray, Haynes, & Richardson, 1996, p. 71)

LEVELS OF EVIDENCE
Levels of evidence for studies of treatment efficacy, ranked according to quality and
credibility from highest/most credible (Ia) to lowest/least credible (IV) (adapted from the
Scottish Intercollegiate Guideline Network, www.sign.ac.uk).

| Level | Description |
| --- | --- |
| Ia | Well-designed meta-analysis of >1 randomized controlled trial |
| Ib | Well-designed randomized controlled study |
| IIa | Well-designed controlled study without randomization |
| IIb | Well-designed quasi-experimental study |
| III | Well-designed nonexperimental studies, i.e., correlational and case studies |
| IV | Expert committee report, consensus conference, clinical experience of respected authorities |

STEPS TO INITIATE AND IMPLEMENT EBP IN CLINICAL PRACTICE

Framing the Clinical Question


The first step in the evidence-based practice (EBP) process is to identify the clinical problem
or question for which you are seeking evidence. Asking a focused and relevant question
about your client's situation will inform your search. One widely used approach to frame a
clinical question is known as PICO, which stands for:
Population, Intervention, Comparison, Outcome.
The PICO elements are as follows:
• Population: What are the characteristics and/or condition of the group? This may
include specific diagnoses, ages, or severity levels (e.g., autism spectrum disorder,
mild hearing loss).
• Intervention: What is the screening, assessment, treatment, or service delivery model
that you are considering (e.g., instrumental swallowing assessment, high-intensity
treatment, hearing aids)?
• Comparison: What is the main alternative to the intervention, assessment, or
screening approach (e.g., placebo, different technique, different amount of
treatment)? Note: In some situations, you may not have a specific comparison in your
PICO question.
• Outcome: What do you want to accomplish, measure, or improve (e.g., upgraded diet
level, more intelligible speech, better hearing in background noise)?
Once you've identified the population, intervention, comparison, and outcome for your
situation, you can establish your PICO question.

| Population | Intervention | Comparison | Outcome | Example PICO Question |
| --- | --- | --- | --- | --- |
| Children with severe to profound hearing loss | Cochlear implants | Hearing aids | Speech and language development | For children with severe to profound hearing loss, what is the effect of cochlear implants compared with hearing aids on speech and language development? |
| Young adult with a stroke | Cognitive rehab | Not applicable | Return to work | What is the effect of cognitive rehabilitation on vocational outcomes in individuals who experience a stroke? |
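To see how the four elements combine mechanically, here is a small, hypothetical sketch that assembles a PICO question from its parts; the PICO class and the wording template are illustrative assumptions, not an established tool.

```python
# Hypothetical sketch: assembling a PICO question from its four elements.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PICO:
    population: str
    intervention: str
    comparison: Optional[str]  # some questions have no explicit comparison
    outcome: str

    def question(self) -> str:
        text = f"For {self.population}, what is the effect of {self.intervention}"
        if self.comparison:
            text += f" compared with {self.comparison}"
        return f"{text} on {self.outcome}?"

# The two examples from the table above:
print(PICO("children with severe to profound hearing loss",
           "cochlear implants", "hearing aids",
           "speech and language development").question())
print(PICO("young adults who experience a stroke",
           "cognitive rehabilitation", None, "return to work").question())
```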

Gather Evidence
The next step is to gather evidence that addresses your question. There are two types of evidence
to consider: internal evidence and external evidence.
Internal evidence refers to the data that you systematically collect directly from your clients
to ensure that they’re making progress. This data may include subjective observations of your
client as well as objective performance data compiled across time.
External evidence refers to evidence from scientific literature—particularly the results, data,
statistical analysis, and conclusions of a study.
• How should you plan your search for external evidence?
1. Develop a list of search terms.
Example: What is the population, patient, or problem of interest? What is the main
intervention or issue being considered? What outcome do you want to accomplish?
2. Set parameters for your search.
Combine keywords and phrases using terms such as "OR" and "AND" (known
as Boolean operators) to broaden or narrow your search results.
• Use "OR" to increase your search results and find evidence that contains either
term (e.g., "dysphagia OR swallowing"; "teenagers OR adolescents").
• Use "AND" to limit your search and find evidence that must contain both words
(e.g., "stroke AND aphasia"; "children AND hearing loss").
Apply limits and filters to narrow your search (e.g., date range, language). A date
limit may be helpful, particularly when a search retrieves too many results. Date
limits may also be helpful if your question involves more recent technology or
practice (e.g., "digital hearing aids", "telepractice"); a scripted search sketch follows this list.
3. Stay organized.
Write down the key terms searched, the databases used, and the search parameters
applied. Keep track of your search results. This will help you identify the most
effective search terms, eliminate duplicate citations, and ultimately save you time.
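The same Boolean operators and date limits work when the search is scripted. Below is a minimal sketch of a PubMed query using Biopython's Entrez module; the query terms, date range, and contact email are illustrative assumptions.

```python
# Minimal sketch of a Boolean PubMed search (assumes the biopython package;
# query terms, dates, and email address are illustrative).
from Bio import Entrez

Entrez.email = "your.name@example.org"  # NCBI requests a contact address

# "OR" broadens (either term matches); "AND" narrows (both must match).
query = "(dysphagia OR swallowing) AND stroke"

handle = Entrez.esearch(
    db="pubmed",
    term=query,
    retmax=20,        # cap the number of returned record IDs
    datetype="pdat",  # filter on publication date
    mindate="2015",   # date limits help when a search retrieves too much
    maxdate="2025",
)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} matches; first IDs: {record['IdList']}")
```

Logging each query string alongside its result count is also an easy way to stay organized, as step 3 recommends.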

• What type of external evidence is needed to answer your question?


Different questions (e.g., therapy/treatment, diagnostic, prevention) may be better addressed
by different study designs. For example, determining the effect of a treatment on a specific
patient population compared with an alternative or no treatment may be best addressed by a
randomized controlled trial.
Synthesized evidence can save time
With synthesized evidence, researchers take a clinical question, gather available evidence,
and make conclusions or recommendations about the body of research. Common types of synthesized evidence include:
1. Systematic review: a formal assessment of the body of scientific evidence related
to a clinical question, describing the extent to which various diagnostic or
treatment approaches are supported by the evidence. Systematic reviews that use
statistical techniques to pool data and draw conclusions across studies are known
as meta-analyses.
2. Clinical practice guidelines: developed by a group of topic experts, these provide
recommendations for managing a specific condition or population to optimize care.
Guidelines may also discuss possible benefits and harms of a clinical action and
recommend alternative approaches. Guidelines can be evidence based or consensus
based.
▪ Evidence-based guidelines provide recommendations based on an evidence-
based systematic review.
▪ Consensus-based guidelines provide recommendations formed by consensus or
agreement among topic experts without an evidence-based systematic review.
• Where should you search for external evidence?
Begin where you are most likely to find evidence related to your clinical question, such as
databases specific to CSD research or related disciplines (e.g., education, rehabilitation):
ASHAWire, speechBITE™, ERIC (Education Resources Information Center).
Guidelines, systematic reviews, and meta-analyses, if done well, can be trustworthy sources
of information, e.g., ASHA's Evidence Maps, The Cochrane Library, Campbell Collaboration,
What Works Clearinghouse (U.S. Department of Education).
Look for individual studies when evidence from guidelines, systematic reviews, or meta-
analyses is unavailable, out of date, unreliable, or irrelevant. Popular databases include
PubMed (MEDLINE), PsycNet, JSTOR.

• What should you do if you are unable to find external evidence?


• Reconsider your PICO question and/or search terms. Try broadening your search.
Add more synonyms or common acronyms to your search terms.
• Consider research from similar or related populations, interventions, or outcomes. Use
your clinical judgment to decide whether such information could be helpful for your
client.
• Find an alternative assessment, treatment, or service delivery option that is evidence-
based.
• Review and analyze your internal evidence, or client data, to find any changes or
patterns that may guide your clinical decision.

Assess the Evidence


When assessing the evidence, keep in mind that each type of evidence serves a unique
purpose for your clinical decision making.
Internal evidence: You may analyze your data to address the following questions (adapted
from Higginbotham & Satchidanand, 2019):
• Is your client demonstrating a response to the intervention?
• Is that response significant, especially for the client?
• How much longer should you continue the intervention?
• Is it time to change the therapy target, intervention approach, or service delivery
model?

External evidence: To assess the external evidence, you should:


1. Determine the relevance to your question:
o Relevance refers to how closely connected the study's elements (e.g., study
aim, participants, method, results) are to your clinical question and how well
the external evidence fits your needs.
2. Appraise the validity and trustworthiness:
o It means that you have considered whether the study effectively investigates
its aim. The study should be transparent about its methodology―the research
procedure, the data collection methods, and the analysis of data and outcomes.
This helps you decide whether the research evidence is trustworthy and
whether you can have confidence in its results.

o Ask yourself:
▪ Will this research design help me answer my question?
▪ What are the limitations of the research evidence?
▪ Is the external evidence from a trusted source of information?

o Research Design and Study Quality


Because certain research designs offer better controls against bias, many EBP
hierarchies rank study quality solely based on study design. However, these
hierarchies often fall short because research design alone does not necessarily
equate to good external evidence. Moreover, no one study design can answer
all types of PICO questions. The chart below details the types of study designs
that are best suited for various types of clinical questions.

| Type of Question | Example | Preferred Study Design(s) | Other Relevant Study Design(s) |
| --- | --- | --- | --- |
| Screening/Diagnosis: accuracy in differentiating clients with or without a condition | Is an auditory brainstem response screening more accurate than an otoacoustic emissions screening in identifying newborns with hearing loss? | Prospective, blind comparison to reference standard | Cross-sectional |
| Treatment/Service Delivery: efficacy of an intervention | What is the most effective treatment to improve cognition in adults with traumatic brain injury? | Randomized, controlled trial | Controlled trial; single-subject/single-case experimental design |
| Etiology: identify causes or risk factors of a condition | What are the risk factors for speech and language disorders? | Cohort | Case control; case series |
| Quality of Life/Perspective: understand the opinions, experiences, and perspectives of clients, caregivers, and other relevant individuals | How do parents feel about implementing parent-mediated interventions? | Qualitative studies (e.g., case study, case series) | Ethnographic interviews or surveys of the opinions, perspectives, and experiences of clients, their caregivers, and other relevant individuals |

o Limitations of the Evidence


Limitations are the shortcomings or external influences that the
investigators of a study could not, or did not, control.

o To help determine what limitations exist, you can appraise the methodological
quality of each study using one of many available research design–specific
checklists. Depending on the checklist, you can appraise some or all of the
following features:

▪ The study had a clearly stated and focused aim or objective.


▪ Investigators used methods to control for bias, such as blinding or random assignment.
▪ The study clearly described the methods used, the intervention protocol
applied, and the participants involved (e.g., age, medical diagnosis,
severity of condition).
▪ The study objectively identified and accounted for any other
confounding factors (e.g., restrictions of design, implementation
fidelity).

o Although other sources of bias exist, they are not typically assessed as part of
these checklists. Other sources of bias to consider include conflicts of
interest and publication bias.
▪ Conflict of interest refers to factors that may compromise the
investigator's objectivity in conducting or reporting their research.
Financial funding from product developers or employment with the
sponsoring organization are common examples of conflicts of interest
within research. Be sure to interpret with caution any sources that
appear to (a) sensationalize information, (b) lack editorial peer review,
or (c) have an alternative agenda.
▪ Publication bias occurs when the results of a study influence whether
or not the study is published. This may result in studies with positive
or significant findings being more likely to be published than those
with null or negative findings.

3. Review the results and conclusions:


• The results can tell you if the desired outcome of the study was achieved (i.e., “Was
there a benefit from the intervention or assessment, or was there no effect?”) and
whether any adverse events occurred (i.e., harm). Knowing the extent of the effects
ultimately determines if the results of a study are clinically meaningful and important.
When examining the results and conclusions, consider the study's statistical analyses,
direction and consistency, and applicability and generalizability.
• Statistical Analyses
Information such as sample size, confidence interval, and effect size allows you to
decide how large and precise the intervention effect is. A p value can help you
determine whether the results of a study are statistically significant (in other
words, they likely did not occur by chance), but it cannot tell you whether the results
are clinically significant or clinically important. For example, a study may find a
statistically significant difference between the outcomes of two groups, yet the real-
life impact for the individuals in each group could be similar; a worked sketch of this
distinction follows this list. Researchers can use measures such as relative risks and
the minimally clinically important difference (also referred to as the minimally
important difference) to report clinical significance.
• Direction and Consistency
Consider the results from individual studies and determine whether the overall
conclusions across studies are similar. For example, taken together, are the results
from the body of external evidence similarly positive or negative? Do the direction
and consistency of the evidence support a change in clinical practice? Be sure to
factor in any details (e.g., participant sample size and heterogeneity of participants)
that you identified in the individual studies that may limit the applicability of the
results.
• Applicability and Generalizability
Although studies reporting definitive outcomes are ideal, sometimes the results from
individual studies or the body of external evidence are inconclusive. In other cases,
there may be very little to no scientific evidence available. In these instances, it may
be valuable to consider research evidence from similar populations or interventions
and to determine whether the results are generalizable to your client or clinical
situation. In this circumstance, it is even more critical to collect and consider data
taken from your client’s performance to determine whether the approach you are
taking is having the intended effect.
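As the worked sketch promised above, the simulation below shows two large groups whose p value is highly significant while the standardized effect (Cohen's d) is trivial. All numbers are simulated for illustration, and numpy/scipy are assumed.

```python
# Sketch: statistically significant but clinically trivial (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Two large groups whose true means differ by a clinically trivial 0.5 points.
treatment = rng.normal(loc=50.5, scale=10, size=20000)
control = rng.normal(loc=50.0, scale=10, size=20000)

t, p = stats.ttest_ind(treatment, control)

# Cohen's d: standardized mean difference using the pooled SD.
pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
d = (treatment.mean() - control.mean()) / pooled_sd

# 95% CI for the raw mean difference (normal approximation).
se = np.sqrt(treatment.var(ddof=1) / len(treatment)
             + control.var(ddof=1) / len(control))
diff = treatment.mean() - control.mean()
ci = (diff - 1.96 * se, diff + 1.96 * se)

print(f"p = {p:.2e} (statistically significant), d = {d:.2f} (trivial)")
print(f"mean difference = {diff:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
# Compare the CI against a minimally important difference before calling
# the effect clinically meaningful.
```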
Make Your Clinical Decision
The final step of the EBP process requires you to make a clinical decision. To make an
evidence-based decision, clinicians must consider evidence (both internal and external),
assess the appropriateness of their clinical experience for the situation, and review the
individual client’s perspectives and priorities—the three components of EBP.

Although this complex and nuanced process may seem difficult, the D.E.C.I.D.E. framework
can help you remember and implement all four steps of the EBP process and can help
guide you to a clinical decision.

Define
Define your clinical question, gather external and internal evidence, and determine the
validity and trustworthiness of the results.

Extrapolate
Extrapolate clinically applicable information from the external evidence. Although some
results may directly align with your client and setting, often, you will need to determine
whether the overall results are compelling and meaningful enough to apply to your clinical
situation. Sometimes, there is simply a lack of external evidence about your clinical question.
If there’s little or no external scientific evidence, then your treatment isn’t necessarily
disqualified—it just requires careful consideration and monitoring.

Consider
Consider your clinical expertise and the expertise of others. Use your training, knowledge,
and clinical experience to collect and analyze internal evidence and to interpret and apply
external evidence when making a clinical decision.

Incorporate
Incorporate the needs and perspectives of your client, their caregiver, and/or their family into
your assessment and intervention decisions. These needs and perspectives can provide insight
into their priorities, values, and expectations. A client’s cultural or linguistic characteristics
(e.g., status as an English language learner) can also impact how you interpret the internal
evidence and how you apply the external evidence to your clinical decision.
Develop
Develop an assessment or treatment plan by bringing together the three components of EBP.
In some clinical situations, you may need to prioritize one of the EBP components (e.g.,
external scientific evidence reporting harm or a client’s preference/refusal); however, you
should consider all three components.
• Use your clinical expertise to determine how to implement the external and internal
evidence into your assessment or intervention sessions (e.g., adapting an evidence-
based treatment into an engaging and individualized activity).
• Prioritize your client's perspectives to make the sessions meaningful. Include goals
that are measurable and functional.
• Consider organizational or other barriers when developing your plan (e.g., access to
materials, department protocols, transportation, or feasibility of implementation).

Evaluate
Evaluate your clinical decision. Use a trial period, collect internal evidence, and analyze all
of the clinical information to (a) ensure that the intervention is appropriate or (b) adjust your
treatment plan as needed. EBP is a dynamic process and requires ongoing evaluation. If you
don’t see progress, if your client’s needs or circumstances have changed, or if you need to re-
prioritize the goals, you should cycle through the EBP process again to find another option
that can better serve your client.

FIVE THEMES IN EVIDENCE RATINGS


1. Independent confirmation and converging evidence
It is extremely rare for a single study to provide the definitive answer to a scientific or
clinical question, but a body of evidence comprising high quality investigations can be
synthesized to approach a definitive answer even when, as is likely, results vary across
studies. When the question concerns treatment efficacy, the highest evidence ranking goes to
well-designed meta-analyses that summarize results across a number of scientifically rigorous
studies. In many cases, results are expressed using both summary statistics and a graphic
representation of the direction, size and precision of findings from individual studies. A
single meta-analysis or systematic review of evidence may not yield results that are so
uniform as to preclude disagreement and debate, especially if the number of high quality
studies available for inclusion is relatively small. However, the principle of seeking
converging evidence from multiple strong studies is inextricably linked to the EBP
orientation.
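To make the pooling idea concrete, the sketch below combines effect sizes from three hypothetical studies using inverse-variance (fixed-effect) weighting, the core computation behind many meta-analyses; the effects and standard errors are invented for illustration.

```python
# Sketch: fixed-effect (inverse-variance) pooling across studies.
# The (effect size, standard error) pairs below are invented for illustration.
import math

studies = [(0.42, 0.20), (0.31, 0.15), (0.55, 0.25)]

weights = [1 / se ** 2 for _, se in studies]  # precision = 1 / SE^2
pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
# The pooled CI is narrower than any single study's CI: converging
# evidence from multiple studies increases precision.
```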

2. Experimental control
In the EBP framework, evidence from studies that are controlled (i.e., that contrast an
experimental group with a control group) and that employ prospective designs (in which
patients are recruited and assigned to conditions before the study begins) is rated more highly
than evidence from retrospective studies in which previously collected data are analyzed,
because the reliability and accuracy of many measures are difficult or impossible to ensure
post hoc.
In addition, group comparison studies are rated more highly when patients are randomly
assigned to groups than when they are not, because random assignment reduces the chance
that groups might differ systematically in some unanticipated or unrecognized ways other
than the experimental factor being investigated.
Lower evidence ratings generally are assigned to quasi-experimental studies, including cohort
studies in which patients with and without a variable of interest are followed forward in time
to compare their outcomes, and case-control designs in which patients with and without an
outcome are identified and compared for their previous exposure to a variable of interest.
Evidence from quasi-experimental studies ranks lower than evidence from controlled studies
because only through random assignment can the risk of differences due to unknown biases
be minimized. Evidence from nonexperimental designs such as correlational studies, case
studies (N = 1), and case series is rated even lower due to the lack of a control group, but
even evidence from nonexperimental study designs outranks statements of belief and opinion
in EBP rating schemes.

3. Avoidance of subjectivity and bias


An important criterion for credible evidence is that observers, investigators, statisticians,
others involved with patients, and if possible the patients themselves, be kept unaware of
information that could potentially influence, or bias, the results of a study. This tactic is
known as blinding, concealment, or masking. Blinding addresses a particular threat to the
validity of patient-oriented evidence: the seemingly inescapable bias that clinicians have
toward believing that their efforts are beneficial. Complete blinding of patients and clinicians
may be impossible in some studies, especially for behavioral treatments for which a placebo
condition cannot be constructed. However, even in such studies a number of steps can be
taken to minimize the potential for bias, such as ensuring that treatment effects (positive or
negative) are measured not by the clinician, the investigator, or a family member but rather
by independent examiners who rate patients without knowing their treatment assignments.
Similarly, examiners can rate unlabelled, randomly ordered recordings from different stages
in the course of intervention (e.g., pre-, intra- and post-treatment) to minimize the potential
influence of their expectations about treatment effects.
Another important control for potential bias that influences evidence ratings is the
requirement that outcomes be reported for every patient originally enrolled in a study, not just
for the patients who complete it. This ensures that patients who did not complete the study as
planned are taken into account in analyzing effects, avoiding the understandable tendency to
focus only on patients who have positive outcomes. In randomized trials this approach,
known as the “intention-to-treat” analysis, means that patients must be analyzed as part of the
treatment group to which they were originally assigned even if they did not actually receive
the treatment as planned (e.g., Moher, Schulz, Altman, et al. 2001; Sackett et al., 2000).
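A small worked example may clarify the point. The sketch below contrasts an intention-to-treat analysis with a per-protocol analysis on invented data for eight patients, two of whom were assigned to treatment but never received it; pandas is assumed.

```python
# Sketch: intention-to-treat (ITT) vs. per-protocol analysis (invented data).
import pandas as pd

df = pd.DataFrame({
    "assigned": ["tx", "tx", "tx", "tx", "ctrl", "ctrl", "ctrl", "ctrl"],
    "received": ["tx", "tx", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl"],
    "outcome":  [8, 7, 3, 2, 4, 3, 5, 4],  # e.g., points gained on a measure
})

# ITT: analyze everyone by the group to which they were ORIGINALLY assigned,
# including the two "tx" patients who never received the treatment.
print(df.groupby("assigned")["outcome"].mean())   # tx 5.0 vs ctrl 4.0

# Per-protocol: analyze by treatment actually received. Dropping the
# non-completers from the treatment arm inflates the apparent effect.
print(df.groupby("received")["outcome"].mean())   # tx 7.5 vs ctrl 3.5
```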
4. Effect sizes and confidence intervals
The EBP orientation emphasizes that studies of clinical questions should specify and justify
the size of effect that is deemed clinically important and should provide evidence that
statistical power is adequate to detect an effect of this magnitude. Appreciation of the need
to consider not just statistical significance (i.e., the probability that differences or effects were
not chance events), but also practical significance (i.e., the magnitude of differences or
effects, usually in the form of a standardized metric such as d or omega-squared) has been
growing for at least 25 years, culminating in the mandate that information on effect sizes and
statistical power be included in every published study (Wilkinson & APA Task Force on
Statistical Inference, 1999).
The EBP orientation also emphasizes the need for investigators to report the confidence
interval (CI) associated with an experimental effect. CIs reflect the precision of the estimated
difference or effect, specifying a range of values within which the “true” value is expected to
occur with a given probability for a certain level of Type I error. Narrower CIs offer stronger
(i.e., more precise and interpretable) evidence than wider CIs; studies in which samples are
large and measurement error is small yield narrower CIs.
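For reference, a standard textbook formulation of these quantities (not taken from the original text) is:

$$
d = \frac{\bar{x}_1 - \bar{x}_2}{s_p},
\qquad
s_p = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
$$

$$
\mathrm{CI}_{95\%} = (\bar{x}_1 - \bar{x}_2) \pm 1.96\,SE,
\qquad
SE = \sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}
$$

The SE term shrinks as the sample sizes grow and as the measurement variability falls, which is exactly why large, low-error studies yield the narrower, more interpretable CIs described above.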

5. Relevance and feasibility


Relevance of evidence is considered highest when the patients studied are typical of those
commonly seen in clinical practice (Ebell, 1998), and/or when the clinical decision being
studied is one that is difficult to make. Feasibility or applicability (Scottish Intercollegiate
Guidelines Network, 2002) is high when the screening, diagnostic, or treatment activity being
investigated is one that could reasonably be applied or used by practitioners in real-world
settings. For example, some conditions can be diagnosed as accurately by interview as by
time-consuming and expensive tests; the former would accordingly outrank the latter on
feasibility.
BARRIERS TO EVIDENCE BASED PRACTICE

There are at least four fundamental issues that hinder the clinician’s ability to reference
external evidence efficiently, effectively, and consistently in routine clinical practice:
1. Lack of treatment outcomes research
▪ Clinicians are unlikely to find approaches for which there is a great deal
of high-level research evidence. Consequently, they have little
recourse but to resort to trial-and-error problem solving in their
practice (O’Connor & Pettigrew, 2009; Worrall & Bennett, 2001).
▪ EBP requires only that clinicians seek out and consider current best
evidence, even when the “best” evidence may be weak (McKenna,
Cutcliffe, & McKenna, 2000).
2. Employing hierarchies of evidence
▪ The ability to evaluate research critically is vital for determining “the
strength or weakness of the scientific support for a specific
intervention or diagnostic technique” (Mullen, 2007), but many
practitioners continue to report difficulty judging the adequacy of the
statistical analysis or research design employed in clinical outcome
studies (Metcalfe, Lewin, Wisher, Perry, Bannigan, & Klaber Moffet, 2001).
▪ Meline and Paradiso (2003) speculate, therefore, that clinicians
consequently tend to “accept research reports as reliable based on
reputation rather than substantive review.”
3. Role of qualitative research
▪ Dollaghan (2007) argued that the overshadowing focus on quantitative
research evidence has marginalized the importance of the remaining
two EBP components. She consequently proposed that EBP in
communication disorders requires “three kinds of evidence” to address
treatment outcome, clinical practice, and client preferences—and uses
the abbreviation E3BP to emphasize that need.
▪ Many questions about practice, stigma, culture, resources,
comorbidities, and other issues within the context of care are not easily
informed by quantitative investigations, but rather by scientific inquiry
using qualitative and mixed-methods approaches (Kovarsky, 2008;
McColl, Smith, White, & Field, 1998; McKenna, Ashton, & Keeney,
2004a; Tetnowski & Franklin, 2003).
▪ McKenna and his colleagues (2004b) point out that high level
quantitative evidence might serve as the “gold standard” if the clinician
is interested in a cause-effect relationship, but if “interested in what it
is like to experience a diagnosis,” then a “phenomenological approach
may be the gold standard.”
▪ Within current evaluative hierarchies, qualitative approaches simply do
not attract the same standing in the evidence-based literature (Cutcliffe
& McKenna, 1999; Hewitt-Taylor, 2003; McKenna, Cutcliffe, &
McKenna, 2000; Scott & McSherry, 2008). Because these
classification systems were developed specifically to evaluate the
scientific rigor of quantitative investigations, qualitative research
studies are viewed as little more than anecdotes, placing them among
the very lowest levels of evidence.
4. Clinician time constraints
▪ The fourth issue concerns the practical need to locate relevant,
germane sources of evidence quickly and effectively.
▪ Iacono and Cameron (2009) conducted a qualitative investigation to
explore the perceptions of SLPs regarding the delivery of evidence-
based AAC services for young children and their families. Although
the investigators noted that the participants made reference to “journal
articles for information” and their approaches to assessment and
expedience appeared to reflect an implied understanding of current
best practice, they nonetheless observed that the clinicians still seemed
“to rely mostly on other more experienced colleagues, attendance at
conferences, and other forms of professional development” to guide
clinical decision making.
Mechanisms are being developed in different disciplines that will effectively assess
competency in EBP, encompassing the practitioner’s knowledge, skills, and attitudes (Ilic,
2009). As barriers to EBP represent a complex interaction of practical, organizational,
economic, and cultural factors (Fairhurst & Huby, 1998; Newman, Papadopoulos, &
Sigsworth, 1998; Salbach, Jaglal, Korner-Bitensky, Rappolt, & Davis, 2007), it is likely that
EBP competence will require a substantive shift in the habits, values, and priorities of the
practitioner and others within the context of care (Hoffman, Ireland, Hall-Mills, & Flynn,
2013; McCluskey & Lovarini, 2005). Additional quantitative, qualitative, and mixed-methods
studies are needed to document benefit as well as to determine the most effective way to
establish a sustainable EBP routine.

References:
https://www.asha.org/policy/tr2004-00001/
https://www.asha.org/research/ebp/evidence-based-practice-process/
Orlikoff, R. F., Schiavetti, N., & Metz, D. E. (2015). Evaluating research in communication
disorders (7th ed.). Pearson Education, Inc.

Article links
https://journals.lww.com/ear-hearing/Fulltext/2022/03000/American_Cochlear_Implant_Alliance_Task_Force.3.aspx
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0183349
https://ojs.lib.uwo.ca/index.php/eei/article/view/7727
https://journals.sagepub.com/doi/10.1177/0142723705050340
https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD006937.pub2/full
