EBP final 2
LEVELS OF EVIDENCE
Levels of evidence for studies of treatment efficacy, ranked according to quality and
credibility from highest/most credible (Ia) to lowest/least credible (IV) (adapted from the
Scottish Intercollegiate Guideline Network, www.sign.ac.uk).

Level | Description
Ia | Well-designed meta-analysis of more than one randomized controlled trial
Ib | Well-designed randomized controlled study
IIa | Well-designed controlled study without randomization
IIb | Well-designed quasi-experimental study
III | Well-designed nonexperimental studies, i.e., correlational and case studies
IV | Expert committee report, consensus conference, clinical experience of respected authorities
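The ranking above can be treated as a simple ordered lookup when summarizing a body of retrieved evidence. A minimal Python sketch (the helper function is illustrative; the intermediate labels Ib, IIa, and IIb follow the SIGN scheme the hierarchy is adapted from):

```python
# Evidence levels ordered from highest (Ia) to lowest (IV) credibility
LEVELS = ["Ia", "Ib", "IIa", "IIb", "III", "IV"]

def strongest(levels_found):
    """Return the highest-ranked evidence level among the studies retrieved."""
    return min(levels_found, key=LEVELS.index)

print(strongest(["III", "Ib", "IV"]))  # Ib
```

A lookup like this only orders the evidence; appraising the methodological quality of each individual study is still a separate step.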
Population: Children with severe to profound hearing loss
Intervention: Cochlear implants
Comparison: Hearing aids
Outcome: Speech and language development
Clinical question: For children with severe to profound hearing loss, what is the effect of
cochlear implants compared with hearing aids on speech and language development?
Gather Evidence
The next step is to gather evidence that addresses your question. There are two types of evidence
to consider: internal evidence and external evidence.
Internal evidence refers to the data that you systematically collect directly from your clients
to ensure that they’re making progress. This data may include subjective observations of your
client as well as objective performance data compiled across time.
External evidence refers to evidence from scientific literature—particularly the results, data,
statistical analysis, and conclusions of a study.
• How should you plan your search for external evidence?
1. Develop a list of search terms.
Example: What is the population, patient, or problem of interest? What is the main
intervention or issue being considered? What outcome do you want to accomplish?
2. Set parameters for your search.
Combine keywords and phrases using terms such as "OR" and "AND" (known
as Boolean operators) to broaden or narrow your search results.
• Use "OR" to increase your search results and find evidence that contains either
term (e.g., "dysphagia OR swallowing"; "teenagers OR adolescents").
• Use "AND" to limit your search and find evidence that must contain both words
(e.g., "stroke AND aphasia"; "children AND hearing loss").
Apply limits and filters to narrow your search (e.g., date range, language). A date
limit may be helpful, particularly when a search retrieves too many results. Date
limits may also be helpful if your question involves more recent technology or
practice (e.g., "digital hearing aids", "telepractice").
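The operators and filters above can be combined programmatically; a minimal Python sketch (the function and the `year>=` filter syntax are illustrative; real databases each have their own query syntax):

```python
def build_query(synonyms, required_terms, year_from=None):
    """Compose a Boolean search string: synonyms joined with OR to broaden,
    then combined with the other required concepts using AND to narrow."""
    or_block = "(" + " OR ".join(f'"{t}"' for t in synonyms) + ")"
    parts = [or_block] + [f'"{t}"' for t in required_terms]
    query = " AND ".join(parts)
    if year_from is not None:
        # Date-limit syntax varies by database; this form is illustrative only
        query += f" AND year>={year_from}"
    return query

# Broaden with OR, narrow with AND, then apply a date limit:
print(build_query(["dysphagia", "swallowing"], ["stroke"], year_from=2015))
# ("dysphagia" OR "swallowing") AND "stroke" AND year>=2015
```

The same pattern covers the other examples in the text, e.g. `build_query(["teenagers", "adolescents"], ["hearing loss"])`.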
3. Stay organized.
Write down the key terms searched, the databases used, and the search parameters
applied. Keep track of your search results. This will help you identify the most
effective search terms, eliminate duplicate citations, and ultimately save you time.
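Keeping track of searches and eliminating duplicate citations can be sketched as a small script; a hedged Python example (the data structures and citation strings are purely illustrative):

```python
# Minimal search log: record parameters per query and dedupe citations
search_log = []          # one entry per database query
seen_citations = set()   # citations already retrieved in any earlier search

def record_search(database, terms, filters, citations):
    """Log the search parameters and return only citations not seen before."""
    search_log.append({"database": database, "terms": terms, "filters": filters})
    new = [c for c in citations if c not in seen_citations]
    seen_citations.update(new)
    return new

# Hypothetical results from two databases with one overlapping citation:
record_search("PubMed", "stroke AND aphasia", {"from": 2015}, ["Smith 2018", "Lee 2020"])
hits = record_search("CINAHL", "stroke AND aphasia", {"from": 2015}, ["Lee 2020", "Cho 2021"])
print(hits)  # ['Cho 2021'] -- 'Lee 2020' was already retrieved from PubMed
```

Reference managers do this deduplication for you; the sketch only shows why logging terms, databases, and parameters saves time.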
o Ask yourself:
▪ Will this research design help me answer my question?
▪ What are the limitations of the research evidence?
▪ Is the external evidence from a trusted source of information?
o To help determine what limitations exist, you can appraise the methodological
quality of each study using one of many available research design–specific
checklists. Depending on the checklist, you can appraise some or all of a
study's methodological features.
o Although other sources of bias exist, they are not typically assessed as part of
these checklists. Other sources of bias to consider include conflicts of
interest and publication bias.
▪ Conflict of interest refers to factors that may compromise the
investigator's objectivity in conducting or reporting their research.
Financial funding from product developers or employment with the
sponsoring organization are common examples of conflicts of interest
within research. Be sure to interpret with caution any sources that
appear to (a) sensationalize information, (b) lack editorial peer review,
or (c) have an alternative agenda.
▪ Publication bias occurs when the results of a study influence whether
or not the study is published. This may result in studies with positive
or significant findings being more likely to be published than those
with null or negative findings.
Although this complex and nuanced process may seem daunting, the D.E.C.I.D.E. framework
can help you remember and implement all four steps of the EBP process and guide you
to a clinical decision.
Define
Define your clinical question, gather external and internal evidence, and determine the
validity and trustworthiness of the results.
Extrapolate
Extrapolate clinically applicable information from the external evidence. Although some
results may directly align with your client and setting, often, you will need to determine
whether the overall results are compelling and meaningful enough to apply to your clinical
situation. Sometimes, there is simply a lack of external evidence about your clinical question.
If there’s little or no external scientific evidence, then your treatment isn’t necessarily
disqualified—it just requires careful consideration and monitoring.
Consider
Consider your clinical expertise and the expertise of others. Use your training, knowledge,
and clinical experience to collect and analyze internal evidence and to interpret and apply
external evidence when making a clinical decision.
Incorporate
Incorporate the needs and perspectives of your client, their caregiver, and/or their family into
your assessment and intervention decisions. These needs and perspectives can provide insight
into their priorities, values, and expectations. A client’s cultural or linguistic characteristics
(e.g., status as an English language learner) can also impact how you interpret the internal
evidence and how you apply the external evidence to your clinical decision.
Develop
Develop an assessment or treatment plan by bringing together the three components of EBP.
In some clinical situations, you may need to prioritize one of the EBP components (e.g.,
external scientific evidence reporting harm or a client’s preference/refusal); however, you
should consider all three components.
• Use your clinical expertise to determine how to implement the external and internal
evidence into your assessment or intervention sessions (e.g., adapting an evidence-
based treatment into an engaging and individualized activity).
• Prioritize your client's perspectives to make the sessions meaningful. Include goals
that are measurable and functional.
• Consider organizational or other barriers when developing your plan (e.g., access to
materials, department protocols, transportation, or feasibility of implementation).
Evaluate
Evaluate your clinical decision. Use a trial period, collect internal evidence, and analyze all
of the clinical information to (a) ensure that the intervention is appropriate or (b) adjust your
treatment plan as needed. EBP is a dynamic process and requires ongoing evaluation. If you
don’t see progress, if your client’s needs or circumstances have changed, or if you need to re-
prioritize the goals, you should cycle through the EBP process again to find another option
that can better serve your client.
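Evaluating internal evidence across a trial period comes down to comparing performance over time against a pre-set criterion. A minimal, hypothetical Python sketch (the scores and the gain threshold are invented for illustration):

```python
# Hypothetical session-by-session accuracy (%) on one treatment goal
sessions = [40, 45, 48, 55, 62, 70]

def shows_progress(scores, min_gain=10):
    """Compare the average of the last two sessions with the first two;
    flag progress when the gain meets a pre-set criterion."""
    baseline = sum(scores[:2]) / 2
    recent = sum(scores[-2:]) / 2
    return recent - baseline >= min_gain

print(shows_progress(sessions))  # True -> keep the plan; False -> cycle through EBP again
```

In practice the criterion, measurement schedule, and outcome measure would come from your clinical expertise and the external evidence, not from a fixed formula like this one.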
Experimental Control
In the EBP framework, evidence from studies that are controlled (i.e., that contrast an
experimental group with a control group) and that employ prospective designs (in which
patients are recruited and assigned to conditions before the study begins) is rated more highly
than evidence from retrospective studies in which previously collected data are analyzed,
because the reliability and accuracy of many measures are difficult or impossible to ensure
post hoc.
In addition, group comparison studies are rated more highly when patients are randomly
assigned to groups than when they are not, because random assignment reduces the chance
that groups might differ systematically in some unanticipated or unrecognized ways other
than the experimental factor being investigated.
Lower evidence ratings generally are assigned to quasi-experimental studies, including cohort
studies in which patients with and without a variable of interest are followed forward in time
to compare their outcomes, and case-control designs in which patients with and without an
outcome are identified and compared for their previous exposure to a variable of interest.
Evidence from quasi-experimental studies ranks lower than evidence from controlled studies
because only through random assignment can the risk of differences due to unknown biases
be minimized. Evidence from nonexperimental designs such as correlational studies, case
studies (N = 1), and case series is rated even lower due to the lack of a control group, but
even evidence from nonexperimental study designs outranks statements of belief and opinion
in EBP rating schemes.
There are at least four fundamental issues that hinder the clinician’s ability to reference
external evidence efficiently, effectively, and consistently in routine clinical practice:
1. Lack of treatment outcomes research
▪ Clinicians are unlikely to find many approaches for which there is a great
deal of high-level research evidence. Consequently, clinicians have little
recourse but to resort to trial-and-error problem solving in their
practice (O’Connor & Pettigrew, 2009; Worrall & Bennett, 2001).
▪ EBP requires only that clinicians seek out and consider current best
evidence, even when the “best” evidence may be weak (McKenna,
Cutcliffe, & McKenna, 2000).
2. Employing hierarchies of evidence
▪ The ability to evaluate research critically is vital for determining “the
strength or weakness of the scientific support for a specific
intervention or diagnostic technique” (Mullen, 2007), but many
practitioners continue to report difficulty judging the adequacy of the
statistical analysis or research design employed in clinical outcome
studies (Metcalfe, Lewin, Wisher, Perry, Bannigan, & Klaber Moffett,
2001).
▪ Meline and Paradiso (2003) speculate, therefore, that clinicians
consequently tend to “accept research reports as reliable based on
reputation rather than substantive review.”
3. Role of qualitative research
▪ Dollaghan (2007) argued that the overshadowing focus on quantitative
research evidence has marginalized the importance of the remaining
two EBP components. She consequently proposed that EBP in
communication disorders requires “three kinds of evidence” to address
treatment outcome, clinical practice, and client preferences—and uses
the abbreviation E3BP to emphasize that need.
▪ Many questions about practice, stigma, culture, resources,
comorbidities, and other issues within the context of care are not easily
informed by quantitative investigations, but rather by scientific inquiry
using qualitative and mixed-methods approaches (Kovarsky, 2008;
McColl, Smith, White, & Field, 1998; McKenna, Ashton, & Keeney,
2004a; Tetnowski & Franklin, 2003).
▪ McKenna and his colleagues (2004b) point out that high level
quantitative evidence might serve as the “gold standard” if the clinician
is interested in a cause-effect relationship, but if “interested in what it
is like to experience a diagnosis,” then a “phenomenological approach
may be the gold standard.”
▪ Within current evaluative hierarchies, qualitative approaches simply do
not attract the same standing in the evidence-based literature (Cutcliffe
& McKenna, 1999; Hewitt-Taylor, 2003; McKenna, Cutcliffe, &
McKenna, 2000; Scott & McSherry, 2008). Because these
classification systems were developed specifically to evaluate the
scientific rigor of quantitative investigations, qualitative research
studies are viewed as little more than anecdotes, placing them among
the very lowest levels of evidence.
4. Clinician time constraints
▪ The fourth issue concerns the practical need to locate relevant,
germane sources of evidence quickly and effectively.
▪ Iacono and Cameron (2009) conducted a qualitative investigation to
explore the perceptions of SLPs regarding the delivery of evidence-
based AAC services for young children and their families. Although
the investigators noted that the participants made reference to “journal
articles for information” and their approaches to assessment and
intervention appeared to reflect an implied understanding of current
best practice, they nonetheless observed that the clinicians still seemed
“to rely mostly on other more experienced colleagues, attendance at
conferences, and other forms of professional development” to guide
clinical decision making.
Mechanisms for assessing competency in EBP are being developed across disciplines,
encompassing the practitioner’s knowledge, skills, and attitudes (Ilic,
2009). As barriers to EBP represent a complex interaction of practical, organizational,
economic, and cultural factors (Fairhurst & Huby, 1998; Newman, Papadopoulos, &
Sigsworth, 1998; Salbach, Jaglal, Korner-Bitensky, Rappolt, & Davis, 2007), it is likely that
EBP competence will require a substantive shift in the habits, values, and priorities of the
practitioner and others within the context of care (Hoffman, Ireland, Hall-Mills, & Flynn,
2013; McCluskey & Lovarini, 2005). Additional quantitative, qualitative, and mixed-methods
studies are needed to document benefit as well as to determine the most effective way to
establish a sustainable EBP routine.
References:
https://www.asha.org/policy/tr2004-00001/
https://www.asha.org/research/ebp/evidence-based-practice-process/
Orlikoff, R. F., Schiavetti, N., & Metz, D. E. (2015). Evaluating research in communication
disorders (7th ed.). Pearson Education.
Article links
https://journals.lww.com/ear-hearing/Fulltext/2022/03000/American_Cochlear_Implant_Alliance_Task_Force.3.aspx
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0183349
https://ojs.lib.uwo.ca/index.php/eei/article/view/7727
https://journals.sagepub.com/doi/10.1177/0142723705050340
https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD006937.pub2/full