Dire Dawa University

College of Medicine and Health Sciences


School of Post Graduate Studies
Department of Public Health
Health Education/Health Promotion Assignment on Systematic Review Methods

Submitted To: Bereket T. (MPH, MA)


Submitted By: Natnael Getachew (MD)
January 14, 2024
Acknowledgement

I would like to thank Bereket T. (MPH, MA), Health Education/Health Promotion lecturer at
Dire Dawa University, who worked tirelessly to enable us to acquire knowledge and skills during
the course delivery and gave me the opportunity to understand the development and
application of systematic review methods. In addition, I would like to thank Dire Dawa
University, School of Post-Graduate Studies, and Department of Public Health for arranging the
schedule needed for the teaching and learning process.
1. Introduction

Systematic review methodology is a rigorous and unbiased approach to synthesize all the
available evidence on a specific research question or topic. It is a well-defined and transparent
process that is used to identify, appraise, and synthesize relevant studies in a clear and
systematic manner. Systematic reviews are considered the best form of evidence synthesis as
they provide the most comprehensive and unbiased summary of the existing evidence. This
paper aims to provide an in-depth understanding of systematic review methodologies, including
their basics, definitions, objectives, and methods.

2. Objectives

The primary objective of this paper is to provide a comprehensive overview of systematic review
methodologies. This includes understanding the basics of systematic reviews, their definitions,
objectives, and the methods used to conduct them. Additionally, this paper aims to explore the
importance and impact of systematic review methodologies in healthcare research and decision-
making processes. To achieve this objective, the specific objectives of this paper are:
• To understand the fundamentals of systematic review methodologies, including their definition, purpose, and key characteristics.
• To explore the different types of systematic reviews, such as narrative, meta-analysis, and network meta-analysis.
• To describe the process of conducting a systematic review, including defining the research question, searching for and selecting relevant studies, and data analysis.
• To examine the critical appraisal tools and techniques used to evaluate the quality of studies included in a systematic review.
• To discuss the significance of systematic review methodologies in healthcare research, policy-making, and clinical practice.
3. Methods

To collect information for this paper, a systematic search of databases such as PubMed,
Cochrane Library, and Embase was conducted using relevant keywords such as 'systematic
review,' 'methodology,' 'evidence synthesis,' 'research synthesis,' and 'meta-analysis.'
Additionally, textbooks, articles, and guidelines on systematic reviews were reviewed. Relevant
publications and articles were also retrieved from reference lists and other related sources. The
collected information was then organized and synthesized to provide a comprehensive
understanding of the various aspects of systematic review methodologies.
4. Subject Details

I. Basics of Systematic Review Methodologies


a. Definition of Systematic Review
Chalmers and Altman (Egger, 2009) defined a systematic review as a review that attempts to
collate all empirical evidence that fits pre-specified eligibility criteria in order to answer a specific
research question. It uses explicit, systematic methods that are selected with a view to
minimizing bias, thus providing more reliable findings from which conclusions can be drawn and
decisions made.
A systematic review is a comprehensive summary of all available evidence that meets predefined
eligibility criteria to address a specific clinical question or range of questions. It is based upon a
rigorous process that incorporates:
o Systematic identification of studies that have evaluated the specific research
question(s)
o Critical appraisal of the studies
o Meta-analyses (not always performed)
o Presentation of key findings
o Explicit discussion of the limitations of the evidence and the review
A systematic review aims to comprehensively locate and synthesize research that bears on a
particular question, using organized, transparent, and replicable procedures at each step in the
process. Good systematic reviews take ample precautions to minimize error and bias. This is
particularly important in research synthesis, because biases can arise in the original studies as
well as in publication, dissemination, and review processes, and these biases can be cumulative.
Bias consistently exaggerates or underestimates effects, and it can lead to wrong conclusions.
Like any good study, a systematic review follows a protocol (a detailed plan) that specifies its
central objectives, concepts, and methods in advance.(Littell et al., 2008)
Systematic reviews contrast with traditional "narrative" reviews and textbook chapters. Such
reviews generally do not exhaustively review the literature, lack transparency in the selection
and interpretation of supporting evidence, generally do not provide a quantitative synthesis of
the data, and are more likely to be biased.(Littell et al., 2008)
A systematic review may, or may not, include a meta-analysis: a statistical analysis of the results
from independent studies, which generally aims to produce a single estimate of a treatment
effect. The distinction between systematic review and meta-analysis is important because it is
always appropriate and desirable to systematically review a body of data, but it may sometimes
be inappropriate, or even misleading, to statistically pool results from separate studies.
Many systematic reviews contain meta-analyses. Meta-analysis is the use of statistical methods
to summarize the results of independent studies. It is used to analyze central trends and
variations in results across studies, and to correct for error and bias in a body of research.
Results of the original studies usually are converted to one or more common metrics, called
effect sizes, which are then combined across studies. This allows us to synthesize results from
studies that use different measures of the same construct or report results in different
ways.(Littell et al., 2008) By combining information from all relevant studies, meta-analyses can
provide more precise estimates of the effects of health care than those derived from the
individual studies included within a review. They also facilitate investigations of the consistency
of evidence across studies, and the exploration of differences across studies.(Higgins & Green,
n.d.)
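To make the idea of pooling concrete, the short Python sketch below computes a fixed-effect, inverse-variance pooled estimate; the three study results are hypothetical values invented purely for illustration, not data from any actual review.

# Minimal fixed-effect (inverse-variance) meta-analysis sketch.
# The study effect sizes and standard errors below are hypothetical.
import math

studies = [(0.30, 0.12), (0.45, 0.20), (0.25, 0.15)]  # (effect size, standard error), e.g. log odds ratios

weights = [1 / se**2 for _, se in studies]            # inverse-variance weights
pooled = sum(w * y for (y, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))               # standard error of the pooled estimate

ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.3f} (95% CI {ci_low:.3f} to {ci_high:.3f})")

In practice, reviewers would normally use established meta-analysis software (for example, RevMan, Stata, or R packages such as metafor) rather than hand-written code, and would consider a random-effects model when heterogeneity is present.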
b. History of systematic reviews
The exact origin of the first systematic review is unknown, but the movement is usually attributed to one
man, Professor Archie Cochrane (a Scottish epidemiologist), whose seminal text “Effectiveness
and Efficiency”, published in 1972, drew attention to the lack of reliable evidence on which to
base health care decisions. Later, when he wrote further texts urging health practitioners to
organize knowledge into a usable and reliable format and to practice evidence-based medicine,
others took up this challenge. In the late 1970s and 1980s, a group of health service researchers
in Oxford began a program of systematic reviews on the effectiveness of health care
interventions. This was followed in 1992 by the establishment of The Cochrane Collaboration, an
international, independent and nonprofit organization committed to the principles of managing
healthcare knowledge by publishing and updating high-quality systematic reviews.(MacKenzie et al., 2012)
c. Purpose and Key Characteristics
Research can be understood as systematic investigation to develop theories, establish evidence
and solve problems. We can either undertake new research or we can learn from what others
have already studied. How, then, do we go about finding out what has already been studied,
how it has been studied, and what this research has found out? A common method is to
undertake a review of the research literature or to consult already completed literature reviews.
For policy-makers, practitioners and people in general making personal decisions, engaging with
mountains of individual research reports, even if they could find them, would be an impossible
task. Instead, they rely on researchers to keep abreast of the growing literature, reviewing it and
making it available in a more digestible form.(Gough et al., 2012)
To keep up with research in this field, readers must locate relevant studies, assess their
credibility, and integrate credible results with findings from previous studies. This has become
increasingly difficult as research findings and other information have accumulated rapidly. The
synthesis of empirical evidence is further complicated by the fact that credible studies may use
different research designs, include different types of participants, employ different measures,
and produce inconsistent results. Systematic reviews carefully document and appraise study
qualities, while meta-analyses provide quantitative summaries of evidence, showing the central
trends, variations, and possible reasons for differences in results across studies.(Littell et al.,
2008)
Even where a study is well conceived, executed and reported, it may by chance have found and
reported atypical findings and so should not be relied upon alone. For all these reasons, it is
wiser to make decisions on the basis of all the relevant – and reliable – research that has been
undertaken rather than an individual study or limited groups of studies. If there are variations in
the quality or relevance in this previous research, then the review can take this into account
when examining its results and drawing conclusions. If there are variations in research
participants, settings or conceptualizations of the phenomena under investigation, these also
can be taken into account and may add strength to the findings.
While primary research is essential for producing much crucial original data and insights, its
findings may receive little attention when research publications are read by only a few. Reviews
can inform us about what is known, how it is known, how this varies across studies, and thus also
what is not known from previous research. It can therefore provide a basis for planning and
interpreting new primary research. It may not be a sensible use of resources and in some cases it
may be unethical to undertake research without being properly informed about previous
research; indeed, without a review of previous research the need for new primary research is
unknown. When a need for new primary research has been established, having a comprehensive
picture of what is already known can help us to understand its meaning and how it might be
used. In the past, individuals may have been able to keep abreast of all the studies on a topic but
this is increasingly difficult and expert knowledge of research may produce hidden biases. We
therefore need reviews because(Gough et al., 2012) :
• Any individual research study may be fallible, either by chance, or because of how it was designed, conducted or reported.
• Any individual study may have limited relevance because of its scope and context.
• A review provides a more comprehensive and stronger picture based on many studies and settings rather than a single study.
• The task of keeping abreast of all previous and new research is usually too large for an individual.
• Findings from a review provide a context for interpreting the results of a new primary study.
• Undertaking new primary studies without being informed about previous research may result in unnecessary, inappropriate, irrelevant, or unethical research.
Systematic reviews aim to identify, evaluate and summarize the findings of all relevant individual
studies, thereby making the available evidence more accessible to decision makers. When
appropriate, combining the results of several studies gives a more reliable and precise estimate
of an intervention’s effectiveness than one study alone.(Centre for Reviews and Dissemination,
2009)

d. Differences between Systematic Reviews and Other Forms of Reviews

At the heart of the distinction lies the concept of systematicness. Unlike traditional reviews,
which may rely on the author's expertise and subjective criteria, SRs follow a predefined,
transparent protocol (Higgins & Green, n.d.). This meticulous approach encompasses every step
from formulating the research question to synthesizing findings, minimizing potential bias and
maximizing reproducibility (Jackson et al., 2012).
The first divergences appear in the search for relevant studies. While traditional reviews might
search a specific set of databases or rely on readily available sources, SRs employ comprehensive
and systematic search strategies across multiple databases and platforms, ensuring no stone is
left unturned (Centre for Reviews and Dissemination, 2009). This thoroughness significantly
increases the likelihood of capturing all relevant evidence, reducing the risk of overlooking
crucial data points.
Moving beyond simply identifying studies, SRs engage in a critical appraisal of their
methodological quality and potential biases (Joanna Briggs Institute, 2020). This rigorous
assessment employs validated tools and criteria, ensuring only studies of sufficient rigor
contribute to the synthesis of knowledge. Traditional reviews, which often lack such systematic
appraisal, risk incorporating flawed studies, ultimately compromising the reliability of their
conclusions.
Perhaps the most defining characteristic of SRs lies in their ability to synthesize findings from
multiple studies. Through techniques like meta-analysis, SRs statistically combine quantitative
data from across relevant studies, providing a more robust and reliable estimate of the true
effect size(Higgins & Green, n.d.). Traditional reviews, often limited to qualitative summaries,
lack this quantitative synthesis, making it difficult to discern the overall weight of evidence.
Finally, SRs are characterized by their transparency and explicit discussion of limitations. The
detailed protocol, comprehensive search strategy, and critical appraisal process are all
documented and reported, allowing readers to assess the review's rigor and trustworthiness.
Traditional reviews, with their less systematic approach, often lack such transparency, making it
difficult to evaluate the robustness of their conclusions.
In conclusion, while both systematic and traditional reviews serve as valuable tools for navigating
the sea of scientific literature, the distinctive features of SRs elevate them to a new level of rigor
and reliability. Their systematic approach, critical appraisal, quantitative synthesis, and
transparency equip them to provide more robust and trustworthy evidence, ultimately guiding
us towards clearer shores of knowledge in an ever-growing ocean of research.

II. Types of Systematic Reviews


While "systematic review" may sound like a singular entity, a rich tapestry of approaches exists
within this methodology. Each variant caters to specific needs and offers unique insights into the
available evidence. Here's a glimpse into some of the distinct types of systematic reviews:

a. Narrative Reviews
Often mistaken for traditional reviews, narrative reviews also adhere to systematic search and
selection strategies. However, their focus lies in comprehensively summarizing and critically
appraising the identified studies, weaving a descriptive narrative of the field rather than
conducting quantitative analysis. This approach proves valuable for broad topics with limited
quantitative data or for highlighting key themes and gaps in knowledge.
b. Meta-analysis
By statistically pooling quantitative data from relevant studies, this approach generates a more
precise estimate of an effect size and increases the overall strength of the evidence. Meta-
analysis often forms an integral part of systematic reviews, but it can also be conducted
independently for existing bodies of literature (Higgins & Green, n.d.).

c. Network Meta-analysis
When comparing multiple interventions, direct head-to-head trials are not always available.
NMA, a specialized meta-analysis technique that utilizes indirect comparisons through common
comparators. By analyzing a network of studies where some interventions share comparisons,
NMA allows for broader evaluations and informs decision-making when direct evidence is scarce
(Egger, 2009)
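As a purely illustrative sketch (all numbers are invented), the core indirect-comparison logic behind NMA can be written in a few lines: if treatments A and C have each been compared against a common comparator B, an indirect estimate of A versus C is obtained by subtracting the two direct estimates, and its variance is the sum of their variances.

# Illustrative Bucher-style indirect comparison through a common comparator B.
# d_AB and d_CB are hypothetical direct effect estimates (e.g. log odds ratios).
import math

d_AB, se_AB = -0.50, 0.15   # A vs B (hypothetical)
d_CB, se_CB = -0.20, 0.18   # C vs B (hypothetical)

d_AC = d_AB - d_CB                        # indirect estimate of A vs C
se_AC = math.sqrt(se_AB**2 + se_CB**2)    # variances add for the indirect contrast

print(f"Indirect A vs C: {d_AC:.2f} (SE {se_AC:.2f})")

A full network meta-analysis combines many such direct and indirect contrasts simultaneously, usually within a frequentist or Bayesian model, but the simple subtraction above captures the underlying principle.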

d. Scoping Reviews
These reviews map the key concepts, existing research, and potential knowledge gaps within a
broad topic, providing a valuable first step for informing future research priorities and identifying
areas ripe for comprehensive systematic reviews.

e. Mixed-Methods Systematic Reviews


Quantitative data provides hard numbers, but qualitative data paints the picture. When both are
crucial to understanding a complex topic, mixed-methods systematic reviews bridge the gap.
These reviews synthesize both quantitative and qualitative findings, offering a broader, more
nuanced perspective than either approach alone (Pluye & Hong, 2014).

f. Realist Reviews
Beyond simply documenting "what works," realist reviews delve deeper, seeking to understand
"why and in what contexts" interventions work. By drawing on realist theory and qualitative
data, these reviews explore the mechanisms and conditions under which interventions achieve
their effects, providing valuable insights for tailoring interventions and maximizing their impact
(Pawson et al., 2005).

III. Conducting a Systematic Review


A systematic review generally requires considerably more effort than a traditional review. The
process is similar to primary scientific research and involves the careful and systematic
collection, measurement and synthesis of data (the ‘data’ in this instance being research papers).
The term ‘systematic review’ is used to indicate this careful review process and is preferred to
‘meta-analysis’ which is usually used synonymously but which has a more specific meaning
relating to the combining and quantitative summarizing of results from a number of
studies(Glasziou et al., n.d.).
Systematic review involves a number of discrete steps:
• Question formulation;
• Finding studies;
• Appraisal and selection of studies;
• Summary and synthesis of relevant studies; and
• Determining the applicability of results.
Before starting the review, it is advisable to develop a protocol outlining the question to be
answered and the proposed methods. This is required for all systematic reviews carried out by
Cochrane reviewers.
a. Formulating a Research Question

Research questions (often referred to as "key questions") are analogous to the research
hypotheses of primary research studies. They should be focused and defined clearly since they
determine the scope of research the systematic review will address.(Glasziou et al., n.d.)
Clinical problems and health policies may involve many divergent questions which need to be
informed by the best available evidence. It is useful to have a classification of the divergent types
of health care questions that we may ask:
• Phenomena: ‘What phenomena have been observed in a particular clinical problem, e.g. what problems do patients complain of after a particular procedure?’
• Frequency or rate of a condition or disease: ‘How common is a particular condition or disease in a specified group?’
• Diagnostic accuracy: ‘How accurate is a sign, symptom or diagnostic test in predicting the true diagnostic category of a patient?’
• Etiology and risk factors: ‘Are there known factors that increase the risk of the disease?’
• Prediction and prognosis: ‘Can the risk for a patient be predicted?’
• Interventions: ‘What are the effects of an intervention?’
Answering each type of question requires divergent study designs, and consequently divergent
methods of systematic review. A thorough understanding of the appropriate study types for
each question is therefore vital and will greatly assist the processes of writing, appraising and
synthesizing studies from the literature(Glasziou et al., n.d.).
Broad questions that cover a range of topics may not be directly answerable and are not
appropriate for systematic reviews or meta-analyses. As an example, the question "What is the
best treatment for chronic hepatitis B?" would need to be broken down into several smaller
well-focused questions that could be addressed in individual and complementary systematic
reviews. Examples of appropriate key questions may include, "How does entecavir compare with
placebo for achieving hepatitis B e antigen (HBeAg) seroconversion in patients with chronic
HBeAg-positive hepatitis B?" and "What is the relationship between hepatitis B genotypes and
response rates to entecavir?" These and other related questions would be addressed individually
and then, ideally, considered together to answer the more general question.
Key questions for studies of the effectiveness of interventions are commonly formulated
according to the "PICO" method, which fully defines the Population, Intervention, Comparator,
and Outcomes of interest (Glasziou et al., n.d.). The acronym "PICOD" is sometimes used to
indicate that investigators must also specify which study designs are appropriate to include (eg,
all comparative studies versus only randomized trials). Other eligibility criteria may include the
timing or setting of care. Variations of these criteria should be used for systematic reviews of
other study designs, such as of cohort studies (without a comparator), studies of exposures, or
studies of diagnostic tests.
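For illustration only (this is not part of any formal tool), the PICO(D) elements of the entecavir example above could be recorded as a simple structured object, making the question, and hence the eventual eligibility criteria, explicit before the search begins.

# Illustrative only: recording a PICO(D) question as a structured object.
from dataclasses import dataclass, field

@dataclass
class PICOQuestion:
    population: str
    intervention: str
    comparator: str
    outcomes: list[str]
    designs: list[str] = field(default_factory=list)  # the optional "D" in PICOD

question = PICOQuestion(
    population="Adults with chronic HBeAg-positive hepatitis B",
    intervention="Entecavir",
    comparator="Placebo",
    outcomes=["HBeAg seroconversion"],
    designs=["Randomized controlled trials"],
)
print(question)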
b. Search Strategy
Finding all relevant studies that have addressed a single question is not easy. There are currently
over 22 000 journals in the biomedical literature. MEDLINE indexes only 3700 of these, and even
the MEDLINE journals represent a stack of over 200 metres of journals per year. Beyond sifting
through this mass of published literature, there are problems of duplicate publications and
accessing the ‘grey literature’, such as conference proceedings, reports, theses and unpublished
studies. A systematic approach to this literature is essential in order to identify all of the best
evidence available that addresses the question. As a first step, it is helpful to find out if a
systematic review has already been done or is under way. If not, published original articles need
to be found(Glasziou et al., n.d.).
The literature search should be systematic and comprehensive to minimize error and bias
(Glasziou et al., n.d.). Most systematic reviews start with a search of an electronic database of
the literature. PubMed is almost universally used; other commonly searched databases include
Embase and the Cochrane Central Register of Controlled Trials (CENTRAL). Inclusion of additional
databases should be considered for specialized topics such as complementary or alternative
medicine, quality of care, or nursing. Electronic searches should be supplemented by searches of
the bibliographies of retrieved articles and relevant review articles and by studies known to
domain experts.
The research community has also recognized a need to incorporate the "grey literature" to
diminish the risks of publication bias (selective publication of studies, possibly based on their
results) and reporting bias (selective reporting of study results, possibly based on statistical
significance). There is no standard definition of grey literature, but it generally refers to
information obtained from sources other than published, peer-reviewed articles, such as
conference proceedings, clinical trial registries, adverse events databases, government agency
databases and documents, unpublished industry data, dissertations, and online sites. Methods to
incorporate other types of relevant information, particularly "real-world data" obtained from
analyzing databases of patients undergoing routine care, are still being developed(Littell et al.,
2008).
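To illustrate how part of an electronic search might be scripted, the sketch below uses Biopython's Entrez interface to run a Boolean query against PubMed; the e-mail address, query string, and record limit are placeholders, and a real review would document the complete strategy for every database searched (PubMed, Embase, CENTRAL, and any grey-literature sources).

# Sketch of an automated PubMed search using Biopython's Entrez module.
# The contact e-mail and query are placeholders for illustration.
from Bio import Entrez

Entrez.email = "reviewer@example.org"   # NCBI requires a contact address

query = (
    '("hepatitis B, chronic"[MeSH Terms] OR "chronic hepatitis B"[Title/Abstract]) '
    'AND entecavir[Title/Abstract] '
    'AND randomized controlled trial[Publication Type]'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=100)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records found; first IDs: {record['IdList'][:5]}")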
c. Study Selection and Inclusion Criteria
The next step is to formulate specific eligibility criteria to determine what kinds of studies should
be included or excluded in the review. Again, it is important to develop clear criteria at the
outset to guide the study selection process and other critical decisions that will be made in the
review and meta-analysis. Study eligibility criteria specify the study designs, populations,
interventions, comparisons, and outcome measures to be included and excluded. These criteria
should be derived from the overall conceptual model described above. Ideally, this will be done
in consultation with users. The a priori specification of selection criteria limits reviewers’
freedom to select studies on the basis of their results or on some other basis, protecting the
review from unexamined selection bias. If specific selection criteria are not set up at the
beginning, inclusion decisions may be based on ideological views, personal preferences,
convenience, or other factors. In any case, the reader will be left to guess how and why some
studies were included and others were not. Clear eligibility criteria allow savvy readers to
determine whether relevant studies were omitted and/or irrelevant studies were included. Explicit
inclusion and exclusion criteria also provide clear boundaries so that the review can be replicated
or extended by others(Littell et al., 2008).
To delineate the domains of inclusion criteria, we begin with the PICO framework widely used for
this purpose in the Cochrane Collaboration. PICO stands for populations, interventions,
comparisons, and outcomes—four topics that should be addressed in detail in developing study
eligibility criteria. This framework has been adapted by the Campbell Collaboration and others.
To create eligibility criteria, we specify the characteristics we are looking for in study
populations, interventions, comparisons, and outcomes. Having stated the criteria and reasons
for inclusion, we may want to add exclusion criteria to identify important characteristics that
would lead us to rule out a study(Littell et al., 2008).
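A minimal sketch, assuming entirely hypothetical study records and criteria, of how pre-specified inclusion and exclusion criteria can be applied and logged so that every selection decision, and its reason, remains transparent and replicable:

# Illustrative screening step: apply pre-specified criteria and record the
# reason for every exclusion. Study records and criteria are hypothetical.
candidates = [
    {"id": "S1", "design": "RCT",    "population": "adults",   "reports_outcome": True},
    {"id": "S2", "design": "cohort", "population": "adults",   "reports_outcome": True},
    {"id": "S3", "design": "RCT",    "population": "children", "reports_outcome": False},
]

included, excluded = [], []
for study in candidates:
    if study["design"] != "RCT":
        excluded.append((study["id"], "non-randomized design"))
    elif study["population"] != "adults":
        excluded.append((study["id"], "population outside scope"))
    elif not study["reports_outcome"]:
        excluded.append((study["id"], "outcome of interest not reported"))
    else:
        included.append(study["id"])

print("Included:", included)
print("Excluded with reasons:", excluded)

In a real review, screening is normally done independently by two reviewers, with disagreements resolved by discussion or a third reviewer; the point of the sketch is only that the criteria are applied in a fixed, documented order.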

d. Quality Assessment
It's the rigorous process by which reviewers scrutinize the identified studies, separating the
wheat from the chaff to ensure only the most robust and reliable evidence forms the foundation
of the final synthesis. Understanding this critical stage is essential for comprehending the
trustworthiness of any systematic review.
Flawed methodologies, biases, and publication pressures can all cast a shadow of doubt on the
findings of individual studies. Quality assessment shines a light on these potential shortcomings,
enabling reviewers to:
• Identify credible studies: By applying established criteria like the Cochrane Collaboration's Risk of Bias tool (Higgins & Green, n.d.), reviewers assess the study design, data collection, and analysis methods, weeding out studies with inherent flaws that could compromise their results.
• Determine the strength of evidence: Different levels of methodological rigor translate to varying degrees of confidence in the findings. Quality assessment helps differentiate between robust, high-quality studies and those with significant limitations.
• Reduce bias: Biases, both conscious and unconscious, can skew research findings. Quality assessment tools employ strategies to identify and account for potential biases, minimizing their impact on the overall synthesis of evidence.
To conduct a thorough quality assessment, reviewers rely on a range of validated tools and
frameworks, tailored to specific study types and research questions. Some popular examples
include:
• The Cochrane Collaboration's Risk of Bias tool: This widely used tool assesses bias in randomized controlled trials across domains like random sequence generation, blinding, and selective reporting (Higgins & Green, n.d.).
• The Joanna Briggs Institute Critical Appraisal Tools: Offering tools for various study designs, such as qualitative research and case-control studies, JBI tools focus on assessing the methodological rigor and relevance of included studies (Joanna Briggs Institute, 2020).
• Mixed-Methods Appraisal Tool (MMAT): This tool guides the quality assessment of studies employing mixed methods research, ensuring both quantitative and qualitative components are evaluated rigorously (Pluye & Hong, 2014).
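As an illustration of how such judgments might be tabulated (the domains loosely follow the Cochrane tool, but the studies, judgments, and the simple "overall equals worst domain" convention are assumptions made only for this sketch):

# Illustrative tabulation of risk-of-bias judgments per domain.
# The "overall = worst domain" rule is a simplifying convention for this
# sketch, not the official Cochrane algorithm.
RANK = {"low": 0, "some concerns": 1, "high": 2}

assessments = {
    "Trial A (hypothetical)": {"randomization": "low", "blinding": "some concerns", "selective reporting": "low"},
    "Trial B (hypothetical)": {"randomization": "high", "blinding": "high", "selective reporting": "some concerns"},
}

for study, domains in assessments.items():
    overall = max(domains.values(), key=lambda judgment: RANK[judgment])
    print(f"{study}: overall risk of bias = {overall}  {domains}")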
While tools provide a structured framework, quality assessment isn't a purely mechanical
exercise. Reviewers also exercise critical judgment, considering the specific research question
and context of each study. This qualitative analysis can uncover nuances and limitations that
standardized tools might miss, further ensuring the overall trustworthiness of the review.
A rigorous quality assessment process forms the bedrock of trustworthy and impactful
systematic reviews. By ensuring only the most robust evidence contributes to the synthesis,
reviews provide reliable guidance for decision-making in healthcare, policy, and beyond.
Ultimately, this critical stage safeguards the integrity of scientific inquiry and fuels the
advancement of knowledge across diverse fields.

e. Synthesis of Findings
The final act of a systematic review is synthesizing findings, where meticulously extracted data
and analysis results coalesce into a coherent picture. This stage answers the research question
and offers a nuanced understanding of the available evidence, employing diverse approaches:
• Narrative Synthesis: When quantitative data is limited or nuanced understanding is key, this qualitative approach delves into themes, patterns, and contradictions across studies, enriching the picture with textured insights (Popay et al., 2006).
• Meta-analysis: For quantifiable questions, this technique statistically combines data to estimate a robust overall effect size, offering a precise and reliable impact assessment (Higgins & Green, n.d.).
• Mixed-Methods Synthesis: Where both quantitative and qualitative data hold value, this approach seamlessly integrates findings, fostering a multifaceted understanding of the research question (Pluye & Hong, 2014).
Robust synthesis hinges on key considerations:
• Addressing Heterogeneity: Acknowledging and accounting for differences in study design and context through subgroup and sensitivity analyses ensures valid and generalizable findings (Chalmers & Altman, 1995); a short worked sketch of common heterogeneity statistics follows this list.
• Grading Certainty of Evidence: Systems like GRADE assess the quality and limitations of evidence, providing transparent guidance on the level of confidence we can place in the findings.
• Explaining Discrepancies: Inconsistencies shouldn't be ignored; delving into potential explanations like methodological factors or context-specific influences enriches the overall understanding.
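The heterogeneity check mentioned above can be illustrated with the standard Cochran's Q and I-squared statistics; the sketch below reuses the hypothetical study data from the earlier pooling example and the usual formulas Q = Σ w·(y − pooled)² and I² = max(0, (Q − df)/Q) × 100.

# Illustrative heterogeneity statistics for a fixed-effect pooled estimate.
# Study effect sizes and standard errors are hypothetical.
import math

studies = [(0.30, 0.12), (0.45, 0.20), (0.25, 0.15)]   # (effect, standard error)
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * y for (y, _), w in zip(studies, weights)) / sum(weights)

Q = sum(w * (y - pooled)**2 for (y, _), w in zip(studies, weights))  # Cochran's Q
df = len(studies) - 1
I2 = max(0.0, (Q - df) / Q * 100) if Q > 0 else 0.0   # % of variability beyond chance

print(f"Q = {Q:.2f} on {df} df, I^2 = {I2:.0f}%")

Large I² values would typically prompt subgroup or sensitivity analyses, or a random-effects model, rather than a single fixed-effect summary.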
The impact of a robust synthesis is profound:
• Clear and Cohesive Answers: The research question finds a succinct and understandable answer, guiding interpretation and application of the findings.
• Knowledge Gap Identification: Inconsistencies become apparent, informing future research priorities and directions.
• Evidence-Based Practice: Robustly synthesized evidence empowers informed decision-making by policymakers, practitioners, and individuals.
The synthesis stage isn't just a summary; it's the culmination of a rigorous journey, transforming
isolated studies into a unified tapestry of knowledge. By appreciating this critical stage, you gain
the power to discern the true strength and nuances of evidence, ultimately navigating the
research landscape with confidence and insight.

f. Reporting the Results


The culmination of a systematic review isn't just uncovering valuable insights; it's about sharing
them with the world in a clear, transparent, and informative manner. This is where the reporting
of results takes center stage, ensuring the knowledge gleaned from the rigorous review process
reaches its intended audience.
The Pillars of Effective Reporting:
• Transparency: Every step of the review, from search strategy to synthesis methods, must be documented transparently. Readers should be able to easily replicate the review and assess its validity (Higgins & Green, n.d.).
• Structure and Clarity: Results should be presented in a well-organized and logical manner, using concise language and appropriate headings. Figures, tables, and visual aids can be invaluable tools for effectively presenting complex data.
• Synthesis Summary: Clearly and succinctly summarize the key findings of the review, addressing the initial research question. This summary should highlight the main themes, patterns, and conclusions derived from the analysis.
• Confidence in the Evidence: Assess and report the certainty of the evidence using established grading systems like GRADE. This informs readers about the limitations and strengths of the findings, enhancing their interpretation and applicability.
• Subgroup Analysis and Heterogeneity: Address and discuss any observed heterogeneity, explaining how it might impact the generalizability of the findings. Subgroup analysis results should be presented alongside the overall synthesis, providing a nuanced understanding of the evidence.
• Addressing Unexpected Findings: Don't shy away from discussing unexpected results or discrepancies between studies. Delving into potential explanations fosters a deeper understanding of the research landscape and informs future research directions.
Effective reporting goes beyond presenting data; it contextualizes the findings within the
broader field of knowledge. This includes:
• Discussing the implications of the results for practice, policy, and future research.
• Highlighting the limitations of the review and potential areas for further investigation.
• Identifying future research priorities based on the knowledge gaps identified during the review.
By adhering to these key principles, researchers can ensure their systematic review results
illuminate the path for future knowledge advancement and guide informed decision-making
across diverse fields.
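As a final illustration for this section, reporting guidelines such as PRISMA expect the flow of records through the review to be reported stage by stage; the sketch below simply keeps that bookkeeping explicit, with all counts invented for the example.

# Illustrative bookkeeping for a PRISMA-style flow of records.
# All counts are hypothetical; a real review reports the actual numbers.
identified = {"PubMed": 412, "Embase": 388, "CENTRAL": 97}
total_identified = sum(identified.values())

duplicates_removed = 240
screened = total_identified - duplicates_removed

excluded_on_title_abstract = 590
full_text_assessed = screened - excluded_on_title_abstract

full_text_excluded = {"wrong population": 28, "no comparator": 19, "wrong outcome": 12}
included = full_text_assessed - sum(full_text_excluded.values())

print(f"Identified: {total_identified}, screened: {screened}, "
      f"assessed in full text: {full_text_assessed}, included: {included}")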

IV. Critical Appraisal in Systematic Reviews


A robust systematic review demands critical appraisal, ensuring the included studies meet
rigorous standards and the synthesized findings accurately reflect the existing evidence. This
section delves into the pivotal role of critical appraisal, exploring its diverse tools and vital
importance.
a. Importance of Critical Appraisal:
Imagine constructing a building with flawed bricks. No matter how skillfully architects weave
them together, the result is vulnerable to collapse. This analogy aptly captures the significance of
critical appraisal in systematic reviews. By scrutinizing the individual studies included in the
review, we ensure they possess the robustness and methodological rigor to serve as solid
building blocks for reliable knowledge.
Critical appraisal safeguards the validity and trustworthiness of systematic reviews in several
ways:
• Identifying unreliable evidence: It weeds out studies with inherent flaws in design, execution, or analysis, preventing their potential biases from skewing the overall synthesis.
• Strengthening confidence in findings: By filtering for high-quality studies, critical appraisal bolsters the credibility of the review's conclusions, enabling confident application of the findings in practice, policy, and research.
• Transparency and accountability: The systematic and documented process of critical appraisal enhances transparency, allowing readers to assess the review's rigor and understand the basis for its conclusions.
• Informing future research: Identifying limitations and gaps in the existing evidence through critical appraisal paves the way for future research by highlighting areas where further investigation is crucial.
b. Types of Critical Appraisal Tools:
Critical appraisal is not a one-size-fits-all endeavor. To cater to the diverse methodologies and
research questions encountered in systematic reviews, a veritable toolbox of appraisal tools
exists:
i. Quality Assessment Tools:
These tools help assess the methodological rigor of individual studies, focusing on aspects like:
• Randomization: Ensuring studies employ robust randomization techniques to minimize bias.
• Blinding: Assessing whether both participants and researchers were blinded to study group allocation, thereby reducing bias based on expectation.
• Follow-up: Evaluating the completeness of participant follow-up to minimize attrition bias.
• Data reporting: Ensuring transparent and complete reporting of study results to minimize reporting bias.
Examples of widely used quality assessment tools include:
• Cochrane Collaboration's Risk of Bias tool: Specifically designed for randomized controlled trials (Higgins & Green, n.d.).
• Joanna Briggs Institute Critical Appraisal Tools: Offering tools tailored to various study designs, such as qualitative research and case-control studies (Joanna Briggs Institute, 2020).
• Mixed-Methods Appraisal Tool (MMAT): Guiding the quality assessment of studies employing mixed methods (Pluye & Hong, 2014).
ii. Risk of Bias Assessment Tools:
These tools delve deeper, focusing on identifying and understanding potential sources of bias
that may influence study results. These biases can be unintentional (e.g., selection bias) or
intentional (e.g., publication bias). Understanding these potential biases allows reviewers to
interpret and weigh the evidence effectively.
Examples of risk of bias assessment tools include:
• Cochrane Collaboration's Risk of Bias tool: Includes specific sections for assessing different types of bias (Higgins & Green, n.d.).
• ROBINS-I (Risk Of Bias In Non-randomised Studies of Interventions): Specifically designed for assessing risk of bias in non-randomized studies of interventions (Sterne et al., 2016).
• Newcastle-Ottawa Quality Assessment Scale (NOS): Commonly used for assessing risk of bias in observational studies (Wells & Littell, n.d.).
iii. Reporting Guidelines:
Beyond individual tools, reporting guidelines offer comprehensive frameworks for conducting
and reporting systematic reviews. These guidelines ensure transparency, consistency, and
completeness in the review process, facilitating critical appraisal and enhancing the reliability of
findings.
Key reporting guidelines include:
• PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses): The gold standard for reporting systematic reviews and meta-analyses.
• MOOSE (Meta-analysis Of Observational Studies in Epidemiology): Specifically designed for reporting meta-analyses of observational studies.
• GRADE (Grading of Recommendations, Assessment, Development and Evaluation): A framework for assessing the certainty of evidence and informing recommendations.
5. Significance of Systematic Review Methodologies in Healthcare
In the healthcare system, systematic review methodologies take center stage, meticulously
harmonizing countless studies into a unified body of knowledge. Their impact transcends mere academic
rigor, echoing through vital aspects of healthcare, from evidence-based practice to clinical
guidelines and future research directions. The profound significance of systematic reviews in
shaping a sound and evidence-driven healthcare landscape is multifaceted.
a. Evidence-based Practice:
These methodologies provide clinicians with a compass of rigorous evidence, guiding
their decisions towards interventions supported by the strongest scientific foundation. By
synthesizing the best available research, systematic reviews empower clinicians to:
• Deliver optimal care: Informed by the most robust evidence, practitioners can offer their patients treatment plans with proven efficacy and minimized risks.
• Reduce uncertainty and variability: Systematic reviews provide clarity and consistency in clinical decision-making, minimizing the influence of personal biases and anecdote-driven practices.
• Improve patient outcomes: Ultimately, evidence-based practice informed by systematic reviews leads to better patient outcomes, potentially reducing morbidity, mortality, and healthcare costs.
b. Health Policies and Decision-making:
Beyond the individual clinician, systematic reviews resonate at the level of healthcare policy.
Policymakers grappling with complex healthcare decisions, from resource allocation to public
health interventions, rely on the robust synthesis of evidence provided by systematic reviews.
These reviews serve as:
• Trustworthy informants: Policymakers can confidently base their decisions on the most up-to-date and reliable evidence, ensuring ethical and effective resource allocation.
• Risk mitigators: By identifying potential pitfalls and highlighting gaps in knowledge, systematic reviews inform policy decisions that minimize unintended consequences and optimize healthcare outcomes.
• Promoters of innovation: Systematic reviews not only highlight existing evidence but also point towards areas where further research is needed, driving the development of new interventions and preventive strategies (Higgins & Green, n.d.).
c. Clinical Guidelines:
Clinical guidelines, those vital blueprints for optimal patient care, rely heavily on the foundation
laid by systematic reviews. These reviews offer the rigorous evidence base upon which
guideline developers can:
• Establish best practices: Guidelines informed by systematically synthesized evidence ensure that recommended interventions are grounded in the most reliable and effective clinical data.
• Maintain currency: Systematic reviews provide a dynamic stream of updated evidence, allowing guidelines to continuously evolve and adapt to the ever-changing scientific landscape.
• Reduce practice variation: By providing a unified consensus based on the best evidence, systematic reviews promote consistency in healthcare delivery, minimizing inter-practitioner variations in care.
d. Identifying Gaps in Research:
Systematic reviews not only illuminate what we know, but also reveal what we don't. By
highlighting inconsistencies, shortcomings, and unanswered questions within the existing
research landscape, systematic reviews pinpoint crucial areas where further investigation is
needed. This allows researchers to:
• Prioritize research agendas: By identifying the most pressing knowledge gaps, systematic reviews guide researchers towards tackling the most urgent and clinically relevant research questions (Higgins & Green, n.d.).
• Minimize research waste: Addressing identified gaps prevents redundancy and duplication of research efforts, maximizing the value and impact of research resources.
• Drive innovation: Identifying uncharted territories in existing knowledge paves the way for groundbreaking discoveries and novel interventions, propelling healthcare forward.
e. Identifying Future Research Directions:
By synthesizing the vast ocean of healthcare research, systematic reviews offer a unique bird's
eye view of the scientific landscape. This vantage point allows researchers to:
• Identify emerging trends and promising areas of investigation: Systematic reviews can reveal patterns and connections across seemingly disparate studies, highlighting new research avenues with significant potential impact.
• Develop novel research hypotheses: The comprehensive synthesis of existing knowledge provided by systematic reviews can spark new ideas and questions, paving the way for innovative research designs and methodologies.
• Direct the application of new technologies: As cutting-edge technologies emerge, systematic reviews can inform research on their potential use in healthcare, shaping the future of diagnosis and treatment.
6. Challenges and Limitations of Systematic Review Methodologies

While systematic reviews represent the gold standard for synthesizing evidence, their journey
from conception to conclusion is not without its turbulent waters. Understanding the challenges
and limitations inherent in these methodologies is crucial for interpreting their findings and
navigating the complexities of evidence-based healthcare. Let's dive into the seven key areas
where systematic reviews face potential obstacles:
a. Time and Resources:
Conducting a thorough systematic review is a monumental undertaking, requiring
significant time and resource investment. Sifting through countless databases, critically
appraising studies, and meticulously synthesizing findings can be resource-intensive,
potentially limiting the scope and depth of the review. (Higgins & Green, n.d.)
b. Publication Bias:
The research landscape is often tilted towards studies with statistically significant results,
leaving studies with null or negative findings less likely to be published. This publication bias
can skew the available evidence, leading systematic reviews to overestimate the effectiveness
of interventions or underestimate potential harms.
c. Heterogeneity of Studies:
The studies encompassed by a systematic review often vary in design, methodology, and
participant populations. This heterogeneity can pose significant challenges during
synthesis, making it difficult to draw clear and generalizable conclusions from the
combined data.
d. Quality of Included Studies:
The integrity of a systematic review hinges on the quality of the studies it includes.
Reviews incorporating poorly designed or inadequately executed studies risk amplifying
flaws and generating unreliable conclusions.
e. Inclusion and Exclusion Criteria:
Defining clear and appropriate inclusion and exclusion criteria is essential for focusing the
review on a specific research question. However, overly restrictive criteria may
inadvertently exclude valuable evidence, while overly broad criteria can introduce noise
and hinder meaningful synthesis.
f. Conflict of Interest:
Conflicts of interest, whether financial or personal, can potentially influence the conduct
and reporting of a systematic review. Ensuring transparency and addressing potential
conflicts are crucial for maintaining the scientific integrity and trustworthiness of the
review.
g. Language Bias:
The vast majority of published research is in English, potentially excluding valuable
studies conducted in other languages. This language bias can limit the generalizability of
the review findings and potentially lead to missed insights from diverse populations.
Despite these challenges, systematic reviews remain the most robust method for
synthesizing evidence. Addressing these limitations through strategies like robust search
strategies, quality assessment tools, and sensitivity analyses can mitigate their impact
and enhance the reliability of the findings. Additionally, initiatives like open access
publishing and efforts to translate key research into multiple languages can help reduce
publication and language bias.
By acknowledging and proactively addressing these challenges, researchers can ensure
that systematic reviews continue to serve as essential tools for guiding evidence-based
practice and informing healthcare policy.
Bibliography

Centre for Reviews and Dissemination. (2009). Systematic reviews: CRD's guidance for undertaking
reviews in health care. Centre for Reviews and Dissemination, University of York.

Egger, M. (Ed.). (2009). Systematic reviews in health care: Meta-analysis in context (2nd ed.).
BMJ Books.

Glasziou, P., Irwig, L., Bain, C., & Colditz, G. (n.d.). Systematic Reviews in Health Care: A Practical Guide.

Gough, D., Oliver, S., & Thomas, J. (Eds.). (2012). An introduction to systematic reviews. SAGE.

Higgins, J. P., & Green, S. (n.d.). Cochrane Handbook for Systematic Reviews of Interventions: Cochrane
Book Series.

Littell, J. H., Corcoran, J., & Pillai, V. K. (2008). Systematic reviews and meta-analysis. Oxford University
Press.

MacKenzie, H., Dewey, A., Drahota, A., Kilburn, S., Kalra, P. R., Fogg, C., & Zachariah, D. (2012).
Systematic Reviews: What They Are, Why They Are Important, and How to Get Involved. Systematic
Reviews, 4.

Pawson, R., Greenhalgh, T., Harvey, G., & Walshe, K. (2005). Realist review—A new method of
systematic review designed for complex policy interventions. Journal of Health Services Research &
Policy, 10(1_suppl), 21–34. https://doi.org/10.1258/1355819054308530

Pluye, P., & Hong, Q. N. (2014). Combining the Power of Stories and the Power of Numbers: Mixed
Methods Research and Mixed Studies Reviews. Annual Review of Public Health, 35(1), 29–45.
https://doi.org/10.1146/annurev-publhealth-032013-182440

Chalmers, I., & Altman, D. G. (Eds.). (1995). Systematic reviews. BMJ Publishing Group.

Wells, K., & Littell, J. H. (n.d.). Study Quality Assessment in Systematic Reviews of Research on
Intervention Effects.
