I would like to thank Bereket T. (MPH, MA), Health Education/Health Promotion lecturer at
Dire Dawa University, who worked tirelessly to help us acquire knowledge and skills during
the course delivery and gave me the opportunity to understand the development and
application of systematic review methods. In addition, I would like to thank Dire Dawa
University, School of Post-Graduate Studies, and the Department of Public Health for arranging
the schedule needed for the teaching and learning process.
1. Introduction
Systematic review methodology is a rigorous and unbiased approach to synthesize all the
available evidence on a specific research question or topic. It is a well-defined and transparent
process that is used to identify, appraise, and synthesize relevant studies in a clear and
systematic manner. Systematic reviews are considered the best form of evidence synthesis as
they provide the most comprehensive and unbiased summary of the existing evidence. This
paper aims to provide an in-depth understanding of systematic review methodologies, including
their basics, definitions, objectives, and methods.
2. Objectives
The primary objective of this paper is to provide a comprehensive overview of systematic review
methodologies. This includes understanding the basics of systematic reviews, their definitions,
objectives, and the methods used to conduct them. Additionally, this paper aims to explore the
importance and impact of systematic review methodologies in healthcare research and decision-
making processes. To achieve this objective, the specific objectives of this paper are:
To understand the fundamentals of systematic review methodologies, including their
definition, purpose, and key characteristics.
To explore the different types of systematic reviews, such as narrative, meta-analysis,
and network meta-analysis.
To describe the process of conducting a systematic review, including defining the
research question, searching for and selecting relevant studies, and data analysis.
To examine the critical appraisal tools and techniques used to evaluate the quality of
studies included in a systematic review.
To discuss the significance of systematic review methodologies in healthcare research, policy-
making, and clinical practice.
3. Methods
To collect information for this paper, a systematic search of databases such as PubMed,
Cochrane Library, and Embase was conducted using relevant keywords such as 'systematic
review,' 'methodology,' 'evidence synthesis,' 'research synthesis,' and 'meta-analysis.'
Additionally, textbooks, articles, and guidelines on systematic reviews were reviewed. Relevant
publications and articles were also retrieved from reference lists and other related sources. The
collected information was then organized and synthesized to provide a comprehensive
understanding of the various aspects of systematic review methodologies.
4. Subject Details
At the heart of the distinction lies the concept of systematicness. Unlike traditional reviews,
which may rely on the author's expertise and subjective criteria, SRs follow a predefined,
transparent protocol (Higgins & Green, n.d.). This meticulous approach encompasses every step
from formulating the research question to synthesizing findings, minimizing potential bias and
maximizing reproducibility (Jackson et al., 2012).
The first divergences appear in the search for relevant studies. While traditional reviews might
search a specific set of databases or rely on readily available sources, SRs employ comprehensive
and systematic search strategies across multiple databases and platforms, ensuring no stone is
left unturned (Centre for Reviews and Dissemination, 2009). This thoroughness significantly
increases the likelihood of capturing all relevant evidence, reducing the risk of overlooking
crucial data points.
Moving beyond simply identifying studies, SRs engage in a critical appraisal of their
methodological quality and potential biases (Joanna Briggs Institute, 2020). This rigorous
assessment employs validated tools and criteria, ensuring only studies of sufficient rigor
contribute to the synthesis of knowledge. Traditional reviews, which often lack such systematic
appraisal, risk incorporating flawed studies, ultimately compromising the reliability of their
conclusions.
Perhaps the most defining characteristic of SRs lies in their ability to synthesize findings from
multiple studies. Through techniques like meta-analysis, SRs statistically combine quantitative
data from across relevant studies, providing a more robust and reliable estimate of the true
effect size(Higgins & Green, n.d.). Traditional reviews, often limited to qualitative summaries,
lack this quantitative synthesis, making it difficult to discern the overall weight of evidence.
Finally, SRs are characterized by their transparency and explicit discussion of limitations. The
detailed protocol, comprehensive search strategy, and critical appraisal process are all
documented and reported, allowing readers to assess the review's rigor and trustworthiness.
Traditional reviews, with their less systematic approach, often lack such transparency, making it
difficult to evaluate the robustness of their conclusions.
In conclusion, while both systematic and traditional reviews serve as valuable tools for navigating
the sea of scientific literature, the distinctive features of SRs elevate them to a new level of rigor
and reliability. Their systematic approach, critical appraisal, quantitative synthesis, and
transparency equip them to provide more robust and trustworthy evidence, ultimately guiding
us towards clearer shores of knowledge in an ever-growing ocean of research.
a. Narrative Reviews
Often mistaken for traditional reviews, narrative reviews also adhere to systematic search and
selection strategies. However, their focus lies in comprehensively summarizing and critically
appraising the identified studies, weaving a descriptive narrative of the field rather than
conducting quantitative analysis. This approach proves valuable for broad topics with limited
quantitative data or for highlighting key themes and gaps in knowledge.
b. Meta-analysis
By statistically pooling quantitative data from relevant studies, this approach generates a more
precise estimate of an effect size and increases the overall strength of the evidence. Meta-
analysis often forms an integral part of systematic reviews, but it can also be conducted
independently for existing bodies of literature (Higgins & Green, n.d.).
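The core arithmetic of a fixed-effect meta-analysis is inverse-variance weighting: each study's effect estimate is weighted by the reciprocal of its variance, so more precise studies count for more. The sketch below illustrates this with hypothetical numbers (the effect sizes and variances are invented for demonstration, not drawn from any real trials):

```python
import math

def fixed_effect_pool(effects, variances):
    """Inverse-variance fixed-effect pooling of study effect sizes."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    # 95% confidence interval assuming an approximately normal estimate
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    return pooled, se, ci

# Three hypothetical trials reporting log odds ratios and their variances
effects = [-0.36, -0.20, -0.51]
variances = [0.04, 0.09, 0.12]
pooled, se, ci = fixed_effect_pool(effects, variances)
print(round(pooled, 3), round(se, 3))  # pooled estimate and its standard error
```

Note that the pooled standard error is smaller than that of any single study, which is precisely why meta-analysis "increases the overall strength of the evidence." Random-effects models extend this by adding a between-study variance component to each weight.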
c. Network Meta-analysis
When comparing multiple interventions, direct head-to-head trials are not always available.
Network meta-analysis (NMA) is a specialized meta-analysis technique that utilizes indirect
comparisons through common comparators. By analyzing a network of studies in which some
interventions share comparators, NMA allows for broader evaluations and informs decision-making
when direct evidence is scarce (Egger, 2009).
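The simplest building block of an indirect comparison is the Bucher method: if trials compare B against A and C against A, an estimate of C versus B is the difference of the two effects, with the variances adding. A minimal sketch, using invented effect sizes:

```python
import math

def indirect_comparison(d_ab, var_ab, d_ac, var_ac):
    """Bucher-style indirect comparison of two interventions (B and C)
    via a common comparator A.

    d_ab, d_ac: effect estimates (e.g. log odds ratios) of B vs A and C vs A.
    Returns the indirect estimate of C vs B with its standard error.
    """
    d_bc = d_ac - d_ab                   # indirect effect of C relative to B
    se_bc = math.sqrt(var_ab + var_ac)   # variances add for independent trials
    return d_bc, se_bc

# Hypothetical trials: drug B vs placebo A, and drug C vs placebo A
d_bc, se_bc = indirect_comparison(d_ab=-0.30, var_ab=0.02,
                                  d_ac=-0.55, var_ac=0.03)
print(round(d_bc, 2), round(se_bc, 3))
```

The added variance is the price of indirectness: the indirect estimate is always less precise than a head-to-head trial of the same size would be, which is why NMA results are interpreted cautiously when direct evidence exists.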
d. Scoping Reviews
These reviews map the key concepts, existing research, and potential knowledge gaps within a
broad topic, providing a valuable first step for informing future research priorities and identifying
areas ripe for comprehensive systematic reviews.
e. Realist Reviews
Beyond simply documenting "what works," realist reviews delve deeper, seeking to understand
"why and in what contexts" interventions work. By drawing on realist theory and qualitative
data, these reviews explore the mechanisms and conditions under which interventions achieve
their effects, providing valuable insights for tailoring interventions and maximizing their impact
(Pawson et al., 2005).
a. Formulating the Research Question
Research questions (often referred to as "key questions") are analogous to the research
hypotheses of primary research studies. They should be clearly focused and defined, since they
determine the scope of research the systematic review will address (Glasziou et al., n.d.).
Clinical problems and health policies may involve many divergent questions which need to be
informed by the best available evidence. It is useful to have a classification of the divergent types
of health care questions that we may ask:
Phenomena: ‘What phenomena have been observed in a particular clinical problem, e.g.
what problems do patients complain of after a particular procedure?’
Frequency or rate of a condition or disease: ‘How common is a particular condition or
disease in a specified group?’
Diagnostic accuracy: ‘How accurate is a sign, symptom or diagnostic test in predicting the
true diagnostic category of a patient?’
Etiology and risk factors: ‘Are there known factors that increase the risk of the disease?’
Prediction and prognosis: ‘Can the risk for a patient be predicted?’
Interventions: ‘What are the effects of an intervention?’
Answering each type of question requires divergent study designs, and consequently divergent
methods of systematic review. A thorough understanding of the appropriate study types for
each question is therefore vital and will greatly assist the processes of writing, appraising and
synthesizing studies from the literature (Glasziou et al., n.d.).
Broad questions that cover a range of topics may not be directly answerable and are not
appropriate for systematic reviews or meta-analyses. As an example, the question "What is the
best treatment for chronic hepatitis B?" would need to be broken down into several smaller
well-focused questions that could be addressed in individual and complementary systematic
reviews. Examples of appropriate key questions may include, "How does entecavir compare with
placebo for achieving hepatitis B e antigen (HBeAg) seroconversion in patients with chronic
HBeAg-positive hepatitis B?" and "What is the relationship between hepatitis B genotypes and
response rates to entecavir?" These and other related questions would be addressed individually
and then, ideally, considered together to answer the more general question.
Key questions for studies of the effectiveness of interventions are commonly formulated
according to the "PICO" method, which fully defines the Population, Intervention, Comparator,
and Outcomes of interest (Glasziou et al., n.d.). The acronym "PICOD" is sometimes used to
indicate that investigators must also specify which study designs are appropriate to include (e.g.,
all comparative studies versus only randomized trials). Other eligibility criteria may include the
timing or setting of care. Variations of these criteria should be used for systematic reviews of
other study designs, such as of cohort studies (without a comparator), studies of exposures, or
studies of diagnostic tests.
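Because PICO(D) elements are just named, structured slots, a question can be captured as a small record; the sketch below expresses the entecavir example from the text this way (the class and method names are illustrative, not part of any standard tool):

```python
from dataclasses import dataclass

@dataclass
class PICOQuestion:
    population: str
    intervention: str
    comparator: str
    outcome: str
    designs: tuple  # the "D" in PICOD: eligible study designs

    def as_key_question(self):
        return (f"In {self.population}, how does {self.intervention} "
                f"compare with {self.comparator} for {self.outcome}?")

# The entecavir example from the text, expressed as structured criteria
q = PICOQuestion(
    population="patients with chronic HBeAg-positive hepatitis B",
    intervention="entecavir",
    comparator="placebo",
    outcome="HBeAg seroconversion",
    designs=("randomized controlled trial",),
)
print(q.as_key_question())
```

Making the elements explicit like this is exactly what turns a broad question ("What is the best treatment?") into one that a search strategy and eligibility criteria can be built from.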
b. Search Strategy
Finding all relevant studies that have addressed a single question is not easy. There are currently
over 22 000 journals in the biomedical literature. MEDLINE indexes only 3700 of these, and even
the MEDLINE journals represent a stack of over 200 metres of journals per year. Beyond sifting
through this mass of published literature, there are problems of duplicate publications and
accessing the ‘grey literature’, such as conference proceedings, reports, theses and unpublished
studies. A systematic approach to this literature is essential in order to identify all of the best
evidence available that addresses the question. As a first step, it is helpful to find out if a
systematic review has already been done or is under way. If not, published original articles need
to be found (Glasziou et al., n.d.).
The literature search should be systematic and comprehensive to minimize error and bias
(Glasziou et al., n.d.). Most systematic reviews start with a search of an electronic database of
the literature. PubMed is almost universally used; other commonly searched databases include
Embase and the Cochrane Central Register of Controlled Trials (CENTRAL). Inclusion of additional
databases should be considered for specialized topics such as complementary or alternative
medicine, quality of care, or nursing. Electronic searches should be supplemented by searches of
the bibliographies of retrieved articles and relevant review articles and by studies known to
domain experts.
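Database search strings typically OR together synonyms for each concept and AND the concept blocks together. A minimal sketch of that construction, using hypothetical synonym lists (real strategies would also use controlled vocabulary such as MeSH terms and database-specific field tags):

```python
def build_query(concept_blocks):
    """Combine synonym lists into a boolean search string:
    synonyms are OR-ed within a concept block, and the blocks
    are AND-ed together."""
    blocks = []
    for synonyms in concept_blocks:
        # Quote multi-word phrases so they are searched as phrases
        quoted = ['"%s"' % term if " " in term else term for term in synonyms]
        blocks.append("(" + " OR ".join(quoted) + ")")
    return " AND ".join(blocks)

# Hypothetical concept blocks for a review of entecavir in chronic hepatitis B
query = build_query([
    ["entecavir"],
    ["hepatitis B", "HBV"],
    ["randomized controlled trial", "randomised controlled trial"],
])
print(query)
```

Spelling variants (randomized/randomised) are included deliberately: the comprehensiveness the text describes comes largely from anticipating such variation rather than trusting a single keyword.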
The research community has also recognized a need to incorporate the "grey literature" to
diminish the risks of publication bias (selective publication of studies, possibly based on their
results) and reporting bias (selective reporting of study results, possibly based on statistical
significance). There is no standard definition of grey literature, but it generally refers to
information obtained from sources other than published, peer-reviewed articles, such as
conference proceedings, clinical trial registries, adverse events databases, government agency
databases and documents, unpublished industry data, dissertations, and online sites. Methods to
incorporate other types of relevant information, particularly "real-world data" obtained from
analyzing databases of patients undergoing routine care, are still being developed (Littell et al.,
2008).
c. Study Selection and Inclusion Criteria
The next step is to formulate specific eligibility criteria to determine what kinds of studies should
be included or excluded in the review. Again, it is important to develop clear criteria at the
outset to guide the study selection process and other critical decisions that will be made in the
review and meta-analysis. Study eligibility criteria specify the study designs, populations,
interventions, comparisons, and outcome measures to be included and excluded. These criteria
should be derived from the overall conceptual model described above. Ideally, this will be done
in consultation with users. The a priori specification of selection criteria limits reviewers’
freedom to select studies on the basis of their results or on some other basis, protecting the
review from unexamined selection bias. If specific selection criteria are not set up at the
beginning, inclusion decisions may be based on ideological views, personal preferences,
convenience, or other factors. In any case, the reader will be left to guess how and why some
studies were included and others were not. Clear eligibility criteria allow savvy readers to
determine whether relevant studies were omitted and/or irrelevant studies included. Explicit
inclusion and exclusion criteria also provide clear boundaries so that the review can be replicated
or extended by others (Littell et al., 2008).
To delineate the domains of inclusion criteria, we begin with the PICO framework widely used for
this purpose in the Cochrane Collaboration. PICO stands for populations, interventions,
comparisons, and outcomes—four topics that should be addressed in detail in developing study
eligibility criteria. This framework has been adapted by the Campbell Collaboration and others.
To create eligibility criteria, we specify the characteristics we are looking for in study
populations, interventions, comparisons, and outcomes. Having stated the criteria and reasons
for inclusion, we may want to add exclusion criteria to identify important characteristics that
would lead us to rule out a study (Littell et al., 2008).
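Applying a priori criteria is mechanical by design: each study either meets every inclusion criterion or is excluded for a documented reason. A simplified sketch of such a screening step (the field names and criteria are invented for illustration; real screening is done in duplicate by human reviewers, often with dedicated software):

```python
def screen(study, criteria):
    """Apply a priori eligibility criteria to one candidate study.
    Returns (include, reasons_for_exclusion) so every decision is auditable."""
    reasons = []
    if study["design"] not in criteria["designs"]:
        reasons.append("ineligible study design")
    if criteria["population"] not in study["population"]:
        reasons.append("ineligible population")
    if not set(study["outcomes"]) & set(criteria["outcomes"]):
        reasons.append("no eligible outcome reported")
    return (len(reasons) == 0, reasons)

# Hypothetical criteria derived from a PICOD statement
criteria = {
    "designs": {"RCT"},
    "population": "chronic hepatitis B",
    "outcomes": {"HBeAg seroconversion"},
}
study = {"design": "cohort",
         "population": "adults with chronic hepatitis B",
         "outcomes": ["HBeAg seroconversion"]}
include, reasons = screen(study, criteria)
print(include, reasons)
```

Recording the exclusion reason, not just the verdict, is what lets readers and replicators verify that studies were not selected on the basis of their results.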
d. Quality Assessment
Quality assessment is the rigorous process by which reviewers scrutinize the identified studies,
wheat from the chaff to ensure only the most robust and reliable evidence forms the foundation
of the final synthesis. Understanding this critical stage is essential for comprehending the
trustworthiness of any systematic review.
Flawed methodologies, biases, and publication pressures can all cast a shadow of doubt on the
findings of individual studies. Quality assessment shines a light on these potential shortcomings,
enabling reviewers to:
Identify credible studies: By applying established criteria like the Cochrane
Collaboration's Risk of Bias tool (Higgins & Green, n.d.), reviewers assess the study
design, data collection, and analysis methods, weeding out studies with inherent flaws
that could compromise their results.
Determine the strength of evidence: Different levels of methodological rigor translate to
varying degrees of confidence in the findings. Quality assessment helps differentiate
between robust, high-quality studies and those with significant limitations.
Reduce bias: Biases, both conscious and unconscious, can skew research findings. Quality
assessment tools employ strategies to identify and account for potential
biases, minimizing their impact on the overall synthesis of evidence.
To conduct a thorough quality assessment, reviewers rely on a range of validated tools and
frameworks, tailored to specific study types and research questions. Some popular examples
include:
The Cochrane Collaboration's Risk of Bias tool: This widely used tool assesses bias in
randomized controlled trials across domains like random sequence
generation, blinding, and selective reporting (Higgins & Green, n.d.).
The Joanna Briggs Institute Critical Appraisal Tools: Offering tools for various study
designs, like qualitative research and case-control studies, JBI tools focus on assessing the
methodological rigor and relevance of included studies (Joanna Briggs Institute, 2020).
Mixed-Methods Appraisal Tool (MMAT): This tool guides the quality assessment of
studies employing mixed methods research, ensuring both quantitative and qualitative
components are evaluated rigorously (Pluye & Hong, 2014).
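Domain-based tools like the Cochrane risk-of-bias tool record a judgement per domain and then roll these up into an overall rating. The sketch below uses a deliberately simplified "worst domain wins" rule; the actual RoB 2 algorithm is more nuanced, and the domain labels here are abbreviations for illustration:

```python
def overall_risk_of_bias(domain_judgements):
    """Collapse per-domain judgements into an overall rating.

    Simplified rule of thumb (an assumption for illustration; the real
    RoB 2 algorithm has additional conditions): any 'high' domain makes
    the study high risk; otherwise any 'some concerns' domain dominates.
    """
    if "high" in domain_judgements.values():
        return "high"
    if "some concerns" in domain_judgements.values():
        return "some concerns"
    return "low"

judgements = {
    "randomization": "low",
    "deviations from intended interventions": "some concerns",
    "missing outcome data": "low",
    "outcome measurement": "low",
    "selective reporting": "low",
}
print(overall_risk_of_bias(judgements))
```

The structural point survives the simplification: a single weak domain can cap the confidence placed in an otherwise well-conducted study, which is why assessment is per-domain rather than a single gestalt score.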
While tools provide a structured framework, quality assessment isn't a purely mechanical
exercise. Reviewers also exercise critical judgment, considering the specific research question
and context of each study. This qualitative analysis can uncover nuances and limitations that
standardized tools might miss, further ensuring the overall trustworthiness of the review.
A rigorous quality assessment process forms the bedrock of trustworthy and impactful
systematic reviews. By ensuring only the most robust evidence contributes to the synthesis,
reviews provide reliable guidance for decision-making in healthcare, policy, and beyond.
Ultimately, this critical stage safeguards the integrity of scientific inquiry and fuels the
advancement of knowledge across diverse fields.
e. Synthesis of Findings
The final act of a systematic review is synthesizing findings, where meticulously extracted data
and analysis results coalesce into a coherent picture. This stage answers the research question
and offers a nuanced understanding of the available evidence, employing diverse approaches:
Narrative Synthesis: When quantitative data is limited or nuanced understanding is
key, this qualitative approach delves into themes, patterns, and contradictions across
studies, enriching the picture with textured insights (Popay et al., 2006).
Meta-analysis: For quantifiable questions, this technique statistically combines data to
estimate a robust overall effect size, offering precise and reliable impact assessment
(Higgins & Green, n.d.).
Mixed-Methods Synthesis: Where both quantitative and qualitative data hold value, this
approach seamlessly integrates findings, fostering a multifaceted understanding of the
research question (Pluye & Hong, 2014).
Robust synthesis hinges on key considerations:
Addressing Heterogeneity: Acknowledging and accounting for differences in study design
and context through subgroup and sensitivity analyses ensures valid and generalizable
findings (Chalmers & Altman, 1995).
Grading Certainty of Evidence: Systems like GRADE assess the quality and limitations of
evidence, providing transparent guidance on the level of confidence we can place in the
findings.
Explaining Discrepancies: Inconsistencies shouldn't be ignored; delving into potential
explanations like methodological factors or context-specific influences enriches the
overall understanding.
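Heterogeneity is usually quantified before deciding how to synthesize: Cochran's Q measures whether study effects vary more than chance would predict, and I² expresses the share of that variation attributable to real between-study differences. A minimal sketch with invented study data:

```python
def heterogeneity(effects, variances):
    """Cochran's Q and the I^2 statistic for a set of study effect sizes."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    # Q: weighted squared deviations of each study from the pooled effect
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    # I^2: proportion of Q beyond what chance (df) would explain
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Hypothetical log odds ratios and variances from four studies
effects = [-0.36, -0.20, -0.51, 0.10]
variances = [0.04, 0.09, 0.12, 0.05]
q, i2 = heterogeneity(effects, variances)
print(round(q, 2), round(i2, 1))
```

A low I² supports pooling everything in one analysis; a high I² is the signal that triggers the subgroup and sensitivity analyses the text describes, to find out where the inconsistency comes from.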
The impact of a robust synthesis is profound:
Clear and Cohesive Answers: The research question finds a succinct and understandable
answer, guiding interpretation and application of the findings.
Knowledge Gap Identification: Inconsistencies become apparent, informing future
research priorities and directions.
Evidence-Based Practice: Robustly synthesized evidence empowers informed decision-
making by policymakers, practitioners, and individuals.
The synthesis stage isn't just a summary; it's the culmination of a rigorous journey, transforming
isolated studies into a unified tapestry of knowledge. By appreciating this critical stage, you gain
the power to discern the true strength and nuances of evidence, ultimately navigating the
research landscape with confidence and insight.
While systematic reviews represent the gold standard for synthesizing evidence, their journey
from conception to conclusion is not without its turbulent waters. Understanding the challenges
and limitations inherent in these methodologies is crucial for interpreting their findings and
navigating the complexities of evidence-based healthcare. Let's dive into the seven key areas
where systematic reviews face potential obstacles:
a. Time and Resources:
Conducting a thorough systematic review is a monumental undertaking, requiring
significant time and resource investment. Sifting through countless databases, critically
appraising studies, and meticulously synthesizing findings can be resource-intensive,
potentially limiting the scope and depth of the review. (Higgins & Green, n.d.)
b. Publication Bias:
The research landscape is often tilted towards studies with statistically significant results,
making those less likely to be published. This publication bias can skew the available
evidence, leading systematic reviews to overestimate the effectiveness of interventions
or underestimate potential harms.
c. Heterogeneity of Studies:
The studies encompassed by a systematic review often vary in design, methodology, and
participant populations. This heterogeneity can pose significant challenges during
synthesis, making it difficult to draw clear and generalizable conclusions from the
combined data.
d. Quality of Included Studies:
The integrity of a systematic review hinges on the quality of the studies it includes.
Reviews incorporating poorly designed or inadequately executed studies risk amplifying
flaws and generating unreliable conclusions.
e. Inclusion and Exclusion Criteria:
Defining clear and appropriate inclusion and exclusion criteria is essential for focusing the
review on a specific research question. However, overly restrictive criteria may
inadvertently exclude valuable evidence, while overly broad criteria can introduce noise
and hinder meaningful synthesis.
f. Conflict of Interest:
Conflicts of interest, whether financial or personal, can potentially influence the conduct
and reporting of a systematic review. Ensuring transparency and addressing potential
conflicts are crucial for maintaining the scientific integrity and trustworthiness of the
review.
g. Language Bias:
The vast majority of published research is in English, potentially excluding valuable
studies conducted in other languages. This language bias can limit the generalizability of
the review findings and potentially lead to missed insights from diverse populations.
Despite these challenges, systematic reviews remain the most robust method for
synthesizing evidence. Addressing these limitations through strategies like robust search
strategies, quality assessment tools, and sensitivity analyses can mitigate their impact
and enhance the reliability of the findings. Additionally, initiatives like open access
publishing and efforts to translate key research into multiple languages can help reduce
publication and language bias.
By acknowledging and proactively addressing these challenges, researchers can ensure
that systematic reviews continue to serve as essential tools for guiding evidence-based
practice and informing healthcare policy.
Bibliography
Centre for Reviews and Dissemination, B. (2009). Systematic Reviews CRD’s guidance for undertaking
reviews in health care. York Associates.
Chalmers, I., & Altman, D. G. (Eds.). (1995). Systematic reviews. BMJ Publishing Group.
Egger, M. (Ed.). (2009). Systematic reviews in health care: Meta-analysis in context (2nd ed.).
BMJ Books.
Glasziou, P., Irwig, L., Bain, C., & Colditz, G. (n.d.). Systematic Reviews in Health Care: A Practical Guide.
Gough, D., Oliver, S., & Thomas, J. (Eds.). (2012). An introduction to systematic reviews. SAGE.
Higgins, J. P., & Green, S. (n.d.). Cochrane Handbook for Systematic Reviews of Interventions: Cochrane
Book Series.
Littell, J. H., Corcoran, J., & Pillai, V. K. (2008). Systematic reviews and meta-analysis. Oxford University
Press.
MacKenzie, H., Dewey, A., Drahota, A., Kilburn, S., Kalra, P. R., Fogg, C., & Zachariah, D. (2012).
Systematic Reviews: What They Are, Why They Are Important, and How to Get Involved. Systematic
Reviews, 4.
Pawson, R., Greenhalgh, T., Harvey, G., & Walshe, K. (2005). Realist review—A new method of
systematic review designed for complex policy interventions. Journal of Health Services Research &
Policy, 10(1_suppl), 21–34. https://doi.org/10.1258/1355819054308530
Pluye, P., & Hong, Q. N. (2014). Combining the Power of Stories and the Power of Numbers: Mixed
Methods Research and Mixed Studies Reviews. Annual Review of Public Health, 35(1), 29–45.
https://doi.org/10.1146/annurev-publhealth-032013-182440
Wells, K., & Littell, J. H. (n.d.). Study Quality Assessment in Systematic Reviews of Research on
Intervention Effects.