
Journal of Clinical Epidemiology 127 (2020) 59-68

ORIGINAL ARTICLE

Quality assessment of prevalence studies: a systematic review


Celina Borges Migliavaca (a,b,*), Cinara Stein (b), Verônica Colpani (b), Zachary Munn (c),
Maicon Falavigna (a,b), Prevalence Estimates Reviews - Systematic Review Methodology Group (PERSyst)

(a) Programa de Pós-Graduação em Epidemiologia, Universidade Federal do Rio Grande do Sul, Rua Ramiro Barcelos, 2400, CEP 90035-003, Santa Cecília, Porto Alegre, Rio Grande do Sul, Brazil
(b) Hospital Moinhos de Vento, Porto Alegre, Rua Ramiro Barcelos, 910, CEP 90035-001, Floresta, Porto Alegre, Rio Grande do Sul, Brazil
(c) Joanna Briggs Institute, Faculty of Health Sciences, University of Adelaide, Adelaide, Australia
Accepted 30 June 2020; Published online 15 July 2020

Abstract
Objectives: The objective of the study is to identify items and domains applicable for the quality assessment of prevalence studies.

Study Design and Setting: We searched databases and the gray literature to identify tools or guides about the quality assessment of prevalence studies. After study selection, we abstracted questions applicable for prevalence studies and classified them into at least one of the following domains: "population and setting", "condition measurement", "statistics", and "other". PROSPERO registration: CRD42018088437.

Results: We included 30 tools: eight (26.7%) specifically designed to appraise prevalence studies and 22 (73.3%) adaptable for this purpose. We identified 12 unique items in the domain "population and setting", 16 in the domain "condition measurement", and 14 in the domain "statistics". Of those, 25 (59.5%) were identified in the eight specific tools. Regarding the domain "other", we identified 77 unique items, mainly related to manuscript writing and reporting (n = 48, 62.3%); of those, 24 (31.2%) were identified in the eight specific tools and 53 (68.8%) in the additional 22 nonspecific tools.

Conclusion: We provide a comprehensive set of items classified by domains that can guide the appraisal of prevalence studies, conduction of primary prevalence studies, and update or development of tools to evaluate prevalence studies. © 2020 The Authors. Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

Keywords: Prevalence; Cross-sectional studies; Bias; Methodological quality; Quality assessment

Competing interests: Z.M. is director of the Transfer Science program of the Joanna Briggs Institute (JBI). The authors have no other competing interests to declare.

* Corresponding author. Tel.: +55 51 997107769; fax: +55 51 35378347. E-mail address: celinabm7@gmail.com (C.B. Migliavaca).

1. Background

Prevalence is an epidemiological measurement that represents the proportion of the population affected by a certain condition [1]. Because they reflect the importance of different diseases for society, prevalence estimates are of great importance for health-related decision-making. For instance, these estimates are used to assess the burden of different conditions, helping in the definition of priorities for interventions, guideline development, and research. They are also useful to evaluate the impact of health interventions because they show changes and trends over time in conditions of interest. For health technology assessments, prevalence data are also used in the estimation of costs, being an essential parameter in economic models [2-5]. The number of systematic reviews of prevalence indexed in Medline has increased more than ten-fold in the last decade [6].

Despite the importance of prevalence studies, the risk of bias assessment of this type of study is heterogeneous, usually inappropriate, and often neglected. A systematic review conducted in 2010 identified five tools specifically developed to appraise prevalence studies, and the authors of that review concluded that the included tools presented several limitations, especially regarding applicability and lack of consensus about which domains should be assessed [7]. In comparison, there are standard, recommended, and widely used tools for other study designs, such as RoB 2.0 for randomized clinical trials, ROBINS-I for observational studies, and QUADAS-2 for diagnostic studies [8-10].
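For readers who want the measure itself spelled out, the usual formulation is given below; this is a standard expression, and the figures are invented purely for illustration (they are not taken from the article or from reference [1]):

    point prevalence = number of existing cases at a given time / number of persons in the population at that time

    For example, 120 existing cases in a population of 1,500 people correspond to a point prevalence of 120 / 1,500 = 0.08, that is, 8.0%.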


What is new?

Key findings
- We systematically reviewed tools used to assess risk of bias of prevalence studies.
- We identified 30 tools; eight of them were specifically designed for prevalence studies.

What this adds to what was known?
- There was a great variability among items assessed in each tool.
- Not all tools assessed all domains, and there was overlap among items in some tools.

What is the implication and what should change now?
- We provide a comprehensive set of items useful to appraise prevalence studies.

In light of the above, the objective of this study is to systematically review, evaluate, and compare available tools designed to assess the risk of bias of prevalence studies in order to identify the domains and items used to evaluate this type of study, providing information that could be used in the development and update of tools, critical appraisal of this type of study, and conduction of primary studies of prevalence.

2. Methods

2.1. Study design, protocol, and registration

The present study is a systematic review. The study protocol was registered on PROSPERO, under the registration number CRD42018088437.

2.2. Search strategy and data sources

We searched Medline (via PubMed), Embase, and Web of Science up to August 2019 using terms such as "prevalence", "cross-sectional studies", and "critical appraisal". The complete search strategy is presented in Additional file 1. The search was not limited by date or language of publication.

To identify studies not indexed by these databases, we also screened the first 200 results on Google Scholar. We also manually searched the reference lists of relevant studies and searched for instruments on websites of institutions related to the topic. Moreover, we conducted a systematic search for systematic reviews of prevalence of clinical conditions published between February 2017 and February 2018 and indexed in Medline to identify further instruments used to assess the quality of individual prevalence studies [6].

For each instrument found, we conducted an internet search for complementary material, including handbooks or manuals for the instrument in question.

2.3. Study selection

We included methodological studies, manuals, or handbooks with general guidance or specific tools applicable for the critical appraisal of prevalence studies. First, we reviewed the titles and abstracts of all records identified in our search to select all potentially relevant studies. Then, we assessed the full text of selected studies and included studies meeting the eligibility criteria. Study selection was conducted by two reviewers independently (C.B.M. and C.S.). Disagreements were solved by consensus or arbitrated by a third reviewer (V.C. or M.F.).

A tool was eligible if (1) it was developed to critically appraise prevalence studies, or (2) the authors stated it could be applied to appraise prevalence studies, or (3) it was used by systematic review authors to appraise the quality of individual prevalence studies.

2.4. Data extraction

We extracted relevant information for each tool using predesigned and piloted tables. Data extracted included: process of development, applicability, structure, and content of the tool. Data extraction was conducted by two independent reviewers (C.B.M. and C.S.). Disagreements were solved by consensus or arbitrated by a third reviewer (V.C. or M.F.).

2.5. Data analysis

We classified each question or statement of the instruments into items, which represented the objective of assessment. The items were not prespecified. Each item is unique and based on the fact that it addresses a different aspect of quality of the study under appraisal. A new item was created whenever the question/statement from the included instrument would represent a different aspect of risk of bias, trying to be as sensitive as possible. Questions or statements from different instruments (sometimes even from the same instrument) that assessed the same risk of bias aspect, but with different wording, were merged under the same item. The judgments regarding the classification of questions and statements into items and domains are available in Additional file 5. Afterward, we classified the items into three key domains: "population and setting", "condition measurement", and "statistics". The domains were defined a priori based on the main components of a prevalence research question (population and condition) and considering the importance of appropriate statistical data analysis. If a question was applicable to appraise prevalence studies but covered a different domain (such as reporting or study methods), it was included under the classification "other".
As described, we included not only tools specifically designed to assess prevalence studies but also tools that could be adapted for this purpose. Thus, not all questions from nonspecific tools were applicable for prevalence studies (such as questions assessing the comparability among groups, or the description of an intervention), and we only categorized the applicable ones. If the instrument provided guidance about which questions should be used to assess prevalence studies, we followed these instructions. If not, before classifying the questions into items and domains, we evaluated whether they were applicable for prevalence studies or not. If classified as applicable, the question was categorized into items and domains as previously described. Questions classified as not applicable were not further evaluated (Fig. 1).

The process of selection and classification of questions/statements into items and domains was conducted by two reviewers independently (C.B.M. and C.S.). Discrepancies were solved by consensus or arbitrated by a third reviewer (V.C. or M.F.).

Fig. 1. Flowchart of classification of questions/statements. If the tool is specific for prevalence studies, all questions/statements are classified into items and domains; if not, and the study provides guidance about which questions/statements are applicable for prevalence studies,* only the applicable ones are classified; otherwise, questions/statements are first classified as applicable or not, and only the applicable questions are then classified into items and domains. *In some instruments, the guidance about which questions/statements were applicable for prevalence studies was based on study aspects such as intervention or the comparison group.
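The decision flow of Fig. 1 can also be read as a short program. The sketch below is an illustrative reimplementation only: the classification in this review was performed manually by the reviewers, and the class and field names are invented for the example.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Question:
        text: str
        guided_applicable: Optional[bool] = None  # applicability stated by the tool itself, if any
        judged_applicable: Optional[bool] = None  # reviewers' own applicability judgment

    @dataclass
    class Tool:
        name: str
        specific_for_prevalence: bool
        provides_guidance: bool
        questions: List[Question] = field(default_factory=list)

    def questions_to_classify(tool: Tool) -> List[Question]:
        # Specific tools: classify every question/statement into items and domains.
        if tool.specific_for_prevalence:
            return tool.questions
        # Nonspecific tools with guidance: follow the guidance and classify only the
        # questions/statements flagged as applicable for prevalence studies.
        if tool.provides_guidance:
            return [q for q in tool.questions if q.guided_applicable]
        # Nonspecific tools without guidance: judge applicability first, then classify
        # only the applicable questions/statements.
        return [q for q in tool.questions if q.judged_applicable]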

3. Results

3.1. Study selection

Our search resulted in 1,690 unique references. After selection of titles and abstracts, we assessed 105 full texts for eligibility. Finally, we included in the review 30 tools [11-41]. Fig. 2 presents the flowchart of study selection. The list of full texts excluded with reason is available in Additional file 2.

Records identified through Additional records identified


database searching through other sources
(n = 2.858) (n = 66)
PubMed: 870 Google Scholar and websites: 2
Embase: 1.129 Reference lists: 54
Web of Science: 859 Systematic reviews: 10

Duplicates removed
(n = 1.235)
Records screened
(n = 1.689)
Records excluded
(n = 1.590)
Full-text assessed for elegilibity
(n = 99)
Full text articles excluded
(n = 67)
Does not describe new tool (n = 36)
Full text included in qualitative Tool not applicable for prevalence studies (n = 28)
Other (n = 3)
synthesis
(n = 32)
Number of tools = 30

Fig. 2. Flowchart of study selection.



Table 1. Main characteristics of tools specifically designed to appraise prevalence studies

Al-Jader et al., 2002 [11]
Context of development (clinical condition): Genetic disorders.
Process of development: First version of the tool; pilot test with multidisciplinary assessors to evaluate reproducibility and feasibility; final version of the tool and test for inter-rater agreement.
Structure: Seven questions, with different answer options; each answer option with an associated score.
Summary and reporting of results: Maximum score: 100 points. No cutoff point defined.

Boyle, 1998 [12]
Context of development (clinical condition): Psychiatric disorders in general population settings.
Process of development: NR.
Structure: Ten questions, split in three sections. No predefined answer options.
Summary and reporting of results: No overall summary. Descriptive reporting of results.

Giannakopoulos et al., 2012 [13]
Context of development (clinical condition): Any clinical condition.
Process of development: Search for criteria to define a high-quality study of prevalence; development of the first version; pilot tests to determine inter-rater agreement and reliability; the final version of the tool.
Structure: Eleven questions, split in three sections, plus a question about ethics. Each question with two or three answer options; each answer option with an associated score.
Summary and reporting of results: Maximum score: 19 points. Studies are classified in accordance with their total score as poor (0-4), moderate (5-9), good (10-14), or outstanding (15-19).

Hoy et al., 2012 [14]
Context of development (clinical condition): Any clinical condition.
Process of development: Search for instruments; definition of important criteria to be assessed and creation of the draft tool; pilot tests with professionals; assessment of inter-rater agreement, ease of use, timeliness; the final version of the tool.
Structure: Ten questions with two standard answer options (high risk of bias/low risk of bias).
Summary and reporting of results: Question for overall appraisal with three answer options (low risk of bias/moderate risk of bias/high risk of bias), based on rater's judgment.

Loney et al., 1998 [15]
Context of development (clinical condition): Any clinical condition.
Process of development: Review of important criteria; development of the tool; pilot test in prevalence studies of dementia.
Structure: Eight statements, one point for each criterion achieved.
Summary and reporting of results: Maximum score: eight points. No cutoff point defined.

MORE, 2010 [16]
Context of development (clinical condition): Chronic conditions.
Process of development: Systematic search for instruments to assess prevalence and incidence studies; selection of important criteria; development of the first version; pilot test with experts to assess face validity, inter-rater agreement, and reliability; the final version of the tool.
Structure: Thirty-two questions, with different answer options; each answer option is classified as "minor flaw", "major flaw", or "poor reporting".
Summary and reporting of results: No overall summary. Descriptive reporting of results.

Silva et al., 2001 [17]
Context of development (clinical condition): Risk factors of chronic diseases.
Process of development: NR.
Structure: Nineteen questions, split in three sections. Two or three answer options, with an associated score.
Summary and reporting of results: Maximum score: 100 points. No cutoff point defined.

The Joanna Briggs Institute Prevalence Critical Appraisal Tool, 2014 [18,19]
Context of development (clinical condition): Any clinical condition.
Process of development: Systematic search for instruments to assess prevalence studies; review and selection of applicable criteria; development of the draft tool; pilot tests with professionals to assess face validity, applicability, acceptability, timeliness, and ease of use; the final version of the tool.
Structure: Nine questions with four standard answer options (yes/no/unclear/not applicable).(a)
Summary and reporting of results: Question for overall appraisal with three answer options (include/exclude/seek further info), based on rater's judgment.

Abbreviation: NR, not reported.
(a) Ten questions in previous versions.

Among these tools, seven (87.5%) were new tools [11-13,15-19], and one (12.5%) was an adaptation of an existing instrument [14]. Four tools (50.0%) were developed to assess studies of prevalence of any clinical condition [13-15,18,19], whereas the other four tools (50.0%) were developed to appraise prevalence studies of specific medical fields [11,12,16,17]; however, with some adaptations, they could all be applied to any clinical condition. The process of development of all instruments included search and review of relevant criteria, piloting test(s), and adjustments for the final version. The median number of questions in the tools was 10, ranging from seven to 32. Four tools (50.0%) were scales, with numeric results [11,13,15,17], and four (50.0%) were descriptive checklists [12,14,16,18,19]. Among the scales, only one suggested cutoff values to define the overall quality of the study [13]; among the checklists, two had an overall appraisal question, but they were answered based on the rater's judgment, without guidance on how to consider the previous questions to define a summary assessment [14,18,19].

Regarding the domains assessed, seven tools (87.5%) covered all key domains [12-19]; however, there was variability regarding the items assessed by each tool. Table 2 describes the items of each tool, classified by domains. In the domain "population and setting", the main items assessed were "appropriate sampling" (seven tools, 87.5%), "appropriate response rate" (five tools, 62.5%), and "representative sample" (four tools, 50.0%). In the domain "condition measurement", the main items assessed were "valid measurement of condition" (six tools, 75.0%), "standard measurement of condition" (six tools, 75.0%), and "reliable measurement of condition" (five tools, 62.5%). In the domain "statistics", the main items assessed were "precision of estimate" (six tools, 75.0%), "data analysis considering sampling" (three tools, 37.5%), "appropriate sample size" (two tools, 25.0%), and "subgroup analysis" (two tools, 25.0%). Overall, these tools provided 25 unique items related to the assessment of quality of prevalence studies: six related to "population and setting", nine related to "condition measurement", and 10 related to "statistics". In addition, 24 items were classified as "other", mainly related to reporting (Table 2).

3.3. Tools adapted for prevalence studies

Among the 30 included tools, 22 (73.3%) were not specific for prevalence studies. The main characteristics of these tools and the items and domains assessed by them are presented in Additional files 3 and 4, respectively. These tools provided six unique additional items for the domain "population and setting", seven items related to "condition measurement", and four items related to "statistics". Moreover, these tools provided 53 items classified under the domain "other" (Additional file 4).

3.4. Items

We identified 710 questions/statements from the included tools that were compiled into 119 different items. We identified 42 unique items classified under the domains "population and setting" (12 items), "condition measurement" (16 items), and "statistics" (14 items); of those, 25 (59.5%) were identified in the eight specific tools and 17 (40.5%) were identified in the additional 22 nonspecific tools. Table 3 summarizes the items assessed in each domain among all tools.

In the domain "other", we identified 77 unique items; of those, 24 (31.2%) were identified in the eight specific tools and 53 (68.8%) were identified in the additional 22 nonspecific tools. We classified the items in the domain "other" into three categories: "manuscript writing and reporting", "study protocol and methods", and "nonclassified". These categories were defined after data extraction, based on our findings. Table 4 presents the items classified as "other" stratified by these categories.

Table 2. Items assessed by tools specifically designed to appraise prevalence studies, classified by domains(a)

Al-Jader et al., 2002 [11]
Population and setting: Representative sample, ethnic characteristics of population source, appropriate size of population source.(b)
Condition measurement: -
Statistics: Precision of estimates.
Other: Description of condition of interest, reporting of year of conduction of studies, reporting of size of population source.

Boyle, 1998 [12]
Population and setting: Representative sample, appropriate sampling.
Condition measurement: Reliable and valid measurement of condition.
Statistics: Precision of estimates, data analysis considering sampling.
Other: Description of target population, standard data collection.

Giannakopoulos et al., 2012 [13]
Population and setting: Appropriate sampling and response rate.
Condition measurement: Reliable, standard, and valid measurement of condition.
Statistics: Precision of estimates, data analysis considering response rate and special features.
Other: Description of target population, ethics.

Hoy et al., 2012 [14]
Population and setting: Representative sample, appropriate sampling, and appropriate response rate.
Condition measurement: Appropriate definition of condition; reliable, standard, and valid measurement of condition; and appropriate length of prevalence period.
Statistics: Appropriate numerator and denominator parameters.
Other: Appropriate data collection.

Loney et al., 1998 [15]
Population and setting: Appropriate sampling and appropriate response rate.
Condition measurement: Appropriate, standard, and unbiased measurement of condition.
Statistics: Appropriate sample size, precision of estimates.
Other: Appropriate study design; description of participants, setting, and nonresponders.

MORE, 2010 [16]
Population and setting: Appropriate sampling and appropriate response rate.
Condition measurement: Appropriate, reliable, standard, and valid measurement of condition; assessment of disease severity and frequency of symptoms; type of prevalence estimate (point or period); and appropriate length of prevalence period.
Statistics: Precision of estimates, appropriate exclusion from analysis, data analysis considering sampling, subgroup analysis, and adjustment of estimates.
Other: Reporting of study design; description of study objectives; reporting of inclusion flowchart; description and role of funding; reporting of conflict of interest; ethics.

Silva et al., 2001 [17]
Population and setting: Appropriate sample source and appropriate sampling.
Condition measurement: Appropriate definition of condition, standard and valid measurement of condition.
Statistics: Precision of estimates, subgroup analysis, data analysis considering sampling.
Other: Description of study objectives and sampling frame, quality control of data, applicability and generalizability of results.

The Joanna Briggs Institute Prevalence Critical Appraisal Tool, 2014 [18,19]
Population and setting: Representative sample, appropriate sampling, and appropriate response rate.
Condition measurement: Reliable, standard, and valid measurement of condition.
Statistics: Appropriate sample size, appropriate statistical analysis, data analysis considering response rate.
Other: Description of participants and setting, objective criteria for subgroup definitions.

(a) The full set of questions for every tool is presented in Additional file 5.
(b) Tool specific for studies of prevalence of genetic conditions.

4. Discussion

In this systematic review, we identified, summarized, and compared 30 instruments used for the quality assessment of prevalence studies. Our results, similar to what was found in other reviews, show that there is great variability among tools and there is no consensus about which domains should be assessed in prevalence studies [42,43]. We classified all questions or statements into items and domains, creating a comprehensive set of 119 items useful for the assessment of prevalence studies.

Not all domains were covered by all tools, and even when they were covered, they were not always properly assessed. Some tools did not consider important aspects inside each domain, such as representativeness of sample, estimation of sample size, and appropriate measurement of condition, and there was an overlap among questions in the same instrument, which may lead to penalization of the same study for the same reason more than once. Moreover, many instruments assessed not only risk of bias but also reporting and manuscript writing. It is important to distinguish between these two concepts, as poor reporting is not a reflection on the quality of a study or whether the results from a study are at risk of bias.

Table 3. Unique items identified among all included tools and classified into a key domain

Population and setting (n = 12)
- Appropriate sample(a) (seven tools)
- Unbiased sample(a) (one tool)
- Representative sample (14 tools)
- Appropriate sample source (two tools)
- Appropriate size of population source (one tool)
- Ethnic characteristics of population source (one tool)
- Appropriate sampling (15 tools)
- Random sampling(a) (one tool)
- Standard selection of participants(a) (two tools)
- Participation rate of eligible persons(a) (one tool)
- Appropriate response rate (19 tools)
- Assessment of nonresponders(a) (one tool)

Condition measurement (n = 16)
- Type of prevalence estimate (point or period) (one tool)
- Appropriate length of prevalence period (two tools)
- Appropriate definition of condition (two tools)
- Appropriate measurement of condition (11 tools)
- Accurate measurement of condition(a) (one tool)
- Precise measurement of condition(a) (one tool)
- Quality control of measurement methods(a) (two tools)
- Valid measurement of condition (21 tools)
- Reliable measurement of condition (15 tools)
- Standard measurement of condition (10 tools)
- Unbiased measurement of condition (three tools)
- Reproducible measurement of condition(a) (two tools)
- Assessment of disease severity and frequency of symptoms (one tool)
- Data collection performed by investigators unrelated to patients(a) (one tool)
- Face validity(a) (one tool)
- Selective outcome reporting(a) (one tool)

Statistics (n = 14)
- Sample size estimation(a) (10 tools)
- Appropriate sample size (seven tools)
- Appropriate statistical analysis (13 tools)
- Appropriate numerator and denominator parameters (one tool)
- Appropriate exclusion from analysis (one tool)
- Adjustment of estimates (one tool)
- Data analysis considering sampling (three tools)
- Data analysis considering the response rate (seven tools)
- Data analysis considering special features (one tool)
- Missing data handling(a) (one tool)
- Random error(a) (three tools)
- Precision of estimate (11 tools)
- Subgroup analysis (five tools)
- Data fishing(a) (two tools)

(a) Items from nonspecific tools only.
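To make two of the "statistics" items above concrete ("precision of estimate" and "data analysis considering sampling"), the short Python sketch below computes a prevalence estimate with a Wilson 95% confidence interval and then recomputes it using an effective sample size reduced by an assumed design effect. The survey figures and the design effect of 1.8 are hypothetical and purely illustrative, not taken from any of the included tools or studies:

    import math

    def wilson_ci(p, n, z=1.96):
        """Wilson score interval for a proportion p observed in n subjects (z = 1.96 for 95%)."""
        denom = 1 + z**2 / n
        center = (p + z**2 / (2 * n)) / denom
        half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
        return center - half, center + half

    cases, n = 120, 1500          # hypothetical survey: 120 cases among 1,500 participants
    prevalence = cases / n        # point prevalence = 0.08

    # "Precision of estimate": the width of the confidence interval around the prevalence.
    lo, hi = wilson_ci(prevalence, n)

    # "Data analysis considering sampling": under a complex (e.g., cluster) design, a common
    # approximation divides n by a design effect before computing the interval.
    deff = 1.8
    lo_adj, hi_adj = wilson_ci(prevalence, n / deff)

    print(f"prevalence = {prevalence:.3f}")
    print(f"95% CI assuming simple random sampling: {lo:.3f} to {hi:.3f}")
    print(f"95% CI with design effect {deff}: {lo_adj:.3f} to {hi_adj:.3f}")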

We conducted a broad search, using important databases and including alternative data sources. Our search was very sensitive to identify tools specifically designed for prevalence studies, but we probably have not included all instruments that could be adapted for this purpose. This could be a limitation of our study; however, we believe our results are representative of the items and domains used to appraise prevalence studies because we probably achieved a saturation of items, and this work is the most comprehensive overview of tools to assess prevalence studies to date. Another possible limitation of our review is that data abstraction and classification of questions into items and domains required judgment, which can lead to different decisions by different assessors. We tried to overcome this by conducting the classification independently by two reviewers, with the assistance of third reviewers in case of discrepancies. In addition, our decision-making was informed by a protocol, which reduced the chance of making ad hoc, subjective decisions during the conduct of our review. Moreover, to enhance transparency of the process, our judgments regarding the classification of questions and statements into items and domains are available in Additional file 5.

The main objective of this study was to map the literature to identify items used to assess the risk of bias of prevalence studies, generating a comprehensive bank of items. Some of the items identified are similar and there may be overlap among them in terms of their broad definition. However, we do believe that subtle differences in terminology and nomenclature are important in the contextualization of the tools from which they are derived. This granular approach is justified in this case as this will facilitate the future development of a new risk of bias assessment tool. As an example of the importance of identifying subtle differences between similar items in tools, we attempt in the following to clarify the differences between valid, reliable, reproducible, and unbiased measurement of the condition (which some readers may have considered synonymous terms previously):

- Valid measurement of the condition: the measurement of the condition is performed with methods that actually measure or detect what they are supposed to measure.
- Reliable measurement of the condition: this is related to the consistency of a measure. A highly reliable measure produces similar results under similar conditions, so all things being equal, repeated testing should produce similar results.
- Reproducible measurement of the condition: the measurement of the condition is performed using methods capable of being reproduced at a different time or place and by different people.
- Unbiased measurement of the condition: measurement of the condition free of systematic errors that could deviate the results from the truth.

Table 4. Items classified as "other", categorized into three subgroups

Manuscript writing and reporting (n = 48)
- Clear reporting of authors and affiliations(a) (one tool)
- Appropriate title(a) (one tool)
- Appropriate abstract(a) (one tool)
- Study justified by literature review(a) (one tool)
- Description of the problem(a) (one tool)
- Theoretical framework(a) (one tool)
- Clear hypothesis(a) (one tool)
- Clear study questions(a) (eight tools)
- Description of study objectives (13 tools)
- Description of condition of interest (10 tools)
- Description of target population (seven tools)
- Description of the setting (six tools)
- Reporting of the study design (five tools)
- Description of methods(a) (three tools)
- Reporting of the size of the population source (one tool)
- Description of the sampling frame (five tools)
- Description of eligibility criteria(a) (seven tools)
- Reporting of the year of conduction of study (one tool)
- Description of statistical analysis (seven tools)
- Appropriate data reporting(a) (three tools)
- Appropriate reporting of results(a) (10 tools)
- Reporting of inclusion flowchart (three tools)
- Reporting of sample size (one tool)
- Reporting of the response rate(a) (three tools)
- Description of nonresponders (four tools)
- Description of participants (10 tools)
- Reporting of data collection procedures(a) (two tools)
- Clear description of data sources(a) (one tool)
- Description of missing data(a) (two tools)
- Reporting of adjusted estimates(a) (one tool)
- Reporting of statistical significance(a) (two tools)
- Reporting of clinical significance(a) (two tools)
- Reporting of discussion(a) (one tool)
- Appropriate discussion(a) (three tools)
- Discussion based on results(a) (one tool)
- Reporting of all possible interpretation of results(a) (one tool)
- Discussion of bias(a) (two tools)
- Discussion of limitations(a) (five tools)
- Discussion of strengths(a) (one tool)
- Comparison of results with the existing literature(a) (three tools)
- Description of study conclusions(a) (one tool)
- Appropriate conclusions(a) (two tools)
- Conclusion based on results(a) (10 tools)
- Reporting of an additional information source(a) (one tool)
- Description of funding (two tools)
- Reporting of conflict of interest (four tools)
- Clear references(a) (one tool)
- Recommendations for future research(a) (two tools)

Study protocol and methods (n = 12)
- Specific objectives(a) (two tools)
- Study protocol(a) (one tool)
- A priori statistical analysis plan(a) (one tool)
- Appropriate study design (11 tools)
- Appropriate review of the existing literature(a) (two tools)
- Appropriate methods(a) (three tools)
- Appropriate data collection (two tools)
- Standard data collection (one tool)
- Consideration of important variables(a) (one tool)
- Consideration of privacy and sensitivity of condition(a) (one tool)
- Objective criteria for subgroup definitions (one tool)
- Role of funding (one tool)

Nonclassified (n = 17)
- Importance of study(a) (three tools)
- Quality control of data (one tool)
- Relevance of the research question(a) (one tool)
- Relevance of outcomes(a) (one tool)
- Identification of bias(a) (one tool)
- Consistent results(a) (three tools)
- Believable results(a) (one tool)
- Conclusion plausible(a) (two tools)
- Possible alternative conclusions(a) (one tool)
- Relevance of conclusions(a) (one tool)
- Applicability of results (one tool)
- Generalizability of results (six tools)
- Ethics (six tools)
- Effect of conflict of interest(a) (two tools)
- Bias due to funding(a) (one tool)
- Reader's interpretation of study(a) (three tools)
- Other(a) (one tool)

(a) Items from nonspecific tools only.

Therefore, even though the differences among items are subtle, we do believe they are important to inform the development of a new tool.

It is not possible to strongly recommend a tool because there is great variability in their content. A new tool, domain based and with broader coverage and applicability, is needed. However, among the currently available tools specific for prevalence studies, the Joanna Briggs Institute Prevalence Critical Appraisal Tool has a higher methodologic rigor and addresses what we consider the most important items related to the methodological quality of prevalence studies, and may be considered the most appropriate tool [18,19].

5. Conclusions

We have now identified a comprehensive set of items and domains that is broader than any of the individual tools included in this review. This data set can now be used by those interested in the critical appraisal of prevalence studies, by authors of prevalence studies, and to inform the development or update of future tools to critically appraise prevalence studies.

CRediT authorship contribution statement

Celina Borges Migliavaca: Formal analysis, Investigation, Visualization, Data curation, Writing - original draft. Cinara Stein: Formal analysis, Investigation, Writing - review & editing. Verônica Colpani: Validation, Project administration, Writing - review & editing. Zachary Munn: Methodology, Writing - review & editing. Maicon Falavigna: Conceptualization, Validation, Writing - review & editing.

Supplementary data

Supplementary data to this article can be found online at https://doi.org/10.1016/j.jclinepi.2020.06.039.

References

[1] Fletcher R, Fletcher S, Fletcher GS. Clinical epidemiology: the essentials. 5th ed. Philadelphia: Lippincott Williams & Wilkins; 2013.
[2] Harder T. Some notes on critical appraisal of prevalence studies: comment on "The development of a critical appraisal tool for use in systematic reviews addressing questions of prevalence". Int J Health Pol Manag 2014;3:289-90.
[3] Wagner MB. Medindo a ocorrência da doença: prevalência ou incidência? J Pediatr (Rio J) 1998;74:157-62.
[4] Oxman AD, Schunemann HJ, Fretheim A. Improving the use of research evidence in guideline development: 2. Priority setting. Health Res Pol Syst 2006;4:14.
[5] Rotily M, Roze S. What is the impact of disease prevalence upon health technology assessment? Best Pract Res Clin Gastroenterol 2013;27:853-65.
[6] Borges Migliavaca C, Stein C, Colpani V, et al. How are systematic reviews of prevalence conducted? A methodological study. BMC Med Res Methodol 2020;20:96. https://doi.org/10.1186/s12874-020-00975-3.
[7] Shamliyan T, Kane RL, Dickinson S. A systematic review of tools used to assess the quality of observational studies that examine incidence or prevalence and risk factors for diseases. J Clin Epidemiol 2010;63:1061-70.
[8] Sterne JA, Hernan MA, Reeves BC, Savovic J, Berkman ND, Viswanathan M, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ 2016;355:i4919.
[9] Whiting PF, Rutjes AW, Westwood ME, Mallett S, Deeks JJ, Reitsma JB, et al. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med 2011;155:529-36.

[10] Sterne JAC, Page MJ, Elbers RG, Blencowe NS, Boutron I, Cates CJ, et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ 2019;366:l4898.
[11] Al-Jader LN, Newcombe RG, Hayes S, Murray A, Layzell J, Harper PS. Developing a quality scoring system for epidemiological surveys of genetic disorders. Clin Genet 2002;62:230-4.
[12] Boyle MH. Guidelines for evaluating prevalence studies. Evid Based Ment Health 1998;1:37.
[13] Giannakopoulos NN, Rammelsberg P, Eberhard L, Schmitter M. A new instrument for assessing the quality of studies on prevalence. Clin Oral Invest 2012;16:781-8.
[14] Hoy D, Brooks P, Woolf A, Blyth F, March L, Bain C, et al. Assessing risk of bias in prevalence studies: modification of an existing tool and evidence of interrater agreement. J Clin Epidemiol 2012;65:934-9.
[15] Loney PL, Chambers LW, Bennett KJ, Roberts JG, Stratford PW. Critical appraisal of the health research literature: prevalence or incidence of a health problem. Chronic Dis Can 1998;19:170-6.
[16] Shamliyan TA, Kane RL, Ansari MT, Raman G, Berkman ND, Grant M, et al. Development quality criteria to evaluate nontherapeutic studies of incidence, prevalence, or risk factors of chronic diseases: pilot study of new checklists. J Clin Epidemiol 2011;64:637-57.
[17] Silva LC, Ordunez P, Paz Rodriguez M, Robles S. A tool for assessing the usefulness of prevalence studies done for surveillance purposes: the example of hypertension. Rev Panam Salud Publica 2001;10:152-60.
[18] Munn Z, Moola S, Riitano D, Lisy K. The development of a critical appraisal tool for use in systematic reviews addressing questions of prevalence. Int J Health Pol Manag 2014;3:123-8.
[19] Munn Z, Moola S, Lisy K, Riitano D, Tufanaru C. Methodological guidance for systematic reviews of observational epidemiological studies reporting prevalence and cumulative incidence data. Int J Evid Based Healthc 2015;13:147-53.
[20] Academy of Nutrition and Dietetics. Evidence analysis manual: steps in the academy evidence analysis process. Chicago: Academy of Nutrition and Dietetics; 2016:106.
[21] Avis M. Reading research critically. II. An introduction to appraisal: assessing the evidence. J Clin Nurs 1994;3:271-7.
[22] Downes MJ, Brennan ML, Williams HC, Dean RS. Development of a critical appraisal tool to assess the quality of cross-sectional studies (AXIS). BMJ Open 2016;6:7.
[23] Berra S, Elorza-Ricart JM, Estrada MD, Sanchez E. [A tool (corrected) for the critical appraisal of epidemiological cross-sectional studies]. Gac Sanit 2008;22:492-7.
[24] The University of Manchester. Centre for Occupational and Environmental Health (COEH) - critical appraisal. Available at http://research.bmh.manchester.ac.uk/epidemiology/COEH/undergraduate/specialstudymodules/criticalappraisal/. Accessed August 1, 2020.
[25] Downs SH, Black N. The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies of health care interventions. J Epidemiol Community Health 1998;52:377-84.
[26] DuRant RH. Checklist for the evaluation of research articles. J Adolesc Health 1994;15:4-8.
[27] Fowkes FG, Fulton PM. Critical appraisal of published research: introductory guidelines. BMJ 1991;302:1136-40.
[28] Gardner MJ, Machin D, Campbell MJ. Use of check lists in assessing the statistical content of medical studies. Br Med J (Clin Res Ed) 1986;292:810-2.
[29] Glynn L. A critical appraisal tool for library and information research. Libr Hi Tech 2006;24:387-99.
[30] Kmet LS, Lee RC. Standard quality assessment criteria for evaluating primary research papers from a variety of fields. AHFMR HTA Initiative 20040213. HTA Initiative 2004:2.
[31] Law M, Stewart D, Pollock N, Letts L, Bosch J, Westmorland M. Critical review form - quantitative studies, 1998:13.
[32] Margetts B, Vorster H, Venter C. Evidence-based nutrition - review of nutritional epidemiological studies. South Afr J Clin Nutr 2002;15:68-73.
[33] Slim K, Nini E, Forestier D, Kwiatkowski F, Panis Y, Chipponi J. Methodological index for non-randomized studies (minors): development and validation of a new instrument. ANZ J Surg 2003;73:712-6.
[34] Hong QN, Pluye P, Fabregues S, Bartlett G, Boardman F, Cargo M, et al. Mixed methods appraisal tool (MMAT), version 2018. Canada: IC Canadian Intellectual Property Office, Industry Canada; 2018.
[35] Wells G, Shea B, O'Connell D, Peterson J, Welch V, Losos M, et al. Newcastle-Ottawa quality assessment scale cohort studies 2014. Available at http://www.ohri.ca/programs/clinical_epidemiology/oxford.asp. Accessed August 1, 2020.
[36] NIH. Quality assessment tool for observational cohort and cross-sectional studies 2018. Available at https://www.nhlbi.nih.gov/health-topics/study-quality-assessment-tools. Accessed August 1, 2020.
[37] Public Health Wales Observatory. Critical appraisal checklist: cross sectional study 2004. Available at http://www2.nphs.wales.nhs.uk:8080/PubHObservatoryProjDocs.nsf/($All)/E7B0C80995DC1BA380257DB80037C699/$File/Cross%20sectional%20study%20checklist.docx?OpenElement. Accessed August 1, 2020.
[38] Wong WCW, Cheung CSK, Hart GJ. Development of a quality assessment tool for systematic reviews of observational studies (QATSO) of HIV prevalence in men having sex with men and associated risk behaviours. Emerging Themes Epidemiol 2008;5:23.
[39] Viswanathan M, Berkman ND. Development of the RTI item bank on risk of bias and precision of observational studies. J Clin Epidemiol 2012;65:163-78.
[40] Viswanathan M, Berkman ND, Dryden DM, Hartling L. AHRQ methods for effective health care. Assessing risk of bias and confounding in observational studies of interventions or exposures: further development of the RTI item bank. Rockville, MD: Agency for Healthcare Research and Quality (US); 2013.
[41] Specialist Unit for Review Evidence (SURE). Questions to assist with the critical appraisal of cross-sectional studies 2018. Available at https://www.cardiff.ac.uk/__data/assets/pdf_file/0010/1142974/SURE-CA-form-for-Cross-sectional_2018.pdf. Accessed August 1, 2020.
[42] Sanderson S, Tatt ID, Higgins JP. Tools for assessing quality and susceptibility to bias in observational studies in epidemiology: a systematic review and annotated bibliography. Int J Epidemiol 2007;36:666-76.
[43] Jarde A, Losilla JM, Vives J. Methodological quality assessment tools of non-experimental studies: a systematic review. An Psicol 2012;28:617-28.
