
How to Report a Research Study

Paul Cronin, MD, MS, James V. Rawson, MD, Marta E. Heilbrun, MD, MS, Janie M. Lee, MD, MSc,
Aine M. Kelly, MD, MS, MA, Pina C. Sanelli, MD, MPH, Brian W. Bresnahan, PhD, Angelisa M. Paladin, MD
Incomplete reporting hampers the evaluation of results and bias in clinical research studies. Guidelines for reporting study design and
methods have been developed to encourage authors and journals to include the required elements. Recent efforts have been made to
standardize the reporting of clinical health research including clinical guidelines. In this article, the reporting of diagnostic test accuracy
studies, screening studies, therapeutic studies, systematic reviews and meta-analyses, cost-effectiveness assessments (CEA), recommendations and/or guidelines, and medical education studies is discussed. The available guidelines on how to report these different types of health research, many of which can be found at the Enhancing the QUAlity and Transparency Of health Research network, are also discussed. We also hope that this article can be used in academic programs to educate faculty and trainees about the available resources for improving health research.
Key Words: Cost-effectiveness assessments; diagnostic test accuracy; enhancing the quality and transparency of health research;
guidelines; medical education; recommendations; reporting; screening; systematic reviews and meta-analyses; therapy.
AUR, 2014

This article is the first in a series of two articles that will


review how to report and how to critically appraise
research in health care. In this article, the reporting
of diagnostic test accuracy and screening studies, therapeutic
studies, systematic reviews and meta-analyses, cost-effectiveness studies, recommendations and/or guidelines, and medical education studies is discussed. The available guidelines
on how to report these different types of health research are
also discussed. The second article will review the evolution
of standardization of critical appraisal techniques for health
research.
Recent efforts have been made to standardize both the
reporting and the critical appraisal of clinical health research
including clinical guidelines. In 2006, the Enhancing the
QUAlity and Transparency Of health Research (EQUATOR)
network was formed to improve the quality of reporting
health research (1). Recognizing the need to critically assess
the methodological quality of studies and the widespread deficiencies and lack of standardization in primary research
reporting, the network brought together international stakeholders including editors, peer reviewers, and developers of
guidelines to improve both the quality of research publications
and the quality of the research itself (1). Many of the

Acad Radiol 2014; 21:1088–1116


From the Division of Cardiothoracic Radiology, Department of Radiology,
University of Michigan Hospitals, B1 132G Taubman Center/5302, 1500 East
Medical Center Drive, Ann Arbor, MI 48109 (P.C., A.M.K.); Department of
Radiology and Imaging, Medical College of Georgia, Georgia Regents
University, Augusta, Georgia (J.V.R.); Department of Radiology, University of
Utah School of Medicine, Salt Lake City, Utah (M.E.H.); Department of
Radiology, University of Washington, Seattle, Washington (J.M.L., B.W.B.,
A.M.P.); and Department of Radiology, Weill Cornell Medical College/New
York Presbyterian Hospital, New York, New York (P.C.S.). Received March
24, 2014; accepted April 30, 2014. Address correspondence to: P.C.
e-mail: pcronin@med.umich.edu
AUR, 2014
http://dx.doi.org/10.1016/j.acra.2014.04.016


presentations at the joint Radiological Alliance for Health


Service Research/Alliance of Clinician-Educators in Radiology session at the 2013 Association of University Radiologists
annual meeting highlighted reporting guidelines available at
the EQUATOR network (1). The EQUATOR network goals
are raising awareness of the crucial importance of accurate and
complete reporting of research; becoming the recognized
global center providing resources, education, and training
relating to the reporting of health research and use of reporting
guidelines; assisting in the development, dissemination, and
implementation of reporting guidelines; monitoring the status
of the quality of reporting across health research literature; and
conducting research evaluating or pertaining to the quality of
reporting (1). The desired result of these goals is to improve
the quality of health care research reporting which subsequently
improves patient care.
The EQUATOR Network Resource Centre provides up-to-date resources related to health research reporting, mainly
for authors of research articles; journal editors and peer reviewers; and reporting guideline developers to enable better
reporting, reviewing, and editing. Within the library for
health research reporting, the EQUATOR network has
developed and maintains a digital library that provides publications related to writing research articles; reporting guidelines and guidance on scientific writing; the use of reporting
guidelines in editorial and peer review processes; the development of reporting guidelines; and evaluations of the quality of
reporting. The library contains comprehensive lists of the
available reporting guidelines, listed by study type. These
include experimental studies, observational studies, diagnostic
accuracy studies, biospecimen reporting, reliability and agreement studies, systematic reviews and meta-analyses, qualitative research, mixed-methods studies, economic evaluations,
and quality improvement studies. The network has developed
several standards for reporting research including the

Item
1

Title/Abstract/Key words
Identify the article as a study of diagnostic
accuracy
Introduction
State the research questions or study aims, such as
estimating diagnostic accuracy or comparing
accuracy between tests or across participant groups.
Methods Participants
Describe the study population: The inclusion and
exclusion criteria, setting and locations where
data were collected.
Describe participant recruitment and sampling: How
eligible patients are identified.

Describe participant sampling:

Describe data collection: Was data collection


planned before the index test and reference
standard were performed (prospective study) or
after (retrospective study)?

Methods Test methods


Describe the reference standard and its rationale.

Describe technical specifications of material and


methods involved including how and when
measurements were taken, and/or cite references
for index tests and reference standard.

Describe the definitions and rationale for the units,


thresholds and/or categories of the index tests and
reference standard.
Describe the number, training, and expertise of the
persons executing and reading the index tests
and the reference standard.

10

Describe the scientific background, previous work on the subject, the remaining uncertainty, and, hence, the
rationale for the study.
Clearly specified research questions help the readers to judge the appropriateness of the study design and data
analysis.
Diagnostic accuracy studies describe the behavior of a test under particular circumstances and should report its
inclusion and exclusion criteria for selecting the study population. The spectrum of the target disease can vary
and affect test performance.
Was recruitment based on presenting symptoms, results from previous tests, or the fact that the participants had
received the index tests or the reference standard?
Describe how eligible subjects were identified and whether the study enrolls consecutive or random sampling of
patients. Study designs are likely to influence the spectrum of disease represented.
Was the study population a consecutive series of participants defined by the selection criteria in items 3 and 4? If
not, specify how participants were further selected.
Prospective data collection has many advantages: better data control, additional checks for data integrity and
consistency, and a level of clinical detail appropriate to the problem. As a result, there will be fewer missing or
uninterpretable data items.
Retrospective data collection starts after patients have undergone the index test and the reference standard and
often relies on chart review. Studies with retrospective data collection may reflect routine clinical practice
better than a prospective study, but also may fail to identify all eligible patients or to provide data of high quality.
The reference standard is used to distinguish patients with and without disease. When it is not possible to subject
all patients to the reference standard for practical or ethical reasons, a composite reference standard is an
alternative. The components may reflect different definitions or strategies for disease diagnosis.
Describe the methods involved in the execution of index test and reference standard in sufficient detail to allow
other researchers to replicate the study. Differences in the execution of the index test and reference standard
are a potential source of variation in diagnostic accuracy.
The description should cover the full test protocol including the specification of materials and instruments
together with their instructions for use.
Test results can be truly dichotomous (eg, present or absent), have multiple categories or be continuous. Clearly
describe how and when category boundaries are used.


Variability in the manipulation, processing, or reading of the index test or reference standard will affect diagnostic
accuracy.
Professional background, expertise, and prior training to improve interpretation and to reduce interobserver
variation all affect the quality of reading.

Use the term "diagnostic accuracy" in the title or abstract.


In 1991, the National Library of Medicine's MEDLINE database introduced a specific keyword (MeSH heading) for diagnostic studies: "Sensitivity and Specificity."


TABLE 1. STARD Items and Explanation (13–28)

Item
11

12

13

14
15
16

17

19

20

21

Results Participants
Report when study was performed, including
beginning and end dates of recruitment.
Report clinical and demographic characteristics of
the study population.
Report the number of participants satisfying the
criteria for inclusion who did or did not undergo
the index tests and/or the reference standard.
Results Test results
Report time interval between the index tests and
the reference standard, and any treatment
administered in between.
Report distribution of severity of disease (define
criteria).

Report a cross tabulation of the results of the index


tests (including indeterminate and missing results)
by the results of the reference standard; for
continuous results, the distribution of the test
results by the results of the reference standard.
Report any adverse events from performing the
index tests or the reference standard.
Results Estimates
Report estimates of diagnostic accuracy and
measures of statistical uncertainty (eg, 95%
confidence intervals).

Knowledge of the results of the reference standard can influence the reading of the index test, and vice versa,
leading to inflated measures of diagnostic accuracy.

Sensitivity, specificity, PPV, NPV, ROC, likelihood ratio and odds ratio.

Reproducibility of the index test and reference standard varies. Poor reproducibility adversely affects diagnostic
accuracy. If possible, authors should evaluate the reproducibility of the test methods used in their study and
report their procedure to do so.
The technology behind many tests advances continuously, leading to improvements in diagnostic accuracy. There
may be a considerable gap between the dates of the study and the publication date of the study report.
Demographic and clinical characteristics (eg, age, sex, spectrum of presenting symptoms, comorbidity, current
treatments, recruitment centers) are usually presented in a table.
Describe why participants failed to receive either test.
A flow diagram is strongly recommended.

When delay occurs between doing the index test and the reference standard, the condition of the patient may
change, leading to worsening or improvement of the disease. Similar concerns apply if treatment is started after
doing the index test but before doing the reference standard.
Demographic and clinical features of the study population can affect measures of diagnostic accuracy. Many
diseases are not pure dichotomous states but cover a continuum, ranging from minute pathological changes to
advanced clinical disease. Test sensitivity is often higher in studies with a higher proportion of patients with
more advanced stages of the target condition.
Cross tabulations of test results in categories and graphs of distributions of continuous results are essential to
allow scientific colleagues to (re)calculate measures of diagnostic accuracy or to perform alternative analyses,
including meta-analysis.

Not all tests are safe. Measuring and reporting of adverse events in studies of diagnostic accuracy can provide
additional information about the clinical usefulness of a particular test.
Report a value of how well the test results correspond with the reference standard. The values presented in the
report should be taken as estimates with some variation. Many journals require or strongly encourage the use of
confidence intervals as measures of precision. A 95% confidence interval is conventional.


18

Describe whether or not the readers of the index


tests and reference standard were blind (masked)
to the results of the other test and describe any
other clinical information available to the readers.
Methods Statistical methods
Describe methods for calculating or comparing
measures of diagnostic accuracy, and the statistical
methods used to quantify uncertainty (eg, 95%
confidence intervals).
Describe methods for calculating test reproducibility,
if done.


TABLE 2. Checklist for Reporting the Reference Case Cost-Effectiveness Analysis (61)

NPV, negative predictive value; PPV, positive predictive value; ROC, receiver operating characteristic.

Provide a general interpretation of the results in the context of current evidence and its applicability in practice.
Clearly define the methodological shortcomings of the study, how they potentially affected the results, and
approaches to limit their significance.
Discuss differences between the context of the study and other settings and patient groups in which the test is
likely to be used.
Provide future direction for this work in advancing clinical practice or research in this field.
Discussion
Discuss the clinical applicability of the study findings.
25

24

Report estimates of variability of diagnostic accuracy


between subgroups of participants, readers or
centers, if done.
Report estimates of test reproducibility, if done.
23

22

Report how indeterminate results, missing data and


outliers of the index tests were handled.

Uninterpretable, indeterminate, and intermediate test results pose a problem in the assessment of a diagnostic
test. By itself, the frequency of these test results is an important indicator of the overall usefulness of the test.
Furthermore, ignoring such test results can produce biased estimates of diagnostic accuracy if these results
occur more frequently in patients with disease than in those without, or vice versa.
Since variability is the rule rather than the exception, researchers should explore possible sources of
heterogeneity in results, within the limits of the available sample size. The best practice is to plan subgroup
analyses before the start of the study.
Report all measures of test reproducibility performed during the study. For quantitative analytical methods,
report the coefficient of variation (CV).


Framework
1. Background of the problem
2. General framing and design of the analysis
3. Target population for intervention
4. Other program descriptors (eg, care setting, model of delivery, timing of intervention)
5. Description of comparator programs
6. Boundaries of the analysis
7. Time horizon
8. Statement of the perspective of the analysis
Data and Methods
9. Description of event pathway
10. Identification of outcomes of interest in analysis
11. Description of model used
12. Modeling assumptions
13. Diagram of event pathway (model)
14. Software used
15. Complete description of estimates of effectiveness, resource use, unit costs, health states, and quality-of-life weights and their sources
16. Methods for obtaining estimates of effectiveness, costs, and preferences
17. Critique of data quality
18. Statement of year of costs
19. Statement of method used to adjust costs for inflation
20. Statement of type of currency
21. Source and methods for obtaining expert judgment
22. Statement of discount rates
Results
23. Results of model validation
24. Reference case results (discounted at 3% and undiscounted): total costs and effectiveness, incremental costs and effectiveness, and incremental cost-effectiveness ratios
25. Results of sensitivity analyses
26. Other estimates of uncertainty, if available
27. Graphical representation of cost-effectiveness results
28. Aggregate cost and effectiveness information
29. Disaggregated results, as relevant
30. Secondary analyses using 5% discount rate
31. Other secondary analyses, as relevant
Discussion
32. Summary of reference case results
33. Summary of sensitivity of results to assumptions and uncertainties in the analysis
34. Discussion of analysis assumptions having important ethical implications
35. Limitations of the study
36. Relevance of study results for specific policy questions or decisions
37. Results of related cost-effectiveness analyses
38. Distributive implications of an intervention

consolidated standards of reporting trials (CONSORT)


checklist and flow diagram for randomized controlled trials
(RCTs) (2–12); the transparent reporting of evaluations

TABLE 3. International Society for Pharmacoeconomics and Outcomes Research Randomized Control Trial Cost-Effectiveness
Analysis (ISPOR RCT-CEA) Task Force Report of Core Recommendations for Conducting Economic Analyses Alongside Clinical
Trials (62)
Trial design
1. Trial design should reflect effectiveness rather than efficacy when possible.
2. Full follow-up of all patients is encouraged.
3. Describe power and ability to test hypotheses, given the trial sample size.
4. Clinical end points used in economic evaluations should be disaggregated.
5. Direct measures of outcome are preferred to use of intermediate end points.
Data elements
6. Obtain information to derive health state utilities directly from the study population.
7. Collect all resources that may substantially influence overall costs; these include those related and unrelated to the intervention.
Database design and management
8. Collection and management of the economic data should be fully integrated into the clinical data.
9. Consent forms should include wording permitting the collection of economic data, particularly when it will be gathered from third-party databases and may include pre- and/or post-trial records.
Analysis
10. The analysis of economic measures should be guided by a data analysis plan and hypotheses that are drafted prior to the onset of the study.
11. All cost-effectiveness analyses should include the following: an intention-to-treat analysis; common time horizon(s) for accumulating costs and outcomes; a within-trial assessment of costs and outcomes; an assessment of uncertainty; a common discount rate applied to future costs and outcomes; an accounting for missing and/or censored data.
12. Incremental costs and outcomes should be measured as differences in arithmetic means, with statistical testing accounting for issues specific to these data (eg, skewness, mass at zero, censoring, construction of QALYs).
13. Imputation is desirable if there is a substantial amount of missing data. Censoring, if present, should also be addressed.
14. One or more summary measures should be used to characterize the relative value of the intervention.
15. Examples include ratio measures, difference measures, and probability measures (eg, cost-effectiveness acceptability curves).
16. Uncertainty should be characterized. Account for uncertainty that stems from sampling, fixed parameters such as unit costs and the discount rate, and methods to address missing data.
17. Threats to external validity (including protocol-driven resource use, unrepresentative recruiting centers, restrictive inclusion and exclusion criteria, and artificially enhanced compliance) are best addressed at the design phase.
18. Multinational trials require special consideration to address intercountry differences in population characteristics and treatment patterns.
19. When models are used to estimate costs and outcomes beyond the time horizon of the trial, good modeling practices should be followed. Models should reflect the expected duration of the intervention on costs and outcomes.
20. Subgroup analyses based on prespecified clinical and economic interactions, when found to be significant ex post, are appropriate. Ad hoc subgroup analysis is discouraged.
Reporting the results
21. Minimum reporting standards for cost-effectiveness analyses should be adhered to for those conducted alongside clinical trials.
22. The cost-effectiveness report should include a general description of the clinical trial and key clinical findings.
23. Reporting should distinguish economic data collected as part of the trial vs. data not collected as part of the trial.
24. The amount of missing data should be reported. If imputation methods are used, the method should be described.
25. Methods used to construct and compare costs and outcomes, and to project costs and outcomes beyond the trial period, should be described.
26. The results section should include summaries of resource use, costs, and outcome measures, including point estimates and measures of uncertainty. Results should be reported for the time horizon of the trial, and for projections beyond the trial (if conducted).
27. Graphical displays are recommended for results not easily reported in tabular form (eg, cost-effectiveness acceptability curves, joint density of incremental costs and outcomes).
QALYs, quality-adjusted life years.

with nonrandomized designs (TREND) checklist for


nonrandomized trials; the standards for the reporting of
diagnostic accuracy studies (STARD) checklist and flow
diagram for diagnostic test accuracy studies (1328); the
strengthening the reporting of observational studies in
epidemiology (STROBE) checklists for cohort, case-control,
and cross-sectional studies; the preferred reporting items of

systematic reviews and meta-analyses (PRISMA) checklist


and flow diagram for systematic reviews and meta-analyses
(29); the consolidated criteria for reporting qualitative research
(COREQ) and enhancing transparency in reporting the
synthesis of qualitative research (ENTREQ) checklists for
reporting qualitative research; standards for quality improvement reporting excellence (SQUIRE) checklist for quality


TABLE 4. Standards for Developing Trustworthy Clinical


Practice Guidelines
Standard 1: Establishing transparency
Standard 2: Management of conflict of interest
Standard 3: Guideline development group composition
Standard 4: Clinical practice guideline-systematic review intersection
Standard 5: Establishing evidence foundations for and rating strength of recommendations
Standard 6: Articulations of recommendations
Standard 7: External review
Standard 8: Updating

Based on Clinical Practice Guidelines We Can Trust, Institute of Medicine, National Academies Press, 2011 (64).

improvement studies; consolidated health economic evaluation reporting standards (CHEERS) for health economics
studies (30–39); and statement on reporting of evaluation
studies in health informatics (STARE-HI) for studies of
health informatics.
HOW TO REPORT STUDIES
Screening Studies and Diagnostic Test Accuracy
Studies

There are no specific EQUATOR network recommendations


for reporting screening studies. In general, however, reports of screening studies should incorporate the items important
to diagnostic test accuracy studies. Screening is the application
of a test to detect a disease in an individual who has no known
signs or symptoms. The purpose of screening is to prevent or delay the development of advanced disease through earlier detection, enabling treatment that is both less morbid and more effective.
Screening is distinct from other diagnostic tests in that patients undergoing screening are asymptomatic, and the chance
of having the disease of interest is lower in asymptomatic patients compared to those presenting with symptoms. For a
screening study, the most important factors to consider are
the characteristics of the population to be screened, the
screening regimens being compared, the diagnostic test performance of the screening test or tests, and the outcome measure selected. Additional considerations are the diagnostic
consequences that occur during a patient's screening
episode, such as additional downstream testing for those
with positive test results and follow-up monitoring of those
with negative test results.
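Because screened populations are asymptomatic and disease prevalence is low, even a highly accurate test can yield a modest positive predictive value. The following minimal sketch in Python, using hypothetical sensitivity, specificity, and prevalence values chosen purely for illustration, shows how predictive values shift with prevalence:

def predictive_values(sensitivity, specificity, prevalence):
    """Apply Bayes' theorem to obtain PPV and NPV from test characteristics and prevalence."""
    tp = sensitivity * prevalence                # true positives per unit of population
    fp = (1 - specificity) * (1 - prevalence)    # false positives
    fn = (1 - sensitivity) * prevalence          # false negatives
    tn = specificity * (1 - prevalence)          # true negatives
    return tp / (tp + fp), tn / (tn + fn)        # (PPV, NPV)

# Hypothetical test: sensitivity 0.90, specificity 0.95
for prevalence in (0.30, 0.05, 0.005):           # symptomatic clinic versus screening-level prevalence
    ppv, npv = predictive_values(0.90, 0.95, prevalence)
    print(f"prevalence {prevalence:.3f}: PPV {ppv:.2f}, NPV {npv:.3f}")

With the same hypothetical test, the PPV falls from roughly 0.9 at 30% prevalence to below 0.1 at 0.5% prevalence, which is why the population to be screened and the downstream consequences of positive results deserve explicit description.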
Studies of diagnostic tests evaluate a test for diagnosing a
disease by comparing the test in patients with and without
disease using a reference standard. A diagnostic test accuracy
study provides evidence on how well a test correctly identifies
or rules out disease and informs subsequent decisions about
treatment for clinicians, their patients, and health care providers. An example would be an assessment of the test accuracy of computed tomography pulmonary angiography to


detect pulmonary embolism (PE) in patients with suspected


PE, such as the PIOPED II trial (40). A frequently recommended and used guideline for the reporting of diagnostic test
accuracy research is STARD (13–28). The objective of
the STARD initiative is to improve the accuracy and
completeness of reporting of studies of diagnostic accuracy,
to allow readers to assess the potential for bias in the study
(internal validity), and to evaluate its generalizability
(external validity) (41). The STARD statement consists of a
checklist of 25 items and recommends the use of a flow diagram which describes the design of the study and the flow of
patients (Appendix Table 1). Table 1 outlines all 25 items with
an explanation of each item. More than 200 biomedical
journals encourage the use of the STARD statement in their
instructions for authors (41). Its main advantage is a
systematic approach to addressing the key components of
the study design, conduct, and analysis of diagnostic accuracy
studies for complete and accurate reporting. A checklist and
flowchart guide the author and/or reviewer to ensure that
all key components are addressed. Flaws in study design can
lead to biased, optimistic estimates of diagnostic accuracy.
Exaggerated and biased results from poorly designed and reported diagnostic studies could ultimately lead to erroneous
practices in clinical care and inflated health care costs. Using
the STARD criteria for complete and accurate reporting
allows the reader to detect the potential for bias in the study
(internal validity) and to assess the generalizability and applicability of the results (external validity).
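As an illustration of the estimates STARD asks for (items 19 and 21 in Table 1), the following minimal Python sketch derives sensitivity, specificity, and predictive values with conventional 95% confidence intervals from a hypothetical cross tabulation of index test results against the reference standard; the counts are invented for illustration, and a simple Wald interval is used where exact methods may be preferable for small samples:

import math

def proportion_with_ci(successes, total, z=1.96):
    """Point estimate and Wald 95% confidence interval for a proportion."""
    p = successes / total
    se = math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical 2 x 2 table: index test result versus reference standard
tp, fp, fn, tn = 90, 15, 10, 185

measures = [("Sensitivity", tp, tp + fn),
            ("Specificity", tn, tn + fp),
            ("PPV", tp, tp + fp),
            ("NPV", tn, tn + fn)]
for name, successes, total in measures:
    estimate, lower, upper = proportion_with_ci(successes, total)
    print(f"{name}: {estimate:.2f} (95% CI {lower:.2f} to {upper:.2f})")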
There are issues with STARD. Imaging diagnostic test
technology may change faster than other diagnostic accuracy
test technology. Therefore, the generation of imaging test
technology may be very important; however, this information
is not built into STARD. There are practical issues with diagnostic imaging tests. The reference standard is considered to
be the best available method for establishing the presence or
absence of the disease. The reference standard can be a single
method, or a combination of methods, to establish the presence of disease. It can include laboratory tests, imaging tests,
and pathology, but also dedicated clinical follow-up of subjects.
Although it is preferred to use the same reference standard
in all patients in a study, this may not always be possible
with diagnostic imaging tests. When multiple criteria are
used for the reference standard, it is important to describe
the rationale, patient selection, and application in the study
design to avoid bias.
Diagnostic tests are developed and improved at a fast pace but
frequently increase health care costs. It is no longer acceptable in
this era of evidence-based health care decision making to omit
critical information required for the readership, regulatory
bodies, and insurers to determine the value of a diagnostic
test. This is particularly important, because the evidence standards for regulatory test approval are unfamiliar to many in
the medical and clinical research community and do not readily
crosswalk to the elements embedded in STARD (42). However,
the demand for diagnostics will likely increase as health care
moves to more personalized medicine.

Research
Study Design

Reporting Guidelines
Provided For

Reporting Guideline
Acronym

Reporting Guideline Web


Site URL

Full Text if
Available

Full Bibliographic Reference

Studies of diagnostic
accuracy

STARD

http://www.stard-statement.org/

Full-text PDF documents of


the STARD Statement,
checklist, flow diagram
and the explanation and
elaboration document

Bossuyt PM, Reitsma JB, Bruns DE,


Gatsonis CA, Glasziou PP, Irwig LM,
Lijmer JG, Moher D, Rennie D, de Vet
HC. Towards complete and accurate
reporting of studies of diagnostic
accuracy: the STARD initiative.
Standards for Reporting of Diagnostic
Accuracy.

Clinical trials, experimental


studies

Parallel group randomised


trials

CONSORT

http://www.consort-statement.org/

Full-text PDF documents of


the CONSORT 2010
Statement, CONSORT
2010 checklist,
CONSORT 2010 flow
diagram, and the
CONSORT 2010
Explanation and
Elaboration document

Schulz KF, Altman DG, Moher D, for the


CONSORT Group. CONSORT 2010
Statement: updated guidelines for
reporting parallel group randomised
trials.

Trials assessing
nonpharmacologic
treatments

CONSORT
nonpharmacological
treatment
interventions

http://www.consort-statement.org/extensions/interventions/non-pharmacologic-treatment-interventions/

The full text of the extension


for trials assessing
nonpharmacologic
treatments

Boutron I, Moher D, Altman DG, Schulz


K, Ravaud P,
for the CONSORT group. Methods
and Processes
of the CONSORT Group: example of
an extension

Clin Chem. 2003; 49(1):1–6. PMID: 12507953 (17).
BMJ. 2003; 326(7379):41–44. PMID: 12511463 (14).
Radiology. 2003; 226(1):24–28. PMID: 12511664 (13).
Ann Intern Med. 2003; 138(1):40–44. PMID: 12513043 (27).
Am J Clin Pathol. 2003; 119(1):18–22. PMID: 12520693 (15).
Clin Biochem. 2003; 36(1):2–7. PMID: 12554053 (16).
Clin Chem Lab Med. 2003; 41(1):68–73. PMID: 12636052 (17).
Ann Intern Med. 2010; 152(11):726–32. PMID: 20335313 (8).
BMC Medicine. 2010; 8:18. PMID: 20334633 (7).
BMJ. 2010; 340:c332. PMID: 20332509 (5).
J Clin Epidemiol. 2010; 63(8):834–40. PMID: 20346629 (9).
Lancet. 2010; 375(9721):1136, supplementary webappendix.
Obstet Gynecol. 2010; 115(5):1063–70. PMID: 20410783 (10).
Open Med. 2010; 4(1):60–68.
PLoS Med. 2010; 7(3):e1000251. PMID: 20352064 (12).
Trials. 2010; 11:32. PMID: 20334632 (6).
Ann Intern Med. 2008: W60–W67. PMID: 18283201 (75).


Diagnostic test accuracy


TABLE 5. Reporting Guidelines by Research Study Design, Acronym, Web site URL, and Bibliographic Reference

Reporting randomised trials


in journal and conference
abstracts

CONSORT for abstracts

Reporting of pragmatic trials


in health care

CONSORT pragmatic
trials

http://www.consort-statement.org/extensions/designs/pragmatic-trials/

Reporting of harms in
randomized trials

CONSORT Harms

http://www.consort-statement.org/extensions/data/harms/

Patient-reported outcomes
in randomized trials

CONSORT-PRO

http://www.consort-statement.org/extensions/data/pro/

The full text of the extension


for patient reported
outcomes (PROs)

Reporting of noninferiority
and equivalence
randomized trials

CONSORT noninferiority

http://www.consort-statement.org/extensions/designs/noninferiority-and-equivalence-trials/

The full text of the extension


for noninferiority and
equivalence randomized
trials

Defining standard protocol


items for clinical trials

SPIRIT

http://www.spirit-statement.org/

The full text of the SPIRIT


2013 Statement

Systematic reviews and


meta-analyses

PRISMA

http://www.prisma-statement.org/

Full-text PDF documents of


the PRISMA Statement,
checklist, flow diagram
and the PRISMA

The full text of the extension


for journal and
conference abstracts

The full text of the extension


for pragmatic trials in
health care

BMJ. 2012; 345:e5661. PMID: 22951546 (51).
Lancet. 2008; 371(9609):281–283. PMID: 18221781 (76).
BMJ. 2008; 337:a2390. PMID: 19001484 (77).
Ann Intern Med. 2004; 141(10):781–788. PMID: 15545678 (78).
JAMA. 2013; 309(8):814–822. PMID: 23443445 (79).
JAMA. 2012; 308(24):2594–2604. PMID: 23268518 (80).
Ann Intern Med. 2013; 158(3):200–207. PMID: 23295957 (81).
PLoS Med. 2009; 6(7):e1000097. PMID: 19621072 (82).
BMJ. 2009; 339:b2535. PMID: 19622551.


CONSORT Cluster


Systematic reviews/
meta-analyses/HTA

http://www.consort-statement.org/extensions/designs/cluster-trials/
http://www.consort-statement.org/extensions/data/abstracts/

The full text of the extension


for cluster randomised
trials

Cluster randomised trials

for trials assessing nonpharmacologic


treatments.
Campbell MK, Piaggio G, Elbourne DR,
Altman DG; for the CONSORT Group.
Consort 2010 statement: extension to
cluster randomised trials.
Hopewell S, Clarke M, Moher D, Wager E,
Middleton P, Altman DG, Schulz KF,
the CONSORT Group. CONSORT for
reporting randomised trials in journal
and conference abstracts.
Zwarenstein M, Treweek S, Gagnier JJ,
Altman DG, Tunis S, Haynes B, Oxman
AD, Moher D; CONSORT group;
Pragmatic Trials in Healthcare
(Practihc) group. Improving the
reporting of pragmatic trials: an
extension of the CONSORT
statement.
Ioannidis JPA, Evans SJW, Gøtzsche
PC, O'Neill RT, Altman DG, Schulz K,
Moher D, for the CONSORT Group.
Better Reporting of Harms in
Randomized Trials: An Extension of
the CONSORT Statement.
Calvert M, Blazeby J, Altman DG, Revicki
DA, Moher D, Brundage MD;
CONSORT PRO Group. Reporting of
patient-reported outcomes in
randomized trials: the CONSORT PRO
extension.
Piaggio G, Elbourne DR, Pocock SJ,
Evans SJ, Altman DG; CONSORT
Group. Reporting of noninferiority and
equivalence randomized trials:
extension of the CONSORT 2010
statement.
Chan A-W, Tetzlaff JM, Altman DG, Laupacis A, Gøtzsche PC, Krleža-Jerić K, Hróbjartsson A, Mann H, Dickersin K, Berlin J, Doré C, Parulekar W, Summerskill W, Groves T, Schulz K, Sox H, Rockhold FW, Rennie D, Moher D. SPIRIT 2013 Statement: defining standard protocol items for clinical trials.
Moher D, Liberati A, Tetzlaff J, Altman
DG, The PRISMA Group. Preferred
Reporting Items for Systematic

Explanation and
Elaboration

Reporting systematic
reviews in journal and
conference abstracts

PRISMA for Abstracts

Meta-analysis of individual
participant data

Economic evaluations

Economic evaluations of
health interventions

CHEERS

http://www.ispor.org/
taskforces/Economic
PubGuidelines.asp

Information about the


CHEERS Statement and a
full-text PDF copy of the
CHEERS checklist



Reviews and Meta-Analyses: The
PRISMA Statement.

BMJ. 2010; 340:c221. PMID: 20139215 (86).
Eur J Health Econ. 2013; 14(3):367–372. PMID: 23526140 (30).
Value Health. 2013; 16(2):e1–e5. PMID: 23538200 (31).
Clin Ther. 2013; 35(4):356–363. PMID: 23537754 (32).
Cost Eff Resour Alloc. 2013; 11(1):6. PMID: 23531194 (33).
BMC Med. 2013; 11:80. PMID: 23531108 (34).
BMJ. 2013; 346:f1049. PMID: 23529982 (35).
Pharmacoeconomics. 2013; 31(5):361–367. PMID: 23529207 (36).
J Med Econ. 2013; 16(6):713–719. PMID: 23521434 (37).
Int J Technol Assess Health Care. 2013; 29(2):117–122. PMID: 23587340 (38).
BJOG. 2013; 120(6):765–770. PMID: 23565948 (39).

Beller EM, Glasziou PP, Altman DG,


Hopewell S, Bastian H, Chalmers I,
Gøtzsche PC, Lasserson T, Tovey D;
PRISMA for Abstracts Group.
PRISMA for Abstracts: reporting
systematic reviews in journal and
conference abstracts.
Riley RD, Lambert PC, Abo-Zaid G.
Meta-analysis of individual participant
data: rationale, conduct, and
reporting.
Husereau D, Drummond M, Petrou S,
Carswell C, Moher D, Greenberg D,
Augustovski F, Briggs AH, Mauskopf
J, Loder E. Consolidated Health
Economic Evaluation Reporting
Standards (CHEERS) statement

Ann Intern Med. 2009; 151(4):264–269, W64. PMID: 19622511 (83).
J Clin Epidemiol. 2009; 62(10):1006–1012. PMID: 19631508 (84).
Open Med. 2009; 3(3):123–130.
PLoS Med. 2013; 10(4):e1001419. PMID: 23585737 (85).


Schriger DL. Suggestions for improving


the reporting of clinical research: the
role of narrative.

Qualitative research
interviews and focus
groups

COREQ

Observational studies

For completeness,
transparency and data
analysis in case reports
and data from the point of
care.

CARE

Reliability and agreement


studies

Reliability and agreement


studies

GRRAS

Qualitative research,
systematic reviews/
meta-analyses/HTA

Synthesis of qualitative
research

ENTREQ

Qualitative research

Qualitative research
interviews and focus
groups

COREQ

Mixed-methods studies

Mixed methods studies in


health services research

GRAMMS

http://www.care-statement.
org/

http://intqhc.oxfordjournals.
org/content/19/6/349.
long

The CARE checklist and the


CARE writing template for
authors

Full text

Tong A, Sainsbury P, Craig J.


Consolidated criteria for reporting
qualitative research (COREQ): a 32-item checklist for interviews and focus
groups.
Gagnier JJ, Kienle G, Altman DA, Moher
D, Sox H, Riley D; the CARE Group.
The CARE Guidelines: consensus-based clinical case reporting
guideline development.

Kottner J, Audigé L, Brorson S, Donner A, Gajewski BJ, Hróbjartsson A, Roberts C, Shoukri M, Streiner DL.
Guidelines for reporting reliability and
agreement studies (GRRAS) were
proposed.
Tong A, Flemming K, McInnes E, Oliver
S, Craig J. Enhancing transparency in
reporting the synthesis of qualitative
research: ENTREQ.
Tong A, Sainsbury P, Craig J.
Consolidated criteria for reporting
qualitative research (COREQ): a 32-item checklist for interviews and focus
groups.
O'Cathain A, Murphy E, Nicholl J. The
quality of mixed methods studies in
health services research.

Int J Qual Health Care. 2007; 19(6):349–357. PMID: 17872937.
BMJ Case Rep. 2013; http://dx.doi.org/10.1136/bcr2013-201554. PMID: 24155002 (88).
Global Adv Health Med. 2013; 10.7453/gahmj.2013.008.
Dtsch Arztebl Int. 2013; 110(37):603–608. PMID: 24078847. Full-text in English/Full-text in German.
J Clin Epidemiol. 2013. Epub ahead of print. PMID: 24035173 (89).
J Med Case Rep. 2013; 7(1):223. PMID: 24228906 (90).
J Diet Suppl. 2013; 10(4):381–90. PMID: 24237192 (91).
J Clin Epidemiol. 2011; 64(1):96–106. PMID: 21130355 (92).
Int J Nurs Stud. 2011; 48(6):661–671. PMID: 21514934 (93).
BMC Med Res Methodol. 2012; 12(1):181. PMID: 23185978 (74).
Int J Qual Health Care. 2007; 19(6):349–357. PMID: 17872937 (73).
J Health Serv Res Policy. 2008; 13(2):92–98. PMID: 18416914 (94).


Narrative in reports of
medical research


Clinical trials, diagnostic


accuracy studies,
experimental studies,
observational studies
Qualitative research

Ann Emerg Med. 2005; 45(4):437–443. PMID: 15795727 (87).


Quality improvement
studies

Quality improvement in
health care

SQUIRE

Health informatics

Evaluation studies in health


informatics

STARE-HI

http://squire-statement.org/



Davidoff F, Batalden P, Stevens D,
Ogrinc G, Mooney S. Publication
guidelines for quality improvement in
health care: evolution of the SQUIRE
project.

Talmon J, Ammenwerth E, Brender J, de


Keizer N, Nykänen P, Rigby M.
STARE-HI - Statement on reporting of
evaluation studies in Health
Informatics.

Qual Saf Health Care. 2008; 17 Suppl 1:i3–i9. PMID: 18836063 (95).
BMJ. 2009; 338:a3152. PMID: 19153129 (96).
Jt Comm J Qual Patient Saf. 2008; 34(11):681–687. PMID: 19025090 (97).
Ann Intern Med. 2008; 149(9):670–676. PMID: 18981488 (98).
J Gen Intern Med. 2008; 23(12):2125–2130. PMID: 18830766 (99).
Int J Med Inform. 2009; 78(1):1–9. PMID: 18930696 (100).


CARE, case reports; CHEERS, consolidated health economic evaluation reporting standards; CONSORT, consolidated standards of reporting trials; COREQ, consolidated criteria for reporting qualitative research; ENTREQ, enhancing transparency in reporting the synthesis of qualitative research; GRAMMS, good reporting of a mixed-methods study; GRRAS, guidelines for reporting reliability and agreement studies; HTA, health technology assessment; PRISMA, preferred reporting items for systematic reviews and meta-analyses; SPIRIT, standard protocol items:
recommendations for interventional trials; SQUIRE, standards for quality improvement reporting excellence; STARD, standards for reporting of diagnostic accuracy; STARE-HI, statement on
reporting of evaluation studies in health informatics.


Therapeutic Studies

The double-blind RCT is the reference standard approach


to trial design and is most comprehensively reported by
following the CONSORT Statement. The CONSORT
Statement is an evidence-based, minimum set of recommendations for reporting RCTs. It offers a standard way for authors to prepare reports of trial findings, facilitating their
complete and transparent reporting, and aiding their critical
appraisal and interpretation. The CONSORT Statement
comprises a 25-item checklist and a flow diagram (Appendix
Table 2), along with some brief descriptive text. The checklist
items focus on reporting how the trial was designed, analyzed,
and interpreted; the flow diagram displays the progress of all
participants through the trial (43). The checklist items pertain
to the content of the title, abstract, introduction, methods, results, discussion, and other information. The flow diagram is
intended to depict the passage of participants through an
RCT. The revised flow diagram depicts information from
four stages of a trial (enrollment, intervention allocation,
follow-up, and analysis). The diagram explicitly shows the
number of participants, for each intervention group, included
in the primary data analysis (43). The most up-to-date revision
of the CONSORT Statement is CONSORT 2010 (2–12).
Randomization is thought to improve the reliability and
validity of study results by mitigating selection biases. For
example, consider a case where both therapy A and
therapy B are options for treating a disease. It is known that
the treatment is less effective in older patients. A study is performed, which shows therapy A to be more effective. But the
mean age of patients undergoing therapy A is 10 years less
than the mean age in therapy B arm. This difference is statistically significant. Given this result, the conclusion that therapy A is better than therapy B is tempered by the knowledge
that any benefit may be due to the selection of younger patients
for therapy A. With randomization, the ages between therapy
groups should not be statistically different. Thus, if a benefit of
therapy A persists, the conclusion that therapy A is more
effective has more validity. Adherence to the principles in
the CONSORT Statement is intended to improve study
reporting by making explicit the selection, randomization, and
assignment criteria, ensuring that it is the treatments, rather
than patient factors, that drive results.
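A minimal simulation sketch in Python (with an entirely hypothetical cohort) illustrates the point: random assignment tends to balance a prognostic covariate such as age across arms, whereas selective assignment of younger patients to one therapy does not:

import random
random.seed(0)

# Hypothetical cohort of 200 patients with ages spread between 40 and 80 years
ages = [random.uniform(40, 80) for _ in range(200)]
mean = lambda values: sum(values) / len(values)

# Selective assignment: the youngest half preferentially receives therapy A
by_age = sorted(ages)
selective_a, selective_b = by_age[:100], by_age[100:]

# Randomized assignment: shuffle the cohort, then split it into two arms
shuffled = ages[:]
random.shuffle(shuffled)
random_a, random_b = shuffled[:100], shuffled[100:]

print(f"Selective assignment: mean age A {mean(selective_a):.1f} vs B {mean(selective_b):.1f}")
print(f"Random assignment: mean age A {mean(random_a):.1f} vs B {mean(random_b):.1f}")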
To assess the strengths and limitations of RCTs, readers
need and deserve to know the quality of their methods. Previous studies have shown that reports of low-quality RCTs,
compared to reports of higher quality ones, overestimate the
effectiveness of interventions by about 30% across a variety
of health care conditions. Advantages of the CONSORT
guidelines are increased transparency, allowing the reader to
better assess the strengths and limitations of the RCT, and a
reduced risk of overestimating the effect (44).
However, the CONSORT Statement is rarely directly
applicable to therapeutic trials in radiology, as these are usually
nonpharmacologic and/or technical studies including percutaneous and endovascular interventions. These studies have


specific issues that introduce bias into results reporting. These include challenges because of impossible or partial blinding,
clustering, the experience of providers and centers, and
both patient and provider willingness to undergo randomization (45). Thus, the CONSORT Statement has been specifically modified to guide reporting of nonpharmacologic
treatments (46). In addition, modifications have been made
to promote systematic reporting for cohort and comparative
study designs, using the principles from the CONSORT
Statement (45,47).
The most critical modifications to the CONSORT
Statement for reporting interventional and therapeutic study
results relate to treatment and provider details. Precise descriptions of the experimental treatment and comparator should be
reported (48). In addition, descriptions of how, why, and
when treatment is modified help to demonstrate sufficient
separation of study arms, as study arm crossover may limit
the conclusions that can be drawn from interventional trials
(49). Although rarely included, it is recommended that trials
describe in detail the volume of experience of providers, as patient outcomes are directly related to experience (48,50).
Correlations between patients could be introduced on the
basis of undocumented similarities in process, for instance by
a single provider or practice location. Clustering is the instance
in which the subjects in one trial arm are more like each other
than the subjects in another arm, thus reducing statistical power and introducing a challenge to interpreting study results.
For example, if therapy A is offered only in a homogeneous suburban community and therapy B is offered only in a high-density urban center, the subjects who undergo therapy A
will tend to be like each other but unlike those undergoing
therapy B. If the outcome is better in the population undergoing therapy A, fewer patients will be required to demonstrate a
statistically significant benefit of therapy A compared to therapy B. However, the benefit is due to the similarity of patients
by group, rather than the therapy itself. To overcome this clustering, it is necessary to recruit more patients and explicitly account for the similarities in each group. This accounting
should be described in the sample size calculation and statistical
reporting. Alternatively, if both therapy A and therapy B were
offered in either center and subjects were randomly assigned to
either therapy, sample size could be reduced.
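The accounting the text calls for is commonly expressed through the design effect, 1 + (m - 1) * ICC, where m is the average cluster size and ICC is the intracluster correlation coefficient; the sample size from an individually randomized design is inflated by this factor. A minimal Python sketch with hypothetical numbers:

import math

def design_effect(mean_cluster_size, icc):
    """Variance inflation of a cluster randomized design relative to individual randomization."""
    return 1 + (mean_cluster_size - 1) * icc

# Hypothetical trial: 250 patients per arm suffice under individual randomization,
# but patients are recruited through practices of about 20 patients each with an
# intracluster correlation of 0.05.
n_per_arm_individual = 250
deff = design_effect(mean_cluster_size=20, icc=0.05)
n_per_arm_cluster = math.ceil(n_per_arm_individual * deff)
print(f"Design effect {deff:.2f}: inflate each arm from {n_per_arm_individual} to {n_per_arm_cluster} patients")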
The main CONSORT Statement is based on the standard two-group parallel design. However, there are several
different types of randomized trials, some of which have
different designs, interventions, and data. To help improve
the reporting of these trials, the CONSORT Group has
been involved in extending and modifying the main
CONSORT Statement for application in these various areas
including design extensions such as cluster trials, noninferiority and equivalence trials and pragmatic trials; intervention extensions for nonpharmacological treatment interventions; and
data extensions for patient-reported outcomes and harms.
Future directions for CONSORT are new CONSORT extensions and updating the various existing extensions to
reflect the 2010 checklist. For additional details, it is

recommended that the reader review the Extended CONSORT statement, which may be found on the EQUATOR
network (1,51).
Meta-analyses

A systematic review sums up the best available research on a


specific question, synthesizing the results of several studies.
A meta-analysis is a quantitative statistical analysis of two or
more separate but similar experiments or studies to test the
pooled data for statistical significance (52). Systematic reviews
and meta-analyses have become increasingly important in
health care. Clinicians read them to keep up to date with their
field (53,54). When reporting a meta-analysis of diagnostic
test accuracy studies, important domains are problem formulation, data acquisition, quality appraisal of eligible studies, statistical analysis of quantitative data, and clinical interpretation
of the evidence. With regard to problem formulation, it is
important to define the question and objective of the review
and establish criteria for including studies in the review. For
data acquisition, a literature search should be conducted to
retrieve the relevant literature. For the quality appraisal of
eligible studies, variables of interest should be extracted
from the data. Studies should be assessed for quality and applicability to the clinical problem at hand. The evidence should
be summarized qualitatively and, if appropriate, quantitatively, that is, a meta-analysis should be performed. With regard to the
statistical analysis of quantitative data, diagnostic accuracy
should be estimated, the data displayed, and heterogeneity
and publication bias assessed. For statistical analysis of quantitative data, the robustness of estimates of diagnostic accuracy
using sensitivity analyses (if applicable) should be assessed,
and one should explore and explain heterogeneity in test
accuracy using subgroup analysis (if applicable). For clinical
interpretation of the evidence, there should be a graphic
display of how the evidence alters the pretest probability,
yielding the post-test probability. This is a lot to report.
Thankfully, there is a guideline for reporting systematic reviews and meta-analyses, PRISMA (29).
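For the post-test probability calculation mentioned above, pre-test odds are multiplied by a likelihood ratio and converted back to a probability. A minimal Python sketch, with hypothetical pooled likelihood ratios standing in for the results of a meta-analysis:

def post_test_probability(pre_test_probability, likelihood_ratio):
    """Convert a pre-test probability to a post-test probability via odds and a likelihood ratio."""
    pre_test_odds = pre_test_probability / (1 - pre_test_probability)
    post_test_odds = pre_test_odds * likelihood_ratio
    return post_test_odds / (1 + post_test_odds)

# Hypothetical pooled likelihood ratios: LR+ = 8.0 for a positive test, LR- = 0.1 for a negative test
for pre_test in (0.10, 0.30, 0.60):
    positive = post_test_probability(pre_test, 8.0)
    negative = post_test_probability(pre_test, 0.1)
    print(f"pre-test {pre_test:.2f}: post-test {positive:.2f} if positive, {negative:.2f} if negative")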
In 1996, to address the suboptimal reporting of meta-analyses, an international group developed a guidance called
the Quality Of Reporting Of Meta-analyses Statement,
which focused on the reporting of meta-analyses of RCTs
(55). In 2009, the guideline was updated to address several
conceptual and practical advances in the science of systematic
reviews and was renamed PRISMA. The aim of the PRISMA
Statement is to help authors report a wide array of systematic
reviews to assess the benefits and harms of a health care intervention. PRISMA focuses on ways in which authors can
ensure the transparent and complete reporting of systematic
reviews and meta-analyses and has adopted the definitions of
systematic review and meta-analysis used by the Cochrane
Collaboration (56). PRISMA is an evidence-based minimum
set of items for reporting in systematic reviews and meta-analyses, the aim of which is to help authors improve the
reporting of systematic reviews and meta-analyses (29).

PRISMA focuses on randomized trials, but PRISMA can


also be used as a basis for reporting systematic reviews of other
types of research, particularly evaluations of interventions.
The PRISMA Statement consists of a 27-item checklist and
a four-phase flow diagram (Appendix Table 3). The advantages of using PRISMA when reporting a systematic review
and meta-analysis are the inclusion of assessments of bias,
such as publication or small sample size bias, and of heterogeneity.
This is important as there is overwhelming evidence for the
existence of these biases and their impact on the results of
systematic reviews (57). Even when the possibility of publication bias is assessed, there is no guarantee that systematic reviewers have assessed or interpreted it appropriately, and the
absence of reporting such an assessment does not necessarily
indicate that it was not done. However, reporting an assessment of possible publication bias is likely to be a marker of
the thoroughness of the conduct of the systematic review (57).
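Heterogeneity is commonly quantified alongside the pooled estimate with Cochran's Q and the I-squared statistic. A minimal Python sketch of inverse-variance (fixed-effect) pooling, using hypothetical study estimates and standard errors:

def pool_fixed_effect(estimates, standard_errors):
    """Inverse-variance pooled estimate with Cochran's Q and I-squared for heterogeneity."""
    weights = [1 / se ** 2 for se in standard_errors]
    pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
    q = sum(w * (est - pooled) ** 2 for w, est in zip(weights, estimates))
    degrees_of_freedom = len(estimates) - 1
    i_squared = max(0.0, (q - degrees_of_freedom) / q) if q > 0 else 0.0
    return pooled, q, i_squared

# Hypothetical log odds ratios and standard errors from five primary studies
estimates = [0.8, 1.1, 0.4, 0.9, 1.5]
standard_errors = [0.30, 0.25, 0.40, 0.20, 0.35]
pooled, q, i_squared = pool_fixed_effect(estimates, standard_errors)
print(f"Pooled log odds ratio {pooled:.2f}; Q = {q:.1f}; I-squared = {100 * i_squared:.0f}%")

An I-squared above roughly 50% is conventionally read as substantial heterogeneity, prompting random-effects modeling or subgroup analysis.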
A limitation of PRISMA with regard to imaging studies is
that it is primarily designed for the reporting of systematic reviews and meta-analyses of therapeutic studies and RCTs,
which are not the predominant research designs performed
in radiology. Another issue is methodological differences in
performing and reporting systematic reviews and meta-analyses of diagnostic imaging accuracy studies compared to
therapeutic studies. Therefore, some of the PRISMA items
are not applicable to the evidence synthesis of diagnostic imaging accuracy studies. PRISMA is useful but can be difficult
to apply in diagnostic imaging accuracy studies because of the
quality and variability of the studies available. Systematic reviews and meta-analyses of diagnostic imaging accuracy the
studies have inherent high heterogeneity, and it is uncertain
if test for heterogeneity listed in PRISMA is applicable to
these systematic reviews and meta-analyses. Developing a
new reporting guideline based on the current PRISMA, but
designed for reporting systematic reviews and meta-analyses
of diagnostic imaging accuracy studies would be a helpful
future direction. This reporting guideline would have to
take into account the methodological differences of reporting
systematic reviews and meta-analyses of diagnostic imaging
accuracy studies. However, PRISMA is a reasonable tool to
ensure the transparent and complete reporting of systematic
reviews and meta-analyses.
Cost-Effectiveness Assessments

Economic evaluations of health interventions pose a particular


challenge for reporting because substantial information on
costs, outcomes, and health systems must be conveyed to allow
scrutiny of findings. Despite a growth in published health economic reports, existing reporting guidelines are not widely
adopted. Challenges include having to assess international
studies with differing systems and cost structures, studies
assessing multiple end points and multiple stakeholder perspectives, and alternative time horizons. Consolidating and
updating existing guidelines and promoting their use in an
efficient manner also add complexity. A checklist is one way


to help authors, editors, and peer reviewers use guidelines to


improve reporting (3039).
In health care, if newer innovations are more expensive, researchers may conduct CEA, and/or a specific type of CEA,
cost-utility analysis. These are derivatives of cost-benefit
analysis (CBA) and have increased in use in health care and
in imaging over time (58,59). Elixhauser et al. summarized
cost-benefit and CEA studies between 1979 and 1993 (60).
CEA researchers generally have not measured benefit effects
using monetary metrics, as done in traditional CBAs. In
1993, the US Panel on Cost-Effectiveness in Health
and Medicine was convened to recommend standards. Their
recommendations for standard reporting of reference case analyses are presented in Table 2 (61). The panel recommended
that reports of cost effectiveness should allow determination of
whether the results can be juxtaposed with those of other
CEAs. Elements to include in a journal report are summarized
in the checklist (61).
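Item 24 of that checklist asks for reference case results, discounted at 3% and undiscounted, as incremental costs, incremental effectiveness, and incremental cost-effectiveness ratios. A minimal Python sketch of those calculations, using hypothetical annual costs and quality-adjusted life years (QALYs) for a new strategy and its comparator:

def present_value(yearly_values, discount_rate):
    """Discount a yearly stream of costs or QALYs to present value (year 0 is undiscounted)."""
    return sum(value / (1 + discount_rate) ** year for year, value in enumerate(yearly_values))

# Hypothetical 5-year costs (USD) and QALYs per patient
costs_new, qalys_new = [12000, 2000, 2000, 2000, 2000], [0.85, 0.85, 0.84, 0.83, 0.82]
costs_old, qalys_old = [5000, 3000, 3000, 3000, 3000], [0.80, 0.79, 0.78, 0.77, 0.76]

for rate in (0.0, 0.03):  # undiscounted and the reference case 3% discount rate
    incremental_cost = present_value(costs_new, rate) - present_value(costs_old, rate)
    incremental_qalys = present_value(qalys_new, rate) - present_value(qalys_old, rate)
    icer = incremental_cost / incremental_qalys
    print(f"discount rate {rate:.0%}: incremental cost ${incremental_cost:,.0f}, "
          f"incremental QALYs {incremental_qalys:.3f}, ICER ${icer:,.0f} per QALY")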
The International Society for Pharmacoeconomics and
Outcomes Research (ISPOR) quality improvement in cost-effectiveness research task force recommended that the society
help make guidelines available to authors and reviewers to
support the quality, consistency, and transparency of health
economic and outcomes research reporting in the biomedical
literature. The concept for this task force arose in response
from the results of a survey of medical journal editors. The
survey revealed that the vast majority of respondents (either
submitting authors or reviewers) used no guidelines, requirements, or checklists for health economics or outcomes
research. Most respondents also indicated that they would
be willing to incorporate guidelines if they were made available by a credible professional health economics and outcomes
research (HEOR) organization. Although there are a number
of different health economic guidelines and checklists available in the public domain for conduct and reporting
HEOR, they are not widely used in biomedical publishing.
However, there appears to be fairly broad consensus among
available recommendations and guidelines, as well as stability
of content for several years with regard to reporting of
HEOR. Under its mandate to examine issues in quality
improvement of CER, the ISPOR RCT-CEA Task Force
Report of core recommendations for conducting economic
analyses alongside clinical trials was developed and is provided
in Table 3 (62). The CHEERS Statement attempts to optimize the reporting of health economic evaluations and
consolidate and update previous health economic evaluation
guidelines into one current, useful reporting guidance
(Appendix Table 4) (3039). The primary audiences for the
CHEERS statement are researchers reporting economic
evaluations and the editors and peer reviewers assessing
them for publication. Economic evaluations of health
interventions pose a particular challenge for reporting. The
advantage of CHEERS is it consolidates and updates
existing guidelines and promotes their use in a user-friendly
manner. CHEERS should lead to better reporting, and ultimately better health decisions (3039).


Recommendations and/or Guidelines

Guidelines represent a bridge between research and clinical practice and were defined in 1990 by the Institute of Medicine Committee to Advise the Public Health Service on Clinical Practice Guidelines (Clinical Practice Guidelines: Directions for a New Program. Washington, DC: National Academy Press, 1990):

"Practice guidelines are systematically developed statements to assist practitioner and patient decisions about appropriate health care for specific clinical circumstances."
Guidelines, like all research publications, can be formatted
in many ways. In 2011, the US Institute of Medicine published eight standards for developing guidelines, summarized
in Table 4 (63). These standards include the expectation that recommendations detail precise actions and the circumstances under which they should be performed. Of note, these standards also require a review of the guidelines with scientific and/or clinical experts and organizations, patients, and representatives of the public.
The National Guideline Clearinghouse is hosted by the Agency for Healthcare Research and Quality (AHRQ) and contains over 2600 guidelines, including guidelines from the American College of Radiology (64). The guidelines in the
database come from multiple sources. Overlapping guidelines
from different organizations may have different intended outcomes or be designed for a different patient population. Thus,
some of the guidelines in the National Guideline Clearinghouse contradict each other, for example, breast cancer
screening guidelines.
Medical Education Studies

Types of medical education studies include curricular innovations, which may follow the Kern six-step process (problem identification, targeted needs assessment, goals and objectives, choosing educational strategies, implementation, and evaluation); consensus conference proceedings identifying and addressing knowledge gaps, which may use a formal process to achieve consensus such as the Delphi method; qualitative research studies; quantitative research studies; and
mixed-methods research studies (65,66). Curricular innovations can be subjective, assessing learner satisfaction or self-reported confidence, or objective, assessing knowledge, skills, attitudes, behaviors, or performance (65). Educational studies can be descriptive, such as case reports/case series, correlational (ecologic) studies, or cross-sectional studies. They can also be analytical, such as case-control studies, cohort/prospective studies, or RCTs (67). Medical education study designs include true experimental designs, such as prospective cohort studies, which have a pretest or post-test with control group design; the Solomon four-group design, which has two intervention and two control groups, half of each group taking the pretest and all taking the post-test; and the post-test only with control group design. Quasi-experimental designs include the time series design (repeated testing of the same group), the nonequivalent control group (comparison group) design, the separate sample pretest or post-test design, and the separate sample pretest or post-test with control group design. Pre-experimental designs include the one-group pretest or post-test design, the static group comparison design (equivalent to cross-sectional studies), and case studies
(68). Most often, educational case reports describe new
curricula, but case reports of unusual educational problems
may warrant publication as well. Correlational (ecologic) studies examine associations between exposures and an outcome, with the unit of analysis being larger than the individuals exposed, such as a geographic region. Although there are several types of cross-sectional study designs, the most common in medical education research are survey studies.
Previous authors have looked at the subject matter of educational research projects and found that the majority of publications focused on subjective measures such as trainee assessment
and satisfaction (69). Less often, medical education research has
focused on faculty performance. Educational projects can
focus on aspects of professionalism such as ethics, morality,
tenure, career choice, and promotion, but these types of projects are relatively uncommon. The impact of education on patient clinical outcomes should be studied but is difficult to evaluate, both because of the distance between the education received and clinical practice and because of confounding factors that affect medical practitioners. The behavior of patients as a result of educational intervention could also be studied, such as the impact of patient educational efforts on the course of their chronic illness, but this type of study is rare. Similarly, cost-effectiveness studies on education and teaching are difficult to carry out because of the difficulty of quantifying the costs and effectiveness of teaching and educational interventions
(69). A conceptual framework is a way of thinking about and
framing the research question for a study, representing how
educational theories, models, and systems work. The framework used to guide a study will determine which research aspects to focus on. Well-designed studies will pose the research
question in the context of the conceptual framework being used, such as established and validated educational theories, models,
and evidence-based medical education guidelines (70).
Bordage found that the main weaknesses of medical educational research were a sample size that was too small or biased; a data instrument that was inappropriate, suboptimal, or insufficiently described; insufficient data presented; inaccurate or
inconsistent data reported; defective tables or figures;
text too difficult to follow or understand; insufficient or
incomplete problem statement; statistical analysis either inappropriate or incomplete or insufficiently described; overinterpretation of results; and review of literature which was
inadequate, incomplete, inaccurate, or outdated (71).
Conversely, good-quality medical educational research had a
well-designed study; a well-written manuscript; practical,
useful implications; a sample size that was sufficiently large
to detect an effect; a well-stated and -formulated problem; a
research question that is novel, important and/or timely, and
relevant and/or critical; a review of the literature that is thoughtful and/or focused and/or up to date; an interpretation of the results that took into account the limitations of the study; and/or a unique approach to data analysis (71).
There are no specific EQUATOR network recommendations for reporting medical educational research studies. Seven
domains for improving the reporting of methods and results in
educational clinical trials have been proposed by Stiles et al.
(72). These include the introduction and background;
outcome measures; sample selection; interventions; statistical
plan; adverse events; and results (72).
Qualitative research interviews and focus groups can be a
part of medical education research. Qualitative research explores complex phenomena encountered by clinicians, health
care providers, policy makers, and consumers. A checklist for
the explicit and comprehensive reporting of qualitative studies (in-depth interviews and focus groups) has been developed: the Consolidated Criteria for Reporting Qualitative Research (COREQ), a 32-item checklist (Appendix Table 5) (73). The checklist items are grouped into three domains: research team and reflexivity; study design; and data analysis and reporting. The checklist can be found at the EQUATOR network. An advantage of COREQ is that it can help researchers to report important aspects of the research team, study methods, context of the study, findings, analysis, and interpretations. It can also enable readers to identify poorly designed studies and inadequate reporting, which can lead to inappropriate application
of qualitative research in decision making, health care, health
policy, and future research (73).
In addition, there is a reporting guideline for syntheses of multiple qualitative studies. Such syntheses can pull together data across different contexts, generate new theoretical or conceptual models, identify research gaps, and provide evidence for the development, implementation, and evaluation of health interventions. This reporting guideline, ENTREQ (Enhancing Transparency in Reporting the Synthesis of Qualitative Research), is a 21-item checklist (Appendix Table 6), also available at the EQUATOR network (74).
CONCLUSIONS
The reporting of diagnostic test accuracy studies, screening studies, therapeutic studies, systematic reviews and meta-analyses, cost-effectiveness studies, recommendations and/or guidelines, and medical education studies is discussed in this article. The available guidelines, which can be found at the EQUATOR network, are summarized in Table 5. We also hope that this article can be used in academic programs to educate faculty and trainees about the available resources at the EQUATOR network to improve our health research.
REFERENCES
1. The EQUATOR Network website. http://www.equator-network.org/.
Accessed December 21, 2013.
2. Moher D, Hopewell S, Schulz KF, et al. CONSORT 2010 explanation and
elaboration: updated guidelines for reporting parallel group randomised
trials. BMJ 2010; 340:c869.

3. Moher D, Hopewell S, Schulz KF, et al. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. J Clin Epidemiol 2010; 63(8):e1–e37.
4. Moher D, Hopewell S, Schulz KF, et al. CONSORT 2010 explanation and
elaboration: updated guidelines for reporting parallel group randomised
trials. Int J Surg 2012; 10(1):2855.
5. Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated
guidelines for reporting parallel group randomised trials. BMJ 2010;
340:c332.
6. Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated
guidelines for reporting parallel group randomised trials. Trials 2010; 11:32.
7. Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated
guidelines for reporting parallel group randomised trials. BMC Med
2010; 8:18.
8. Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated
guidelines for reporting parallel group randomized trials. Ann Intern
Med 2010; 152(11):726732.
9. Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated
guidelines for reporting parallel group randomised trials. J Clin Epidemiol
2010; 63(8):834840.
10. Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated
guidelines for reporting parallel group randomized trials. Obstet Gynecol
2010; 115(5):10631070.
11. Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated
guidelines for reporting parallel group randomised trials. Int J Surg
2011; 9(8):672677.
12. Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated
guidelines for reporting parallel group randomised trials. PLoS Med
2010; 7(3):e1000251.
13. Bossuyt PM, Reitsma JB, Bruns DE, et al. Towards complete and accurate reporting of studies of diagnostic accuracy: The STARD Initiative.
Radiology 2003; 226(1):2428.
14. Bossuyt PM, Reitsma JB, Bruns DE, et al. Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative.
BMJ 2003; 326(7379):4144.
15. Bossuyt PM, Reitsma JB, Bruns DE, et al. Toward complete and accurate
reporting of studies of diagnostic accuracy. The STARD initiative. Am J
Clin Pathol 2003; 119(1):1822.
16. Bossuyt PM, Reitsma JB, Bruns DE, et al. Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. Clin
Biochem 2003; 36(1):27.
17. Bossuyt PM, Reitsma JB, Bruns DE, et al. Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. Clin
Chem Lab Med 2003; 41(1):6873.
18. Bossuyt PM, Reitsma JB, Bruns DE, et al. [Reporting studies of diagnostic accuracy according to a standard method; the Standards for Reporting of Diagnostic Accuracy (STARD)]. Ned Tijdschr Geneeskd 2003;
147(8):336340.
19. Bossuyt PM, Reitsma JB, Bruns DE, et al. Toward complete and accurate
reporting of studies of diagnostic accuracy: the STARD initiative. Acad
Radiol 2003; 10(6):664669.
20. Bossuyt PM, Reitsma JB, Bruns DE, et al. Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative.
AJR Am J Roentgenol 2003; 181(1):5155.
21. Bossuyt PM, Reitsma JB, Bruns DE, et al. Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. Ann
Clin Biochem 2003; 40(Pt 4):357363.
22. Bossuyt PM, Reitsma JB, Bruns DE, et al. Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. Clin
Radiol 2003; 58(8):575580.
23. Bossuyt PM, Reitsma JB, Bruns DE, et al. Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. The
Standards for Reporting of Diagnostic Accuracy Group. Croat Med J
2003; 44(5):635638.
24. Bossuyt PM, Reitsma JB, Bruns DE, et al. Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative.
Fam Pract 2004; 21(1):410.
25. Bossuyt PM, Reitsma JB, Bruns DE, et al. The STARD statement for reporting studies of diagnostic accuracy: explanation and elaboration.
Clin Chem 2003; 49(1):718.
26. Bossuyt PM, Reitsma JB, Bruns DE, et al. The STARD statement for reporting studies of diagnostic accuracy: explanation and elaboration.
Ann Intern Med 2003; 138(1):W1W12.


27. Bossuyt PM, Reitsma JB, Bruns DE, et al. Towards complete and accurate reporting of studies of diagnostic accuracy: The STARD Initiative.
Ann Intern Med 2003; 138(1):4044.
28. Pai M, Sharma S. Better reporting of studies of diagnostic accuracy. Indian J Med Microbiol 2005; 23(4):210213.
29. Liberati A, Altman DG, Tetzlaff J, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health
care interventions: explanation and elaboration. PLoS Med 2009; 6(7):
e1000100.
30. Husereau D, Drummond M, Petrou S, et al. Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. Eur J
Health Econ 2013; 14(3):367372.
31. Husereau D, Drummond M, Petrou S, et al. Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. Value
Health 2013; 16(2):e1e5.
32. Husereau D, Drummond M, Petrou S, et al. Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. Clin Ther
2013; 35(4):356363.
33. Husereau D, Drummond M, Petrou S, et al. Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. Cost Eff
Resour Alloc 2013; 11(1):6.
34. Husereau D, Drummond M, Petrou S, et al. Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. BMC Med
2013; 11:80.
35. Husereau D, Drummond M, Petrou S, et al. Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. BMJ
2013; 346:f1049.
36. Husereau D, Drummond M, Petrou S, et al. Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. Pharmacoeconomics 2013; 31(5):361367.
37. Husereau D, Drummond M, Petrou S, et al. Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. J Med
Econ 2013; 16(6):713719.
38. Husereau D, Drummond M, Petrou S, et al. Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. Int J Technol Assess Health Care 2013; 29(2):117122.
39. Husereau D, Drummond M, Petrou S, et al. Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. BJOG
2013; 120(6):765770.
40. Stein PD, Fowler SE, Goodman LR, et al. Multidetector computed tomography for acute pulmonary embolism. N Engl J Med 2006; 354(22):
23172327.
41. The STARD website. http://www.stard-statement.org/. Accessed
February 7, 2014.
42. Johnston KC, Holloway RG. There is nothing staid about STARD: progress in the reporting of diagnostic accuracy studies. Neurology 2006;
67(5):740741.
43. The CONSORT website. http://www.consort-statement.org/. Accessed
February 7, 2014.
44. Moher D, Jones A, Lepage L, et al. Use of the CONSORT statement and
quality of reports of randomized trials: a comparative before-and-after
evaluation. JAMA 2001; 285(15):19921995.
45. Reeves BC, Gaus W. Guidelines for reporting non-randomised studies.
Forsch Komplementarmed Klass Naturheilkd 2004; 11(Suppl 1):4652.
46. Boutron I, Moher D, Altman DG, et al. Extending the CONSORT statement to randomized trials of nonpharmacologic treatment: explanation
and elaboration. Ann Intern Med 2008; 148(4):295309.
47. Reeves BC. A framework for classifying study designs to evaluate health
care interventions. Forsch Komplementarmed Klass Naturheilkd 2004;
11(Suppl 1):1317.
48. Salem R, Lewandowski RJ, Gates VL, et al. Research reporting standards
for radioembolization of hepatic malignancies. J Vasc Interv Radiol 2011;
22(3):265278.
49. Kallmes DF, Comstock BA, Heagerty PJ, et al. A randomized trial of vertebroplasty for osteoporotic spinal fractures. N Engl J Med 2009; 361(6):
569579.
50. Jacquier I, Boutron I, Moher D, et al. The reporting of randomized clinical
trials using a surgical intervention is in need of immediate improvement: a
systematic review. Ann Surg 2006; 244(5):677683.
51. Campbell MK, Piaggio G, Elbourne DR, et al. Consort 2010 statement:
extension to cluster randomised trials. BMJ 2012; 345:e5661.
52. Davey J, Turner RM, Clarke MJ, et al. Characteristics of meta-analyses
and their component studies in the Cochrane database of systematic

reviews: a cross-sectional, descriptive analysis. BMC Med Res Methodol 2011; 11:160.
53. Oxman AD, Cook DJ, Guyatt GH. Users' guides to the medical literature. VI. How to use an overview. Evidence-Based Medicine Working Group. JAMA 1994; 272(17):1367–1371.
54. Swingler GH, Volmink J, Ioannidis JP. Number of published systematic reviews and global burden of disease: database analysis. BMJ 2003; 327(7423):1083–1084.
55. Moher D, Cook DJ, Eastwood S, et al. Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement. Quality of reporting of meta-analyses. Lancet 1999; 354(9193):1896–1900.
56. Green S, Higgins J, eds. Glossary. Cochrane handbook for systematic reviews of interventions 4.2.5. The Cochrane Collaboration. Available at: http://www.cochrane.org/resources/glossary.htm; 2005. Accessed February 21, 2014.
57. Moher D, Liberati A, Tetzlaff J, et al. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ 2009; 339:b2535.
58. Otero HJ, Rybicki FJ, Greenberg D, et al. Twenty years of cost-effectiveness analysis in medical imaging: are we improving? Radiology 2008; 249(3):917–925.
59. Cost-effectiveness analysis registry. https://research.tufts-nemc.org/cear4/. Accessed March 2, 2014.
60. Elixhauser A, Luce BR, Taylor WR, et al. Health care CBA/CEA: an update on the growth and composition of the literature. Med Care 1993; 31(7 Suppl):JS1–11, JS18–149.
61. Siegel JE, Weinstein MC, Russell LB, et al. Recommendations for reporting cost-effectiveness analyses. Panel on Cost-Effectiveness in Health and Medicine. JAMA 1996; 276(16):1339–1341.
62. Ramsey S, Willke R, Briggs A, et al. Good research practices for cost-effectiveness analysis alongside clinical trials: the ISPOR RCT-CEA Task Force report. Value Health 2005; 8(5):521–533.
63. Institute of Medicine website. http://www.nap.edu/catalog.php?record_id=13058. Accessed February 7, 2014.
64. National Guidelines Clearinghouse. http://www.guideline.gov. Accessed May 3, 2010.
65. Yarris LM, Deiorio NM. Education research: a primer for educators in emergency medicine. Acad Emerg Med 2011; 18(Suppl 2):S27–S35.
66. Chen FM, Bauchner H, Burstin H. A call for outcomes research in medical education. Acad Med 2004; 79(10):955–960.
67. Carney PA, Nierenberg DW, Pipas CF, et al. Educational epidemiology: applying population-based design and analytic approaches to study medical education. JAMA 2004; 292(9):1044–1050.
68. Lynch DC, Whitley TW, Willis SE. A rationale for using synthetic designs in medical education research. Adv Health Sci Educ Theory Pract 2000; 5(2):93–103.
69. Prystowsky JB, Bordage G. An outcomes research perspective on medical education: the predominance of trainee assessment and satisfaction. Med Educ 2001; 35(4):331–336.
70. Bordage G. Conceptual frameworks to illuminate and magnify. Med Educ 2009; 43(4):312–319.
71. Bordage G. Reasons reviewers reject and accept manuscripts: the strengths and weaknesses in medical education reports. Acad Med 2001; 76(9):889–896.
72. Stiles CR, Biondo PD, Cummings G, et al. Clinical trials focusing on cancer pain educational interventions: core components to include during planning and reporting. J Pain Symptom Manage 2010; 40(2):301–308.
73. Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care 2007; 19(6):349–357.
74. Tong A, Flemming K, McInnes E, et al. Enhancing transparency in reporting the synthesis of qualitative research: ENTREQ. BMC Med Res Methodol 2012; 12:181.
75. Boutron I, Moher D, Altman DG, et al. Methods and processes of the CONSORT Group: example of an extension for trials assessing nonpharmacologic treatments. Ann Intern Med 2008; 148(4):W60–W66.
76. Hopewell S, Clarke M, Moher D, et al. CONSORT for reporting randomised trials in journal and conference abstracts. Lancet 2008; 371(9609):281–283.

77. Zwarenstein M, Treweek S, Gagnier JJ, et al. Improving the reporting of pragmatic trials: an extension of the CONSORT statement. BMJ 2008; 337:a2390.
78. Ioannidis JP, Evans SJ, Gotzsche PC, et al. Better reporting of harms in
randomized trials: an extension of the CONSORT statement. Ann Intern
Med 2004; 141(10):781788.
79. Calvert M, Blazeby J, Altman DG, et al. Reporting of patient-reported outcomes in randomized trials: the CONSORT PRO extension. JAMA 2013;
309(8):814822.
80. Piaggio G, Elbourne DR, Pocock SJ, et al. Reporting of noninferiority and
equivalence randomized trials: extension of the CONSORT 2010 statement. JAMA 2012; 308(24):25942604.
81. Chan AW, Tetzlaff JM, Altman DG, et al. SPIRIT 2013 statement: defining
standard protocol items for clinical trials. Ann Intern Med 2013; 158(3):
200207.
82. Moher D, Liberati A, Tetzlaff J, et al. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med
2009; 6(7):e1000097.
83. Moher D, Liberati A, Tetzlaff J, et al. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med
2009; 151(4):264–269, W64.
84. Moher D, Liberati A, Tetzlaff J, et al. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. J Clin Epidemiol
2009; 62(10):10061012.
85. Beller EM, Glasziou PP, Altman DG, et al. PRISMA for abstracts: reporting systematic reviews in journal and conference abstracts. PLoS Med
2013; 10(4):e1001419.
86. Riley RD, Lambert PC, Abo-Zaid G. Meta-analysis of individual participant data: rationale, conduct, and reporting. BMJ 2010; 340:c221.
87. Schriger DL. Suggestions for improving the reporting of clinical research:
the role of narrative. Ann Emerg Med 2005; 45(4):437443.
88. Gagnier JJ, Kienle G, Altman DG, et al. The CARE guidelines: consensusbased clinical case reporting guideline development. BMJ Case Rep
2013; 2013.
89. Gagnier JJ, Kienle G, Altman DG, et al. The CARE guidelines: consensusbased clinical case report guideline development. J Clin Epidemiol 2014;
67(1):4651.
90. Gagnier JJ, Kienle G, Altman DG, et al. The CARE guidelines: consensusbased clinical case reporting guideline development. J Med Case Rep
2013; 7(1):223.
91. Gagnier JJ, Kienle G, Altman DG, et al. The CARE guidelines: consensusbased clinical case report guideline development. J Diet Suppl 2013;
10(4):381390.
92. Kottner J, Audige L, Brorson S, et al. Guidelines for Reporting Reliability
and Agreement Studies (GRRAS) were proposed. J Clin Epidemiol 2011;
64(1):96106.
93. Kottner J, Audige L, Brorson S, et al. Guidelines for Reporting Reliability
and Agreement Studies (GRRAS) were proposed. Int J Nurs Stud 2011;
48(6):661671.
94. OCathain A, Murphy E, Nicholl J. The quality of mixed methods studies in
health services research. J Health Serv Res Policy 2008; 13(2):9298.
95. Davidoff F, Batalden P, Stevens D, et al. Publication guidelines for quality
improvement in health care: evolution of the SQUIRE project. Qual Saf
Health Care 2008; 17(Suppl 1):i3i9.
96. Davidoff F, Batalden P, Stevens D, et al. Publication guidelines for quality
improvement studies in health care: evolution of the SQUIRE project.
BMJ 2009; 338:a3152.
97. Davidoff F, Batalden PB, Stevens DP, et al. Development of the SQUIRE
Publication Guidelines: evolution of the SQUIRE project. Jt Comm J Qual
Patient Saf 2008; 34(11):681687.
98. Davidoff F, Batalden P, Stevens D, et al. Publication guidelines for
improvement studies in health care: evolution of the SQUIRE Project.
Ann Intern Med 2008; 149(9):670676.
99. Davidoff F, Batalden P, Stevens D, et al. Publication guidelines for quality
improvement studies in health care: evolution of the SQUIRE project. J
Gen Intern Med 2008; 23(12):21252130.
100. Talmon J, Ammenwerth E, Brender J, et al. STARE-HIstatement on reporting of evaluation studies in health informatics. Int J Med Inform
2009; 78(1):19.


APPENDIX TABLE 1. STARD checklist for reporting of studies of diagnostic accuracy (13–28). (The published checklist includes an "On page #" column for authors to indicate where each item is reported.)

Title/Abstract/Keywords
1. Identify the article as a study of diagnostic accuracy (recommend MeSH heading sensitivity and specificity).
Introduction
2. State the research questions or study aims, such as estimating diagnostic accuracy or comparing accuracy between tests or across participant groups.
Methods: Participants
3. The study population: the inclusion and exclusion criteria, setting and locations where data were collected.
4. Participant recruitment: was recruitment based on presenting symptoms, results from previous tests, or the fact that the participants had received the index tests or the reference standard?
5. Participant sampling: was the study population a consecutive series of participants defined by the selection criteria in items 3 and 4? If not, specify how participants were further selected.
6. Data collection: was data collection planned before the index test and reference standard were performed (prospective study) or after (retrospective study)?
Methods: Test methods
7. The reference standard and its rationale.
8. Technical specifications of material and methods involved including how and when measurements were taken, and/or cite references for index tests and reference standard.
9. Definition of and rationale for the units, cut-offs and/or categories of the results of the index tests and the reference standard.
10. The number, training and expertise of the persons executing and reading the index tests and the reference standard.
11. Whether or not the readers of the index tests and reference standard were blind (masked) to the results of the other test and describe any other clinical information available to the readers.
Methods: Statistical methods
12. Methods for calculating or comparing measures of diagnostic accuracy, and the statistical methods used to quantify uncertainty (e.g. 95% confidence intervals).
13. Methods for calculating test reproducibility, if done.
Results: Participants
14. When study was performed, including beginning and end dates of recruitment.
15. Clinical and demographic characteristics of the study population (at least information on age, gender, spectrum of presenting symptoms).
16. The number of participants satisfying the criteria for inclusion who did or did not undergo the index tests and/or the reference standard; describe why participants failed to undergo either test (a flow diagram is strongly recommended).
Results: Test results
17. Time-interval between the index tests and the reference standard, and any treatment administered in between.
18. Distribution of severity of disease (define criteria) in those with the target condition; other diagnoses in participants without the target condition.
19. A cross tabulation of the results of the index tests (including indeterminate and missing results) by the results of the reference standard; for continuous results, the distribution of the test results by the results of the reference standard.
20. Any adverse events from performing the index tests or the reference standard.
Results: Estimates
21. Estimates of diagnostic accuracy and measures of statistical uncertainty (e.g. 95% confidence intervals).
22. How indeterminate results, missing data and outliers of the index tests were handled.
23. Estimates of variability of diagnostic accuracy between subgroups of participants, readers or centers, if done.
24. Estimates of test reproducibility, if done.
Discussion
25. Discuss the clinical applicability of the study findings.

STARD, standards for reporting of diagnostic accuracy.
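As an illustration of the quantities called for in items 12 and 21 of the checklist above, the following minimal sketch (with hypothetical counts, not taken from any cited study) computes sensitivity, specificity, and approximate 95% confidence intervals from a 2 x 2 cross tabulation of index test results against the reference standard:

import math

def proportion_ci(successes, total, z=1.96):
    """Wilson score interval for a proportion (approximate 95% CI by default)."""
    if total == 0:
        return (float("nan"), float("nan"))
    p = successes / total
    denom = 1 + z ** 2 / total
    centre = (p + z ** 2 / (2 * total)) / denom
    half_width = z * math.sqrt(p * (1 - p) / total + z ** 2 / (4 * total ** 2)) / denom
    return (centre - half_width, centre + half_width)

# Hypothetical 2 x 2 cross tabulation of index test vs. reference standard
tp, fp, fn, tn = 90, 10, 15, 185   # true/false positives, false/true negatives

sensitivity = tp / (tp + fn)   # proportion of those with the target condition detected
specificity = tn / (tn + fp)   # proportion of those without the condition correctly negative

sens_lo, sens_hi = proportion_ci(tp, tp + fn)
spec_lo, spec_hi = proportion_ci(tn, tn + fp)

print(f"Sensitivity {sensitivity:.2f} (95% CI {sens_lo:.2f}-{sens_hi:.2f})")
print(f"Specificity {specificity:.2f} (95% CI {spec_lo:.2f}-{spec_hi:.2f})")

Wilson score intervals are used here only for illustration; exact or other interval methods may be preferred depending on sample size and journal requirements.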

STARD flow diagram


APPENDIX TABLE 2. CONSORT 2010 checklist of information to include when reporting a randomized trial (2–12). (The published checklist includes a "Reported on page No" column for authors to complete.)

Title and abstract
1a. Identification as a randomised trial in the title
1b. Structured summary of trial design, methods, results, and conclusions
Introduction: Background and objectives
2a. Scientific background and explanation of rationale
2b. Specific objectives or hypotheses
Methods: Trial design
3a. Description of trial design (such as parallel, factorial) including allocation ratio
3b. Important changes to methods after trial commencement (such as eligibility criteria), with reasons
Methods: Participants
4a. Eligibility criteria for participants
4b. Settings and locations where the data were collected
Methods: Interventions
5. The interventions for each group with sufficient details to allow replication, including how and when they were actually administered
Methods: Outcomes
6a. Completely defined pre-specified primary and secondary outcome measures, including how and when they were assessed
6b. Any changes to trial outcomes after the trial commenced, with reasons
Methods: Sample size
7a. How sample size was determined
7b. When applicable, explanation of any interim analyses and stopping guidelines
Methods: Randomisation, sequence generation
8a. Method used to generate the random allocation sequence
8b. Type of randomisation; details of any restriction (such as blocking and block size)
Methods: Randomisation, allocation concealment mechanism
9. Mechanism used to implement the random allocation sequence (such as sequentially numbered containers), describing any steps taken to conceal the sequence until interventions were assigned
Methods: Randomisation, implementation
10. Who generated the random allocation sequence, who enrolled participants, and who assigned participants to interventions
Methods: Blinding
11a. If done, who was blinded after assignment to interventions (for example, participants, care providers, those assessing outcomes) and how
11b. If relevant, description of the similarity of interventions
Methods: Statistical methods
12a. Statistical methods used to compare groups for primary and secondary outcomes
12b. Methods for additional analyses, such as subgroup analyses and adjusted analyses
Results: Participant flow (a diagram is strongly recommended)
13a. For each group, the numbers of participants who were randomly assigned, received intended treatment, and were analysed for the primary outcome
13b. For each group, losses and exclusions after randomisation, together with reasons
Results: Recruitment
14a. Dates defining the periods of recruitment and follow-up
14b. Why the trial ended or was stopped
Results: Baseline data
15. A table showing baseline demographic and clinical characteristics for each group
Results: Numbers analysed
16. For each group, number of participants (denominator) included in each analysis and whether the analysis was by original assigned groups
Results: Outcomes and estimation
17a. For each primary and secondary outcome, results for each group, and the estimated effect size and its precision (such as 95% confidence interval)
17b. For binary outcomes, presentation of both absolute and relative effect sizes is recommended
Results: Ancillary analyses
18. Results of any other analyses performed, including subgroup analyses and adjusted analyses, distinguishing pre-specified from exploratory
Results: Harms
19. All important harms or unintended effects in each group (for specific guidance see CONSORT for harms)
Discussion: Limitations
20. Trial limitations, addressing sources of potential bias, imprecision, and, if relevant, multiplicity of analyses
Discussion: Generalisability
21. Generalisability (external validity, applicability) of the trial findings
Discussion: Interpretation
22. Interpretation consistent with results, balancing benefits and harms, and considering other relevant evidence
Other information: Registration
23. Registration number and name of trial registry
Other information: Protocol
24. Where the full trial protocol can be accessed, if available
Other information: Funding
25. Sources of funding and other support (such as supply of drugs), role of funders

CONSORT 2010 flow diagram


APPENDIX TABLE 3. The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) checklist (29). (The published checklist includes a "Reported on page #" column for authors to complete.)

Title
1. Identify the report as a systematic review, meta-analysis, or both.
Abstract: Structured summary
2. Provide a structured summary including, as applicable: background; objectives; data sources; study eligibility criteria, participants, and interventions; study appraisal and synthesis methods; results; limitations; conclusions and implications of key findings; systematic review registration number.
Introduction: Rationale
3. Describe the rationale for the review in the context of what is already known.
Introduction: Objectives
4. Provide an explicit statement of questions being addressed with reference to participants, interventions, comparisons, outcomes, and study design (PICOS).
Methods: Protocol and registration
5. Indicate if a review protocol exists, if and where it can be accessed (e.g., Web address), and, if available, provide registration information including registration number.
Methods: Eligibility criteria
6. Specify study characteristics (e.g., PICOS, length of follow-up) and report characteristics (e.g., years considered, language, publication status) used as criteria for eligibility, giving rationale.
Methods: Information sources
7. Describe all information sources (e.g., databases with dates of coverage, contact with study authors to identify additional studies) in the search and date last searched.
Methods: Search
8. Present full electronic search strategy for at least one database, including any limits used, such that it could be repeated.
Methods: Study selection
9. State the process for selecting studies (i.e., screening, eligibility, included in systematic review, and, if applicable, included in the meta-analysis).
Methods: Data collection process
10. Describe method of data extraction from reports (e.g., piloted forms, independently, in duplicate) and any processes for obtaining and confirming data from investigators.
Methods: Data items
11. List and define all variables for which data were sought (e.g., PICOS, funding sources) and any assumptions and simplifications made.
Methods: Risk of bias in individual studies
12. Describe methods used for assessing risk of bias of individual studies (including specification of whether this was done at the study or outcome level), and how this information is to be used in any data synthesis.
Methods: Summary measures
13. State the principal summary measures (e.g., risk ratio, difference in means).
Methods: Synthesis of results
14. Describe the methods of handling data and combining results of studies, if done, including measures of consistency (e.g., I²) for each meta-analysis.
Methods: Risk of bias across studies
15. Specify any assessment of risk of bias that may affect the cumulative evidence (e.g., publication bias, selective reporting within studies).
Methods: Additional analyses
16. Describe methods of additional analyses (e.g., sensitivity or subgroup analyses, meta-regression), if done, indicating which were pre-specified.
Results: Study selection
17. Give numbers of studies screened, assessed for eligibility, and included in the review, with reasons for exclusions at each stage, ideally with a flow diagram.
Results: Study characteristics
18. For each study, present characteristics for which data were extracted (e.g., study size, PICOS, follow-up period) and provide the citations.
Results: Risk of bias within studies
19. Present data on risk of bias of each study and, if available, any outcome level assessment (see item 12).
Results: Results of individual studies
20. For all outcomes considered (benefits or harms), present, for each study: (a) simple summary data for each intervention group, (b) effect estimates and confidence intervals, ideally with a forest plot.
Results: Synthesis of results
21. Present results of each meta-analysis done, including confidence intervals and measures of consistency.
Results: Risk of bias across studies
22. Present results of any assessment of risk of bias across studies (see item 15).
Results: Additional analysis
23. Give results of additional analyses, if done (e.g., sensitivity or subgroup analyses, meta-regression [see item 16]).
Discussion: Summary of evidence
24. Summarize the main findings including the strength of evidence for each main outcome; consider their relevance to key groups (e.g., healthcare providers, users, and policy makers).
Discussion: Limitations
25. Discuss limitations at study and outcome level (e.g., risk of bias), and at review-level (e.g., incomplete retrieval of identified research, reporting bias).
Discussion: Conclusions
26. Provide a general interpretation of the results in the context of other evidence, and implications for future research.
Funding
27. Describe sources of funding for the systematic review and other support (e.g., supply of data); role of funders for the systematic review.
PRISMA 2009 flow diagram


APPENDIX TABLE 4. Consolidated Health Economic Evaluation Reporting Standards (CHEERS) checklist. Items to include when reporting economic evaluations of health interventions (30–39). (The published checklist includes a "Reported on page No./line No." column for authors to complete.)

Title and abstract
1. Title: Identify the study as an economic evaluation or use more specific terms such as cost-effectiveness analysis, and describe the interventions compared.
2. Abstract: Provide a structured summary of objectives, perspective, setting, methods (including study design and inputs), results (including base case and uncertainty analyses), and conclusions.
Introduction
3. Background and objectives: Provide an explicit statement of the broader context for the study. Present the study question and its relevance for health policy or practice decisions.
Methods
4. Target population and subgroups: Describe characteristics of the base case population and subgroups analysed, including why they were chosen.
5. Setting and location: State relevant aspects of the system(s) in which the decision(s) need(s) to be made.
6. Study perspective: Describe the perspective of the study and relate this to the costs being evaluated.
7. Comparators: Describe the interventions or strategies being compared and state why they were chosen.
8. Time horizon: State the time horizon(s) over which costs and consequences are being evaluated and say why appropriate.
9. Discount rate: Report the choice of discount rate(s) used for costs and outcomes and say why appropriate.
10. Choice of health outcomes: Describe what outcomes were used as the measure(s) of benefit in the evaluation and their relevance for the type of analysis performed.
11a. Measurement of effectiveness (single study-based estimates): Describe fully the design features of the single effectiveness study and why the single study was a sufficient source of clinical effectiveness data.
11b. Measurement of effectiveness (synthesis-based estimates): Describe fully the methods used for identification of included studies and synthesis of clinical effectiveness data.
12. Measurement and valuation of preference-based outcomes: If applicable, describe the population and methods used to elicit preferences for outcomes.
13a. Estimating resources and costs (single study-based economic evaluation): Describe approaches used to estimate resource use associated with the alternative interventions. Describe primary or secondary research methods for valuing each resource item in terms of its unit cost. Describe any adjustments made to approximate to opportunity costs.
13b. Estimating resources and costs (model-based economic evaluation): Describe approaches and data sources used to estimate resource use associated with model health states. Describe primary or secondary research methods for valuing each resource item in terms of its unit cost. Describe any adjustments made to approximate to opportunity costs.
14. Currency, price date, and conversion: Report the dates of the estimated resource quantities and unit costs. Describe methods for adjusting estimated unit costs to the year of reported costs if necessary. Describe methods for converting costs into a common currency base and the exchange rate.
15. Choice of model: Describe and give reasons for the specific type of decision-analytical model used. Providing a figure to show model structure is strongly recommended.
16. Assumptions: Describe all structural or other assumptions underpinning the decision-analytical model.
17. Analytical methods: Describe all analytical methods supporting the evaluation. This could include methods for dealing with skewed, missing, or censored data; extrapolation methods; methods for pooling data; approaches to validate or make adjustments (such as half cycle corrections) to a model; and methods for handling population heterogeneity and uncertainty.
Results
18. Study parameters: Report the values, ranges, references, and, if used, probability distributions for all parameters. Report reasons or sources for distributions used to represent uncertainty where appropriate. Providing a table to show the input values is strongly recommended.
19. Incremental costs and outcomes: For each intervention, report mean values for the main categories of estimated costs and outcomes of interest, as well as mean differences between the comparator groups. If applicable, report incremental cost-effectiveness ratios.
20a. Characterising uncertainty (single study-based economic evaluation): Describe the effects of sampling uncertainty for the estimated incremental cost and incremental effectiveness parameters, together with the impact of methodological assumptions (such as discount rate, study perspective).
20b. Characterising uncertainty (model-based economic evaluation): Describe the effects on the results of uncertainty for all input parameters, and uncertainty related to the structure of the model and assumptions.
21. Characterising heterogeneity: If applicable, report differences in costs, outcomes, or cost-effectiveness that can be explained by variations between subgroups of patients with different baseline characteristics or other observed variability in effects that are not reducible by more information.
Discussion
22. Study findings, limitations, generalisability, and current knowledge: Summarise key study findings and describe how they support the conclusions reached. Discuss limitations and the generalisability of the findings and how the findings fit with current knowledge.
Other
23. Source of funding: Describe how the study was funded and the role of the funder in the identification, design, conduct, and reporting of the analysis. Describe other non-monetary sources of support.
24. Conflicts of interest: Describe any potential for conflict of interest of study contributors in accordance with journal policy. In the absence of a journal policy, we recommend authors comply with International Committee of Medical Journal Editors recommendations.
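As a brief illustration of items 8 and 9 above (time horizon and discount rate), costs and outcomes accruing in future years are conventionally converted to present values before incremental ratios are formed; a minimal sketch with a hypothetical annual cost of 1000 currency units and a 3% discount rate over three years is

$$PV=\sum_{t=0}^{T}\frac{c_t}{(1+r)^{t}}, \qquad \text{for example}\quad \frac{1000}{1.03^{0}}+\frac{1000}{1.03^{1}}+\frac{1000}{1.03^{2}}\approx 2913,$$

so discounted and undiscounted totals should be clearly distinguished when results are reported.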


APPENDIX TABLE 5. Consolidated criteria for reporting qualitative studies (COREQ): 32-item checklist (73).

Domain 1: Research team and reflexivity
Personal characteristics
1. Interviewer/facilitator: Which author/s conducted the interview or focus group?
2. Credentials: What were the researcher's credentials? e.g. PhD, MD
3. Occupation: What was their occupation at the time of the study?
4. Gender: Was the researcher male or female?
5. Experience and training: What experience or training did the researcher have?
Relationship with participants
6. Relationship established: Was a relationship established prior to study commencement?
7. Participant knowledge of the interviewer: What did the participants know about the researcher? e.g. personal goals, reasons for doing the research
8. Interviewer characteristics: What characteristics were reported about the interviewer/facilitator? e.g. bias, assumptions, reasons and interests in the research topic
Domain 2: Study design
Theoretical framework
9. Methodological orientation and theory: What methodological orientation was stated to underpin the study? e.g. grounded theory, discourse analysis, ethnography, phenomenology, content analysis
Participant selection
10. Sampling: How were participants selected? e.g. purposive, convenience, consecutive, snowball
11. Method of approach: How were participants approached? e.g. face-to-face, telephone, mail, email
12. Sample size: How many participants were in the study?
13. Non-participation: How many people refused to participate or dropped out? Reasons?
Setting
14. Setting of data collection: Where was the data collected? e.g. home, clinic, workplace
15. Presence of non-participants: Was anyone else present besides the participants and researchers?
16. Description of sample: What are the important characteristics of the sample? e.g. demographic data, date
Data collection
17. Interview guide: Were questions, prompts, guides provided by the authors? Was it pilot tested?
18. Repeat interviews: Were repeat interviews carried out? If yes, how many?
19. Audio/visual recording: Did the research use audio or visual recording to collect the data?
20. Field notes: Were field notes made during and/or after the interview or focus group?
21. Duration: What was the duration of the interviews or focus group?
22. Data saturation: Was data saturation discussed?
23. Transcripts returned: Were transcripts returned to participants for comment and/or correction?
Domain 3: Analysis and findings
Data analysis
24. Number of data coders: How many data coders coded the data?
25. Description of the coding tree: Did authors provide a description of the coding tree?
26. Derivation of themes: Were themes identified in advance or derived from the data?
27. Software: What software, if applicable, was used to manage the data?
28. Participant checking: Did participants provide feedback on the findings?
Reporting
29. Quotations presented: Were participant quotations presented to illustrate the themes/findings? Was each quotation identified? e.g. participant number
30. Data and findings consistent: Was there consistency between the data presented and the findings?
31. Clarity of major themes: Were major themes clearly presented in the findings?
32. Clarity of minor themes: Is there a description of diverse cases or discussion of minor themes?


APPENDIX TABLE 6. Enhancing transparency in reporting the synthesis of qualitative research: the ENTREQ statement (74).

1. Aim: State the research question the synthesis addresses.
2. Synthesis methodology: Identify the synthesis methodology or theoretical framework which underpins the synthesis, and describe the rationale for choice of methodology (e.g. meta-ethnography, thematic synthesis, critical interpretive synthesis, grounded theory synthesis, realist synthesis, meta-aggregation, meta-study, framework synthesis).
3. Approach to searching: Indicate whether the search was pre-planned (comprehensive search strategies to seek all available studies) or iterative (to seek all available concepts until theoretical saturation is achieved).
4. Inclusion criteria: Specify the inclusion/exclusion criteria (e.g. in terms of population, language, year limits, type of publication, study type).
5. Data sources: Describe the information sources used (e.g. electronic databases (MEDLINE, EMBASE, CINAHL, PsycINFO, Econlit), grey literature databases (digital theses, policy reports), relevant organisational websites, experts, information specialists, generic web searches (Google Scholar), hand searching, reference lists) and when the searches were conducted; provide the rationale for using the data sources.
6. Electronic search strategy: Describe the literature search (e.g. provide electronic search strategies with population terms, clinical or health topic terms, experiential or social phenomena related terms, filters for qualitative research, and search limits).
7. Study screening methods: Describe the process of study screening and sifting (e.g. title, abstract and full text review, number of independent reviewers who screened studies).
8. Study characteristics: Present the characteristics of the included studies (e.g. year of publication, country, population, number of participants, data collection, methodology, analysis, research questions).
9. Study selection results: Identify the number of studies screened and provide reasons for study exclusion (e.g. for comprehensive searching, provide numbers of studies screened and reasons for exclusion indicated in a figure/flowchart; for iterative searching, describe reasons for study exclusion and inclusion based on modifications to the research question and/or contribution to theory development).
10. Rationale for appraisal: Describe the rationale and approach used to appraise the included studies or selected findings (e.g. assessment of conduct (validity and robustness), assessment of reporting (transparency), assessment of content and utility of the findings).
11. Appraisal items: State the tools, frameworks and criteria used to appraise the studies or selected findings (e.g. existing tools: CASP, QARI, COREQ, Mays and Pope; reviewer-developed tools; describe the domains assessed: research team, study design, data analysis and interpretations, reporting).
12. Appraisal process: Indicate whether the appraisal was conducted independently by more than one reviewer and if consensus was required.
13. Appraisal results: Present results of the quality assessment and indicate which articles, if any, were weighted/excluded based on the assessment and give the rationale.
14. Data extraction: Indicate which sections of the primary studies were analysed and how the data were extracted from the primary studies (e.g. all text under the headings "results/conclusions" was extracted electronically and entered into computer software).
15. Software: State the computer software used, if any.
16. Number of reviewers: Identify who was involved in coding and analysis.
17. Coding: Describe the process for coding of data (e.g. line by line coding to search for concepts).
18. Study comparison: Describe how comparisons were made within and across studies (e.g. subsequent studies were coded into pre-existing concepts, and new concepts were created when deemed necessary).
19. Derivation of themes: Explain whether the process of deriving the themes or constructs was inductive or deductive.
20. Quotations: Provide quotations from the primary studies to illustrate themes/constructs, and identify whether the quotations were participant quotations or the author's interpretation.
21. Synthesis output: Present rich, compelling and useful results that go beyond a summary of the primary studies (e.g. new interpretation, models of evidence, conceptual models, analytical framework, development of a new theory or construct).
