
Proactive planning in the preclinical research arena
John P.A. Ioannidis, MD, DSc
Professor of Medicine, Health Research and Policy, and Statistics
Stanford University
Post-study odds that a research finding is true are small:
When there is bias
When effect sizes are small
When studies are small
When fields are hot (many furtively competitive teams work on them)
When there is strong interest in the results
When databases are large
When analyses are more flexible
Ioannidis JP. PLoS Medicine 2005
A research finding cannot reach credibility over 50% unless u < R, i.e., bias must be less than the pre-study odds.
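For reference, a sketch of the algebra behind this threshold, in the notation of the 2005 paper (R = pre-study odds of a true relationship, \alpha = type I error, \beta = type II error, u = proportion of analyses reported as findings only because of bias):

\[
\mathrm{PPV} \;=\; \frac{(1-\beta)R + u\beta R}{R + \alpha - \beta R + u - u\alpha + u\beta R}
\]

Requiring \(\mathrm{PPV} > 1/2\) and letting \(\alpha \to 0\) gives

\[
u \;<\; \frac{(1-\beta)R}{1-\beta R} \;\le\; R \qquad (\text{for } R \le 1),
\]

so the bias term must stay below the pre-study odds.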
Problems and pre-emptive moves
Bias → Reduce bias
Small effect sizes → Optimize effect sizes
Small studies → Perform large studies
Hot fields with furtive competition → Build network collaborations
Strong interest in the results → Minimize conflicts of interest
Large datasets → Obtain targeted datasets
Flexible analyses → Specify analyses
Chavalarias and Ioannidis, JCE 2010
Mapping 235 biases in 17 million PubMed papers
[Figure: Estimated RORs and effects on heterogeneity associated with reported study design characteristics; univariable analyses were based on all available data. CrI = credible interval; ROR = ratio of odds ratios.]
From: Influence of Reported Study Design Characteristics on Intervention Effect Estimates From Randomized, Controlled Trials. Ann Intern Med. 2012;157(6):429-438. doi:10.7326/0003-4819-157-6-201209180-00537
Small/tiny effects
Effect-noise ratio: options for improvement
Anticipating the magnitude of the effect-to-bias ratio is needed to decide whether the proposed research is even justified.
The minimum acceptable effect-to-bias ratio may vary across different types of designs and research fields.
Journals may consider setting minimal design prerequisites for accepting papers.
Funding agencies can also set minimal standards that keep the effect-to-bias ratio at acceptable levels.
No study is an island
Single studies
Many studies on the same topic
Many studies on the same field
Many studies on the same discipline
Can we pre-emptively design the geometry of the research agenda?
[Figure 2a: network of randomized comparisons in the research agenda; the size of each node is proportional to the amount of information (sample size).]
Biases related to the geometry of the research agenda of randomized trials
Mauri et al, JNCI 2008
Auto-looping
Design of clinical research: an open world or isolated city-states (company-states)?
Lathyris et al., Eur J Clin Invest, 2010
Kappagoda and Ioannidis, BMJ 2012
Even the simplest research agendas are complex
Problems with design
Poor protocols and documentation
Poor utility of information
Statistical power and outcome misconceptions
Lack of consideration of other evidence
Subjective, non-standardized definitions and vibration of effects
Patel, Burford, Ioannidis (submitted)
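As one concrete instance of the power misconceptions noted above, a minimal sketch (standard normal-approximation formula; the effect sizes below are illustrative, not from the talk):

from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    # Normal-approximation per-group sample size for a two-sided,
    # two-sample comparison of means with standardized effect size d.
    z = NormalDist().inv_cdf
    return 2 * (z(1 - alpha / 2) + z(power)) ** 2 / d ** 2

for d in (0.8, 0.5, 0.2):  # conventional large, medium, small effects
    print(f"d={d}: ~{n_per_group(d):.0f} participants per group")

A study sized for a medium effect (~63 per group) has roughly 20% power, not 80%, if the true effect is small, which is one way underpowered designs and inflated "significant" estimates arise.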
Vibration of effects
Options for improvement
Public availability/registration of protocols or complete documentation of the exploratory process
A priori examination of the utility of information: power, precision, value of information, plans for future use, heterogeneity considerations
Consideration of both prior and ongoing evidence
Standardization of measurements, definitions and analyses, whenever feasible
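To make "vibration of effects" concrete, a minimal simulation sketch (hypothetical data and variable names; numpy only): the same dataset, analyzed under every defensible covariate-adjustment choice, yields a spread of exposure-effect estimates rather than one number.

import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Two correlated covariates and an exposure that all influence the outcome.
c1 = rng.normal(size=n)
c2 = 0.5 * c1 + rng.normal(size=n)
exposure = 0.3 * c1 + rng.normal(size=n)
outcome = 0.2 * exposure + 0.4 * c1 - 0.3 * c2 + rng.normal(size=n)

covariates = {"c1": c1, "c2": c2}
estimates = {}
# Fit ordinary least squares for every subset of covariate adjustments.
for k in range(len(covariates) + 1):
    for subset in itertools.combinations(covariates, k):
        X = np.column_stack([np.ones(n), exposure] +
                            [covariates[name] for name in subset])
        beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
        estimates[subset] = beta[1]  # coefficient on the exposure

for subset, est in estimates.items():
    print(f"adjusted for {list(subset) or 'nothing'}: effect = {est:+.3f}")
print(f"vibration (max - min estimate): "
      f"{max(estimates.values()) - min(estimates.values()):.3f}")

With confounding present, the unadjusted and fully adjusted models disagree noticeably; real analyses with dozens of optional covariates, definitions, and exclusions vibrate far more.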
Research workforce and stakeholders
Statisticians and methodologists: only sporadically involved in design; statistics are poor in much of published research
Laboratory scientists: perhaps even less well equipped with methodological skills
Conflicted stakeholders: academic clinicians, laboratory scientists, or corporate scientists with declared or undeclared financial or other conflicts of interest; ghost authorship by industry
Options for improvement
Research workforce: involve more methodologists in all stages of research; enhance communication between investigators and methodologists.
Enhance training of clinicians and scientists in quantitative research methods and biases; opportunities may exist in graduate curricula and licensing examinations.
Reconsider expectations for continuing professional development, reflective practice, and validation of investigative skills; continuing methodological education.
Conflicts: involve stakeholders without financial conflicts in choosing design options; consider patient involvement.
Reproducibility practices and reward systems
Reward mechanisms focus on the statistical significance and newsworthiness of results rather than study quality and reproducibility.
Promotion committees emphasize quantity over quality.
With thousands of biomedical journals in the world, virtually any manuscript can get published.
Researchers are tempted to promise and publish exaggerated results to continue getting funded for innovative work.
Researchers face few negative consequences for publishing flawed or incorrect results or for making exaggerated claims.
Options for improvement
Support and reward (at the funding and/or publication level) quality, transparency, data sharing, and reproducibility
Encouragement and publication of reproducibility checks
Adoption of software systems that encourage accuracy and reproducibility of scripts
Public availability of raw data
Improved scientometric indices; reproducibility indices
Post-publication peer review, ratings, and comments
Towards more transparency: registration
Level 0: no registration
Level 1: registration of study
Level 2: registration of protocol
Level 3: registration of analysis plan
Level 4: registration of analysis plan and raw data
Level 5: open live streaming
