An Introduction to Meta-Analysis (2014)
In this session:
1. What is meta-analysis?
2. What is the size of the effect?
3. Is the effect consistent across studies? (heterogeneity)
4. What is the strength of evidence for the effect? (quality assessment)
http://www.cochrane-handbook.org/

Natural history of novel interventions
- Better designed trials show smaller effects
- Larger trials show smaller effects
- Proliferation of small underpowered trials
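Question 2 of the outline — the size of the effect — is usually answered by pooling the study estimates with inverse-variance weights. A minimal fixed-effect sketch (the three hazard ratios and standard errors below are made up for illustration):

```python
import math

def fixed_effect(estimates, ses):
    """Inverse-variance (fixed-effect) pooled estimate and its standard error."""
    weights = [1.0 / se**2 for se in ses]          # weight = 1 / variance
    pooled = sum(w * y for w, y in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))      # SE of the pooled estimate
    return pooled, pooled_se

# Hypothetical log hazard ratios and their standard errors from three trials
log_hrs = [math.log(0.80), math.log(0.70), math.log(0.95)]
ses = [0.10, 0.20, 0.15]
pooled, se = fixed_effect(log_hrs, ses)
print(f"pooled HR = {math.exp(pooled):.2f} (SE of log HR = {se:.3f})")
```

Pooling is done on the log scale because log hazard ratios are approximately normal; the result is exponentiated back to an HR for reporting.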
16/12/2014
High physical activity & cognitive decline (Sofi et al, J Internal Med, 2010;269:107-117)
Random Effects
Differences in study sample, design, measurement, etc. contribute to variation in the true effect size across studies.
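A random-effects analysis folds that between-study variation into the weights. A minimal sketch of the DerSimonian–Laird approach, which estimates the between-study variance tau² from Cochran's Q and adds it to each study's sampling variance before re-weighting (the numbers in the usage line are hypothetical):

```python
import math

def dersimonian_laird(estimates, ses):
    """Random-effects pooled estimate via the DerSimonian-Laird method."""
    k = len(estimates)
    w = [1.0 / se**2 for se in ses]
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    # Cochran's Q: weighted squared deviations from the fixed-effect estimate
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)             # truncated at zero
    # Re-weight with the between-study variance added in
    w_star = [1.0 / (se**2 + tau2) for se in ses]
    pooled = sum(wi * yi for wi, yi in zip(w_star, estimates)) / sum(w_star)
    se_pooled = math.sqrt(1.0 / sum(w_star))
    return pooled, se_pooled, tau2

# Hypothetical effect sizes (e.g. standardized mean differences) and SEs
pooled, se, tau2 = dersimonian_laird([0.1, 0.3, 0.5], [0.1, 0.1, 0.1])
```

With tau² > 0 the weights become more equal across studies and the pooled SE widens, which is why random-effects intervals are typically wider than fixed-effect ones.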
Meta-regression example
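The slide's worked example is not reproduced in the handout, but the core computation of a basic (fixed-effect) meta-regression — weighted least squares of study effect sizes on a study-level moderator, with inverse-variance weights — can be sketched as follows; all data below are hypothetical:

```python
import math

def meta_regression(effects, ses, moderator):
    """Weighted least squares of effect size on one study-level moderator.

    Weights are the inverse sampling variances (1/SE^2), i.e. a basic
    fixed-effect meta-regression with a single covariate.
    """
    w = [1.0 / se**2 for se in ses]
    sw = sum(w)
    mx = sum(wi * x for wi, x in zip(w, moderator)) / sw   # weighted means
    my = sum(wi * y for wi, y in zip(w, effects)) / sw
    sxx = sum(wi * (x - mx) ** 2 for wi, x in zip(w, moderator))
    sxy = sum(wi * (x - mx) * (y - my)
              for wi, x, y in zip(w, moderator, effects))
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept, slope

# Hypothetical: does effect size change with mean age of the study sample?
intercept, slope = meta_regression([1.0, 3.0, 5.0], [0.1, 0.2, 0.1], [0, 1, 2])
```

The slope estimates how the effect size changes per unit of the moderator; a full analysis would also need standard errors and, usually, a random-effects extension.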
Publication bias / small study bias:
Addressing file drawer effects
To control resulting overall effect sizes for publication bias, several tests were performed.
These tests consisted of visual inspection of funnel plots (Light & Pillemer, 1984),
Rosenthal's Fail-safe N (Rosenthal, 1979), a weighted Failsafe N (Rosenberg, 2005),
Orwin's Fail-safe N (Orwin, 1983), Begg and Mazumdar's rank correlation method
(Begg & Mazumdar, 1994), Egger's regression test (Egger, Smith, Schneider, &
Minder, 1997; Sterne & Egger, 2005), trim-and-fill analysis (Duval & Tweedie, 2000)
following the approach as suggested by Peters, Sutton, Jones, Abrams and Rushton
(2007), a sensitivity analysis for publication bias as suggested by Vevea and
Woods (2005), and a method based on truncated normal distributions (Formann,
2008).
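Of the tests listed above, Egger's regression test is among the most widely used. It regresses the standardized effect (effect / SE) on precision (1 / SE); under no small-study bias the intercept is near zero. A minimal ordinary-least-squares sketch (the data in the usage line are hypothetical):

```python
def egger_intercept(effects, ses):
    """Egger's regression asymmetry test (intercept only).

    OLS of the standardized effect (effect / SE) on precision (1 / SE);
    the intercept estimates funnel-plot asymmetry, ~0 when there is none.
    """
    z = [y / se for y, se in zip(effects, ses)]        # standardized effects
    prec = [1.0 / se for se in ses]                    # precisions
    n = len(z)
    mx = sum(prec) / n
    my = sum(z) / n
    sxx = sum((x - mx) ** 2 for x in prec)
    sxy = sum((x - mx) * (y - my) for x, y in zip(prec, z))
    slope = sxy / sxx
    return my - slope * mx                             # the intercept

# Symmetric hypothetical data: same effect at every precision -> intercept ~ 0
b0 = egger_intercept([0.5, 0.5, 0.5, 0.5], [0.1, 0.2, 0.3, 0.4])
```

A full implementation would also report the standard error of the intercept and a t-test against zero; see Egger et al. (1997) cited above.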
[Funnel plot: hazard ratio (HR, x-axis 0.2-1.2) plotted against standard error (y-axis 0.02-0.18)]
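Among the file-drawer methods listed earlier, Rosenthal's fail-safe N has the simplest closed form: the number of unpublished null studies needed to drag the combined (Stouffer) z-score below significance. A sketch, assuming the conventional one-tailed cutoff z = 1.645 (the z-values in the usage line are hypothetical):

```python
def failsafe_n(z_values, z_alpha=1.645):
    """Rosenthal's fail-safe N.

    The Stouffer combined z of k studies plus N null studies is
    sum(z) / sqrt(k + N); setting this equal to z_alpha and solving
    gives N = (sum(z) / z_alpha)^2 - k.
    """
    k = len(z_values)
    zsum = sum(z_values)
    return max(0.0, (zsum / z_alpha) ** 2 - k)

# Hypothetical z-scores from three studies
n_fs = failsafe_n([2.0, 2.5, 1.5])   # about 10 hidden null studies
```

A small fail-safe N relative to the number of included studies suggests the pooled result is fragile to unpublished null findings; Rosenberg's (2005) weighted variant, cited above, is generally preferred for effect-size meta-analyses.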
MOOSE Checklist

Reporting of background should include:
- Problem definition
- Hypothesis statement
- Description of study outcome(s)
- Type of exposure or intervention used
- Type of study designs used
- Study population

Reporting of search strategy should include:
- Qualifications of searchers (eg, librarians and investigators)
- Search strategy, including time period included in the synthesis and keywords
- Effort to include all available studies, including contact with authors
- Databases and registries searched
- Search software used, name and version, including special features used (eg, explosion)
- Use of hand searching (eg, reference lists of obtained articles)
- List of citations located and those excluded, including justification
- Method of addressing articles published in languages other than English
- Method of handling abstracts and unpublished studies
- Description of any contact with authors

MOOSE Checklist (cont.)

Reporting of methods should include:
- Description of relevance or appropriateness of studies assembled for assessing the hypothesis to be tested
- Rationale for the selection and coding of data (eg, sound clinical principles or convenience)
- Documentation of how data were classified and coded (eg, multiple raters, blinding, and interrater reliability)
- Assessment of confounding (eg, comparability of cases and controls in studies where appropriate)
- Assessment of study quality, including blinding of quality assessors; stratification or regression on possible predictors of study results
- Assessment of heterogeneity
- Description of statistical methods (eg, complete description of fixed or random effects models, justification of whether the chosen models account for predictors of study results, dose-response models, or cumulative meta-analysis) in sufficient detail to be replicated
- Provision of appropriate tables and graphics
Resources

Meta-analytic methods
- Bax L, et al. (2007). A systematic comparison of software dedicated to meta-analysis of causal studies. BMC Medical Research Methodology, 7:40.
- Chinn, C. (2000). A simple method for converting an odds ratio to effect size for use in meta-analysis. Statistics in Medicine, 19:3127-3131.
- Duval S, Tweedie S. (2000). Trim and Fill: A Simple Funnel-Plot-Based Method of Testing and Adjusting for Publication Bias in Meta-Analysis. Biometrics, 56(2), 455-463.
- Greenland S. (1987). Interpretation and choice of effect measures in epidemiologic analyses. Am J Epidemiol, 125:761-8.
- Sterne JA, Egger M. (2001). Funnel plots for detecting bias in meta-analysis: Guidelines on choice of axis. Journal of Clinical Epidemiology, 54:1046-1055.
- Sutton AJ, Abrams KR, Jones DR, Sheldon TA, Song F. (2000). Methods for Meta-analysis in Medical Research. Chichester, UK: John Wiley & Sons. Including Chapters 3 to 9.

Reporting a meta-analysis
- Stroup DF, Berlin JA, Morton SC, et al. (2000). Meta-analysis of Observational Studies in Epidemiology: A Proposal for Reporting. JAMA, 283(15):2008-2012.
- MOOSE (Meta-Analysis of Observational Studies in Epidemiology). This checklist for reporting observational studies was developed following a workshop convened to address the problem of increasing diversity and variability in reporting meta-analyses of observational studies (Stroup et al., 2000). Checklist: http://jama.ama-assn.org/cgi/content/full/283/15/2008/TABLEJST00003T1
- Cochrane Handbook for Systematic Reviews of Interventions 4.2.6 (Sept 2006), pages 147-150: 8.9 Presenting, illustrating and tabulating results. Available from www.cochrane-handbook.org
- Sutton AJ, Abrams KR, Jones DR, Sheldon TA, Song F. (2000). Methods for Meta-analysis in Medical Research. Chichester, UK: John Wiley & Sons. Including Chapter 10, Reporting the Results of Meta-analysis.
Acknowledgements
Contact: Geoff.Der@glasgow.ac.uk