An Introduction to Meta-analysis

Geoff Der
Statistician
MRC Social and Public Health Sciences Unit
16/12/2014


In this session:
- What is meta-analysis?
- When is it appropriate to use?
- Statistical methods
- Software programmes
- Publishing meta-analyses

Karl Pearson (1904) conducted the first meta-analysis, commissioned by the
British government, on the effects of a typhoid vaccination.

Gene Glass (1974) coined "meta-analysis":
"the analysis of analyses. It connotes a rigorous alternative to the casual,
narrative discussions of research studies which typify our attempts to make
sense of the rapidly expanding research literature."


What is meta-analysis?

Statistical combination of results from two or more separate studies to answer
a common question.

Why?
- To provide a test with more power than separate studies
- To summarise numerous and inconsistent findings
- To investigate consistency of effect across different samples

http://www.cochrane-handbook.org/

What questions are addressed?

1. What is the direction of the effect?
2. What is the size of the effect?
3. Is the effect consistent across studies? (heterogeneity)
4. What is the strength of evidence for the effect? (quality assessment)

http://www.cochrane-handbook.org/


Some background: clinical trials

- Early trials show larger effects than later trials
- Better designed trials show smaller effects
- Larger trials show smaller effects
- Natural history of novel interventions
- Proliferation of small underpowered trials

Pocock, S J. Clinical Trials: A Practical Approach. Wiley, 1983.


When is it appropriate?

- Observational and intervention studies
- How many studies make it worthwhile?
- Are there additional exclusion criteria for meta-analyses?
  - Duplicate publications, e.g. in longitudinal studies
  - Very small studies
  - Poor quality
  - Results not in a suitable format? (But can approach authors)


Statistical methods

- Effect size measures: transformations; direction and magnitude of effect
- Heterogeneity: random and fixed effects
- Publication bias
- Quality assessment and sensitivity analyses: bias and confounding;
  subgroup analysis or meta-regression?

Greenland, Epidemiologic Reviews 1987;9

Effect size measure

A statistic that summarises the observed intervention effect.

Examples, by type of outcome:
- Binary outcome: odds ratio (OR) or relative risk (RR); for a continuous
  predictor, OR or RR per unit
- Continuous outcome: standardised mean difference (SMD); for a continuous
  predictor, correlation
- Survival outcome: hazard ratio


Effect size measures

Standardised mean difference:
- Cohen's d
- Hedges' g
- Glass's delta

Binary outcome:
- Odds ratio
- Relative risk

Transforming effect size measures

- Transform reported effect sizes to a common measure, e.g. measures of
  spread/variance: CI, SD, SE, IQR
- Converting odds ratios to continuous-outcome effect sizes, or vice versa
  (Chinn, Statistics in Medicine, 2000;19:3127)
- HR ~ OR ~ RR when the risk of an event is low (<20%)
  (Symons et al, J Clin Epidemiol, 2002;55:893-99)
- Take care and check results!
- Online effect size calculator:
  http://www.campbellcollaboration.org/resources/effect_size_input.php
- R package compute.es


Effect sizes: an example

ES = Cohen's d (e.g. RCT with a continuous outcome). What the paper reports
determines how usable it is:

Paper reports                        Verdict     Check/Action
Mean, SD, N for each group           ideal
Effect size                          ideal?      Is it d?
Mean difference + 95% CI (or SE)     excellent   Transform
Mean difference + SD                 good        Check SD
Mean difference + p value            OK?         Precision?
t value + p value                    OK?         Precision?
Median difference                    ?
Correlation                          X           Unlikely
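A minimal R sketch of these calculations, with made-up summary statistics:
Cohen's d from a pooled SD, the Hedges' g small-sample correction, and Chinn's
(2000) conversion of an odds ratio to a d-type effect size.

# All numbers below are hypothetical, for illustration only
m1 <- 104.2; sd1 <- 14.1; n1 <- 60    # treatment group mean, SD, N
m2 <-  98.7; sd2 <- 15.3; n2 <- 58    # control group mean, SD, N

sd_pooled <- sqrt(((n1 - 1) * sd1^2 + (n2 - 1) * sd2^2) / (n1 + n2 - 2))
d <- (m1 - m2) / sd_pooled                            # Cohen's d
J <- 1 - 3 / (4 * (n1 + n2 - 2) - 1)                  # small-sample correction
g <- J * d                                            # Hedges' g
v_d <- (n1 + n2) / (n1 * n2) + d^2 / (2 * (n1 + n2))  # approximate variance of d

# Chinn (2000): convert a log odds ratio to a d-type effect size
or <- 1.8
d_from_or <- log(or) * sqrt(3) / pi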


Forest plot: the main output of a meta-analysis
[Figure: example forest plot]

Example: RR with CIs
[Figure: forest plot. High physical activity and cognitive decline
(Sofi et al, J Internal Med, 2010;269:107-117)]

Example: HR with CIs
[Figure: forest plot. Childhood IQ and risk of mortality (Calvin et al., 2010)]

Forest plot: mean difference
[Figure: example forest plot of mean differences]
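A forest plot of this kind can be sketched in a few lines of base R; the log
hazard ratios and standard errors below are hypothetical.

yi  <- c(0.42, 0.78, 0.55, 0.63, 0.50)   # hypothetical log hazard ratios
sei <- c(0.18, 0.25, 0.10, 0.15, 0.20)   # their standard errors
lo  <- yi - 1.96 * sei
hi  <- yi + 1.96 * sei
k   <- length(yi)

plot(yi, k:1, pch = 15, xlim = range(c(lo, hi)), yaxt = "n",
     xlab = "log hazard ratio", ylab = "")
segments(lo, k:1, hi, k:1)                          # 95% confidence intervals
axis(2, at = k:1, labels = paste("Study", 1:k), las = 1)
abline(v = 0, lty = 2)                              # line of no effect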

Example: standardised mean difference
[Figure: forest plot of standardised mean differences.
Wald et al, Am J Medicine, 2010;123(6):522-7]


Fixed vs random effects

Fixed effects
- Each study is estimating the same quantity
- Methods: Mantel-Haenszel, Peto odds ratio, inverse variance

Random effects
- Differences in study sample, design, measurement etc. contribute to the
  effect size
- DerSimonian and Laird method
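A minimal base R sketch of both approaches, using hypothetical study effects
(yi, e.g. log odds ratios) and variances (vi): inverse-variance fixed-effect
pooling, then DerSimonian-Laird random effects.

yi <- c(0.30, 0.10, 0.45, 0.25, 0.60)   # hypothetical study effect sizes
vi <- c(0.04, 0.02, 0.09, 0.03, 0.12)   # their variances (SE^2)

# Fixed effect: each study estimates the same quantity
w_fe     <- 1 / vi
theta_fe <- sum(w_fe * yi) / sum(w_fe)
se_fe    <- sqrt(1 / sum(w_fe))

# DerSimonian-Laird estimate of the between-study variance tau^2
Q    <- sum(w_fe * (yi - theta_fe)^2)
k    <- length(yi)
tau2 <- max(0, (Q - (k - 1)) / (sum(w_fe) - sum(w_fe^2) / sum(w_fe)))

# Random effects: weights incorporate tau^2
w_re     <- 1 / (vi + tau2)
theta_re <- sum(w_re * yi) / sum(w_re)
se_re    <- sqrt(1 / sum(w_re))

c(fixed = theta_fe, random = theta_re, tau2 = tau2)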


Heterogeneity

Variability between studies caused by differences in:
- Study samples (e.g. healthy, clinical)
- Interventions or outcomes
- Methodology: design, measures, quality etc.

"Statistical heterogeneity manifests itself in the [study] effects being more
different from each other than one would expect due to random error (chance)
alone" (Cochrane Handbook)

But isn't there always clinical and methodological diversity?


Assessing heterogeneity

Visual inspection:
- confidence intervals have poor overlap

Formal test:
- Chi-squared: are observed differences compatible with chance alone?
  (NB low power with a small number of studies; p > 0.10 gives greater
  confidence of no heterogeneity)

Additionally, look at the impact of heterogeneity on your aggregate estimate:
inconsistency (I² > 50%)
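The chi-squared (Cochran's Q) test and the I² statistic can be computed
directly from the study effects and variances; a short base R sketch with
hypothetical data:

yi <- c(0.30, 0.10, 0.45, 0.25, 0.60)   # hypothetical study effect sizes
vi <- c(0.04, 0.02, 0.09, 0.03, 0.12)   # their variances

w        <- 1 / vi
theta_fe <- sum(w * yi) / sum(w)
Q        <- sum(w * (yi - theta_fe)^2)
df       <- length(yi) - 1
p_Q      <- pchisq(Q, df, lower.tail = FALSE)   # low power when studies are few
I2       <- max(0, (Q - df) / Q) * 100          # % variability beyond chance

c(Q = Q, p = p_Q, I2 = I2)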

Dealing with heterogeneity

- Check data!
- Choose random-effects meta-analysis
- Explore the causes of heterogeneity: subgroup analysis or meta-regression
- Change the effect measure
- Exclude outlying studies
- Consider whether a meta-analysis is the right course

It must be dealt with sensitively and with a good rationale for the methods
used.


Subgroup analyses

Dividing your studies by a design feature:
- Participant characteristic (sex, age, clinical diagnoses, geographical
  region)
- Study design characteristic (type of intervention, length of follow-up,
  type of measure used, e.g. cognitive function)

NB. More subgroup analyses increase the risk of false negatives and false
positives (patients being denied an effective treatment, or given a harmful
or ineffective one)
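A subgroup analysis simply repeats the pooling within each level of the chosen
characteristic. A minimal base R sketch, with hypothetical data and a
hypothetical grouping variable:

yi    <- c(0.30, 0.10, 0.45, 0.25, 0.60, 0.05)   # hypothetical effect sizes
vi    <- c(0.04, 0.02, 0.09, 0.03, 0.12, 0.05)   # their variances
group <- c("men", "men", "men", "women", "women", "women")

pool_fe <- function(y, v) {                      # simple inverse-variance pooling
  w <- 1 / v
  c(est = sum(w * y) / sum(w), se = sqrt(1 / sum(w)))
}

sapply(split(data.frame(yi, vi), group),
       function(d) pool_fe(d$yi, d$vi))          # pooled estimate per subgroup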

Meta-regression

Linear regression of the effect estimates on some study characteristic:
- Outcome: study effect size
- Explanatory variable: a characteristic of the studies that may influence
  the magnitude of the effect (potential effect modifier or covariate)
- Regression is weighted by study size/precision


Subgroup analyses and meta-regression: considerations for both

- Are there enough studies that include the specified characteristics to
  justify these methods?
- Specify the characteristics in advance
- Keep the number of characteristics to a minimum
- Is there adequate scientific rationale?
- Does one characteristic confound another?
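A simplified, fixed-effect version of meta-regression is just a
precision-weighted linear regression; packages such as metafor fit the full
random-effects model. A base R sketch with hypothetical data (mean_age is an
assumed study-level covariate, not from the talk):

yi       <- c(0.30, 0.10, 0.45, 0.25, 0.60)   # study effect sizes
vi       <- c(0.04, 0.02, 0.09, 0.03, 0.12)   # their variances
mean_age <- c(45, 52, 38, 60, 41)             # hypothetical effect modifier

fit <- lm(yi ~ mean_age, weights = 1 / vi)    # weight by precision
summary(fit)   # slope: change in effect size per unit of the covariate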


Meta-regression example
[Figure: example meta-regression plot]


Publication bias / small study bias: addressing file drawer effects

"To control resulting overall effect sizes for publication bias, several tests
were performed. These tests consisted of visual inspection of funnel plots
(Light & Pillemer, 1984), Rosenthal's Fail-safe N (Rosenthal, 1979), a weighted
Fail-safe N (Rosenberg, 2005), Orwin's Fail-safe N (Orwin, 1983), Begg and
Mazumdar's rank correlation method (Begg & Mazumdar, 1994), Egger's regression
test (Egger, Smith, Schneider, & Minder, 1997; Sterne & Egger, 2005),
trim-and-fill analysis (Duval & Tweedie, 2000) following the approach as
suggested by Peters, Sutton, Jones, Abrams and Rushton (2007), a sensitivity
analysis for publication bias as suggested by Vevea and Woods (2005), and a
method based on truncated normal distributions (Formann, 2008).

Application of this multitude of differential approaches originates in the
increased awareness of problems of publication bias in general and the
corresponding recent developments of enhanced methods to account for it."

Pietschnig et al, Intelligence, 2010;38:314-23.

Funnel plots
[Figures: example funnel plots, over three slides]


Publication bias: Example 1, cognitive epidemiology
[Figure: funnel plot of standard error (0 to 0.18) against HR (0.2 to 1.2).
Childhood IQ and risk of mortality (Calvin et al., 2010)]
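A funnel plot and Egger's regression test (regress the standardised effect on
precision; an intercept well away from zero suggests asymmetry) can be
sketched in base R. All data below are hypothetical.

yi  <- c(0.30, 0.10, 0.45, 0.25, 0.60, 0.70, 0.05)   # hypothetical effect sizes
sei <- c(0.20, 0.14, 0.30, 0.17, 0.35, 0.38, 0.10)   # their standard errors

plot(yi, sei, ylim = rev(range(sei)),     # smaller SE (larger studies) at the top
     xlab = "Effect size", ylab = "Standard error")
abline(v = sum(yi / sei^2) / sum(1 / sei^2), lty = 2)   # fixed-effect pooled estimate

# Egger's test: regress the standardised effect on precision
egger <- lm(I(yi / sei) ~ I(1 / sei))
summary(egger)$coefficients["(Intercept)", ]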


Trim-and-fill

1. Trim off the asymmetric part of the funnel
2. Use the symmetric remainder to estimate the true centre
3. Replace the trimmed studies and their missing counterparts
4. Estimate the true mean and its variance from the filled funnel plot

Duval & Tweedie, Biometrics, 2000;56(2):455-63.


Quality assessment

To control for bias, particularly in observational studies:
- Use a quality checklist/tool (e.g. Moher, 1995 for RCTs; Sanderson, 2007
  for observational studies)
- Independent quality scoring (and blinded, if possible)

Then:
1. Forest plot ordered by quality score: is there an association?
2. Quality score as a covariate in meta-regression
3. Exclude low-quality studies
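The trim-and-fill procedure described above is implemented in the R package
metafor; a minimal sketch, assuming metafor is installed and using hypothetical
data:

library(metafor)

yi <- c(0.30, 0.10, 0.45, 0.25, 0.60, 0.70, 0.05)   # hypothetical effect sizes
vi <- c(0.04, 0.02, 0.09, 0.03, 0.12, 0.14, 0.01)   # their variances

res <- rma(yi = yi, vi = vi, method = "DL")   # DerSimonian-Laird random effects
tf  <- trimfill(res)                          # trim, fill, re-estimate
tf                                            # adjusted pooled estimate
funnel(tf)                                    # funnel plot with filled-in studies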

Software: specially built programmes

- Comprehensive Meta-Analysis (CMA)
- MetAnalysis
- MetaWin
- MIX (free)
- RevMan (free)
- WEasyMA

Bax et al, BMC Med Res Meth, 2007;7:40.


Comprehensive Meta-Analysis
[Figure: Comprehensive Meta-Analysis screenshot]

Standard statistical software

- R: http://cran.r-project.org/web/packages/rmeta/rmeta.pdf
- STATA: http://www.medepi.net/meta/software/STATA_Metaanalysis_commands_V6_March2004.pdf
- SAS: http://www.senns.demon.co.uk/SAS%20Macros/SASMacros.html
- WinBUGS (Bayesian): http://www.openbugs.info/w/


Publishing a meta-analysis

- Consider which journals have an interest in publishing meta-analyses:
  what are their instructions to authors?
- Does the quantitative reporting of results from meta-analysis reduce the
  need for the qualitative discussion more typical of a systematic review?
- Are there standard protocols for writing up? Yes: MOOSE
  (Stroup et al, JAMA, 2000;283(15):2008-12)


MOOSE Checklist

Reporting of background should include:
- Problem definition
- Hypothesis statement
- Description of study outcome(s)
- Type of exposure or intervention used
- Type of study designs used
- Study population

Reporting of search strategy should include:
- Qualifications of searchers (eg, librarians and investigators)
- Search strategy, including time period included in the synthesis and keywords
- Effort to include all available studies, including contact with authors
- Databases and registries searched
- Search software used, name and version, including special features used
  (eg, explosion)
- Use of hand searching (eg, reference lists of obtained articles)
- List of citations located and those excluded, including justification
- Method of addressing articles published in languages other than English
- Method of handling abstracts and unpublished studies
- Description of any contact with authors


MOOSE Checklist (cont.)

Reporting of methods should include:
- Description of relevance or appropriateness of studies assembled for
  assessing the hypothesis to be tested
- Rationale for the selection and coding of data (eg, sound clinical
  principles or convenience)
- Documentation of how data were classified and coded (eg, multiple raters,
  blinding, and interrater reliability)
- Assessment of confounding (eg, comparability of cases and controls in
  studies where appropriate)
- Assessment of study quality, including blinding of quality assessors;
  stratification or regression on possible predictors of study results
- Assessment of heterogeneity
- Description of statistical methods (eg, complete description of fixed or
  random effects models, justification of whether the chosen models account
  for predictors of study results, dose-response models, or cumulative
  meta-analysis) in sufficient detail to be replicated
- Provision of appropriate tables and graphics

MOOSE Checklist (cont.)

Reporting of results should include:
- Graphic summarizing individual study estimates and overall estimate
- Table giving descriptive information for each study included
- Results of sensitivity testing (eg, subgroup analysis)
- Indication of statistical uncertainty of findings

Reporting of discussion should include:
- Quantitative assessment of bias (eg, publication bias)
- Justification for exclusion (eg, exclusion of non-English-language citations)
- Assessment of quality of included studies

Reporting of conclusions should include:
- Consideration of alternative explanations for observed results
- Generalization of the conclusions (ie, appropriate for the data presented
  and within the domain of the literature review)
- Guidelines for future research
- Disclosure of funding source


Resources

Introduction to meta-analysis:
- Cochrane Handbook for Systematic Reviews of Interventions 4.2.6 (Sept 2006)
  (PDF), pages 97-166, or the latest version available to view online:
  Higgins JPT, Green S (editors). Cochrane Handbook for Systematic Reviews of
  Interventions Version 5.0.2 [updated September 2009]. The Cochrane
  Collaboration, 2009. Available from www.cochrane-handbook.org
- Stangl DK, Berry DA. Meta-analysis in Medicine and Health Policy. New York,
  NY: Marcel Dekker, 2000. [Large focus on Bayesian approach]
- Sutton AJ, Abrams KR, Jones DR, Sheldon TA, Song F. Methods for Meta-analysis
  in Medical Research. Chichester, UK: John Wiley & Sons, 2000. Including
  Chapter 16 on meta-analysis of epidemiological and observational studies.
- Wolf FM. Meta-analysis: Quantitative Methods for Research Synthesis. Sage
  Publications, 1986.

Resources (cont.)

Meta-analytic methods:
- Bax L, et al. A systematic comparison of software dedicated to meta-analysis
  of causal studies. BMC Medical Research Methodology 2007;7:40.
- Chinn S. A simple method for converting an odds ratio to effect size for use
  in meta-analysis. Statistics in Medicine 2000;19:3127-3131.
- Duval S, Tweedie R. Trim and fill: a simple funnel-plot-based method of
  testing and adjusting for publication bias in meta-analysis. Biometrics
  2000;56(2):455-463.
- Greenland S. Interpretation and choice of effect measures in epidemiologic
  analyses. Am J Epidemiol 1987;125:761-8.
- Sterne JA, Egger M. Funnel plots for detecting bias in meta-analysis:
  guidelines on choice of axis. Journal of Clinical Epidemiology
  2001;54:1046-1055.
- Sutton AJ, Abrams KR, Jones DR, Sheldon TA, Song F. Methods for Meta-analysis
  in Medical Research. Chichester, UK: John Wiley & Sons, 2000. Including
  Chapters 3 to 9.

Reporting a meta-analysis:
- Stroup DF, Berlin JA, Morton SC, et al. Meta-analysis of Observational
  Studies in Epidemiology: a proposal for reporting. JAMA
  2000;283(15):2008-2012.
- MOOSE (Meta-analysis Of Observational Studies in Epidemiology). This
  checklist for reporting observational studies was developed following a
  workshop convened to address the increasing diversity and variability in the
  reporting of meta-analyses of observational studies (Stroup et al., 2000).
  Checklist: http://jama.ama-assn.org/cgi/content/full/283/15/2008/TABLEJST00003T1
- Cochrane Handbook for Systematic Reviews of Interventions 4.2.6 (Sept 2006)
  (PDF), pages 147-150: 8.9 Presenting, illustrating and tabulating results.
  Available from www.cochrane-handbook.org
- Sutton AJ, Abrams KR, Jones DR, Sheldon TA, Song F. Methods for Meta-analysis
  in Medical Research. Chichester, UK: John Wiley & Sons, 2000. Including
  Chapter 10, Reporting the Results of Meta-analysis.


Acknowledgements

Thanks to Catherine Calvin, who helped prepare version 1 of this presentation.

Contact: Geoff.Der@glasgow.ac.uk
