Correspondence to: A Liberati
Accepted: June 2009
Cite this as: BMJ 2009;339:b2700
doi: 10.1136/bmj.b2700

RESEARCH METHODS & REPORTING

The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration

Alessandro Liberati, Douglas G Altman, Jennifer Tetzlaff, Cynthia Mulrow, Peter C Gøtzsche, John P A Ioannidis, Mike Clarke, P J Devereaux, Jos Kleijnen, David Moher

Introduction
Systematic reviews and meta-analyses are essential tools for summarising evidence accurately and reliably. They help clinicians keep up to date; provide evidence for policy makers to judge risks, benefits, and harms of healthcare behaviours and interventions; gather together and summarise related research for patients and their carers; provide a starting point for clinical practice guideline developers; provide summaries of previous research for funders wishing to support new research; and help editors judge the merits of publishing reports of new studies. Recent data suggest that at least 2500 new systematic reviews reported in English are indexed in Medline annually.

Unfortunately, there is considerable evidence that key information is often poorly reported in systematic reviews, thus diminishing their potential usefulness. As is true for all research, systematic reviews should be reported fully and transparently to allow readers to assess the strengths and weaknesses of the investigation. That rationale led to the development of the QUOROM (quality of reporting of meta-analyses) statement; those detailed reporting recommendations were published in 1999. In this paper we describe the updating of that guidance. Our aim is to ensure clear presentation of what was planned, done, and found in a systematic review.

Terminology used to describe systematic reviews and meta-analyses has evolved over time and varies across different groups of researchers and authors (see box 1 at end of document). In this document we adopt the definitions used by the Cochrane Collaboration. A systematic review attempts to collate all empirical evidence that fits pre-specified eligibility criteria to answer a specific research question. It uses explicit, systematic methods that are selected to minimise bias, thus providing reliable findings from which conclusions can be drawn and decisions made. Meta-analysis is the use of statistical methods to summarise and combine the results of independent studies. Many systematic reviews contain meta-analyses, but not all.

The QUOROM statement and its evolution into PRISMA
The QUOROM statement, developed in 1996 and published in 1999, was conceived as a reporting guidance for authors reporting a meta-analysis of randomised trials. Since then, much has happened. First, knowledge about the conduct and reporting of systematic reviews has expanded considerably.
For example, the Cochrane Library's Methodology Register (which includes reports of studies relevant to the methods for systematic reviews) now contains more than 11 000 entries (March 2009). Second, there have been many conceptual advances, such as "outcome-level" assessments of the risk of bias, that apply to systematic reviews. Third, authors have increasingly used systematic reviews to summarise evidence other than that provided by randomised trials.

However, despite advances, the quality of the conduct and reporting of systematic reviews remains well short of ideal. All of these issues prompted the need for an update and expansion of the QUOROM statement. Of note, recognising that the updated statement now addresses the above conceptual and methodological issues and may also have broader applicability than the original QUOROM statement, we changed the name of the reporting guidance to PRISMA (preferred reporting items for systematic reviews and meta-analyses).

Development of PRISMA
The PRISMA statement was developed by a group of 29 review authors, methodologists, clinicians, medical editors, and consumers. They attended a three-day meeting in 2005 and participated in extensive post-meeting electronic correspondence. A consensus process that was informed by evidence, whenever possible, was used to develop a 27-item checklist (table 1) and a four-phase flow diagram (fig 1) (also available as extra items on bmj.com for researchers to download and re-use). Items deemed essential for transparent reporting of a systematic review were included in the checklist. The flow diagram originally proposed by QUOROM was also modified to show numbers of identified records, excluded articles, and included studies. After 11 revisions the group approved the checklist, flow diagram, and this explanatory paper.

The PRISMA statement itself provides further details regarding its background and development. This accompanying explanation and elaboration document explains the meaning and rationale for each checklist item. A few PRISMA Group participants volunteered to help draft specific items for this document, and four of these (DGA, AL, DM, and JT) met on several occasions to further refine the document, which was circulated and ultimately approved by the larger PRISMA Group.

Scope of PRISMA
PRISMA focuses on ways in which authors can ensure the transparent and complete reporting of systematic reviews and meta-analyses. It does not address directly or in a detailed manner the conduct of systematic reviews, for which other guides are available.

We developed the PRISMA statement and this explanatory document to help authors report a wide array of systematic reviews to assess the benefits and harms of a healthcare intervention. We consider most of the checklist items relevant when reporting systematic reviews of non-randomised studies assessing the benefits and harms of interventions. However, we recognise that authors who address questions relating to aetiology, diagnosis, or prognosis, for example, and who review epidemiological or diagnostic accuracy studies may need to modify or incorporate additional items for their systematic reviews.

How to use this paper
We modelled this explanation and elaboration document after those prepared for other reporting guidelines.
To maximise the benefit of this document, we encourage people to read it in conjunction with the PRISMA statement.

We present each checklist item and follow it with a published exemplar of good reporting for that item. (We edited some examples by removing citations or web addresses, or by spelling out abbreviations.) We then explain the pertinent issue, the rationale for including the item, and relevant evidence from the literature, whenever possible. No systematic search was carried out to identify exemplars and evidence. We also include seven boxes at the end of the document that provide a more comprehensive explanation of certain thematic aspects of the methodology and conduct of systematic reviews.

Although we focus on a minimal list of items to consider when reporting a systematic review, we indicate places where additional information is desirable to improve transparency of the review process. We present the items numerically from 1 to 27; however, authors need not address items in this particular order in their reports. Rather, what is important is that the information for each item is given somewhere within the report.

The PRISMA checklist

Title and abstract

Item 1: Title
Identify the report as a systematic review, meta-analysis, or both.

Examples "Recurrence rates of video-assisted thoracoscopic versus open surgery in the prevention of recurrent pneumothoraces: a systematic review of randomised and non-randomised trials"

"Mortality in randomised trials of antioxidant supplements for primary and secondary prevention: systematic review and meta-analysis"

Explanation Authors should identify their report as a systematic review or meta-analysis. Terms such as "review" or "overview" do not describe for readers whether the review was systematic or whether a meta-analysis was performed. A recent survey found that 50% of 300 authors did not mention the terms "systematic review" or "meta-analysis" in the title or abstract of their systematic review. Although sensitive search strategies have been developed to identify systematic reviews, inclusion of the term systematic review or meta-analysis in the title may improve indexing and identification.

We advise authors to use informative titles that make key information easily accessible to readers. Ideally, a title reflecting the PICOS approach (participants, interventions, comparators, outcomes, and study design) (see item 11 and box 2) may help readers as it provides key information about the scope of the review. Specifying the design(s) of the studies included, as shown in the examples, may also help some readers and those searching databases.

Some journals recommend "indicative titles" that indicate the topic matter of the review, while others require declarative titles that give the review's main conclusion. Busy practitioners may prefer to see the conclusion of the review in the title, but declarative titles can oversimplify or exaggerate findings. Thus, many journals and methodologists prefer indicative titles as used in the examples above.

Item 2: Structured summary
Provide a structured summary including, as applicable: background; objectives; data sources; study eligibility criteria, participants, and interventions; study appraisal and synthesis methods; results; limitations; conclusions and implications of key findings; funding for the systematic review; and systematic review registration number.

Example "Context: The role and dose of oral vitamin D supplementation in nonvertebral fracture prevention have not been well established.

Objective:
To estimate the effectiveness of vitamin D supplementation in preventing hip and nonvertebral fractures in older persons.

Data Sources: A systematic review of English and non-English articles using MEDLINE and the Cochrane Controlled Trials Register (1960-2005), and EMBASE (1991-2005). Additional studies were identified by contacting clinical experts and searching bibliographies and abstracts presented at the American Society for Bone and Mineral Research (1995-2004). Search terms included randomised controlled trial (RCT), controlled clinical trial, random allocation, double-blind method, cholecalciferol, ergocalciferol, 25-hydroxyvitamin D, fractures, humans, elderly, falls, and bone density.

Study Selection: Only double-blind RCTs of oral vitamin D supplementation (cholecalciferol, ergocalciferol) with or without calcium supplementation vs calcium supplementation or placebo in older persons (>60 years) that examined hip or nonvertebral fractures were included.

Data Extraction: Independent extraction of articles by 2 authors using predefined data fields, including study quality indicators.

Data Synthesis: All pooled analyses were based on random-effects models. Five RCTs for hip fracture (n=9294) and 7 RCTs for nonvertebral fracture risk (n=9820) met our inclusion criteria. All trials used cholecalciferol. Heterogeneity among studies for both hip and nonvertebral fracture prevention was observed, which disappeared after pooling RCTs with low-dose (400 IU/d) and higher-dose vitamin D (700-800 IU/d) separately. A vitamin D dose of 700 to 800 IU/d reduced the relative risk (RR) of hip fracture by 26% (3 RCTs with 5572 persons; pooled RR, 0.74; 95% confidence interval [CI], 0.61-0.88) and any nonvertebral fracture by 23% (5 RCTs with 6098 persons; pooled RR, 0.77; 95% CI, 0.68-0.87) vs calcium or placebo. No significant benefit was observed for RCTs with 400 IU/d vitamin D (2 RCTs with 3722 persons; pooled RR for hip fracture, 1.15; 95% CI, 0.88-1.50; and pooled RR for any nonvertebral fracture, 1.03; 95% CI, 0.86-1.24).

Conclusion: Oral vitamin D supplementation between 700 to 800 IU/d appears to reduce the risk of hip and any nonvertebral fractures in ambulatory or institutionalised elderly persons. An oral vitamin D dose of 400 IU/d is not sufficient for fracture prevention."

Explanation Abstracts provide key information that enables readers to understand the scope, processes, and findings of a review and to decide whether to read the full report. The abstract may be all that is readily available to a reader, for example, in a bibliographic database. The abstract should present a balanced and realistic assessment of the review's findings that mirrors, albeit briefly, the main text of the report.

We agree with others that the quality of reporting in abstracts presented at conferences and in journal publications needs improvement. While we do not uniformly favour a specific format over another, we generally recommend structured abstracts. Structured abstracts provide readers with a series of headings pertaining to the purpose, conduct, findings, and conclusions of the systematic review being reported. They give readers more complete information and facilitate finding information more easily than unstructured abstracts.
A highly structured abstract of a systematic review could include the following headings: Context (or Background); Objective (or Purpose); Data sources; Study selection (or Eligibility criteria); Study appraisal and Synthesis methods (or Data extraction and Data synthesis); Results; Limitations; and Conclusions (or Implications). Alternatively, a simpler structure could cover but collapse some of the above headings (such as labelling Study selection and Study appraisal as Review methods) or omit some headings (such as Background and Limitations).

In the highly structured abstract mentioned above, authors use the Background heading to set the context for readers and explain the importance of the review question. Under the Objectives heading, they ideally use elements of PICOS (see box 2) to state the primary objective of the review. Under a Data sources heading, they summarise sources that were searched, any language or publication type restrictions, and the start and end dates of searches. Study selection statements then ideally describe who selected studies using what inclusion criteria. Data extraction methods statements describe appraisal methods during data abstraction and the methods used to integrate or summarise the data. The Data synthesis section is where the main results of the review are reported. If the review includes meta-analyses, authors should provide numerical results with confidence intervals for the most important outcomes. Ideally, they should specify the amount of evidence in these analyses (numbers of studies and numbers of participants). Under a Limitations heading, authors might describe the most important weaknesses of included studies as well as limitations of the review process. Then authors should provide clear and balanced Conclusions that are closely linked to the objective and findings of the review. Additionally, it would be helpful if authors included some information about funding for the review. Finally, although protocol registration for systematic reviews is still not common practice, if authors have registered their review or received a registration number, we recommend providing the registration information at the end of the abstract.

Taking all the above considerations into account, the intrinsic tension between the goal of completeness of the abstract and keeping to the space limit often set by journal editors is recognised as a major challenge.

Introduction

Item 3: Rationale
Describe the rationale for the review in the context of what is already known.

Example "Reversing the trend of increasing weight for height in children has proven difficult. It is widely accepted that increasing energy expenditure and reducing energy intake form the theoretical basis for management. Therefore, interventions aiming to increase physical activity and improve diet are the foundation of efforts to prevent and treat childhood obesity. Such lifestyle interventions have been supported by recent systematic reviews, as well as by the Canadian Paediatric Society, the Royal College of Paediatrics and Child Health, and the American Academy of Pediatrics. However, these interventions are fraught with poor adherence. Thus, school-based interventions are theoretically appealing because adherence with interventions can be improved. Consequently, many local governments have enacted or are considering policies that mandate increased physical activity in schools, although the effect of such interventions on body composition has not been assessed."
Explanation Readers need to understand the rationale behind the study and what the systematic review may add to what is already known. Authors should tell readers whether their report is a new systematic review or an update of an existing one. If the review is an update, authors should state reasons for the update, including what has been added to the evidence base since the previous version of the review.

An ideal background or introduction that sets context for readers might include the following. First, authors might define the importance of the review question from different perspectives (such as public health, individual patient, or health policy). Second, authors might briefly mention the current state of knowledge and its limitations. As in the above example, information about the effects of several different interventions may be available that helps readers understand why potential relative benefits or harms of particular interventions need review. Third, authors might whet readers' appetites by clearly stating what the review aims to add. They also could discuss the extent to which the limitations of the existing evidence base may be overcome by the review.

Item 4: Objectives
Provide an explicit statement of questions being addressed with reference to participants, interventions, comparisons, outcomes, and study design (PICOS).

Example "To examine whether topical or intraluminal antibiotics reduce catheter-related bloodstream infection, we reviewed randomised, controlled trials that assessed the efficacy of these antibiotics for primary prophylaxis against catheter-related bloodstream infection and mortality compared with no antibiotic therapy in adults undergoing hemodialysis."

Explanation The questions being addressed, and the rationale for them, are one of the most critical parts of a systematic review. They should be stated precisely and explicitly so that readers can understand quickly the review's scope and the potential applicability of the review to their interests. Framing questions so that they include the following five "PICOS" components may improve the explicitness of review questions: (1) the patient population or disease being addressed (P), (2) the interventions or exposure of interest (I), (3) the comparators (C), (4) the main outcome or endpoint of interest (O), and (5) the study designs chosen (S). For more detail regarding PICOS, see box 2.

Good review questions may be narrowly focused or broad, depending on the overall objectives of the review. Sometimes broad questions might increase the applicability of the results and facilitate detection of bias, exploratory analyses, and sensitivity analyses. Whether narrowly focused or broad, precisely stated review objectives are critical as they help define other components of the review process, such as the eligibility criteria (item 6) and the search for relevant literature (items 7 and 8).

Methods

Item 5: Protocol and registration
Indicate if a review protocol exists, if and where it can be accessed (such as a web address), and, if available, provide registration information including the registration number.

Example "Methods of the analysis and inclusion criteria were specified in advance and documented in a protocol."
Explanation A protocol is important because it pre-specifies the objectives and methods of the systematic review. For instance, a protocol specifies outcomes of primary interest, how reviewers will extract information about those outcomes, and methods that reviewers might use to quantitatively summarise the outcome data (see item 13). Having a protocol can help restrict the likelihood of biased post hoc decisions in review methods, such as selective outcome reporting. Several sources provide guidance about elements to include in the protocol for a systematic review. For meta-analyses of individual patient-level data, we advise authors to describe whether a protocol was explicitly designed and whether, when, and how participating collaborators endorsed it.

Authors may modify protocols during the research, and readers should not automatically consider such modifications inappropriate. For example, legitimate modifications may extend the period of searches to include older or newer studies, broaden eligibility criteria that proved too narrow, or add analyses if the primary analyses suggest that additional ones are warranted. Authors should, however, describe the modifications and explain their rationale.

Although worthwhile protocol amendments are common, one must consider the effects that protocol modifications may have on the results of a systematic review, especially if the primary outcome is changed. Bias from selective outcome reporting in randomised trials has been well documented. An examination of 47 Cochrane reviews revealed indirect evidence for possible selective reporting bias for systematic reviews. Almost all (n=43) contained a major change, such as the addition or deletion of outcomes, between the protocol and the full publication. Whether (or to what extent) the changes reflected bias, however, was not clear. For example, it has been rather common not to describe outcomes that were not presented in any of the included studies.

Registration of a systematic review, typically with a protocol and registration number, is not yet common, but some opportunities exist. Registration may possibly reduce the risk of multiple reviews addressing the same question, reduce publication bias, and provide greater transparency when updating systematic reviews. Of note, a survey of systematic reviews indexed in Medline in November 2004 found that reports of protocol use had increased to about 46% from 8% noted in previous surveys. The improvement was due mostly to Cochrane reviews, which, by requirement, have a published protocol.

Item 6: Eligibility criteria
Specify study characteristics (such as PICOS, length of follow-up) and report characteristics (such as years considered, language, publication status) used as criteria for eligibility, giving rationale.

Examples Types of studies: "Randomised clinical trials studying the administration of hepatitis B vaccine to CRF [chronic renal failure] patients, with or without dialysis. No language, publication date, or publication status restrictions were imposed..."

Types of participants: "Participants of any age with CRF or receiving dialysis (haemodialysis or peritoneal dialysis) were considered. CRF was defined as serum creatinine greater than 200 µmol/L for a period of more than six months or individuals receiving dialysis (haemodialysis or peritoneal dialysis).
Renal transplant patients were excluded from this review as these individuals are immunosuppressed and are receiving immunosuppressant agents to prevent rejection of their transplanted organs, and they have essentially normal renal function..."

Types of intervention: "Trials comparing the beneficial and harmful effects of hepatitis B vaccines with adjuvant or cytokine co-interventions [and] trials comparing the beneficial and harmful effects of immunoglobulin prophylaxis. This review was limited to studies looking at active immunisation. Hepatitis B vaccines (plasma or recombinant (yeast) derived) of all types, dose, and regimens versus placebo, control vaccine..."

Types of outcome measures: "Primary outcome measures: Seroconversion, ie, proportion of patients with adequate anti-HBs response (>10 IU/L or Sample Ratio Units). Hepatitis B infections (as measured by hepatitis B core antigen (HBcAg) positivity or persistent HBsAg positivity), both acute and chronic. Acute (primary) HBV [hepatitis B virus] infections were defined as seroconversion to HBsAg positivity or development of IgM anti-HBc. Chronic HBV infections were defined as the persistence of HBsAg for more than six months or HBsAg positivity and liver biopsy compatible with a diagnosis of chronic hepatitis B. Secondary outcome measures: Adverse events of hepatitis B vaccinations...[and]...mortality."

Explanation Knowledge of the eligibility criteria is essential in appraising the validity, applicability, and comprehensiveness of a review. Thus, authors should unambiguously specify eligibility criteria used in the review. Carefully defined eligibility criteria inform various steps of the review methodology. They influence the development of the search strategy and serve to ensure that studies are selected in a systematic and unbiased manner.

A study may be described in multiple reports, and one report may describe multiple studies. Therefore, we separate eligibility criteria into the following two components: study characteristics and report characteristics. Both need to be reported. Study eligibility criteria are likely to include the populations, interventions, comparators, outcomes, and study designs of interest (PICOS, see box 2), as well as other study-specific elements, such as specifying a minimum length of follow-up. Authors should state whether studies will be excluded because they do not include (or report) specific outcomes, to help readers ascertain whether the systematic review may be biased as a consequence of selective reporting.

Report eligibility criteria are likely to include language of publication, publication status (such as inclusion of unpublished material and abstracts), and year of publication. Inclusion or not of non-English language literature, unpublished data, or older data can influence the effect estimates in meta-analyses. Caution may need to be exercised in including all identified studies due to potential differences in the risk of bias such as, for example, selective reporting in abstracts.

Item 7: Information sources
Describe all information sources in the search (such as databases with dates of coverage, contact with study authors to identify additional studies) and date last searched.

Example "Studies were identified by searching electronic databases, scanning reference lists of articles and consultation with experts in the field and drug companies...No limits were applied for language and foreign papers were translated.
This search was applied to Medline (1966-present), CancerLit (1975-present), and adapted for Embase (1980-present), Science Citation Index Expanded (1981-present) and Pre-Medline electronic databases. Cochrane and DARE (Database of Abstracts of Reviews of Effectiveness) databases were reviewed...The last search was run on 19 June 2001. In addition, we handsearched contents pages of Journal of Clinical Oncology 2001, European Journal of Cancer 2001 and Bone 2001, together with abstracts printed in these journals 1999-2001. A limited update literature search was performed from 19 June 2001 to 31 December 2003."

Explanation The National Library of Medicine's Medline database is one of the most comprehensive sources of healthcare information in the world. Like any database, however, its coverage is not complete and varies according to the field. Retrieval from any single database, even by an experienced searcher, may be imperfect, which is why detailed reporting is important within the systematic review.

At a minimum, for each database searched, authors should report the database, platform, or provider (such as Ovid, Dialog, PubMed) and the start and end dates for the search of each database. This information lets readers assess the currency of the review, which is important because the publication time-lag outdates the results of some reviews. This information should also make updating more efficient. Authors should also report who developed and conducted the search.

In addition to searching databases, authors should report the use of supplementary approaches to identify studies, such as hand searching of journals, checking reference lists, searching trials registries or regulatory agency websites, contacting manufacturers, or contacting authors. Authors should also report if they attempted to acquire any missing information (such as on study methods or results) from investigators or sponsors; it is useful to describe briefly who was contacted and what unpublished information was obtained.

Item 8: Search
Present the full electronic search strategy for at least one major database, including any limits used, such that it could be repeated.

Examples In text: "We used the following search terms to search all trials registers and databases: immunoglobulins; IVIG; sepsis; septic shock; septicaemia; and septicemia..."

In appendix: "Search strategy: MEDLINE (OVID)
01. immunoglobulins/
02. immunoglobulin$.tw.
03. ivig.tw.
04. 1 or 2 or 3
05. sepsis/
06. sepsis.tw.
07. septic shock/
08. septic shock.tw.
09. septicemia/
10. septicaemia.tw.
11. septicemia.tw.
12. 5 or 6 or 7 or 8 or 9 or 10 or 11
13. 4 and 12
14. randomised controlled trials/
15. randomised-controlled-trial.pt.
16. controlled-clinical-trial.pt.
..."

Table 3 | Example of assessment of the risk of bias: Quality measures of the randomised controlled trials that failed to fulfil any one of six markers of validity (concealment of randomisation; RCT stopped early; patients blinded; healthcare providers blinded; data collectors blinded; outcome assessors blinded). Adapted from Devereaux et al.

Table 4 | Example of summary results: Heterotopic ossification in trials comparing radiotherapy to non-steroidal anti-inflammatory drugs (NSAIDs) after major hip procedures and fractures. Adapted from Pakos et al.
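Item 8 above asks that the electronic search strategy be reported "such that it could be repeated." As an illustration only, the sketch below shows one way a reader might re-run an approximate version of the quoted Ovid MEDLINE strategy against PubMed through the NCBI E-utilities esearch service. The PubMed translation of the terms, the field tags, and the retmax limit are assumptions made for this example; it is not the search the review authors ran, and an exact replication would use the original Ovid interface and the full strategy.

import json
import urllib.parse
import urllib.request

# Approximate PubMed rendering of the Ovid strategy quoted above (illustrative only;
# the field tags and term choices are assumptions, not the authors' search).
term = (
    "(immunoglobulins[MeSH Terms] OR immunoglobulin*[tiab] OR ivig[tiab]) "
    "AND (sepsis[MeSH Terms] OR sepsis[tiab] OR septic shock[tiab] "
    "OR septicemia[tiab] OR septicaemia[tiab]) "
    "AND randomized controlled trial[pt]"
)

# NCBI E-utilities esearch endpoint; returns the matching PubMed IDs.
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + urllib.parse.urlencode(
    {"db": "pubmed", "term": term, "retmax": 20, "retmode": "json"}
)

with urllib.request.urlopen(url) as response:
    result = json.load(response)["esearchresult"]

# Reporting the query string, the date it was run, and the hit count is what makes
# a search repeatable for readers.
print("Query:", term)
print("Records found:", result["count"])
print("First PubMed IDs:", ", ".join(result["idlist"]))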
Box 1 | Terminology
The terminology used to describe systematic reviews and meta-analyses has evolved over time and varies between fields. Different terms have been used by different groups, such as educators and psychologists. The conduct of a systematic review comprises several explicit and reproducible steps, such as identifying all likely relevant records, selecting eligible studies, assessing the risk of bias, extracting data, qualitative synthesis of the included studies, and possibly meta-analyses.

Initially this entire process was termed a meta-analysis and was so defined in the QUOROM statement. More recently, especially in healthcare research, there has been a trend towards preferring the term systematic review. If quantitative synthesis is performed, this last stage alone is referred to as a meta-analysis. The Cochrane Collaboration uses this terminology, under which a meta-analysis, if performed, is a component of a systematic review. Regardless of the question addressed and the complexities involved, it is always possible to complete a systematic review of existing data, but not always possible, or desirable, to quantitatively synthesise results because of clinical, methodological, or statistical differences across the included studies. Conversely, with prospective accumulation of studies and datasets where the plan is eventually to combine them, the term "(prospective) meta-analysis" may make more sense than "systematic review."

For retrospective efforts, one possibility is to use the term systematic review for the whole process up to the point when one decides whether to perform a quantitative synthesis. If a quantitative synthesis is performed, some researchers refer to this as a meta-analysis. This definition is similar to that found in the current edition of the Dictionary of Epidemiology.

While we recognise that the use of these terms is inconsistent and there is residual disagreement among the members of the panel working on PRISMA, we have adopted the definitions used by the Cochrane Collaboration.

Systematic review A systematic review attempts to collate all empirical evidence that fits pre-specified eligibility criteria to answer a specific research question. It uses explicit, systematic methods that are selected with a view to minimising bias, thus providing reliable findings from which conclusions can be drawn and decisions made. The key characteristics of a systematic review are (a) a clearly stated set of objectives with an explicit, reproducible methodology; (b) a systematic search that attempts to identify all studies that would meet the eligibility criteria; (c) an assessment of the validity of the findings of the included studies, such as through the assessment of risk of bias; and (d) systematic presentation and synthesis of the characteristics and findings of the included studies.

Meta-analysis Meta-analysis is the use of statistical techniques to integrate and summarise the results of included studies. Many systematic reviews contain meta-analyses, but not all. By combining information from all relevant studies, meta-analyses can provide more precise estimates of the effects of healthcare than those derived from the individual studies included within a review.
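To give a concrete sense of what the "statistical techniques" in the meta-analysis definition above involve, here is a minimal sketch of fixed-effect inverse-variance pooling of relative risks. The study names, relative risks, and confidence intervals are invented for illustration; this is not data from any review cited in this paper, and a real meta-analysis would normally be run in dedicated software.

import math

# Hypothetical study results: relative risk and 95% confidence interval.
studies = [
    {"name": "Trial A", "rr": 0.74, "ci": (0.55, 0.99)},
    {"name": "Trial B", "rr": 0.86, "ci": (0.63, 1.17)},
    {"name": "Trial C", "rr": 0.69, "ci": (0.48, 0.98)},
]

def fixed_effect_pooled_rr(studies):
    """Inverse-variance fixed-effect pooling on the log relative risk scale."""
    weights, effects = [], []
    for s in studies:
        log_rr = math.log(s["rr"])
        lo, hi = s["ci"]
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from the CI width
        weights.append(1 / se ** 2)  # weight = 1 / variance
        effects.append(log_rr)
    pooled_log = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    ci = (math.exp(pooled_log - 1.96 * pooled_se),
          math.exp(pooled_log + 1.96 * pooled_se))
    return math.exp(pooled_log), ci

rr, (lo, hi) = fixed_effect_pooled_rr(studies)
print(f"Pooled RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")

Box 6 below discusses when a random-effects model may be preferred; a companion sketch covering heterogeneity statistics and random-effects pooling follows fig 4.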
Box 2 | Helping to develop the research question(s): the PICOS approach
Formulating relevant and precise questions that can be answered in a systematic review can be complex and time consuming. A structured approach for framing questions that uses five components may help facilitate the process. This approach is commonly known by the acronym "PICOS" where each letter refers to a component: the patient population or the disease being addressed (P), the interventions or exposure (I), the comparator group (C), the outcome or endpoint (O), and the study design chosen (S). Issues relating to PICOS affect several PRISMA items (items 6, 8, 9, 10, 11, and 18).

• P: Providing information about the population requires precise definition of a group of participants (often patients), such as men over the age of 65 years, their defining characteristics of interest (often disease), and possibly the setting of care considered, such as an acute care hospital.
• I: The interventions (exposures) under consideration in the systematic review need to be transparently reported. For example, if the reviewers answer a question regarding the association between a woman's prenatal exposure to folic acid and subsequent offspring's neural tube defects, reporting the dose, frequency, and duration of folic acid used in different studies is likely to be important for readers to interpret the review's results and conclusions. Other interventions (exposures) might include diagnostic, preventive, or therapeutic treatments; arrangements of specific processes of care; lifestyle changes; psychosocial or educational interventions; or risk factors.
• C: Clearly reporting the comparator (control) group intervention(s), such as usual care, drug, or placebo, is essential for readers to fully understand the selection criteria of primary studies included in the systematic review, and might be a source of heterogeneity investigators have to deal with. Comparators are often poorly described. Clearly reporting what the intervention is compared with is important and may sometimes have implications for the inclusion of studies in a review; many reviews compare with "standard care," which is otherwise undefined; this should be properly addressed by authors.
• O: The outcomes of the intervention being assessed, such as mortality, morbidity, symptoms, or quality of life improvements, should be clearly specified as they are required to interpret the validity and generalisability of the systematic review's results.
• S: Finally, the type of study design(s) included in the review should be reported. Some reviews include only reports of randomised trials, whereas others have broader design criteria and include randomised trials and certain types of observational studies. Still other reviews, such as those specifically answering questions related to harms, may include a wide variety of designs ranging from cohort studies to case reports. Whatever study designs are included in the review, these should be reported.

Independently from how difficult it is to identify the components of the research question, the important point is that a structured approach is preferable, and this extends beyond systematic reviews of effectiveness. Ideally the PICOS criteria should be formulated a priori, in the systematic review's protocol, although some revisions might be required because of the iterative nature of the review process. Authors are encouraged to report their PICOS criteria and whether any modifications were made during the review process.
A useful example in this realm is the appendix of the "systematic reviews of water fluoridation" undertaken by the Centre for Reviews and Dissemination.

Box 3 | Identification of study reports and data extraction
Comprehensive searches usually result in a large number of identified records, a much smaller number of studies included in the systematic review, and even fewer of these studies included in any meta-analyses. Reports of systematic reviews often provide little detail as to the methods used by the review team in this process. Readers are often left with what can be described as the "X files" phenomenon, as it is unclear what occurs between the initial set of identified records and those finally included in the review.

Sometimes, review authors simply report the number of included studies; more often they report the initial number of identified records and the number of included studies. Rarely, although this is optimal for readers, do review authors report the number of identified records, the smaller number of potentially relevant studies, and the even smaller number of included studies, by outcome. Review authors also need to differentiate between the number of reports and studies. Often there will not be a 1:1 ratio of reports to studies and this information needs to be described in the systematic review report.

Ideally, the identification of study reports should be reported as text in combination with use of the PRISMA flow diagram. While we recommend use of the flow diagram, a small number of reviews might be particularly simple and can be sufficiently described with a few brief sentences of text. More generally, review authors will need to report the process used for each step: screening the identified records; examining the full text of potentially relevant studies (and reporting the number that could not be obtained); and applying eligibility criteria to select the included studies.

Such descriptions should also detail how potentially eligible records were promoted to the next stage of the review (such as full-text screening) and to the final stage of this process, the included studies. Often review teams have three response options for excluding records or promoting them to the next stage of the winnowing process: "yes," "no," and "maybe."

Similarly, some detail should be reported on who participated and how such processes were completed. For example, a single person may screen the identified records while a second person independently examines a small sample of them. The entire winnowing process is one of "good bookkeeping" whereby interested readers should be able to work backwards from the included studies to come up with the same numbers of identified records.

There is often a paucity of information describing the data extraction processes in reports of systematic reviews. Authors may simply report that "relevant" data were extracted from each included study with little information about the processes used for data extraction. It may be useful for readers to know whether a systematic review's authors developed, a priori or not, a data extraction form, whether multiple forms were used, the number of questions, whether the form was pilot tested, and who completed the extraction. For example, it is important for readers to know whether one or more people extracted data, and if so, whether this was completed independently, whether "consensus" data were used in the analyses, and whether the review team completed an informal training exercise or a more formal reliability exercise.
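To make the "good bookkeeping" described in box 3 concrete, the sketch below tallies hypothetical screening decisions into the counts that feed a PRISMA-style flow diagram (records identified, screened, assessed in full text, and studies included). The field names, decision labels, and example records are invented for illustration and are not part of PRISMA itself.

from collections import Counter

# Each record carries the decision made at each winnowing stage.
# Decisions are "yes", "no", or "maybe" (maybe = promote and re-check at the next stage).
records = [
    {"id": "rec-001", "duplicate": False, "title_abstract": "yes",   "full_text": "yes"},
    {"id": "rec-002", "duplicate": False, "title_abstract": "no",    "full_text": None},
    {"id": "rec-003", "duplicate": True,  "title_abstract": None,    "full_text": None},
    {"id": "rec-004", "duplicate": False, "title_abstract": "maybe", "full_text": "no"},
]

def flow_counts(records):
    """Return the numbers usually reported in a PRISMA flow diagram."""
    identified = len(records)
    screened = [r for r in records if not r["duplicate"]]
    # "yes" and "maybe" records are promoted to full-text assessment.
    full_text = [r for r in screened if r["title_abstract"] in ("yes", "maybe")]
    included = [r for r in full_text if r["full_text"] == "yes"]
    decisions = Counter(r["title_abstract"] for r in screened)
    return {
        "records identified": identified,
        "records screened after duplicates removed": len(screened),
        "full-text articles assessed for eligibility": len(full_text),
        "studies included": len(included),
        "title/abstract decisions": dict(decisions),
    }

for stage, n in flow_counts(records).items():
    print(f"{stage}: {n}")

Keeping the per-record decisions, rather than only the totals, is what lets readers work backwards from the included studies to the same numbers of identified records, as box 3 recommends.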
"OF Se PaUsHANd ISHN -rN RESEARCH METHODS & REPORTING Box 4 | Study quality and risk of bias inthis pape, and elsewhere," we soughtto usea new term for many eaders, namely, sk of las, for evaluating each included study ina systematic review. Previous papers!" tendedto use the term “quality.” When carving outa systematic review we believe its important to distinguish between qualtyand riskofbiasandto focus on evaluating and reporing the latter. Quality is often the best the authors have ben able todo, For example, authorsmay report the results of surgical alsin which blinding ofthe outcome assessors was not part ofthe trials conduct, Eventhough this may have been the bestmethodology ‘the researchers were able todo, ther are sil theoretical grounds fr believing thatthe study was susceptibleto (rskof) bias. ‘Assessing the isko bias shuld be part ofthe conduct and reporting of any systematic review. inal situations we encourage systematic reviewers to think ahead carefully about what risks ofbias (methodological and clinica) may have abearing on the results oftheir systematic reviews For systematic reviewers, understanding the riskfbias onthe results of studiesis often dificult, because the reports onl 2 surogate ofthe actual conduct ofthe study Thre some suggestion that the report may not bea reasonable facsimile ofthe study, although ths view snot shared byall*2 There are three main ways to assess risk ofbias—indvidual eomponents, checklists and scales, There ae a great many scales available, although we caution against thelr use based on theoretical grounds” and emerging empirical evidence.™ Checklists are ess frequently used and potentiallyhave the same problems as scales. We advocate using a component approach and one thats based on damains for which theres good empirical evidence and perhaps strong clinical rounds. The new Cochrane risk af bas tool isone such camponent approach. ‘The Cochrane risk bias tool consists of five items for which there is empirical evidence fr their biasing influence on the estimates ofan nterention's efflectivenessin randomised tals (sequence generation, allocation concealment, blinding, Incomplete outcome data, and selective outcome reporting) and acatcallitem called “other sources bias” Theres also some consensus that these items canbe applied for evaluation of studies across diverse clinical area.” Otherriskof bias items maybe topic or even study specific—thatis, they may stem rom some pecularty ofthe research topic orsome special feature of the design ofa speciicstuy. These peculiarities need tobe investigated ona case-by-case bass, based on clinical and methodologicalacumen, and there can bbenogeneral recipe nal situations, systematic reviewers need to hinkahead carefully about what aspects of study quality may havea bearing onthe results. BoxS |; Whether to combine data Deciding whetherto combine data involves statistical clinical, and methodological considerations. The statistical decisions are perhaps the most technical and evidence-based. These are more thoroughly discussed in box6. The clinical and methodological decision are generally based on discussions within the review team and may be more subjective. nical considerations wll beinfluenced by the question the eviews attempting to address. Broad questions might provide more “license” to combine ‘more disparate studies, suchas whether “Ritalin s effective inincreasing focused attention in people diagnosed with attention deficit hyperactivity disorder (ADHO)." 
Here authors might elect to combine reports of studies involving children and adults. If the clinical question is more focused, such as whether "Ritalin is effective in increasing classroom attention in previously undiagnosed ADHD children who have no comorbid conditions," it is likely that different decisions regarding synthesis of studies are taken by authors. In any case authors should describe their clinical decisions in the systematic review report.

Deciding whether to combine data also has a methodological component. Reviewers may decide not to combine studies of low risk of bias with those of high risk of bias (see items 12 and 19). For example, for subjective outcomes, systematic review authors may not wish to combine assessments that were completed under blind conditions with those that were not.

For any particular question there may not be a "right" or "wrong" choice concerning synthesis, as such decisions are likely complex. However, as the choice may be subjective, authors should be transparent as to their key decisions and describe them for readers.

Box 6 | Meta-analysis and assessment of consistency (heterogeneity)
Meta-analysis: statistical combination of the results of multiple studies
If it is felt that studies should have their results combined statistically, other issues must be considered because there are many ways to conduct a meta-analysis. Different effect measures can be used for both binary and continuous outcomes (see item 13). Also, there are two commonly used statistical models for combining data in a meta-analysis. The fixed-effect model assumes that there is a common treatment effect for all included studies; it is assumed that the observed differences in results across studies reflect random variation. The random-effects model assumes that there is no common treatment effect for all included studies but rather that the variation of the effects across studies follows a particular distribution. In a random-effects model it is believed that the included studies represent a random sample from a larger population of studies addressing the question of interest.

There is no consensus about whether to use fixed- or random-effects models, and both are in wide use. The following differences have influenced some researchers regarding their choice between them. The random-effects model gives more weight to the results of smaller trials than does the fixed-effect analysis, which may be undesirable as small trials may be inferior and most prone to publication bias. The fixed-effect model considers only within-study variability, whereas the random-effects model considers both within- and between-study variability. This is why a fixed-effect analysis tends to give narrower confidence intervals (that is, provides greater precision) than a random-effects analysis. In the absence of any between-study heterogeneity, the fixed- and random-effects estimates will coincide.

In addition, there are different methods for performing both types of meta-analysis. Common fixed-effect approaches are Mantel-Haenszel and inverse variance, whereas random-effects analyses usually use the DerSimonian and Laird approach, although other methods exist, including Bayesian meta-analysis (a minimal computational sketch of these quantities appears after fig 4 below).

In the presence of demonstrable between-study heterogeneity (see below), some consider that the use of a fixed-effect analysis is counterintuitive because their main assumption is violated. Others argue that it is inappropriate to conduct any meta-analysis when there is unexplained variability across trial results.
If the reviewers decide not to combine the data quantitatively, a danger is that eventually they may end up using quasi-quantitative rules of poor validity (such as vote counting of how many studies have nominally significant results) for interpreting the evidence. Statistical methods to combine data exist for almost any complex situation that may arise in a systematic review, but one has to be aware of their assumptions and limitations to avoid misapplying or misinterpreting these methods.

Assessment of consistency (heterogeneity)
We expect some variation (inconsistency) in the results of different studies due to chance alone. Variability in excess of that due to chance reflects true differences in the results of the trials, and is called "heterogeneity." The conventional statistical approach to evaluating heterogeneity is a chi-squared test (Cochran's Q), but it has low power when there are few studies and excessive power when there are many studies. By contrast, the I² statistic quantifies the amount of variation in results across studies beyond that expected by chance and so is preferable to Q. I² represents the percentage of the total variation in estimated effects across studies that is due to heterogeneity rather than to chance; some authors consider an I² value less than 25% as low. However, I² also suffers from large uncertainty in the common situation where only a few studies are available, and reporting the uncertainty in I² (such as a 95% confidence interval) may be helpful. When there are few studies, inferences about heterogeneity should be cautious.

When considerable heterogeneity is observed, it is advisable to consider possible reasons. In particular, the heterogeneity may be due to differences between subgroups of studies (see item 16). Also, data extraction errors are a common cause of substantial heterogeneity in results with continuous outcomes.

Box 7 | Bias caused by selective publication of studies or results within studies
Systematic reviews aim to incorporate information from all relevant studies. The absence of information from some studies may pose a serious threat to the validity of a review. Data may be incomplete because some studies were not published, or because of incomplete or inadequate reporting within published articles. These problems are often summarised as "publication bias," although the bias arises from non-publication of full studies and selective publication of results in relation to their findings. Non-publication of research findings dependent on the actual results is an important risk of bias to a systematic review and meta-analysis.

Missing studies
Several empirical investigations have shown that the findings from clinical trials are more likely to be published if the results are statistically significant (P<0.05) than if they are not. For example, of 500 oncology trials with more than 200 participants for which preliminary results were presented at a conference of the American Society of Clinical Oncology, 81% with P<0.05 were published in full within five years compared with only 68% of those with P>0.05.

Also, among published studies, those with statistically significant results are published sooner than those with non-significant findings. When some studies are missing for these reasons, the available results will be biased towards exaggerating the effect of an intervention.

Missing outcomes
In many systematic reviews only some of the eligible studies (often a minority) can be included in a meta-analysis for a specific outcome.
For some studies, the ‘outcome may not be measured or may be measured but noteported. The former will ot eadt bias, but the latter could Evidence s accumulating that selective reporting bas is widespread and of considerable importance.” in addition, data foragven outcome maybe analysed in multiple ways andthe choice of presentation influenced by the results obtained. na study of 102 randomised tras, comparison of published reports with rial protocols showed that a median of 38% efficacy and 50% safety outcomes per tral respectively, were not avaliable formeta-anayss Statistically significant outcomes had higher odds of being fly reported in publications when compared with non-significant outcomes for both efcacy (Gooled odds ratio 2.4 (95% confidence interval 1.40 4.0) and safety 6.7 (1.8 o 12) data, Several other studies have had similar findings.”°"* [Missing studies may increasingly be identified from trials reistres. Evidence of missing outcomes may come from comparison with the study protacel, available, orby careful examination ofpublished articles." Study publication bias and selective outcome reporting are dificult to exclude or very from the avalable results, especially when few studies are avaliable. ifthe available data ae affected by either (orbatt) of the above biases, smaller studies would tend show lager estimates ofthe effets of the intervention. Thus one possiblity isto investigate the elation between effect size and sample size or more specially, precision ofthe effect estimate). Graphical methods, especialy the funnel plot2” and analytic methods (uch as Eggers tes) are often used,” although their interpretation canbe problematic "27 Stctly speaking, such analyses investigate “small study bias"; there may be many reasons why smaller studies havesystematcally diferent effect sizes than larger ‘des, of which reporting bias just one2**Severa alternative tess for bias have also been proposed, beyond the anes testing small study ias,2™ but none an be considered a gold standard. Although evidence that smaller studies had large estimated effect than large ones may suggest the possibility that the available evidences biased, misiterpretaion of such datas common. tieation erature search TTT (RAST Databases: PubMed, EMBASE. and the Cochrane bray, een] Comet ieee Meeting absvacts: VEG, DOW, and Intemational Workshop ofthe European Melcobcter Study Group Seneing int engahrguge aero Noses ster ps enord i Sean combed 139) hoctmcaisunened f+ Noord V Ae nna ont suginy 4 ee seated pen assessed for eligibility [™ excluded, with reasons 'Not first line eradication therapy (n—=44) — a tc Notreconmended dose (1-6) ‘No of tudes incued in qultatve sates Matte pubteations (r=3) No of studies nduded in quanttative sess (meta-analsi) Included (27) 1 Fig | Flow ofiformation through the different phases of — roe tne systematic review. eloed 6) Not fst ine eradcaton therapy (7-2) f+ elcobcter yor status neared evaluated (2) Results provided anyon per protocol bass (=I) Notrecommended dose (1) Includes (m2) caine a ee 7 day v 10 day therapy(o=i0) | ‘aaytherapy res) | teraoyines) | Fig2 [Example iow diagram of study selection. DOW estive Disease Week; UEGW = United European Gastroenterology Week. 
Fig 3 | Example of summary results: Overall failure (defined as failure of assigned regimen or relapse) with tetracycline-rifampicin versus tetracycline-streptomycin. Adapted from Skalsky et al.

Fig 4 | Example of funnel plot showing evidence of considerable asymmetry. SE=standard error. Adapted from Appleton et al.
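To make the quantities discussed in box 6 (Cochran's Q, I², and the DerSimonian-Laird random-effects estimate) and the funnel plot asymmetry idea of box 7 and fig 4 concrete, here is a minimal sketch on invented study data, continuing the fixed-effect example given after box 1. The per-study log relative risks and standard errors are assumptions for illustration; this is not the analysis behind any of the figures above, and the Egger-style regression shown here reports only the intercept, without the formal significance test.

import math

# Hypothetical per-study effects (log relative risks) and their standard errors.
effects = [-0.30, -0.15, -0.45, -0.05, -0.25]
ses = [0.10, 0.18, 0.22, 0.12, 0.30]

# Fixed-effect (inverse-variance) pooled estimate.
w = [1 / se ** 2 for se in ses]
pooled_fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)

# Cochran's Q and the I^2 statistic for inconsistency (box 6).
q = sum(wi * (yi - pooled_fixed) ** 2 for wi, yi in zip(w, effects))
df = len(effects) - 1
i_squared = 100 * max(0.0, (q - df) / q) if q > 0 else 0.0

# DerSimonian-Laird estimate of the between-study variance (tau^2),
# then the random-effects pooled estimate with the updated weights.
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)
w_re = [1 / (se ** 2 + tau2) for se in ses]
pooled_re = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))

# Egger-style regression for small-study effects (box 7 and fig 4): regress the
# standardised effect (effect/SE) on precision (1/SE) by ordinary least squares.
# An intercept far from zero suggests asymmetry; a formal test would assess its significance.
x = [1 / se for se in ses]
y = [e / se for e, se in zip(effects, ses)]
mx, my = sum(x) / len(x), sum(y) / len(y)
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
intercept = my - slope * mx

print(f"Fixed-effect estimate (log RR): {pooled_fixed:.3f}")
print(f"Cochran's Q = {q:.2f} on {df} df, I^2 = {i_squared:.0f}%, tau^2 = {tau2:.3f}")
print(f"Random-effects estimate: {pooled_re:.3f} (SE {se_re:.3f})")
print(f"Egger regression intercept: {intercept:.2f}")

In practice, review authors would report which model, estimator, and software were used (item 14) rather than hand-rolled code; the sketch is only meant to show what the reported statistics represent.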
