Article

Developing and Testing a Chart Abstraction Tool for ICU Quality Measurement

American Journal of Medical Quality 1-7
© The Author(s) 2018
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/1062860618800596
ajmq.sagepub.com

Jarone Lee, MD, MPH1, J. Matthew Austin, PhD2, Jungyeon Kim, PhD, MBA3,
Paola D. Miralles, BS4, Haytham M. A. Kaafarani, MD, MPH1,
Peter J. Pronovost, MD, PhD, FCCM5, Vipra Ghimire, MA, MPH2,
Sean M. Berenholtz, MD, MHS, FCCM2, Karen Donelan, ScD, EdM1,
and Elizabeth Martinez, MD, MHS1†
Abstract
Quality measures are increasingly used to measure the performance of providers, hospitals, and health care systems.
Intensive care units (ICUs) are an important clinical area in hospitals, given that they generate high costs and present
high risks to patients. Yet, currently, few valid and clinically significant ICU-specific outcome measures are reported
nationally. This study reports on the creation and evaluation of new abstraction tools that evaluate ICU patients
for the following clinically important outcomes: central line–associated bloodstream infection, methicillin-resistant
Staphylococcus aureus, gastrointestinal bleed, and pressure ulcer. To allow ICUs and institutions to compare their
outcomes, the tools include risk-adjustment variables that can be abstracted from the chart.

Keywords
intensive care unit, gastrointestinal bleeding, central line–associated bloodstream infection, pressure ulcer, methicillin-resistant Staphylococcus aureus

The intensive care unit (ICU) is a large and growing component of the health care system, consuming approximately $81 billion, or 13.4% of US health care dollars, annually.1 Despite the important role the ICU plays in providing lifesaving interventions for patients at imminent risk of death, patient outcomes vary considerably, and there is no consistent relationship between cost and the quality of care delivered.2,3 Though there has been much progress in measuring the safety of patient care delivered in various types of health care settings, very few valid or clinically significant ICU-specific outcome measures currently exist among nationally reported quality measures.

Some health care stakeholders (such as consumers, payers, and policy makers) have historically preferred outcome measures to process measures in measuring the quality of care, as there often is limited evidence linking process measures to improved patient outcomes (eg, reduced rates of preventable harms).4,5 Valid outcome measures can provide important information that clinicians can use to improve care. Traditionally, developing outcome measures has required extensive resources to manually abstract clinical data from paper medical records. In addition, achieving consensus on measures can be unwieldy in a dynamic and complex health care system.

Over the past decade, researchers at the Massachusetts General Hospital (MGH) and Johns Hopkins Medicine (JHM) embarked on an effort to develop and validate quality outcome measures for ICUs. The first phase of the work began with an extensive review of extant measures; through an iterative process, 9 quality measures were identified that are important to the ICU.6,7 In the second phase of the work, expert reviews and a national clinician survey were used to assess those measures for their utility in health care quality improvement efforts, specifically the degree to which indications for each of the 9 measures could be identified through chart abstraction and were potentially preventable in the ICU setting.7 This article reports the final phase of this work, wherein 4 ICU-related measures are defined and a new clinical abstraction tool is piloted.

1Massachusetts General Hospital/Harvard Medical School, Boston, MA
2Johns Hopkins University, Baltimore, MD
3Harvard University School of Public Health, Boston, MA
4Beth Israel Deaconess Medical Center, Boston, MA
5United Healthcare, Minnetonka, MN
†Deceased.

Corresponding Author:
Jarone Lee, MD, MPH, Departments of Surgery and Emergency Medicine, Massachusetts General Hospital/Harvard Medical School, 165 Cambridge Street, Suite 810, Boston, MA 02114.
Email: jarone.lee@mgh.harvard.edu

Methods

This study was approved by the institutional review boards at both MGH and JHM.

Delphi Process to Determine Quality Measures and Definitions

As previously described and published, ICU physicians and nurses were surveyed on possible ICU-related harms and conditions.7 Based on their ratings across 4 dimensions (impact on care, preventability, feasibility, and importance), 5 harms were identified as being most favorable for measure development: central line–associated bloodstream infection (CLABSI), methicillin-resistant Staphylococcus aureus (MRSA), gastrointestinal bleed (GIB), pressure ulcer (PU), and pulmonary embolism (PE).7

Initial definitions and risk factors for these 5 quality measures were created by local experts at MGH and JHM. Twenty-three expert ICU physicians and nurses were identified and participated in one of two 120-minute video conferences to discuss the definitions, calculations, risk factors, and risk adjustment for each measure. Risk adjustment options included a universal score (eg, Acute Physiology and Chronic Health Evaluation [APACHE], Sequential Organ Failure Assessment [SOFA], Elixhauser and Charlson Comorbidity Indexes), measure-specific outcomes, or a combination of a universal score with measure-specific outcomes.8,9 Participants rated the importance of each risk factor for the measure using a 5-point Likert-type scale (strongly agree, agree, neither agree nor disagree, disagree, strongly disagree). A predetermined threshold of 75% or greater agreement (combining strongly agree and agree) was required for consensus on each risk factor. Each participant received a $100 honorarium.

Based on the responses and the 75% agreement rule, CLABSI, PU, GIB, and MRSA were rated highest across the 4 categories of impact on care, preventability, feasibility, and importance. The PE measure was determined by experts to be important but not preventable or likely to improve care. The 4 measures and the risk factors deemed important by the participants were used to create a chart abstraction tool; it was pilot tested by 2 ICU physicians. In addition to the core data elements needed to measure the 4 outcomes, additional variables were included to aid risk adjustment and improve attribution of the quality issue to the specific ICU stay.

In terms of using a specific risk adjuster, the SOFA score was selected for accuracy and feasibility, as it requires the collection of only 6 data elements, compared with other scores (eg, APACHE II) that require 16 elements.8

Piloting the Abstraction Tool

An abstraction tool was developed and pilot tested on patient cases from all the adult ICUs at both sites (MGH and JHM). The ICUs included general medical and surgical ICUs, as well as specialized ICUs that admitted dedicated neurologic and cardiovascular patients. The goals of the pilot test were the following: (1) understand the agreement between administrative billing data and the judgment of a clinical expert based on abstracted data, (2) understand the interrater reliability of the data abstraction tool across multiple abstractors at the same site, (3) conduct a qualitative study of the time required to complete the data abstraction tool and the difficulty posed by the tool, and (4) recommend modifications to the tool based on the abstraction experience and review by the physicians.

Sample Size Estimate. The sample size of charts to be abstracted at each site was based on the number of charts needed to assess interrater reliability of the data abstraction tool. Using a 5% incidence of disease, based on an average estimate of the 3 most common outcome measures, an estimated sample size of ⩾186 records total (N ⩾ 93 at each institution) was needed to test a κ = 0.8 (80% agreement beyond chance) against κ = 0.6, with 5% type I error and 80% power. Because the incidence of some complications may be less than 5%, 30 charts were added at each site. As such, a total sample size of 245 was calculated, with 186 in the final assessment of the data abstraction tool.

Case and Control Selection. The study team conducted a review of 186 medical records from all adult ICUs at MGH and JHM. The team included all records of patients who had a billing code for one of the 4 outcome measures between January 1, 2010, and December 31, 2012. Because cases were not distributed evenly across that time period and the study team wanted equal representation of newer and older cases, the sample was divided into 6-month blocks, and the team randomly selected equal numbers of cases and controls within each block. Specifically, International Classification of Diseases, Ninth Revision, Clinical Modification codes were used for each outcome: CLABSI (996.62 and 999.31), GIB (531, 569.3, 578.0, 578.1, 578.9, 772.4, and 792.1), MRSA (38.13, 41.12, 482.42, V02.54, and V12.04), and PU (707.23, 707.24, and 707.25).

The study team used known billing codes and hospital billing data from each site to screen for cases in which a patient had an ICU length of stay (LOS) ⩾3 days and at least 1 of the 4 adverse outcomes. For controls, the team screened each site for patients with an ICU LOS ⩾3 days and none of the 4 adverse outcomes.

Chart Abstraction Method. Two data abstractors at each site accessed the full patient chart (electronic, paper) and reviewed it for key elements using the data abstraction tool. At MGH, 2 nurse abstractors and a physician reviewed the completed tool and chart for each patient for final determination of whether 1 or more of the 4 outcomes was present. At JHM, 2 physicians, a nurse, and 2 epidemiologists performed the same review.

Data were entered and managed using Research Electronic Data Capture (REDCap), an electronic data capture tool hosted at Partners HealthCare.10 REDCap is a secure, web-based application designed to support data capture for research studies, providing (1) an intuitive interface for validated data entry; (2) audit trails for tracking data manipulation and export procedures; (3) automated export procedures for seamless data downloads to common statistical packages; and (4) procedures for importing data from external sources.

Feedback on the Abstraction Tool. The study team debriefed with all the abstractors to assess the time needed for, and the difficulty of, capturing the data elements required in the tool. During the debriefing, the focus was on the following aspects of tool use: the source of the data element (electronic, paper, nursing notes, physician notes, or other), the time it took the abstractor to locate the required data element, and elements that could not be found in the chart. A subjective assessment also was conducted of the overall ease and feasibility of abstracting each chart, using a Likert-type rating scale (1-5) to assess the difficulty in finding the data elements and whether the abstractor needed to make a subjective judgment while performing the chart review.

Abstractors were asked to rank the clarity of the instructions for the different sections of the tool using a Likert-type scale with the following categories: very clear, somewhat clear, not very clear, and not at all clear. Similarly, abstractors were asked to compare the different sections within the tool for difficulty in completing: more difficult to complete, about the same, and less difficult to complete. Abstractors also were asked for specific comments about their experience with the tool and the abstraction process.

Data Analysis

The distribution of the abstracted charts was analyzed by sex, age, hospital LOS, and ICU LOS. Interrater reliability of each question in the data abstraction tool was calculated using percentage agreement and the kappa statistic.

Each question in the abstraction tool was categorized as Good Agreement, Poor Agreement, Blank, or Conflict Agreement based on a comparison of the calculated κ statistic at each site. For Good Agreement, both sites had to have κs in the fair-to-almost-perfect categories (κ = 0.21-1.00). For Poor Agreement, both sites had to have κs in the poor-to-slight categories (κ < 0.21). Blank questions were those that both sites left without an answer. Last, Conflict Agreement questions were those for which the 2 sites differed in their κ categories.

Next, the study team analyzed the agreement between the diagnosis from the billing data and the diagnosis identified by an external reviewer (eg, physician, epidemiologist). For each outcome, the team compared how often the outcome identified in the billing data was also identified by expert chart review.

Results

Descriptive Statistics

Overall, 186 charts were abstracted: 93 cases and 93 controls (Table 1). More males than females were included in the study, and the majority of patients were older than 65 years of age (50.6%). Average hospital LOS was 21.9 days, and average ICU LOS was 9.2 days. Although hospital LOS and ICU LOS were similar at both sites, patients at site 1 were older.

Table 1. Basic Demographics.

ICU Characteristics       Site 1 (ICUs/Beds)   Site 2 (ICUs/Beds)   Total (ICUs/Beds)
  Total adult ICUs        6 / 112              6 / 113              12 / 225
  Medical ICU             1 / 18               1 / 24               2 / 42
  Surgical ICU            1 / 20               2 / 35               3 / 55
  Cardiovascular ICUs     2 / 34               2 / 30               4 / 64
  Neurologic ICU          1 / 22               1 / 24               2 / 46
  Mixed ICU               1 / 18               0 / 0                1 / 18

Patient Characteristics   Site 1, n (%)   Site 2, n (%)   Total, n (%)
Sample size               93              93              186
Sex
  Male                    51 (54.8%)      50 (53.8%)      101 (54.3%)
  Female                  41 (44.1%)      38 (40.9%)      79 (42.5%)
  Missing                 1 (1.1%)        5 (5.4%)        6 (3.2%)
Age
  18-44 years             10 (10.8%)      13 (14.0%)      23 (12.4%)
  45-64 years             24 (24.7%)      30 (32.3%)      54 (29.0%)
  65 years and older      35 (37.6%)      43 (46.2%)      78 (41.9%)
  Missing                 24 (26.9%)      7 (7.5%)        31 (16.7%)
Length of stay
  Mean hospital (days)    22.0            21.7            21.9
  Mean ICU (days)         8.9             9.5             9.2

Abbreviation: ICU, intensive care unit.

Interrater Agreement

Interrater agreement within site was congruent for most questions in the abstraction tool, with similar agreement (Good or Poor) at both sites on 90 of the 117 questions (76.9%; Table 2). The breakdown of agreement by category is shown in Table 2.

Table 2. Summary of κ Agreement.

                                  Site 1, n (%)   Site 2, n (%)   Equal Agreement at Both Sites, n (%)
Good agreement                    81 (69.2%)      84 (71.8%)      73 (62.4%)
  Almost perfect to substantial   46              52              53
  Moderate to fair                35              32              20
Poor agreement                    21 (17.9%)      14 (12.0%)      17 (14.5%)
  Slight to poor                  9               5               7
  Expected < actual               12              9               10
Blank                             15 (12.8%)      19 (16.2%)      10 (8.5%)
Conflict agreement                n/a             n/a             17 (14.5%)
Total questions                   117 (100.0%)    117 (100.0%)    117 (100.0%)

Certain sections of the abstraction tool had a majority of κs in the Good Agreement category (Table 3). Good agreement sections included Demographics (Questions 1-15), Laboratory and Microbiology Results (Questions 19-24), and Medications (Questions 25-30). The questions that had Poor Agreement at both sites focused on specific times and sites of events, the weight of the patient, and subsequent events, such as whether the patient had a second intubation or PU. Similarly, the questions that were left blank were most likely the subsequent-event questions, such as questions about the third and fourth intubation/ulcers. There also was Poor Agreement at both sites on the data sources (paper vs electronic) used to extract the information for all sections (Questions 12, 18, 21, 24, 30, 35, and 40).

Comparison of Billing Discharge Diagnosis and Physician Chart Review

Table 4 shows the relationship between the diagnosis from the discharge/billing data and the diagnosis identified after external review. Overall, less than 14% of the 22 CLABSI cases identified by billing data were confirmed as CLABSI upon chart abstraction and external chart review. Similarly, PU and GIB had poor agreement between billing data and chart review, with many cases deemed "unable to be determined" or not answered. For MRSA, 48% of the cases identified by billing data were confirmed by the external reviewer, 20% were not confirmed, and the remaining cases were either missing or could not be determined.
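For readers implementing a similar analysis, the per-question agreement classification described in the Data Analysis section can be sketched as follows. This is a minimal illustration only; the function names and the two-rater answer format are assumptions for the sketch, not code from the study:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' answers to one question."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of charts on which the raters match.
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal answer frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_exp = sum(freq_a[c] * freq_b[c] for c in set(rater_a) | set(rater_b)) / n**2
    if p_exp == 1.0:  # both raters constant: agreement is trivially perfect
        return 1.0
    return (p_obs - p_exp) / (1 - p_exp)

def classify_question(kappa_site1, kappa_site2):
    """Apply the study's per-question rule: Good if both sites reach at
    least fair agreement (kappa >= 0.21), Poor if both fall below it,
    Blank if neither site answered (None), Conflict otherwise."""
    if kappa_site1 is None and kappa_site2 is None:
        return "Blank"
    if kappa_site1 is None or kappa_site2 is None:
        return "Conflict"
    if kappa_site1 >= 0.21 and kappa_site2 >= 0.21:
        return "Good Agreement"
    if kappa_site1 < 0.21 and kappa_site2 < 0.21:
        return "Poor Agreement"
    return "Conflict"
```

For example, two abstractors answering identically yields κ = 1.0, and a question with κ = 0.9 at one site but κ = 0.1 at the other is classified as Conflict, mirroring the disparate-κ pattern reported in the Results.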
Table 3. κ Agreements by Section of the Abstraction Tool.

Section                                                               Good Agreement  Poor Agreement  Blank  Conflict Agreement  Total
Demographics                                                          12              7               0      4                   23
Site Immediately Prior to This ICU Admission                          1               1               2      0                   4
Patient Characteristics on This ICU Admission                         1               2               0      0                   3
Lab Values                                                            5               0               0      1                   6
Microbiology                                                          2               1               0      0                   3
Medications                                                           5               1               0      0                   6
Acute Diagnoses                                                       4               1               0      1                   6
Risk Factors                                                          22              3               6      4                   35
Outcomes of Interest (central line–associated bloodstream infection)  8               0               0      4                   12
Outcomes of Interest (methicillin-resistant Staphylococcus aureus)    4               0               2      2                   8
Outcomes of Interest (pressure ulcer)                                 3               0               0      1                   4
Outcomes of Interest (gastrointestinal bleed)                         6               1               0      0                   7
Total                                                                 73              17              10     17                  117
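The billing-versus-review comparison summarized in Table 4 amounts to a simple cross-tabulation of reviewer determinations over billing-flagged cases. A sketch of that bookkeeping follows; it is illustrative only, and the record layout and field names are assumptions rather than the study's actual data export:

```python
from collections import Counter

# Reviewer determinations use the same categories as Table 4.
CATEGORIES = ("Yes", "No", "Cannot be determined", "Missing")

def billing_vs_review(records, outcome):
    """Tally reviewer determinations for cases flagged by billing data.

    `records` is a list of dicts with hypothetical fields:
      billing_codes - set of outcome labels implied by ICD-9 billing codes
      review        - dict mapping outcome label to the expert reviewer's
                      determination (one of CATEGORIES)
    Returns the per-category counts and the percentage of billing-flagged
    cases that the reviewer confirmed ("Yes").
    """
    flagged = [r for r in records if outcome in r["billing_codes"]]
    counts = Counter(r["review"].get(outcome, "Missing") for r in flagged)
    total = len(flagged)
    confirmed_pct = 100.0 * counts["Yes"] / total if total else 0.0
    return counts, confirmed_pct

# Toy example: 2 of 3 billing-flagged MRSA cases confirmed on review.
records = [
    {"billing_codes": {"MRSA"}, "review": {"MRSA": "Yes"}},
    {"billing_codes": {"MRSA"}, "review": {"MRSA": "Cannot be determined"}},
    {"billing_codes": {"MRSA", "CLABSI"}, "review": {"MRSA": "Yes", "CLABSI": "No"}},
    {"billing_codes": set(), "review": {}},
]
counts, pct = billing_vs_review(records, "MRSA")
```

A record flagged by billing but never adjudicated falls into "Missing", matching how unanswered cases are reported in Table 4.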

Table 4. Agreement Between the Discharge Diagnosis From the Billing Data and Physician Review.

                          Site 1, n (%)   Site 2, n (%)   Total, n (%)
CLABSI
  Yes                     1 (9.1%)        2 (18.0%)       3 (13.6%)
  No                      10 (90.9%)      6 (54.6%)       16 (72.7%)
  Cannot be determined    0 (0.0%)        3 (27.3%)       3 (13.6%)
  Missing                 0 (0.0%)        0 (0.0%)        0 (0.0%)
MRSA
  Yes                     10 (83.3%)      2 (15.4%)       12 (48.0%)
  No                      2 (16.7%)       3 (23.1%)       5 (20.0%)
  Cannot be determined    0 (0.0%)        6 (46.2%)       6 (24.0%)
  Missing                 0 (0.0%)        2 (15.4%)       2 (8.0%)
PU
  Yes                     1 (8.3%)        0 (0.0%)        1 (4.2%)
  No                      7 (58.3%)       1 (8.3%)        8 (33.3%)
  Cannot be determined    4 (33.3%)       0 (0.0%)        4 (16.7%)
  Missing                 0 (0.0%)        11 (92.0%)      11 (45.8%)
GI bleed
  Yes                     0 (0.0%)        0 (0.0%)        0 (0.0%)
  No                      12 (100.0%)     0 (0.0%)        12 (52.2%)
  Cannot be determined    0 (0.0%)        0 (0.0%)        0 (0.0%)
  Missing                 0 (0.0%)        11 (100.0%)     11 (47.8%)

Abbreviations: CLABSI, central line–associated bloodstream infection; GI, gastrointestinal; MRSA, methicillin-resistant Staphylococcus aureus; PU, pressure ulcer.

Time for Completing and Difficulty of the Abstraction Tool

Abstractors ranked the instructions for the tool as straightforward to use, with most rankings in the very clear category and a few in the somewhat clear category. Abstractors ranked only the microbiology section as somewhat clear. One abstractor ranked the patient characteristics section as not very clear; however, all other abstractors ranked it very clear. Results are shown in Supplemental Appendix A, available with the article online.

Abstractors found certain sections of the abstraction tool consistently difficult and time-consuming to complete, as shown in Supplemental Appendix B. These sections included Microbiology, Acute Diagnoses, and Risk Factors. This was expected, as the data for these sections, in both electronic and paper charts, can be difficult to access or can span long periods of time. Additionally, some of these sections require a certain level of subjective decision making based on how well the clinical teams documented their decisions.

Many of the abstractors' specific comments related to the nuances of hospital-specific medical records and unclear documentation (eg, documentation of drugs, PUs, central lines). Abstractors consistently stated that the acute diagnosis related to the ICU stay was difficult to find because it required reading through many notes and required their clinical judgment.

Modification of the Tool

The study team updated the abstraction tool to improve the questions based on interrater agreement and specific feedback from the abstractors and external reviewers. The team also removed any questions that were extraneous to determining any of the 4 outcomes. For example, questions on risk factors for GIB that did not help its determination were removed. A conscious decision was made to do this at this stage because the research on modifiable risk factors for all of these outcomes continues to evolve. As a result, the team viewed the abstraction tool as a vehicle for ICUs to use to determine if the outcome
is attributable to the ICU. It then will be up to ICU and hospital leadership to determine the risk factors and what was potentially modifiable.

Next, the laboratory values and questions were rearranged into a new section dedicated to calculating common risk-adjustment elements. For example, the new section includes variables to calculate the SOFA score, a well-validated risk adjustment for critically ill ICU patients.11-14 Ultimately, the single abstraction tool was separated into 4 separate tools, one for each outcome, with each tool including only questions related to its outcome. The final tools have 3 major sections: (1) Demographics, (2) Risk Adjustment, and (3) Outcome of Interest (eg, CLABSI). The final 4 tools can be found in Supplemental Appendices C, D, E, and F, respectively.

Discussion

This study presents the results of a Delphi process to determine ICU-specific outcome measures and their associated definitions. This work builds on the work of Rogers et al6 and Martinez et al7 that identified 9 outcome measures meaningful to ICU clinicians. Based on a group of expert ICU clinicians, 4 of the 9 measures were deemed most important, clinically relevant, and potentially preventable through measurement: (1) CLABSI, (2) GIB, (3) MRSA, and (4) PU.

In addition, results are shared from the pilot test of a novel abstraction tool for the 4 ICU-related outcome measures. The pilot found that a majority of the 117 questions in the abstraction tool were easily extracted from the medical charts at both sites with good agreement. Many of the questions that were left blank or had a disparity in their κs between the sites (deemed "Conflict Agreement") looked at specific times or subsequent events, such as second and third PUs. Interestingly, a majority of these conflict questions had κs that were very disparate between the 2 sites: many had close to perfect agreement at one site and poor agreement, or were left blank, at the other site. This was most likely related to site-specific issues involving how medical records were kept. For example, site 1 might have all its microbiologic lab data on a paper-based chart, whereas site 2 might have the same data in an electronic chart (ie, easily found). The study team elected to keep these questions unless they were deemed not useful for the tool based on feedback from the external reviewers and abstractors.

In each section, there also were questions with poor agreement between the abstractors within each site. This was especially true for dates and times, patient weights, and recording of PU stage at ICU admission. These differences reflect the difficulty and complexity of the transition from paper to electronic records, as well as the complexity of record structures whereby the same data could be housed in different ways and on multiple electronic databases. One specific example of how this can happen is if laboratory values are written in a daily progress note and also in a laboratory reporting database: the abstractors could get the same information from 2 different areas of the chart, and the data from the 2 sources may not agree. In addition, some data elements were not available in electronic records for all years, and paper records were used for those elements; this also may have contributed to inconsistencies. Sites using these tools should specify the location of each data item in the record to assure accuracy of abstraction.

For all 4 outcomes, agreement was poor between the diagnosis listed in the discharge/billing data and the diagnosis gleaned from the chart review. This highlights how billing data can be prone to error if used to determine a patient's outcome. Many other studies have found similar results.15-18 The lack of agreement between these 2 sources could be the result of varying definitions. For example, the definition of CLABSI has evolved over the years; the criteria used to diagnose CLABSI may differ between the patient encounter and the time of abstraction. Similarly, there likely is variation in how GIB is diagnosed among providers. Some clinicians might bill for GIB only when it is massive (ie, requiring multiple blood transfusions), whereas other clinicians might bill for GIB when the stool is positive for blood and the patient did not require any transfusions. As a result, it may not be surprising that the billing diagnosis did not correlate well with the chart review diagnosis.

Data for 2 outcomes (MRSA and CLABSI) are collected and reported by the National Healthcare Safety Network (NHSN). The NHSN is voluntary, and not all hospitals and ICUs report. Even at hospitals that report to the NHSN, there may be a role for these tools to help hospital and ICU administrators determine outcomes of patient cases in real time. Moreover, the NHSN currently does not collect data on PUs and GIBs.

Limitations

This study has several limitations. First, the time from conduct of the study to publication was longer than anticipated, driven primarily by the illness and death of the original principal investigator on the project (EM). However, the study team believes the results and learnings from this project are as relevant as ever, as the US health care system continues to seek ways to better measure and improve health care. Second, since the start of the project, there were changes in the national definitions of CLABSI and PU. This may have affected the agreement seen in diagnoses from the tool and chart review.

Conclusion

Despite the difficulties faced in measuring the quality of health care provided, we must continue to create, iterate,
and validate outcome measures and abstraction techniques. The measures developed need to be clinically relevant and lead to improved patient care. This study highlights the limitations of billing data for gauging outcome measures and the importance of a robust process for developing and validating quality measures. Additionally, creating and building quality metrics will help better define a common language that all ICUs can use to discuss these important issues. Developing measures is not easy, but if we hope to improve the quality of care, we need to collectively struggle through how best to measure results.

Authors' Note
Dr Martinez was the original principal investigator and vision behind this project. She designed and implemented this project until her unexpected death in 2013.

Acknowledgments
We would like to acknowledge Ann-Marie Audet, MD, MSc, from The Commonwealth Fund. Abstraction of all records was performed by Jennifer Mills, Margaret Goldstein, Sandy Swoboda, Laurie Smith, and Tracey Smith, our Nurse Abstractors. Bonny B. Blanchfield assisted with data validation, cleaning, and programming. Maggie Cantara, Rhonda Holbrook, Melanie Curless, and Dr Sara Cosgrove helped review the cases.

Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by The Commonwealth Fund (Grant #20070334).

ORCID iDs
Jarone Lee https://orcid.org/0000-0002-4532-8523
Jungyeon Kim https://orcid.org/0000-0002-6261-6084

References
1. Halpern NA. Can the costs of critical care be controlled? Curr Opin Crit Care. 2009;15:591-596.
2. Fisher ES, Wennberg DE, Stukel TA, Gottlieb DJ, Lucas FL, Pinder EL. The implications of regional variations in Medicare spending. Part 2: health outcomes and satisfaction with care. Ann Intern Med. 2003;138:288-298.
3. Fisher ES, Wennberg DE, Stukel TA, Gottlieb DJ, Lucas FL, Pinder EL. The implications of regional variations in Medicare spending. Part 1: the content, quality, and accessibility of care. Ann Intern Med. 2003;138:273-287.
4. Dy SM, Chan KS, Chang HY, Zhang A, Zhu J, Mylod D. Patient perspectives of care and process and outcome quality measures for heart failure admissions in US hospitals: how are they related in the era of public reporting? Int J Qual Health Care. 2016;28:522-528.
5. Parast L, Doyle B, Damberg CL, et al. Challenges in assessing the process-outcome link in practice. J Gen Intern Med. 2015;30:359-364.
6. Rogers RS, Pronovost P, Isaac T, et al. Systematically seeking clinicians' insights to identify new safety measures for intensive care units and general surgery services. Am J Med Qual. 2010;25:359-364.
7. Martinez EA, Donelan K, Henneman JP, et al. Identifying meaningful outcome measures for the intensive care unit. Am J Med Qual. 2014;29:144-152.
8. Knaus WA, Draper EA, Wagner DP, Zimmerman JE. APACHE II: a severity of disease classification system. Crit Care Med. 1985;13:818-829.
9. Ladha KS, Zhao K, Quraishi SA, et al. The Deyo-Charlson and Elixhauser-van Walraven Comorbidity Indices as predictors of mortality in critically ill patients. BMJ Open. 2015;5(9):e008990.
10. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research Electronic Data Capture (REDCap)--a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42:377-381.
11. Vincent JL, de Mendonca A, Cantraine F, et al. Use of the SOFA score to assess the incidence of organ dysfunction/failure in intensive care units: results of a multicenter, prospective study. Working group on "sepsis-related problems" of the European Society of Intensive Care Medicine. Crit Care Med. 1998;26:1793-1800.
12. Vincent JL, Moreno R, Takala J, et al. The SOFA (Sepsis-related Organ Failure Assessment) score to describe organ dysfunction/failure. On behalf of the Working Group on Sepsis-Related Problems of the European Society of Intensive Care Medicine. Intensive Care Med. 1996;22:707-710.
13. Ferreira FL, Bota DP, Bross A, Melot C, Vincent JL. Serial evaluation of the SOFA score to predict outcome in critically ill patients. JAMA. 2001;286:1754-1758.
14. Cardenas-Turanzas M, Ensor J, Wakefield C, et al. Cross-validation of a Sequential Organ Failure Assessment score-based model to predict mortality in patients with cancer admitted to the intensive care unit. J Crit Care. 2012;27:673-680.
15. Preen DB, Holman CD, Lawrence DM, Baynham NJ, Semmens JB. Hospital chart review provided more accurate comorbidity information than data from a general practitioner survey or an administrative database. J Clin Epidemiol. 2004;57:1295-1304.
16. Powell H, Lim LL, Heller RF. Accuracy of administrative data to assess comorbidity in patients with heart disease. An Australian perspective. J Clin Epidemiol. 2001;54:687-693.
17. Jollis JG, Ancukiewicz M, DeLong ER, Pryor DB, Muhlbaier LH, Mark DB. Discordance of databases designed for claims payment versus clinical information systems. Implications for outcomes research. Ann Intern Med. 1993;119:844-850.
18. Romano PS, Roos LL, Luft HS, Jollis JG, Doliszny K. A comparison of administrative versus clinical data: coronary artery bypass surgery as an example. Ischemic Heart Disease Patient Outcomes Research Team. J Clin Epidemiol. 1994;47:249-260.
