
Downloaded from http://qualitysafety.bmj.com/ on May 6, 2016 - Published by group.bmj.com

BMJ Quality & Safety Online First, published on 6 May 2016 as 10.1136/bmjqs-2015-005201

ORIGINAL RESEARCH

Opportunities to improve clinical summaries for patients at hospital discharge

Erin Sarzynski,1,2 Hamza Hashmi,3 Jeevarathna Subramanian,3 Laurie Fitzpatrick,1 Molly Polverento,1,2 Michael Simmons,4 Kevin Brooks,2 Charles Given1,2

Additional material is published online only. To view please visit the journal online (http://dx.doi.org/10.1136/bmjqs-2015-005201).

1Department of Family Medicine, Michigan State University College of Human Medicine, East Lansing, Michigan, USA
2Institute for Health Policy, Michigan State University College of Human Medicine, East Lansing, Michigan, USA
3Grand Rapids Medical Education Partners, Grand Rapids, Michigan, USA
4Sparrow Health System, Lansing, Michigan, USA

Correspondence to Dr Erin Sarzynski, Michigan State University College of Human Medicine, 965 Fee Road, East Lansing, MI 48824, USA; erin.sarzynski@hc.msu.edu

Received 28 December 2015
Revised 30 March 2016
Accepted 15 April 2016

To cite: Sarzynski E, Hashmi H, Subramanian J, et al. BMJ Qual Saf Published Online First: [please include Day Month Year] doi:10.1136/bmjqs-2015-005201

ABSTRACT
Background Clinical summaries are electronic
health record (EHR)-generated documents given
to hospitalised patients during the discharge
process to review their hospital stays and inform
postdischarge care. Presently, it is unclear
whether clinical summaries include relevant
content or whether healthcare organisations
configure their EHRs to generate content in a
way that promotes patient self-management
after hospital discharge. We assessed clinical
summaries in three relevant domains: (1)
content; (2) organisation; and (3) readability,
understandability and actionability.
Methods Two authors performed independent
retrospective chart reviews of 100 clinical
summaries generated at two Michigan hospitals
using different EHR vendors for patients
discharged 1 April–30 June 2014. We
developed an audit tool based on the
Meaningful Use view-download-transmit
objective and the Society of Hospital Medicine
Discharge Checklist (content); the Institute of
Medicine recommendations for distributing easy-to-understand print material (organisation); and
five readability formulas and the Patient
Education Materials Assessment Tool (readability,
understandability and actionability).
Results Clinical summaries averaged six pages
(range 3–12). Several content elements were
universally auto-populated into clinical
summaries (eg, medication lists); others were not
(eg, care team). Eighty-five per cent of clinical
summaries contained discharge instructions,
more often generated from third-party sources
than manually entered by clinicians. Clinical
summaries contained an average of 14 unique
messages, including non-clinical elements
irrelevant to postdischarge care. Medication list
organisation reflected reconciliation mandates,
and dosing charts, when present, did not carry
column headings over to subsequent pages.

Summaries were written at the 8th–12th grade reading level and scored poorly on assessments
of understandability and actionability. Inter-rater
reliability was strong for most elements in our
audit tool.
Conclusions Our study highlights opportunities
to improve clinical summaries for guiding
patients' postdischarge care.

INTRODUCTION
Prompted by the Health Information
Technology for Economic and Clinical
Health Act of 2009, hospitals in the USA
are eligible to receive incentive payments
from the Centers for Medicare and
Medicaid Services (CMS) by using certified electronic health record (EHR) technology to achieve Meaningful Use (MU)
objectives.1 2 Implemented in three
stages, the overall goal of MU is to use
EHRs to engage patients and families and
improve care coordination and clinical
outcomes.3 Among several MU objectives
in stages 1 and 2, eligible hospitals must
provide patients the ability to view,
download and transmit (VDT) information about their hospital stays.4 5 During
early versions of stage 1, hospitals could
attest to this objective by distributing
EHR-generated clinical summaries (paper documents) to patients during the
hospital discharge process.6 Clinical summaries are provider-to-patient documents, as opposed to discharge
summaries or summary of care documents, which are provider-to-provider
documents. While MU no longer requires
clinical summaries, their provision is
becoming standard of care for hospital
discharges across the USA.7 Thus, clinicians have an additional opportunity to
promote self-management by reviewing

Sarzynski E, et al. BMJ Qual Saf 2016;0:1–9. doi:10.1136/bmjqs-2015-005201

Copyright Article author (or their employer) 2016. Produced by BMJ Publishing Group Ltd under licence.

clinical summaries with patients and their caregivers
during the hospital discharge process.
Clinical summaries are document templates that
auto-populate clinical information by extracting data
from various sections of patients' EHR charts and
soliciting additional information in the form of template headings. EHR vendors offer healthcare systems
the opportunity to customise clinical summary templates according to local preferences, which clinicians
can modify manually or by point-and-click menu
options. Required elements of the clinical summary
were originally defined by stage 1 for eligible hospitals
(core objectives #11 and #12), which was updated in
2014 and became the VDT objective.4 8 The intended purpose of clinical summaries is twofold: (1) to summarise patients' hospital stays; and (2) to provide
patients and caregivers the information necessary to
self-manage and navigate their postdischarge care.
This document is critical, since prior studies indicate that patients' understanding of key aspects of postdischarge care is poor.9–11 Moreover, refining the
content and organisation of discharge documentation,
ideally with patient input, is key to improving transitional care for vulnerable patients, including older
adults and those with limited health literacy.12–16
The overall goal of our study was to assess clinical
summaries in three key domains relevant to guiding
patients' postdischarge care: (1) content; (2) organisation; and (3) readability, understandability and actionability. These domains provide a framework for
assessing how patients may perceive the educational
tools they receive during the hospital discharge
process. To accomplish this goal, we assessed clinical
summaries produced by two different commercially
available EHR vendors, which were customised at two
different hospitals (herein denoted hospital/vendor A
and hospital/vendor B).
METHODS
Study design and sample

This pilot study was a retrospective chart review of clinical summaries produced at two Michigan hospitals
using different commercially available EHR systems.
Both hospitals used 2011–2014 hybrid editions of certified EHR technology during the evaluation period
and reported on the same stage 1 MU view-download-transmit criteria.4 Eligible patients were ≥18 years old
and discharged home from academic internal medicine
services from 1 April to 30 June 2014. We excluded
patients hospitalised under observation status and
those discharged to care facilities.
The academic internal medicine units at both hospitals include physicians who rotate on and off service.
The academic medicine unit at hospital A has 12
attending physicians (rotate every 2 weeks) and 36
resident physicians (rotate every 4 weeks) to cover
four services. Each service consists of one attending
physician and three resident physicians. The academic
2

medicine unit in hospital B has five attending physicians and 36 resident physicians (rotate every
4 weeks) to cover one service, which consists of one
attending physician and two resident physicians. On
average, each academic service at hospital A discharges
45 patients per month, and the academic service at
hospital B discharges 35 patients per month. Resident
physicians perform discharges at both hospitals, and
this workflow generates clinical summaries behind
the scenes, which bedside nurses print and review
with patients before discharge. We identified more
than 100 eligible subjects at each institution during
the 3-month sampling period. We sorted eligible subjects alphabetically by last name in 2-week increments
(first 2 weeks of each 4-week block), and performed
129 sequential chart reviews (66 at hospital A and 63
at hospital B) until we identified 50 that met criteria
at each institution (n=100). This sampling scheme
allowed for the greatest diversity in discharging providers (n=9 and n=6 for hospital A and hospital B,
respectively).
Measures
Audit tool

Clinical summaries represent the product of an MU objective and its implementation in clinical practice.
Thus, we developed an audit tool by merging MU
standards with national guidelines for transitional care
and validated tools to assess patient educational materials. Specifically, we assessed clinical summary
content, organisation, readability, understandability
and actionability. One author at each site (ES and
HH) printed and de-identified clinical summaries for
auditing and abstracted relevant demographic and
health variables. In a subsequent step, two non-clinician authors (LF and MP) performed independent
audits based on the tool designed by the senior author
(ES). Specifically, the senior author demonstrated how
to apply the audit tool (detailed below) to clinical
summaries from each of the two sites and provided
annotated examples to use as references. Finally, a different author (KB) conducted analyses to assess inter-rater reliability.
Content

We assessed content according to the MU VDT objective by selecting a subset of items with face validity for
informing patients' postdischarge care.4 5 We excluded
seven items from the VDT objective due to their lack
of face validity for informing patients' postdischarge
care, including allergy list, vital signs at discharge, lab
results at discharge, summary of care document, care
plan, demographics and smoking status. Two non-clinician authors (LF and MP) each independently
assessed the presence or absence of the following
eight content items: patient name, admission/discharge date and location, reason for hospitalisation,
inpatient care team, procedures performed during

admission, problem list, medication list and discharge
instructions. Finally, three content items from a discharge checklist endorsed by the Society of Hospital
Medicine were included: (1) follow-up appointments
scheduled before discharge; (2) advise patients about
anticipated problems ("red flags"); and (3) provide a
specific 24/7 call-back phone number in case of immediate postdischarge needs.17 We included these
content items because they are common and important elements of transitional care, but not included
among the criteria necessary for hospitals to meet the
VDT objective. We defined discharge instructions as
either (1) manually entered by clinicians; or (2)
inserted from a third-party source (eg, generic or boiler-plate instructions for specific medical conditions). Reviewers gave no credit for generic instructions unrelated to patients' primary reason for
hospitalisation (eg, "if you are a smoker, we encourage you to quit"). For each clinical summary, two authors
(LF and MP) evaluated yes/no whether each of the
11 content items appeared in the document.
Organisation

We assessed organisation according to the Institute of Medicine's recommendation to promote health literate healthcare organisations.18 We selected two criteria
relevant to organising patient educational materials
from the Eighth Attribute, which states, "A health literate health care organization designs and distributes print, audiovisual, and social media content that is easy to understand and act on." Specifically, we selected (1)
focus on a limited number of messages; and (2)
sequence information in a logical order (ie, primary
diagnosis listed first). Two authors (LF and MP)
assessed the number of unique messages, defined as
each of the 11 content criteria (defined above), plus
any of the following: hospital logo and mission statement, home healthcare referrals, follow-up laboratory
or radiology requisitions, inventory of patient belongings, personalised instructions to access EHR-tethered
patient portal, generic discharge instructions (eg, "call 911 if you have chest pain"), signature section, or any
other section demarcated by a change in spacing or
font. The same two authors evaluated yes/no whether
clinical summaries listed the primary diagnosis (identified in the provider-to-provider discharge summary)
first among patients' comorbid conditions.
Readability, understandability and actionability

We calculated readability scores using Health Literacy Advisor (HLA) software, a Microsoft Word plug-in.19
Two authors (ES and HH) copied/pasted clinical summaries from their respective EHRs into Word, which
was necessary to perform automated readability
assessments. Moreover, two authors (MS and ES) confirmed that the copy/paste process did not affect performance of automated readability assessments, since
HLA software does not require document prepping
prior to analysis, which is a benefit compared with built-in readability software.20 Authors selected five readability scales due to their prevalence, prior validation and relevance to health communication: (1)
Simple Measure of Gobbledygook (Precise SMOG);
(2) Fry-based Electronic Readability Formula; (3)
FORCAST Readability Scale; (4) Flesch–Kincaid Grade Level; and (5) the Flesch Reading Ease.20
Authors chose to assess clinical summaries using
several readability formulas, since documents contain
a mixture of the components emphasised in each
scale: short-syllable and long-syllable words, sentences
and paragraphs, as well as bulleted lists and tables.
The Precise SMOG assesses the frequency of polysyllabic words and is well suited for healthcare applications because of its consistent results and higher level
of expected comprehension.21 The Fry-based
Electronic Readability Formula assesses the average
number of sentences and syllables per 100 words. The
FORCAST Readability Scale assesses the number of
single-syllable words per 150 words (ideal for lists). The Flesch–Kincaid Grade Level assesses the average
number of syllables per word and the average number
of words per sentence. The first four scales estimate
readability based on traditional grade levels, while the
Flesch Reading Ease assesses material on a 0–100
scale (higher scores indicate improved readability).20
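The grade-level scales above are simple functions of word, sentence and syllable counts. As an illustration only (the study itself computed scores with Health Literacy Advisor software, not code like this), the Flesch–Kincaid Grade Level and Flesch Reading Ease can be sketched with their standard published coefficients:

```python
# Illustrative implementations of two of the five readability measures,
# using the standard published coefficients; counts of words, sentences
# and syllables are assumed to be available from upstream text analysis.

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level: weights average sentence length and
    average syllables per word, returning a US school-grade estimate."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Flesch Reading Ease: 0-100 scale; higher scores are easier to read."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

# A 100-word passage with 10 sentences and 150 syllables scores at
# roughly a 6th grade level on Flesch-Kincaid.
```

Because the two formulas weight the same inputs differently, the same document can land at different grade levels on different scales, which is why the authors report all five.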
Lastly, we used the Patient Education Materials
Assessment Tool (PEMAT) to evaluate clinical
summary understandability and actionability.22 This
Agency for Healthcare Research and Quality-endorsed toolkit instructs assessors to agree or disagree with up to 26 statements about educational
materials (only 24 relevant to print materials). Scores
range from 0% to 100%, with higher scores indicating
that the material is easier to understand and act on.
The PEMAT demonstrates strong internal consistency,
reliability and construct validity.23 Following careful
review of the PEMAT User's Guide, two non-clinician
authors (LF and MP) independently evaluated each of
the 100 clinical summaries and assessed them according to the PEMAT scoring rubric. Michigan State
University and its affiliate hospitals' Institutional
Review Boards approved our protocol.
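The PEMAT scoring arithmetic itself is simple: the score is the share of applicable items rated "agree", as a percentage. A minimal sketch, assuming ratings coded 1 (agree), 0 (disagree) and None (not applicable), follows:

```python
# Minimal sketch of PEMAT score aggregation. A rating of None stands in
# for the tool's "not applicable" option and is excluded from the
# denominator, per the published scoring rubric.

def pemat_score(ratings):
    """Return the PEMAT score (0-100%) for one reviewer's ratings."""
    applicable = [r for r in ratings if r is not None]
    return 100.0 * sum(applicable) / len(applicable)

# Example: 8 applicable items, 6 rated agree, 2 items N/A -> 75.0%
example = pemat_score([1, 1, 1, 0, 1, None, 1, 0, 1, None])
```

Understandability and actionability are scored separately over their own item subsets, which is why the paper reports two percentages per reviewer.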
Statistical analysis

We generated descriptive statistics for each site according to metrics in our audit tool, reported as means
and ranges for continuous variables and frequencies
and percentages for categorical variables. We assessed
inter-rater reliability using Spearman's rank correlation coefficient (ρ) for discrete variables and Cohen's κ for categorical variables.24–26 Since κ depends on the prevalence of attributes, it deteriorates when contingency tables contain too many zeros. In such instances, we calculated the prevalence-adjusted κ, which is a better estimate of the true nature of reviewer agreement.27 Moreover, we generated graphic displays of readability assessments (from HLA software)
Table 1 Demographics of patient population and average length of their clinical summaries

Demographics                                       Hospital/vendor A (n=50)   Hospital/vendor B (n=50)
Age (mean, range)                                  52 (18–90)                 54 (24–87)
Sex (male)                                         50%                        34%
Number of comorbid conditions (mean, range)        2.1 (0–6)                  2.9 (0–7)
Length of stay (mean, range)                       2.9 (1–9)                  4.5 (1–49)
Clinical summary: number of pages (mean, range)    5.8 (3–10)                 7.3 (5–12)

by estimated grade level. We performed all analyses using JMP for SAS V.12.1 (Cary, North Carolina, USA).
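The agreement statistics described above can be sketched as follows (an illustrative version only, assuming two equal-length lists of binary ratings; the study ran its analyses in JMP). For two raters on a binary item, the prevalence-adjusted κ reduces to twice the observed agreement minus one:

```python
# Illustrative agreement statistics for two raters on a yes/no audit item.
# cohens_kappa: chance-corrected agreement, (p_o - p_e) / (1 - p_e).
# pabak: prevalence-adjusted kappa, 2 * p_o - 1, which avoids the
# deflated kappa seen when raters are near-unanimous (many empty cells
# in the contingency table).

def cohens_kappa(r1, r2):
    n = len(r1)
    p_o = sum(a == b for a, b in zip(r1, r2)) / n    # observed agreement
    p1, p2 = sum(r1) / n, sum(r2) / n                # each rater's "yes" rate
    p_e = p1 * p2 + (1 - p1) * (1 - p2)              # agreement expected by chance
    return 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)

def pabak(r1, r2):
    p_o = sum(a == b for a, b in zip(r1, r2)) / len(r1)
    return 2 * p_o - 1
```

When both reviewers mark an element absent in every chart, chance agreement approaches 1 and plain κ collapses, while the prevalence-adjusted version correctly reports perfect agreement.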
RESULTS
Discharged patients were middle-aged and had multiple
chronic conditions (table 1). On average, clinical summaries were six pages, but the number of pages varied
considerably (range 3–12, table 1). De-identified examples are available for hospital/vendor A (see online supplementary appendix 1) and hospital/vendor B (see
online supplementary appendix 2).
Content

Some MU elements were universally auto-populated into clinical summaries (problem lists and medication lists), while other key elements were never included (inpatient care team; table 2). Most, but not all, clinical summaries contained discharge instructions (77%
and 93% at hospitals A and B, respectively). At both
sites, clinicians were more likely to insert third-party
generic patient educational materials than to manually
enter personalised discharge instructions (table 2).
Elements endorsed by the Society of Hospital
Medicine were inconsistently included. The percentage
of follow-up appointments scheduled prior to discharge was higher at hospital B than hospital A (43%
and 17%, respectively), which may reflect differences
in discharge planning policies at the two sites. Only
half of the summaries included condition-relevant red
flags to watch for after discharge. However, templates
produced at both institutions auto-populated generic
warnings (eg, face, arms, speech, time (FAST) scale for
stroke (see online supplementary appendix 1) or "call 911 if you have any chest pain" (see online supplementary appendix 2)). Clinical summaries universally failed
to include a 24/7 call-back phone number in case
patients had immediate postdischarge concerns (table
2). Inter-rater agreement was very good for most content elements (κ>0.8 or ρ>0.8), with the exception of clearly identifying the reason for hospitalisation (κ=0.57, hospital/vendor A), including disease-specific discharge instructions (κ=0.72, hospital/vendor B), and advising about "red flags" (both sites, κ=0.60 and κ=0.24, respectively). Overall agreement for the

Table 2 Assessing clinical summaries for patient-centred content

                                                   Hospital/vendor A (n=50)                Hospital/vendor B (n=50)
Content criteria                                   Reviewer 1   Reviewer 2   IRR*          Reviewer 1   Reviewer 2   IRR*
Meaningful Use view-download-transmit
  Patient name                                     50 (100%)    50 (100%)    κ=1.00        50 (100%)    50 (100%)    κ=1.00
  Admission/discharge: date and location†          50 (100%)    50 (100%)    κ=1.00        25 (50%)     25 (50%)     κ=1.00
  Reason for hospitalisation                       46 (92%)     39 (78%)     κ=0.57        49 (98%)     46 (92%)     κ=0.88
  Inpatient care team (primary and consultants)    0            0            κ=1.00‡       0            0            κ=1.00‡
  Procedures performed during admission            1 (2%)       0            κ=0.96‡       0            0            κ=1.00‡
  Problem list (updated)                           50 (100%)    50 (100%)    κ=1.00        50 (100%)    50 (100%)    κ=1.00
  Medication list (reconciled)                     50 (100%)    50 (100%)    κ=1.00        50 (100%)    50 (100%)    κ=1.00
  Discharge instructions                           39 (78%)     38 (76%)     κ=0.94        49 (98%)     44 (88%)     κ=0.72
    Manually entered by provider                   27 (54%)     27 (54%)     κ=1.00        19 (38%)     26 (52%)     κ=0.48
    Generic/third-party only (boiler-plate)        33 (66%)     32 (64%)     κ=0.87        38 (76%)     42 (84%)     κ=0.75
  Overall IRR                                      κ=0.92 (95% CI 0.88–0.96)               κ=0.95 (95% CI 0.93–0.98)
Society of Hospital Medicine
  Follow-up appointments scheduled
  before discharge (%)§                            18/115 (16%) 23/131 (18%) ρ=0.89        46/110 (42%) 49/112 (44%) ρ=0.98
  Advise about anticipated problems ("red flags")  27 (54%)     25 (50%)     κ=0.60        23 (46%)     0            κ=0.24
  Provide specific 24/7 call-back number           0            0            κ=1.00‡       0            0            κ=1.00‡

*IRR=inter-rater reliability, calculated as Cohen's κ for categorical data and Spearman's rank correlation coefficient (ρ) for discrete data.
†Hospital/vendor B provided clinical summaries to patients without a discharge date, information that was included in the first two pages, but retained by the hospital (see online supplementary appendix 2).
‡Based on prevalence-adjusted κ.
§Differences in the percentage of follow-up appointments scheduled before hospital discharge may reflect variations in discharge planning practices at each site. Denominator reflects total number of follow-up appointments, which was >1 for many patients.

Table 3 Assessing clinical summaries for patient-centred organisation

                                     Hospital/vendor A (n=50)              Hospital/vendor B (n=50)
Organisation                         Reviewer 1   Reviewer 2   IRR†        Reviewer 1   Reviewer 2   IRR†
Elements of the Eighth Attribute*
  Number of unique messages          15.5         13.8         ρ=0.25      14.2         10.5         ρ=0.53
  Primary diagnosis listed first     23 (46%)     19 (38%)     κ=0.84      22 (44%)     16 (32%)     κ=0.67

*Selected items relevant to clinical summary organisation from the Eighth Attribute in the Ten Attributes of Health Literate Health Care Organizations.18
†IRR=inter-rater reliability, calculated as Cohen's κ for categorical data and Spearman's rank correlation coefficient (ρ) for discrete data.

selected MU VDT content elements was very good at both sites (κ=0.92 for hospital/vendor A and κ=0.95 for hospital/vendor B).
Organisation

Clinical summaries contained an average of 15 unique messages at hospital A and 12 unique messages at hospital B (table 3). Clinical summaries at hospital/
vendor A contained non-clinical data (eg, inventory of
patient belongings (see online supplementary appendix
1)), and those from hospital/vendor B contained duplicative medication lists (eg, one reconciled list and one
consolidated list (see online supplementary appendix
2)), which may obscure key content. Less than half of
clinical summaries listed the primary discharge diagnosis first (42% and 38% at hospitals A and B, respectively; table 3). Moreover, clinical summaries
produced at both sites included medication lists formatted based on reconciliation mandates, generating
separate medication subsections (eg, start, continue,
change or stop). Inter-rater reliability was fair to moderate (ρ=0.25–0.53) for assessing the number of unique messages, and good to very good (κ=0.67–0.84) for listing the primary diagnosis first (table 3).

Readability, understandability and actionability

Document language was universally above the recommended 6th grade reading level, averaging 8th–12th grade, depending on the scale used (figure 1).
Importantly, the Precise SMOG estimates readability
at 2–3 grade levels higher than the Flesch–Kincaid scale, reflecting the higher level of expected comprehension for health-related materials.21 28 Frequently,
diagnoses auto-populated into patients' problem lists
referenced an International Classification of Diseases
code, thereby prompting inclusion of medical jargon.
For example, one clinical summary included "pulmonary oedema cardiac cause" and "respiratory failure with hypercapnia" in the problem list (see online supplementary appendix 1). Documents scored poorly on
PEMAT understandability (range 15%–40%) and actionability (32%–41%) assessments (table 4).
Reviewers agreed that clinical summaries scored
highly for using the active voice (PEMAT #5; table
4). By contrast, they universally scored summaries
deficient in seven areas (κ=1.00 for all of the following): making their purpose completely evident
(PEMAT #1), avoiding distracting content (PEMAT
#2), using informative headers (PEMAT #9),

Figure 1 Clinical Summaries: Readability. Clinical summaries exceed the recommended sixth grade reading level (indicated by black horizontal line). The Precise SMOG (Simple Measure of Gobbledygook) assesses the frequency of polysyllabic words. The Fry-based Electronic Readability Formula assesses the average number of sentences and syllables per 100 words. The FORCAST Readability Scale assesses the number of single-syllable words per 150 words (ideal for lists). The Flesch–Kincaid Grade Level assesses the average number of syllables per word and the average number of words per sentence. The Flesch Reading Ease assesses material on a 0–100 scale (higher scores indicate improved readability), rather than a specific grade level. Scores were 61.6 and 54.0 for hospital/vendor A and hospital/vendor B, respectively.
Table 4 Assessing clinical summaries for understandability and actionability

                                                                         Hospital/vendor A (n=50)         Hospital/vendor B (n=50)
Patient Education Materials Assessment Tool (PEMAT)                      Reviewer 1  Reviewer 2  κ*       Reviewer 1  Reviewer 2  κ*
Understandability
1. Material makes its purpose completely evident                         0%      0%      1.00†           0%      0%      1.00†
2. Material does not include distracting content                         0%      0%      1.00†           0%      0%      1.00†
3. Material uses common, everyday language                               10%     74%     0.02            0%      0%      1.00†
4. Medical terms defined and used only to familiarise                    8%      40%     0.27            0%      0%      1.00†
5. Material uses active voice                                            98%     100%    0.96            100%    94%     0.88
6. Numbers are clear and easy to understand                              20%     20%     0.33            0%      10%     0.80
7. Material does not expect user to perform calculations                 98%     82%     0.69            100%    66%     0.39
8. Material breaks information into short sections                       40%     100%    0.12            4%      100%    0.12
9. Material's sections have informative headers                          0%      0%      1.00†           0%      0%      1.00†
10. Material presents information in a logical sequence                  0%      0%      1.00†           0%      0%      1.00†
11. Material provides a summary                                          0%      0%      1.00†           0%      0%      1.00†
12. Material uses visual cues to draw attention to key points            0%      100%    0.00            0%      100%    0.00
15. Material uses visual aids whenever possible to clarify content       0%      40%     0.31            0%      0%      1.00†
16. Material's visual aids reinforce rather than distract from content   0%      40%     0.64            N/A     N/A     1.00†
17. Material's visual aids have clear titles or captions                 0%      40%     0.64            N/A     N/A     1.00†
18. Material uses illustrations that are clear and uncluttered           0%      40%     0.64            N/A     N/A     1.00†
19. Material uses simple tables with clear row/column headings           0%      0%      1.00†           0%      0%      1.00†
Overall PEMAT understandability score‡                                   18.4%   39.6%                   14.6%   26.4%
Overall IRR                                                              κ=0.55 (95% CI 0.50–0.60)       κ=0.72 (95% CI 0.67–0.77)
Actionability
20. Material clearly identifies at least one action the user can take    100%    96%     0.92            100%    98%     0.96
21. Material addresses user directly when describing actions             100%    98%     0.96            100%    100%    1.00
22. Material breaks down any action into explicit steps                  4%      2%      0.88            0%      0%      1.00
23. Material provides a tangible tool to help the user take action       0%      0%      1.00            6%      0%      0.88
24. Material provides simple instructions of how to perform
    calculations                                                         N/A     0%      0.00            N/A     4%      0.00
25. Material explains how to use charts/tables/diagrams to
    take actions                                                         0%      N/A     0.00            N/A     N/A     1.00
26. Material uses visual aids to make it easier to act
    on instructions                                                      0%      0%      1.00            0%      0%      1.00
Overall PEMAT Actionability Score‡                                       34.1%   32.2%                   41.2%   33.7%
Overall IRR                                                              κ=0.56 (95% CI 0.49–0.63)       κ=0.76 (95% CI 0.70–0.81)

*κ is Cohen's κ for categorical data.
†Based on prevalence-adjusted κ.
‡Overall PEMAT scores for understandability and actionability range from 0% to 100%, with higher scores indicating better understandability and actionability, respectively. PEMAT #13 and #14 only apply to audio-visual content (not print materials).
N/A = not applicable.

providing information in a logical sequence (PEMAT #10), providing an overall summary (PEMAT #11), and underusing visual aids such as charts (PEMAT #26), which when present lacked clear row and column headings (PEMAT #19). Overall, inter-rater reliability was moderate for hospital/vendor A (κ=0.55 to κ=0.56) and good for hospital/vendor B (κ=0.72 to κ=0.76).

DISCUSSION
The aim of this pilot study was to assess EHR-generated clinical summaries according to their content, organisation and understandability. Results


highlight opportunities to improve clinical summaries
for guiding patients' care following hospital discharge.
Overall, we found that clinical summaries were lengthy,
disorganised, lacked key content and scored poorly on
assessments of understandability and actionability.
While clinical summaries averaged six pages, they universally failed to identify members of the care team,
including the discharging provider, or a specific 24/7
call-back phone number in case of problems immediately following discharge. Equally worrisome, only
40% of clinical summaries listed a patient's primary

discharge diagnosis first among their list of comorbid
conditions. Medication lists were organised based on
reconciliation mandates, generating subsections (eg,
start, continue, change or stop) rather than consolidating into a patient-centred list based on standard dosing times (eg, "refrigerator list") as guidelines recommend.29 30 Furthermore, clinical summaries were
written well above the sixth grade reading level, most
scoring between the eighth and 12th grade level.
Finally, clinical summaries scored poorly on assessments
of understandability and actionability, with reviewers
agreeing that documents scored zero on at least one
third of the relevant PEMAT items at both sites (deficient in 8 of the 24 items for print materials).
Medication lists embedded within clinical summaries provide a clear example of the deficits identified in
our study. For example, lengthy medication lists
spanned multiple pages, and subsequent pages lacked
column headings, which could make dosing instructions difficult to interpret (see online supplementary
appendix 1, p. 3 of 9). Moreover, only 16% of medication lists containing short-acting insulin provided
explicit dosing instructions. Instead, most offered a
general statement to "use on a sliding scale" without
defined parameters (see online supplementary appendix 2, p. 6 of 12). Lastly, organising lists based on reconciliation mandates may lead to confusion when
EHR systems over-interpret differences between preadmission and discharge medication regimens. For
example, note the "new" versus "old" dosing instructions for sevelamer (see online supplementary appendix 2, p. 8 of 12), where the only difference is "take with snacks" versus "take with meals".
Inter-rater agreement was very good for the
content elements included in our scoring rubric
(overall κ>0.90 at both sites). By contrast, inter-rater
agreement was only moderate for the organisation
elements and the PEMAT understandability and
actionability scores. It is possible that reliability is
lower for the PEMAT scores due to the level of subjectivity in some of its measures. Since the traditional κ statistic is heavily influenced by prevalence,31 it can
result in falsely low scores when variance between
reviewers is zero. We observed this problem in
instances when both reviewers unanimously agreed
that elements were absent (eg, none of the clinical
summaries identified members of patients care
teams). In such circumstances, we used the
prevalence-adjusted , which overcomes some of the
limitations of the traditional and more accurately
reflects the true degree of inter-rater reliability.
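The kappa paradox described above can be shown numerically. The sketch below uses hypothetical ratings, not the study's data, to show how Cohen's κ collapses towards zero when both reviewers rate nearly every item 'absent', while the prevalence-adjusted statistic (PABAK), which for two raters and a binary scale reduces to 2 × observed agreement − 1, preserves the high raw agreement.

```python
def cohen_kappa(a, b):
    """Cohen's kappa for two raters scoring the same binary (0/1) items."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n      # observed agreement
    p1a, p1b = sum(a) / n, sum(b) / n                # each rater's rate of 1s
    p_e = p1a * p1b + (1 - p1a) * (1 - p1b)          # chance agreement
    return (p_o - p_e) / (1 - p_e)

def pabak(a, b):
    """Prevalence-adjusted bias-adjusted kappa: with two raters and a
    binary scale it reduces to 2 * observed agreement - 1."""
    p_o = sum(x == y for x, y in zip(a, b)) / len(a)
    return 2 * p_o - 1

# Hypothetical ratings: both reviewers mark 19 of 20 items 'absent' (0),
# disagreeing on only a single item.
r1 = [0] * 19 + [1]
r2 = [0] * 20

print(cohen_kappa(r1, r2))  # near 0 despite 95% raw agreement
print(pabak(r1, r2))        # near 0.9, reflecting the high agreement
```

This is the "high agreement but low kappa" paradox described by Feinstein and Cicchetti,31 and the reason the prevalence-adjusted statistic was preferred when reviewers unanimously scored elements as absent.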
Our work is novel because it is the first study to
evaluate EHR-generated clinical summaries in the
acute care setting. Notably, literature on clinical summaries exists for the outpatient setting, but not for the
equivalent document provided to patients during the
hospital discharge process.32–34 Moreover, these studies assess patient and provider perceptions of clinical summaries, with limited data on health outcomes aside from self-reported medication adherence in one study.32 Regardless, our results are consistent
with others in identifying opportunities to improve
clinical summaries. Importantly, we agree that revisions should incorporate feedback from end-users,
including providers (since their workflow generates
clinical summaries) and patients, who are the ultimate
recipients.
Despite the need to improve clinical summaries for
patients, it is unclear how EHR vendors, healthcare
systems and clinicians negotiate their overlapping
responsibilities to generate and refine them. For
example, starting from off-the-shelf EHR software,
how does a healthcare system optimally customise features to refine clinical summary templates? Moreover,
how can clinicians ensure succinct, highly relevant
patient educational tools without a prompt to preview
documents before nurses print them for patients?
Moving forward, we need greater transparency to
understand how local hospital EHR customisation and
clinician-specific workflows influence clinical summary
templates to generate usable documents for patients.
Ideally, future programmes will address these sociotechnical factors35: the interactions between patients, providers and health information technology workflows that affect EHR-generated products and their implementation in clinical practice.
Study limitations relate to our pilot design, including a small sample size and only two study sites, which limit broad generalisation of our results. To identify relevant content metrics omitted by MU, we chose the Society of Hospital Medicine checklist.17 However,
we acknowledge that other regulatory organisations
may recommend different content elements depending
on the population served. While providers can insert
generic discharge instructions into clinical summaries
through point-and-click menu options, it is possible to
modify instructions, which is a variable we did not
assess. Finally, this work did not assess patients' perceptions or understanding of their clinical summaries, nor their ability to carry out action items embedded within the documents. In the future, we plan to elicit patients'
feedback for improving clinical summaries based on
the domains of our audit tool: content, organisation
and understandability. In addition, we will broaden the
scope of our evaluation by increasing sample size and
evaluating summaries from multiple institutions.
Despite these limitations, it is important to disseminate knowledge of suboptimal clinical summaries
because of their broad implications. While MU no
longer mandates clinical summaries, CMS continues to
support their provision as a clinical best practice.36 37
Thus, given their evolution from MU and widespread
use in US hospitals, clinical summaries demand critical
evaluation to ensure that the final product optimally
leverages EHR technology to improve patient care.
Ideally, future incentive programmes will incorporate
established recommendations, such as health literacy
best practices adopted by the Re-Engineered Discharge
Program (Project RED) and the Universal Medication
Schedule, which promote understanding of complex
medication regimens at hospital discharge.29 30
Integrating these and similar patient-centred principles
into refined clinical summary templates could positively impact the 35 million patients discharged from
US hospitals annually.38
In conclusion, we found that currently produced clinical summaries are lengthy, omit or obscure key discharge information, are written at the 8th–12th grade reading level, and score poorly on assessments of
understandability and actionability. Since vendors,
healthcare systems and clinicians share overlapping
responsibility for generating clinical summaries, they
should collaborate to solicit feedback from patients
(end-users) to improve their product.
Acknowledgements The authors thank Julia Adler-Milstein,
PhD at the University of Michigan and Judy Arnetz, PhD at
Michigan State University for reviewing drafts of our
manuscript.
Contributors Study concept and design: ES and CG.
Acquisition, analysis or interpretation of data: all authors.
Drafting of the manuscript: ES and CG. Critical revision of the
manuscript for important intellectual content: all authors.
Statistical analysis: ES and KB. Administrative, technical or
material support: HH, JS, LF, MP, MS and KB. Study
supervision: ES and CG.
Funders This work was supported by the Michigan Department
of Health and Human Services Contract #20151533-00 (HIT
Resource Center) and Michigan State University Institute for
Health Policy.

Competing interests ES reports income from the Center for Medical Education for her role as a commentator in Continuing Medical Education (CME) audio publications.

Ethics approval Michigan State University and its affiliate hospitals' institutional review boards approved our protocol.

Provenance and peer review Not commissioned; externally peer reviewed.

REFERENCES
1 The Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009. 2009. http://healthit.gov/sites/default/files/hitech_act_excerpt_from_arra_with_index.pdf (accessed 29 Mar 2016).
2 Centers for Medicare & Medicaid Services. Eligible Hospital Information. http://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/Eligible_Hospital_Information.html (accessed 29 Mar 2016).
3 Centers for Medicare & Medicaid Services. Meaningful Use Definition & Objectives. http://www.healthit.gov/providers-professionals/meaningful-use-definition-objectives (accessed 29 Mar 2016).
4 Eligible Hospital and CAH Meaningful Use Core and Menu Set Objectives: Stage 1 (2014 Definition). 2014. http://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/Downloads/EH_CAH_MU_TableOfContents.pdf (accessed 29 Mar 2016).
5 Stage 2: Eligible Hospital and Critical Access Hospital: Meaningful Use Core Measures: Measure 6 of 16. 2014. http://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/downloads/Stage2_HospitalCore_6_PatientElectronicAccess.pdf (accessed 29 Mar 2016).
6 Eligible Hospital and CAH Meaningful Use Table of Contents: Core and Menu Set Objectives: Stage 1 (2013 Definition). http://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/Downloads/Hosp_CAH_MU-toc.pdf (accessed 29 Mar 2016).
7 United States Department of Health and Human Services. 2015 Edition Health Information Technology (Health IT) Certification Criteria, 2015 Edition Base Electronic Health Record (EHR) Definition, and ONC Health IT Certification Program Modifications: Final Rule. 2015. https://www.federalregister.gov/articles/2015/10/16/2015-25597/2015-edition-health-information-technology-health-it-certification-criteria-2015-edition-base (accessed 29 Mar 2016).
8 EHR Incentive Programs 2014 CEHRT Rule: Quick Guide. http://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/Downloads/CEHRT2014_FinalRule_QuickGuide.pdf (accessed 29 Mar 2016).
9 Horwitz LI, Moriarty JP, Chen C, et al. Quality of discharge practices and patient understanding at an academic medical center. JAMA Intern Med 2013;173:1715–22.
10 Makaryus AN, Friedman EA. Patients' understanding of their treatment plans and diagnosis at discharge. Mayo Clin Proc 2005;80:991–4.
11 Engel KG, Buckley BA, Forth VE, et al. Patient understanding of emergency department discharge instructions: where are knowledge deficits greatest? Acad Emerg Med 2012;19:E1035–44.
12 Coleman EA, Chugh A, Williams MV, et al. Understanding and execution of discharge instructions. Am J Med Qual 2013;28:383–91.
13 Roundtable on Health Literacy, Board on Population Health and Public Health Practice, Institute of Medicine. Facilitating Patient Understanding of Discharge Instructions: Workshop Summary. Washington DC: National Academies Press (US), 2014. http://www.ncbi.nlm.nih.gov/books/NBK268657/ (accessed 29 Mar 2016).
14 Buckley BA, McCarthy DM, Forth VE, et al. Patient input into the development and enhancement of ED discharge instructions: a focus group study. J Emerg Nurs 2013;39:553–61.
15 Snow V, Beck D, Budnitz T, et al. Transitions of Care Consensus policy statement: American College of Physicians, Society of General Internal Medicine, Society of Hospital Medicine, American Geriatrics Society, American College of Emergency Physicians, and Society for Academic Emergency Medicine. J Hosp Med 2009;4:364–70.
16 Pignone M, DeWalt DA, Sheridan S, et al. Interventions to improve health outcomes for patients with low literacy. J Gen Intern Med 2005;20:185–92.
17 Halasyamani L, Kripalani S, Coleman E, et al. Transition of care for hospitalized elderly patients: development of a discharge checklist for hospitalists. J Hosp Med 2006;1:354–60.
18 Brach C, Keller D, Hernandez L, et al. Ten Attributes of Health Literate Health Care Organizations. 2012. http://nam.edu/perspectives-2012-ten-attributes-of-health-literate-health-care-organizations/ (accessed 29 Mar 2016).
19 Health Literacy Innovations. The Health Literacy Advisor. http://www.healthliteracyinnovations.com/products/hla (accessed 29 Mar 2016).
20 Health Literacy Innovations. Focus on Readability and Readability Indices. 2010. http://www.healthliteracyinnovations.com/resources/hli_publications/ (accessed 29 Mar 2016).
21 Wang L-W, Miller MJ, Schmitt MR, et al. Assessing readability formula differences with written health information materials: application, results, and recommendations. Res Soc Adm Pharm 2013;9:503–16.
22 The Patient Education Materials Assessment Tool (PEMAT) and User's Guide. 2013. http://www.ahrq.gov/professionals/prevention-chronic-care/improve/self-mgmt/pemat/ (accessed 29 Mar 2016).
23 Shoemaker SJ, Wolf MS, Brach C. Development of the Patient Education Materials Assessment Tool (PEMAT): a new measure of understandability and actionability for print and audiovisual patient information. Patient Educ Couns 2014;96:395–403.
24 Rosner B. Fundamentals of Biostatistics. 7th edn. Duxbury Press, 2015.
25 Landis J, Koch G. The measurement of observer agreement for categorical data. Biometrics 1977;33:159–77.
26 Altman D. Practical Statistics for Medical Research. Chapman & Hall/CRC, 1990.
27 Sim J, Wright CC. The kappa statistic in reliability studies: use, interpretation, and sample size requirements. Phys Ther 2005;85:257–68.
28 Safeer RS, Keenan J. Health literacy: the gap between physicians and patients. Am Fam Physician 2005;72:463–8.
29 Jack BW, Chetty VK, Anthony D, et al. A reengineered hospital discharge program to decrease rehospitalization: a randomized trial. Ann Intern Med 2009;150:178–87.
30 Wolf MS, Curtis LM, Waite K, et al. Helping patients simplify and safely use complex prescription regimens. Arch Intern Med 2011;171:300–5.
31 Feinstein A, Cicchetti D. High agreement but low kappa: I. The problems of two paradoxes. J Clin Epidemiol 1990;43:543–9.
32 Pavlik V, Brown AE, Nash S, et al. Association of patient recall, satisfaction, and adherence to content of an Electronic Health Record (EHR)-generated after visit summary: a randomized clinical trial. J Am Board Fam Med 2014;27:209–18.
33 Neuberger M, Dontje K, Holzman G, et al. Examination of office visit patient preferences for the After-Visit Summary (AVS). Perspect Health Inf Manag 2014;11:1d.
34 Emani S, Ting DY, Healey M, et al. Physician perceptions and beliefs about generating and providing a clinical summary of the office visit. Appl Clin Inform 2015;6:577–90.
35 Sittig DF, Singh H. A new sociotechnical model for studying health information technology in complex adaptive healthcare systems. Qual Saf Health Care 2010;19(Suppl 3):i68–74.
36 New CMS NPRM Offers Significant Meaningful Use Flexibility in 2015 through 2017 Program Years. 2015. http://www.himss.org/News/NewsDetail.aspx?ItemNumber=41714 (accessed 29 Mar 2016).
37 Meaningful Use and Your Practice. 2015. http://www.aafp.org/practice-management/regulatory/mu.html (accessed 29 Mar 2016).
38 FastStats - Hospital Utilization. http://www.cdc.gov/nchs/fastats/hospital.htm (accessed 29 Mar 2016).
