
Developing a balanced scorecard for a system of hospitals:
Linking indicators, adjusting indicators, and using data from routinely collected sources

Program on Health Outcomes, University of North Carolina at Chapel Hill, 14 November 2003
HRRC — Hospital Report Research Collaborative

Overview
• Introduction to the Hospital Reports
• Some lessons learned from the Reports
  – Use a framework and hospitals’ strategic priorities: “What is the right scorecard model?”
  – Risk adjust data or use peer groups for fair comparisons: “How can we make fair comparisons of patient satisfaction?”
  – Pay attention to data and data quality: “Can we identify fraud and abuse in Ontario hospital data?”
  – Carefully define performance dimensions and indicators: “How can we integrate women’s health and equity?”
• Ongoing work

Part 1:
Introduction to the Hospital Reports: A balanced scorecard for Ontario hospitals

Ontario, Canada’s largest province
– Population: 11,874,400
– Capital is Toronto (4.7m)
– Hospital sites: 226
– Hospital corporations: 140
– ~85% of hospital revenue from government
– Family physicians: 10,155
– Specialists: 11,327

Hospital Report adapts balanced scorecard framework for Ontario hospitals
• Reports for inpatient care, emergency departments, chronic care, rehabilitation, mental health, and women’s health

Quadrant                                                    Data source
System integration & change (learning and growth)           Management & patient surveys
Clinical utilization & outcomes (internal processes)        Discharge abstracts
Patient satisfaction (customer satisfaction)                Patient surveys
Financial performance & condition (financial performance)   Trial balance statements

Baker and Pink (1995)

Project organization combines grant, contract, and clinical trial characteristics
[Diagram: Sponsors (hospital association, Ministry of Health) fund the Research Collaborative (U of T and hospitals; 4 universities, 4 institutes, 34 faculty, 12 staff), guided by an Advisory Committee and overseen by Ethics & Safety Committees (an Ethics Committee and a Safety Committee)]

Report development follows standard model
Establish Principles (1 year) → Feasibility Study (1 year) → Regional Report (1.5 years) → Hospital-Level Report (1.5 years) → Annual Redevelopment
Led by the Research Collaborative, then by CIHI & Research Collaborative
Focus group testing of indicators by hospital groups

Indicator development also consistent
Example: Clinical Utilization and Outcomes
• Review literature and guidelines
• Exclude rare adverse events
• Complete chart audit
• Perform risk adjustment & 2nd clinical review
• Apply filters and data mining
• Review data
• Review results with clinical panel

Standardized processes reduce costs of Reports
[Bar chart, 1998–2002: report costs decline over time — Feasibility Study (~$750,000), Regional Report (~$1,200,000), then Hospital-Level Reports falling from ~$450,000 to ~$215,000]

Funding for reports supports research
• Five-year work plan exceeds CDN $12m
• Contract supports investigator-driven research
• Hospital Report funds can be matched to other grants
• Report provides database on hospital resources, processes, performance, and context
• All research subject to ethics review and all reporting subject to safety committee

Evaluation process supports redevelopment
• Complete evaluations
  – 6 public focus groups
  – 4 chart audits
  – 1 set of case studies of hospitals
  – Regional focus groups with senior hospital leaders
  – Survey of managers and leaders in hospitals
  – Application of Can Med Assoc standards
  – Usability test for consumer web version
• Future evaluations
  – Survey of link to hospital strategy
  – Focus group testing of new reporting system
  – New data quality system

Ontario hospitals are using Hospital Report framework
• Two years after start of project (2000), over half of surveyed hospitals used some Hospital Report elements
• Most hospitals now report patient satisfaction, finance, and learning and growth with similar indicators

Mean number of indicators in HR quadrants (range):
System Integration & Change:        6 (2–10)
Clinical Utilization & Outcomes:    8 (2–14)
Patient Satisfaction:               5 (2–13)
Financial Performance & Condition:  5 (1–9)

Yap, Baker, and Brown (unpub.)

Part 2:
Some lessons learned from the Reports

Lessons learned from developing a standard balanced scorecard for hospitals
1. Assess feasibility
2. Specify the objectives early on
3. Establish principles to guide development
4. Consult with practitioners
5. Use a solid framework and hospitals’ strategic priorities
6. Carefully define performance dimensions and indicators
7. Pay attention to data and data quality
8. Risk adjust data or use peer groups for fair comparisons
9. Carefully consider methods for reporting relative performance
10. Let hospitals get used to the scorecard before public release

Brown, Anderson, Baker, Blackstien-Hirsch, McKillop, Murray, and Pink (unpub.)

Lesson #5
Use a framework and hospitals’ strategic priorities, or “What is the right scorecard model?”

The Balanced Scorecard links indicators based on strategy

Hospital Scorecards in Ontario              Duke Children’s Hospital          Proposed US Hospital Scorecard Model
Financial Performance and Condition         Mission                           Stakeholder Satisfaction
Patient Satisfaction                        Customers                         Financial Viability
Clinical Utilization and Outcomes           Internal Processes                Clinical Procedures
System Integration and Change Management    Research, Education and Teaching  Learning and Growth

Baker and Pink, 1995; Meliones, 2001; Zelman, 1999

Treat strategy as a testable hypothesis
Strategic objective: “Increase efficiency of care for patients.”
[Diagram, after Kaplan & Norton, 1996: Performance Driver (LEAD) → Strategic Initiative → Strategic Outcome (LAG). Example drivers: guideline use, referral to rehab, shared guidelines. Example initiatives: reduce ALC days, reduce length of stay. Example outcomes: complications, ALOS, costs per case]

Cross-sectional analysis of specific indicators shows few relationships
• Pink, Murray, and McKillop, 2003
  – Weak, inverse, significant relationship between efficiency and patient satisfaction
• Soberman et al, 2001 and Lindsay et al, 2003
  – No relationship between guideline use and clinical outcomes
• Porcellato, Stewart, and Brown, 2003
  – Weak-to-moderate relationships among measures of women’s health performance

… and test the hypothesis in one Report
Strategic objective: “Increase efficiency of care”
[Path diagram: clinical IT use (LEAD) associated with cholecystectomy complications (-0.374); cholecystectomy complications (LAG) associated with cost per weighted case (0.377); information intensity (LEAD) associated with costs per weighted case (-0.230)]

… and across time (1997–2000)
Strategic objective: “Increase efficiency of care”
[Path diagram: patient care % (’97) (LEAD) (-0.301) and stroke ALOS (’98) (LAG) (0.247) linked to costs per weighted case (2000), with cost per case (’98) as a prerequisite (0.499)]

Brown, Anderson, Baker, McKillop, Murray, Pink, 2002

Next steps on strategy and framework…
• Further validate framework
  – Does financial performance help determine satisfaction?
  – What determines financial performance?
  – Can we identify benchmarks and magnet hospitals?
• Link indicators to specific strategies
  – Strategic priorities survey
• Identify determinants of performance and expand to include access to care and ambulatory care

Lesson #8
Risk adjust data or use peer groups for fair comparisons, or “How can we make fair comparisons of patient satisfaction?”

Two quadrants adjust data for patient characteristics
• Clinical utilization and outcomes
  – Adjustment based on co-morbidities, age, and sex
  – Logistic model for complications (strong models)
  – Life-table model for length of stay (strong models)
  – Logistic model for readmissions (weak models)
• Patient satisfaction
  – Adjustments based on age, sex, perceived seriousness, perceived health status, previous use, and who filled out survey (very weak models)

Satisfaction with Emergency Departments has dropped in Ontario
• Satisfaction with overall care, nursing care, and MDs higher than with facilities
• Results lower for large hospitals & Toronto
• Risk adjustment for health status, age, and sex has little effect on scores
• Sex differences greatest for continuity of care, nursing
• Pediatric hospitals have large effect on risk adjustments

Waiting times and satisfaction in the ED
• Comparison of linked data from 125 emergency departments and 24,134 patient surveys
• Cochrane-Armitage test for trend between the difference in LOS (patient – hospital) and patient loyalty and overall satisfaction
• Multiple stepwise regression on difference in LOS
  – age, sex, self-rated health and seriousness of problem, triage level, survey completion by other, day and time of visit, and hospital type and region

Perception and reality in waiting time satisfaction
• The mean LOS in the ED was 2.29 hours and the median was 1.37 hours in 1999
• 83.5% rate the overall quality as good or excellent
• 82.5% rate loyalty as good or excellent
• Hospital LOS essentially uncorrelated with loyalty (0.05, p<0.0001) or overall satisfaction (0.07, p<0.0001)
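The Cochrane-Armitage statistic named in the methods is simple enough to compute directly. This sketch uses invented counts (not the study’s data) to test for a linear trend in the proportion satisfied across ordered bands of the LOS difference:

```python
import math

def cochrane_armitage(successes, totals, scores=None):
    """Cochrane-Armitage test for a linear trend in proportions across
    ordered groups. Returns (z, two-sided normal p-value)."""
    k = len(successes)
    scores = scores or list(range(k))
    N = sum(totals)
    pbar = sum(successes) / N
    t_stat = sum(t * (r - n * pbar)
                 for t, r, n in zip(scores, successes, totals))
    s1 = sum(t * n for t, n in zip(scores, totals))
    s2 = sum(t * t * n for t, n in zip(scores, totals))
    var = pbar * (1 - pbar) * (s2 - s1 * s1 / N)
    z = t_stat / math.sqrt(var)
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p from the normal tail
    return z, p

# Hypothetical example: count of "good or excellent" ratings in five
# ordered bands of (perceived - actual) waiting-time difference.
good = [900, 850, 780, 700, 600]
n    = [1000, 1000, 1000, 1000, 1000]
z, p = cochrane_armitage(good, n)
print(round(z, 1), p < 0.001)  # -17.7 True: a strong decreasing trend
```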

Perceived versus actual length of stay (LOS) in the ED (57% agreement)

Actual LOS (hours)   Perceived LOS (hours)
                     <2      2–6     6–12    >12
<2                   9438    5180    602     313
2–6                  1536    3495    589     187
6–12                 225     418     367     121
>12                  186     263     115     107

Moty, Croxford, and Brown (unpub.)

Acceptable waiting time for each triage level

Perceived time    Triage level
to treatment      1 (n=65)   2 (n=382)   3 (n=1782)   4 (n=2735)   5 (n=1513)   p
< 1 hr            81.4       88.9        85.9         81.8         85.3         <0.001
1–2 hrs           38.9       49.3        51.0         45.3         40.7         <0.001
> 2 hrs           0.0        18.2        14.4         16.3         15.6         <0.01

1 = resuscitation, 2 = emergent, 3 = urgent, 4 = semi-urgent, 5 = non-urgent

Moty, Croxford, and Brown (unpub.)
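Assuming the perceived-versus-actual matrix is oriented with actual LOS in rows and perceived LOS in columns, the agreement figure is just the share of surveys on the diagonal:

```python
# Cross-tabulation of actual vs perceived LOS bands, as read from the slide.
table = [
    [9438, 5180, 602, 313],   # actual < 2 h
    [1536, 3495, 589, 187],   # actual 2-6 h
    [225,   418, 367, 121],   # actual 6-12 h
    [186,   263, 115, 107],   # actual > 12 h
]
total = sum(sum(row) for row in table)
diagonal = sum(table[i][i] for i in range(4))
agreement = diagonal / total
print(f"{agreement:.1%}")  # 57.9%, matching the ~57% reported
```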

Loyalty, overall satisfaction, and acceptability of different waiting times

Spearman correlation (p<0.0001)
Acceptability of:                  Loyalty   Overall quality
Seen by a nurse within 15 minutes  0.29      0.38
To be registered                   0.29      0.32
To see a nurse                     0.38      0.45
To see a physician                 0.46      0.53
To receive treatment               0.50      0.51
Perceived waiting time             0.34      0.40

Moty, Croxford, and Brown (unpub.)

Regression results (n = 9618) for accuracy of perceptions

                            Estimate   95% CI            P
Intercept                   0.56       0.38 to 0.75      <0.001
Triage level                -0.18      -0.27 to -0.09    <0.001
Health before visit         0.02       0.01 to 0.03      <0.001
Time of the visit           0.03       0.01 to 0.05      <0.001
Seriousness of the problem  -0.07      -0.09 to -0.05    <0.001
Age                         -0.01      -0.02 to -0.005   <0.001
Community vs teaching       0.03       -0.02 to 0.08     0.30
Small vs teaching           -0.03      -0.09 to 0.03     0.26

Moty, Croxford, and Brown (unpub.)

Three hypothetical cases

                        Case 1              Case 2       Case 3
Triage level            Urgent              Urgent       Resuscitation
Health before           Excellent           Excellent    Very poor
Visit time              12pm–7am            7am–3pm      12pm–7am
Seriousness of visit    Moderately serious  Not serious  Extremely serious
Age                     30                  70           20
Hospital type           Teaching            Small        Community
Region                  Toronto             North        East
Time effect             +0.02               -0.20        +0.58

Moty, Croxford, and Brown (unpub.)

Consequences for risk adjustment
• Association with triage level and self-rated seriousness suggests problem is with primary care users in Emergency Departments
  – New survey question “do you have a regular physician in the community” is strongest risk adjustment factor
• Association with triage level and other factors prompts closer look at range of explanatory variables…

Cosmo-geophysical determinants of satisfaction
• How good is our current risk adjustment model for patient satisfaction, or are silly risk adjustment factors more important than serious factors?
  – A number of studies of financial markets find strong predictors of financial performance in strange places.
  – Can we find the same associations with ED patient satisfaction using stepwise regressions?

Data from several different sources
• ED patient satisfaction data for 1999
• Lunar cycle period for each visit from the USN
  – Full moon is sundown to sunrise when full moon in sky
• US–Canada exchange rate and the closing prices for TSE 300
  – Also modeled one-day lag for financial data
• Coal prices from CANSIM and number of servers connected to the Internet
• Binary variable for weekday or weekend visit
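Entering “phase of the moon” as sin and cos terms, as the results tables below do, is the standard way to put a cyclic variable into a regression: the pair is continuous across the 0°/360° wrap-around. With evenly sampled phases the least-squares fit reduces to the Fourier projections below (synthetic data, purely illustrative):

```python
import math

def fit_cycle(thetas, y):
    """Least-squares fit of y ~ mean + b_sin*sin(theta) + b_cos*cos(theta)
    for evenly sampled phases, via discrete Fourier projections."""
    n = len(y)
    mean = sum(y) / n
    b_sin = 2 / n * sum(v * math.sin(t) for t, v in zip(thetas, y))
    b_cos = 2 / n * sum(v * math.cos(t) for t, v in zip(thetas, y))
    return mean, b_sin, b_cos

# Synthetic satisfaction scores that dip at the full moon (theta = 0).
thetas = [2 * math.pi * i / 360 for i in range(360)]
y = [80 - 0.6 * math.cos(t) for t in thetas]
mean, b_sin, b_cos = fit_cycle(thetas, y)
print(round(mean, 1), round(b_cos, 2))  # 80.0 -0.6
```

The recovered cos coefficient (-0.6) is the amplitude of the dip at full moon; a raw 0–360° phase variable could not represent this because it treats 359° and 0° as far apart.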

Results of the regression model (I)

Independent variable         Parameter estimate   p-value
Loyalty
  Cos (phase of the moon)    -0.54361             0.0163
Nursing care
  Sin (phase of the moon)    -1.02979             0.0002
  Cos (phase of the moon)    -0.60976             0.0211

Results of the regression model (II)

Independent variable         Parameter estimate   p-value
Satisfaction with waiting time
  Sin (phase of the moon)    -0.54807             0.0075
  Change in the TSE          +27.54579            0.0387
  Exchange rate (lag of 1)   +29.81395            0.0307
Satisfaction with physician care
  Sin (phase of the moon)    -0.77653             0.0006

Brown, Croxford, and Murray (unpub.)

Satisfaction is different by phase of the moon
[Chart: professional, facility, and loyalty satisfaction scores plotted against lunar phase in degrees (0 = full moon, 180 = new moon)]

Brown, Croxford, and Murray (unpub.)

Is this relevant to practice?
• Debate is varied on effect of full moon
  – No effect on volumes but some effect on dog bites
• In Canada, effect sparks a debate in local papers when Friday the 13th and a full moon converge
  – “All the wild women and all the wild men come out to play … A full moon is the time to avoid emerg.”
    » Cambridge Memorial Hospital emergency room nurse Kathryn McGarry
  – “Ahh, it’s a load of bunk.”
    » Waterloo Regional Police Sergeant Mike Allard

Cambridge Reporter, 13 June 2003

Caveats and other stories
• Toronto consumers vs Ontario patients
  – Dissatisfied Toronto consumers less loyal
  – Toronto respondents rate physicians and overall impressions differently
• Seasonality, increased capacity, or improvement
  – Some hospitals show increase in satisfaction over time
  – Most likely to be hospitals in holiday areas or engaged in QI activities around satisfaction
• No denominational effect

Next steps on risk adjusting patient satisfaction with emergency department care
• Begin work with NRC-Picker data that will include event-based questions (did you receive…) and perception-based questions (how do you feel about…)
  – Study differences in impact of risk adjustment on event- and perception-based questions
  – Study association between event-based questions and staff and capital in emergency departments
  – Study association with volume of care provided in ED
  – Study association with clinical characteristics of patients through linked (with consent) discharge abstracts and surveys

Lesson #7
Pay attention to data and data quality, or “Can we identify fraud and abuse in Ontario hospital data?”

Data quality is limited
Example: Accuracy of complications
[Bar chart: true positive % and true negative % of coded complications, by condition]

Brown and Anderson (unpub.)

DRG creep comes to Canada with new funding formula
• Around the time of introduction of efficiency bonuses, apparent sickness of patients in some hospitals increases dramatically
• Data on creep (unjustified increases in diagnostic coding) compared to:
  – Data on health records policies (70 hospitals)
  – Chart audit data (8 hospitals)

No evidence links intentional up-coding to MRD organization and resources

Item                                             p value   Association (significance)
Coders specialize in diagnoses                   0.052     Borderline significant; higher creep with specialized coders
Coders specialize in patient groups              0.211     Not significant
Have more coders than budgeted                   0.426     Not significant
Use a grouper to determine coding                NA        Most use a grouper
Outsource coding, report to a finance director   NA        Too few outsource or report

Brown, Preyra, Flintoft, Blackstien-Hirsch, Choudhry, and Choudhry (unpub.)

No evidence links intentional up-coding to coder education

Item                                 p value   Association (significance)
Use of internal case-mix education   0.803     Not significant
Use of external case-mix education   NA        Too few use external

And the effect of data quality practices is unclear

Use of internal data quality audits  0.018*    Significant; higher creep with audits
Support provincial data monitoring   NA        Almost all support

Little evidence links intentional up-coding to clinical interaction & oversight

Item                                                    p value   Association (significance)
Change diagnoses after grouper review                   0.869     Not significant
Change diagnoses using guideline after grouper review   0.405     Not significant
Nurses only note new diagnoses                          0.782     Not significant
Unilaterally change MD diagnoses                        0.335     Not significant
Any consultation with MD                                0.030*    Significant; creep higher with MD consult

Brown, Preyra, Flintoft, Blackstien-Hirsch, Choudhry, and Choudhry (unpub.)

Accuracy of data and coding creep
• Intentional up-coding might be occurring for pre-admission co-morbidities (not significant)
• Intentional up-coding might not be occurring for post-admission co-morbidities (not significant)

Odds ratios
Pre-admission co-morbidities    High true positive    Not significant (4.5)
                                High false positive   Not significant (4.5)
Post-admission co-morbidities   High true positive    Not significant (0.2)
                                High false positive   Not significant (2.0)

Brown, Choudhry, and Choudhry (unpub.)

Tentative (very tentative) conclusions
• The comparison of MRD characteristics and creep provides no consistent evidence for or against intentional up-coding
  – Creep is higher in hospitals that have coders who specialize in specific diagnoses, who may be better trained to find more co-morbidities (for intentional up-coding)
  – Creep is higher in hospitals where coders consult with MDs (against intentional up-coding)
  – And creep is higher in hospitals that perform internal data quality audits (unclear)
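The “not significant” verdicts beside seemingly large odds ratios are plausible given only 8 audited hospitals. A quick Woolf confidence interval on a hypothetical 2×2 table (invented counts, not the study’s data) shows how an OR of 4.5 can still cross 1:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf (log-based) 95% CI for the table [[a, b], [c, d]]."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Tiny hypothetical sample: 8 hospitals cross-classified by creep level
# and coding accuracy. Small cells make the interval enormous.
or_, lo, hi = odds_ratio_ci(3, 1, 2, 3)
print(round(or_, 1), lo < 1 < hi)  # 4.5 True: CI spans 1, not significant
```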

Next steps on data quality
• Test agreement between chart audit and routinely collected data using regression models with a variable for increase in efficiency
• Extend chart audits to chronic care data and mental health data
• Use flow charting to identify weaknesses in data quality based on processes
• Compare health records practices to chart audit results

Lesson #6
Carefully define performance dimensions and indicators, or “How can we integrate women’s health and equity?”

Equity and women’s health in Canada
• Equity is a key value in the Canadian health care system but there is little performance measurement related to equity
• Women’s health is a focus of federal and provincial policies but little performance measurement in women’s health beyond sex-specific measures

Is Women’s Health Performance Related to Overall Performance?
[Chart: vaginal-to-abdominal hysterectomy ratio across Ontario hospitals, ranging from roughly 0 to 6]

Is Women’s Health Performance Related to Overall Performance?
[Chart: patient satisfaction with nursing care across Ontario hospitals]

Can improvements in women’s health improve overall performance?

                V:A ratio   Complications   LOS
V:A ratio       —           -0.19           -0.55*
Complications   -0.19       —               0.28**
LOS             -0.55*      0.28**          —

* statistically significant

Next steps on women’s health
• Study use of women’s health equity indicators in standard CQI model (40 hospitals)
• Study relationships between different types of equity indicators (e.g. equity of access vs. equity of outcome) using tracer quality measures and economic/epidemiological techniques
• Study relationship between organizational culture related to women’s health and performance on equity indicators

Part 3: Ongoing work

Other ongoing work based on the Hospital Report project
• Psychometric analysis of accreditation survey data
to identify potential indicators for reporting
• Comparison of public and professional opinions
around acceptable levels of performance
• Analysis of relationship between nurse work life and
patient satisfaction with care
• Comparison of different methods of identifying
benchmark hospitals for study
• Development of a dummies guide for the Reports
