
Understanding Healthcare Clinical Process and Outcome Measures and Their Use in the Baldrige Award Application Process

Joseph G. Van Matre, University of Alabama at Birmingham
Karen E. Koch, North Mississippi Medical Center
2009, ASQ

The introduction of quality management to healthcare in the early 1990s, encouraged by a series of Institute of Medicine reports, has naturally led to a concern with the measurement of quality as provided by hospitals, individual providers, and other entities. Given the intricate nature of modern healthcare, these measures are varied and, in some cases, quite complex. Some measures require severity adjustments; some do not. They are used by consumers, payers, governments, and providers for a myriad of purposes. These measures also are used to demonstrate the quality and effectiveness of clinical process management. This article reviews the major types of clinical process and outcome measures and their rationale, use, advantages, disadvantages, and utilization in Malcolm Baldrige National Quality Award applications. Key words: improvement measures, Malcolm Baldrige National Quality Award, outcome measures, performance measures, process measures, quality measurement, selection measures

W. Edwards Deming had a list of deadly diseases and obstacles that would hinder an organization's transformation into one practicing quality management principles. One of the obstacles was "Our problems are different" (Deming 1986, 130). Thus, in the 1980s one often heard responses from healthcare such as "We are not making widgets here," "Every patient is different," or "The industrial model won't work in healthcare." A further problem was that healthcare lacked the international competitive pressures that stimulated the American auto, office equipment, and electronics industries to begin their quality initiatives. Today, however, healthcare is a member of the quality community, and evidence of this is abundant. For example, the American Society for Quality (ASQ) has a vibrant healthcare division, the fastest growing division in ASQ (ASQ Healthcare Division 2008). Note also that although healthcare only became eligible for the Malcolm Baldrige National Quality Award (MBNQA) in 1999 (12 years after the award was initiated), healthcare accounted for more than half of the 85 Baldrige Award applications in 2008 and received six of the 13 site visits, three times as many as any other category (NIST 2008). The 1990 book Curing Health Care (Berwick et al. 1990) was one of the catalysts that helped bring the principles of quality management to healthcare organizations by demonstrating these ideas and

QMJ Vol. 16, No. 1, 2009, ASQ

their applicability in a series of projects. Among the principles espoused in Curing Health Care and elsewhere (Hackman and Wageman 1995; McLaughlin and Kaluzny 2006; Van Matre 1992) are: 1) continuous improvement and 2) data-based decision making. Underlying these two principles is the assumption of measurement: measurement to inform as to where improvement is needed, and measurements/data to guide the decisions as to what to change to bring about the improvement. Research has indicated that a wider scope of measurement is indeed associated with improved results (Evans 2004). In fact, financial results can sometimes lead to clinical process improvement projects. For example, North Mississippi Medical Center (NMMC) data showed that the treatment of community-acquired pneumonia (CAP) for Medicare patients was resulting in a loss (Medicare reimbursement minus cost) of $750,000 per year. Following the improvement project, costs went down about 10 percent, while both mortality and length-of-stay (LOS) results improved by more than 30 percent (Englert, Davis, and Koch 2001). The Baldrige Award criteria focus heavily on the use of data and information, and include clinical quality outcome and process measures as well as other organizational results, such as patient and stakeholder satisfaction, work force engagement, and measures of financial sustainability, similar to the concept of the balanced scorecard (Kaplan and Norton 1992). Many clinical measures of quality are publicly available; for example, the Hospital Compare Web site includes 26 common outcome and process measures of clinical healthcare quality, all reported for the majority of U.S. hospitals (see Table 1). The public reporting of clinical data, often referred to as transparency, is believed by many to be a major motivating factor in driving hospital continuous quality improvement (CQI) efforts (Galvin 2005; Chassin 2002; Hibbard, Stockard, and Tusler 2005).
While clinical quality measures are well understood within the healthcare community, they are often difficult for quality professionals in other sectors to understand, and many of these professionals may be involved as Baldrige Award examiners for healthcare applications. This article has two goals: 1) to describe clinical quality process measures and outcome measures and how each type of measure is used to assess and improve healthcare quality, and 2) to illustrate how clinical process and outcome measures can be better linked and reported to demonstrate healthcare quality in the MBNQA process.


Quality measures can be classified into two general subject headings: outcome measures and process measures. One appeal of outcome measures is that they track results of direct importance to consumers and purchasers; the most important of these results is the patient's survival. Thus, mortality rate (for example, the mortality rate for coronary artery bypass graft (CABG) surgery) is an oft-cited outcome measure because of the volume, risk, and expense associated with such surgery. To be useful and offer relevant comparisons, mortality rates need to be disease and/or procedure specific and risk adjusted. Risk adjustments take into account such factors as the patient's age and gender, severity of disease, and comorbidities such as diabetes and hypertension. Risk-adjusted data more readily allow for a fair and accurate comparison of provider outcomes because they mitigate the effect of patient-mix differentials. Other outcome measures include hip-replacement postoperative hemorrhage or hematoma, and inpatient mortality rate for acute myocardial infarction (AMI) patients (Premier 2006). While acknowledging the relevance of outcome measures, Eddy (1998) points out other advantages. He notes that while the proportion of women receiving a mammogram (a process measure) is indicative of a plan's preventive medicine, the plan's breast cancer death rate measures not only preventive aspects of care but also the imaging equipment's quality, the radiology technician's skills, the radiologist's expertise in reading the film, the follow-up of positive results, and the oncologist's treatment quality. They [outcome measures] aggregate the effects of all of the things plans do for a condition, and they leave plans

Table 1: Publicly available clinical process and outcome measures.

Acute myocardial infarction (AMI)
  Process: Aspirin on arrival (a); aspirin prescribed at discharge (a); ACEI or ARB for LVSD (a); adult smoking cessation counseling (a); beta-blocker prescribed at discharge (a); beta-blocker at arrival (a); fibrinolytic therapy received within 30 minutes of arrival (a); primary PCI received within 90 minutes of hospital arrival (a)
  Outcome: Inpatient mortality (b); 30-day readmission (b); 30-day mortality (c)

Heart failure
  Process: Discharge instructions (a); evaluation of LVS function (a); ACEI or ARB for LVSD (a); adult smoking cessation counseling (a)
  Outcome: 30-day readmission (b); 30-day mortality (c)

Pneumonia
  Process: Oxygenation assessment (a); pneumococcal vaccination (a); blood cultures performed within 24 hours prior to or 24 hours after hospital arrival for patients who were transferred (b); blood cultures performed in the ED prior to initial antibiotic received in the hospital (a); adult smoking cessation counseling (a); initial antibiotic received within six hours of hospital arrival (a); initial antibiotic selection (a); influenza vaccination (a)
  Outcome: 30-day readmission (b); 30-day mortality (c)

Surgical care improvement project (SCIP)
  Process: Prophylactic antibiotic within one hour prior to surgical incision (a); prophylactic antibiotic selection (a); prophylactic antibiotic discontinued within 24 hours after surgery end (48 hours for CABG) (a); surgery patients with appropriate hair removal (b); surgery patients on beta-blocker therapy prior to admission who received a beta-blocker during the perioperative period (b); surgery patients with recommended venous thromboembolism prophylaxis ordered (a); surgery patients who received appropriate prophylaxis within 24 hours prior to surgery to 24 hours after surgery (a)
  Outcome: Colorectal surgery patients with immediate postop normothermia (b); cardiac surgery patients with controlled 6 a.m. postop blood glucose (b)

Pregnancy and related conditions
  Outcome: Vaginal birth after caesarean (b); inpatient neonatal mortality (b); 3rd- or 4th-degree laceration (b)

Children's asthma
  Process: Reliever medications (like albuterol) (a); systemic corticosteroids (a); home management care plan document (b)

Hospital outpatients AMI
  Process: Fibrinolytic therapy within 30 minutes (b); median time to transfer to another facility for acute coronary intervention (b); aspirin on arrival (b); median time to ECG (b)

Surgical (outpatient)
  Process: Antibiotic timing and selection (b)

Hospital-based inpatient psychiatric services
  Process: Admission screening (b); physical restraint (b); seclusion (b); multiple antipsychotic medications at discharge (with appropriate justification) (b); post-discharge continuing care plan (transmitted) (b)

Key: (a) TJC and Hospital Compare measure; (b) TJC measure only; (c) Hospital Compare measure only. ACEI = angiotensin-converting enzyme inhibitor; ARB = angiotensin receptor blocker; CABG = coronary artery bypass graft; ED = emergency department; LVS(D) = left ventricular systolic (dysfunction).


free to determine for themselves the best things to do (Eddy 1998, 11). This freedom to innovate, however, is not a plus in all situations. And, as later noted, while process measures are prescriptive, they tell plans what to do but leave plans free in how to do it. Although outcome measures may be determined for a hospital or a health plan, many argue that physician-specific data will be most useful to consumers (Landon et al. 2003). As one drills down to the physician level, however, sample sizes may be inadequate to detect statistically significant differences. For example, suppose Surgeon A has a 2 percent CABG mortality rate and Surgeon B has a 4 percent rate; although the difference is only 2 percentage points, B's mortality is twice as high as A's. In order to detect such a difference with 80 percent power at a significance level of 5 percent, sample sizes of 1141 CABG patients would be required for each surgeon (Minitab 2005). Since cardiac surgeons in Pennsylvania perform a median of 80 CABG surgeries per year and the maximum volume in 2004 was 215 cases (PHC4 2004), several years would be required to amass the necessary data, and the technology may change in the interim (perhaps to beating-heart, off-pump surgery), thus making older data difficult to compare with current data. Another drawback of outcome measures is that "the conclusion that a variation in outcome is due to a difference in quality of care is essentially a diagnosis of exclusion. If one cannot explain the variation in terms of differences in the type of patient, in how the data were collected, or in terms of chance, then quality of care becomes a possible explanation. However, the conclusions that differences in outcome are due to differences in quality of care will always be tentative and open to the possibility that the apparent association between a given unit and poor outcome is due to confounding by some other factor that was not measured or measured inadequately" (Mant 2001, 477).
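The surgeon sample-size figure cited above can be reproduced with the standard pooled-variance normal approximation for a two-sided, two-proportion test. This is a sketch of that textbook calculation (Minitab's documented method for two-proportions power is assumed to follow this approximation; its exact output may differ slightly):

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided two-proportion z-test,
    using the pooled-variance normal approximation."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_b = NormalDist().inv_cdf(power)          # quantile for desired power
    p_bar = (p1 + p2) / 2                      # pooled proportion under H0
    numerator = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Surgeon A at 2 percent CABG mortality vs. Surgeon B at 4 percent:
print(n_per_group(0.02, 0.04))  # 1141 patients per surgeon
```

At roughly 80 CABG cases per surgeon per year, accumulating 1141 cases per surgeon would take well over a decade, which is the small-sample problem the text describes.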
Eddy also points out that outcomes are always subject to a chance element, but when they also are rare, delayed (for example, five-year cancer survival rate), and confounded, then blind adherence to outcomes "will produce inaccurate results and cause patients, physicians, and plans to make bad decisions" (Eddy 1998, 17). Thus, outcome measures are subject to debate and contention, for example, "our patients are sicker," "the risk adjustment was inadequate," or "patient noncompliance accounts for the difference." Process measures can help deal with such impediments. Process measures assess specific components of the encounters between providers and patients, and are often used to determine the degree of adherence to evidence-based recommendations for clinical practice. For example, administering prophylactic antibiotics to surgery patients within an hour before incision is a process measure known to be effective in preventing post-operative wound infections. Data for a major southeastern city during 2004-2005 show that compliance rates for hospitals within the city ranged from 63 to 95 percent, while the national percentile was 93 percent and the national average was 74 percent (USDHHS 2006). Note the large variation in these rates as compared to CABG mortality, where rates might range from 1.5 to 5 percent. For prophylactic antibiotics, local differences from the national mean are as large as 21 percentage points, and differences between local hospitals are as large as 32 percentage points. Further, this rate is determined for several surgeries, including CABG, knee, hip, colon, and hysterectomy. With both larger samples and more variation among providers than for a single outcome (say, CABG mortality), the data are more likely to detect statistically significant differences among hospitals. Process measures enjoy other advantages vis-à-vis outcome measures. First, process measures typically represent evidence-based desired interventions for specific conditions or situations.
Thus, every patient without contraindications should receive the intervention, and there is usually no need to risk adjust the data. All pneumonia patients should receive antibiotics within six hours of hospital arrival (Mitka 2007), surgery patients should receive antibiotics within one hour of incision, and heart attack (AMI) patients should receive aspirin upon hospital arrival, all unless contraindicated. Not requiring risk adjustment eliminates the contention associated with its adequacy and accuracy, and simplifies data collection and measurement calculation.
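For the outcome measures that do require risk adjustment, provider comparisons are commonly summarized as an observed-to-expected (O/E) ratio, where "expected" deaths come from a risk model using factors such as age, severity, and comorbidities. The following sketch uses made-up per-patient risks (no real risk model is implied) to show how identical raw mortality can be better or worse than expected depending on case mix:

```python
def smr(observed_deaths, expected_probs):
    """Standardized mortality ratio: observed deaths divided by the sum
    of model-predicted death probabilities for the same patients.
    SMR < 1 means fewer deaths than the case mix predicts."""
    return observed_deaths / sum(expected_probs)

# Two hypothetical hospitals, each with 3 deaths among 100 patients, but
# different case mixes (illustrative per-patient risks from a risk model):
sicker_mix = [0.05] * 100      # model expects 5 deaths
healthier_mix = [0.02] * 100   # model expects 2 deaths

print(round(smr(3, sicker_mix), 2))     # 0.6: better than expected
print(round(smr(3, healthier_mix), 2))  # 1.5: worse than expected
```

The same 3 percent raw rate thus supports opposite quality conclusions once patient mix is accounted for, which is why unadjusted outcome comparisons invite the "our patients are sicker" objection.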

Another advantage of process measures is their ease of interpretation. Process measures are direct measures of the quality of care and, as such, more is better. If the rate is low, the remedy is evident: apply the intervention more often. And, without the confounding that accompanies outcome measures, the process measure can be directly influenced by actions taken by the hospital or clinician. Press (2004, 208) notes that "of all quality indicators available today, the most easy, comprehensive, and cost effective to obtain are process measures." Four cautions should be noted concerning process measures. First, they are not likely to be meaningful to consumers. Consumers are much more interested in mortality and infection rates than in the percentage of patients receiving aspirin or pre-op antibiotics. Second, process measures must be related to outcomes. The mere fact that a process measure becomes part of a report card means resources will be devoted to it. These resources are wasted without evidence linking process to outcome, thus the emphasis on evidence-based interventions. Third, the collection of process measures is labor intensive and can take resources from clinical services and quality improvement activities. Finally, some process measures may be affected by patient preferences. For example, some women refuse to have a mammogram because of the discomfort associated with the procedure or because of an aversion to radiation. But for most processes, compliance is not an issue (for example, thrombolytics for heart attacks) or can be assumed to be similar across plans (Eddy 1998, 18). It is apparent that both process and outcome measures will be useful for the foreseeable future. When discussing similar questions, industrial quality authorities reached these conclusions: If you don't look at result [outcome] metrics, you won't know whether the process is working or not. If you don't look at process metrics, you won't be able to figure out what worked or didn't work.
Managers must understand that process causes results and that process metrics tell you why results metrics happened. The purpose of studying both is to pinpoint what must be done to improve the process and thus improve the results (Shiba, Graham, and Walden 1993, 483). Hospitals track and report both outcomes and related processes for at least their internal quality improvement activities. The Institute of Medicine (IOM) has defined quality as "the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge" (IOM 1990). The desired health outcomes clearly refer to patient outcomes/results, while health services that are consistent with current professional knowledge would include process measures. Assessing whether process quality adheres to professional standards "can be done by creating a list of quality indicators that describe a process of care that should occur for a particular type of patient or clinical circumstance and by evaluating whether patients' care is consistent with the indicators" (IOM 2001). Adherence to such process measures gained particular importance following a New England Journal of Medicine paper that revealed that patients were receiving only about half of the recommended medical care for a variety of needs, including acute care, chronic care, and preventive care, and that such deficiencies were serious threats to the public's health (McGlynn et al. 2003). This study has been called the most comprehensive healthcare quality study ever done anywhere in the world (Kenney 2008). Medical data have been recorded for centuries. In 1665 bills of mortality were introduced in England to provide statistics on deaths caused by the plague (Wellcome Library 2008). An early attempt to associate the process of care with outcome was Louis's study of pneumonia and bloodletting, a standard practice in medicine for more than 2500 years.
He concluded that early, aggressive bloodletting was not beneficial to the patient (Best and Neuhauser 2005). Louis's study did not include the absence of bloodletting, just more or less of it. The process measures that are advocated today, whether by The Joint Commission (TJC), HEDIS, Hospital Compare, Leapfrog, or some other entity, are usually based on evidence from randomized controlled trials that have proven efficacy in a controlled setting. The usefulness of these measures may not have been demonstrated in a wider setting. Though there


are limited studies that focus on hospital process performance and outcomes, several recent studies indicate that the process measures are related to outcomes in a variety of hospital settings. For example, a 2006 study was among the first to link this variability in hospital process performance with patient outcomes: "These data support the use of guideline-based process measures as an important means of assessing an institution's quality of care" (Peterson et al. 2006, 1918). Three months later, TJC's measures for AMI were found to explain 6 percent of the variation in the risk-adjusted 30-day mortality rate (Bradley et al. 2006). More recently, the Hospital Compare process measures were found to be associated with reductions in the risk-adjusted mortality rates for three conditions: AMI, heart failure, and pneumonia (Jha 2007). Finally, a study of the patient safety practices advocated by the Leapfrog Group found that "Consumers who choose hospitals identified by Leapfrog as having begun to implement patient safety practices will likely find hospitals with better process quality and lower mortality rates" (Jha, Zhonghe, and Epstein 2008). Proponents of the measurement efforts believe that linkages between process measures and outcomes will become more vivid as administrative claims data are broadened to include additional data that will lead to improvements in the risk adjustment models; several additions are being made in 2008 (Pine 2007).

Use of Clinical Process and Outcome Measures in MBNQA Program Applications

The MBNQA criteria are organized into seven categories. Categories one through six describe the organization's leadership, planning, customer service, information management, human resources, and process management. Category seven describes the organization's results associated with the approaches described in categories one through six. A healthcare application would require clinical quality results in Item 7.1. Healthcare criteria are updated annually and can be accessed on the National Institute for Standards and Technology (NIST) Web site.

The seven-category application is limited to 50 pages. Item 7.1 is worth 100 of the application's overall 1,000 points, and the typical application devotes three-and-a-half to four pages to it. This item, Health Care Outcomes, includes healthcare process measures, patient outcome measures, and other information. The criteria state that "overall, this is the most important item in the healthcare criteria" (NIST 2008, 45).

Item 7.1 should provide results for the organization's most meaningful clinical services (for example, trauma outcomes for hospitals with a large emergency services/trauma program) as well as for some of the standard services the organization provides. All hospitals are required to routinely report standardized information on their performance to TJC. TJC has established core measures for the following services: AMI, heart failure, pneumonia, surgery care, pregnancy and related conditions, hospital-based inpatient psychiatric services, hospital outpatient department (implemented 1/08), and children's asthma care (implemented 1/08) (see Table 1) (Mitka 2007). TJC core measure data are publicly available on the Hospital Compare Web site. And although the Baldrige Award examiner is not permitted to seek external confirmation of the results presented in the application, one would expect the applicant to provide some of these core measures in the application, along with a comparison of itself to local competitors using the Hospital Compare information. Healthcare results should be reported on all components of the organization. For example, a multifaceted hospital should provide appropriate results for all of its units (acute care, home care, long-term care), and a multiple-hospital system should provide results for all of its hospitals. In addition, results should be trended over time, typically several years, and a benchmark should be provided for comparison. In reality, many Baldrige applicants select measures that are favorable to their application by having good trends over several years (or at least multiple quarters) and solid benchmarks that their measures are approaching. Benchmarks are sometimes challenging to identify, but they are necessary for the applicant to demonstrate and illustrate its achievements. The

Figure 1: Pneumonia processes and outcome. The left panels plot in-process indicators (oxygenation assessment; blood culture collected prior to antibiotic administration; blood culture collected within 24 hours of admission; adult smoking advice/counseling; initial antibiotic within four hours of arrival) from 2003 through 2007 Q3 against a top-10-percent benchmark; the right panel plots pneumonia mortality over the same period against its top-10-percent benchmark. (RB)

optimal benchmark is a top-tier performance (for example, top 10 percent) from a large, preferably national, pool of competitors. It is not uncommon for hospitals to use several national clinical databases to compare their clinical processes and outcomes, such as those of the Society of Thoracic Surgeons (STS), the American College of Cardiology (ACC), or Premier (formerly CareScience). These databases provide benchmarks, and some of them will perform severity adjustment, such that the hospital can assess its morbidity and mortality rates compared to what would be expected for the given population of patients. As noted, TJC-accredited hospitals are required to submit core measures data, so these data are readily available for Item 7.1, and the applicant can use TJC's nationwide pool of mandatory submissions for a benchmark.
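A "top 10 percent" benchmark of this kind is just the 90th percentile of the comparison pool. A minimal sketch, using a hypothetical pool of ten hospitals' compliance rates and the nearest-rank percentile method (benchmark databases may use other interpolation rules):

```python
import math

def top_decile_benchmark(rates):
    """90th-percentile value of a pool of rates (nearest-rank method).
    For a 'more is better' process measure, this is the top-10-percent
    benchmark a hospital would compare itself against."""
    ordered = sorted(rates)
    rank = math.ceil(0.90 * len(ordered))  # 1-indexed nearest rank
    return ordered[rank - 1]

# Hypothetical pool of ten hospitals' compliance rates:
pool = [0.63, 0.68, 0.71, 0.74, 0.78, 0.81, 0.85, 0.88, 0.92, 0.95]
print(top_decile_benchmark(pool))  # 0.92
```

This also illustrates why such a benchmark sits just below 100 percent even when the best performers are near-perfect, as seen in the NMMC figures.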

Case Study: North Mississippi Medical Center

An examination of the approach of a 2006 MBNQA healthcare winner, North Mississippi Medical Center (NMMC), in presenting healthcare data provides an overview of how to present an optimal amount of meaningful clinical process and outcome information in a condensed space. The NMMC application can be viewed online at the NIST Web site. Four of NMMC's category 7.1 figures are provided in Figures 1-4 as demonstrations of how process and outcome measures can be presented. Like other Baldrige Award applications, NMMC used abbreviations and acronyms to shorten its text and allow for more content. For example, the RB in the upper right-hand corner of Figures 1-4 is an abbreviation for "run the business," which means that the figure's information is routinely monitored as a standard quality measure. A common application shorthand is the use of arrows to indicate which direction is optimal for the process or outcome graphic. For example, in Figure 1 it is clear that close to 100 percent is desirable for the process measures, but it is optimal to have the outcome (mortality) as close to zero as possible. NMMC developed a new format for presenting its data, such that several years (trends) of significant process data are presented in the same figure as several years of related outcomes data. It enables readers to rapidly visualize the cause (process improvements) and effect (decreased pneumonia mortality) and develop



Figure 2: Congestive heart failure processes. In-process indicators (discharge instructions; LVF assessment; ACEI usage; adult smoking advice/counseling), each plotted from 2003 through 2007 Q3 against a top-10-percent benchmark. (RB)

Figure 3: 30-day readmission rate, an overall outcome measure. Medicare data, 30-day readmissions: NMMC rates of 15.9, 15.7, 14.9, 14.4, 13.6, and 14.1 percent versus MS Medicare-defined state peers. (RB)

their assessments accordingly. The four graphs on the left side of Figure 1 describe four process measures that are considered essential for the optimal management of patients who are admitted to the hospital with pneumonia (assessing the patient's oxygenation status, procuring a blood culture prior to beginning antibiotics, beginning antibiotics within four hours of admission, and providing smoking cessation counseling to patients who smoke). Because clinical evidence demonstrates that if these processes are followed the clinical outcomes should improve, these processes

are among TJC's core measures. As noted earlier, more is better with regard to process measures, and the goal is 100 percent. In the graph, however, the benchmark is not quite 100 percent because it is set at the top 10 percent of the data pool. Four years' worth of data are provided so the process trend can be observed. The right side of Figure 1 provides a picture of an outcome (mortality) for these same pneumonia patients, again over the same four-year time span, and again compared to the top 10 percent of hospitals in the TJC Core Measures national database.

Figure 4: Venous thrombosis protocol results. All service lines, DVT protocol outcomes: number of cases with DVT, and NMMC cases per 1,000 patients versus the national rate per 1,000 patients, at baseline (Nov 2004), post-implementation (Nov 2005), and Nov 2006. (RB)

Another display of the use of essential and already-collected data is provided in Figure 2, which demonstrates four more of TJC's core measures. Congestive heart failure (CHF) is a high-volume chronic disease in NMMC's service area, so these evidence-based processes (assessing left ventricular function, prescribing angiotensin-converting enzyme inhibitors (ACEIs), and providing discharge instructions as well as smoking cessation advice) are essential not only in treating patients while in the hospital but in keeping them from being readmitted in the future. Note that all four measures have mostly positive trends, although there are slight decreases in discharge instruction and ACEI usage between 2006 and 2007. More important, however, the level of compliance on all four measures is approaching 100 percent. Hospital readmission rate is demonstrated in Figure 3. It describes all of the Medicare patients who were readmitted to the hospital within 30 days of being discharged. The comparison group consists of NMMC's peer hospitals (similar size and services within Mississippi). These data are particularly significant because several of the previously described quality improvement efforts resulted in improved outcomes such as a decrease in length of stay and/or cost of care. Figure 3 is an additional outcome indicator and shows that NMMC's patients were readmitted less often than patients in peer hospitals, thereby demonstrating

that the lower lengths of stay did not harm patients (that is, patients were not discharged prematurely). An example of the outcome improvement from a specific process improvement is found in Figure 4. The term "hardwiring" refers to making a process improvement an automatic standard of practice. In order to prevent venous thrombosis in all at-risk patients, the standard medical orders include prevention measures, such that the physician would have to cancel the prevention measures rather than remember to include them. Figure 4 uses the national average of cases per 1,000 patients as an external benchmark, and it provides one year of the occurrence of deep vein thrombosis prior to hardwiring the prevention process and then two years of outcome data with the process in place. As noted, the intervention was implemented in 2005, and there was a significant decrease in the occurrence of this adverse outcome in 2006. In fact, the DVT rate at NMMC in 2006 was less than one-sixth of the national rate.

This article provides a better understanding of the measurement of clinical quality, the most important aspect of a healthcare organization. The advantages and disadvantages of using only process or only outcome measures to evaluate clinical quality have been discussed and support the merged and complementary


use of process and outcome measures. Process measures should be current and meaningful to the care of the patient, and their effect on the outcome of patient care should be based on significant evidence. Stakeholders use such measures to improve care (hospitals and providers) and to select care (payers and patients). These measures should be compiled so as to show trends in practice as well as comparison to an external benchmark. The discipline of trending and using external comparisons compels a healthcare organization to understand its processes and improve its outcomes. In addition, healthcare organizations are able to use their existing process and outcome measures to demonstrate cause-and-effect relationships in assessment processes such as the MBNQA. These results can be valuable to nonhealthcare practitioners who may be faced with evaluating clinical quality in performance excellence award programs.

The importance of measuring healthcare performance is clear, driven by quality advocates such as IHI, payers seeking value, and consumers. Much research, however, remains to be done. Clinical trial-driven medical research and the continuing growth of evidence-based medicine will lead to new measures that must be examined for incorporation into the set of process measures useful in assessing quality. In addition, measures known to affect outcomes for certain maladies (for example, AMI or diabetes), such as patient compliance with the prescribed drug regimen, are related to socioeconomic variables such as age, education, and marital status (Ho et al. 2006). Future research is needed to explore the use of such data in the refinement of risk adjustment models so that outcome data become even more meaningful. To create valid measures of quality at the physician level, the point of greatest interest to most consumers, new measurement methods need to be explored that avoid the current small-sample problems.
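One common form such risk adjustment takes is the observed-to-expected (O/E) ratio: a pre-fit logistic model converts each patient's risk factors into an expected probability of the outcome, and the provider's observed count is divided by the sum of those expectations. The sketch below is illustrative only; the coefficients, risk factors, and patient records are entirely hypothetical and not drawn from any model discussed in the article:

```python
import math

# Minimal sketch of observed-to-expected (O/E) risk adjustment for an
# outcome such as 30-day AMI mortality. The logistic coefficients and
# patient records are hypothetical, for illustration only.

COEF = {"intercept": -3.0, "age_over_75": 1.2, "diabetes": 0.6}

def expected_risk(patient: dict) -> float:
    """Predicted probability of death from a (pre-fit) logistic model."""
    z = COEF["intercept"] + sum(COEF[k] * patient.get(k, 0)
                                for k in ("age_over_75", "diabetes"))
    return 1.0 / (1.0 + math.exp(-z))

patients = [
    {"age_over_75": 1, "diabetes": 0, "died": 0},
    {"age_over_75": 0, "diabetes": 1, "died": 0},
    {"age_over_75": 1, "diabetes": 1, "died": 1},
]

observed = sum(p["died"] for p in patients)
expected = sum(expected_risk(p) for p in patients)
oe_ratio = observed / expected
# A ratio above 1 means more deaths than the case mix predicts;
# below 1 means fewer. Refining the model (for example, with
# socioeconomic variables) changes "expected" and thus the ratio.
print(f"O/E ratio: {oe_ratio:.2f}")
```

The small-sample problem mentioned above shows up directly here: with only a handful of patients per physician, a single death swings the ratio dramatically.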
There also will need to be research into the methods used to analyze quality data. For example, the Hospital Compare Web site provides data on 30-day risk-adjusted mortality from AMI as compared to the U.S. national rate. The site classifies nine hospitals as above average, 4,302 as average, and none as below average (USDHHS 2008). Some would argue that such a classification is of little value and that the data must be examined more critically. Finally, as the number of measures continues to expand, there has been a movement to combine several measures into a summary or index (composite measures). The various methods of calculating such composites, and their pros and cons, need to be more fully explored (Van Matre 2006).
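Two of the composite calculations at issue can be contrasted concretely: the "opportunity" model credits each applicable measure a patient receives, while the all-or-none score discussed by Van Matre (2006) credits a patient only when every applicable measure is met. The sketch below uses hypothetical per-patient pass/fail data (the measure names and records are invented for illustration):

```python
# Sketch of two common composite-score calculations from hypothetical
# per-patient process data. True = measure met, False = missed,
# None = not applicable to that patient.

patients = [
    {"aspirin": True,  "beta_blocker": True,  "smoking_advice": True},
    {"aspirin": True,  "beta_blocker": False, "smoking_advice": True},
    {"aspirin": True,  "beta_blocker": True,  "smoking_advice": None},
]

def opportunity_score(records) -> float:
    """Passed opportunities / total applicable opportunities."""
    passed = total = 0
    for r in records:
        for v in r.values():
            if v is not None:
                total += 1
                passed += v  # True counts as 1, False as 0
    return passed / total

def all_or_none_score(records) -> float:
    """Share of patients with every applicable measure passed."""
    perfect = sum(all(v for v in r.values() if v is not None)
                  for r in records)
    return perfect / len(records)

print(f"opportunity: {opportunity_score(patients):.3f}")   # 7/8 = 0.875
print(f"all-or-none: {all_or_none_score(patients):.3f}")   # 2/3 = 0.667
```

The same data yield noticeably different scores (0.875 versus 0.667), which is precisely why the choice of composite method deserves the further exploration called for above.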
References

ASQ Healthcare Division. 2008. Healthcare division newsletter (June/August). Available at: hcd-2008-jul-aug.pdf.
Baldrige National Quality Program. 2008. Health care criteria for performance excellence. Gaithersburg, Md.: National Institute of Standards and Technology (NIST).
Berwick, D. M., A. Blanton Godfrey, and J. Roessner. 1990. Curing health care: New strategies for quality improvement. San Francisco: Jossey-Bass Publishers.
Best, M., and D. Neuhauser. 2005. Pierre Charles Alexandre Louis: Master of the spirit of mathematical clinical science. Quality & Safety in Health Care 14:462-464.
Bradley, E. H., J. Herrin, B. Elbel, et al. 2006. Hospital quality for acute myocardial infarction. Journal of the American Medical Association 297, no. 1:72-78.
Chassin, M. R. 2002. Achieving and sustaining improved quality: Lessons from New York State and cardiac surgery. Health Affairs 21, no. 4:40-51.
Deming, W. E. 1986. Out of the crisis. Cambridge, Mass.: MIT Center for Advanced Engineering Study.
Eddy, D. M. 1998. Performance measurement: Problems and solutions. Health Affairs 17, no. 4:7-25.
Englert, J., K. M. Davis, and K. E. Koch. 2001. Using clinical practice to improve care. Journal on Quality Improvement 27, no. 6:291-301.
Evans, J. R. 2004. An exploratory study of performance measurement systems and relationships with performance results. Journal of Operations Management 22:219-232.
Galvin, R. 2005. A deficiency of will and ambition: A conversation with Donald Berwick. Health Affairs, Web Exclusive, W5-1-W5-9 (January 12).
Hackman, J. Richard, and R. Wageman. 1995. Total quality management: Empirical, conceptual, and practical issues. Administrative Science Quarterly 40:309-342.
Hibbard, J. H., J. Stockard, and M. Tusler. 2005. Hospital performance reports: Impact on quality, market share, and reputation. Health Affairs 24, no. 4:1150-1160.
Ho, P. M., J. A. Spertus, F. A. Masoudi, et al. 2006. Impact of medication therapy on mortality after acute myocardial infarction. Archives of Internal Medicine 166 (September 25):1842-1847.

Institute of Medicine. 1990. Medicare: A strategy for quality assurance, volume I, ed. K. N. Lohr. Washington, D.C.: National Academy Press.
Institute of Medicine (IOM). 2001. Crossing the quality chasm: A new health system for the 21st century. Washington, D.C.: The National Academies Press.
Jha, A. K., J. E. Orav, Z. Li, and A. Epstein. 2007. The inverse relationship between mortality rates and performance in the hospital quality alliance measures. Health Affairs 26, no. 4:1104-1110.
Jha, A. K., J. E. Orav, A. B. Ridgway, J. Zheng, and A. M. Epstein. 2008. Does the Leapfrog program help identify high-quality hospitals? The Joint Commission Journal on Quality and Patient Safety 34, no. 6:318-325.
Kaplan, R. S., and D. P. Norton. 1992. The balanced scorecard: Measures that drive performance. Harvard Business Review 70, no. 1:71-79.
Kenney, C. 2008. The best practice. New York: Public Affairs.
Landon, B., S. Normand, D. Blumenthal, and J. Daley. 2003. Physician clinical performance assessment: Prospects and barriers. Journal of the American Medical Association 290:1183-1189.
Malcolm Baldrige National Quality Award Health Care Criteria. Available at:
Mant, J. 2001. Process versus outcome indicators in the assessment of quality of health care. International Journal for Quality in Health Care 13, no. 6:475-480.
McGlynn, E. A., S. M. Asch, J. Adams, et al. 2003. The quality of health care delivered to adults in the United States. The New England Journal of Medicine 348:2635-2645.
McLaughlin, C. P., and A. D. Kaluzny. 2006. Defining quality improvement. In Continuous quality improvement in health care, 3rd edition. Sudbury, Mass.: Jones and Bartlett.
Minitab, Inc. 2005. Statistical software for Windows, release 14.20. State College, Pa.: Minitab, Inc.
Mitka, M. 2007. JCAHO tweaks emergency departments' pneumonia treatment standards. Journal of the American Medical Association 297, no. 16 (April 25):1758-1759.
NIST Tech Beat. 2008. Thirteen to receive Baldrige visits. Available at: htm#baldrige.
North Mississippi Medical Center. 2006. Award recipient summary. Available at Application_Summary.pdf.
Pennsylvania Health Care Cost Containment Council (PHC4). 2004. Pennsylvania guide to coronary artery by-pass graft surgery 2004. Available at: docs/cabg2004report.pdf.
Peterson, E. D., et al. 2006. Association between hospital process performance and outcomes among patients with acute coronary syndromes. Journal of the American Medical Association 297, no. 16:1912-1920.
Pine, M., H. S. Jordan, A. Elixhauser, et al. 2007. Enhancement of claims data to improve risk adjustment of hospital mortality. Journal of the American Medical Association 297, no. 1:71-76.
Premier, Inc. 2006. Centers for Medicare and Medicaid Services (CMS)/Premier hospital quality incentive demonstration project: Findings from year one. Charlotte, N.C.: Premier, Inc.
Press, I. 2004. The measure of quality. Quality Management in Health Care 13, no. 4:202-209.
Shiba, S., A. Graham, and D. Walden. 1993. A new American TQM. Cambridge, Mass.: Productivity Press.
The Joint Commission (TJC). 2007. Understanding the quality of care measures. Available at: background/qualityofcaremeasures.aspx.
U.S. Department of Health and Human Services (USDHHS). 2006. Hospital Compare. Available at:
U.S. Department of Health and Human Services (USDHHS). 2008. Hospital Compare. Available at
Van Matre, J. G. 1992. The D*A*T approach to total quality management. Journal of the American Health Information Management Association 63, no. 11 (November):38-44.
Van Matre, J. G. 2006. All-or-none measurement of health care quality. Journal of the American Medical Association 296, no. 4:392.
Wellcome Library. 2008. Mortality statistics in England and Wales. Available at: WTL038911.html.

Biographies

Joseph G. Van Matre is a professor in the School of Business, University of Alabama at Birmingham. His doctorate in business statistics is from the University of Alabama, and his MBA and bachelor's degrees are from Auburn University. His research interests are focused on healthcare quality. He is the author of several books and articles and editor of the book Foundations of TQM: A Readings Book. He also teaches in the Executive Master's in Healthcare Administration Program. Van Matre can be reached by e-mail at

Karen E. Koch is the director of the Patient Focused Improvement Department at North Mississippi Medical Center (NMMC). She implemented NMMC's Clinical Pharmacy program in 1987 and has since managed its IRB, research, and grants programs. Koch coordinated NMMC's Malcolm Baldrige National Quality Award (MBNQA) application process for three application cycles and is an MBNQA examiner.
