
International Journal for Quality in Health Care 2008; Volume 20, Number 3: pp. 162–171
Advance Access Publication: 13 March 2008
doi:10.1093/intqhc/mzn008

An international review of projects on hospital performance assessment


OLIVER GROENE1,2, JUTTA K. H. SKAU3 AND ANNE FRØLICH4

1Research and Education, Avedis Donabedian Research Institute, Barcelona, Spain, 2CIBER Epidemiology and Public Health (CIBERESP), Barcelona, Spain, 3Institute of Public Health, University of Copenhagen, Copenhagen, Denmark, and 4Bispebjerg Hospital, Copenhagen, Denmark

Abstract
Background. Assessing the quality of health care has become increasingly important in response to growing demands from purchasers, providers, clinicians and the public. Given the increase in projects and programs to assess performance in health care over the last 15 years, the purpose of this paper is to review current indicator projects for hospital performance assessment and compare them to the Performance Assessment Tool for Quality Improvement in Hospitals (PATH), an initiative by the WHO Regional Office for Europe.

Methodology. We identified current indicator projects through a systematic literature search and through contact with experts. Using an inductive approach based on a review of the literature, we identified 10 criteria for the comparison of indicator projects. We extracted data and contacted the coordinators of each indicator project to validate this information. In addition, we carried out interviews with coordinators to gather additional information on the evaluation of the respective projects.

Results. We included 11 projects that appear to have adopted a common methodology for the design and selection of indicators; however, major differences exist with regard to the philosophy, scope and coverage of the projects. This relates in particular to criteria such as participation, disclosure of results and dimensions of hospital performance assessed.

Conclusion. Hospital performance assessment projects have become common worldwide, and initiatives such as the WHO PATH project need to be well coordinated with existing projects. Our review raised questions regarding the impact of hospital performance assessment that should be pursued in further research.

Keywords: quality improvement, quality management, quality indicators, measurement of quality


Introduction
Assessing the quality of health care has become increasingly important for different stakeholders such as health care providers, decision makers and purchasers of health care, in response to growing demands to ensure transparency, control costs and reduce variations in clinical practice [1]. While 10–15 years ago hospital performance assessment was an innovative field [2], many projects are now operational in European countries and worldwide [3]. In addition, several initiatives are being supported by international organizations such as the World Health Organization (WHO) and the Organization for Economic Co-operation and Development (OECD) [4, 5]. However, the goals and strategies of these projects are very diverse, and national projects from different countries pursuing similar objectives may adopt strategies that are based on fundamentally different philosophies regarding participation in the project and disclosure of results [6]. It is, therefore, of interest to focus on the effects of different strategies for quality improvement, a question which is also addressed by a current research project supported by the European Commission [7]. Furthermore, with the creation of national initiatives for hospital performance assessment in many countries, the added value of international projects needs to be evaluated. Therefore, the purpose of this paper is to review current indicator projects on hospital performance assessment and compare them with the Performance Assessment Tool for Quality Improvement in Hospitals (PATH), an international initiative coordinated by the WHO Regional Office for Europe that aims to support hospitals in assessing their performance, questioning their own results and translating them into actions for improvement [5]. This comparison was also carried out in order to assess possible overlaps and competition in hospital performance initiatives and to identify approaches for an evaluation of performance assessment projects.

Address reprint requests to: Jutta K. H. Skau, Institute of Public Health, University of Copenhagen, Øster Farimagsgade 5, 1014 Copenhagen K, Denmark. E-mail: jskau@stud.ku.dk



Based on the comparative review, we discuss differences in the perspectives and strategies of these projects, with a particular focus on the approach towards a possible evaluation of performance indicator projects in the future.

Methodology

Identifying hospital performance assessment projects

We carried out a systematic literature search in PubMed to identify current and relevant hospital performance assessment projects, applying the following search strategy: (i) indicator* OR stand*, (ii) qualit*, (iii) hospital performance, (iv) project* OR program* OR tool* and (v) measure* OR assessm*. We included articles published in English after 1995 and found 73 relevant hits (an illustrative sketch of this search as an automated PubMed query is shown at the end of this section). After the initial screening, we identified 14 articles on 10 different projects. This list was then reviewed by a researcher in the Methods of Assessing Response to Quality Improvement Strategies (MARQuIS) project [8], who provided information on 18 additional projects from 13 countries. These projects were not included in the final analysis, as no additional information was available in publicly accessible sources. Moreover, they could not be characterized as independent projects like the others in the analysis, but were rather side activities of other quality programs.

Identification of criteria

We determined the criteria to compare the projects using an inductive approach. This consisted of exploring the relevant literature, documents and websites of the identified projects and highlighting information reflecting the design of the projects. Moreover, we put a particular focus on identifying criteria on the evaluation of these projects. We then sought comprehensive definitions for each criterion, where possible based on Medical Subject Headings (MeSH) or other definitions agreed upon by respected quality agencies. A description of the final list of criteria used for the comparative analysis is presented in Table 1.

Validation

After the information was collected, we contacted each project coordinator and asked them to validate the information we had obtained. Furthermore, we carried out short telephone interviews with project coordinators from 8 of the 10 indicator projects to gather additional information and opinions on the evaluation of indicator projects. Interviews lasted a maximum of 15 min, were recorded on a Dictaphone and subsequently transcribed for the comparative review. The final results were once again validated by the project coordinators.
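For illustration, the five boolean blocks above could be combined into a single automated PubMed query through the NCBI E-utilities interface. The sketch below is a hypothetical reconstruction, not the exact query used in the review; in particular, the date limits and the English-language filter are assumptions added for completeness.

```python
# Hypothetical sketch: combining the five search blocks described in the text into
# one PubMed query via the NCBI E-utilities "esearch" endpoint. The date limits and
# language filter are assumptions, not the review's exact settings.
import json
import urllib.parse
import urllib.request

blocks = [
    "(indicator* OR stand*)",
    "(qualit*)",
    "(hospital performance)",
    "(project* OR program* OR tool*)",
    "(measure* OR assessm*)",
    "english[lang]",                 # restrict to English-language articles
]
term = " AND ".join(blocks)

params = urllib.parse.urlencode({
    "db": "pubmed",
    "term": term,
    "retmode": "json",
    "retmax": 100,
    "datetype": "pdat",              # filter on publication date
    "mindate": "1995",               # articles published after 1995, as in the review
    "maxdate": "2007",               # assumed upper bound (search run around 2007)
})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params

with urllib.request.urlopen(url) as response:
    result = json.load(response)["esearchresult"]

print("Hits:", result["count"])      # the review reports 73 relevant hits
print("First PMIDs:", result["idlist"][:10])
```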


Results
Overall, 11 hospital performance indicator projects (10 national projects and PATH) were included in a comparative review according to the 10 criteria described above (see Tables 2–4). Abbreviations used in the following, and links for further information, are included in Box 1. Below, we highlight the results for each column presented in Tables 2–4. Assessing the performance of hospitals appears to be a relatively new area in the field of health sciences and hospital management in Europe, as only four non-European projects (ACHS, JCAHO, OHA and QIP) and CIST Scotland (though in a different organizational structure) were launched before 2000. COMPAQH, the Dutch project on reporting of performance in hospitals, and PATH are among the most recent projects, having been initiated in 2003.

Box 1. Abbreviations and relevant websites

ACHS: Australian Council on Healthcare Standards, http://www.achs.org.au
BQS: Bundesgeschäftsstelle Qualitätssicherung, http://www.bqs-outcome.de
CIST: Clinical Indicators Support Team, NHS Quality Improvement Scotland, http://www.indicators.scot.nhs.uk
COMPAQH: COordination pour la Mesure de la Performance et l'Amélioration de la Qualité Hospitalière, http://ifr69.vjf.inserm.fr/compaqh
IQIP: International Quality Indicator Project, http://www.internationalqip.com
JCAHO: Joint Commission on Accreditation of Healthcare Organizations, http://www.jointcommission.org
MARQuIS: Methods of Assessing Response to Quality Improvement Strategies, http://www.marquis.be
NIP: The National Indicator Project, http://www.nip.dk
OHA: Ontario Hospital Association, http://www.oha.com
OECD: Organization for Economic Co-operation and Development
PATH: The Performance Assessment Tool for Quality Improvement in Hospitals, www.pathqualityproject.eu
QIP: Quality Indicator Project, http://www.qiproject.org
The Dutch project on reporting of performance in hospitals, http://www.rivm.nl/
Verein Outcome, www.vereinoutcome.ch
WHO: World Health Organization, http://www.euro.who.int



Table 1. Description of the 10 criteria

Dimension of hospital performance assessed: dimensions addressed by the indicator project were compared with the six-dimensional framework of the WHO PATH project: clinical effectiveness, safety, staff orientation, efficiency, patient centeredness and responsive governance [5].

Number of individual indicators and of groups of indicators: definition of an indicator: a quantitative measure that can be used to monitor and evaluate the quality of important governance, management, clinical and support functions that affect patient outcomes [9].

Development of indicators: use of scientific tools (assessment of reliability, validity, sensitivity, specificity, hearing requests, pilot testing etc.) in the phase of developing the indicators, and involvement of stakeholders (expert group, user groups (health professionals and hospital management staff) and patients).

Participation: voluntary or non-voluntary.

Number of participants: number of hospitals participating in the project.

Data collection: the kind of data that is used, e.g. routine data, prospective data, surveys, quality data and retrospective data.

Public disclosure: whether the results from individual providers are disclosed to the public.

Feedback mechanism: communication tools applied by the core coordination group of the project to disseminate results to individual hospitals and other stakeholders, e.g. workshops and structured dialogue.

Feedback time: the time elapsed between data collection and feedback to the participant.

Budget: the annual budget of the project.
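Purely as an illustration of the comparison framework, the 10 criteria in Table 1 can be thought of as a structured record kept for each project. The following sketch (a hypothetical Python data structure, not part of any of the projects' tooling) shows such a record filled in with the PATH values from Tables 2–4.

```python
# Hypothetical sketch: one record per indicator project, with fields mirroring the
# 10 comparison criteria of Table 1. Values below are taken from the PATH rows of
# Tables 2-4; the class itself is an illustrative assumption.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class IndicatorProject:
    name: str
    year_launched: int
    dimensions: List[str]          # dimensions of hospital performance assessed
    n_indicators: str              # number of individual indicators / groups
    development: List[str]         # scientific tools and stakeholders involved
    participation: str             # "voluntary" or "non-voluntary"
    n_participants: str            # number of hospitals participating
    data_collection: List[str]     # routine data, prospective data, surveys, audits, ...
    public_disclosure: bool        # are individual providers' results made public?
    feedback_mechanism: List[str]  # websites, reports, workshops, ...
    feedback_time: str             # time between data collection and feedback
    budget: Optional[str] = None   # annual budget, where available

path = IndicatorProject(
    name="PATH",
    year_launched=2003,
    dimensions=["clinical effectiveness", "safety", "staff orientation",
                "efficiency", "patient centeredness", "responsive governance"],
    n_indicators="17 core (48 incl. tracers) + 24 tailored (47 incl. tracers)",
    development=["expert group", "nominal group technique",
                 "user evaluation (hospital level)", "pilot testing"],
    participation="voluntary",
    n_participants="51 hospitals (pilot phase), 135 hospitals (second phase)",
    data_collection=["prospective data", "qualitative data", "surveys"],
    public_disclosure=False,
    feedback_mechanism=["website", "newsletter", "scientific articles",
                        "annual conference"],
    feedback_time="11 months (pilot phase), approx. 4 months (second phase)",
    budget="approx. EUR 60,000 annually",
)
```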

None of the projects covered all dimensions of the PATH project. However, clinical effectiveness was covered by nine other projects (ACHS, CIST, COMPAQH, BQS, JCAHO, NIP, QIP, Verein Outcome and the Dutch project on reporting of performance in hospitals). Apart from PATH, the domain of staff orientation was covered by only one project (COMPAQH). During the interviews, it became clear that the performance dimensions are defined very heterogeneously, and there appears to be no consensus on the conceptual demarcation of efficiency and clinical effectiveness. The number of indicators for each project ranges from 36 (JCAHO) to more than 300 (ACHS). PATH can be distinguished here from other projects as it contains a set of core indicators, which need to be collected by all hospitals, and tailored indicators, which can be chosen by hospitals based on capacities, interests and appropriateness to the context. Some national projects divide indicators into subject areas, but due to the lack of consensus on concepts it is difficult to compare these subject areas. All the projects used expert groups, i.e. different stakeholders (clinical staff, hospital management, insurance companies), in the primary phase of developing the indicators. The expert groups contributed to the scientific analysis as well as to systematic literature reviews and rating/consensus methods, such as the Delphi method or nominal group techniques. It is not clear to what extent indicators are assessed by each project in terms of their psychometric properties. Projects differ in the extent to which users such as health professionals and managers on the one hand, and patients and citizens on the other, were involved in the selection and design of indicators.
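To illustrate how the dimension counts reported above follow from Table 2, the short sketch below (hypothetical code, with the dimension lists of the 10 national projects transcribed from Table 2) tallies how many projects cover each PATH dimension.

```python
# Hypothetical sketch: tallying dimension coverage across the 10 national projects,
# using the "dimensions of indicators" column of Table 2. It reproduces the counts
# cited in the text, e.g. clinical effectiveness covered by nine projects besides
# PATH and staff orientation only by COMPAQH.
from collections import Counter

dimensions_by_project = {
    "ACHS":           {"clinical effectiveness", "safety", "efficiency"},
    "BQS":            {"clinical effectiveness"},
    "CIST":           {"clinical effectiveness"},
    "COMPAQH":        {"clinical effectiveness", "staff orientation", "patient centeredness"},
    "JCAHO":          {"clinical effectiveness", "efficiency", "safety", "patient centeredness"},
    "OHA":            {"efficiency", "responsive governance", "patient centeredness"},
    "QIP":            {"clinical effectiveness", "efficiency", "patient centeredness", "safety"},
    "NIP":            {"clinical effectiveness", "efficiency", "patient centeredness", "safety"},
    "Dutch project":  {"clinical effectiveness", "patient centeredness", "safety", "efficiency"},
    "Verein Outcome": {"clinical effectiveness", "efficiency", "patient centeredness",
                       "safety", "responsive governance"},
}

coverage = Counter(dim for dims in dimensions_by_project.values() for dim in dims)
for dimension, n_projects in coverage.most_common():
    print(f"{dimension}: covered by {n_projects} of {len(dimensions_by_project)} national projects")
```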

Five projects are based on voluntary participation (ACHS, COMPAQH, OHA, QIP and PATH), whereas five projects are based on non-voluntary participation (CIST Scotland, BQS, JCAHO, NIP and the Dutch project on reporting of performance in hospitals). Verein Outcome contains both non-voluntary and voluntary participation. The coverage of the projects ranges from 43 hospitals (COMPAQH) to 9935 hospitals (JCAHO). All projects use a combination of routine data and prospectively collected data for the analysis. Eight projects also use indicators that are based on surveys (ACHS, COMPAQH, JCAHO, OHA, QIP, Verein Outcome, the Dutch project on reporting of performance in hospitals and PATH), and only three projects construct indicators based on audits (QIP, NIP and PATH). The results of the data analysis are available to the public in five projects (CIST Scotland, NIP, JCAHO, Verein Outcome and the Dutch project on reporting of performance in hospitals); for six projects, only providers have access to the results (ACHS, COMPAQH, BQS, OHA, QIP and PATH). In terms of feedback mechanisms, the projects apply similar tools such as websites, newsletters, annual reports, scientific articles etc. Seven projects use workshops as an educational feedback loop (ACHS, CIST Scotland, COMPAQH, JCAHO, NIP, QIP and Verein Outcome). There are, however, major differences with regard to the reported feedback time: the range from 1 week (COMPAQH) to 1 year (CIST Scotland) may indicate different definitions of data collection and reporting periods, different approaches to the scope of data collection and analysis, as well as different procedures and technological solutions to



facilitate reporting, such as the use of computerized procedures for the production of hospital performance reports. One project (BQS) also performs a so-called structured dialogue. It can be described as a hearing request from the users, linked to a feedback loop provided by the indicator project, in order to identify statistical outliers, interpret performance and identify quality improvement actions. The structured dialogue differs from a workshop in that its main purpose is to identify

possible problems and quality improvement activities with regard to each indicator, while the main purpose of the aforementioned workshops is to educate the users in how to collect and interpret data. With regard to the projects' budgets, we were only able to obtain the budget for QIP and ACHS through publicly available documents. For other projects, we obtained information directly from the project coordinators.

Table 2. Overview of projects (content and development)

Performance Assessment Tool for Quality Improvement in Hospitals (PATH), 2003 [5, 10]
Dimensions of indicators: clinical effectiveness, efficiency, staff orientation, responsive governance, safety, patient centeredness
Number of indicators: 17 core indicators (48 incl. all tracers) and 24 tailored indicators (47 incl. all tracers)
Development of indicators: expert group, nominal group technique, user evaluation (hospital level), pilot testing
Participation: voluntary

Australian Council on Healthcare Standards (ACHS) indicator project (development commenced), Australia, 1989 [11, 12]
Dimensions of indicators: clinical effectiveness, safety, efficiency (a)
Number of indicators: 22 subjects with 308 indicators
Development of indicators: expert group, user evaluation, pilot testing, external review
Participation: voluntary

BQS-Bundesauswertungen, Germany, 2000 [13]
Dimensions of indicators: clinical effectiveness
Number of indicators: mandatory programme: 17 subjects with 169 indicators; voluntary programme: 14 subjects with 95 indicators
Development of indicators: expert groups, pilot testing, user evaluation, validation of the methodology
Participation: non-voluntary

Clinical Indicators Support Team (CIST), NHS Quality Improvement Scotland, Scotland, 2000 (b) [14, 15]
Dimensions of indicators: clinical effectiveness
Number of indicators: seven subjects with 64 indicators
Development of indicators: expert group, user groups
Participation: non-voluntary

COMPAQH, France, 2003 [16]
Dimensions of indicators: clinical effectiveness, staff orientation, patient centeredness
Number of indicators: eight national priorities with 43 indicators
Development of indicators: expert group, Delphi method (two rounds) with user groups, pilot testing
Participation: voluntary

Joint Commission on Accreditation of Healthcare Organizations (JCAHO) (ORYX), USA, 1997 [17]
Dimensions of indicators: clinical effectiveness, efficiency, safety, patient centeredness (c)
Number of indicators: five subjects with 36 tailored indicators
Development of indicators: expert groups, hearing request, pilot testing
Participation: non-voluntary

Ontario Hospital Association (OHA) (Hospital Reports), Canada, 1997 [18]
Dimensions of indicators: efficiency, responsive governance, patient centeredness
Number of indicators: four subjects with 47 indicators (2003)
Development of indicators: expert group, hearing request, involvement of clinical staff
Participation: voluntary

Quality Indicator Project (QIP), USA, 1984 (d) [19, 20]
Dimensions of indicators: clinical effectiveness, efficiency, patient centeredness, safety
Number of indicators: four subjects with 47 indicators
Development of indicators: expert group, pilot testing
Participation: voluntary

The National Indicator Project (NIP), Denmark, 2000 [21]
Dimensions of indicators: clinical effectiveness, efficiency, patient centeredness, safety
Number of indicators: seven subjects with 87 indicators
Development of indicators: expert groups, hearing request (both with patient groups and user groups), pilot testing
Participation: non-voluntary

Reporting of performance in Dutch hospitals, The Netherlands, 2003 [22]
Dimensions of indicators: clinical effectiveness, patient centeredness, safety, efficiency
Number of indicators: three subsets with 39 indicators (2004)
Development of indicators: expert groups, medical societies, hearing request, validity testing
Participation: non-voluntary

Verein Outcome, Switzerland, 2000 [23, 24]
Dimensions of indicators: clinical effectiveness, efficiency, patient centeredness, safety, responsive governance (e)
Number of indicators: 19 subjects with 118 indicators
Development of indicators: expert group, co-operation with medical societies and scientific institutes, pre-testing, pilot testing, evaluation
Participation: non-voluntary for hospitals in the cantons of Zurich, Bern, Solothurn and Aargau; hospitals from other areas participate voluntarily

(a) ACHS uses similar definitions of the dimensions; they can be linked to clinical effectiveness, efficiency and safety.
(b) The CIST team carries out most of the analysis and data co-ordination for the production of annual indicator reports and web publications. Additionally, staff at NHS Quality Improvement Scotland are also involved in producing indicator reports. Staff from NHS Quality Improvement Scotland have contributed data to national indicators since 1993; CIST is the successor to the work of these teams.
(c) JCAHO uses similar definitions of the dimensions; they can be linked to clinical effectiveness, efficiency, safety and patient centeredness.
(d) QIP also exists in a modified structure in 12 countries outside the US, called the International QIP (IQIP). The 12 countries use the same set of indicators as QIP in the US plus indicators developed in collaboration with the respective country. More than 300 hospitals participate in IQIP. The total budget for QIP and IQIP is 10 million US dollars.
(e) Verein Outcome uses "Messthemen", which are similar to dimensions; for example, the Messthema "Austrittsmanagement" is similar to the definition of responsive governance.


Furthermore, only four projects (QIP, COMPAQH, NIP and the Dutch project on reporting of performance in hospitals) could inform us of the exact budget. The budgets of the longer-established projects may indicate the administrative, methodological and technological requirements for the successful and efficient operation of a quality indicator project.

Discussion

This paper has briefly explored 11 indicator projects to assess performance in hospitals. Based on the available public documents and websites, 10 criteria of the projects were determined. Our inductive approach to developing criteria does have its limitations, as it is based on publicly available information on performance indicator projects and articles published on performance assessment projects, and we might have overlooked some valuable information addressing other criteria. Furthermore, it can be criticized that we involved only well-established indicator projects in the assessment and that we neglected smaller initiatives that are side projects of other quality assessment or improvement projects and cannot be characterized as performance assessment projects. However, the strength of our approach is that each project coordinator validated the use of the criteria, the information gathered from publications and web pages, as well as the information extracted from interviews. We believe that this validation process adds substantially to the paper, as the comparability of published information alone may be limited due to the lack of consensus on definitions and concepts. The review showed that the projects differ quite significantly according to some of the criteria. In particular, there are substantial differences in the underlying philosophy of the projects as characterized, for example, by a policy towards disclosure of the results to the public vs. restricted use of data for internal quality improvement. It is further difficult to describe the focus of the projects, as the performance dimensions are defined in a heterogeneous way.

Development and reporting of indicators

The procedures for developing indicators appear to be similar. As mentioned in the results, all projects used expert groups and user groups in the phase of developing and selecting indicators, which indicates a combination of top-down and bottom-up approaches; however, we were not able to retrieve further information on the assessment of the specific properties of each indicator in the various projects, such as an assessment of reliability, validity, sensitivity and specificity. The comparison indicates that the involvement of patients and citizens in developing indicators is more common in public-disclosure initiatives. The most common measurement areas include, not surprisingly, clinical effectiveness, patient centeredness and patient safety. Considering the decreasing length of stay in hospitals worldwide and the fact that patient-relevant outcomes can only be measured over a longer time perspective, hospital performance assessment initiatives may in the future be confronted with major methodological challenges and the need to broaden assessment domains. Some initiatives (PATH, OHA and Verein Outcome) already address the responsive governance domain, and it can be expected that financers and users will increasingly demand longer-term and sector-wide performance assessments.


Table 3. Overview of projects (participation and data collection)

PATH, 2003
Number of participants: 51 hospitals from 6 countries participated in the first pilot test; 135 hospitals from 9 countries are participating in the second phase of the project
Data collection: prospective data, qualitative data, surveys
Public disclosure: no

Australian Council on Healthcare Standards (ACHS) indicator project, Australia, 1989
Number of participants: 600 hospitals (a)
Data collection: prospective data, routine data, retrospective data, surveys
Public disclosure: no

BQS-Bundesauswertungen, Germany, 2000
Number of participants: 1501 hospitals
Data collection: prospective data
Public disclosure: no

Clinical Indicators Support Team (CIST), NHS Quality Improvement Scotland, Scotland, 2000
Number of participants: 328 hospitals
Data collection: prospective data, routine data
Public disclosure: yes

COMPAQH, France, 2003
Number of participants: 43 hospitals (participated in the second pilot testing)
Data collection: prospective data, retrospective data, surveys
Public disclosure: no

Joint Commission on Accreditation of Healthcare Organizations (JCAHO) (ORYX), USA, 1997
Number of participants: 9935 hospitals (b)
Data collection: routine data, surveys
Public disclosure: yes

Ontario Hospital Association (OHA) (Hospital Reports), Canada, 1997
Number of participants: 120 hospitals
Data collection: routine data, qualitative data, surveys
Public disclosure: no

Quality Indicator Project (QIP), USA, 1984
Number of participants: more than 1000 hospitals
Data collection: prospective data, routine data, audits, surveys
Public disclosure: no

The National Indicator Project (NIP), Denmark, 2000
Number of participants: 80 public hospitals
Data collection: combination of routine data and prospective data, audits
Public disclosure: yes (annual reports)

Reporting of performance in Dutch hospitals, The Netherlands, 2003
Number of participants: all of the 96 hospitals in The Netherlands
Data collection: prospective data, routine data, retrospective data
Public disclosure: yes, publication of an aggregated annual report as well as an individual report

Verein Outcome, Switzerland, 2000
Number of participants: 53 hospitals (39 hospitals grouped by headquarters canton: Zurich, Bern, Solothurn and Aargau)
Data collection: prospective data, routine data, retrospective data, qualitative data, surveys
Public disclosure: yes, publication of aggregated data, e.g. cantonal publications (concerning performance demonstration of the hospitals)

(a) ACHS identifies its members as health care organizations (HCO), which include not only hospitals but also nursing homes, hospices, ambulatory services, general practices etc. In total there are more than 800 HCO.
(b) The hospitals are divided into the five subjects in which they participate: AMI, 2890 hospitals; heart failure, 3166 hospitals; pneumonia, 3120 hospitals; pregnancy and related conditions, 751 hospitals; surgical infection prevention, 759 hospitals; giving a total of 9935 hospitals. Some of the participating hospitals may therefore be counted more than once.






Table 4. Overview of projects (feedback mechanisms)

PATH, 2003
Feedback mechanism: website, newsletter and scientific articles, annual conference
Feedback time: 11 months from receipt of data to preparation of the performance report in the pilot phase; 4 months (estimated) in the second phase of the project
Budget: approx. €60,000 annually (including in-kind contributions, excluding development costs)

Australian Council on Healthcare Standards (ACHS) indicator project, Australia, 1989
Feedback mechanism: website, ongoing evaluation of member satisfaction, conferences and workshops, newsletter/annual reports, scientific publications, individual data reports, state-based or corporate data reports
Feedback time: 2 weeks
Budget: AU$6,632,048 (2005); the clinical indicator project is only a component of this amount

BQS-Bundesauswertungen, Germany, 2000
Feedback mechanism: website, individual reports and annual reports, structured dialogue, conference, scientific articles
Feedback time: 3 months
Budget: funded by public and private health insurance on a per-inpatient-case basis (*)

Clinical Indicators Support Team (CIST), NHS Quality Improvement Scotland, Scotland, 2000
Feedback mechanism: website, annual report and scientific articles, workshops and presentations
Feedback time: 12-18 months
Budget: 138,000 (2005/06); the work of CIST is only a component of the total cost of the production of national Scottish indicators

COMPAQH, France, 2003
Feedback mechanism: website, annual reports and scientific articles, evaluation from members, workshops
Feedback time: from 1 week to 4 months
Budget: €470,000

Joint Commission on Accreditation of Healthcare Organizations (JCAHO) (ORYX), USA, 1997
Feedback mechanism: website, accreditation, annual conference and workshops
Feedback time: 6 months
Budget: not available; the ORYX project is a component of the total budget for JCAHO

Ontario Hospital Association (OHA) (Hospital Reports), Canada, 1997
Feedback mechanism: website, newsletter and scientific articles, conferences, user evaluation
Feedback time: 1 year
Budget: ca. CAN$200 million (2005)

Quality Indicator Project (QIP), USA, 1984
Feedback mechanism: website, newsletter and scientific articles, workshops
Feedback time: 1 month
Budget: 8.1 million US dollars

The National Indicator Project (NIP), Denmark, 2000
Feedback mechanism: website, annual reports (of case-mix corrected data), scientific articles, conferences and workshops (regional level), monthly returns of raw data, national audit
Feedback time: 1 month
Budget: €800,000

Reporting of performance in Dutch hospitals, The Netherlands, 2003
Feedback mechanism: websites, annual reports, reports and scientific articles
Feedback time: 4-6 months
Budget: €400,000

Verein Outcome, Switzerland, 2000
Feedback mechanism: website, newsletter, annual reports, scientific articles, benchmarking, workshops, data analysis
Feedback time: approximately 1-2 months
Budget: 2/10 of a percent of the hospitals' costs

(*) There are 16 million cases; for each case, each involved organization receives from health insurance: hospitals €0.58, regional offices €0.28-0.68 and the national agency (BQS) €0.28.


In terms of reporting data, some projects provide results only to hospitals, whereas others also inform the public. Our results suggest a strong association between participation and public disclosure: all projects based on non-voluntary participation disclose their results to the public, except the Bundesgeschäftsstelle Qualitätssicherung (BQS) in Germany. Projects based on voluntary participation stress that the use of performance data is for internal purposes only. Public disclosure of performance, defined as an opportunity for the consumer (patient) to demand information on the quality of health care (clinical effectiveness, hygiene, health care etc.) in hospitals, is a controversial issue. Recent research indicates that patients do make use of such information to choose providers, and public disclosure can thus be seen as a motivational factor for hospitals to work on and invest in better performance assessment systems [25]. However, other studies report that consumers judge information on quality as more trustworthy and easier to understand if they obtain it from family and friends or from their own past experiences [26, 27]. There appears to be a risk, therefore, that consumers may not be able to interpret the information generated by hospital performance assessment projects, and their more intense involvement is warranted to ensure the appropriate design of feedback mechanisms.

Competition

We are aware of co-existing indicator projects at local/regional, national and international levels [3]. For example, in Germany, where all hospitals are obliged to participate in the BQS project, some hospitals also participate on a voluntary basis in the International Quality Indicator Project (IQIP) (www.internationalqip.com) and in PATH. This may reflect that some hospitals are interested in participating in more projects and in comparing additional performance data at a cross-national level. However, it also entails the risk of duplication of strategies. With the development of indicator projects all over Europe, such a risk needs to be addressed, in particular if existing projects collect similar indicators with slightly different definitions, which could result in a high burden of data collection and could impact negatively on the motivation of health professionals to participate and contribute to such projects. To meet this challenge, a mapping exercise could be carried out on the type, scope and actual definitions of indicators used in different projects; given the number of indicators used by the different initiatives, this is beyond the scope of this paper. In addition, further research should address the resources required for data collection, in particular since new projects use routine data for performance assessment and quality improvement in an attempt to reduce the amount of time withdrawn from clinical activities [28, 29].

PATH compared with the national projects

PATH, as a tool for quality improvement in hospitals based on voluntary participation, shares approaches and strategies with other projects based on voluntary participation. As PATH is both a flexible and a comprehensive framework, it can be applied to different national contexts and is of particular use in countries where no indicator project exists or where existing projects focus narrowly on selected dimensions of hospital performance assessment [10]. To our knowledge, only one other hospital performance assessment project operates at international level, the IQIP. While IQIP's strength lies in the robustness of its procedures and the scientific approach in the use of indicators, an advantage of PATH is its multidimensional framework for comprehensive performance assessment. The early development phase of PATH as compared with the other projects entails both advantages and disadvantages. As a project in development, PATH is very sensitive to feedback from users and can easily incorporate suggestions from the field into new procedures for data reporting and analysis. A disadvantage of PATH, though, is that the resources for the coordination of services, research and development are very limited, and WHO needs to take a strategic decision on the positioning of the project in the future. Furthermore, considering the differences in hospital information systems throughout Europe, major efforts need to be made to ensure the comparability of data. It also needs to be ensured that the indicator sets reflect measurement areas that are of relevance to a broad range of facilities. However, given the current shortage of hospital performance assessment initiatives in Central and Eastern European countries and considering the movement of patients across European borders (and hence the need to assess the quality of cross-border care), there are many opportunities for developing PATH.


Conclusions
The purpose of this paper was to provide a review of 10 current national indicator projects that measure hospital quality and to compare them with the WHO PATH project. The review identified many similarities in hospital performance assessment projects, such as the use and scope of indicators; however, we also identified fundamental differences in the philosophy of the projects, which should be addressed by further research. Given the conceptual differences in design, and the overlap of some projects, it would be useful to assess the comparative effects and costs of different indicator projects. Furthermore, many projects are based on a set of assumptions (e.g. that information systems improve, that clinicians use the data to discuss results, that patients use performance reports to select care providers), and we found little information on a systematic evaluation of these assumptions in relation to the projects identified. Finally, for international projects such as PATH, it will be essential to coordinate with national authorities in order to avoid overlaps in indicator definitions. Mapping exercises could be carried out to avoid similar areas of interest being covered by slightly different indicator definitions.

Acknowledgements
We acknowledge the insightful comments of Dr. Paul Bartels, Director of the Unit of Clinical quality and patient



safety and the National Indicator Project, Denmark, and Dr. Eileen Spencer, researcher in the Methods of Assessing Response to Quality Improvement Strategies (MARQuIS) project and researcher at the University of Manchester. We also thank the representatives of the contacted projects for their contributions to validating the information and for providing helpful comments during interviews: Mrs. Christine Farraway, Team Leader, Performance and Outcome Service, Australian Council on Healthcare Standards; Mrs. Joanne Maharaj, Project Assistant, Hospital Report & Patient Satisfaction, Ontario Hospital Association, Canada; Dr. Margaret McLeod, Quality Improvement Programme Manager, and Dr. Donald Morrison from NHS Quality Improvement Scotland; Dr. Christine Coudert, Head of the project COordination pour la Mesure de la Performance et l'Amélioration de la Qualité Hospitalière (COMPAQH); Mr. Burkhard Fischer, Bundesgeschäftsstelle Qualitätssicherung (BQS); Sharon Sprenger, Project Director, Group on Core Performance Measurement, and Nancy Lawler, Associate Project Director, Joint Commission on Accreditation of Healthcare Organizations; Professor Vahe Kazandjian, President, Center for Performance Sciences, Quality Indicator Project; Mr. Dirk Wiedenhofer, Verein Outcome; Professor Gert Westert (National Institute for Public Health and the Environment and Tilburg University) and Mr. Jan Haeck MD, Senior Inspector from The Netherlands Health Care Inspectorate (IGZ), The Netherlands.

References

1. Campbell SM, Roland MO, Buetow SA. Defining quality of care. Soc Sci Med 2000;51:1611-25.
2. Conn J. Leading indicator: program in Maryland has analyzed hospital quality for 20 years. Modern Healthcare 2005;15:38-39.
3. Groene O. Vorschläge der WHO zur umfassenden Leistungsbewertung im Krankenhaus [Suggestions of WHO for comprehensive hospital performance assessment]. Gesundheitsökonomie & Qualitätsmanagement (in German) 2006;11:226-33.
4. Arah OA, Westert GP, Hurst J et al. A conceptual framework for the OECD Health Care Quality Indicators Project. Int J Qual Health Care 2006;18:5-13.
5. Veillard J, Champagne F, Klazinga N et al. A performance assessment framework for hospitals: the WHO Regional Office for Europe PATH project. Int J Qual Health Care 2005;17:487-96.
6. Freeman T. Using performance indicators to improve health care quality in the public sector: a review of the literature. Health Serv Manage Res 2002;15:126-37.
7. Sunol R. Methods of Assessing Response to Quality Improvement (MARQuIS) Project. www.marquis.be (31 January 2008, date last accessed).
8. Spencer E, Walshe K. Health Care Quality Strategies in Europe: A Survey of Quality Improvement Policies and Strategies in Health Care Systems of Member States of the European Union. Deliverable 6 of the Methods of Assessing Response to Quality Improvement (MARQuIS) Project. Manchester: University of Manchester, 2005.
9. JCAHO. Characteristics of clinical indicators. QRB Qual Rev Bull 1989;11:330-9.
10. Groene O. PATH: Performance Assessment Tool for Quality Improvement in Hospitals. Copenhagen: WHO Regional Office for Europe, 2006. http://www.euro.who.int/document/E89742.pdf (15 October 2007, date last accessed).
11. Australian Council on Healthcare Standards. Clinical Indicator Results for Australia and New Zealand 1998-2005: Determining the Potential to Improve Quality of Care. 7th edn. Ultimo: ACHS, 2006.
12. Collopy BT, Williams J, Rodgers L et al. The ACHS Care Evaluation Program: a decade of achievement. J Qual Clin Practice 2000;20:36-41.
13. Bundesgeschäftsstelle Qualitätssicherung. Qualität sichtbar machen: BQS-Qualitätsreport 2006 (in German). Düsseldorf: BQS, 2007.
14. Clinical Indicator Support Team, NHS Quality Improvement Scotland, Scotland: http://www.indicators.scot.nhs.uk/Index.htm (15 October 2007, date last accessed).
15. The Clinical Support Team, NHS Scotland. Clinical Outcome Indicators. Edinburgh: CIST, 1999.
16. Grenier-Sennelier C, Corriol C, Daucourt V et al. Developing quality indicators in hospitals: the COMPAQH project. Rev Epidemiol Sante Publique 2005;53:1S22-1S33.
17. The Joint Commission, USA. http://www.jointcommission.org/ (15 October 2007, date last accessed).
18. Ontario Hospital Association, Canada. http://www.oha.com/ (15 October 2007, date last accessed).
19. Kazandjian VA, Matthes N, Wicker KG. Are performance indicators generic? The international experience of the Quality Indicator Project. J Eval Clin Pract 2003;9:265-76.
20. Quality Indicator Project, USA. http://www.qiproject.org/ (15 October 2007, date last accessed).
21. The Danish National Indicator Project, Denmark. http://www.nip.dk/ (15 October 2007, date last accessed).
22. Berg M, Meijerink Y, Gras M et al. Feasibility first: developing public performance indicators on patient safety and clinical effectiveness for Dutch hospitals. Health Policy 2005;75:59-73.
23. Luthi JC, McClellan WM, Flanders DW et al. Quality of health care surveillance systems: review and implementation in the Swiss setting. Swiss Med Wkly 2002;132:461-9.
24. Verein Outcome, Switzerland: http://www.vereinoutcome.ch/de/home/index.asp (15 October 2007, date last accessed).
25. McGuckin M, Waterman R, Shubin A. Consumer attitudes about health care-acquired infections and hand hygiene. Am J Med Qual 2006;21:342-6.
26. Mannion R, Goddard M. Public disclosure of comparative clinical performance data: lessons from the Scottish experience. J Eval Clin Pract 2003;9:277-86.



27. Hibbard JH, Jewett JJ. Will quality report cards help consumers? Health Aff (Millwood) 1997;16:218-28.
28. Becker A, Beck U, Pfeuffer B, Mantke R. Qualitätssicherung mit Routinedaten: Ergebnisqualität und Kosten. Das Krankenhaus (in German) 2006;9:748-55.

29. Brennan A, Sampson F, Deverill M. Can we use routine data to evaluate organisational change? Lessons from the evaluation of business process re-engineering in a UK teaching hospital. Health Serv Manage Res 2005;18:265-76.

Accepted for publication 2 February 2008


