Contents
CHAPTER 1: THE MEANING OF QUALITY IMPROVEMENT IN HEALTHCARE
Attributes of quality in healthcare
Quality Improvement in Healthcare
Quality Improvement Science
The Importance of Context to the Meaning of QI
Quality Improvement as Patient-Centred Care
Underlying Principles for Quality Improvement in Healthcare
The Role of Context in Quality Improvement
Assessing the Role of the Context in Quality Improvement
CHAPTER 2: QUALITY IMPROVEMENT MODELS, THEORIES AND FRAMEWORKS
The Importance of Theory and Models in QI
Using Theories in Planning and Evaluating Change Interventions
Individual-Level Theories of Change
Theories Related to Interpersonal Interaction
Theories Related to the Organizational Context
The Need for Theory-Informed Research
Developing and Applying Programme Theory
Linking Theories, Tools and Strategies
QI Models
Managing Change in Quality Improvement
Measurement of ‘Change’ in Quality Improvement
CHAPTER 3: INSTITUTIONALIZING QUALITY IMPROVEMENT IN HEALTH CARE
Quality Improvement Methodology
Quality Improvement Work as Systems and Processes
Quality Improvement Planning
Developing a Theory-Informed Intervention
The Process of Institutionalizing QI in Healthcare Practice
Identifying desired improvements
Monitoring and Evaluation for QI
‘Not all changes are improvements but all improvement involves change. Changing the systems
that deliver care is the cornerstone of quality improvement.’ Deming
Definition of quality
“Doing the right thing, at the right time, in the right way, for the right person…and getting the best
possible results”. In health care, the term quality refers to the delivery of the right care to the right
patient at the right place and time with the right resources. Quality is both implicitly and explicitly
linked to ‘effectiveness’, where it refers to safety, efficient service delivery, and quality of patient
care (Andrews et al, 1997; Bodenheimer, 1999; Chassin and Galvin, 1998; Berwick, 1998; Bates
and Gawande, 2000). Donabedian (1980) defines quality as ‘the ability to achieve desirable
objectives using legitimate means’. A system can only be said to be performing, in this case
achieving desired objectives, if it delivers high quality interventions, care or services. The
‘quality of care’ is increasingly referred to as ‘performance’ (Schneider et al, 1999; Marshall et al, 2000;
Jenks, 2000), although current measurements may capture only the ‘quality of technical performance’
(Blumenthal, 1996; Feinstein, 2002). Quality of care becomes a proxy for the quality of the whole
health system where the main business is clinical care. Quality of care may also refer to the
governance of healthcare systems (Buetow and Roland, 1999; Heard et al, 2001; Friedman, 2002).
It has been observed that healthcare will not realize its full potential unless change making (quality
improvement) becomes a routine practice, that is, “an intrinsic part of everyone’s job, every day,
in all parts of the system, and a process that should benefit from the use of a wide variety of tools
and methods” (Batalden and Davidoff, 2007).
Accessible: People should be able to get the right care at the right time in the right setting by the
right healthcare provider.
Effective: People should receive care that works, based on the best available scientific information.
Safe: People should not be harmed by errors or accidents when they receive care.
Patient-centred: Healthcare providers should offer services in a way that is sensitive to an
individual’s needs and preferences.
Equitable: People should get the same quality of care regardless of who they are and where they
live.
Efficient: The health system should continually look for ways to reduce waste, including
waste of supplies, equipment, time, ideas and information.
Appropriately Resourced: The health system should have enough qualified providers, funding,
information, equipment, supplies and facilities to look after people’s health needs.
Integrated: All parts of the health system should be organized, connected and work with one
another to provide high-quality care.
In QI the aim is to improve quality overall by reducing unnecessary variation and focusing on
what happens most often rather than what happens relatively rarely. QI thrives in learning
environments that strive to improve the system and its processes rather than trying to eliminate
an outlier event.
3) QI can be seen as a relationship between people, process, and possibility (Savage et al, 2016).
People are the motor that drives the work, that is, the stakeholders and actors induce something
to happen. Process refers to the (scientific) approach (methodology) to learning about and
improving the organization that leads to improvement in quality. In health care, quality
improvement refers to “…the combined and unceasing efforts of everyone—health care
professionals, patients and their families, researchers, planners and educators—to make the
changes that will lead to better patient outcomes (health), better system performance (care) and
better professional development” (Batalden and Davidoff, 2007).
4) The commonly accepted model for improvement is the plan-do-study-act (PDSA) cycle, which
asks three essential questions (Deming, 2000; Langley et al, 1996): What are we trying to
accomplish? How will we know that a change is an improvement? What changes can we make
that will result in an improvement? This model can be used repeatedly to test a series of
consecutive changes.
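The three questions above feed a repeated test-and-learn loop. As a rough sketch, the ‘study’ step of one PDSA cycle can be expressed in code; the metric (average clinic waiting time), the data, and the function name below are purely illustrative assumptions, not part of the published model.

```python
# Minimal sketch of the 'study' step of one PDSA cycle.
# The metric, data and function name are hypothetical examples.

def pdsa_study(baseline, trial, improvement_is_lower=True):
    """Compare baseline and trial measurements and suggest a decision:
    adopt the change if the trial moved the metric the desired way,
    otherwise abandon or adapt it."""
    base_avg = sum(baseline) / len(baseline)
    trial_avg = sum(trial) / len(trial)
    improved = trial_avg < base_avg if improvement_is_lower else trial_avg > base_avg
    decision = "adopt" if improved else "abandon or adapt"
    return base_avg, trial_avg, decision

# Hypothetical waiting times (minutes) before and after a scheduling change.
baseline = [42, 38, 45, 40, 44]
trial = [35, 33, 37, 36, 34]

base_avg, trial_avg, decision = pdsa_study(baseline, trial)
print(f"baseline {base_avg:.1f} min, trial {trial_avg:.1f} min -> {decision}")
```

In practice each cycle would use real baseline and trial measurements, and an ‘adopt’ decision would normally also consider variation over time (for example with a run or control chart), not just a change in the average.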
Healthcare outcomes
Quality improvement (QI) is defined as better patient experience and outcomes achieved through
changing provider behaviour and organization, using a systematic change method and strategies.
(The key elements in this definition are the combination of a ‘change’ (improvement) and a
‘method’ (an approach with appropriate tools), while paying attention to the context, in order to
achieve better outcomes). Thus QI is a proven, effective way to improve care for patients, residents
and clients, and to improve practice for staff. In the healthcare system, there are always
opportunities to optimize, streamline, develop and test processes, and QI should be a continuous
process and an integral part of the organization, that is, everyone’s work, regardless of role or
position within the organization.
The QI process
QI in healthcare refers to the broad range of activities of varying degrees of complexity and
methodological and statistical rigor through which healthcare providers develop, implement and
assess interventions, identify those that work well and implement them more broadly in order to
improve clinical practice. QI draws on a wide variety of methodologies, approaches and tools. QI
can be conceptualized as an umbrella term which encompasses many different systematic ‘change
methods’ to support improvement and better outcomes for patients and services. However, many
of these share some simple underlying principles, including a focus on:
a) Understanding the problem, with a particular emphasis on what the data tell you
b) Understanding the processes and systems within the organization – particularly the patient
pathway – and whether these can be simplified
c) Analyzing the demand, capacity and flow of the service
d) Choosing the tools to bring about change, including leadership and clinical engagement, skills
development, and staff and patient participation
e) Evaluating and measuring the impact of a change
The QI methodology
a) QI is a general term referring to a body of systematic knowledge, which some call a science or
a multi-discipline (Ovretveit, 2013). It refers to a set of methods that have been found to be
effective in improving care, the different strategies for addressing specific quality and safety
problems (such as hospital acquired infections, or communication problems between services),
and the different programmes for improving performance or safety issues (such as clinical
guidelines development or accreditation).
b) QI is a formal scientific approach to the analysis of performance and the systematic efforts to
improve it. One can only manage quality when one can measure and monitor quality (Eagle
and Davies, 1993; Ibrahim, 2001; Thompson and Harris, 2001). Over the last several
decades, health care has become increasingly complex and costly, consequently, healthcare
organizations struggle to provide equitable, affordable, safe, timely, and high-quality
healthcare, while still containing cost and satisfying patients and families. QI refers to the
employment of systematic changes to patient care processes so as to achieve improvement in
patient outcomes and safety, improve the patient and family experience, and increase the value
of care delivered.
c) QI basically refers to a process of change in human behaviour that is driven largely by
experiential learning. Thus development and adoption of QI interventions depends a lot on
changes in social policy, programmes or practices, within a specific context or environment of
healthcare delivery. As such, the evolution, development and success of improvement
interventions has much in common with changes in social policy and programmes. QI uses
rigorous methodology to evaluate systemic changes to patient care processes in an effort to
improve patient outcomes, patient and family experience of care, or the safety and value of the
care delivered (Kurowski et al, 2015). At the same time, the high stakes of clinical practice
demand that we provide the strongest possible evidence on exactly how, and whether,
improvement interventions work.
d) QI methods involve multiple sequential changes over time and utilize continuous measurement
and analysis. In complex and dynamic systems such as in healthcare, QI allows for rapid testing
and evaluation of new processes and methods for delivering care so as to achieve better patient
safety or patient outcomes (Kurowski et al, 2015).
Quality Improvement Science
This refers to the system of knowledge underpinning QI described by W. Edwards Deming (2000).
There are four components of knowledge that underpin quality improvement: Appreciation of a
system; Understanding of variation; Theory of knowledge; and Psychology. Successful
improvements can only be achieved when all four components are addressed. Deming (2000)
posits that it is impossible for improvement to occur without the following action: developing,
testing and implementing changes.
Appreciation of the system
In applying Deming’s concepts to health care, most patient care outcomes or services result from
a complex system of interaction between health-care professionals, treatment procedures and
medical equipment. Therefore, medical professionals and trainees should appreciate the
interdependencies and relationships among all of these components of the healthcare system
(doctors, nurses, patients, treatments, equipment, procedures, theatres and so on) thereby
increasing the accuracy of predictions about any impact that changes may have on the system.
Understanding of variation
Variation is the difference between two or more things that are similar. There is extensive
variation in health care and patient outcomes can differ from one ward to another, from one
hospital to another and one region or country to another. Variation is a feature of most systems;
shortages of personnel, drugs or beds, for example, can lead to variations in care. Deming urges
people to ask questions about variation, including that related to treatment outcomes. For instance,
do the three patients returned to theatres after surgery indicate a problem with surgery? Did the
extra nurse on duty make a difference with patient care or was it a coincidence? The ability to
answer such questions is part of the reason for undertaking improvement activities. QI science is
rooted in quasi-experimental research design and strong statistical theory such that, when
systematically applied across sites, it can produce generalizable knowledge about interventions that
improve health care quality, safety, and value (Ryan, 2011; Stromer, 2013). Thus it maintains its
rigor as a scientific method and ability to improve outcomes. In routine practice, QI provides an
essential set of tools specifically devised to bridge the quality chasm, that is, address the gaps
between the level at which a healthcare system currently functions and the level at which it has
potential to function under optimal conditions (Kohn et al, 2000; Chun et al, 2014).
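Questions about variation like those above are usually answered with statistical process control. The following is a simplified sketch of an individuals (XmR) control chart, which estimates process spread from the average moving range; the monthly counts are invented for illustration.

```python
import statistics

# Hypothetical monthly counts of unplanned returns to theatre after surgery.
counts = [3, 2, 4, 3, 2, 3, 4, 2, 3, 9]

mean = statistics.fmean(counts)
# Moving ranges: absolute differences between consecutive observations.
moving_ranges = [abs(a - b) for a, b in zip(counts[1:], counts)]
# Estimate process sigma from the mean moving range (d2 = 1.128 for n = 2).
sigma_hat = statistics.fmean(moving_ranges) / 1.128
ucl = mean + 3 * sigma_hat            # upper control limit
lcl = max(0.0, mean - 3 * sigma_hat)  # counts cannot fall below zero

# Points outside the limits suggest special-cause variation worth investigating.
signals = [c for c in counts if c > ucl or c < lcl]
print(f"mean={mean:.2f}, UCL={ucl:.2f}, LCL={lcl:.2f}, signals={signals}")
```

Points beyond the limits suggest special-cause variation worth investigating; points within them reflect common-cause variation, which is addressed by changing the system rather than reacting to individual results.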
Theory of knowledge
Deming posits that the theory of knowledge requires us to make predictions that any changes we
make will lead to an improvement. Predicting the results of a change is a necessary step to enable
a plan to be made even though the future is uncertain. Building knowledge by making changes and
measuring the results or observing the differences is the foundation of the science of improvement.
Psychology
There is need to understand the psychology of how people interact with each other and the system
in inducing change. Making a change, whether it is small or large, will have an impact and
knowledge of psychology helps to understand how people might react, and why they might resist
change, even if it is for good. The potential different reactions must be factored in when making
an improvement change.
The Institute for Healthcare Improvement endorses a QI method based on the Model for
Improvement (Langley et al, 2009), which focuses on five guiding principles, which have
characteristics of the research process:
a) Knowing why you need to improve (presence of a care or quality gap)
b) Having feedback mechanisms to show whether improvement is happening (relevant data)
c) Instituting effective changes that result in improvement (QI plans, strategies and actions)
d) Testing the change before attempting to implement (pilot and feasibility studies or
availability of empirical data)
e) Knowing when and how to make the change permanent (sustainability or institutionalizing
improvements into routine practice)
The Importance of Context to the Meaning of QI
To understand a QI intervention clearly, one needs to understand how the intervention
relates to general knowledge of the care problem that necessitates improvement. This requires the
authors to place their work within the context of issues that are known to impact the quality of
care. Context literally means ‘to weave together’. The context thus refers to the interweaving of the issues
that stimulated the improvement idea and several spatial, social, temporal and cultural factors
within the local setting, all of which form the “canvas upon which improvement is painted”. The
explanation of context should go beyond a description of physical setting, but should include the
organization (types of patients served, staff providing care and care processes before introducing
the intervention), the governance structure, the health information systems, and the logistical
framework, so as to enable reviewers and readers to determine whether findings from the study are
likely to be transferable (that is, readers are able to relate them to their own
care setting). In studies with multiple sites, a table or matrix can be a convenient way to summarize
similarities and differences in context across sites. The table can specify the structures, processes,
people and patterns of care that are unique to each site and assist the reader in interpreting results.
Quality Improvement as Patient-Centred Care
Patient-centred care is defined as ‘health care that establishes a partnership among practitioners,
patients and their families (when appropriate) to ensure that decisions respect patients’ wants, needs
and preferences and that patients have the education and support they need to make decisions
and participate in their own care’ (IOM, 2001). Patient-centred care is increasingly
acknowledged as an integral part of quality in health care, and improving patient-centredness is
one of the six aims of the Institute of Medicine’s (IOM) Health Care Quality Initiative, according
to which health care should be safe, effective, patient-centred, timely, efficient and equitable. Yet,
firstly, the reasons for a patient-centred approach from a quality improvement perspective are not
always clear to all stakeholders. QI projects may put a focus only on a particular aspect of patient
centredness. Secondly, many QI initiatives imply that adding a patient survey to existing
performance measures will be sufficient to realize patient-centred care. While this may be
informative, it may not be very effective. Moreover, there appears to be a selection bias towards a
few established instruments capturing generic patient experience or satisfaction thereby ignoring
some of the broader challenges in assessing patient centredness. Thirdly, there are concerns with
regard to common strategies to improve patient centredness. The focus on patient-centredness has
continuously evolved in the literature and in recent years has been greatly emphasized in policy
initiatives. The literature on strategies to improve patient-centred care highlighted that ‘patient-
centred care is a widely used phrase but a complex and contested concept’ (Lewin et al, 2001). A
patient-centred approach from a quality improvement perspective involves improving patients’
rights, improving health gain and contributing to organizational learning.
Improving patients’ rights
Patients’ rights embrace arguments of democratization (according to which a paternalistic
relationship between patient and professional would contradict the notions of democratic
societies), operationalized in hospital settings in terms of policies to ensure confidentiality,
informed consent, information about treatment and care and issues related to professional-patient
interaction (Gerteis et al, 1993; Rotter and Larson, 2002). Participation in health care is an end in
itself (Berwick, 2009).
Improving health gain
The health gain perspective addresses the implications of patient-centred care on patient behaviour,
recovery and outcomes. Research suggests that patient centredness is associated with better
compliance, patient satisfaction, better recovery and health outcomes, augmentation of tolerance
for stress and pain levels, reduced readmission rates and better seeking of follow-up care (Lazarus,
2000; Hibbard et al, 2005; Jack et al, 2009; Balik et al, 2011).
Organizational learning
Another rationale for patient-centred care and an important focus from a QI perspective is
organizational learning. In order for organizations to learn, personal context-specific knowledge
needs to be transferred into systematic and formal knowledge. Knowledge-dependent
organizations constantly revise knowledge at all organizational levels in order to inform process
alignment, innovation, product development and service provision. In hospitals, patients’
knowledge has traditionally been ignored as potential contributions to assessing, improving and
implementing work processes. Patients can contribute significantly to health-care improvements,
in particular through their assessment of non-clinical aspects of care, such as the care environment, as
well as the care process. Why patient survey data are not systematically used in QI efforts may be
due to organizational barriers (lack of priority or supporting infrastructures), professional barriers
(skepticism, resistance to change) or data-related barriers (lack of timely feedback or lack of
specificity and discrimination).
In contrast, when measuring for QI, the learning develops through the process. The research question
for QI, instead of asking whether an intervention works, is phrased by asking how (and how much)
the intervention can be made to work in a given situation and what will constitute ‘success’.
Consequently, the hypothesis changes throughout the QI project and the data will be ‘good enough’
rather than perfect.
Understanding the process
Access to data is vital when assessing whether there is a problem that necessitates improvement.
However, the data may not in itself explain why the problem exists. Part of addressing the QI
problem requires understanding the process by which the problem occurs. Process mapping is a
tool used to chart each step of a process, mapping the patient’s pathway through part or all of the
journey, together with its supporting processes. Process mapping is most useful as a tool to engage QI
teams in understanding how the different steps fit together, which steps add value to the process, and
which steps are irrelevant.
Improving reliability
Once a process is understood, a key focus of QI is to improve the reliability of the system and
clinical processes, not only to mitigate waste and defects in the system, but also to reduce
error and harm. Systematic QI approaches such as Lean seek to redesign system and clinical
pathways, create more standardized working and develop error-free processes that deliver high
quality, consistent care and improve efficiency in use of resources.
Demand, capacity and flow
A capacity problem is usually blamed for persistence of backlogs, waiting lists and delays in a
service. Such a problem implies that there is insufficient staff, machines or equipment to deal with
the volume of patients. Without data to estimate the demand (the number of patients requiring
access to the service) and the flow (when the service is needed), it is difficult to pinpoint capacity
as being responsible. The capacity deficit may be in the wrong place, or occur at the wrong time.
Planning QI requires a detailed understanding of the variation and relationship between demand,
capacity and flow. For example, demand may often be relatively stable and flow may be predictable
in terms of peaks and troughs, so that the problem may be variation in the capacity available
at specific times.
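A toy calculation, with invented numbers, illustrates the point: total capacity can exactly match total demand and yet a backlog still builds, because the capacity is not available when the demand arrives.

```python
# Hypothetical hourly clinic figures: patients arriving vs patients seen.
demand = [2, 6, 9, 7, 4, 2]    # arrivals per hour block
capacity = [5, 5, 5, 5, 5, 5]  # appointments available per hour block

# Carry any unmet demand forward as a backlog from hour to hour.
queue, backlog = [], 0
for d, c in zip(demand, capacity):
    backlog = max(0, backlog + d - c)
    queue.append(backlog)

print("total demand:", sum(demand), "total capacity:", sum(capacity))
print("queue by hour:", queue)
```

Here daily capacity (30) equals daily demand (30), yet because arrivals peak mid-morning a queue forms and never fully clears by the end of the day; the remedy is matching capacity to the pattern of flow, not simply adding capacity overall.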
Involving every individual in the organization is key
Motivating, involving and engaging staff is key. Evidence about successful QI shows that it is not
necessarily the method or approach used that predicts success, but the way in which the change is
introduced. Factors that contribute to success include leadership, staff engagement and
client/patient participation, as well as training and education. It is important to involve all relevant
staff, including non-clinical staff, who are often the first point of contact for patients. Also, it is
critical to break down traditional hierarchies in this multidisciplinary approach to ensure that all
perspectives and ideas are considered. Capability building and facilitated support are key elements
of building clinical commitment to improvement. Other important aspects include:
a) Involving the clinical team early on when setting aspirations and goals
b) Ensuring senior clinical involvement and peer influence
c) Involving clinical networks across organizational boundaries
d) Providing evidence that the change has been successful elsewhere
e) Embedding an understanding of quality improvement into training and education of
healthcare professionals
Involving patients and co-design
Patients, carers and the wider public have a critical role to play in QI, both in designing
improvements and in monitoring whether QI initiatives have the desired impact. Staff must
constantly ask the question ‘How do we know what constitutes good care, and how do we achieve
it?’ Engaging patients and carers in QI provides the answer. However, patients may define quality
differently from clinicians and managers, such that what they view as the ‘problem’ or value within
a system may be different from the clinicians or managers. So QI leaders need to question how
patient involvement is embedded in their organizations’ quality improvement programmes.
Unintended consequences of QI
Can QI have unintended consequences? At times, change in one area can cause pressure in another,
thereby causing ‘unintended consequences of QI’. For example, improved early discharge may
lead to increased readmission. In these circumstances, leaders need to anticipate and monitor for
these potential consequences using a set of balancing measures, and may need to make decisions
about scheduling or sequencing of initiatives. QI is likely to be more effective if it is addressed at
a whole-system level rather than a number of disconnected projects, and must be approached as a
long-term, sustained change effort.
‘Context’ for QI is defined as factors that potentially mediate the effect of the intervention,
including leadership and governance, interpersonal relationships, organizational resources, health
information systems and data availability and critical human resources (Ovretveit, 2011; Kaplan
et al, 2012; Tomoaia-Cotisel et al, 2013). Context is important for most phenomena of health care
and health (Sorensen et al, 2003; Kaplan et al, 2010; Kathol and Kathol, 2010) particularly in
explaining individual decision-making (Weiner, 2004; Weiner et al, 2010) and patient safety
(Phillips et al, 1998; Ovretveit et al, 2011; Taylor et al, 2011). However contextual factors are
rarely recorded, analyzed, or reported in research reports. Because context is important to
interpreting and applying findings, attempts to replicate research often fail, and efforts to translate
research into practice often equally fail because contextual factors important for understanding and
knowledgably synthesizing findings across studies in meta-analyses and evidence-based
guidelines remain unclear (McCormack et al, 2002; Hawe et al, 2004). While a tremendous amount
of available research demonstrates the effectiveness of strategies to improve quality and enhance patient
safety, the contextual factors affecting the implementation and effectiveness of these strategies are
not well understood (Shojania et al, 2004; Bate et al, 2014). Grimshaw et al (2001), in their
systematic review of interventions to change provider behaviour, highlighted concerns about the
strength of the evidence base on effectiveness, advising that the majority of interventions are effective
under some but not all circumstances.
Few studies investigate contextual and implementation factors in detail (Scott, 2009), which
weakens their findings, partly due to a lack of theoretically sound research methods that elucidate
why interventions work (or do not work) (Conry et al, 2012). Even so, the mixed
effects and success rates of strategies to improve quality and safety in health care depend
partly on the different contexts in which the interventions are planned and
implemented. These factors operate by influencing the effectiveness of quality improvement
interventions at the level of the micro-system (Kaplan et al, 2010; Dixon-Woods et al, 2011;
Ovretveit, 2011; Kringos et al, 2015). An intervention that works in one setting does not
necessarily work in another. Regarding quality improvement effectiveness, the common questions
asked are whether and why the initiatives worked. Yet, more often than not, data on all such factors
are not available, thus limiting the potential generalizability of the findings on the effectiveness of
QI strategies. Moreover, the broader question ‘why, when, where, and for whom QI interventions
work most effectively’ is of much greater concern and practical importance (Foy et al, 2011).
Besides, a thorough understanding of the underlying mechanisms that make an intervention work
has the potential to enable successful application of the intervention in other settings and to help
improve its effectiveness if replicated. Context is a necessary component of the “ingredients of
change” (Rycroft-Malone et al, 2002).
organizational QI (Veloski et al, 2006; Benn et al, 2009; Ivers et al, 2012). Organizational
factors encompass QI leadership, sponsors, a culture supportive of QI, robustness of
organizational QI strategies, and physician payment structure.
3) QI support and capacity: this encompasses data infrastructure, resource availability and
workforce focus on QI. The presence of functional information technology (IT) systems
facilitates data collection on the effectiveness of QI interventions (Grimshaw et al, 2004; Ong and
Coiera, 2011). Insufficient administrative support impacts negatively on the effectiveness of
interventions promoting safety cultures (Weaver et al, 2011) or on strategies aimed at
implementing quality indicators (De Vos et al, 2009).
4) Microsystems: Clinical micro-systems have previously been described as the key settings in
which QI interventions are implemented (Pronovost et al, 2006; Godfrey et al, 2008; Blegen
et al, 2010; Mitchell et al, 2010; Pronovost et al, 2010). The influence may result from effects
on staff morale and from the skepticism of health care professionals towards the positive impact of QI
interventions. Interventions that may necessitate seeking and aligning physicians’ views on
their content and implementation (Shepherd et al, 2004; Chaudry et al, 2006;
Ong and Coiera, 2011) include training or education in the proper use of QI strategies (such
as safety checklists and accreditation standards) and integrating QI strategies into the working
practices of health professionals (de Vos et al, 2009; Ko et al, 2011). The microsystem
encompasses QI leadership, culture supportive of QI, capability for improvement, motivation
to change.
5) The QI team: The QI team encompasses team diversity, physician involvement, availability of
subject matter experts, prior experience with QI, team leadership, the team decision-making
process, team norms, and team QI skills. Training of practice members, characteristics affecting
how they work together, and leadership are often relevant contextual factors. The composition
of the QI team is a major determinant of QI effectiveness (Aboelela et al, 2007; Stone et al,
2008; Damiani et al, 2010). Having ‘subject matter experts’, that is, more than one team member
with detailed knowledge of the outcome, process, or system being changed, is beneficial across
the range of QI strategies.
6) Assorted factors: Several contextual factors not addressed in the MUSIQ tool influence the
success of QI. These include which elements of an intervention are implemented, when, and
the period of time over which this happens; the specific operational changes that are sought;
the specific QI method or approach employed; the involvement of staff and patients in QI;
feedback on performance to clinicians; and the formal program identity (such as demonstration
project, pilot project, or organizational transformation). Others include success history (such
as experience with transformation, burnout, and adaptive reserve) and the provision of a safe
place to experiment and even fail. Also relevant are patient/client involvement in development
and unique features of the intervention group (such as a specific disease or a specific
demographic sub-population), as well as the main intervention objectives and outcomes (such
as health status, patient satisfaction, and financial stability). Further factors include trigger
events and the strategic importance of the task to the organization. Factors unique to particular
contexts include, in particular, structural features of service organization, such as staff turnover,
bed occupancy, workload and time constraints (Ong and Coiera, 2011), and the availability of
guidelines or computerized decision support systems (Garg et al, 2005; Chan et al, 2012). One
theme, implementation pathways, captures locally relevant elements of an intervention,
including operational changes (such as the addition of new employees, redefined roles, team
communication strategies, and feedback loops) as well as the objectives and outcomes of the
intervention (say, health status of targeted populations, patient satisfaction, and financial
stability).
Tips for evaluating the context
1) Engage diverse perspectives: consult research participants (organizations, patients and
clinicians, and the investigative team), draw on relevant theoretical models, synthesize prior
research, and engage potential end users of study findings.
2) Consider multiple levels: from the macro to the micro, assess interlinkages and interactions
between levels.
3) Evaluate the evolution of contextual factors over time: assess initial conditions and history,
analyzing changes over the course of the study.
4) Look at both formal and informal systems and culture: look for (mis)alignments, be sensitive
to the locus of power, appraise internal and external motivations, and evaluate resources,
support, and financial and other incentives.
5) Assess (often nonlinear) interactions between contextual factors: assess both the process and
outcome of studies, and report within the body of scientific articles the key contextual factors
that others would need to know (1) to understand what happened in the study and why, and
(2) to be able to transport and knowledgeably reinvent the project in another situation.
References
Aboelela SW, Stone PW, Larson EL. Effectiveness of bundled behavioural
Blumenthal D., Quality of health care, part 1: quality of care— what is it? N Engl J Med 1996;
335: 891–894.
Bodenheimer T., The American health care system: the movement for improved quality in health
care. N Engl J Med 1999; 340: 488–492.
Buetow SA, Roland M. Clinical governance: bridging the gap between managerial and clinical
approaches to quality of care. Qual Health Care 1999; 8: 184–190.
Chan AJ, Chan J, Cafazzo JA, et al. Order sets in health care: a systematic review of their
effects. Int J Technol Assess Health Care. 2012;28:235–40.
Chassin MR, Galvin RW. The urgent need to improve health care quality: National Institute of
Medicine National Roundtable on Health Care Quality. J Am Med Assoc 1998; 280: 1000–1005.
Chaudhry B, Wang J, Wu S, et al. Systematic review: impact of health information technology on
quality, efficiency, and costs of medical care. Ann Intern Med. 2006;144:742–52
Conry MC, Humphries N, Morgan K, et al. A 10 year (2000–2010) systematic review of
interventions to improve quality of care in hospitals. BMC Health Serv Res. 2012;12:275.
Coulter A, Ellins J. Effectiveness of strategies for informing, educating, and involving
patients. BMJ 2007;335:24–7.
Damiani G, Pinnarelli L, Colosimo SC, et al. The effectiveness of computerized clinical
guidelines in the process of care: a systematic review. BMC Health Serv Res.
2010;10:2.
Davila F: What is an acceptable and specific definition of quality healthcare? Baylor Univ Med
Centre Proc 2002, 15:84–85.
de Vos M, Graafmans W, Kooistra M, et al. Using quality indicators to improve hospital care: a
review of the literature. Int J Qual Health Care. 2009;21:119–29.
Dixon-Woods M, Bosk CL, Aveling EL, et al. Explaining Michigan: developing an ex post theory
of a quality improvement program. Milbank Q. 2011;89:167–205.
Donabedian A., Explorations in Quality Assessment and Monitoring.The Definition of Quality and
Approaches to its Assessment. Vol. 1. Ann Arbor, MI: Health Administration Press, 1980.
Eagle CJ, Davies JM. Current models of ‘quality’ – an introduction for anaesthetists. Can J
Anaesth 1993; 40: 851–862.
Feinstein AR., Is “quality of care” being mislabeled or mismeasured? Am J Med 2002; 112: 472–
478.
Dan Kabonge Kaye Quality Improvement in healthcare, 2019
Flodgren G, Pomey MP, et al. Effectiveness of external inspection of compliance with standards
in improving healthcare organization behaviour, healthcare professional behaviour or patient
outcomes. Cochrane Database Syst Rev. 2011;11:Cd008992.
Foy R, Ovretveit J, Shekelle PG, et al. The role of theory in research to develop and evaluate the
implementation of patient safety practices. BMJ Qual Saf. 2011;20:453–9
Freedman DB., Clinical governance—bridging management and clinical approaches to quality in
the UK. Clin Chim Acta 2002; 319: 133–141.
Gandjour A, Kleinschmit F, et al: An evidence-based evaluation of quality and efficiency
indicators. Qual Manag Healthc 2002, 10:41–52.
Gardner LA, Snow V, Weiss K, et al: Leveraging improvement in quality and value in healthcare
through a clinical performance framework: a recommendation of the American College of
Physicians. Am J Med Qual 2010, 25:336–342.
Garg AX, Adhikari NK, et al. Effects of computerized clinical decision support systems on
practitioner performance and patient outcomes: a systematic review. JAMA. 2005; :1223–38
Gerteis M, Edgman-Levitan S, Daley J et al. Through the patient’s eyes: understanding and
promoting patient-centred care. San Francisco: Jossey Bass Publishers, 1993.
Glasgow JM, Scott-Caziewell JR, Kaboli PJ. Guiding inpatient quality improvement: a
systematic review of Lean and Six Sigma. Jt Comm J Qual Patient Saf. 2010; 36: 533–40.
Godfrey MM, Melin CN, Muething SE, et al. Clinical microsystems, Part 3. Transformation of
two hospitals using microsystem, mesosystem, and macrosystem strategies. Jt Comm J Qual
Patient Saf. 2008;34:591–603.
Griffiths P, Renz A, Hughes J, Rafferty AM. Impact of organization and management factors on
infection control in hospitals: a scoping review. J Hosp Infect. 2009;73:1–14.
Grimshaw JM, Shirran L, Thomas R, et al. Changing provider behavior: an overview of systematic
reviews of interventions. Med Care. 2001;39 Suppl 2:Ii2–45.
Grimshaw JM, Thomas RE, MacLennan G, et al. Effectiveness and efficiency of guideline
dissemination and implementation strategies. Health Technol Assess. 2004;8:1–72.
Hawe P, Shiell A, Riley T, Gold L. Methods for exploring implementation variation and local
context within a cluster randomized community intervention trial. J Epidemiol Community
Health. 2004; 58(9):788-793.
Heard SR, Schiller G, Aitken M, Fergie C, McCready Hall L. Continuous quality improvement:
educating towards a culture of clinical governance. Qual Health Care 2001; 10: 70–78.
Hibbard JH, Mahoney ER, Stockard J et al. Development and testing of a short form of the
patient activation measure. Health Serv Res 2005;40: 1918–30.
Hysong S, Khan M, Petersen L: Passive monitoring versus active assessment of clinical
performance. Med Care 2011, 49:883–890.
Ibrahim JE. Performance indicators from all perspectives. Int J Qual Health Care 2001; 13: 431–
432
Institute of Medicine. Crossing the quality chasm: a new health system for the 21st century.
Washington, DC: IOM, 2001.
Ivers N, Jamtvedt G, Flottorp S, et al. Audit and feedback: effects on professional practice and
healthcare outcomes. Cochrane Database Syst Rev. 2012;6: Cd000259.
Jencks SF., Clinical performance measurement—a hard sell. JAMA 2000; 283: 2015–2016.
Kaplan HC, Brady PW, Dritz MC, et al. The influence of context on quality improvement success
in health care: a systematic review of the literature. Milbank Q. 2010;88(4):500-559.
Kaplan HC, Froehle CM, Cassedy A, et al. An exploratory analysis of the model for understanding
success in quality. Health Care Man Rev. 2013;38:325–38
Kaplan HC, Provost LP, Froehle CM, Margolis PA. The Model for Understanding Success in
Quality (MUSIQ): building a theory of context in healthcare quality improvement. BMJ Qual Saf.
2012;21:13–20.
Kathol RG, Kathol MH. The need for biometrically- and contextually-sound care plans in complex
patients. Ann Intern Med. 2010;153(9): 619-620.
Kazandjian VA, Wicker K, Matthes N, Oqunbo S: Safety is part of quality: a proposal for a
continuum in performance measurement. J Eval Clin Prac 2008, 14:354–359.
Kitson AL, Harvey G, McCormack B. Enabling the implementation of evidence based practice: a
conceptual framework. Qual Health Care 1998;7:149–58.
Ko HC, Turner TJ, Finnigan MA. Systematic review of safety checklists for use by medical care
teams in acute hospital settings—limited evidence of effectiveness. BMC Health Serv Res.
2011;11:211
Kringos DS, Sunol R, Wagner C, et al. The influence of context on the effectiveness of hospital
quality improvement strategies: a review of systematic reviews. BMC Health Serv Res.
2015;15:277.
Langley GJ, Nolan KM, Norman CL, Provost LP, Nolan TW. (1996).The Improvement Guide: A
Practical Approach to Enhancing Organizational Performance, San Francisco: Jossey-Bass
Publishers
Lazarus RS. Toward better research on stress and coping. Am Psychol 2000;55: 665–73.
Lewin SA, Skea ZC, Entwistle VA et al. Interventions for providers to promote a patient-
centred approach in clinical consultations. Cochrane Database Syst Rev 2001;4:CD003267.
Loeb J: The current state of performance measurement in healthcare. Int J Qual Healthcare 2004,
16:5–9.
Main C, Moxham T, Wyatt JC, et al. Computerized decision support systems in order
communication for diagnostic, screening or monitoring test ordering: systematic reviews of the
effects and cost-effectiveness of systems. Health Technol Assess. 2010;14:1–227
Mainz J: Defining and classifying clinical indicators for quality improvement. Int J Qual
Healthcare 2003, 15:523–530.
Marshall MN, Shekelle PG, Leatherman S, Brook RH. The public release of performance data:
what do we expect to gain? A review of the evidence. J Am Med Assoc 2000; 283: 1866–1874.
McCormack B, Kitson A, Harvey G. Getting evidence into practice: the meaning of ‘context.’
Mitchell IA, McKay H, Van Leuvan C, et al. A prospective controlled trial of the effect of a
multi-faceted intervention on early recognition and intervention in deteriorating hospital patients.
Resuscitation. 2010;81:658–66.
Ong MS, Coiera E. A systematic review of failures in handoff communication during intrahospital
transfers. Jt Comm J Qual Patient Saf. 2011;37:274–84
Ovretveit J. Contemporary quality improvement. Cad. Saúde Pública, Rio de Janeiro 2013;
29(3):424-426
Ovretveit J. (2009) Does improving quality save money? A review of the evidence of which
improvements to quality reduce costs to healthcare service providers. London: The Health Foundation.
Ovretveit J. Understanding the conditions for improvement: research to discover which context
influences affect improvement success. BMJ Qual Saf. 2011;20 Suppl 1:i18–23.
Ovretveit JC, Shekelle PG, Dy SM, et al. How does context affect interventions to improve patient
safety? An assessment of evidence from studies of five patient safety practices and proposals for
research. BMJ Qual Saf. 2011;20(7):604-610.
Pencheon D: Developing a sustainable health and care system: lessons for research and policy. J
Health Serv Res Policy 2013, 18:193.
Phillips KA, Morrison KR, Andersen R, Aday LA. Understanding the context of healthcare
utilization: assessing environmental and provider-related variables in the behavioral model of
Pronovost P, Needham D, Berenholtz S, et al. An intervention to decrease catheter-related
bloodstream infections in the ICU. NEJM. 2006; 355:2725–32.
Pronovost PJ, Goeschel CA, et al. Sustaining reductions in catheter related bloodstream
infections in Michigan intensive care units: observational study. BMJ. 2010;340:c309.
Purbey S, Mukherjee K, Bhar C: Performance measurement system for healthcare processes. Int J
Product Perform Manag 2007, 56:241–251.
Roter D, Larson S. The Roter interaction analysis system (RIAS): utility and flexibility for
analysis of medical interactions. Patient Educ Couns 2002;46:243–51.
Rycroft-Malone J, Kitson A, Harvey G, et al. Ingredients for change: revisiting a conceptual
framework. Qual Saf Health Care. 2002;11: 174–80.
Savage C, Parke L, von Knorring M, Mazzocato P. Does lean muddy the quality improvement
waters? A qualitative study of how a hospital management team understands lean in the context of
quality improvement. BMC Health Services Research (2016) 16:588
Schull M, Guttman A, Leaver C, et al. Prioritizing performance measurement for emergency
department care: consensus on evidence-based quality of care indicators. Can J Emerg Med 2011,
13:300–309.
Scott I. What are the most effective strategies for improving quality and safety of health care?
Intern Med J. 2009;39:389–400.
Shepperd S, Parkes J, et al. Discharge planning from hospital to home. Cochrane Database Syst
Rev. 2004;1:Cd000313
Shojania KG, McDonald KM, Wachter RM. Closing the quality gap: A critical analysis of quality
improvement strategies. Rockville: Agency for Healthcare Research and Quality; 2004.
Sorensen G, Emmons K, Hunt MK, et al. Model for incorporating social context in health behavior
interventions: applications for cancer prevention for working-class, multiethnic populations. Prev
Med. 2003;37(3):188-197.
Stone PW, Pogorzelska M, et al. Hospital staffing and health care-associated
‘Data and facts are not like pebbles on a beach, waiting to be picked up and collected. They can
only be perceived and measured through an underlying theoretical and conceptual framework,
which defines relevant facts, and distinguishes them from background noise’ (Wolfson, 1994).
Since interaction of factors at multiple levels may influence the success or failure of QI
interventions (Ferlie and Shortell 2001; Grol 1997; Shortell et al. 2000), understanding of these
factors (the obstacles and incentives for change) is crucial to an effective intervention (Grol and
Grimshaw 2003; Grol and Wensing 2004; van Bokhoven et al, 2003). Thus understanding of the
theoretical assumptions and hypotheses behind these factors is critical as it enables the
consideration of theory-based interventions for QI. Currently, most specific models or approaches
are based on implicit (and potentially biased) personal beliefs about human behavior and change
(Grol, 1997). There is a need for a set of theories regarding change in health care and for a more
systematic use of theories in planning and evaluating changes in clinical practice.
1) Process theories refer to the preferred implementation activities: how they should be planned,
organized, and scheduled in order to be effective (the organizational plan) and how the target
group will utilize and be influenced by the activities (the utilization plan).
2) Impact theories describe hypotheses and assumptions about how a specific intervention will
facilitate a desired change, as well as the causes, effects, and factors determining success (or
the lack of it) in improving health care.
Cognitive theories of change management focus on the (rational) processes of thinking and action
by individual professionals. Rational decision-making theories assume that in order to provide
optimal care, professionals must consider and balance the advantages and disadvantages of
different alternative behaviors. Such theories regard the provision of convincing information about
risks and benefits and pros and cons as crucial to performance change. Other cognitive theories
are more descriptive and illustrate how decisions are actually made: clinicians do not act rationally
but instead decide on the basis of their previous experiences and contextual information (Schmidt,
1984). In making a diagnosis, physicians use so-called illness scripts, or cognitive structures in
which they have organized their knowledge of a specific health problem and in which previous
experiences with specific patients are crucial to further decisions (Botti and Reeve 2003; van
Leeuwen et al. 1995). The cognitive theories explain behavior in terms of health professionals’
lack of relevant (scientific) information, incorrect expectations about the consequences of their
behavior, or attributions of outcomes to causes outside their control. Therefore, to change
performance, it might be critical to concentrate on how professionals think and make decisions
about their daily work and support more effective ways of decision making, for instance, by
supplying detailed guidelines, decision aids, and evidence-based clinical pathways and protocols.
2) Consistency
Cognitive mechanisms may prevent rational decision making. Professionals may use obsolete
information or poor experiences as the basis for performance (change) (Choudhry et al, 2005). For
instance, people prefer consistency in thinking and acting and so make choices that may not be
rational but fit existing opinions, needs, and behaviors. Thus if they do not like repeated hand
washing or doubt its effects, they may interpret or seek information that confirms their beliefs.
Also, people may seek an external explanation for specific events (infections) or behaviors instead
of an internal explanation in order to make it more acceptable to themselves or to fit it better to
their existing perceptions (Jones et al, 1972).
Educational Theories
1) Problem-Based Learning
Most educational theories focus on motivation to learn (and change) rather than on cognition.
For instance, adult learning theories state that people learn better and are more motivated to change
when they start with problems that they have had in practice than when they are pressured or
confronted with abstract information like guidelines (Holm 1998; Mann 1994; Merriam 1996;
Norman and Schmidt, 1992). Most healthcare professionals have wide experiences that they can
use as a source for learning and changing (Smith, Singleton, and Hilton, 1998). Differences
between novices and experts in health professions have been reported (Botti and Reeve, 2003; van
Leeuwen et al. 1995). For instance, in order to improve care for indwelling urethral catheters, care
providers first need to experience a problem (for instance, that their behavior may lead to catheter-
related complications in their patients) before they are motivated to do something about it. Here
the theory offers a framework in which to structure a discussion, identifying and applying past
experiences to solve this complicated problem within the current work setting. Not all care
providers have the competence or motivation for self-directed learning or self-assessment
(Norman, 2002). Professionals may also have different motives in regard to (self-directed) learning
and changing (Fox and Bennett, 1998; Stanley et al, 1993). Examples of these include a desire for
more social interaction, for meeting external expectations (including pressure from patients or
colleagues), for better serving others or society, for increasing professional competence or
professional status, for financial rewards, or for relief from boring routines or job frustrations.
2) Learning Style
Professionals’ personal learning style is another factor that influences change. There are four
learning styles (Lewis and Bolden, 1989): activist (people who like new experiences and therefore
accept but also abandon innovations quickly), reflective (people who want to consider all options
very carefully before changing), theoretical (people who prefer rigorous analysis and thought
before changing), and pragmatic (people who prefer to act on the basis of practical experience
with an innovation). These different learning styles, individual learning needs and personal
motives in healthcare professionals influence ‘change’.
Motivational Theories
The “theory of planned behavior” states that any given behavior by professionals is influenced by
their individual intentions (or motivation) to perform the specific behavior and these intentions are
determined largely by attitudes toward the behavior, perceived social norms, and perceived control
related to the behavior (Ajzen 1991). Attitudes toward a specific behavior are determined by the
expected outcomes of the behavior and the positive or negative appraisal of these outcomes
(whether it is worth the extra effort). The perceived social norms are influenced by the behavior
of other professionals (particularly colleagues).
2) Self-efficacy
Most of the theories related to social interaction discuss determinants of quality improvement
(change) in the interaction between an individual professional and others, such as the influence of
key individuals and opinion leaders, participation in social networks and teams, and the role of
leadership.
Several theories focus on effective communication aimed at changing individual attitudes and
behaviors as individuals interact through communication:
Bandura’s “social cognitive theory” (1986) explains the behavior of individuals in terms of
personal factors, behavioral factors, and context-related factors. Important contextual factors are
material or non-material rewards from others (such as positive feedback from peers or opinion
leaders) as well as modeling of the behavior by others. The basic assumption of this theory is that
there is a continuous interaction among a professional, his or her performance, and the social
environment, which reinforce one another in ‘changing’ performance. Likewise, through
modeling, an individual can observe in others that it is possible to perform the desired behavior
and that it will lead to the expected results.
Theories of the diffusion of innovations state that the adoption of new ideas and technologies is
largely influenced by the structure of social networks and by specific individuals in or at the
margins of these networks (Rogers, 1995). The pattern of links between individuals within a
network shapes threshold effects in the adoption of innovations. Relevant network characteristics that
may influence an effective transfer of information are the strength of the ties between members of
the network, the differences between interacting individuals (in networks of like individuals,
innovations are less likely to be adopted), and the proportion of the population that has already
adopted an innovation (Gladwell 2000; Valente 1996).
The social influence theories stress existing norms and values in the social network of
professionals as critical in influencing ‘change’. Performance in daily practice is assumed to be
based not on a conscious consideration of the advantages and disadvantages of specific behavior
but on the social norms in the network that define appropriate performance (Greer, 1988; Mittman
et al, 1992), such that ‘change’ occurs only after a local consensus is achieved. Interactions within
the social network, the views and expectations of significant peers, and the availability of
education influence effective implementation of innovations or changes.
Teamwork is seen as a way to tackle the fragmentation of care and improve patients’ quality of
both primary and hospital care (Clemmer et al, 1998; Firth-Cozens, 1998). Teams are also used to
improve care for specific groups (Shortell et al, 2004; Wasson et al, 2003), such as patients with a
chronic disease. The success of teams relies on their working toward a common, clear goal, such
that effective teams help clinical systems do their work, define and assign tasks and roles, train
individuals to perform tasks, and establish clear structures and processes for communication
(Grumbach and Bodenheimer, 2004). Factors that influence teamwork include the presence of a
team champion (Shortell et al, 2004), information sharing and trust (Firth-Cozens, 1998), team
vision, participation (how much the team participates in making decisions and whether team
members feel confident in proposing new ideas), task orientation (the commitment of team
members to perform to their optimum), support for innovation (West, 1990), and “structural
factors” such as team size, group composition (mix of skills), and geographical proximity or
separation of the team (Firth-Cozens, 1998). Studies in hospitals found that better team functioning
was significantly associated with better performance (Wheelan et al, 2003; Friedman and Berger,
2004), emphasizing that efforts should aim at encouraging team collaboration in healthcare.
Both formal and informal leaders can be very influential in changing clinical practice or
implementing new procedures or processes. Effective leadership promotes, guarantees, or (in some
circumstances) blocks innovations. This may occur through holding formal authority; controlling
scarce resources; possessing key information, expertise, or skills needed to achieve valued aims;
being part of a strong social network; or belonging to a dominant culture (Donaldson 1995;
Ovretveit 2004). Specific types of leadership probably are effective for particular innovations in
particular settings.
Total Quality Management (TQM), sometimes called Continuous Quality Improvement (CQI),
emphasizes the continuous improvement of (multidisciplinary) processes in healthcare in order to
better meet customers’ needs (Blumenthal and Kilo, 1998; Shortell et al, 1998). Inadequate
performance is perceived as a failure of the system, so that real change can be achieved only by
changing the whole system (Berwick 1989). Changing the organizational culture, identifying the
leadership, and building teams are components of this approach. The basic principles of TQM are
comprehensive, organization-wide efforts to improve quality, a focus centered on the patients (or
customers), continuous improvements and redesign of care processes by encouraging alternating
cycles of change followed by relative stability, management by facts, a positive view of people,
ongoing training for all staff, and a key role for the leadership (Berwick and Nolan 1998; Plsek et
al, 2003). PDSA cycles (Plan-Do-Study-Act cycles) to improve the provision of care (continuous
learning about change by introducing a change and reflecting on it) are an important tool for TQM.
Theories of integrated care stress the radical or gradual redesign of the steps in providing care.
Models for changing processes, such as Business Process Redesign (BPR) and disease
management, focus on improving the organization and management of care for specific categories of
patients so that their needs are more readily met and costs are reduced. Change is often better
achieved by redesigning multidisciplinary care processes than by influencing professional
decision making. It usually involves top-down, management-driven approaches in which current
practices and processes are analyzed, reconsidered, and fundamentally redesigned (Rogers 2003).
These approaches often include organizing new collaborations of care providers, allocating tasks
differently, and transferring information more effectively. Traditional boundaries between disciplines
are thereby less relevant, and multidisciplinary collaboration is crucial.
2) Complexity Theory
Complexity theory refers to systems behavior and systems change, starting from the assumption
that because the world of health care has become increasingly complex, it is important to observe
and improve systems as a whole instead of dividing them into parts or components. This theory
sees hospitals, primary care teams, or care organized around a specific disease or problem (stroke,
diabetes, infection control) as “complex adaptive systems.” These are defined as “a collection of
individual agents (components, elements) with the freedom to act in ways that are not always
totally predictable, and whose actions are interconnected, so that one agent’s actions changes the
context for other agents” (Plsek and Greenhalgh, 2001). The many components of complex
systems continuously interact, and these interactions are more important than the discrete actions
of individual agents or components (Sweeny and Griffiths, 2002). Such systems cannot be
adequately understood by analyzing their constituent parts. One implication of complexity theory
is that comprehensive plans with detailed targets for parts of the systems rarely improve patients’
care in complex systems. Rather, the focus should be on the system as a whole with simple goals
or minimal specifications (Plsek and Wilson, 2001), because the behavior of a complex system is
usually unpredictable over time, and small influences in one part of the system often have a large
impact elsewhere in the system or even outside the system. According to complexity theory, it is
important not to concentrate on single parts of the system but rather to set broad targets for change.
An organization’s culture can be altered to change performance (Scott et al, 2003b, Scott et al,
2003a). Organizational culture refers to “something an organization possesses,” an “attribute,” or
may refer to the “whole character and experience of organizational life” (Scott et al. 2003a).
form a culture, a group must have stability, shared experience and history. Over time, the group
learns to cope with its problems of external demands and internal integration and teaches these
values and underlying assumptions to new members. Therefore, culture consists of not only
observable features (such as a company’s mode of dress) but also a body of tacit knowledge
(information that people unconsciously possess). To improve quality, health care organizations
may need to develop a quality culture that emphasizes learning, teamwork and customer focus
(Ferlie and Shortell, 2001). Methods for promoting a quality culture start with the leadership’s
embracing the promotion of quality through the articulation of the organization’s mission and
vision, the engagement of people throughout the organization in quality, and attention to learning
(Boan and Funderburk, 2003). Several studies confirm the relationship between organizational
culture and health care performance (Scott et al, 2003b; Shortell et al, 1995). Cultures stressing
group affiliation, teamwork, and coordination were associated with greater improvement in
quality. One such model distinguishes four ideal cultural orientations:
a) A group or clan culture, emphasizing flexibility and change and characterized by strong human
relations, teamwork, affiliation, and a focus on the internal organization;
b) A developmental culture, emphasizing growth, creativity, flexibility, and adaptation to the
external environment;
c) A rational culture, externally focused but control oriented, emphasizing productivity and
achievement and external competition; and
d) A hierarchical culture, stressing stability especially in the internal organization, uniformity,
and a close adherence to rules (Stock and McDermott 2001).
4) Theory of Organizational Learning and Knowledge Management
Many proposals for implementation research projects or QI studies use models or frameworks to
guide their implementation planning. However, many of the models used are not based on theory,
or are only loosely based on the underlying theory from which they were derived. While the
models may have been linked to theories at their development, they are commonly restated
and reinterpreted, and the original tight linkage with theory is lost. A fully developed theory, in
the context of QI, is a theory that explains behavior change and addresses questions such as: how and
why do people or organizational entities behave as they do? Given their current behavior, what
would motivate them to change? What could explain the change in behavior? At the
organizational or system level, a theory should provide testable hypotheses and guidance to change
(or action) at both the individual and higher levels of the organization, addressing the subunit or
microsystem, or the unit level where the intervention (or change) is expected to occur (such as a
unit in a ward, a whole ward, a clinic, the whole hospital, or services at a district level or higher
levels). For instance, theories guiding social marketing could be used, together with ecological
models, to explain competition for scarce resources within an organization. Likewise, a model of
communication at the interpersonal level may explain the strategy of introducing a planned change
intended to have impact at the organizational level (a combination of individual, interpersonal and
organizational theories). Theories inform the models that provide the foundation or infrastructure of the change
and the ingredients of the QI change (Sales et al, 2006).
Grand theory, such as a theory of social inequality (Schon, 1991), is formulated at a high level of
abstraction, enabling it to make generalisations that apply across many different domains. While
such abstract or overarching theory does not usually provide specific rules that can be applied to
particular situations, it enables one to construct particular descriptions and themes, and can reveal
assumptions and world-views that would otherwise remain under-articulated or internally
contradictory. Middle (or ‘mid’)-range theories are delimited in their area of application, and are
intermediate between ‘minor working hypotheses’ and the ‘all-inclusive speculations comprising a
master conceptual scheme’. The initial formulation and reformulation of grand and mid-level
theories is useful in QI as it improves understanding of a problem or guides the development of specific
interventions. For example, the theory of the diffusion of innovations (Rogers, 2003; Grol and
Grimshaw, 2003) is a mid-range theory commonly used in QI, especially in interventions that rely on
social and professional networks, as it explains what makes innovations easier to try and how to
tailor innovations to make them consistent with existing systems (Lipsey, 1993; Weiss, 1995;
Rogers et al, 2000; Chen, 2005). Likewise, the Normalisation Process Theory (May, 2013)
describes how practices can become routinely embedded in social contexts.
Initiatives to improve quality and safety in healthcare frequently result in limited changes for the
better or no meaningful changes at all, and the few that are successful are often hard to sustain or
replicate in new contexts (Holden, 2004; Dixon-Woods et al, 2013). This is partly due to the
enormous complexity of healthcare delivery systems, including their challenging technical, social,
institutional and political contexts. However, failure may also be attributed to a persistent failure to
employ informal and formal theory in planning and executing improvement efforts (Davies et al, 2010;
Foy et al, 2011). The explicit application of theory could shorten the development time of
improvement interventions, optimize their design, identify contextual factors that may facilitate
success, and enhance learning from those efforts (Foy et al, 2011; Grol et al, 2007, French et al,
2012; Grol et al, 2013; Marshall et al, 2013).
Failure to use theory may lead to confusion about the results of QI efforts. For instance, the
Administrative Data Feedback for Effective Cardiac Treatment (AFFECT) study reported negative
results from a trial of administrative data feedback for improving hospital performance on key
indicators of cardiac care (Beck et al,
2005). The study design was guided by empirical results and insights from previous studies, but
no explicit theories of individual or organizational behavior change were applied in planning the
design or conducting the study. While several limitations were acknowledged, the authors did not
address the question of “why” efforts were unsuccessful beyond pointing to elements that could
have been improved. Theoretical perspectives, such as those underlying the use of opinion leaders
to influence key stakeholders within the target organizations in the study, or the concept of
intensity or dose of intervention, could have markedly improved the design and conduct of the
study as well as the interpretation of results. Therefore, for interventions to induce planned change
in healthcare, theory provides clues to the mechanism(s) by which the intervention is or is not
successful. Without explicit attention to theory, many key aspects of the intervention may be
ignored.
The theory selected must be used rather than mentioned. Even when theory is used to frame or
inform a QI study, it may then be largely ignored in the selection of strategies, interventions,
tools and measurements, and in the interpretation of results (Sales, 2006). One problem
with having little or no theoretical basis for intervention planning is that strategies adopted for
implementation, and tools selected as mechanisms to induce behavior change, are neither tightly
linked to strategy nor to any underlying theory. Thus a theory is mentioned to inform the design,
but is not “used”. As a result, there is little reason to believe a priori that the strategies and actions
for change (which constitute the intervention) will succeed in inducing the QI (behavior) change.
Any QI intervention that invokes a theoretical framework for change (one that specifies reasons
for behavior change at the individual, interpersonal, organizational or system-wide level) should
indicate how the theory is applied as part of both the planning process and
implementation phase. As part of this approach, models are considered, strategies selected, and
tools chosen (created, adopted, and/or adapted for use in the implementation process) in line with
the theory or theoretical framework (Sales et al, 2006).
The explicit use of theory anchors the intervention to its context, beyond motivating intervention
planning, design and conduct, as it explains the interaction between individuals, organizations and
the contexts in which the QI intervention occurs. Use of theory may be most helpful when the targeted
action takes place in an organization with multiple actors, multiple layers, and complex factors
affecting decision-making processes, which characterizes almost any health care organization. The
interaction, particularly in complex organizations such as those in healthcare, is critical to selecting
appropriate theory to predict both individual behavior change, and change in an organizational
context and the influence of the external environment. There are many diverse theories that
describe processes contributing to organizational change, context and culture (Davis et al, 1995;
Ferlie, 1997; Ferlie et al, 2000; Rycroft-Malone et al, 2002; Eccles et al, 2003; Walker et al, 2003;
Grol and Wensing, 2004; Rhydderch et al, 2004). Theories of organizational change rarely apply
to planned activities of change, particularly when the change operates at levels within the
organization and does not necessarily affect the organization as a whole (Sales et al, 2006). Thus
the choice of theory must be linked to the particular context where the desired change is to occur.
Using theories explicitly
Making explicit the theoretical assumptions behind the choice of interventions should be important
to both researchers and change agents, for several reasons:
1) The use of theory can offer a generalizable framework for considering effectiveness across
different clinical conditions and settings (Eccles et al., 2005).
2) Basing interventions or a change program on different theoretical assumptions should prevent
overlooking important factors (ICEBeRG, 2006).
3) Several factors at different levels of health care (professional, social context, organizational or
economic) usually are important to improving patient care (Ferlie and Shortell, 2001; Grol,
1997), so hypotheses regarding effective change that are derived from different theories should
be useful.
4) Use of theory-driven QI change interventions helps in deciding on the best approaches, as the
theory highlights the drivers of change and the nature of change expected (personal,
interpersonal, organizational, system-wide or impact-wide changes).
5) Delineating the quality (patient safety or healthcare) problem and choosing which interventions
are effective begins with a synthesis of the literature. Failure to use a theory creates problems
when applying evidence from a systematic review of such quality-improving interventions
(Peterson, 2005). For instance, a systematic review of audit and feedback, intended to inform
decisions about how best to use audit and feedback in future intervention efforts, noted the
authors’ inability to glean information on key aspects of conducting audit and feedback from
the published literature (Foy et al, 2005). Failure to use a theory explicitly therefore impedes
learning “why” and “how” from prior efforts, beyond identifying success or failure in specific
attempts.
6) The need for more effective use of formal theory in improvement is increasingly an imperative
because application of formal theory enables the maximum exploitation of learning and
accumulation of knowledge, and promotes the transfer of learning from one project, one
context, one challenge, to the next. Where hypothesis-testing clinical research may demand the
development of, and rigorous adherence to, fixed study protocols and invariant interventions,
QI is different, and may require repeated adjustment and refinement of interventions, often in
a series of experiential learning cycles, in order to use interventions that are intentionally
adapted in light of emergent information and evaluation (Parry et al, 2013; Lagoa et al, 2014).
Understanding how individuals solve particular problems in field settings requires a strategy
of moving back and forth from the world of theory to the world of action. Without theory, one
can never understand the general underlying mechanisms that operate in many appearances in
different situations. If not harnessed to empirical problems, theoretical work can spin off under
its own momentum, reflecting little of the empirical world.
How to use a theory in quality improvement research or interventions
Most attempts to implement evidence-based practices in clinical settings are either only partially
successful, or unsuccessful, in the attempt (Oxman et al, 1995; Grimshaw et al, 2002; Eccles et al,
2004; Holden, 2004; Shojania and Grimshaw, 2004; Eccles et al, 2005). Explicitly outlining and
understanding some form of theory that explains why an intervention may work to
induce planned change is a critical step in planning interventions to change provider or patient
behavior, particularly in order to promote evidence-based quality care. In quality improvement,
there may be a reluctance to examine theoretical bases for planning implementation activities and
efforts. This arises partly from the perceived need to differentiate between the nature of quality
improvement activities and the nature of the research component inherent in quality
improvement, where initiatives that focus solely on QI may not perceive the need for, and relevance of,
a theory of change. Yet there is a need for careful consideration of theory in planning to implement
evidence-based practices in clinical care. The theory should be tightly linked to strategic
planning through careful choice or creation of the design, choice of interventions, evaluation of
the context and an implementation strategy or framework (Sales et al, 2006). Strategies should
be linked to specific interventions and/or intervention components to be implemented. The choice
of tools should match the interventions and overall strategy, linking back to the original theory and
framework, so that investigators can assess a need to modify the theory or not (Sales et al, 2006).
In most studies where there is an attempt to implement planned change in clinical processes, theory
is used inappropriately, if at all. Planning should begin by asking: what is the source of the problem,
and what intervention could effect desirable change? Tailoring an intervention to a specific context
requires development of tools that are usually very specific to both the intervention and the context
in which the intervention will take place. While several tools exist, most are specific to the
intervention or context where they were developed.
The importance of linking the problem, the context, the theory and the QI strategy
The number of quality improvement (QI) initiatives is increasing in an attempt to improve quality
of care, improve performance or reduce unwarranted variation. While it is essential to understand
the effectiveness of these initiatives, many lack an underlying theory linking a change to
its intended outcome, which inhibits the ability to demonstrate causality and hinders widespread
uptake (Shojania et al, 2005; Davies et al, 2010; Foy et al, 2011). Programme theory is used to
describe an intervention and its hypothesized effects in a particular context and is critical to support
both high-quality evaluation and the development of interventions and implementation plans
(Weiss, 1997; Grol et al, 2007; Dixon-Woods et al, 2012).
Often in QI, a theory is not used. Sometimes only the source of the problem is identified but not
an accompanying theory of change. Improvement interventions are also commonly launched
without either a good outcome measurement plan or the baseline data required for meaningful
time-series analyses (Pronovost et al, 2007; Walshe, 2007; Scott 2009; Pronovost 2011). This
often results in improvement interventions that remain unclear about the specifics of the desired
behaviours, the social and technical processes they seek to alter, the means by which the proposed
interventions might achieve their hoped-for effects in practice, and the methods by which their
impact will be assessed. Even published descriptions of what the intervention consists of are often
poor. Failure to use the various elements of formal theory adequately has frustrated the
understanding of effectiveness of improvement interventions, and limits learning that may inform
planning of future interventions. Failure to employ a theory leads to poor understanding of what
an intervention really consists of, what it does, and how it works, which curtails the meaningful
replication of interventions that were successful in their original context. Without a good grasp of
the underlying theory and its critical components or constructs, improvers may adopt the label or
outward appearance attached to a successful intervention, which does not permit them to reproduce
its impact. This anomaly may explain the studies that come up with contradictory findings, such
as from checklists (Haynes et al, 2009; Aveling et al, 2013) or explain the limited effectiveness of
interventions (Hillman et al, 2001; Winters et al, 2013).
Developing and Applying Programme Theory
Use of program theory in QI
The identification and articulation of programme theory can support effective design, execution
and evaluation of quality improvement (QI) initiatives. Programme theory includes an agreed aim,
potential interventions to achieve this aim, anticipated cause/effect relationships between the
interventions and the aim, and measures to monitor improvement. One such method for articulating
programme theory is the Action Effect Method. Applying it begins by building a driver diagram,
which is then iterated over several rounds of improvement initiatives; this results in specification
of the elements required to fully articulate the programme theory of a QI initiative. Development
of programme theory can provide a means to tackle common social challenges of QI, such as
creating a shared strategic aim and increasing acceptance of interventions. While other QI methods
for identifying and articulating theory and causal relationships exist, the action effect method
is a systematic and structured process to identify and articulate a QI initiative’s programme theory.
The method connects potential interventions and implementation activities with an overall
improvement aim through a diagrammatic representation of hypothesized and evidenced
cause/effect relationships. Measure concepts, in terms of service delivery and patient and system
outcomes, are identified to support evaluation. The action effect method provides a framework to
guide the execution and evaluation of a QI initiative, a focal point for other QI methods and a
communication tool to engage stakeholders. A clear definition of what constitutes a well-
articulated programme theory is provided to guide the use of the method and assessment of the
fidelity of its application.
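The elements the method specifies (an overall aim, drivers and interventions, hypothesized cause/effect links, and measure concepts) can be represented as a simple tree. The sketch below is illustrative only; the diagram contents and all names are hypothetical, not part of the Action Effect Method itself:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One element of a driver diagram: the aim, a driver, or an intervention."""
    label: str
    measure: str = ""          # measure concept attached to this element, if any
    children: list = field(default_factory=list)

def measure_concepts(node):
    """Collect every measure concept in the diagram, depth-first,
    to support the evaluation plan."""
    found = [node.measure] if node.measure else []
    for child in node.children:
        found.extend(measure_concepts(child))
    return found

# Hypothetical diagram for an infection-prevention initiative
aim = Node("Reduce surgical site infections", "SSI rate (outcome)", [
    Node("Reliable skin preparation", "% compliant preps (process)",
         [Node("Standardised prep checklist")]),
    Node("Timely prophylactic antibiotics", "% doses on time (process)"),
])

print(measure_concepts(aim))
```

Walking the tree yields the measure concepts (outcome and process) that the evaluation would need to monitor, mirroring how the diagram links interventions to the aim.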
An improvement team should begin by sketching out an intervention, then identifying its
components and the relationships that link their application with the desired outcomes. After this,
a theory of change is selected and applied. Grand and mid-range theories can be especially helpful in generalising
learning from situations that initially appear new and unique, partly through distinguishing
proximal causes (the most immediate action that makes something happen) from distal causes
(deeper structures that may lie behind patterns of effects). Combined formal and informal theory
can serve more effectively as the basis for decision-making and action than either kind of theory
by itself. In important ways, this blending of informal and formal theories resembles the process
of formulating accurate diagnoses in medical practice. The value of combining informal and
formal theory highlights the point that improvement interventions do not always need to flow
deductively from established formal theories.
When the systems-like thinking that underpins systems theory is applied to healthcare, the
relationships within the system are recognized as being as important as the component parts. Interdisciplinary
relationships, such as those among disciplines like nursing, medicine, social work, and
administration that are central to social processes in a healthcare system, cannot be taken for
granted. Planning in healthcare systems often involves little attention to these relationships and
frequently fails because unanticipated behaviors emerge from the interaction of the
component parts. A systems thinking approach helps to prevent system failure and therefore supports
QI by enabling healthcare workers to:
1) Improve communication among subsystems within the larger system
2) Create and manage effective teams
3) Establish trust through generative relationships
4) Support interdisciplinary collaborative practices
5) Recognize the importance of conflict-management education
6) Focus on processes rather than staff
7) Reduce power differentials between groups and subsystems
8) Embrace ongoing education
9) Improve morale through autonomy and point-of-service involvement
10) Encourage creativity and innovative problem solving
11) Strengthen the hierarchical components that support quality
12) Emphasize behavioral competency as well as skill competency
QI Models
Quality Improvement is a formal approach to the analysis of performance and systematic efforts
to improve it. There are numerous models in use; commonly discussed ones include FADE, PDSA,
Continuous Quality Improvement (CQI), and Total Quality Management (TQM). These models are
all means to get at the same thing: Improvement. They are forms of ongoing effort to make
performance better. In industry, quality efforts focus on topics like product failures or work-related
injuries. In administration, one can think of increasing efficiency or reducing re-work. In medical
practice, the focus is on reducing medical errors and needless morbidity and mortality. The
following may be employed:
1) 5S strategy (Sort, Set in order, Shine, Standardize and Sustain),
2) Continuous Quality Improvement (CQI).
The PDSA (plan, do, study, act) cycle approach of small-scale, rapid tests of change is a recognized
approach to achieving this. Using this approach, changes can be tested, refined and re-tested a
number of times, quickly and with minimal resource use, until the change is reliable. The PDSA
Model for Improvement provides a framework for developing, testing and implementing changes
that lead to improvement. You must resist the temptation to rush into organizational or
departmental changes to systems without testing the change first to check that it actually brings
about improvement. For example, if unreliable commode cleaning is identified through use of the
IPS QITs, then a solution to the problem should be tested with one staff member and one commode,
and if successful, extended to two staff and so on. If unsuccessful, an alternative approach can be
tested.
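The test-then-extend pattern just described (one staff member and one commode, then two, and so on) can be sketched as a simple loop over PDSA cycles. This is a minimal illustration; the doubling rule, scale limit, and pass/fail test are hypothetical choices, not part of the Model for Improvement itself:

```python
def pdsa_ramp(test_change, start_scale=1, max_scale=8):
    """Run Plan-Do-Study-Act cycles, doubling the scale of the test after
    each successful cycle; stop and report if a cycle is unsuccessful."""
    scale = start_scale
    history = []
    while scale <= max_scale:
        success = test_change(scale)   # Do + Study: run the test at this scale
        history.append((scale, success))
        if not success:
            return history, "revise change"   # Act: refine and re-test
        scale *= 2                            # Act: extend the test
    return history, "implement widely"        # Act: adopt the reliable change

# Hypothetical test: the change works reliably only up to a scale of 4 staff
result, decision = pdsa_ramp(lambda scale: scale <= 4)
print(result, decision)
```

The history records each cycle's scale and result, so the team can see where reliability broke down before committing to an organization-wide change.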
Measurement of ‘Change’ in Quality Improvement
“If you cannot measure it, you cannot improve it” – Lord Kelvin (1824–1907).
Measurement is vital for quality improvement. There are three sets of ‘measures’ required for
quality improvement:
Outcome measures: These are the results of care processes and measure the results of quality
improvement work. In infection prevention and control the outcome measure can be rates of
specific infections e.g. surgical site infection, or new cases of MRSA bacteremia. Outcome
measures are important as motivators to improve and ways of celebrating success.
Structure and Process measures: Measuring what actually happens in care is central to improving
quality. The IPS Quality Improvement Tools are designed to facilitate the measurement of
structure and process in infection prevention and control.
Balancing measures: It is sometimes necessary when making changes to care systems to look for
and examine any potential ‘side effects’ of the change, i.e. an unintended and adverse effect. An
example is when making changes to reduce the length of hospital stay; is the readmission rate
increased? For quality improvement the main purpose of measurement is to learn about the
processes that we are seeking to improve.
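The balancing-measure check described above (did reducing the length of stay increase the readmission rate?) can be sketched with before/after data. All numbers here are hypothetical, for illustration only:

```python
# Hypothetical before/after data for a change intended to shorten hospital stay
before = {"mean_stay_days": 6.2, "readmissions": 12, "discharges": 200}
after = {"mean_stay_days": 5.1, "readmissions": 21, "discharges": 210}

def readmission_rate(period):
    """Balancing measure: readmissions per 100 discharges."""
    return 100 * period["readmissions"] / period["discharges"]

# Outcome measure: did the result of the care process improve?
stay_improved = after["mean_stay_days"] < before["mean_stay_days"]
# Balancing measure: did an unintended 'side effect' appear?
side_effect = readmission_rate(after) > readmission_rate(before)

print(stay_improved, side_effect)
```

In this invented example the outcome measure improves while the balancing measure worsens (6.0 to 10.0 readmissions per 100 discharges), which is exactly the signal that a change may need re-examination before being declared a success.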
The characteristics of measurement for learning and improvement are:
a) Measure just what you need to measure and no more (make the measurement quick and easy
to do as far as possible)
b) Measure frequently and regularly and use simple and easy to understand ways of feeding back
measurement to care workers engaged in improvement work (e.g. using simple annotated run
charts). Presentation of the results of QIT use will achieve this.
c) Mainly measure processes to see if we are doing what we should be doing, and doing
it reliably using the PITs then the RITs on a regular basis
d) Use measurement to learn, not to blame,
e) Quality improvement methods and tools, based as they are on industrial approaches, give us
the opportunity to make real breakthroughs in healthcare quality and in particular safety.
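A common rule for the annotated run charts mentioned above flags a ‘shift’ when six or more consecutive points fall on the same side of the median (exact rules vary between rule sets). A minimal sketch, with hypothetical monthly infection counts:

```python
def detect_shift(values, run_length=6):
    """Flag a run-chart 'shift': run_length or more consecutive points
    all on the same side of the median (points on the median are skipped)."""
    ordered = sorted(values)
    n = len(ordered)
    median = ordered[n // 2] if n % 2 else (ordered[n // 2 - 1] + ordered[n // 2]) / 2
    run, side = 0, 0
    for v in values:
        if v == median:        # points on the median neither extend nor break a run
            continue
        s = 1 if v > median else -1
        run = run + 1 if s == side else 1
        side = s
        if run >= run_length:
            return True
    return False

# Hypothetical monthly infection counts: the first six points sit above the median
print(detect_shift([9, 8, 10, 9, 11, 8, 4, 4, 3, 5, 4, 2]))
```

Such a rule lets care workers read improvement (or deterioration) off a simple run chart without statistical training, supporting measurement for learning rather than blame.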
The focus within quality improvement on systems thinking, reliability, testing changes and
measurement has prompted IPS to move away from traditional ‘audit tools’ and develop this suite
of Quality Improvement Tools, and to endorse this approach to reducing the risk of infection and
making safety the norm in care settings. These tools will assist all care workers to measure and
improve their systems of infection prevention and control.
References
Ajzen, I. 1988. Attitudes, Personality and Behaviour. Milton Keynes, U.K.: Open University Press.
Ajzen, I. 1991. The Theory of Planned Behaviour. Organizational Behavior and Human Decision
Processes 50:179–211.
Argyris, C., and D. Schön. 1978. Organizational Learning: A Theory of Action Perspective.
Reading, Mass.: Addison-Wesley.
Ashford, A.J. 1998. Behavioural Change in Professional Practice. Supporting the Development
of Effective Implementation Strategies. Newcastle upon Tyne: Centre for Health Services
Research.
Aveling E, McCulloch P, Dixon-Woods M. A qualitative study comparing experiences of the
surgical safety checklist in hospitals in high-income and low-income countries. BMJ Open
2013;3:e003039
Bandura, A. 1986. Social Foundation of Thought and Action: A Social Cognitive Theory. New
York: Prentice-Hall.
Bandura, A. 1997. The Anatomy of Stages of Change. American Journal of Health Promotion
12:8–10.
Bartholomew, L.K., G.S. Parcel, G. Kok, and N.H. Gottlieb. 2001. Intervention Mapping:
Designing Theory- and Evidence-Based Health Promotion Programs. New York: McGraw-Hill.
Batalden, P.B., and P.K. Stoltz. 1993. A Framework for the Continual Improvement of Health
Care. Joint Commission Journal on Quality and Patient Safety 19:424–52.
Beck CA, Richard H, Tu JV, Pilote L. Administrative data feedback for effective cardiac treatment:
AFFECT, a cluster randomized trial. JAMA. 2005;294:309–17.
Berwick, D.M. 1989. Continuous Improvement as an Ideal in Health Care. New England Journal
of Medicine 320(1):53–56.
Berwick, D.M., A.B. Godfrey, and J. Roessner. 1990. Curing Health Care. San Francisco: Jossey-
Bass.
Berwick, D.M., and T.W. Nolan. 1998. Physicians as Leaders in Improving Health Care. Annals
of Internal Medicine 128:289–92.
Blumenthal, D., and C.M. Kilo. 1998. A Report Card on Continuous Quality Improvement. The
Milbank Quarterly 76:625–48.
Bower, P., S. Campbell, C. Bojke, and B. Sibbald. 2003. Team Structure, Team Climate and the
Quality of Care in Primary Care: An Observational Study. Quality and Safety in Health Care
12(4):273–79.
Eccles MP, Grimshaw JM. Selecting, presenting and delivering clinical guidelines: are there any
‘‘magic bullets’’? Med J Aust. 2004;180(suppl): S52–S54.
Ferlie E, Fitzgerald L, Wood M. Getting evidence into clinical practice: an organisational
behaviour perspective. J Health Serv Res Policy. 2000;5:96–102.
Ferlie E. Large-scale organizational and managerial change in health care: a review of the
literature. J Health Serv Res Policy. 1997;2:180–9.
Ferlie, E.B., S.M. Shortell. 2001. Improving the Quality of Health Care in the United Kingdom
and the United States: A Framework for Change. The Milbank Quarterly 79(2):281–315.
Firth-Cozens, J. 1998. Celebrating Teamwork. Quality in Health Care 7:S3–S7.
Fishbein, M., and I. Ajzen. 1975. Belief, Attitude, Intention and Behavior. New York:Wiley.
Fox, R.D., and N.L. Bennett. 1998. Learning and Change: Implications for Continuing Medical
Education. British Medical Journal 316:466–68.
Foy R, Eccles MP, Jamtvedt G, et al. What do we know about how to do audit and feedback?
Pitfalls in applying evidence from a systematic review. BMC Health Serv Res. 2005;5:50.
Foy R, Ovretveit J, Shekelle PG, et al. The role of theory in research to develop and evaluate the
implementation of patient safety practices. BMJ Qual Saf 2011;20:453–9.
Foy, R., G. MacLennan, et al. 2002. Attributes of clinical recommendations that influence change
in practice following audit and feedback. Journal of Clinical Epidemiology 55:717–22.
Frambach RT, N. Schillewaert. 2002. Organizational innovation adoption. A multi-level
framework of determinants and opportunities for future research. J Business Res 55:163–76.
French SD, Green SE, O’Connor DA, et al. Developing theory-informed behaviour change
interventions to implement evidence into practice: a systematic approach using the Theoretical
Domains Framework. Implementation Science 2012, 7:38
Friedman, D.M., and D.L. Berger. 2004. Improving Team Structure and Communication. Archives
of Surgery 139:1194–98.
Garavelli, A.C., M. Gorgoglione, and B. Scozzi. 2002. Managing Knowledge Transfer by
Knowledge Technologies. Technovation 22:269–79.
Gardner B, Whittington C, McAteer J, et al. Using theory to synthesize evidence from behaviour
change interventions: the example of audit and feedback. Soc Sci Med 2010;70:1618–25.
Garside, P. 1998. Organizational Context for Quality: Lessons from the Fields of Organizational
Development and Change Management. Quality in Health Care 7:S8–S15.
Grol, R., and M.Wensing. 2005b. Effective Implementation: A Model. In Improving Patient Care;
the Implementation of Change in Clinical Practice, edited by R. Grol, M. Wensing, and M. Eccles,
41–58.
Grol, R., M.Wensing, and M. Eccles, eds. 2005. Improving Patient Care; the Implementation of
Change in Clinical Practice. Oxford: Elsevier.
Grumbach, M.D., and T. Bodenheimer. 2004. Can Health Care Teams Improve Primary Care
Practice? Journal of the American Medical Association 291(10):1246–51.
Ham, C. 2003. Improving the Performance of Health Services: The Role of Clinical Leadership.
Lancet 361:1978–80.
Haynes AB, Weiser TG, Berry WR, et al. Safe Surgery Saves Lives Study Group. A surgical
safety checklist to reduce morbidity and mortality in a global population. N Engl J Med
2009;360:491–9.
Hillman K, Parr M, Flabouris A, et al. Redefining in-hospital resuscitation: the concept of the
medical emergency team. Resuscitation 2001;48:105–10.
Holden JD. Systematic review of published multi-practice audits from British general practice. J
Eval Clin Pract. 2004;10:247–72.
Holm, H.A. 1998. Quality Issues in Continuing Medical Education. British Medical Journal
316:621–24.
ICEBeRG. 2006. Designing Theoretically-Informed Implementation Interventions.
Implementation Science 1:4.
Jones, E.E., D.E. Kannouse, H.H. Kelley, R.E. et al. 1972. Attribution: Perceiving the Causes of
Behavior. Morristown, N.J.: General Learning Press.
Kitson, A., G. Harvey, B. McCormack. 1998. Enabling the Implementation of Evidence Based
Practice: A Conceptual Framework. Quality in Health Care 7(3):149–58.
Kok, G.J., H. De Vries, A.N. Mudde, V.J. Strecher. 1991. Planned Health Education and the Role
of Self-Efficacy: Dutch Research. Health Education Research 6:231–38.
Lähteenmäki, S., J. Toivonen, M. Mattila. 2001. Critical Aspects of Organizational Learning
Research and Proposals for Its Measurement. British Journal of Management 12:113–29.
Laffel, G., and D. Blumenthal. 1989. The Case for Using Industrial Quality Management Science
in Health Care Organization. JAMA 262:2869–73.
Lagoa CM, Bekiroglu K, Lanza ST, et al. Designing adaptive intensive interventions using
methods from engineering. J Consult Clin Psychol 2014;82:868–78.
Langley, G., K. Nolan, T. Nolan, C.L. Norman, and L.P. Provost. 1996. The Improvement Guide.
San Francisco: Jossey-Bass.
Lewis, A.P., and K.J. Bolden. 1989. General Practitioners and Their Learning Styles. Journal of
the Royal College of General Practitioners 39:187–99.
Lipsey MW. Theory as method: small theories of treatments. New Dir Programme Eval
1993;57:5–38.
Lomas, J., and R.B. Haynes. 1988. A Taxonomy and critical review of tested strategies for the
application of clinical practice recommendations: from “official” to “individual” clinical policy.
American Journal of Preventive Medicine 4(suppl.):77–94.
Loo, R. 2003. Assessing “Team Climate” in Project Teams. International Journal of Project
Management 21:511–17.
Maibach, E., and D.A. Murphy. 1995. Self-Efficacy in Health Promotion Research and Practice:
Conceptualization and Measurement. Health
Mann, K.V. 1994. Educating Medical Students: Lessons from Research in Continuing Education.
Academic Medicine 69:41–47.
Marshall M, Pronovost P, Dixon-Woods M. Promotion of improvement as a science. Lancet
2013;381:419–21.
May C. Towards a general theory of implementation. Implement Sci 2013;8:18.
McGuire, W. 1981. Theoretical foundation of campaigns. In Public Communications Campaigns,
edited by R. Rice and W. Paisley. Beverly Hills, Calif.: Sage.
McGuire, W. 1985. Attitudes and Attitude Change. In The Handbook of Social Psychology, 2nd
ed., edited by G. Lindzey and E. Aronson, 233–46. Beverly Hills, Calif.: Sage.
Merriam, S.B. 1996. Updating our knowledge of adult learning. Journal of Continuing Education
in the Health Professions 16:136–43.
Michie, S., C. Abraham. 2004. Interventions to change health behaviours: evidence-based or
evidence-inspired? Psychology and Health 19(1):29–49.
Michie, S., M. Johnston, C. Abraham, et al. 2005. Making Psychological Theory Useful for
Implementing Evidence Based Practice: A Consensus Approach. Quality and Safety in Health Care
14:26–33.
Michie S, Fixsen D, Grimshaw JM, Eccles MP: Specifying and reporting complex behaviour
change interventions: the need for a scientific method. Implement Sci 2009, 4:40.
Mittman, B.S., X. Tonesk, P.D. Jacobson. 1992. Implementing Clinical Practice Guidelines: Social
Influence Strategies and Practitioner Behaviour Change. Quality Review Bulletin 18:413–22.
Nevis, E.C., A.J. DiBella, J.M. Gould. 1995. Understanding Organizations as Learning Systems.
Sloan Management Review 36:73– 85.
Norman, G.R. 2002. Research in Medical Education: Three Decades of Progress. British Medical
Journal 324:1560–62.
Norman, G.R., H.G. Schmidt. 1992. The Psychological Basis of Problem-Based Learning: A Review
of the Evidence. Academic Medicine 67:557–65.
Nylenna, M., E. Falkum, O.G. Aasland. 1996. Keeping Professionally Updated: Perceived Coping
and CME Profiles among Physicians. Journal of Continuing Education in the Health Professions
16:241–49.
Örtenblad, A. 2002. A Typology of the Idea of Learning Organization. Management Learning
33(2):213–30.
Ovretveit, J. 1999. A Team Quality Improvement Sequence for Complex Problems. Quality in
Health Care 8:239–46.
Ovretveit, J. 2004. The Leaders’ Role in Quality and Safety Improvement; a Review of Research
and Guidance; the “Improving Improvement Action Evaluation Project.” Fourth Report.
Stockholm:
Oxman AD, Thomson MA, Davis DA, Haynes RB. No magic bullets: a systematic review of 102
trials of interventions to improve professional practice. CMAJ 1995;153:1423–31.
Parry GJ, Carson-Stevens A, Luff DF, et al. Recommendations for evaluation of health care
improvement initiatives. Acad Pediatr 2013;13(6 Suppl):S23–30.
Peterson ED. Optimizing the science of quality improvement. JAMA. 2005;294:369–71.
Petty, R.E., and R.T. Cacioppo. 1986. The Elaboration Likelihood Model of Persuasion. In
Advances in Experimental Social Psychology, edited by L. Berkowitz, 123–205. New York:
Academic Press.
Petty, R.E., D.T. Wegener, and L.R. Fabrigar. 1997. Attitudes and Attitude Change. Annual
Review of Psychology 48:609–48.
Plsek PE. Tutorial: management and planning tools of TQM. Qual Manag Health Care 1993;1:59–72.
Plsek, P., L. Solberg, R. Grol. 2003. Total Quality Management and Continuous Quality
Improvement. In Oxford Textbook of Primary Medical Care, edited by R. Jones et al., 490–95.
Oxford: Oxford University Press.
Plsek, P.E., T. Greenhalgh. 2001. Complexity Science: The Challenge of Complexity in Health
Care. British Medical Journal 323:625–28.
Prochaska, J.O., and W.F. Velicer. 1997. The Transtheoretical Model of Health Behavior Change.
American Journal of Health Promotion 12:38–48.
Pronovost PJ, Miller M, Wachter RM. The GAAP in quality measurement and reporting. JAMA
2007;298:1800–2.
Provost LP. Analytical studies: a framework for quality improvement design and analysis. BMJ
Qual Saf 2011;20(Suppl 1):i92–6.
Rhydderch M, Elwyn G, Marshall M, Grol R. Organisational change theory and the use of
indicators in general practice. Qual Saf Health Care. 2004;13:213–7.
Robertson, N., R. Baker, H. Hearnshaw. 1996. Changing the Clinical Behaviour of Doctors: A
Psychological Framework. Quality in Health Care 1:51–54.
Rogers EM. Diffusion of innovations. 5th edn. New York, NY: Free Press, 2003.
Rogers PJ, Petrosino A, Huebner TA, et al. Programme theory evaluation: practice, promise, and
problems. New Dir Eval 2000;87:5–13.
Rogers, E.M. 1983. Diffusion of Innovations. New York: Free Press.
Rogers, E.M. 1995. Diffusion of Innovations. 4th ed. New York: Free Press.
Rogers, S. 2003. Continuous Quality Improvement: Effects on Professional Patient Outcomes
(Protocol for a Cochrane Review). In The Cochrane Library, no. 2. Oxford: Update Software.
Rossi, P., H. Freeman, M. Lipsey. 1999. Evaluation: A Systematic Approach. 6th ed. Newbury
Park, Calif.: Sage.
Rycroft-Malone J, Kitson A, Harvey G, et al. Ingredients for change: revisiting a conceptual
framework. Qual Saf Health Care. 2002;11: 174–80.
Sales, A, Smith J, Curran G, Kochevar, L. Models, strategies, and tools. Theory in implementing
evidence-based findings into health care practice. J Gen Intern Med 2006; 21:S43–49
Scarbrough, H., and J. Swan. 2001. Explaining the diffusion of knowledge management: the role
of fashion. British Journal of Management 12:3–12.
Schein, E.H. 1985. Organizational Culture and Leadership. San Francisco: Jossey-Bass.
Schmidt, H., ed. 1984. Tutorials in Problem-Based Learning. Assen/Maastricht: Van Gorcum.
Schon DA. The reflective practitioner: how professionals think in action. Aldershot, UK:
Ashgate Publishing, 1991.
Scott I. What are the most effective strategies for improving quality and safety of health care?
Intern Med J 2009;39:389–400.
Scott, T., R. Mannion, H. Davies, M.N. Marshall. 2003a. Implementing Culture Change in Health
Care: Theory and Practice. International Journal for Quality in Health Care 15(2):111–18.
Scott, T., R. Mannion, M. Marshall, H. Davies. 2003b. Does Organizational Culture Influence
Health Care Performance? A Review of the Evidence. Journal of Health Services Research and
Policy 8:105–17.
Scott, W.R. 1990. Innovation in Medical Care Organizations. A Synthetic Review. Medical Care
Review 47:165–92.
Senge, P.M. 1990. The Fifth Discipline; the Art and Practice of the Learning Organization.
London: Random House.
Shojania KG, Grimshaw JM. Evidence-based quality improvement: the state of the science. Health
Aff (Millwood) 2005;24:138–50.
Shojania KG, Grimshaw JM. Still no magic bullets: pursuing more rigorous research in quality
improvement. Am J Med. 2004;116:778–80.
Shortell, S.M., C.L. Bennett, G.R. Byck. 1998. Assessing the Impact of Continuous Quality
Improvement on Clinical Practice: What It Will Take to Accelerate Progress. The Milbank
Quarterly 76:593–624.
Shortell, S.M., J.A. Marsteller, M. Lin, et al. 2004. The Role of perceived team effectiveness
in improving chronic illness care. Medical Care 42(11):1040–48.
Shortell, S.M., J.L. O’Brien, J.M. Carman, et al 1995. Assessing the Impact of Continuous Quality
Improvement/Total Quality Management: Concept versus Implementation. Health Services
Research 30(2):377–401.
Shortell, S.M., R.H. Jones, A.W. Rademaker, et al. 2000. Assessing the impact of total quality
management and organizational culture on multiple outcomes of care for coronary
van Bokhoven, M.A., G. Kok, T. van der Weijden. 2003. Designing a Quality Improvement
Intervention: A Systematic Approach. Quality and Safety in Health Care 12(3):215–20.
van Leeuwen, Y.D., S.S.L. Mol, M.C. Pollemans, et al. 1995. Change in Knowledge of General
Vandenbroucke JP. Observational research, randomised trials, and two views of medical science.
PLoS Med 2008;5:e67.
Wagner, E.H. 2000. The Role of Patient Care Teams in Chronic Disease Management. British
Medical Journal 320:569–72.
Wagner, E.H., B.T. Austin, M. van Korff. 1996. Organizing Care for Patients with Chronic Illness.
The Milbank Quarterly 74(4):511–44.
Walker AE, Grimshaw J, Johnston M, et al. PRIME—PRocess modelling in ImpleMEntation
research: selecting a theoretical basis for interventions to change clinical practice. BMC Health
Walker, A.E., J.M. Grimshaw, E.M. Armstrong. 2001. Salient Beliefs and Intentions to Prescribe
Antibiotics for Patients with a Sore Throat. British Journal of Health Psychology 6(4):347–60.
Walshe K. Understanding what works—and why—in quality improvement: the need for theory-
driven evaluation. Int J Qual Health Care 2007;19:57–9.
Weiss C. Nothing as practical as a good theory: exploring theory-based evaluation for
comprehensive community initiatives for children and families. In: Connell J, Kuchisch A, Schorr
LB, et al. eds. New approaches to evaluating community initiatives: concepts, methods and
contexts. 1st edn. New York, NY: Aspen Institute, 1995:65–92
Weiss CH. Theory-based evaluation: past, present, and future. New Dir Eval 1997;1997:41–55.
Wensing, M., H. Wollersheim, and R. Grol. 2006. Organizational Interventions to Implement
Improvements in Patient Care: A Structured Review of Reviews. Implementation Science
Wensing, M., M. Bosch, R. Foy, et al. 2005. Factors in Theories on Behaviour Change to Guide
Implementation and Quality Improvement in Health Care. Nijmegen: Centre for Quality of Care
Research (WOK).
West, M.A. 1990. The Social Psychology of Innovation in Groups. In Innovation and Creativity
at Work: Psychological and Organizational Strategies, edited by M.A. West and J.L. Farr, 4–36.
Chichester: Wiley.
Wheelan, S.A., C.N. Burchill, F. Tilin. 2003. The link between teamwork and patients’ outcomes
in intensive care. American Journal of Critical Care 12:527–34.
Winters BD, Weaver SJ, Pfoh ER, et al. Rapid-response systems as a patient safety strategy: a
systematic review. Ann Intern Med 2013;158(5_Part_2):417–25.
Wolfe, R.A. 1994. Organizational Innovation: Review, Critique and Suggested Research
Directions. Journal of Management Studies 31:405–31.
Wolfson M. Social proprioception: measurement, data and information from a population health
perspective. In Evans RG, Barer ML, Marmor T, eds, Why are Some People Healthy and Others
Not? New York, NY: Aldine de Gruyter, 1994: p. 309.
Quality improvement is an approach or process that seeks to address one or more of the categories
of ‘quality’. Successful ‘industrial’ approaches that address both systems and processes in order
to improve outcomes have increasingly been applied in healthcare settings, and it is these
approaches that have influenced the development of this new generation of monitoring tools
produced by the IPS.
Systems thinking
Systems thinking views every care organization and care process as a system, and outcomes as
products of that system. This contrasts with the view that outcomes (adverse ones in particular)
result from the failings of individuals who can be trained or exhorted to do better. Systems
thinking asks you to consider the context (including the environment in which care is practiced)
and whether it is designed to reduce error and promote patient safety and best practice. The
environment includes the physical environment but also the systems and processes (the ways of
doing things) that happen within it. The Process Improvement Tools can assist in highlighting
problems within the environment and clinical practice which may require change to improve
patient outcomes.
Quality improvement approaches
Business process reengineering
This approach involves a fundamental rethinking of how an organization’s central processes are
designed, with change driven from the top by a visionary leader. Organizations are restructured
around key processes (defined as activities, or sets of activities) rather than specialist functions.
By moving away from traditional approaches, organizations can identify waste and become more
streamlined.
Experience-based co-design
This is an approach to improving patients’ experience of services, through patients and staff
working in partnership to design services or pathways. Data are gathered through in-depth
interviews, observations and group discussions and analysed to identify ‘touch points’ – aspects
of the service that are emotionally significant. Staff are shown an edited film of patients’ views
about their experiences before staff and patients come together in small groups to develop service
improvements.
Lean
This is a quality management system that draws on the way some Japanese car manufacturers,
including Toyota, manage their production processes. The approach focuses on five principles:
customer value; managing the value stream; regulating flow of production (to avoid quiet patches
and bottlenecks); reducing waste; and using ‘pull’ mechanisms to support flow. Using ‘pull’ means
responding to actual demand, rather than allowing the organizational needs to determine
production levels.
Statistical process control
This approach examines the difference between natural variation (known as ‘common cause
variation’) and variation that can be controlled (‘special cause variation’). The approach uses
control charts that display boundaries for acceptable variation in a process. Data are collected over
time to show whether a process is within control limits in order to detect poor or deteriorating
performance and target where improvements are needed.
Theory of constraints
The theory of constraints starts from a simple idea: a chain is only as strong as its weakest link.
The theory recognizes that movement along a process, or chain of tasks, will only flow at the rate
of the task that has the least capacity. The approach involves identifying the constraint (or
bottleneck) in the process, getting the most out of that constraint (since this rate-limiting step
determines the system’s output, the entire value of the system is represented by what flows
through this bottleneck), and recognizing the impact of mismatches between variation in demand
and variation in capacity at the process constraint.
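The rate-limiting logic can be shown with a toy model. The pathway steps and daily capacities below are hypothetical, chosen only to illustrate that system throughput equals the capacity of the constraint.

```python
# Illustrative sketch: throughput of a sequential care pathway is capped
# by its lowest-capacity step (the constraint). All figures are invented.

pathway_capacity = {          # patients per day each step can handle
    "triage": 60,
    "imaging": 25,            # the bottleneck in this example
    "specialist_review": 40,
    "discharge_planning": 55,
}

constraint = min(pathway_capacity, key=pathway_capacity.get)
throughput = pathway_capacity[constraint]
# Improving any non-constraint step leaves throughput unchanged;
# only raising capacity at the constraint increases system output.
```

The point of the model is the comment at the end: doubling triage capacity here changes nothing, which is why the theory directs improvement effort at the bottleneck first.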
Total quality management (TQM)
Total quality management, also known as continuous quality improvement, is a management
approach that focuses on quality and the role of the people within an organization to develop
changes in culture, processes and practice. Rather than a process, it is a philosophy that is applied
to the whole organization, encompassing factors such as leadership, customer focus, evidence-
based decision making and a systematic approach to management and change.
Principles of Quality Improvement
When quality is considered from the IOM's perspective, an organization's current system is
defined as how things are done now, whereas health care performance is defined by an
organization's efficiency, outcomes of care, and level of patient satisfaction. Quality is directly
linked to an organization's service delivery approach or underlying systems of care. To achieve a
different level of performance (i.e., results) and improve quality, an organization's current system
needs to change. While each QI program may appear different, a successful program always
incorporates the following four key principles:
1) QI work as systems and processes
2) Focus on patients
3) Focus on being part of the team
4) Focus on use of the data
Quality Improvement Work as Systems and Processes
To make improvements, an organization needs to understand its own delivery system and key
processes. The concepts behind the QI approaches in this toolkit recognize that both resources
(inputs) and activities carried out (processes) are addressed together to ensure or improve quality
of care (outputs/outcomes). A health service delivery system can be small and simple, such as an
immunization clinic, or large and complex, like a large managed-care organization.
1) Activities or processes within a health care organization contain two major components: 1)
what is done (what care is provided), and 2) how it is done (when, where, and by whom care
is delivered). Improvement can be achieved by addressing either component; however, the
greatest impact for QI is when both are addressed at the same time.
2) Process mapping is a tool commonly used by an organization to better understand the health
care processes within its practice system. This tool gained popularity in engineering before
being adapted by health care. A process map provides a visual diagram of a sequence of events
that result in a particular outcome. By reviewing the steps and their sequence as to who
performs each step, and how efficiently the process works, an organization can often visualize
opportunities for improvement. The process mapping tool may also be used to evaluate or
redesign a current process.
3) Specific steps are required to deliver optimal health care services. When these steps are tied to
pertinent clinical guidelines, then optimal outcomes are achieved. These essential steps are
referred to as the critical (or clinical) pathway. The critical pathway steps can be mapped as
described above. By mapping the current critical pathway for a particular service, an
organization gains a better understanding of what and how care is provided. When an
organization compares its map to one that shows optimal care for a service that is congruent
with evidence-based guidelines (i.e., idealized critical pathway), it sees other opportunities to
provide or improve delivered care.
Quality Improvement Planning
A QI plan is a detailed, and overarching organizational work plan for a health care organization's
clinical and service quality improvement activities. It includes essential information on how your
organization will manage, deploy, and review quality throughout the organization.
Elements of a QI plan
An effective QI plan includes the following key elements:
1) Description of organizational mission, program goals, and objectives
2) Definition of key quality terms/concepts
3) Description of how QI projects are selected, managed, and monitored
4) Description of training and support for staff involved in the QI process
5) Description of quality methodology (such as PDSA, Six Sigma) and quality tools/techniques
to be utilized throughout the organization
6) Description of communication plan of planned QI activities and processes, and how updates
will be communicated to the management and staff on a regular basis
7) Description of measurement and analysis, and how it will help define future QI activities
8) Description of evaluation/quality assurance activities that will be utilized to determine the
effectiveness of the QI plan’s implementation
Focus on Being Part of the Team
At its core, QI is a team process. Under the right circumstances, a team harnesses the knowledge,
skills, experience, and perspectives of different individuals within the team to make lasting
improvements. A team approach is most effective when:
a) The process or system is complex
b) No one person in an organization knows all the dimensions of an issue
c) The process involves more than one discipline or work area
d) Solutions require creativity
e) Staff commitment and buy-in are needed
Focus on Use of the Data
Data is the cornerstone of QI. It is used to describe how well current systems are working, what
happens when changes are applied, and to document successful performance. Using data:
a) Separates what is thought to be happening from what is really happening
b) Establishes a baseline (Starting with low scores is okay)
c) Reduces placement of ineffective solutions
d) Allows monitoring of procedural changes to ensure that improvements are sustained
e) Indicates whether changes lead to improvements
f) Allows comparisons of performance across sites
Both quantitative and qualitative methods of data collection are helpful in QI
efforts. Quantitative methods involve the use of numbers and frequencies that result in measurable
data. This type of information is easy to analyze statistically and is familiar to science and health
care professionals. Examples in a health care setting include:
1) Finding the average of a specific laboratory value
2) Calculating the frequencies of timely access to care
3) Calculating the percentages of patients that receive an appropriate health screening
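The third example can be sketched as a small calculation. The record fields (`eligible`, `screened`) are illustrative assumptions, not a standard schema.

```python
# Hedged sketch of a quantitative measure: percentage of eligible
# patients who received an appropriate screening. Records are invented.

records = [
    {"eligible": True,  "screened": True},
    {"eligible": True,  "screened": False},
    {"eligible": True,  "screened": True},
    {"eligible": False, "screened": False},  # excluded from the denominator
]

eligible = [r for r in records if r["eligible"]]
rate = 100 * sum(r["screened"] for r in eligible) / len(eligible)
# rate is the performance-measure value for this sample (percent)
```

The denominator restriction mirrors how standardized performance measures define exactly which patients count toward each measure.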
Qualitative methods collect data with descriptive characteristics, rather than numeric values from
which statistical inferences can be drawn. Qualitative data is observable but not measurable; it
provides important information about patterns and relationships between systems, and is often
used to provide context for needed improvements. Common strategies for collecting qualitative
data in a health
care setting are:
1) Patient and staff satisfaction surveys
2) Focus-group discussions
3) Independent observations
A health care organization already has considerable data from various sources, such as clinical
records, practice management systems, satisfaction surveys, external evaluations of the
population's health, and others. Focusing on existing data in a disciplined and methodical way
allows an organization to evaluate its current system, identify opportunities for improvement, and
monitor performance improvement over time.
When an organization wants to narrow its focus on specific data for its QI program, one strategy
is to adopt standardized performance measures. Since performance measures include specific
requirements that define exactly what data is needed for each measure, they single out the data to
be collected and monitored from the other data available to an organization. The clinical quality
measures identified in this toolkit are examples of standardized measures that an organization,
such as a safety net provider, may consider for adoption. They are designed to measure care
processes that are common to safety net providers and are relevant to populations served. They
narrow an organization's choices of what data to collect and measure.
Improvement requires a systematic approach with a strong rationale for design and explicit
reporting of the intervention development process (des Jarlais et al, 2004; Baker et al, 2008;
Boutron et al, 2008).
One option is to use theory to inform the design of implementation interventions (Eccles et al,
2004). The UK Medical Research Council’s (MRC) guidance for developing complex
interventions informed by theory (Campbell et al, 2000, MRC, 2008; Crepaz et al, 2008) is useful
as a general approach to designing an implementation intervention. The multiple theories and
frameworks of individual and organizational behaviour change that exist tend to have
conceptually overlapping constructs (Ferlie and Shortell, 2001; Grol et al, 2007). Since only a few
of these theories have been tested in robust research in healthcare settings, there is currently no
systematic basis for determining which of the available theories predicts behaviour or behaviour
change most precisely (Noar and Zimmerman, 2005), or which is best suited to underpin
implementation research (Grol et al, 2007; Lipke et al, 2008). Theories that have been used in
previous implementation research include PRECEDE (Predisposing, Reinforcing, and Enabling
Constructs in Educational Diagnosis and Evaluation), diffusion of innovations, information
overload, and social marketing (Davies et al, 2010). One important approach in quality
improvement is to support individual health professionals to modify their clinical behaviour in
response to evidence-based guidance (Ferlie and Shortell, 2001). It is critical to focus on this level
because much of health care is delivered in the context of an interpersonal relationship that arises
from the encounter between a health professional and a patient. This makes health professionals’
clinical behaviours, in themselves or in the context of interaction with patients/clients, an
important proximal determinant of the quality of care that patients receive, especially when other
contextual factors are taken into consideration.
Development of implementation interventions can draw on theory, evidence, and practical issues
in the following ways. Theory can be used to understand the factors that might influence the
clinical behaviour change (individual, interpersonal or organizational) that is being targeted, to
underpin possible techniques that could be used to change clinical behaviour (Michie et al, 2005),
and to clarify how such techniques might work (Beck et al, 2002; Lane et al, 2007; Gillard et al,
2004). Evidence can inform which clinical behaviours should be changed, and which potential
behaviour change techniques and modes of delivery are likely to be effective (Michie and
Johnston, 2004; Michie and Lester, 2005; Michie et al, 2008; Forsetlund et al, 2009). Practical
issues then determine which behaviour change techniques are feasible with available resources,
and which are likely to be acceptable in the relevant setting and to the targeted health professional
group (Foy et al, 2007; McAteer et al, 2007; MacKenzie et al, 2008).
There are several steps involved in developing a theory-based intervention: (French et al, 2012)
1) STEP 1: Who needs to do what, differently?
a) Identify the evidence-practice gap
b) Specify the behaviour change needed to reduce the evidence-practice gap
c) Specify the health professional group whose behaviour needs changing
2) STEP 2: Using a theoretical framework, which barriers and enablers need to be addressed?
a) From the literature, and experience of the development team, select which theory(ies),
or theoretical framework(s), are likely to inform the pathways of change
b) Use the chosen theory(ies), or framework, to identify the pathway(s) of change and the
possible barriers and enablers to that pathway
c) Use qualitative and/or quantitative methods to identify barriers and enablers to
behaviour change
3) STEP 3: Which intervention components (behaviour change techniques and mode(s) of
delivery) could overcome the modifiable barriers and enhance the enablers?
a) Use the chosen theory, or framework, to identify potential behaviour change techniques
to overcome the barriers and enhance the enablers
b) Identify evidence to inform the selection of potential behaviour change techniques and
modes of delivery
c) Identify what is likely to be feasible, locally relevant, and acceptable and combine
identified components into an acceptable intervention that can be delivered
4) STEP 4: How can behaviour change be measured and understood?
a) Identify mediators of change to investigate the proposed pathways of change
b) Select appropriate outcome measures
c) Determine feasibility of outcomes to be measured
The areas in care that deviate from guidelines are classified as follows:
1) Site of care delivery (such as emergency clinic instead of outpatient office care delivery)
2) Clinical data collected at the visit
3) Diagnostic testing performed at the visit
4) Empiric therapy prescribed
5) Office follow-up scheduled at an appropriate interval to ensure improvement with treatment
regimen prescribed
The team collects data on the process for 4 weeks and then constructs a Pareto chart.
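As a sketch of what the team would compute before drawing the chart, the snippet below orders deviation counts by frequency and derives cumulative percentages, which is the essence of a Pareto chart. The category counts are invented for illustration; only the category names come from the classification above.

```python
# Hypothetical Pareto-chart preparation: sort deviation categories by
# frequency and compute cumulative percentages. Counts are illustrative.

counts = {
    "empiric therapy prescribed": 34,
    "clinical data collected": 21,
    "site of care delivery": 8,
    "diagnostic testing": 5,
    "follow-up interval": 2,
}

ordered = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
total = sum(counts.values())
cumulative, running = [], 0
for name, n in ordered:
    running += n
    # (category, count, cumulative percent of all deviations)
    cumulative.append((name, n, round(100 * running / total, 1)))
```

Plotted as bars in this order with the cumulative line overlaid, the first one or two categories typically account for most deviations, showing the team where to focus first.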
When the QI team is assembled and prepared to integrate quality improvements into its
organization, the focus then becomes the actual implementation. This section describes QI
processes at a high operational level. The content is intended to provide answers for these reflection
questions, as an organization makes specific decisions about what it wants to improve and how to
actually accomplish the work:
What are the desired improvements?
How are changes and improvements measured?
1) Cause-and-effect diagram (also called Ishikawa or fishbone chart): Identifies many possible
causes for an effect or problem and sorts ideas into useful categories.
2) Check sheet: A structured, prepared form for collecting and analyzing data; a generic tool that
can be adapted for a wide variety of purposes.
3) Control charts: Graphs used to study how a process changes over time.
4) Histogram: The most commonly used graph for showing frequency distributions, or how often
each different value in a set of data occurs.
5) Pareto chart: Shows on a bar graph which factors are more significant.
6) Scatter diagram: Graphs pairs of numerical data, one variable on each axis, to look for a
relationship.
7) Stratification: A technique that separates data gathered from a variety of sources so that
patterns can be seen (some lists replace “stratification” with “flowchart” or “run chart”).
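Stratification (item 7) can be illustrated with a minimal sketch: pooled data are separated by source so that site-level patterns become visible. The clinic names and waiting times below are invented.

```python
# Hedged sketch of stratification: split one pooled data set by source.
# Clinics and waits (in days) are illustrative, not from the source.
from collections import defaultdict

waits = [  # (clinic, wait in days) for the same appointment type
    ("clinic_a", 3), ("clinic_b", 14), ("clinic_a", 4),
    ("clinic_b", 12), ("clinic_a", 2), ("clinic_b", 16),
]

by_clinic = defaultdict(list)
for clinic, days in waits:
    by_clinic[clinic].append(days)

averages = {c: sum(v) / len(v) for c, v in by_clinic.items()}
# The pooled average hides that one clinic drives the long waits;
# stratifying by clinic makes that pattern visible.
```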
Once measures are identified, an organization then determines its data collection frequency and
sampling. More frequent data collection allows an organization to focus its QI efforts more
aggressively. Monthly data collection is suggested, but collection on a quarterly basis is adequate,
if necessary. An organization's processes and procedures need to be established for consistent
reviews and analyses of the performance measurement data by staff. The data is analyzed to
identify trends and progress toward an organization's goals. This type of analysis also identifies
opportunities for improvement, allowing the QI team to focus its efforts and ensure that system
changes result in improvement.
Developing the Key Drivers Diagram
For success, QI initiatives need a firm grounding in theory (Davidoff et al, 2015; Kurowski et al,
2015). Davidoff et al. (2015) clarify the importance of theory in improvement work by outlining
three levels of theory (grand, big, and small). Grand theory is the most
abstract and makes generalizations that apply across many domains. Big, or mid-range, theories
bridge the gap between grand and small by outlining concepts that can be applied across
improvement projects, such as the theory of diffusion of innovations (Rogers, 2003). Small or
program theories are practical, accessible, and specific to a single improvement project or
intervention. They specify, often in the form of a logic model or key driver diagram, the
components of an improvement project (or interventions) intended to address the intervention’s
expected outcomes (or drivers) leading to the desired improvement in the process (the specific
aim) and the methods for assessing those outcomes.
While generating the key driver diagram, the team should get input from all stakeholders to ensure
that all essential pieces of the process are identified. One useful method for describing the
components of a key driver diagram is to use the question ‘What?’ to frame the drivers and
‘How?’ to frame the interventions. The key driver diagram should be frequently revisited, and the
program theory
revised by the team as additional information is obtained during observation of the system and
testing of interventions. This is an iterative process whereby interventions will be added or
previous interventions modified from the iterative trial-and-learning process of the model for
improvement. Once an initial list of key drivers has been agreed upon, it is time for the ‘good
ideas’ to be added to the key driver diagram (Kurowski et al, 2015). These good ideas are the
proposed interventions based on the failure mode analysis. Arrows connecting the interventions
to the appropriate key drivers can be used to denote which key driver(s) will be affected by a given
intervention. These arrows will also be updated frequently, as the results of testing an intervention
may reveal effects on a driver that had not previously been linked. The team constructs a key driver
diagram which includes the following components (Langley et al, 2009): Global aim, specific aim,
key drivers and interventions.
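One way to make these components concrete is to represent the diagram as a simple data structure, with the driver-to-intervention arrows as a mapping. Every aim, driver, and intervention string below is a hypothetical example, not content from the source.

```python
# Hedged sketch of a key driver diagram as data: global aim, specific aim,
# and a mapping from each key driver to its proposed interventions.
# All strings are invented examples.

key_driver_diagram = {
    "global_aim": "Improve asthma care in the clinic",
    "specific_aim": "Increase controller-medication prescribing to 90% in 12 months",
    "drivers": {
        "clinician knowledge of guidelines": ["academic detailing sessions"],
        "reliable identification of eligible patients": ["registry query", "visit checklist"],
        "timely follow-up": ["standing follow-up order"],
    },
}

# Revising the diagram as tests of change accumulate means editing this
# mapping: adding interventions, or re-pointing them at other drivers.
all_interventions = sorted(
    {i for ivs in key_driver_diagram["drivers"].values() for i in ivs}
)
```

Keeping the diagram in a structured form like this makes the frequent revisions described above explicit and reviewable, rather than scattered across slide decks.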
Ideally, a small, balanced family of measures, including at least one outcome measure, should be
identified for an improvement initiative (Provost and Murray, 2011). The team must work to
understand how the data can be obtained for these measures, the accuracy of the data, and how
often the data can be collected. Once the team has collected the data, it is important to understand
the baseline performance of the system. Data for each measure is typically graphically plotted over
time using run charts or Shewhart (control) charts. The graphical nature of these charts makes them
ideal for the evaluation of frequent changes in a measure since individual data points are displayed,
allowing for maximum visualization of variation over time (Perla et al, 2011). Understanding
system variation is a critical concept when working to improve a process or outcome. Run charts
make it possible to determine if the variation in your system is secondary to changes made or to
other inherent causes of variation in the system. Common cause (normal) variation is the variation
that is inherent to the system. This variation is typically explained by unknown factors constantly
active within the system. Common cause variation is often described as the ‘noise’ in the system
and, if singularly present, represents a ‘stable system’. A stable system may be preferred if it is
performing well; however, it may also represent a poorly performing system in which changes are
needed. Special cause variation is secondary to factors not inherent to the system. Special cause
variation may be desired or not desired depending on the historical stability and performance of
your system. It represents variation that is outside of the system’s baseline experience. When a
special cause event occurs, it is a signal that there is a new factor not typically part of the system
impacting the system’s performance. These events may represent favorable or unfavorable
changes to the system. Ideally, during active improvement, special cause events signal
improvements to the process or outcomes as a result of the team’s interventions. For run charts,
there are probability-based rules to determine special cause, and control limits are calculated for
Shewhart (control) charts as an additional method for determining special cause.
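As a rough illustration of these ideas (not taken from the text, and using commonly published constants rather than any one chart standard), the sketch below computes individuals-chart control limits from the average moving range and applies the run-chart "shift" rule, under which a run of six or more consecutive points on one side of the median signals special cause:

```python
from statistics import mean, median

def ichart_limits(data):
    """Individuals (XmR) Shewhart chart: centre line plus 3-sigma control
    limits estimated from the average moving range (hence the 2.66 constant)."""
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    centre = mean(data)
    spread = 2.66 * mean(moving_ranges)
    return centre - spread, centre, centre + spread

def shift_signal(data, run_length=6):
    """Run-chart shift rule: a run of `run_length` or more consecutive points
    all above (or all below) the median signals special cause; points that
    fall exactly on the median are skipped."""
    m = median(data)
    run, side = 0, 0
    for x in data:
        if x == m:
            continue
        s = 1 if x > m else -1
        run = run + 1 if s == side else 1
        side = s
        if run >= run_length:
            return True
    return False
```

For example, a series that jumps from low values to six consecutive high values triggers `shift_signal`, while a stable alternating series does not.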
The Plan-Do-Study-Act (PDSA) cycle (also known as the Shewhart or Deming cycle) (Langley et al, 2009) is a useful, four-step process
to test theory and implement change. The four stages of the PDSA cycle are as follows: (plan) the
change to be tested or implemented, (do) carry out the test of change with careful measurement,
(study) the data before and after the change and reflect on the knowledge obtained, and (act) plan
the next test. PDSA cycles are used to test an idea or theory through trial, assessing the change or
impact, and making interventions based on these small tests. Each intervention is based on theory
and should be tested on a small scale, sometimes on only one or two patients. Once the test shows
improvement, the change can be ramped up to include a larger population. There are many
benefits to starting small and growing these tests to include larger audiences. For example, when
interventions are disruptive to opinions or existing processes, small tests can help generate buy-in
from those involved in the testing to support larger-scale tests. Multiple PDSA cycles are often
linked together in a PDSA ramp, where changes are tested and adapted on a
progressively larger scale, to get from the initial idea to a change that is ready for implementation.
Most projects will require multiple parallel PDSA ramps addressing multiple key drivers to
achieve the aim. It is important to annotate all SPC charts with PDSA cycles/ramps so that the
impact of these cycles/ramps on the process, outcome, and balancing measures can be tracked
visually over time.
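The ramp logic described above can be sketched as a small loop; everything here (function names, the scale sequence, the pass/fail predicate) is an illustrative assumption, not a prescribed implementation:

```python
def pdsa_ramp(change, passes_test, scales=(2, 10, 50)):
    """Hypothetical sketch of a PDSA ramp: the same change idea is tested on
    progressively larger scales. `passes_test(change, n)` stands in for the
    Do and Study steps (run the test with n patients and judge the result);
    the returned string is the Act decision."""
    for n in scales:                   # Plan: the next, larger test
        if not passes_test(change, n): # Do + Study
            return f"adapt or abandon '{change}' after the {n}-patient test"  # Act
    return f"'{change}' is ready for implementation"                          # Act
```

For instance, a change that only works in very small tests would be sent back for adaptation at the 10-patient cycle rather than being implemented.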
Using the PDSA cycles for individual problem solving
Here the change involves individual decision-making to achieve QI, and the individual decision-
making does not affect other members, processes or context. Such an individual must understand
their role in the process of QI and must be empowered to make appropriate decisions. The
necessary steps include: identifying a problem (analysis of the problem); analyzing the problem
(using intuition, individual problem solving or consultation); developing possible solutions
(which may be validated through dialogue or consultation); and testing and implementing change:
Plan (Choose a hypothesis for solving the problem, consult); Do (Test the hypothesized solution);
Study (Verify if change was as planned, assess if change led to improvement); Act (Maintain the
change if successful, revise and modify plan if change not achieved or change not adequate).
Using the PDSA cycles for team problem solving
Here the change involves individuals working as teams in decision-making to achieve QI, and each
individual's decision-making influences the decision-making or performance of the other team members
(thus affecting processes or context). Such individuals must understand their role in the team
process of QI and must be empowered to make appropriate decisions that sequentially improve
performance of the whole team. The necessary steps include:
Step 1: Identifying a problem that requires solution through mutual efforts of the whole team
(analysis of the problem identified by the whole team or by team leaders), such as reducing patient
waiting times, infection rates or postoperative complications. The constitution of QI teams
is critical to represent all key players and should aim at achieving consensus.
Step 2: Analyzing the problem (using intuition, using available or new data, or consultation);
developing possible solutions (which may be validated through dialogue or consultation); and
testing and implementing change
a) Plan (choose a hypothesis for solving the problem; consult; identify benchmarks, targets and
indicators of success; collect baseline data). Process description tools such as flow charts, run
charts and cause-and-effect diagrams may be used.
b) Do (test the hypothesized solution)
c) Study (verify if change was as planned; assess if change led to improvement)
d) Act (maintain the change if successful; revise and modify the plan if change not achieved or
change not adequate)
Step 3: Develop and implement interventions
a) Interventions may be re-tested, modified, adapted initially individually, and later sequentially
or together
b) Study (assess if interventions are tested and implemented according to plan); implement
measurements according to targets and indicators of QI; verify if the QI intervention led to
improvement or unexpected results.
c) Act (Implement intervention on a permanent basis if successful; Modify and retest intervention
as necessary)
Using the PDSA cycles for systematic team problem solving or process improvement
This approach is used for recurrent, chronic problems where there is a need to identify the root
cause of the problems. The tools used include cause-and-effect diagrams, root cause analysis, and
testing theories (for possible cause of problem or success of interventions). Such QI initiatives may
require an opportunity for improvement (implementation momentum). It is important to identify
problems that are high risk (have the most negative effects due to poor quality), high volume (occur
often or have large effect), or are problem prone (susceptibility to errors is high).
Step 1: Identifying a problem that requires solution through mutual efforts of the whole team
(analysis of the problem identified by the whole team or by team leaders), such as reducing patient
waiting times, infection rates or postoperative complications. The constitution of QI teams
is critical to represent all key players and should aim at achieving consensus.
Step 2: Analyzing the problem and developing possible solutions
a) Analyze possible causes and rank the causes using root-cause analysis
b) Analyze the context of the problem (using intuition, using available or new data, or
consultation)
c) Analyze the processes involved in the activities related to the problem (Who, Where, When,
How, Why). Use a flow chart, check sheets or affinity diagrams.
d) Identify possible solutions (which may be validated through dialogue or consultation) before
testing and implementing change. May use ranking options such as voting (single voting,
multivoting or weighted voting) based on selected criteria. Expert decision-making and systems
modeling (considering inputs, processes, outputs, activities, effects and impacts) may also be
used depending on the broadness or complexity of the problem to be addressed.
e) Choose a hypothesis for solving the problem (consult). Process description tools such as flow
charts, run charts and cause-and-effect diagrams may be used.
f) Identify benchmarks, targets and indicators of success
g) Develop and empirically test the hypothesized solution; Study (verify if change was as
planned; assess if change led to improvement); Act (maintain the change if successful; revise
and modify the plan if change not achieved or change not adequate).
h) Collect baseline data (What data, How is it collected, analyzed and interpreted?).
i) Display and stratify data visually: use pie charts, histograms, line graphs, Pareto diagrams or
run charts to display data, stratifying for key subgroups.
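As an illustrative sketch of the numbers behind a Pareto diagram (the cause labels below are invented examples, not from the text), causes can be ranked by frequency with a cumulative percentage to separate the "vital few" from the "useful many":

```python
from collections import Counter

def pareto_table(causes):
    """Rank observed causes of a problem by frequency and add the cumulative
    percentage of all occurrences, i.e. the data behind a Pareto diagram."""
    counts = Counter(causes).most_common()
    total = sum(n for _, n in counts)
    rows, cumulative = [], 0
    for cause, n in counts:
        cumulative += n
        rows.append((cause, n, round(100 * cumulative / total, 1)))
    return rows
```

With ten delayed discharges attributed to three causes, the table immediately shows that the top two causes account for 80% of the delays.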
Step 3: Develop and implement interventions
a) Identify (using brainstorming, creative thinking, benchmarking and affinity analysis)
interventions to address the root causes of the problems
b) Rank interventions (cost, feasibility, freedom from negative effects, reach, management support
needed, community support needed and timeliness). May use prioritization tools such as
voting, prioritization matrices, expert decision-making or systems modeling.
c) Identify the prerequisites for implementation (what is needed, what must be in place, when,
and why) before implementing QI interventions
d) Develop the implementation plan (the step-by-step process by which the intervention is
implemented). Interventions may be re-tested, modified, adapted initially individually, and
later sequentially or together. Using a Gantt chart is critical to visually display the order of
activities.
e) Study (assess if interventions are tested and implemented according to plan); implement
measurements according to targets and indicators of QI; verify if the QI intervention led to
improvement or unexpected results. Identify what did not go as planned or went wrong.
f) Act (implement the intervention on a permanent basis if successful; modify and retest the
intervention as necessary). Assign responsibility (who is to do what, when, how and why)
g) Identify and address resistance to change
h) Develop a prevention plan to address the potential negative effects of the implementation. May
use SWOT analysis, SPOT analysis or force field analysis
i) If successful, develop a sustainability plan (dissemination plan, scale-up plan, integration
plans, opportunity to standardize the interventions)
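The prioritization-matrix idea mentioned above can be sketched as follows; the criteria, weights and candidate interventions are illustrative assumptions, not values from the text:

```python
def rank_interventions(scores, weights):
    """Sketch of a prioritization matrix: each candidate intervention is scored
    per criterion (e.g. cost, feasibility, reach), each criterion carries a
    weight, and interventions are ranked by weighted total score."""
    totals = {
        name: sum(weights[criterion] * score for criterion, score in per_criterion.items())
        for name, per_criterion in scores.items()
    }
    return sorted(totals, key=totals.get, reverse=True)
```

In practice the weights would be agreed by the QI team (for example through multivoting) before any intervention is scored, so the ranking reflects shared priorities rather than one member's preferences.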
The team must develop definitions of these measures that are clear and project-specific. There are three different types of
measures which are often discussed in quality improvement work.
1) Outcome measure indicates the performance of the system under study and relates directly to
the specific aim. This measure is often directly related to a patient or patient-care-related
outcome. The team decides how to measure the value of care but believes that, for example,
decreasing delays in antibiotic initiation, minimizing the number of changed prescriptions
based on lack of insurance coverage, and minimizing the number of unplanned office and/or
ED visits for the same illness maximizes the value of the care they deliver.
2) Process measure indicates if a key step in the process change has been accomplished. The QI
team identifies, say, use of appropriate first-line antibiotics for pneumonia, as directed by the
most recent evidence-based guideline as a process measure. Given the difficulty in directly
measuring the value of care, the team decides to start primarily tracking their process measure
over time to assess the impact of their interventions.
3) Balancing measure indicates performance of related processes/outcomes to ensure that those
measures are being maintained or improved, and also allows the QI team to monitor for
unintended consequences of their process improvement work. Common examples in health
care include adverse patient outcomes such as hospital readmissions or treatment failures.
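A minimal sketch of such a measure family, using the pneumonia example from the text, is shown below; the record fields are illustrative assumptions, not a standard schema:

```python
def monthly_measures(visits):
    """Compute a small balanced family of measures from one month of visit
    records (illustrative field names):
    - process:   % of cases prescribed the guideline first-line antibiotic
    - outcome:   % of cases with an unplanned return visit for the same illness
    - balancing: % of cases readmitted (a marker of treatment failure)"""
    n = len(visits)

    def pct(field):
        return round(100 * sum(1 for v in visits if v[field]) / n, 1)

    return {
        "process": pct("first_line_abx"),
        "outcome": pct("unplanned_return"),
        "balancing": pct("readmitted"),
    }
```

Each monthly value would then become one point on the corresponding run or Shewhart chart.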
Sustainability and Sustainability Plans
Once the changes have been adapted to a point where the team identifies that they are ready for
adoption, the team's focus shifts to implementation of the change into everyday practice. This
includes revising the process map to accurately depict the new process, revising any job or process
descriptions to match the new process, and planning to train new members of the practice group
on the new process.
Sustainability plan
A sustainability plan includes the following:
1) Deciding which performance measures will continue to be monitored using run charts
2) Developing a systematic (and ideally automated) process for obtaining and integrating the
data that comprise the measures
3) Determining who will be responsible for evaluating the performance measure on an ongoing
basis (i.e., the process owner)
4) Establishing measure parameters to guide the process owner’s decisions about when to
address deterioration in performance
5) Articulating the process owner’s role in addressing performance deterioration (e.g., power to
reconvene the team and launch a new series of explorations to understand why and how the
process is failing)
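Item 4 above, the measure parameters that trigger the process owner's response, can be sketched as a simple decision rule; the 10% tolerance and all names are illustrative assumptions:

```python
def needs_intervention(recent_points, project_centre_line, tolerance=0.10):
    """Sketch of a measure parameter for the process owner: flag deterioration
    when the mean of the most recent points drifts more than `tolerance`
    (here an illustrative 10%) below the centre line the project achieved."""
    current = sum(recent_points) / len(recent_points)
    return current < project_centre_line * (1 - tolerance)
```

In a real sustainability plan the trigger would more likely be a special-cause rule on the ongoing control chart, but a tolerance band like this gives the process owner an unambiguous threshold for reconvening the team.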
Recommendations for a sustainable QI effort
The following are five recommendations based on this project that represent approaches local
quality leaders should consider as they work to develop an effective QI program.
1) Make your QI efforts about significant sustainable quality
Successful projects are those that people believe in and want to see become successful. Far too
often, the people affected by a QI project (if not even the actual QI team) are told they must
change in order to meet some internal or external requirement. This is a setting where change
resistance may be maximized and chances of project success minimized. These situations will
often be marked by initial improvements followed by rapid degradation of those improvements
after project completion. This type of effect could explain the overall system-wide response on
the discharges-before-noon outcome. If enough QI teams treated the goal as something they had
to do to satisfy a request from the central office, the teams would have had just enough buy-in to
produce the initial improvements to report, but once no one was monitoring and reporting rates of
discharge before noon, providers returned to their original discharge process.
The key here is to encourage QI teams to identify early on, and properly communicate, a project's
value (ideally for all of the stakeholders) that goes beyond simply meeting arbitrary
requirements. If properly done, the larger healthcare community should have the necessary
motivation to improve and sustain those improvements.
2) Aim for real change, not just cosmetic change
While effective QI will include education, an effective QI team must work to understand the
process and what about that process allows poor quality to occur. Then the team can identify
ways to change the process that will eliminate sources of poor quality. Education can then
focus on helping providers understand the new process and the benefits of that process. In
contrast, education that relies solely on encouraging providers to perform better, which they
will strive to do, is unlikely to sufficiently support efforts and will not lead to lasting
improvements in quality. Another important consideration is that the QI team should make
sure there is a definable and consistent process relevant to the outcome of interest. Quite often
the issue in healthcare is that there are few standardized processes, making it difficult to broadly
implement changes when everyone performs differently. This potential issue was the
motivating factor for developing the FIX classification approach that separated facilities with
high variability (No change) from those that had low variability but did not improve (No
benefit).
3) Empower and excite.
Change is most lasting when those who provide frontline care are involved and truly excited
about the QI project. The data in this study indicated that staff were critical to supporting QI;
the real question was how to most efficiently utilize staff in achieving goals. While it is
critically important that those who formulate the strategic plan for an organization make it clear
that they value and support QI, there is only so much that management in many health care
systems can do to effect change. Instead, it must be the frontline leaders who recognize a
quality problem, communicate the need for change, and motivate those around them to
overcome the challenge. Additionally, it is these people who understand how a process truly
occurs and can best identify the waste or potential sources of error. Only when there is true
energy at the front lines for supporting and making a change, is it possible to achieve long term
quality.
4) Measure and evaluate.
Measures of data collection were frequently used to separate different performance categories in
the decision tree models. In short, it is impossible to improve quality if there is no clear
understanding about the current state of performance. Similarly, sustaining performance requires
monitoring performance and being prepared to respond should new sources of error emerge. This
process has its own challenges as hospitals must carefully identify how frequently to collect and
report data, as well as how to ensure that data are reported in a format that local quality leaders
can interpret and use to develop plans of action.
5) Dream big, start small
All QI approaches include some level of focus on continuous improvement and monitoring. The
continuous improvement process serves many critical purposes, but perhaps most importantly
recognizes that most processes are subject to multiple sources of waste or error. This means that
QI teams need the ability to systematically and sequentially tackle different issues rather than
feeling like a successful project must tackle all problems with a single intervention. In addition to
keeping the team from tackling too large of a project, this approach helps teams meet individual
goals which can be an excellent way to keep interest and excitement about the project.
References
Ashford AJ: Behavioural change in professional practice: supporting the development of effective
implementation strategies. Newcastle upon Tyne: Centre for Health Services Research, Report No
88 1998.
Baker EA, Brennan Ramirez LK, Claus JM, Land G: Translating and disseminating research- and
practice-based criteria to support evidencebased intervention planning. J Public Health Manag
Pract 2008, 14(2): 124–130.
Beck RS, Daughtridge R, Sloane PD: Physician-patient communication in the primary care office:
a systematic review. J Am Board Fam Pract 2002, 15(1):25–38.
Boutron I, Moher D, Altman DG, Schulz KF, Ravaud P: Extending the CONSORT statement to
randomized trials of nonpharmacologic treatment: explanation and elaboration. Ann Intern Med
2008, 148(4): 295–309
Campbell M, Fitzpatrick R, Haines A, et al: Framework for design and evaluation of complex
interventions to improve health. BMJ 2000, 321(7262):694–696.
Chun J, Bafford AC. History and background of quality measurement. Clin Colon Rectal Surg.
2014; 27(1):5–9.
Craig P, Dieppe P, Macintyre S, et al: Developing and evaluating complex interventions: the new
Medical Research Council guidance. BMJ 2008, 337:a1655.
Davies P, Walker AE, Grimshaw JM: A systematic review of the use of theory in the design of
guideline dissemination and implementation strategies and interpretation of the results of rigorous
evaluations. Implement Sci 2010, 5:14.
Des Jarlais DC, Lyles C, Crepaz N: Improving the reporting quality of nonrandomized evaluations
of behavioral and public health interventions: the TREND statement. Am J Public Health 2004,
94(3): 361–366.
Eccles M, Grimshaw J, Walker A, Johnston M, Pitts N: Changing the behavior of healthcare
professionals: the use of theory in promoting the uptake of research findings. J Clin Epidemiol
2005, 58(2):107–112.
Ferlie EB, Shortell SM: Improving the quality of health care in the United Kingdom and the United
States: a framework for change. Milbank Q 2001, 79(2):281–315.
Forsetlund L, Bjorndal A, Rashidian A, et al. Continuing education meetings and workshops:
effects on professional practice and health care outcomes. Cochrane Database Syst Rev 2009, Issue
2:Art. No.: CD003030.
Foy R, Francis JJ, Johnston M, et al. The development of a theory-based intervention to promote
appropriate disclosure of a diagnosis of dementia. BMC Health Serv Res 2007, 7:207.
French SD, Green SE, O’Connor DA, et al. Developing theory-informed behaviour change
interventions to implement evidence into practice: a systematic approach using the Theoretical
Domains Framework. Implementation Science 2012, 7:38
Griffin SJ, Kinmonth AL, Veltman MW, et al. Effect on health-related outcomes of interventions
to alter the interaction between patients and practitioners: a systematic review of trials. Ann Fam
Med 2004, 2(6):595–608
Grimshaw JM, Thomas RE, MacLennan G, et al: Effectiveness and efficiency of guideline
dissemination and implementation strategies. Health Technol Assess 2004, 8(6):1–84.
Grol R, Berwick DM, Wensing M: On the trail of quality and safety in health care. BMJ 2008,
336(7635):74–76.
Grol RP, Bosch MC, Hulscher ME, et al: Planning and studying improvement in patient care: the
use of theoretical perspectives. Milbank Q 2007, 85(1):93–138.
Hrisos S, Eccles M, Johnston M, et al. Developing the content of two behavioural interventions:
using theory-based interventions to promote GP management of upper respiratory tract infection
without prescribing antibiotics #1. BMC Health Serv Res 2008, 8:11.
ICEBeRG: Designing theoretically-informed implementation interventions. Implement Sci 2006,
1:4.
Kohn LT, Corrigan JM, Donaldson MS. To err is human: building a safer health system.
Washington: National Academies Press; 2000
Kurowski EM, Schondelmeyer AC, Brown, C, et al. A practical guide to conducting quality
improvement in the healthcare setting. Curr Treat Options Peds 2015; 1:380–392
Lane C, Rollnick S: The use of simulated patients and role-play in communication skills training:
a review of the literature to August 2005. Patient Educ Couns 2007, 67(1–2):13–20.
Langley GJ, Moen RD, Nolan KM, Nolan TW, et al. The improvement guide: a practical approach to
enhancing organizational performance. 2nd ed. San Francisco: Jossey-Bass; 2009.
Lippke S, Ziegelmann JP: Theory-based health behavior change: developing, testing, and applying
theories for evidence-based interventions. Appl Psychol 2008, 57(4):698–716.
McAteer J, Stone C, Fuller R, Slade R, Michie S: Translating self-regulation theory into a hand-
hygiene behaviour intervention for UK healthcare workers. Health Psychology Review 2007,
1(Supplement 1):302
McKenzie JE, French SD, O’Connor DA, et al: IMPLEmenting a clinical practice guideline for
acute low back pain evidence-based manageMENT in general practice (IMPLEMENT): Cluster
randomised controlled trial study protocol. Implement Sci 2008, 3:11.
Medical Research Council: A framework for development and evaluation of RCTs for complex
interventions to improve health. London: MRC; 2000.
Medical Research Council: Developing and evaluating complex interventions: new guidance.
London: MRC; 2008.
Michie S, Johnston M, Abraham C, et al, on behalf of the "Psychological Theory" Group: Making
psychological theory useful for implementing evidence based practice: a consensus approach.
Quality Safety in Health Care 2005, 14(1):26–33.
Michie S, Johnston M, Francis J, et al: From theory to intervention: mapping theoretically derived
behavioural determinants to behaviour change techniques. Appl Psychol 2008, 57(4):660–680.
Michie S, Johnston M: Changing clinical behaviour by making guidelines specific. BMJ 2004,
328(7435):343–345.
Michie S, Lester K: Words matter: increasing the implementation of clinical guidelines. Qual Saf
Health Care 2005, 14(5):367–370.
Noar SM, Zimmerman RS: Health Behavior Theory and cumulative knowledge regarding health
behaviors: are we moving in the right direction?. Health Educ Res 2005, 20(3):275–290.
Perla RJ, Provost LP, Murray SK. The run chart: a simple analytical tool for learning from
variation in healthcare processes. BMJ Qual Saf. 2011;20(1):46–51
Provost LP, Murray S. The health care data guide: learning from data for improvement. John
Wiley & Sons; 2011
Rogers E. Diffusion of innovations. 5th ed. New York: Simon & Schuster; 2003.
Rothman JA. "Is there nothing more practical than a good theory?": Why innovations and advances
in health behavior change will arise if interventions are used to test and refine theory. International
Journal of Behavioral Nutrition and Physical Activity 2004, 1:11
Ryan TP. Statistical methods for quality improvement. John Wiley & Sons; 2011.
Strome TL. Healthcare analytics for quality and performance improvement. John Wiley & Sons;
2013.
van Bokhoven MA, Kok G, van der Weijden T: Designing a quality improvement intervention: a
systematic approach. Qual Saf Health Care 2003, 12(3):215–220.
The World Health Organization (WHO) Research Priority Setting Working Group states that
‘‘understanding the magnitude of the problem and the main contributing factors that lead to patient
harm is essential to devise effective and efficient solutions for different contexts and
environments and to build safer health systems.’’ Many health systems and facilities engage
regularly in activities referred to as quality assurance, quality improvement, performance
improvement or audit in order to monitor and improve the care they provide to patients. Quality
assurance and audit refer to practices that aim to review how care is being delivered and compare
it with a set of explicit criteria to determine how it can be improved, whereas quality improvement
encompasses both prospective and retrospective activities that are meant to improve care by
determining why preventable harms or systematic inefficiencies occur and by designing techniques
to address them.
In addition, growing awareness of these concerns and possible strategies to address them has led
to a significant increase in the amount of research being conducted related to patient safety. Such
research is designed to document the extent, nature, and possible determinants of patient safety
incidents and to understand the effectiveness of interventions designed to prevent or reduce them.
Patient safety research is often included under the broader category of quality improvement, and
more generally of health services research. In fact, methods used to conduct patient safety research
are similar to those used in these other broader quality improvement and research activities,
including, for example, retrospective review of medical records, prospective observational data
collection, and randomized controlled trials. Patient safety research ideally results in interventions
and strategies that can be implemented in health care settings as a means of safety improvement
actions. International ethical guidelines for research require third party oversight of research by an
ethics review committee (REC) and also outline both principles and actions that should be
implemented as part of the ethical conduct of human research. As patient safety research and health
services research have become more widespread, ethics literature related to quality improvement
and patient safety activities and research has grown tremendously.
When a patient has experienced a serious incident that has had or could have an important effect on their health or
quality of life, the organization has an obligation to ensure that the incident is disclosed to the
patient, and measures are established to prevent similar occurrences. In addition, the organization
has an obligation to ensure that further measurement of actual practice is carried out to verify that
the system or process involved has been improved and that the situation is unlikely to recur.
Campbell et al (2000) advise that the QI design for complex interventions should
follow a sequential approach involving four steps:
1) Development of the theoretical basis for an intervention;
2) Definition of components of the intervention (using modelling, simulation techniques or
qualitative methods);
3) Exploratory studies to develop further the intervention and plan a definitive evaluative study
(using a variety of methods);
4) Definitive evaluative study (using quantitative evaluative methods, predominantly randomized
designs).
This framework demonstrates the interrelation between quantitative evaluative methods and
other methods; it also makes explicit that the design and conduct of quantitative evaluative
studies should build upon the findings of other quality improvement research.
Check on effectiveness of actions implemented
QI and clinical audit projects aim to improve or maintain the quality or safety of patient care.
However, there is a risk that the proposed changes taken to achieve improvements will be
ineffective or even possibly harmful. Therefore, changes in patient care or service delivery need
to be risk assessed to pre-empt what could go wrong during the implementation of a change and
to identify what to do if it does (Nelson, 2004; Cave and Nichols, 2007; Davidoff, 2007). QI or
clinical audit projects that do not achieve needed changes related to patient safety or provision of
patient care may fail to meet the ethical responsibilities of healthcare professionals or organizations
to improve quality. If a project indicates that effective practice is not now being provided to
patients, it would be unethical to continue to provide substandard care and to withhold
improvements in practice from patients. On the other hand, lessons learned about the clinical
impact and outcomes of successful projects that have achieved substantial improvements should
be disseminated within the organization in order to promote organizational learning and spread the
implementation of improvements.
Quality improvement (QI) is fundamentally a process of change in human behaviour, driven
largely by experiential learning. The development and adoption of quality improvement
interventions therefore depend heavily on changes in social policy, programmes or practices within
a specific context or environment of healthcare delivery. To understand a quality improvement intervention
clearly, readers need to understand how the intervention relates to general knowledge of the care
problem that necessitates improvement. This requires the authors to place their work within the
context of issues that are known to impact the quality of care. Context means ‘‘to weave together’’.
The context thus refers to the interweaving of the issues that stimulated the improvement idea and
several spatial, social, temporal and cultural factors within the local setting, all of which form the
“canvas upon which improvement is painted” (Ogrinc et al, 2008). The explanation of context
should go beyond a description of the physical setting and should include the organization (types of
patients served, staff providing care and care processes before introducing the intervention), the
governance structure, the health information systems, and the logistical framework, so as to enable
reviewers and readers to determine whether findings from the study are likely to be transferable
(that is, whether readers are able to relate them to their own care setting). In studies
with multiple sites, a table or matrix can be a convenient way to summarize similarities and differences
in context across sites. The table can specify the structures, processes, people and patterns of care
that are unique to each site and assist the reader in interpreting results.
Whereas controlled trials attempt to control the context to avoid selection bias, quality
improvement studies often seek to describe and understand the context in which the delivery of
care occurs. Pawson et al (2005) propose using a form of inquiry known as ‘‘realist evaluation’’
to explore complex, multi-component programmes that are designed to change performance. The
relevant questions in realist evaluation are: ‘‘what is it about this kind of intervention that works,
for whom, in what circumstances, in what respects and why?’’ Answering these questions within a
quality improvement report requires a thoughtful and thorough description of the background
circumstances into which the change was introduced. The description of the background
knowledge and local context of care need to be detailed. Placing information into the exact category is less important than ensuring that the background knowledge, local context and local problem are fully described. However, evidence-based clinical practice demands that researchers provide rigorous evidence on whether, how, and why quality improvement interventions work (Davidoff and Batalden, 2005; Ogrinc et al, 2008).
Therefore, proposals on quality improvement should seek to maintain the scientific and
methodological rigor that is necessary to generate generalizable evidence through a systematic
process of scientific inquiry that also follows acceptable ethical standards. The SQUIRE guidelines
are not exclusive of other guidelines. For instance, a quality improvement project or effectiveness
study that proposes to use a randomized controlled trial design should seriously consider using the
SPIRIT and CONSORT guidelines as well as the SQUIRE guidelines. Likewise, an improvement
project that uses extensive observational or qualitative techniques should consider the STROBE
guidelines and the SRQR guidelines as well as the SQUIRE guidelines. If the study validates a prediction model or estimates the diagnostic accuracy of a new intervention or investigation,
then researchers should plan to follow both the TRIPOD and STARD guidelines.
The title: Indicate that the proposal concerns an initiative to improve healthcare (broadly defined to include the quality, safety, effectiveness, patient-centeredness, timeliness, cost, efficiency, or equity of healthcare). These quality parameters need to be explicit. The title may also indicate the aim of the intervention, the type of setting, the approach to quality improvement, or the expected/intended outcomes.
The background
The background should provide a brief, non-selective summary of current knowledge of the care
problem being addressed, the gap in knowledge, the characteristics of organizations in which it
occurs, and the nature of possible interventions to improve quality. The review of the literature on quality improvement and patient safety should highlight papers that are primarily theoretical as well as some large-scale studies of improving quality. Including operational definitions of the terms ‘‘quality’’, ‘‘quality improvement’’ or ‘‘patient safety’’ is important for readers to identify the content and context of the quality improvement. Current MeSH headings include healthcare quality, access and evaluation; quality assurance, health care; quality control; quality indicators, health care; quality of health care; and total quality management. If these headings are used, they should be operationally defined.
1) The introduction: The introduction to a quality improvement paper should explicitly describe
the existing quality gap in relation to acceptable definitions of quality or patient safety. To be
as specific as possible, authors should describe the known standard or achievable best practice,
provide evidence that the local practice is not meeting that standard, highlight consequences
of this deficit, highlight the need for the proposed approach to quality improvement, and
emphasize the social value to be gained from improving the practice, service environment or
patient/client safety.
2) The research problem: A clear description of the problem, and why it is considered relevant
and amenable for quality improvement initiatives. From this problem, a clear purpose,
hypothesis or research question should be developed. There should be a summary of what is
currently known about the problem, including relevant previous studies, highlighting the
design, outcome measures, results and limitations.
3) The theory, theoretical model or conceptual model: Quality improvement is fundamentally a change
in behavior. The quality improvement study should be clear about the expected “change” that
leads to improvement, for instance, personal changes, changes in interpersonal interaction,
organizational change, or system-wide change (as in change in multiple factors and parameters
of a health system). Therefore, informal or formal frameworks, models, conceptual models,
and/or theories should be used to explain the problem and how the proposed intervention is
expected to work. The intervention, as well as any reasons or assumptions that were used to
develop the intervention(s), and reasons why the intervention(s) was expected to work, should
be explicit, in line with the theory or conceptual model for the improvement or ‘change’. In
case the model is borrowed from another discipline and is to be adapted to the study, the
reasons for choosing the model, the strength of the model or theory, the intended modifications
of the model and plans to assess model or theory fit have to be explicit.
4) The Context: Contextual elements considered important at the outset of introducing
the intervention(s). The introduction should describe the nature and severity of the specific
local problem or system dysfunction to be addressed.
5) Aims or objectives: For aims or objectives, the proposal should describe the specific aim
(changes/improvements in care processes and patient outcomes) of the proposed intervention.
It should also specify who (champions, supporters) and what (events, observations) triggered
the decision to make changes, and why now (timing).
Methods section
The intervention: Approach chosen for assessing the impact of the intervention(s), and approach
used to establish whether the observed outcomes were due to the intervention(s)
Study measures
a) The description of the methods of evaluation outlines what the study will use to quantify improvement, why the measures were chosen, and how the investigators will obtain the data.
Measures chosen for studying processes and outcomes of the intervention(s), including
rationale for choosing them, their operational definitions, and their validity and reliability
b) Description of the approach to the ongoing assessment of contextual elements that contributed
to the success, failure, efficiency, and cost
Data analysis
a) The analysis plan is intimately related to the study design. The analysis plan for quality
improvement data should show that the quality improvement initiative or strategy resulted in change (which is often multi-faceted) and led to measurable differences in the process,
outcome or impact measures.
b) Qualitative and quantitative methods may be used to draw inferences from the data
c) The data analysis should include methods for understanding variation within the data between
or within participants, including the effects of time as a variable
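As one concrete instance of such an analysis, a simple two-proportion z-test can quantify whether a process measure changed after an intervention. The counts below are invented for illustration, and only the Python standard library is used:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test comparing two proportions, e.g. the share of
    patients receiving a recommended care process before (x1/n1)
    vs after (x2/n2) an intervention."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))          # two-sided p-value
    return z, p_value

# Hypothetical audit counts: 130/200 compliant before, 170/200 after
z, p = two_proportion_z(130, 200, 170, 200)
print(f"z = {z:.2f}, p = {p:.2g}")
```

A single before/after comparison ignores time; for time-ordered QI data, run charts or statistical process control methods capture variation within and between periods more faithfully, as item c) suggests.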
1) Identify relevant elements of setting or settings (for example, geography, physical resources,
organizational culture, history of change efforts), and structures and patterns of care (for
example, staffing, leadership) that provided context for the intervention
2) Explain the actual course of the intervention (for example, sequence of steps, events or phases;
type and number of participants at key points) preferably using a time-line diagram or flow
chart
3) Describe how and why the initial plan evolved, and the most important lessons learned from
that evolution, particularly the effects of internal feedback from tests of change (reflexiveness)
1) Data on changes observed in measures of patient outcome (for example, morbidity, mortality,
function, patient/staff satisfaction, service utilisation, cost, care disparities)
Ethical considerations
The ethical principles of autonomy (respect for freedom of choice), beneficence (acting to benefit the patient, avoiding self-interest), non-maleficence (doing no harm), justice (fairness and equitable care) and duty (adherence to one’s professional and organizational responsibilities) underpin the
delivery of health care and quality improvement efforts. The same principles should underpin the
planning, implementation, and publishing of quality improvement research. The research proposal
should describe ethical aspects of implementing and studying the improvement, such as privacy
concerns, protection of participants’ physical wellbeing, potential harms or risks to participants,
author conflicts of interest, formal ethical approvals and permissions, including data storage, data
sharing and material transfer.
Data interpretation
a) Once the measures have been chosen, the investigator needs to develop operational data definitions and collection forms, and determine how the data will be collected. The methods of data collection and data quality management should be described concisely so that others may replicate the project. The initial steps of the intervention(s) and their evolution over time should also be described (for example, in a time-line diagram, flow chart, or table), including modifications made to the intervention during the project.
b) Potential reasons for any differences between observed and anticipated outcomes, including
the influence of context
Limitations
a) Factors that might have limited internal validity, such as confounding, bias, or imprecision in
the design, methods, measurement, or analysis
Decisions about the groups to be included or excluded (for example, patient characteristics such as gender, race, ethnicity, age or disease site, or staff characteristics such as profession or role in a healthcare organization) need to be justified (O’Kane, 2007). In addition, the potential burdens or risks and
the potential benefits of QI or clinical audit projects should be distributed fairly across the
population of patients who are served by the healthcare organization.
Important terms to consider in determining when the IRB is responsible for overseeing the rights,
safety, and well-being of human research participants include research and human subjects.
Human subjects research is defined as systematic investigation, including research development, testing, and evaluation, designed to develop or contribute to generalizable knowledge, involving humans. A human subject is defined as a living individual about whom an investigator conducting research obtains (a) data through intervention or interaction with the individual, or (b) identifiable private information (USDHHS, OHRP, 2004; USDHHS, OHRP, 2008). The OHRP
has a critical role in protecting human subjects during research activities (USDHHS, OHRP, 2009;
Wagner, 2003).
As more and more healthcare providers become involved in quality improvement and research activities, it is challenging to determine whether an activity constitutes QI that does not require IRB approval, QI that requires IRB approval, or research that requires IRB approval. Along that continuum is a grey zone. This differentiation is difficult, as evidenced by a study of expert opinion among QI leaders, IRB chairs, and journal editors, which found varying levels of congruence (Lindenauer et al, 2002).
Whether QI projects intended for dissemination beyond the health institution or system must undergo IRB approval is controversial (USDHHS, OHRP, 2009), as the sole intent to publish the findings of a QI project is deemed an insufficient criterion for determining whether a QI activity involves research. The regulatory
definition under 45 CFR 46.102(d) is “Research means a systematic investigation including
research development, testing and evaluation designed to develop or contribute to
generalizable knowledge.” Planning to publish an account of a QI project does not necessarily
mean that the project fits the definition of research (USDHHS, 2009). To distinguish a QI
project from a research project when submitting a manuscript for publication (if the QI project has not undergone IRB approval), the following headings have been recommended: Issue, Imperative for Project, Procedures of Collecting and Evaluating Data, Information Found, and Lessons Learned (Platteborze et al., 2010, p. 291).
d) It may not always be clear who is accountable for the effective conduct of QI and clinical audit
projects, and who is responsible for ensuring that ethical issues are identified, considered and
addressed (Bellin and Dubler, 2001). Even so, it may not always be clear whether a QI
initiative is audit or research (Hughes, 2005). Therefore, a healthcare organization needs to
ensure that these projects have appropriate independent ethics review and oversight as part of
the clinical governance arrangements in the organization (Cretin et al, 2000; Morris and
Dracup, 2007). The ethical oversight structure also should include the organization’s patient
safety programme because these activities also can involve risks to patients (Rix and Cutting,
1996; Perneger, 2004; Wade, 2005; Boult and Maddern, 2007). Oversight should protect
patients from ad hoc or poorly conceived projects and should ensure that the organization has
a robust strategic programme that is achieving substantial improvements in the quality and
safety of patient care
e) Some organizations have considered that a Research Ethics Committee can be asked to oversee
QI and clinical audit projects from an ethics perspective (Bottrell, 2007). Another suggestion has
been that the Chair of a Research Ethics Committee could be asked for guidance in relation to
ethical issues in QI or clinical audit projects and could authorize projects that involve no more
than minimal risk to patients. However, a number of reasons have been given for not involving
a Research Ethics Committee in QI and clinical audit activities including the following: There
are significant differences between research and QI or clinical audit with regard to the
obligations of a healthcare organization. Research is an optional activity in a healthcare
4) Provide for ethical consideration of a QI or clinical audit project that is designed to contain or
control or reduce costs
5) Include carrying out QI and clinical audit projects in job descriptions and performance
appraisals for all clinical staff
6) Teach staff about the organization’s policies and systems for identifying and managing ethics
issues in QI and clinical audit projects
7) Track completion of QI and clinical audit projects
8) Review potential publication of QI or clinical audit projects
4) Some QI activities involve the testing of alternative systems or methods for organizing or
delivering care. This type of activity most appropriately should be identified as QI research.
Such projects typically involve patients accessing care or services that differ from established best practice or usual clinical care, and therefore meet the criteria that define a research study. These QI research projects require formal ethics committee application and review (Weiserbs
et al, 2009). The results of the interventions being tested in the research are unknown, and
therefore, patients are at risk of not receiving care that will benefit them. Moreover, patients may be harmed by participation (or non-participation) in the interventions.
References
Barton A. Monitoring body is needed for audit. BMJ 1997;315:1465.
Bellin E, Dubler NN. The quality improvement-research divide and the need for external oversight.
Am J Public Health 2001;91(9):1512–7.
Bottrell MM. Accountability for the conduct of quality-improvement projects. In: Jennings B,
Baily MA, Bottrell M, Lynn J, editors. Health Care Quality Improvement: Ethical and Regulatory
Issues; 2007, 129–144. Available at: www.thehastingscenter.org/wp-content/uploads/Health-Care-Quality-Improvement.pdf.
Boult M, Maddern GJ. Clinical audits: why and for whom. ANZ J Surg 2007;77:572–8.
Brown LH, Shah MN, Menegazzi JJ. Research and quality improvement: drawing lines in the grey
zone (editorial). Prehosp Emerg Care 2007;11:350–1.
Campbell M, Fitzpatrick R, Haines A, et al. Framework for design and evaluation of complex
interventions to improve health. BMJ 2000;321:694–6.
Candib LM. How turning a QI project into ‘research’ almost sank a great program. Hastings Center
Report 2007;37:26–30.
Carr ECJ. Talking on the telephone with people who have experienced pain in hospital: clinical
audit or research? J Adv Nurs 1999;29(1):194–200.
Casarett D, Karlawish JHT, Sugarman J. Determining when quality improvement initiatives
should be considered research. JAMA 2000;283(17):2275–80.
Cave E, Nichols C. Clinical audit and reform of the UK research ethics review system. Theor
Med Bioeth 2007;28(3):181–203.
Choo V. Thin line between research and audit (commentary). Lancet 1998;352:337–8.
Cretin S, Lynn J, Batalden PB, Berwick DM. Should patients in quality-improvement activities
have the same protections as participants in research studies? JAMA 2000;284(14):1786.
Davidoff F. Publication and the ethics of quality improvement. In: Jennings B, Baily MA, Bottrell
M, Lynn J, editors. Health Care Quality Improvement: Ethical and Regulatory Issues; 2007, 101–6. Available at: www.thehastingscenter.org/wp-content/uploads/Health-Care-Quality-Improvement.pdf.
Doezema D, Hauswald M. Quality improvement or research: a distinction without a difference?
IRB 2002; 24:9–12.
Doyal L. Preserving moral quality in research, audit, and quality improvement. Qual Saf Health
Care 2004;13:11–2.
Gerrish K, Mawson S. Research, audit, practice development and service evaluation: Implications
for research and clinical governance. Practice Development in Health Care 2005;4(1):33–9.
Hagen B, O’Beirne M, Desai S, Stingl M, Pachnowski CA, Hayward S. Innovations in the Ethical
Review of Health-related Quality Improvement and Research: The Alberta Research Ethics
Community Consensus Initiative (ARECCI). Healthc Policy 2007;2(4):1–14.
Hughes R. Is audit research? The relationships between clinical audit and social research. Int J
Health Care Qual Ass 2005;18(4):289–99.
Kaktins, N. M. (2009). Faculty guide to the institutional review board process. Nurse Educator,
34, 244-248.
Kinn S. The relationship between clinical audit and ethics. J Med Ethics 1997;23:250–3.
Kotzer, A. M. & Milton, J. (2007). An education initiative to increase staff knowledge of
Institutional Review Board guidelines in the USA. Nursing and Health Sciences, 9, 103-106.
Layer T. Ethical conduct recommendations for quality improvement projects. J Healthc Qual
2005;25(4):44–6.
Lemaire F. Informed consent and studies of a quality improvement program (letter). JAMA 2008;
300:1762.
Lindenauer, P. K., Benjamin, E. M., Naglieri-Prescod, D., et al (2002). The role of the institutional
review board in quality improvement: A survey of quality officers, institutional review board
chairs and journal editors. American Journal of Medicine, 113, 575-579.
Lo B, Groman M. Oversight of quality improvement. Focusing on benefits and risks. Arch Intern
Med 2003;163(12):1481–6.
Lowe J, Kerridge I. Implementation of guidelines for no-CPR orders by a general medical unit in
a teaching hospital. Aust N Z J Med 1997;27(4):379–83.
Lynn J. When does quality improvement count as research? Human subject protection and theories
of knowledge. Qual Saf Health Care 2004;13:67–70.
Lynn, J., Baily, M. A., Bottrell, M., et al. (2007). The ethics of using quality improvement methods
in health care. Annals of Internal Medicine, 146, 666-674.
Markman M. The role of independent review to ensure ethical quality-improvement activities in
oncology: a commentary on the national debate regarding the distinction between quality-
improvement initiatives and clinical research. Cancer 2007;110(12):2597–600.
Maxwell DJ, Kaye KI. Multicentre research: negotiating the ethics approval obstacle course
(letter). Med J Aust 2004;181(8):460.
McNett, M, Lawry, K. (2009). Research and quality improvement activities: When is institutional
review board review needed? Journal of Neuroscience Nursing, 41, 344-347.
Miller FG, Emanuel EJ. Quality improvement research and informed consent. N Engl J Med
2008;358(8):765–7.
Morris PE, Dracup K. Quality improvement or research? The ethics of hospital project oversight.
Am J Crit Care 2007;16:424–6.
Neff MJ. Institutional Review Board consideration of chart reviews, case reports, and
observational studies. Respir Care 2008;53(10):1350–3.
Nelson WA. Proposed ethical guidelines for quality improvement. Healthc Exec 2014; 29(2):52,
54–5.
O’Kane ME. Do patients need to be protected from quality improvement? In: Jennings B, Baily
MA, Bottrell M, Lynn J, editors. Health Care Quality Improvement: Ethical and Regulatory Issues;
2007, pp. 89–99. Available at: www.thehastingscenter.org/wp-content/uploads/Health-Care-Quality-Improvement.pdf.
Ogrinc G, Nelson WA, Adams SM, O’Hara AE. An instrument to differentiate between clinical
research and quality improvement. IRB 2013;35(5):1–8.
Ogrinc G, Mooney SE, Estrada C, et al. The SQUIRE (Standards for QUality Improvement
Reporting Excellence) guidelines for quality improvement reporting: explanation and elaboration.
Qual Saf Health Care 2008;17(Suppl I):i13–i32.
Palevsky PM, Washington MS, Stevenson JA, et al. Improving compliance with the dialysis
prescription as a strategy to increase the delivered dose of hemodialysis: an ESRD network 4
quality improvement project. Adv Ren Replace Ther 2000;7(4 Suppl 1):S21–30.
Pawson R, Greenhalgh T, Harvey G, et al. Realist review—a new method of systematic review
designed for complex policy interventions. J Health Serv Res Policy 2005;10(suppl 1):21–34.
Pfadenhauer LM, Gerhardus A, Mozygemba K, et al. Making sense of complexity in context and implementation: the Context and Implementation of Complex Interventions (CICI) framework. Implementation Science 2017;12:21.
Perneger TV. Why we need ethical oversight of quality improvement projects. Int J Qual Health
Care 2004;16(5):343–44.
Platteborze, L. S., Young-McCaughan, S., King-Letzkus, I, et al, (2010). Performance
improvement/research advisory panel: A model for determining whether a project is a performance
or quality improvement activity or research. Military Medicine, 175, 289-291.
Reynolds J, Crichton N, Fisher W, Sacks S. Determining the need for ethical review: a three-stage
Delphi study. J Med Ethics 2008;34:889–94.
Rivera, S. (2008). Clinical research from proposal to implementation: What every clinical
investigator should know about the institutional review board. Journal of Investigative Medicine,
56, 975-984.
Rix G, Cutting K. Clinical audit, the case for ethical scrutiny? Int J Health Care Qual Ass
1996;9(6):18–20.
Siegel MD, Alfano S. The ethics of quality improvement research. Crit Care Med 2009;37(2):791–
2.
Sims, J. M. (2008). An introduction to institutional review boards. Dimensions of Critical Care
Nursing, 27, 223-225.
Taylor HA, Pronovost PJ, Sugarman J. Ethics, oversight and quality improvement initiatives. Qual
Saf Health Care 2010;19(4):271–4.
United States Department of Health & Human Services Office for Human Research Protections.
(2004). Human subject regulations decision charts. Retrieved from
http://www.hhs.gov/ohrp/humansubjects/guidance/decisioncharts.htm
United States Department of Health & Human Services Office for Human Research Protections.
(2008). Guidance on engagement of institutions in human subjects research. Retrieved from
http://www.hhs.gov/ohrp/humansubjects/guidance/engage08.html
United States Department of Health & Human Services, Office for Human Research Protections.
(2009). Quality improvement activities frequently asked questions. Retrieved from
http://www.hhs.gov/ohrp/qualityfaq.html
Wade DT. Ethics, audit, and research: all shades of grey. BMJ 2005;330:468–73.
Wagner, R. M. (2003). Ethical review of research involving human subjects: When and why is
IRB review necessary? Muscle & Nerve, 28, 27-39.
Weiserbs KF, Lyutic L, Weinberg J. Should quality improvement projects require IRB approval?
(letter). Acad Med 2009;84(2):153.
Whicher D, Kass N, Saghai Y, et al. The views of quality improvement professionals and
comparative effectiveness researchers on ethics, IRBs, and oversight. J Empir Res Hum Res Ethics
2015;10(2):132–44.
Wilson A, Grimshaw G, Baker R, Thompson J. Differentiating between audit and research: postal
survey of health authorities’ views. BMJ 1999;319:1235.
Wise LC. Ethical issues surrounding quality improvement activities. J Nurs Adm 2007;37(6):272–
8.
Countries and international organizations have increasing interest in how health systems perform.
This has led to the development of performance indicators for monitoring, assessing, and managing
health systems to achieve effectiveness, equity, efficiency, and quality. Although the indicators populate conceptual frameworks, it is often unclear what the underlying concepts are or how effectiveness is conceptualized and measured. Furthermore, there is a gap in the
knowledge of how the resultant performance data are used to stimulate improvement and to ensure
health care quality. Performance Improvement represents a critical strategy for improving health
system performance as well as hospital quality of care. QI is an iterative systematic approach to
planning and implementing continuous improvement in performance. QI emphasizes continuous
examination and improvement of work processes whereby teams of organizational members,
trained in basic statistical techniques and problem solving tools, use available data for decision-
making in order to improve quality or performance. The systemic focus of performance
improvement emphasizes the increasing recognition that the quality of the care delivered by
clinicians depends on the performance capability of the organizational systems in which they work.
Patient safety
While individual clinician competence remains key for patient safety, the capability of
organizational systems to prevent errors, coordinate care among settings and practitioners, and
ensure that relevant, accurate information is available when needed are increasingly seen as
critical elements in providing high quality care (Institute of Medicine, 2000). As an indication of
the growing emphasis on organizational systems of care, the Joint Commission on Accreditation
of Healthcare Organizations, the National Committee for Quality Assurance, and the Peer Review
Organizations of the Centers for Medicare and Medicaid in the United States have all encouraged
hospitals to use QI methods. While QI holds promise for improving quality of care, hospitals that
adopt QI often struggle with its implementation (Shortell et al, 1998). Implementation refers to the
transition period, following a decision to adopt a new idea or practice, when intended users put
that new idea or practice into use, such as when clinical and nonclinical staff begin applying QI
principles and practices to improve clinical care processes (Klein and Sorra, 1996; Rogers, 2003).
Successful implementation is critical to the effectiveness of a QI initiative (Blumenthal and Kilo,
1998; Shortell et al, 1998). However, QI implementation places several demands on individuals and organizations. It requires sustained leadership, extensive training and support, robust
measurement and data systems, realigned incentives and human resources practices, and cultural
receptivity to change (Shortell et al, 1998; Ferlie and Shortell, 2001; Institute of Medicine, 2001;
Meyer et al. 2004). Also, the systemic nature of many quality problems implies that the
effectiveness of a QI initiative may depend on its implementation across many conditions,
disciplines, and departments, which further adds to the challenges (Gustafson et al. 1997; Blumenthal and Kilo, 1998; Meyer et al. 2004). If successful, though, implementing QI in this
manner creates resilient long-lasting infrastructure for enhancing organization-wide quality.
Core QI practices include the formation of teams to address quality problems, the use of scientific methods and statistical tools by these teams to monitor and analyze work processes, and the use of process-management tools (such as flow charts and run charts that graphically depict steps in a clinical process) to help team members use their collective knowledge effectively.
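The kind of signal a run chart is meant to surface can be sketched in a few lines of code. The rule below (six or more consecutive points on one side of the baseline median suggesting a non-random shift) is a commonly used run-chart convention, and the monthly values are invented for illustration:

```python
from statistics import median

def shifts(points, run=6):
    """Flag runs of `run` or more consecutive points on the same side
    of the baseline median -- a common run-chart signal that the
    process has shifted. Points equal to the median break the run
    in this simplified sketch."""
    m = median(points)
    signals = []
    side, start, count = 0, 0, 0
    for i, v in enumerate(points):
        s = 1 if v > m else (-1 if v < m else 0)
        if s != 0 and s == side:
            count += 1
        else:
            if side != 0 and count >= run:
                signals.append((start, i - 1))       # close a qualifying run
            side, start, count = s, i, (1 if s else 0)
    if side != 0 and count >= run:
        signals.append((start, len(points) - 1))     # run reaching the end
    return m, signals

# Hypothetical monthly process measure with a clear upward shift mid-series
m, flagged = shifts([8, 9, 10, 9, 8, 10, 11, 18, 19, 20, 19, 18, 20, 19])
print(m, flagged)
```

A team would normally plot these points and annotate flagged runs against the dates of its tests of change, rather than rely on the numeric output alone.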
Performance improvement and multi-disciplinary teams
Cross-functional teams play an integral role in QI because most vital work processes span
individuals, disciplines, and departments. Cross-functional teams bring together the many clinical
professionals and nonclinical hospital staff members who perform a process to document the
process in its entirety, diagnose the causes of quality problems, and develop and test possible
solutions to address them, with systematic analysis conducted at the organizational level (Shortell et al, 1998). Several studies have examined the structures, processes, and relationships
common to designing, organizing, and implementing hospital QI efforts (Barsness et al. 1993a, b;
Blumenthal and Edwards. 1995; Gilman and Lammers, 1995; Shortell, 1995; Shortell et al. 1995b;
Weiner et al, 1996; Weiner et al, 1997; Shortell, and Alexander, 1997; Westphal et al, 1997;
Berlowitz et al. 2003). The findings of these studies are that hospitals vary widely in terms of: (1)
their approach to implementing QI; (2) the extent to which QI has ‘‘penetrated’’ core clinical
processes; and (3) the degree to which QI practices have been diffused across clinical areas. Few
of these studies examined the effectiveness of hospital QI practices. With few exceptions (Westphal et al,
1997; Shortell et al. 2000), most have used perceptual measures of impact or self-reported
estimates of cost or clinical impact rather than objectively derived measures of clinical quality
(Gilman and Lammers 1995; Shortell et al. 1995b; Carlin et al 1996; O’Connor et al. 1996; Gordian
and Ballard 1997; Goldberg et al. 1998; Ferguson et al. 2003).
front-line workers. Moreover, senior managers who participate in QI teams may develop a
deeper understanding of the root causes of quality problems and feel greater ownership of
recommended solutions that such teams generate. As a result, senior managers may be more
willing to commit the resources and make the policy changes necessary to ameliorate systemic
causes of quality problems. Widespread physician participation in QI teams may also be
critical to QI effectiveness because physicians play a critical role in clinical resource allocation
decisions and possess the clinical expertise needed to differentiate appropriate from
inappropriate variation in care processes. Pervasive physician participation may not only
enhance the quality of analysis and problem solving in QI teams, but also support the
implementation of changes recommended by such teams. Research indicates that peer
influence can be a powerful lever for provider behavior change. Widespread physician
participation in QI teams may facilitate those changes in physician behavior needed to address
quality problems. A blended pattern may emerge whereby hospitals exhibit higher values on hospital-level quality indicators by encouraging many organizational members to participate in QI activities, yet limiting hospital deployment of QI to a few organizational units. Greater participation of
hospital staff and senior managers in QI teams is positively associated with higher values on
several hospital-level quality indicators, not just one or two. Perhaps intensive mobilization of
organizational personnel within organizational units (such as acute inpatient care) creates the
‘‘critical mass’’ necessary to overcome the structural, cultural, and technical barriers that often
obstruct organization-wide application of QI or otherwise restrict the gains from QI activity to
a few clinical outcomes.
Another practical issue: the role of physicians in clinical QI efforts. Lack of physician involvement
represents the single most important obstacle to the success of clinical QI (Berwick et al. 1990;
Health Care Advisory Board 1992; McLaughlin and Kaluzny 1994; Blumenthal and Edwards 1995;
Shortell 1995). Yet physicians are often reluctant to participate in QI projects because of distrust
of hospital motives, lack of time, and fear that reducing variation in clinical processes will
compromise their ability to tailor care to individual needs (Blumenthal and Edwards 1995;
Shortell 1995; Shortell et al. 1995a). Study results suggest that widespread physician participation
in QI teams, while perhaps desirable, might not be necessary. Widespread participation of hospital
staff and senior managers, it seems, is more important, at least for the hospital-level quality
indicators examined here. Rather than attempting to mobilize much of the medical staff, hospital
leaders could perhaps secure needed physician input by involving selected physicians on an as-
needed basis.
In managing for results, budgets are developed in relation to inputs, activities and outputs, while
the aim is to manage towards achieving the outcomes and impacts.
Performance indicators
Suitable indicators need to be specified to measure performance in relation to inputs, activities,
outputs, outcomes and impacts. The challenge is to specify indicators that measure things that are
useful from a management and accountability perspective. This means managers need to be
selective when defining indicators. Defining a good performance indicator requires careful
analysis of what is to be measured. One needs to have a thorough understanding of the nature of
the input or output, the activities, the desired outcomes and impacts, and all relevant definitions
and standards used in the field. For this reason it is important to involve subject experts and line
managers in the process.
A good performance indicator should be:
a) Reliable: the indicator should be accurate enough for its intended use and respond to changes
in the level of performance.
b) Well-defined: the indicator needs to have a clear, unambiguous definition so that data will be
collected consistently, and be easy to understand and use.
c) Verifiable: it must be possible to validate the processes and systems that produce the indicator.
d) Cost-effective: the usefulness of the indicator must justify the cost of collecting the data.
e) Appropriate: the indicator must avoid unintended consequences and encourage service
delivery improvements, and not give managers incentives to carry out activities simply to meet
a particular target.
f) Relevant: the indicator must relate logically and directly to an aspect of the institution's
mandate, and the realization of strategic goals and objectives.
Institutions should include performance indicators related to the provision of goods and services.
These describe the interface between government and the public, and are useful for monitoring
and improving performance in areas that matter directly to the citizens of the country.
Where possible, indicators that directly measure inputs, activities, outputs, outcomes and impacts
should be sought. This is not always possible and in such instances, proxy indicators may need to
be considered. Typical direct indicators include cost or price, distribution, quantity, quality,
dates and time frames, adequacy and accessibility.
a) Cost or Price indicators are both important in determining the economy and efficiency of
service delivery.
b) Distribution indicators relate to the distribution of capacity to deliver services and are critical
to assessing equity across geographical areas, urban-rural divides or demographic categories.
Such information could be presented using geographic information systems.
c) Quantity indicators relate to the number of inputs, activities or outputs. Quantity indicators
should generally be time-bound; e.g. the number of inputs available at a specific point in time,
or the number of outputs produced over a specific time period.
d) Quality indicators reflect the quality of that which is being measured against predetermined
standards. Such standards should reflect the needs and expectations of affected parties while
balancing economy and effectiveness. Standards could include legislated standards and
industry codes.
e) Dates and time frame indicators reflect timeliness of service delivery. They include service
frequency measures, waiting times, response time, turnaround times, time frames for service
delivery and timeliness of service delivery.
f) Adequacy indicators reflect the quantity of input or output relative to the need or demand -
"Is enough being done to address the problem?".
g) Accessibility indicators reflect the extent to which the intended beneficiaries are able to
access services or outputs. Such indicators could include distances to service points,
travelling time, waiting time, affordability, language, and accommodation of the physically
challenged.
All government institutions are encouraged to pay particular attention to developing indicators
that measure economy, efficiency, effectiveness and equity using data collected through these
and other direct indicators.
h) Economy indicators: explore whether specific inputs are acquired at the lowest cost and at
the right time; and whether the method of producing the requisite outputs is economical.
Economy indicators only have meaning in a relative sense. To evaluate whether an institution
is acting economically, its economy indicators need to be compared to similar measures in
other state institutions or in the private sector, either in South Africa or abroad. Such indicators
can also be compared over time, but then prices must be adjusted for inflation.
i) Efficiency indicators: explore how productively inputs are translated into outputs. An
efficient operation maximizes the level of output for a given set of inputs, or it minimizes the
inputs required to produce a given level of output. Efficiency indicators are usually measured
by an input:output ratio or an output:input ratio. These indicators also only have meaning in
a relative sense. To evaluate whether an institution is efficient, its efficiency indicators need to
be compared to similar indicators elsewhere or across time. An institution's efficiency can also
be measured relative to predetermined efficiency targets.
j) Effectiveness indicators: explore the extent to which the outputs of an institution achieve the
desired outcomes. An effectiveness indicator assumes a model of how inputs and outputs relate
to the achievement of an institution's strategic objectives and goals. Such a model also needs
to account for other factors that may affect the achievement of the outcome. Changes in
effectiveness indicators are only likely to take place over a period of years, so it is only
necessary to evaluate the effectiveness of an institution every three to five years; or an
institution may decide to evaluate the effectiveness of its different programmes on a rolling
three-to-five-year schedule.
k) Equity indicators: explore whether services are being provided impartially, fairly and
equitably. Equity indicators reflect the extent to which an institution has achieved and been
able to maintain an equitable supply of comparable outputs across demographic groups,
regions, urban and rural areas, and so on.
Often specific benefit-incidence studies will be needed to gather information on equity. The aim
of such studies would be to answer the question: "Who benefits from the outputs being delivered?"
Usually equity is measured against benchmark standards or on a comparative basis.
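Because economy and efficiency indicators only have meaning in a relative sense, they are computed and then compared against another period, institution, or target. A minimal sketch of both calculations follows; all figures (costs, the inflation rate, outputs, inputs, and the target ratio) are invented for the example:

```python
# Hypothetical sketch of two relative indicators. All figures are assumed.

def real_cost(nominal_cost: float, cumulative_inflation: float) -> float:
    """Economy: deflate a nominal unit cost back to base-year prices."""
    return nominal_cost / (1 + cumulative_inflation)

def efficiency(outputs: float, inputs: float) -> float:
    """Efficiency: outputs produced per unit of input."""
    return outputs / inputs

# Economy indicator: compare unit costs across years in real terms.
cost_base_year = 100.0        # unit cost in the base year
cost_later_nominal = 112.0    # unit cost two years later, nominal
assumed_inflation = 0.10      # assumed 10% cumulative inflation
cost_later_real = real_cost(cost_later_nominal, assumed_inflation)
# In real terms the later unit cost is ~101.8, a rise of under 2%
# above inflation rather than the 12% the nominal figures suggest.

# Efficiency indicator: compare an output:input ratio against a target.
ratio = efficiency(1200, 400.0)   # e.g. 3.0 patients treated per nurse-day
meets_target = ratio >= 2.8       # predetermined efficiency target

print(round(cost_later_real, 1), ratio, meets_target)  # 101.8 3.0 True
```

The same two functions can be reused for comparisons across institutions or across time, as the text recommends, provided prices are always deflated to a common base year first.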
Institutions may also use the results of opinion surveys as indicators of their performance. Such
survey-based indicators should not replace direct and proxy indicators, but rather complement them.
If an institution uses such surveys, it is important that they be professionally designed.
The baseline
This is the current level of performance that the institution aims to improve. The initial step in
setting performance targets is to identify the baseline, which in most instances is the level of
performance recorded in the year prior to the planning period. So, in the case of annual plans, the
baseline will shift each year and the first year's performance will become the following year's
baseline. Where a system for managing performance is being set up, initial baseline information
is often not available. This should not be an obstacle - one needs to start measuring results in
order to establish a baseline.
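The rolling-baseline mechanism described above can be sketched as follows; the performance figures are hypothetical:

```python
# Hypothetical sketch: under annual planning, the baseline for each year is
# simply the performance recorded in the previous year. Figures are invented
# (e.g. percentage service coverage recorded per year).

recorded_performance = {2021: 62.0, 2022: 68.0, 2023: 71.0}

# Each year's recorded result becomes the following year's baseline.
baselines = {year + 1: value for year, value in recorded_performance.items()}

print(baselines)  # {2022: 62.0, 2023: 68.0, 2024: 71.0}
```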
Performance targets
These express a specific level of performance that the institution, programme or individual
is aiming to achieve within a given time period.
Performance standards
These standards express the minimum acceptable level of performance, or the level of performance
that is generally expected. These should be informed by legislative requirements, departmental
policies and service-level agreements. They can also be benchmarked against performance levels
in other institutions, or according to accepted best practices. The decision to express the desired
level of performance in terms of a target or a standard depends on the nature of the performance
indicators. Often standards and targets are complementary. For example, the standard for
processing pension applications is 21 working days, and a complementary target may be to process
90 per cent of applications within this time. Performance standards and performance targets should
be specified prior to the beginning of a service cycle, which may be a strategic planning period or
a financial year. This is so that the institution and its managers know what they are responsible for,
and can be held accountable at the end of the cycle. While standards are generally "timeless",
targets need to be set in relation to a specific period. The targets for outcomes will tend to span
multi-year periods, while the targets for inputs, activities and outputs should cover either quarterly
or annual periods. An institution should use standards and targets throughout the organization, as
part of its internal management plans and individual performance management system. A useful
set of criteria for selecting performance targets is the "SMART" criteria:
a) Specific: the nature and the required level of performance can be clearly identified
b) Measurable: the required performance can be measured
c) Achievable: the target is realistic given existing capacity
d) Relevant: the required performance is linked to the achievement of a goal
e) Time-bound: the time period within which the performance must be achieved is specified
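The pension-processing example above, a standard of 21 working days with a complementary target of processing 90 per cent of applications within that standard, can be sketched as a simple compliance check; the processing times below are invented sample data:

```python
# Hypothetical check of a standard plus complementary target from the text:
# standard = 21 working days to process a pension application;
# target = 90% of applications processed within that standard.
# The processing times are invented sample data.

STANDARD_DAYS = 21
TARGET_SHARE = 0.90

processing_days = [10, 14, 19, 21, 22, 25, 18, 9, 30, 20]

within_standard = sum(1 for d in processing_days if d <= STANDARD_DAYS)
share = within_standard / len(processing_days)

print(f"{share:.0%} within {STANDARD_DAYS} working days")  # 70% within 21 working days
print("target met" if share >= TARGET_SHARE else "target missed")  # target missed
```

Note how the standard stays fixed ("timeless") while the target is the period-specific level of compliance to be reported against at the end of the cycle.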
the levels of the planning process. Specifying appropriate outputs often involves extensive policy
debates and careful analysis. The process of defining appropriate outputs needs to take into
consideration what is practical and the relative costs of different courses of action. It is also
important to assess the effectiveness of the chosen intervention.
Step 3: Select the most important indicators
There is no need to measure every aspect of service delivery and outputs. Fewer measures may
deliver a stronger message. Institutions should select indicators that measure important aspects of
the service that is being delivered, such as critical inputs, activities and key outputs. When selecting
indicators, it is important to keep the following elements in mind:
a) Clear communication: the indicators should communicate whether the institution is achieving
the strategic goals and objectives it set itself. The indicators should also be understandable to
all who need to use them.
b) Available data: the data for the chosen indicators needs to be readily available.
c) Manageability: the number of indicators needs to be manageable. Line managers would be
expected to track a greater number of indicators pertaining to a particular programme than,
say, the head official of the institution or the executive authority.
Step 4: Set realistic performance targets
When developing indicators there is always a temptation to set unrealistic performance targets.
However, doing so undermines the institution’s credibility and staff morale. Effective
performance management requires realistic, achievable targets that challenge the institution and
its staff. Ideally, targets should be set with reference to previous and existing levels of achievement
(i.e. current baselines), and realistic forecasts of what is possible. Where targets are set in relation
to service delivery standards it is important to recognize current service standards and what is
generally regarded as acceptable. The chosen performance targets should:
a) Communicate what will be achieved if the current policies and expenditure programmes are
maintained
b) Enable performance to be compared at regular intervals - on a monthly, quarterly or annual
basis as appropriate
c) Facilitate evaluations of the appropriateness of current policies and expenditure programmes.
Step 5: Determine the process and format for reporting performance
Performance information is only useful if it is consolidated and reported back into planning,
budgeting and implementation processes where it can be used for management decisions,
particularly for taking corrective action. This means getting the right information in the right
format to the right people at the right time. Institutions need to find out what information the
various users of performance information need, and develop formats and systems to ensure their
needs are met.
Step 6: Establish processes and mechanisms to facilitate corrective action
Regular monitoring and reporting of performance against expenditure plans and targets enables
managers to manage by giving them the information they need to take decisions to keep service
delivery on track. The information should help managers establish:
a) What has happened so far?
b) What is likely to happen if the current trends persist, say, for the rest of the financial year?
c) What actions, if any, need to be taken to achieve the agreed performance targets?
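The second monitoring question above, what is likely to happen if current trends persist, can be sketched as a simple run-rate projection; the annual target and year-to-date figures are invented:

```python
# Hypothetical sketch: project year-end output from year-to-date performance
# using a simple linear run-rate, then compare against the annual target.
# All figures are invented for the example.

annual_target = 12000    # outputs planned for the financial year
months_elapsed = 4
outputs_to_date = 3600

projected_year_end = outputs_to_date / months_elapsed * 12
on_track = projected_year_end >= annual_target

print(projected_year_end, on_track)  # 10800.0 False
```

A projection like this flags the shortfall early enough for managers to take the corrective action the third question asks about.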
Measuring, monitoring and managing performance are integral to improving service delivery.
e) Processes to review performance and take management action to ensure service delivery
stays on track.
f) Processes to evaluate performance at the end of a service delivery period.
g) Processes to ensure that responsibility for managing performance information is included in
the individual performance agreements of line managers and other officials.
h) An identified set of performance indicators for reporting for oversight purposes.
Management capacity
The accounting officer or head official of an institution must ensure there is adequate capacity to
integrate and manage performance information with existing management systems. Each
institution will need to decide on the appropriate positioning of the responsibility to manage
performance information. Ideally, this capacity should be aligned to the planning and financial
management functions. This responsibility needs to focus on the overall design and management
of indicators, data collection, collation and verification processes within the institution. Where
such systems are lacking, it is necessary to support the relevant line manager to put them in place.
Line managers remain responsible for establishing and running performance information systems
within their sections, and for using performance information to make decisions.
4) Healthcare organizations have limited resources, and many view performance improvement
activities as merely a cost center that adds no value to the organization (lack of
commitment).
5) There exists in healthcare a culture of shame, blame and fear associated with medical errors
and undesirable performance.
6) Turf issues among professionals (such as physicians and administrators) and departments (such
as admitting and nursing) are common problems.
7) Time constraints are often cited as a reason for not being able to participate in performance
improvement activities. Historically, administrators/management have not made giving staff
time to participate in improvement activities a priority.
8) Team members and others come to the project with their own agendas and work to achieve
their own goals that may or may not be in the best interest of the project.
9) Large improvement projects that drag on for long periods of time and lose focus or have little
success may suffer from loss of momentum.
10) The performance improvement process is too complex and unwieldy. Teams get bogged
down in minutiae instead of rapid cycles of improvement that obtain results and reinforce that
the process does work.
References
Berwick, D. M., A. B. Godfrey, and J. Roessner. 1990. Curing Health Care: New Strategies
for Quality Improvement. San Francisco: Jossey-Bass.
Blumenthal, D., and J. N. Edwards. 1995. ‘‘Involving Physicians in Total Quality Management:
Results of a Study.’’ In Improving Clinical Practice: Total Quality Management and the Physician,
edited by D. Blumenthal and A. C. Scheck, pp. 229–66. San Francisco: Jossey-Bass.
Blumenthal, D., and C. M. Kilo. 1998. ‘‘A Report Card on Continuous Quality Improvement.’’
Milbank Quarterly 76 (4): 625–48, 511.
Bradley, E. H., J. Herrin, J. A. Mattera, et al. 2005. ‘‘Quality Improvement Efforts and
Hospital Performance: Rates of Beta-Blocker Prescription after Acute Myocardial Infarction.’’
Medical Care 43 (3): 282–92.
Carlin, E., R. Carlson, and J. Nordin. 1996. ‘‘Using Continuous Quality Improvement Tools to
Improve Pediatric Immunization Rates.’’ Journal on Quality Improvement 22 (4): 277–88.
Carman, J. M., S. M. Shortell, R. W. Foster, et al. 1996. ‘‘Keys for Successful Implementation of
Total Quality Management in Hospitals.’’ Health Care Management Review 21 (1): 48–60.
Dean, J. W., and D. E. Bowen. 1994. ‘‘Management Theory and Total Quality Management:
Improving Research and Practice through Theory Development.’’ Academy of Management
Review 19 (3): 459–80.
Dubois, R. W., and R. H. Brook. 1988. ‘‘Preventable Deaths: Who, How Often, and Why?’’
Annals of Internal Medicine 109 (7): 582–9.
Ferlie, E. B., and S. M. Shortell. 2001. ‘‘Improving the Quality of Health Care in the United Kingdom
and the United States: A Framework for Change.’’ Milbank Quarterly 79 (2): 281–315.
Gallivan, M. J. 2001. ‘‘Information Technology Diffusion: A Review of Empirical Research.’’
Database for Advances in Information Systems 32: 51–85.
Gilman, S. C., and J. C. Lammers. 1995. ‘‘Tool Use and Team Success in CQI: Are All Tools
Created Equal?’’ Quality Management in Health Care 4 (1): 56–61.
Hackman, J. R., and R. Wageman. 1995. ‘‘Total Quality Management: Empirical, Conceptual,
and Practical Issues.’’ Administrative Science Quarterly 40 (2): 309–42.
Halm, E. A., C. Horowitz, A. Silver, et al. 2004. ‘‘Limited Impact of a Multicenter Intervention
to Improve the Quality and Efficiency of Pneumonia Care.’’ Chest 126 (1): 100–7.
Institute of Medicine. 2000. To Err Is Human. Washington, DC: National Academy Press.
Institute of Medicine. 2001. Crossing the Quality Chasm. Washington, DC: National Academy
Press.
Kinsman, L., E. James, and J. Ham. 2004. ‘‘An Interdisciplinary, Evidence-Based Process of
Clinical Pathway Implementation Increases Pathway Usage.’’ Lippincott’s Case Management
9 (4): 184–96.
Klein, K. J., A. B. Conn, and J. S. Sorra. 2001a. ‘‘Implementing Computerized Technology.’’
Journal of Applied Psychology 86 (5): 811–24.
Klein, K. J., and J. S. Sorra. 1996. ‘‘The Challenge of Innovation Implementation.’’ Academy of
Management Review 21 (4): 1055–80.
Krishnan, R., A. B. Shani, R. M. Grant, and R. Baer. 1993. ‘‘In Search of Quality Improvement:
Problems of Design and Implementation.’’ Academy of Management Executive 7 (4): 7–20.
Leape, L. L. 1994. ‘‘Error in Medicine.’’ JAMA 272 (23): 1851–7.
Lurie, J. D., E. J. Merrens, J. Lee, and M. E. Splaine. 2002. ‘‘An Approach to Hospital
Quality Improvement.’’ Medical Clinics of North America 86 (4): 825–45.
McLaughlin, C. P., and A. D. Kaluzny. 1994. Continuous Quality Improvement in Health Care:
Theory, Implementation, and Applications. Gaithersburg, MD: Aspen Publishers, Inc.
Mitchell, P. H., and S. M. Shortell. 1997. ‘‘Adverse Outcomes and Variations in Organization
of Care Delivery.’’ Medical Care 35 (11 suppl): N19–32.
O’Brien, J. L., S. M. Shortell, F. Hughes, et al. 1995. ‘‘An Integrative Model for Organization-wide
Quality Improvement: Lessons from the Field.’’ Quality Management in Health Care 3 (4): 19–30.
Powell, T. C. 1995. ‘‘Total Quality Management as Competitive Advantage: A Review and
Empirical Study.’’ Strategic Management Journal 16 (1): 15–37.
Rogers, E. M. 2003. Diffusion of Innovations. New York: Free Press.
Shortell, S. M. 1995. ‘‘Physician Involvement in Quality Improvement: Issues, Challenges, and
Recommendations.’’ In Improving Clinical Practice: Total Quality Management and the
Physician, edited by D. Blumenthal and A. C. Scheck, pp. 207–17. San Francisco: Jossey-Bass.
Shortell, S. M., C. L. Bennett, and G. R. Byck. 1998. ‘‘Assessing the Impact of Continuous
Quality Improvement on Clinical Practice: What It Will Take to Accelerate Progress.’’ Milbank
Quarterly 76 (4): 593–624, 510.
Shortell, S. M., R. H. Jones, A. W. Rademaker, et al. 2000. ‘‘Assessing the Impact of Total Quality
Management and Organizational Culture on Multiple Outcomes of Care for Coronary Artery
Bypass Graft Surgery Patients.’’ Medical Care 38 (2): 207–17.
Shortell, S. M., D. Z. Levin, J. L. O’Brien, and E. F. Hughes. 1995a. ‘‘Assessing the Evidence on CQI:
Is the Glass Half Empty or Half Full?’’ Hospital & Health Services Administration 40 (1): 4–24.
Shortell, S. M., J. L. O’Brien, J. M. Carman, et al. 1995b. ‘‘Assessing the Impact of Continuous
Quality Improvement/Total Quality Management: Concept versus Implementation.’’
Health Services Research 30 (2): 377–401.
Wakefield, B. J., M. A. Blegen, T. Uden-Holman, et al. 2001. ‘‘Organizational Culture, Continuous
Quality Improvement, and Medication Administration Error Reporting.’’ American Journal of
Medical Quality 16 (4): 128–34.
Weiner, B. J., J. A. Alexander, and S. M. Shortell. 1996. ‘‘Leadership for Quality Improvement in
Health Care: Empirical Evidence on Hospital Boards, Managers, and Physicians.’’ Medical Care
Research and Review 53 (4):397–416.
Weiner, B. J., J. A. Alexander, S. M. Shortell, et al. 2006. ‘‘Quality Improvement Implementation
and Hospital Performance on Quality Indicators.’’ Health Services Research 41 (2).
Weiner, B. J., S. M. Shortell, and J. A. Alexander. 1997. ‘‘Promoting Clinical Involvement
in Hospital Quality Improvement Efforts: The Effects of Top Management, Board, and Physician
Leadership.’’ Health Services Research 32 (4): 491–510.
Westphal, J. D., R. Gulati, and S. M. Shortell. 1997. ‘‘Customization or Conformity?’’
Administrative Science Quarterly 42 (2): 366–94.
QI activities can cause harm when privacy and confidentiality are breached or when patients are
treated unfairly. Furthermore, the lack of a clearly applied distinction between QI and research
along with the lack of QI ethical standards serves as an incentive for some to designate a research
study as a QI activity, thus circumventing the more rigorous research review process. The extent
of this problem is not known, yet it does present another ethical concern. Due to both the increase
in QI activities and the potential for patient privacy breaches, wasted resources and violations of
professional integrity, there is a need to ensure that QI activities are conducted within the context
of ethical behavior. These activities ought to be facilitated and monitored within the context of an
ethical framework to protect participants and the validity of the activity. Patient safety concerns
are common among health-care systems worldwide. Preventable harms result in pain, suffering,
and even death for patients and lead to increased costs for medical systems. Patient safety concerns
are now regarded as a serious public health threat. QI can be defined as systematic data-guided
activities designed to bring about immediate improvements in the delivery of health care within a
specific unit, institution, or system (Lynn et al., 2007). The purpose of QI activities is to determine
or improve quality, improve patient services, and/or improve the performance or provision of
health care usually within a specific health care unit, institution, organization or system
(Platteborze et al., 2010).
Quality Improvement Efforts and Ethical Standards
Quality care is a patient expectation and a responsibility of clinicians. Understanding the
relationship between quality and ethics can strengthen efforts to provide safe, high-quality care in
an ethical manner. Such an understanding will allow for providers and executives to see the
synergy between quality improvement efforts and ethics initiatives. Ethics is both the foundation
for quality healthcare and a driver for achieving it. The quality and safety of care are an
expectation of all patients and typically a prominent part of a healthcare facility’s mission
statement. Patients expect that the delivery of their care will be ethical, and this is often described
in a healthcare organization’s value statement. The expectation for (and the goal of) delivering
ethical and quality care reflect a strong and interdependent linkage between the two concepts.
Quality care is built on ethical standards and principles, and ethical practices foster quality care,
making the two inseparable. Just as quality and ethics are linked, so should healthcare programs
and quality improvement efforts.
Ethics is the foundation of quality
Several fundamental ethical principles drive the goal of providing high quality healthcare. The
principles are: autonomy (do not deprive freedom), beneficence (act to benefit the patient, avoiding
clinician or executive self-interest), nonmaleficence (do not harm), justice (fairness and equitable
care), and duty (adherence to one’s professional and organizational responsibilities). These
ethical principles form the foundation for a healthcare organization’s mission, staff members’
values and clinicians’ professional activities. Adhering to these principles and organizational
values is required to ensure quality care and patient safety; it is therefore an organization’s
mandate to ensure that quality care is achieved in all patient encounters. Ethics, then, is the
driver behind the goal of quality healthcare.
Ethics is the foundation for the defining dimensions of quality care.
The Institute of Medicine’s report, Crossing the Quality Chasm: A New Health System for the 21st
Century (2001), describes the key dimensions of care that need improvement. Care should be:
safe, effective, patient-centered, timely, efficient and equitable, all elements that are synergistic
with ethics and founded in the ethical concepts of quality care. For example, a patient-centered
approach to healthcare means providing a respectful adherence to the patient’s preferences and
values through a shared decision-making process. Such an approach is founded on the ethical
principles of autonomy and self-determination and is delineated in most healthcare organizations’
ethical standards of practice, such as an informed consent policy. Health equity is another aspect of quality
care that reflects an ethical understanding that all patients should receive quality care regardless of
their personal characteristics or socioeconomic status. Equity is based on the ethics concepts of
distributive justice and fairness.
The gaps between evidence-based practice and actual patient care delivered in healthcare
organizations are well documented. Healthcare professionals and organizations have an ethical
obligation to close the gaps in implementation of best practices and to overcome patient care
quality and safety shortcomings. Disciplined and focused QI efforts can increase the effectiveness
and safety of healthcare, and therefore, can be seen as an ethical imperative in healthcare services.
Failure to undertake QI projects could be harmful if the lack of participation perpetuates unsafe,
unnecessary or ineffective clinical practice. Widely accepted ethical standards exist for many
activities carried out in healthcare organizations, such as medical treatment and research. However,
arrangements for ensuring that QI and clinical audit projects conform to appropriate ethical
standards seem to be fragmented, and such standards have not been clearly or thoroughly
described. Many people think that only research studies require ethics review and that a QI project
or a clinical audit, which may involve using data that have been previously captured for patient
care, cannot have ethical implications. However, this assumption may not be justified. Any activity
that poses a risk of psychological or physical harm to any patient should have ethical consideration.
This includes clinical audit aimed at QI.
Healthcare organizations should provide ethical oversight of QI projects and clinical audits
because:
a) patients or carers can potentially experience burdens or risks through their participation in
these activities
b) some patients may benefit at the expense of others
c) the projects undertaken may not represent priorities for improving care, based on a
risk-benefit analysis from a patient care perspective.
Though QI and clinical audit projects have a different intent and focus,
the requirement for ethical consideration and oversight of QI activities should be no less stringent
than what is mandated for clinical research. Even then, QI activities can create potential conflicts
of interest when findings indicate shortfalls in care. The ethical duties of a healthcare organization
to all its patients need to be considered formally in such situations. Moreover, QI projects that are
not carried out properly are unlikely to benefit patients or patient care, and may even compromise
patient safety. If QI or clinical audit projects are poorly designed and unlikely to yield useful
results, the activity is not ethically justified. Furthermore, clinicians could, intentionally or
unintentionally, avoid the research ethics review process by designating a project as a QI
project or clinical audit rather than as research, thereby subjecting patients or participants to
unnecessary risk. Conversely, true research on QI interventions or the QI process itself
may not be recognized as research, and therefore, may not have appropriate ethics review.
Why QI Interventions May Require Ethical Review
Ethics review of proposed research studies is required because, while there should be clinical
equipoise (that is, genuine uncertainty about whether a treatment will be beneficial), there is a
risk that a participant may receive a treatment that is not optimal or may even be harmful (Lo and Groman, 2003).
Participation in research is voluntary, and therefore each participant in a research study is entitled
to choose whether or not to be a research participant. Individuals who volunteer to participate in
research should be safeguarded through effective ethical review of proposed research projects. It
is necessary to distinguish research, clinical audit and QI projects to ensure that each activity has
the appropriate type of ethics review or ethical oversight, though often, there is significant overlap,
particularly in implementation research and pragmatic clinical trials. A number of concepts have
been suggested as the basis for differentiating between research and QI or clinical audit, such as
purpose, systematic approach, production of generalisable new knowledge, treatment or allocation,
intention to publish, and focus on human participants. These concepts have not been validated as
reliably discriminating between research and QI studies; moreover, as QI studies become more
popular and sophisticated, many of them can apply to both research and QI studies. As yet, there
is no agreed set of criteria to follow that would ensure that any ethical issues embedded in a
project are identified and managed appropriately.
The ethics literature related to patient safety has devoted attention to when patient safety activities
should be considered research for the purposes of requiring ethical oversight, outlining various
criteria that can be helpful for making such a determination. These criteria fall into
several broad categories including the purpose of the project, the design of the project, whether
those directing and/or funding the project are internal or external to the institution where the
project will be implemented, and the generalizability of the project’s results to other settings or
future patients. Each of these criteria has been recommended as a useful indicator of whether a
project constitutes research and whether it must be reviewed by a REC.
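The four broad categories of criteria above can be read as a screening checklist. The sketch below is purely illustrative: the class, field names, and the any-signal rule are assumptions for exposition, not a validated instrument, and a real determination rests with institutional and national policy.

```python
from dataclasses import dataclass

@dataclass
class Project:
    # Each field mirrors one of the four broad criteria in the text.
    aims_to_generate_new_knowledge: bool        # purpose
    uses_fixed_protocol_or_randomization: bool  # design
    externally_funded: bool                     # funding source
    results_intended_to_generalize: bool        # generalizability

def likely_needs_rec_review(p: Project) -> bool:
    """Flag a project for REC review if any one of the four
    criteria points toward research rather than local QI."""
    return any([
        p.aims_to_generate_new_knowledge,
        p.uses_fixed_protocol_or_randomization,
        p.externally_funded,
        p.results_intended_to_generalize,
    ])

# A local hand-hygiene compliance audit: internally funded,
# flexible design, results meant only for the local setting.
audit = Project(False, False, False, False)
print(likely_needs_rec_review(audit))  # False: looks like local QI, not research
```

Note that the any-signal rule deliberately errs on the side of review; as the text observes, none of these criteria has been validated as reliably discriminating between research and QI.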
Generation of new knowledge versus implementing practices based on existing knowledge
1) The purpose of a patient safety activity, that is, whether the project is intended to generate
new knowledge or to implement practices based on existing knowledge, is relevant in
determining whether it should be considered research. If the stated purpose of a project is to
generate new knowledge and if it is designed with the scientific rigor to actually produce such
knowledge, then that project likely would be considered research
requiring ethical review. Yet patient safety activities designed to measure compliance with
recommended strategies, such as hand washing, or to improve compliance in individual
settings, generally would not be considered research.
2) Several aspects of the design of a patient safety activity have been cited as relevant to
determining whether the activity constitutes research. Research projects commonly rely on
strict protocol designs, whereas patient safety activities are generally more flexible because the
objective of these projects is often to bring about immediate improvements in care. QI methods
often require repeated modifications to the initial protocol as experience accumulates over time
and as the desired changes engage the changing factors in the context (local structures,
processes, patterns, habits, and traditions). Also, QI projects generally do not select among
participants. When a project involves randomization, it is more likely to be a
research project as projects involving randomization are generally less flexible than other
evaluation designs.
3) The turnaround time from data collection to implementation also matters in distinguishing
research from QI projects. QI results are reported back to the healthcare organization(s) or
teams where the project was implemented in a timely manner, usually as an iterative process,
and this feedback is a necessary part of the design. Patient safety activities are therefore more
likely than research projects to provide direct feedback (and implement changes) to those who
were involved. In many patient safety activities, the results are continuously reported back to
clinicians and clinical managers, and changes to the protocol can be made quickly on the basis
of the data fed back.
Project Funding Source: External Versus Internal
The funding source has been suggested as a criterion for determining whether a project
constitutes research: patient safety activities funded by external sources are more likely to be
research, whereas activities funded through internal institutional sources are more likely not to
be. Others argue, however, that the funding source itself is not the relevant criterion; rather, the
project’s intentions and goals are.
Generalizability of the Study Findings
Research is designed predominantly to benefit future patients or patients in other settings rather
than the participants involved in the project, unlike QI interventions, which are primarily
designed to benefit participants in a timely manner. Generalizable knowledge refers to the applicability of
the results to other settings, other practitioners, and other patients as well as to the enduring nature
of the knowledge gained. However, there is disagreement regarding the point at which knowledge
generated should be considered generalizable. Generalizability may imply that the results of a
project are applicable across settings in other organizations outside of those involved in the study.
Also, projects initially designed to improve care at a local setting may have results
that could be applied to other settings as well, making it difficult to delineate the point at which
the results of a project count as generalizable knowledge. One approach to identifying whether
results are potentially generalizable or not is to review the project’s hypothesis. If the hypothesis
is worded more generally, the findings of the study are potentially meant to be broadly applicable
to society and future patients. However, if a QI project’s hypothesis clearly specifies a time and
place where the results are meant to apply, then the project is less likely to be viewed as a research
project.
The focus of many QI projects on organizational processes rather than on individual patients
may mean that the interventions are a comparison of organizational behavior rather than
research on human subjects.
Whether the primary intention is to disseminate the results may also suggest whether a QI
project is research. Plans to broadly disseminate the results can be considered a proxy for the
intent to produce generalizable knowledge. If dissemination to a larger audience is the primary
goal, then the purpose is no longer to improve internal processes but to contribute to
generalizable knowledge, and the project should be treated as a research activity. This criterion
may, however, discourage patient safety professionals from disseminating the results of their
projects, with the potential to delay uptake of the QI knowledge gained.
Oversight of patient safety practice activities
Some commentators question whether it is useful to try to distinguish patient safety research
from practice, suggesting instead that ethical protection should be in place for all such
activities (Casarett et al, 2000; Byers and Aragon, 2003; Harrington, 2007; Diamond et al, 2004;
Dovey et al, 2011). The critical issue is not whether QI implementers are doing research; it is
whether appropriate steps are taken to protect those people who participate in their efforts to
improve care. The guiding principle should be that activities whose goals extend beyond the
immediate interests of patients should be interpreted as research and should undergo independent
review to ensure that patient interests are protected and patient safety is optimized. Requiring
patient safety programs to undergo oversight and approval by ethics committees is necessary as
those who designed the project may have a conflict of interest. However, requiring all patient
safety projects that involve systematic data collection to undergo review by RECs may become a
disincentive for clinicians, administrators, and other healthcare staff who are passionate about QI,
hindering them from collecting rigorous data that address patient safety questions.
Ethical oversight may not be required for all patient safety projects, but the decision about which
projects should be reviewed should rest with institutional guidelines and national ethical review
structures rather than with individual practitioners or healthcare institutions (Bellin and Dubler,
2001; Doezema, 2002; Kass et al, 2008; Nerenz, 2009; Platteborze et al, 2010). Ethical oversight
should be required for patient safety or QI research projects where the risk of harm to
participants is greater than minimal (minimal risk being the level of risk inherent in routine
clinical practice) or where reliable confidentiality measures are not in place. If ethics
oversight were to be expanded to all patient safety activities (rather than those strictly defined as
research), current RECs may not be appropriate bodies of oversight (Bottrell, 2006; Grady, 2007;
Lynn et al, 2007). Many RECs are already overburdened with reviewing research protocols and
may lack the expertise to review such projects, as the methods used in patient safety research
often differ from those used in research on health technologies or healthcare interventions.
Protocols for patient safety research are also often more flexible and more closely integrated
with clinical care than other research, and members of RECs may be less familiar with the
methods used to conduct these activities (Newhouse et al, 2006; Lemaire, 2008; Cacchione,
2011). Moreover, projects submitted to RECs may cause confusion among members regarding
whether a patient safety research project should undergo expedited or full committee review. In
addition, where multisite patient safety projects are reviewed by RECs, different RECs can vary
widely in their review of those projects (Doyal, 2004; Ezzat et al, 2010).
Baily et al (2008) have put forward models they consider more appropriate forms of oversight
for patient safety and quality improvement efforts. They recommend three levels of oversight for
different types of quality improvement projects (Redman, 2007; Baily, 2008; McNett and Lawry,
2009; Siegel, 2009):
1) Professional responsibility for minimal-risk QI activities that ‘‘are simple in design, so there
is no need for methodological review,’’ and whose effects ‘‘are very local, in the sense that their
success or failure will have no repercussions on other parts of the organization’’
2) Local management review and supervision of quality improvement for ‘‘activities designed to
improve care in the local setting that require at least some monitoring by management’’
3) QI projects involving human subjects that ought to be reviewed by a REC.
REC members and investigators need to consider whether any of the interventions are
experimental, whether the introduction of the protocol increases risks to patients, as well as
whether the interventions could have been introduced into clinical care without doing research. If
all of the interventions are based on evidence-based standards, present no additional risk beyond
standard clinical care, and could have been introduced into clinical care without the specific
informed consent of patients, then patients’ rights are not violated if informed consent is not
obtained (Cretin et al, 2000; Tapp et al, 2010). Baily et al (2006) go further, arguing that patients
need not be offered the option of opting out of minimal-risk quality improvement activities,
given the importance of such activities for ongoing high-quality patient care. Although this guidance
from Baily and colleagues is helpful, there is a need for additional guidance on the necessity of
and/or best practices for obtaining consent when entire clinical teams are the subject of patient
safety research. Patient safety checklist studies, for example, sometimes document whether the
team as a whole attended to certain activities, or review medical charts that reveal an entire
team’s interactions with a patient. Ethical concerns pertaining to the use of deception in patient
safety research are closely linked to an ethical duty of truth telling and to foundational
commitments to respect for persons.
A set of ethical requirements has been proposed to guide the ethical conduct of quality
improvement. The suggested requirements were offered in an article in the May 1, 2007, issue of
Annals of Internal Medicine by Joanne Lynn, MD, and colleagues. The authors’ suggested
requirements include that a QI activity have:
1) Social or scientific value—The anticipated improvement from the QI activity should justify
the effort in the use of time and resources.
2) Scientific validity—The QI activity must be methodologically sound.
3) Fair patient selection—The participants in the QI activity should be selected to achieve fairness
in the benefits and burdens of the intervention.
4) Favorable benefit/risk ratio—The QI activity should limit risks, such as those to privacy and
confidentiality, and maximize benefits to participants.
5) Respect for participants—The QI activity is designed to protect patients’ confidentiality and
make them aware of findings relevant to their care. Participants should also receive basic
information regarding the activity.
6) Informed consent—When the activity involves more than minimal risk, informed consent
should be sought.
7) Independent review—The proposed activity should be reviewed to ensure it meets the ethical
standards in place.
References
Baily MA, Bottrell M, Lynn J, et al. Ethics of using QI methods to improve health care quality and
safety. Hastings Center Special Report. July-August 2006. Available at:
http://www.thehastingscenter.org/Publications/SpecialReports/.
Baily MA. Harming through protection? N Engl J Med. 2008;358:768-769.
Bellin E, Dubler NN. The quality improvement-research divide and the need for external oversight.
Am J Public Health. 2001;91:1512-1517.
Byers JF, Aragon ED. What quality improvement professionals need to know about IRBs. J
Healthc Qual. 2003;25:4-10.
Cacchione PZ. When is institutional review board approval necessary for quality improvement
projects? Clin Nurs Res. 2011;20:3-6.
Casarett D, Karlawish JH, Sugarman J. Determining when quality improvement initiatives should
be considered research: proposed criteria and potential implications. JAMA. 2000;283:2275-2280.
Cretin S, Keeler EB, Lynn J, et al. Should patients in quality-improvement activities have the same
protections as participants in research studies? JAMA. 2000;284:1786-1788.
Davidoff F, Batalden P. Toward stronger evidence on quality improvement. Draft publication
guidelines: the beginning of a consensus project. Qual Saf Health Care. 2005;14:319-325.
Diamond LH, Kliger AS, Goldman RS, et al. Quality improvement projects: how do we protect
patients’ rights? Am J Med Qual. 2004;19:25-27.
Doezema D, Hauswald M. Quality improvement or research: a distinction without a difference?
IRB. 2002;24:9-12.
Dovey S, Hall K, Makeham M, et al. Seeking ethical approval for an international study in primary
care patient safety. Br J Gen Pract. 2011;61:197-204.
Doyal L. Preserving moral quality in research, audit, and quality improvement. Qual Saf Health
Care. 2004;13:11-12.
Ezzat H, Ross S, Dadelszen P, et al. Ethical review as a component of institutional approval for a
multicenter continuous quality improvement project: the investigator’s perspective. BMC Health
Serv Res. 2010;10:223-229.
Grady C. Quality improvement and ethical oversight. Ann Intern Med. 2007;146:680-681.
Harrington, L. Quality improvement, research, and the institutional review board. J Healthc Qual.
2007;29:4-9.
Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century.
Washington, DC: National Academies Press; 2001.
Kass N, Pronovost PJ, Sugarman J, et al. Controversy and quality improvement: lingering
questions about ethics, oversight, and patient safety research. Jt Comm J Qual Patient Saf.
2008;34:349-353.
Kass NE, Pronovost PJ. Quality, safety, and institutional review boards: navigating ethics and
oversight in applied health systems research. Am J Med Qual. 2011;26:157-159.
Lemaire F. Informed consent and studies of a quality improvement program. JAMA.
2008;300:1762.
Lo B, Groman M. Oversight of quality improvement: focusing on benefits and risks. Arch Intern
Med. 2003;163:1481-1486.
Lynn J, Baily MA, Bottrell M, et al. The ethics of using quality improvement methods in health
care. Ann Intern Med. 2007;146:666-673.
Lynn J. When does quality improvement count as research? Human subject protection and theories
of knowledge. Qual Saf Health Care. 2004;13:67-70.
McNett M, Lawry K. Research and quality improvement activities: when is institutional review
board review needed? J Neurosci Nurs. 2009;41:344-347.
Miller FG, Emanuel EJ. Quality-improvement research and informed consent. N Engl J Med.
2008;358:765-767.
Nelson WA, Gardent PB. Ethics and quality improvement. Quality care and ethical principles
cannot be separated when considering quality improvement activities. Healthc Exec. 2008;23:40-
Nerenz DR. Ethical issues in using data from quality management programs. Eur Spine J.
2009;18(Suppl 3):S321-S330.
Newhouse RP, Poe S, Pettit JC, et al. The slippery slope: differentiating between quality
improvement and research. J Nursing Adm. 2006;36:211-219.
Perneger TV. Why we need ethical oversight of quality improvement projects. Int J Qual Health
Care. 2004;16:343-344.
Platteborze LS, Young-McCaughan S, King-Letzkus I, et al. Performance improvement/research
advisory panel: a model for determining whether a project is a performance or quality or quality
improvement activity or research. Mil Med. 2010 Apr;175(4):289-91.
Siegel MD, Alfano SL. The ethics of quality improvement research. Crit Care Med. 2009;37:791-
792.
Tapp L, Edwards A, Elwyn G, et al. Quality improvement in general practice: enabling general
practitioners to judge ethical dilemmas. J Med Ethics. 2010;36:184-188.
Taylor HA, Pronovost PJ, Sugarman J. Ethics, oversight and quality improvement initiatives. Qual
Saf Health Care. 2010;19:271-274.
Weiserbs KF, Lyutic L, Weinberg J. Should quality improvement projects require IRB approval?
Acad Med. 2009;84:153.
Wise L. Quality improvement or research? A report from the trenches. Am J Crit Care. 2008;17:98-
99.
World Health Organization. Global Priorities for Research in Patient Safety (first edition).
December 2008. Available at: http://www.who.int/patientsafety/research/priorities/
‘While healthcare organizations are initiating a number of strategies to improve care and respond
to changing regulatory and policy requirements, many clinicians practicing in them have not
received training on quality and safety as a part of their formal education’
In the last two decades, much effort has been invested to improve health by advancing the quality
of health care, with an emphasis on promoting patient safety and reducing medical error (Berwick,
1989; Berwick, 1996). Accordingly, many training programs on quality improvement (QI) have
been developed for practitioners in health care (Mohr et al, 2003; Patow et al, 2009). Most of these
programs are constructed for continuous professional development (CPD) purposes aimed at
individuals who have completed their preservice clinical training, and relatively few are designed
specifically for medical residents. Numerous barriers exist to implementing these CPD based QI
training programs in residency training programs, including a lack of dedicated time in the core
residency curriculum, limited faculty with the expertise and/or interest in the topic, and a
paucity of infrastructural support and financial resources. Reported approaches to involving
residents in a clinical QI initiative varied widely (Mohr et al, 2003; Patow et al, 2009). Few
studies described the educational impact of
residents’ participation in QI and even fewer studies identified specific improvement in patient
health outcomes. More recent studies have focused on the development of core residency-specific
QI curricula (Mohr et al, 2003; Djuricich et al, 2004; Holmboe et al, 2005; Kim et al, 2010).
Both the experiential learning cycle and the PDSA cycle involve analysis of data and reflection
on what was learned from each cycle, leaving the ‘act’ phase to determine (from those data)
what modifications can be made. Given the similarities of these two
models, teaching QI to undergraduate students using an experiential learning teaching method
could be extremely effective (Canal et al, 2007).
Social learning theory
Bandura’s Social Learning Theory suggests that learning occurs in social contexts through
continuous interactions with others by the process of observation. This means that learning may
occur as a result of observing good behaviour demonstrated by a group or individual but equally
as a result of the consequences of poor behaviour; this process is called ‘modelling’. Understanding
and combining Experiential Learning and Social Learning Theory may assist in our understanding
of how QI educational interventions may work (or not) and allow us to formulate an appropriate
hypothesis. For example, considering both impact theories, one could hypothesize that experiential
learning would impact most positively on students’ skills, knowledge and attitudes. However,
the influence of observed behaviours in the social learning context may dictate whether a positive
or negative impact on student behaviour occurs. QI should not be limited to the theoretical learning
of technical skills involved (PDSA cycles and run charts) but should include the soft skills (social
psychology of change, understanding the organization and structure of care, understanding the
context), the learning skills (problem-based learning, team learning, experiential learning, focus
on competencies, critical reflection, action learning) and indeed the interactions between them.
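Run charts, mentioned above among the technical skills, signal improvement through simple rules applied to data plotted over time. A minimal sketch of one commonly taught rule, a shift of several consecutive points on one side of the baseline median (the function name, data, and six-point threshold are illustrative assumptions):

```python
from statistics import median

def shift_signal(values, run_length=6):
    """Detect a 'shift': run_length or more consecutive points all above
    (or all below) the baseline median. Points exactly on the median are
    skipped, following common run-chart practice."""
    baseline = median(values)
    run, side = 0, 0
    for v in values:
        if v == baseline:
            continue  # points on the median neither add to nor break a run
        s = 1 if v > baseline else -1
        run = run + 1 if s == side else 1
        side = s
        if run >= run_length:
            return True
    return False

# Weekly hand-hygiene compliance (%) before and after an intervention:
data = [62, 60, 63, 61, 59, 62, 74, 75, 73, 76, 78, 77]
print(shift_signal(data))  # True: a run of six points on one side of the median
```

A shift like this suggests a non-random change in the process, which is why run charts pair naturally with PDSA cycles: each cycle generates new points to test against the baseline.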
Why teach quality improvement?
Preservice training in QI often leaves much to be desired. Many programs are delivered over a
short span of time (ranging from one-day sessions to one-month elective blocks), creating
uncertainty about whether any short-term knowledge gains are sustained. Also, while theoretical
constructs are taught, there is little or no component of clinical application. A practice-based
QI elective rotation offered to in-service care providers may offer much needed benefit. Evidence
for this is that trainees who completed a QI project demonstrated superior knowledge retention of
QI skills on objective testing when compared to non-completers (Ogrinc et al, 2004). A
competency is an observable ability of a health professional, integrating multiple components such
as knowledge, skills, values and attitudes. Since competency is observable, it can be measured and
assessed to ensure acquisition (Frank and Danoff, 2007; Frank et al, 2010; Frank et al, 2010b).
Competency based education refers to an approach to preparing health professionals for practice
that is fundamentally oriented to graduate outcome abilities and organized around competencies
derived from an analysis of societal and patient needs. Competency-based training involves
moving away from a strictly time-based training model towards one that identifies the specific
knowledge, skills, and abilities needed for practice (Frank and Danoff, 2007; Frank et al, 2010;
Frank et al, 2010b). Acquisition, application and sustainability of quality improvement skills are
considered to be core competencies.
How can we teach quality improvement?
An emerging priority in medical education is the need to facilitate learners’ acquisition of quality
improvement (QI) competencies. There is an increasing focus on improving healthcare in order to
ensure higher quality, greater access and better value for money. In line with that goal, training
programmes have been developed to teach health professionals and students formal QI methods.
Many implementation and feasibility barriers to sustaining a successful QI curriculum have been
described (Godwin, 2001; Wong et al, 2008; Arbuckle et al, 2013; Wong et al, 2013). Such barriers
include the developmental stage, insufficient QI knowledge among faculty, a lack of value placed
on QI by the institution, competing curricular/clinical demands, unsupportive leadership, and the
absence of a promotion pathway. Several questions arise from the QI training initiatives: What
types of training about formal QI techniques are available for health professionals? What evidence
is there about the most effective methods for training clinicians in QI? What should be the content
of QI training curricula? How should the QI training be delivered to provide value for money?
What types of training about formal quality improvement techniques are available for health
professionals? What evidence is there about the most effective methods for training clinicians in
quality improvement?
The training should position quality improvement as a process that has several interrelated or
similar approaches, one of which is criteria-based audit. Clinical audit aims at continuous
improvement of the quality of care through systematic and critical review of current practice
against explicit criteria or standards developed to suit a specific setting or context for
implementation of change. The audit is a regular multidisciplinary activity by which all
participants in care, including doctors, nurses and other health professionals, carry out a systematic
review of their own practice. Implementation of audit should follow standard acceptable
procedures and guidelines that maximize patient safety and maintain professional values (Shaw
and Costain, 1989). QI knowledge may improve after a didactic curriculum, particularly if a
competence-based curriculum is employed. While the goal is to improve patient care, the process
may compromise patient safety and rights, especially through breach of confidentiality. Hence,
there must be institutional oversight to ensure that QI interventions follow acceptable standards. The
data collected during the process of audit should be handled with care, and individual data
concerning care-givers, patients or health professionals must be treated confidentially. Clinical
audit needs a realistic timeframe and the necessary resources, as well as the tolerant culture of a
learning organization. Furthermore, the success of clinical audit depends on the commitment and
support of the organization’s management. Clinical audit could relatively easily be embedded
into the current practice of peer review and other quality improvement initiatives.
The aims of provider training in Quality Improvement
The aim of provider training in QI is to move a health provider from one who knows just basic
foundation knowledge, skills and values, through one who applies those skills, to one who
demonstrates basic competency in quality improvement in healthcare. Key questions: What
should we teach? How should we teach? How will we measure the results? (Cleghorn and Baker,
2000; Hayden et al, 2002; Varkey et al, 2006; Wong et al, 2007; Wong et al, 2008; Wong and
Roberts, 2008). Quality improvement modules for medical and nursing students tend to focus on
techniques such as audit and plan, do, study, act (PDSA) cycles. Most courses run by academic
institutions tend to be unidisciplinary and classroom-based or undertaken during clinical
placements. However, there is increasing acknowledgement of the value of multidisciplinary
training, especially through work-based projects that contain a practical component.
Simulation is also becoming popular as a training approach. Continuing professional development
training about QI appears to be growing at a faster rate than university education. Ongoing
education includes workshops, online courses, collaboratives and ad hoc training set up to support
specific improvement projects. There is a growing trend for training which supports participants
to put what they have learned into practice or to learn key skills ‘on the job’.
A range of training approaches has been published. There is some evidence that training
students and health professionals in quality improvement may
improve knowledge, skills and attitudes. Care processes may also be improved in some instances.
However, the impact on patient health outcomes, resource use and the overall quality of care
remains uncertain. This necessitates instituting training based on a theoretical approach that
ensures learning. Most evaluations of training focus on perceived changes in knowledge rather
than assessing applied skills or delving deeper into the longer-term outcomes for professionals and
patients. Programmes which incorporate practical exercises and work-based activities tend to
achieve more in terms of acquisition of competencies, and evaluations of these approaches are
more likely to find positive changes in care processes and patient outcomes. Active learning
strategies, where participants put quality improvement into practice, are more effective than
didactic classroom styles alone.
While healthcare organizations are initiating a number of strategies to improve care, healthcare
providers practicing in them need training on quality and safety as a part of their formal or
in-service training.
Lack of knowledge and skills among clinicians and managers or negative attitudes are significant
barriers to improving quality in healthcare. Training health professionals in quality improvement
has the potential to impact positively on attitudes, knowledge and behaviours. Quality
improvement may be defined as a way of approaching change in healthcare that focuses on self-
reflection, assessing needs and gaps, and considers how to improve in a multifaceted manner. In
this definition, training about quality improvement aims to create a culture of continuous reflection
and a commitment to ongoing improvement in quality. Training aims to provide practitioners, care
providers and managers with the skills and knowledge needed to assess the performance of
healthcare and individual and population needs, to understand the gaps between current activities
and best practice, and to be in a position to devise theories, strategies, tools and techniques to
address the quality gap (Audet et al, 2005; Batalden and Davidoff, 2007; Van Hoof et al, 2011).
Components
Needs assessment
1) Include data showing a gap between current and best practice
2) Include data showing how practices or teams could be improved
Content
1) Identify evidence-based sources for core and general programme content
2) Describe key learning from implementing known best practice
3) Discuss data before and after successful implementation
4) Include as an objective ‘by the end of course, participants will be able to summarise evidence
on…’
5) Allow time for questions about the pros and cons of evidence
Application
1) Show trainees how evidence relates to participants’ work environment
2) Ask participants to show how they will apply the evidence to their work environment
Key steps
Step 1: Determine the QI competencies that trainees must demonstrate at the end of the
curriculum
Step 2: Determine the needs of incoming trainees and set learning objectives for the curriculum
To be most effective, training should assess the needs of learners, and target both the content and
training approach appropriately, illustrating how the content applies to the participants’ work
environment (Ogrinc et al, 2003; Ogrinc et al, 2004; Price, 2005; Ogrinc et al, 2011; Paulman,
2010). Ideally, by the end of the training, trainees should be able to:
8) Identify areas to change within a process and recognize whether changes are successful (how
to use PICK process (possible, implement, challenge, kill) charts and prioritization matrices to
organize their recommendations, as well as how to work with the QI facilitators and clinic
managers to develop sustainable implementation strategies).
9) Develop and implement a basic quality improvement project
Most quality improvement methods are based on the application of continuous quality
improvement theory developed in the manufacturing industry. The principle underpinning quality
improvement is that quality is not something controlled only at the end of the line, but
throughout the entire work process. Medical students can begin to understand the role of quality
improvement methods by:
1) Asking about measures that improve quality and safety;
2) Recognizing that good ideas can come from anyone;
3) Being aware that the situation in the local environment is a key factor in trying to make
improvements;
4) Being aware that the way people think and react is as important as the structures and processes
in place;
5) Realizing that the spread of innovative practices is a result of people adopting new processes.
Quality improvement methods have successfully addressed such gaps, providing clinicians with
the tools to: (i) identify a problem; (ii) measure the problem; (iii) develop a range of interventions
designed to fix the problem; and (iv) test whether the interventions worked.
General topics
The training content in competency-based QI training should also include general topics such as the assessment of learning, for example:
1) The Quality Improvement Knowledge Application Tool (QIKAT) (Varkey et al, 2009; Singh et al, 2014);
2) Pre- and post-tests of theoretical and applied knowledge, with skills self-assessment repeated months after training;
3) Resident satisfaction surveys;
4) Formative self-assessment of attitudes and skills, with feedback; and
5) Problem solving through performance-based assessments.
Evaluation of the training itself typically focuses on:
1) Learner performance;
2) Learner satisfaction; and
3) Clinical outcomes.
References
Arbuckle MR, Weinberg M, Cabaniss DL, et al. Training psychiatry residents in quality
improvement: An integrated, year-long curriculum. Acad Psychiatry. 2013;37:42–45
Barber KH, Schultz K, Scott A, et al. Teaching Quality Improvement in Graduate Medical
Education: An Experiential and Team-Based Approach to the Acquisition of Quality Improvement
Competencies. Acad Med. 2015 Oct; 90(10): 1363–1367.
Batalden P, Davidoff F. Teaching quality improvement: the devil is in the details. JAMA
2007;298(9):1059-1061.
Berwick DM: A primer on leading the improvement of systems. BMJ 1996, 312(7031):619–622.
Berwick DM: Continuous improvement as an ideal in health care. N Engl J Med 1989, 320(1):53–
56.
Boonyasai RT, Windish DM, Chakraborti C et al. Effectiveness of teaching quality improvement
to clinicians: a systematic review. JAMA 2007;298(9):1023-1037.
Canal DF, Torbeck L, Djuricich AM. Practice-based learning and improvement: a curriculum in
continuous quality improvement for surgery residents. Arch Surg 2007;142(5):479-482.
Clarke A, Fitzpatrick P, Hurley M, et al. Audit in health care--the process of reviewing quality.
Research Committee of the Faculty of Public Health Medicine. Ir Med J. 1999;92(1):230–231
Cleghorn GD, Baker GR: What faculty need to learn about improvement and how to teach it to
others. J Interprof Care 2000, 14(2):147–159.
Da Dalt L, Callegaro S, Mazzi A et al. A model of quality assurance and quality improvement for
post-graduate medical education in Europe. Med Teach 2010;32(2):e57-64.
Daniel DM, Casey DE Jr, Levine JL et al. Taking a unified approach to teaching and implementing
quality improvements across multiple residency programs: the Atlantic Health experience. Acad
Med 2009;84(12):1788- 1795.
Diaz VA, Carek PJ, Dickerson LM, Steyer TE. Teaching quality improvement in a primary care
residency. Jt Comm J Qual Patient Saf 2010;36(10):454-460
Djuricich AM, Ciccarelli M, et al: A continuous quality improvement curriculum for residents:
addressing core competency, improving systems. Acad Med 2004, 79(10 Suppl):S65–S67.
Fok MC, Wong RY. Impact of a competency based curriculum on quality improvement among
internal medicine residents BMC Medical Education 2014, 14:252
Frank JR, Danoff D: The CanMEDS initiative: implementing an outcomes-based framework of
physician competencies. Med Teach 2007, 29(7):642–647.
Frank JR, Mungroo R, Ahmad Y, et al: Toward a definition of competency-based education in
medicine: a systematic review of published definitions. Med Teach 2010, 32(8):631–637.
Frank JR, Snell LS, Cate OT, et al: Competency-based medical education: theory to practice. Med Teach 2010, 32(8):638–645.
Godwin M. Conducting a clinical practice audit. Fourteen steps to better patient care. Can Fam Physician. 2001;47:2331–2333.
Gould BE, Grey MR, et al. Improving patient care outcomes by teaching quality improvement to
medical students in community-based practices. Acad Med 2002;77(10):1011-1018.
Hayden SR, Dufel S, Shih R: Definitions and competencies for practice-based learning and
improvement. Acad Emerg Med 2002, 9(11):1242–1248.
Holmboe ES, Prince L, Green M: Teaching and improving quality of care in a primary care internal
medicine residency clinic. Acad Med 2005, 80(6):571–577.
Van Hoof TJ, Meehan TP. Integrating essential components of quality improvement into a new
paradigm for continuing education. J Contin Educ Health Prof 2011;31(3):207- 214.
Huntington JT, Dycus P, Hix C et al. A standardized curriculum to introduce novice health
professional students to practice-based learning and improvement: a multi-institutional pilot study.
Qual Manag Health Care 2009;18(3):174-181
Kim CS, Lukela MP, Parekh VI, et al. Teaching internal medicine residents quality improvement
and patient safety: a lean thinking approach. Am J Med Qual 2010, 25(3):211–217.
Mohr JJ, Randolph GD, et al. Integrating improvement competencies into residency education: a pilot project from a pediatric continuity clinic. Ambul Pediatr 2003, 3(3):131–136.
Ogrinc G, Headrick LA, Morrison LJ, Foster T: Teaching and assessing resident competence in
practice-based learning and improvement. J Gen Intern Med 2004, 19(5 Pt 2):496–500.
Ogrinc G, Headrick LA, Mutha S, et al: A framework for teaching medical students and residents
about practice-based learning and improvement, synthesized from a literature review. Acad Med
2003, 78(7):748–756.
Ogrinc G, Nierenberg DW, Batalden PB. Building experiential learning about quality
improvement into a medical school curriculum: the Dartmouth experience. Health Aff
2011;30(4):716- 722.
Patow CA, Karpovich K, Riesenberg LA, et al. Residents’ engagement in quality improvement:
a systematic review of the literature. Acad Med 2009, 84(12):1757–1764.
Paulman P. Integrating quality improvement into a family medicine clerkship. Fam Med
2010;42(3):164- 165.
Rawlins R. Local research ethics committees. Research discovers the right thing to do; audit
ensures that it is done right. BMJ. 1997 Nov 29;315(7120):1464–1464.
Shaw CD, Costain DW. Guidelines for medical audit: seven principles. BMJ. 1989 Aug
19;299(6697):498–499.
Singh MK, Ogrinc G, Cox KR, et al. The Quality Improvement Knowledge Application Tool
Revised (QIKAT-R). Acad Med 2014, 89(10):1386–1391
Varkey P, Reller MK, et al. An experiential interdisciplinary quality improvement education initiative. Am J Med Qual 2006, 21(5):317–322.
Varkey P, Gupta P, Bennet KE. An innovative method to assess negotiation skills necessary for
quality improvement. Am J Med Qual 2008;23(5):350-355.
Varkey P, Gupta P, Arnold JJ, Torsher LC. An innovative team collaboration assessment tool for
a quality improvement curriculum. Am J Med Qual 2009;24(1):6-11
Voss JD, May NB, Schorling JB, Lyman JA et al. Changing conversations: teaching safety and
quality in residency training. Acad Med 2008;83(11):1080- 1087
Walsh T, Jairath N, Paterson MA, Grandjean C. Quality and safety education for nurses clinical
evaluation tool. J Nurs Educ 2010;49(9):517-522.
Weeks W, Robinson J, Brooks W, Batalden P. Using early clinical experiences to integrate quality-
improvement learning into medical education. Acad Med 2000;75:81-84.
Wong BM, Etchells EE, Kuper A, et al: Teaching quality improvement and patient safety to
trainees: a systematic review. Acad Med 2010, 85(9):1425–1439.
Wong BM, Kuper A, et al. Sustaining quality improvement and patient safety training in graduate
medical education: Lessons from social theory. Acad Med. 2013;88:1149–1156
Wong RY, Hollohan K, Roberts JM, et al. A descriptive report of an innovative curriculum to teach quality improvement competencies to internal medicine residents. Can J Gen Intern Med 2008, 3(1):26–29.
Wong RY, O Kassen B, Hollohan K, et al: A new interactive forum to promote awareness and
skills in quality improvement among internal medicine residents: a descriptive report. Can J Gen
Int Med 2007, 2(1):35–36.
Wong RY, Roberts JM: Practical tips for teaching postgraduate residents continuous quality
improvement. Open Gen and Intern Med J 2008, 2:8–11.
and expense. Some patients will be lost to follow-up, and their characteristics and outcomes may
differ substantially from those for whom data are available. Many desired outcomes, such as health
status and readmission, require collection of data directly from patients, and inaccurate telephone
numbers, addresses, and lack of patient cooperation with follow-up efforts may limit efforts to
collect this information.
Time Frame Considerations in Tracking Outcomes
For acute, catastrophic conditions, in-hospital treatment is followed by transition to long-term
care for a chronic condition. When judging the quality of care provided by an individual or
institution, should the outcomes assessment be restricted to the initial hospitalization only or
should longer-term assessments be included as well? Two rationales support the need for longer-term assessment. First, although certain interventions can positively influence short-term survival (eg, at 30 days), the full impact of these and other interventions is manifest only months or years after discharge. Second, patient care does not end with the patient’s discharge from the hospital. Rather, a smooth transition to the outpatient primary care clinician is an essential component of high-quality care. In addition, secondary prevention is as important as many acute therapeutic decisions, and the in-hospital provider assumes responsibility for appropriate communication with the patient’s primary care physician.
The millennium development goals (MDGs) on health focused on combating maternal and child
mortality and a relatively small number of diseases (UN, 2015). These efforts boosted disease-
specific (vertical) funding for health services and in some cases were accompanied by strong
Dan Kabonge Kaye Quality Improvement in healthcare, 2019
accountability mechanisms including measurement of outcomes and service quality (de Jong et al,
2016). SDG 3 and its targets encompass more conditions, and, by including non-communicable
diseases, are also more complex to attain than the MDGs. As we move into the SDG era, the
funding and delivery streams are being interconnected and integrated into broader health systems
to promote more rational and patient-centred health care across a wide range of health needs. This
is observed at both global and country levels. The logistics of integration, including ensuring
technical efficiency, will be challenging, but may also provide an opportunity for adoption of best
practices in quality management in areas ranging from stand-alone vertical programmes to the
broader health system (Obure et al, 2016).
As in high-income countries, where the impact of health-service quality on health outcomes has been well documented (IOM, 2001; Kelly and Hurst, 2006; McGlynn and Adams, 2014), data from low- and middle-income countries increasingly show that poor quality of care contributes to failure to attain expected health-care improvements. However, not all interventions have led to performance improvement.
Indeed, studies from India, Malawi and Rwanda have shown that greater access to institutional deliveries and antenatal care did not lead to reductions in maternal and newborn mortality because it was not accompanied by a corresponding improvement in quality of care (Souza et al, 2013; Powell-Jackson et al, 2015; Godlonton et al, 2014). Also, higher than predicted maternal mortality has occurred in hospitals in high-mortality lower-income countries, despite good availability of essential medicines, suggesting clinical management gaps or treatment delays for women who develop obstetric complications (Souza et al, 2013). In Malawi, about 30% of all outpatients who were meant to benefit from a malaria treatment intervention received incorrect treatment (Steinhardt et al, 2014). Likewise, in India, an intervention to improve tuberculosis therapy failed because providers frequently gave inaccurate care to tuberculosis patients (Das et al, 2015); for instance, only 11 of 201 private practitioners provided correct tuberculosis management (Achanta et al, 2013).
have access to affordable, good-quality health services. But if those services are of poor quality, people are unlikely to use them or agree to pay higher taxes or insurance premiums for them (Basinga et
al, 2011; Witter et al, 2012).
Resolving ethical concerns
There is also an ethical dimension to quality of care. While the right to health care is widely
accepted, less has been said about the quality of this care. First, whereas one of the core principles
of medicine is to do no harm, there is still minimal systematic measurement of patient safety in the
health systems of low- and middle-income countries (Wilson et al, 2012; Aranaz-Andrés et al,
2011; Nguyen et al, 2015). Second, little is known about whether wealth inequalities are associated
with the quality of care. Yet, according to the inverse care law, the availability of good care tends to vary inversely with the need of the population served (Hart, 1971). It is unclear how the quality of services available to poor people compares
with that of richer people in the same country. Yet the quality of care should be monitored and
evaluated regardless of who provides the care, i.e. equally in private and public settings, and for
both curative and preventive care. A third ethical issue is defining the quality baseline, especially
in developing countries where quality standards are lacking such as in countries with extremely
constrained health resources. Whether doctors in such countries should follow the same guidelines
as in high-income countries is debatable. Some people argue that less effective care is ethically acceptable in situations where the alternative is no care, but this assumes that the care will still bring substantial benefit to patients (Victora et al, 2016; Persad et al, 2016). The minimum effectiveness that is tolerable, given the costs of health-care provision to governments and to families, needs to be balanced against the legitimate expectations of people receiving the care.
And once a minimum standard is defined, the pursuit of a higher level of quality must be balanced
with its attendant cost as much as the need to guarantee the minimum level of care quality to the
entire population (Donabedian, 1988). Each country needs to define a quality frontier that situates
their aspirations for quality within realistic budget constraints and that recognizes trade-offs
between the speed of expanding services and ensuring minimum quality standards. Following Donabedian’s theory of quality of care, three dimensions of quality need to be tracked and, ideally, linked: (i) structure (facility infrastructure, management and staffing), (ii) process
(technical (clinical) quality and patient experience) and (iii) outcomes (patient satisfaction, return
visits and health outcomes). In high-income countries the main measures of quality have typically
been patient outcomes that are sensitive to health-care practices, such as the association between
skilled nursing and hospital readmissions (Howell et al, 2014; Newman et al, 2014; Kasteridis et
al, 2015).
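Donabedian’s structure–process–outcome triad described above can be thought of as three linked groups of indicators attached to one facility, so that associations across dimensions can be examined. A minimal sketch (the field names and example values below are hypothetical, not drawn from any cited dataset):

```python
# Minimal illustrative sketch of Donabedian's three linked dimensions of
# quality (structure, process, outcome) as one record per facility.
# All field names and values are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class QualityRecord:
    facility: str
    structure: dict = field(default_factory=dict)  # infrastructure, staffing
    process: dict = field(default_factory=dict)    # clinical quality, experience
    outcome: dict = field(default_factory=dict)    # satisfaction, health outcomes

record = QualityRecord(
    facility="District Hospital A",
    structure={"skilled_staff_per_1000": 2.1, "electricity": True},
    process={"correct_diagnosis_rate": 0.74, "patient_experience_score": 3.9},
    outcome={"readmission_rate": 0.12, "patient_satisfaction": 0.81},
)

# Linking the dimensions lets analysts ask, for example, whether better
# staffing (structure) is associated with better diagnosis rates (process).
print(record.facility, record.structure["skilled_staff_per_1000"])
```

The point of the sketch is only that the three dimensions live in one linked record; in practice each group of indicators would come from facility surveys, clinical observation and patient follow-up, respectively.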
There are calls to reconsider the importance of process measures that can provide concrete
guidance on where to begin improvement efforts. Many low and middle-income countries lack the
health information systems to collect these care-sensitive outcome measures, so that it is
reasonable to begin with inputs and process measures. Inputs, such as water, sanitation and
electricity, represent the minimum threshold for a functioning health-care facility; this is
sometimes termed service readiness. Most of the existing efforts to measure quality have
emphasized this tangible element of care, yet a cabinet full of unexpired medicines does not
necessarily translate into good clinical care, and the connection between inputs and processes is
poorly understood. Much more emphasis is needed on measuring the processes of care - the content
and nature of clinical interactions and the intangible elements of care underlying those interactions
(such as health-sector organization, facility management and staff training and motivation).
Ultimately, there is need for evidence linking quality of care to health outcomes, and this is why
the benchmarking of quality of care in the specific context of low- and middle-income countries
is necessary. Given the constrained resources, it is essential for the quality of care measurement
framework to prioritize the questions asked to identify the limitations on what is being done.
Structure
Data for measuring the structure dimension of quality care, including facility infrastructure,
staffing and clinical training, come from routine health-facility records and surveys. Record systems are often incomplete and inaccurate, and reporting delays often result in out-of-date information of little use (Mphatswe et al, 2012; Nicol et al, 2013; Kihuba et al, 2014; Nickerson et al, 2015). Also, routinely collected health data are not standardized, precluding comparison
across and, sometimes, within countries (Ferrinho et al, 2012). Periodic health-facility surveys can
provide better quality data, but such surveys describe the situation at one point in time and are
restricted to a few services, typically excluding non-communicable diseases, injuries and mental
health, for example. A recent comprehensive review of health-facility assessment tools in low- and middle-income countries found that, among the 10 tools that met the study’s inclusion criteria, there was substantial variation in content and comprehensiveness.
Process
Measures of process quality of health care include both its technical quality and the experience of
the patients receiving the care. The tools available for assessment of provision of clinical care
include standardized patients, clinical vignettes, abstraction of medical records, simulations or
clinical drills, and direct clinical observations (Luck et al, 2006; Aung et al, 2012). Standardized patients are trained actors who make an unannounced visit to a health-care facility and present symptoms of a simulated condition; after the visit they complete a checklist on the clinical actions of the provider (Luck et al, 2006; Aung et al, 2012). In clinical vignettes, practitioners follow a written clinical case, responding to questions that replicate certain stages of an actual clinic visit, such as taking a history, ordering tests and prescribing a treatment plan. Providers’ responses are scored against evidence-based criteria for managing the simulated disease (Franco et al, 2002; Luck et al, 2006; Aung et al, 2012).
improvement efforts. The scope of inquiry into drivers of quality must extend beyond the facility
and the immediate health-care team; good quality depends on district-wide service organization,
pre-service training and community accountability mechanisms, among many other factors.
To understand the root causes of quality gaps, whether for technical or non-technical quality, it is
necessary to obtain perspectives on quality from a range of health-system stakeholders. Face-to-
face interviews with patients, and written surveys, are typically used to measure the patient
experience (Nesbitt et al, 2013; Ng et al, 2014; Wagenaar et al, 2016). Patients are best-positioned
to determine whether care aligns with their values and preferences, and to convey their experience
of provider communication, service convenience and so on (Tzelepis et al, 2015). The expansion
of communication technology and social media provides new opportunities for getting feedback
on quality of care and returning relevant information back to users. Recommendations to improve
the measurement of quality of care and its impact on improving health outcomes in lower-income
countries include improving data collection methods and instruments; expanding the scope of
measurements; and translating the data for policy impact. The six recommendations are:
1) Redouble efforts to improve and institutionalize civil registration and vital statistics
systems;
2) Reform facility surveys and strengthen routine information systems;
3) Innovate new quality measures for low-resource contexts;
4) Assess the patient perspective on quality;
5) Invest in national quality data; and
6) Translate quality evidence for policy impact
3) Design data systems to support internal quality needs and spin off external quality measures. Use a four-step process to support internal quality measurement and external reporting for selection and accountability: (i) build quality measures into workflows on the basis of key process analysis, to have the greatest impact on the most patients; (ii) for a high-priority key process, explicitly design a data system (intermediate processes, final outcomes, patient experience and cost results) around the care delivery process; (iii) ‘roll up’ accountability measures at clinic, hospital, region, system, state and national levels; and (iv) provide transparent reporting on quality and value to promote learning and healthy competition on key results, and to ensure public accountability.
4) Consider the return on measurement investment: select measures taking into account the cost of data collection relative to the value of the resulting outcome and cost information.
5) Establish ongoing process for refining and selecting core measures: build stakeholder
agreement on vital, standard measures of performance that are used by payers, regulators,
consumers and accreditors to promote public reporting and value-based purchasing schemes
across different payers and to harmonise regulation, accreditation and certification.
References
Achanta S, Jaju J, Kumar AM, et al. Tuberculosis management practices by private practitioners
in Andhra Pradesh, India. PLoS ONE. 2013;8(8):e71119.
Aranaz-Andrés JM, Aibar-Remón C, et al. Prevalence of adverse events in the hospitals of five
Latin American countries: results of the ‘Iberoamerican study of adverse events’ (IBEAS). BMJ
Qual Saf. 2011;20(12):1043-51.
Aung T, Montagu D, Schlein K, et al. Validation of a new method for testing provider clinical
quality in rural settings in low- and middle-income countries: the observed simulated patient. PLoS
ONE. 2012;7(1):e30196.
Basinga P, Mayaka S, Condo J. Performance-based financing: the need for more research. Bull
World Health Organ. 2011 Sep 01;89(9):698–9.
Bilimoria KY. Facilitating quality improvement: pushing the pendulum back toward process
measures. JAMA. 2015 Oct 06;314(13):1333–4.
Kujawski S, Mbaruku G, Freedman LP, et al. Association between disrespect and abuse during
childbirth and women’s confidence in health facilities in Tanzania. Matern Child Health J. 2015
Oct;19(10):2243–50.
Luck J, Peabody JW, Lewis BL. An automated scoring algorithm for computerized clinical
vignettes: evaluating physician performance against explicit quality criteria. Int J Med Inform.
2006 Oct-Nov;75(10-11):701–7.
McGlynn EA, Adams JL. What makes a good quality measure? JAMA. 2014 Oct
15;312(15):1517–8
Mphatswe W, Mate KS, et al. Improving public health information: a data quality intervention in
KwaZulu-Natal, South Africa. Bull WHO 2012 ;90(3):176–82.
Nesbitt RC, Lohela TJ, Manu A, et al. Quality along the continuum: a health facility assessment
of intrapartum and postnatal care in Ghana. PLoS ONE. 2013;8(11):e81089.
Neuman MD, Wirtalla C, Werner RM. Association between skilled nursing facility quality
indicators and hospital readmissions. JAMA. 2014 ;312(15):1542–51.
Ng M, Fullman N, Dieleman JL, et al. Effective coverage: a metric for monitoring universal health
coverage. PLoS Med. 2014 Sep;11(9):e1001730.
Nguyen HT, Nguyen TD, et al. Medication errors in Vietnamese hospitals: prevalence, potential
outcome and associated factors. PLoS ONE. 2015;10(9):e0138284.
Nickerson JW, Adams O, Attaran A, et al. Monitoring the ability to deliver care in low- and
middle-income countries: a systematic review of health facility assessment tools. Health Policy
Plan. 2015 Jun;30(5):675–86.
Nicol E, Bradshaw D, Phillips T, Dudley L. Human factors affecting the quality of
Obure CD, Jacobs R, Guinness L, et al ; Integra Initiative. Does integration of HIV and sexual and
reproductive health services improve technical efficiency in Kenya and Swaziland? An application
of a two-stage semi parametric approach incorporating quality measures. Soc Sci Med. 2016
Feb;151(151):147–56.
Persad GC, Emanuel EJ. The ethics of expanding access to cheaper, less effective treatments.
Lancet. 2016 Aug 27;388(10047):932–4.
Powell-Jackson T, Mazumdar S, Mills A. Financial incentives in health: new evidence from
India’s Janani Suraksha Yojana. J Health Econ. 2015 Sep;43:154–69.
Souza JP, Gülmezoglu AM, Vogel J, et al. Moving beyond essential interventions for reduction of
maternal mortality (the WHO Multicountry Survey on Maternal and Newborn Health): a cross-
sectional study. Lancet. 2013 May 18;381(9879):1747–55.
Steinhardt LC, Chinkhumba J, Wolkon A, et al. Quality of malaria case management in Malawi:
results from a nationally representative health facility survey. PLoS ONE. 2014;9(2):e89050.
Sustainable Development Goals. 17 goals to transform our world [Internet]. New York: United
Nations; 2015. http://www.un.org/sustainabledevelopment/sustainable-development-goals/
Tzelepis F, Sanson-Fisher RW, et al. Measuring the quality of patient-centered care: why patient-
reported measures are critical to reliable assessment. Patient Prefer Adherence. 2015;9:831–5.
Victora CG, Requejo JH, Barros AJ, et al. Countdown to 2015: a decade of tracking progress for
maternal, newborn, and child survival. Lancet. 2016 May 4;387 (10032):2049–59.
Wagenaar BH, Sherr K, Fernandes Q, Wagenaar AC. Using routine health information systems
for well-designed health evaluations in low- and middle-income countries. Health Policy Plan.
2016 Feb;31(1):129–35.
Wilson RM, Michel P, Olsen S, et al.; WHO Patient Safety EMRO/AFRO Working Group. Patient
safety in developing countries: retrospective estimation of scale and nature of harm to patients in
hospital. BMJ. 2012 Mar 13;344 mar13 3:e832.
Witter S, Fretheim A, Kessy FL, Lindahl AK. Paying for performance to improve the delivery of
health interventions in low- and middle-income countries. Cochrane Database Syst Rev. 2012 Feb
15;2(2):CD007899.