QUALITY IMPROVEMENT IN HEALTHCARE

Theories, Methods and Application

Dan Kabonge Kaye


Contents
CHAPTER 1: THE MEANING OF QUALITY IMPROVEMENT IN HEALTHCARE ........................ 7
Attributes of quality in healthcare............................................................................................................. 7
Quality Improvement in Healthcare ......................................................................................................... 8
Quality Improvement Science................................................................................................................. 11
The Importance of Context to the Meaning of QI .................................................................................. 14
Quality Improvement as Patient-Centred Care ...................................................................................... 14
Underlying Principles for Quality Improvement in Healthcare .............................................................. 16
The Role of Context in Quality Improvement ......................................................................................... 19
Assessing the Role of the Context in Quality Improvement ................................................................... 20
CHAPTER 2: QUALITY IMPROVEMENT MODELS, THEORIES AND FRAMEWORKS ........... 30
The Importance of Theory and Models in QI .......................................................................................... 30
Using Theories in Planning and Evaluating Change Interventions ........................................................ 31
Individual-Level Theories of Change ....................................................................................................... 32
Theories Related to Interpersonal Interaction ....................................................................................... 34
Theories Related to the Organizational Context .................................................................................... 37
The Need for Theory-Informed Research ............................................................................................... 40
Developing and Applying Programme Theory ........................................................................................ 48
Linking Theories, Tools and Strategies.................................................................................................... 49
QI Models ................................................................................................................................................ 50
Managing Change in Quality Improvement ............................................................................................ 51
Measurement of ‘Change’ in Quality Improvement ............................................................................... 53
CHAPTER 3: INSTITUTIONALIZING QUALITY IMPROVEMENT IN HEALTH CARE...................................... 66
Quality Improvement Methodology........................................................................................................ 66
Quality Improvement Work as Systems and Processes.......................................................................... 68
Quality Improvement Planning ............................................................................................................... 69
Developing a Theory-Informed Intervention .......................................................................................... 71
The Process of Institutionalizing QI in Healthcare Practice .................................................................... 74
Identifying desired improvements .......................................................................................................... 75
Monitoring and Evaluation for QI .......................................................................................................... 77


Developing the Key Drivers Diagram .................................................................................................... 78


Using Run Charts ..................................................................................................................................... 80
Using Plan-Do-Study-Act (PDSA) Cycles and QI tools ............................................................................. 80
Using Measurements to Tell that a Change is an Improvement ............................................................ 84
Sustainability and Sustainability Plans .................................................................................................... 85
CHAPTER 4: RESEARCH IN QUALITY IMPROVEMENT .............................................................................. 92
Why There Is Need for Research in Quality Improvement ..................................................................... 92
Developing a Proposal or Using Existing Data ........................................................................................ 93
Proposals of Research Proposals on Quality Improvement in Healthcare Systems ............................... 95
Including Plans to Manage Ethical Issues in Quality Improvement Proposals..................................... 101
The Role of Independent Oversight by Ethics Committees and Institutions ............................ 103
Qualifying Whether QI Interventions Constitute Research .................................................................. 104
Ethical Issues in Research in Quality Improvement ............................................................................. 107
CHAPTER 5: PERFORMANCE IMPROVEMENT PROJECTS IN HEALTHCARE SYSTEMS ................................ 113
Performance Improvement and Organizational Context in Healthcare Systems................................. 114
Hypotheses for Performance Improvement Projects in Healthcare .................................................... 115
Challenges to Institutionalizing QI in Healthcare.................................................................................. 117
Key Performance Improvement Concepts ............................................................................................ 118
Performance Improvement Targets ..................................................................................................... 122
Developing Performance Indicators ..................................................................................................... 124
Integrated Performance Information Structures and Systems............................................................. 126
Barriers to Performance Improvement .................................................................................................. 127
CHAPTER 6: ETHICAL ISSUES OF QUALITY IMPROVEMENT .................................................................. 132
Quality Improvement Efforts and Ethical Standards ............................................................................ 133
Why Ethics of QI Matters ...................................................................................................................... 134
Why QI interventions May Require Ethical Review ............................................................................ 135
Independent Review Boards and review of QI Project Proposals ........................................................ 137
The Importance of Addressing Ethical Issues in QI Interventions ........................................................ 138
Creating an Ethical Framework for QI Activities ................................................................................... 143
CHAPTER 7: TEACHING QUALITY IMPROVEMENT ................................................................................ 148
Theoretical Frameworks That Inform Choice of Approach to QI Training ............................................ 148
The Philosophy of an Ideal QI Training Curriculum .............................................................................. 152


Creating, Choosing or Adapting a QI Curriculum ................................................................................ 153


CHAPTER 8: APPLICATION OF QUALITY IMPROVEMENT IN PRACTICE ................................................. 162
Involvement of Healthcare Providers ................................................................................................... 162
Methodological Challenges in Quantifying Healthcare Quality ............................................................ 163
Challenges to Data Quality.................................................................................................................... 163
Measuring Quality of Care as a Neglected Driver of Improved Health ................................................ 164
Selecting Performance Measures ......................................................................................................... 170
Measuring what matters – achieving balance and parsimony .............................................. 170


CHAPTER 1: THE MEANING OF QUALITY IMPROVEMENT IN HEALTHCARE

‘Not all changes are improvements but all improvement involves change. Changing the systems
that deliver care is the cornerstone of quality improvement.’ Deming

Definition of quality
“Doing the right thing, at the right time, in the right way, for the right person…and getting the best
possible results”. In health care, the term quality refers to the delivery of the right care to the right
patient at the right place and time with the right resources. Quality is both implicitly and explicitly
linked to ‘effectiveness’, where it refers to safety, efficient service delivery, and quality of patient
care (Andrews et al, 1997; Bodenheimer, 1999; Chassin and Galvin, 1998; Berwick, 1998; Bates
and Gawande, 2000). Donabedian (1980) defines quality as 'the ability to achieve desirable
objectives using legitimate means'. A system can only be said to be performing, in this case
achieving desired objectives, if it delivers high-quality interventions, care or services. 'The quality
of care' is increasingly referred to as 'performance' (Schneider et al, 1999; Marshall et al, 2000;
Jencks, 2000), although, judging by its current measurements, it may well be 'quality of technical
performance' (Blumenthal, 1996; Feinstein, 2002). Quality of care becomes a proxy for the quality of the whole
health system where the main business is clinical care. Quality of care may also refer to the
governance of healthcare systems (Buetow and Roland, 1999; Heard et al, 2001; Freedman, 2002).
It has been observed that healthcare will not realize its full potential unless change making (quality
improvement) becomes a routine practice, that is, “an intrinsic part of everyone's job, every day,
in all parts of the system, and a process that should benefit from the use of a wide variety of tools
and methods” (Batalden and Davidoff, 2007).

Attributes of quality in healthcare


QI means different things to different people. From a practical point of view, QI in health care
involves making changes for the better, either to the care of an individual patient or to the running
of one or more parts of a healthcare system. The terms ‘quality’ and ‘quality improvement’ mean
different things to different people in different circumstances. Within healthcare, there is no
universally accepted definition of ‘quality’. However, the US Institute of Medicine defines quality
of healthcare as ‘the degree to which health services for individuals and populations increase the
likelihood of desired health outcomes and are consistent with current professional knowledge’.


Accessible: People should be able to get the right care at the right time in the right setting by the
right healthcare provider.
Effective: People should receive care that works, based on the best available scientific information.
Safe: People should not be harmed by errors or accidents when they receive care.
Patient-centred: Healthcare providers should offer services in a way that is sensitive to an
individual’s needs and preferences.
Equitable: People should get the same quality of care regardless of who they are and where they
live.
Efficient: The health system should continually look for ways to reduce waste, including
waste of supplies, equipment, time, ideas and information.
Appropriately Resourced: The health system should have enough qualified providers, funding,
information, equipment, supplies and facilities to look after people’s health needs.
Integrated: All parts of the health system should be organized, connected and work with one
another to provide high-quality care.

Quality Improvement in Healthcare


The meaning of Quality improvement (QI) in healthcare
There are several definitions and meanings of QI. Most definitions characterize QI as a ‘systematic
approach’ using ‘specific methods’ to achieve ‘successful and sustained improvement’.
1) Ovretveit (2009) describes QI as ‘better patient experiences and outcomes’ achieved through
‘changing’ provider behaviour and organization through using a ‘systematic change method
and strategies’. The emphasis is on change which brings about improvement through
an approach (specific methods or tools) to attain better outcomes. Projects are usually at the
heart of QI, and can involve initiatives aimed at improving flow and/or increasing patient
satisfaction, or be built around any initiative that aims to reduce error, examine variation or
service, change the work environment, or optimize health care inventory. Conducting a QI
project may entail obtaining generalizable scientific evidence from the published literature,
applying the evidence to the care of a patient or to a re-engineering process for one or more
parts of a clinical system and measuring any performance improvement (Batalden et al 2003).
2) Quality improvement (QI) differs from quality assurance (QA). Whereas the main aim of QA
is to demonstrate that something meets certain requirements or criteria, QI is the process by
which desirable results are achieved. Also, QA may work in the short term but its results tend
not to be sustained, while QI results are more sustainable, especially when QI is done correctly.

In QI the aim is to improve quality overall by reducing unnecessary variation and focusing on
what happens most often rather than what happens relatively rarely. QI thrives in learning
environments that strive to improve the system and its processes rather than trying to eliminate
an outlier event.
3) QI can be seen as a relationship between people, process, and possibility (Savage et al, 2016).
People are the motor that drives the work, that is, the stakeholders and actors induce something
to happen. Process refers to the (scientific) approach (methodology) to learning about and
improving the organization that leads to improvement in quality. In health care, quality
improvement refers to “…the combined and unceasing efforts of everyone—health care
professionals, patients and their families, researchers, planners and educators—to make the
changes that will lead to better patient outcomes (health), better system performance (care) and
better professional development” (Batalden and Davidoff, 2007).
4) The commonly accepted model for improvement is the plan-do-study-act (PDSA) cycle, which
asks three essential questions (Deming, 2000; Langley et al, 1996): What are we trying to
accomplish? How will we know that a change is an improvement? What changes can we make
that will result in an improvement? This model can be used repeatedly to test a series of
consecutive changes.
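The iterative logic of the model can be sketched in a few lines of code. The sketch below is illustrative
only: the aim, the change ideas and the waiting-time figures are hypothetical, and the Model for
Improvement itself prescribes no particular software or notation.

    # Illustrative sketch of the Model for Improvement / PDSA logic.
    # The aim, the measure and all numbers are hypothetical.

    def pdsa_cycle(change_idea, baseline, test_change):
        """One Plan-Do-Study-Act cycle for a single change idea."""
        # Plan: predict that the change will move the measure below the baseline.
        # Do: test the change on a small scale and collect the measurement.
        observed = test_change(change_idea)
        # Study: compare what was observed with the prediction/baseline.
        improved = observed < baseline
        # Act: adopt the change if it improved the measure; otherwise adapt or abandon it.
        decision = "adopt" if improved else "adapt or abandon"
        return observed, improved, decision

    # Question 1: What are we trying to accomplish? (the aim)
    aim = "Reduce average clinic waiting time (minutes)"
    # Question 2: How will we know that a change is an improvement? (the measure)
    baseline_wait = 55
    # Question 3: What changes can we make that will result in an improvement? (change ideas)
    change_ideas = ["pre-booked slots", "nurse-led triage", "extra morning clinic"]

    def small_scale_test(idea):
        # Stand-in for a real small-scale test with actual data collection.
        return {"pre-booked slots": 48, "nurse-led triage": 58, "extra morning clinic": 41}[idea]

    print(aim)
    for idea in change_ideas:  # the cycle is used repeatedly for consecutive changes
        observed, improved, decision = pdsa_cycle(idea, baseline_wait, small_scale_test)
        print(f"{idea}: {observed} min, improvement={improved}, decision={decision}")
        if improved:
            baseline_wait = observed  # an adopted change becomes the new baseline

The point of the sketch is simply that each cycle answers the three questions with data from a
small-scale test before the next change is attempted.
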
Healthcare outcomes
Quality improvement (QI) is defined as better patient experience and outcomes achieved through
changing provider behaviour and organization through using a systematic change method and
strategies.
(The key elements in this definition are the combination of a ‘change’ (improvement) and a
‘method’ (an approach with appropriate tools), while paying attention to the context, in order to
achieve better outcomes). Thus QI is a proven, effective way to improve care for patients, residents
and clients, and to improve practice for staff. In the healthcare system, there are always
opportunities to optimize, streamline, develop and test processes, and QI should be a continuous
process and an integral part of the organization, that is, everyone’s work, regardless of role or
position within the organization.
The QI process
QI in healthcare refers to the broad range of activities of varying degrees of complexity and
methodological and statistical rigor through which healthcare providers develop, implement and
assess interventions, identify those that work well and implement them more broadly in order to


improve clinical practice. QI draws on a wide variety of methodologies, approaches and tools. QI
can be conceptualized as an umbrella term which encompasses many different systematic ‘change
methods’ to support improvement and better outcomes for patients and services. However, many
of these share some simple underlying principles, including a focus on:
a) Understanding the problem, with a particular emphasis on what the data tell you
b) Understanding the processes and systems within the organization – particularly the patient
pathway – and whether these can be simplified
c) Analyzing the demand, capacity and flow of the service
d) Choosing the tools to bring about change, including leadership and clinical engagement, skills
development, and staff and patient participation
e) Evaluating and measuring the impact of a change
The QI methodology
a) QI is a general term referring to a body of systematic knowledge, which some call a science or
a multi-discipline (Ovretveit, 2013). It refers to a set of methods that have been found to be
effective in improving care, the different strategies for addressing specific quality and safety
problems (such as hospital acquired infections, or communication problems between services),
and the different programmes for improving performance or safety issues (such as clinical
guidelines development or accreditation).
b) QI is a formal scientific approach to the analysis of performance and the systematic efforts to
improve it. One can only manage quality when one can measure and monitor quality (Eagle
and Davies, 1993; Ibrahim, 2001; Thompson and Harris, 2001). Over the last several
decades, health care has become increasingly complex and costly; consequently, healthcare
organizations struggle to provide equitable, affordable, safe, timely, and high-quality
healthcare, while still containing cost and satisfying patients and families. QI refers to the
employment of systematic changes to patient care processes so as to achieve improvement in
patient outcomes and safety, improve the patient and family experience, and increase the value
of care delivered.
c) QI refers to basically a process of change in human behaviour that is driven largely by
experiential learning. Thus development and adoption of QI interventions depends a lot on
changes in social policy, programmes or practices, within a specific context or environment of
healthcare delivery. As such, the evolution, development and success of improvement


interventions has much in common with changes in social policy and programmes. QI uses
rigorous methodology to evaluate systemic changes to patient care processes in an effort to
improve patient outcomes, patient and family experience of care, or the safety and value of the
care delivered (Kurowski et al, 2015). At the same time, the high stakes of clinical practice
demand that we provide the strongest possible evidence on exactly how, and whether,
improvement interventions work.
d) QI methods involve multiple sequential changes over time and utilize continuous measurement
and analysis. In complex and dynamic systems such as in healthcare, QI allows for rapid testing
and evaluation of new processes and methods for delivering care so as to achieve better patient
safety or patient outcomes (Kurowski et al, 2015).
Quality Improvement Science
This refers to the system of knowledge underpinning QI described by Edwards Deming (2000).
There are four components of knowledge that underpin quality improvement: Appreciation of a
system; Understanding of variation; Theory of knowledge; and Psychology. Successful
improvements can only be achieved when all four components are addressed. Deming (2000)
posits that it is impossible for improvement to occur without the following action: developing,
testing and implementing changes.
Appreciation of the system
In applying Deming’s concepts to health care, most patient care outcomes or services result from
a complex system of interaction between health-care professionals, treatment procedures and
medical equipment. Therefore, medical professionals and trainees should appreciate the
interdependencies and relationships among all of these components of the healthcare system
(doctors, nurses, patients, treatments, equipment, procedures, theatres and so on) thereby
increasing the accuracy of predictions about any impact that changes may have on the system.
Understanding of variation

Variation is the differences between two or more things that are similar. There is extensive
variation in health care and patient outcomes can differ from one ward to another, from one
hospital to another and from one region or country to another. Variation is a feature of most systems;
shortages of personnel, drugs or beds, for example, can lead to variations in care. Deming urges
people to ask questions about variation, including that related to treatment outcomes. For instance,
do the three patients returned to theatres after surgery indicate a problem with surgery? Did the


extra nurse on duty make a difference to patient care, or was it a coincidence? The ability to
answer such questions is part of the reason for undertaking improvement activities. QI science is
rooted in quasi-experimental research design and strong statistical theory such that, when
systematically applied across sites, it can produce generalizable knowledge about interventions that
improve health care quality, safety, and value (Ryan, 2011; Stromer, 2013). Thus it maintains its
rigor as a scientific method and its ability to improve outcomes. In routine practice, QI provides an
essential set of tools specifically devised to bridge the quality chasm, that is, address the gaps
between the level at which a healthcare system currently functions and the level at which it has
potential to function under optimal conditions (Kohn et al, 2000; Chun et al, 2014).
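To make the habit of questioning variation concrete, the sketch below applies a simple three-sigma
rule of thumb to hypothetical monthly counts of unplanned returns to theatre, asking whether a new
month's count lies within the system's usual (common-cause) variation or signals something unusual
(special-cause) worth investigating. The data, and the choice of rule, are illustrative assumptions
rather than part of Deming's account.

    # Hypothetical monthly counts of unplanned returns to theatre.
    # A simple three-sigma check is one common way of asking whether a month is
    # within the system's usual (common-cause) variation or signals something
    # unusual (special-cause) that merits investigation.
    from statistics import mean, pstdev

    monthly_returns = [2, 1, 3, 2, 4, 2, 1, 3, 2, 2, 3, 1]  # illustrative baseline year
    centre = mean(monthly_returns)
    sigma = pstdev(monthly_returns)
    upper_limit = centre + 3 * sigma
    lower_limit = max(0.0, centre - 3 * sigma)

    def classify(count):
        """Flag a monthly count that falls outside the usual variation."""
        if count > upper_limit or count < lower_limit:
            return "special-cause variation: investigate (is there a problem with surgery?)"
        return "common-cause variation: part of the system's normal behaviour"

    print(f"centre line = {centre:.1f}, control limits = ({lower_limit:.1f}, {upper_limit:.1f})")
    for new_month in (3, 7):  # e.g., 'do three returns to theatre indicate a problem?'
        print(new_month, "->", classify(new_month))

A 'common-cause' answer does not mean the system is good; it only means the month in question is
not, by itself, a signal. Improving a stable but poor process requires changing the process itself.
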
Theory of knowledge
Deming posits that the theory of knowledge requires us to make predictions that any changes we
make will lead to an improvement. Predicting the results of a change is a necessary step to enable
a plan to be made even though the future is uncertain. Building knowledge by making changes and
measuring the results or observing the differences is the foundation of the science of improvement.
Psychology
There is a need to understand the psychology of how people interact with each other and the system
in inducing change. Making a change, whether it is small or large, will have an impact and
knowledge of psychology helps to understand how people might react, and why they might resist
change, even if it is for the good. These potentially different reactions must be factored in when making
an improvement change.


Conceptual Frameworks Associated with Deming’s Four Domains of Profound Knowledge

A QI initiative has the following features:


1) Local interdisciplinary teams empowered and trained to set goals for improvement
2) Teams identifying causes of problems, barriers to quality or flaws in system design that lead
to poor quality
3) Teams trying out different ideas for improving how care is delivered in multiple brief, small
experiments of change
4) Teams conducting frequent, targeted measurement of quality in a way that gives them instant
feedback on whether the changes they are testing are heading in the right direction
The role of measurement in QI
QI activities require health professionals to collect and analyze data generated by the processes of
health care. For example, one cannot study the change in study habits without obtaining some
information about current study habits and the environment. One first needs the data to see if there
is a problem with study habits and, second, one decides what information is required to measure
whether there are improvements. Thus measurement is an essential component of quality
improvement because it forces people to examine and analyze what they do and how they do it.
Most activities in health care can be measured, and should be measured for quality improvements.
When individuals use the appropriate measures to measure change, significant improvements can
be made. All quality improvement methods rely on measurement.


The Institute for Healthcare Improvement endorses a QI method based on the Model for
Improvement (Langley et al, 2009), which focuses on five guiding principles, which have
characteristics of the research process:
a) Knowing why you need to improve (presence of a care or quality gap)
b) Having feedback mechanisms to show whether improvement is happening (relevant data)
c) Instituting effective changes that result in improvement (QI plans, strategies and actions)
d) Testing the change before attempting to implement (pilot and feasibility studies or
availability of empirical data)
e) Knowing when and how to make the change permanent (sustainability or institutionalizing
improvements into routine practice)
The Importance of Context to the Meaning of QI
To understand a QI intervention clearly, one needs to understand how the intervention
relates to general knowledge of the care problem that necessitates improvement. This requires the
authors to place their work within the context of issues that are known to impact the quality of
care. The word 'context' literally means 'to weave together'. The context thus refers to the interweaving of the issues
that stimulated the improvement idea and several spatial, social, temporal and cultural factors
within the local setting, all of which form the “canvas upon which improvement is painted”. The
explanation of context should go beyond a description of physical setting, but should include the
organization (types of patients served, staff providing care and care processes before introducing
the intervention), the governance structure, the health information systems, and the logistical
framework, so as to enable reviewers and readers to determine whether findings from the study are
likely to be transferable (that is, whether readers are able to relate them to their own
care setting). In studies with multiple sites, a table or matrix can be a convenient way to summarize
similarities and differences in context across sites. The table can specify the structures, processes,
people and patterns of care that are unique to each site and assist the reader in interpreting results.
Quality Improvement as Patient-Centred Care
Patient-centred care is defined as ‘health care that establishes a partnership among practitioners,
patients and their families (when appropriate) to ensure that decisions respect patients’ wants, needs
and preferences and that patients have the education and support they need to make decisions
and participate in their own care’ (IOM, 2001). Patient-centred care is increasingly


acknowledged as an integral part of quality in health care, and improving patient centeredness is
one of the six aims of the Institute of Medicine’s (IOM) Health Care Quality Initiative, according
to which health care should be safe, effective, patient-centred, timely, efficient and equitable. Yet,
firstly, the reasons for a patient-centred approach from a quality improvement perspective are not
always clear to all stakeholders. QI projects may put a focus only on a particular aspect of patient
centredness. Secondly, many QI initiatives imply that adding a patient survey to existing
performance measures will be sufficient to realize patient-centred care. While this may be
informative, it may not be very effective. Moreover, there appears to be a selection bias towards a
few established instruments capturing generic patient experience or satisfaction thereby ignoring
some of the broader challenges in assessing patient centredness. Thirdly, there are concerns with
regard to common strategies to improve patient centredness. The focus on patient-centredness has
continuously evolved in the literature and in recent years has been greatly emphasized in policy
initiatives. The literature on strategies to improve patient-centred care highlighted that ‘patient-
centred care is a widely used phrase but a complex and contested concept’ (Lewin et al, 2001). A
patient-centred approach from a quality improvement perspective involves improving patients’
rights, improving health gain and contributing to organizational learning.
Improving patients’ rights
Patients’ rights embrace arguments of democratization (according to which a paternalistic
relationship between patient and professional would contradict the notions of democratic
societies), operationalized in hospital settings in terms of policies to ensure confidentiality,
informed consent, information about treatment and care and issues related to professional-patient
interaction (Gerteis et al, 1993; Rotter and Larson, 2002). Participation in health care is an end in
itself (Berwick, 2009).
Improving health gain
The health gain perspective addresses the implications of patient-centred care on patient behaviour,
recovery and outcomes. Research suggests that patient centredness is associated with better
compliance, patient satisfaction, better recovery and health outcomes, augmentation of tolerance
for stress and pain levels, reduced readmission rates and better seeking of follow-up care (Lazarus,
2000; Hibbard et al, 2005; Jack et al, 2009; Balik et al, 2011).


Organizational learning
Another rationale for patient-centred care and an important focus from a QI perspective is
organizational learning. In order for organizations to learn, personal context-specific knowledge
needs to be transferred into systematic and formal knowledge. Knowledge-dependent
organizations constantly revise knowledge at all organizational levels in order to inform process
alignment, innovation, product development and service provision. In hospitals, patients’
knowledge has traditionally been ignored as a potential contribution to assessing, improving and
implementing work processes. Patients can contribute significantly to health-care improvements,
in particular through their assessment of non-clinical aspects of care, such as the care environment as
well as the care process. The reasons why patient survey data are not systematically used in QI efforts
may be organizational barriers (lack of priority or supporting infrastructures), professional barriers
(skepticism, resistance to change) or data-related barriers (lack of timely feedback or lack of
specificity and discrimination).

Why the QI context matters in quality improvement


In health care, using QI as the framework, processes have characteristics that can be measured,
analyzed, improved, and controlled. QI entails continuous efforts to achieve stable and predictable
process results, that is, to reduce process variation and improve the outcomes of these processes
both for patients and the health care organization and system (Berwick et al, 1988; Williamson et
al, 2012; Bardsley, 2012; Pencheon, 2013). Achieving sustained QI requires commitment from the
entire organization, particularly from top-level management. Standardized assessments of
healthcare performance have been widely implemented internationally via accreditation standards,
benchmarks, and/or key performance indicators (Mainz, 2003; Loeb, 2004; Purbey et al, 2007).
Many measures related to healthcare quality exist; however, most of them relate to hospitals
(Davila, 2002; Hysong et al, 2011; Kazandjian et al, 2008; Schull et al, 2011) or nursing care
(Gandjour et al, 2002; Gardner et al, 2010).

Underlying Principles for Quality Improvement in Healthcare


Data and measurement for improvement
Measurement and data gathering are vital elements both of attempts to improve quality (or
performance) and of assessing the impact of QI. However, measuring for improvement differs
from the two better-known types of measurement: measuring for research, which tests whether an
intervention works, and measuring for judgment, which helps managers gauge performance. In


contrast, when measuring for QI, the learning develops through the process. The research question
for QI, instead of asking whether an intervention works, is phrased by asking how (and how much)
the intervention can be made to work in a given situation and what will constitute ‘success’.
Consequently, the hypothesis changes throughout the QI project and the data will be ‘good enough’
rather than perfect.
Understanding the process
Access to data is vital when assessing whether there is a problem that necessitates improvement.
However, the data may not in themselves explain why the problem exists. Part of addressing the QI
problem requires understanding the process by which the problem occurs. Process mapping is a
tool used to chart each step of a QI process, mapping the pathway or journey through part or the
entire journey, and supporting processes. Process mapping is more useful as a tool to engage QI
teams to understand how the different steps fit together, which steps add value to the process, or
which steps are irrelevant.
Improving reliability
Once a process is understood, a key focus of QI is to improve the reliability of the system and
clinical processes, not only to mitigate waste and defects in the system, but also to reduce
error and harm. Systematic QI approaches such as Lean seek to redesign system and clinical
pathways, create more standardized working and develop error-free processes that deliver high
quality, consistent care and improve efficiency in use of resources.
Demand, capacity and flow
A capacity problem is usually blamed for persistence of backlogs, waiting lists and delays in a
service. Such a problem implies that there is insufficient staff, machines or equipment to deal with
the volume of patients. Without data to estimate the demand (the number of patients requiring
access to the service) and the flow (when the service is needed), it is difficult to pinpoint capacity
as being responsible. The capacity deficit may be in the wrong place, or may occur at the wrong time.
Planning QI requires a detailed understanding of the variation in, and relationship between, demand,
capacity and flow. For example, demand may often be relatively stable and flow may be predictable
in terms of peaks and troughs, so that the problem may be variation in the capacity available
at specific times.
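A minimal, entirely hypothetical sketch illustrates why timing matters: total daily capacity equals
total daily demand, yet a queue builds up during the day because the capacity is not available when
the demand arrives. The clinic hours and all figures are invented for the example.

    # Hypothetical hourly demand (arrivals) and capacity (appointment slots) in a clinic.
    # Total capacity equals total demand, but the slots are offered at the wrong times,
    # so a backlog accumulates during the day even though overall volume is sufficient.
    hours = ["08", "09", "10", "11", "12", "13", "14", "15"]
    demand = [8, 10, 9, 6, 3, 2, 4, 6]   # arrivals per hour (peaks in the morning)
    capacity = [4, 4, 4, 6, 8, 8, 8, 6]  # slots per hour (peaks in the afternoon)

    assert sum(demand) == sum(capacity)  # the problem is not total capacity

    backlog, peak_backlog = 0, 0
    for hour, d, c in zip(hours, demand, capacity):
        served = min(c, d + backlog)       # patients carried over are seen first
        backlog = backlog + d - served     # unmet demand waits for the next hour
        peak_backlog = max(peak_backlog, backlog)
        print(f"{hour}:00  demand={d:2d}  capacity={c:2d}  backlog={backlog:2d}")

    print("Peak backlog during the day:", peak_backlog)
    print("End-of-day backlog:", backlog)

In this invented example the backlog clears by the end of the day, so adding more total capacity
would not be the answer; matching the timing of capacity to the pattern of demand and flow would be.
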
Involving every individual in the organization is key


Motivating, involving and engaging staff is key. Evidence about successful QI shows that it is not
necessarily the method or approach used that predicts success, but the way in which the change is
introduced. Factors that contribute to success include leadership, staff engagement and
client/patient participation, as well as training and education. It is important to involve all relevant
staff, including non-clinical staff, who are often the first point of contact for patients. Also, it is
critical to break down traditional hierarchies for this multidisciplinary approach to ensure that all
perspectives and ideas are considered. Capability building and facilitated support are key elements
of building clinical commitment to improvement. Other important aspects include:
a) Involving the clinical team early on when setting aspirations and goals
b) Ensuring senior clinical involvement and peer influence
c) Involving clinical networks across organizational boundaries
d) Providing evidence that the change has been successful elsewhere
e) Embedding an understanding of quality improvement into training and education of
healthcare professionals
Involving patients and co-design
Patients, carers and the wider public have a critical role to play in QI, both in designing
improvements and in monitoring whether QI initiatives have the desired impact. Staff must
constantly ask the question ‘How do we know what constitutes good care, and how do we achieve
it?’ Engaging patients and carers in QI provides the answer. However, patients may define quality
differently from clinicians and managers, such that what they view as the ‘problem’ or as value within
a system may differ from what clinicians or managers see. So QI leaders need to question how
patient involvement is embedded in their organizations’ quality improvement programmes.
Unintended consequences of QI
Can QI have unintended consequences? At times, change in one area can cause pressure in another,
thereby causing ‘unintended consequences of QI’. For example, improved early discharge may
lead to increased readmission. In these circumstances, leaders need to anticipate and monitor for
these potential consequences using a set of balancing measures, and may need to make decisions
about scheduling or sequencing of initiatives. QI is likely to be more effective if it is addressed at
a whole-system level rather than a number of disconnected projects, and must be approached as a
long-term, sustained change effort.
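The sketch below is a hedged illustration of how a balancing measure might be watched alongside
the primary measure during such an initiative; the measures, the alert threshold and all the figures
are hypothetical.

    # Hypothetical monitoring of an early-discharge initiative.
    # Primary measure: average length of stay (should fall).
    # Balancing measure: 30-day readmission rate (should not rise).
    months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
    length_of_stay_days = [6.2, 5.9, 5.4, 5.0, 4.7, 4.5]       # primary measure
    readmission_rate_pct = [9.5, 9.8, 10.1, 11.4, 12.6, 13.2]  # balancing measure

    READMISSION_ALERT_PCT = 11.0  # hypothetical, locally agreed alert level

    for month, los, readmit in zip(months, length_of_stay_days, readmission_rate_pct):
        flag = "REVIEW: possible unintended consequence" if readmit > READMISSION_ALERT_PCT else "ok"
        print(f"{month}: length of stay {los:.1f} d, readmissions {readmit:.1f}% -> {flag}")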


The Role of Context in Quality Improvement


“Despite growing acknowledgement within the research community that the implementation of
research into practice is a complex and messy task, conceptual models describing the process still
tend to be unidimensional, suggesting some linearity and logic.” (Kitson et al, 1998)

‘Context’ for QI is defined as factors that potentially mediate the effect of the intervention,
including leadership and governance, interpersonal relationships, organizational resources, health
information systems and data availability and critical human resources (Ovretveit, 2011; Kaplan
et al, 2012; Tomoaia-Cotisel et al, 2013 ). Context is important for most phenomena of health care
and health (Sorensen et al, 2003; Kaplan et al, 2010; Kathol and Kathol, 2010) particularly in
explaining individual decision-making (Weiner, 2004; Weiner et al, 2010) and patient safety
(Phillips et al, 1998; Ovretveit et al, 2011; Taylor et al, 2011). However, contextual factors are
rarely recorded, analyzed, or reported in research reports. Because context is important to
interpreting and applying findings, attempts to replicate research often fail, and efforts to translate
research into practice often equally fail because contextual factors important for understanding and
knowledgably synthesizing findings across studies in meta-analyses and evidence-based
guidelines remain unclear (McCormack et al, 2002; Hawe et al, 2004). While a tremendous amount of
available research demonstrates the effectiveness of strategies to improve quality and enhance patient
safety, the contextual factors affecting the implementation and effectiveness of these strategies are
not well understood (Shojania et al, 2004; Bate et al, 2014). Grimshaw et al (2001), in their
systematic review of interventions to change provider behavior, highlighted concerns about the
strength of the evidence base on effectiveness, advising that the majority of interventions are effective
under some but not all circumstances.

Few studies investigate contextual and implementation factors in detail (Scott, 2009), which
weakens the findings, partly due to a lack of theoretically sound research methods that elucidate
why interventions work (or do not work) (Conry et al, 2012). Even so, the mixed
effects and success rates of strategies to improve quality and safety in health care depend
partly on the different contextual factors in the settings in which the interventions are planned and
implemented. These factors operate by influencing the effectiveness of quality improvement
interventions at the level of the micro-system (Kaplan et al, 2010; Dixon-Woods et al, 2011;


Ovretveit, 2011; Kringos et al, 2015). An intervention that works in one setting does not
necessarily work in another. Regarding quality improvement effectiveness, the common questions
asked are whether and why the initiatives worked. Yet, more often than not, data on all such factors
are not available, thus affecting the potential generalizability of the findings on the effectiveness of
QI strategies. Moreover, the broader question ‘why, when, where, and for whom QI interventions
work most effectively’ is of much greater concern and practical importance (Foy et al, 2011).
Besides, a thorough understanding of the underlying mechanisms that make an intervention work
has the potential to enable successful application of the intervention in other settings and to help
improve its effectiveness when it is replicated. Context is a necessary component of the “ingredients of
change” (Rycroft-Malone et al, 2002).

Assessing the Role of the Context in Quality Improvement


For complex phenomena in health care and health, this systematic scientific way of generating
new knowledge is enriched and made whole by considering context. Context factors are often
interrelated or interlinked. The Model for Understanding Success in Quality (MUSIQ) tool
(Kaplan et al, 2012; Kaplan et al, 2013) is a valid and reliable tool used to facilitate research on
the contextual factors affecting QI strategies. The tool identifies 25 contextual factors for quality
improvement, covering six overarching themes, namely external environment, organization,
quality improvement capacity, the clinical microsystem, the quality improvement team, and other
miscellaneous issues.
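The six themes above come from MUSIQ, but the sketch below is only an illustrative simplification
of how a team might summarize a context assessment by theme: the factor names shown, the 1-to-5
ratings and the simple averaging are assumptions for the example and are not the validated MUSIQ
instrument or its scoring.

    # Illustrative sketch: summarizing a context assessment by MUSIQ-style theme.
    # Ratings (1 = weak, 5 = strong) and the averaging are hypothetical simplifications.
    from statistics import mean

    context_ratings = {
        "external environment": {"external motivators": 4, "project sponsorship": 3},
        "organization": {"senior leader support": 2, "culture supportive of QI": 3},
        "QI support and capacity": {"data infrastructure": 2, "resource availability": 3},
        "microsystem": {"motivation to change": 4, "capability for improvement": 3},
        "QI team": {"team diversity": 4, "subject matter expert": 5, "prior QI experience": 3},
    }

    for theme, factors in context_ratings.items():
        score = mean(factors.values())
        weakest = min(factors, key=factors.get)
        print(f"{theme}: mean rating {score:.1f} (weakest factor: {weakest})")

Such a summary can point a team to the weakest aspects of its context before an intervention is
launched.
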
1) External environment: external motivators, project sponsorship: These include financial
incentives or administrative support for QI strategies (Bloom 2005; Conry et al, 2012), such
as clinical decision support systems (CDSS), monitoring and evaluation, and accreditation
programmes. Membership in a larger network often has implications for success of initiatives
to bring about practice change (Tomoaia-Cotisel et al, 2013).
2) Organizational context: A supportive organizational culture is critical for success of QI
initiatives (Griffiths et al, 2009; Glasgow et al, 2010; Flodgren et al, 2011). This includes
having a clear QI team, implementing clear handover systems (Ong and Coiera, 2011), having
policies and operating procedures, presence of active support for training or patient care,
participation of the leadership in the QI initiatives, incident reporting and use of clinical
decision support systems (Main et al, 2010), as well as embedding of feedback systems in


organizational QI (Veloski et al, 2006; Benn et al, 2009; Ivers et al, 2012). Organizational
factors encompass QI leadership, sponsors, a culture supportive of QI, robustness of
organizational QI strategies and physician payment structure.
3) QI support and capacity: QI support and capacity (data infrastructure, resource availability,
workforce focus on QI). The presence of functional information technology (IT) systems
facilitates data collection for effectiveness of QI interventions (Grimshaw et al, 2004; Ong and
Coiera, 2011). Insufficient administrative support impacts negatively on the effectiveness of
interventions promoting safety cultures (Weaver et al, 2011) or on strategies aimed at
implementing quality indicators (De Vos et al, 2009).
4) Microsystems: Clinical micro-systems have previously been described as the key settings in
which QI interventions are implemented (Pronovost et al, 2006; Godfrey et al, 2008; Blegen
et al, 2010; Mitchell et al, 2010; Pronovost et al, 2010). The influence may result from effect
on staff morale and skepticism of health care professionals towards the positive impact of QI
interventions. Interventions that may necessitate seeking and aligning physicians’ views on
the content and implementation of interventions (Shepherd et al, 2004; Chaudhry et al, 2006;
Ong and Coiera, 2011) include training or education in the proper use of QI strategies (such
as safety checklists, accreditation standards) and integrating QI strategies in the working
practices of health professionals (de Vos et al, 2009; Ko et al, 2011). The microsystem
encompasses QI leadership, culture supportive of QI, capability for improvement, motivation
to change.
5) The QI team: The QI team encompasses team diversity, physician involvement, availability of
subject matter experts, prior experience with QI, team leadership, team decision-making
process, team norms and team QI skills. Training of practice members, characteristics affecting
how they work together, and leadership are often relevant contextual factors. The composition
of the QI team is a major determinant for QI effectiveness (Aboelela et al, 2007; Stone et al,
2008; Damiani et al, 2010). Having ‘subject matter experts’, where more than one team member has
detailed knowledge about the outcome, process, or system being changed, is beneficial across the
range of QI strategies.
6) Assorted factors: Several contextual factors not addressed in the MUSIQ tool influence the
success of QI. These include which elements are implemented, when, and the period of time over
which this happens; the specific operational changes that are sought; the specific QI method or
approach employed; involvement of staff and patients in QI; feedback on performance to
clinicians; and the formal programme identity (such as demonstration project, pilot project or
organizational transformation). Others include success history (such as experience with
transformation, burnout, adaptive reserve) and provision of a safe place to experiment and even
fail; patient/client involvement in development; and unique factors related to the intervention
group (such as a specific disease or a specific demographic sub-population). In addition, the
main intervention objectives and outcomes (such as health status, patient satisfaction and
financial stability), trigger events, and the strategic importance of the task to the organization
matter. Other factors, unique to particular contexts, include structural factors of service
organization, such as turnover of staff, bed occupancy, workload and time constraints (Ong
and Coiera, 2011), and guidelines or computerized decision support systems (Garg et al, 2005;
Chan et al, 2012). One theme, implementation pathways, captures locally relevant elements of
an intervention, including operational changes (such as addition of new employees, redefined
roles, team communication strategies and feedback loops) as well as objectives of the
intervention and outcomes (say, health status of targeted populations, patient satisfaction and
financial stability).
Tips for evaluating the context
1) Engage diverse perspectives: research participants (organizations, patients and clinicians, the
investigative team) and relevant theoretical models; synthesize prior research; and engage
potential end users of study findings.
2) Consider multiple levels: from the macro to the micro, assess interlinkages and interactions
between levels.
3) Evaluate the evolution of contextual factors over time: assess initial conditions and history,
analyzing changes over the course of the study.
4) Look at both formal and informal systems and culture: look for (mis)alignments, be sensitive
to the locus of power, appraise internal and external motivations, and evaluate resources, support,
and financial and other incentives.
5) Assess (often nonlinear) interactions between contextual factors: assess both the process and
outcome of studies, and report within the body of scientific articles the key contextual factors that
others would need to know (1) to understand what happened in the study and why, and (2) to
be able to transport and knowledgeably reinvent the project in another situation.


References
Aboelela SW, Stone PW, Larson EL. Effectiveness of bundled behavioural interventions to control
healthcare-associated infections: a systematic review of the literature. J Hosp Infect. 2007;66:101–8.


Andrews LB, Stocking C, Krizek T et al. An alternative strategy for studying adverse events in
medical care. Lancet 1997; 349: 309–313.
Balik B, Conway J, Zipperer L et al. Achieving an Exceptional Patient and Family Experience
of Inpatient Hospital Care. IHI Innovation Series White Paper. Cambridge, MA: Institute for
Healthcare Improvement, 2011.
Bardsley M: Understanding patterns of complex care: putting the pieces together. J Health Serv
Res Policy 2012, 17:195.
Batalden PB, Davidoff F. What is “quality improvement” and how can it transform healthcare?
Qual Saf Health Care. 2007;16(1):2–3
Bate P, Robert G, Fulop N, et al. Perspectives on context. A selection of essays considering the
role of context in successful quality improvement. London: The Health Foundation; 2014.
Bates DW, Gawande AA. Error in medicine: what have we learned? Ann Intern Med 2000; 132:
763–767.
Benn J, Koutantji M, Wallace L, et al. Feedback from incident reporting: information and action
to improve patient safety. Qual Saf Health Care. 2009;18:11–21
Berwick DM., Developing and testing changes in delivery of care. Ann Intern Med 1998; 128:
651–656.
Berwick DM: Toward an applied technology for quality measurement in health. Med Decis
Making 1988, 8:253–258.
Berwick DW. What ‘Patient-Centered’ should mean: confessions of an extremist. Health Aff
2009; 28:w555–65
Blegen MA, Sehgal NL, Alldredge BK, et al. Improving safety culture on adult medical units
through multidisciplinary teamwork and communication interventions: the TOPS Project. Qual
Saf Health Care. 2010;19:346–50.
Bloom BS. Effects of continuing medical education on improving physician clinical care and
patient health: a review of systematic reviews. Int J Technol Assess Health Care. 2005;21:380–5


Blumenthal D., Quality of health care, part 1: quality of care— what is it? N Engl J Med 1996;
335: 891–894.
Bodenheimer T., The American health care system: the movement for improved quality in health
care. N Engl J Med 1999; 340: 488–492.
Buetow SA, Roland M. Clinical governance: bridging the gap between managerial and clinical
approaches to quality of care. Qual Health Care 1999; 8: 184–190.
Chan AJ, Chan J, Cafazzo JA, et al. Order sets in health care: a systematic review of their
effects. Int J Technol Assess Health Care. 2012;28:235–40.
Chassin MR, Galvin RW. The urgent need to improve health care quality: National Institute of
Medicine National Roundtable on Health Care Quality. J Am Med Assoc 1998; 280: 1000–1005.
Chaudhry B, Wang J, Wu S, et al. Systematic review: impact of health information technology on
quality, efficiency, and costs of medical care. Ann Intern Med. 2006;144:742–52
Conry MC, Humphries N, Morgan K, et al. A 10 year (2000–2010) systematic review of
interventions to improve quality of care in hospitals. BMC Health Serv Res. 2012;12:275.
Coulter A, Ellins J. Effectiveness of strategies for informing, educating, and involving
patients. BMJ 2007;335:24–7.
Damiani G, Pinnarelli L, Colosimo SC, et al. The effectiveness of computerized clinical guidelines
in the process of care: a systematic review. BMC Health Serv Res. 2010;10:2.
Davila F: What is an acceptable and specific definition of quality healthcare? Baylor Univ Med
Centre Proc 2002, 15:84–85.
de Vos M, Graafmans W, Kooistra M, et al. Using quality indicators to improve hospital care: a
review of the literature. Int J Qual Health Care. 2009;21:119–29.
Dixon-Woods M, Bosk CL, Aveling EL, et al. Explaining Michigan: developing an ex post theory
of a quality improvement program. Milbank Q. 2011;89:167–205.
Donabedian A., Explorations in Quality Assessment and Monitoring. The Definition of Quality and
Approaches to its Assessment. Vol. 1. Ann Arbor, MI: Health Administration Press, 1980.
Eagle CJ, Davies JM. Current models of ‘quality’ – an introduction for anaesthetists. Can J
Anaesth 1993; 40: 851–862.
Feinstein AR., Is “quality of care” being mislabeled or mismeasured? Am J Med 2002; 112: 472–
478.

Flodgren G, Pomey MP, et al. Effectiveness of external inspection of compliance with standards
in improving healthcare organization behaviour, healthcare professional behaviour or patient
outcomes. Cochrane Database Syst Rev. 2011;11:Cd008992.
Foy R, Ovretveit J, Shekelle PG, et al. The role of theory in research to develop and evaluate the
implementation of patient safety practices. BMJ Qual Saf. 2011;20:453–9
Freedman DB., Clinical governance—bridging management and clinical approaches to quality in
the UK. Clin Chim Acta 2002; 319: 133–141.
Gandjour A, Kleinschmit F, et al: An evidence-based evaluation of quality and efficiency
indicators. Qual Manag Healthc 2002, 10:41–52.
Gardner LA, Snow V, Weiss K, et al: Leveraging improvement in quality and value in healthcare
through a clinical performance framework: a recommendation of the American College of
Physicians. Am J Med Qual 2010, 25:336–342.
Garg AX, Adhikari NK, et al. Effects of computerized clinical decision support systems on
practitioner performance and patient outcomes: a systematic review. JAMA. 2005;293:1223–38.
Gerteis M, Edgman-Levitan S, Daley J et al. Through the patient’s eyes: understanding and
promoting patient-centred care. San Francisco: Jossey Bass Publishers, 1993.
Glasgow JM, Scott-Caziewell JR, Kaboli PJ. Guiding inpatient quality improvement: a
systematic review of Lean and Six Sigma. Jt Comm J Qual Patient Saf. 2010; 36: 533–40.
Godfrey MM, Melin CN, Muething SE, et al. Clinical microsystems, Part 3. Transformation of
two hospitals using microsystem, mesosystem, and macrosystem strategies. Jt Comm J Qual
Patient Saf. 2008;34:591–603.
Griffiths P, Renz A, Hughes J, Rafferty AM. Impact of organization and management factors on
infection control in hospitals: a scoping review. J Hosp Infect. 2009;73:1–14.
Grimshaw JM, Shirran L, Thomas R, et al. Changing provider behavior: an overview of systematic
reviews of interventions. Med Care. 2001;39 Suppl 2:Ii2–45.
Grimshaw JM, Thomas RE, MacLennan G, et al. Effectiveness and efficiency of guideline
dissemination and implementation strategies. Health Technol Assess. 2004;8:1–72.
Hawe P, Shiell A, Riley T, Gold L. Methods for exploring implementation variation and local
context within a cluster randomized community intervention trial. J Epidemiol Community
Health. 2004; 58(9):788-793.


Heard SR, Schiller G, Aitken M, Fergie C, McCready Hall L. Continuous quality improvement:
educating towards a culture of clinical governance. Qual Health Care 2001; 10: 70–78.
Hibbard JH, Mahoney ER, Stockard J et al. Development and testing of a short form of the
patient activation measure. Health Serv Res 2005;40: 1918–30.
Hysong S, Khan M, Petersen L: Passive monitoring versus active assessment of clinical
performance. Med Care 2011, 49:883–890.
Ibrahim JE. Performance indicators from all perspectives. Int J Qual Health Care 2001; 13: 431–
432
Institute of Medicine. Crossing the quality chasm: a new health system for the 21st century.
Washington, DC: IOM, 2001.
Ivers N, Jamtvedt G, Flottorp S, et al. Audit and feedback: effects on professional practice and
healthcare outcomes. Cochrane Database Syst Rev. 2012;6: Cd000259.
Jencks SF., Clinical performance measurement—a hard sell. JAMA 2000; 283: 2015–2016.
Kaplan HC, Brady PW, Dritz MC, et al. The influence of context on quality improvement success
in health care: a systematic review of the literature. Milbank Q. 2010;88(4):500-559.
Kaplan HC, Froehle CM, Cassedy A, et al. An exploratory analysis of the model for understanding
success in quality. Health Care Man Rev. 2013;38:325–38
Kaplan HC, Provost LP, Froehle CM, Margolis PA. The Model for Understanding Success in
Quality (MUSIQ): building a theory of context in healthcare quality improvement. BMJ Qual Saf.
2012;21:13–20.
Kathol RG, Kathol MH. The need for biometrically- and contextually-sound care plans in complex
patients. Ann Intern Med. 2010;153(9): 619-620.
Kazandjian VA, Wicker K, Matthes N, Oqunbo S: Safety is part of quality: a proposal for a
continuum in performance measurement. J Eval Clin Prac 2008, 14:354–359.
Kitson AL, Harvey G, McCormack B. Enabling the implementation of evidence based practice: a
conceptual framework. Qual Health Care 1998;7:149–58.
Ko HC, Turner TJ, Finnigan MA. Systematic review of safety checklists for use by medical care
teams in acute hospital settings—limited evidence of effectiveness. BMC Health Serv Res.
2011;11:211
Kringos DS, Sunol R, Wagner C, et al. The influence of context on the effectiveness of hospital
quality improvement strategies: a review of systematic reviews. BMC Health Serv Res. 2015;15:277.
Langley GJ, Nolan KM, Norman CL, Provost LP, Nolan TW. (1996).The Improvement Guide: A
Practical Approach to Enhancing Organizational Performance, San Francisco: Jossey-Bass
Publishers
Lazarus RS. Toward better research on stress and coping. Am Psychol 2000;55: 665–73.
Lewin SA, Skea ZC, Entwistle VA et al. Interventions for providers to promote a patient-
centred approach in clinical consultations. Cochrane Database Syst Rev 2001;4:CD003267.
Loeb J: The current state of performance measurement in healthcare. Int J Qual Healthcare 2004,
16:5–9.
Main C, Moxham T, Wyatt JC, et al. Computerized decision support systems in order
communication for diagnostic, screening or monitoring test ordering: systematic reviews of the
effects and cost-effectiveness of systems. Health Technol Assess. 2010;14:1–227
Mainz J: Defining and classifying clinical indicators for quality improvement. Int J Qual
Healthcare 2003, 15:523–530.
Marshall MN, Shekelle PG, Leatherman S, Brook RH. The public release of performance data:
what do we expect to gain? A review of the evidence. J Am Med Assoc 2000; 283: 1866–1874.
McCormack B, Kitson A, Harvey G, et al. Getting evidence into practice: the meaning of ‘context’. J Adv Nurs 2002; 38: 94–104.
Mitchell IA, McKay H, Van Leuvan C, et al. A prospective controlled trial of the effect of a
multi-faceted intervention on early recognition and intervention in deteriorating hospital patients.
Resuscitation. 2010;81:658–66.
Ong MS, Coiera E. A systematic review of failures in handoff communication during intrahospital
transfers. Jt Comm J Qual Patient Saf. 2011;37:274–84
Ovretveit J. Contemporary quality improvement. Cad. Saúde Pública, Rio de Janeiro 2013;
29(3):424-426
Ovretveit J. (2009) Does improving quality save money? A review of the evidence of which
improvements to quality reduce costs to healthcare service providers. London: Health Foundation.
Ovretveit J. Understanding the conditions for improvement: research to discover which context
influences affect improvement success. BMJ Qual Saf. 2011;20 Suppl 1:i18–23.
Ovretveit JC, Shekelle PG, Dy SM, et al. How does context affect interventions to improve patient
safety? An assessment of evidence from studies of five patient safety practices and proposals for
research. BMJ Qual Saf. 2011;20(7):604-610.
Pencheon D: Developing a sustainable health and care system: lessons for research and policy. J
Health Serv Res Policy 2013, 18:193.
Phillips KA, Morrison KR, Andersen R, Aday LA. Understanding the context of healthcare
utilization: assessing environmental and provider-related variables in the behavioral model of utilization. Health Serv Res 1998; 33: 571–596.
Pronovost P, Needham D, Berenholtz S, et al. An intervention to decrease catheter-related
bloodstream infections in the ICU. NEJM. 2006; 355:2725–32.
Pronovost PJ, Goeschel CA, et al. Sustaining reductions in catheter related bloodstream
infections in Michigan intensive care units: observational study. BMJ. 2010;340:c309.
Purbey S, Mukherjee K, Bhar C: Performance measurement system for healthcare processes. Int J
Product Perform Manag 2007, 56:241–251.
Roter D, Larson S. The Roter interaction analysis system (RIAS): utility and flexibility for
analysis of medical interactions. Patient Educ Couns 2002;46:243–51.
Rycroft-Malone J, Kitson A, Harvey G, et al. Ingredients for change: revisiting a conceptual
framework. Qual Saf Health Care. 2002;11: 174–80.
Savage C, Parke L, von Knorring M, Mazzocato P. Does lean muddy the quality improvement
waters? A qualitative study of how a hospital management team understands lean in the context of
quality improvement. BMC Health Services Research (2016) 16:588
Schull M, Guttman A, Leaver C, et al. Prioritizing performance measurement for emergency
department care: consensus on evidence-based quality of care indicators. Can J Emerg Med 2011,
13:300–309.
Scott I. What are the most effective strategies for improving quality and safety of health care?
Intern Med J. 2009;39:389–400.
Shepperd S, Parkes J, et al. Discharge planning from hospital to home. Cochrane Database Syst
Rev. 2004;1:Cd000313
Shojania KG, McDonald KM, Wachter RM. Closing the quality gap: A critical analysis of quality
improvement strategies. Rockville: Agency for Healthcare Research and Quality; 2004.
Sorensen G, Emmons K, Hunt MK, et al. Model for incorporating social context in health behavior
interventions: applications for cancer prevention for working-class, multiethnic populations. Prev
Med. 2003;37(3):188-197.
Stone PW, Pogorzelska M, et al. Hospital staffing and health care-associated infections: a systematic review of the literature. Clin Infect Dis. 2008;47:937–44.
Taylor SL, Dy S, Foy R, et al. What context features might be important determinants of the
effectiveness of patient safety practice interventions? BMJ Qual Saf. 2011;20(7):611-617.
Thompson BL, Harris JR. Performance measures: are we measuring what matters? Am J Prev Med
2001; 20: 291–293.
Tomoaia-Cotisel A, Scammon DL, Waitzman NJ, et al. Context matters: the experience of 14
research teams in systematically reporting contextual factors important for practice change. Ann
Fam Med. 2013;11 Suppl 1:S115–23.
Veloski J, Boex JR, et al. Systematic review of the literature on assessment, feedback and
physicians’ clinical performance: BEME Guide No. 7. Med Teach. 2006;28:117–28.
Weaver SJ, Lubomksi LH, Wilson RF, et al. Promoting a culture of safety as a patient safety
strategy: a systematic review. Ann Intern Med. 2013;158:369–74
Weiner SJ, Schwartz A, Weaver F, et al. Contextual errors and failures in individualizing patient
care: a multicenter study. Ann Intern Med. 2010;153(2):69-75.
Weiner SJ. Contextualizing medical decisions to individualize care: lessons from the qualitative
sciences. J Gen Intern Med. 2004;19(3): 281-285.
Williamson P, Altman D, Blazeby J, Clarke M, Gargon E: Driving up the quality and relevance of
research through the use of agreed core outcomes. J Health Serv Res Policy 2012, 17:1.


CHAPTER 2: QUALITY IMPROVEMENT MODELS, THEORIES AND FRAMEWORKS

‘Data and facts are not like pebbles on a beach, waiting to be picked up and collected. They can only be perceived and measured through an underlying theoretical and conceptual framework, which defines relevant facts, and distinguishes them from background noise’ (Wolfson, 1994).

The Importance of Theory and Models in QI


Sometimes new scientific findings, best practices, or clinical guidelines are easily implemented in
practice. Most of the time, however, improving patient care is not easy, particularly if an
innovation requires complex changes in clinical routines, better collaboration among disciplines,
changes in patients’ behavior, or changes in the organization of care. Most health care
improvements target factors related to individual professionals’ knowledge, routines, or attitudes
(Grimshaw et al, 2004) yet such QI may be impeded by a much broader range of economic,
administrative, and organizational factors or those relating to patients’ beliefs or behavior. A
consistent finding in articles on QI in health care is that change is difficult to achieve. Most
interventions are targeted at health care professionals. But success in achieving change may be
influenced by factors other than those relating to individual professionals, and theories may help
explain whether change is possible. This calls for a more systematic use of theory in planning and
evaluating QI interventions in clinical practice. Different theories can be used to generate testable
hypotheses regarding factors that influence the implementation of change, and how different
theoretical assumptions lead to different QI strategies (Michie et al, 2009).

Since interaction of factors at multiple levels may influence the success or failure of QI
interventions (Ferlie and Shortell 2001; Grol 1997; Shortell et al. 2000), understanding of these
factors (the obstacles and incentives for change) is crucial to an effective intervention (Grol and
Grimshaw 2003; Grol and Wensing 2004; van Bokhoven et al, 2003). Thus understanding of the
theoretical assumptions and hypotheses behind these factors is critical as it enables the
consideration of theory-based interventions for QI. Currently, most specific models or approaches
are based on implicit (and potentially biased) personal beliefs about human behavior and change
(Grol, 1997). There is a need for a set of theories regarding change in health care, and for a more systematic use of these theories in planning and evaluating changes in clinical practice.

The Complexity of Changing Practices


In their attempt to develop a unifying model of the diffusion of innovations in health care,
Greenhalgh et al (2004) found that the available (theoretical) literature on this issue is large,
diverse, and complex. The presence of multiple and often unpredictable interactions arising in particular contexts and settings determines whether a ‘change’ succeeds or fails during implementation. This calls for studies of the hypothesized links between interventions and outcomes, and for refinement of the mechanisms by which, and within which, these links produce (or fail to produce) change. Research and planning for ‘change’ should recognize the
interaction between an intervention and the complex setting in which it is used. For instance, in
the study of factors influencing the improvement of coronary bypass surgery (Shortell et al, 2000),
a need was identified for more detailed analysis of how microsystems of care provision can be
improved by combinations of interventions at different levels.

Using Theories in Planning and Evaluating Change Interventions


A theory refers to any description that asserts that a meaningful interaction exists between variables (a causal theory), any account that provides a coherent picture, in the form of a map or model, of a complex phenomenon or interaction, or any statement or model that describes how an independent variable changes the behaviour of a dependent variable (an explanatory theory) (Vandenbroucke, 2008). Prominent explanatory theories in the natural sciences include the theory of evolution, the periodic table of the elements, and Mendelian inheritance. For most changes in
health care, a range of factors interact at different levels (patients, professionals, interactions
among professionals in teams, the organizational context, and the economic and political context)
to determine whether and to what extent change is achieved. Thus for any innovation to be
implemented successfully, it is necessary to identify the potential interacting determinant factors,
derived from different theories that need to be tested for their singular or combined influence on
‘change’. While theories can be found in many disciplines and scientific areas, some are useful in
explaining the “change” in QI in health care. A theory may be defined as “a system of ideas or
statements held as an explanation or account of a group of facts or phenomena” (Michie and
Abraham, 2004). Theories differ widely in their focus, perspective, and underlying paradigms, and
may be divided into impact theories and process theories (Rossi et al, 1999). The ideal model for
change in health care would encompass both types of theories.

1) Process theories refer to the preferred implementation activities: how they should be planned,
organized, and scheduled in order to be effective (the organizational plan) and how the target
group will utilize and be influenced by the activities (the utilization plan).
2) Impact theories describe hypotheses and assumptions about how a specific intervention will
facilitate a desired change, as well as the causes, effects, and factors determining success (or
the lack of it) in improving health care.

Individual-Level Theories of Change


Cognitive Theories

1) Rational Decision Making

Cognitive theories of change management focus on the (rational) processes of thinking and action
by individual professionals. Rational decision-making theories assume that in order to provide
optimal care, professionals must consider and balance the advantages and disadvantages of
different alternative behaviors. Such theories regard the provision of convincing information about
risks and benefits and pros and cons as crucial to performance change. Other cognitive theories
are more descriptive and illustrate how decisions are actually made: clinicians do not act rationally
but instead decide on the basis of their previous experiences and contextual information (Schmidt,
1984). In making a diagnosis, physicians use so-called illness scripts, or cognitive structures in
which they have organized their knowledge of a specific health problem and in which previous
experiences with specific patients are crucial to further decisions (Botti and Reeve 2003; van
Leeuwen et al. 1995). The cognitive theories explain behavior in terms of health professionals’
lack of relevant (scientific) information, incorrect expectations about the consequences of their
behavior, or attributions of outcomes to causes outside their control. Therefore, to change
performance, it might be critical to concentrate on how professionals think and make decisions
about their daily work and support more effective ways of decision making, for instance, by
supplying detailed guidelines, decision aids, and evidence-based clinical pathways and protocols.

2) Consistency

Cognitive mechanisms may prevent rational decision making. Professionals may use obsolete
information or poor experiences as the basis for performance (change) (Choudry et al, 2005). For
instance, people prefer consistency in thinking and acting and so make choices that may not be
rational but fit existing opinions, needs, and behaviors. Thus if they do not like repeated hand
washing or doubt its effects, they may interpret or seek information that confirms their beliefs.
Also, people may seek an external explanation for specific events (infections) or behaviors instead
of an internal explanation in order to make it more acceptable to themselves or to fit it better to
their existing perceptions (Jones et al, 1972).

Educational Theories

1) Problem-Based Learning

Most educational theories focus on motivation, rather than cognition, as the basis for learning (and change).
For instance, adult learning theories state that people learn better and are more motivated to change
when they start with problems that they have had in practice than when they are pressured or
confronted with abstract information like guidelines (Holm 1998; Mann 1994; Merriam 1996;
Norman and Schmidt, 1992). Most healthcare professionals have wide experiences that they can
use as a source for learning and changing (Smith, Singleton, and Hilton, 1998). Differences
between novices and experts in health professions have been reported (Botti and Reeve, 2003; van
Leeuwen et al. 1995). For instance, in order to improve care for indwelling urethral catheters, care
providers first need to experience a problem (for instance, that their behavior may lead to catheter-
related complications in their patients) before they are motivated to do something about it. Here
the theory offers a framework in which to structure a discussion, identifying and applying past
experiences to solve this complicated problem within the current work setting. Not all care
providers have the competence or motivation for self-directed learning or self-assessment
(Norman, 2002). Professionals may also have different motives in regard to (self-directed) learning
and changing (Fox and Bennett, 1998; Stanley et al, 1993). Examples of these include a desire for
more social interaction, for meeting external expectations (including pressure from patients or
colleagues), for better serving others or society, for increasing professional competence or
professional status, for financial rewards, or for relief from boring routines or job frustrations.

2) Learning Style

Professionals’ personal learning style is another factor that influences change. Four learning styles have been described (Lewis and Bolden, 1989): activist (people who like new experiences and therefore
accept but also abandon innovations quickly), reflective (people who want to consider all options
very carefully before changing), theoretical (people who prefer rigorous analysis and thought
before changing), and pragmatic (people who prefer to act on the basis of practical experience
with an innovation). These different learning styles, individual learning needs and personal
motives in healthcare professionals influence ‘change’.

Motivational Theories

Several theories focus specifically on “a motivation to change,” determined by attitudes,
perceptions, values and intentions regarding the desired change or performance (Ajzen 1988;
Fishbein and Ajzen, 1975; Kok et al, 1991). Implementation strategies can address all these factors,
although the motivational theories have been used mainly in the field of health promotion. Walker et al (2001), for example, used the theory of planned behavior (described below) in a study of physicians’ intentions to prescribe antibiotics to patients presenting with an uncomplicated sore throat.

1) The theory of planned behavior

The “theory of planned behavior” states that any given behavior by professionals is influenced by
their individual intentions (or motivation) to perform the specific behavior and these intentions are
determined largely by attitudes toward the behavior, perceived social norms, and perceived control
related to the behavior (Ajzen 1991). Attitudes toward a specific behavior are determined by the
expected outcomes of the behavior and the positive or negative appraisal of these outcomes
(whether it is worth the extra effort). The perceived social norms are influenced by the behavior
of other professionals (particularly colleagues).
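
A common schematic way of summarising these relationships, with weights that are purely illustrative of relative importance (they are not part of Ajzen’s original statement of the theory), is:

    BI ∝ wA·A + wSN·SN + wPBC·PBC

where BI is the behavioral intention, A the attitude toward the behavior, SN the subjective (social) norm, PBC the perceived behavioral control, and each w the weight that a given behavior, population and setting attach to the corresponding component.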

2) Self efficacy

The perception or expectations of control or self-efficacy (Bandura, 1986; Bandura, 1997;
Maibach and Murphy 1995) represents the belief that one can really achieve the desired change in
the specific setting. Self-efficacy expectations can be related to the behavior itself (“Am I able to
do this?”), to the social context (“Can I resist social pressure?”) and to the pressure related to the
behavior (“Can I perform the behavior under tension?”).

Theories Related to Interpersonal Interaction


Theories Related to Social Interaction

Most of the theories related to social interaction discuss determinants of quality improvement
(change) in the interaction between an individual professional and others, such as the influence of
key individuals and opinion leaders, participation in social networks and teams, and the role of
leadership.

1) Theories about Communication

Several theories focus on effective communication aimed at changing individual attitudes and
behaviors as individuals interact through communication:

a) The Persuasion-Communication Model presents a stepwise model of persuasion: exposure to
a message, attention to that message, comprehension of the arguments and conclusions,
acceptance of the arguments, retention of the content, and attitude change (McGuire, 1985).
To be successful, communication should be adapted for each step; factors in the source of the message (such as credibility and status) and in the recipient (such as intelligence, prior knowledge and involvement) are therefore critical.
b) The Elaboration Likelihood Model (Petty and Cacioppo 1986; Petty et al, 1997) describes two distinct routes of information processing: a central or systematic process, in which a message is carefully
considered and compared with other messages and beliefs, and a peripheral or heuristic
process, in which a message is less intensively considered. Individuals are more responsive to
peripheral cues, such as the source and format of the message and the reaction of others.
Changes induced by the central, systematic route are likely to persist longer. Important factors
in this model are ‘persuasiveness of a message’ (Burnstein, 1982; Petty and Wegener, 1998),
‘repetition’ of the message, ‘novelty’ of the message, ‘perceived validity’ of the message,
message training, personal relevance, and functionality.
2) Social Learning Theory

Bandura’s “social cognitive theory” (1986) explains the behavior of individuals in terms of
personal factors, behavioral factors, and context-related factors. Important contextual factors are
material or non-material rewards from others (such as positive feedback from peers or opinion
leaders) as well as modeling of the behavior by others. The basic assumption of this theory is that
there is a continuous interaction among a professional, his or her performance, and the social
environment, which reinforce one another in ‘changing’ performance. Likewise, through
modeling, an individual can observe in others that it is possible to perform the desired behavior
and that it will lead to the expected results.

3) Social Network Theories

Theories of the diffusion of innovations state that the adoption of new ideas and technologies is
largely influenced by the structure of social networks and by specific individuals in or at the
margins of these networks (Rogers, 1995). Adoption is strongly shaped by the links between individuals within a network and by threshold effects (individuals tend to adopt once enough of their contacts have done so). Relevant network characteristics that
may influence an effective transfer of information are the strength of the ties between members of
the network, the differences between interacting individuals (in networks of like individuals,
innovations are less likely to be adopted), and the proportion of the population that has already
adopted an innovation (Gladwell 2000; Valente 1996).

4) Social Influence Theories

The social influence theories stress existing norms and values in the social network of
professionals as critical in influencing ‘change’. Performance in daily practice is assumed to be
based not on a conscious consideration of the advantages and disadvantages of specific behavior
but on the social norms in the network that define appropriate performance (Greer, 1988; Mittman
et al, 1992), such that ‘change’ occurs only after a local consensus is achieved. Interactions within
the social network, the views and expectations of significant peers, and the availability of
education influence effective implementation of innovations or changes.

5) Theories Related to Team Effectiveness

Teamwork is seen as a way to tackle the fragmentation of care and to improve the quality of both primary and hospital care for patients (Clemmer et al, 1998; Firth-Cozens, 1998). Teams are also used to
improve care for specific groups (Shortell et al, 2004; Wasson et al, 2003), such as patients with a
chronic disease. The success of teams relies on their working toward a common, clear goal, such
that effective teams help clinical systems do their work, define and assign tasks and roles, train
individuals to perform tasks, and establish clear structures and processes for communication
(Grumbach and Bodenheimer, 2004). Factors that influence teamwork include the presence of a
team champion (Shortell et al, 2004), information sharing and trust (Firth-Cozens, 1998), team
vision, participation (how much the team participates in making decisions and whether team
members feel confident in proposing new ideas), task orientation (the commitment of team
members to perform to their optimum), support for innovation (West, 1990), and “structural
factors” such as team size, group composition (mix of skills), and geographical proximity or
separation of the team (Firth-Cozens, 1998). Studies in hospitals found that better team functioning
was significantly associated with better performance (Wheelan et al, 2003; Friedman and Berger,
2004), emphasizing that efforts should aim at encouraging team collaboration in healthcare.

Theories Related to the Organizational Context


Theories of Leadership and Governance

Both formal and informal leaders can be very influential in changing clinical practice or
implementing new procedures or processes. Effective leadership promotes, guarantees, or (in some
circumstances) blocks innovations. This may occur through holding formal authority; controlling
scarce resources; possessing key information, expertise, or skills needed to achieve valued aims;
being part of a strong social network; or belonging to a dominant culture (Donaldson 1995;
Ovretveit 2004). Specific types of leadership probably are effective for particular innovations in
particular settings.

1) Theories of Innovative Organizations

Theories of innovative organizations focus on characteristics of organizations that determine
whether and how much they are able to implement innovations (Wolfe, 1994). Some organizations
are more innovative than others. Innovativeness seems to be associated with highly specialized
individual roles, a high level of professionalism, decentralized decision-making, easily available
technical knowledge, good internal and external communication, and a positive attitude toward
change among leaders and managers (Damanpour 1991).

2) Theory of Total Quality Management

Total Quality Management (TQM), sometimes called Continuous Quality Improvement (CQI),
emphasizes the continuous improvement of (multidisciplinary) processes in healthcare in order to
better meet customers’ needs (Blumenthal and Kilo, 1998; Shortell et al, 1998). Inadequate
performance is perceived as a failure of the system, so that real change can be achieved only by
changing the whole system (Berwick 1989). Changing the organizational culture, identifying the
leadership, and building teams are components of this approach. The basic principles of TQM are
comprehensive, organization-wide efforts to improve quality, a focus centered on the patients (or
customers), continuous improvements and redesign of care processes by encouraging alternating
cycles of change followed by relative stability, management by facts, a positive view of people,
ongoing training for all staff, and a key role for the leadership (Berwick and Nolan 1998; Plsek et
al, 2003). PDSA cycles (Plan-Do-Study-Act cycles) to improve the provision of care (continuous
learning about change by introducing a change and reflecting on it) are an important tool for TQM.
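
As an illustration only, the short Python sketch below (the class names, the ward and the hand-hygiene example are hypothetical and are not drawn from the TQM literature) shows one way an improvement team that keeps electronic records might document successive PDSA cycles for a single change idea, so that the prediction, the test, the learning and the decision from each cycle remain linked:

from dataclasses import dataclass, field
from typing import List

@dataclass
class PDSACycle:
    plan: str    # the change to be tested and the prediction of its effect
    do: str      # what was actually done, and on what scale
    study: str   # what the data showed compared with the prediction
    act: str     # adopt, adapt or abandon the change for the next cycle

@dataclass
class ChangeIdea:
    aim: str
    cycles: List[PDSACycle] = field(default_factory=list)

    def add_cycle(self, cycle: PDSACycle) -> None:
        self.cycles.append(cycle)

# Hypothetical example: testing a hand-hygiene prompt on one ward.
idea = ChangeIdea(aim="Increase hand-hygiene compliance on ward A to 90%")
idea.add_cycle(PDSACycle(
    plan="Place reminder posters at the ward entrance; predict compliance rises from 60% to 70%",
    do="Posters displayed for two weeks; compliance audited twice weekly",
    study="Compliance rose to 68%, slightly below the prediction",
    act="Adapt: keep the posters and add verbal prompts at handover in the next cycle",
))

Recording cycles in a structured way like this keeps the reflection step of each cycle explicit, which is the continuous-learning mechanism that TQM emphasizes.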

Theories of Integrated Care

1) Change of Processes of Care

Theories of integrated care stress the radical or gradual redesign of the steps in providing care.
Models for changing processes, such as Business Process Redesign (BPR) and disease
management, focus on improved organizing and managing the care of specific categories of
patients so that their needs are more readily met and costs are reduced. Change is often better
achieved by redesigning multidisciplinary care processes than by influencing professional
decision making. It usually includes top-down, management-driven approaches in which current practices and processes are analyzed, reconsidered, and fundamentally redesigned (Rogers 2003). These
approaches often include organizing new collaborations of care providers, allocating tasks differently, and transferring information more effectively. Traditional boundaries between disciplines
are thereby less relevant, and multidisciplinary collaboration is crucial.

2) Complexity Theory

Complexity theory refers to systems behavior and systems change, starting from the assumption
that because the world of health care has become increasingly complex, it is important to observe
and improve systems as a whole instead of dividing them into parts or components. This theory
sees hospitals, primary care teams, or care organized around a specific disease or problem (stroke,
diabetes, infection control) as “complex adaptive systems.” These are defined as “a collection of
individual agents (components, elements) with the freedom to act in ways that are not always
totally predictable, and whose actions are interconnected, so that one agent’s actions change the
context for other agents” (Plsek and Greenhalgh, 2001). The many components of complex
systems continuously interact, and these interactions are more important than the discrete actions
of individual agents or components (Sweeny and Griffiths, 2002). Such systems cannot be
adequately understood by analyzing their constituent parts. One implication of complexity theory
is that comprehensive plans with detailed targets for parts of the systems rarely improve patients’
care in complex systems. Rather, the focus should be on the system as a whole with simple goals
or minimal specifications (Plsek and Wilson, 2001), because the behavior of a complex system is
usually unpredictable over time, and small influences in one part of the system often have a large
impact elsewhere in the system or even outside the system. According to complexity theory, it is
important not to concentrate on single parts of the system, but rather to set broad targets for change.

3) Theories about Organizational Culture

An organization’s culture can be altered to change performance (Scott et al, 2003b, Scott et al,
2003a). Organizational culture refers to “something an organization possesses,” an “attribute,” or
may refer to the “whole character and experience of organizational life” (Scott et al. 2003a). To
form a culture, a group must have stability, shared experience and history. Over time, the group
learns to cope with its problems of external demands and internal integration and teaches these
values and underlying assumptions to new members. Therefore, culture consists of not only
observable features (such as a company’s mode of dress) but also a body of tacit knowledge
(information that people unconsciously possess). To improve quality, health care organizations
may need to develop a quality culture that emphasizes learning, teamwork and customer focus
(Ferlie and Shortell, 2001). Methods for promoting a quality culture start with the leadership’s
embracing the promotion of quality through the articulation of the organization’s mission and
vision, the engagement of people throughout the organization in quality, and attention to learning
(Boan and Funderburk, 2003). Several studies confirm the relationship between organizational
culture and health care performance (Scott et al, 2003b; Shortell et al, 1995). Cultures stressing
group affiliation, teamwork, and coordination were associated with greater improvement in
quality. The underlying culture model (often referred to as the competing values framework) distinguishes four ideal cultural orientations:

a) A group or clan culture, emphasizing flexibility and change and characterized by strong human
relations, teamwork, affiliation, and a focus on the internal organization;
b) A developmental culture, emphasizing growth, creativity, flexibility, and adaptation to the
external environment;
c) A rational culture, externally focused but control oriented, emphasizing productivity and
achievement and external competition; and
d) A hierarchical culture, stressing stability especially in the internal organization, uniformity,
and a close adherence to rules (Stock and McDermott 2001).
4) Theory of Organizational Learning and Knowledge Management

A “learning organization” is defined as “an organization skilled at creating, acquiring and
transferring knowledge, and at modifying its behavior to reflect new knowledge and insights”
(Garvin 1993). Individuals learn as agents for the organization, and their knowledge is stored in
the organization’s memory (such as embedded routines) (Örtenblad 2002). ‘Learning’ is seen as
a characteristic of the organization because it retains knowledge and expertise even after
individuals leave (DiBella et al, 1998; Nevis et al, 1995). The boundaries between theories of
“organizational learning” and “knowledge management” are unclear. Learning organizations are mostly associated with training, organizational development, and human resources development, whereas knowledge management is mostly associated with information technology, intellectual capital, and the use of information systems (Garavelli et al, 2002; Scarbrough and Swan, 2001).
Central to both theories is the idea that only through individuals’ learning are organizational
routines changed, and therefore, improving an organization’s learning ability requires favorable
conditions for individuals’ learning (Lähteenmäki et al, 2001). Organizations usually have formal
and informal structures for the acquisition, dissemination, and integration of knowledge (Nevis et
al, 1995).

The Need for Theory-Informed Research


In quality improvement, many hypotheses could represent theories or models. For example, a
hypothesis that “Introducing a new guideline on care of urinary tract infections will reduce the
rate of infection” makes theoretical (causal and explanatory) claims if the expectation is that “the
guideline will describe and justify to practitioners the correct standards of care”. The key
challenge for QI practitioners is not simply to base their work on theory but to make explicit the
informal and formal theories they are actually using.

Grand, Big and Small Theories

Many proposals for implementation research projects or QI studies use models or frameworks to
guide their implementation planning. However, many of the models used are not based on theory,
or are based only loosely on the underlying theory from which they were derived. While at their
development the models may have been linked to theories, these models are commonly restated
and reinterpreted, and the original tight linkage with theory is lost. A fully developed theory, in
the context of QI, is a theory that explains behavior change, and addresses the question: how and
why do people or organizational entities behave as they do? Given their current behavior, what
would motivate them to change behavior? What could explain the change in behavior? At the
organizational or system level, a theory should provide testable hypotheses and guidance to change
(or action) at both the individual and higher levels of the organization, addressing the subunit or
microsystem, or the unit level where the intervention (or change) is expected to occur (such as a
unit in a ward, a whole ward, a clinic, the whole hospital, or services at a district level or higher
levels). For instance, theories guiding social marketing could be used (together with ecological models) to explain competition for scarce resources within that organization. Also, a model of communication at the interpersonal level may explain the strategy of introducing a planned change intended to have impact at the organizational level (a combination of individual, interpersonal and organizational
theories). Theories inform the models that provide the foundation or infrastructure of the change
and the ingredients of the QI change (Sales et al, 2006).

Grand theory, such as a theory of social inequality (Schon, 1991), is formulated at a high level of abstraction, enabling it to make generalisations that apply across many different domains. While such abstract or overarching theory does not usually provide specific rules that can be applied to particular situations, it enables one to construct ‘particular descriptions and themes’ and can reveal assumptions and world-views that would otherwise remain under-articulated or internally contradictory. Middle (or ‘mid’)-range theories are delimited in their area of application, and are intermediate between ‘minor working hypotheses’ and the ‘all-inclusive speculations comprising a master conceptual scheme’. The initial formulation and reformulation of grand and mid-level theories is useful in QI, as it improves understanding of a problem or guides the development of specific interventions. For example, the theory of the diffusion of innovations (Rogers, 2003; Grol and Grimshaw, 2003) is a mid-range theory commonly used in QI, especially in interventions that rely on social and professional networks, as it explains what makes innovations easier to try and how to tailor innovations to make them consistent with existing systems (Lipsey, 1993; Weiss, 1995;
Rogers et al, 2000; Chen, 2005). Likewise, the Normalisation Process Theory (May, 2013)
describes how practices can become routinely embedded in social contexts.

Role of theory in explaining change in QI interventions

Initiatives to improve quality and safety in healthcare frequently result in limited changes for the
better or no meaningful changes at all, and the few that are successful are often hard to sustain or
replicate in new contexts (Holen, 2004; Dixon-Woods et al, 2013). This is partly due to the
enormous complexity of healthcare delivery systems, including their challenging technical, social,
institutional and political contexts. However, failure may also be attributed to a persistent failure to employ informal and formal theory in planning and executing improvement efforts (Davies et al, 2010; Foy et al, 2011). The explicit application of theory could shorten the development time of improvement interventions, optimize their design, identify contextual factors that may facilitate
success, and enhance learning from those efforts (Foy et al, 2011; Grol et al, 2007, French et al,
2012; Grol et al, 2013; Marshall et al, 2013).

Failure to use theory may lead to confusion about the results of QI efforts. For instance, the
effective cardiac treatment (AFFECT) study reported negative results from a trial of administrative
data feedback in improving hospital performance on key indicators of cardiac care (Beck et al,
2005). The study design was guided by empirical results and insights from previous studies, but
no explicit theories of individual or organizational behavior change were applied in planning the
design or conducting the study. While several limitations were acknowledged, the authors did not
address the question of ‘‘why’’ efforts were unsuccessful beyond pointing to elements that could
have been improved. Theoretical perspectives, such as those underlying the use of opinion leaders
to influence key stakeholders within the target organizations in the study, or the concept of
intensity or dose of intervention, could have markedly improved the design and conduct of the
study as well as the interpretation of results. Therefore, for interventions to induce planned change
in healthcare, theory provides clues to the mechanism(s) by which the intervention is or is not
successful. Without explicit attention to theory, many key aspects of the intervention may be
ignored.

The theory selected must be used rather than mentioned. Even when theory is used to frame or
inform a QI study, it may then be largely ignored in the selection of strategies, interventions,
selection of tools or measurements, and in interpretation of results (Sales, 2006). One problem
with having little or no theoretical basis for intervention planning is that strategies adopted for
implementation, and tools selected as mechanisms to induce behavior change, are not tightly linked to each other or to any underlying theory. Thus a theory is mentioned to inform the design but is not “used”. As a result, there is little reason to believe a priori that the strategies and actions for change (which constitute the intervention) will succeed in inducing the QI (behavior) change.
Any QI intervention that proposes an approach that mentions a theoretical framework for change
(that specifies reasons for behavior change at the individual, interpersonal, organizational or
system-wide level) should indicate how the theory is applied as part of both the planning process and
implementation phase. As part of this approach, models are considered, strategies selected, and
tools chosen (created, adopted, and/or adapted for use in the implementation process) in line with
the theory or theoretical framework (Sales et al, 2006).

The explicit use of theory anchors the intervention to the context, beyond motivating intervention
planning, design and conduct, as it explains the interaction between individuals, organizations and the contexts in which the QI intervention occurs. Use of theory may be most helpful when the targeted
action takes place in an organization with multiple actors, multiple layers, and complex factors
affecting decision-making processes, which characterizes almost any health care organization. The
interaction, particularly in complex organizations such as those in healthcare, is critical to selecting
appropriate theory to predict both individual behavior change, and change in an organizational
context and the influence of the external environment. There are many diverse theories that
describe processes contributing to organizational change, context and culture (Davis et al, 1995;
Ferlie, 1997; Ferlie et al, 2000; Rycroft-Malone et al, 2002; Eccles et al, 2003; Walker et al, 2003;
Grol and Wensing, 2004; Rhydderch et al, 2004). Theories of organizational change rarely apply
to planned activities of change, particularly when the change operates at levels within the organization and does not necessarily affect the organization as a whole (Sales et al, 2006). Thus the choice of theory must be linked to the particular context where the desired change is expected to occur.
Using theories explicitly

Making explicit the theoretical assumptions behind the choice of interventions should be important
to both researchers and change agents, for several reasons:

1) The use of theory can offer a generalizable framework for considering effectiveness across
different clinical conditions and settings (Eccles et al., 2005).
2) Basing interventions or a change program on different theoretical assumptions should prevent
overlooking important factors (Iceberg Group, 2006).
3) Several factors at different levels of health care (professional, social context, organizational or
economic) usually are important to improving patient care (Ferlie and Shortell, 2001; Grol,
1997), so hypotheses regarding effective change that are derived from different theories should
be useful.
4) Use of theory-driven QI change interventions helps in deciding on the best approaches, as the
theory highlights the drivers of change and the nature of change expected (personal,
interpersonal, organization, system-wide or impact-wide changes).

5) Delineating the quality (patient safety or healthcare) problem and choosing what interventions
are effective begins with a synthesis of the literature. Failure to use a theory creates problems when applying evidence from systematic reviews of such quality-improving interventions (Peterson, 2005). For instance, a review of audit and feedback intended to inform decision making about how best to use audit and feedback in future intervention efforts noted the authors’ inability to glean information on key aspects of conducting audit and feedback from the published literature (Foy et al, 2005). Failure to explicitly use a theory therefore impedes learning “why” and “how” from prior efforts, beyond identifying success or failure in specific attempts.
6) The need for more effective use of formal theory in improvement is increasingly an imperative
because application of formal theory enables the maximum exploitation of learning and
accumulation of knowledge, and promotes the transfer of learning from one project, one
context, one challenge, to the next. Where hypothesis-testing clinical research may demand the
development of, and rigorous adherence to, fixed study protocols and invariant interventions,
QI is different, and may require repeated adjustment and refinement of interventions, often in
a series of experiential learning cycles, in order to use interventions that are intentionally
adapted in light of emergent information and evaluation (Parry et al, 2013; Lagoa et al, 2014).
Understanding how individuals solve particular problems in field settings requires a strategy
of moving back and forth from the world of theory to the world of action. Without theory, one
can never understand the general underlying mechanisms that operate in many appearances in
different situations. If not harnessed to empirical problems, theoretical work can spin off under
its own momentum, reflecting little of the empirical world.
How to use a theory in quality improvement research or interventions
Most attempts to implement evidence-based practices in clinical settings are either only partially
successful, or unsuccessful, in the attempt (Oxman et al, 1995; Grimshaw et al, 2002; Eccles et al,
2004; Holden, 2004; Shojania and Grimshaw, 2004; Eccles et al, 2005). Explicitly outlining and
understanding some form of theory that explains the reason why an intervention may work to
induce planned change is a critical step in planning interventions to change provider or patient
behavior, particularly in order to promote evidence-based quality care. In quality improvement,
there may be a reluctance to examine theoretical bases for planning implementation activities and
efforts. This arises partly from the perceived need to differentiate between the nature of quality
improvement activities and the nature of the research component inbuilt or inherent in quality
improvement, where initiatives that focus solely on QI may not perceive the need for, and relevance of, a theory of change. Yet there is a need for careful consideration of theory in planning to implement
evidence-based practices into clinical care. The theory should be tightly linked to strategic
planning through careful choice or creation of the design, choice of interventions, evaluation of
the context, and an implementation strategy or framework (Sales et al, 2006). Strategies should
be linked to specific interventions and/or intervention components to be implemented. The choice
of tools should match the interventions and overall strategy, linking back to the original theory and
framework, so that investigators can assess a need to modify the theory or not (Sales et al, 2006).
In most studies where there is an attempt to implement planned change in clinical processes, theory
is used inappropriately, if at all.

The role of theory in informing the QI strategy


The theory not only informs the strategy of QI interventions, but also provides overall direction for planning, conduct, measurement and evaluation. QI may include more than one intervention, or have multiple interventions at multiple levels (individual, interpersonal or organizational). The theory may also inform how to address barriers and maximize the use of facilitators of change (which may be foreseen or may emerge during implementation), as well as the process of change (the process of implementing the QI intervention). Assessment of potential barriers and facilitators should precede strategy selection, or be carried out concurrently as part of strategy planning (Sales et al, 2006).
While development of strategy, and strategic planning for implementing an intervention, are often
not included in the process of planning to initiate behavior change, it is important to engage in a
systematic, strategic planning process before initiating an intervention or set of interventions. If
the theory underlying the planned change includes both individual-level theory and change at some level above that of the individual, an assessment of both the organizational readiness to change and the existing organizational culture and climate is a critical part of both strategic planning and implementation.
The role of the theory in mapping the strategy to the interventions
The theories inform both the strategy and selection of interventions. Once a guiding strategy is
selected based on the underlying theory or theories guiding the study, it is critical to map the
strategy to interventions. Here additional theories on promoting uptake of evidence-based practice
may be critical in informing the whole QI process (Davis et al, 1992; Davis et al, 1995; Grimshaw
et al, 2001; Grimshaw et al, 2003). The lack of effectiveness of QI interventions may be due to
several factors. Lack of tight linkage to theory, as well as lack of tight linkage to problem diagnosis
can decrease the likelihood of successful implementation, as issues of organizational and other
contextual factors may not have been appropriately addressed. The choice of intervention, which
is the focus of most QI implementation studies, should be dependent primarily on the selected
theory: why do people behave as observed in this setting,
and what intervention could effect desirable change? Tailoring an intervention to a specific context
requires development of tools that are usually very specific to both the intervention and the context
in which the intervention will take place. While several tools exist, most are specific to the
intervention or context where they were developed.
The importance of linking the problem, the context, the theory and the QI strategy

The number of quality improvement (QI) initiatives is increasing in an attempt to improve quality
of care, improve performance or reduce unwarranted variation. While it is essential to understand
the effectiveness of these initiatives, many commonly lack an underlying theory linking a change to its intended outcome, which inhibits the ability to demonstrate causality and hinders widespread
uptake (Shojania et al, 2005; Davies et al, 2010; Foy et al, 2011). Programme theory is used to
describe an intervention and its hypothesized effects in a particular context and is critical to support
both high-quality evaluation and the development of interventions and implementation plans
(Weiss, 1997; Grol et al, 2007; Dixon-Woods et al, 2012).

Often in QI, a theory is not used. Sometimes only the source of the problem is identified but not
an accompanying theory of change. Improvement interventions are also commonly launched
without either a good outcome measurement plan or the baseline data required for meaningful
time-series analyses (Pronovost et al, 2007; Walshe, 2007; Scott 2009; Pronovost 2011). This
often results in improvement interventions that remain unclear about the specifics of the desired
behaviours, the social and technical processes they seek to alter, the means by which the proposed
interventions might achieve their hoped-for effects in practice, and the methods by which their
impact will be assessed. Even published descriptions of what the intervention consists of are often
poor. Failure to use the various elements of formal theory adequately has frustrated the
understanding of effectiveness of improvement interventions, and limits learning that may inform
planning of future interventions. Failure to employ a theory leads to poor understanding of what an intervention really consists of, what it does, and how it works, which in turn curtails the meaningful replication of interventions that were successful in their original context. Without a good theoretical grasp of
the underlying theory and its critical components or constructs, improvers may adopt the label or
outward appearance attached to a successful intervention, which does not permit them to reproduce
its impact. This anomaly may explain the studies that come up with contradictory findings, such
as from checklists (Haynes et al, 2009; Aveling et al, 2013) or explain the limited effectiveness of
interventions (Hillman et al, 2001; Winters et al, 2013).
Developing and Applying Programme Theory
Use of program theory in QI
The identification and articulation of programme theory can support effective design, execution
and evaluation of quality improvement (QI) initiatives. Programme theory includes an agreed aim,
potential interventions to achieve this aim, anticipated cause/effect relationships between the
interventions and the aim, and measures to monitor improvement. One such approach is the Action Effect Method. Its development begins by building a driver diagram, followed by iteration over several rounds of improvement initiatives; this results in specification of the elements required to fully articulate the programme theory of a QI initiative. Development of programme theory can provide a means to tackle common social challenges of QI, such as creating a shared strategic aim and increasing acceptance of interventions. While other QI methods for the identification and articulation of theory and causal relationships exist, the Action Effect Method is a systematic and structured process to identify and articulate a QI initiative’s programme theory.
The method connects potential interventions and implementation activities with an overall
improvement aim through a diagrammatic representation of hypothesized and evidenced
cause/effect relationships. Measure concepts, in terms of service delivery and patient and system
outcomes, are identified to support evaluation. The action effect method provides a framework to
guide the execution and evaluation of a QI initiative, a focal point for other QI methods and a
communication tool to engage stakeholders. A clear definition of what constitutes a well-
articulated programme theory is provided to guide the use of the method and assessment of the
fidelity of its application.
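
To make the elements listed above concrete, the following minimal Python sketch (illustrative only; the class and field names and the medication-safety example are hypothetical and are not taken from the Action Effect Method publications) shows how the pieces of a programme theory (an overall aim; the factors, or drivers, believed to influence it; candidate interventions with their hypothesized cause/effect rationale; and measure concepts for evaluation) could be recorded in a structured form:

from dataclasses import dataclass, field
from typing import List

@dataclass
class Intervention:
    description: str
    rationale: str = ""  # hypothesized or evidenced cause/effect link to the driver

@dataclass
class Driver:
    factor: str  # factor believed to influence the overall aim
    interventions: List[Intervention] = field(default_factory=list)
    measure_concepts: List[str] = field(default_factory=list)  # service delivery and outcome measures

@dataclass
class ProgrammeTheory:
    aim: str
    drivers: List[Driver] = field(default_factory=list)

# Hypothetical example for a medication-safety initiative.
theory = ProgrammeTheory(aim="Reduce prescribing errors on admission by 50% within 12 months")
theory.drivers.append(Driver(
    factor="Accurate medication history taken at admission",
    interventions=[Intervention(
        "Pharmacist-led medication reconciliation within 24 hours of admission",
        rationale="Hypothesized to identify omissions before the first inpatient prescription is written",
    )],
    measure_concepts=[
        "Percentage of admissions with reconciliation completed within 24 hours",
        "Prescribing errors per 100 admissions",
    ],
))

Writing the theory down in a form like this makes the hypothesized cause/effect chain explicit and gives evaluators an agreed set of measure concepts before the initiative starts.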

An improvement team should begin by sketching out an intervention, then identifying its
components and the relationships that link their application with the desired outcomes. After this,
a theory of change is used. Grand and mid-range theories can be especially helpful in generalising
learning from situations that initially appear new and unique, partly through distinguishing
proximal causes (the most immediate action that makes something happen) from distal causes
(deeper structures that may lie behind patterns of effects). Combined formal and informal theory
can serve more effectively as the basis for decision-making and action than either kind of theory
by itself. In important ways, this blending of informal and formal theories resembles the process
of formulating accurate diagnoses in medical practice. The value of combining informal and
formal theory highlights the point that improvement interventions do not always need to flow
deductively from established formal theories.

Linking Theories, Tools and Strategies


The following questions can be used to assess whether a theory is well enough specified to be linked to tools and strategies:
1) Clarity of theoretical concepts: ‘Has the case been made for the independence of constructs
from each other?’
2) Clarity of relationships between constructs: ‘Are the relationships between constructs clearly
specified?’
3) Measurability: ‘Is an explicit methodology for measuring the constructs given?’
4) Testability: ‘Has the theory been specified in such a way that it can be tested?’
5) Being explanatory: ‘Has the theory been used to explain/account for a set of observations?’
6) Statistically or logically?;
a) Describing causality: ‘Has the theory been used to describe mechanisms of change?’
b) Achieving parsimony: ‘Has the case for parsimony been made?’
c) Generalisability: ‘Have generalisations been investigated across behaviours,
populations and contexts?’
d) Having an evidence base: ‘Is there empirical support for the propositions?’
Organizational theory
Organizational theory further substantiates the importance of systems thinking by emphasizing the
synergy in individuals’ interactions, communication, and behavior (Argyris, 1957, 1964). Current
approaches to solving quality problems are not entirely working or are not achieving results
quickly enough.
Systems theory
System theory offers a framework for quality improvement (QI) in healthcare systems because
systems theory supports systems thinking. Systems thinking is a discipline that allows us to see
the whole system and the relationships of the parts rather than just the isolated parts. High-quality
care is more likely in systems where relationships and interrelationships are considered important.
When relationships are considered important, greater emphasis is placed on effective
communication, team building, conflict management, behavioral competencies and skill
competencies, process management, and education, because these elements strengthen
relationships. The deliberate application of systems theory can support QI in healthcare systems.


When the systems thinking that underpins systems theory is applied to healthcare systems, the
relationships within the system are recognized as being as important as the component parts.
Interdisciplinary relationships, such as those among nursing, medicine, social work, and
administration, are central to social processes in a healthcare system and cannot be taken for
granted. Planning in healthcare systems often pays little attention to these relationships and
frequently fails because unanticipated behaviors emerge from the unanticipated interaction of the
component parts. A systems thinking approach helps to prevent system failure and therefore
supports QI by enabling healthcare workers to:
1) Improve communication among subsystems within the larger system
2) Create and manage effective teams
3) Establish trust through generative relationships
4) Support interdisciplinary collaborative practices
5) Recognize the importance of conflict-management education
6) Focus on processes rather than staff
7) Reduce power differentials between groups and subsystems
8) Embrace ongoing education
9) Improve morale through autonomy and point-of-service involvement
10) Encourage creativity and innovative problem solving
11) Strengthen the hierarchical components that support quality
12) Emphasize behavioral competency as well as skill competency

QI Models
Quality Improvement is a formal approach to the analysis of performance and systematic efforts
to improve it. There are numerous models used. Some commonly discussed include FADE, PDSA,
CQI: Continuous Quality Improvement and TQM: Total Quality Management. These models are
all means to get at the same thing: Improvement. They are forms of ongoing effort to make
performance better. In industry, quality efforts focus on topics like product failures or work-related
injuries. In administration, one can think of increasing efficiency or reducing re-work. In medical
practice, the focus is on reducing medical errors and needless morbidity and mortality. The
following may be employed:
1) 5S strategy (Sort, Set, Shine, Standardize and Sustain),
2) Continuous Quality Improvement (CQI)
3) Total Quality Management (TQM)


4) Theories for QI
There are a variety of QI models currently in use and five are highlighted here. Two of the models
highlighted, Care Model and Lean Model, provide a framework to improve patient care. The other
three models, Model for Improvement, FADE, and Six Sigma, focus on processes that monitor the
results of measures:
1) Care Model: There are six fundamental aspects of care identified in the Care Model,
which creates a system that promotes high-quality disease prevention and management. It
does this by supporting productive interactions between patients, who take an active part
in their care, and providers, who have the necessary resources and expertise.
2) Lean Model: This model defines value by what a customer (i.e., patient) wants. It maps
how the value flows to the customer (i.e., patient), and ensures the competency of the
process by making it cost effective and time efficient.
3) Model for Improvement: This model focuses on three questions to set the aim or
organizational goal, establish measures, and select changes. It incorporates Plan-Do-
Study-Act (PDSA) cycles to test changes on a small scale.
4) FADE: There are four broad steps to the FADE QI model:
i. Focus – define the process to be improved
ii. Analyze – collect and analyze data
iii. Develop – develop action plans for improvement
iv. Execute – implement the action plans, and Evaluate – measure and monitor the
system to ensure success
5) Six Sigma: Six Sigma is a measurement-based strategy for process improvement and
problem reduction. It is completed through the application of the QI project and
accomplished with the use of two Six Sigma models: 1) DMAIC (define, measure, analyze,
improve, control), which is designed to examine existing processes, and 2) DMADV
(define, measure, analyze, design, verify) which is used to develop new processes.
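As a brief, hedged illustration of the measurement basis of Six Sigma, the sketch below computes defects per million opportunities (DPMO) and converts it to an approximate sigma level using the customary 1.5-sigma shift. The defect counts, units and opportunity counts are invented example figures.

```python
# Minimal sketch: defects per million opportunities (DPMO) and an approximate
# sigma level using the conventional 1.5-sigma shift. Example figures are invented.
from statistics import NormalDist

def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value: float) -> float:
    """Approximate short-term sigma level (long-term yield plus 1.5-sigma shift)."""
    yield_fraction = 1 - dpmo_value / 1_000_000
    return NormalDist().inv_cdf(yield_fraction) + 1.5

# Example: 38 prescribing errors in 5,000 prescriptions, 4 error opportunities each.
d = dpmo(defects=38, units=5000, opportunities_per_unit=4)
print(f"DPMO: {d:.0f}, approximate sigma level: {sigma_level(d):.2f}")
```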
Managing Change in Quality Improvement
Improvement requires change, but not every change is an improvement. The approach used by
most organizations is to adopt a strategy for managing change and train their staff to facilitate the
improvement process. There are a number of change processes being used, including:


• Trial-and-error or jumping to solutions without sufficient study
• Extensive study of a problem which can lead to "analysis paralysis"
• Best practices by adopting someone else's success
• Top-down - leaders decide what changes are made
All of these change strategies have pros and cons and work under certain situations. When dealing
with the high stakes of clinical care, a prudent approach has gained popularity with QI teams
around the globe. The Model for Improvement is a strategy to systematically and effectively
manage change, which stemmed from the work of William Edwards Deming, also known as the
founder of continuous QI. The model has two parts:
• Part 1 presents three fundamental questions, which can be addressed in any order:
– What are we trying to accomplish?
– How will we know that a change is an improvement?
– What changes can we make that will result in improvement?
• Part 2 is the Plan-Do-Study-Act (PDSA) cycle to test and implement changes in real-world
settings. The PDSA cycle guides the test of change to determine if the change is an
improvement.
Testing Change – Using Plan Do Study Act (PDSA) Cycle


The PDSA (Plan, Do, Study, Act) cycle approach of small-scale, rapid tests of change is a
recognized way of achieving this. Using this approach, changes can be tested, refined and re-tested
a number of times, quickly and with minimal use of resources, until the change is reliable. The
PDSA Model for Improvement provides a framework for developing, testing and implementing
changes that lead to improvement. You must resist the temptation to rush into organizational or
departmental changes to systems without first testing the change to check that it actually brings
about improvement. For example, if unreliable commode cleaning is identified through use of the
IPS QITs, then a solution to the problem should be tested with one staff member and one commode
and, if successful, extended to two staff and so on. If unsuccessful, an alternative approach can be
tested.
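To make the idea of small-scale, stepwise testing concrete, the sketch below (a hypothetical Python illustration, not part of the IPS tools) logs successive PDSA cycles and expands the scale of the test only when the previous cycle met its target; otherwise the change is revised and re-tested at the same scale.

```python
# Minimal sketch of scaling a change through successive PDSA cycles.
# The change idea, target and observed results are invented examples.

def run_pdsa_cycles(results_by_cycle, target=0.9, start_scale=1):
    """Each cycle tests the change at the current scale; the scale doubles
    only if the observed compliance meets the target, otherwise the change
    is revised and re-tested at the same scale."""
    scale = start_scale
    for cycle, observed in enumerate(results_by_cycle, start=1):
        outcome = "adopt/expand" if observed >= target else "revise and re-test"
        print(f"Cycle {cycle}: scale={scale} commode(s)/staff, "
              f"observed compliance={observed:.0%} -> {outcome}")
        if observed >= target:
            scale *= 2  # expand the test, e.g. one commode -> two -> four

# Observed cleaning-compliance results from four hypothetical small tests.
run_pdsa_cycles([0.70, 0.85, 0.95, 0.92])
```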
Measurement of ‘Change’ in Quality Improvement
'If you cannot measure it, you cannot improve it' – Lord Kelvin (1824–1907).
Measurement is vital for quality improvement. There are three sets of ‘measures’ required for
quality improvement:
Outcome measures: These are the results of care processes and measure the results of quality
improvement work. In infection prevention and control, the outcome measure can be rates of
specific infections, e.g. surgical site infection or new cases of MRSA bacteremia. Outcome
measures are important as motivators to improve and ways of celebrating success.
Structure and Process measures: Measuring what actually happens in care is central to improving
quality. The IPS Quality Improvement Tools are designed to facilitate the measurement of
structure and process in infection prevention and control.
Balancing measures: It is sometimes necessary when making changes to care systems to look for
and examine any potential ‘side effects’ of the change, i.e. an unintended and adverse effect. An
example is when making changes to reduce the length of hospital stay; is the readmission rate
increased? For quality improvement the main purpose of measurement is to learn about the
processes that we are seeking to improve.
The characteristics of measurement for learning and improvement are:
a) Measure just what you need to measure and no more (make the measurement quick and easy
to do as far as possible)
b) Measure frequently and regularly, and use simple and easy-to-understand ways of feeding back
measurement to care workers engaged in improvement work (e.g. simple annotated run charts;
see the sketch after this list). Presentation of the results of QIT use will achieve this.
c) Mainly measure processes to see if we are doing what we should be doing, and doing
it reliably using the PITs then the RITs on a regular basis
d) Use measurement to learn not blame,
e) Quality improvement methods and tools, based as they are on industrial approaches, give us
the opportunity to make real breakthroughs in healthcare quality and in particular safety.
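The run chart mentioned in point (b) can be produced very simply. The sketch below plots invented monthly counts of new infections against their median, with a single annotation; the data, the campaign label and the axis names are illustrative assumptions, not real results.

```python
# Minimal sketch of an annotated run chart for monthly infection counts.
# The monthly counts are invented example data.
import statistics
import matplotlib.pyplot as plt

months = list(range(1, 13))
new_cases = [6, 5, 7, 6, 4, 5, 3, 4, 2, 3, 2, 2]  # e.g. new MRSA bacteremia cases

median = statistics.median(new_cases)

plt.plot(months, new_cases, marker="o", label="New cases per month")
plt.axhline(median, linestyle="--", label=f"Median = {median}")
plt.annotate("Hand-hygiene campaign started", xy=(7, new_cases[6]),
             xytext=(7, max(new_cases)), arrowprops={"arrowstyle": "->"})
plt.xlabel("Month")
plt.ylabel("Count of new cases")
plt.title("Run chart: new infections per month")
plt.legend()
plt.show()
```

Runs of several consecutive points on one side of the median are one commonly used signal of non-random change, which is why the median line is drawn.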
The focus within quality improvement on systems thinking, reliability, testing changes and
measurement has prompted IPS to move away from traditional ‘audit tools’ and develop this suite
of Quality Improvement Tools, and to endorse this approach to reducing the risk of infection and
making safety the norm in care settings. These tools will assist all care workers to measure and
improve their systems of infection prevention and control.

References
Ajzen, I. 1988. Attitudes, Personality and Behaviour. Milton Keynes, U.K.: Open University Press.
Ajzen, I. 1991. The Theory of Planned Behaviour. Organizational Behavior and Human Decision
Processes 50:179–211.


Argyris, C., and D. Schön. 1978. Organizational Learning: A Theory of Action Perspective.
Reading, Mass.: Addison-Wesley.
Ashford, A.J. 1998. Behavioural Change in Professional Practice. Supporting the Development
of Effective Implementation Strategies. Newcastle upon Tyne: Centre for Health Services
Research.
Aveling E, McCulloch P, Dixon-Woods M. A qualitative study comparing experiences of the
surgical safety checklist in hospitals in high-income and low-income countries. BMJ Open
2013;3:e003039
Bandura, A. 1986. Social Foundation of Thought and Action: A Social Cognitive Theory. New
York: Prentice-Hall.
Bandura, A. 1997. The Anatomy of Stages of Change. American Journal of Health Promotion
12:8–10.
Bartholomew, L.K., G.S. Parcel, G. Kok, and N.H. Gottlieb. 2001. Intervention Mapping:
Designing Theory- and Evidence-Based Health Promotion Programs. New York: McGraw-Hill.
Batalden, P.B., and P.K. Stoltz. 1993. A Framework for the Continual Improvement of Health
Care. Joint Commission Journal on Quality and Patient Safety 19:424–52.
Beck CA, Richard H, Tu JV, Pilote L. Administrative data feedback for effective cardiac treatment:
AFFECT, a cluster randomized trial. JAMA. 2005;294:309–17.
Berwick, D.M. 1989. Continuous Improvement as an Ideal in Health Care. New England Journal
of Medicine 320(1):53–56.
Berwick, D.M., A.B. Godfrey, and J. Roessner. 1990. Curing Health Care. San Francisco: Jossey-
Bass.
Berwick, D.M., and T.W. Nolan. 1998. Physicians as Leaders in Improving Health Care. Annals
of Internal Medicine 128:289–92.
Blumenthal, D., and C.M. Kilo. 1998. A Report Card on Continuous Quality Improvement. The
Milbank Quarterly 76:625–48.
Bower, P., S. Campbell, C. Bojke, and B. Sibbald. 2003. Team Structure, Team Climate and the
Quality of Care in Primary Care: An Observational Study. Quality and Safety in Health Care
12(4):273–79.


Burnstein, E. 1982. Persuasion as Argument Processing. In Contemporary Problems in Group


Decision Making, edited by H. Brandstatter, J.H. Davis, and G. Stocher-Kreichgauer, 103–24. New
York: Academic Press.
Chen H. Practical programme evaluation: assessing and improving planning, implementation, and
effectiveness. Education Research 10:37–50.
Choudry NK, RH Fletcher, SB. Soumerai. 2005. Systematic Review: The Relationship between
Clinical Experience and Quality of Health Care. Annals of Internal Medicine 142:260–73.
Damanpour, F. 1991. Organizational Innovation: A Meta-Analysis of Effects of Determinants and
Moderators. Academy of Management Journal 34:555–90.
Davies P, Walker AE, Grimshaw JM. A systematic review of the use of theory in the design of
guideline dissemination and implementation strategies and interpretation of the results of rigorous
evaluations. Implement Sci 2010;5:14.
Davis DA, Thomson MA, et al. Changing physician performance. A systematic review of the
effect of continuing medical education strategies. JAMA. 1995;274:700–5.
Davis DA, Thomson MA, Oxman AD, Haynes RB. Evidence for the effectiveness of CME. A
review of 50 randomized controlled trials. JAMA. 1992;268:1111–7.
DiBella, A.J., E.C. Nevis, and J.M. Gould. 1996. Understanding Organizational Learning
Capability. Journal of Management Studies 33:361–79.
Dixon-Woods M, Leslie M, Tarrant C, et al. Explaining matching Michigan: an ethnographic study
of a patient safety programme. Implement Sci 2013;8:70.
Dixon-Woods M, McNicol S, Martin G. Ten challenges in improving quality in healthcare: lessons
from the Health Foundation’s programme evaluations and relevant literature. BMJ Qual Saf
2012;21:876–84.
Donaldson, L. 1995. Conflict, Power, Negotiation. British Medical Journal 310:104–7.
Eagly, A.H., and S. Chaiken. 1993. The Psychology of Attitudes. Fort Worth: Harcourt Brace
Jovanovich.
Eccles M, Grimshaw J, Campbell M, Ramsay C. Research designs for studies evaluating the
effectiveness of change and improvement strategies. Qual Saf Health Care. 2003;12:47–52.
Eccles M, Grimshaw J, et al. Changing the behavior of healthcare professionals: the use of theory
in promoting the uptake of research findings. J Clin Epidemiol. 2005;58:107–12.


Eccles MP, Grimshaw JM. Selecting, presenting and delivering clinical guidelines: are there any
‘‘magic bullets’’? Med J Aust. 2004;180(suppl): S52–S54.
Ferlie E, Fitzgerald L, Wood M. Getting evidence into clinical practice: an organisational behaviour
perspective. J Health Serv Res Policy. 2000;5:96–102.
Ferlie E. Large-scale organizational and managerial change in health care: a review of the
literature. J Health Serv Res Policy. 1997;2:180–9.
Ferlie, E.B., S.M. Shortell. 2001. Improving the Quality of Health Care in the United Kingdom
and the United States: A Framework for Change. The Milbank Quarterly 79(2):281–315.
Firth-Cozens, J. 1998. Celebrating Teamwork. Quality in Health Care 7:S3–S7.
Fishbein, M., and I. Ajzen. 1975. Belief, Attitude, Intention and Behavior. New York:Wiley.
Fox, R.D., and N.L. Bennett. 1998. Learning and Change: Implications for Continuing Medical
Education. British Medical Journal 316:466–68.
Foy R, Eccles MP, Jamtvedt G, et al. What do we know about how to do audit and feedback?
Pitfalls in applying evidence from a systematic review. BMC Health Serv Res. 2005;5:50.
Foy R, Ovretveit J, Shekelle PG, et al. The role of theory in research to develop and evaluate the
implementation of patient safety practices. BMJ Qual Saf 2011;20:453–9.
Foy, R., G. MacLennan, et al. 2002. Attributes of clinical recommendations that influence change
in practice following audit and feedback. Journal of Clinical Epidemiology 55:717–22.
Frambach RT, N. Schillewaert. 2002. Organizational innovation adoption. A multi-level
framework of determinants and opportunities for future research. J Business Res 55:163–76.
French SD, Green SE, O’Connor DA, et al. Developing theory-informed behaviour change
interventions to implement evidence into practice: a systematic approach using the Theoretical
Domains Framework. Implementation Science 2012, 7:38

Friedman, D.M., and D.L. Berger. 2004. Improving Team Structure and Communication. Archives
of Surgery 139:1194–98.
Garavelli, A.C., M. Gorgoglione, and B. Scozzi. 2002. Managing Knowledge Transfer by
Knowledge Technologies. Technovation 22:269–79.
Gardner B, Whittington C, McAteer J, et al. Using theory to synthesize evidence from behaviour
change interventions: the example of audit and feedback. Soc Sci Med 2010;70:1618–25.
Garside, P. 1998. Organizational Context for Quality: Lessons from the Fields of Organizational
Development and Change Management. Quality in Health Care 7:S8–S15.


Garvin, D. 1993. Building a Learning Organization. Harvard Business Review :78–91.


Greenhalgh, T., G. Robert, F. MacFarlane, et al.2004. Diffusion of innovations in service
organizations: systematic review and recommendations. The Milbank Quarterly 82(4):581–629.
Greer, A.L. 1988. The State of the Art versus the State of the Science. The Diffusion of New
Medical Technologies into Practice. International Journal of Technology Assessment in Health
Care 4:5–26.
Grimshaw J, McAuley LM, Bero LA, et al. Systematic reviews of the effectiveness of quality
improvement strategies and programmes. Qual Saf Health Care. 2003;12:298–303.
Grimshaw JM, Eccles MP, et al. Changing physicians' behavior: what works and thoughts on
getting more things to work. J Contin Educ Health Prof. 2002;22:237–43.
Grimshaw JM, Shirran L, Thomas R, et al. Changing provider behavior: an overview of
systematic reviews of interventions. Med Care. 2001;39(Suppl 2):II2–II45.
Grol R, Grimshaw J. From best evidence to best practice: effective implementation of change in
patients’ care. Lancet 2003;362:1225–30.
Grol R, Wensing M, Eccles M, et al eds. Improving patient care: the implementation of change in
health care. Hoboken, NJ: John Wiley & Sons, 2013.
Grol R, Wensing M. What drives change? Barriers to and incentives for achieving evidence based
practice. Med J Aust. 2004;180(6 Suppl): S57–S60.
Grol RPTM, Bosch MC, Hulscher MEJL, et al. Planning and studying improvement in patient
care: the use of theoretical perspectives. Milbank Q 2007;85:93–138.
Grol, R. 1992. Implementing Guidelines in General Practice Care. Quality in Health Care 1:184–
91.
Grol, R. 1997. Personal Paper. Beliefs and Evidence in Changing Clinical Practice. British Medical
Journal 315:418–21.
Grol, R., and M. Wensing. 2005a. Characteristics of Successful Innovations. In Improving Patient
Care; the Implementation of Change in Clinical Practice, edited by R. Grol, M. Wensing, and M.
Eccles. Oxford: Elsevier.


Grol, R., and M. Wensing. 2005b. Effective Implementation: A Model. In Improving Patient Care;
the Implementation of Change in Clinical Practice, edited by R. Grol, M. Wensing, and M. Eccles,
41–58.
Grol, R., M. Wensing, and M. Eccles, eds. 2005. Improving Patient Care; the Implementation of
Change in Clinical Practice. Oxford: Elsevier.
Grumbach, M.D., and T. Bodenheimer. 2004. Can Health Care Teams Improve Primary Care
Practice? Journal of the American Medical Association 291(10):1246–51.
Ham, C. 2003. Improving the Performance of Health Services: The Role of Clinical Leadership.
Lancet 361:1978–80.
Haynes AB, Weiser TG, Berry WR, et al. Safe Surgery Saves Lives Study Group. A surgical
safety checklist to reduce morbidity and mortality in a global population. N Engl J Med
2009;360:491–9.
Hillman K, Parr M, Flabouris A, et al. Redefining in-hospital resuscitation: the concept of the
medical emergency team. Resuscitation 2001;48:105–10.
Holden JD. Systematic review of published multi-practice audits from British general practice. J
Eval Clin Pract. 2004;10:247–72.
Holm, H.A. 1998. Quality Issues in Continuing Medical Education. British Medical Journal
316:621–24.
Improved Clinical Effectiveness through Behavioural Research Group (ICEBeRG). 2006.
Designing Theoretically-Informed Implementation Interventions. Implementation Science 1:4.
Jones, E.E., D.E. Kanouse, H.H. Kelley, et al. 1972. Attribution: Perceiving the Causes of
Behavior. Morristown, N.J.: General Learning Press.
Kitson, A., G. Harvey, B. McCormack. 1998. Enabling the Implementation of Evidence Based
Practice: A Conceptual Framework. Quality in Health Care 7(3):149–58.
Kok, G.J., H. De Vries, A.N. Mudde, V.J. Strecher. 1991. Planned Health Education and the Role
of Self-Efficacy: Dutch Research. Health Education Research 6:231–38.
Lähteenmäki, S., J. Toivonen, M. Mattila. 2001. Critical Aspects of Organizational Learning
Research and Proposals for Its Measurement. British Journal of Management 12:113–29.
Laffel, G., and D. Blumenthal. 1989. The Case for Using Industrial Quality Management Science
in Health Care Organization. JAMA 262:2869–73.


Lagoa CM, Bekiroglu K, Lanza ST, et al. Designing adaptive intensive interventions using
methods from engineering. J Consult Clin Psychol 2014;82:868–78.
Langley, G., K. Nolan, T. Nolan, C.L. Norman, and L.P. Provost. 1996. The Improvement Guide.
San Francisco: Jossey-Bass.
Lewis, A.P., and K.J. Bolden. 1989. General Practitioners and Their Learning Styles. Journal of
the Royal College of General Practitioners 39:187–99.
Lipsey MW. Theory as method: small theories of treatments. New Dir Programme Eval
1993;57:5–38.
Lomas, J., and R.B. Haynes. 1988. A Taxonomy and critical review of tested strategies for the
application of clinical practice recommendations: from “official” to “individual” clinical policy.
American Journal of Preventive Medicine 4(suppl.):77–94.
Loo, R. 2003. Assessing “Team Climate” in Project Teams. International Journal of Project
Management 21:511–17.
Maibach, E., and D.A. Murphy. 1995. Self-Efficacy in Health Promotion Research and Practice:
Conceptualization and Measurement. Health
Mann, K.V. 1994. Educating Medical Students: Lessons from Research in Continuing Education.
Academic Medicine 69:41–47.
Marshall M, Pronovost P, Dixon-Woods M. Promotion of improvement as a science. Lancet
2013;381:419–21.
May C. Towards a general theory of implementation. Implement Sci 2013;8:18.
McGuire, W. 1981. Theoretical foundation of campaigns. In Public Communications Campaigns,
edited by R. Rice and W. Paisley. Beverly Hills, Calif.: Sage.
McGuire, W. 1985. Attitudes and Attitude Change. In The Handbook of Social Psychology, 2nd
ed., edited by G. Lindzey and E. Aronson, 233–46. Beverly Hills, Calif.: Sage.
Merriam, S.B. 1996. Updating our knowledge of adult learning. Journal of Continuing Education
in the Health Professions 16:136–43.
Michie, S., C. Abraham. 2004. Interventions to change health behaviours: evidence-based or
evidence-inspired? Psychology and Health 19(1):29–49.
Michie, S., M. Johnston, C. Abraham, et al. 2005. Making Psychological Theory Useful for
Implementing Evidence Based Practice: A Consensus Approach. Quality and Safety in Health Care
14:26–33.


Michie S, Fixsen D, Grimshaw JM, Eccles MP: Specifying and reporting complex behaviour
change interventions: the need for a scientific method. Implement Sci 2009, 4:40.
Mittman BS, X. Tonesk, PD Jacobson. 1992. Implementing Clinical Practice Guidelines: Social
Influence Strategies and Practitioner Behaviour Change. Quality Review Bulletin 18:413–22.
Nevis, E.C., A.J. DiBella, J.M. Gould. 1995. Understanding Organizations as Learning Systems.
Sloan Management Review 36:73– 85.
Norman, G.R. 2002. Research in Medical Education: Three Decades of Progress. British Medical
Journal 324:1560–62.
Norman, GR, HG. Schmidt. 1992. The Psychological Basis of Problem-Based Learning: A Review
of the Evidence. Academic Medicine 67:557–65.
Nylenna, M., E. Falkum. O.G. Aasland. 1996. Keeping Professionally Updated: Perceived Coping
and CME Profiles among Physicians. Journal of Continuing Education in the Health Professions
16:241–49.
Örtenblad, A. 2002. A Typology of the Idea of Learning Organization. Management Learning
33(2):213–30.
Ovretveit, J. 1999. A Team Quality Improvement Sequence for Complex Problems. Quality in
Health Care 8:239–46.
Ovretveit, J. 2004. The Leaders’ Role in Quality and Safety Improvement; a Review of Research
and Guidance; the “Improving Improvement Action Evaluation Project.” Fourth Report.
Stockholm:
Oxman AD, Thomson MA, Davis DA, Haynes RB. No magic bullets: a systematic review of 102
trials of interventions to improve professional practice. CMAJ 1995;153:1423–31.
Parry GJ, Carson-Stevens A, Luff DF, et al. Recommendations for evaluation of health care
improvement initiatives. Acad Pediatr 2013;13(6 Suppl):S23–30.
Peterson ED. Optimizing the science of quality improvement. JAMA. 2005;294:369–71.
Petty, R.E., and R.T. Cacioppo. 1986. The Elaboration Likelihood Model of Persuasion. In
Advances in Experimental Social Psychology, edited by L. Berkowitz, 123–205. New York:
Academic Press.
Petty, R.E., D.T. Wegener, and L.R. Fabrigar. 1997. Attitudes and Attitude Change. Annual
Review of Psychology 48:609–48.


Plsek PE. Tutorial: management and planning tools of TQM. Qual Manag Health Care 1993;1:59–
72
Plsek, P., L. Solberg, R. Grol. 2003. Total Quality Management and Continuous Quality
Improvement. In Oxford Textbook of Primary Medical Care, edited by R. Jones et al., 490–95.
Oxford: Oxford University Press.
Plsek, P.E., T. Greenhalgh. 2001. Complexity Science: The Challenge of Complexity in Health
Care. British Medical Journal 323:625–28.
Prochaska, J.O., and W.F. Velicer. 1997. The Transtheoretical Model of Health Behavior Change.
American Journal of Health Promotion 12:38–48.
Pronovost PJ, Miller M, Wachter RM. The GAAP in quality measurement and reporting. JAMA
2007;298:1800–2.
Provost LP. Analytical studies: a framework for quality improvement design and analysis. BMJ
Qual Saf 2011;20(Suppl 1):i92–6.
Rhydderch M, Elwyn G, Marshall M, Grol R. Organisational change theory and the use of
indicators in general practice. Qual Saf Health Care. 2004;13:213–7.
Robertson, N., R. Baker, H. Hearnshaw. 1996. Changing the Clinical Behaviour of Doctors: A
Psychological Framework. Quality in Health Care 1:51–54.
Rogers EM. Diffusion of innovations. 5th edn. New York, NY: Free Press, 2003.
Rogers PJ, Petrosino A, Huebner TA, et al. Programme theory evaluation: practice, promise, and
problems. New Dir Eval 2000;87:5–13.
Rogers, E.M. 1983. Diffusion of Innovations. New York: Free Press.
Rogers, E.M. 1995. Diffusion of Innovations. 4th ed. New York: Free Press.
Rogers, S. 2003. Continuous Quality Improvement: Effects on Professional Patient Outcomes
(Protocol for a Cochrane Review). In The Cochrane Library, no. 2. Oxford: Update Software.
Rossi, P., H. Freeman, M. Lipsey. 1999. Evaluation: A Systematic Approach. 6th ed. Newberry
Park, Calif.: Sage.
Rycroft-Malone J, Kitson A, Harvey G, et al. Ingredients for change: revisiting a conceptual
framework. Qual Saf Health Care. 2002;11: 174–80.
Sales, A, Smith J, Curran G, Kochevar, L. Models, strategies, and tools. Theory in implementing
evidence-based findings into health care practice. J Gen Intern Med 2006; 21:S43–49


Scarbrough, H., and J. Swan. 2001. Explaining the diffusion of knowledge management: the role
of fashion. British Journal of Management 12:3–12.
Schein, E.H. 1985. Organizational Culture and Leadership. San Francisco: Jossey-Bass.
Schmidt, H., ed. 1984. Tutorials in Problem-Based Learning. Assen/ Maastricht: Van Gorcum.
Schon DA. The reflective practitioner: how professionals think in action. Aldershot, UK:
Ashgate Publishing, 1991.
Scott I. What are the most effective strategies for improving quality and safety of health care?
Intern Med J 2009;39:389–400.
Scott, T., R. Mannion, H. Davies, M.N. Marshall. 2003a. Implementing Culture Change in Health
Care: Theory and Practice. International Journal for Quality in Health Care 15(2):111–18.
Scott, T., R. Mannion, M. Marshall, H. Davies. 2003b. Does Organizational Culture Influence
Health Care Performance? A Review of the Evidence. Journal of Health Services Research and
Policy 8:105–17.
Scott, W.R. 1990. Innovation in Medical Care Organizations. A Synthetic Review. Medical Care
Review 47:165–92.
Senge, P.M. 1990. The Fifth Discipline; the Art and Practice of the Learning Organization.
London: Random House.
Shojania KG, Grimshaw JM. Evidence-based quality improvement: the state of the science. Health
Aff (Millwood) 2005;24:138–50.
Shojania KG, Grimshaw JM. Still no magic bullets: pursuing more rigorous research in quality
improvement. Am J Med. 2004;116:778–80.
Shortell, S.M., C.L. Bennett, G.R. Byck. 1998. Assessing the Impact of Continuous Quality
Improvement on Clinical Practice: What It Will Take to Accelerate Progress. The Milbank
Quarterly 76:593–624.
Shortell, S.M., J.A. Marsteller, M. Lin, M.L. et al. 2004. The Role of perceived team effectiveness
in improving chronic illness care. Medical Care 42(11):1040–48.
Shortell, S.M., J.L. O’Brien, J.M. Carman, et al 1995. Assessing the Impact of Continuous Quality
Improvement/Total Quality Management: Concept versus Implementation. Health Services
Research 30(2):377–401.
Shortell, S.M., R.H. Jones, A.W. Rademaker, et al. 2000. Assessing the impact of total quality
management and organizational culture on multiple outcomes of care for coronary


van Bokhoven, M.A., G. Kok, T. van derWeijden. 2003. Designing a Quality Improvement
Intervention: A Systematic Approach. Quality and Safety in Health Care 12(3):215–20.
van Leeuwen, Y.D., S.S.L. Mol, M.C. Pollemans, et al. 1995. Change in Knowledge of General
Vandenbroucke JP. Observational research, randomised trials, and two views of medical science.
PLoS Med 2008;5:e67.
Wagner, E.H. 2000. The Role of Patient Care Teams in Chronic Disease Management. British
Medical Journal 320:569–72.
Wagner, E.H., B.T. Austin, M. van Korff. 1996. Organizing Care for Patients with Chronic Illness.
The Milbank Quarterly 74(4):511–44.
Walker AE, Grimshaw J, Johnston M, et al. PRIME—PRocess modelling in ImpleMEntation
research: selecting a theoretical basis for interventions to change clinical practice. BMC Health
Walker, A.E., J.M. Grimshaw, E.M. Armstrong. 2001. Salient Beliefs and Intentions to Prescribe
Antibiotics for Patients with a Sore Throat. British Journal of Health Psychology 6(4):347–60.
Walshe K. Understanding what works—and why—in quality improvement: the need for theory-
driven evaluation. Int J Qual Health Care 2007;19:57–9.
Weiss C. Nothing as practical as a good theory: exploring theory-based evaluation for
comprehensive community initiatives for children and families. In: Connell J, Kuchisch A, Schorr
LB, et al. eds. New approaches to evaluating community initiatives: concepts, methods and
contexts. 1st edn. New York, NY: Aspen Institute, 1995:65–92
Weiss CH. Theory-based evaluation: past, present, and future. New Dir Eval 1997;1997:41–55.
Wensing, M., H. Wollersheim, and R. Grol. 2006. Organizational Interventions to Implement
Improvements in Patient Care: A Structured Review of Reviews. Implementation Science
Wensing, M., M. Bosch, R. Foy, et al. 2005. Factors in Theories on Behaviour Change to Guide
Implementation and Quality Improvement in Health Care. Nijmegen: Centre for Quality of Care
Research (WOK).
West, M.A. 1990. The Social Psychology of Innovation in Groups. In Innovation and Creativity
at Work: Psychological and Organizational Strategies, edited by M.A. West and J.L. Farr, 4–36.
Chichester: Wiley.


Wheelan, S.A., C.N. Burchill, F. Tilin. 2003. The link between teamwork and patients’ outcomes
in intensive care. American Journal of Critical Care 12:527–34.
Winters BD, Weaver SJ, Pfoh ER, et al. Rapid-response systems as a patient safety strategy: a
systematic review. Ann Intern Med 2013;158(5_Part_2):417–25.
Wolfe, R.A. 1994. Organizational Innovation: Review, Critique and Suggested Research
Directions. Journal of Management Studies 31:405–31.
Wolfson M. Social proprioception: measurement, data and information from a population health
perspective. In Evans RG, Barer ML, Marmor T, eds, Why are Some People Healthy and Others
Not? New York, NY: Aldine de Gruyter, 1994: p. 309.


CHAPTER 3: INSTITUTIONALIZING QUALITY IMPROVEMENT IN HEALTH CARE


Quality Improvement Methodology
‘The ultimate goal is to manage quality. But you cannot manage it until you have a way to
measure it, and you cannot measure it until you can monitor it’
Florence Nightingale

Quality improvement is an approach or process that seeks to address one or more of the categories
of ‘quality’. Successful ‘industrial’ approaches which have addressed both systems and processes
in order to improve outcome have increasingly been applied in healthcare settings and it is these
approaches that have influenced the development of this new generation of monitoring tools
produced by the IPS.
Systems thinking
Systems thinking views every care organization and care process as a system, and outcomes as the
products of that system. This is the opposite of outcomes (adverse ones in particular) being considered to
result from the failings of individuals who can be trained or exhorted to do better. Systems-thinking
asks you to consider the context (including the environment in which care is practiced) and whether
it is designed to reduce error and promote patient safety and best practice. The environment
includes the physical environment but also the systems and processes (the ways of doing things)
that happen within it. The Process Improvement Tools can assist in highlighting problems within
the environment and clinical practice which may require change to improve patient outcomes.
Quality improvement approaches
Business process reengineering
This approach involves a fundamental rethinking of how an organization’s central processes are
designed, with change driven from the top, by a visionary leader. Organizations are restructured
around key processes (defined as activities, or sets of activities) rather than specialist functions.
By moving away from traditional approaches, organizations can identify waste and become more
streamlined.
Experience-based co-design
This is an approach to improving patients’ experience of services, through patients and staff
working in partnership to design services or pathways. Data are gathered through in-depth
interviews, observations and group discussions and analysed to identify ‘touch points’ – aspects
of the service that are emotionally significant. Staff are shown an edited film of patients' views
about their experiences before staff and patients come together in small groups to develop service
improvements.
Lean
This is a quality management system that draws on the way some Japanese car manufacturers,
including Toyota, manage their production processes. The approach focuses on five principles:
customer value; managing the value stream; regulating flow of production (to avoid quiet patches
and bottlenecks); reducing waste; and using ‘pull’ mechanisms to support flow. Using ‘pull’ means
responding to actual demand, rather than allowing the organizational needs to determine
production levels.
Statistical process control
This approach examines the difference between natural variation (known as ‘common cause
variation’) and variation that can be controlled (‘special cause variation’). The approach uses
control charts that display boundaries for acceptable variation in a process. Data are collected over
time to show whether a process is within control limits in order to detect poor or deteriorating
performance and target where improvements are needed.
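As an illustration of how control limits are derived, the sketch below computes the centre line and three-sigma limits of a c-chart, one common control chart type for counts of events such as infections per month. The monthly counts are invented example data, not results from any real service.

```python
# Minimal sketch: centre line and 3-sigma limits for a c-chart (counts of events).
# Monthly infection counts are invented example data.
from statistics import mean
from math import sqrt

counts = [4, 6, 5, 7, 3, 5, 6, 4, 5, 8, 4, 5]

centre = mean(counts)                # average count per period (centre line)
sigma = sqrt(centre)                 # for a c-chart, sigma = sqrt(mean count)
ucl = centre + 3 * sigma             # upper control limit
lcl = max(0.0, centre - 3 * sigma)   # lower control limit (cannot be negative)

print(f"Centre line: {centre:.2f}, UCL: {ucl:.2f}, LCL: {lcl:.2f}")
for period, c in enumerate(counts, start=1):
    flag = "special cause?" if (c > ucl or c < lcl) else "common cause variation"
    print(f"Month {period:2d}: {c} -> {flag}")
```

Other chart types (for example, p-charts for proportions) follow the same logic with different limit formulas.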
Theory of constraints
The theory of constraints came from a simple concept similar to the idea that a chain is only as
strong as its weakest link. The theory recognizes that movement along a process, or chain of tasks,
will only flow at the rate of the task that has the least capacity. The approach involves identifying
the constraint (or bottleneck) in the process and getting the most out of it (since this rate-limiting
step determines the system's output, the entire value of the system is represented by what flows
through the bottleneck), while recognizing the impact of mismatches between the variations in
demand and the variations in capacity at the process constraint.
Total quality management (TQM)
Total quality management, also known as continuous quality improvement, is a management
approach that focuses on quality and the role of the people within an organization to develop
changes in culture, processes and practice. Rather than a process, it is a philosophy that is applied
to the whole organization, encompassing factors such as leadership, customer focus, evidence-
based decision making and a systematic approach to management and change.
Principles of Quality Improvement
When quality is considered from the IOM's perspective, then an organization's current system is
defined as how things are done now, whereas health care performance is defined by an
organization's efficiency and outcome of care, and level of patient satisfaction. Quality is directly
linked to an organization's service delivery approach or underlying systems of care. To achieve a
different level of performance (i.e., results) and improve quality, an organization's current system
needs to change. While each QI program may appear different, a successful program always
incorporates the following four key principles:
1) QI work as systems and processes
2) Focus on patients
3) Focus on being part of the team
4) Focus on use of the data
Quality Improvement Work as Systems and Processes
To make improvements, an organization needs to understand its own delivery system and key
processes. The concepts behind the QI approaches in this toolkit recognize that both resources
(inputs) and activities carried out (processes) are addressed together to ensure or improve quality
of care (outputs/outcomes). A health service delivery system can be small and simple, such as an
immunization clinic, or large and complex, like a managed-care organization.
1) Activities or processes within a health care organization contain two major components: 1)
what is done (what care is provided), and 2) how it is done (when, where, and by whom care
is delivered). Improvement can be achieved by addressing either component; however, the
greatest impact for QI is when both are addressed at the same time.
2) Process mapping is a tool commonly used by an organization to better understand the health
care processes within its practice system. This tool gained popularity in engineering before
being adapted by health care. A process map provides a visual diagram of a sequence of events
that result in a particular outcome. By reviewing the steps, their sequence, who performs each
step, and how efficiently the process works, an organization can often visualize opportunities for
improvement. The process mapping tool may also be used to evaluate or redesign a current
process (see the sketch after this list).


3) Specific steps are required to deliver optimal health care services. When these steps are tied to
pertinent clinical guidelines, then optimal outcomes are achieved. These essential steps are
referred to as the critical (or clinical) pathway. The critical pathway steps can be mapped as
described above. By mapping the current critical pathway for a particular service, an
organization gains a better understanding of what and how care is provided. When an
organization compares its map to one that shows optimal care for a service that is congruent
with evidence-based guidelines (i.e., idealized critical pathway), it sees other opportunities to
provide or improve delivered care.
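A process map can be represented very simply in code. The sketch below uses invented, hypothetical step names to compare a mapped current pathway with an idealized, guideline-based critical pathway, surfacing steps that are missing or out of place; it is an illustration of the comparison idea, not a clinical recommendation.

```python
# Minimal sketch: comparing a mapped current pathway with an idealized critical
# pathway to surface improvement opportunities. Step names are invented examples.

current_pathway = [
    "Patient registration",
    "Triage",
    "Clinician assessment",
    "Empiric therapy prescribed",
    "Discharge",
]

idealized_pathway = [
    "Patient registration",
    "Triage",
    "Clinician assessment",
    "Diagnostic testing per guideline",
    "Guideline-concordant therapy prescribed",
    "Follow-up appointment scheduled",
    "Discharge",
]

missing_steps = [s for s in idealized_pathway if s not in current_pathway]
extra_steps = [s for s in current_pathway if s not in idealized_pathway]

print("Steps missing from current practice:", missing_steps)
print("Steps in current practice not in the idealized pathway:", extra_steps)
```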
Quality Improvement Planning
A QI plan is a detailed and overarching organizational work plan for a health care organization's
clinical and service quality improvement activities. It includes essential information on how your
organization will manage, deploy, and review quality throughout the organization.
Elements of a QI plan
An effective QI plan includes the following key elements:
1) Description of organizational mission, program goals, and objectives
2) Definition of key quality terms/concepts
3) Description of how QI projects are selected, managed, and monitored
4) Description of training and support for staff involved in the QI process
5) Description of quality methodology (such as PDSA, Six Sigma) and quality tools/techniques
to be utilized throughout the organization
6) Description of communication plan of planned QI activities and processes, and how updates
will be communicated to the management and staff on a regular basis


7) Description of measurement and analysis, and how it will help define future QI activities
8) Description of evaluation/quality assurance activities that will be utilized to determine the
effectiveness of the QI plan’s implementation
Focus on Being Part of the Team
At its core, QI is a team process. Under the right circumstances, a team harnesses the knowledge,
skills, experience, and perspectives of different individuals within the team to make lasting
improvements. A team approach is most effective when:
a) The process or system is complex
b) No one person in an organization knows all the dimensions of an issue
c) The process involves more than one discipline or work area
d) Solutions require creativity
e) Staff commitment and buy-in are needed
Focus on Use of the Data
Data is the cornerstone of QI. It is used to describe how well current systems are working, to show
what happens when changes are applied, and to document successful performance. Using data:
a) Separates what is thought to be happening from what is really happening
b) Establishes a baseline (Starting with low scores is okay)
c) Reduces placement of ineffective solutions
d) Allows monitoring of procedural changes to ensure that improvements are sustained
e) Indicates whether changes lead to improvements
f) Allows comparisons of performance across sites
Both quantitative and qualitative methods of data collection are helpful in QI
efforts. Quantitative methods involve the use of numbers and frequencies that result in measurable
data. This type of information is easy to analyze statistically and is familiar to science and health
care professionals. Examples in a health care setting include (see the sketch after this list):
1) Finding the average of a specific laboratory value
2) Calculating the frequencies of timely access to care
3) Calculating the percentages of patients that receive an appropriate health screening
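A hedged sketch of these quantitative calculations is shown below; the patient records are invented, and the field names (HbA1c value, depression screening, seen within 7 days) are hypothetical examples used only for illustration.

```python
# Minimal sketch of common quantitative QI calculations on invented patient records.
records = [
    {"hba1c": 7.2, "screened_for_depression": True,  "seen_within_7_days": True},
    {"hba1c": 8.9, "screened_for_depression": False, "seen_within_7_days": True},
    {"hba1c": 6.5, "screened_for_depression": True,  "seen_within_7_days": False},
    {"hba1c": 7.8, "screened_for_depression": True,  "seen_within_7_days": True},
]

average_hba1c = sum(r["hba1c"] for r in records) / len(records)
pct_screened = 100 * sum(r["screened_for_depression"] for r in records) / len(records)
pct_timely_access = 100 * sum(r["seen_within_7_days"] for r in records) / len(records)

print(f"Average HbA1c: {average_hba1c:.1f}%")
print(f"Patients screened for depression: {pct_screened:.0f}%")
print(f"Patients seen within 7 days: {pct_timely_access:.0f}%")
```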
Qualitative methods collect data with descriptive characteristics rather than numeric values that
support statistical inference. Qualitative data is observable but not measurable; it provides
important information about patterns and relationships between systems and is often used to
provide context for needed improvements. Common strategies for collecting qualitative data in a health
care setting are:
1) Patient and staff satisfaction surveys
2) Focus-group discussions
3) Independent observations
A health care organization already has considerable data from various sources, such as, clinical
records, practice management systems, satisfaction surveys, external evaluations of the
population's health, and others. Focusing on existing data in a disciplined and methodical way
allows an organization to evaluate its current system, identify opportunities for improvement, and
monitor performance improvement over time.
When an organization wants to narrow its focus on specific data for its QI program, one strategy
is to adopt standardized performance measures. Since performance measures include specific
requirements that define exactly what data is needed for each measure, they help an organization
separate the data to be collected and monitored from the other data that is available. The clinical quality
measures identified in this toolkit are examples of standardized measures that an organization,
such as a safety net provider, may consider for adoption. They are designed to measure care
processes that are common to safety net providers and are relevant to populations served. They
narrow an organization's choices of what data to collect and measure.

Developing a Theory-Informed Intervention


Even with improved understanding of the biological processes that determine health and illness,
rates of morbidity, mortality and disability still reflect people's behavioural practices. The potential
benefit of systematically improving behavioural practices such as hygiene, diet and lifestyle
provides ample motivation to develop initiatives that elicit changes in health behaviour (van
Bokhoven et al, 2003; Grimshaw et al, 2004; Rothman, 2004; ICEBeRG, 2006; Grol et al, 2008;
Davies et al, 2010). Quality improvement initiatives typically involve the implementation of
interventions designed to change clinical practice behaviour and improve the uptake of evidence
into practice (French et al, 2012). Yet, in practice, such implementation initiatives have had limited
success (Grimshaw et al, 2004), partly due to a lack of explicit rationale for the intervention choice
and the use of inappropriate methods to design the interventions (ICEBeRG, 2006; Grol et al, 2008).
The design of implementation interventions for quality improvement requires a systematic
approach with a strong rationale for design and explicit reporting of the intervention development
process (des Jarlais et al, 2004; Baker et al, 2008; Boultron et al, 2008).

One option is to use theory to inform the design of implementation interventions (Eccles et al,
2004). The UK Medical Research Council’s (MRC) guidance for developing complex
interventions informed by theory (Campbell et al, 2000, MRC, 2008; Crepaz et al, 2008) is useful
as a general approach to designing an implementation intervention. The multiple theories and
frameworks of individual and organizational behaviour change that exist tend to have conceptually
overlapping constructs (Ferlie and Shortell, 2001; Grol et al, 2007). Since only a few of these
theories have been tested in robust research in healthcare settings, there is currently no systematic
basis for determining which among the various theories available predicts behaviour or behaviour
change most precisely (Noar and Zimmerman, 2005), or which is best suited to underpin
implementation research (Grol et al, 2007; Lipke et al, 2008). Theories that have been used in
previous implementation research include PRECEDE (Predisposing, Reinforcing, and Enabling
Constructs in Educational Diagnosis and Evaluation), diffusion of innovations, information
overload, and social marketing (Davies et al, 2010). One important approach in quality
improvement is to support individual health professionals to modify their clinical behaviour in
response to evidence-based guidance (Ferlie and Shortell, 2001). The reason why it is critical to
focus on this level is that much of health care is delivered in the context of an interpersonal
relationship that arises from the encounter between a health professional and a patient. This makes
healthcare professional clinical behaviours, in themselves or in the context of interaction with
patients/clients, an important proximal determinant of the quality of care that patients receive,
especially if other factors in the context are put into consideration.

Development of implementation interventions can draw on theory, evidence, and practical issues
in the following ways. Theory can be used to understand the factors that might influence the
clinical behaviour change (individual, interpersonal or organizational) that is being targeted, to
underpin possible techniques that could be used to change clinical behaviour (Michie et al, 2005),
and to clarify how such techniques might work (Beck et al, 2002; Lane et al, 2007; Gillard et al,
2004). Evidence can inform which clinical behaviours should be changed, and which potential
behaviour change techniques and modes of delivery are likely to be effective (Michie and
Johnston, 2004; Michie and Lester, 2005; Michie et al, 2008; Forsetlund et al, 2009). Practical
issues then determine which behaviour change techniques are feasible with available resources,
and which are likely to be acceptable in the relevant setting and to the targeted health professional
group (Foy et al, 2007; McAteer et al, 2007; MacKenzie et al, 2008).

There are several steps involved in developing a theory-based intervention (French et al, 2012):
1) STEP 1: Who needs to do what, differently?
a) Identify the evidence-practice gap
b) Specify the behaviour change needed to reduce the evidence-practice gap
c) Specify the health professional group whose behaviour needs changing
2) STEP 2: Using a theoretical framework, which barriers and enablers need to be addressed?
a) From the literature, and experience of the development team, select which theory(ies),
or theoretical framework(s), are likely to inform the pathways of change
b) Use the chosen theory(ies), or framework, to identify the pathway(s) of change and the
possible barriers and enablers to that pathway
c) Use qualitative and/or quantitative methods to identify barriers and enablers to
behaviour change
3) STEP 3: Which intervention components (behaviour change techniques and mode(s) of
delivery) could overcome the modifiable barriers and enhance the enablers?
a) Use the chosen theory, or framework, to identify potential behaviour change techniques
to overcome the barriers and enhance the enablers
b) Identify evidence to inform the selection of potential behaviour change techniques and
modes of delivery
c) Identify what is likely to be feasible, locally relevant, and acceptable and combine
identified components into an acceptable intervention that can be delivered
4) STEP 4: How can behaviour change be measured and understood?
a) Identify mediators of change to investigate the proposed pathways of change
b) Select appropriate outcome measures
c) Determine feasibility of outcomes to be measured


The Process of Institutionalizing QI in Healthcare Practice


The first task of a team seeking to make quality improvement in healthcare is to understand the
‘what’, through an assessment of the care situation and development of a specific aim statement,
in collaboration with leadership, frontline staff and patients/clients in response to an observed
problem (care or quality gap). The aim statement may be subdivided into a global aim, which
denotes the long-term goals of the process under evaluation, and the specific aim, which identifies
the more narrow scope of the current team's work (Kurowski et al, 2015). The specific aim for a
project should be specific, measurable, actionable, relevant, and time-bound (Langley et al, 2009).
Thus, the specific aim statement should clearly state the process/system which will be the subject
of the work, the desired outcome, the timeline during which the team will accomplish the work,
and the magnitude of change that is expected. This necessitates identifying baseline and goal. The
current workflow is mapped by direct observation and all observations are shared among the team.
The team then identifies potential failure modes for each step in the process, using a simplified
failure modes and effects analysis (FMEA) (Kurowski et al, 2015).
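One common, though not the only, way to prioritize failure modes in an FMEA is a risk priority number, the product of severity, occurrence and detection scores. The sketch below uses invented failure modes and invented 1–10 scores purely to show the calculation and ranking.

```python
# Minimal sketch: risk priority number (RPN = severity x occurrence x detection)
# for failure modes identified in a simplified FMEA. Scores (1-10) are invented.
failure_modes = [
    {"step": "Order entry",    "failure": "Wrong dose selected",    "sev": 8, "occ": 4, "det": 5},
    {"step": "Dispensing",     "failure": "Look-alike drug mix-up", "sev": 9, "occ": 2, "det": 6},
    {"step": "Administration", "failure": "Dose given late",        "sev": 4, "occ": 7, "det": 3},
]

for fm in failure_modes:
    fm["rpn"] = fm["sev"] * fm["occ"] * fm["det"]

# Rank failure modes so the team can target the highest-risk steps first.
for fm in sorted(failure_modes, key=lambda f: f["rpn"], reverse=True):
    print(f'{fm["step"]:15s} {fm["failure"]:25s} RPN={fm["rpn"]}')
```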

The areas in care that deviate from guidelines are classified as follows:
1) Site of care delivery (such as emergency clinic instead of outpatient office care delivery)
2) Clinical data collected at the visit
3) Diagnostic testing performed at the visit
4) Empiric therapy prescribed
5) Office follow-up scheduled at an appropriate interval to ensure improvement with treatment
regimen prescribed

The team collects data on the process for 4 weeks and then constructs a Pareto chart.
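The Pareto analysis can be sketched as follows; the failure categories and their four-week counts are invented, and the code simply ranks categories and reports the cumulative percentage each contributes.

```python
# Minimal sketch of a Pareto analysis of four weeks of invented failure counts.
failure_counts = {
    "Follow-up not scheduled": 42,
    "Empiric therapy off-guideline": 27,
    "Missing clinical data at visit": 14,
    "Diagnostic test not performed": 9,
    "Care delivered in emergency clinic": 5,
}

total = sum(failure_counts.values())
cumulative = 0
print(f"{'Category':35s} {'Count':>5s} {'Cum %':>7s}")
for category, count in sorted(failure_counts.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    print(f"{category:35s} {count:5d} {100 * cumulative / total:6.1f}%")
# The 'vital few' categories reaching roughly 80% cumulative are the usual targets.
```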
When the QI team is assembled and prepared to integrate quality improvements into its
organization, the focus then becomes the actual implementation. This section describes QI
processes at a high operational level. The content is intended to provide answers for these reflection
questions, as an organization makes specific decisions about what it wants to improve and how to
actually accomplish the work:
• What are the desired improvements?
• How are changes and improvements measured?


• How is staff organized to accomplish the work?
• How can QI models be leveraged to accomplish improvements effectively and efficiently?
• How is change managed?
Identifying desired improvements
In a health care organization, team members may suggest multiple areas that need ongoing
measurement or improvement. The first task is to focus on one or more improvement areas, but it
is recommended that no more than a few be selected. The following may be considered during the
process of selecting opportunities for improvement:
1) What are the funding agency's expectations?
2) What are the regulatory or monitoring agency's requirements?
3) What are the patients' issues and concerns?
4) What are the staff's issues and concerns?
5) What are the leadership's priorities?
An organization's processes that are weighted more heavily for improvement have one or more of
the following characteristics:
1) High volume, affecting a large number of patients
2) High frequency
3) High risk, placing patients at risk for poor outcomes
4) Longstanding
5) Multiple unsuccessful attempts to resolve in the past
6) Strong and differing opinions on cause or resolution of the problem
Brainstorming is a valuable approach for generating ideas on additional opportunities for
improvement. When performed in a structured manner, in a lively roundtable session led by a
facilitator, it allows ideas to flow freely without debate or judgment. Subsequently, the ideas are
reviewed, discussed, and clarified. During this stage, ideas are considered based on their projected
time and resource requirements. Data collection efforts that may involve staff members outside
the team are also taken into account. Then the team members rank and prioritize the areas based
on organizational goals and needs, and a list of areas for improvement is identified.
For most teams, choosing improvement opportunities is an iterative process. After an organization
creates a prioritized list using the methods described, it addresses as many areas as feasible,
considering the reality of its available resources and organizational constraints.


Tools for quality improvement in healthcare

1) Cause-and-effect diagram (also called Ishikawa or fishbone chart): Identifies many possible
causes for an effect or problem and sorts ideas into useful categories.
2) Check sheet: A structured, prepared form for collecting and analyzing data; a generic tool that
can be adapted for a wide variety of purposes.
3) Control charts: Graphs used to study how a process changes over time.
4) Histogram: The most commonly used graph for showing frequency distributions, or how often
each different value in a set of data occurs.
5) Pareto chart: Shows on a bar graph which factors are more significant.
6) Scatter diagram: Graphs pairs of numerical data, one variable on each axis, to look for a
relationship.
7) Stratification: A technique that separates data gathered from a variety of sources so that
patterns can be seen (some lists replace “stratification” with “flowchart” or “run chart”).
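
As a brief illustration of the control charts listed above, the following sketch, in plain Python with illustrative weekly values, calculates the centre line and control limits for an individuals (XmR) chart from the mean and the average moving range; the constant 2.66 is the standard factor 3/d2, with d2 = 1.128 for moving ranges of two consecutive points.

# Sketch: calculate centre line and control limits for an individuals (XmR) chart.
# The weekly values below are illustrative only.
values = [12, 15, 11, 14, 18, 13, 16, 12, 15, 14, 17, 13]

mean = sum(values) / len(values)
moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

ucl = mean + 2.66 * mr_bar   # upper control limit (2.66 = 3 / d2, where d2 = 1.128)
lcl = mean - 2.66 * mr_bar   # lower control limit

print(f"centre line = {mean:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")

# Points falling outside the limits signal special cause variation.
signals = [(i, v) for i, v in enumerate(values) if v > ucl or v < lcl]
print("special cause points:", signals)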


Monitoring and Evaluation for QI


An organization may already have existing data to track improvement. It needs to monitor data
that accurately reflect how a particular system is functioning, which requires the organization to
focus on specific and well-defined data sets when monitoring QI. To know which data to use and
how to use them, it is important to understand three related concepts.
a) Performance measures in a health care setting are derived from practice guidelines. Data that
are defined as specific measurable elements provide practitioners with a meter for measuring the
quality of their care. Performance measures are designed to measure systems of care.
b) Performance measurement is a process by which an organization monitors important aspects
of its programs, systems, and processes. In this context, performance measurement includes
the operational processes used to collect data necessary for the performance measure(s).
c) Performance management is a forward-looking process used to set goals and regularly check
progress toward achieving those goals. In practice, this involves goal setting, looking at the
actual data for performance measures, and acting on results to improve the performance toward
those goals.
Used together, these three concepts form the basis for a QI data infrastructure. An organization
should choose performance measures that reflect the care system targeted for improvement, and
then set up a data collection system to document its performance. After the data is collected, then
an organization analyzes the performance data and acts on that information. The ongoing process
of collecting data, analyzing the data, introducing change based on that analysis, and again
collecting data, is referred to as the improvement cycle.
Before choosing performance measures, a QI team needs to consider parameters specific to its
organization, such as resources, constraints, and the population served. Good performance
measures are always:
a) Relevant and based on a condition that frequently occurs and/or has a great impact on the
patients at their facility
b) Measurable and can be realistically and efficiently measured with the facility's finite resources
c) Accurate and based on accepted guidelines or developed through formal group decision-
making methods
d) Feasible and can realistically be improved given the capacity of the organization's clinical
services and patient population


Once measures are identified, an organization then determines its data collection frequency and
sampling. More frequent data collection allows an organization to focus its QI efforts more
aggressively. Monthly data collection is suggested, but quarterly collection is adequate
if necessary. An organization's processes and procedures need to be established for consistent
reviews and analyses of the performance measurement data by the staff. The data is analyzed to
identify trends and progress toward an organization's goals. This type of analysis also identifies
opportunities for improvement, allowing the QI team to focus its efforts and ensure that system
changes result in improvement.
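
As a sketch of what such a data collection routine might look like in practice, assuming Python with pandas and a hypothetical visit-level data set, a monthly performance measure can be computed as a simple numerator/denominator proportion:

# Sketch: aggregate hypothetical visit-level records into a monthly performance
# measure (proportion of visits meeting the guideline), for review by the QI team.
import pandas as pd

visits = pd.DataFrame({
    "visit_date": pd.to_datetime(
        ["2019-01-08", "2019-01-21", "2019-02-03", "2019-02-19", "2019-03-05"]),
    "guideline_met": [True, False, True, True, False],   # illustrative values only
})

monthly = (visits
           .set_index("visit_date")
           .resample("M")["guideline_met"]
           .agg(["sum", "count"])
           .rename(columns={"sum": "numerator", "count": "denominator"}))
monthly["percent_met"] = 100 * monthly["numerator"] / monthly["denominator"]

print(monthly)   # one row per month: numerator, denominator, percent_met
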
Developing the Key Drivers Diagram
For success, QI initiatives need a firm grounding in theory (Davidoff et al, 2015; Kurowski et al,
2015). Davidoff et al. (2015) clarify the importance of theory in improvement work by outlining
three levels of theory (grand, big, and small). Grand theory is the most
abstract and makes generalizations that apply across many domains. Big, or mid-range, theories
bridge the gap between grand and small by outlining concepts that can be applied across
improvement projects, such as the theory of diffusion of innovations (Rogers, 2003). Small or
program theories are practical, accessible, and specific to a single improvement project or
intervention. They specify, often in the form of a logic model or key driver diagram, the
components of an improvement project (or interventions) intended to address the intervention’s
expected outcomes (or drivers) leading to the desired improvement in the process (the specific
aim) and the methods for assessing those outcomes.


Key drivers diagram

While generating the key driver diagram, the team should seek input from all stakeholders to ensure that
all essential pieces of the process are identified. One useful method for describing the components of
a key driver diagram is to use the question ‘What?’ to frame the drivers and ‘How?’ to frame the
interventions. The key driver diagram should be frequently revisited, and the program theory
revised by the team as additional information is obtained during observation of the system and
testing of interventions. This is an iterative process whereby interventions will be added or
previous interventions modified from the iterative trial-and-learning process of the model for
improvement. Once an initial list of key drivers has been agreed upon, it is time for the ‘good
ideas’ to be added to the key driver diagram (Kurowski et al, 2015). These good ideas are the
proposed interventions based on the failure mode analysis. Arrows connecting the interventions
to the appropriate key drivers can be used to denote which key driver(s) will be affected by a given
intervention. These arrows will also be updated frequently, as the results of testing an intervention
may reveal effects on a driver that had not previously been linked. The team constructs a key driver
diagram which includes the following components (Langley et al, 2009): Global aim, specific aim,
key drivers and interventions.
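
Because a key driver diagram is essentially a structured map from the aim to drivers to interventions, it can be captured and revised in a simple data structure. The following is a minimal sketch in Python; the aims, drivers and interventions shown are hypothetical.

# Sketch: represent a key driver diagram as a simple data structure so the team
# can revise drivers and interventions as testing proceeds. Content is hypothetical.
key_driver_diagram = {
    "global_aim": "Improve the value of care for children with pneumonia",
    "specific_aim": "Increase guideline-concordant first-line antibiotic use "
                    "from 60% to 90% within 6 months",
    "key_drivers": {
        "Guideline awareness among prescribers": [
            "Academic detailing at staff meetings",
            "Pocket card summarizing first-line therapy",
        ],
        "Decision support at the point of care": [
            "Order set defaulting to first-line antibiotic",
        ],
        "Timely feedback on performance": [
            "Monthly run chart shared with prescribers",
        ],
    },
}

# Each arrow in the diagram corresponds to one (driver, intervention) pair.
for driver, interventions in key_driver_diagram["key_drivers"].items():
    for intervention in interventions:
        print(f"{intervention}  -->  {driver}")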

Using Run Charts

Ideally, a small, balanced family of measures, including at least one outcome measure, should be
identified for an improvement initiative (Provost and Murray, 2011). The team must work to
understand how the data can be obtained for these measures, the accuracy of the data, and how
often the data can be collected. Once the team has collected the data, it is important to understand
the baseline performance of the system. Data for each measure is typically graphically plotted over
time using run charts or Shewhart (control) charts. The graphical nature of these charts makes them
ideal for the evaluation of frequent changes in a measure since individual data points are displayed,
allowing for maximum visualization of variation over time (Perla et al, 2011). Understanding
system variation is a critical concept when working to improve a process or outcome. Run charts
make it possible to determine if the variation in your system is secondary to changes made or to
other inherent causes of variation in the system. Common cause (normal) variation is the variation
that is inherent to the system. This variation is typically explained by unknown factors constantly
active within the system. Common cause variation is often described as the ‘noise’ in the system
and, if it is the only type of variation present, represents a ‘stable system’. A stable system may be preferred if it is
performing well; however, it may also represent a poorly performing system in which changes are
needed. Special cause variation is secondary to factors not inherent to the system. Special cause
variation may be desired or not desired depending on the historical stability and performance of
your system. It represents variation that is outside of the system’s baseline experience. When a
special cause event occurs, it is a signal that there is a new factor not typically part of the system
impacting the system’s performance. These events may represent favorable or unfavorable
changes to the system. Ideally, during active improvement, special cause events signal
improvements to the process or outcomes as a result of the team’s interventions. For run charts,
there are probability-based rules to determine special cause, and control limits are calculated for
Shewhart (control) charts as an additional method for determining special cause.
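
The following sketch, in Python with matplotlib and illustrative monthly values, plots a run chart with its median and applies one commonly used probability-based rule, a shift, defined as six or more consecutive points on the same side of the median (points falling on the median are skipped):

# Sketch: plot a run chart and flag a 'shift' (six or more consecutive points on
# one side of the median). Monthly values are illustrative only.
import statistics
import matplotlib.pyplot as plt

values = [62, 58, 65, 60, 63, 59, 70, 72, 74, 71, 76, 78]   # % guideline-concordant
median = statistics.median(values)

# Detect shifts: runs of >= 6 consecutive points above or below the median.
# Points on the median neither add to nor break a run, so they are skipped.
shifts, run = [], []
for i, v in enumerate(values):
    if v == median:
        continue
    side = 1 if v > median else -1
    if run and side == run[-1][1]:
        run.append((i, side))
    else:
        if len(run) >= 6:
            shifts.append([j for j, _ in run])
        run = [(i, side)]
if len(run) >= 6:
    shifts.append([j for j, _ in run])

plt.plot(range(1, len(values) + 1), values, marker="o")
plt.axhline(median, linestyle="--", label=f"median = {median}")
plt.xlabel("Month")
plt.ylabel("% of visits meeting guideline")
plt.title("Run chart with median (shift rule flags special cause)")
plt.legend()
plt.show()

print("shifts detected at point indices:", shifts)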

Using Plan-Do-Study-Act (PDSA) Cycles and QI tools


The third fundamental question in the Model for Improvement is answered through iterative testing
of small changes to the process referred to as plan-do-study-act (PDSA) cycles (also known as the
Shewhart or Deming cycle) (Langley et al, 2009). The PDSA cycle is a useful, four-step process
to test theory and implement change. The four stages of the PDSA cycle are as follows: (plan) the
change to be tested or implemented, (do) carry out the test of change with careful measurement,
(study) the data before and after the change and reflect on the knowledge obtained, and (act) plan
the next test. PDSA cycles are used to test an idea or theory through trial, assessing the change or
impact, and making interventions based on these small tests. Each intervention is based on theory
and should be tested on a small scale, sometimes on only one or two patients. Once the test shows
improvement, the change can be ramped up to include a larger population. There are many
benefits from starting small and growing these tests to include larger audiences. For example, when
interventions are disruptive to opinions or existing processes, small tests can help generate buy-in
from those involved in the testing to support larger-scale tests. Multiple PDSA cycles are often
linked together in a PDSA ramp, where small changes are tested and adapted on a progressively
larger scale, to get from the initial idea to a change that is ready for implementation.
Most projects will require multiple parallel PDSA ramps addressing multiple key drivers to
achieve the aim. It is important to annotate all SPC charts with PDSA cycles/ramps so that the
impact of these cycles/ramps on the process, outcome, and balancing measures can be tracked
visually over time.
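
As a sketch of how a team might document linked PDSA cycles so that each cycle can later be annotated on the corresponding charts, the following uses Python dataclasses; the cycle content shown is hypothetical.

# Sketch: a simple structured log of PDSA cycles in a ramp, so each cycle's dates
# can later be annotated on the run/control charts. Content is hypothetical.
from dataclasses import dataclass
from datetime import date

@dataclass
class PDSACycle:
    ramp: str      # which PDSA ramp (key driver) this cycle belongs to
    plan: str      # the change to be tested and the prediction
    do: str        # how the test was carried out, and on what scale
    study: str     # what the data showed compared with the prediction
    act: str       # adopt, adapt or abandon; what the next cycle will test
    start: date
    end: date

cycles = [
    PDSACycle(
        ramp="Decision support at the point of care",
        plan="Default order set to first-line antibiotic; predict fewer deviations",
        do="Tested with two clinicians over one week",
        study="Deviations fell for the two clinicians; no workflow complaints",
        act="Adapt wording of the order set and expand to the whole clinic",
        start=date(2019, 3, 4),
        end=date(2019, 3, 8),
    ),
]

for c in cycles:
    print(f"{c.start} to {c.end}  [{c.ramp}]  Act: {c.act}")
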
Using the PDSA cycles for individual problem solving

Here the change involves individual decision-making to achieve QI, and the individual decision-
making does not affect other members, processes or context. Such an individual must understand
their role in the process of QI and must be empowered to make appropriate decisions. The
necessary steps include: identifying a problem; analyzing the problem (using intuition, individual
problem solving or consultation); developing possible solutions (which may be validated through
dialogue or consultation); and testing and implementing change, namely Plan (choose a hypothesis
for solving the problem, consult), Do (test the hypothesized solution), Study (verify whether the
change occurred as planned and assess whether the change led to improvement) and Act (maintain
the change if successful; revise and modify the plan if the change was not achieved or not adequate).

Using the PDSA cycles for team problem solving

Here the change involves individuals working as teams for decision-making to achieve QI, and each
individual's decision-making influences the decision-making or performance of the other team members
(thus affecting processes or context). Such individuals must understand their role in the team
process of QI and must be empowered to make appropriate decisions that sequentially improve
performance of the whole team. The necessary steps include:

Step 1: Identify a problem that requires a solution through the mutual efforts of the whole team
(analysis of the problem identified by the whole team or team leaders), such as reducing patient waiting
times, infection rates or postoperative complications. The constitution of the QI team is critical: it
should represent all key players and should aim at achieving consensus.

Step 2: Analyze the problem (using intuition, available or new data, or consultation); develop
possible solutions (which may be validated through dialogue or consultation); and test and
implement change:

a) Plan (choose a hypothesis for solving the problem, consult; collect baseline data; identify
benchmarks, targets and indicators of success). Process description tools such as flow charts, run
charts and cause-and-effect diagrams may be used.
b) Do (test the hypothesized solution)
c) Study (verify whether the change occurred as planned; assess whether the change led to
improvement)
d) Act (maintain the change if successful; revise and modify the plan if the change was not achieved
or not adequate)

Step 3: Develop interventions

a) Identify interventions (using brainstorming, benchmarking and affinity analysis)


b) Rank interventions

Step 4: Test and implement possible interventions

a) Interventions may be re-tested, modified and adapted, initially individually and later sequentially
or together
b) Study (assess whether interventions are tested and implemented according to plan; implement
measurements against the targets and indicators of QI; verify whether the QI intervention led to
improvement or unexpected results)
c) Act (implement the intervention on a permanent basis if successful; modify and retest the
intervention as necessary)


Using the PDSA cycles for systematic team problem solving or process improvement

This approach is used for recurrent, chronic problems where there is a need to identify the root
cause of the problems. The tools used include cause-and-effect diagrams, root cause analysis, and
testing of theories (about possible causes of the problem or success of interventions). Such QI initiatives may
require a window of opportunity for improvement (implementation momentum). It is important to identify
problems that are high risk (have the most negative effects due to poor quality), high volume (occur
often or have large effect), or are problem prone (susceptibility to errors is high).

Step 1: Identify a problem that requires a solution through the mutual efforts of the whole team
(analysis of the problem identified by the whole team or team leaders), such as reducing patient waiting
times, infection rates or postoperative complications. The constitution of the QI team is critical: it
should represent all key players and should aim at achieving consensus.

Step 2: Problem analysis

a) Analyze possible causes and rank the causes using root-cause analysis
b) Analyze the context of the problem (using intuition, available or new data, or consultation)
c) Analyze the processes involved in the activities related to the problem (Who, Where, When,
How, Why). Use a flow chart, check sheets or affinity diagrams.
d) Identify possible solutions (which may be validated through dialogue or consultation) before
testing and implementing change. Ranking options such as voting (single voting, multivoting or
weighted voting) against selected criteria may be used. Expert decision-making and systems
modeling (considering inputs, processes, outputs, activities, effects and impacts) may also be
used, depending on the broadness or complexity of the problem to be addressed.
e) Plan: choose a hypothesis for solving the problem (consult). Process description tools such as
flow charts, run charts and cause-and-effect diagrams may be used.
f) Identify benchmarks, targets and indicators of success
g) Do: develop and empirically test the hypothesized solution; Study (verify whether the change
occurred as planned; assess whether the change led to improvement); Act (maintain the change if
successful; revise and modify the plan if the change was not achieved or not adequate).
h) Collect baseline data (what data, and how are they collected, analyzed and interpreted?).
i) Display and stratify data visually: use pie charts, histograms, line graphs, Pareto diagrams or
run charts to display data, stratifying by key subgroups.

Step 3: Develop interventions

a) Identify interventions (using brainstorming, creative thinking, benchmarking and affinity
analysis) to address the root causes of the problems
b) Rank interventions (cost, feasibility, freedom from negative effects, reach, management support
needed, community support needed and timeliness). Prioritization tools such as voting,
prioritization matrices, expert decision-making or systems modeling may be used (a simple
weighted prioritization matrix is sketched below).
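
To illustrate the prioritization matrices mentioned in Step 3, the following is a minimal sketch in Python of a weighted prioritization matrix; the criteria, weights and scores are hypothetical and would be agreed upon by the team.

# Sketch: a simple weighted prioritization matrix for ranking candidate
# interventions. Criteria, weights and scores (1-5) are hypothetical.
criteria = {"cost": 0.25, "feasibility": 0.25, "reach": 0.20,
            "free_of_negative_effects": 0.15, "timeliness": 0.15}

candidates = {
    "Standing order set":        {"cost": 4, "feasibility": 5, "reach": 4,
                                  "free_of_negative_effects": 4, "timeliness": 5},
    "Monthly chart audit":       {"cost": 3, "feasibility": 4, "reach": 3,
                                  "free_of_negative_effects": 5, "timeliness": 3},
    "New staff training module": {"cost": 2, "feasibility": 3, "reach": 5,
                                  "free_of_negative_effects": 4, "timeliness": 2},
}

scores = {name: sum(weight * ratings[criterion]
                    for criterion, weight in criteria.items())
          for name, ratings in candidates.items()}

# Print candidates from highest to lowest weighted score.
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:.2f}  {name}")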

Step 4: Test and implement possible interventions

a) Identify the prerequisites for implementation (what is needed, what must be in place, when,
and why) before implementing QI interventions
b) Develop the implementation plan (the step-by-step process by which the intervention is
implemented). Interventions may be re-tested, modified and adapted, initially individually and
later sequentially or together. Using a Gantt chart is critical to visually display the order of
activities.
c) Study (assess whether interventions are tested and implemented according to plan; implement
measurements against the targets and indicators of QI; verify whether the QI intervention led to
improvement or unexpected results). Identify what did not go as planned or went wrong.
d) Act (implement the intervention on a permanent basis if successful; modify and retest the
intervention as necessary). Assign responsibility (who does what, when, how and why).
e) Identify and address resistance to change
f) Develop a prevention plan to address the potential negative effects of the implementation. SWOT
analysis, SPOT analysis or force field analysis may be used.
g) If successful, develop a sustainability plan (dissemination plan, scale-up plan, integration
plan, opportunities to standardize the interventions)

Using Measurements to Tell that a Change is an Improvement


The second fundamental question in the Model for Improvement is answered with the observation
of data over time. The QI team should identify appropriate measures to track, with operational
definitions of these measures that are clear and project-specific. There are three different types of
measures which are often discussed in quality improvement work.
1) Outcome measure indicates the performance of the system under study and relates directly to
the specific aim. This measure is often directly related to a patient or patient-care-related
outcome. The team decides how to measure the value of care but believes that, for example,
decreasing delays in antibiotic initiation, minimizing the number of changed prescriptions
based on lack of insurance coverage, and minimizing the number of unplanned office and/or
ED visits for the same illness maximizes the value of the care they deliver.
2) Process measure indicates if a key step in the process change has been accomplished. The QI
team identifies, say, use of appropriate first-line antibiotics for pneumonia, as directed by the
most recent evidence-based guideline as a process measure. Given the difficulty in directly
measuring the value of care, the team decides to start primarily tracking their process measure
over time to assess the impact of their interventions.
3) Balancing measure indicates performance of related processes/outcomes to ensure that those
measures are being maintained or improved, and also allows the QI team to monitor for
unintended consequences of their process improvement work. Common examples in health
care include adverse patient outcomes such as hospital readmissions or treatment failures.
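
As a sketch of how operational definitions for a small, balanced family of measures might be recorded, the following Python dictionary uses hypothetical definitions loosely based on the pneumonia example above:

# Sketch: operational definitions for a small, balanced family of measures.
# Definitions are hypothetical and would be agreed upon by the QI team.
measures = {
    "outcome": {
        "name": "Unplanned revisits for the same illness",
        "numerator": "Unplanned office/ED visits within 14 days for the same illness",
        "denominator": "All patients treated for community-acquired pneumonia",
        "direction": "lower is better",
    },
    "process": {
        "name": "Guideline-concordant first-line antibiotic",
        "numerator": "Visits where the prescribed antibiotic matches the guideline",
        "denominator": "All visits with a new pneumonia diagnosis",
        "direction": "higher is better",
    },
    "balancing": {
        "name": "Treatment failures",
        "numerator": "Patients requiring escalation of therapy or admission",
        "denominator": "All patients started on first-line therapy",
        "direction": "lower is better",
    },
}

for kind, m in measures.items():
    print(f"{kind:9s} {m['name']}: {m['numerator']} / {m['denominator']} ({m['direction']})")
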
Sustainability and Sustainability Plans
Once the changes have been adapted to the point where the team identifies that they are ready for
adoption, the team's focus shifts to implementing the change in everyday practice. This
includes revising the process map to accurately depict the new process, revising any job or process
descriptions to match the new process, and planning to train new members of the practice group
on the new process.
Sustainability plan
A sustainability plan includes the following:
1) Deciding which performance measures will continue to be monitored using run charts
2) Developing a systematic (and ideally automated) process for obtaining and integrating the
data that comprise the measures
3) Determining who will be responsible for evaluating the performance measure on an ongoing
basis (i.e., the process owner)
4) Establishing measure parameters to guide the process owner’s decisions about when to
address deterioration in performance
5) Articulating the process owner’s role in addressing performance deterioration (e.g., power to
reconvene the team and launch new series of explorations to understand why and how the
process is failing)
Recommendations for a sustainable QI effort
The following are five recommendations, based on project experience, that represent approaches local
quality leaders should consider as they work to develop an effective QI program.
1) Make your QI efforts about significant sustainable quality
Successful projects are those that people believe in and want to see become successful. Far too
often, the people affected by a QI project (if not even the actual QI team) are told they must
change in order to meet some internal or external requirement. This is a setting where change
resistance may be maximized and chances of project success minimized. These situations will
often be marked by initial improvements followed by rapid degradation of those improvements
after project completion. This type of effect could explain an overall system-wide response on a
discharges-before-noon outcome. If enough QI teams treated the goal as
something they had to do to satisfy a request from central office, the teams would have had
sufficient buy-in to achieve the initial improvements to report, but once no one was monitoring and
reporting rates of discharge before noon, providers returned to their original discharge process.
The key here is to encourage QI teams to identify early on, and properly communicate, a project
value (ideally for all of the stakeholders) that goes beyond simply meeting arbitrary
requirements. If this is done properly, the larger healthcare community should have the necessary
motivation to improve and sustain those improvements.
2) Aim for real change, not just cosmetic change
While effective QI will include education, an effective QI team must work to understand the
process and what about that process allows poor quality to occur. Then the team can identify
ways to change the process that will eliminate sources of poor quality. Education can then
focus on helping providers understand the new process and the benefits of that process. In
contrast, education that relies solely on encouraging providers to perform better, which they
will strive to do, is unlikely to sufficiently support efforts and will not lead to lasting
improvements in quality. Another important consideration is that the QI team should make
sure there is a definable and consistent process relevant to the outcome of interest. Quite often
the issue in healthcare is that there are few standardized processes, making it difficult to broadly
implement changes when everyone performs differently. This potential issue was the
motivating factor for developing the FIX classification approach that separated facilities with
high variability (No change) from those that had low variability but did not improve (No
benefit).
3) Empower and excite.
Change is most lasting when those who provide frontline care are involved and truly excited
about the QI project. The data in this study indicated that staff were critical to supporting QI;
the real question was how to most efficiently utilize staff in achieving goals. While it is
critically important that those who formulate the strategic plan for an organization make it clear
that they value and support QI, there is only so much that management in many health care
systems can do to effect change. Instead, it must be the frontline leaders who recognize a
quality problem, communicate the need for change, and motivate those around them to
overcome the challenge. Additionally, it is these people who understand how a process truly
occurs and can best identify the waste or potential sources of error. Only when there is true
energy at the front lines for supporting and making a change is it possible to achieve long-term
quality.
4) Measure and evaluate.
Measures of data collection were frequently used to separate different performance categories in
the decision tree models. In short, it is impossible to improve quality if there is no clear
understanding about the current state of performance. Similarly, sustaining performance requires
monitoring performance and being prepared to respond should new sources of error emerge. This
process has its own challenges as hospitals must carefully identify how frequently to collect and
report data, as well as how to ensure that data are reported in a format that local quality leaders
can interpret and use to develop plans of action.
5) Dream big, start small
All QI approaches include some level of focus on continuous improvement and monitoring. The
continuous improvement process serves many critical purposes, but perhaps most importantly
recognizes that most processes are subject to multiple sources of waste or error. This means that
QI teams need the ability to systematically and sequentially tackle different issues rather than
feeling like a successful project must tackle all problems with a single intervention. In addition to
keeping the team from tackling too large a project, this approach helps teams meet individual
goals, which can be an excellent way to maintain interest and excitement about the project.

References

Ashford AJ: Behavioural change in professional practice: supporting the development of effective
implementation strategies. Newcastle upon Tyne: Centre for Health Services Research, Report No
88 1998.
Baker EA, Brennan Ramirez LK, Claus JM, Land G: Translating and disseminating research- and
practice-based criteria to support evidence-based intervention planning. J Public Health Manag
Pract 2008, 14(2): 124–130.
Beck RS, Daughtridge R, Sloane PD: Physician-patient communication in the primary care office:
a systematic review. J Am Board Fam Pract 2002, 15(1):25–38.
Boutron I, Moher D, Altman DG, Schulz KF, Ravaud P: Extending the CONSORT statement to
randomized trials of nonpharmacologic treatment: explanation and elaboration. Ann Intern Med
2008, 148(4): 295–309
Campbell M, Fitzpatrick R, Haines A, et al: Framework for design and evaluation of complex
interventions to improve health. BMJ 2000, 321(7262):694–696.
Chun J, Bafford AC. History and background of quality measurement. Clin Colon Rectal Surg.
2014; 27(1):5–9.
Craig P, Dieppe P, Macintyre S, et al: Developing and evaluating complex interventions: the new
Medical Research Council guidance. BMJ 2008, 337:a1655.

Davidoff F, Dixon-Woods M, Leviton L, Michie S. Demystifying theory and its use in


improvement. BMJ Qual Saf. 2015;24 (3):228–38
Davidson KW, Goldstein M, Kaplan RM, et al: Evidence-based behavioral medicine: what is it
and how do we achieve it? Behav Med 2003, 26 (3):161–171

Davies P, Walker AE, Grimshaw JM: A systematic review of the use of theory in the design of
guideline dissemination and implementation strategies and interpretation of the results of rigorous
evaluations. Implement Sci 2010, 5:14.


Des Jarlais DC, Lyles C, Crepaz N: Improving the reporting quality of nonrandomized evaluations
of behavioral and public health interventions: the TREND statement. Am J Public Health 2004,
94(3): 361–366.
Eccles M, Grimshaw J, Walker A, Johnston M, Pitts N: Changing the behavior of healthcare
professionals: the use of theory in promoting the uptake of research findings. J Clin Epidemiol
2005, 58(2):107–112.
Ferlie EB, Shortell SM: Improving the quality of health care in the United Kingdom and the United
States: a framework for change. Milbank Q 2001, 79(2):281–315.
Forsetlund L, Bjorndal A, Rashidian A, et al. Continuing education meetings and workshops:
effects on professional practice and health care outcomes. Cochrane Database Syst Rev 2009, Issue
2:Art. No.: CD003030.
Foy R, Francis JJ, Johnston M, et al. The development of a theory-based intervention to promote
appropriate disclosure of a diagnosis of dementia. BMC Health Serv Res 2007, 7:207.
French SD, Green SE, O’Connor DA, et al. Developing theory-informed behaviour change
interventions to implement evidence into practice: a systematic approach using the Theoretical
Domains Framework. Implementation Science 2012, 7:38

Griffin SJ, Kinmonth AL, Veltman MW, et al. Effect on health-related outcomes of interventions
to alter the interaction between patients and practitioners: a systematic review of trials. Ann Fam
Med 2004, 2(6):595–608
Grimshaw JM, Thomas RE, MacLennan G, et al: Effectiveness and efficiency of guideline
dissemination and implementation strategies. Health Technol Assess 2004, 8(6):1–84.
Grol R, Berwick DM, Wensing M: On the trail of quality and safety in health care. BMJ 2008,
336(7635):74–76.
Grol RP, Bosch MC, Hulscher ME, et al: Planning and studying improvement in patient care: the
use of theoretical perspectives. Milbank Q 2007, 85(1):93–138.
Hrisos S, Eccles M, Johnston M, et al. Developing the content of two behavioural interventions:
using theory-based interventions to promote GP management of upper respiratory tract infection
without prescribing antibiotics #1. BMC Health Serv Res 2008, 8:11.
ICEBeRG: Designing theoretically-informed implementation interventions. Implement Sci 2006,
1:4.


Kohn LT, Corrigan JM, Donaldson MS. To Err is Human: Building a Safer Health System.
Washington: National Academies Press; 2000
Kurowski EM, Schondelmeyer AC, Brown, C, et al. A practical guide to conducting quality
improvement in the healthcare setting. Curr Treat Options Peds 2015; 1:380–392
Lane C, Rollnick S: The use of simulated patients and role-play in communication skills training:
a review of the literature to August 2005. Patient Educ Couns 2007, 67(1–2):13–20.
Langley GJ, Moen RD, Nolan KM, Nolan TW, et al. The improvement guide: a practical approach to
enhancing organizational performance. 2nd ed. San Francisco: Jossey-Bass; 2009.
Lippke S, Ziegelmann JP: Theory-based health behavior change: developing, testing, and applying
theories for evidence-based interventions. Appl Psychol 2008, 57(4):698–716.
McAteer J, Stone C, Fuller R, Slade R, Michie S: Translating self-regulation theory into a hand-
hygiene behaviour intervention for UK healthcare workers. Health Psychology Review 2007,
1(Supplement 1):302
McKenzie JE, French SD, O’Connor DA, et al: IMPLEmenting a clinical practice guideline for
acute low back pain evidence-based manageMENT in general practice (IMPLEMENT): Cluster
randomised controlled trial study protocol. Implement Sci 2008, 3:11.
Medical Research Council: A framework for development and evaluation of RCTs for complex
interventions to improve health. London: MRC; 2000.
Medical Research Council: Developing and evaluating complex interventions: new guidance.
London: MRC; 2008.
Michie S, Johnston M, Abraham C, et al, on behalf of the "Psychological Theory" Group: Making
psychological theory useful for implementing evidence based practice: a consensus approach.
Quality Safety in Health Care 2005, 14(1):26–33.
Michie S, Johnston M, Francis J, et al: From theory to intervention: mapping theoretically derived
behavioural determinants to behaviour change techniques. Appl Psychol 2008, 57(4):660–680.
Michie S, Johnston M: Changing clinical behaviour by making guidelines specific. BMJ 2004,
328(7435):343–345.
Michie S, Lester K: Words matter: increasing the implementation of clinical guidelines. Qual Saf
Health Care 2005, 14(5):367–370.
Noar SM, Zimmerman RS: Health Behavior Theory and cumulative knowledge regarding health
behaviors: are we moving in the right direction?. Health Educ Res 2005, 20(3):275–290.


Perla RJ, Provost LP, Murray SK. The run chart: a simple analytical tool for learning from
variation in healthcare processes. BMJ Qual Saf. 2011;20(1):46–51
Provost LP, Murray S. The health care data guide: learning from data for improvement. John
Wiley & Sons; 2011
Rogers E. Diffusion of Innovations. 5th ed. New York: Simon & Schuster; 2003.
Rothman JA. "Is there nothing more practical than a good theory?": Why innovations and advances
in health behavior change will arise if interventions are used to test and refine theory. International
Journal of Behavioral Nutrition and Physical Activity 2004, 1:11
Ryan TP. Statistical methods for quality improvement. John Wiley & Sons; 2011.
Strome TL. Healthcare analytics for quality and performance improvement. John Wiley & Sons;
2013.
van Bokhoven MA, Kok G, van der Weijden T: Designing a quality improvement intervention: a
systematic approach. Qual Saf Health Care 2003, 12(3):215–220.


CHAPTER 4: RESEARCH IN QUALITY IMPROVEMENT

Why There Is Need for Research in Quality Improvement

The World Health Organization (WHO) Research Priority Setting Working Group states that
‘‘understanding the magnitude of the problem and the main contributing factors that lead to patient
harm is essential to devise effective and efficient solutions for different contexts and
environments and to build safer health systems.’’ Many health systems and facilities engage
regularly in activities referred to as quality assurance, quality improvement, performance
improvement or audit in order to monitor and improve the care they provide to patients. (Although
quality assurance and audit are terms used to refer to practices that review how care is being
delivered and compare it with a set of explicit criteria to determine how it can be improved, quality
improvement is a term that is meant to encompass both prospective and retrospective activities that
aim to improve care by determining why preventable harms or systematic inefficiencies occur and
by designing techniques to address them.)

In addition, growing awareness of these concerns and possible strategies to address them has led
to a significant increase in the amount of research being conducted related to patient safety. Such
research is designed to document the extent, nature, and possible determinants of patient safety
incidents and to understand the effectiveness of interventions designed to prevent or reduce them.
Patient safety research is often included under the broader category of quality improvement, and
more generally of health services research. In fact, methods used to conduct patient safety research
are similar to those used in these other broader quality improvement and research activities,
including, for example, retrospective review of medical records, prospective observational data
collection, and randomized controlled trials. Patient safety research ideally results in interventions
and strategies that can be implemented in health care settings as a means of safety improvement
actions. International ethical guidelines for research require third party oversight of research by an
ethics review committee (REC) and also outline both principles and actions that should be
implemented as part of the ethical conduct of human research. As patient safety research and health
services research have become more widespread, ethics literature related to quality improvement
and patient safety activities and research has grown tremendously.


Developing a Proposal or Using Existing Data


A proposal for a QI or clinical audit project should reflect the following:
a) The explicit intention not only to improve the quality or safety of patient care but also to
generate generalizable knowledge.
b) Appropriate data (quantitative or qualitative measures) of quality that are scientifically valid
and likely to produce reliable data
c) Data collection and analysis methods that are as rigorous as those that are used in research and
undertaken to the highest professional standard
d) Supervision and or conduct by someone who has been trained to carry out QI or clinical audit
projects
e) The implementing team have access to advice on the design and conduct of the project
Topics which need rigorous scrutiny for ethical or scientific validity due to likely ethical
implications
Clinical audits or QI projects that are on topics that themselves have ethical implications must have
well-defined approved standards or policies as the basis for the project (Barton, 1997; Lowe, 1997;
Kinn, 1997; Layer, 2005). Quality-of-care measures should be considered carefully to ensure
consistency with approved standards or policies, but more so for topics or areas that in themselves
have ethical implications. Examples of such topics could include:
1) End-of-life care
2) Critical care and management of emergencies
3) Do-not-resuscitate decisions
4) Any quality improvements for the care of minors or vulnerable populations
5) Healthcare-related decision-making for patients who lack mental capacity
6) Care of women who experience a miscarriage or stillbirth
The study design
A project that is poorly designed is a waste of time and is unlikely to result in improvements in the
quality of patient care. Thus, a project that does not use scientifically valid methods or is unlikely
to provide scientifically credible evidence should not be carried out. The QI project
should employ scientifically valid methods in the collection, handling, analysis, interpretation and
reporting of data. There should also be plans to address any negative outcomes that may arise
from participation. For instance, if a QI or clinical audit project unexpectedly reveals that a patient
has experienced a serious incident that has had or could have an important effect on their health or
quality of life, the organization has an obligation to ensure that the incident is disclosed to the
patient, and measures are established to prevent similar occurrences. In addition, the organization
has an obligation to ensure that further measurement of actual practice is carried out to verify that
the system or process involved has been improved and that the situation is unlikely to recur.
Campbell et al (2000) advise that the design of QI for complex interventions should follow a
sequential approach involving four steps:
1) Development of the theoretical basis for an intervention;
2) Definition of components of the intervention (using modelling, simulation techniques or
qualitative methods);
3) Exploratory studies to develop further the intervention and plan a definitive evaluative study
(using a variety of methods);
4) Definitive evaluative study (using quantitative evaluative methods, predominantly randomized
designs).
This framework demonstrates the interrelation between quantitative evaluative methods and
other methods; it also makes explicit that the design and conduct of quantitative evaluative
studies should build upon the findings of other quality improvement research.
Check on effectiveness of actions implemented
QI and clinical audit projects aim to improve or maintain the quality or safety of patient care.
However, there is a risk that the proposed changes taken to achieve improvements will be
ineffective or even possibly harmful. Therefore, changes in patient care or service delivery need
to be risk assessed to pre-empt what could go wrong during the implementation of a change and
to identify what to do if it does (Nelson, 2004; Cave and Nichols, 2007; Davidoff, 2007). QI or
clinical audit projects that do not achieve needed changes related to patient safety or provision of
patient care may fail to meet the ethical responsibilities of healthcare professionals or organizations
to improve quality. If a project indicates that effective practice is not now being provided to
patients, it would be unethical to continue to provide substandard care and to withhold
improvements in practice from patients. On the other hand, lessons learned about the clinical
impact and outcomes of successful projects that have achieved substantial improvements should
be disseminated within the organization in order to promote organizational learning and spread the
implementation of improvements.


Research Proposals on Quality Improvement in Healthcare Systems


The contents of proposals for research on quality improvement in healthcare systems are
adapted from the SQUIRE guidelines, which were developed to provide a framework for reporting
new knowledge about how to improve healthcare. The guidelines are intended for reports that
describe system-level work to improve the quality, safety, and value of healthcare, and that used
methods to establish that observed outcomes were due to the intervention(s). The SQUIRE
guidelines acknowledge that there are several approaches for improving healthcare, and the
guidelines can be adapted to inform any reports on healthcare improvement. Guidelines for quality
improvement reports (QIR) were first proposed in 1999 by the editors of Quality in Health Care
who recommended them as ‘‘a means of disseminating good practice’’ so that practitioners may have
the ‘‘opportunity to learn from each other as the science of audit and quality improvement
matures’’ (Moss and Thompson, 1999). Since then, while QIR guidelines have been used as the
structure for brief reports of quality improvement work (Thomson and Moss, 2008), there has been
a critical need for more comprehensive guidelines for larger and more complex quality
improvement studies (Ogrinc et al, 2008). With advancement in the theory and practice of quality
improvement in health care, there is increasing need to harmonize not only the reporting of
research findings, but also the protocols on which such findings are based. The guidelines for
development of quality improvement proposals have been adapted from the Standards for QUality
Improvement Reporting Excellence (SQUIRE statement) (Ogrinc et al, 2008). This is a checklist
of 19 items that authors need to consider when writing articles that describe formal studies of
quality improvement.

Quality improvement (QI) refers basically to a process of change in human behaviour that is driven
largely by experiential learning. Thus, the development and adoption of quality improvement
interventions depend largely on changes in social policy, programmes or practices within a specific
context or environment of healthcare delivery. To understand a quality improvement intervention
clearly, readers need to understand how the intervention relates to general knowledge of the care
problem that necessitates improvement. This requires the authors to place their work within the
context of issues that are known to impact the quality of care. Context means ‘‘to weave together’’.
The context thus refers to the interweaving of the issues that stimulated the improvement idea and
several spatial, social, temporal and cultural factors within the local setting, all of which form the
“canvas upon which improvement is painted” (Ogrinc et al, 2008). The explanation of context
should go beyond a description of the physical setting to include the organization (types of
patients served, staff providing care, and care processes before introducing the intervention), the
governance structure, the health information systems, and the logistical framework, so as to enable
reviewers and readers to determine whether findings from the study are likely to be transferable
(that is, whether readers are able to relate them to their own care setting). In studies
with multiple sites, a table or matrix can be a convenient way to summarize similarities and differences
in context across sites. The table can specify the structures, processes, people and patterns of care
that are unique to each site and assist the reader in interpreting results.

Evaluation of the context is critical to rigor in QI research

Whereas controlled trials attempt to control the context to avoid selection bias, quality
improvement studies often seek to describe and understand the context in which the delivery of
care occurs. Pawson et al (2005) propose using a form of inquiry known as ‘‘realist evaluation’’
to explore complex, multi-component programmes that are designed to change performance. The
relevant questions in realist evaluation are: ‘‘What is it about this kind of intervention that works,
for whom, in what circumstances, in what respects and why?’’ Answering these questions within a
quality improvement report requires a thoughtful and thorough description of the background
circumstances into which the change was introduced. The description of the background
knowledge and local context of care need to be detailed. Placing information into the exact
category is less important than ensuring the background knowledge, local context and local
problem are fully described. However, evidence-based clinical practice demands that researchers
provide robust evidence on whether, how and why quality improvement interventions
work (Davidoff and Batalden, 2005; Ogrinc et al, 2008).

Need to maintain methodological rigor

Therefore, proposals on quality improvement should seek to maintain the scientific and
methodological rigor that is necessary to generate generalizable evidence through a systematic
process of scientific inquiry that also follows acceptable ethical standards. The SQUIRE guidelines
are not exclusive of other guidelines. For instance, a quality improvement project or effectiveness
study that proposes to use a randomized controlled trial design should seriously consider using the
SPIRIT and CONSORT guidelines as well as the SQUIRE guidelines. Likewise, an improvement
project that uses extensive observational or qualitative techniques should consider the STROBE
guidelines and the SRQR guidelines as well as the SQUIRE guidelines. In case there is validation of
a prediction model or estimating the diagnostic accuracy of a new intervention or investigation,
then researchers should plan to follow both the TRIPOD and STARD guidelines.

Components of the research proposal for QI interventions

The title: Indicate that the proposal concerns and focuses on initiatives to improve healthcare
(broadly defined as including any of the following: quality, safety, effectiveness, patient-
centeredness, timeliness, cost, efficiency, and equity of healthcare). These quality
parameters need to be explicit. The title may also indicate the aim of the intervention, the type of
setting, the approach to quality improvement, or the expected/intended outcomes.

The Introduction and literature review sections

The background

The background should provide a brief, non-selective summary of current knowledge of the care
problem being addressed, the gap in knowledge, the characteristics of organizations in which it
occurs, and the nature of possible interventions to improve quality. The literature review on quality
improvement and patient safety should highlight papers that are primarily theoretical and some
that report large-scale studies about improving quality. Including the operational definition of the
terms ‘‘quality’’, ‘‘quality improvement’’ or ‘‘patient safety’’ is important for readers to identify
the content and context of the quality improvement. Current MeSH headings include healthcare
quality, access and evaluation; quality assurance, health care; quality control; quality indicators,
health care; quality of health care; and total quality management. In case these are used, there
should be operational definitions.

1) The introduction: The introduction to a quality improvement paper should explicitly describe
the existing quality gap in relation to acceptable definitions of quality or patient safety. To be
as specific as possible, authors should describe the known standard or achievable best practice,
provide evidence that the local practice is not meeting that standard, highlight the consequences
of this deficit, highlight the need for the proposed approach to quality improvement, and
emphasize the social value to be gained from improving the practice, service environment or
patient/client safety.
2) The research problem: A clear description of the problem, and why it is considered relevant
and amenable for quality improvement initiatives. From this problem, a clear purpose,
hypothesis or research question should be developed. There should be a summary of what is
currently known about the problem, including relevant previous studies, highlighting the
design, outcome measures, results and limitations.
3) The theory, theoretical model or conceptual model: Quality improvement is basically a change
in behavior. The quality improvement study should be clear about the expected “change” that
leads to improvement, for instance, personal changes, changes in interpersonal interaction,
organizational change, or system-wide change (as in change in multiple factors and parameters
of a health system). Therefore, informal or formal frameworks, models, conceptual models,
and/or theories should be used to explain the problem and how the proposed intervention is
expected to work. The intervention, as well as any reasons or assumptions that were used to
develop the intervention(s), and reasons why the intervention(s) was expected to work, should
be explicit, in line with the theory or conceptual model for the improvement or ‘change’. In
case the model is borrowed from another discipline and is to be adapted to the study, the
reasons for choosing the model, the strength of the model or theory, the intended modifications
of the model and plans to assess model or theory fit have to be explicit.
4) The Context: Contextual elements considered important at the outset of introducing
the intervention(s). The introduction should describe the nature and severity of the specific
local problem or system dysfunction to be addressed.
5) Aims or objectives: For aims or objectives, the proposal should describe the specific aim
(changes/improvements in care processes and patient outcomes) of the proposed intervention.
It should also specify who (champions, supporters) and what (events, observations) triggered
the decision to make changes, and why now (timing).
Methods section

a) Description of the intervention(s) in sufficient detail that others could reproduce it

b) Specifics of the team involved in the work


The intervention: Approach chosen for assessing the impact of the intervention(s), and approach
used to establish whether the observed outcomes were due to the intervention(s)

Study measures

a) The description of the methods of evaluation outlines what the study shall use to quantify
improvement, why the measures were chosen and how the investigators shall obtain the data.
Measures chosen for studying processes and outcomes of the intervention(s), including
rationale for choosing them, their operational definitions, and their validity and reliability

b) Description of the approach to the ongoing assessment of contextual elements that contributed
to the success, failure, efficiency, and cost

c) Methods employed for assessing completeness and accuracy of data

Data analysis

a) The analysis plan is intimately related to the study design. The analysis plan for quality
improvement data should show that the quality improvement initiatives or strategy resulted
in change (which is often multi-faceted) and led to measurable differences in the process,
outcome or impact measures.

b) Qualitative and quantitative methods may be used to draw inferences from the data

c) The data analysis should include methods for understanding variation within the data between
or within participants, including the effects of time as a variable

Outcomes: Nature of setting and improvement intervention

1) Identify relevant elements of setting or settings (for example, geography, physical resources,
organizational culture, history of change efforts), and structures and patterns of care (for
example, staffing, leadership) that provided context for the intervention

2) Explain the actual course of the intervention (for example, sequence of steps, events or phases;
type and number of participants at key points) preferably using a time-line diagram or flow
chart

3) Document the degree of success in implementing intervention components

4) Describe how and why the initial plan evolved, and the most important lessons learned from
that evolution, particularly the effects of internal feedback from tests of change (reflexiveness)

Outcomes: Changes in processes of care and outcomes

1) Present data on changes observed in the care delivery process

2) Present data on changes observed in measures of patient outcome (for example, morbidity, mortality,
function, patient/staff satisfaction, service utilisation, cost, care disparities)

3) Present evidence regarding the strength of association between observed changes/improvements and
intervention components/context factors

4) Include a summary of missing data for intervention and outcomes

Ethical considerations

The ethical principles of autonomy (do not deprive freedom), beneficence (act to benefit the
patient, avoiding self-interest), non-maleficence (do not harm), justice (fairness and equitable care)
and do your duty (adhering to one’s professional and organizational responsibilities) underpin the
delivery of health care and quality improvement efforts. The same principles should underpin the
planning, implementation, and publishing of quality improvement research. The research proposal
should describe ethical aspects of implementing and studying the improvement, such as privacy
concerns, protection of participants’ physical wellbeing, potential harms or risks to participants,
author conflicts of interest, formal ethical approvals and permissions, including data storage, data
sharing and material transfer.

Data interpretation

a) Once the measures have been chosen, the investigator needs to develop operational data
definitions and collection forms, and determine how the data will be collected. The methods of data
collection and data quality management should be described concisely so that others may
replicate the project. Initial steps of the intervention(s) and their evolution over time (such as
time-line diagram, flow chart, or table), including modifications made to the intervention
during the project

b) Details of the process measures and outcome

c) Contextual elements that interacted with the intervention(s)

d) Observed associations between outcomes, interventions, and relevant contextual elements

e) Unintended consequences such as unexpected benefits, problems, failures, or costs associated


with the intervention(s).

f) Details about missing data

g) Nature of the association between the intervention(s) and the outcomes

h) Comparison of results with findings from other publications

i) Impact of the project on people and systems

j) Potential reasons for any differences between observed and anticipated outcomes, including
the influence of context

k) Costs and strategic trade-offs, including opportunity cost
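
To make item a) of the list above concrete, the sketch below shows one way to log modifications to an intervention so that they can be reported as a time-line table. The intervention, dates and reasons are hypothetical.

```python
# A minimal, hypothetical sketch of recording the evolution of an intervention
# over time, suitable for turning into a time-line table in the report.
from dataclasses import dataclass
from datetime import date

@dataclass
class Modification:
    when: date          # date the change to the intervention was made
    component: str      # which intervention component was modified
    change: str         # what was changed
    reason: str         # why (for example, feedback from a test of change)

# Hypothetical log for an illustrative hand-hygiene intervention
log = [
    Modification(date(2019, 1, 15), "Reminder posters", "Moved posters to ward entrances",
                 "Staff reported not noticing them"),
    Modification(date(2019, 3, 2), "Audit feedback", "Switched from monthly to weekly feedback",
                 "Monthly data arrived too late to act on"),
]

# Print a simple time-line table that could accompany the write-up
print(f"{'Date':<12}{'Component':<20}{'Change':<45}Reason")
for m in log:
    print(f"{m.when.isoformat():<12}{m.component:<20}{m.change:<45}{m.reason}")
```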

Limitations

a) Limits to the generalizability of the work

b) Factors that might have limited internal validity such as confounding, bias, or imprecision in
the design, methods, measurement, or analysis

c) Efforts made to minimize and adjust for limitations

Likely utility of the findings


a) Usefulness of the work
b) Sustainability
c) Potential for spread to other contexts
d) Implications for practice and for further study in the field
Including Plans to Manage Ethical Issues in Quality Improvement Proposals
1) All healthcare professionals participate
All healthcare professionals have a responsibility to provide the best possible patient care. This professional responsibility could be interpreted to mean that not being involved in QI or clinical audit could be a breach of a professional code of conduct. Each healthcare professional's duty to prevent harm to patients through his or her acts or omissions extends to a duty to participate in QI or clinical audit projects.

2) All clinical services involved
All clinical directorates and services should have an active QI and clinical audit programme that has the overall aim of achieving improvements in the quality or safety of patient care.
3) A systematic approach to setting priorities
Setting priorities for QI or clinical audit projects can be influenced by a number of factors, such as commissioner and regulatory requirements and expectations, resources available to support the work, pressure from patient groups, or the perceived ease or difficulty attached to carrying out work on a particular subject. For example, in some organizations, there is a perception that topics for clinical audits have tended to focus on satisfying external pressures rather than on the integrity of clinical services' self-measurement and self-regulation. An ethical approach to QI and clinical audit would include a system for setting priorities for projects based on a risk-benefit analysis of disease burden and patient need.
4) Designate leadership and individual responsibility
Most healthcare organizations have appointed leads for clinical audit who are responsible for leading and overseeing clinical audits in their services or directorates. It is less clear, however, if
leaders for QI work are designated in clinical services and directorates, and if such individuals
have training in leading staff to carry out QI projects and to oversee their effectiveness. An
individual or a team undertaking a QI or clinical audit project should inform a designated QI or
clinical audit lead or an appropriate clinical supervisor or manager that the project is being
undertaken and seek approval or authorization for the project. Individual members of staff may
not recognize when a project includes an ethics-related issue.
5) Assess organizational oversight structure
Many healthcare organizations have a Clinical Audit or Clinical Effectiveness Committee that oversees the conduct of local and national clinical audits. However, such a committee may not include the oversight of QI projects in its terms of reference. Oversight mechanisms could include any or all of the options listed under 'Actions to consider in case of uncertainty' below.
6) No discrimination or segregation
The ethical principle of justice and fairness suggests that no patient group should be excluded from
the possibility of inclusion in a QI or clinical audit project, if they are likely to benefit. Likewise,
no groups should be included if they are more likely to suffer harm, and groups that cannot potentially benefit should not be included in QI interventions. Criteria used to define patient
groups to be included or excluded (for example, patient characteristics such as gender, race,
ethnicity, age or disease site, or staff characteristics, such as profession or role in a healthcare
organization) need to be justified (O’Kane, 2007). In addition, the potential burdens or risks and
the potential benefits of QI or clinical audit projects should be distributed fairly across the
population of patients who are served by the healthcare organization.

The Role of Independent Oversight by Ethics Committees and Institutions


Institutional review boards (IRBs) were established to protect the safety, rights, and welfare of
human research subjects. Their role is to perform critical oversight for research conducted on
human subjects to ensure the research is scientific and ethical and compliant with regulatory
guidelines (McNett and Lawry, 2009; Sims, 2008). They also function as a resource to researchers
providing educational mentoring and guidance to novice and expert researchers on the ethical
conduct of research, so as to ensure protection of human subjects (Kotzer & Milton, 2007).
Internationally, the World Medical Association's Declaration of Helsinki is the most widely cited ethical standard for research, recommending approval from an independent ethical review board (Wagner, 2003). The Belmont Report provides ethical guidelines for conducting research with human subjects, built on three principles that should guide any research activity. The first principle is autonomy (respect for persons), which includes recognizing an individual's autonomy and dignity. The second principle is beneficence: protecting the individual from harm and optimizing the benefits, which must outweigh the potential harms. The third principle is justice: fair distribution of the benefits and burdens of the research. These ethical standards guide institutional review boards (IRBs) in their purpose of protecting the rights, safety, and well-being of research participants, while also catering for the rights of researchers.

Important terms to consider in determining when the IRB is responsible for overseeing the rights,
safety, and well-being of human research participants include research and human subjects.
Human subjects research is defined as systematic investigation, including research development,
testing, and evaluation, designed to develop or contribute to generalizable knowledge, involving
humans. A human subject is defined as a living individual about whom an investigator conducting research obtains (a) data through intervention or interaction with the individual, or (b) identifiable private information (USDHHS, OHRP, 2004; USDHHS, OHRP, 2008). The OHRP
has a critical role in protecting human subjects during research activities (USDHHS, OHRP, 2009;
Wagner, 2003).

As more and more healthcare providers become involved in quality improvement and research activities, it can be challenging to determine whether an activity is quality improvement (QI) that does not require IRB approval, QI that does require IRB approval, or research that requires IRB approval. Along that continuum is a grey zone. The difficulty of this differentiation is evidenced by a study of QI leaders, IRB chairs, and journal editors, which found only varying levels of agreement among these experts (Lindenauer et al, 2002).

Qualifying Whether QI Interventions Constitute Research


The following three areas should be considered to determine if a project is QI or research: (a) the purpose, (b) the design, and (c) generalizability (McNett & Lawry, 2009). A simple decision-checklist sketch follows the lettered list below.
a) If the purpose is to test novel issues that go beyond current knowledge or to fill a gap in existing
knowledge regarding a specific patient population, disease, or treatment approach, then the QI
project is research and requires IRB approval (McNett & Lawry, 2009; Platteborze et al.,
2010).
b) If the design of the proposed QI project includes any of the following, then the QI project requires IRB approval: a selected group of participants, randomization, evaluation of a specific drug or device, acquisition of protected health information, access to private information of individuals that is not routinely accessed, comparison of two or more interventions, or provision of an intervention that is potentially less or more effective than the standard of care or that imposes risks or burdens beyond the standard of care (McNett & Lawry, 2009; Platteborze et al., 2010; Wagner, 2003).
c) If the intent is to disseminate the information gained from the QI project at a conference or in a publication external to the institution, then the project should undergo IRB approval. Often individuals seek IRB approval after the project is completed, once they have found that it had a significant impact; it is then at the IRB's discretion whether or not to grant retrospective approval (Wagner, 2003). Therefore, if there is a foreseeable plan to disseminate or publish the data from a QI project beyond the institution or system, IRB approval should be sought before commencing the project (Rivera, 2008). The requirement that QI
projects disseminated beyond the health institution or system undergo IRB approval is controversial (USDHHS, OHRP, 2009), as the sole intent to publish the findings of a QI project is deemed an insufficient criterion for determining whether a QI activity involves research. The regulatory definition under 45 CFR 46.102(d) is: "Research means a systematic investigation including research development, testing and evaluation designed to develop or contribute to generalizable knowledge." Planning to publish an account of a QI project does not necessarily mean that the project fits the definition of research (USDHHS, 2009). To distinguish a QI project from a research project when submitting a manuscript for publication (if the QI project has not undergone IRB approval), the following headings have been recommended: Issue, Imperative for Project, Procedures of Collecting and Evaluating Data, Information Found, and Lessons Learned (Platteborze et al., 2010, p. 291).
d) It may not always be clear who is accountable for the effective conduct of QI and clinical audit
projects, and who is responsible for ensuring that ethical issues are identified, considered and
addressed (Bellin and Dubler, 2001). Even then, it may not always be clear whether a QI
initiative is audit or research (Hughes, 2005). Therefore, a healthcare organization needs to
ensure that these projects have appropriate independent ethics review and oversight as part of
the clinical governance arrangements in the organization (Cretin et al, 2000; Morris and
Dracup, 2007). The ethical oversight structure also should include the organization’s patient
safety programme because these activities also can involve risks to patients (Rix and Cutting,
1996; Perneger, 2004; Wade, 2005; Boult and Maddern, 2007). Oversight should protect
patients from ad hoc or poorly conceived projects and should ensure that the organization has
a robust strategic programme that is achieving substantial improvements in the quality and
safety of patient care
e) Some organizations have considered that a Research Ethics Committee can be asked to oversee
QI and clinical audit projects from an ethics perspective (Bottrell, 2007). Another suggestion has
been that the Chair of a Research Ethics Committee could be asked for guidance in relation to
ethical issues in QI or clinical audit projects and could authorize projects that involve no more
than minimal risk to patients. However, a number of reasons have been given for not involving
a Research Ethics Committee in QI and clinical audit activities including the following: There
are significant differences between research and QI or clinical audit with regard to the
obligations of a healthcare organization. Research is an optional activity in a healthcare organization: no individual or organization is obligated to carry out research. QI and clinical audit processes, on the other hand, are ethically intrinsic to the provision of care, a morally and legally mandatory activity that should be integrated into the operations of a healthcare organization. QI and clinical audit activities should not be viewed as a set of staff projects, but as the heart of the operation of a healthcare organization, representing its commitment to improving the quality and safety of patient care.
f) Individuals who are leading QI or clinical audit projects need to take responsibility for leading
changes in practice needed to achieve improvements; this responsibility cannot be delegated
to a Research Ethics Committee to oversee. Research Ethics Committees do not exist to assess
projects that involve changing practices and systems in the delivery of patient care. Therefore,
QI and/or clinical audit leads need to also assume responsibility for identifying and managing
any ethics issues related to the projects. Research Ethics Committees are often overworked and
have lengthy backlogs. Given the urgency of improvement in the quality and safety of
healthcare, it is counterproductive to contemplate delays in the important business of
redesigning the quality and safety of patient care.
g) Research Ethics Committees may lack the knowledge and expertise needed to evaluate QI or
clinical audit projects (Maxwell and Kaye, 2004). Staff members who are involved in and committed to carrying out QI or clinical audit projects could be discouraged from undertaking such projects in the first place if they experience barriers such as additional paperwork, alongside the delays and frustrations associated with Research Ethics Committee review before work on a project can begin (Choo, 1998; Wilson et al, 1999; Doezema and Hauswald, 2002; Candib, 2007; Whicher et al, 2015). The typical Research Ethics Committee process could have a 'chilling effect on studies that could substantially improve error-prone systems and that expose participants to risks no greater than those incurred during routine patient care' (Palevsky et al, 2000; Miller, 2008; Neff, 2008).
Actions to consider in case of uncertainty
Consider the following organizational systems to oversee possible ethical issues in QI or clinical audit projects (Casarett et al, 2000; Doyal, 2004; Taylor et al, 2010; Ogrinc et al, 2013):
1) Provide a corporate register of QI and clinical audit projects
2) Seek approval of an independent ethics committee
3) Disseminate organizational policies and guidance for QI and clinical audit projects

4) Provide for ethical consideration of a QI or clinical audit project that is designed to contain or
control or reduce costs
5) Include carrying out QI and clinical audit projects in job descriptions and performance
appraisals for all clinical staff
6) Teach staff about the organization’s policies and systems for identifying and managing ethics
issues in QI and clinical audit projects
7) Track completion of QI and clinical audit projects
8) Review potential publication of QI or clinical audit projects

Ethical Issues in Research in Quality Improvement


1) QI and clinical audit projects focus on translating existing knowledge about best practice,
derived from research and other forms of evidence-based information, into routine clinical
practice. They provide important information on how to apply existing knowledge and
implement changes that may be needed to achieve the best possible clinical outcomes. These
types of projects may be seen as 'routine' QI or clinical audit projects, but sometimes they have features that are typical of research (Lynn, 2004; Brown et al, 2007; Hagen et al, 2007). There is a grey zone between what constitutes research and what is purely QI, and this is due to several factors:
2) Changes in practice resulting from QI or clinical audit projects often involve routine
operational interventions. Examples might include: clarifying or redefining clinical policies or
procedures based on evidence-based practice; training staff to implement new policies or
procedures; implementing a new form of routinely recording patient care interventions; or
changing a process of care to eliminate steps that don’t contribute to providing quality care.
3) More complex QI projects can involve changing major systems that support the delivery of
care or service or devising completely novel interventions to achieve improvements in the
quality of care or service. Such projects may be seen as ‘non-routine’ QI activities (Gerrish
and Mawson, 2005; Wise, 2007; Markman, 2007; Reynolds et al, 2008). It can be unclear how
much risk is involved in these projects, particularly for individual or groups of patients who
may experience the major systems change or novel intervention. These types of projects should
have ethical oversight by an organizational mechanism that provides for appropriate risk
assessment for patients through consideration of the balance of benefits to patients against possible risks (Lemaire, 2008; Siegel and Alfano, 2009).

4) Some QI activities involve the testing of alternative systems or methods for organizing or
delivering care. This type of activity most appropriately should be identified as QI research.
Such projects typically involve patients accessing care or services that differ from established best practice or usual clinical care, and therefore meet criteria that define a research study.
These QI research projects require formal ethical committee application and review (Weiserbs
et al, 2009). The results of the interventions being tested in the research are unknown, and
therefore, patients are at risk of not receiving care that will benefit them. Even then, patients
may be harmed by participation (or non-participation) in the interventions.

References
Barton A. Monitoring body is needed for audit. BMJ 1997;315:1465.
Bellin E, Dubler NN. The quality improvement-research divide and the need for external oversight.
Am J Public Health 2001;91(9):1512–7.
Bottrell MM. Accountability for the conduct of quality-improvement projects. In: Jennings B,
Baily MA, Bottrell M, Lynn J, editors. Health Care Quality Improvement: Ethical and Regulatory
Issues; 2007, 129–144. Available at: www.thehastingscenter.org/wp-content/uploads/Health-Care-Quality-Improvement.pdf.
Boult M, Maddern GJ. Clinical audits: why and for whom. ANZ J Surg 2007;77:572–8.
Brown LH, Shah MN, Menegazzi JJ. Research and quality improvement: drawing lines in the grey
zone (editorial). Prehosp Emerg Care 2007;11:350–1.
Campbell M, Fitzpatrick R, Haines A, et al. Framework for design and evaluation of complex
interventions to improve health. BMJ 2000;321:694–6.
Candib LM. How turning a QI project into ‘research’ almost sank a great program. Hastings Center
Report 2007;37:26–30.
Carr ECJ. Talking on the telephone with people who have experienced pain in hospital: clinical
audit or research? J Adv Nurs 1999;29(1):194–200.
Casarett D, Karlawish JHT, Sugarman J. Determining when quality improvement initiatives
should be considered research. JAMA 2000;283(17):2275–80.
Cave E, Nichols C. Clinical audit and reform of the UK research ethics review system. Theor
Med Bioeth 2007;28(3):181–203.
Choo V. Thin line between research and audit (commentary). Lancet 1998;352:337–8.

Cretin S, Lynn J, Batalden PB, Berwick DM. Should patients in quality-improvement activities
have the same protections as participants in research studies? JAMA 2000;284(14):1786.
Davidoff F. Publication and the ethics of quality improvement. In: Jennings B, Baily MA, Bottrell
M, Lynn J, editors. Health Care Quality Improvement: Ethical and Regulatory Issues; 2007, 101–
6. Available at: www.thehastingscenter.org/wp-content/uploads/Health-Care-Quality-Improvement.pdf.
Doezema D, Hauswald M. Quality improvement or research: a distinction without a difference?
IRB 2002; 24:9–12.
Doyal L. Preserving moral quality in research, audit, and quality improvement. Qual Saf Health
Care 2004;13:11–2.
Gerrish K, Mawson S. Research, audit, practice development and service evaluation: Implications
for research and clinical governance. Practice Development in Health Care 2005;4(1):33–9.
Hagen B, O’Beirne M, Desai S, Stingl M, Pachnowski CA, Hayward S. Innovations in the Ethical
Review of Health-related Quality Improvement and Research: The Alberta Research Ethics
Community Consensus Initiative (ARECCI). Healthc Policy 2007;2(4):1–14.
Hughes R. Is audit research? The relationships between clinical audit and social research. Int J
Health Care Qual Ass 2005;18(4):289–99.
Kaktins, N. M. (2009). Faculty guide to the institutional review board process. Nurse Educator,
34, 244-248.
Kinn S. The relationship between clinical audit and ethics. J Med Ethics 1997;23:250–3.
Kotzer, A. M. & Milton, J. (2007). An education initiative to increase staff knowledge of
Institutional Review Board guidelines in the USA. Nursing and Health Sciences, 9, 103-106.
Layer T. Ethical conduct recommendations for quality improvement projects. J Healthc Qual
2005;25(4):44–6.
Lemaire F. Informed consent and studies of a quality improvement program (letter). JAMA 2008;
300:1762.
Lindenauer, P. K., Benjamin, E. M., Naglieri-Prescod, D., et al (2002). The role of the institutional
review board in quality improvement: A survey of quality officers, institutional review board
chairs and journal editors. American Journal of Medicine, 113, 575-579.
Lo B, Groman M. Oversight of quality improvement. Focusing on benefits and risks. Arch Intern
Med 2003;163(12):1481–6.

Lowe J, Kerridge I. Implementation of guidelines for no-CPR orders by a general medical unit in
a teaching hospital. Aust N Z J Med 1997;27(4):379–83.
Lynn J. When does quality improvement count as research? Human subject protection and theories
of knowledge. Qual Saf Health Care 2004;13:67–70.
Lynn, J., Baily, M. A., Bottrell, M., et al. (2007). The ethics of using quality improvement methods
in health care. Annals of Internal Medicine, 146, 666-674.
Markman M. The role of independent review to ensure ethical quality-improvement activities in
oncology: a commentary on the national debate regarding the distinction between quality-
improvement initiatives and clinical research. Cancer 2007;110(12):2597–600.
Maxwell DJ, Kaye KI. Multicentre research: negotiating the ethics approval obstacle course
(letter). Med J Aust 2004;181(8):460.
McNett, M, Lawry, K. (2009). Research and quality improvement activities: When is institutional
review board review needed? Journal of Neuroscience Nursing, 41, 344-347.
Miller FG, Emanuel EJ. Quality improvement research and informed consent. N Engl J Med
2008;358(8):765–7.
Morris PE, Dracup K. Quality improvement or research? The ethics of hospital project oversight.
Am J Crit Care 2007;16:424–6.
Neff MJ. Institutional Review Board consideration of chart reviews, case reports, and
observational studies. Respir Care 2008;53(10):1350–3.
Nelson WA. Proposed ethical guidelines for quality improvement. Healthc Exec 2014; 29(2):52,
54–5.
O’Kane ME. Do patients need to be protected from quality improvement? In: Jennings B, Baily
MA, Bottrell M, Lynn J, editors. Health Care Quality Improvement: Ethical and Regulatory Issues;
2007, pp. 89–99. Available at: www.thehastingscenter.org/wp-content/uploads/Health-Care-Quality-Improvement.pdf.
Ogrinc G, Nelson WA, Adams SM, O’Hara AE. An instrument to differentiate between clinical
research and quality improvement. IRB 2013;35(5):1–8.
Ogrinc G, Mooney SE, Estrada C, et al. The SQUIRE (Standards for QUality Improvement
Reporting Excellence) guidelines for quality improvement reporting: explanation and elaboration.
Qual Saf Health Care 2008;17(Suppl I):i13–i32.

Palevsky PM, Washington MS, Stevenson JA, et al. Improving compliance with the dialysis
prescription as a strategy to increase the delivered dose of hemodialysis: an ESRD network 4
quality improvement project. Adv Ren Replace Ther 2000;7(4 Suppl 1):S21–30.
Pawson R, Greenhalgh T, Harvey G, et al. Realist review—a new method of systematic review
designed for complex policy interventions. J Health Serv Res Policy 2005;10(suppl 1):21–34.

Pfadenhauer LM, Gerhardus A, Mozygemba K, et al. Making sense of complexity in context and
implementation: the Context and Implementation of Complex Interventions (CICI) framework.
Implementation Science (2017) 12:21

Perneger TV. Why we need ethical oversight of quality improvement projects. Int J Qual Health
Care 2004;16(5):343–44.
Platteborze, L. S., Young-McCaughan, S., King-Letzkus, I, et al, (2010). Performance
improvement/research advisory panel: A model for determining whether a project is a performance
or quality improvement activity or research. Military Medicine, 175, 289-291.
Reynolds J, Crichton N, Fisher W, Sacks S. Determining the need for ethical review: a three-stage
Delphi study. J Med Ethics 2008;34:889–94.
Rivera, S. (2008). Clinical research from proposal to implementation: What every clinical
investigator should know about the institutional review board. Journal of Investigative Medicine,
56, 975-984.
Rix G, Cutting K. Clinical audit, the case for ethical scrutiny? Int J Health Care Qual Ass
1996;9(6):18–20.
Siegel MD, Alfano S. The ethics of quality improvement research. Crit Care Med 2009;37(2):791–
2.
Sims, J. M. (2008). An introduction to institutional review boards. Dimensions of Critical Care
Nursing, 27, 223-225.
Taylor HA, Pronovost PJ, Sugarman J. Ethics, oversight and quality improvement initiatives. Qual
Saf Health Care 2010;19(4):271–4.
United States Department of Health & Human Services Office for Human Research Protections.
(2004). Human subject regulations decision charts. Retrieved from
http://www.hhs.gov/ohrp/humansubjects/guidance/decisioncharts.htm

United States Department of Health & Human Services Office for Human Research Protections.
(2008). Guidance on engagement of institutions in human subjects research. Retrieved from
http://www.hhs.gov/ohrp/humansubjects/guidance/engage08.html
United States Department of Health & Human Services, Office for Human Research Protections.
(2009). Quality improvement activities frequently asked questions. Retrieved from
http://www.hhs.gov/ohrp/qualityfaq.html
Wade DT. Ethics, audit, and research: all shades of grey. BMJ 2005;330:468–73.
Wagner, R. M. (2003). Ethical review of research involving human subjects: When and why is
IRB review necessary? Muscle & Nerve, 28, 27-39.
Weiserbs KF, Lyutic L, Weinberg J. Should quality improvement projects require IRB approval?
(letter). Acad Med 2009;84(2):153.
Whicher D, Kass N, Saghai Y, et al. The views of quality improvement professionals and
comparative effectiveness researchers on ethics, IRBs, and oversight. J Empir Res Hum Res Ethics
2015;10(2):132–44.
Wilson A, Grimshaw G, Baker R, Thompson J. Differentiating between audit and research: postal
survey of health authorities’ views. BMJ 1999;319:1235.
Wise LC. Ethical issues surrounding quality improvement activities. J Nurs Adm 2007;37(6):272–
8.

CHAPTER 5: PERFORMANCE IMPROVEMENT PROJECTS IN HEALTHCARE SYSTEMS

Countries and international organizations have increasing interest in how health systems perform.
This has led to the development of performance indicators for monitoring, assessing, and managing
health systems to achieve effectiveness, equity, efficiency, and quality. Although such indicators populate conceptual frameworks, it is often unclear what the underlying concepts are or how effectiveness is conceptualized and measured. Furthermore, there is a gap in the
knowledge of how the resultant performance data are used to stimulate improvement and to ensure
health care quality. Performance Improvement represents a critical strategy for improving health
system performance as well as hospital quality of care. QI is an iterative systematic approach to
planning and implementing continuous improvement in performance. QI emphasizes continuous
examination and improvement of work processes whereby teams of organizational members,
trained in basic statistical techniques and problem solving tools, use available data for decision-
making in order to improve quality or performance. The systemic focus of performance improvement reflects the increasing recognition that the quality of the care delivered by clinicians depends on the performance capability of the organizational systems in which they work.
Patient safety
While individual clinician competence remains key for patient safety, the capability of organizational systems to prevent errors, coordinate care among settings and practitioners, and ensure that relevant, accurate information is available when needed is increasingly seen as a critical element in providing high quality care (Institute of Medicine, 2000). As an indication of
the growing emphasis on organizational systems of care, the Joint Commission on Accreditation
of Healthcare Organizations, the National Committee for Quality Assurance, and the Peer Review
Organizations of the Centers for Medicare and Medicaid in the United States have all encouraged
hospitals to use QI methods. While QI holds promise for improving quality of care, hospitals that
adopt QI often struggle with its implementation (Shortell et al, 1998). Implementation refers to the
transition period, following a decision to adopt a new idea or practice, when intended users put
that new idea or practice into use, such as when clinical and nonclinical staff begin applying QI
principles and practices to improve clinical care processes (Klein and Sorra, 1996; Rogers, 2003).
Successful implementation is critical to the effectiveness of a QI initiative (Blumenthal and Kilo, 1998; Shortell et al, 1998). However, QI implementation places several demands on individuals and organizations. It requires sustained leadership, extensive training and support, robust
measurement and data systems, realigned incentives and human resources practices, and cultural
receptivity to change (Shortell et al, 1998; Ferlie and Shortell, 2001; Institute of Medicine, 2001;
Meyer et al. 2004). Also, the systemic nature of many quality problems implies that the
effectiveness of a QI initiative may depend on its implementation across many conditions,
disciplines, and departments, which further adds to the challenge (Gustofson et al. 1997;
Blumenthal and Kilo, 1998; Meyer et al. 2004). If successful, though, implementing QI in this
manner creates resilient long-lasting infrastructure for enhancing organization-wide quality.

Performance Improvement and Organizational Context in Healthcare Systems


In hospital settings and other healthcare systems, several dimensions of QI implementation influence indicators of clinical quality. If data on hospital QI practices are evaluated using carefully screened and validated measures indicative of patient safety, one can address several problems associated with existing research on hospital QI and quality of care (Weiner et al, 2006).
First, one needs to account for differences in how hospitals implement QI in order to understand
the relative advantage of different implementation strategies. Second, studies of hospital QI should
use samples large enough to allow generalization of study findings to larger populations of hospitals and to support the development of managerial or policy recommendations. The use of a broad range of hospital quality indicators enables one to
link specific QI structures and practices with a set of quality indicators that broadly reflect quality
at the institutional level.
Performance improvement and client satisfaction
QI embraces a philosophy of meeting or exceeding customer expectations through the continuous
improvement of the processes of producing goods or services (Weiner et al, 2006). QI posits that
the quality of goods and services depends primarily on the processes by which they are designed
and delivered. Thus QI focuses on understanding, controlling, and improving work processes
rather than correcting individuals’ mistakes. QI also aims at limiting variations, on the assumption
that uncontrolled variance in work processes is the primary cause of quality problems.
Consequently, QI focuses on analyzing the root causes of variability, taking appropriate steps to make
work processes predictable, and then continuously improving process performance. At the
operational level, QI combines three elements: use of cross-functional teams to identify and solve

quality problems, use of scientific methods and statistical tools by these teams to monitor and
analyze work processes, and use of process-management tools (such as flow charts and run charts
that graphically depict steps in a clinical process) to help team members use collective knowledge
effectively.
Performance improvement and multi-disciplinary teams

Cross-functional teams play an integral role in QI because most vital work processes span
individuals, disciplines, and departments. Cross-functional teams bring together the many clinical
professionals and nonclinical hospital staff members who perform a process to document the
process in its entirety, diagnose the causes of quality problems, and develop and test possible
solutions to address them, with systematic analysis conducted at the organizational level (Shortell et al, 1998). Several studies have examined the structures, processes, and relationships
common to designing, organizing, and implementing hospital QI efforts (Barsness et al. 1993a, b;
Blumenthal and Edwards. 1995; Gilman and Lammers, 1995; Shortell, 1995; Shortell et al. 1995b;
Weiner et al, 1996; Weiner et al, 1997; Shortell, and Alexander, 1997; Westphal et al, 1997;
Berlowitz et al. 2003). The findings of these studies are that hospitals vary widely in terms of: (1)
their approach to implementing QI; (2) the extent to which QI has ‘‘penetrated’’ core clinical
processes; and (3) the degree to which QI practices have been diffused across clinical areas. Few
of these studies, however, examined the effectiveness of hospital QI practices. With few exceptions (Westphal et al,
1997; Shortell et al. 2000), most have used perceptual measures of impact or self-reported
estimates of cost or clinical impact rather than objectively derived measures of clinical quality
(Gilman and Lammers 1995; Shortell et al. 1995b; Carlin et al 1996; O’Connor et al. 1996; Gordian
and Ballard 1997; Goldberg et al. 1998; Ferguson et al. 2003).

Hypotheses for Performance Improvement Projects in Healthcare


1) The hypothesis is that higher values on multiple hospital-level quality indicators will be
associated with the implementation of QI structures and practices that provide a durable
infrastructure for continuous improvement. The effectiveness of QI at the organizational level
depends in part on the scope of QI implementation, that is, the extent or range of application of
QI philosophy and methods.
2) QI achieves its full potential when it penetrates organizational routines and becomes a ‘‘way
of doing business’’ throughout the organization. Such penetration is critical for sustainable
success in enhancing quality across clinical conditions, organizational units, and time. Broad
implementation scope enhances a hospital’s capability to systematically improve work


processes and thereby enhance quality organization-wide. Besides, most vital work processes
in organizations span individuals, disciplines, and departments (Ishikawa 1985; Deming 1986;
Juran 1988; James 1989; Walton 1990).
3) The importance of team and interpersonal relationships is that improving clinical care
processes generally requires that clinical professionals and hospital staff from different
specialties, functions, or units work together in order to document how the process works in
its entirety and identify the key process factors that play a causal role in process performance.
Implementing systemic changes also typically requires collaboration across disciplinary,
functional, and unit boundaries. For example, improving cardiac surgery outcomes may entail
multiple, simultaneous changes in physician offices, inpatient units, and home-health units
(Gustofson et al, 1997). Even when implementing ‘‘local’’ changes (those within a single unit),
cross-unit collaboration is often necessary to prevent undesirable, unintended consequences from arising in other units because of task interdependencies.
4) Enhancing quality on an organization-wide basis requires mobilizing large numbers of hospital
staff, equipping them with technical expertise in QI methods and tools, and empowering them
to diagnose and solve patient safety problems. A small number of people working together on
a cross-functional team could, with the right support, make systems improvements that address
a specific quality problem (e.g., stroke mortality). An organization- wide effort focusing on 5,
10, or even 15 quality problems, however, would require harnessing the knowledge, the
energy, and the creativity of many hospital staff members (Weiner et al, 2006). The extensive
involvement of hospital staff across multiple units strengthens the effectiveness of QI efforts
by promoting a ‘‘quality’’ culture. The participation in QI promotes shared values about the
importance of continuous improvement, using data and scientific methods to identify
problems, communicating openly, and collaborating to implement solutions. These shared
values, in turn, support the implementation of systemic changes that cross disciplinary,
departmental, and organizational boundaries (by reducing turf battles) and increase the
likelihood of ‘‘holding the gain’’ (O’Brien et al, 1995).
5) Direct senior management participation in cross-functional QI teams signals to other
organizational members that senior management views QI as a top priority. This, in turn, may
strengthen the effectiveness of QI efforts by increasing the commitment and contributions of

front-line workers. Moreover, senior managers who participate in QI teams may develop a
deeper understanding of the root causes of quality problems and feel greater ownership of
recommended solutions that such teams generate. As a result, senior managers may be more
willing to commit the resources and make the policy changes necessary to ameliorate systemic
causes of quality problems. Widespread physician participation in QI teams may also be
critical to QI effectiveness because physicians play a critical role in clinical resource allocation
decisions and possess the clinical expertise needed to differentiate appropriate from
inappropriate variation in care processes. Pervasive physician participation may not only
enhance the quality of analysis and problem solving in QI teams, but also support the
implementation of changes recommended by such teams. Research indicates that peer
influence can be a powerful lever for provider behavior change. Widespread physician
participation in QI teams may facilitate those changes in physician behavior needed to address
quality problems. A blended approach is also possible, whereby hospitals achieve higher values on hospital-level quality indicators by encouraging many organizational members to participate in QI activities while limiting the deployment of QI to a few organizational units. Greater participation of
hospital staff and senior managers in QI teams is positively associated with higher values on
several hospital-level quality indicators, not just one or two. Perhaps intensive mobilization of
organizational personnel within organizational units (such as acute inpatient care) creates the
‘‘critical mass’’ necessary to overcome the structural, cultural, and technical barriers that often
obstruct organization-wide application of QI or otherwise restrict the gains from QI activity to
a few clinical outcomes.

Challenges to Institutionalizing QI in Healthcare

Another practical issue is the role of physicians in clinical QI efforts. Lack of physician involvement represents the single most important obstacle to the success of clinical QI (Berwick et al, 1990; Health Care Advisory Board, 1992; McLaughlin and Kaluzny, 1994; Blumenthal and Edwards, 1995; Shortell, 1995). Physicians play a central role in clinical resource allocation
decisions and possess the clinical expertise needed to differentiate appropriate from inappropriate
variation in care processes. Physicians are reluctant to participate in QI projects because of distrust
of hospital motives, lack of time, and fear that reducing variation in clinical processes will
compromise their ability to vary care to meet individual needs (Blumenthal and Edwards 1995;

Shortell 1995; Shortell et al, 1995a). Study results suggest that widespread physician participation
in QI teams, while perhaps desirable, might not be necessary. Widespread participation of hospital
staff and senior managers, it seems, is more important, at least for the hospital-level quality indicators examined in these studies. Rather than attempting to mobilize much of the medical staff, hospital
leaders could perhaps secure needed physician input by involving selected physicians on an as-
needed basis.

Key Performance Improvement Concepts

Performance information needs to be structured to demonstrate clearly how government uses available resources to deliver on its mandate.
Inputs, activities, outputs, outcomes and impacts
When describing what government institutions do for purposes of measuring performance, the following terms are used:
a) Inputs: all the resources that contribute to the production and delivery of outputs. Inputs are
"what we use to do the work". They include finances, personnel, equipment and buildings.
b) Activities: the processes or actions that use a range of inputs to produce the desired outputs
and ultimately outcomes. In essence, activities describe "what we do".
c) Outputs: the final products, or goods and services produced for delivery. Outputs may be
defined as "what we produce or deliver".
d) Outcomes: the medium-term results for specific beneficiaries that are the consequence of
achieving specific outputs. Outcomes should relate clearly to an institution's strategic goals
and objectives set out in its plans. Outcomes are "what we wish to achieve".
e) Impacts: the results of achieving specific outcomes, such as reducing poverty and creating
jobs. When monitoring and assessing outcomes and impacts, it needs to be kept in mind that
government interventions can also have unintended consequences. These also need to be
identified and monitored so that risks can be managed and corrective action can be taken.

In managing for results, budgets are developed in relation to inputs, activities and outputs, while
the aim is to manage towards achieving the outcomes and impacts.
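
To illustrate, the results chain above can be captured in a simple data structure. The sketch below uses a hypothetical immunization programme; the entries are illustrative only.

```python
# A minimal, hypothetical sketch of a results chain (inputs -> activities ->
# outputs -> outcomes -> impacts) for an illustrative immunization programme.
results_chain = {
    "inputs": ["vaccine stock", "nurses", "cold-chain equipment", "operating budget"],
    "activities": ["run outreach clinics", "maintain the cold chain", "record doses given"],
    "outputs": ["children fully immunized by age one"],
    "outcomes": ["reduced incidence of vaccine-preventable disease"],
    "impacts": ["lower child mortality"],
}

# Budgets attach to inputs, activities and outputs; management steers towards
# the outcomes and impacts, as noted above.
for level, items in results_chain.items():
    print(f"{level:>10}: {', '.join(items)}")
```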

Performance indicators
Suitable indicators need to be specified to measure performance in relation to inputs, activities,
outputs, outcomes and impacts. The challenge is to specify indicators that measure things that are
useful from a management and accountability perspective. This means managers need to be
selective when defining indicators. Defining a good performance indicator requires careful
analysis of what is to be measured. One needs to have a thorough understanding of the nature of
the input or output, the activities, the desired outcomes and impacts, and all relevant definitions
and standards used in the field. For this reason it is important to involve subject experts and line
managers in the process.
A good performance indicator should be:
a) Reliable: the indicator should be accurate enough for its intended use and respond to changes
in the level of performance.
b) Well-defined: the indicator needs to have a clear, unambiguous definition so that data will be
collected consistently, and be easy to understand and use.
c) Verifiable: it must be possible to validate the processes and systems that produce the indicator.
d) Cost-effective: the usefulness of the indicator must justify the cost of collecting the data.

e) Appropriate: the indicator must avoid unintended consequences and encourage service
delivery improvements, and not give managers incentives to carry out activities simply to meet
a particular target.
f) Relevant: the indicator must relate logically and directly to an aspect of the institution's
mandate, and the realization of strategic goals and objectives.
Institutions should include performance indicators related to the provision of goods and services.
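
One practical way to apply the criteria above is to capture each indicator in a structured definition record. The sketch below is a hypothetical illustration; the field names and the waiting-time indicator are my own examples, not a prescribed standard.

```python
# A hypothetical sketch of an indicator definition record reflecting the
# criteria listed above; field names are illustrative only.
from dataclasses import dataclass

@dataclass
class IndicatorDefinition:
    name: str
    definition: str                # well-defined: clear, unambiguous wording
    data_source: str               # verifiable: the system producing the data can be validated
    collection_cost: str           # cost-effective: usefulness must justify this cost
    linked_objective: str          # relevant: ties to a strategic goal or objective
    perverse_incentive_check: str  # appropriate: note any unintended behaviour it could reward

indicator = IndicatorDefinition(
    name="Outpatient waiting time",
    definition="Median minutes from registration to first clinical contact",
    data_source="Electronic queue-management system, validated monthly",
    collection_cost="Low: extracted automatically from routine records",
    linked_objective="Improve timeliness of outpatient care",
    perverse_incentive_check="Guard against registering patients late to shorten recorded waits",
)
print(indicator)
```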

Indicators of economy, efficiency and effectiveness

These describe the interface between government and the public, and are useful for monitoring
and improving performance as it is relevant to the citizens of the country.
Where possible, indicators that directly measure inputs, activities, outputs, outcomes and impacts
should be sought. This is not always possible and in such instances, proxy indicators may need to
be considered. Typical direct indicators include cost or price, distribution, quantity, quality,
dates and time frames, adequacy and accessibility.
a) Cost or Price indicators are both important in determining the economy and efficiency of
service delivery.
b) Distribution indicators relate to the distribution of capacity to deliver services and are critical
to assessing equity across geographical areas, urban-rural divides or demographic categories.
Such information could be presented using geographic information systems.

c) Quantity indicators relate to the number of inputs, activities or outputs. Quantity indicators
should generally be time-bound; e.g. the number of inputs available at a specific point in time,
or the number of outputs produced over a specific time period.
d) Quality indicators reflect the quality of that which is being measured against predetermined
standards. Such standards should reflect the needs and expectations of affected parties while
balancing economy and effectiveness. Standards could include legislated standards and
industry codes.
e) Dates and time frame indicators reflect timeliness of service delivery. They include service
frequency measures, waiting times, response time, turnaround times, time frames for service
delivery and timeliness of service delivery.
f) Adequacy indicators reflect the quantity of input or output relative to the need or demand -
"Is enough being done to address the problem?".
g) Accessibility indicators reflect the extent to which the intended beneficiaries are able to
access services or outputs. Such indicators could include distances to service points,
travelling time, waiting time, affordability, language, accommodation of the physically
challenged. All government institutions are encouraged to pay particular attention to
developing indicators that measure economy, efficiency, effectiveness and equity using data
collected through these and other direct indicators.
h) Economy indicators: explore whether specific inputs are acquired at the lowest cost and at
the right time; and whether the method of producing the requisite outputs is economical.
Economy indicators only have meaning in a relative sense. To evaluate whether an institution
is acting economically, its economy indicators need to be compared to similar measures in
other state institutions or in the private sector, either in South Africa or abroad. Such indicators
can also be compared over time, but then prices must be adjusted for inflation.
i) Efficiency indicators: explore how productively inputs are translated into outputs. An efficient operation maximizes the level of output for a given set of inputs, or minimizes the inputs required to produce a given level of output. Efficiency indicators are usually measured as an input:output or output:input ratio (a worked sketch follows the list below). These indicators also only have meaning in a relative sense. To evaluate whether an institution is efficient, its efficiency indicators need to be compared to similar indicators elsewhere or across time. An institution's efficiency can also be measured relative to predetermined efficiency targets.

j) Effectiveness indicators: explore the extent to which the outputs of an institution achieve the
desired outcomes. An effectiveness indicator assumes a model of how inputs and outputs relate
to the achievement of an institution's strategic objectives and goals. Such a model also needs
to account for other factors that may affect the achievement of the outcome. Changes in
effectiveness indicators are only likely to take place over a period of years, so it is only
necessary to evaluate the effectiveness of an institution every three to five years; or an
institution may decide to evaluate the effectiveness of its different programmes on a rolling 3-
5 year schedule.
k) Equity indicators: explore whether services are being provided impartially, fairly and
equitably. Equity indicators reflect the extent to which an institution has achieved and been
able to maintain an equitable supply of comparable outputs across demographic groups,
regions, urban and rural areas, and so on.
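
As signalled under the efficiency indicators above, a worked sketch helps show how an output:input ratio is computed and why it only has meaning in comparison. The clinics and figures below are hypothetical.

```python
# A hypothetical worked sketch of an efficiency indicator (output:input ratio),
# compared across two clinics and over time. Figures are illustrative only.
clinics = {
    # label: (outpatient visits completed, clinical staff days used)
    "Clinic A, 2018": (12_000, 2_400),
    "Clinic A, 2019": (13_500, 2_500),
    "Clinic B, 2019": (9_000, 2_250),
}

for label, (outputs, inputs) in clinics.items():
    efficiency = outputs / inputs  # visits per staff day (output:input ratio)
    print(f"{label}: {efficiency:.1f} visits per staff day")

# As the text notes, the ratio only has meaning relative to other institutions,
# to earlier periods, or to a predetermined efficiency target.
```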

Often specific benefit-incidence studies will be needed to gather information on equity. The aim
of such studies would be to answer the question: "Who benefits from the outputs being delivered?"
Usually equity is measured against benchmark standards or on a comparative basis.
Institutions may also use the results of opinion surveys as indicators of their performance. Such
indicators should not replace the above two categories of indicators, but rather complement them.
If an institution uses such surveys, it is important that they be professionally designed.

Performance Improvement Targets


Once a set of suitable indicators has been defined for a programme or project, the next step is to
specify what level of performance the institution and its employees will strive to achieve. This
involves specifying suitable performance targets relative to current baselines. Each institution
needs to collect a wide range of performance information for management purposes; however, not all of this information is relevant in accountability documents. The institution should specify in its
planning documents a set of performance targets it will report against in its accountability
documents. The set of indicators selected for accountability reporting ought to provide a holistic
view of the institution's performance. In the case of concurrent functions, national departments
need to identify a core set of indicators that need to be reported by provincial and local
governments to ensure comparability.
The baseline assessment

This is the current level of performance that the institution aims to improve. The initial step in
setting performance targets is to identify the baseline, which in most instances is the level of
performance recorded in the year prior to the planning period. So, in the case of annual plans, the
baseline will shift each year and the first year's performance will become the following year's
baseline. Where a system for managing performance is being set up, initial baseline information
is often not available. This should not be an obstacle: one needs to start measuring results in order to establish a baseline.
Performance targets
These express a specific level of performance that the institution, programme or individual
is aiming to achieve within a given time period.
Performance standards
These standards express the minimum acceptable level of performance, or the level of performance
that is generally expected. These should be informed by legislative requirements, departmental
policies and service-level agreements. They can also be benchmarked against performance levels
in other institutions, or according to accepted best practices. The decision to express the desired
level of performance in terms of a target or a standard depends on the nature of the performance
indicators. Often standards and targets are complementary. For example, the standard for
processing pension applications is 21 working days, and a complementary target may be to process
90 per cent of applications within this time. Performance standards and performance targets should
be specified prior to the beginning of a service cycle, which may be a strategic planning period or
a financial year. This is so that the institution and its managers know what they are responsible for,
and can be held accountable at the end of the cycle. While standards are generally "timeless",
targets need to be set in relation to a specific period. The targets for outcomes will tend to span
multi-year periods, while the targets for inputs, activities and outputs should cover either quarterly
or annual periods. An institution should use standards and targets throughout the organization, as
part of its internal management plans and individual performance management system. A useful
set of criteria for selecting performance targets is the "SMART" criteria:
a) Specific: the nature and the required level of performance can be clearly identified
b) Measurable: the required performance can be measured
c) Achievable: the target is realistic given existing capacity
d) Relevant: the required performance is linked to the achievement of a goal

e) Time-bound: the time period or deadline for delivery is specified.
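
The pension-processing example above can be expressed as a small calculation that checks performance against both the standard (21 working days) and the complementary target (90 per cent of applications within the standard). The case data below are hypothetical.

```python
# A hypothetical sketch linking a performance standard and a complementary target,
# based on the pension-processing example in the text.
processing_days = [12, 18, 25, 20, 21, 30, 15, 19, 22, 14]  # illustrative cases

STANDARD_DAYS = 21   # performance standard: acceptable turnaround per application
TARGET_SHARE = 0.90  # target: share of applications meeting the standard

within_standard = sum(1 for d in processing_days if d <= STANDARD_DAYS)
share = within_standard / len(processing_days)

print(f"{share:.0%} of applications processed within {STANDARD_DAYS} working days "
      f"(target {TARGET_SHARE:.0%}) -> "
      f"{'target met' if share >= TARGET_SHARE else 'target not met'}")
```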

Developing Performance Indicators


Even the best performance indicator information is of limited value if it is not used to identify
service delivery and performance gaps, to set targets and to work towards better results.
Determining a set of appropriate indicators depends on the nature of the institution's mandate.
Developing suitable performance indicators is a complex task. Six key steps may be identified in
this approach:

Step 1: Agree on what you are aiming to achieve


The first step in developing robust indicators is to agree on the problem you seek to remedy. Based
on an understanding of the problem, what is the solution? Or expressed in social terms, what would
society look like if the desired changes could be effected? This enables you to define a clear set of
outcomes and impacts. These are the institution's strategic goals and objectives, which need to be
defined in measurable terms. Well-defined strategic goals and objectives provide a better basis
from which to develop suitable programmes and projects, as well as appropriate indicators. Once
an institution has decided on what is to be achieved, it then needs to decide what it needs to deliver
to do so.
Step 2: Specify the outputs, activities and inputs
The second step is often the most difficult - specifying what the institution needs to do to achieve
the desired outcomes and impacts. You may find it useful to reverse the thought process: having
defined the outcomes and impacts the institution is aiming to achieve, you should then examine:
a) What parties are likely to be positively or negatively affected? What are their relevant
characteristics? This information is important when planning interventions that will affect them
and for designing appropriate indicators.
b) What does the institution need to do in the short term to achieve the desired outcomes and
impacts? These will be the outputs for the institution. The choice of outputs needs to take into
account who will be affected by the intervention.
c) What does the institution need to do to produce these outputs? These will be the activities the
institution needs to undertake.
d) What is needed to perform these activities? These will be the inputs the institution requires.
This approach to planning is called the "logic model", and is a useful way to plan and order
information. In determining the logic model, risks and assumptions must be identified for each of
the levels of the planning process. Specifying appropriate outputs often involves extensive policy
debates and careful analysis. The process of defining appropriate outputs needs to take into
consideration what is practical and the relative costs of different courses of action. It is also
important to assess the effectiveness of the chosen intervention.
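
Because the logic model simply links inputs to activities, outputs, outcomes and impacts, it can be written down explicitly so that every element, together with its risks and assumptions, is visible in one place. The sketch below is a hypothetical example (a hand-hygiene programme) intended only to show the structure, not to prescribe content.

# A hypothetical logic model for a hand-hygiene improvement programme.
# Each level should also record the risks and assumptions identified for it.
logic_model = {
    "impact": "Reduced healthcare-associated infections",
    "outcomes": ["Sustained hand-hygiene compliance above 90% on all wards"],
    "outputs": [
        "All clinical staff trained in the hand-hygiene protocol",
        "Alcohol-based hand rub available at every bedside",
    ],
    "activities": [
        "Run quarterly training sessions",
        "Procure and distribute hand rub dispensers",
        "Audit compliance monthly and feed results back to wards",
    ],
    "inputs": ["Trainers", "Training materials", "Hand rub supplies", "Audit staff time"],
    "assumptions": ["Staff turnover remains low enough for training to be retained"],
    "risks": ["Supply-chain interruptions for hand rub"],
}

# Reading the model bottom-up answers the planning questions in Step 2:
# inputs -> activities -> outputs -> outcomes -> impact.
for level in ["inputs", "activities", "outputs", "outcomes", "impact"]:
    print(level, ":", logic_model[level])
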
Step 3: Select the most important indicators
There is no need to measure every aspect of service delivery and outputs. Fewer measures may
deliver a stronger message. Institutions should select indicators that measure important aspects of
the service that is being delivered, such as critical inputs, activities and key outputs. When selecting
indicators, it is important to keep the following elements in mind:
a) Clear communication: the indicators should communicate whether the institution is achieving
the strategic goals and objectives it set itself. The indicators should also be understandable to
all who need to use them.
b) Available data: the data for the chosen indicators needs to be readily available.
c) Manageability: the number of indicators needs to be manageable. Line managers would be
expected to track a greater number of indicators pertaining to a particular programme than, say,
the head official of the institution or the executive authority.
Step 4: Set realistic performance targets
When developing indicators there is always a temptation to set unrealistic performance targets.
However, doing so undermines the credibility of the institution and damages staff morale. Effective
performance management requires realistic, achievable targets that challenge the institution and
its staff. Ideally, targets should be set with reference to previous and existing levels of achievement
(i.e. current baselines), and realistic forecasts of what is possible. Where targets are set in relation
to service delivery standards it is important to recognize current service standards and what is
generally regarded as acceptable. The chosen performance targets should:
a) Communicate what will be achieved if the current policies and expenditure programmes are
maintained
b) Enable performance to be compared at regular intervals - on a monthly, quarterly or annual
basis as appropriate
c) Facilitate evaluations of the appropriateness of current policies and expenditure programmes.
Step 5: Determine the process and format for reporting performance
Performance information is only useful if it is consolidated and reported back into planning,
budgeting and implementation processes where it can be used for management decisions,

particularly for taking corrective action. This means getting the right information in the right
format to the right people at the right time. Institutions need to find out what information the
various users of performance information need, and develop formats and systems to ensure their
needs are met.
Step 6: Establish processes and mechanisms to facilitate corrective action
Regular monitoring and reporting of performance against expenditure plans and targets enables
managers to manage by giving them the information they need to take decisions to keep service
delivery on track. The information should help managers establish:
a) What has happened so far?
b) What is likely to happen if the current trends persist, say, for the rest of the financial year?
c) What actions, if any, need to be taken to achieve the agreed performance targets?
Measuring, monitoring and managing performance are integral to improving service delivery.
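
The three questions above amount to a simple calculation: total what has been delivered so far, project the year-end position if the current rate persists, and compare the projection with the agreed target. The sketch below uses hypothetical monthly figures for an illustrative indicator; real monitoring would draw these figures from the institution's performance information system.

def project_year_end(actuals_to_date, periods_in_year=12):
    """Project year-end performance assuming the average rate so far persists."""
    periods_elapsed = len(actuals_to_date)
    average_per_period = sum(actuals_to_date) / periods_elapsed
    return average_per_period * periods_in_year

# Hypothetical indicator: outpatient clinic sessions delivered per month
monthly_sessions = [410, 395, 420, 405, 398, 402]   # first six months
annual_target = 5000

projected = project_year_end(monthly_sessions)
print(f"Delivered so far: {sum(monthly_sessions)}")                       # what has happened
print(f"Projected year end: {projected:.0f} vs target {annual_target}")   # likely outcome
if projected < annual_target:
    print("Corrective action needed to stay on track for the agreed target.")
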

Integrated Performance Information Structures and Systems


Performance information systems should be integrated within existing management processes
and systems. The head official of the institution is responsible for ensuring that the institution
has:
1. Documentation addressing the following:
a) Integration of performance information structures and systems within existing management
processes and systems
b) Definitions and technical standards of all the information collected by the institution
c) Processes for identifying, collecting, collating, verifying and storing information
d) Use of information in managing for results
e) Publication of performance information.
2. Appropriate capacity to manage performance information, including:
a) Appropriate systems to collect, collate, verify and store the information
b) Consultation to ensure the information needs of different users are taken into consideration
when specifying the range of information to be collected
c) Processes to ensure the information is appropriately used for planning, budgeting and
management within the institution
d) Processes to set performance standards and targets prior to the start of each service delivery
period

e) Processes to review performance and take management action to ensure service delivery
stays on track
f) Processes to evaluate performance at the end of a service delivery period.
g) Processes to ensure that responsibility for managing performance information is included in
the individual performance agreements of line managers and other officials
h) An identified set of performance indicators for reporting for oversight purposes.

Management capacity
The accounting officer or head official of an institution must ensure there is adequate capacity to
integrate and manage performance information with existing management systems. Each
institution will need to decide on the appropriate positioning of the responsibility to manage
performance information. Ideally, this capacity should be aligned to the planning and financial
management functions. This responsibility needs to focus on the overall design and management
of indicators, data collection, collation and verification processes within the institution. Where
such systems are lacking, it is necessary to support the relevant line manager to put them in place.
Line managers remain responsible for establishing and running performance information systems
within their sections, and for using performance information to make decisions.

Barriers to Performance Improvement


There are many barriers to performance improvement. It is beneficial to be aware of them so you
can recognize them and find ways to overcome them.
1) Resistance to change is the most pervasive and most common barrier to performance
improvement. It is human nature to resist change and it is a difficult barrier to overcome.
Change is a process that requires continuous negotiation and advocacy.
2) Lack of commitment to performance improvement from leadership and/or employees. Many
hospital administrators, managers, physicians and staff view performance improvement as a
necessary task that they must perform in order to meet accreditation requirements and do not
really believe that the process of improvement works.
3) Needs for professional autonomy, such as physicians’ history of self-governance and peer
review, can be a barrier. Many healthcare professionals do not want to participate in team
performance improvement activities because they feel that others are not qualified to ‘judge’
their performance. This behavior needs to be addressed.

4) Healthcare organizations have limited resources and many view performance improvement
activities as merely a cost center and not adding value to the organization (lack of
commitment).
5) There exists in healthcare a culture of shame, blame and fear associated with medical errors
and undesirable performance.
6) Turf issues among professionals (such as physicians and administrators) and departments (such
as admitting and nursing) are common problems.
7) Time constraints are often cited as a reason for not being able to participate in performance
improvement activities. Historically, administrators/management have not made giving staff
time to participate in improvement activities a priority.
8) Team members and others come to the project with their own agendas and work to achieve
their own goals that may or may not be in the best interest of the project.
9) Large improvement projects that drag on for long periods of time and lose focus or have little
success may suffer from loss of momentum.
10) The performance improvement process is too complex and unwieldy. Teams get bogged
down in minutiae instead of rapid cycles of improvement that obtain results and reinforce that
the process does work.

References

Berwick, D. M., A. B. Godfrey, and J. Roessner. 1990. Curing Health Care: New Strategies
for Quality Improvement. San Francisco: Jossey-Bass.
Blumenthal, D., and J. N. Edwards. 1995. ‘‘Involving Physicians in Total Quality Management:
Results of a Study.’’ In Improving Clinical Practice: Total Quality Management and the Physician,
edited by D. Blumenthal and A. C. Sheck, pp. 229–66. San Francisco: Jossey-Bass.
Blumenthal, D., and C. M. Kilo. 1998. ‘‘A Report Card on Continuous Quality Improvement.’’
Milbank Quarterly 76 (4): 625–48, 511.
Bradley, E. H., J. Herrin, J. A. Mattera, et al. 2005. ‘‘Quality Improvement Efforts and
Hospital Performance: Rates of Beta-Blocker Prescription after Acute Myocardial Infarction.’’
Medical Care 43 (3): 282–92.
Carlin, E., R. Carlson, J. Nordin. 1996. ‘‘Using continuous quality improvement tools to improve
pediatric immunization rates.’’ Journal on Quality Improvement 22 (4): 277–88.
Carman, J. M., SM. Shortell, RW. Foster, et al. 1996. ‘‘Keys for Successful Implementation of
Total Quality Management in Hospitals.’’ Health Care Management Review 21 (1): 48–60.
Dean JW, Bowen DE. 1994. Management theory and total quality management: improving
research and practice through theory development. Acad. Manag Rev 19 (3): 459–80.
Dubois, R. W., and R. H. Brook. 1988. ‘‘Preventable Deaths: Who, How Often, and Why?’’
Annals of Internal Medicine 109 (7): 582–9.
Ferlie, E. B., S. M. Shortell. 2001. ‘‘Improving the Quality of Health Care in the United Kingdom
and the United States: A Framework for Change.’’ Milbank Quarterly 79 (2): 281–315.
Gallivan, MJ. 2001. Information technology diffusion: a review of empirical research. Database
for Advances in Information Systems 32: 51–85.
Gilman, S. C., J. C. Lammers. 1995. ‘‘Tool use and team success in CQI: are all tools created
equal?’’ Quality Management in Health Care 4 (1): 56–61.
Hackman JR, R. Wageman. 1995. Total quality management: empirical, conceptual, and practical
issues. Administrative Science Quarterly 40 (2): 309–42.
Halm, E. A., C. Horowitz, A. Silver, et al. 2004. Limited impact of a multicenter intervention to
improve the quality and efficiency of pneumonia care. Chest 126 (1): 100–7.
Institute of Medicine. 2000. To Err Is Human. Washington, DC: National Academy Press.

Institute of Medicine. 2001. Crossing the Quality Chasm. Washington, DC: National Academy
Press.
Kinsman L, E James, J Ham 2004. An interdisciplinary, evidence-based process of clinical
pathway implementation increases pathway usage. Lippincotts Case Management 9 (4): 184–96.
Krishnan, R., A. B. Shani, R. M. Grant, R. Baer. 1993. ‘‘In Search of Quality Improvement:
Problems of Design and Implementation.’’ Academy of Management Executive 7 (4): 7–20.
Klein, K. J., A. B. Conn, J. S. Sorra. 2001a. ‘‘Implementing Computerized Technology.’’
Journal of Applied Psychology 86 (5): 811–24.
Klein KJ, JS Sorra. 1996. The challenge of implementation. Acad Manag Rev 21 (4): 1055–80.
Leape LL. 1994. ‘‘Error in Medicine.’’ JAMA 272; (23): 1851–7.
Lurie, J. D., E. J. Merrens, J. Lee, and M. E. Splaine. 2002. ‘‘An Approach to Hospital
Quality Improvement.’’ Medical Clinics of North America 86 (4): 825–45.
McLaughlin, C. P., A. D. Kaluzny. 1994. Continuous Quality Improvement in Health Care:
Theory, Implementation, and Applications. Gaithersburg, MD: Aspen Publishers, Inc.
Mitchell, P. H., S. M. Shortell. 1997. ‘‘Adverse Outcomes and Variations in Organization
of Care Delivery.’’ Medical Care 35 (11 suppl): N19–32.
O’Brien JL, Shortell SM, Hughes F, et al. 1995. ‘‘An integrative model for organization-wide quality
improvement: lessons from the field.’’ Quality Management in Health Care 3 (4): 19–30.
Powell, T. C. 1995. ‘‘Total Quality management as competitive advantage: a review and empirical
study.’’ Strategic Management Journal 16 (1): 15–37.
Rogers EM. 2003. The Diffusion of Innovations. New York: Free Press.
Shortell, S. M. 1995. ‘‘Physician Involvement in quality improvement: issues, challenges and
recommendations.’’ In improving clinical practice: total quality management and the physician,
edited by D. Blumenthal, A.C. Sheck, pp. 207–17. San Francisco: Jossey-Bass.
Shortell, S. M., C. L. Bennett, G. R. Byck. 1998. ‘‘Assessing the Impact of Continuous
Quality Improvement on Clinical Practice: What It Will Take to Accelerate Progress.’’ Milbank
Quarterly 76 (4): 593–624, 510.
Shortell, S. M., R. H. Jones, A. W. Rademaker, et al. 2000. ‘‘Assessing the Impact of Total Quality
Management and Organizational Culture on MultipleOutcomes of Care for Coronary Artery
Bypass Graft Surgery Patients.’’ Medical Care 38 (2): 207–17.

Shortell, S. M., D. Z. Levin, J. L. O’Brien, E. F. Hughes. 1995a. ‘‘Assessing the Evidence on CQI:
Is the Glass Half Empty or Half Full?’’ Hospital & Health Services Administration 40 (1): 4–24.
Shortell, S. M., J. L. O’Brien, J. M. Carman, et al. 1995b. ‘‘Assessing the Impact of Continuous
Quality Improvement/Total Quality Management: Concept versus Implementation.’’
Health Services Research 30 (2): 377–401.
Wakefield, B. J., M. A. Blegen, T. Uden-Holman, et al. 2001. ‘‘Organizational culture, continuous
quality improvement, and medication administration error reporting.’’ American Journal of
Medical Quality 16 (4): 128–34.
Weiner, B. J., J. A. Alexander, and S. M. Shortell. 1996. ‘‘Leadership for Quality Improvement in
Health Care: Empirical Evidence on Hospital Boards, Managers, and Physicians.’’ Medical Care
Research and Review 53 (4):397–416.
Weiner BJ, Alexander JA, Shortell SM, et al. Quality improvement implementation and hospital
performance on quality indicators. Health Services Research 2006; 41:2
Weiner, B. J., S. M. Shortell, and J. A. Alexander. 1997. ‘‘Promoting Clinical Involvement
in Hospital Quality Improvement Efforts: The Effects of Top Management, Board, and Physician
Leadership.’’ Health Services Research 32 (4): 491–510.
Westphal, J. D., R. Gulati, and S. M. Shortell. 1997. ‘‘Customization or Conformity?’’
Administrative Science Quarterly 42 (2): 366–94.

CHAPTER 6: ETHICAL ISSUES OF QUALITY IMPROVEMENT


There is growing recognition of the importance of quality improvement interventions to ensure
healthcare is effectively and efficiently delivered. Many healthcare organizations have created
improvement offices and identified improvement leaders to foster these efforts. The result has
led to the implementation of countless improvement initiatives to enhance the quality of patient
care. A QI project may evaluate procedures that pose no more than minimal risk to patients if it
involves usual care practices, allows ongoing modification of the project, and typically involves
personnel working at the local institution (Platteborze et al., 2010). On the other hand, some QI
projects test new interventions against each other or against the usual care provided (standard of
care). Similar to ethical issues arising in human subjects research, ethical issues can
arise in QI activities. Even though there have been no scandals involving QI activities, there
certainly are situations when QI activities create ethical concerns. For example, because QI
activities are data-guided interventions designed to bring improvement to specific settings, using
an inappropriate methodology to achieve the stated goals will render the resulting findings
meaningless or harmful to patients or implementers. Such a situation is an ethical concern because
of the harm or wasted resources resulting from use of an inappropriate methodology.

QI activities can create harm when privacy and confidentiality are breached or when patients are
affected unfairly. Furthermore, the lack of a clearly applied distinction between QI and research
along with the lack of QI ethical standards serves as an incentive for some to designate a research
study as a QI activity, thus circumventing the more rigorous research review process. The extent
of this problem is not known, yet it does present another ethical concern. Due to both the increase
in QI activities and the potential for patient privacy breaches, wasted resources and violations in
professional integrity, there is need to ensure that QI activities are conducted within the context
of ethical behavior. These activities ought to be facilitated and monitored within the context of an
ethical framework to protect participants and the validity of the activity. Patient safety concerns
are common among health-care systems worldwide. Preventable harms result in pain, suffering,
and even death for patients and lead to increased costs for medical systems. Patient safety concerns
are now regarded as a serious public health threat. QI can be defined as systematic data-guided
activities designed to bring about immediate improvements in the delivery of health care within a
specific unit, institution, or system (Lynn et al., 2007). The purpose of QI activities is to determine

or improve quality, improve patient services, and/or improve the performance or provision of
health care, usually within a specific health care unit, institution, organization or system
(Platteborze et al., 2010).
Quality Improvement Efforts and Ethical Standards
Quality care is a patient expectation and a responsibility of clinicians. Understanding the
relationship between quality and ethics can strengthen efforts to provide safe, high-quality care in
an ethical manner. Such an understanding allows providers and executives to see the synergy
between quality improvement efforts and ethics initiatives. Ethics is both the foundation
for quality healthcare and a driver for achieving quality healthcare. Quality and safety of care is
an expectation of all patients and is typically a prominent part of a healthcare facility’s mission
statement. Patients expect that the delivery of their care will be ethical, and this is often described
in a healthcare organization’s value statement. The expectation for (and the goal of) delivering
ethical and quality care reflect a strong and interdependent linkage between the two concepts.
Quality care is built on ethical standards and principles, and ethical practices foster quality care,
making the two inseparable. Just as quality and ethics are linked, so should healthcare programs
and quality improvement efforts be linked.
Ethics is the foundation of quality
Several fundamental ethical principles drive the goal of providing high quality healthcare. The
principles are: autonomy (do not deprive freedom), beneficence (act to benefit the patient, avoiding
clinician or executive self-interest), nonmaleficence (do not harm), justice (fairness and equitable
care), and do your duty (adhering to one’s professional and organizational responsibility). These
ethical principles form the foundation for a healthcare organization’s mission, staff members’
values and clinicians’ professional activities. Adhering to these principles and organizational
values is required to ensure quality care and patient safety, which makes it an organization’s
mandate to ensure that quality care is achieved in all patient encounters. Therefore, ethics is the
driver behind the goal of quality healthcare.
Ethics is the foundation for the defining dimensions of quality care.
The Institute of Medicine’s report, Crossing the Quality Chasm: A New Health System for the 21st
Century (2001), describes the key dimensions of care that need improvement. Care should be:
safe, effective, patient-centered, timely, efficient and equitable, all elements that are synergistic
with ethics and founded in the ethical concepts of quality care. For example, a patient-centered

approach to healthcare means respectful adherence to the patient’s preferences and values through
a shared decision-making process. Such an approach is founded on the ethical principles of
autonomy and self-determination, and is delineated in most healthcare organizations’ ethical
standards of practice and informed consent policies. Health equity is another aspect of quality
care that reflects an ethical understanding that all patients should receive quality care regardless of
their personal characteristics or socioeconomic status. Equity is based on the ethics concepts of
distributive justice and fairness.

Why Ethics of QI Matters


Quality improvement efforts should reflect ethical standards.
Just as clinical care and research should meet ethical standards, so should quality improvement
efforts. Ethical concerns can arise when quality improvement activities cause harm or use
resources inappropriately. When a quality improvement effort is considered research, the activity
needs independent review by ethics committees according to national guidelines to ensure
compliance with ethical standards. More importantly, participating patients may need to consent
individually to such a research QI effort. Even where the QI effort does not involve human
subjects’ research, such as a data-gathering activity, the activity should be undertaken in
accordance with applicable ethical standards. These standards address the social or scientific value
of the QI activity; scientifically valid methodology; fair participant selection to achieve a fair
distribution of burdens and benefits; a favorable risk-benefit balance, limiting risks and maximizing
benefits; respect for participants by protecting privacy and confidentiality; and informed consent,
which for minimal-risk QI activities may be incorporated into a patient’s general consent for
treatment. Such initiatives may require independent review of the ethical conduct and
accountability of the QI activity (Lynn et al, 2007). Healthcare managers should develop a
system-oriented approach that ensures quality improvement activities are planned and implemented
in accordance with ethical standards. Ethics committees can serve both as resources to clinicians
planning and implementing QI programs and as guardians, through oversight, of such efforts.

The gaps between evidence-based practice and actual patient care delivered in healthcare
organizations are well documented. Healthcare professionals and organizations have an ethical
obligation to close the gaps in implementation of best practices and to overcome patient care
quality and safety shortcomings. Disciplined and focused QI efforts can increase the effectiveness

and safety of healthcare, and therefore, can be seen as an ethical imperative in healthcare services.
Failure to undertake QI projects could be harmful if the lack of participation perpetuates unsafe,
unnecessary or ineffective clinical practice. Widely accepted ethical standards exist for many
activities carried out in healthcare organizations, such as medical treatment and research. However,
arrangements for ensuring that QI and clinical audit projects conform to appropriate ethical
standards seem to be fragmented, and such standards have not been clearly or thoroughly
described. Many people think that only research studies require ethics review and that a QI project
or a clinical audit, which may involve using data that have been previously captured for patient
care, cannot have ethical implications. However, this assumption may not be justified. Any activity
that poses a risk of psychological or physical harm to any patient should have ethical consideration.
This includes clinical audit aimed at QI.

Healthcare organizations should provide ethical oversight of QI projects and clinical audits
because: Patients or carers can potentially experience burdens or risks through their participation
in these activities. Also, some patients may benefit at the expense of others. Besides, projects
undertaken may not represent priorities for improving care based on risk-benefit analysis from a
patient care perspective. Though QI and clinical audit projects have a different intent and focus,
the requirement for ethical consideration and oversight of QI activities should be no less stringent
than what is mandated for clinical research. Even then, QI activities can create potential conflicts
of interest when findings indicate shortfalls in care. The ethical duties of a healthcare organization
to all its patients need to be considered formally in such situations. Moreover, QI projects that are
not carried out properly are unlikely to benefit patients or patient care, and may even compromise
patient safety. If QI or clinical audit projects are poorly designed and unlikely to yield useful
results, the activity is not ethically justified. Furthermore, clinicians, intentionally or
unintentionally, could avoid the research ethics review process by designating a project as a QI
project or clinical audit rather than as research, thereby subjecting patients or participants to
unnecessary risk. Conversely, true research on QI interventions or the QI process itself
may not be recognized as research, and therefore, may not have appropriate ethics review.
Why QI interventions May Require Ethical Review
Ethics review of proposed research studies is required because, while there should be clinical
equipoise (that is, genuine uncertainty whether a treatment will be beneficial) there is risk that the

person may receive a treatment that is not optimal or may even be harmful (Lo and Groman, 2003).
Participation in research is voluntary, and therefore each participant in a research study is entitled
to choose whether or not to be a research participant. Individuals who volunteer to participate in
research should be safeguarded through effective ethical review of proposed research projects. It
is necessary to distinguish research, clinical audit and QI projects to ensure that each activity has
the appropriate type of ethics review or ethical oversight, though often, there is significant overlap,
particularly in implementation research and pragmatic clinical trials. A number of concepts have
been suggested as the basis for differentiating between research and QI or clinical audit, such as
purpose, systematic approach, production of generalisable new knowledge, treatment allocation,
intention to publish, and focus on human participants. These concepts have not been validated as
reliably discriminating between research and QI studies. Moreover, as QI studies become more
popular and sophisticated, many of these concepts can potentially apply to both research and QI
studies.

The problem of reliable differentiation


Research Ethics Committees, medical directors, QI practitioners and journal editors are not
consistent in reaching decisions as to whether a proposed project represents research or a QI
project. There may also be disagreement among clinicians in different countries, caused by, or
leading to, misunderstanding among colleagues or authorities as to what constitutes research as
opposed to a QI project.
Tools to distinguish between QI and research
A number of tools have been developed to help practitioners decide if the activity they propose is
a QI project or a research study, and whether or not the project requires ethics review. There may
be situations or circumstances in a QI project or clinical audit that require ethical consideration
before the project starts. Many healthcare organizations already have a well-
established process for reviewing proposals for clinical audits, and these processes can be used to
identify any possible ethical issues related to the topic or the design of a clinical audit. Ethical
issues also could arise when data collection for a clinical audit reveals that patients are at risk
because they don’t receive appropriate, effective or timely care. If action is not taken to improve
the quality or safety of care, the continuous risk to patients may become an organizational ethical
issue. Organizations may not have similar arrangements for reviewing QI project proposals. Staff
members are encouraged to develop and carry out QI projects, often without a definite framework

to follow that would ensure that any ethical issues embedded in a project are identified and
managed appropriately.

Independent Review Boards and review of QI Project Proposals


Ethical issues arise in QI because attempts to improve quality may inadvertently cause harm, waste
scarce resources, or affect some patients unfairly. For example, efforts at earlier administration of
antibiotics for pneumonia may lead to overuse, or efforts to encourage cancer screening may
prompt useless, risky, and expensive tests in people who are too near death to benefit. In addition,
some activities using QI methods have been categorized as research that uses patients as subjects,
which brings the activities under the ethical and regulatory requirements governing human subjects
research, including review by institutional review boards (IRBs). Yet putting improvement
activities under research regulations can precipitate substantial delays, costs, and conflicts. The
following situations indicate that a QI or clinical audit proposal warrants ethical review (a brief
screening sketch follows the list):
1. Infringing patient rights
Review any activity that limits or restricts patients’ rights to make choices about their healthcare,
such as restricting access to evidence-based practice.
2. Risk breaching confidentiality or privacy
Review any of the following situations: collecting or disclosing data that could be used to identify
any patient; using such small sample sizes that individual patients can be identified; or having
someone collect data who does not normally have access to patients’ information or records.
3. Placing a burden on a patient beyond those of his or her routine care
Review the following types of activities: A patient is required to spend additional time for data
collection, provide samples not essential for care or attend extra clinic or home visits; a vulnerable
person is required to participate directly; or a patient is asked to answer more than a minimal
number of factually based questions or to provide sensitive information.
4. Involving any clinically significant departure from usual clinical care
Review an activity that varies from accepted current clinical practice or that causes any disruption
in the clinician-patient relationship.
5. Involving a potential conflict of obligation to patients
Review any activity that involves a trade-off between cost and quality for individual patients or a
group of patients.
6. Involving the use of any untested clinical or systems intervention
Consider the risk patients could face if an activity involves implementing a new practice that is
not yet established, however promising it may appear.

7. Allocating any interventions differently among groups of patients or staff
Review if different groups of patients are to be assigned to different interventions or treatments,
or if patients are to be recruited to participate in an activity.
8. Not providing direct benefit to patients or patient care (Carr, 1999)
Review any activity that does not directly benefit the participating patients, to ensure that the risk
to patients is acceptable.
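
These triggers can be turned into a simple screening checklist that a QI or clinical audit office might adapt when logging new proposals. The sketch below is illustrative only: the questions paraphrase the criteria above, the field names are hypothetical, and a positive screen signals the need for ethical review rather than replacing the judgement of a review body.

REVIEW_TRIGGERS = {
    "limits_patient_rights": "Limits or restricts patients' rights or choices about their care",
    "confidentiality_risk": "Could identify patients or uses data collectors without routine access",
    "extra_patient_burden": "Adds burdens beyond routine care (extra visits, samples, sensitive questions)",
    "departs_from_usual_care": "Involves a clinically significant departure from usual care",
    "conflict_of_obligation": "Trades off cost against quality for identifiable patients",
    "untested_intervention": "Introduces a clinical or systems intervention that is not established practice",
    "differential_allocation": "Allocates interventions differently among groups of patients or staff",
    "no_direct_benefit": "Offers no direct benefit to the participating patients",
}

def screen_proposal(answers):
    """Return the triggers that apply; any hit suggests the proposal needs ethical review."""
    return [desc for key, desc in REVIEW_TRIGGERS.items() if answers.get(key, False)]

# Hypothetical proposal: a clinical audit that samples identifiable patient records
proposal_answers = {
    "confidentiality_risk": True,
    "no_direct_benefit": True,
}
triggered = screen_proposal(proposal_answers)
if triggered:
    print("Refer for ethical review:")
    for reason in triggered:
        print(" -", reason)
else:
    print("No review triggers identified; proceed under routine QI oversight.")
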

The Importance of Addressing Ethical Issues in QI Interventions


Much attention in the literature focuses on which patient safety projects require ethical oversight.
This has been a particularly challenging area because of the confusion that exists regarding when
a patient safety activity counts as research. Research is defined as an activity that involves
systematic collection of data for the purposes of developing or contributing to generalizable
knowledge, whereas ‘‘clinical practice is an exchange of information between an individual
patient and members of a healthcare delivery team.’’ Ethical guidelines and regulations
require additional oversight for activities considered research (such as third-party prior review of
the research and informed consent to ensure that participants’ welfare or rights are not unduly
compromised) but not for those considered routine practice. According to existing ethical guidance
and regulations, being able to distinguish which activities count as research from activities that do
not becomes an important responsibility. However, the definitions of research provided in ethical
guidelines and regulations result in confusion for patient safety professionals, as many patient
safety/quality improvement activities are carried out to improve the quality of care within a given
health-care setting and yet may also produce information that is generalizable or publishable.
Furthermore, quality improvement and patient safety interventions collect data in a systematic
manner, always to improve care of future patients in that facility and sometimes to improve care
of patients more broadly.

The ethics literature related to patient safety has devoted attention to when patient safety activities
should be considered research for the purposes of requiring ethical oversight, outlining various
criteria that can be helpful for making such a determination. These criteria fall into
several broad categories including the purpose of the project, the design of the project, whether
those directing and/or funding the project are internal or external to the institution where the

project will be implemented, and the generalizability of the project’s results to other settings or
future patients. Each of these criteria has been recommended as a useful indicator of whether a
project constitutes research and whether it must be reviewed by a REC.
Generation of new knowledge versus implementing practices based on existing knowledge
1) The purpose of a patient safety activity, that is, whether the project is intended to generate new
knowledge or to implement practices based on existing knowledge, is relevant in determining
whether the project should be considered research. If the stated purpose
of a project is to generate new knowledge and if it is designed with the scientific rigor to be
able to actually produce such knowledge, then, that project likely would be considered research
requiring ethical review. Yet patient safety activities designed to measure compliance with
recommended strategies, such as hand washing, or to improve compliance in individual
settings, generally would not be considered research.
2) Several aspects of the design of a patient safety activity have been cited as relevant to
determining whether the activity constitutes research. Research projects commonly rely on
strict protocol designs, whereas patient safety activities are generally more flexible because the
objective of these projects is often to bring about immediate improvements in care. QI methods
often require repeated modifications in the initial protocol as experience accumulates over time
and as the desired changes engage the changing factors in the context (local structures, processes,
patterns, habits, and traditions). Also, QI projects typically apply to all patients in the care setting
rather than to a selected sample of participants. When a project involves randomization, it is more
likely to be a research project, as projects involving randomization are generally less flexible than
other evaluation designs.
3) The turnaround time from data collection to implementation also matters in distinguishing
research from QI projects. The QI results are reported back to the health-care organization(s)
or teams where the project was implemented in a timely manner, usually as an iterative process,
and this is a necessary part of the design. This suggests that patient safety activities may be
more likely to provide direct feedback (and implement changes) to those who were involved
than research projects are. Also, in many patient safety activities, the results are continuously
reported back to clinicians and clinical managers and changes to the protocol can be made
quickly, based on the data and fed back.
Project Funding Source: External Versus Internal

The funding source has been suggested as a criterion for determining whether a project constitutes
research: patient safety activities funded by external sources are more likely to be research, whereas
activities funded through internal institutional sources are more likely not to be. Others argue,
however, that a project’s funding source is not in itself a relevant criterion; what matters is the
project’s intentions and goals.
Generalizability of the Study Findings
Research is designed predominantly to benefit future patients or patients in other settings rather
than the participants involved with the project, unlike QI interventions, which are primarily
designed to benefit participants in a timely manner. Generalizable knowledge refers to the applicability of
the results to other settings, other practitioners, and other patients as well as to the enduring nature
of the knowledge gained. However, there is disagreement regarding the point at which knowledge
generated should be considered generalizable. Generalizability may imply that the results of a
project are applicable across settings in other organizations outside of those involved in the study.
Also, projects initially designed to improve care at a local setting may have results
that could be applied to other settings as well, making it difficult to delineate the point at which
the results of a project count as generalizable knowledge. One approach to identifying whether
results are potentially generalizable or not is to review the project’s hypothesis. If the hypothesis
is worded more generally, the findings of the study are potentially meant to be broadly applicable
to society and future patients. However, if a QI project’s hypothesis clearly specifies a time and
place where the results are meant to apply, then the project is less likely to be viewed as a research
project.

Also, research tends to be conducted in multiple settings to increase the generalizability of research
findings. The number of sites participating in a patient safety activity may be used to differentiate
research from QI activities, as QI patient safety activities are primarily meant to
inform local practice. These projects are implemented in particular localized health-care settings,
and the project is designed to incorporate specific features of that setting. However, the
involvement of multiple institutions does not necessarily mean that a project ought to be
considered research since the results of that project can still be tied to the time and place where the
project was implemented, such as a large organization or healthcare system. The fact that the
organizations cooperate to share insights about the process of change within each organization

may mean that the interventions are a comparison of organizational behavior rather than being
research on human subjects.

Whether the primary intention is to disseminate the results may also suggest whether a QI project
is research. Plans to disseminate the results of a project broadly can be considered a proxy for the
intent to produce generalizable knowledge, and the primary goal of a QI project should not be to
disseminate information to a larger audience. Where dissemination is the primary goal, the purpose
is no longer to improve internal processes but rather to contribute to generalizable knowledge, and
the project should be treated as a research activity. This criterion may, however, discourage patient
safety professionals from disseminating the results of their projects, which has the potential to
delay uptake of the QI knowledge gained.
Oversight of patient safety practice activities
Some authors question whether it is useful to try to distinguish patient safety research from
practice, suggesting instead that ethical protection should be in place for all such
activities (Casarett et al, 2000; Byers and Aragon, 2003; Harrington, 2007; Diamond et al, 2004;
Dovey et al, 2011). The critical issue is not whether QI implementers are doing research;
it is whether appropriate steps are taken to protect those people who participate in their efforts to
improve care. The guiding principle should be that activities whose goals extend beyond the
immediate interests of patients should be interpreted as research and should undergo independent
review to ensure that patient interests are protected and patient safety is optimized. Requiring
patient safety programs to undergo oversight and approval by ethics committees is necessary as
those who designed the project may have a conflict of interest. However, requiring all patient
safety projects that collect data systematically to undergo review by RECs may become a
disincentive for clinicians, administrators, and other health-care staff who are passionate about QI,
and may hinder them from collecting rigorous data that address patient safety questions.

Ethical oversight may not be required for all patient safety projects, but the decision as to which
projects should be reviewed should be guided by institutional guidelines and national ethical
review structures rather than left to individual practitioners or healthcare institutions (Bellin and
Dubler, 2001; Doezema 2002; Kass et al, 2008; Nerenz, 2009; Platteborze et al, 2010). Ethical

oversight should be required for patient safety or QI research projects where the risk of harm to
participants is greater than minimal risk (minimal risk being defined as the amount of risk
inherent in clinical practice) or where reliable confidentiality measures are not in place. If ethics
oversight were to be expanded to all patient safety activities (rather than those strictly defined as
research), current RECs may not be appropriate bodies of oversight (Bottrell, 2006; Grady, 2007;
Lynn et al, 2007). Many RECs are already overburdened with reviewing research protocols, and
may not have the expertise to review such projects, as methods used in patient safety research often
differ from those used in research on health technologies or health-care interventions. Also,
protocols for patient safety research are often more flexible and more closely integrated with
clinical care than other research, and members of RECs may be less familiar with the methods
used to conduct these activities (Newhouse et al, 2006; Lemaire, 2008; Cacchione, 2011). Even
so, projects submitted to RECs for review may lead to confusion among REC members regarding
whether a patient safety research project should undergo expedited or full committee review.
Besides, where multisite patient safety projects are reviewed by RECs, different RECs can vary
widely in their review of those projects (Doyal, 2004; Ezzat et al, 2010).

Baily et al (2008) have put forward models they think would be more appropriate forms of
oversight for patient safety and quality improvement efforts. They recommend 3 levels of
oversight for different types of quality improvement projects (Redman, 2007; Baily, 2008; McNett
and Lawry, 2009; Siegel, 2009):
1) Professional responsibility of QI such as minimal risk activities that ‘‘are simple in design, so
there is no need for methodological review,’’ for projects whose effects ‘‘are very local, in the
sense that their success or failure will have no repercussions on other parts of the organization’’
2) Local management review and supervision of quality improvement for ‘‘activities designed to
improve care in the local setting that require at least some monitoring by management’’
3) QI projects involving human subjects that ought to be reviewed by a REC.
REC members and investigators need to consider whether any of the interventions are
experimental, whether the introduction of the protocol increases risks to patients, and
whether the interventions could have been introduced into clinical care without doing research. If
all of the interventions are based on evidence-based standards and present no additional risk
beyond standard clinical care, and if the intervention could have been introduced into clinical care

without the specific informed consent of patients, then the rights of patients are not violated if
informed consent is not obtained (Cretin et al, 2000; Tapp et al, 2010). Patients should be
prohibited from opting out of minimal risk quality improvement activities, given the importance
of such activities for ongoing high-quality patient care (Baily et al, 2006). Although this guidance
from Baily and colleagues is helpful, there is a need for additional guidance on the necessity of
and/or best practices for obtaining consent when entire clinical teams are the subject of patient
safety research. Patient safety checklist studies, for example, sometimes document whether the
team, as a whole, attended to certain activities, or projects review medical charts revealing an entire
team’s interactions with a patient. Ethical concerns pertaining to the use of deception in patient
safety research are closely linked to an ethical duty of truth telling and to foundational
commitments to respect for persons.

Deception is not permissible in QI interventions. Deception occurs when researchers ‘‘deliberately
misinform subjects to study their attitudes and behaviour’’. Of particular ethical concern with
regard to deception is that ‘‘covert methods can infringe on interests that people hold concerning
research participation, and the sharing of private details.’’ Given this, the use of deception in
research is still highly controversial. CIOMS adds that ‘‘Deception is not permissible in cases in
which the deception itself would disguise the possibility of the subject being exposed to more than
minimal risk. When deception is deemed indispensable to the methods of a study the investigators
must demonstrate to an ethics review committee that no other research method would suffice; that
significant advances could result from the research; and that nothing has been withheld that, if
divulged, would cause a reasonable person to refuse to participate.’’ The public health threat of
preventable harms, which compromise patient safety, is now well established within the literature.
Given this threat, it is critical that health-care organizations
and health-care systems establish best practices for improving patient safety and implement
projects to demonstrate the effectiveness of interventions aimed at improving patient
safety within their organization or system.

Creating an Ethical Framework for QI Activities


For quite some time there was no comprehensive description of an ethical framework for QI
activities. However, that has changed. A group of clinicians, improvement leaders, ethicists and
other healthcare professionals authored an important manuscript proposing requirements for the

ethical conduct of quality improvement. The suggested ethical requirements were offered in an
article in the May 1, 2007, issue of Annals of Internal Medicine by Joanne Lynn, MD, and
colleagues. The authors’ suggested requirements include that a QI activity have:
1) Social or scientific value—The anticipated improvement from the QI activity should justify
the effort in the use of time and resources.
2) Scientific validity—The QI activity must be methodologically sound.
3) Fair patient selection—The participants in the QI activity should be selected to achieve fairness
in the benefits and burdens of the intervention.
4) Favorable benefit/risk ratio—The QI activity should limit risks, such as those to privacy and
confidentiality, and maximize benefits to participants.
5) Respect for participants—The QI activity is designed to protect patients’ confidentiality and
make them aware of findings relevant to their care. Also, participants should receive basic
information regarding the activity.
6) Informed consent—When the activity is more than minimal risk, informed consent should be
sought.
7) Independent review—The proposed activity should be reviewed to ensure it meets the ethical
standards in place.

References
Baily MA, Bottrell M, Lynn J, et al. Ethics of using QI methods to improve health care quality and
safety. Hastings Center Special Report. July-August 2006. Available at:
http://www.thehastingscenter.org/Publications/SpecialReports/.
Baily MA. Harming through protection? N Engl J Med. 2008;358:768-769.
Bellin E, Dubler NN. The quality improvement-research divide and the need for external oversight.
Am J Public Health. 2001;91:1512-1517.
Byers JF, Aragon ED. What quality improvement professionals need to know about IRBs. J
Healthc Qual. 2003;25:4-10.
Cacchione PZ. When is institutional review board approval necessary for quality improvement
projects? Clin Nurs Res. 2011;20:3-6.
Casarett D, Karlawish JH, Sugarman J. Determining when quality improvement initiatives should
be considered research: proposed criteria and potential implications. JAMA. 2000;283:2275-2280.
Cretin S, Keeler EB, Lynn J, et al. Should patients in quality-improvement activities have the same
protections as participants in research studies? JAMA. 2000;284:1786-1788
Davidoff F, Batalden P. Toward stronger evidence on quality improvement. Draft publication
guidelines: the beginning of a consensus project. Qual Saf Health Care. 2005;14:319-325.
Diamond LH, Kliger AS, Goldman RS, et al. Quality improvement projects: how do we protect
patients’ rights? Am J Med Qual. 2004;19:25-27.
Doezema D, Hauswald M. Quality improvement or research: a distinction without a difference?
IRB. 2002;24:9-12.
Dovey S, Hall K, Makeham M, et al. Seeking ethical approval for an international study in primary
care patient safety. Br J Gen Pract. 2011;61:197-204.
Doyal L. Preserving moral quality in research, audit, and quality improvement. Qual Saf Health
Care. 2004;13:11-12.
Ezzat H, Ross S, Dadelszen P, et al. Ethical review as a component of institutional approval for a
multicenter continuous quality improvement project: the investigator’s perspective. BMC Health
Serv Res. 2010;10:223-229.
Grady C. Quality improvement and ethical oversight. Ann Intern Med. 2007;146:680-681.
Harrington, L. Quality improvement, research, and the institutional review board. J Healthc Qual.
2007;29:4-9.

Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century.
Washington, DC: National Academies Press; 2001.
Kass N, Pronovost PJ, Sugarman J, et al. Controversy and quality improvement: lingering
questions about ethics, oversight, and patient safety research. Jt Comm J Qual Patient Saf.
2008;34:349-353.
Kass NE, Pronovost PJ. Quality, safety, and institutional review boards: navigating ethics and
oversight in applied health systems research. Am J Med Qual. 2011;26:157-159.
Lemaire F. Informed consent and studies of a quality improvement program. JAMA.
2008;300:1762.
Lo B, Groman M. Oversight of quality improvement: focusing on benefits and risks. Arch Intern
Med. 2003;163:1481-1486.
Lynn J, Baily MA, Bottrell M, et al. The ethics of using quality improvement methods in health
care. Ann Intern Med. 2007;146:666-673.
Lynn J. When does quality improvement count as research? Human subject protection and theories
of knowledge. Qual Saf Health Care. 2004;13:67-70.
McNett M, Lawry K. Research and quality improvement activities: when is institutional review
board review needed? J Neurosci Nurs. 2009;41:344-347.
Miller FG, Emanuel EJ. Quality-improvement research and informed consent. N Engl J Med.
2008;358:765-767.
Nelson WA, Gardent PB. Ethics and quality improvement. Quality care and ethical principles
cannot be separated when considering quality improvement activities. Healthc Exec. 2008;23:40-
Nerenz DR. Ethical issues in using data from quality management programs. Eur Spine J.
2009;18(Suppl 3):S321-S330.
Newhouse RP, Poe S, Pettit JC, et al. The slippery slope: differentiating between quality
improvement and research. J Nursing Adm. 2006;36:211-219.
Perneger TV. Why we need ethical oversight of quality improvement projects. Int J Qual Health
Care. 2004;16:343-344.
Platteborze LS, Young-McCaughan S, King-Letzkus I, et al. Performance improvement/research
advisory panel: a model for determining whether a project is a performance or quality
improvement activity or research. Mil Med. 2010 Apr;175(4):289-91.

Siegel MD, Alfano SL. The ethics of quality improvement research. Crit Care Med. 2009;37:791-
792.
Tapp L, Edwards A, Elwyn G, et al. Quality improvement in general practice: enabling general
practitioners to judge ethical dilemmas. J Med Ethics. 2010;36:184-188.
Taylor HA, Pronovost PJ, Sugarman J. Ethics, oversight and quality improvement initiatives. Qual
Saf Health Care. 2010;19:271-274.
Weiserbs KF, Lyutic L, Weinberg J. Should quality improvement projects require IRB approval?
Acad Med. 2009;84:153.
Wise L. Quality improvement or research? A report from the trenches. Am J Crit Care. 2008;17:98-
99.
World Health Organization. Global Priorities for Research in Patient Safety (first edition).
December 2008. Available at: http://www.who.int/patientsafety/research/priorities/

CHAPTER 7: TEACHING QUALITY IMPROVEMENT

‘While healthcare organizations are initiating a number of strategies to improve care and respond
to changing regulatory and policy requirements, many clinicians practicing in them have not
received training on quality and safety as a part of their formal education’

In the last two decades, much effort has been invested to improve health by advancing the quality
of health care, with an emphasis on promoting patient safety and reducing medical error (Berwick,
1989; Berwick, 1996). Accordingly, many training programs on quality improvement (QI) have
been developed for practitioners in health care (Mohr et al, 2003; Patow et al, 2009). Most of these
programs are constructed for continuous professional development (CPD) purposes aimed at
individuals who have completed their preservice clinical training, and relatively few are designed
specifically for medical residents. Numerous barriers exist to implementing these CPD based QI
training programs in residency training programs, including a lack of dedicated time in the core
residency curriculum, limited faculty who have the expertise and/or interest in the topic, and a
paucity of infrastructural support and financial resources. Reported approaches to involving
residents in clinical QI initiatives have varied widely (Mohr et al, 2003; Patow et al, 2009). Few
studies described the educational impact of residents’ participation in QI, and even fewer identified
specific improvements in patient health outcomes. More recent studies have focused on the
development of core residency-specific QI curricula (Mohr et al, 2003; Djuricich et al, 2004;
Holmboe et al, 2005; Kim et al, 2010).

Theoretical Frameworks That Inform Choice of Approach to QI Training


Experiential learning
This educational theory, most associated with David Kolb, aims to improve individuals’ skills,
knowledge and attitudes through learning from direct experience and reflection. Kolb’s
Experiential Learning Model presents a four-stage cyclical process that allows learners to 1) be
actively involved in the experience, 2) reflect on the experience, 3) utilize analytical thinking and
4) make decisions and solve problems using new ideas. This model bears close resemblance to the
Model for Improvement of Langley et al. The Model for Improvement is a two-part model whose
second part consists of rapid-cycle changes called Plan-Do-Study-Act (PDSA) cycles. The ‘plan’
phase requires individuals to know the who, what, when and where of the change, and what data
to collect, which means that individuals need to be actively involved. The ‘do’ and ‘study’ phases require both


analysis of data and reflection on what was learned from each cycle, leaving the ‘act’ phase to
determine (from that data) what modifications can be made. Due to the similarities of these two
models, teaching QI to undergraduate students using an experiential learning approach could be highly effective (Canal et al, 2007).
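To make the parallel between Kolb's cycle and PDSA concrete, the logic of iterative small tests of change can be sketched in a few lines of code. This is a purely illustrative sketch: the measure, target and change ideas are hypothetical, and the 'observed' result is a placeholder for real measurement rather than any prescribed method.

```python
# Illustrative sketch only: PDSA (Plan-Do-Study-Act) expressed as an iterative loop.
# The measure, target and change ideas are hypothetical examples.

def run_pdsa_cycles(baseline_rate, target_rate, change_ideas):
    """Iterate PDSA cycles until the (simulated) measure reaches the target."""
    current_rate = baseline_rate
    for cycle, idea in enumerate(change_ideas, start=1):
        # Plan: state the question, the predicted effect and the data to collect
        prediction = f"Applying '{idea}' will raise the rate above {current_rate:.0%}"
        # Do: carry out the change on a small scale and collect data
        observed_rate = current_rate + 0.05  # placeholder for real measurement
        # Study: compare the observed result with the prediction
        improved = observed_rate > current_rate
        print(f"Cycle {cycle}: {prediction} -> observed {observed_rate:.0%} "
              f"({'improved' if improved else 'no change'})")
        # Act: adopt, adapt or abandon the change before the next cycle
        if improved:
            current_rate = observed_rate
        if current_rate >= target_rate:
            break
    return current_rate

final = run_pdsa_cycles(baseline_rate=0.60, target_rate=0.80,
                        change_ideas=["checklist at triage", "reminder poster", "audit feedback"])
print(f"Final rate after testing changes: {final:.0%}")
```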
Social learning theory
Bandura’s Social Learning Theory suggests that learning occurs in social contexts through
continuous interactions with others by the process of observation. This means that learning may
occur as a result of observing good behaviour demonstrated by a group or individual but equally
as a result of the consequences of poor behaviour; this process is called ‘modelling’. Understanding
and combining Experiential Learning and Social Learning Theory may assist in our understanding
of how QI educational interventions may work (or not) and allow us to formulate an appropriate
hypothesis. For example, considering both impact theories, one could hypothesize that experiential learning would have the most positive impact on students' skills, knowledge and attitudes. However,
the influence of observed behaviours in the social learning context may dictate whether a positive
or negative impact on student behaviour occurs. QI should not be limited to the theoretical learning
of technical skills involved (PDSA cycles and run charts) but should include the soft skills (social
psychology of change, understanding the organization and structure of care, understanding the
context), the learning skills (problem-based learning, team learning, experiential learning, focus
on competencies, critical reflection, action learning) and indeed the interactions between them.
Why teach quality improvement?
Preservice training in QI often leaves much to be desired. Many programs are delivered over a short span of time (ranging from one-day sessions to one-month elective blocks), creating uncertainty about whether any short-term knowledge gains are sustained. Also, while theoretical constructs are taught, there is little or no component of clinical applicability. A practice-based QI elective rotation offered to in-service care providers may offer much-needed benefit. Evidence
for this is that trainees who completed a QI project demonstrated superior knowledge retention of
QI skills on objective testing when compared to non-completers (Ogrinc et al, 2004). A
competency is an observable ability of a health professional, integrating multiple components such
as knowledge, skills, values and attitudes. Since competency is observable, it can be measured and
assessed to ensure acquisition (Frank and Danoff, 2007; Frank et al, 2010; Frank et al, 2010b).
Competency based education refers to an approach to preparing health professionals for practice


that is fundamentally oriented to graduate outcome abilities and organized around competencies
derived from an analysis of societal and patient needs. Competency-based training involves
moving away from a strictly time-based training model towards one that identifies the specific
knowledge, skills, and abilities needed for practice (Frank and Danoff, 2007; Frank et al, 2010; Frank et al, 2010b). Acquisition, application and sustainability of quality improvement skills are
considered to be core competencies.
How can we teach quality improvement?
An emerging priority in medical education is the need to facilitate learners’ acquisition of quality
improvement (QI) competencies. There is an increasing focus on improving healthcare in order to
ensure higher quality, greater access and better value for money. In line with that goal, training
programmes have been developed to teach health professionals and students formal QI methods.
Many implementation and feasibility barriers to sustaining a successful QI curriculum have been
described (Godwin, 2001; Wong et al, 2008; Arbuckle et al, 2013; Wong et al, 2013). Such barriers
include the developmental stage, insufficient QI knowledge among faculty, a lack of value placed
on QI by the institution, competing curricular/clinical demands, unsupportive leadership, and the
absence of a promotion pathway. Several questions arise from the QI training initiatives: What
types of training about formal QI techniques are available for health professionals? What evidence
is there about the most effective methods for training clinicians in QI? What should be the content
of QI training curricula? How should the QI training be delivered to provide value for money?

The training should position quality improvement as a process that has several interrelated or
similar approaches, one of which is criteria-based audit. Clinical audit aims at continuous
improvement of the quality of care through systematic and critical review of current practice
against explicit criteria or standards developed to suit a specific setting or context for
implementation of change. The audit is a regular multidisciplinary activity by which all
participants of care including doctors, nurses and other health professionals carry out a systematic
review of their own practice. Implementation of audit should follow standard acceptable
procedures and guidelines that maximize patient safety and maintain professional values (Shaw


and Costain, 1989). QI knowledge may improve after a didactic curriculum, particularly if a competence-based curriculum is employed. While the goal is to improve patient care, the process may compromise patient safety and rights, especially through breach of confidentiality. Therefore, there must be institutional oversight to ensure that QI interventions follow acceptable standards. The
data collected during the process of audit should be handled with care, and individual data
concerning care-givers, patients or health professionals must be treated confidentially. Clinical
audit needs a realistic timeframe and the necessary resources, as well as the tolerant culture of a learning organization. Furthermore, the success of clinical audit depends on the commitment and support of the organization's management. Clinical audit could relatively easily be embedded into
the current practice of peer-review processes and other quality improvement initiatives.
The aims of provider training in Quality Improvement
The aim of provider training in QI is to move a health provider from one who knows just basic foundational knowledge, skills and values, through one who applies those skills, to a provider who demonstrates basic competency in quality improvement in healthcare. Key questions: What
should we teach? How should we teach? How will we measure the results? (Cleghorn and Baker,
2000; Hayden et al, 2002; Varkey et al, 2006; Wong et al, 2007; Wong et al, 2008; Wong and
Roberts, 2008). Quality improvement modules for medical and nursing students tend to focus on
techniques such as audit and plan, do, study, act (PDSA) cycles. Most courses run by academic
institutions tend to be unidisciplinary and classroom-based or undertaken during clinical
placements. However, there is increasing acknowledgement of the value of multidisciplinary training, especially when it is delivered through practical, work-based projects.
Simulation is also becoming popular as a training approach. Continuing professional development
training about QI appears to be growing at a faster rate than university education. Ongoing
education includes workshops, online courses, collaboratives and ad hoc training set up to support
specific improvement projects. There is a growing trend for training which supports participants
to put what they have learned into practice or to learn key skills ‘on the job’.
The training approaches most commonly published include:

1) University courses about formal quality improvement approaches


2) Teaching quality improvement as one component of other modules or interspersed
throughout a curriculum
3) Using practical projects to develop skills

4) Online modules, distance learning and printed resources


5) Professional development workshops
6) Simulations and role play
7) Collaboratives with facility-based mentorship and on-the-job training

There is some evidence that training students and health professionals in quality improvement may
improve knowledge, skills and attitudes. Care processes may also be improved in some instances.
However, the impact on patient health outcomes, resource use and the overall quality of care
remains uncertain. This necessitates basing training on a theoretical approach that ensures learning. Most evaluations of training focus on perceived changes in knowledge rather
than assessing applied skills or delving deeper into the longer-term outcomes for professionals and
patients. Programmes which incorporate practical exercises and work-based activities tend to achieve better results in terms of acquisition of competencies, and evaluations of these approaches are
more likely to find positive changes in care processes and patient outcomes. Active learning
strategies, where participants put quality improvement into practice, are more effective than
didactic classroom styles alone.

The Philosophy of an Ideal QI Training Curriculum


Health services are now facing significant challenges. There are constant medical and
technological advances to keep pace with, the population is growing in size, people are living
longer but often in poor health and the demand for healthcare outstrips the staffing and financial
resources available. The focus on patient-centred care, holistic practice and providing value for
money means that there is a greater need to ensure that health professionals, allied teams and
managers have the knowledge and skills to improve and develop healthcare services. Several
techniques have been used to improve healthcare including improvement cycles, clinical audit,
guidelines, evidence-based medicine, healthcare report cards, patient-held records, targets,
national service frameworks, performance management approaches, continuous quality
improvement, financial incentives, leadership, choice and competition. All of these initiatives
require health professionals and managers to learn and apply new skills. While training can be an
effective lever for improving the quality of healthcare, education and training initiatives are not
always prioritized by policy makers or practitioners. As healthcare organizations initiate strategies
to improve care and respond to changing regulatory and policy requirements, clinicians and other


healthcare providers practicing in them need training on quality and safety as a part of their formal
or in-service training.

Lack of knowledge and skills among clinicians and managers or negative attitudes are significant
barriers to improving quality in healthcare. Training health professionals in quality improvement
has the potential to impact positively on attitudes, knowledge and behaviours. Quality
improvement may be defined as a way of approaching change in healthcare that focuses on self-
reflection, assessing needs and gaps, and considers how to improve in a multifaceted manner. In
this definition, training about quality improvement aims to create a culture of continuous reflection
and a commitment to ongoing improvement in quality. Training aims to provide practitioners, care
providers and managers with the skills and knowledge needed to assess the performance of
healthcare and individual and population needs, to understand the gaps between current activities
and best practice, and to be in a position to devise theories, strategies, tools and techniques to address the quality gap (Audet et al, 2005; Batalden and Davidoff, 2007; Van Hoof et al, 2011).

Creating, Choosing or Adapting a QI Curriculum

Components
Needs assessment
1) Include data showing a gap between current and best practice
2) Include data showing how practices or teams could be improved
Content
1) Identify evidence-based sources for core and general programme content
2) Describe key learning from implementing known best practice
3) Discuss data before and after successful implementation
4) Include as an objective ‘by the end of course, participants will be able to summarise evidence
on…’
5) Allow time for questions about the pros and cons of evidence
Application
1) Show trainees how evidence relates to participants’ work environment
2) Ask participants to show how they will apply the evidence to their work environment

Key steps


Step 1: Determine the QI competencies that trainees must demonstrate at the end of the
curriculum

Step 2: Determine the needs of incoming trainees and set learning objectives for the curriculum

Step 3: Choose instructional methods

Step 4: Create the curriculum

Step 5: Implement the curriculum

Step 6: Assess trainees’ acquisition of the QI competencies

Step 7: Evaluate the curriculum and modify as necessary

Objectives of the competence-based training in QI

To be most effective, training should assess the needs of learners, and target both the content and
training approach appropriately, illustrating how the content applies to the participants’ work
environment (Ogrinc et al, 2003; Ogrinc et al, 2004; Price, 2005; Ogrinc et al, 2011; Paulman,
2010). Ideally, by the end of the training, trainees should be able to:

1) Describe the connection between professional knowledge and improvement knowledge


(Varkey et al, 2008; Voss et al, 2008; Walsh et al, 2010)
2) Articulate the structured and systematic approach to quality improvement in healthcare
(Observation, reflection, and understanding are important components of the QI training as
they enable understanding of the complexity of QI and the healthcare system)
3) Explain importance of context in QI
4) Explain the importance of multidisciplinary approach to QI (describe why and how various
disciplines must work together to achieve improvement)
5) Demonstrate how to collect appropriate data for QI under time and resource limitations
6) Demonstrate how to analyze, display and interpret data, for example with run charts (a minimal illustrative sketch follows this list)
7) Use diagrams to understand the process under study, applying QI tools to brainstorm and map possible causes of the identified problem (for instance Ishikawa/fishbone diagrams, Pareto charts, 5 Whys and process maps)


8) Identify areas to change within a process and recognize whether changes are successful (how
to use PICK process (possible, implement, challenge, kill) charts and prioritization matrices to
organize their recommendations, as well as how to work with the QI facilitators and clinic
managers to develop sustainable implementation strategies).
9) Develop and implement a basic quality improvement project
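The two displays named in objectives 6 and 7 can be produced with very simple code. The following is a minimal sketch, assuming matplotlib is available; the weekly adherence values and the causes and their counts are invented for illustration only.

```python
# Illustrative sketch: a run chart and a Pareto chart built from invented data.
import matplotlib.pyplot as plt
from statistics import median

# Run chart: a weekly process measure plotted in time order against its median
weeks = list(range(1, 13))
adherence = [62, 65, 60, 66, 70, 68, 72, 75, 74, 78, 80, 79]  # hypothetical %
plt.figure()
plt.plot(weeks, adherence, marker="o")
plt.axhline(median(adherence), linestyle="--", label=f"median = {median(adherence)}")
plt.xlabel("Week")
plt.ylabel("Adherence (%)")
plt.title("Run chart of a hypothetical process measure")
plt.legend()

# Pareto chart: brainstormed causes ranked by frequency, with cumulative percentage
causes = {"stock-out": 34, "late referral": 21, "no guideline": 13, "staff absent": 8, "other": 4}
labels = sorted(causes, key=causes.get, reverse=True)
counts = [causes[c] for c in labels]
cumulative = [sum(counts[: i + 1]) / sum(counts) * 100 for i in range(len(counts))]
fig, ax1 = plt.subplots()
ax1.bar(labels, counts)
ax1.set_ylabel("Count")
ax1.set_title("Pareto chart of hypothetical causes")
ax2 = ax1.twinx()
ax2.plot(labels, cumulative, marker="o", color="black")
ax2.set_ylabel("Cumulative %")
plt.show()
```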

Most quality improvement methods are based on the application of continuous quality
improvement theory developed by the manufacturing industry. The principle underpinning quality
improvement was that quality was not something controlled at the end of the line, but rather
throughout the entire work process. Medical students can begin to understand the role of quality
improvement methods by:
1) Asking about measures that improve quality and safety;
2) Recognizing that good ideas can come from anyone;
3) Being aware that the situation in the local environment is a key factor in trying to make
improvements;
4) Being aware that the way people think and react is as important as the structures and processes
in place;
5) Realizing that the spread of innovative practices is a result of people adopting new processes.
Quality improvement methods have successfully addressed such gaps and provide clinicians with
the tools to: (i) identify a problem; (ii) measure the problem; (iii) develop a range of interventions
designed to fix the problem; and (iv) test whether the interventions worked.
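As an illustration of step (iv), a before-and-after comparison of a process measure can be summarized and tested very simply. The sketch below uses hypothetical counts, and the chi-square test is one reasonable option among several for comparing proportions.

```python
# Illustrative sketch: testing whether an intervention changed a process measure.
# Counts are hypothetical; a chi-square test is one simple option for comparing proportions.
from scipy.stats import chi2_contingency

# rows: before / after the change; columns: standard met / standard not met
before = [55, 45]   # 55 of 100 eligible patients received the indicated care
after = [72, 28]    # 72 of 100 after the change was introduced

chi2, p_value, dof, expected = chi2_contingency([before, after])
rate_before = before[0] / sum(before)
rate_after = after[0] / sum(after)
print(f"Adherence before: {rate_before:.0%}, after: {rate_after:.0%}")
print(f"Chi-square = {chi2:.2f}, p = {p_value:.3f}")
```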
General topics

The training content in competence-based QI training should also include general topics such as:

1) Health care as a process, system


2) Developing new locally useful knowledge
3) Social context and accountability
4) Variation and measurement
5) Leading, following and making changes in health care
6) Collaboration
7) Subject matter on quality improvement


8) Importance of ethical considerations in QI projects as some QI projects may have significant


ethical issues and need independent review and approval by ethics committees (Rawlins,
1997; Clarke et al, 1999)

Principles Informing Teaching Methods


A problem-based competence-based training curriculum is suitable for training in QI (Cleghorn
and Baker, 2000; Hayden et al, 2002; Varkey et al, 2006; Wong et al, 2007; Wong et al, 2008;
Wong and Roberts, 2008; Wong et al, 2010).
1) Professional knowledge must be combined with knowledge for quality improvement.
2) Combining didactic and experiential learning in the form of participation in QI projects is critical (Boonyasai et al, 2007)
3) Identify the gaps in quality within the cases
4) Participate as a team member to identify strategies to close the gap
5) Demonstrate an appreciation for an inter-professional approach

Suitable approaches to the delivery of competence-based QI training (Da Dalt et al, 2010)

1) Early clinical exposure during training (Weeks et al, 2000)


2) Projects as part of continuing professional development
3) Case analysis during community placements (Gould et al, 2002)
4) Chart audit and analysis (Mohr et al, 2003)
5) Inter-professional student team projects in hospital, community or rural sites (Daniel et al,
2009; Huntington et al, 2009)
6) Improvement projects as part of longitudinal clinical experiences (Diaz et al, 2010)

Assessing learners of competence-based QI training

1) Quality Improvement Knowledge Application Tool (QIKAT) (Varkey et al, 2009; Singh et
al, 2014).
2) Pre-test and post-test (tests of theoretical and applied knowledge and self-assessment of skills, administered before training and repeated months later)
3) Resident satisfaction
4) Formative self assessment of attitudes and skills with feedback
5) Problem solving through performance-based assessments


Evaluation of a QI teaching program

Is it necessary to improve on an improvement curriculum? The answer is definitely yes. Any QI


curriculum should undergo continuous improvement to maintain its validity and freshness.
Iterative changes will help to ensure that the curriculum remains current and continues to provide
a good fit for the target learners. Implement an appropriate evaluation program for the curriculum,
possibly involving tools such as Plan-Do-Study-Act (PDSA) cycling and change management
techniques. With PDSA cycling, waves of small improvements in the curriculum can be tested,
evaluated and refined if necessary.
1) Take a pragmatic approach to curriculum evaluation. As with QI projects for trainees, it is not
necessary to embark on a big study. Instead, make changes to your curriculum objectives, to
your lesson plans or to the examples you use in your teaching sessions, then evaluate whether
these changes improve the curriculum or not.
2) Evaluate both outcome and process measures. It is also important to anticipate challenges and
to develop strategies that will address challenges proactively.
3) Solicit ongoing feedback from residents. Residents who have just completed the curriculum
can provide valuable input but also, individuals who completed the program a few years earlier
can provide important information.
4) Tap into the expertise in curriculum evaluation that may be available elsewhere in your local
environment. For example, liaise with the education units in other departments or faculty units.
You may also wish to ask physicians who completed the QI curriculum in the first years of
your program to come back and teach it and to help improve it.
Components to evaluate

1) Learner performance
2) Learner satisfaction
3) Clinical outcomes


References
Arbuckle MR, Weinberg M, Cabaniss DL, et al. Training psychiatry residents in quality
improvement: An integrated, year-long curriculum. Acad Psychiatry. 2013;37:42–45

Armstrong A, Lauder W, Shepherd A. An evaluation of methods used to teach quality


improvement to undergraduate healthcare students to inform curriculum development within
preregistration nurse education: a protocol for systematic review and narrative synthesis.
Systematic Reviews. 2015;4:8.
Audet A-MJ, Doty MM, Shamasdin J, Schoenbaum S. Measure, learn and improve: physicians' involvement in quality improvement. Health Affairs 2005;24:843-853.

Barber KH, Schultz K, Scott A, et al. Teaching Quality Improvement in Graduate Medical
Education: An Experiential and Team-Based Approach to the Acquisition of Quality Improvement
Competencies. Acad Med. 2015 Oct; 90(10): 1363–1367.

Batalden P, Davidoff F. Teaching quality improvement: the devil is in the details. JAMA
2007;298(9):1059-1061.

Berwick DM: A primer on leading the improvement of systems. BMJ 1996, 312(7031):619–622.
Berwick DM: Continuous improvement as an ideal in health care. N Engl J Med 1989, 320(1):53–
56.
Boonyasai RT, Windish DM, Chakraborti C et al. Effectiveness of teaching quality improvement
to clinicians: a systematic review. JAMA 2007;298(9):1023-1037.

Canal DF, Torbeck L, Djuricich AM. Practice-based learning and improvement: a curriculum in
continuous quality improvement for surgery residents. Arch Surg 2007;142(5):479-482.

Clarke A, Fitzpatrick P, Hurley M, et al. Audit in health care--the process of reviewing quality.
Research Committee of the Faculty of Public Health Medicine. Ir Med J. 1999;92(1):230–231

Cleghorn GD, Baker GR: What faculty need to learn about improvement and how to teach it to
others. J Interprof Care 2000, 14(2):147–159.
Da Dalt L, Callegaro S, Mazzi A et al. A model of quality assurance and quality improvement for
post-graduate medical education in Europe. Med Teach 2010;32(2):e57-64.


Daniel DM, Casey DE Jr, Levine JL et al. Taking a unified approach to teaching and implementing
quality improvements across multiple residency programs: the Atlantic Health experience. Acad
Med 2009;84(12):1788- 1795.
Diaz VA, Carek PJ, Dickerson LM, Steyer TE. Teaching quality improvement in a primary care
residency. Jt Comm J Qual Patient Saf 2010;36(10):454-460
Djuricich AM, Ciccarelli M, et al: A continuous quality improvement curriculum for residents:
addressing core competency, improving systems. Acad Med 2004, 79(10 Suppl):S65–S67.
Fok MC, Wong RY. Impact of a competency based curriculum on quality improvement among
internal medicine residents BMC Medical Education 2014, 14:252
Frank JR, Danoff D: The CanMEDS initiative: implementing an outcomes-based framework of
physician competencies. Med Teach 2007, 29(7):642–647.
Frank JR, Mungroo R, Ahmad Y, et al: Toward a definition of competency-based education in
medicine: a systematic review of published definitions. Med Teach 2010, 32(8):631–637.
Frank JR, Snell LS, Cate OT, et al: Competency-based medical education: theory to practice. Med Teach 2010, 32(8):638–645.
Godwin M. Conducting a clinical practice audit. Fourteen steps to better patient care. Can Fam Physician. 2001;47:2331–2333.
Gould BE, Grey MR, et al. Improving patient care outcomes by teaching quality improvement to
medical students in community-based practices. Acad Med 2002;77(10):1011-1018.
Hayden SR, Dufel S, Shih R: Definitions and competencies for practice-based learning and
improvement. Acad Emerg Med 2002, 9(11):1242–1248.
Holmboe ES, Prince L, Green M: Teaching and improving quality of care in a primary care internal
medicine residency clinic. Acad Med 2005, 80(6):571–577.
Van Hoof TJ, Meehan TP. Integrating essential components of quality improvement into a new
paradigm for continuing education. J Contin Educ Health Prof 2011;31(3):207- 214.
Huntington JT, Dycus P, Hix C et al. A standardized curriculum to introduce novice health
professional students to practice-based learning and improvement: a multi-institutional pilot study.
Qual Manag Health Care 2009;18(3):174-181
Kim CS, Lukela MP, Parekh VI, et al. Teaching internal medicine residents quality improvement
and patient safety: a lean thinking approach. Am J Med Qual 2010, 25(3):211–217.
Mohr JJ, Randolph GD, et al, Integrating improvement competencies into residency education: a
pilot project from a pediatric continuity clinic. Ambul Pediatr 2003, 3(3):131–136.


Ogrinc G, Headrick LA, Morrison LJ, Foster T: Teaching and assessing resident competence in
practice-based learning and improvement. J Gen Intern Med 2004, 19(5 Pt 2):496–500.
Ogrinc G, Headrick LA, Mutha S, et al: A framework for teaching medical students and residents
about practice-based learning and improvement, synthesized from a literature review. Acad Med
2003, 78(7):748–756.
Ogrinc G, Nierenberg DW, Batalden PB. Building experiential learning about quality
improvement into a medical school curriculum: the Dartmouth experience. Health Aff
2011;30(4):716- 722.

Patow CA, Karpovich K, Riesenberg LA, et al. Residents’ engagement in quality improvement:
a systematic review of the literature. Acad Med 2009, 84(12):1757–1764.
Paulman P. Integrating quality improvement into a family medicine clerkship. Fam Med
2010;42(3):164- 165.

Price D. Continuing medical education, quality improvement, and organisational change:


implications of recent theories for 21st century CME. Medical Teacher 2005; 27(3):259- 268.

Rawlins R. Local research ethics committees. Research discovers the right thing to do; audit
ensures that it is done right. BMJ. 1997 Nov 29;315(7120):1464.

Shaw CD, Costain DW. Guidelines for medical audit: seven principles. BMJ. 1989 Aug
19;299(6697):498–499.

Singh MK, Ogrinc G, Cox KR, et al. The Quality Improvement Knowledge Application Tool
Revised (QIKAT-R). Acad Med 2014, 89(10):1386–1391
Varkey P, Reller MK, et al.An experiential interdisciplinary quality improvement education
initiative. Am J Med Qual 2006, 21(5):317–322.
Varkey P, Gupta P, Bennet KE. An innovative method to assess negotiation skills necessary for
quality improvement. Am J Med Qual 2008;23(5):350-355.

Varkey P, Gupta P, Arnold JJ, Torsher LC. An innovative team collaboration assessment tool for
a quality improvement curriculum. Am J Med Qual 2009;24(1):6-11

Voss JD, May NB, Schorling JB, Lyman JA et al. Changing conversations: teaching safety and
quality in residency training. Acad Med 2008;83(11):1080- 1087


Walsh T, Jairath N, Paterson MA, Grandjean C. Quality and safety education for nurses clinical
evaluation tool. J Nurs Educ 2010;49(9):517-522.

Weeks W, Robinson J, Brooks W, Batalden P. Using early clinical experiences to integrate quality-
improvement learning into medical education. Acad Med 2000;75:81-84.
Wong BM, Etchells EE, Kuper A, et al: Teaching quality improvement and patient safety to
trainees: a systematic review. Acad Med 2010, 85(9):1425–1439.
Wong BM, Kuper A, et al. Sustaining quality improvement and patient safety training in graduate
medical education: Lessons from social theory. Acad Med. 2013;88:1149–1156

Wong RY, Hollohan K, Roberts JM, et al. A descriptive report of an innovative curriculum to teach quality improvement competencies to internal medicine residents. Can J Gen Intern Med. 2008;3(1):26–29.

Wong RY, O Kassen B, Hollohan K, et al: A new interactive forum to promote awareness and
skills in quality improvement among internal medicine residents: a descriptive report. Can J Gen
Int Med 2007, 2(1):35–36.
Wong RY, Roberts JM: Practical tips for teaching postgraduate residents continuous quality
improvement. Open Gen and Intern Med J 2008, 2:8–11.


CHAPTER 8: APPLICATION OF QUALITY IMPROVEMENT IN PRACTICE


Core Principles of QI
Langley et al and others have identified the core principles underlying a quality improvement
intervention. Key principles include:
1) Placing patients at the centre and involving them in the co-design.
2) Understanding work processes as components of a wider system and re-designing accordingly.
3) Improving the reliability of the system and clinical processes
4) Understanding variation and measuring the processes.
5) Using data for measuring improvement.
6) Recognizing and valuing the expertise of people on the frontline. These authors also provide a broader definition of improvement science, which includes a commitment to practical learning, generating local wisdom, and contributing to clear and explicit theories of how change happens.
7) Focusing on the design, deployment and assessment of complex multi-faceted interventions

Involvement of Healthcare Providers


The importance of measuring and monitoring healthcare quality is well known. Yet quantifying
healthcare quality is a complex and challenging process for which public and payer demands
clearly exceed current capabilities. Healthcare professionals need to engage in efforts to evaluate
quality of care to ensure its relevance and validity. From selecting patient cohorts to guiding
analyses and interpretation, the entire process of quality assessment requires judgment and choices
that should be influenced by the clinical realities of medical care, a perspective that clinicians
uniquely possess. Accordingly, healthcare providers should acquire the knowledge needed to participate actively in the assessment of healthcare quality. Assessing quality requires the development and application
of performance measures. Performance measures are explicit standards of care against which
actual clinical care is judged. Given the availability of evidence-based guidelines for the
management of patients with cardiovascular and neurological disease, there is a natural inclination
to use these consensus statements as a basis for developing performance measures for the
evaluation of healthcare quality. However, guidelines are not performance measures. Guidelines
are written to suggest diagnostic or therapeutic interventions for most patients in most
circumstances. The use of guideline recommendations in diagnosing and treating individual
patients is left to the discretion of the physician. In contrast, performance measures are standards
of care that imply that physicians are in error if they do not care for patients according to these


standards. Therefore, in addition to stating an explicit diagnostic or therapeutic action to be


performed, performance measures must also define how to practically identify those patients for
whom a specific action should be taken.

Methodological Challenges in Quantifying Healthcare Quality


Conducting analyses to evaluate performance may have profound consequences on the groups
being evaluated. Obviously, such analyses are predicated on having accurate data. Yet obtaining
such data can be difficult and expensive, and errors can occur at several levels. The steps for
collecting data for healthcare-quality assessment include identifying patients with the specified
disease, evaluating the severity of their condition to determine whether they are appropriate
candidates for the performance measure, and collecting data on the process of care to compare
with the performance standard. If outcomes are assessed as well, accurate collection and risk
adjustment of outcomes to ensure that differences are attributable to quality of care and not
underlying patient characteristics present additional challenges.
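A minimal sketch of how the first three of these steps might translate into an analysis is shown below, using pandas and an invented patient-level dataset. The column names and the exclusion rule are assumptions for illustration only, not a prescribed data standard.

```python
# Illustrative sketch: computing adherence to a performance measure from patient-level data.
# The dataset, column names and exclusion rule are invented for illustration.
import pandas as pd

records = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, 5, 6],
    "diagnosis_confirmed": [True, True, True, False, True, True],      # step 1: identify patients
    "contraindication": [False, False, True, False, False, False],     # step 2: eligibility
    "received_indicated_care": [True, False, False, False, True, True] # step 3: process data
})

# Denominator: confirmed cases without a documented contraindication
eligible = records[records["diagnosis_confirmed"] & ~records["contraindication"]]
# Numerator: eligible patients who received the care specified by the performance measure
adherence = eligible["received_indicated_care"].mean()
print(f"Eligible patients: {len(eligible)}, adherence to the measure: {adherence:.0%}")
```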

Challenges to Data Quality


Identifying appropriate patients in whom to apply performance measures is complicated by
limitations in current information technologies. Patients with conditions for which hospitalization
is usually required can be found in hospital administrative records. However, administrative
sources of data lack important clinical elements and can be inaccurate with respect to the principal
diagnosis for which a patient was treated. In a patient for whom quality of care will be judged, the
latter problem may require confirmation of the diagnosis through additional parameters. The
limitations of administrative records exist because the original collection of data was for a purpose
other than assessment of healthcare quality. Retrospective chart abstraction can often further
clarify important patient characteristics, but the recording of such data by healthcare providers
may be incomplete. Even when the data are available, inaccuracies can occur in documentation or
abstraction. Prospective data collection has the potential to provide the most useful information
when the data are specifically defined and collected for quality-assessment purposes. Prospective
data collection also permits acquisition of data directly from patients or physicians and allows
assessment of variables such as health status. Unfortunately, in the absence of electronic medical
records, prospective data collection is expensive and requires substantial organization to be
incorporated into routine patient care. Collection of outcome data adds another level of complexity


and expense. Some patients will be lost to follow-up, and their characteristics and outcomes may
differ substantially from those for whom data are available. Many desired outcomes, such as health
status and readmission, require collection of data directly from patients, and inaccurate telephone
numbers, addresses, and lack of patient cooperation with follow-up efforts may limit efforts to
collect this information.
Time Frame Considerations in Tracking Outcomes
For acute, catastrophic conditions, in-hospital treatment is followed by transition to long-term
care for a chronic condition. When judging the quality of care provided by an individual or
institution, should the outcomes assessment be restricted to the initial hospitalization only or
should longer-term assessments be included as well? Two rationales support the need for longer-
term assessments. First, although certain interventions can positively influence short-term
survival (eg, 30 days), the full impact of these and other interventions is manifest only months
or years after discharge. Second, patient care does not end with the patient’s discharge from the
hospital. Rather, a smooth transition with the outpatient primary care clinician is an essential
component of high-quality care. In addition, secondary prevention is as important as many acute therapeutic decisions. The in-hospital provider assumes responsibility for appropriate communication with the patient's primary care physician.

Measuring Quality of Care as a Neglected Driver of Improved Health


High quality of health care is an important component of efforts to reach sustainable development
goal (SDG) 3: to ensure healthy lives and promote well-being for all at all ages (UN, 2015).The
United States National Academy of Medicine defines quality as the extent to which health-care
services provided to individuals and patient populations improve desired health outcomes (IOM,
2001). The key tasks for quality measurement are to assess the performance of services and to
quantify the gap between reality and expectations in reference to certain standards and guidelines.
However, a lack of consensus exists on the role of quality of care in achieving SDG 3 (Kruk et al,
2016) which is reflected in the absence of measures of quality that are appropriate to lower-income
settings.

The millennium development goals (MDGs) on health focused on combating maternal and child
mortality and a relatively small number of diseases (UN, 2015). These efforts boosted disease-
specific (vertical) funding for health services and in some cases were accompanied by strong

accountability mechanisms including measurement of outcomes and service quality (de Jongh et al,
2016). SDG 3 and its targets encompass more conditions, and, by including non-communicable
diseases, are also more complex to attain than the MDGs. As we move into the SDG era, the
funding and delivery streams are being interconnected and integrated into broader health systems
to promote more rational and patient-centred health care across a wide range of health needs. This
is observed at both global and country levels. The logistics of integration, including ensuring
technical efficiency, will be challenging, but may also provide an opportunity for adoption of best
practices in quality management in areas ranging from stand-alone vertical programmes to the
broader health system (Obure et al, 2016).

As in high-income countries, where the impact of health-service quality on health outcomes has been well documented (IOM, 2001; Kelley and Hurst, 2006; McGlynn and Adams, 2014), data from low- and middle-income countries increasingly show that poor quality of care results in a failure to attain expected health-care improvements. However, not all interventions led to performance improvement.
Indeed studies from India, Malawi and Rwanda have shown that greater access to institutional
deliveries and antenatal care did not lead to reductions in maternal and newborn mortality as it was
not accompanied by corresponding improvement in quality of care (Souza et al, 2013; Powell-
Jackson et al, 2015; Godlonton et al, 2014). Also, higher than predicted maternal mortality
occurred in hospitals in high mortality lower-income countries, despite good availability of
essential medicines, suggesting clinical management gaps or treatment delays for women who
develop obstetric complications (Souza et al, 2013). In Malawi, about 30%
of all outpatients who were meant to benefit from a malaria treatment intervention received
incorrect treatment (Steinhardt et al, 2014). Also, in India, a tuberculosis therapy intervention
failed as providers frequently gave inaccurate care to tuberculosis patients (Das et al, 2015), for
instance only 11 of 201 private practitioners provided correct tuberculosis management (Achanta
et al, 2013).

Ensuring policy success


Quality of care is also central to the success of several health policy instruments recently
introduced in low- and middle-income countries, such as universal health coverage and results-
based financing. The universal health coverage target of SDG 3 (target 3.8) requires that everyone


have access to affordable and quality health services. But if those services are of poor quality, people
are unlikely to use them or agree to pay higher taxes or insurance premiums for them (Basinga et
al, 2011; Witter et al, 2012).
Resolving ethical concerns
There is also an ethical dimension to quality of care. While the right to health care is widely
accepted, less has been said about the quality of this care. First, whereas one of the core principles
of medicine is to do no harm, there is still minimal systematic measurement of patient safety in the
health systems of low- and middle-income countries (Wilson et al, 2012; Aranaz-Andrés et al,
2011; Nguyen et al, 2015). Second, little is known about whether wealth inequalities are associated
with the quality of care. Yet, according to the inverse care law, the availability of good care tends to vary inversely with the need of the population served (Hart, 1971). It is unclear how the quality of services available to poor people compares
with that of richer people in the same country. Yet the quality of care should be monitored and
evaluated regardless of who provides the care, i.e. equally in private and public settings, and for
both curative and preventive care. A third ethical issue is defining the quality baseline, especially
in developing countries where quality standards are lacking such as in countries with extremely
constrained health resources. Whether doctors in such countries should follow the same guidelines
as in high income countries is debatable. Some people argue that less effective care is ethically
acceptable in situations where the alternative is no care, but this assumes that the care will still
bring substantial benefit to patients (Victora et al, 2016; Persad et al, 2016). The minimum effectiveness that is tolerable or acceptable needs to be balanced against the costs of health-care provision to governments and to families, and against the legitimate expectations of people receiving the care.

And once a minimum standard is defined, the pursuit of a higher level of quality must be balanced
with its attendant cost as much as the need to guarantee the minimum level of care quality to the
entire population (Donabedian, 1988). Each country needs to define a quality frontier that situates
their aspirations for quality within realistic budget constraints and that recognizes trade-offs
between speed of expanding services and ensuring minimum quality standards. Following Donabedian's theory of quality of care, there are three dimensions of quality of care that need to be tracked and, ideally, linked: (i) structure (facility infrastructure, management and staffing), (ii) process
(technical (clinical) quality and patient experience) and (iii) outcomes (patient satisfaction, return
visits and health outcomes). In high-income countries the main measures of quality have typically


been patient outcomes that are sensitive to health-care practices, such as the association between
skilled nursing and hospital readmissions (Howell et al, 2014; Neuman et al, 2014; Kasteridis et
al, 2015).

There are calls to reconsider the importance of process measures that can provide concrete
guidance on where to begin improvement efforts. Many low and middle-income countries lack the
health information systems to collect these care-sensitive outcome measures, so that it is
reasonable to begin with inputs and process measures. Inputs, such as water, sanitation and
electricity, represent the minimum threshold for a functioning health-care facility; this is
sometimes termed service readiness. Most of the existing efforts to measure quality have
emphasized this tangible element of care, yet a cabinet full of unexpired medicines does not
necessarily translate into good clinical care, and the connection between inputs and processes is
poorly understood. Much more emphasis is needed on measuring the processes of care - the content
and nature of clinical interactions and the intangible elements of care underlying those interactions
(such as health-sector organization, facility management and staff training and motivation).
Ultimately, there is need for evidence linking quality of care to health outcomes, and this is why
the benchmarking of quality of care in the specific context of low- and middle-income countries
is necessary. Given the constrained resources, it is essential that the quality-of-care measurement framework prioritizes the questions asked in order to identify the limitations of what is currently being done.
Structure
Data for measuring the structure dimension of quality care, including facility infrastructure,
staffing and clinical training, come from routine health-facility records and surveys. Record
systems are often incomplete and inaccurate, and reporting delays, which often result in out-of-date information, render the data of little use (Mphatswe et al, 2012; Nicol et al, 2013; Kihuba et al, 2014; Nickerson et al, 2015). Also, routinely collected health data are not standardized, precluding comparison
across and, sometimes, within countries (Ferrinho et al, 2012). Periodic health-facility surveys can
provide better quality data, but such surveys describe the situation at one point in time and are
restricted to a few services, typically excluding non-communicable diseases, injuries and mental
health, for example. A recent comprehensive review of health facility assessment tools in low- and middle-income countries found that, among the 10 tools that met the study's inclusion criteria, there was substantial variation in their content and comprehensiveness.


Process
Measures of process quality of health care include both its technical quality and the experience of
the patients receiving the care. The tools available for assessment of provision of clinical care
include standardized patients, clinical vignettes, abstraction of medical records, simulations or
clinical drills, and direct clinical observations (Luck et al, 2006;Aung et al, 2012). Standardized
patients are trained actors who make an unannounced visit to a health-care facility and present
symptoms of a simulated condition; they complete an assessment checklist on the clinical actions
of the provider after the visit (Luck et al, 2006;Aung et al, 2012). In clinical vignettes, practitioners
follow a written clinical case, responding to questions that replicate certain stages of an actual
clinic visit, such as taking a history, ordering tests and prescribing a treatment plan. Providers’
responses are scored against evidence-based criteria for managing the simulated disease (Franco
et al, 2002; Luck et al, 2006;Aung et al, 2012).
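Scoring against explicit, evidence-based criteria can be as simple as comparing recorded actions with a weighted checklist. The sketch below uses a hypothetical checklist and weighting and is not any validated scoring instrument.

```python
# Illustrative sketch: scoring a provider's vignette responses against an explicit checklist.
# The checklist items, weights and responses are hypothetical, not a validated instrument.
CHECKLIST = {
    "asked_about_symptom_duration": 1,
    "ordered_confirmatory_test": 2,
    "prescribed_first_line_treatment": 2,
    "counselled_on_adherence": 1,
}

def score_vignette(actions_taken):
    """Return the earned score and the maximum possible score for one vignette."""
    earned = sum(points for item, points in CHECKLIST.items() if item in actions_taken)
    return earned, sum(CHECKLIST.values())

earned, maximum = score_vignette({"asked_about_symptom_duration", "prescribed_first_line_treatment"})
print(f"Vignette score: {earned}/{maximum} ({earned / maximum:.0%})")
```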

While abstraction of medical records to identify standards-based practice is a common way of


evaluating clinical performance, its validity is undermined by the lack and inconsistency of records in resource-constrained settings. Also, these data are often collected by trained health personnel, making abstraction an expensive task (Aung et al, 2012). Audits, such as morbidity and
mortality reviews, can also provide valuable insights into quality failures. In addition, simulation
and clinical drills, in which the practitioners are given a scenario and are instructed to demonstrate
clinical skills on a mannequin, are mainly used for teaching rather than for assessing quality in
practice. Clinical observation, in contrast, is the direct observation or recording of a real-life patient
and is an effective, well-established method for evaluation. Clinical observation and standardized
patients are considered to be the gold standard measures but they are resource-intensive methods
and thus difficult to scale up. They also have limited utility for assessing the care of serious
conditions that are either too rare to reliably observe or cannot be simulated by an actor (Franco et
al, 2002). Sadly, interpersonal care quality and the patient experience are rarely measured. Yet
respectful treatment, convenience and good communication are important to patients as individuals
and are needed for promoting greater adherence to treatment and better health outcomes (Doyle et
al, 2013). Respectful care, for example, plays an important role in improving patient satisfaction
and encouraging return visits (Kujawski et al, 2015), and should therefore be incorporated into quality measurement and


improvement efforts. The scope of inquiry into drivers of quality must extend beyond the facility
and the immediate health-care team; good quality depends on district-wide service organization,
pre-service training and community accountability mechanisms, among many other factors.

To understand the root causes of quality gaps, whether for technical or non-technical quality, it is
necessary to obtain perspectives on quality from a range of health-system stakeholders. Face-to-
face interviews with patients, and written surveys, are typically used to measure the patient
experience (Nesbitt et al, 2013; Ng et al, 2014; Wagenaar et al, 2016). Patients are best-positioned
to determine whether care aligns with their values and preferences, and to convey their experience
of provider communication, service convenience and so on (Tzelepis et al, 2015). The expansion
of communication technology and social media provides new opportunities for getting feedback
on quality of care and returning relevant information back to users. Recommendations to improve
the measurement of quality of care and its impact on improving health outcomes in lower-income
countries include improving data collection methods and instruments; expanding the scope of
measurements; and translating the data for policy impact. The six recommendations are:
1) Redouble efforts to improve and institutionalize civil registration and vital statistics
systems;
2) Reform facility surveys and strengthen routine information systems;
3) Innovate new quality measures for low-resource contexts;
4) Assess the patient perspective on quality;
5) Invest in national quality data; and
6) Translate quality evidence for policy impact


Selecting Performance Measures


The basic principles for selecting performance measures are as follows:
1) The performance measure must be meaningful. Any potential performance measure must
be either a meaningful outcome to patients and society or be closely linked to such an
outcome.
2) The measure must be valid and reliable. To serve as a useful marker of healthcare quality,
it must be possible to measure the structure, process, or outcome of interest.
3) The measure can be adjusted for patient variability. Interpretation of quality assessments necessitates that the observed outcomes or rates of process adherence be adjusted so that observed differences between healthcare systems are due to the performance of those systems and not to patient characteristics (a minimal illustrative sketch of such adjustment follows this list).
4) The measure can be modified by improvements in the processes of care. To be a useful
measure of quality, there must be an opportunity for motivated providers to improve their
performance. This requires that the measure have variability after risk adjustment among
providers. In addition, evidence should be available that suggests that alterations in the
process of care can favorably influence this measure.
5) It is feasible to measure the performance of healthcare providers. Quantifying healthcare
quality is a complex and costly undertaking. Although certain performance measures,
such as health status, may fulfill all other criteria, the expense of collecting baseline and
follow-up health status may be too great for a healthcare system to perform on a routine
basis. Sensitivity to the fiscal implications of assessing certain performance measures
may require limited sampling or avoidance altogether of certain potential measures of
healthcare quality.
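Principle 3 can be illustrated with the standard observed-to-expected (O/E) approach of indirect standardization. In the sketch below the predicted risks would, in practice, come from a validated risk model; all numbers are invented for illustration.

```python
# Illustrative sketch: indirect standardization with an observed-to-expected (O/E) ratio.
# Predicted risks would come from a validated risk model; the numbers here are invented.
observed_events = 12                                    # e.g. deaths observed in one provider's patients
predicted_risks = [0.02, 0.05, 0.10, 0.30, 0.08] * 30   # model-based risk per patient (150 patients)

expected_events = sum(predicted_risks)                  # expected count given this provider's case mix
oe_ratio = observed_events / expected_events
overall_rate = 0.07                                     # reference (population) event rate
risk_adjusted_rate = oe_ratio * overall_rate            # adjusted rate for fair comparison

print(f"Expected events: {expected_events:.1f}, O/E ratio: {oe_ratio:.2f}")
print(f"Risk-adjusted rate: {risk_adjusted_rate:.1%}")
```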

Measuring what matters – achieving balance and parsimony


1) Measure process quality: select a balanced and small set of measures to assess the quality and
safety of the process of delivering care based on a small set of critical evidence-based practices
that have a strong relationship with health outcomes.
2) Measure value: select a balanced and small set of measures to reflect health outcomes, patient
experience and per capita costs for individual patients and clinical populations to reflect the
triple aim and to anticipate value-based payment mechanisms for accountable care
organisations, bundled payments and patient-centred medical homes.


3) Design data systems to support internal quality needs and spinoff external quality measures:
use a four-step process to support internal quality measurement and external reporting for
selection and accountability: build quality measures into workflows on the basis of key process
analysis, to have the greatest impact on the most patients; for a high-priority key process,
explicitly design a data system (intermediate processes, final outcomes, patient experience and
cost results) around the care delivery process; 'roll up' accountability measures at a clinic,
hospital, region, system, state and national level; and provide transparent reporting on quality
and value to promote learning, healthy competition on key results and to ensure public
accountability.
4) Use return on measurement investment: select measures taking into account the cost of data collection as well as the expected impact on outcomes and costs.
5) Establish ongoing process for refining and selecting core measures: build stakeholder
agreement on vital, standard measures of performance that are used by payers, regulators,
consumers and accreditors to promote public reporting and value-based purchasing schemes
across different payers and to harmonise regulation, accreditation and certification.

References
Achanta S, Jaju J, Kumar AM, et al. Tuberculosis management practices by private practitioners
in Andhra Pradesh, India. PLoS ONE. 2013;8(8):e71119.
Aranaz-Andrés JM, Aibar-Remón C, et al. Prevalence of adverse events in the hospitals of five
Latin American countries: results of the ‘Iberoamerican study of adverse events’ (IBEAS). BMJ
Qual Saf. 2011;20(12):1043-51.
Aung T, Montagu D, Schlein K, et al. Validation of a new method for testing provider clinical
quality in rural settings in low- and middle-income countries: the observed simulated patient. PLoS
ONE. 2012;7(1):e30196.
Basinga P, Mayaka S, Condo J. Performance-based financing: the need for more research. Bull
World Health Organ. 2011 Sep 01;89(9):698–9.
Bilimoria KY. Facilitating quality improvement: pushing the pendulum back toward process
measures. JAMA. 2015 Oct 06;314(13):1333–4.


Das J, Kwan A, Daniels B, Satyanarayana S, Subbaraman R, Bergkvist S, et al. Use of standardised


patients to assess quality of tuberculosis care: a pilot, cross-sectional study. Lancet Infect Dis.
2015 Nov;15(11):1305–13
de Jongh TE, Gurol-Urganci I, Allen E, et al. Barriers and enablers to integrating maternal and
child health services to antenatal care in low and middle income countries. BJOG. 2016
Mar;123(4):549–57.
Donabedian A. The quality of care. How can it be assessed? JAMA. 1988;260(12):1743–1748.
Doyle C, Lennox L, Bell D. A systematic review of evidence on the links between patient
experience and clinical safety and effectiveness. BMJ Open. 2013 Jan 03;3(1):e001570
Ferrinho P, Sidat M, Goma F, Dussault G. Task-shifting: experiences and opinions of health
workers in Mozambique and Zambia. Hum Resour Health. 2012 Sep 17;10(1):34.
Franco LM, Franco C, Kumwenda N, Nkhoma W. Methods for assessing quality of provider
performance in developing countries. Int J Qual Health Care. 2002 Dec;14(90001) Suppl 1:17–24.
Godlonton S, Okeke EN. Does a ban on informal health providers save lives? Evidence from
Malawi. J Dev Econ. 2016 Jan 01;118:112–32.
Hart JT. The inverse care law. Lancet. 1971 Feb 27;1(7696):405–12.
Howell EA, Zeitlin J, Hebert PL, et al Association between hospital-level obstetric quality
indicators and maternal and neonatal morbidity. JAMA. 2014 Oct 15;312(15):1531–41.
Institute of Medicine. Crossing the quality chasm: a new health system for the 21st century. Washington: National Academies Press; 2001.
Kasteridis P, Mason AR, Goddard MK, et al. The influence of primary care quality on hospital
admissions for people with dementia in England: a regression analysis. PLoS ONE.
2015;10(3):e0121506.
Kelley E, Hurst J. Health care quality indicators project: conceptual framework paper. OECD
Health Working Papers. Paris: Organisation for Economic Cooperation and Development; 2006.
Kihuba E, Gathara D, Mwinga S, et al. Assessing the ability of health information systems in
hospitals to support evidence-informed decisions in Kenya. Glob Health Action. 2014;7(0):24859.
Kruk ME, Larson E, Twum-Danso NA. Time for a quality revolution in global health. Lancet Glob
Health. 2016 Sep;4(9):e594–6.


Kujawski S, Mbaruku G, Freedman LP, et al. Association between disrespect and abuse during
childbirth and women’s confidence in health facilities in Tanzania. Matern Child Health J. 2015
Oct;19(10):2243–50.
Luck J, Peabody JW, Lewis BL. An automated scoring algorithm for computerized clinical
vignettes: evaluating physician performance against explicit quality criteria. Int J Med Inform.
2006 Oct-Nov;75(10-11):701–7.
McGlynn EA, Adams JL. What makes a good quality measure? JAMA. 2014 Oct
15;312(15):1517–8.
Mphatswe W, Mate KS, et al. Improving public health information: a data quality intervention in
KwaZulu-Natal, South Africa. Bull World Health Organ. 2012;90(3):176–82.
Nesbitt RC, Lohela TJ, Manu A, et al. Quality along the continuum: a health facility assessment
of intrapartum and postnatal care in Ghana. PLoS ONE. 2013;8(11):e81089.
Neuman MD, Wirtalla C, Werner RM. Association between skilled nursing facility quality
indicators and hospital readmissions. JAMA. 2014;312(15):1542–51.
Ng M, Fullman N, Dieleman JL, et al. Effective coverage: a metric for monitoring universal health
coverage. PLoS Med. 2014 Sep;11(9):e1001730.
Nguyen HT, Nguyen TD, et al. Medication errors in Vietnamese hospitals: prevalence, potential
outcome and associated factors. PLoS ONE. 2015;10(9):e0138284.
Nickerson JW, Adams O, Attaran A, et al. Monitoring the ability to deliver care in low- and
middle-income countries: a systematic review of health facility assessment tools. Health Policy
Plan. 2015 Jun;30(5):675–86.
Nicol E, Bradshaw D, Phillips T, Dudley L. Human factors affecting the quality of
Obure CD, Jacobs R, Guinness L, et al; Integra Initiative. Does integration of HIV and sexual and
reproductive health services improve technical efficiency in Kenya and Swaziland? An application
of a two-stage semi parametric approach incorporating quality measures. Soc Sci Med. 2016
Feb;151(151):147–56.
Persad GC, Emanuel EJ. The ethics of expanding access to cheaper, less effective treatments.
Lancet. 2016 Aug 27;388(10047):932–4.
Powell-Jackson T, Mazumdar S, Mills A. Financial incentives in health: new evidence from
India’s Janani Suraksha Yojana. J Health Econ. 2015 Sep;43:154–69.

Souza JP, Gülmezoglu AM, Vogel J, et al. Moving beyond essential interventions for reduction of
maternal mortality (the WHO Multicountry Survey on Maternal and Newborn Health): a cross-
sectional study. Lancet. 2013 May 18;381(9879):1747–55.
Steinhardt LC, Chinkhumba J, Wolkon A, et al. Quality of malaria case management in Malawi:
results from a nationally representative health facility survey. PLoS ONE. 2014;9(2):e89050.
Sustainable Development Goals. 17 goals to transform our world [Internet]. New York: United
Nations; 2015. http://www.un.org/sustainabledevelopment/sustainable-development-goals/
Tzelepis F, Sanson-Fisher RW, et al. Measuring the quality of patient-centered care: why patient-
reported measures are critical to reliable assessment. Patient Prefer Adherence. 2015;9:831–5.
Victora CG, Requejo JH, Barros AJ, et al. Countdown to 2015: a decade of tracking progress for
maternal, newborn, and child survival. Lancet. 2016 May 4;387(10032):2049–59.
Wagenaar BH, Sherr K, Fernandes Q, Wagenaar AC. Using routine health information systems
for well-designed health evaluations in low- and middle-income countries. Health Policy Plan.
2016 Feb;31(1):129–35.
Wilson RM, Michel P, Olsen S, et al.; WHO Patient Safety EMRO/AFRO Working Group. Patient
safety in developing countries: retrospective estimation of scale and nature of harm to patients in
hospital. BMJ. 2012 Mar 13;344:e832.
Witter S, Fretheim A, Kessy FL, Lindahl AK. Paying for performance to improve the delivery of
health interventions in low- and middle-income countries. Cochrane Database Syst Rev. 2012 Feb
15;(2):CD007899.
