
Modern quality management

In this file, you can find useful information about modern quality management, such as modern
quality management forms, tools for modern quality management, and modern quality
management strategies. If you need more assistance with modern quality management, please
leave your comment at the end of the file.
Other useful material for modern quality management:
qualitymanagement123.com/23-free-ebooks-for-quality-management
qualitymanagement123.com/185-free-quality-management-forms
qualitymanagement123.com/free-98-ISO-9001-templates-and-forms
qualitymanagement123.com/top-84-quality-management-KPIs
qualitymanagement123.com/top-18-quality-management-job-descriptions
qualitymanagement123.com/86-quality-management-interview-questions-and-answers

I. Contents of modern quality management


==================
Currently, there is a trend in clinical chemistry to assess laboratory quality by so-called quality
management techniques. These techniques enable managers to investigate the quality of
complex processes and allow identification of weak points within these processes. In addition,
they allow the investigation of patient-benefit-related outcome of testing. According to a recent
editorial in this journal (1), the application of these techniques in the clinical laboratory is
expected to yield healthcare that is not only better but cheaper, and much more satisfying to
practice. I agree with that statement, but I was somewhat surprised that it was evoked by two
studies in the same issue of the journal (2)(3) that, in my opinion, do not substantiate these
expectations.
The studies that were cited, and nearly all studies of that kind, come down to the message
that (a) the error frequency in the clinical laboratory is very low (2)(3); (b) most errors occur in
the pre- and postanalytical phases (2); and (c) the vast majority of analytical errors would not
have caused severe patient management problems (2)(3). In short, the reader is convinced that
current analytical quality is excellent. On those grounds, and considering the cost pressure on the
laboratory, nobody can take seriously such statements as "improvement (read: of analytical
quality) should be possible" (3). I do not dispute the value of the cited studies, but their approach
is limited. For analytical quality, they investigated only whether the process had been applied
correctly, and they assessed, in fact, relative quality. For example, in one of the cited articles,
results were classified as unacceptable on the basis of the imprecision of the analytical methods

used (3). Whether the imprecision itself was questionable was not discussed. In the other study
(2), clinicians were asked to identify suspect results. I assume that the clinicians reacted on the
basis of their experience with the analytical quality that was available. Whether the intrinsic
quality of the test was satisfactory was not discussed. Thus, the weakness of such studies is that
they do not recognize that error-free operation is meaningless when the intrinsic quality is poor.
The same holds true when the clinical accuracy of a test is poor or when the test has been
inappropriately ordered.
The cited articles, indeed, showed that process performance was excellent. However, the
laboratory should also feel responsible for the intrinsic analytical quality it offers and for the
value that a certain test has for the patient. I see a danger that the question of intrinsic analytical
quality and test value will be pushed out of focus by such studies and that interest will shift to
the pre- and postanalytical phases. Indeed, the latter two may have been given too little
attention in the past. Nevertheless, analytical quality should stay in focus, because it is the most
important value the laboratory can offer. I feel it is more urgent to locate the problem areas in the
laboratory than to demonstrate that, in general, everything is perfect. Otherwise, old statements
will come back in nice new clothes, such as the phrase: "we were good, we are good, we will be
even better in the future, and we only have to sell ourselves better."
Many of the problem areas are, in fact, known. Among them are measurements of free hormones
or steroid hormones at low concentrations. In addition, many analytes are not unequivocally
defined, and it is often not known what is really measured. Think, for example, of glycated
polypeptides. Different tests give different answers, with the consequence that common reference
intervals or cutoff values cannot be used. This will become a serious problem in the future
because of the need for unified treatment strategies and the introduction of expert systems. For
the same reason, standardization will become a major issue; in fact, it has not yet been achieved
in many areas. Knowledge about internal quality control is still far from optimal and might even
diminish in the future because of industry promises that the new systems have built-in quality
control with no need for attention by the user. Reaction patterns when quality control rules are
violated are often overly simplistic. For example, many people recommend remeasuring the
control and, when it falls within limits again, continuing with patient specimens.
Modern quality management, on the other hand, goes far beyond assessment of whether current
processes are correctly performed. Its strength is its ability to disclose the weak parts of the
overall process and to estimate the value of the process itself. However, this can be effective only
when all input elements are checked for validity. In this view, modern quality management
should assess actual quality on the basis of specifications for desired quality. Furthermore, it
should provide tools that allow practitioners to anticipate future quality needs at an early stage.
Modern quality management is much more than the investigation of error rates and the effects
thereof. The latter is valuable, but nowadays the more important problem for laboratories is to

demonstrate that their services are useful for patient management. The primary task is not to
prove that the measurements do no harm (which directly provokes concern that they are of no
use either) but to demonstrate their benefits for the patient. Modern quality management should,
therefore, refocus the laboratory on, for example, test selection. This needs another way of
thinking, one that is primarily focused on the clinical utility of measurements. An exemplary
article that demonstrates this kind of thinking was recently published by Hammond (4). He
applied utility analysis to the question of measuring (or not) glucose for the early identification
of diabetes. The completeness of the input data in that article is striking. Among the data are
knowledge of actual analytical quality, knowledge of the biological variation of glucose in
healthy and diabetic subjects, knowledge of the prevalence of diabetes mellitus, and a decision
threshold for glucose. On the basis of these data, a decision theoretical analysis is performed to
answer the question of whether to test or not to test. I think that this kind of article is needed
in Clinical Chemistry to demonstrate the real benefits of modern quality management for the
laboratory and the patient.
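
To make the flavor of such a decision-theoretic analysis concrete, the sketch below compares the expected utility of testing versus not testing. It is only a generic illustration, not Hammond's actual model; the prevalence, sensitivity, specificity, and utility values are hypothetical placeholders.

```python
# Illustrative sketch only: a generic expected-utility comparison of
# "test" versus "do not test", in the spirit of the decision-theoretic
# analysis described above. All numbers are hypothetical placeholders,
# not taken from Hammond (4).

def expected_utility_no_test(prevalence, u_untreated_disease, u_healthy_untreated):
    # Without testing, nobody is treated (one possible default policy).
    return (prevalence * u_untreated_disease
            + (1 - prevalence) * u_healthy_untreated)

def expected_utility_test(prevalence, sensitivity, specificity,
                          u_tp, u_fn, u_fp, u_tn):
    # With testing, outcomes depend on the test's sensitivity and specificity,
    # which in turn reflect analytical imprecision, biological variation,
    # and the chosen decision threshold.
    return (prevalence * (sensitivity * u_tp + (1 - sensitivity) * u_fn)
            + (1 - prevalence) * ((1 - specificity) * u_fp + specificity * u_tn))

if __name__ == "__main__":
    prevalence = 0.05                        # hypothetical disease prevalence
    sensitivity, specificity = 0.90, 0.95    # hypothetical test performance
    # Hypothetical utilities on a 0-1 scale (1 = best outcome).
    u_tp, u_fn, u_fp, u_tn = 0.90, 0.40, 0.85, 1.00
    u_untreated_disease, u_healthy_untreated = 0.40, 1.00

    eu_test = expected_utility_test(prevalence, sensitivity, specificity,
                                    u_tp, u_fn, u_fp, u_tn)
    eu_no_test = expected_utility_no_test(prevalence,
                                          u_untreated_disease,
                                          u_healthy_untreated)
    print(f"Expected utility, test:    {eu_test:.3f}")
    print(f"Expected utility, no test: {eu_no_test:.3f}")
    print("Decision:", "test" if eu_test > eu_no_test else "do not test")
```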

==================

II. Quality management tools

1. Check sheet
The check sheet is a form (document) used to collect data
in real time at the location where the data is generated.
The data it captures can be quantitative or qualitative.
When the information is quantitative, the check sheet is
sometimes called a tally sheet.
The defining characteristic of a check sheet is that data
are recorded by making marks ("checks") on it. A typical
check sheet is divided into regions, and marks made in
different regions have different significance. Data are
read by observing the location and number of marks on
the sheet.
Check sheets typically employ a heading that answers the
Five Ws:

Who filled out the check sheet
What was collected (what each check represents, an identifying batch or lot number)
Where the collection took place (facility, room, apparatus)
When the collection took place (hour, shift, day of the week)
Why the data were collected
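
As a rough illustration of the tally-sheet idea, the sketch below (in Python, with hypothetical defect categories) counts one "check" per observed defect and prints a simple tally:

```python
# Minimal sketch of a tally-style check sheet, assuming defect categories
# and observations are recorded as simple strings (all names hypothetical).
from collections import Counter

observations = ["scratch", "dent", "scratch", "misalignment", "scratch"]
tally = Counter(observations)          # one "check" per observed defect

for category, checks in tally.most_common():
    print(f"{category:<15} {'|' * checks}  ({checks})")
```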

2. Control chart
Control charts, also known as Shewhart charts
(after Walter A. Shewhart) or process-behavior
charts, in statistical process control are tools used
to determine if a manufacturing or business
process is in a state of statistical control.
If analysis of the control chart indicates that the
process is currently under control (i.e., is stable,
with variation only coming from sources common
to the process), then no corrections or changes to
process control parameters are needed or desired.
In addition, data from the process can be used to
predict the future performance of the process. If
the chart indicates that the monitored process is
not in control, analysis of the chart can help
determine the sources of variation, as this will
result in degraded process performance.[1] A
process that is stable but operating outside of
desired (specification) limits (e.g., scrap rates
may be in statistical control but above desired
limits) needs to be improved through a deliberate
effort to understand the causes of current
performance and fundamentally improve the
process.
The control chart is one of the seven basic tools of
quality control.[3] Typically, control charts are used for time-series data, though they can
also be used for data that have logical comparability (i.e., you want to compare samples that
were all taken at the same time, or the performance of different individuals); however, the
type of chart used to do this requires consideration.
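
A minimal sketch of the underlying calculation, assuming an individuals-type chart with 3-sigma limits estimated from a baseline period (all measurement values are hypothetical):

```python
# Minimal sketch of a Shewhart-style individuals control chart check.
# Control limits are estimated from a baseline (assumed in-control) period
# and then applied to new measurements; all numbers are hypothetical.
# Production charts usually estimate sigma from moving ranges or subgroups.
import statistics

baseline = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7]    # assumed in-control data
center = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
ucl, lcl = center + 3 * sigma, center - 3 * sigma             # 3-sigma control limits

new_points = [10.0, 10.4, 12.7, 9.9]                          # later measurements
for i, x in enumerate(new_points, start=1):
    status = "OUT OF CONTROL" if not (lcl <= x <= ucl) else "in control"
    print(f"point {i}: {x:.2f}  (limits {lcl:.2f}..{ucl:.2f})  {status}")
```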

3. Pareto chart
A Pareto chart, named after Vilfredo Pareto, is a type
of chart that contains both bars and a line graph, where
individual values are represented in descending order
by bars, and the cumulative total is represented by the
line.
The left vertical axis is the frequency of occurrence,
but it can alternatively represent cost or another
important unit of measure. The right vertical axis is
the cumulative percentage of the total number of
occurrences, total cost, or total of the particular unit of
measure. Because the causes are arranged in decreasing order, the cumulative function is
concave. For example, if the three most frequent causes account for 78% of late arrivals,
then addressing only those three issues is sufficient to reduce late arrivals by 78%.
The purpose of the Pareto chart is to highlight the
most important among a (typically large) set of
factors. In quality control, it often represents the most
common sources of defects, the highest occurring type
of defect, or the most frequent reasons for customer
complaints, and so on. Wilkinson (2006) devised an
algorithm for producing statistically based acceptance
limits (similar to confidence intervals) for each bar in
the Pareto chart.
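
The computation behind a Pareto chart is simple: sort the causes by frequency and accumulate percentages. A minimal sketch, using hypothetical defect categories and counts:

```python
# Minimal sketch of the computation behind a Pareto chart: defect counts
# (hypothetical categories) sorted in descending order with a running
# cumulative percentage, i.e. the bar heights and the cumulative line.
defect_counts = {"late delivery": 42, "wrong item": 25, "damaged": 11,
                 "missing parts": 7, "other": 5}

total = sum(defect_counts.values())
cumulative = 0
for cause, count in sorted(defect_counts.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    print(f"{cause:<15} {count:>4}  cumulative {100 * cumulative / total:5.1f}%")
```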

4. Scatter plot Method


A scatter plot, scatterplot, or scattergraph is a type of
mathematical diagram using Cartesian coordinates to
display values for two variables for a set of data.
The data is displayed as a collection of points, each
having the value of one variable determining the position
on the horizontal axis and the value of the other variable
determining the position on the vertical axis.[2] This kind
of plot is also called a scatter chart, scattergram, scatter
diagram,[3] or scatter graph.
A scatter plot is used when a variable exists that is under
the control of the experimenter. If a parameter exists that
is systematically incremented and/or decremented by the
experimenter, it is called the control parameter or independent
variable and is customarily plotted along the horizontal
axis. The measured or dependent variable is customarily
plotted along the vertical axis. If no dependent variable
exists, either type of variable can be plotted on either axis
and a scatter plot will illustrate only the degree of
correlation (not causation) between two variables.
A scatter plot can suggest various kinds of correlations
between variables with a certain confidence interval. For
example, when plotting weight against height, weight would be
on the x-axis and height on the y-axis. Correlations may be
positive (rising), negative (falling), or null (uncorrelated).
If the pattern of dots slopes from lower left to upper right,
it suggests a positive correlation between the variables
being studied. If the pattern of dots slopes from upper left
to lower right, it suggests a negative correlation. A line of
best fit (alternatively called 'trendline') can be drawn in
order to study the correlation between the variables. An
equation for the correlation between the variables can be
determined by established best-fit procedures. For a linear
correlation, the best-fit procedure is known as linear
regression and is guaranteed to generate a correct solution
in a finite time. No universal best-fit procedure is
guaranteed to generate a correct solution for arbitrary
relationships. A scatter plot is also very useful when we
wish to see how two comparable data sets agree with each
other. In this case, an identity line, i.e., a y = x or
1:1 line, is often drawn as a reference. The more the two
data sets agree, the more the scatters tend to concentrate in
the vicinity of the identity line; if the two data sets are
numerically identical, the scatters fall on the identity line
exactly.
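
As an illustration of the linear best-fit mentioned above, the following sketch computes an ordinary least-squares line by hand for a small hypothetical height/weight data set:

```python
# Minimal sketch of the linear best-fit (least-squares regression)
# mentioned above, computed by hand for a small hypothetical data set
# (height in cm as x, weight in kg as y).
heights = [150, 160, 165, 170, 180, 185]
weights = [52, 58, 63, 68, 77, 82]

n = len(heights)
mean_x = sum(heights) / n
mean_y = sum(weights) / n
# slope = covariance(x, y) / variance(x); intercept follows from the means
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(heights, weights))
         / sum((x - mean_x) ** 2 for x in heights))
intercept = mean_y - slope * mean_x

print(f"best-fit line: weight = {slope:.2f} * height + {intercept:.2f}")
```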

5.Ishikawa diagram
Ishikawa diagrams (also called fishbone diagrams,
herringbone diagrams, cause-and-effect diagrams, or
Fishikawa) are causal diagrams created by Kaoru
Ishikawa (1968) that show the causes of a specific event.
[1][2] Common uses of the Ishikawa diagram are product
design and quality defect prevention, to identify potential
factors causing an overall effect. Each cause or reason for
imperfection is a source of variation. Causes are usually
grouped into major categories to identify these sources of
variation. The categories typically include
People: Anyone involved with the process
Methods: How the process is performed and the
specific requirements for doing it, such as policies,
procedures, rules, regulations and laws
Machines: Any equipment, computers, tools, etc.
required to accomplish the job
Materials: Raw materials, parts, pens, paper, etc.
used to produce the final product
Measurements: Data generated from the process
that are used to evaluate its quality
Environment: The conditions, such as location,
time, temperature, and culture in which the process
operates
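
One simple way to capture a fishbone analysis in software is as a mapping from the standard categories to candidate causes. A minimal sketch, with a hypothetical effect and hypothetical causes:

```python
# Minimal sketch of representing an Ishikawa (fishbone) diagram as a
# mapping from the standard cause categories to candidate causes; the
# effect and all listed causes are hypothetical examples.
fishbone = {
    "effect": "delayed test results",
    "causes": {
        "People":       ["insufficient training", "shift handover gaps"],
        "Methods":      ["unclear rerun policy"],
        "Machines":     ["analyzer downtime"],
        "Materials":    ["reagent lot variability"],
        "Measurements": ["poorly defined acceptance criteria"],
        "Environment":  ["temperature fluctuations in the lab"],
    },
}

print(f"Effect: {fishbone['effect']}")
for category, causes in fishbone["causes"].items():
    print(f"  {category}:")
    for cause in causes:
        print(f"    - {cause}")
```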

6. Histogram method
A histogram is a graphical representation of the
distribution of data. It is an estimate of the probability
distribution of a continuous variable (quantitative
variable) and was first introduced by Karl Pearson.[1] To
construct a histogram, the first step is to "bin" the range of
values -- that is, divide the entire range of values into a
series of small intervals -- and then count how many
values fall into each interval. A rectangle is drawn with
height proportional to the count and width equal to the bin
size, so that rectangles abut each other. A histogram may
also be normalized to display relative frequencies. It then
shows the proportion of cases that fall into each of several
categories, with the sum of the heights equaling 1. The
bins are usually specified as consecutive, non-overlapping
intervals of a variable. The bins (intervals) must be
adjacent, and are usually of equal size.[2] The rectangles of a
histogram are drawn so that they touch each other to
indicate that the original variable is continuous.[3]
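
A minimal sketch of the binning step, assuming equal-width bins over a small hypothetical data set:

```python
# Minimal sketch of histogram construction: bin a set of values into
# equal-width intervals and count how many values fall into each bin.
# The data and the number of bins are hypothetical.
values = [2.1, 2.4, 3.0, 3.2, 3.3, 4.1, 4.5, 4.7, 5.0, 5.8, 6.2]
num_bins = 4

lo, hi = min(values), max(values)
width = (hi - lo) / num_bins
counts = [0] * num_bins
for v in values:
    # The last bin is closed on the right so the maximum value is included.
    index = min(int((v - lo) / width), num_bins - 1)
    counts[index] += 1

for i, count in enumerate(counts):
    left = lo + i * width
    print(f"[{left:.2f}, {left + width:.2f}): {'#' * count} ({count})")
```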

III. Other topics related to Modern quality management (pdf download)
quality management systems
quality management courses
quality management tools
iso 9001 quality management system
quality management process
quality management system example
quality system management
quality management techniques
quality management standards
quality management policy
quality management strategy

quality management books
