used (3). Whether the imprecision itself was questionable was not discussed. In the other study
(2), clinicians were asked to identify suspect results. I assume that the clinicians reacted on the
basis of their experience with the analytical quality that was available. Whether the intrinsic
quality of the test was satisfactory was not discussed. Thus, the weakness of such studies is that
they do not recognize that error-free operation is meaningless when the intrinsic quality is poor.
The same holds true when the clinical accuracy of a test is poor or when the test had been
inappropriately ordered.
The cited articles, indeed, showed that process performance was excellent. However, the
laboratory should also feel responsible for the intrinsic analytical quality it offers and for the
value that a certain test has for the patient. I see danger in that the question of intrinsic analytical
quality and test value will be pushed out of focus by such studies, and interest will be moved to
the pre- and postanalytical phases. Indeed, the latter two might have been given too little
attention in the past. Nevertheless, analytical quality should stay in focus, because it is the most
important value the laboratory can offer. I feel it more urgent to locate the problem areas in the
laboratory than to demonstrate that, in general, everything is perfect. Otherwise, old statements
will come back in nice new clothes, such as the phrase: we were good, we are good, we will be
even better in the future, and we only have to sell ourselves better.
Many of the problem areas are, in fact, known. Among them are measurements of free hormones
or steroid hormones at low concentrations. In addition, many analytes are not unequivocally
defined, and it is often not known what is really measured. Think, for example, of glycated
polypeptides. Different tests give different answers, with the consequence that common reference
intervals or cutoff values cannot be used. This will become a serious problem in the future
because of the need for unified treatment strategies and the introduction of expert systems. For
the same reason, standardization will become a major issue; in fact, it has not yet been achieved
in many areas. Knowledge about internal quality control is still far from optimal and might even
diminish in the future because of industry promises that the new systems have built-in quality
control with no need for attention by the user. Reaction patterns when quality control rules are
violated are often overly simplistic. For example, many people recommend remeasuring the
control and, when it falls within limits again, continuing with patient specimens.
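To illustrate why that reaction pattern is too simple, consider a minimal control-rule check in code. This is only a sketch, not a full Westgard-rule implementation: it distinguishes a warning (result beyond 2 SD of the control target) from a rejection (beyond 3 SD), where simply remeasuring a warning-level control until it "passes" would hide a real shift. The target mean and SD values are hypothetical.

```python
# Minimal sketch of a control-rule check (not a full Westgard scheme):
# flag a control result against the 2 SD warning and 3 SD rejection
# limits established for the control material.

def classify_control(result, mean, sd):
    """Return 'accept', 'warning' (beyond 2 SD), or 'reject' (beyond 3 SD)."""
    deviation = abs(result - mean)
    if deviation > 3 * sd:
        return "reject"
    if deviation > 2 * sd:
        return "warning"
    return "accept"

# Hypothetical control material: target mean 100, SD 2.
print(classify_control(101.5, 100, 2))  # accept
print(classify_control(104.5, 100, 2))  # warning
print(classify_control(107.0, 100, 2))  # reject
```

A sound response to a "warning" would examine the pattern across runs and levels, not just repeat the measurement until it happens to fall inside the limits.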
Modern quality management, on the other hand, goes far beyond assessment of whether current
processes are correctly performed. Its strength is its ability to disclose the weak parts of the
overall process and to estimate the value of the process itself. However, this can be effective only
when all input elements are checked for validity. In this view, modern quality management
should assess actual quality on the basis of specifications for desired quality. Furthermore, it
should provide tools that allow practitioners to anticipate future quality needs at an early stage.
Modern quality management is much more than the investigation of error rates and the effects
thereof. The latter is valuable, but nowadays the more important problem for laboratories is to
demonstrate that their services are useful for patient management. The primary task is not to
prove that the measurements do no harm (which directly provokes concern that they are of no
use either) but to demonstrate their benefits for the patient. Modern quality management should,
therefore, refocus the laboratory on, for example, test selection. This needs another way of
thinking, one that is primarily focused on the clinical utility of measurements. An exemplary
article that demonstrates this kind of thinking was recently published by Hammond (4). He
applied utility analysis to the question of measuring (or not) glucose for the early identification
of diabetes. The completeness of the input data in that article is striking. Among the data are
knowledge of actual analytical quality, knowledge of the biological variation of glucose in
healthy and diabetic subjects, knowledge of the prevalence of diabetes mellitus, and a decision
threshold for glucose. On the basis of these data, a decision theoretical analysis is performed to
answer the question of whether to test or not to test. I think that this kind of article is needed
in Clinical Chemistry to demonstrate the real benefits of modern quality management for the
laboratory and the patient.
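The structure of such a decision-theoretic analysis can be sketched in a few lines of code. All numbers below are illustrative assumptions, not data from Hammond's article: an assumed prevalence of undiagnosed diabetes, assumed sensitivity and specificity of glucose measurement at a chosen decision threshold, and arbitrary utilities for the four outcomes. The point is the shape of the calculation, which weighs the benefit of detected cases against the costs of missed cases and false alarms.

```python
# Toy expected-utility analysis of "to test or not to test".
# Every number here is an assumption for illustration only.

prevalence = 0.05      # assumed prevalence of undiagnosed diabetes
sensitivity = 0.90     # assumed P(positive test | diseased)
specificity = 0.95     # assumed P(negative test | not diseased)

# Assumed utilities in arbitrary units: benefit of early detection,
# cost of a missed case, cost of a false alarm (work-up, anxiety).
u_true_pos = 10.0
u_false_neg = -20.0
u_false_pos = -2.0
u_true_neg = 0.0

def expected_utility_of_testing():
    p_tp = prevalence * sensitivity
    p_fn = prevalence * (1 - sensitivity)
    p_fp = (1 - prevalence) * (1 - specificity)
    p_tn = (1 - prevalence) * specificity
    return (p_tp * u_true_pos + p_fn * u_false_neg +
            p_fp * u_false_pos + p_tn * u_true_neg)

def expected_utility_of_not_testing():
    # Without testing, every diseased subject goes undetected.
    return prevalence * u_false_neg

print(expected_utility_of_testing())
print(expected_utility_of_not_testing())
```

With these assumed inputs, testing has the higher expected utility; changing the analytical quality (and hence sensitivity and specificity at the threshold) or the prevalence can reverse the conclusion, which is exactly why the completeness of the input data matters.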
==================
1. Check sheet
The check sheet is a form (document) used to collect data
in real time at the location where the data is generated.
The data it captures can be quantitative or qualitative.
When the information is quantitative, the check sheet is
sometimes called a tally sheet.
The defining characteristic of a check sheet is that data
are recorded by making marks ("checks") on it. A typical
check sheet is divided into regions, and marks made in
different regions have different significance. Data are
read by observing the location and number of marks on
the sheet.
Check sheets typically employ a heading that answers the
Five Ws: who, what, where, when, and why.
2. Control chart
Control charts, also known as Shewhart charts
(after Walter A. Shewhart) or process-behavior
charts, in statistical process control are tools used
to determine if a manufacturing or business
process is in a state of statistical control.
If analysis of the control chart indicates that the
process is currently under control (i.e., is stable,
with variation only coming from sources common
to the process), then no corrections or changes to
process control parameters are needed or desired.
In addition, data from the process can be used to
predict the future performance of the process. If
the chart indicates that the monitored process is
not in control, analysis of the chart can help
determine the sources of variation, as this will
result in degraded process performance.[1] A
process that is stable but operating outside of
desired (specification) limits (e.g., scrap rates
may be in statistical control but above desired
limits) needs to be improved through a deliberate
effort to understand the causes of current
performance and fundamentally improve the
process.
The control chart is one of the seven basic tools of
quality control.[3] Typically control charts are
used for time-series data, though they can also be used for other data that allow comparable groupings.
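The core arithmetic of a Shewhart chart is short: estimate a center line and ±3-sigma control limits from in-control historical data, then flag new points outside the limits. This sketch uses the plain sample standard deviation for simplicity; real individuals charts usually estimate sigma from the average moving range. The data values are made up.

```python
# Minimal Shewhart-chart sketch: center line and 3-sigma limits from
# historical in-control data, then flag out-of-control points.
# Sigma is estimated with the plain sample SD for simplicity.

from statistics import mean, stdev

history = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.1]  # in-control data
center = mean(history)
sigma = stdev(history)
ucl = center + 3 * sigma  # upper control limit
lcl = center - 3 * sigma  # lower control limit

new_points = [10.0, 10.3, 11.2]
out_of_control = [x for x in new_points if not (lcl <= x <= ucl)]
print(f"CL={center:.2f}, LCL={lcl:.2f}, UCL={ucl:.2f}")
print("out of control:", out_of_control)
```

Note the distinction drawn in the text: a point inside these limits is in statistical control, which says nothing about whether the limits themselves satisfy the specification the customer or clinician needs.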
3. Pareto chart
A Pareto chart, named after Vilfredo Pareto, is a type
of chart that contains both bars and a line graph, where
individual values are represented in descending order
by bars, and the cumulative total is represented by the
line.
The left vertical axis is the frequency of occurrence,
but it can alternatively represent cost or another
important unit of measure. The right vertical axis is
the cumulative percentage of the total number of
occurrences, total cost, or total of the particular unit of
measure. Because the values are in decreasing order,
the cumulative function is concave. In the classic
example of causes of late arrivals, for instance,
resolving the three most frequent issues would be
sufficient to lower late arrivals by 78%.
The purpose of the Pareto chart is to highlight the
most important among a (typically large) set of
factors. In quality control, it often represents the most
common sources of defects, the highest occurring type
of defect, or the most frequent reasons for customer
complaints, and so on. Wilkinson (2006) devised an
algorithm for producing statistically based acceptance
limits (similar to confidence intervals) for each bar in
the Pareto chart.
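The arithmetic behind a Pareto chart, without the plotting, is sorting and accumulation: order the cause counts descending, compute the cumulative percentage, and identify the "vital few" causes that account for most occurrences. The cause names and counts below are invented for illustration.

```python
# Pareto arithmetic: descending sort, cumulative percentage, and the
# "vital few" causes covering at least 80% of occurrences.
# Cause names and counts are hypothetical.

defect_counts = {"late delivery": 45, "wrong item": 25, "damaged": 15,
                 "billing error": 10, "other": 5}

ordered = sorted(defect_counts.items(), key=lambda kv: kv[1], reverse=True)
total = sum(defect_counts.values())

cumulative = 0
vital_few = []
for cause, count in ordered:
    cumulative += count
    vital_few.append(cause)
    if 100 * cumulative / total >= 80:
        break

print(vital_few)  # the causes to address first
```

The descending bars are `ordered`, the line graph is the running `cumulative` percentage, and concavity follows automatically because each added bar is no taller than the last.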
5. Ishikawa diagram
Ishikawa diagrams (also called fishbone diagrams,
herringbone diagrams, cause-and-effect diagrams, or
Fishikawa) are causal diagrams created by Kaoru
Ishikawa (1968) that show the causes of a specific event.
[1][2] Common uses of the Ishikawa diagram are product
design and quality defect prevention, to identify potential
factors causing an overall effect. Each cause or reason for
imperfection is a source of variation. Causes are usually
grouped into major categories to identify these sources of
variation. The categories typically include
People: Anyone involved with the process
Methods: How the process is performed and the
specific requirements for doing it, such as policies,
procedures, rules, regulations and laws
Machines: Any equipment, computers, tools, etc.
required to accomplish the job
Materials: Raw materials, parts, pens, paper, etc.
used to produce the final product
Measurements: Data generated from the process
that are used to evaluate its quality
Environment: The conditions, such as location,
time, temperature, and culture in which the process
operates
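As a data structure, a fishbone diagram is just a mapping from the major categories above to candidate causes for one effect, which can be rendered as indented text. The specific causes listed here are hypothetical examples for a laboratory setting.

```python
# An Ishikawa diagram as a mapping from major categories to candidate
# causes, rendered as indented text. The causes are hypothetical.

fishbone = {
    "People": ["insufficient training"],
    "Methods": ["outdated procedure"],
    "Machines": ["analyzer drift"],
    "Materials": ["degraded reagent lot"],
    "Measurements": ["uncalibrated pipette"],
    "Environment": ["temperature fluctuations"],
}

effect = "Out-of-range control results"
print(f"Effect: {effect}")
for category, causes in fishbone.items():
    print(f"  {category}")
    for cause in causes:
        print(f"    - {cause}")
```

Each listed cause is a candidate source of variation; the grouping by category is what makes the brainstorming systematic rather than ad hoc.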
6. Histogram method
A histogram is a graphical representation of the
distribution of data. It is an estimate of the probability
distribution of a continuous variable (quantitative
variable) and was first introduced by Karl Pearson.[1] To
construct a histogram, the first step is to "bin" the range of
values -- that is, divide the entire range of values into a
series of small intervals -- and then count how many
values fall into each interval. A rectangle is drawn with
height proportional to the count and width equal to the bin
size, so that rectangles abut each other. A histogram may
also be normalized to display relative frequencies; it then
shows the proportion of cases that fall into each of several
categories, with the sum of the heights equaling 1. The
bins are usually specified as consecutive, non-overlapping
intervals of a variable. The bins (intervals) must be
adjacent and are usually of equal size.[2] The rectangles of
a histogram are drawn so that they touch each other to
indicate that the original variable is continuous.[3]
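The construction procedure described above can be written directly in code: divide the range of values into equal-width, adjacent bins and count how many values fall into each. This is a pure-Python sketch with arbitrary sample data; the maximum value is clamped into the last bin so that the closed upper edge is handled correctly.

```python
# Histogram construction: bin the range into equal-width, adjacent
# intervals and count the values in each. Sample data are arbitrary.

def histogram(values, bin_count):
    lo, hi = min(values), max(values)
    width = (hi - lo) / bin_count
    counts = [0] * bin_count
    for v in values:
        # Clamp the maximum value into the last bin.
        idx = min(int((v - lo) / width), bin_count - 1)
        counts[idx] += 1
    edges = [lo + i * width for i in range(bin_count + 1)]
    return counts, edges

data = [1.2, 1.9, 2.4, 2.5, 3.1, 3.3, 3.7, 4.8, 5.0, 5.0]
counts, edges = histogram(data, 4)
print(counts)       # count per bin
print(sum(counts))  # equals len(data)
```

Normalizing to relative frequencies means dividing each count by `len(data)`, after which the heights sum to 1 as described in the text.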