
Quality Control and Quality Assurance in a Clinical Chemistry Laboratory

Annabel Lapuz-Carungin, MD, RMT, PT, RN, RM, MAN


Types of Analysis done in a Chemistry Laboratory
Qualitative
Qualitative analysis seeks to establish the presence of a given element or inorganic compound in a
sample.
Qualitative organic analysis seeks to establish the presence of a given functional group or organic
compound in a sample.

Quantitative
Quantitative analysis seeks to establish the amount of a given element or compound in a sample.
Laboratory quality control
Designed to detect, reduce, and correct deficiencies in a laboratory's internal analytical process prior
to the release of patient results and improve the quality of the results reported by the laboratory.
Quality control is a measure of precision or how well the measurement system reproduces the same
result over time and under varying operating conditions.
Laboratory quality control material is usually:
1. Run at the beginning of each shift
2. After an instrument is serviced
3. When reagent lots are changed
4. After calibration
5. When patient results seem inappropriate
Internal standard
An internal standard in analytical chemistry is a chemical substance that is added in a constant
amount to samples, the blank and calibration standards in a chemical analysis.
This substance can then be used for calibration by plotting the ratio of the analyte (substance to be
measured) signal to the internal standard signal as a function of the analyte concentration of the
standards. This is done to correct for the loss of analyte during sample preparation or sample inlet.
The internal standard is a compound that matches the chemical species of interest as closely as
possible without being identical to it; in the ideal case, the effects of sample preparation on the
internal standard's signal, relative to the amount of each species, are the same as on the signal(s)
from the species of interest.
Adding known quantities of analyte(s) of interest is a distinct technique called standard addition,
which is performed to correct for matrix effects.
Internal Standards or controls
Furthermore, standards or controls should have the same matrix as patient specimens, including
viscosity, turbidity, composition, and color.
They should be simple to use, stable for long periods of time, and available in quantities large
enough to last at least one year. Liquid controls are more convenient than lyophilized controls
because they do not have to be reconstituted, which minimizes pipetting error.
Calibration
Calibration is the validation of specific measurement techniques and equipment.
At the simplest level, calibration is a comparison between two measurements: one of known
magnitude or correctness, made or set with one device, and another made in as similar a way as
possible with a second device.
Calibration
The device with the known or assigned correctness is called the standard.
The second device is the unit under test (UUT), test instrument (TI) or any of several other names for
the device being calibrated.
This process establishes the calibration of the second device, with important limitations.
When should I recalibrate and what criteria should I use?
Recalibration criteria:
1. Manufacturer's recommendation
2. If the calibration is expired
3. Restore previous performance
When should I recalibrate and what criteria should I use?
CLIA 88 requires that laboratories recalibrate an analytical method at least every six months.
Manufacturers will recommend a calibration frequency determined by measurement system stability
and will communicate in product inserts specific criteria for mandatory recalibration of instrument
system. These may include:
1. Reagent lot change
2. Major component replacement
3. Instrument maintenance
4. New software installation

Standard Curve
A standard method for analysis of concentration involves the creation of a calibration curve.
This allows for determination of the amount of a chemical in a material by comparing the results of
an unknown sample to those of a series of known standards.
If the concentration of element or compound in a sample is too high for the detection range of the
technique, it can simply be diluted in a pure solvent.

Standard Curve
If the amount in the sample is below an instrument's range of measurement, the method of standard
addition can be used.
In this method a known quantity of the element or compound under study is added, and the
difference between the concentration observed and the concentration added is the amount actually
in the sample.
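The single-addition arithmetic described above can be sketched in a few lines (hypothetical numbers; real standard-addition work often uses several additions and extrapolation):

```python
# Sketch of the simple single-addition calculation described above
# (illustrative numbers): spike the sample with a known concentration,
# re-measure, and subtract the added amount from the observed total.

def concentration_by_addition(observed_after_spike, added):
    """Amount originally in the sample = observed total - amount added."""
    return observed_after_spike - added

# Hypothetical: after adding 5.0 mg/dL the instrument reads 6.2 mg/dL,
# so the sample itself contained about 1.2 mg/dL.
print(concentration_by_addition(6.2, 5.0))
```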
Quality assurance
It refers to planned and systematic production processes that provide confidence in a product's
suitability for its intended purpose.
It is a set of activities intended to ensure that products (goods and/or services) satisfy customer
requirements in a systematic, reliable fashion.
Unfortunately, QA cannot absolutely guarantee the production of quality products, but it makes this
more likely.

Quality assurance
QA includes regulation of the quality of raw materials, assemblies, products and components; services
related to production; and management, production and inspection processes.
It is important to realize also that quality is determined by the intended users, clients or customers,
not by society in general: it is not the same as 'expensive' or 'high quality'. Even lowly bottom-of-the-
range goods can be considered quality items if they meet a market need.
Quality assurance versus quality control
Quality control emphasizes testing and blocking the release of defective products, while quality
assurance is about improving and stabilizing production and associated processes to avoid, or at
least minimize, the issues that led to the defects in the first place.
Quality Assessment
Quality assessment (also known as proficiency testing) is a means to determine the quality of the
results generated by the laboratory.
Quality Assessment may be external or internal, examples of external programs include NEQAS,
HKMTA, and Q-probes.
Terminology
REAGENT
It is "a test substance that is added to a system in order to bring about a reaction or to see
whether a reaction occurs".
Examples of such analytical reagents include Fehling's reagent and Tollens' reagent.
Terminology
When purchasing or preparing chemicals, "REAGENT-GRADE" describes chemical substances of
sufficient purity for use in chemical analysis, chemical reactions or physical testing.

Variables that affect the quality of results


 The educational background and training of the laboratory personnel
 The condition of the specimens
 The controls used in the test runs
 Reagents
 Equipment
 The interpretation of the results
 The transcription of results
 The reporting of results
The educational background and training of the laboratory personnel
Skills of Phlebotomist
Knowledge of patient preparation, the amount of sample to be used, the test tube to be used, and
the technical procedure for drawing a blood sample (e.g., avoiding prolonged tourniquet application)
The condition of the specimens
Skills of the laboratorian who analyzes the samples received:
 Verification of the sample label
 Checking the quantity and quality of the sample, including whether it arrived within the
specified time from extraction and whether proper transportation conditions were maintained
 Ensuring that reagents and equipment are calibrated and pass QC
Errors in measurement
True value - this is an ideal concept which cannot be achieved.
Accepted true value - the value approximating the true value, the difference between the two values
is negligible.
Error - the discrepancy between the result of a measurement and the true (or accepted true value).
Sources of error
 Input data required - such as standards used, calibration values, and values of
physical constants.
 Inherent characteristics of the quantity being measured - e.g. CFT and HAI titre.
 Instruments used - accuracy, repeatability.
 Observer fallibility - reading errors, blunders, equipment selection, analysis and
computation errors.
 Environment - any external influences affecting the measurement.
 Theory assumed - validity of mathematical methods and approximations.
Random Error
 An error which varies in an unpredictable manner, in magnitude and sign, when
a large number of measurements of the same quantity are made under
effectively identical conditions.
 Random errors create a characteristic spread of results for any test method and
cannot be accounted for by applying corrections. Random errors are difficult to
eliminate but repetition reduces the influences of random errors.
 Examples of random errors include errors in pipetting and changes in incubation
period. Random errors can be minimized by training, supervision and adherence
to standard operating procedures.
Systematic Error
 An error which, in the course of a number of measurements of the same value
of a given quantity, remains constant when measurements are made under the
same conditions, or varies according to a definite law when conditions change.
 Systematic errors create a characteristic bias in the test results and can be
accounted for by applying a correction.
 Systematic errors may be induced by factors such as variations in incubation
temperature, blockage of plate washer, change in the reagent batch or
modifications in testing method.
How many levels of QC do I need to use, and how many times a day should I run QC?
For tests of moderate complexity, CLIA states that laboratories comply with the more stringent of the
following requirements:
Perform and document control procedures using at least two levels of control material each day of
testing.
Follow the manufacturer's instructions for quality control.
The College of American Pathologists (CAP) requirements for accredited clinical laboratories are
aligned with CLIA; CAP requires more than one level of control with each analytical run. An analytical
run is defined as the period of time over which the manufacturer declares the system to be stable
(e.g., if a product insert states that the system is stable for 24 hours, QC should be run at least every
24 hours).
Note: Some specialty and subspecialty areas of the clinical laboratory may have more stringent QC
requirements (e.g., blood gas and hematology laboratories). Running more control replicates will
increase the probability of detecting a true change in the measurement system. More levels of
controls run simultaneously will result in an increased ability to detect shifts; however, this will also
increase the false rejection rate when QC limits are applied to individual replicates.
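The trade-off described above can be made concrete. Assuming independent, Gaussian-distributed control results and strict ±2SD limits applied to each replicate individually, the chance that an in-control run is falsely rejected grows with the number of control results examined (a back-of-the-envelope sketch, not a CLIA or CAP calculation):

```python
# Each in-control Gaussian result falls inside ±2SD with probability ~0.9544,
# so a run passes only if every replicate does; the false rejection rate is
# the complement of all replicates passing.

def false_rejection_rate(n_controls, p_inside=0.9544):
    """Probability that at least one of n in-control results exceeds ±2SD."""
    return 1 - p_inside ** n_controls

for n in (1, 2, 4, 6):
    print(n, round(false_rejection_rate(n), 4))
```

With a single control the rate is about 4.6 percent; with six controls examined against individual ±2SD limits it approaches a quarter of all in-control runs.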
Interpretation of quality control data involves both graphical and statistical methods. Quality control
data is most easily visualized using a Levey-Jennings chart. The dates of analyses are plotted along the
X-axis and control values are plotted on the Y-axis. The mean and one, two, and three standard
deviation limits are also marked on the Y-axis. Inspecting the pattern of plotted points provides a
simple way to detect increased random error and shifts or trends in calibration.
How many data points should I collect before I determine my QC limits?
The National Committee for Clinical Laboratory Standards (NCCLS) describes several methods for
estimating the mean and precision for a control level. Better estimates of both mean and precision are
reached when more data is collected. When data is collected under operating conditions that are not
typical for the laboratory (e.g., when all data points are collected over a short period of time), the
estimate of mean and standard deviation may not be as accurate in predicting the centering and
precision that may be seen in the future.
The following data collection methods are described by NCCLS:
N > 20 (20 or more runs)
NCCLS C24-A recommends that, at a minimum, 20 data points from 20 or more separate runs be
obtained to determine an estimate of mean and precision.
Provisional Ranges N < 20 (Fewer than 20 runs)
If 20 runs cannot be completed, a minimum of seven runs (three replicates per run) may be used to
set provisional ranges. A mean and standard deviation can be calculated and used to set provisional
ranges. The mean and limits derived from the abbreviated data collection should be replaced by a
new mean and limits calculated when data from 20 separate runs becomes available.
N=80 (40 Runs)
More thorough estimating methods for centering and precision are also discussed in NCCLS EP-5. The
most detailed NCCLS-recommended protocol involves running an assay for 20 days, collecting 80 data
points. Each level of material is run twice daily in replicates of two. The collected data can then be
entered into NCCLS-provided software to determine estimates of within run, between run, between
day, and total precision as well as an estimate of the mean.
N=40 (20 Runs)
This abbreviated version of the N=80 data collection is also discussed by NCCLS-EP5. It makes use of
only one run per day of two replicates for a total of 40 data points.
Sources of Variation
If any data collection is to be representative of future system performance, sources of variation that
are expected and determined acceptable may be included during the data collection period. These
may include:
Multiple stored calibrations
Multiple reagent lots
Multiple calibrator lots
Multiple bottles of control material (especially with lyophilized material)
Multiple operators
Data points collected at different times of day
 
How do I determine the mean, standard deviation, and coefficient of variation?
Consider the following data set: 3.6, 3.1, 2.7, 2.9, 3.4
Mean
The mean, or sample average, provides a measure of centering.
Standard Deviation
The first mathematical manipulation is to sum (Σ) the individual points and calculate the mean or
average.
The second manipulation is to subtract the mean value from each control value. This term, shown as X
value - Xbar, is called the difference score. Individual difference scores can be positive or negative and
the sum of the difference scores is always zero.
The third manipulation is to square the difference score to make all the terms positive.
Next the squared difference scores are summed.
Finally, the predictable dispersion or standard deviation (SD or s) can be calculated.
Degrees of freedom
The "n-1" term in the above expression represents the degrees of freedom (df). Loosely interpreted,
the term "degrees of freedom" indicates how much freedom or independence there is within a group
of numbers. For example, if you were to sum four numbers to get a total, you have the freedom to
select any numbers you like. However, if the sum of the four numbers is stipulated to be 92, the
choice of the first 3 numbers is fairly free (as long as they are low numbers), but the last choice is
restricted by the condition that the sum must equal 92. For example, if the first three numbers chosen
at random are 28, 18, and 36, these numbers add up to 82, which is 10 short of the goal. For the last
number there is no freedom of choice. The number 10 must be selected to make the sum come out to
92. Therefore, the degrees of freedom have been limited by 1 and only n-1 degrees of freedom
remain. In the SD formula, the degrees of freedom are n minus 1 because the mean of the data has
already been calculated (which imposes one condition or restriction on the data set).
Coefficient of variation
%CV normalizes the variability of a data set by calculating the SD as a percent of the mean. The %CV is
helpful in comparing precision differences that exist among assays and assay methods.
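Applied to the data set given earlier (3.6, 3.1, 2.7, 2.9, 3.4), the manipulations described above look like this in Python:

```python
# Mean, standard deviation and %CV computed step by step, following the
# manipulations described in the text and using n-1 degrees of freedom.
import math

data = [3.6, 3.1, 2.7, 2.9, 3.4]

mean = sum(data) / len(data)                       # sample average
diff_scores = [x - mean for x in data]             # X - Xbar; always sums to zero
squared = [d ** 2 for d in diff_scores]            # squaring makes all terms positive
sd = math.sqrt(sum(squared) / (len(data) - 1))     # divide by n-1 (degrees of freedom)
cv = sd / mean * 100                               # SD expressed as a percent of the mean

print(round(mean, 2), round(sd, 3), round(cv, 1))  # → 3.14 0.365 11.6
```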
How should I determine my QC limits?
When analyzing data in the clinical setting, the normal or Gaussian distribution (appearing as a
bell-shaped curve) is the most frequently used distribution. Using the true standard deviation, statistical
theory shows that 99.73 percent of the data will fall within +/-3 SDs of the mean, 95.44 percent will
fall within +/-2 SDs of the mean, and 68.26 percent will fall within +/-1 SD of the mean. This guide
assumes the use of the true standard deviation when evaluating QC limits, but it should be noted that
SD estimates from actual data may vary from the true standard deviation.
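The coverage figures quoted above can be checked against the standard normal distribution, for which the fraction within ±k SD is erf(k/√2); the computed values (68.27, 95.45, 99.73 percent when rounded) match the quoted figures, which are truncated rather than rounded:

```python
# Gaussian coverage within +/-k standard deviations via the error function:
# P(|z| < k) = erf(k / sqrt(2)) for a standard normal variable z.
import math

def coverage(k_sd):
    """Fraction of a Gaussian population within ±k standard deviations."""
    return math.erf(k_sd / math.sqrt(2))

for k in (1, 2, 3):
    print(k, round(coverage(k) * 100, 2))
```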
Gaussian curve
How do I define what is out-of-control and how do I identify trends and shifts?
CLIA 88 does not explicitly recommend a method for determining when a system is out-of-control, but
this federal law does explain that laboratories must establish written procedures for monitoring and
evaluating analytical testing processes. A couple of the more recognized methods are described
below.
2 SD or 3 SD Limits
With strict +/-2 or +/-3 SD limits, an out-of-control condition is marked by one QC value falling
outside of the 2 or 3 SD limit. A +/-2 SD limit offers a method very sensitive to detecting a change, but
also presents a very real problem for a laboratory — a high rate of false rejection. The difficulties
posed by a high false rejection rate are discussed in the earlier questions "What is false rejection?"
and "How should I determine my QC limits?"
Traditional Westgard Multi-Rule Procedure
The Westgard Multi-Rule Procedure (Westgard 1981) is designed to improve the power of quality
control methods using +/-3 SD limits to detect trends or shifts. While maintaining a low false rejection
rate, Westgard's procedure examines individual values and determines the status of the measuring
system.
Proper use of Westgard Multi-Rule Procedures can substantially reduce the incidence of false
rejection by as much as 88 percent when compared to strict +/-2 SD limits.
Warning rules
 Warning 1-2s: violated if a single QC value differs from the mean by more than 2SD. This is an
event likely to occur normally in less than 5% of cases.
 Warning 2-2s: detects systematic errors; violated when two consecutive QC values exceed the
same limit (mean ± 2SD) on the same side of the mean.
 Warning 4-1s: violated if four consecutive QC values exceed the same limit (mean ± 1SD);
this may indicate the need to perform instrument maintenance or reagent calibration.
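A minimal sketch of these three warning rules as code (Python; function names and the example mean/SD are our own, not part of the Westgard procedure itself):

```python
# Sketch of the three warning rules above as checks over a sequence of QC
# values, given the established mean and SD of the control material.

def warning_1_2s(values, mean, sd):
    """1-2s: the latest QC value is more than 2 SD from the mean."""
    return abs(values[-1] - mean) > 2 * sd

def warning_2_2s(values, mean, sd):
    """2-2s: two consecutive values beyond 2 SD on the same side of the mean."""
    if len(values) < 2:
        return False
    last2 = values[-2:]
    return all(v - mean > 2 * sd for v in last2) or \
           all(mean - v > 2 * sd for v in last2)

def warning_4_1s(values, mean, sd):
    """4-1s: four consecutive values beyond 1 SD on the same side of the mean."""
    if len(values) < 4:
        return False
    last4 = values[-4:]
    return all(v - mean > sd for v in last4) or \
           all(mean - v > sd for v in last4)

mean, sd = 100.0, 5.0
print(warning_1_2s([100, 111], mean, sd))            # one value past ±2SD → True
print(warning_2_2s([111, 112], mean, sd))            # two past +2SD, same side → True
print(warning_4_1s([106, 107, 108, 106], mean, sd))  # four past +1SD → True
```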
Westgard Multi Rule procedure
Follow-up action in the event of a violation
There are three options as to the action to be taken in the event of a violation of a Westgard rule:
 Accept the test run in its entirety - this usually applies when only a warning rule
is violated.
 Reject the whole test run - this applies only when a mandatory rule is violated.
 Enlarge the greyzone, and thus the re-test range, for that particular assay run - this
option can be considered in the event of a violation of either a warning or
mandatory rule.
Variance
How would I document my QC?
The control chart, also known as the 'Shewhart chart' or 'process-behavior chart', is a statistical tool
intended to assess the nature of variation in a process and to facilitate forecasting and management.
The control chart is one of the seven basic tools of quality control, alongside the histogram, check
sheet, Pareto chart, cause-and-effect diagram, flowchart and scatter diagram. Control charts prevent
unnecessary process adjustments, provide information about process capability, provide diagnostic
information, and are a proven technique for improving productivity.
Levey-Jennings chart
A graph on which quality control data are plotted to give a visual indication of whether a laboratory
test is working well. The distance from the mean is measured in standard deviations (SD). Lines run
across the graph at the mean, as well as at one, two and sometimes three standard deviations on
either side of the mean. This makes it easy to see how far off a result was. Rules, such as the
Westgard rules, can be applied to decide whether the patient results from the run in which the
control was analyzed can be released, or whether they need to be rerun.
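As a text-only illustration of reading a Levey-Jennings chart, the helper below (hypothetical names and values) classifies a QC value by how many SDs it sits from the established mean, mirroring what the plotted bands show visually:

```python
# Classify a QC value the way a Levey-Jennings chart is read: within 2 SD is
# in control, between 2 and 3 SD is a warning zone, beyond 3 SD is a reject.

def lj_flag(value, mean, sd):
    """Return a textual flag for a QC value given the established mean and SD."""
    z = abs(value - mean) / sd
    if z > 3:
        return "reject (>3SD)"
    if z > 2:
        return "warning (2-3SD)"
    return "in control"

# Hypothetical control with an established mean of 100 and SD of 4:
mean, sd = 100.0, 4.0
for v in (101.0, 93.0, 109.5, 113.0):
    print(v, lj_flag(v, mean, sd))
```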
Shewhart Control Charts
A Shewhart control chart depends on the use of IQC specimens and is developed in the following
manner:
 Run the IQC specimen for at least 20 assay runs and record the O.D./cut-off value or
antibody titre (whichever is applicable).
 Calculate the mean and standard deviation (s.d.).
 Make a plot with the assay run on the x-axis, and O.D./cut-off or antibody titre
on the y axis.
 Draw the following lines across the y-axis: mean, -3, -2, -1, +1, +2, and +3 s.d.
 Plot the O.D./cut-off obtained for the IQC specimen for subsequent assay runs
 Major events such as changes in the batch no. of the kit and instruments used
should be recorded on the chart.
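The steps above can be sketched as a small helper that turns a baseline of 20 or more IQC results into the lines drawn on the chart (illustrative values, not real assay data):

```python
# Compute the mean and ±1/2/3 s.d. lines for a Shewhart chart from a
# baseline of IQC results, using the n-1 denominator for the SD.
import math

def control_lines(iqc_values):
    """Mean and ±1/2/3 SD lines derived from the baseline IQC runs."""
    n = len(iqc_values)
    mean = sum(iqc_values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in iqc_values) / (n - 1))
    return {"mean": mean,
            **{f"{k:+d}sd": mean + k * sd for k in (-3, -2, -1, 1, 2, 3)}}

# Hypothetical baseline of 20 O.D./cut-off values centred near 1.0:
baseline = [1.00, 0.98, 1.02, 1.01, 0.99, 1.03, 0.97, 1.00, 1.01, 0.99,
            1.02, 0.98, 1.00, 1.01, 0.99, 1.03, 0.97, 1.00, 1.02, 0.98]
lines = control_lines(baseline)
print(round(lines["mean"], 3), round(lines["+2sd"], 3))
```

Subsequent assay runs are then plotted against these fixed lines, with kit batch or instrument changes annotated on the chart as the text notes.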
Shewhart Chart
At what point should I re-establish my mean and QC limits?
NCCLS C24-A recommends that control limits be periodically re-evaluated to determine if they are still
valid. NCCLS recognizes that changes in the centering and the spread of the measurement system may
occur over time, and the user must assess such changes.
Sustained shifts determined to be acceptable can result in new control limits. These are changes that
are statistically significant, but not practically significant. They are real changes, but they may have
little or no clinical significance. For example, consider a situation where a glucose control was
running at 70.5 mg/dL but is now centered at 70.1 mg/dL. Although, statistically speaking, there has
been a shift of 0.4 mg/dL, this shift is probably of no significance to the treating physician.
Once a true change which is not clinically significant is identified, the user can consider accepting the
results and changing the control limits.
When re-establishing new QC limits, it is important to include all valid data collected under normal
operation conditions. For example, if values outside of two standard deviations are not included in
the data, an artificially-small estimate of variability may be calculated. Narrow QC limits may be set
with partial data; this may create problems for the laboratory because these data fail to reflect the
real variability of the system.
Thank you!
