Timothy Schofield
Vice-Chair, USP Statistics Expert Committee
Course Outline
Bioassay guidelines
Terminology
Quality by Design for analytical methods
Key Concepts
The bioassay lifecycle
Bioassay development
Bioassay validation
Bioassay maintenance
Considerations in calibration assays
Some Common Misconceptions
Bioassay guidelines
– “Roadmap” chapter (includes glossary)
– Analysis of Biological Assays
– Biological Assay Validation
Bioassay guidelines (cont.)
Bioassay guidelines (cont.)
Potency
– Approval of a biologics license application or issuance of a biologics
license shall constitute a determination that the establishment(s) and
the product meet applicable requirements to ensure the continued
safety, purity, and potency of such products [US Code of Federal
Regulations, 21 CFR 600.2(a)]
• From 21 CFR 610.10, “Tests for potency shall consist of either in vitro or in
vivo tests, or both, which have been specifically designed for each product
so as to indicate its potency in a manner adequate to satisfy the
interpretation of potency given by definition in § 600.3(s) of this chapter.”
Terminology (cont.)
Quality by Design (QbD) for analytical methods
QbD for analytical methods (cont.)
Many of the concepts associated with QbD for
pharmaceutical products translate to similar concepts for
analytical methods
(table: process concepts and their analytical counterparts)
QbD for analytical methods (cont.)
Variability (SEM)
– The standard error of the mean decreases with the number of replicates:
SEM = s / √n
(figure: SEM versus number of replicates, n)
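The diminishing return from adding replicates can be sketched numerically; a minimal illustration (the SD value of 8.0 is hypothetical):

```python
import math

def sem(s: float, n: int) -> float:
    """Standard error of the mean of n replicates with standard deviation s."""
    return s / math.sqrt(n)

# Gains diminish: quadrupling n only halves the SEM
profile = {n: round(sem(8.0, n), 2) for n in (1, 2, 4, 9, 16)}
```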
QbD for analytical methods (cont.)
– Critical difference (CD) between two independent means of n replicates:
CD = t(df) · √(2s²/n)
– Number of significant digits in the specification
– Two for 2% < %GCV ≤ 20%
– Round only after all calculations have been made (e.g., stability data)
QbD for analytical methods (cont.)
Reportable Value
– The reportable value is the “statistic” which is compared to the
acceptance criterion
• Linked to the intended use of an assay
• Single measurement, GM of 3, regression estimate, etc.
– Changes in the design (“n” and/or other design factors such as stability
intervals) can be used to manage the measurement uncertainty of the
reportable value for a bioassay with a given variability (s)
– Different formats (“n”) can be used for different uses of the bioassay
• Lot Release
– The reportable value is the average of multiple measurements on the
batch
• Standard qualification
– The reportable value is the assigned value for the standard
• Comparability
– The reportable value is the difference between comparators
QbD for analytical methods (cont.)
– β: producer’s risk in quality control (the risk of failing acceptable material)
(table: decision versus the true state of the material)
Key Concepts (cont.)
Parallel Line Analysis
– Linear log(concentration)–response; relative potency (RP) is the horizontal
shift between the curves of test samples and the standard
(figure: parallel line analysis, standard and test responses versus concentration)
Parallel Curve Analysis using 4PL
– Nonlinear log(concentration)–response
(figure: standard and test 4PL curves with RP as the horizontal shift)
Key Concepts (cont.)
– Horizontal distance in quantal design
(figure: percent response versus dose)
Bioassay Development
Bioassay Models
– Bioassay models
– Dosing and replication
– Blocking and randomization
– Optimization
– Validity criteria and outlier detection
– Transformation and weighting
Linear versus nonlinear models
– Most bioassay kinetics is governed by principles which produce a
nonlinear relationship between dose and response
(figure: sigmoid concentration–response curve)
Linear Approximations to Sigmoid Curves
Using the approximately linear portion
of the sigmoid curve
– Select middle “linear” region
• Note: approximately linear; the linear
region can shift, yielding inaccurate
potency determination and
truncation error.
Choosing a Non-Linear Model
Choosing a Non-Linear Model (cont.)
Five Parameter Logistic Regression
y = d + (a − d) / (1 + (x/c)^b)^g
– Note: g = 1 is 4PL
(figure: fitted 4PL and 5PL curves with the ED50 dose marked;
a = 197.6 (upper asymptote), d = 0.00 (lower asymptote))
Concentration  Observed  Percent of max
1.25           13        7%
2.5            14        7%
5              32        16%
9.9            42        21%
19.8           80        40%
39.6           105       53%
79.2           157       79%
158.4          192       97%
316            195       99%
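A fit of the 5PL model to the tabulated concentration–response data can be sketched with SciPy. The starting values and bounds are assumptions; with b constrained negative in this parameterization, a is the upper (high-dose) asymptote:

```python
import numpy as np
from scipy.optimize import curve_fit

def five_pl(x, a, b, c, d, g):
    """5PL: y = d + (a - d) / (1 + (x / c)**b)**g; g = 1 gives the 4PL."""
    return d + (a - d) / (1.0 + (x / c) ** b) ** g

# Concentration / observed response pairs from the table above
conc = np.array([1.25, 2.5, 5.0, 9.9, 19.8, 39.6, 79.2, 158.4, 316.0])
resp = np.array([13.0, 14.0, 32.0, 42.0, 80.0, 105.0, 157.0, 192.0, 195.0])

p0 = [200.0, -1.5, 30.0, 0.0, 1.0]           # assumed starting values a, b, c, d, g
lower = [50.0, -5.0, 1.0, -50.0, 0.1]        # assumed bounds keep b < 0 and c > 0
upper = [400.0, -0.1, 200.0, 50.0, 5.0]
popt, _ = curve_fit(five_pl, conc, resp, p0=p0, bounds=(lower, upper))
```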
Quantal Response
Quantal response is characterized by measurements which are dichotomous
(success or failure)
– e.g., percent survival in a challenge assay, percent responders in an
immunogenicity assay
Percent measurements are transformed to achieve a linear relationship
– Transformation methods
• Probit transformation
– Probit(p) = z-value + 5
– z-value is the standard normal deviate
corresponding to p
• Logit transformation
(figure: probit analysis of percent response versus dose)
Dose   Rate
10     10/10
3      9/10
1      4/10
0.3    3/10
0.1    0/10
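The two transformations can be sketched directly; the dose–rate pair in the comment comes from the table above:

```python
import math
from scipy.stats import norm

def probit(p: float) -> float:
    """Probit(p) = z + 5, where z is the standard normal deviate for p."""
    return float(norm.ppf(p)) + 5.0

def logit(p: float) -> float:
    """Logit transformation: log(p / (1 - p))."""
    return math.log(p / (1.0 - p))

z_scale = probit(9 / 10)   # 9/10 responders at dose 3 in the table above
```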
Dosing and Replication
• Number of concentrations
– Number of concentrations should support desired modeling (i.e., 4 or
more concentrations for linear modeling, 8 or more concentrations to
support 4PL or 5PL)
– Note: having a full run down in response (0–100%) permits the
estimation of EC50 to assess stability of the standard
• Spacing of concentrations depends upon the bioassay model
and the expected range of potencies
– Equal (log) spacing in the linear region for a dilutional linearity design
– More dilutions/wider spacing to support a wide range of potencies
(figure: concentration–response curves; note similarity)
Dosing and Replication (cont.)
Replication
– Independent replicates of concentration response curves are more
effective than repeat aliquots (pseudo replicates) at reducing
bioassay variability
(figure: one dilution series with n = 3 repeat aliquots versus three
independent dilution series with n = 3 replicates)
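A small simulation can illustrate why independent dilution series beat repeat aliquots: the shared dilution error does not average out across pseudo replicates. The variance magnitudes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sim, sd_dil, sd_rep = 4000, 0.10, 0.05   # hypothetical SDs on the log scale

# Pseudo replicates: one dilution series, so the dilution error is shared
pseudo = rng.normal(0, sd_dil, n_sim)[:, None] + rng.normal(0, sd_rep, (n_sim, 3))
# Independent replicates: each dilution series carries its own dilution error
indep = rng.normal(0, sd_dil, (n_sim, 3)) + rng.normal(0, sd_rep, (n_sim, 3))

sd_pseudo = pseudo.mean(axis=1).std()   # ~sqrt(sd_dil**2 + sd_rep**2 / 3)
sd_indep = indep.mean(axis=1).std()     # ~sqrt((sd_dil**2 + sd_rep**2) / 3)
```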
Blocking and Randomization
Blocking and Randomization (cont.)
– Effect of time: a trend is observed across columns of the plate
(figure: plate layout showing a trend across columns)
Blocking and Randomization (cont.)
Managing random variability
• Strategic replication
– Replication is often more effective when applied
at the “higher levels” of replication factors
• e.g., using multiple analysts, rather than multiple
runs, plates or replicates can more effectively
reduce variability
(figure: replication across multiple analysts versus within one analyst)
Managing random variability (cont.)
Replication at higher levels (cont.)
– The standard error of the reportable value with r runs, p plates per run,
and n replicates per plate:
SE = √( s²Run/r + s²Plate/(r·p) + s²Rep/(r·p·n) )
– Consider placement of 5 replicates
– Where do you place your replicates?
• All on the same plate (reps), across 5 plates, or across 5 runs?
(figure: SE for formats (Runs, Plates, Reps) = (1,1,1), (1,1,5), (1,5,1), and
(5,1,1); precision of the reportable value increases as replication moves to
higher levels)
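The SE formula above can be evaluated for the four formats; the variance components are hypothetical, chosen so that run-to-run variability dominates:

```python
import math

def se_format(s2_run, s2_plate, s2_rep, r, p, n):
    """SE of the reportable value for r runs, p plates per run, n reps per plate."""
    return math.sqrt(s2_run / r + s2_plate / (r * p) + s2_rep / (r * p * n))

# Hypothetical variance components where run-to-run variability dominates
s2 = dict(s2_run=0.040, s2_plate=0.010, s2_rep=0.010)
formats = {(r, p, n): round(se_format(**s2, r=r, p=p, n=n), 3)
           for (r, p, n) in [(1, 1, 1), (1, 1, 5), (1, 5, 1), (5, 1, 1)]}
# Replicating runs (5,1,1) reduces the SE most when run variance dominates
```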
Hidden source of systematic variability
– Potential solutions
(figure: concentration–response curves by reagent lot, e.g., Lot 3, Lot 5,
and the average)
Bioassay Optimization
Factors and levels
Bioassay Optimization
Choosing a design
Full and fractional factorial designs
– In 2k designs the number of runs escalates dramatically with an
increase in the number of factors
Factors (k):  2   3   4    5    6    7
Runs (2^k):   4   8   16   32   64   128
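The 2^k escalation in the table can be reproduced by generating the runs themselves:

```python
from itertools import product

def full_factorial(k: int):
    """All 2**k runs of a two-level full factorial design (-1/+1 coding)."""
    return list(product((-1, 1), repeat=k))

runs = {k: len(full_factorial(k)) for k in range(2, 8)}
```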
Bioassay Optimization
Choosing a design (cont.)
Bioassay Optimization
Identifying significant effects
Initial analysis gives
Standardized effects for Immunoassay Design
estimates of effects
(main effects and
interactions)
– Table of model
coefficients shows
relative magnitude of
effects
Note: coefficient=effect/2
Use graphical tools to
determine significant
effects
Bioassay Optimization
Identifying significant effects (cont.)
Half-normal Plots
– Probability plot of absolute
coefficients
– Random normal data (no
effects) should fall on a
straight line from 0
– Extreme points that deviate
from the line are “outliers”
• A = Time
• B = Temp
• AB = Time*Temp
• C = Ionic St.
Bioassay Optimization
DOE Summaries (cont.)
Main effects and interaction
plots
– Main effect plots are hard to
interpret in the presence of an
interaction
– Time and Temp combine to
increase OD reading
– Reading “robust” to changes in
Time at low Temp
– Effect due to Ionic St. is
statistically significant but may
be practically insignificant (use
equivalence approach)
Bioassay Optimization
Surface Plots and Design Space
(figure: contour plot of the response surface versus two factors)
Bioassay Optimization
Surface Plots and Design Space (cont.)
Validity Criteria
Validity criteria should be established
on the system (assay validity) and
on the samples (sample validity)
Linearity
– Linearity is a frequently misunderstood term in method
development and validation
• Graphical linearity – GOF of a linear (or non-linear) model
to a set of data
– The basis of graphical linearity (GOF) should be established during
bioassay development; confirmation of analytical linearity should be
established during bioassay validation
• Analytical linearity – direct proportionality between
measured potencies of test samples and their “known”
values
– e.g., a test sample which is 2-fold more potent than another
test sample should give a measurement which is twice that of
the other sample
– Note in some cases analytical linearity is assessed as
parallelism of the test sample to the standard
Assessing Goodness-of-Fit (cont.)
– Residuals are the deviations of the observations from the fitted model:
r_i = y_obs − y_pred
(figure: fitted concentration–response curve and the corresponding residual plot)
Assessing Goodness-of-Fit (cont.)
(figure: four residual plots, all from fits with R² = 0.67)
– Residual plot for a linear model which fits the data
– Residual plot for a nonlinear response
– Residual plot for data with a model outlier
– Residual plot for data with heterogeneous variability
Outlier Detection
– A range-based criterion compares the spread of replicate results to the
assay variability:
x_max / x_min < c, where c = exp(d2 · σ̂_log)
(d2: control-chart range constant for the number of replicates)
– A model based approach uses “studentized residuals” from the model fit
to identify outliers
– Studentized residuals incorporate an estimate of the variability of
the residual:
rt_i = r_i / ( s_(i) · √(1 − h_i) )
(figures: concentration–response data and potency series with a flagged outlier)
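The studentized-residual formula can be sketched for a straight-line fit; this is a generic illustration with simulated data and a planted outlier, not the chapter's worked example:

```python
import numpy as np

def ext_studentized_residuals(x, y):
    """Externally studentized residuals for a straight-line least-squares fit."""
    X = np.column_stack([np.ones_like(x), x])
    H = X @ np.linalg.inv(X.T @ X) @ X.T      # hat matrix
    h = np.diag(H)
    r = y - H @ y                             # ordinary residuals
    n, p = len(y), X.shape[1]
    rt = np.empty(n)
    for i in range(n):
        # s(i)^2: residual variance with observation i deleted
        s2_i = (r @ r - r[i] ** 2 / (1.0 - h[i])) / (n - p - 1)
        rt[i] = r[i] / np.sqrt(s2_i * (1.0 - h[i]))
    return rt

rng = np.random.default_rng(0)
x = np.arange(10, dtype=float)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, 10)
y[4] += 5.0                                   # planted outlier
rt = ext_studentized_residuals(x, y)
```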
Transformation (cont.)
– Bioassay responses (e.g., titration, CPM/OD) often show higher
variability at higher response, i.e., SD proportional to the mean
(constant RSD)
– The log transformation stabilizes this variability; summaries become
the geometric mean and fold (%) variability
(figures: distribution of potency versus distribution of log potency)
Transformation (cont.)
– A transformation which
stabilizes variability often
generates data with
acceptable normality
• Note: a log-log fit works for the
lower portion of the sigmoid
response curve
(figure: log response versus log potency)
Transformation (cont.)
– The need for transformation can be assessed through residual plots
against concentration or response . . .
(figure: fitted curve and residual plot)
. . . or through assessment of the standard deviation versus the
average response
(figure: SD versus average response)
Weighting
Assessment of Similarity
Parallel line analysis
– Parallel linear fits of test sample and standard response data
• Approximately middle region of sigmoid curves (when response is normally
distributed)
• Lower region of sigmoid curves when response is log-normally distributed (log-log
fit)
– Uses log concentration scaling (10, 20, 40, 80, 160) – equally spaced in log
concentration
– The condition of similarity is the equality of slopes (parallelism)
– The two concentration-response curves should be parallel with a horizontal
difference of M (M is the log RP)
(figures: parallel line analysis with parallel response, where RP is the
horizontal shift between standard and test curves, and with nonparallel
response, where a single RP is not meaningful)
(figure: one laboratory’s precise standard data – conclude nonparallel!;
Laboratory B’s more variable data – conclude parallel!; significance
testing penalizes good performance)
Assessment of Similarity
Difference versus Equivalence Tests
Equivalence testing
– Declare a practically meaningful difference δ (equivalence margin, or
acceptance criterion)
– Declare equivalence if the 90% confidence interval (CI) falls within ±δ
• No evidence of equivalence if the CI falls outside ±δ
• Note: use of a 90% CI is the same as performing two one-sided
tests (TOST in bioequivalence)
(figure: confidence intervals positioned relative to the interval (−δ, +δ))
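A minimal TOST sketch, assuming a sample of slope differences and a hypothetical equivalence margin:

```python
import numpy as np
from scipy import stats

def tost_equivalence(diffs, margin, alpha=0.05):
    """Equivalence if the 90% CI on the mean difference lies inside (-margin, margin)."""
    n = len(diffs)
    m, se = np.mean(diffs), np.std(diffs, ddof=1) / np.sqrt(n)
    t = stats.t.ppf(1.0 - alpha, n - 1)       # 90% CI = two one-sided 5% tests
    lo, hi = m - t * se, m + t * se
    return bool(-margin < lo and hi < margin), (float(lo), float(hi))

rng = np.random.default_rng(7)
slope_diffs = rng.normal(0.02, 0.05, 12)      # hypothetical slope differences
equivalent, ci = tost_equivalence(slope_diffs, margin=0.20)
```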
Assessment of Similarity
Difference versus Equivalence Tests (cont.)
– Similarity of nonlinear response curve shapes is assessed through the
curve parameters
(figure: standard and test 4PL curves with RP)
– Upper asymptote, lower asymptote, and slope for 4PL
– Asymptotes, slope, and asymmetry parameter for 5PL
• One strategy is to evaluate each parameter separately
– Note issue with multiplicity – increased risk with multiple tests
• Another strategy is to evaluate an aggregate measure of the curve
parameters
Assessment of Similarity
Difference versus Equivalence Tests (cont.)
(figure: Standard slope = 0.75, Test slope = 2.37, estimated RP = 3.14 –
RP is not meaningful when the curves are not similar)
Implementing Equivalence Testing for
Similarity
Choose a measure of non-similarity
– In the parallel line case, could be the difference or ratio of
slopes
– For slope-ratio assays the measure of non-similarity is the
difference of y-intercepts
– For the four-parameter logistic model, similarity must be
addressed on the basis of three parameters: the slope and the
upper and lower asymptotes
• Can be based on a composite measure such as the parallelism
residual sum of squares (RSSE)*, comparing RSSE(Constrained)
with RSSE(Unconstrained)
– RSSE(Constrained) is the residual variability when the parameters are constrained
to be equal; RSSE(Unconstrained) is the residual variability when the parameters
are allowed to differ
*Gottschalk, P. and Dunn, J., “Measuring Parallelism, Linearity, and Relative Potency in
Bioassay and Immunoassay Data,” Journal of Biopharmaceutical Statistics, 15: 437–463, 2005
Implementing Equivalence Testing for
Similarity (cont.)
Implementing Equivalence Testing for
Similarity (cont.)
– Choose an approach for defining an equivalence
margin
• Approach 2: determine a tolerance interval for the maximum
departure from similarity of the confidence interval on the
measure
• Similarity concluded if the confidence interval falls within the
interval
• Protects against passing assays with larger than usual amounts
of within-assay variation
(figure: Approach 1 versus Approach 2 margins on the difference of slopes)
Implementing Equivalence Testing for
Similarity (cont.)
Four bases for determining an equivalence margin are
discussed in USP Chapter <1032>, Design and Development of
Biological Assays
– Approach 3: add data comparing standard to known failures (e.g.,
degraded samples)
• Determine a measure of non-parallelism which discriminates between
the distributions of ref/ref and ref/failure
• Note: this method can be used to determine which parameters are
sensitive to failures for nonlinear models
– Approach 4: based on what is known about the product and the
assay
• Conventional limits such as (0.80,1.25)
• Sensitivity might be driven by therapeutic index of the drug
Reporting Relative Potency
Bioassay Validation
• Bioassay development
• Bioassay validation
• Bioassay maintenance
(figure: observed points, fitted concentration–response curve, and the
predicted response at [C] = 40)
The Bioassay Validation Protocol
Should state:
– Number and types of samples
– Design including ruggedness and robustness factors
– Validation replication strategy
– Intended validation parameters
– Justified target acceptance criteria
– Proposed data analysis plan
– Tentative run and sample validity criteria
The Bioassay Validation Protocol (cont.)
Number and range of samples
– Five (5) levels are usually recommended
• To maximize the opportunity for a wide range
• To perform regression analysis (trend in bias)
– Sample levels should be selected to bracket the range of
materials that will be tested in the bioassay
• Through a dilutional linearity experiment
– Sample levels manufactured through dilution of a concentrated
intermediate and/or through forced degradation
– Geometric scaling should be used to achieve equal
spacing in the log scale: 0.50, 0.71, 1.00, 1.41, 2.00
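The geometric scaling of the five levels can be generated directly:

```python
import numpy as np

# Five levels equally spaced in log2 between 0.50 and 2.00
levels = 2.0 ** np.linspace(-1.0, 1.0, 5)
rounded = [round(float(v), 2) for v in levels]
```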
The Bioassay Validation Protocol (cont.)
Validation replication strategy
– Some hold that the runs of the validation design have to replicate
the intra- and inter-run formula that results in a reportable
value for a test material
– Thus if 3 assays are performed to obtain a reportable value,
some believe that each validation run must include 3 assays
– There is usually limited information to understand the optimal
replication strategy prior to validation
– A strategically designed validation can identify key ruggedness
factors which might have impact on long term variability of the
bioassay
• Could lead to remedial actions such as qualification and training
programs
• Could lead to a replication strategy which more effectively addresses
significant sources of variability
The Bioassay Validation Protocol (cont.)
– Crossed designs
• Factorial based validation designs including
robustness and ruggedness factors
– Robustness (controllable) factors
» e.g., pH/time/temperature
(figure: factorial layout with Time (e.g., 3-hrs) as one factor)
The Bioassay Validation Protocol (cont.)
Validation replication strategy
– Perform a sufficient number of
validation runs to control study risks
• Risk that a validation parameter will not meet its target acceptance
criterion when the parameter is satisfactory (Type 1 error, α)
The Bioassay Validation Protocol (cont.)
Intended validation parameters
– Relative accuracy (relative bias)
– Intermediate precision
• Reported as percent geometric CV (%GCV)
» Note: intermediate precision relates to a single intra-run replicate in a
single assay
» The variability of the reportable value is format variability
The Bioassay Validation Protocol (cont.)
The bioassay yields log-normally distributed relative potencies (RP)
– The log relative potency is equal to the
horizontal shift in log concentration (M)
– The relative potency is e^M
(a geometric mean or GM)
(figure: parallel line analysis, standard and test curves)
Justified target acceptance criteria
– Establish that the performance characteristics (relative
bias and intermediate precision) support
acceptable process capability
(e.g., Cpk = 1.0 or Cpm = 1.0, Prob(OOS) = 0.0027, ~0.3%)
The Bioassay Validation Protocol (cont.)
(figure: specifications, release limits, and minimum release capability
limits over shelf life; the release limit allows for the expiry limit, the
stability loss b̂·t, and the combined uncertainty of the loss estimate
and the release assay)
The Bioassay Validation Protocol (cont.)
Proposed data analysis plan
– The protocol should include a description
of the statistical analyses which will be
performed
The Bioassay Validation Protocol (cont.)
• “Tentative” run and sample
validity criteria
– Assay and sample validity criteria, such
as a criterion on the reference slope
– e.g., x_max / x_min < c, where c = exp(d2 · σ̂_log)
(d2: control-chart range constant for the number of replicates)
A Bioassay Validation Example
Study size
A Bioassay Validation Example
Study design
A Bioassay Validation Example
Study design (cont.)
A Bioassay Validation Example
Study results
Data and plot of validation results
Design dimensions: 2 media lots and 2 analysts; 2 runs by each analyst
using each media lot
Lot/Analyst:   1/1             1/2             2/1             2/2
Run:           1       2       1       2       1       2       1       2
0.50           0.5215  0.4532  0.5667  0.5054  0.5222  0.5179  0.5314  0.5112
0.50           0.5026  0.4497  0.5581  0.5350  0.5017  0.5077  0.5411  0.5488
0.71           0.7558  0.6689  0.6843  0.7050  0.6991  0.7463  0.6928  0.7400
0.71           0.7082  0.6182  0.8217  0.7143  0.6421  0.6877  0.7688  0.7399
1.00           1.1052  0.9774  1.1527  0.9901  1.0890  1.0314  1.1459  1.0273
1.00           1.1551  0.8774  1.1074  1.0391  0.9233  1.0318  1.1184  1.0730
1.41           1.5220  1.2811  1.5262  1.4476  1.4199  1.3471  1.4662  1.5035
1.41           1.5164  1.3285  1.5584  1.4184  1.4025  1.4255  1.5495  1.5422
2.00           2.3529  1.8883  2.3501  2.2906  2.2402  2.1364  2.3711  2.0420
2.00           2.2307  1.9813  2.4013  2.1725  2.0966  2.1497  2.1708  2.3126
Note: data are recorded with a sufficient number of significant digits to
support statistical calculations
(figure: observed potency by level for Tech 1 and Tech 2)
A Bioassay Validation Example
Components of random variability
Many factors influence the bioassay, compounding the overall
variability:
Inter-Run Variability + Intra-Run Variability = Overall Variability
– The reportable value at each stability time point should be
analyzed (or a mixed effects model used)
(figure: potency versus time on stability, months 0–36)
A Bioassay Validation Example
Determination of intermediate precision
• Assessment of intermediate precision
– Calculation illustrated at the 0.50 level using analysis of variance (ANOVA)
• Note: the analysis is performed on ln(RP)
– For a “balanced” design like this example (equal number of replicates at all levels),
the estimated variance components may be obtained by equating the “Mean Square”
with the “Expected Mean Square”
– Intermediate precision (IP) is calculated from the sum of the variance component
estimates
Source           df   Sum of Squares  Mean Square  Expected Mean Square
Run              7    0.055317        0.007902     Var(Error) + 2·Var(Run)
Error            8    0.006130        0.000766     Var(Error)
Corrected Total  15   0.061447
Level:       0.50     0.71     1.00     1.41     2.00     Average
Var(Run)     0.00357  0.00065  0.00364  0.00314  0.00262  0.00272
Var(Error)   0.00076  0.0043   0.00295  0.00058  0.00226  0.00217
Overall      6.8%     7.3%     8.5%     6.3%     7.2%     7.2%
Note: averaging across levels is supported because the largest-to-smallest
variance ratios are modest (Var(Run): 0.00364/0.00065 = 5.6 < 10;
Var(Error): 0.00430/0.00058 = 7.4 < 10)
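The variance-component arithmetic for the 0.50 level can be checked by equating the mean squares with their expectations (numbers from the ANOVA table above):

```python
import math

# Equate mean squares with expected mean squares (0.50 level, 2 reps per run):
#   E[MS_Run]   = Var(Error) + 2*Var(Run)
#   E[MS_Error] = Var(Error)
ms_run, ms_error, reps = 0.007902, 0.000766, 2

var_error = ms_error
var_run = (ms_run - ms_error) / reps               # ~0.00357
ip_var = var_run + var_error                       # IP variance on ln(RP)
gcv = 100.0 * (math.exp(math.sqrt(ip_var)) - 1.0)  # ~6.8% at this level
```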
A Bioassay Validation Example
Determination of intermediate precision (cont.)
• Assessment of intermediate precision
– A variance component analysis
using restricted maximum
likelihood estimation (REML) can
be performed including analyst
and media lot as factors
IP = 100·(e^√(Var(Run) + Var(Error)) − 1)%
   = 100·(e^√(0.00272 + 0.00217) − 1)% = 7.2%
Variance Component             Estimate
Var(Media Lot)                 0.0000
Var(Analyst)                   0.0014
Var(Analyst*Media Lot)         0.0000
Var(Run(Analyst*Media Lot))    0.0019
Var(Error)                     0.0022
IP = 100·(e^√(Σ VCᵢ) − 1)% = 100·(e^√(0.0014 + 0.0019 + 0.0022) − 1)% = 7.7%
• The VC analysis ignoring these
factors will always underestimate
IP when one or more of the
factors is significant (e.g., 7.2% vs. 7.7%)
– The analysis reveals that analyst
has a significant impact on long
term variability
• This indicates that improved
training (or use of multiple
analysts) may improve bioassay
precision
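The two intermediate-precision figures can be reproduced from the variance components:

```python
import math

def gcv_percent(*vcs) -> float:
    """%GCV implied by ln-scale variance components: 100*(exp(sqrt(sum)) - 1)%."""
    return 100.0 * (math.exp(math.sqrt(sum(vcs))) - 1.0)

ip_naive = gcv_percent(0.00272, 0.00217)       # Run + Error only
ip_reml = gcv_percent(0.0014, 0.0019, 0.0022)  # Analyst + Run(...) + Error
```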
A Bioassay Validation Example
Determination of intermediate precision (cont.)
Intermediate precision
– The estimate of intermediate precision is subject to uncertainty
and has an associated confidence interval
(figure: upper confidence bound on %GSD versus degrees of freedom;
e.g., the upper bound on a 10% GSD is 11.8%)
A Bioassay Validation Example
Bioassay characterization
– The validation may be used to inform the laboratory of a
suitable testing format
• Variance component estimates can be used to forecast
variability for different numbers of intra-run and inter-run
replicates
Format Variability = 100·(e^√(σ̂²Run/k + σ̂²Repeat/(n·k)) − 1)%
Format variability for different combinations of number of
runs (k) and number of minimal sets within run (n):
Reps (n) \ Runs (k):   1      2      3      6
1                      7.2%   5.1%   4.1%   2.9%
2                      6.4%   4.5%   3.6%   2.6%
3                      6.0%   4.2%   3.4%   2.4%
6                      5.7%   4.0%   3.3%   2.3%
Note: the lab might replicate over significant factors to benefit from
variance reduction
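The format-variability table can be regenerated from the formula and the Run/Error variance components:

```python
import math

def format_var(s2_run: float, s2_rep: float, k: int, n: int) -> float:
    """Format variability, %: 100*(exp(sqrt(s2_run/k + s2_rep/(n*k))) - 1)."""
    return 100.0 * (math.exp(math.sqrt(s2_run / k + s2_rep / (n * k))) - 1.0)

s2_run, s2_rep = 0.00272, 0.00217
table = {(n, k): round(format_var(s2_run, s2_rep, k, n), 1)
         for n in (1, 2, 3, 6) for k in (1, 2, 3, 6)}
```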
A Bioassay Validation Example
Determination of relative accuracy
A Bioassay Validation Example
Determination of relative accuracy (cont.)
Relative accuracy
– The estimated relative potency and relative bias (RB) are calculated at each
level, together with the 90% confidence interval (two one-sided tests)
– Illustration at the 1.00 level (8 runs):
RP        ln(RP)
1.1299    0.1221
0.9261    −0.0768
1.0143    0.0142
1.0027    0.0027
1.0316    0.0311
1.1321    0.1241
1.0499    0.0487
1.1299    0.1221
Average   0.0485
SD        0.0715
GM = e^(average ln RP) = e^0.0485 = 1.050
RB = 100·(GM/Target − 1)% = 100·(1.050/1.00 − 1)% = 5.0%
CI(ln) = x̄ ± t(7)·s/√n = 0.0485 ± 1.89·0.0715/√8 = (0.0006, 0.0964)
CI(RB) = (100·(e^0.0006/1.00 − 1)%, 100·(e^0.0964/1.00 − 1)%) = (0.06%, 10.12%)
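The worked example at the 1.00 level can be reproduced; SciPy supplies the t quantile:

```python
import numpy as np
from scipy import stats

# ln(RP) values for the 8 runs at the 1.00 level (from the table above)
ln_rp = np.array([0.1221, -0.0768, 0.0142, 0.0027,
                  0.0311, 0.1241, 0.0487, 0.1221])
target = 1.00

n = len(ln_rp)
m, s = ln_rp.mean(), ln_rp.std(ddof=1)
gm = np.exp(m)                                   # geometric mean potency
rb = 100.0 * (gm / target - 1.0)                 # relative bias, %
t = stats.t.ppf(0.95, n - 1)                     # 90% CI = two one-sided tests
lo, hi = m - t * s / np.sqrt(n), m + t * s / np.sqrt(n)
ci_rb = (100.0 * (np.exp(lo) / target - 1.0),
         100.0 * (np.exp(hi) / target - 1.0))
```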
A Bioassay Validation Example
Determination of relative accuracy (cont.)
Relative accuracy
– Estimated relative potency and relative bias is calculated at each
level
Table 7. Average Potency and Relative Bias at Individual Levels
log Potency Potency Relative Bias
Level na Average (90% CI) Average (90% CI) Average (90% CI)
0.50 8 -0.6613 (-0.7034,-0.6192) 0.52 (0.49,0.54) 3.23% (-1.02,7.67)
0.71 8 -0.3419 (-0.3773,-0.3064) 0.71 (0.69,0.74) 0.06% (-3.42,3.67)
1.00b 8 0.0485 (0.0006,0.0964) 1.05 (1.00,1.10) 4.97% (0.06,10.12)
1.41 8 0.3723 (0.3331,0.4115) 1.45 (1.40,1.51) 2.91% (-1.04,7.03)
2.00 8 0.7859 (0.7449,0.8269) 2.19 (2.11,2.29) 9.72% (5.31,14.32)
A Bioassay Validation Example
Determination of relative accuracy (cont.)
Impact of multiplicity
Combined Risk = 1 − (1 − α)^k
– Increased risk with multiple
statistical tests
– Also increased risk from
correlated results (levels
performed in same
validation runs)
– Can be managed through
an analysis across levels
(figure: combined risk versus number of tests, k, for α = 0.05, 0.01,
and 0.001)
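The combined-risk formula is straightforward to evaluate:

```python
def combined_risk(alpha: float, k: int) -> float:
    """Probability of at least one false rejection across k independent tests."""
    return 1.0 - (1.0 - alpha) ** k

risk_five_levels = combined_risk(0.05, 5)   # ~0.23 for five levels at alpha = 0.05
```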
A Bioassay Validation Example
Determination of relative accuracy (cont.)
– Trend in Bias = 100·(2^(b−1) − 1)%
(bias per 2-fold difference in activity; b: fitted slope of log measured
potency versus log target potency)
(figure: observed versus target potency for Analyst 1 and Analyst 2
across the levels 0.50–1.41)
Bioassay Maintenance
• Bioassay development
• Bioassay validation
• Bioassay maintenance
Bioassay Maintenance (cont.)
– Monitoring over time reveals the
true long term variability of the
bioassay
– Variability can be estimated from
the data (nested errors)
(figures: control charts of bioassay results over time, months 0–36)
Uses of bioassay
Uses of bioassay (cont.)
Stability monitoring
• The risk of a stability OOS during stability monitoring is
reduced through appropriate statistical modeling
• The estimated regression model gives a prediction of the
mean potency over time
• The uncertainty (confidence interval) associated with
individual measurements is greater than that on the
regression model
(figure: regression confidence band versus individual stability time
points; potency over 0–30 months)
Uses of bioassay (cont.)
Standard qualification/calibration
– A standard should be representative of material being
tested in the assay
• Usually a routine production lot
– Laboratories introduce new working standards using one
of two approaches
• Qualification – a demonstration of “equivalence” of the
working standard and the comparator (master standard or
the previous working standard)
• Calibration – assignment of potency to the new working
standard using the comparator
Uses of bioassay (cont.)
Standard qualification/calibration
• Qualification/calibration to a previous working standard
suffers from “standard drift” and serial uncertainty
(propagation of errors)
• This is controlled through use of a master (primary)
reference standard
(figure: serial calibration versus calibration to a master standard; under
serial calibration the uncertainty of each new working standard accumulates
the variances of all prior transfers, σ²WS1 + σ²WS2 + …, while calibration to
the master limits each working standard to σ²Master + σ²WSi)
Considerations in calibration assays
A Quality by Design approach
(figure: calibration curve response and %CV precision profile versus
concentration)
Considerations in calibration assays
A Quality by Design approach (cont.)
If you repeat the experiments under the same conditions, results will
vary run-to-run.
The range [7.25-450] may not be obtained for every run.
Noise in the signal and noise in the operating conditions will make
the performance vary
It is important to ensure
adequate performance is
achieved during future
runs.
Considerations in calibration assays
A Quality by Design approach (cont.)
Note:
– LLOQ obtained in 10/20 runs
– ULOQ obtained in 17/20 runs
Considerations in calibration assays
A Quality by Design approach (cont.)
• LQL/UQL – lower/upper doses (conc’ns) where the
precision profile intersects a satisfactory %CV
• WR – working range = log(UQL/LQL)
• CVa – average CV across the working range
• Precision area – PA: area under the precision (CV) profile across the
working range; impacted by weighting
(figures: response and %CV versus concentration; precision profile with
LQL, UQL, and working range marked against biomarker level /
concentration of the standard material)
Considerations in calibration assays
A Quality by Design approach (cont.)
Considerations in calibration assays
A Quality by Design approach (cont.)
Exp.  Capt.  Biot.  EnzCult  Vol.  Incub.  NBCl
1     250    250    750      100   1       1
2     250    600    300      100   1       4
3     250    600    750      50    3       1
4     750    600    300      50    1       1
5     250    600    750      50    3       1
6     750    600    300      50    1       1
Considerations in calibration assays
A Quality by Design approach (cont.)
Considerations in calibration assays
A Quality by Design approach (cont.)
Optimal conditions
Considerations in calibration assays
A Quality by Design approach (cont.)
However:
– Operating conditions vary run-to-run, and thus performance will
vary run-to-run.
Considerations in calibration assays
A Quality by Design approach (cont.)
– Design space: the region of operating conditions within which adequate
performance is achieved in, e.g., 90% of runs
(figure: design space over Volume, Incubation Time, and NB Cleanings,
e.g., Volume = 100, Incubation Time = 1, NB Cleanings = 4)
Summary
Acknowledgements
Nancy Sajjadi
Bruno Boulanger
USP Bioassay Expert Panel