
Environmetrics, 1991, 2(2): 139-152

Estimation of the LC50, A Review

J. A. Hoekstra

ABSTRACT
In this paper methods to estimate the median lethal concentration
(LC50) are reviewed. Estimation from data sets with limited occurrence
of partial response (>0%, <100%) receives particular attention.

KEY WORDS: Background Mortality; Bioassay; LC50 Estimator; Likelihood
Methods; Logit Analysis; Moving Average; Probit Analysis; Spearman-
Kärber Estimator; Toxicity Test; Trimming.

1. INTRODUCTION

A common measure of the acute toxicity of a substance in a species


is the median lethal concentration or LC50. This is the concentration that
kills 50% of the population within a fixed time. In Table 1 an example
is given of a bioassay to determine the toxicity of trichlorophenol to the
earthworm Eisenia andrei. Relevant for the estimation procedure is the lim-
ited occurrence of concentrations with partial mortality (>0% and <100%).
This can be caused by small variation in tolerance between individuals of an
assay and/or lack-of-knowledge of the toxicity of the substance. A second
aspect of the data is the incidental occurrence of mortality in the control
group, indicating mortality factors other than the toxic substance on test.
The data set is characteristic for many toxicity experiments. Some statisti-
cians would argue that this type of data is too poor for statistical analysis.
However, given that it was not known beforehand between which concen-
trations the LC50 would fall, the data contain a lot of information and it

Centre for Mathematical Methods, National Institute of Public Health and
Environmental Protection, P.O. Box 1, 3720 BA Bilthoven, The Netherlands.

1180-4009/91/020139-14$07.00 Received 10 October 1990


© John Wiley & Sons, Ltd. Revised 2 August 1991

is worthwhile to summarize this information succinctly. Many methods to


estimate the LC50 have been proposed (e.g., Finney 1971; Hamilton, Russo
and Thurston 1977; Kooijman 1981,1983; Racine et al. 1986; Stephan 1977;
Thompson 1947; Williams 1986). In this paper methods that are advocated
for routine data-analysis will be discussed and relations between methods in-
dicated. For sequential methods, which can improve the information content
of the data and thus reduce the number of animals to be tested, see Govin-
darajulu (1988). Extra-binomial variation, possibly caused by correlation
between responses, will not be considered (see e.g. Williams 1982; Moore
1987). Analysis of multifactor bioassays (Finney 1971) and modelling the
effect of different exposure-times (Kalbfleisch et al. 1983) is not considered
either.
Section 2 will deal with methods based on tolerance distributions, and
Section 3 with tolerance distribution-free methods. Comparisons will be
made in Section 4, with attention given to their usefulness for data as de-
scribed above. Section 5 contains recommendations.

2. METHODS BASED ON TOLERANCE DISTRIBUTIONS

2.1 THE PROBIT AND THE LOGIT MODEL


The probability of response in relation to the concentration of a toxic
substance is most frequently modelled by a probit or a logit model (Finney
1971; Ashton 1972). Although the models can be conceived as purely empir-
ical concentration-response curves, they are often motivated by the concept
of tolerance distributions: each individual is characterized by a (log) con-
centration just sufficient to kill it (its “tolerance”). The probability that
an individual, randomly drawn from the population, will die at a given log
concentration x is then:

P(x) = ∫_{−∞}^{x} f(u) du,

in which f(u) is the density function of the log tolerance in the population.
The probit and logit model assume a normal and logistic density, respec-
tively. Both functions are symmetric. Symmetry can be enhanced by using
an appropriate transformation. Mostly, the log transformation is chosen, as
is described here, but sometimes there may be reasons t o prefer another or
no transformation (Finney 1971).
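The two symmetric tolerance models can be sketched numerically. A minimal illustration, in which the location and scale parameters (mu, sigma) are invented for the example and not taken from the paper:

```python
from scipy.stats import norm, logistic

# P(x) = F((x - mu) / sigma): probability that a randomly drawn individual's
# log tolerance lies below the log concentration x (illustrative parameters).
mu, sigma = 1.5, 0.3  # assumed location and scale of the log-tolerance density

def probit_response(x):
    # normal (probit) tolerance density
    return norm.cdf((x - mu) / sigma)

def logit_response(x):
    # logistic (logit) tolerance density
    return logistic.cdf((x - mu) / sigma)

# Both models place 50% mortality at x = mu, i.e. LC50 = 10**mu
print(probit_response(mu))  # 0.5
print(logit_response(mu))   # 0.5
```

Away from the centre the curves separate: the logistic has heavier tails, so the probit curve approaches 0 and 1 faster.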
Some asymmetric and more heavy tailed models are discussed by Mor-
gan (1988). Generally, these models make high demands on the data for
adequate estimation, and are not often applied in routine toxicity testing.
More widely used is the extension of the probit or the logit model with one

concentration (mg/kg)   0  0  0  10 10 10  18 18 18  32 32 32  56 56 56  100 100 100
number tested          10 10 10  10 10 10  10 10 10  10 10 10  10 10 10   10  10  10
number killed           0  0  1   0  0  0   0  0  0   0  0  0   9  7 10   10  10  10

Table 1. Toxicity of 2,4,5-trichlorophenol applied to the earthworm Eisenia
andrei (C.A.M. Van Gestel, unpublished data).

parameter to allow for mortality in the absence of the toxic substance:

P(x) = C + (1 − C) ∫_{−∞}^{x} f(u) du,

in which C is the background mortality probability. The LC50 is now defined as


the point at which P = (1 + C)/2. For interpretation, Hoekstra (1987)
distinguishes between background mortality caused by natural factors and
background mortality caused by the experimental circumstances, like spray-
ing or manipulating the animals. In the latter case, to allow extrapolation
to a natural situation without artificial mortality, it has to be assumed that
the artificially killed animals are not the ones also most sensitive to the poi-
son. If the background mortality is the result of natural processes, however,
this extrapolation is unnecessary: the LC50 can be interpreted as the con-
centration lethal to 50% of the population of individuals that would have
survived in the absence of the poison. No assumptions about independence
of mortality causes need be made. The above model is implicitly assumed in
Abbott's formula (1925) to "correct" the data when background mortality is
observed. This correction can precede any estimation method. However, one
should take care to account for uncertainty in the estimate of the background
mortality probability C in the variance of the estimators of the LC50.
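Abbott's correction itself is a one-line rescaling. A sketch, with a hypothetical control group supplying the estimate of C; note that, as the text warns, the uncertainty in that estimate is not propagated here:

```python
import numpy as np

def abbott_correct(p_obs, c_hat):
    """Abbott's (1925) correction: rescale observed mortality p_obs to the
    conditional mortality among individuals surviving the background process,
    under the model P(x) = C + (1 - C) * F(x)."""
    p_adj = (np.asarray(p_obs) - c_hat) / (1.0 - c_hat)
    return np.clip(p_adj, 0.0, 1.0)  # proportions below C are set to zero

# Hypothetical example: 1 death among 30 controls gives c_hat ~ 0.033
c_hat = 1 / 30
print(abbott_correct([0.03, 0.5, 0.9], c_hat))
```

Any LC50 estimator can then be applied to the corrected proportions, but the variance of the final estimate should still account for the sampling error in c_hat.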
Hoel (1980) gives an alternative model for background mortality. The
difference between the models becomes substantial in the left tail of the
tolerance distributions.

2.2 ESTIMATION
Maximum likelihood estimation (MLE) of the log(LC50) = μ = −α/β
(with α the intercept and β the slope of the fitted probit or logit line)
is the prevalent method. The estimates are obtained either numerically
(Finney 1971) or graphically (Litchfield and Wilcoxon 1949). The latter
method seems rather outdated, at least if computer facilities are available.
An alternative estimation criterion is minimum χ² (Ashton 1972).
Confidence intervals can be constructed by applying Fieller's theorem
(see Finney 1978) or the delta-method (see, e.g., Hamilton 1980). The first
method assumes a normal distribution for the estimates of α and β. The
second uses a normal approximation for the estimate of μ itself, resulting
in symmetric intervals. Goedhart (1986) found for 2 investigated data sets
that the assumption for the estimates of α and β was the better.
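The logit fit and the delta-method interval can be sketched with a hand-rolled Fisher-scoring loop; a dedicated package would normally be used instead, and the assay data below are hypothetical:

```python
import numpy as np
from scipy.special import expit

def logit_lc50(logc, n, y, iters=50):
    """ML fit of P = expit(a + b*x) to grouped quantal data by Fisher scoring,
    with a delta-method CI for mu = -a/b (the log LC50).  A sketch: it assumes
    more than one partial kill, otherwise the MLE of b is infinite (see text)."""
    x, n, y = map(np.asarray, (logc, n, y))
    a, b = 0.0, 1.0
    for _ in range(iters):
        p = expit(a + b * x)
        w = n * p * (1 - p)                       # Fisher weights
        grad = np.array([np.sum(y - n * p), np.sum((y - n * p) * x)])
        info = np.array([[np.sum(w),     np.sum(w * x)],
                         [np.sum(w * x), np.sum(w * x * x)]])
        a, b = np.array([a, b]) + np.linalg.solve(info, grad)
    cov = np.linalg.inv(info)                     # asymptotic covariance of (a, b)
    mu = -a / b
    g = np.array([-1 / b, a / b ** 2])            # gradient of mu w.r.t. (a, b)
    se = np.sqrt(g @ cov @ g)
    return mu, (mu - 2 * se, mu + 2 * se)         # mu-hat +/- 2 s.e.

# Hypothetical assay: log concentrations, 10 animals per level
mu, ci = logit_lc50([0.0, 0.5, 1.0, 1.5, 2.0],
                    [10, 10, 10, 10, 10],
                    [0, 2, 5, 8, 10])
```

Because this example data set is symmetric about x = 1, the fitted log LC50 equals 1 exactly; a Fieller interval would instead invert the normal approximation for the estimates of α and β.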
Maximum likelihood breaks down if there is just one concentration at
which partial kill occurs: the MLE of β is then infinite. (Computer programs
do not always warn against this situation. A convergence criterion can be
met before β causes overflow, in which case faulty estimates will be given.)
Williams (1986) points out that with one partial kill a confidence inter-
val for the LC50 can still be determined by likelihood methods: the interval

is given by the set of values not rejected by a likelihood ratio test. The like-
lihood ratio test statistic is obtained by subtraction of the deviance under
the full model (both parameters estimated by MLE) from the deviance of
the model under the null hypothesis (fixed value of μ, with β estimated by MLE).
In case of one partial kill the full model has zero deviance.
Other parametric methods to obtain confidence limits are the paramet-
ric bootstrap (Efron 1985) and Bayesian procedures (Racine et al. 1986).
The properties of parametric bootstrapping with quantal data are not well
investigated. Confidence intervals are likely to be liberal. Racine et al.
(1986) contrast their Bayesian methods with (the failure of) Fieller's method
in small data sets. However, Fieller's method should not be used in this sit-
uation. This is not to denounce Bayesian analyses: when prior information
is available, the advantages are as expected.
Sanathanan, Gade and Shipkowitz (1987) developed a trimmed logit
method in analogy with the trimmed Spearman-Karber method (see Section
3.1), to diminish the sensitivity of logit analysis to anomalous results in the
tails of the tolerance distribution.

3. TOLERANCE DISTRIBUTION-FREE METHODS

3.1 SPEARMAN-KÄRBER METHOD
The method of Spearman-Kärber (Spearman 1908; Kärber 1931; Finney
1971) obtains an estimate of μ as a weighted average of the mid-points be-
tween successive log concentrations. The weights are the estimated tolerance
densities at the midpoints. Let x1 < x2 < ... < xk be the log concentra-
tions and p1, p2, ..., pk the observed proportions mortality. The number
of individuals tested at xi is denoted by ni. If p1 = 0 and pk = 1, the
Spearman-Kärber estimator (SK) can be written as:

μ̂ = Σ_{i=1}^{k−1} (p_{i+1} − p_i)(x_i + x_{i+1})/2

If p1 ≠ 0 or pk ≠ 1, a conventional rule is to extend the series of concentra-
tions with an (unobserved) next level, at which p is assumed to be 0 or 1,
and to apply the above formula to the extended series.
If the density vanishes outside the range (x1, xk), the formula is an
approximation of

μ = ∫_{−∞}^{∞} u f(u) du,

the mean of the log tolerance distribution. Equating the above estimator of
the mean to the median lethal dose is correct if the underlying tolerance
distribution is symmetric.
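The estimator and the extension rule fit in a few lines. A sketch; the spacing chosen for the hypothetical extra level (copying the adjacent spacing) is an assumption, since the text does not prescribe it:

```python
import numpy as np

def spearman_karber(logc, p):
    """Untrimmed Spearman-Kärber estimate of mu, the mean log tolerance.
    logc: increasing log concentrations; p: (monotone) mortality proportions.
    If p does not start at 0 or end at 1, the series is extended by one
    unobserved level with assumed response 0 or 1, as described in the text;
    the spacing of that extra level copies the adjacent spacing (assumption)."""
    x, p = list(logc), list(p)
    if p[0] != 0.0:
        x.insert(0, x[0] - (x[1] - x[0]))
        p.insert(0, 0.0)
    if p[-1] != 1.0:
        x.append(x[-1] + (x[-1] - x[-2]))
        p.append(1.0)
    x, p = np.asarray(x), np.asarray(p)
    # weighted average of midpoints, weights = estimated density mass p[i+1]-p[i]
    return np.sum(np.diff(p) * (x[:-1] + x[1:]) / 2.0)

# Symmetric example: half the mass between x=1 and 2, half between 2 and 3
print(spearman_karber([1.0, 2.0, 3.0], [0.0, 0.5, 1.0]))  # 2.0
```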

The variance of μ̂ is obtained by substituting p_i(1 − p_i)/n_i for the vari-
ance of p_i:

var(μ̂) = Σ_{i=2}^{k−1} ((x_{i+1} − x_{i−1})/2)² p_i(1 − p_i)/n_i

Sometimes n_i is replaced by n_i − 1. Two further modifications of the method


need to be mentioned. The first is initial monotonization of the proportions
by combining the number of responses and subjects of adjacent concentra-
tions whenever the ordering p1 ≤ p2 ≤ ... ≤ pk is violated. The monotonized
proportions are the MLE's of the true mortality probabilities in case no spe-
cific tolerance distribution is assumed (Barlow et al. 1972). Monotonization
does not affect the estimate itself, but it does affect its estimated variance.
The second modification is due to Hamilton (Hamilton et al. 1977, 1978;
Hamilton 1979, 1980). He proposes symmetrically trimmed versions of the
Spearman-Kärber estimate analogous to the trimmed means for continuous
data with heavy tailed distributions. The method is exemplified in Figure
1 for the 10% trimmed Spearman-Kärber estimate (SK10%). The original
SK-estimator is applied to (monotonized) proportions between 0.1 and 0.9
supplemented with the estimated values at 0.1 and 0.9. The variance formula
is adapted to account for this modification. The trimmed method assumes
symmetry of the tolerance density within the enclosed range, e.g., from the
10% point to the 90% point. The SK50% is equivalent to the median of
the empirical tolerance distribution, and amounts to linear interpolation be-
tween the two dose-response points that enclose the 50% mortality (Hamilton
1979).
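The SK50% case is simple enough to sketch directly; it assumes the proportions have already been monotonized and are strictly increasing around 0.5:

```python
import numpy as np

def sk50(logc, p):
    """SK50%: linear interpolation between the two (monotone) dose-response
    points that enclose 50% mortality - the median of the empirical tolerance
    distribution (Hamilton 1979).  Assumes p is monotone and strictly
    increasing around 0.5."""
    x, p = np.asarray(logc, float), np.asarray(p, float)
    i = np.searchsorted(p, 0.5, side="right") - 1   # last index with p <= 0.5
    i = int(np.clip(i, 0, len(p) - 2))
    return x[i] + (0.5 - p[i]) / (p[i + 1] - p[i]) * (x[i + 1] - x[i])

# 0.5 lies between p = 0.4 (x = 2) and p = 1.0 (x = 3)
print(sk50([1.0, 2.0, 3.0], [0.0, 0.4, 1.0]))
```

Heavier trims (SK10%, SK20%) first interpolate the log concentrations at the trim levels and then apply the untrimmed SK formula to the rescaled middle portion, as Figure 1 illustrates.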
The variance is obtained by application of the delta-method (Hamilton
1979, 1980). In all versions of the Spearman-Kärber method 95% confidence
limits are derived as μ̂ ± 2 s.e.(μ̂).
If p1 ≠ 0 or pk ≠ 1, one can avoid introducing unobserved concentra-
tions with assumed 0% or 100% mortality by the procedure of symmetrical
trimming.

3.2 MOVING AVERAGE (MA)


This method (Thompson 1947) is nowadays seldom mentioned by
statisticians, but is popular amongst ecotoxicologists (Stephan 1977). It
amounts to calculating moving averages p*_i of the p_i and x*_i of the x_i,
followed by linear interpolation between the two averages x*_i and x*_{i+1}
for which the p*'s enclose 0.50:

μ̂ = x*_i + a (x*_{i+1} − x*_i),   with   a = (0.5 − p*_i) / (p*_{i+1} − p*_i).

The variance is approximated by the delta-method, and a confidence interval
is obtained by μ̂ ± 2 s.e.(μ̂). Bennett (1952, 1963) improved the efficiency

[Figure 1 near here: proportion of response (0.0-1.0) plotted against log
concentration.]

Figure 1. 10%-Trimmed Spearman-Kärber estimate. *: monotonized proportions;
o: estimated values at 0.1 and 0.9.

of the estimator by weighted averaging according to number of tested indi-


viduals and by angular transformation of the proportions.
The span of the averaging procedure should be chosen such that p*’s
exist that enclose 0.50. A large span implies the assumption of symmetry.
When “averaging” takes place over span 1, the method is equivalent
to SK50%. Averaging over an infinite span would equal the untrimmed SK
(Finney 1953).
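The unweighted moving-average interpolation can be sketched as follows; the data and span are illustrative, and Bennett's weighting and angular transformation are omitted:

```python
import numpy as np

def moving_average_lc50(logc, p, span):
    """Thompson's (1947) moving-average estimate: average p and x over a
    window of `span` consecutive levels, then interpolate linearly between
    the two window averages whose p-averages enclose 0.50.  Assumes the span
    is chosen so that such a pair of averages exists (see text)."""
    x, p = np.asarray(logc, float), np.asarray(p, float)
    kern = np.ones(span) / span
    xbar = np.convolve(x, kern, mode="valid")     # x*_i
    pbar = np.convolve(p, kern, mode="valid")     # p*_i
    i = np.flatnonzero((pbar[:-1] <= 0.5) & (pbar[1:] >= 0.5))[0]
    a = (0.5 - pbar[i]) / (pbar[i + 1] - pbar[i])
    return xbar[i] + a * (xbar[i + 1] - xbar[i])

# span 1 reduces to SK50%-style interpolation between the raw points
print(moving_average_lc50([1, 2, 3, 4], [0.0, 0.2, 0.8, 1.0], span=2))
```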

3.3 OTHER DISTRIBUTION-FREE METHODS


Several other robust estimators for the location of a continuous distri-
bution have been adapted for quantal bioassay data (Andrews et al. 1972;
James et al. 1984; Miller and Halpern 1980). All estimators are for the cen-
tre of symmetry, giving less weight to observations at the tails. The logistic
scores estimator developed by James et al. (1984) seems promising. How-
ever, it appears that no estimator of the variance has been developed, so that
the method is not ready for practical use.
The kernel methods of Kappenman (1987) and Müller and Schmitt
(1988) form another distribution-free approach to the problem. They can be
seen as an extension of the moving-average method. The kernel methods are
not based on the assumption of symmetry as are the robust methods men-

tioned above, and are likely to be more efficient than the moving-average
method.
Stephan (1977) suggested that in absence of partial kills, the highest
concentration with complete survival and the adjacent lowest concentration
with complete mortality can be used as limits of an interval with confidence
level (1 − 2(1/2)^n) × 100%. This procedure, however, has no statistically sound
basis.

4. COMPARISON

Comparison of LC50-estimators is complicated by the fact that their


performance depends quite heavily on the design of the bioassay in relation
to the underlying tolerance distribution. A further complication is the fact
that most estimators or their variances cannot be calculated for particular
data sets.

4.1 LITERATURE
The merits of MLE and minimum χ² have been discussed extensively
(Berkson 1980). Which method is best seems to depend very critically on the
design and the chosen criterion for performance (Berkson 1955; Engeman,
Otis and Dusenberry 1986; Hamilton 1979; James et al. 1984; Smith et al.
1984). Probit analysis has a computational disadvantage to logit analysis: in
many computer implementations the numerical approximation of the normal
distribution is poor in cases of very high or low percentage points.
Finney (1950, 1953) compared the performance of MLE, SK and MA
with respect to asymptotic variance and bias under the probit (1950) and
logit (1953) models. The experimental design was chosen to be an unlimited
series of equally spaced log concentrations, each with n observations. The
asymptotics refer to n → ∞. Important conclusions are:
- SK is identical to logit-MLE (apart from discontinuity in the distribu-
tion of estimates).
- MA can best be applied with the largest possible span, once a data set
has been obtained. It then becomes similar to SK.
- If the design is asymmetrical about μ, SK and MA have a bias indepen-
dent of n. The wider the spacing of the design points, the larger the
bias becomes.
- The variance of all three estimators is strongly dependent on the dis-
tance from μ to the nearest design point.
Hamilton (1979) and James et al. (1984) carried out large simulation
studies to compare a number of estimators under a variety of symmetric

tolerance distribution models. The outcomes of the two studies differ re-
markably, especially for the trimmed SK-estimators. The difference is at-
tributable to the different position of the dose levels relative to μ in the
two studies (Hoekstra 1989). The conclusions of the two simulation studies
are as follows. Hamilton (1979) recommends the 10% or 20% trimmed SK
for n = 10, 20 because of their favourable MSE under a variety of tolerance
models. However, due to the position of μ in the design the reported MSE's
of heavily trimmed estimators are misleadingly small. James et al. (1984)
find that SK5% performs quite well. Heavier trimmed SK's were never more
efficient than the untrimmed SK.
Miller and Halpern (1980) study asymptotic properties (n → ∞, dif-
ference between design points → 0 and infinite number of design points) of
several robust estimators and find that SK10% performs better than SK5%
for two very heavy tailed distributions but less so for moderately contaminated
distributions.
Hamilton (1980), using the same design and tolerance distributions as
he did before, compared 95% confidence intervals obtained from logit-MLE
(delta method) and trimmed SK’s in a Monte Carlo study (n = 5,10,20).
The empirical coverages never fell below 91% for any of the methods. With
strongly contaminated steep tolerance distributions MLE, SK, SK5% and
SK10% all resulted in conservative confidence intervals, MLE giving the
widest intervals in cases where all methods could be applied (>1 partial
kill).
Stephan (1977) advocates the moving average method because of its
calculability in cases when p1 ≠ 0 or pk ≠ 1 (unlike the untrimmed SK
which would need an undesirable fabrication of data) and when there is
just one concentration with partial kill (unlike MLE). However, he does not
consider the efficiency of MA. Engeman et al. (1986) compared MA with
probit and logit methods using 3 to 6 dose levels and n from 2 to 20 by
simulation. MA was clearly less efficient under the logit model, but did not
fail to produce an estimate as often as the other methods.
Williams (1986) also concludes that for small sample sizes MLE intervals
obtained by the Fieller method are conservative even under the logit model
itself. He uses designs with a total of 20 to 30 observations spread over 4 to
6 dose levels and different values of the design centre relative to μ. The LR-
intervals were reported to be liberal, but rarely fell below 93% confidence
probability. The simulation study included models leading to substantial
percentages of data sets with one or no concentrations with partial kill.
Using Hamilton’s (1979) design and models, Kappenman (1987) com-
pares a kernel method with SK10%; the earlier performs better, especially
with large n (n = 30). Miller and Schmitt (1988) compare 2 kernel meth-
ods under several models, including non-symmetric ones, with the probit
method. The design involves 48 equidistant log concentrations with n = 1.
The kernel methods do considerably better under the non-probit models.

4.2 ADDITIONAL SIMULATION STUDY

An additional study was carried out to compare some methods in sit-
uations with few partial responses. The methods considered are MLE, LR,
SK and MA with span 8; the design was x = 1, 2, ..., 10 with n = 10 at each
level. To avoid undue influence of the position of μ on the outcomes, μ was
drawn from a uniform distribution between 5 and 6 for each of the 1000 sim-
ulated data sets independently. The underlying tolerance distribution was
the logistic distribution with β = 1, 3 and 6. The average percentages re-
sponse at the 10 design points are given in Table 2. The results are presented
in Table 3.

x        β=1     β=3     β=6

1        1.0     0.0     0.0
2        3.0     0.0     0.0
3        7.8     0.1     0.0
4       18.5     1.6     0.0
5       36.6    21.5    10.7
6       62.0    78.0    87.7
7       80.5    98.6   100.0
8       91.9   100.0   100.0
9       97.0   100.0   100.0
10      99.1   100.0   100.0

Table 2. Average percentage response in simulation study.
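One replicate of the simulation design described above can be sketched as follows; the function name and the fixed seed are illustrative choices, not from the paper:

```python
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(0)  # illustrative seed

def simulate_assay(beta, n=10, levels=np.arange(1, 11)):
    """One simulated data set under the Section 4.2 design: x = 1, ..., 10
    with n = 10 animals per level, a logistic tolerance curve with slope beta,
    and mu drawn uniformly on (5, 6)."""
    mu = rng.uniform(5.0, 6.0)
    p = expit(beta * (levels - mu))   # logistic response probabilities
    deaths = rng.binomial(n, p)       # quantal responses at each level
    return mu, deaths

mu, deaths = simulate_assay(beta=3.0)
```

Averaging the response proportions over many such replicates reproduces columns like those of Table 2; with β = 6 most replicates have at most one partial kill, which is what makes MLE fail so often in Table 3.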

For each method, all statistics are given with reference to those data
sets on which the method could be applied. MLE failed in all cases with 0
or 1 partial kill, the other methods in cases with 0 partial kills. The results
are scaled with respect to the variance of the tolerance curve.
No difference of practical relevance exists between SK and MA8. This
agrees with Finney's (1953) conclusion about the similarity of the 2 methods
in this type of situation. Bias played a negligible role in all methods. The
empirical confidence probabilities are all acceptably close to the nominal
value. With a steep logistic curve or, equivalently, a wide spacing of the
concentrations, the methods SK and MA8 give a slightly liberal confidence
interval. Hamilton (1980) found the same with a smaller β (β = 1.838) and
n = 5. MLE is not generally applicable in this situation, and is conservative
even for the subset of data to which it can be applied. Fieller's method
seems to yield no improvement over the simpler delta-method.

                                   β=1      β=3      β=6

Data sets without                   0%     0.8%    14.4%
partial response
Data sets with 1                    0%    28.4%    74.6%
partial response

Spearman-Kärber
  Bias × β                     -0.0122   0.0123   0.0086
  MSE × β²                      0.0952   0.2916   0.6768
  C.I. includes μ                95.6%      94%    91.5%
  Average width of C.I. × β       1.22     2.13     3.24

Moving Average
  Bias × β                     -0.0137   0.0123   0.0084
  MSE × β²                      0.0989   0.2916   0.6768
  C.I. includes μ                95.5%      94%    91.5%
  Average width of C.I. × β       1.25     2.13     3.24

Maximum likelihood
  Bias × β                     -0.0127   0.0097   0.0441
  MSE × β²                      0.0983   0.2969   0.5760
  F.C.I. includes μ              96.7%    98.9%     100%
  D.C.I. includes μ              95.3%    95.6%    98.2%
  Average width of F.C.I. × β     1.32     2.70     5.13
  Average width of D.C.I. × β     1.23     2.21     4.04

Likelihood Ratio method
  C.I. includes μ                95.6%      94%      97%
  Average width of C.I. × β       1.25     2.11     4.00

C.I.   : 95% confidence interval
F.C.I. : 95% confidence interval, Fieller method
D.C.I. : 95% confidence interval, Delta method

Table 3. Results of simulation study.



With β = 6 the LR-interval was conservative, a result that cannot be
attributed to sampling error. This contrasts with Williams' (1986) findings
with smaller sample sizes.

5. CONCLUSIONS

Lay-out of the design points, number of observations and underlying


model all influence the performance of the estimators in a rather unpre-
dictable way.
In practice, one never knows the underlying model. Hence, if the data
allow a sufficiently powerful form of model checking, this should be done.
For this type of data it is perhaps best to base the LC50 estimation on a
well fitting parametric model, combined with a maximum likelihood proce-
dure. One has the advantage of information on other aspects of the tolerance
curve in addition to the LC50-value. If the data give insufficient informa-
tion about the form of the tolerance curve, as is the case with few partial
responses (say less than 3) or poor coverage of the tolerance curve by the
design, a more robust procedure seems preferable. Perhaps one is still pre-
pared to assume an approximately symmetrical tolerance density, so that
the (trimmed) Spearman-Karber method can be applied. If not, the method
of moving average or, more efficiently, kernel methods are applicable.
An important difference between the methods lies in the different de-
mands they make on the data to be applicable. The appropriate statistical
method will therefore depend more strongly on the outcomes of the bioassay
than is usual for quantitative data. This will inevitably cause bias in all
methods.
Incidental deaths at low concentrations can indicate background mor-
tality or individuals with high sensitivity to the poison. One of these two
possibilities has to be chosen to enable estimation of the LC50. In the first
case an Abbott-like modification of the chosen method is appropriate. The
second case can lead to choice of an asymmetric response model or - if
corresponding incidental survivors at high concentrations are feasible as well
- use of a trimmed Spearman-Kärber estimate. In practice it will be very
difficult to discriminate between these cases.

REFERENCES

Abbott, W.S. (1925), "A method of computing the effectiveness of an insecticide". Journal
of Economic Entomology 18, 265-267.
Andrews, D.F., P.J. Bickel, F.R. Hampel, P.J. Huber, W.H. Rogers and J.W. Tukey
(1972). Robust Estimates of Location - Survey and Advances. Princeton: Princeton
University Press.
Ashton, W.D. (1972). The Logit Transformation. New York: Hafner Publishing Co.
Barlow, R.E., D.J. Bartholomew, J.M. Bremner and H.D. Brunk (1972). Statistical Infer-
ence Under Order Restrictions. New York: Wiley.
Bennett, B.M. (1952), "Estimation of LD50 by moving averages". Journal of Hygiene 50,
157-164.
Bennett, B.M. (1963), "Optimum moving averages for the estimation of median effective
dose in bioassay". Journal of Hygiene 61, 401-406.
Berkson, J. (1955), "Maximum likelihood and minimum χ² estimates of the logistic func-
tion". Journal of the American Statistical Association 50, 130-162.
Berkson, J. (1980), "Minimum chi-square, not maximum likelihood!" Annals of Statistics
8, 447-487.
Efron, B. (1985), "Bootstrap confidence intervals for a class of parametric problems".
Biometrika 72, 45-58.
Engeman, R.M., D.L. Otis and W.E. Dusenberry (1986), "Small sample comparison of
Thompson's estimator to some common bioassay estimators". Journal of Statistical
Computation and Simulation 25, 237-250.
Finney, D.J. (1950), "The estimation of the mean of a normal tolerance distribution".
Sankhya 10, 341-360.
Finney, D.J. (1953), "The estimation of the ED50 for a logistic response curve". Sankhya
12, 121-136.
Finney, D.J. (1971). Probit Analysis, 3rd edition. Cambridge: The University Press.
Finney, D.J. (1978). Statistical Methods in Biological Assay, 3rd edition. London: Charles
Griffin.
Finney, D.J. (1985), "The median lethal dose and its estimation". Archives of Toxicology
56, 215-218.
Goedhart, P.W. (1986), "Statistical analysis of quantal assay data". ITI-TNO rapport,
nr. 86, ITI A 31.
Govindarajulu, Z. (1988). Statistical Techniques in Bioassay. New York: Karger.
Hamilton, M.A. (1979), "Robust estimates of the ED50". Journal of the American Statis-
tical Association 74, 344-354.
Hamilton, M.A. (1980), "Inference about the ED50 using the trimmed Spearman-Kärber
procedure - a Monte Carlo investigation". Communications in Statistics B9, 235-
254.
Hamilton, M.A., R.C. Russo and R.V. Thurston (1977), "Trimmed Spearman-Kärber
method for estimating median lethal concentrations in toxicity bioassays". Environ-
mental Science and Technology 11, 714-719. Correction (1978), 12, 417.
Hoekstra, J.A. (1987), "Acute bioassays with control mortality". Water, Air and Soil
Pollution 35, 311-317.
Hoekstra, J.A. (1989), "Estimation of the LC50". Letter to the editor. Biometrics 45,
337-338.
Hoel, D.G. (1980), "Incorporation of background in dose-response models". Federation
Proceedings 39, 73-75.
James, B.R., K.L. James and H. Westenberger (1984), "An efficient R-estimator for the
ED50". Journal of the American Statistical Association 79, 164-173.
Kalbfleisch, J.D., D.R. Krewski and J. Van Ryzin (1983), "Dose-response models for time-
to-response toxicity data (with discussion)". Canadian Journal of Statistics 11, 25-49.
Kappenman, R.F. (1987), "Nonparametric estimation of dose-response curves with appli-
cation to ED50 estimation". Journal of Statistical Computation and Simulation 28,
1-13.
Kärber, G. (1931), "Beitrag zur kollektiven Behandlung pharmakologischer Reihenver-
suche". Archiv für Experimentelle Pathologie und Pharmakologie 162, 480-487.
Kooijman, S.A.L.M. (1981), "Parametric analyses of mortality rates in bioassays". Water
Research 15, 107-119.
Kooijman, S.A.L.M. (1983), "Statistical aspects of the determination of mortality rates in
bioassays". Water Research 17, 749-759.
Litchfield, J.T. and F. Wilcoxon (1949), "A simplified method of evaluating dose-effect
experiments". Journal of Pharmacology and Experimental Therapeutics 96, 99-113.
Miller, R.G. and J.W. Halpern (1980), "Robust estimators for quantal bioassay". Biometrika
67, 103-110.
Moore, D.F. (1987), "Modelling the extraneous variance in the presence of extra-binomial
variation". Applied Statistics 36, 8-14.
Morgan, B.J.T. (1988), "Extended models for quantal response data". Statistica Neer-
landica 42, 253-273.
Müller, H.-G. and Th. Schmitt (1988), "Kernel and probit estimates in quantal bioassay".
Journal of the American Statistical Association 83, 750-759.
Racine, A., A.P. Grieve, H. Flühler and A.F.M. Smith (1986), "Bayesian methods in prac-
tice: experiences in the pharmaceutical industry (with Discussion)". Applied Statistics
35, 93-150.
Sanathanan, L.P., E.T. Gade and N.L. Shipkowitz (1987), "Trimmed logit method for
estimating the ED50 in quantal bioassay". Biometrics 43, 825-832.
Smith, K.C., N.E. Savin and J.L. Robertson (1984), "A Monte Carlo comparison of max-
imum likelihood and minimum chi-square sampling distributions in logit analysis".
Biometrics 40, 471-482.
Spearman, C. (1908), "The method of 'right and wrong cases' ('constant stimuli') without
Gauss's formulae". British Journal of Psychology 2, 227-242.
Stephan, C.E. (1977), "Methods for calculating an LC50". In Aquatic Toxicology and Haz-
ard Evaluation, F.L. Mayer and J.L. Hamelink (eds), 65-84. Philadelphia: American
Society for Testing and Materials.
Thompson, W.R. (1947), "Use of moving averages and interpolation to estimate median-
effective dose - I - Fundamental formulas, estimation of error and relation to other
methods". Bacteriological Reviews 11, 115-145.
Williams, D.A. (1982), "Extra-binomial variation in logistic linear models". Applied Statis-
tics 31, 144-148.
Williams, D.A. (1986), "Interval estimation of the median lethal dose". Biometrics 42,
641-646.
