
Pattern Recognition Letters 33 (2012) 793–797


Ridler and Calvard's, Kittler and Illingworth's and Otsu's methods for image thresholding

Jing-Hao Xue a,*, Yu-Jin Zhang b

a Department of Statistical Science, University College London, London WC1E 6BT, UK
b Department of Electronic Engineering, Tsinghua University, Beijing 100084, China

Article info

Article history:
Received 27 February 2011
Available online 11 January 2012
Communicated by D. Coeurjolly

Keywords:
Image thresholding
Iterative selection
Discriminant analysis
Minimum error thresholding
Mixture of Gaussian distributions
Otsu's method

Abstract

There are close relationships between three popular approaches to image thresholding, namely Ridler and Calvard's iterative-selection (IS) method, Kittler and Illingworth's minimum-error-thresholding (MET) method and Otsu's method. The relationships can be briefly described as follows: the IS method is an iterative version of Otsu's method; Otsu's method can be regarded as a special case of the MET method. The purpose of this correspondence is to provide a comprehensive clarification, some practical implications and further discussions of these relationships.

© 2012 Elsevier B.V. All rights reserved.

1. Introduction

In this correspondence, we aim to discuss the close relationships between three approaches to image thresholding, namely Ridler and Calvard's (1978) (or Trussell's (1979)) iterative-selection (IS) method, Kittler and Illingworth's (1986) minimum-error-thresholding (MET) method and Otsu's (1979) method.

With assumptions of bimodal or multimodal probability density functions of grey levels x, these three approaches are widely used in practice and highly cited by scientific publications. They are covered in some popular textbooks such as that by Gonzalez and Woods (2002, 2008). The MET method is ranked as the best in a recent comprehensive survey of image-thresholding methods by Sezgin and Sankur (2004). Otsu's method is implemented as the default approach to image thresholding in some commercial and free software such as MATLAB (The MathWorks, Inc.) and GIMP (www.gimp.org).
The popularity of all three approaches is not a coincidence.
Recently, Xu et al. (2011) proved that, for image binarisation (or two-level thresholding), Otsu's optimal threshold is the threshold t that equals the average of the two class means, denoted by \mu_0(t) and \mu_1(t), for the two classes separated by t; that is, t = \{\mu_0(t) + \mu_1(t)\}/2. This result is in fact the iterative rule underlying the IS method. Such a link between the IS method and Otsu's method has also been built by other studies, such as Reddi et al. (1984) and Magid et al. (1990).

* Corresponding author. Tel.: +44 20 7679 1863; fax: +44 20 3108 3105.
E-mail addresses: jinghao.xue@ucl.ac.uk (J.-H. Xue), zhang-yj@tsinghua.edu.cn (Y.-J. Zhang).
0167-8655/$ - see front matter © 2012 Elsevier B.V. All rights reserved.
doi:10.1016/j.patrec.2012.01.002
Indeed, as we shall clarify more comprehensively in this correspondence, these three approaches are closely related to each other: briefly speaking, the IS method is an iterative version of Otsu's method; Otsu's method can be regarded as a special case of the MET method.

We shall show that, between the IS method, Otsu's method and the MET method, the links can be readily built from the perspective of using a Gaussian-mixture distribution to model the grey-level distribution of an image, as indicated by Kurita et al. (1992) and Kittler and Illingworth (1986), among others. Such a perspective is different from, and complementary to, that of Reddi et al. (1984), Magid et al. (1990) and Xu et al. (2011).

In this context, although this correspondence may mainly revisit some results from various classical literature, our intentions are twofold. First, we intend to provide practitioners with a more comprehensive clarification and some practical implications of the close relationships between these three popular approaches. Secondly, we intend to encourage further discussions about effectively applying, extending and evaluating the established image-thresholding approaches.
2. Relationships between the three approaches
Here we only consider image binarisation, but the discussions
presented in the following sections can be readily generalised to
multi-level thresholding.


Hence we assume that, in an image of N pixels, there are only two classes, C_0 and C_1.

Let \{p(x), x = 0, \ldots, T\}, for grey levels x, denote the normalised grey-level histogram constructed from the N pixels, such that \sum_{x=0}^{T} p(x) = 1; by abuse of notation, we shall also use p(x) to denote the probability density function of x.

In addition, let y denote the class indicator of x, for example y = 0 for x \in C_0 and y = 1 for x \in C_1. Hence, the histogram (or, more precisely, the probability density function) p(x) can be modelled by a two-component mixture distribution: p(x) = \sum_{y=0}^{1} p_y\, p(x \mid y), where p_y is the prior probability for C_y (and thus p_1 = 1 - p_0) and p(x \mid y) is the class-conditional distribution of x within C_y.

Image binarisation is a technique that uses a threshold t to partition the image into two classes C_0(t) and C_1(t), where C_0(t) = \{i : 0 \le x_i \le t, 1 \le i \le N\} and C_1(t) = \{i : t < x_i \le T, 1 \le i \le N\}. That is, C_0(t) includes all the pixels with grey levels x no bigger than t and C_1(t) consists of the remaining pixels.

Now we discuss the relationships between the three pairs formed by the IS method, the MET method and Otsu's method, respectively.
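These quantities can be computed directly from the normalised histogram. Below is a minimal sketch in Python (NumPy assumed; the helper name `class_stats` is ours for illustration, not from the paper):

```python
import numpy as np

def class_stats(p, t):
    """Class priors p_y(t) and means mu_y(t) for C0(t) = {x <= t} and
    C1(t) = {x > t}, given a normalised grey-level histogram p (sums to 1)."""
    x = np.arange(p.size)
    p0 = p[:t + 1].sum()
    p1 = 1.0 - p0
    mu0 = (x[:t + 1] * p[:t + 1]).sum() / p0 if p0 > 0 else 0.0
    mu1 = (x[t + 1:] * p[t + 1:]).sum() / p1 if p1 > 0 else 0.0
    return p0, p1, mu0, mu1

# Toy histogram: pixels concentrated at grey levels 50 and 200.
counts = np.zeros(256)
counts[50] = 300
counts[200] = 100
p = counts / counts.sum()          # normalised so that sum_x p(x) = 1
p0, p1, mu0, mu1 = class_stats(p, t=128)
# p0 = 0.75, p1 = 0.25, mu0 = 50.0, mu1 = 200.0
```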
2.1. The IS method and the iterative version of the MET method

In each of Sections 2.1, 2.2 and 2.3, we shall first investigate the relationship between two of the three approaches, and then discuss some practical pitfalls and implications for the use of the corresponding two approaches.

2.1.1. The relationship

Based on the Bayes discriminant rule, under which the expected misclassification error rate is minimised, an optimal threshold t is the grey level x such that p(y=0 \mid x) = p(y=1 \mid x) or equivalently, given p(y=0 \mid x) > 0 and by using a discriminant function,

\log \frac{p(y=1 \mid x)}{p(y=0 \mid x)} = \log \frac{p_1\, p(x \mid y=1)}{p_0\, p(x \mid y=0)} = 0.    (1)

Let us assume that, for each class y, the class-conditional distribution p(x \mid y) is a Gaussian N(\mu_y, \sigma_y^2) distribution, where \mu_y and \sigma_y^2 are the mean and variance for class C_y. It follows that Eq. (1) becomes

\log \frac{p_1}{p_0} - \log \frac{\sigma_1}{\sigma_0} - \frac{(x - \mu_1)^2}{2\sigma_1^2} + \frac{(x - \mu_0)^2}{2\sigma_0^2} = 0.    (2)

As shown in Kittler and Illingworth (1986), solving this quadratic equation for x leads to an iterative version of the MET method.

Let us further assume that p_1 = p_0 and \sigma_1^2 = \sigma_0^2, i.e. the two classes C_1 and C_0 are of equal sizes and equal variances. It follows that Eq. (2) degenerates into (x - \mu_1)^2 = (x - \mu_0)^2, or simply

x = \frac{\mu_0 + \mu_1}{2}.    (3)

This can lead to the IS method with an iterative rule t = \{\mu_0(t) + \mu_1(t)\}/2, where \mu_y(t) can be estimated by the sample mean of class C_y(t) determined by threshold t.

In short, the IS method is a special case of the iterative version of the MET method, in which case equal sizes and equal variances are further assumed for the two classes.

2.1.2. Some practical pitfalls and implications

In practice, an inappropriately-selected initial value often results in the failure of an iterative algorithm. This is unfortunately also the case for the iterative MET method and the IS method; see Kittler and Illingworth (1986), Ye and Danielsson (1988) and Xu et al. (2011) for illustrative examples. Hence, Ye and Danielsson (1988) propose to use the IS method to set the initial threshold for the iterative MET method, and their experiments show that this strategy makes the latter more robust. The strategy and its positive influence may be justified by the fact that the IS method can be viewed as a special case of the iterative MET method.

Furthermore, both methods can be derived from the discriminant function \log\{p(C_1 \mid x)/p(C_0 \mid x)\} = 0. From this perspective, the iterative MET method is based on Gaussian-based quadratic discriminant analysis (QDA), and the IS method is based on a special case of Gaussian-based linear discriminant analysis (LDA). The QDA reverts to the LDA if equal variances are assumed; in Eq. (2), the quadratic term x^2 disappears when \sigma_0^2 = \sigma_1^2. Therefore, some practical guidelines and pitfalls associated with the LDA and the QDA may be applied to the IS method and the iterative MET method; this merits further empirical investigation, although beyond the scope of this correspondence.

2.2. Otsu's method and the MET method

2.2.1. The relationship

Let us assume that, for a candidate threshold t, p(x \mid y; t) is a Gaussian N(\mu_y(t), \sigma_y^2(t)) distribution, where \mu_y(t) and \sigma_y^2(t) are the mean and variance of class C_y(t) determined by t.

Under this assumption, Kurita et al. (1992) show that the rule adopted by the MET method for an optimal threshold is equivalent to the search for the threshold t that provides the largest maximum log-likelihood (or equivalently likelihood), which is based on p(x, y; t) and on the factorisation p(x, y; t) = p(y; t)\, p(x \mid y; t). That is,

t^\ast = \arg\max_t \left[ \max_{\Theta_t} \sum_{i=1}^{N} \log\{ p(y_i; t)\, p(x_i \mid y_i; t) \} \right],    (4)

where the parameters \Theta_t = \{p_0(t), \mu_0(t), \sigma_0^2(t), \mu_1(t), \sigma_1^2(t)\} are estimated by their maximum-likelihood estimators (i.e. their sample estimators). This methodology can be traced back to Kittler and Illingworth (1986).

Under the further assumptions that p_1(t) = p_0(t) and \sigma_1^2(t) = \sigma_0^2(t), Kurita et al. (1992) also show that Otsu's method is equivalent to the search for the threshold t that provides the largest maximum log-likelihood based on p(x \mid y; t). That is,

t^\ast = \arg\max_t \left[ \max_{\Theta_t} \sum_{i=1}^{N} \log p(x_i \mid y_i; t) \right],    (5)

where \Theta_t = \{\mu_0(t), \mu_1(t), \sigma_W^2(t)\}, in which \sigma_W^2(t), denoting both \sigma_1^2(t) and \sigma_0^2(t), is estimated by the within-class variance. In fact, Eq. (5) can be obtained through the degeneration of Eq. (4), under those further assumptions that p_1(t) = p_0(t) and \sigma_1^2(t) = \sigma_0^2(t).

In short, Otsu's method can be viewed as a special case of the MET method, in which case equal sizes and equal variances are further assumed for the two classes.

2.2.2. Some practical pitfalls and implications

In practice, when those further assumptions of equal sizes and equal variances are approximately satisfied, Otsu's method is preferred because its model is more parsimonious (Kurita et al., 1992). However, when the assumptions are apparently violated, Otsu's threshold tends to split the class of a larger size (Kittler and Illingworth, 1985, 1986), as illustrated in the left-hand panel of Fig. 1 for two Gaussian classes with equal variances but distinct sizes, and to bias towards the class of a larger variance (Kittler and Illingworth, 1986; Xu et al., 2011), as illustrated in the right-hand panel of Fig. 1 for two Gaussian classes with equal sizes but distinct variances. In other words, a more equal-sized and equal-spread partition is favoured by Otsu's method in this case. Such patterns may be explained by the assumptions of equal sizes and equal variances that underlie the derivation of Otsu's method in Section 2.2.1.
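The likelihood criteria of Eqs. (4) and (5) can both be evaluated by a plain grid search over t, plugging in the sample estimators. The sketch below is a minimal illustration (Python with NumPy; the function names are ours); terms that do not depend on t are dropped, so that the Eq. (5) criterion reduces, as the text notes, to minimising the within-class variance:

```python
import numpy as np

def _moments(p, t):
    # Priors, means and variances of the two classes split at t
    # (p is a normalised histogram); None if one side is empty.
    x = np.arange(p.size)
    w0, w1 = p[:t + 1], p[t + 1:]
    p0, p1 = w0.sum(), w1.sum()
    if p0 <= 0 or p1 <= 0:
        return None
    mu0 = (x[:t + 1] * w0).sum() / p0
    mu1 = (x[t + 1:] * w1).sum() / p1
    v0 = (((x[:t + 1] - mu0) ** 2) * w0).sum() / p0
    v1 = (((x[t + 1:] - mu1) ** 2) * w1).sum() / p1
    return p0, p1, mu0, mu1, v0, v1

def met_threshold(p):
    """Eq. (4): largest plug-in log-likelihood under the full two-Gaussian
    model (free priors and variances); constants in t are dropped."""
    best, best_t = -np.inf, None
    for t in range(p.size - 1):
        m = _moments(p, t)
        if m is None:
            continue
        p0, p1, mu0, mu1, v0, v1 = m
        if v0 <= 0 or v1 <= 0:
            continue
        ll = p0 * (np.log(p0) - 0.5 * np.log(v0)) \
           + p1 * (np.log(p1) - 0.5 * np.log(v1))
        if ll > best:
            best, best_t = ll, t
    return best_t

def otsu_threshold(p):
    """Eq. (5): equal priors and a common within-class variance, so the
    criterion reduces to minimising sigma_W^2(t)."""
    best, best_t = np.inf, None
    for t in range(p.size - 1):
        m = _moments(p, t)
        if m is None:
            continue
        p0, p1, mu0, mu1, v0, v1 = m
        vw = p0 * v0 + p1 * v1      # within-class variance
        if vw < best:
            best, best_t = vw, t
    return best_t

# Usage: a symmetric two-Gaussian histogram (means 80 and 170, equal sizes
# and variances); both criteria should land near the midpoint, 125.
x = np.arange(256)
pdf = 0.5 * np.exp(-0.5 * ((x - 80) / 15.0) ** 2) \
    + 0.5 * np.exp(-0.5 * ((x - 170) / 15.0) ** 2)
p = pdf / pdf.sum()
```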

[Fig. 1. Otsu's binarisation of simulated data for two Gaussian classes. Left-hand panel: two classes with equal variances but distinct sizes (5%:95%); right-hand panel: two classes with equal sizes but distinct variances (\sigma_0^2 = 256, \sigma_1^2 = 16). Otsu's thresholds t_Otsu, indicated by solid lines, split the class with a larger size (left-hand panel) and bias towards the class with a larger variance (right-hand panel).]
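The size-bias pattern in Fig. 1 is easy to reproduce numerically. The following sketch (Python with NumPy; the parameter values are ours, chosen for illustration, not those used in the paper's figure) builds histograms from two Gaussian densities and runs an exhaustive Otsu search:

```python
import numpy as np

def otsu(p):
    """Exhaustive Otsu: maximise the between-class variance, written in the
    equivalent form sigma_B^2(t) = p0 p1 (mu0 - mu1)^2."""
    x = np.arange(p.size)
    best, best_t = -1.0, 0
    for t in range(p.size - 1):
        p0 = p[:t + 1].sum()
        p1 = 1.0 - p0
        if p0 <= 0 or p1 <= 0:
            continue
        mu0 = (x[:t + 1] * p[:t + 1]).sum() / p0
        mu1 = (x[t + 1:] * p[t + 1:]).sum() / p1
        sb = p0 * p1 * (mu0 - mu1) ** 2
        if sb > best:
            best, best_t = sb, t
    return best_t

x = np.arange(256)
gauss = lambda m, s: np.exp(-0.5 * ((x - m) / s) ** 2) / s

# Equal sizes: the threshold sits near the midpoint of the means, 125.
p_eq = 0.5 * gauss(100, 15) + 0.5 * gauss(150, 15)
t_eq = otsu(p_eq / p_eq.sum())

# Highly unequal sizes (5%:95%), equal variances: the threshold is pushed
# well past the midpoint, into the larger class, as in Fig. 1.
p_uneq = 0.05 * gauss(100, 15) + 0.95 * gauss(150, 15)
t_uneq = otsu(p_uneq / p_uneq.sum())
```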

Moreover, Otsu's method is based on Fisher's LDA; both Otsu's method and Fisher's LDA use a within-class variance \sigma_W^2 and/or a between-class variance \sigma_B^2 (Otsu, 1979). As we know, Fisher's LDA, under the assumptions of two class-conditional Gaussian distributions with equal variances, is equivalent to the Gaussian-based LDA; the MET method, as shown in Section 2.1, is based on the Gaussian-based QDA. In this context, as with the IS method and the iterative MET method, some practical guidelines and pitfalls related to the LDA and the QDA may also be applied to Otsu's method and the non-iterative MET method.
2.3. The IS method and Otsu's method

2.3.1. The relationship

From the relationships presented in Sections 2.1 and 2.2, we can observe that the IS method is an iterative version of Otsu's method. As we mentioned in Section 1, such a link has been built from different perspectives, including that proposed in an early work by Reddi et al. (1984).

Otsu's original method, based on optimising \sigma_W^2(t) or \sigma_B^2(t), is slow for multi-level thresholding. Hence, Reddi et al. (1984) provide an iterative version of Otsu's method, which can be used as a fast algorithm for searching for optimal multiple thresholds. For binarisation, this iterative Otsu method applies the same iterative rule, t = \{\mu_0(t) + \mu_1(t)\}/2, as that of the IS method.

The iterative rule can be derived from differentiating either \sigma_W^2(t) or \sigma_B^2(t). As with Reddi et al. (1984), let us use \sigma_B^2(t), which is given by

\sigma_B^2(t) = p_0(t)\{\mu_0(t) - \mu_T\}^2 + p_1(t)\{\mu_1(t) - \mu_T\}^2 = p_0(t)\mu_0^2(t) + p_1(t)\mu_1^2(t) - \mu_T^2,    (6)

where \mu_T = p_0(t)\mu_0(t) + p_1(t)\mu_1(t) is independent of t and denotes the total mean of grey levels x.

Using a continuous representation of p(x), we can write

p_0(t) = \int_0^t p(x)\,dx, \quad \mu_0(t) = \frac{1}{p_0(t)} \int_0^t x\,p(x)\,dx; \quad p_1(t) = \int_t^T p(x)\,dx, \quad \mu_1(t) = \frac{1}{p_1(t)} \int_t^T x\,p(x)\,dx.    (7)

It follows from differentiating \sigma_B^2(t) with respect to t that

\frac{d\sigma_B^2(t)}{dt} = 2\mu_0(t)\,t\,p(t) - \mu_0^2(t)\,p(t) - 2\mu_1(t)\,t\,p(t) + \mu_1^2(t)\,p(t).    (8)

Given p(t) > 0 and \mu_0(t) \neq \mu_1(t), with simple algebra we can obtain t = \{\mu_0(t) + \mu_1(t)\}/2 from setting d\sigma_B^2(t)/dt = 0, as shown in Reddi et al. (1984).

As we have mentioned, the same result can also be obtained by differentiating \sigma_W^2(t), because the sum of \sigma_W^2(t) and \sigma_B^2(t), denoted by \sigma_T^2, is a constant independent of t; \sigma_T^2 represents the total variance of grey levels x for an image. Some variants of the differentiation of \sigma_W^2(t) can be found in Magid et al. (1990), based on a continuous representation of \sigma_W^2(t), and in Dong et al. (2008) and Xu et al. (2011), based on discrete representations.

In short, the IS method is an iterative version of Otsu's method. Indeed, an exhaustive search for t = \{\mu_0(t) + \mu_1(t)\}/2 should provide the same threshold as that of Otsu's method.

2.3.2. Some practical pitfalls and implications

In practice, due to the characteristics of an iterative algorithm for optimisation, the IS method is not only subject to the initial value of the threshold, as shown in Xu et al. (2011) among others, but also subject to certain local extrema, because \sigma_B^2(t), or equivalently \sigma_W^2(t), may not be unimodal with respect to t, as illustrated in Kittler and Illingworth (1985) and Lee and Park (1990). In Fig. 2 we also provide an illustrative example, where: (top panel) Otsu's method supplies a suboptimal threshold at about 156; (middle panel) the within-class variance \sigma_W^2(t), which is used by Otsu's method for searching for an optimal threshold, exhibits both global and local minima, and hence \sigma_W^2(t) is not unimodal; and (bottom panel) for a range of appropriate initial values the IS method can reach a better final threshold at about 130.

In other words, the IS method may provide an optimal threshold quite different from, although often similar to, that provided by Otsu's method. In addition, a global extremum located by Otsu's method may not be a better threshold than a local extremum located by the IS method, in particular when the sizes of the two classes are highly distinct from each other. In this case a simple valley check, which only checks whether p(t) < p(\mu_y(t)), may help to choose a better extremum (Kittler and Illingworth, 1985; Xue and Titterington, 2011a).

3. Further Discussions
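The agreement between the fixed point of the IS rule and the exhaustive Otsu search can be checked numerically on a clean bimodal histogram. The sketch below (Python with NumPy; helper names are ours) illustrates this, with the caveat from Section 2.3.2 that on histograms with local extrema and an unfortunate initial value the two can disagree:

```python
import numpy as np

def class_means(p, t):
    # Sample means mu_0(t), mu_1(t) of the two classes split at t.
    x = np.arange(p.size)
    p0 = p[:t + 1].sum()
    mu0 = (x[:t + 1] * p[:t + 1]).sum() / p0
    mu1 = (x[t + 1:] * p[t + 1:]).sum() / (1.0 - p0)
    return mu0, mu1

def is_threshold(p, t0=None, max_iter=100):
    """Ridler and Calvard's iterative selection: repeat t <- {mu0+mu1}/2
    until the threshold stops changing; t0 defaults to the overall mean."""
    x = np.arange(p.size)
    t = int(round((x * p).sum())) if t0 is None else t0
    for _ in range(max_iter):
        mu0, mu1 = class_means(p, t)
        t_new = int(round(0.5 * (mu0 + mu1)))
        if t_new == t:
            break
        t = t_new
    return t

def otsu_threshold(p):
    """Exhaustive search maximising sigma_B^2(t) = p0 p1 (mu0 - mu1)^2."""
    best, best_t = -1.0, 0
    for t in range(p.size - 1):
        p0 = p[:t + 1].sum()
        if p0 <= 0 or p0 >= 1:
            continue
        mu0, mu1 = class_means(p, t)
        sb = p0 * (1.0 - p0) * (mu0 - mu1) ** 2
        if sb > best:
            best, best_t = sb, t
    return best_t

# A clean, well-separated bimodal histogram: the IS fixed point and the
# exhaustive Otsu optimum coincide up to discretisation.
x = np.arange(256)
pdf = 0.5 * np.exp(-0.5 * ((x - 70) / 12.0) ** 2) \
    + 0.5 * np.exp(-0.5 * ((x - 190) / 12.0) ** 2)
p = pdf / pdf.sum()
```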

Our interpretations of the IS method, the MET method and Otsu's method mainly follow those by Kittler and Illingworth (1986) and Kurita et al. (1992), based on statistical mixture models and maximum-likelihood estimation. There exist other interpretations of one or two of these methods from various perspectives, such as those in Kittler et al. (1985) based on simple statistics without using a histogram, in Yan (1996) based on a general weighted-cost function, in Morii (1991) and Jiulun and Winxin

(1997) based on entropies, and in Xue and Titterington (2011b) based on hypothesis tests, among others.

[Fig. 2. Binarisation of simulated data for two Gaussian classes. Top panel: Otsu's threshold, t_Otsu, for two classes with equal variances but highly-distinct sizes (2%:98%). Middle panel: the within-class variance \sigma_W^2(t) used by Otsu's method for searching for an optimal threshold t_Otsu, rescaled for illustrative purposes. Bottom panel: final thresholds obtained by the IS method versus the corresponding initial values of the thresholds.]
In Sections 2.1.1, 2.2.1 and 2.3.1, we have provided our interpretations of these established approaches and thus a more comprehensive clarification of the relationships between them. In Sections 2.1.2, 2.2.2 and 2.3.2, we have discussed some practical pitfalls and implications for the use of these approaches. Besides these, we shall present some further discussions about extending and evaluating them effectively, as follows.
First, as shown before, Otsu's method and the MET method can be derived from using a mixture of Gaussian distributions to model the grey-level distribution, based on different assumptions about the Gaussian distributions. Therefore, some of their extensions can be, and have been, developed by using mixtures of other distributions, such as Poisson distributions (Pal and Bhandari, 1993), generalised Gaussian distributions (Bazi et al., 2007; Fan et al., 2008), skew-normal and log-concave distributions (Xue and Titterington, 2011c), Laplace distributions (Xue and Titterington, 2011a), and certain variants of Rayleigh (Xue et al., 1999), Nakagami-gamma, log-normal and Weibull distributions (Moser and Serpico, 2006), to name but a few.
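As a toy illustration of swapping the distribution family, the sketch below (Python with NumPy; helper names are ours) applies the plug-in log-likelihood recipe of Section 2.2 with Poisson class-conditionals, in the spirit of, but not reproducing, Pal and Bhandari (1993); the Poisson MLE for each class rate is simply the class mean, and the log(x!) term is the same for every t and is dropped:

```python
import numpy as np

def poisson_met_threshold(p):
    """MET-style grid search with Poisson class-conditionals: plug in the
    class means as the rate estimates and maximise the log-likelihood."""
    x = np.arange(p.size)
    best, best_t = -np.inf, None
    for t in range(1, p.size - 1):
        p0 = p[:t + 1].sum()
        p1 = 1.0 - p0
        if p0 <= 0 or p1 <= 0:
            continue
        mu0 = (x[:t + 1] * p[:t + 1]).sum() / p0
        mu1 = (x[t + 1:] * p[t + 1:]).sum() / p1
        if mu0 <= 0:
            continue
        # Per-pixel log-likelihood, up to a constant independent of t.
        ll = (p0 * (np.log(p0) + mu0 * np.log(mu0) - mu0)
              + p1 * (np.log(p1) + mu1 * np.log(mu1) - mu1))
        if ll > best:
            best, best_t = ll, t
    return best_t

def poisson_pmf(lam, n):
    # Poisson(lam) pmf on {0, ..., n-1} via the standard recursion.
    pmf = np.zeros(n)
    pmf[0] = np.exp(-lam)
    for k in range(1, n):
        pmf[k] = pmf[k - 1] * lam / k
    return pmf

# A mixture of two Poissons (rates 10 and 40, equal sizes): the selected
# threshold lies between the two rates.
p = 0.5 * poisson_pmf(10.0, 101) + 0.5 * poisson_pmf(40.0, 101)
p /= p.sum()
t = poisson_met_threshold(p)
```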
Secondly, although, as with most other automatic image-thresholding approaches, Otsu's method and the MET method are by nature clustering (or unsupervised-learning) approaches, they were motivated by and based on discriminant analysis, a supervised-learning approach to the search for optimal separation through setting the discriminant function \log\{p(C_1 \mid x)/p(C_0 \mid x)\} = 0. For semi-automatic image thresholding, some pixels with known class labels y can be collected. In this case, some semi-supervised learning techniques (Chapelle et al., 2006) can be adapted to image thresholding.
Thirdly, the three approaches discussed in this correspondence are usually based on assumptions of bimodal or multimodal probability density functions of grey levels x. Hence, they often perform poorly in unimodal cases, as empirically demonstrated by Rosin (2001) and Medina-Carnicer and Madrid-Cuevas (2008), for example. Nevertheless, as shown by a piece of recent work (Medina-Carnicer et al., 2011), with a certain transformation of the image histogram, the performance of Otsu's method can be improved for edge detection, a common unimodal-thresholding application.
Last but not least, a variety of measures have been taken to evaluate image-thresholding methods, or more generally image-segmentation methods, in comparative studies (Sahoo et al., 1988; Rosin and Ioannidis, 2003; Sezgin and Sankur, 2004). Zhang (1996) categorises these measures into three groups: the analytical, empirical-goodness and empirical-discrepancy groups; such a categorisation has also been discussed for edge detection (Fernández-García et al., 2004; Ortiz and Oliver, 2006). The empirical-discrepancy measures require a reference image (also called a gold standard or ground truth), while the empirical-goodness measures do not. Zhang et al. (2008) propose a hierarchy of evaluation measures, which classifies empirical-discrepancy measures as supervised and empirical-goodness measures as unsupervised, and provide a survey of the unsupervised measures.

In practice, each measure has its advantages and disadvantages. Examples of the disadvantages include: a supervised measure in general finds no gold standard available for real images; an unsupervised measure favours certain methods that explicitly or implicitly use the measure as a criterion for searching for an optimal segmentation. In our context, a widely-used measure for performance comparison between image-thresholding methods, the grey-level uniformity measure (Levine and Nazif, 1985), is basically equivalent to Otsu's method (Ng and Lee, 1996) and thus always favours the latter. Such an equivalence is also indicated in Zhang and Gerbrands (1994), based on the link between the uniformity measure and the goodness criteria proposed by Weszka and Rosenfeld (1978).

4. Summary

In this correspondence, we have provided a comprehensive clarification of the close relationships between three popular image-thresholding approaches. That is, in short, Ridler and Calvard's IS method is an iterative version of Otsu's method; Otsu's method can be regarded as a special case of Kittler and Illingworth's MET method. It is our expectation that such a clarification can help practitioners to understand more comprehensively the characteristics, thresholding performances and pitfalls of these approaches, and thus facilitate their application, extension and evaluation.
Acknowledgements

The authors are grateful for the referees' and the Area Editor's constructive comments, in particular those on unimodal thresholding and the evaluation of image-thresholding methods.
References

Bazi, Y., Bruzzone, L., Melgani, F., 2007. Image thresholding based on the EM algorithm and the generalised Gaussian distribution. Pattern Recognition 40 (2), 619–634.
Chapelle, O., Schölkopf, B., Zien, A. (Eds.), 2006. Semi-Supervised Learning. The MIT Press, Cambridge, MA.
Dong, L., Yu, G., Ogunbona, P., Li, W., 2008. An efficient iterative algorithm for image thresholding. Pattern Recognition Letters 29 (9), 1311–1316.
Fan, S.-K.S., Lin, Y., Wu, C.-C., 2008. Image thresholding using a novel estimation method in generalised Gaussian distribution mixture modelling. Neurocomputing 72 (1–3), 500–512.
Fernández-García, N.L., Medina-Carnicer, R., Carmona-Poyato, A., Madrid-Cuevas, F.J., Prieto-Villegas, M., 2004. Characterization of empirical discrepancy evaluation measures. Pattern Recognition Letters 25 (1), 35–47.
Gonzalez, R.C., Woods, R.E., 2002. Digital Image Processing, second ed. Prentice Hall, Upper Saddle River, NJ.
Gonzalez, R.C., Woods, R.E., 2008. Digital Image Processing, third ed. Pearson Prentice Hall, Upper Saddle River, NJ.
Jiulun, F., Winxin, X., 1997. Minimum error thresholding: A note. Pattern Recognition Letters 18 (8), 705–709.
Kittler, J., Illingworth, J., 1985. On threshold selection using clustering criteria. IEEE Transactions on Systems, Man, and Cybernetics SMC-15 (5), 652–655.
Kittler, J., Illingworth, J., 1986. Minimum error thresholding. Pattern Recognition 19 (1), 41–47.
Kittler, J., Illingworth, J., Föglein, J., 1985. Threshold selection based on a simple image statistic. Computer Vision, Graphics, and Image Processing 30 (2), 125–147.
Kurita, T., Otsu, N., Abdelmalek, N., 1992. Maximum likelihood thresholding based on population mixture models. Pattern Recognition 25 (10), 1231–1240.
Lee, H., Park, R.-H., 1990. Comments on "An optimal multiple threshold scheme for image segmentation". IEEE Transactions on Systems, Man, and Cybernetics 20 (3), 741–742.
Levine, M.D., Nazif, A.M., 1985. Dynamic measurement of computer generated image segmentations. IEEE Transactions on Pattern Analysis and Machine Intelligence 7 (2), 155–164.
Magid, A., Rotman, S.R., Weiss, A.M., 1990. Comment on "Picture thresholding using an iterative selection method". IEEE Transactions on Systems, Man, and Cybernetics 20 (5), 1238–1239.
Medina-Carnicer, R., Madrid-Cuevas, F.J., 2008. Unimodal thresholding for edge detection. Pattern Recognition 41 (7), 2337–2346.
Medina-Carnicer, R., Muñoz-Salinas, R., Carmona-Poyato, A., Madrid-Cuevas, F.J., 2011. A novel histogram transformation to improve the performance of thresholding methods in edge detection. Pattern Recognition Letters 32 (5), 676–693.
Morii, F., 1991. A note on minimum error thresholding. Pattern Recognition Letters 12 (6), 349–351.
Moser, G., Serpico, S.B., 2006. Generalized minimum-error thresholding for unsupervised change detection from SAR amplitude imagery. IEEE Transactions on Geoscience and Remote Sensing 44 (10), 2972–2982.
Ng, W.S., Lee, C.K., 1996. Comment on using the uniformity measure for performance measure in image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence 18 (9), 933–934.
Ortiz, A., Oliver, G., 2006. On the use of the overlapping area matrix for image segmentation evaluation: A survey and new performance measures. Pattern Recognition Letters 27 (16), 1916–1926.
Otsu, N., 1979. A threshold selection method from grey-level histograms. IEEE Transactions on Systems, Man, and Cybernetics SMC-9 (1), 62–66.
Pal, N.R., Bhandari, D., 1993. Image thresholding: some new techniques. Signal Processing 33 (2), 139–158.
Reddi, S.S., Rudin, S.F., Keshavan, H.R., 1984. An optimal multiple threshold scheme for image segmentation. IEEE Transactions on Systems, Man, and Cybernetics SMC-14 (4), 661–665.
Ridler, T., Calvard, S., 1978. Picture thresholding using an iterative selection method. IEEE Transactions on Systems, Man, and Cybernetics SMC-8 (8), 630–632.
Rosin, P.L., 2001. Unimodal thresholding. Pattern Recognition 34 (11), 2083–2096.
Rosin, P.L., Ioannidis, E., 2003. Evaluation of global image thresholding for change detection. Pattern Recognition Letters 24 (14), 2345–2356.
Sahoo, P.K., Soltani, S., Wong, A.K.C., Chen, Y.C., 1988. A survey of thresholding techniques. Computer Vision, Graphics, and Image Processing 41 (2), 233–260.
Sezgin, M., Sankur, B., 2004. Survey over image thresholding techniques and quantitative performance evaluation. Journal of Electronic Imaging 13 (1), 146–165.
Trussell, H.J., 1979. Comments on "Picture thresholding using an iterative selection method". IEEE Transactions on Systems, Man, and Cybernetics SMC-9 (5), 311.
Weszka, J.S., Rosenfeld, A., 1978. Threshold evaluation techniques. IEEE Transactions on Systems, Man, and Cybernetics SMC-8 (8), 622–629.
Xue, J.-H., Titterington, D.M., 2011a. Median-based image thresholding. Image and Vision Computing 29 (9), 631–637.
Xue, J.-H., Titterington, D.M., 2011b. t-tests, F-tests and Otsu's methods for image thresholding. IEEE Transactions on Image Processing 20 (8), 2392–2396.
Xue, J.-H., Titterington, D.M., 2011c. Threshold selection from image histograms with skewed components based on maximum-likelihood estimation of skew-normal and log-concave distributions, manuscript.
Xue, J.-H., Zhang, Y.-J., Lin, X.G., 1999. Rayleigh-distribution based minimum error thresholding for SAR images. Journal of Electronics (China) 16 (4), 336–342.
Xu, X., Xu, S., Jin, L., Song, E., 2011. Characteristic analysis of Otsu threshold and its applications. Pattern Recognition Letters 32 (7), 956–961.
Yan, H., 1996. Unified formulation of a class of image thresholding techniques. Pattern Recognition 29 (12), 2025–2032.
Ye, Q.-Z., Danielsson, P.-E., 1988. On minimum error thresholding and its implementations. Pattern Recognition Letters 7 (4), 201–206.
Zhang, Y.-J., 1996. A survey on evaluation methods for image segmentation. Pattern Recognition 29 (8), 1335–1346.
Zhang, Y.-J., Gerbrands, J.J., 1994. Objective and quantitative segmentation evaluation and comparison. Signal Processing 39 (1–2), 43–54.
Zhang, H., Fritts, J.E., Goldman, S.A., 2008. Image segmentation evaluation: a survey of unsupervised methods. Computer Vision and Image Understanding 110 (2), 260–280.
