https://doi.org/10.1007/s41096-018-0033-4
RESEARCH ARTICLE
B Manoj Chacko
manojchacko02@gmail.com
Asha P. S.
ashstat@gmail.com

J Indian Soc Probab Stat

1 Introduction

Let $R_1, R_2, \ldots, R_n$ be the first $n$ lower record values arising from a population with an absolutely continuous CDF $F(x)$ and PDF $f(x)$. The joint density of the first $n$ lower record values is given by

$$f(r_1, r_2, \ldots, r_n) = f(r_n) \prod_{i=1}^{n-1} \frac{f(r_i)}{F(r_i)}, \quad -\infty < r_n < \cdots < r_2 < r_1 < \infty. \tag{1}$$
In reliability theory, record values are used for statistical modeling, shock models,
and they are closely connected with the occurrence times of a corresponding non-
homogeneous Poisson process, with some minimal repair scheme (see Kamps 1994).
Interest in records has increased steadily over the years since Chandler's (1952) formulation. Useful surveys are given in Ahsanullah (1995) and Arnold et al. (1998).
Estimation of parameters using record values and prediction of future record values
have been studied by several authors, for details see Balakrishnan and Chan (1998),
Raqab (2002) and Sultan et al. (2002). Bayesian estimation and prediction for some
life distributions based on record values have been considered by Ahmadi and Doost-
parast (2006). Seo et al. (2012) developed an entropy estimation method for upper
record values from the generalized half-logistic distribution.
Entropy measures the uncertainty associated with a random variable. Let X be a random variable having an absolutely continuous CDF $F(x)$ and PDF $f(x)$; then the entropy introduced by Shannon (1948), called the Shannon entropy, is defined by

$$H(f) = -\int_{-\infty}^{\infty} f(x) \log f(x)\, dx. \tag{2}$$
Several generalized entropy measures have been proposed in the literature to measure
the uncertainty of a probability distribution since the seminal work of Shannon (1948).
Among these, one of the most important and applicable measures is Renyi entropy
proposed by Renyi (1961). For a random variable X with PDF f (x) the Renyi entropy
is given by
$$H_\alpha(f) = \frac{1}{1-\alpha} \log \int_{-\infty}^{\infty} f(x)^{\alpha}\, dx; \quad \alpha\,(\neq 1) > 0. \tag{3}$$
Several authors have discussed the problem of estimation for entropy functions for
different distributions. Kang et al. (2012) derived estimators of entropy for a double
exponential distribution based on multiply Type-II censored samples. Cho et al. (2014)
derived estimators for the entropy function of a Rayleigh distribution based on doubly-
generalized Type II hybrid censored samples. Baratpour et al. (2007) developed the
entropy of upper record values and provided several upper and lower bounds for
this entropy by using the hazard rate function. Cramer and Bagh (2011) discussed the
entropy in the Weibull distribution for progressive censoring. Cho et al. (2015) obtained
estimators for the entropy function of a Weibull distribution based on generalized Type
II hybrid censored samples.
However, not much work on the estimation of entropy based on record values is seen in the available literature. Hence in this paper we consider the Bayesian estimation
for entropies of a two parameter generalized exponential distribution (which is denoted by GE(β, σ)) introduced by Kundu and Gupta (1999). It was observed that the GE
distribution can be used in situations where a skewed distribution for a nonnegative
random variable is needed. The cumulative distribution function (CDF) and probability density function (PDF) are given by

$$F(x \mid \beta, \sigma) = \left(1 - e^{-x/\sigma}\right)^{\beta} \tag{4}$$

and

$$f(x \mid \beta, \sigma) = \frac{\beta}{\sigma} \left(1 - e^{-x/\sigma}\right)^{\beta-1} e^{-x/\sigma}, \tag{5}$$
where β is the shape parameter and σ is the scale parameter. When β = 1, the
GE distribution coincides with the exponential distribution. For this model, Kundu
and Gupta (1999) considered different estimation procedures and compared their per-
formances through numerical simulations. Raqab and Ahsanullah (2001) and Raqab
(2002) studied the properties of order statistics and record values, and their inferences,
respectively. Raqab and Madi (2005) considered the Bayesian estimation and predic-
tion for the GE distribution, using informative priors. Kundu and Gupta (2008) considered the estimation problem of the GE distribution from a Bayesian viewpoint. Kim and Song (2010) considered Bayesian estimation of the parameters of the GE distribution based on doubly
censored sample. In various engineering applications such as independent component
analysis, image analysis, genetic analysis and time delay estimation it is useful to esti-
mate the entropy of a system or process given some observations. If the observations
available are only in the form of lower record values and the parent population follows
GE distribution then the inferential procedure developed in this work can be utilized
to estimate the entropies of the system or process.
The Shannon entropy for the generalized exponential distribution with CDF given in (4) is given by

$$H(\beta, \sigma) = -\log\frac{\beta}{\sigma} + \frac{\beta-1}{\beta} + \left[\psi(\beta+1) - \psi(1)\right], \tag{6}$$

where $\psi(\cdot)$ is the digamma function. The Renyi entropy for the generalized exponential distribution with CDF given in (4) is given by

$$H_\alpha(\beta, \sigma) = \frac{1}{1-\alpha}\left[\alpha\log\beta - (\alpha-1)\log\sigma + \log\Gamma(\alpha(\beta-1)+1) + \log\Gamma(\alpha) - \log\Gamma(\alpha\beta+1)\right]. \tag{7}$$
In this paper, first we consider the maximum likelihood estimation of entropies for
GE based on lower record values. The maximum likelihood method is the most widely
used estimation method. However, in many practical situations, occurrences of record
values are very rare and sample sizes are often very small. In such cases the MLE can have substantial bias, and intervals based on the asymptotic normality of MLEs do not perform well. Hence in this paper we also consider Bayesian estimation of entropies
of GE distribution based on lower record values.
The paper is organized as follows. In Sect. 2, we obtain the maximum likelihood estimators (MLEs) for Shannon entropy and Renyi entropy using lower record values. In Sect. 3, we construct asymptotic and bootstrap confidence intervals for the entropies. In Sect. 4, we consider the Bayes estimation of entropies using the importance sampling method. Section 5 is devoted to some simulation studies. In Sect. 6, a real data set is used for illustration and finally in Sect. 7 we give some concluding remarks.
2 Maximum Likelihood Estimation

In this section, we obtain the MLEs of entropies for the two parameter generalized exponential distribution using lower record values. Let $R_i$, $i = 1, 2, \ldots, n$ be the first $n$ lower record values arising from the GE distribution with CDF given in (4). Let $D_n = (R_1, R_2, \ldots, R_n)$. Then from (1) the likelihood function is given by

$$L(\beta, \sigma \mid d_n) = \left(\frac{\beta}{\sigma}\right)^{n} \left(1 - e^{-r_n/\sigma}\right)^{\beta} \prod_{i=1}^{n} \frac{e^{-r_i/\sigma}}{1 - e^{-r_i/\sigma}}. \tag{8}$$
Then

$$\frac{\partial \log L}{\partial \beta} = \frac{n}{\beta} + \log\left(1 - e^{-r_n/\sigma}\right) \tag{9}$$

and

$$\frac{\partial \log L}{\partial \sigma} = -\frac{n}{\sigma} - \frac{\beta r_n e^{-r_n/\sigma}}{\sigma^2\left(1 - e^{-r_n/\sigma}\right)} + \sum_{i=1}^{n}\frac{r_i}{\sigma^2} + \sum_{i=1}^{n}\frac{r_i e^{-r_i/\sigma}}{\sigma^2\left(1 - e^{-r_i/\sigma}\right)}.$$

Equating the partial derivatives to zero, we obtain the likelihood equations

$$\frac{n}{\beta} + \log\left(1 - e^{-r_n/\sigma}\right) = 0 \tag{10}$$
and

$$-\frac{n}{\sigma} - \frac{\beta r_n e^{-r_n/\sigma}}{\sigma^2\left(1 - e^{-r_n/\sigma}\right)} + \sum_{i=1}^{n}\frac{r_i}{\sigma^2} + \sum_{i=1}^{n}\frac{r_i e^{-r_i/\sigma}}{\sigma^2\left(1 - e^{-r_i/\sigma}\right)} = 0. \tag{11}$$

From (10), we obtain

$$\beta = \frac{-n}{\log\left(1 - e^{-r_n/\sigma}\right)}. \tag{12}$$
Substituting (12) in (11), we get

$$-\frac{n}{\sigma} + \frac{n r_n e^{-r_n/\sigma}}{\sigma^2\left(1 - e^{-r_n/\sigma}\right)\log\left(1 - e^{-r_n/\sigma}\right)} + \sum_{i=1}^{n}\frac{r_i}{\sigma^2} + \sum_{i=1}^{n}\frac{r_i e^{-r_i/\sigma}}{\sigma^2\left(1 - e^{-r_i/\sigma}\right)} = 0. \tag{13}$$
Let $\hat\sigma$ be the MLE of $\sigma$ obtained by solving the non-linear Eq. (13) with respect to $\sigma$. Then, the MLE of $\beta$ is given by

$$\hat\beta = \frac{-n}{\log\left(1 - e^{-r_n/\hat\sigma}\right)}. \tag{14}$$

Then by the invariance property of the MLE, the MLE of Shannon entropy for the GE distribution based on lower record values is given by

$$\hat H_{ML} = -\log\frac{\hat\beta}{\hat\sigma} + \frac{\hat\beta - 1}{\hat\beta} + \left[\psi(\hat\beta+1) - \psi(1)\right]. \tag{15}$$

Similarly, the MLE of Renyi entropy for the GE distribution based on lower record values is given by

$$\hat H_{(ML)\alpha} = \frac{1}{1-\alpha}\left[\alpha\log\hat\beta - (\alpha-1)\log\hat\sigma + \log\Gamma(\alpha(\hat\beta-1)+1) + \log\Gamma(\alpha) - \log\Gamma(\alpha\hat\beta+1)\right]. \tag{16}$$
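Equation (13) has no closed-form solution, so $\hat\sigma$ must be found numerically; $\hat\beta$ then follows from (14). A minimal sketch with SciPy, assuming the search bracket below contains the root (the decreasing record sequence used for illustration is hypothetical, not data from the paper):

```python
import numpy as np
from scipy.optimize import brentq

def ge_record_mles(r):
    """MLEs of (beta, sigma) from lower records r_1 > r_2 > ... > r_n:
    solve the profile equation (13) for sigma, then Eq. (14) for beta."""
    r = np.asarray(r, dtype=float)
    n, rn = len(r), r[-1]

    def profile_score(s):
        v = -np.expm1(-r / s)           # 1 - exp(-r_i/s), computed stably
        vn = -np.expm1(-rn / s)
        return (-n / s
                + n * rn * np.exp(-rn / s) / (s**2 * vn * np.log(vn))
                + r.sum() / s**2
                + np.sum(r * np.exp(-r / s) / (s**2 * v)))

    # bracket assumed to contain the root for typical data; widen if needed
    sigma_hat = brentq(profile_score, r.min() / 30, 100 * r.max())
    beta_hat = -n / np.log(-np.expm1(-rn / sigma_hat))   # Eq. (14)
    return beta_hat, sigma_hat

# Illustrative (hypothetical) decreasing record sequence
b_hat, s_hat = ge_record_mles([11.35, 10.40, 9.2, 6.73, 5.59, 5.58])
```

Plugging the returned pair into (15) and (16) then gives the entropy MLEs.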
3 Confidence Interval
In this section, we obtain asymptotic as well as bootstrap confidence intervals (CIs) for the Shannon and Renyi entropies.
The observed Fisher information matrix is given by

$$I(\beta, \sigma) = -\begin{pmatrix} I_{20} & I_{11} \\ I_{11} & I_{02} \end{pmatrix}, \tag{17}$$

where $I_{ij} = \frac{\partial^2 \log L}{\partial \beta^i \partial \sigma^j}$, $i, j = 0, 1, 2$ and $i + j = 2$. From (9) we have

$$I_{20} = -\frac{n}{\beta^2},$$
$$I_{11} = \frac{-r_n e^{-r_n/\sigma}}{\sigma^2\left(1 - e^{-r_n/\sigma}\right)}$$

and
$$I_{02} = \frac{n}{\sigma^2} - \frac{2}{\sigma^3}\sum_{i=1}^{n} r_i - \frac{\beta r_n e^{-r_n/\sigma}\left[r_n - 2\sigma\left(1 - e^{-r_n/\sigma}\right)\right]}{\sigma^4\left(1 - e^{-r_n/\sigma}\right)^2} + \sum_{i=1}^{n}\frac{r_i e^{-r_i/\sigma}\left[r_i - 2\sigma\left(1 - e^{-r_i/\sigma}\right)\right]}{\sigma^4\left(1 - e^{-r_i/\sigma}\right)^2}.$$
By using the delta method, we obtain the asymptotic distribution of $\hat H_{ML}$. For that we have

$$\widehat{var}(\hat H_{ML}) = \widehat{var}(H(\hat\beta, \hat\sigma)) \approx h'(\hat\beta, \hat\sigma)\,[I(\hat\beta, \hat\sigma)]^{-1}\, h(\hat\beta, \hat\sigma),$$

where

$$h(\hat\beta, \hat\sigma) = \begin{pmatrix} \partial H(\beta,\sigma)/\partial\beta \\ \partial H(\beta,\sigma)/\partial\sigma \end{pmatrix}_{(\beta,\sigma)=(\hat\beta,\hat\sigma)} = \begin{pmatrix} h_\beta \\ h_\sigma \end{pmatrix},$$

with

$$h_\beta = -\frac{1}{\hat\beta} + \frac{1}{\hat\beta^2} + \psi'(\hat\beta + 1)$$

and

$$h_\sigma = \frac{1}{\hat\sigma},$$

where $\psi'(\cdot)$ denotes the trigamma function. Thus $(\hat H_{ML} - H)/\sqrt{\widehat{var}(\hat H_{ML})}$ is asymptotically distributed as $N(0, 1)$, and a $(1-\nu)100\%$ confidence interval for the Shannon entropy $H$ based on the MLE is

$$\left(\hat H_{ML} - z_{\nu/2}\sqrt{\widehat{var}(\hat H_{ML})},\;\; \hat H_{ML} + z_{\nu/2}\sqrt{\widehat{var}(\hat H_{ML})}\right),$$

where $z_{\nu/2}$ is the $(1-\nu/2)100$ percentile of $N(0, 1)$.
Similarly, we obtain the asymptotic CI of the Renyi entropy $H_\alpha$ as

$$\left(\hat H_{(ML)\alpha} - z_{\nu/2}\sqrt{\widehat{var}(\hat H_{(ML)\alpha})},\;\; \hat H_{(ML)\alpha} + z_{\nu/2}\sqrt{\widehat{var}(\hat H_{(ML)\alpha})}\right),$$

where

$$\widehat{var}(\hat H_{(ML)\alpha}) \approx k'(\hat\beta, \hat\sigma)\,[I(\hat\beta, \hat\sigma)]^{-1}\, k(\hat\beta, \hat\sigma),$$

with

$$k(\hat\beta, \hat\sigma) = \begin{pmatrix} \partial H_\alpha(\beta,\sigma)/\partial\beta \\ \partial H_\alpha(\beta,\sigma)/\partial\sigma \end{pmatrix}_{(\beta,\sigma)=(\hat\beta,\hat\sigma)} = \begin{pmatrix} k_\beta \\ k_\sigma \end{pmatrix},$$

$$k_\beta = \frac{1}{1-\alpha}\left[\frac{\alpha}{\hat\beta} + \alpha\psi(\alpha(\hat\beta-1)+1) - \alpha\psi(\alpha\hat\beta+1)\right]$$

and

$$k_\sigma = \frac{1}{\hat\sigma}.$$
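The delta-method interval for the Shannon entropy can be sketched as follows, assuming the MLEs are already available. `observed_information` implements the $I_{20}$, $I_{11}$ and $I_{02}$ expressions above, and the trigamma term in $h_\beta$ uses `scipy.special.polygamma`; this is a sketch, not the authors' code.

```python
import numpy as np
from scipy.special import polygamma
from scipy.stats import norm

def observed_information(r, beta, sigma):
    """Second derivatives I20, I11, I02 of log L, as given above."""
    r = np.asarray(r, dtype=float)
    n, rn = len(r), r[-1]
    v = -np.expm1(-r / sigma)                 # 1 - e^{-r_i/sigma}
    vn = -np.expm1(-rn / sigma)
    e, en = np.exp(-r / sigma), np.exp(-rn / sigma)
    I20 = -n / beta**2
    I11 = -rn * en / (sigma**2 * vn)
    I02 = (n / sigma**2 - 2 * r.sum() / sigma**3
           - beta * rn * en * (rn - 2 * sigma * vn) / (sigma**4 * vn**2)
           + np.sum(r * e * (r - 2 * sigma * v) / (sigma**4 * v**2)))
    return I20, I11, I02

def shannon_ci_delta(r, beta_hat, sigma_hat, nu=0.05):
    """Asymptotic (1 - nu)100% CI for the Shannon entropy via the delta method."""
    I20, I11, I02 = observed_information(r, beta_hat, sigma_hat)
    info = -np.array([[I20, I11], [I11, I02]])    # observed Fisher information
    h = np.array([-1 / beta_hat + 1 / beta_hat**2 + polygamma(1, beta_hat + 1),
                  1 / sigma_hat])                  # (h_beta, h_sigma)
    var_h = float(h @ np.linalg.inv(info) @ h)
    H_ml = (-np.log(beta_hat / sigma_hat) + (beta_hat - 1) / beta_hat
            + polygamma(0, beta_hat + 1) - polygamma(0, 1))
    half = norm.ppf(1 - nu / 2) * np.sqrt(var_h)
    return H_ml - half, H_ml + half
```

The analytic second derivatives can be cross-checked against finite differences of the log-likelihood, which is a useful guard when transcribing formulas of this length.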
In this subsection, we consider the percentile bootstrap confidence interval for entropies based on the MLEs. For that we do the following:
1. Compute the MLEs β̂ (0) and σ̂ (0) of β and σ using the original lower records and
put k = 1.
2. Generate a bootstrap sample using β̂ (k−1) and σ̂ (k−1) and obtain the MLEs β̂ (k)
and σ̂ (k) using the bootstrap sample.
3. Obtain the MLEs of Shannon entropy Ĥk = H(β̂(k), σ̂(k)) and Renyi entropy Ĥα,k = Hα(β̂(k), σ̂(k)).
4. Put k = k + 1.
5. Repeat steps 2–4 B times to obtain Ĥk and Ĥα,k for k = 1, 2, …, B.
6. Arrange Ĥk and Ĥα,k, k = 1, 2, …, B, in ascending order as Ĥ(1) ≤ Ĥ(2) ≤ ⋯ ≤ Ĥ(B) and Ĥα,(1) ≤ Ĥα,(2) ≤ ⋯ ≤ Ĥα,(B), respectively.

Then the 100(1 − ν)% percentile bootstrap CI for Shannon entropy is given by (Ĥ(B(ν/2)), Ĥ(B(1−ν/2))) and the 100(1 − ν)% percentile bootstrap CI for Renyi entropy is given by (Ĥα,(B(ν/2)), Ĥα,(B(1−ν/2))).
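The steps above can be sketched as follows. Generating a parametric-bootstrap record sample uses the Markov property of lower records (given $R_{i-1} = r$, $R_i$ has CDF $F(x)/F(r)$, so $F(R_i) = U \cdot F(R_{i-1})$ for a uniform $U$). Here `mle_fn` and `entropy_fn` are placeholders to be supplied by the caller (e.g. the estimators of Sect. 2); this is a sketch under those assumptions.

```python
import numpy as np

def ge_quantile(p, beta, sigma):
    """F^{-1}(p) for the GE(beta, sigma) CDF in (4)."""
    return -sigma * np.log1p(-p ** (1.0 / beta))

def simulate_lower_records(n, beta, sigma, rng):
    """Generate n lower records from GE(beta, sigma) via the Markov
    property: F(R_i) = U * F(R_{i-1}), U ~ Uniform(0, 1)."""
    records, p = [], 1.0
    for _ in range(n):
        p *= rng.uniform()
        records.append(ge_quantile(p, beta, sigma))
    return np.array(records)

def percentile_bootstrap_ci(r, mle_fn, entropy_fn, B=1000, nu=0.05, seed=1):
    """Steps 1-6 above; mle_fn maps a record sample to (beta_hat, sigma_hat),
    entropy_fn maps (beta, sigma) to the entropy of interest."""
    rng = np.random.default_rng(seed)
    b0, s0 = mle_fn(r)
    ests = []
    for _ in range(B):
        rb = simulate_lower_records(len(r), b0, s0, rng)
        try:
            bk, sk = mle_fn(rb)
        except ValueError:          # e.g. root-finder fails on a degenerate resample
            continue
        ests.append(entropy_fn(bk, sk))
    ests = np.sort(ests)
    return ests[int(len(ests) * nu / 2)], ests[int(len(ests) * (1 - nu / 2)) - 1]
```

In practice B is taken large (the paper's simulations use 1000 replications), and resamples on which the MLE fails are simply skipped.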
4 Bayesian Estimation
In this section, we consider Bayesian estimation of entropies for the two parameter
GE distribution under symmetric as well as asymmetric loss functions.
A symmetric loss function is the squared error loss (SEL) function which is defined
as
$$L_1\left(d(\mu), \hat d(\mu)\right) = \left(\hat d(\mu) - d(\mu)\right)^2, \tag{18}$$
where $\hat d(\mu)$ is an estimate of $d(\mu)$. The Bayes estimate of $\mu$ under $L_1$ is the posterior mean of $\mu$. An asymmetric loss function is the LINEX loss (LL) function which is defined as

$$L_2\left(d(\mu), \hat d(\mu)\right) = e^{h\left(\hat d(\mu) - d(\mu)\right)} - h\left(\hat d(\mu) - d(\mu)\right) - 1, \quad h \neq 0. \tag{19}$$

The Bayes estimate of $d(\mu)$ for the loss function $L_2$ can be obtained as

$$\hat d_{LB}(\mu) = -\frac{1}{h}\log E_\mu\left(e^{-h\mu} \mid x\right), \tag{20}$$

provided $E_\mu(\cdot)$ exists. Another asymmetric loss function is the general entropy loss (EL) function given by

$$L_3\left(d(\mu), \hat d(\mu)\right) = \left(\frac{\hat d(\mu)}{d(\mu)}\right)^{q} - q\log\frac{\hat d(\mu)}{d(\mu)} - 1, \quad q \neq 0. \tag{21}$$

The Bayes estimate of $d(\mu)$ under $L_3$ is given by

$$\hat d_{EB}(\mu) = \left[E_\mu\left(\mu^{-q} \mid x\right)\right]^{-1/q}. \tag{22}$$
The likelihood function (8) can be rewritten as

$$L(\beta, \sigma \mid d_n) = \frac{\beta^n}{\sigma^n}\, e^{-\frac{1}{\sigma}\sum_{i=1}^{n} r_i}\; \frac{\prod_{i=1}^{n}\left(1 - e^{-r_i/\sigma}\right)^{\beta-1}}{\prod_{i=1}^{n-1}\left(1 - e^{-r_i/\sigma}\right)^{\beta}}. \tag{23}$$

We assume that $\beta$ and $\sigma$ are a priori independently distributed, with gamma prior density

$$\pi_1(\beta \mid a, b) = \frac{b^a}{\Gamma(a)}\,\beta^{a-1} e^{-b\beta}; \quad a > 0,\ b > 0 \tag{24}$$

for $\beta$ and inverted gamma prior density

$$\pi_2(\sigma \mid c, d) = \frac{d^c}{\Gamma(c)}\,\sigma^{-c-1} e^{-d/\sigma}; \quad c > 0,\ d > 0 \tag{25}$$

for $\sigma$, so that the joint prior density is $\pi(\beta, \sigma) = \pi_1(\beta \mid a, b)\, \pi_2(\sigma \mid c, d)$.
Therefore the Bayes estimates of any function $g(\beta, \sigma)$ of $\beta$ and $\sigma$ under the SEL, LL and EL functions are respectively given by

$$\hat g_S = \frac{\int\!\!\int g(\beta,\sigma)\, L(\beta,\sigma \mid d_n)\,\pi(\beta,\sigma)\, d\beta\, d\sigma}{\int\!\!\int L(\beta,\sigma \mid d_n)\,\pi(\beta,\sigma)\, d\beta\, d\sigma}, \tag{28}$$

$$\hat g_L = -\frac{1}{h}\log\frac{\int\!\!\int e^{-h g(\beta,\sigma)}\, L(\beta,\sigma \mid d_n)\,\pi(\beta,\sigma)\, d\beta\, d\sigma}{\int\!\!\int L(\beta,\sigma \mid d_n)\,\pi(\beta,\sigma)\, d\beta\, d\sigma} \tag{29}$$

and

$$\hat g_E = \left[\frac{\int\!\!\int (g(\beta,\sigma))^{-q}\, L(\beta,\sigma \mid d_n)\,\pi(\beta,\sigma)\, d\beta\, d\sigma}{\int\!\!\int L(\beta,\sigma \mid d_n)\,\pi(\beta,\sigma)\, d\beta\, d\sigma}\right]^{-1/q}. \tag{30}$$
These ratios of integrals cannot be obtained in closed form, so we evaluate them by the importance sampling method. The posterior density of $\beta$ and $\sigma$ can be decomposed as $\pi(\beta, \sigma \mid d_n) \propto f_1(\sigma)\, f_2(\beta \mid \sigma)\, Q(\beta, \sigma)$, where

$$Q(\beta, \sigma) = \frac{\left(1 - e^{-r_n/\sigma}\right)^{\beta}}{\left(b - \sum_{i=1}^{n}\log\left(1 - e^{-r_i/\sigma}\right)\right)^{n+a}\, \prod_{i=1}^{n}\left(1 - e^{-r_i/\sigma}\right)^{\beta+1}}, \tag{31}$$

$$f_1(\sigma) \propto \sigma^{-n-c-1}\, e^{-\frac{1}{\sigma}\left(\sum_{i=1}^{n} r_i + d\right)} \tag{32}$$

and

$$f_2(\beta \mid \sigma) \propto \beta^{n+a-1}\, e^{-\beta\left(b - \sum_{i=1}^{n}\log\left(1 - e^{-r_i/\sigma}\right)\right)}. \tag{33}$$
Thus from (32) we can see that $\sigma$ follows an inverse gamma distribution with parameters $(n + c)$ and $\left(\sum_{i=1}^{n} r_i + d\right)$. Therefore one can easily generate a sample from the distribution of $\sigma$ given in (32). Again from (33) one can see that, for a given $\sigma$, $\beta$ follows a gamma distribution with shape parameter $(n + a)$ and rate parameter $\left(b - \sum_{i=1}^{n}\log\left(1 - e^{-r_i/\sigma}\right)\right)$. Let $(\beta^{(t)}, \sigma^{(t)})$, $t = 1, 2, \ldots, N$, be samples generated in this way from (32) and (33). Then the Bayes estimates of $g(\beta, \sigma)$ under the SEL, LL and EL functions are respectively given by

$$\hat g_S = \frac{\sum_{t=1}^{N} g(\beta^{(t)}, \sigma^{(t)})\, Q(\beta^{(t)}, \sigma^{(t)})}{\sum_{t=1}^{N} Q(\beta^{(t)}, \sigma^{(t)})}, \tag{34}$$

$$\hat g_L = -\frac{1}{h}\log\frac{\sum_{t=1}^{N} e^{-h g(\beta^{(t)}, \sigma^{(t)})}\, Q(\beta^{(t)}, \sigma^{(t)})}{\sum_{t=1}^{N} Q(\beta^{(t)}, \sigma^{(t)})} \tag{35}$$

and

$$\hat g_E = \left[\frac{\sum_{t=1}^{N} (g(\beta^{(t)}, \sigma^{(t)}))^{-q}\, Q(\beta^{(t)}, \sigma^{(t)})}{\sum_{t=1}^{N} Q(\beta^{(t)}, \sigma^{(t)})}\right]^{-1/q}. \tag{36}$$
Thus the Bayes estimators of the Shannon entropy given in (6) under the SEL, LL and EL functions are obtained by taking $g(\beta, \sigma) = H(\beta, \sigma)$ in (34)–(36). Similarly, the estimators of the Renyi entropy given in (7) under the SEL, LL and EL functions are obtained by taking $g(\beta, \sigma) = H_\alpha(\beta, \sigma)$ in (34)–(36).
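The importance sampling scheme for the Shannon entropy can be sketched as follows: draw σ from the inverse gamma density (32), draw β | σ from the gamma density (33), weight each draw by Q in (31), and form (34)–(36). The record sequence implied by `r` is illustrative, the hyperparameter defaults mirror the simulation study (a = b = c = d = 2), and the EL estimate assumes the entropy stays positive over the draws.

```python
import numpy as np
from scipy.special import digamma

def bayes_shannon_estimates(r, a=2.0, b=2.0, c=2.0, d=2.0,
                            N=50_000, h=1.0, q=0.5, seed=7):
    """Importance-sampling Bayes estimates (34)-(36) of the Shannon entropy (6)."""
    rng = np.random.default_rng(seed)
    r = np.asarray(r, dtype=float)
    n, rn = len(r), r[-1]
    # sigma ~ inverse gamma(n + c, sum(r) + d), Eq. (32)
    sigma = (r.sum() + d) / rng.gamma(n + c, 1.0, size=N)
    logv = np.log(-np.expm1(-r[None, :] / sigma[:, None]))  # log(1 - e^{-r_i/sigma})
    rate = b - logv.sum(axis=1)
    beta = rng.gamma(n + a, 1.0 / rate)   # beta | sigma ~ gamma(n + a, rate), Eq. (33)
    # importance weights Q from Eq. (31), computed in log scale for stability
    logvn = np.log(-np.expm1(-rn / sigma))
    logQ = beta * logvn - (n + a) * np.log(rate) - (beta + 1) * logv.sum(axis=1)
    w = np.exp(logQ - logQ.max())
    w /= w.sum()
    # Shannon entropy (6) evaluated at each posterior draw
    g = -np.log(beta / sigma) + (beta - 1) / beta + digamma(beta + 1) - digamma(1)
    g_S = np.sum(w * g)                                                   # SEL, (34)
    g_L = g.mean() - np.log(np.sum(w * np.exp(-h * (g - g.mean())))) / h  # LL, (35)
    g_E = np.sum(w * g ** (-q)) ** (-1.0 / q)   # EL, (36); assumes g > 0 on the draws
    return g_S, g_L, g_E
```

The Renyi entropy estimates follow the same pattern, replacing g with $H_\alpha(\beta, \sigma)$ from (7).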
In this subsection we construct HPD intervals for Shannon and Renyi entropies
as described in Chen and Shao (1999). Define Ht = H (β (t) , σ (t) ) and Hα,t =
Hα (β (t) , σ (t) ), where β (t) and σ (t) for t = 1, 2, . . . , N are posterior samples gen-
erated respectively from (32) and (33) for β and σ . Let H(t) and Hα,(t) be the ordered
values of $H_t$ and $H_{\alpha,t}$, respectively. Define the normalized importance weights

$$w_t = \frac{Q(\beta^{(t)}, \sigma^{(t)})}{\sum_{t=1}^{N} Q(\beta^{(t)}, \sigma^{(t)})},$$

rearranged so that $w_j$ is the weight attached to the $j$th ordered value. The $p$th quantile of $H$ can then be estimated as

$$\hat H^{(p)} = \begin{cases} H_{(1)} & \text{if } p = 0, \\ H_{(i)} & \text{if } \sum_{j=1}^{i-1} w_j < p \le \sum_{j=1}^{i} w_j. \end{cases}$$

The $100(1-\nu)\%$, $0 < \nu < 1$, confidence interval for $H$ is given by $\left(\hat H^{(j/N)}, \hat H^{((j+[(1-\nu)N])/N)}\right)$, $j = 1, 2, \ldots, N$, where $[\cdot]$ is the greatest integer function. Then the required HPD interval for Shannon entropy $H$ is the interval with smallest width. Similarly the $p$th quantile of $H_\alpha$ can be estimated as

$$\hat H_\alpha^{(p)} = \begin{cases} H_{\alpha,(1)} & \text{if } p = 0, \\ H_{\alpha,(i)} & \text{if } \sum_{j=1}^{i-1} w_j < p \le \sum_{j=1}^{i} w_j, \end{cases}$$

and the HPD interval for $H_\alpha$ is the corresponding shortest interval.
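The Chen and Shao (1999) construction amounts to scanning all intervals that contain $(1-\nu)$ of the weighted posterior mass and keeping the shortest. A sketch on hypothetical weighted draws:

```python
import numpy as np

def hpd_interval(values, weights, nu=0.05):
    """Chen-Shao HPD interval from weighted posterior draws: among all
    weighted 100(1 - nu)% credible intervals, return the shortest."""
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    w /= w.sum()
    cum = np.cumsum(w)
    lows, highs = [], []
    j = 0
    for i in range(len(v)):
        # smallest j with at least (1 - nu) mass between draws i and j
        while j < len(v) and cum[j] - (cum[i - 1] if i > 0 else 0.0) < 1 - nu:
            j += 1
        if j == len(v):
            break
        lows.append(v[i])
        highs.append(v[j])
    k = int(np.argmin(np.array(highs) - np.array(lows)))
    return lows[k], highs[k]
```

With the importance-sampling output of the previous subsection, `values` would be the draws $H_t$ (or $H_{\alpha,t}$) and `weights` the corresponding $Q(\beta^{(t)}, \sigma^{(t)})$.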
Table 1 The bias and MSE for maximum likelihood estimator and AIL and CP for CI for Shannon entropy
of generalized exponential distribution
5 Simulation Study
In this section we carry out a simulation study for illustrating the estimation procedures
developed in previous sections. First we obtain the MLEs for Shannon entropy and
Renyi entropy using (15) and (16). We have obtained the bias and MSE of MLEs for
different values of n using 1000 simulated samples for different combinations of β and σ; these are given in Tables 1, 2 and 3. The asymptotic CI and bootstrap CI for the entropies are also obtained. The average interval length (AIL) and coverage probability (CP) are also included in Tables 1, 2 and 3. For the simulation studies for the Bayes estimators we take the hyperparameters for the prior distributions of σ and β as a = 2, b = 2, c = 2 and d = 2.
We have obtained the Bayes estimators for entropies of GE distribution using lower
record values under SEL, LL and EL functions using importance sampling method.
Table 2 The bias and MSE for maximum likelihood estimator and AIL and CP for CI for Renyi entropy
of generalized exponential distribution for α = 0.5
(e) Calculate Ĥ(β(t), σ(t)) using (6) and Ĥα(β(t), σ(t)) using (7).
(f) Set t = t + 1.
(g) Repeat steps (c) to (f) for N = 50,000 times.
(h) Calculate the Bayes estimators for Shannon entropy H (β, σ ) and Renyi
entropy Hα (β, σ ) using (34)–(36).
3. Repeat the steps 1 and 2 for 1000 times.
4. Calculate the bias and MSE of the estimator.
Table 3 The bias and MSE for maximum likelihood estimator and AIL and CP for CI for Renyi entropy
of generalized exponential distribution for α = 1.5
Repeat the simulation study for n = 5(1)8, and for different values of β and σ . The bias
and MSE for Bayes estimators for Shannon entropy under SEL, LL and EL functions
are given in Table 4. The Bayes estimators for Renyi entropy Hα (β, σ ) for α = 0.5
and α = 1.5 are obtained. The bias and MSE for the Bayes estimators are given in
Tables 5 and 6. From the tables we have the following inferences:
1. The bias and MSE of almost all estimators decrease when n increases.
2. The bias and MSE of Bayes estimators are smaller than that of MLEs.
3. For Shannon entropy, the bias and MSE of Bayes estimators under SEL function
are least when β ≤ 1 and the bias and MSE are least for EL function (q = 0.5)
when β > 1.
4. For Renyi entropy also, the bias and MSE of Bayes estimators under SEL function
are least when β ≤ 1 and the bias and MSE are least for EL function (q = 0.5)
when β > 1.
6 Real Data Analysis

To illustrate the estimation techniques developed in this paper, we consider the real data on the amount of annual rainfall (in inches) recorded at the Los Angeles Civic Center for 127 years, from 1878 to 2005 (season July 1–June 30). Madi and Raqab (2007) checked the validity of the GE model using the Kolmogorov–Smirnov (K–S) test and observed that the K–S distance is 0.0642 with a corresponding p value of 0.6718. This indicates that the GE model provides a good fit to the above data.
The lower record values obtained up to 1959 from the data are:
11.35, 10.40, 9.2, 16.73, 5.59, 5.58. We have obtained the MLEs for Shannon entropy and Renyi entropy, which are given below. For the Bayes estimation we took non-informative priors for σ and β, obtained by setting a = 0, b = 0, c = 0 and d = 0.
The estimators for Shannon entropy using different methods are given below, with the corresponding CIs in brackets. For the MLE, the bootstrap CI is reported.
The estimators for Renyi entropy using different methods are given below.

α                  0.5                       1.5
MLE                0.8169 (0.5641, 1.6124)   0.4730 (0.1255, 1.1231)
Bayes estimate
  SEL              0.8891 (0.4263, 1.6243)   0.4425 (0.2632, 1.3587)
  LL, h = −1       0.7532 (0.3894, 1.2687)   0.3460 (0.1987, 1.2147)
  LL, h = 1        0.6576 (0.3478, 1.4521)   0.4274 (0.1698, 1.1261)
  EL, q = −0.5     0.5926 (0.2678, 1.3542)   0.3569 (0.2156, 1.1243)
  EL, q = 0.5      0.5126 (0.2196, 1.3897)   0.3241 (0.1257, 1.1246)
7 Conclusion
In this work, we have considered the problem of estimation of entropy for a two
parameter generalized exponential distribution using record values. The maximum
likelihood estimation and Bayesian estimation have been applied to obtain the estima-
tors for Shannon and Renyi entropies. For obtaining the Bayes estimates, importance
Table 4 The bias and MSE for Bayes estimator for Shannon entropy of generalized exponential distribution
n  σ  β  H  |  SEL: Bias, MSE  |  LL (h = −1): Bias, MSE  |  LL (h = 1): Bias, MSE  |  EL (q = −0.5): Bias, MSE  |  EL (q = 0.5): Bias, MSE
5 1 0.5 2.30685 − 0.19062 0.50964 0.47255 0.75360 − 0.63182 0.61830 − 0.35960 0.52952 − 0.62362 0.70941
5 1.5 0.5 2.71232 − 0.49709 0.52061 0.46334 0.84854 − 0.81792 0.74300 − 0.61457 0.85027 − 0.88872 0.82397
5 1.5 1 1.40547 0.14259 0.23616 0.51578 0.56853 − 0.29424 0.18408 − 0.12017 0.21275 − 0.18548 0.16447
5 1.5 1.5 0.94704 0.48308 0.31247 0.81594 0.78855 0.17493 0.14406 0.36944 0.22412 0.09244 0.18242
5 2 1.5 1.23472 0.16402 0.17260 0.46988 0.35149 0.07321 0.12001 0.05442 0.16297 − 0.13419 0.14434
6 1 0.5 2.30685 − 0.03624 0.49706 0.38190 0.72570 − 0.44313 0.56624 − 0.34412 0.55623 − 0.47782 0.54504
6 1.5 0.5 2.71232 − 0.46778 0.50742 0.14588 0.60147 − 0.78981 0.52629 − 0.57210 0.83243 − 0.74377 0.71070
6 1.5 1 1.40547 0.13912 0.21141 0.47356 0.57128 − 0.25088 0.13380 0.13428 0.19847 − 0.17181 0.15249
6 1.5 1.5 0.94704 0.41671 0.26463 0.66927 0.57866 0.21999 0.13232 0.31632 0.19549 0.08181 0.16511
6 2 1.5 1.23472 0.13584 0.15421 0.38294 0.33784 − 0.06075 0.10870 0.04507 0.14212 − 0.13148 0.14654
7 1 0.5 2.30685 − 0.24567 0.39631 0.28895 0.61873 − 0.41557 0.56527 − 0.33977 0.54960 − 0.41582 0.50017
7 1.5 0.5 2.71232 − 0.39023 0.38079 − 0.12830 0.57857 − 0.72762 0.51419 − 0.51795 0.69935 − 0.69094 0.61884
7 1.5 1 1.40547 − 0.09773 0.17950 0.20310 0.27025 − 0.24880 0.11937 − 0.12291 0.19036 − 0.12838 0.20618
7 1.5 1.5 0.94704 0.31218 0.20202 0.49856 0.37519 0.14009 0.12904 0.22204 0.15971 0.05980 0.12809
7 2 1.5 1.23472 0.12483 0.13706 0.31390 0.30306 − 0.05803 0.10241 0.04267 0.13131 − 0.18085 0.13312
8 1 0.5 2.30685 − 0.28832 0.37234 0.08255 0.58148 − 0.39856 0.51308 − 0.31667 0.62484 − 0.38065 0.49525
8 1.5 0.5 2.71232 − 0.28051 0.35958 − 0.20986 0.51873 − 0.62815 0.48740 − 0.47574 0.60513 − 0.68787 0.51020
8 1.5 1 1.40547 − 0.07237 0.16083 0.13147 0.25978 − 0.20086 0.10840 − 0.11545 0.18322 − 0.12619 0.18079
8 1.5 1.5 0.94704 0.20923 0.14947 0.36099 0.25229 0.11952 0.10450 0.12433 0.12752 0.05598 0.11282
8 2 1.5 1.23472 0.11489 0.10153 0.26976 0.29213 − 0.04868 0.09150 0.04111 0.12002 − 0.10785 0.12853
Table 5 The bias and MSE for Bayes estimator for Renyi entropy of generalized exponential distribution when α = 0.5
n  σ  β  Hα  |  SEL: Bias, MSE  |  LL (h = −1): Bias, MSE  |  LL (h = 1): Bias, MSE  |  EL (q = −0.5): Bias, MSE  |  EL (q = 0.5): Bias, MSE
5 1 0.5 1.05469 − 0.09488 0.06479 0.09571 0.05343 − 0.27289 0.20813 − 0.15509 0.09716 0.19628 0.40035
5 1.5 0.5 0.64922 0.14690 0.11941 0.48845 0.26225 0.19707 0.12009 0.15707 0.12971 0.39621 0.52167
5 1.5 1 0.98083 0.06354 0.08002 0.25303 0.11032 − 0.11041 0.09824 − 0.07068 0.10555 − 0.13491 0.54343
5 1.5 1.5 1.11699 − 0.28055 0.11565 − 0.12852 0.13832 − 0.38442 0.26467 − 0.29790 0.16172 − 0.12714 0.10675
5 2 1.5 0.82931 0.08236 0.10840 0.18691 0.10353 − 0.13878 0.16264 − 0.10593 0.10793 − 0.07727 0.09653
6 1 0.5 1.05469 − 0.04378 0.06463 0.08236 0.06574 − 0.19632 0.10307 − 0.16531 0.09234 − 0.17001 0.15298
6 1.5 0.5 0.64922 0.12992 0.11702 0.42383 0.24297 0.19024 0.11000 0.24587 0.13966 0.30876 0.51789
6 1.5 1 0.98083 − 0.04124 0.06853 0.17712 0.10623 − 0.10122 0.09714 − 0.08799 0.08625 0.72731 0.58177
6 1.5 1.5 1.11699 − 0.19735 0.11854 − 0.07641 0.07384 − 0.24820 0.17402 − 0.27266 0.16103 − 0.11253 0.34805
6 2 1.5 0.82931 0.05325 0.11362 0.17674 0.12521 − 0.16360 0.15807 − 0.06370 0.10819 − 0.05065 0.08557
7 1 0.5 1.05469 − 0.04280 0.04855 0.07943 0.06350 − 0.13121 0.08648 − 0.05455 0.07085 − 0.13891 0.12910
7 1.5 0.5 0.64922 0.12887 0.10204 0.41806 0.23590 0.17451 0.10929 0.26817 0.12792 0.26933 0.36646
7 1.5 1 0.98083 0.03561 0.05609 0.16003 0.11323 − 0.05927 0.09317 − 0.05910 0.10670 − 0.06724 0.11367
7 1.5 1.5 1.11699 − 0.11427 0.09693 − 0.03426 0.07155 − 0.22160 0.12692 − 0.16973 0.12165 − 0.10238 0.14622
7 2 1.5 0.82931 0.05163 0.11116 0.15774 0.12770 − 0.01451 0.14156 − 0.06697 0.09529 0.04374 0.08143
8 1 0.5 1.05469 0.03145 0.03921 0.05994 0.05333 0.04770 0.08191 − 0.04316 0.06594 − 0.07436 0.11480
8 1.5 0.5 0.64922 0.12411 0.10662 0.38949 0.21328 0.16331 0.10713 0.37111 0.12473 0.31410 0.35037
8 1.5 1 0.98083 0.02228 0.05051 0.12049 0.10956 0.05026 0.09523 0.04816 0.10379 − 0.04242 0.11190
8 1.5 1.5 1.11699 − 0.09096 0.09143 0.02562 0.06815 − 0.15166 0.11450 − 0.10802 0.11516 − 0.08120 0.06022
8 2 1.5 0.82931 0.03123 0.09146 0.12340 0.15001 0.06443 0.14084 − 0.04553 0.09420 − 0.00386 0.07235
Table 6 The bias and MSE for Bayes estimator for Renyi entropy of generalized exponential distribution when α = 1.5
n  σ  β  Hα  |  SEL: Bias, MSE  |  LL (h = −1): Bias, MSE  |  LL (h = 1): Bias, MSE  |  EL (q = −0.5): Bias, MSE  |  EL (q = 0.5): Bias, MSE
5 1 0.5 − 0.42384 − 0.06836 0.17264 0.31624 0.25148 − 0.48682 0.46707 0.21742 0.17901 0.31907 0.32399
5 1.5 0.5 − 0.82931 0.28961 0.22815 0.66013 0.58130 − 0.39646 0.31713 0.58710 0.37385 0.39442 0.29716
5 1.5 1 0.40547 − 0.11964 0.09783 0.12744 0.10745 − 0.29531 0.49479 − 0.13364 0.10515 0.23624 0.12909
5 1.5 1.5 0.66011 − 0.27051 0.16473 − 0.13832 0.10824 − 0.27340 0.33422 − 0.33606 0.14627 − 0.11770 0.10402
5 2 1.5 0.37243 − 0.07704 0.10634 0.12704 0.09867 − 0.25581 0.32080 − 0.09488 0.10690 0.06164 0.09136
6 1 0.5 − 0.42384 − 0.06052 0.15234 0.24567 0.24774 − 0.39669 0.37131 0.17228 0.17147 0.20710 0.33237
6 1.5 0.5 − 0.82931 0.24812 0.21562 0.64098 0.49424 − 0.28853 0.26007 0.49682 0.32878 0.26084 0.24564
6 1.5 1 0.40547 − 0.06803 0.09147 0.10641 0.09820 − 0.28616 0.38468 − 0.08773 0.15416 0.21650 0.12576
6 1.5 1.5 0.66011 − 0.24079 0.12598 − 0.11997 0.10694 − 0.26006 0.34990 − 0.29680 0.12865 0.10904 0.08223
6 2 1.5 0.37243 − 0.06285 0.09263 0.11307 0.09298 − 0.23815 0.29022 − 0.06837 0.05866 0.05248 0.07358
7 1 0.5 − 0.42384 − 0.05949 0.13178 0.23905 0.24528 − 0.36412 0.32451 0.16201 0.16899 0.17552 0.36008
7 1.5 0.5 − 0.82931 0.22353 0.22693 0.61684 0.42873 − 0.26425 0.24866 0.36162 0.28364 0.27892 0.23982
7 1.5 1 0.40547 − 0.05051 0.08569 0.10524 0.10868 − 0.17159 0.29917 − 0.06009 0.07721 0.23468 0.18139
7 1.5 1.5 0.66011 − 0.21275 0.11951 − 0.10928 0.09724 − 0.23324 0.19767 − 0.26486 0.11791 − 0.13774 0.07127
7 2 1.5 0.37243 0.05259 0.05361 0.10911 0.08725 − 0.14044 0.15181 − 0.03609 0.04197 0.04823 0.05762
8 1 0.5 − 0.42384 − 0.03967 0.08403 0.17238 0.23027 − 0.27761 0.27202 0.12123 0.16110 0.13937 0.24508
8 1.5 0.5 − 0.82931 0.12196 0.12648 0.58661 0.36352 − 0.19827 0.23788 0.32553 0.25524 0.19973 0.16535
8 1.5 1 0.40547 0.04816 0.08441 0.10029 0.09023 − 0.14506 0.17408 0.03140 0.07154 0.21945 0.13967
8 1.5 1.5 0.66011 − 0.16791 0.13870 − 0.10626 0.08252 − 0.21502 0.19658 − 0.21324 0.10254 0.12491 0.07020
8 2 1.5 0.37243 0.04676 0.08454 0.10556 0.08519 − 0.13522 0.14986 0.02009 0.03977 0.03053 0.06935
sampling method has been applied. Based on the simulation study we have the following conclusions. The bias and MSE of all estimators decrease when n increases. The Bayes estimators for both Shannon entropy and Renyi entropy perform better than the MLEs. Among the Bayes estimators, the estimators under the squared error loss function perform better when the shape parameter β ≤ 1, and those under the entropy loss function perform better when β > 1.
Acknowledgements The authors are grateful to the referees for their valuable comments and suggestions.
References
Ahmadi J, Doostparast M (2006) Bayesian estimation and prediction for some life distributions based on
record values. Stat Pap 47:373–392
Ahsanullah M (1995) Record statistics. Nova Science Publishers, New York
Arnold BC, Balakrishnan N, Nagaraja HN (1998) Records. Wiley, New York
Balakrishnan N, Chan PS (1998) On the normal record values and associated inference. Stat Prob Lett
39:73–80
Baratpour S, Ahmadi J, Arghami NR (2007) Entropy properties of record statistics. Stat Pap 48:197–213
Cramer E, Bagh C (2011) Minimum and maximum information censoring plans in progressive censoring.
Commun Stat Theory Methods 40:2511–2527
Chandler KN (1952) The distribution and frequency of record values. J Roy Stat Soc B 14:220–228
Chen MH, Shao QM (1999) Monte Carlo estimation of Bayesian credible and HPD intervals. J Comput Gr
Stat 8:69–92
Cho Y, Sun H, Lee K (2014) An estimation of the entropy for a Rayleigh distribution based on doubly-
generalized Type-II hybrid censored samples. Entropy 16:3655–3669
Cho Y, Sun H, Lee K (2015) Estimating the entropy of a Weibull distribution under generalized progressive
hybrid censoring. Entropy 17:102–122
Kim C, Song S (2010) Bayesian estimation of the parameters of the generalized exponential distribution
from doubly censored samples. Stat Pap 51:583–597
Kundu D, Gupta RD (1999) Generalized exponential distribution. Aust NZ J Stat 41:173–188
Kundu D, Gupta RD (2008) Generalized exponential distribution: Bayesian estimation. Comput Stat Data
Anal 52:1873–1883
Madi MT, Raqab MZ (2007) Bayesian prediction of rainfall records using the generalized exponential
distribution. Environmetrics 18:541–549
Raqab MZ, Madi MT (2005) Bayesian inference for the generalized exponential distribution. J Stat Comput
Simul 75:841–852
Raqab MZ (2002) Inferences for generalized exponential distribution based on record statistics. J Stat Plan
Inference 104:339–350
Raqab MZ, Ahsanullah M (2001) Estimation of location and scale parameters of generalized exponential
distribution based on order statistics. J Stat Comput Simul 69:109–124
Kamps U (1994) Reliability properties of record values from non-identically distributed random variables.
Commun Stat Theory Methods 23:2101–2112
Kang SB, Cho YS, Han JT, Kim J (2012) An estimation of the entropy for a double exponential distribution
based on multiply Type-II censored samples. Entropy 14:161–173
Renyi A (1961) On measures of entropy and information. In: Proceedings of the fourth Berkeley symposium
on mathematics, statistics and probability. University of California Press, Berkeley, vol 1, pp 547–561
Seo JI, Lee HJ, Kang SB (2012) Estimation for generalized half logistic distribution based on records. J
Korean Data Inf Sci Soc 23:1249–1257
Shannon CE (1948) A mathematical theory of communication. Bell Syst Tech J 27:379–432
Sultan KS, Moshref ME, Childs A (2002) Record values from generalized power function distribution and
associated inference. J Appl Stat Sci 11:143–156