Applied Probability Trust is collaborating with JSTOR to digitize, preserve and extend access to Journal of Applied Probability.
This content downloaded from 132.174.255.116 on Sat, 24 Oct 2015 19:52:02 UTC
All use subject to JSTOR Terms and Conditions
J. Appl. Prob. 39, 434-440 (2002)
Printed in Israel
© Applied Probability Trust 2002

ENTROPY-BASED MEASURE OF UNCERTAINTY IN PAST LIFETIME DISTRIBUTIONS

A. DI CRESCENZO AND M. LONGOBARDI
Abstract

As proposed by Ebrahimi, uncertainty in the residual lifetime distribution can be measured by means of the Shannon entropy. In this paper, we analyse a dual characterization of life distributions that is based on entropy applied to the past lifetime. Various aspects of this measure of uncertainty are considered, including its connection with the residual entropy, the relation between its increasing nature and the DRFR property, and the effect of monotonic transformations on it.

Keywords: Residual lifetime; reversed hazard function; DRFR property; monotonic transformation

AMS 2000 Subject Classification: Primary 62N05; Secondary 90B25
1. Introduction and background
Let X be an absolutely continuous nonnegative honest random variable describing the random lifetime of an item or of a living organism. As usual, we denote the probability density function (PDF) of X as f(x), the cumulative distribution function (CDF) as F(x), and the survival function as F̄(x) = 1 − F(x). A classical measure of uncertainty for X is the differential entropy, also known as the Shannon information measure, defined as

H = −E[log f(X)] = −∫_0^{+∞} f(x) log f(x) dx,   (1.1)

where log denotes the natural logarithm. Since the classical contributions by Shannon [13] and Wiener [15], the properties and virtues of H have been thoroughly investigated. Furthermore, numerous generalizations of (1.1) have been proposed (see, for instance, [14]).
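As a numerical aside (not part of the original paper), definition (1.1) is easy to check against a case where the integral is known in closed form: for an exponential density with rate λ, the differential entropy equals 1 − log λ. The sketch below, with the arbitrary choice λ = 2, integrates −f log f by a simple midpoint rule.

```python
import math

def integrate(g, a, b, n=200_000):
    # plain midpoint rule; accurate enough for these smooth integrands
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

lam = 2.0                                   # arbitrary rate for the check
f = lambda x: lam * math.exp(-lam * x)

# H = -∫_0^∞ f log f dx, with the tail truncated where it is negligible
H = -integrate(lambda x: f(x) * math.log(f(x)), 0.0, 40.0 / lam)
print(H, 1 - math.log(lam))                 # both ≈ 0.3069
```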
The role of differential entropy as a measure of uncertainty in residual lifetime distributions has attracted increasing attention in recent years. According to Ebrahimi [5], the residual entropy at time t of a random lifetime X is defined as the differential entropy of [X | X > t] (as usual, [X | B] denotes a random variable whose distribution is identical to that of X conditional on B); that is, for all t > 0,

H(t) = −∫_t^{+∞} (f(x)/F̄(t)) log(f(x)/F̄(t)) dx
     = log F̄(t) − (1/F̄(t)) ∫_t^{+∞} f(x) log f(x) dx
     = 1 − (1/F̄(t)) ∫_t^{+∞} f(x) log r(x) dx,   (1.2)
where r(t) = f(t)/F̄(t) is the hazard function, or failure rate, of X. Given that an item has survived up to time t, H(t) measures the uncertainty about its remaining life. Various results concerning H(t) have been obtained in recent years by Ebrahimi [5], [6], [7], Ebrahimi and Pellerey [9], Ebrahimi and Kirmani [8], Asadi and Ebrahimi [1], Oluyede [12], and Navarro et al. [11].
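A numerical aside (not in the original): for an exponential lifetime, the memoryless property makes the residual entropy (1.2) constant in t, equal to the differential entropy 1 − log λ. A direct evaluation of (1.2), with λ = 2 chosen arbitrarily, shows this.

```python
import math

def integrate(g, a, b, n=200_000):
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

lam = 2.0
f = lambda x: lam * math.exp(-lam * x)
Fbar = lambda t: math.exp(-lam * t)              # survival function

def residual_entropy(t):
    # H(t) = -∫_t^∞ (f(x)/F̄(t)) log(f(x)/F̄(t)) dx, tail truncated
    g = lambda x: (f(x) / Fbar(t)) * math.log(f(x) / Fbar(t))
    return -integrate(g, t, t + 40.0 / lam)

vals = [residual_entropy(t) for t in (0.0, 0.7, 1.5)]
print(vals)                                      # each ≈ 1 - log 2
```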
However, it is reasonable to presume that in many realistic situations uncertainty is not necessarily related to the future but can also refer to the past. For instance, consider a system whose state is observed only at certain preassigned inspection times. If at time t the system is inspected for the first time and it is found to be 'down', then the uncertainty relies on the past, i.e. on which instant in (0, t) it has failed. It thus seems natural to introduce a notion of uncertainty that is dual to the residual entropy, in the sense that it refers to past time and not to future time. Without loss of generality, from now on we shall assume that F(t) > 0 for all t > 0.
Let X be a random lifetime and recall that the PDF of [X | X < t] is given by f(x)/F(t), 0 < x < t. The differential entropy of [X | X < t] for all t > 0 will be called the past entropy at time t of X, and will be denoted by H̄(t); due to (1.1) it is given by

H̄(t) = −∫_0^t (f(x)/F(t)) log(f(x)/F(t)) dx.   (1.3)
Note that H̄(t) ∈ [−∞, +∞]. Given that at time t an item has been found to be failing, H̄(t) measures the uncertainty about its past life. The following example shows the role of the past entropy in the comparison of random lifetimes.
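For a first check of (1.3) (a numerical aside): if X is uniform on (0, 1), then [X | X < t] is uniform on (0, t), so the past entropy is exactly log t.

```python
import math

def integrate(g, a, b, n=200_000):
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

def past_entropy(f, F, t):
    # H̄(t) = -∫_0^t (f(x)/F(t)) log(f(x)/F(t)) dx, as in (1.3)
    g = lambda x: (f(x) / F(t)) * math.log(f(x) / F(t))
    return -integrate(g, 0.0, t)

f = lambda x: 1.0            # PDF of U(0, 1)
F = lambda t: t              # CDF of U(0, 1)
for t in (0.25, 0.5, 0.9):
    print(t, past_entropy(f, F, t), math.log(t))   # the last two agree
```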
Example 1.1. Let two systems' components be characterized by random lifetimes X and Y with PDFs

f_X(t) = 2t 1_{(0,1)}(t)   and   f_Y(t) = 2(1 − t) 1_{(0,1)}(t),

respectively. Their differential entropy is given by H_X = H_Y = 1/2 − log 2, so that the expected uncertainty contained in f_X and f_Y about the predictability of the outcomes of X and Y is the same. However, if both components are found to have failed upon an inspection at time t ∈ (0, 1), then the uncertainty about their unknown failure times must be measured by means of the past entropy. From (1.3), for t ∈ (0, 1), we have

H̄_X(t) = 1/2 + log(t/2),

H̄_Y(t) = 1/2 + log((1 − (1 − t)²)/2) + ((1 − t)²/(1 − (1 − t)²)) log(1 − t),

so that H̄_X(t) < H̄_Y(t) for all t ∈ (0, 1). Hence, even though H_X = H_Y, the expected uncertainty contained in the PDF of X given X < t about the predictability of the failure time of X is smaller than the analogous uncertainty pertaining to Y.
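The closed forms in Example 1.1, including the direction of the comparison between the two past entropies, can be checked numerically (an aside, not part of the paper); the sketch below evaluates (1.3) directly for both components.

```python
import math

def integrate(g, a, b, n=200_000):
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

def past_entropy(f, F, t):
    g = lambda x: (f(x) / F(t)) * math.log(f(x) / F(t))
    return -integrate(g, 0.0, t)

fX, FX = (lambda x: 2 * x), (lambda t: t * t)
fY, FY = (lambda x: 2 * (1 - x)), (lambda t: 1 - (1 - t) ** 2)

for t in (0.1, 0.5, 0.9):
    hx, hy = past_entropy(fX, FX, t), past_entropy(fY, FY, t)
    s = (1 - t) ** 2
    hx_cf = 0.5 + math.log(t / 2)                                  # closed form
    hy_cf = 0.5 + math.log((1 - s) / 2) + s / (1 - s) * math.log(1 - t)
    print(t, hx, hy)        # hx matches hx_cf, hy matches hy_cf, and hx < hy
```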
From (1.3), the past entropy can be rewritten as

H̄(t) = log F(t) − (1/F(t)) ∫_0^t f(x) log f(x) dx
      = 1 − (1/F(t)) ∫_0^t f(x) log r(x) dx,   (2.1)

where r(t) = f(t)/F(t) is the reversed hazard function, or reversed failure rate, of X.
This is in agreement with the principle of maximum entropy (see [7]), according to which the uniform distribution maximizes entropy under the constraint that the probability mass is concentrated on a finite interval. A direct consequence of (2.4) is that the past entropy of a random lifetime cannot be constant for all t > 0.
Similarly to (2.4), we can prove that the past entropy of a random lifetime X distributed over (0, b) satisfies H̄_X(t) ≤ H̄_{U(0,b)}(t), where

H̄_{U(0,b)}(t) = { log t  if 0 < t < b,
                  log b  if t ≥ b }   (2.5)

is the past entropy of a random variable uniformly distributed over (0, b).
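The bound (2.5) is easy to observe numerically (an aside): for the component X of Example 1.1, distributed over (0, 1), H̄_X(t) stays below H̄_{U(0,1)}(t).

```python
import math

def integrate(g, a, b, n=200_000):
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

def past_entropy(f, F, t):
    g = lambda x: (f(x) / F(t)) * math.log(f(x) / F(t))
    return -integrate(g, 0.0, t)

b = 1.0
f = lambda x: 2 * x                       # PDF on (0, 1)
F = lambda t: min(t * t, 1.0)
for t in (0.2, 0.6, 1.0, 1.5):
    h = past_entropy(f, F, min(t, b))     # f vanishes beyond b
    bound = math.log(t) if t < b else math.log(b)
    print(t, h, bound)                    # h ≤ bound throughout
```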
Note that log t is not always a tight bound for the past entropy, especially for large t. If, for instance, X is exponentially distributed with mean 1/λ, then we have

H̄(t) = 1 + log((1 − e^{−λt})/λ) − λt e^{−λt}/(1 − e^{−λt}),   t > 0,

while, if X has a Pareto-type CDF F(t) = (t/(1 + t)) 1_{(0,+∞)}(t), it is given by

H̄(t) = 2 + log(t/(1 + t)) − (2/t) log(1 + t),   t > 0.
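Both closed-form expressions above can be cross-checked against a direct evaluation of (1.3) (a numerical aside; λ = 1 is an arbitrary choice).

```python
import math

def integrate(g, a, b, n=200_000):
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

def past_entropy(f, F, t):
    g = lambda x: (f(x) / F(t)) * math.log(f(x) / F(t))
    return -integrate(g, 0.0, t)

lam = 1.0
fe = lambda x: lam * math.exp(-lam * x)              # exponential
Fe = lambda t: 1 - math.exp(-lam * t)
fp = lambda x: 1.0 / (1 + x) ** 2                    # Pareto-type, F(t) = t/(1+t)
Fp = lambda t: t / (1 + t)

for t in (0.5, 2.0):
    q = 1 - math.exp(-lam * t)
    he_cf = 1 + math.log(q / lam) - lam * t * math.exp(-lam * t) / q
    hp_cf = 2 + math.log(t / (1 + t)) - (2 / t) * math.log(1 + t)
    print(t, past_entropy(fe, Fe, t) - he_cf, past_entropy(fp, Fp, t) - hp_cf)
```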
Consider now a random lifetime X having CDF

F(t) = exp{−1 − 1/t} 1_{(0,1]}(t) + exp{(t² − 5)/2} 1_{(1,2]}(t) + exp{−1/t} 1_{(2,+∞)}(t).   (2.6)

(This example has already been considered in [2] and [4] in order to show some results on the reversed failure rate.) Then

r(t) = (1/t²) 1_{(0,1]∪(2,+∞)}(t) + t 1_{(1,2]}(t),

so that r(t) is not monotonic. Moreover, we can also prove that the past entropy is not monotonic, as shown in Figure 1.
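Since Figure 1 cannot be reproduced here, a small numerical sketch serves the same purpose. The CDF coded below follows (2.6); because the printed formula is damaged, its piecewise constants (chosen so that F is continuous at t = 1 and t = 2) are an assumption of this sketch. The computed values show that H̄ rises before t = 2 and after t = 2, while the value of H̄(2) signals a decrease just to the left of t = 2.

```python
import math

def integrate(g, a, b, n=200_000):
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

def F(t):                         # CDF (2.6), continuous at t = 1 and t = 2 (assumed)
    if t <= 1.0:
        return math.exp(-1.0 - 1.0 / t)
    if t <= 2.0:
        return math.exp((t * t - 5.0) / 2.0)
    return math.exp(-1.0 / t)

def r(t):                         # reversed failure rate
    return t if 1.0 < t <= 2.0 else 1.0 / (t * t)

f = lambda x: F(x) * r(x)         # PDF, since r = f/F

def past_entropy(t):
    def g(x):
        fx = f(x)
        if fx <= 0.0:             # exp(-1/x) underflows near 0; f log f -> 0 there
            return 0.0
        p = fx / F(t)
        return p * math.log(p)
    return -integrate(g, 0.0, t)

H15, H19, H20, H22 = (past_entropy(t) for t in (1.5, 1.9, 2.0, 2.2))
print(H15, H19, H20, H22)
# H15 < H19 and H20 < H22 (increasing stretches), yet H20 > 1 - log r(2-) = 1 - log 2;
# one can check that H̄'(t) = r(t)[1 - log r(t) - H̄(t)], so H̄ decreases just left of 2.
```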
It is not hard to prove that if f(t) is decreasing in t > 0, then H̄(t) is increasing in t > 0. However, this property can be proved under the weaker assumption that the reversed failure rate of X is decreasing (i.e. X is DRFR). Indeed, differentiation of (1.3) yields H̄′(t) = r(t)[1 − log r(t) − H̄(t)], while, if r(t) is decreasing, from (2.1) we have H̄(t) ≤ 1 − log r(t); this gives the following result.

Proposition 2.2. If r(t) is decreasing for all t > 0, then H̄(t) is increasing for all t > 0.
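Proposition 2.2 can be illustrated numerically (an aside): for F(t) = t^c on (0, 1) the reversed failure rate r(t) = c/t is decreasing, and the past entropy — which for this family works out to log t + 1 − 1/c − log c — is indeed increasing in t. The exponent c = 4 below is an arbitrary choice.

```python
import math

def integrate(g, a, b, n=200_000):
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

def past_entropy(f, F, t):
    g = lambda x: (f(x) / F(t)) * math.log(f(x) / F(t))
    return -integrate(g, 0.0, t)

c = 4.0                                  # F(t) = t^c, a DRFR lifetime on (0, 1)
f = lambda x: c * x ** (c - 1)
F = lambda t: t ** c

grid = [0.2, 0.4, 0.6, 0.8, 1.0]
vals = [past_entropy(f, F, t) for t in grid]
print(vals)   # increasing, matching log t + 1 - 1/c - log c at every point
```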
FIGURE 1: (a) The past entropy corresponding to (2.6) is sketched for t ∈ (0, 3). (b) An enlargement of (a) close to t = 2, making it evident that it is not monotonic.
Proposition 2.3. If H̄(t) is increasing for t > 0, then

r(t) ≤ exp{1 − H̄(t)},   t > 0.   (2.7)
Remarks 2.2. (i) From (2.2) and (2.4) we have that H̄(t) is increasing for t > 0 if r(t) ≤ e/t, t > 0.

(ii) The condition r(t) ≤ e/t, t > 0, is not necessary for the past entropy to be increasing. Indeed, if for instance F(t) = t^c, 0 ≤ t ≤ 1, c > 0, we have r(t) = c/t, 0 < t < 1, so that the reversed failure rate is decreasing, and thus Proposition 2.2 implies that H̄(t) is increasing in t > 0 even when r(t) > e/t, i.e. when c > e.

(iii) For all positive t such that r(t) > e/t, the bound (2.7) is better than that given in (2.4).
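The situation of Remark (ii) also illustrates bound (2.7) numerically (an aside): for F(t) = t^c with c = 4 > e, one can check from (1.3) that H̄(t) = log t + 1 − 1/c − log c, so that r(t) = c/t exceeds e/t everywhere, while (2.7) still holds and, as in Remark (iii), improves on the log t bound of (2.4).

```python
import math

c = 4.0                                                 # c > e, as in Remark (ii)
Hbar = lambda t: math.log(t) + 1 - 1 / c - math.log(c)  # past entropy for F(t) = t^c
r = lambda t: c / t                                     # reversed failure rate

for t in (0.2, 0.5, 0.9):
    assert r(t) > math.e / t                  # sufficient condition of (i) fails
    print(r(t) <= math.exp(1 - Hbar(t)),      # bound (2.7) holds nonetheless
          1 - math.log(r(t)) < math.log(t))   # and is tighter than log t
# prints: True True (three times)
```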
The effect of increasing transformations on the monotonicity of residual entropy of random lifetimes has been considered in Theorem 2 of [8], where it is shown that, for all t > 0,

H_Y(t) = H_X(φ^{−1}(t)) + E[log φ′(X) | X > φ^{−1}(t)],

where Y = φ(X), with φ strictly increasing, continuous, and differentiable. We now give a similar result for the past entropy in the case of monotonic transformations. The proof is omitted, since it is similar to that of Theorem 2 of [8].
Proposition 2.4. Let Y = φ(X), with φ strictly monotonic, continuous, and differentiable; then, for all t > 0,

H̄_Y(t) = H̄_X(φ^{−1}(t)) + E[log φ′(X) | X < φ^{−1}(t)]   if φ is strictly increasing,

H̄_Y(t) = H_X(φ^{−1}(t)) + E[log(−φ′(X)) | X > φ^{−1}(t)]   if φ is strictly decreasing.

For instance, if X is uniformly distributed over (0, 1) and Y = φ(X) = X/(1 − X), so that Y possesses the Pareto-type CDF F_Y(t) = t/(1 + t) considered above, then φ^{−1}(t) = t/(1 + t) and φ′(x) = (1 − x)^{−2}, and Proposition 2.4 gives

H̄_Y(t) = H̄_X(t/(1 + t)) + E[log{(1 − X)^{−2}} | X < t/(1 + t)]
        = log(t/(1 + t)) + 2[1 − (1/t) log(1 + t)],

in agreement with the expression of the past entropy obtained above. Note, moreover, that if X has the symmetric PDF

f(x) = [x(β − x)]^{α−1} / (β^{2α−1} B(α, α)),   0 < x < β,

with α > 0 and β > 0, then symmetry with respect to the point β/2 implies that the past entropy and the residual entropy of X are related by H̄(t) = H(β − t), 0 < t < β.
Remark 2.4. Let φ₁(x) = F_X(x) and φ₂(x) = F̄_X(x), with φ₁ and φ₂ satisfying the assumptions of Proposition 2.4. As Y₁ = φ₁(X) and Y₂ = φ₂(X) are uniformly distributed over (0, 1), recalling (2.5) we have that, for all t > 0,

H̄_X(F_X^{−1}(t)) + E[log f_X(X) | X < F_X^{−1}(t)] = H_X(F̄_X^{−1}(t)) + E[log f_X(X) | X > F̄_X^{−1}(t)] = H̄_{U(0,1)}(t).
Remark 2.5. From Proposition 2.4 we have H̄_{aX}(t) = H̄_X(t/a) + log a for all t > 0 and a > 0.
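Remark 2.5 is easy to verify numerically (an aside, reusing the component X of Example 1.1 with F(t) = t², and with a = 3 as an arbitrary choice).

```python
import math

def integrate(g, a, b, n=200_000):
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

def past_entropy(f, F, t):
    g = lambda x: (f(x) / F(t)) * math.log(f(x) / F(t))
    return -integrate(g, 0.0, t)

a = 3.0
fX, FX = (lambda x: 2 * x), (lambda t: t * t)                # X on (0, 1)
fa, Fa = (lambda x: 2 * x / a ** 2), (lambda t: (t / a) ** 2)  # CDF of aX on (0, a)

for t in (0.9, 2.1):
    lhs = past_entropy(fa, Fa, t)
    rhs = past_entropy(fX, FX, t / a) + math.log(a)
    print(t, lhs, rhs)                                       # lhs ≈ rhs
```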
Acknowledgements

This work has been performed within a joint cooperation agreement between Japan Science and Technology Corporation (JST) and Università di Napoli Federico II. The authors thank Professor Luigi M. Ricciardi for stimulating discussions, and Professor Jorge Navarro for helpful comments.
References

[1] Asadi, M. and Ebrahimi, N. (2000). Residual entropy and its characterizations in terms of hazard function and mean residual life function. Statist. Prob. Lett. 49, 263-269.
[2] Block, H. W., Savits, T. H. and Singh, H. (1998). The reversed hazard rate function. Prob. Eng. Inf. Sci. 12, 69-90.
[3] Chandra, N. K. and Roy, D. (2001). Some results on reversed hazard rate. Prob. Eng. Inf. Sci. 15, 95-102.
[4] Di Crescenzo, A. and Longobardi, M. (2001). The up reversed hazard rate stochastic order. Sci. Math. Japon. 54, 575-581.
[5] Ebrahimi, N. (1996). How to measure uncertainty in the residual life time distribution. Sankhyā A 58, 48-56.
[6] Ebrahimi, N. (1997). Testing whether lifetime distribution is decreasing uncertainty. J. Statist. Planning Infer. 64, 9-19.
[7] Ebrahimi, N. (2000). The maximum entropy method for lifetime distributions. Sankhyā A 62, 236-243.
[8] Ebrahimi, N. and Kirmani, S. N. U. A. (1996). Some results on ordering of survival functions through uncertainty. Statist. Prob. Lett. 29, 167-176.
[9] Ebrahimi, N. and Pellerey, F. (1995). New partial ordering of survival functions based on the notion of uncertainty. J. Appl. Prob. 32, 202-211.
[10] Nanda, A. K. and Shaked, M. (2001). The hazard rate and the reversed hazard rate orders, with applications to order statistics. Ann. Inst. Statist. Math. 53, 853-864.
[11] Navarro, J., Belzunce, F., Ruiz, J. M. and Del Aguila, Y. (2002). Some results on residual entropy function. To appear in Abstracts Book, 3rd Internat. Conf. Math. Methods Reliab. (17-20 June 2002, Trondheim, Norway).
[12] Oluyede, B. O. (1999). On inequalities and selection of experiments for length biased distributions. Prob. Eng. Inf. Sci. 13, 169-185.
[13] Shannon, C. E. (1948). A mathematical theory of communication. Bell System Tech. J. 27, 379-423, 623-656.
[14] Taneja, I. J. (1990). On generalized entropy with applications. In Lectures in Applied Mathematics and Informatics, ed. L. M. Ricciardi, Manchester University Press, pp. 107-169.
[15] Wiener, N. (1961). Cybernetics, 2nd edn. MIT Press and John Wiley, New York.