J. Appl. Prob. 39, 434-440 (2002)
Printed in Israel
© Applied Probability Trust 2002

ENTROPY-BASED MEASURE OF UNCERTAINTY
IN PAST LIFETIME DISTRIBUTIONS

ANTONIO DI CRESCENZO,* Università della Basilicata
MARIA LONGOBARDI,** Università di Napoli Federico II

Abstract

As proposed by Ebrahimi, uncertainty in the residual lifetime distribution can be measured by means of the Shannon entropy. In this paper, we analyse a dual characterization of life distributions that is based on entropy applied to the past lifetime. Various aspects of this measure of uncertainty are considered, including its connection with the residual entropy, the relation between its increasing nature and the DRFR property, and the effect of monotonic transformations on it.

Keywords: Residual lifetime; reversed hazard function; DRFR property; monotonic transformation

AMS 2000 Subject Classification: Primary 62N05; Secondary 90B25

1. Introduction and background
Let X be an absolutely continuous nonnegative honest random variable describing the random lifetime of an item or of a living organism. As usual, we denote the probability density function (PDF) of X by $f(x)$, the cumulative distribution function (CDF) by $F(x)$, and the survival function by $\bar{F}(x) = 1 - F(x)$. A classical measure of uncertainty for X is the differential entropy, also known as the Shannon information measure, defined as
\[
H = -\mathrm{E}[\log f(X)] = -\int_0^{+\infty} f(x) \log f(x)\, \mathrm{d}x, \tag{1.1}
\]
where log denotes the natural logarithm. Since the classical contributions by Shannon [13] and Wiener [15], the properties and virtues of H have been thoroughly investigated. Furthermore, numerous generalizations of (1.1) have been proposed (see, for instance, [14]).
The role of differential entropy as a measure of uncertainty in residual lifetime distributions has attracted increasing attention in recent years. According to Ebrahimi [5], the residual entropy at time t of a random lifetime X is defined as the differential entropy of $[X \mid X > t]$ (as usual, $[X \mid B]$ denotes a random variable whose distribution is identical to that of X conditional on B).
Received 30 July 2001; revision received 14 March 2002.

* Current address: Dipartimento di Matematica e Informatica, Università di Salerno, Via S. Allende, 84081 Baronissi (SA), Italy. Email address: adicrescenzo@unisa.it
** Postal address: Dipartimento di Matematica e Applicazioni, Università di Napoli Federico II, Via Cintia, 80126 Napoli, Italy.


Formally, for all t > 0 the residual entropy of X is given by
\[
H(t) = -\int_t^{+\infty} \frac{f(x)}{\bar{F}(t)} \log \frac{f(x)}{\bar{F}(t)}\, \mathrm{d}x
     = \log \bar{F}(t) - \frac{1}{\bar{F}(t)} \int_t^{+\infty} f(x) \log f(x)\, \mathrm{d}x
     = 1 - \frac{1}{\bar{F}(t)} \int_t^{+\infty} f(x) \log r(x)\, \mathrm{d}x, \tag{1.2}
\]
where $r(t) = f(t)/\bar{F}(t)$ is the hazard function, or failure rate, of X. Given that an item has survived up to time t, H(t) measures the uncertainty about its remaining life. Various results concerning H(t) have been obtained in recent years by Ebrahimi [5], [6], [7], Ebrahimi and Pellerey [9], Ebrahimi and Kirmani [8], Asadi and Ebrahimi [1], Oluyede [12], and Navarro et al. [11].
However, it is reasonable to presume that in many realistic situations uncertainty is not necessarily related to the future but can also refer to the past. For instance, consider a system whose state is observed only at certain preassigned inspection times. If at time t the system is inspected for the first time and it is found to be 'down', then the uncertainty relies on the past, i.e. on the instant in (0, t) at which it failed. It thus seems natural to introduce a notion of uncertainty that is dual to the residual entropy, in the sense that it refers to past time and not to future time. Without loss of generality, from now on we shall assume that $F(t) > 0$ for all $t > 0$.
Let X be a random lifetime and recall that the PDF of $[X \mid X \le t]$ is given by $f(x)/F(t)$, $0 < x \le t$. The differential entropy of $[X \mid X \le t]$ for all t > 0 will be called the past entropy at time t of X, and will be denoted by
\[
\bar{H}(t) = -\int_0^t \frac{f(x)}{F(t)} \log \frac{f(x)}{F(t)}\, \mathrm{d}x. \tag{1.3}
\]
Note that $\bar{H}(t) \in [-\infty, +\infty]$. Given that at time t an item has been found to be failing, $\bar{H}(t)$ measures the uncertainty about its past life. The following example shows the role of the past entropy in the comparison of random lifetimes.
Example 1.1. Let two systems' components be characterized by random lifetimes X and Y with PDFs
\[
f_X(t) = 2t\, 1_{(0,1)}(t) \quad\text{and}\quad f_Y(t) = 2(1-t)\, 1_{(0,1)}(t),
\]
respectively. Their differential entropy is given by $H_X = H_Y = \tfrac{1}{2} - \log 2$, so that the expected uncertainty contained in $f_X$ and $f_Y$ about the predictability of the outcomes of X and Y is the same. However, if both components are found to have failed upon an inspection at time $t \in (0, 1)$, then the uncertainty about their unknown failure times must be measured by means of the past entropy. From (1.3), for $t \in (0, 1)$, we have
\[
\bar{H}_X(t) = \frac{1}{2} + \log\frac{t}{2},
\]
\[
\bar{H}_Y(t) = \frac{1}{2} + \frac{(1-t)^2}{1-(1-t)^2} \log\frac{2(1-t)}{1-(1-t)^2} - \frac{1}{1-(1-t)^2} \log\frac{2}{1-(1-t)^2},
\]
so that $\bar{H}_X(t) < \bar{H}_Y(t)$ for all $t \in (0, 1)$. Hence, even though $H_X = H_Y$, the expected uncertainty contained in the PDF of X given $X \le t$ about the predictability of the failure time of the first component is smaller than the expected uncertainty contained in the PDF of Y given $Y \le t$ about the predictability of the failure time of the second component.
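As a sanity check on the closed forms and on the inequality above, the past entropies can be computed by numerical quadrature. The following sketch is our addition (not part of the original paper) and assumes NumPy and SciPy are available:

```python
# Numerical check of Example 1.1 (our addition, not in the paper):
# past entropies of f_X(x) = 2x and f_Y(x) = 2(1 - x) on (0, 1),
# compared with the closed forms displayed above.
import numpy as np
from scipy.integrate import quad

def past_entropy(pdf, cdf, t):
    """Past entropy (1.3): -int_0^t (f/F(t)) log(f/F(t)) dx, by quadrature."""
    Ft = cdf(t)
    val, _ = quad(lambda x: pdf(x) / Ft * np.log(pdf(x) / Ft), 0.0, t)
    return -val

fX, FX = lambda x: 2.0 * x,         lambda t: t ** 2
fY, FY = lambda x: 2.0 * (1.0 - x), lambda t: 2.0 * t - t ** 2

for t in (0.25, 0.5, 0.9):
    hx, hy = past_entropy(fX, FX, t), past_entropy(fY, FY, t)
    q, A = 1.0 - t, 1.0 - (1.0 - t) ** 2
    assert abs(hx - (0.5 + np.log(t / 2))) < 1e-6
    assert abs(hy - (0.5 + q**2 / A * np.log(2*q/A) - np.log(2/A) / A)) < 1e-6
    print(f"t={t}: Hbar_X={hx:+.4f}  Hbar_Y={hy:+.4f}")  # Hbar_X < Hbar_Y throughout
```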
We remark that (1.3) can also be identified with the entropy of the inactivity time $[t - X \mid X \le t]$ (see [3] for various results on such a random variable).
In the following section we express the entropy of a random lifetime in terms of the residual entropy and of the past entropy. The problem of the increasing nature of $\bar{H}(t)$ is also discussed. In particular, we show that a random lifetime X has increasing past entropy if its reversed failure rate is decreasing, i.e. if X is DRFR. Upper bounds for $\bar{H}(t)$ and for the reversed hazard function are also obtained. Finally, we analyse the effect of strictly monotonic transformations on the past entropy, and particularly their effects on the entropy monotonicity properties.

Throughout this paper, the terms 'decreasing' and 'increasing' are used in a non-strict sense.

2. Results on the past entropy


From (1.3) we also have the following expressionsfor the past entropy:

1 I
H(t) = log F(t) - (x) log f (x) dx
F(t) 0 f
ot
= 1- f(x) log r(x) dx, (2.1)
F(t)

where $\tau(t) = f(t)/F(t)$ is the reversed hazard function, or reversed failure rate, of X. The function $\tau(t)$ is receiving increasing attention in reliability theory and survival analysis (see [2] and [3]). As pointed out by some authors (in particular, see [10]), its role is dual to that of $r(t)$. Indeed, as will become clear in the following, the role of $\tau(t)$ in the analysis of the past entropy is analogous to that of $r(t)$ in the analysis of the residual entropy as performed by Ebrahimi [5].
Throughout the paper we shall make use of the following relation, which is an immediate consequence of (2.1):
\[
\frac{\mathrm{d}}{\mathrm{d}t} \bar{H}(t) = \tau(t) \left[ 1 - \bar{H}(t) - \log \tau(t) \right]. \tag{2.2}
\]
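Relation (2.2) is easy to check numerically for a concrete law. The following sketch (our addition; the Exp(1) lifetime is chosen arbitrarily) compares a finite-difference derivative of the past entropy with the right-hand side of (2.2):

```python
# Finite-difference check of (2.2) for an Exp(1) lifetime (our addition):
# d/dt Hbar(t) should equal tau(t) * (1 - Hbar(t) - log tau(t)).
import numpy as np
from scipy.integrate import quad

f = lambda x: np.exp(-x)            # Exp(1) density
F = lambda t: 1.0 - np.exp(-t)      # CDF
tau = lambda t: f(t) / F(t)         # reversed hazard rate

def Hbar(t):  # past entropy via (1.3)
    val, _ = quad(lambda x: f(x) / F(t) * np.log(f(x) / F(t)), 0.0, t)
    return -val

for t in (0.5, 1.0, 3.0):
    h = 1e-5
    lhs = (Hbar(t + h) - Hbar(t - h)) / (2 * h)       # numerical d/dt Hbar(t)
    rhs = tau(t) * (1.0 - Hbar(t) - np.log(tau(t)))   # right-hand side of (2.2)
    print(f"t={t}: lhs={lhs:.5f}  rhs={rhs:.5f}")     # the two columns agree
```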
In the following proposition we show that (1.1) can be expressed in terms of H(t) and $\bar{H}(t)$. The proof is omitted, being straightforward.

Proposition 2.1. For all t > 0,
\[
H = \mathcal{H}[F(t), \bar{F}(t)] + F(t)\bar{H}(t) + \bar{F}(t)H(t), \tag{2.3}
\]
where $\mathcal{H}[p, 1-p] = -p \log p - (1-p) \log(1-p)$ is the entropy of a Bernoulli distribution.


The identity (2.3) admits the following interpretation. The uncertainty about the failure time of an item can be decomposed into three parts: (i) the uncertainty of whether the item has failed before or after time t, (ii) the uncertainty about the failure time in (0, t) given that the item has failed before t, and (iii) the uncertainty about the failure time in $(t, +\infty)$ given that the item has failed after t.
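The decomposition (2.3) can likewise be illustrated numerically. In this sketch (our addition) the lifetime is Exp(2) and the inspection time t = 0.7, both chosen arbitrarily:

```python
# Numerical illustration of the decomposition (2.3) (our addition):
# lifetime Exp(2) and inspection time t = 0.7, both chosen arbitrarily.
import numpy as np
from scipy.integrate import quad

lam, t = 2.0, 0.7
f = lambda x: lam * np.exp(-lam * x)
F = 1.0 - np.exp(-lam * t)          # F(t)
Fbar = 1.0 - F                      # survival function at t

H = 1.0 - np.log(lam)               # differential entropy of Exp(lam)
Hpast, _ = quad(lambda x: -(f(x) / F) * np.log(f(x) / F), 0.0, t)        # Hbar(t)
Hres, _ = quad(lambda x: -(f(x) / Fbar) * np.log(f(x) / Fbar), t, 50.0)  # H(t)
bern = -F * np.log(F) - Fbar * np.log(Fbar)    # Bernoulli entropy term in (2.3)

print(H, bern + F * Hpast + Fbar * Hres)       # both values: 1 - log 2
```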
From (1.3) it is immediately seen that the entropy of a random variable uniformly distributed on (0, t) is given by log t. The latter is also an upper bound for $\bar{H}(t)$; indeed,
\[
\bar{H}(t) \le \log t \quad\text{for all } t > 0. \tag{2.4}
\]


This is in agreement with the principle of maximum entropy (see [7]), according to which the uniform distribution maximizes entropy under the constraint that the probability mass is concentrated on a finite interval. A direct consequence of (2.4) is that the past entropy of a random lifetime cannot be constant for all t > 0.
Similarly to (2.4), we can prove that the past entropy of a random lifetime X distributed over (0, b) satisfies $\bar{H}_X(t) \le \bar{H}_{U(0,b)}(t)$, where
\[
\bar{H}_{U(0,b)}(t) =
\begin{cases}
\log t & \text{if } 0 < t \le b,\\
\log b & \text{if } t > b,
\end{cases} \tag{2.5}
\]
is the past entropy of a random variable uniformly distributed over (0, b).
Note that log t is not always a tight bound for the past entropy, especially for large t. If, for instance, X is exponentially distributed with mean $1/\lambda$, then we have
\[
\bar{H}(t) = 1 - \log\lambda + \log(1 - \mathrm{e}^{-\lambda t}) - \frac{\lambda t\, \mathrm{e}^{-\lambda t}}{1 - \mathrm{e}^{-\lambda t}}, \qquad t > 0,
\]
while, if X has a Pareto-type CDF $F(t) = (t/(1+t))\, 1_{(0,+\infty)}(t)$, it is given by
\[
\bar{H}(t) = 2 + \log\frac{t}{1+t} - \frac{2}{t}\log(1+t), \qquad t > 0.
\]
In both cases $\lim_{t\to+\infty} \bar{H}(t)$ is finite (it equals the differential entropy H), while log t diverges as t goes to $+\infty$.
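The two closed forms above can be confirmed against direct quadrature of (1.3); the following check is our addition, with $\lambda = 2$ chosen arbitrarily:

```python
# Quadrature check of the two closed forms above (our addition); lam = 2
# is arbitrary.  Both past entropies approach a finite limit, unlike log t.
import numpy as np
from scipy.integrate import quad

def Hbar(pdf, cdf, t):  # past entropy (1.3) by numerical integration
    val, _ = quad(lambda x: pdf(x) / cdf(t) * np.log(pdf(x) / cdf(t)), 0.0, t)
    return -val

lam = 2.0
f_exp, F_exp = lambda x: lam * np.exp(-lam * x), lambda t: 1.0 - np.exp(-lam * t)
f_par, F_par = lambda x: (1.0 + x) ** -2.0,      lambda t: t / (1.0 + t)

for t in (0.5, 2.0, 10.0, 100.0):
    he = 1 - np.log(lam) + np.log(1 - np.exp(-lam*t)) \
         - lam * t * np.exp(-lam*t) / (1 - np.exp(-lam*t))
    hp = 2 + np.log(t / (1 + t)) - (2.0 / t) * np.log(1 + t)
    assert abs(Hbar(f_exp, F_exp, t) - he) < 1e-6
    assert abs(Hbar(f_par, F_par, t) - hp) < 1e-6
    print(f"t={t}: exp {he:+.4f}  Pareto {hp:+.4f}  log t {np.log(t):+.4f}")
```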
Let us now discuss a problem concerning a conditioned mean value of a random lifetime X. By setting $\mu(t) = \mathrm{E}(X \mid X \le t)$ it is not hard to see that $(\mathrm{d}/\mathrm{d}t)\mu(t) = \tau(t)[t - \mu(t)] \ge 0$, so that the mean failure time conditioned on a failure before t is increasing in t > 0. We emphasize that a relevant difference exists between the past entropy and the conditioned mean value: while $\mu(t)$ is always increasing, the past entropy is not necessarily increasing, as shown in the following example, in which neither $\bar{H}(t)$ nor $\tau(t)$ is monotonic in t.

Example 2.1. Let X be a random lifetime having CDF
\[
F(t) = \mathrm{e}^{-1/t - 1}\, 1_{(0,1]}(t) + \mathrm{e}^{t^2/2 - 5/2}\, 1_{(1,2]}(t) + \mathrm{e}^{-1/t}\, 1_{(2,+\infty)}(t). \tag{2.6}
\]
(This example has already been considered in [2] and [4] in order to show some results on the reversed failure rate.) Then
\[
\tau(t) = \frac{1}{t^2}\, 1_{(0,1]\cup(2,+\infty)}(t) + t\, 1_{(1,2]}(t),
\]
so that $\tau(t)$ is not monotonic. Moreover, we can also prove that the past entropy is not monotonic, as shown in Figure 1.
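The non-monotonicity asserted here can be reproduced numerically from the CDF (2.6) as reconstructed above; this sketch is our addition:

```python
# Numerical reproduction of Example 2.1 / Figure 1(b) (our addition), using
# the CDF (2.6) as reconstructed above.  The guard against underflow near
# x = 0 avoids evaluating 0 * log 0.
import numpy as np
from scipy.integrate import quad

def F(t):
    if t <= 1.0: return np.exp(-1.0 / t - 1.0)
    if t <= 2.0: return np.exp(t * t / 2.0 - 2.5)
    return np.exp(-1.0 / t)

def f(t):  # density = F(t) * tau(t)
    tau = t if 1.0 < t <= 2.0 else 1.0 / (t * t)
    return F(t) * tau

def Hbar(t):
    def integrand(x):
        p = f(x) / F(t)
        return p * np.log(p) if p > 0.0 else 0.0
    brk = [1.0, 2.0] if t > 2.0 else [1.0]     # breakpoints of (2.6)
    val, _ = quad(integrand, 0.0, t, points=brk)
    return -val

for t in (1.8, 1.9, 2.0, 2.2):
    print(f"t={t}: Hbar={Hbar(t):+.4f}")
# The printed values rise, dip at t = 2, then rise again: Hbar is not monotonic.
```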
It is not hard to prove that if f(t) is decreasing in t > 0, then $\bar{H}(t)$ is increasing in t > 0. However, this property can be proved under the weaker assumption that the reversed failure rate of X is decreasing (i.e. X is DRFR). Indeed, if $\tau(t)$ is decreasing, from (2.1) we have $\bar{H}(t) \le 1 - \log \tau(t)$, which, by (2.2), gives the following result.

Proposition 2.2. If $\tau(t)$ is decreasing for all t > 0, then $\bar{H}(t)$ is increasing for all t > 0.


[Figure 1 appears here.]

Figure 1: (a) The past entropy corresponding to (2.6) is sketched for $t \in (0, 3)$. (b) An enlargement of (a) close to t = 2, making it evident that it is not monotonic.

Remark 2.1. If X is a random lifetime with increasing failure rate and decreasing reversed failure rate, then the second term on the right-hand side of (2.3) is increasing in t > 0 (due to Proposition 2.2), while the third term is decreasing in t > 0 (due to Theorem 3.1 of [5]).
The two upper bounds in the following proposition can be obtained by making use of (2.2).

Proposition 2.3. If $\bar{H}(t)$ is increasing for t > 0, then $\tau(t) \le \exp\{1 - \bar{H}(t)\}$ for t > 0, and
\[
\bar{H}(t) \le 1 - \log \tau(t), \qquad t > 0. \tag{2.7}
\]

Remarks 2.2. (i) From (2.2) and (2.4) we have that $\bar{H}(t)$ is increasing for t > 0 if $\tau(t) \le \mathrm{e}/t$ for all t > 0.

(ii) The condition $\tau(t) \le \mathrm{e}/t$, t > 0, is not necessary for the past entropy to be increasing. Indeed, if for instance $F(t) = t^c$, $0 \le t \le 1$, with c > 0, we have $\tau(t) = c/t$ for $0 < t < 1$, so that the reversed failure rate is decreasing, and thus Proposition 2.2 implies that $\bar{H}(t)$ is increasing even when $\tau(t) > \mathrm{e}/t$, i.e. when c > e.

(iii) For all positive t such that $\tau(t) > \mathrm{e}/t$, the bound (2.7) is better than that given in (2.4).
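Remark 2.2(ii) can be illustrated with, say, c = 4 > e. The closed form used below, $\bar{H}(t) = \log t - \log c + (c-1)/c$, is our own elementary computation from (1.3), not a formula from the paper:

```python
# Illustration of Remark 2.2(ii) (our addition): F(t) = t**c on (0, 1) with
# c = 4 > e, so tau(t) = c/t > e/t, yet the past entropy is increasing.
# The closed form Hbar(t) = log t - log c + (c - 1)/c follows from (1.3).
import numpy as np
from scipy.integrate import quad

c = 4.0
f = lambda x: c * x ** (c - 1.0)
F = lambda t: t ** c

def Hbar(t):
    val, _ = quad(lambda x: f(x) / F(t) * np.log(f(x) / F(t)), 0.0, t)
    return -val

ts = np.linspace(0.1, 0.99, 10)
vals = [Hbar(t) for t in ts]
assert all(a < b for a, b in zip(vals, vals[1:]))              # increasing
assert all(abs(v - (np.log(t) - np.log(c) + (c - 1.0) / c)) < 1e-6
           for v, t in zip(vals, ts))
print("Hbar increasing on (0, 1) although tau(t) = c/t > e/t")
```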
The effect of increasing transformations on the monotonicity of the residual entropy of random lifetimes has been considered in Theorem 2 of [8], where it is shown that, for all t > 0,
\[
H_Y(t) = H_X(\phi^{-1}(t)) + \mathrm{E}[\log \phi'(X) \mid X > \phi^{-1}(t)],
\]


where $Y = \phi(X)$, with $\phi$ strictly increasing, continuous, and differentiable. We now give a similar result for the past entropy in the case of monotonic transformations. The proof is omitted, since it is similar to that of Theorem 2 of [8].
Proposition 2.4. Let $Y = \phi(X)$, with $\phi$ strictly monotonic, continuous, and differentiable; then, for all t > 0,
\[
\bar{H}_Y(t) =
\begin{cases}
H_X(\phi^{-1}(t)) + \mathrm{E}[\log\{-\phi'(X)\} \mid X > \phi^{-1}(t)] & \text{if } \phi \text{ is strictly decreasing},\\[2pt]
\bar{H}_X(\phi^{-1}(t)) + \mathrm{E}[\log \phi'(X) \mid X \le \phi^{-1}(t)] & \text{if } \phi \text{ is strictly increasing}.
\end{cases}
\]
If, in addition, $\phi$ is convex, then $\bar{H}_Y(t)$ is increasing in t > 0 if $H_X(t)$ is decreasing in t > 0 or if $\bar{H}_X(t)$ is increasing in t > 0.
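The increasing case of Proposition 2.4 can be checked on a concrete pair, here X uniform on (0, 1) and $\phi(x) = x^2$, chosen arbitrarily for this sketch (our addition):

```python
# Check of the increasing case of Proposition 2.4 (our addition):
# X uniform on (0, 1) and phi(x) = x**2, so Y = X**2 has PDF 1/(2*sqrt(y)).
import numpy as np
from scipy.integrate import quad

def Hbar(pdf, cdf, t):  # past entropy (1.3) by quadrature
    val, _ = quad(lambda x: pdf(x) / cdf(t) * np.log(pdf(x) / cdf(t)), 0.0, t)
    return -val

fY, FY = lambda y: 0.5 / np.sqrt(y), lambda t: np.sqrt(t)

for t in (0.2, 0.5, 0.8):
    s = np.sqrt(t)                          # phi^{-1}(t)
    lhs = Hbar(fY, FY, t)
    # Hbar_X(s) = log s for the uniform law, and
    # E[log phi'(X) | X <= s] = (1/s) * int_0^s log(2x) dx = log(2s) - 1
    rhs = np.log(s) + np.log(2.0 * s) - 1.0
    print(f"t={t}: lhs={lhs:.6f}  rhs={rhs:.6f}")   # the two sides agree
```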
Remark 2.3. When $Y = \phi(X)$ has the same distribution as X, Proposition 2.4 can be used to obtain various expressions involving $H_X$ and $\bar{H}_X$.

(i) For instance, if X has a Pareto-type distribution, with $F(t) = (t/(1+t))\, 1_{(0,+\infty)}(t)$, then $Y = 1/X$ has the same distribution as X, so that
\[
\bar{H}_X(t) = H_X\!\left(\frac{1}{t}\right) + \mathrm{E}\!\left[\log\{X^{-2}\} \,\Big|\, X > \frac{1}{t}\right]
            = 2 + \log\frac{t}{t+1} - \frac{2}{t}\log(1+t).
\]

(ii) If X is a random lifetime such that $Y = \beta - X$ has the same distribution as X, then Proposition 2.4 yields $\bar{H}_X(t) = H_X(\beta - t)$, $0 < t < \beta$. For instance, such a relation holds if X has the following beta-type PDF:
\[
f(x) = \frac{[x(\beta - x)]^{a-1}}{\beta^{2a-1} B(a, a)}, \qquad 0 < x < \beta,
\]
with a > 0, $\beta > 0$, and $B(a, a) = \int_0^1 [x(1-x)]^{a-1}\, \mathrm{d}x$ for a > 0.

Remark 2.4. Let $\phi_1(x) = F_X(x)$ and $\phi_2(x) = \bar{F}_X(x)$, with $\phi_1$ and $\phi_2$ satisfying the assumptions of Proposition 2.4. As $Y_1 = \phi_1(X)$ and $Y_2 = \phi_2(X)$ are uniformly distributed over (0, 1), recalling (2.5) we have that, for all $t \in (0, 1)$,
\[
\bar{H}_{U(0,1)}(t) = H_X(\bar{F}_X^{-1}(t)) + \mathrm{E}[\log f_X(X) \mid X > \bar{F}_X^{-1}(t)]
                   = \bar{H}_X(F_X^{-1}(t)) + \mathrm{E}[\log f_X(X) \mid X \le F_X^{-1}(t)].
\]

Remark 2.5. From Proposition 2.4 we have $\bar{H}_{aX}(t) = \bar{H}_X(t/a) + \log a$ for all t > 0 and a > 0.
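A quick check of Remark 2.5 (our addition), with X exponential and a = 3 chosen arbitrarily:

```python
# Check of Remark 2.5 (our addition): Hbar_{aX}(t) = Hbar_X(t/a) + log a,
# with X ~ Exp(1) and a = 3 chosen arbitrarily (so aX ~ Exp(1/3)).
import numpy as np
from scipy.integrate import quad

def Hbar(pdf, cdf, t):
    val, _ = quad(lambda x: pdf(x) / cdf(t) * np.log(pdf(x) / cdf(t)), 0.0, t)
    return -val

a, t = 3.0, 2.0
fX, FX = lambda x: np.exp(-x), lambda u: 1.0 - np.exp(-u)
faX, FaX = lambda x: np.exp(-x / a) / a, lambda u: 1.0 - np.exp(-u / a)

print(Hbar(faX, FaX, t), Hbar(fX, FX, t / a) + np.log(a))  # equal values
```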

Acknowledgements
This work has been performed within a joint cooperation agreement between Japan Science and Technology Corporation (JST) and Università di Napoli Federico II. The authors thank Professor Luigi M. Ricciardi for stimulating discussions, and Professor Jorge Navarro for helpful comments.


References
[1] ASADI, M. AND EBRAHIMI, N. (2000). Residual entropy and its characterizations in terms of hazard function and mean residual life function. Statist. Prob. Lett. 49, 263-269.
[2] BLOCK, H. W., SAVITS, T. H. AND SINGH, H. (1998). The reversed hazard rate function. Prob. Eng. Inf. Sci. 12, 69-90.
[3] CHANDRA, N. K. AND ROY, D. (2001). Some results on reversed hazard rate. Prob. Eng. Inf. Sci. 15, 95-102.
[4] DI CRESCENZO, A. AND LONGOBARDI, M. (2001). The up reversed hazard rate stochastic order. Sci. Math. Japon. 54, 575-581.
[5] EBRAHIMI, N. (1996). How to measure uncertainty in the residual life time distribution. Sankhyā A 58, 48-56.
[6] EBRAHIMI, N. (1997). Testing whether lifetime distribution is decreasing uncertainty. J. Statist. Planning Infer. 64, 9-19.
[7] EBRAHIMI, N. (2000). The maximum entropy method for lifetime distributions. Sankhyā A 62, 236-243.
[8] EBRAHIMI, N. AND KIRMANI, S. N. U. A. (1996). Some results on ordering of survival functions through uncertainty. Statist. Prob. Lett. 29, 167-176.
[9] EBRAHIMI, N. AND PELLEREY, F. (1995). New partial ordering of survival functions based on the notion of uncertainty. J. Appl. Prob. 32, 202-211.
[10] NANDA, A. K. AND SHAKED, M. (2001). The hazard rate and the reversed hazard rate orders, with applications to order statistics. Ann. Inst. Statist. Math. 53, 853-864.
[11] NAVARRO, J., BELZUNCE, F., RUIZ, J. M. AND DEL AGUILA, Y. (2002). Some results on residual entropy function. To appear in Abstracts Book, 3rd Internat. Conf. Math. Methods Reliab. (17-20 June 2002, Trondheim, Norway).
[12] OLUYEDE, B. O. (1999). On inequalities and selection of experiments for length biased distributions. Prob. Eng. Inf. Sci. 13, 169-185.
[13] SHANNON, C. E. (1948). A mathematical theory of communication. Bell System Tech. J. 27, 279-423.
[14] TANEJA, I. J. (1990). On generalized entropy with applications. In Lectures in Applied Mathematics and Informatics, ed. L. M. Ricciardi, Manchester University Press, pp. 107-169.
[15] WIENER, N. (1961). Cybernetics, 2nd edn. MIT Press and John Wiley, New York.
